Talk

AI on a Microbudget - Methods of Machine Learning Miniaturization

Thursday, May 23

12:35 - 13:05
Room: Spaghetti
Language: English
Audience level: Intermediate
Elevator pitch

In recent years, the AI field has pursued ever larger models, trained at “eye-watering” cost. In this talk we explore ideas for the rest of us, the GPU-poor. We’ll show you how to make do with less – less computing power, less person power, less data – while still building powerful models.

Abstract

Current progress in AI has seen remarkable capabilities emerge from simple prediction tasks – if we scale them massively. Surprisingly, we get sparks of reasoning and intelligence in a model that was trained to do little more than masked word prediction. Since that realization, the AI field has pursued ever larger models, trained at “eye-watering” cost. If scaling is all you need – does it follow that, in practice, money is all you need?

In this talk we explore ideas for the rest of us, the GPU-poor. Taking examples from language processing and computer vision, we’ll show you how to make do with less – less computing power, less person power, less data – while still building powerful models. We will introduce a set of methods and open source tools for the efficient reuse and miniaturization of models, including transfer learning and fine-tuning, knowledge distillation, and model quantization. We will also discuss how to choose efficient model architectures, and investigate ways in which small and specialized models can outperform large models. Our talk aims to provide an overview for ML practitioners, draws from our combined project experience, and is accompanied by a repository of code examples to get you started with building AI on a microbudget.
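As a taste of one of the techniques mentioned above, here is a minimal sketch of a knowledge-distillation loss in the style of Hinton et al. (2015), where a small student model is trained to match the softened output distribution of a large teacher. This uses only the Python standard library; the function names are illustrative and not taken from the talk's companion repository.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with a temperature; higher temperatures soften the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    Softening with temperature > 1 exposes the teacher's relative
    confidences across classes ("dark knowledge"), which gives the
    student a richer training signal than hard labels alone.
    """
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    # KL(p || q), scaled by T^2 to keep gradient magnitudes comparable
    return temperature ** 2 * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0
    )
```

The loss is zero when the student reproduces the teacher's distribution exactly and grows as the two diverge; in practice it is combined with a standard cross-entropy term on the true labels.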

Tags: Machine-Learning, Open-Source, Deep Learning, Natural Language Processing, Algorithms
Participant

Katharina Rasch

Data scientist | computer vision engineer | teacher. PhD in Computer Science (KTH Stockholm). Freelancer in Berlin.

Participant

Christian Staudt

I am a data scientist with 8 years of experience as a freelancer in diverse industries. My mission is enabling my clients and the teams I work with to implement data-driven innovation, making the connection between use cases, algorithms, and the appropriate tech stack. I am mainly focused on machine learning from prototype to deployment, but my work also regularly involves optimization and data mining. I enjoy contributing to open source, and I was active as a community organizer for PyData.