AI on your phone? Tim Dettmers on quantization of neural networks — #41

Tim Dettmers develops computationally efficient methods for deep learning. He is a leader in quantization: representing the weights of large neural networks at lower numerical precision, a kind of coarse graining that increases speed and reduces hardware requirements.

Tim developed 4- and 8-bit quantization methods that enable training and inference with large language models on affordable GPUs and CPUs, i.e., the kind of hardware commonly found in home gaming rigs.
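To make the idea concrete, here is a minimal sketch of symmetric 8-bit "absmax" quantization in NumPy. This illustrates the basic principle only, not Tim's actual implementation (his methods, such as LLM.int8() and the NF4 data type used in QLoRA, add refinements like outlier handling and block-wise scaling); the function names are ours.

    import numpy as np

    def quantize_absmax_int8(w):
        # Rescale so the largest-magnitude weight maps to 127,
        # then round every weight to the nearest 8-bit integer.
        scale = np.abs(w).max() / 127.0
        q = np.round(w / scale).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        # Recover approximate float weights from the stored integers.
        return q.astype(np.float32) * scale

    w = np.random.randn(4, 4).astype(np.float32)  # toy "weight matrix"
    q, scale = quantize_absmax_int8(w)
    w_hat = dequantize(q, scale)
    print("max abs error:", np.abs(w - w_hat).max())  # small rounding error

Storing int8 instead of float32 cuts weight memory by roughly 4x (4-bit formats by roughly 8x), which is what brings large models within reach of consumer hardware.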

Tim and Steve discuss: Tim's background and current research program, large language models, quantization and performance, democratization of AI technology, the open-source Cambrian explosion in AI, and the future of AI.

0:00 Introduction and Tim's background
18:02 Tim's interest in the efficiency and accessibility of large language models
38:05 Inference, speed, and the potential for using consumer GPUs for running large language models
45:55 Model training and the benefits of quantization with QLoRA (sketched in code below)
57:14 The future of AI and large language models in the next 3-5 years and beyond
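For listeners who want to try the QLoRA recipe from the 45:55 segment, here is a rough sketch using the Hugging Face transformers, peft, and bitsandbytes libraries: the base model is frozen in 4-bit NF4 precision while small LoRA adapter matrices are trained in higher precision. The model name and hyperparameters below are illustrative choices, not Tim's exact settings.

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    # 4-bit NF4 quantization config (the QLoRA data type), with double
    # quantization to also compress the per-block scaling constants.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
    )

    # Illustrative base model; any causal LM on the Hub works similarly.
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf",
        quantization_config=bnb_config,
        device_map="auto",
    )

    # Attach small trainable LoRA adapters; the 4-bit base stays frozen.
    lora_config = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% trainable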


Music used with permission from State Azure's "Blade Runner Blues" livestream improvisation.

--

Steve Hsu is Professor of Theoretical Physics and of Computational Mathematics, Science, and Engineering at Michigan State University. Previously, he was Senior Vice President for Research and Innovation at MSU and Director of the Institute of Theoretical Science at the University of Oregon. Hsu is a startup founder (SuperFocus.ai, SafeWeb, Genomic Prediction) and advisor to venture capital and other investment firms. He was educated at Caltech and Berkeley, was a Harvard Junior Fellow, and has held faculty positions at Yale, the University of Oregon, and MSU.

Please send any questions or suggestions to manifold1podcast@gmail.com or Steve on Twitter @hsu_steve.

Creators and Guests

Stephen Hsu
Host

© Steve Hsu - All Rights Reserved