{"version":1,"type":"rich","provider_name":"Libsyn","provider_url":"https:\/\/www.libsyn.com","height":90,"width":600,"title":"37 - Jaime Sevilla on AI Forecasting","description":"Epoch AI is the premier organization that tracks the trajectory of AI - how much compute is used, the role of algorithmic improvements, the growth in data used, and when the above trends might hit an end. In this episode, I speak with the director of Epoch AI, Jaime Sevilla, about how compute, data, and algorithmic improvements are impacting AI, and whether continuing to scale can get us AGI. Patreon: https:\/\/www.patreon.com\/axrpodcast Ko-fi: https:\/\/ko-fi.com\/axrpodcast The transcript:  https:\/\/axrp.net\/episode\/2024\/10\/04\/episode-37-jaime-sevilla-forecasting-ai.html &amp;nbsp; Topics we discuss, and timestamps: 0:00:38 - The pace of AI progress 0:07:49 - How Epoch AI tracks AI compute 0:11:44 - Why does AI compute grow so smoothly? 0:21:46 - When will we run out of computers? 0:38:56 - Algorithmic improvement 0:44:21 - Algorithmic improvement and scaling laws 0:56:56 - Training data 1:04:56 - Can scaling produce AGI? 1:16:55 - When will AGI arrive? 1:21:20 - Epoch AI 1:27:06 - Open questions in AI forecasting 1:35:21 - Epoch AI and x-risk 1:41:34 - Following Epoch AI's research &amp;nbsp; Links for Jaime and Epoch AI: Epoch AI: https:\/\/epochai.org\/ Machine Learning Trends dashboard: https:\/\/epochai.org\/trends Epoch AI on X \/ Twitter: https:\/\/x.com\/EpochAIResearch Jaime on X \/ Twitter: https:\/\/x.com\/Jsevillamol &amp;nbsp; Research we discuss: Training Compute of Frontier AI Models Grows by 4-5x per Year:  https:\/\/epochai.org\/blog\/training-compute-of-frontier-ai-models-grows-by-4-5x-per-year Optimally Allocating Compute Between Inference and Training:  https:\/\/epochai.org\/blog\/optimally-allocating-compute-between-inference-and-training Algorithmic Progress in Language Models [blog post]: https:\/\/epochai.org\/blog\/algorithmic-progress-in-language-models Algorithmic progress in language models [paper]: https:\/\/arxiv.org\/abs\/2403.05812 Training Compute-Optimal Large Language Models [aka the Chinchilla scaling law paper]: https:\/\/arxiv.org\/abs\/2203.15556 Will We Run Out of Data? Limits of LLM Scaling Based on Human-Generated Data [blog post]:  https:\/\/epochai.org\/blog\/will-we-run-out-of-data-limits-of-llm-scaling-based-on-human-generated-data Will we run out of data? Limits of LLM scaling based on human-generated data [paper]: https:\/\/arxiv.org\/abs\/2211.04325 The Direct Approach: https:\/\/epochai.org\/blog\/the-direct-approach &amp;nbsp; Episode art by Hamish Doodles:&amp;nbsp;hamishdoodles.com ","author_name":"AXRP - the AI X-risk Research Podcast","author_url":"https:\/\/axrp.net","html":"<iframe title=\"Libsyn Player\" style=\"border: none\" src=\"\/\/html5-player.libsyn.com\/embed\/episode\/id\/33334332\/height\/90\/theme\/custom\/thumbnail\/yes\/direction\/forward\/render-playlist\/no\/custom-color\/88AA3C\/\" height=\"90\" width=\"600\" scrolling=\"no\"  allowfullscreen webkitallowfullscreen mozallowfullscreen oallowfullscreen msallowfullscreen><\/iframe>","thumbnail_url":"https:\/\/assets.libsyn.com\/secure\/content\/179181972"}