{"version":1,"type":"rich","provider_name":"Libsyn","provider_url":"https:\/\/www.libsyn.com","height":90,"width":600,"title":"#6: The Podcast Guest Who Wasn't\u2013How to Respond to a Steady Rise in AI Hallucinations","description":"As LLMs rapidly advance in capabilities, many also seem to be developing some quirks. This episode of Accelerated Velocity explores the current rise in AI hallucinations. Grace shares a firsthand experience with ChatGPT fabricating information, Peter and Grace discuss safeguards and how to avoid risks with your favorite AI tools, and they later delve into some potential causes of this mysterious rise in these sometimes hilarious and always concerning AI mishaps. Visit our website. Subscribe to our newsletter. Chatbot Arena - lmarena.ai HubSpot App Marketplace - ecosystem.hubspot.com Chapters 00:00 - Introduction 01:08 - Topic: AI Hallucinations 01:37 - Grace's ChatGPT Experience 04:53 - Hallucination Statistics 05:36 - Real-World Implications 08:47 - Theories Behind Hallucinations 10:45 - Chatbot Arena 11:50 - Speed to Build AI Agents 14:29 - All-in-one Platforms with AI Tools 15:29 - Outro Sources: \u201cA.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse\u201d by Cade Metz and Karen Weise for The New York Times; \u201cWhy AI \u2018Hallucinations\u2019 Are Worse Than Ever\u201d by Conor Murray for Forbes","author_name":"Accelerated Velocity","author_url":"https:\/\/sites.libsyn.com\/573960","html":"<iframe title=\"Libsyn Player\" style=\"border: none\" src=\"\/\/html5-player.libsyn.com\/embed\/episode\/id\/36567395\/height\/90\/theme\/custom\/thumbnail\/yes\/direction\/forward\/render-playlist\/no\/custom-color\/88AA3C\/\" height=\"90\" width=\"600\" scrolling=\"no\" allowfullscreen webkitallowfullscreen mozallowfullscreen oallowfullscreen msallowfullscreen><\/iframe>","thumbnail_url":"https:\/\/assets.libsyn.com\/secure\/content\/188329185"}