<?xml version="1.0" encoding="utf-8"?>
<oembed>
  <version>1.0</version>
  <type>rich</type>
  <provider_name>Libsyn</provider_name>
  <provider_url>https://www.libsyn.com</provider_url>
  <height>90</height>
  <width>600</width>
  <title>Stanford’s Duncan Eddy explains why AI isn’t going to destroy humanity, but we need to make it safer</title>
  <description>Duncan Eddy has spent years working in space satellite communications, and now he’s directing his talents toward AI as the Executive Director of the Stanford Center for AI Safety. In this episode, Duncan speaks with Adario Strange about why the commercialization of space will continue to fuel exploration of the Moon and Mars, and how AI-powered robots may become the primary method for deep space exploration in the future. The discussion then turns to AI safety and the algorithm the Stanford group developed to help guide the technology in the right direction. Finally, the topic of AI superintelligence comes up, and you may be surprised at what Duncan has to say about it given his role as an AI safety advocate.</description>
  <author_name>MARS Magazine</author_name>
  <author_url>https://www.marsmag.com</author_url>
  <html>&lt;iframe title="Libsyn Player" style="border: none" src="//html5-player.libsyn.com/embed/episode/id/40753550/height/90/theme/custom/thumbnail/yes/direction/forward/render-playlist/no/custom-color/88AA3C/" height="90" width="600" scrolling="no" allowfullscreen webkitallowfullscreen mozallowfullscreen oallowfullscreen msallowfullscreen&gt;&lt;/iframe&gt;</html>
  <thumbnail_url>https://assets.libsyn.com/secure/item/40753550</thumbnail_url>
</oembed>
