{"version":1,"type":"rich","provider_name":"Libsyn","provider_url":"https:\/\/www.libsyn.com","height":300,"width":600,"title":"When Performance Isn\u2019t Enough: Ensuring the Safety and Security of AI Systems","description":"Artificial intelligence (AI) systems offer tremendous potential, but compared to traditional software, they introduce novel safety and security risks. System theory provides a powerful lens for understanding these risks and developing effective mitigations. In this webcast, we\u2019ll introduce System Theoretic Process Analysis (STPA), a system-theory-based approach to safety analysis. We\u2019ll explain how STPA helps organizations build stronger assurances about the safety and security of complex systems, including those that incorporate AI. What Will Attendees Learn? \u2022 How complex systems fail due to design flaws and unsafe interactions\u2014not just component failures \u2022 How these types of accidents can occur in AI-enabled systems \u2022 How to apply a system-theoretic perspective, including System Theoretic Process Analysis (STPA), to analyzing AI systems \u2022 Practical insights into improving the design, testing, and operational use of AI systems to strengthen safety and security","author_name":"SEI Webcasts","author_url":"https:\/\/www.sei.cmu.edu\/publications\/webinars\/index.cfm","html":"<iframe title=\"Libsyn Player\" style=\"border: none\" src=\"\/\/html5-player.libsyn.com\/embed\/episode\/id\/41059905\/height\/300\/theme\/custom\/thumbnail\/yes\/direction\/forward\/render-playlist\/no\/custom-color\/88AA3C\/\" height=\"300\" width=\"600\" scrolling=\"no\" allowfullscreen webkitallowfullscreen mozallowfullscreen oallowfullscreen msallowfullscreen><\/iframe>","thumbnail_url":"https:\/\/assets.libsyn.com\/secure\/content\/201313785"}