{"version":1,"type":"rich","provider_name":"Libsyn","provider_url":"https:\/\/www.libsyn.com","height":90,"width":600,"title":"Episode 611: The Real Risks of Using Non-Vetted AI Platforms with Client Information","description":"Welcome solo and group practice owners! We are Liath Dalton and Evan Dumas, your co-hosts of Group Practice Tech. In our latest episode, we continue our series on AI use within therapy practices by sharing how to explain to your team members why using non-vetted AI platforms is not permissible. We discuss: what counts as Protected Health Information, including a breakdown of the often-misunderstood 18th identifier under HIPAA; how therapy progress notes and clinical notes are inherently identifying; AI re-identification risk and why it is possible; why AI use involving client information must be vetted and HIPAA compliance-compatible; what happens when you input data into personal AI platforms; what we mean by AI governance, and why personal AI platforms can\u2019t be governed; why lack of AI governance is a significant liability; impermissible disclosures under HIPAA; why proving a low probability of compromise is difficult after the fact, and what this means for your ability to mitigate risk; and managing the emotional pieces of identifying risk and risk mitigation in your practice. Listen here: https:\/\/personcenteredtech.com\/group\/podcast\/ For more, visit our website.","author_name":"Group Practice Tech","author_url":"https:\/\/personcenteredtech.com\/group\/podcast\/","html":"<iframe title=\"Libsyn Player\" style=\"border: none\" src=\"\/\/html5-player.libsyn.com\/embed\/episode\/id\/40795760\/height\/90\/theme\/custom\/thumbnail\/yes\/direction\/forward\/render-playlist\/no\/custom-color\/8fc855\/\" height=\"90\" width=\"600\" scrolling=\"no\" allowfullscreen webkitallowfullscreen mozallowfullscreen oallowfullscreen msallowfullscreen><\/iframe>","thumbnail_url":"https:\/\/assets.libsyn.com\/secure\/content\/200605280"}