Ensure Clinical Oversight: AI tools should support, not replace, the provider’s clinical judgment. Always maintain human oversight in decision-making.
Use Validated and Evidence-Based Tools: Only implement AI technologies that have been validated for clinical use and align with evidence-based practices.
Monitor and Evaluate Outcomes: Regularly assess the effectiveness and accuracy of AI tools, as well as any unintended consequences for client outcomes.
Pursue Ongoing Training: Behavioral health professionals should engage in continuing education to stay current with developments in AI and digital health technologies.
Clarify Scope of Use with Clients: Clearly explain the role and limitations of AI tools as part of informed consent and care planning.
Integrate with Clinical Documentation: Ensure AI-generated content meets documentation standards and integrates seamlessly with clinical workflows.
Align with Organizational Policies: Follow employer or agency guidelines around data storage, tool usage, and integration with EHR systems.
Collaborate Across Disciplines: Engage in interdisciplinary discussions and consultations to evaluate and refine AI applications in practice.
Tailor Use to Cultural and Linguistic Needs: Select and adapt AI tools to ensure cultural relevance and language accessibility for diverse populations.
Informed Consent and Client Autonomy: Clearly disclose when and how AI tools are used. Obtain specific, informed consent from clients regarding AI-assisted care.
Confidentiality and Data Security: Ensure compliance with HIPAA, state laws, and professional codes when handling AI-generated or AI-processed client data. Be transparent about data sharing, third-party vendors, and algorithmic data use.
Equity and Bias Mitigation: Recognize and address algorithmic bias that may disproportionately harm historically marginalized or underserved communities. Avoid relying on AI tools that have not been assessed for fairness and equity.
Professional Responsibility and Accountability: Providers retain full responsibility for clinical decisions, regardless of AI input. Do not defer ethical or clinical responsibility to AI systems or vendors.
Transparency and Integrity: Be honest with clients and colleagues about the capabilities, limitations, and sources of AI tools. Document how AI was used in the decision-making process, particularly when it influences diagnoses, care/service plans, or risk assessments.
Do No Harm (Nonmaleficence): Avoid implementing AI tools in ways that could cause psychological, emotional, or systemic harm. Routinely evaluate tools for safety, including unintended adverse effects.
Competence and Scope of Practice: Use AI tools only within your scope of training and licensure. Seek supervision or expert consultation when AI use enters novel or ethically ambiguous territory.