This is a self-assessment tool for safe and ethical use of AI in clinical learning and practice. Clinicians should aim to check all items consistently. Any unchecked item is a red flag to address in supervision or training.
If you are an SFDPH provider, make sure to review and align your AI use and practice with the guidelines and policies below:
The rapid spread of AI tools means many clients are already experimenting with them for mental health support. Some may bring their experiences into therapy, while others might use these tools privately without telling their therapist. When clients are already using these tools, therapists have an ethical responsibility to make sure that this use: (1) Does not undermine the therapeutic process; (2) Protects client confidentiality and safety; (3) Stays anchored to the client’s therapy goals; and (4) Becomes an opportunity for reflection and collaboration, rather than a source of confusion or risk.
The rise of large language models (LLMs) means many clinicians, supervisors, and trainees are beginning to explore how these tools might be used as clinical brainstorming partners. Importantly, AI is not a treatment provider, not a decision-maker, and not a substitute for supervision or professional judgment. This handout provides guidelines for safely and ethically using AI in two main ways: (1) Generic Skill Building: Practicing principles and techniques that apply across many clients; and (2) Case-Specific Use: Generating ideas for de-identified clinical cases when a therapist or supervisor feels stuck.
Ensure Clinical Oversight: AI tools should support, not replace, the provider’s clinical judgment. Always maintain human oversight in decision-making.
Use Validated and Evidence-Based Tools: Only implement AI technologies that have been validated for clinical use and align with evidence-based practices.
Monitor and Evaluate Outcomes: Regularly assess the effectiveness and accuracy of AI tools, as well as any unintended consequences for client outcomes.
Pursue Ongoing Training: Behavioral health professionals should engage in continuing education to stay current with developments in AI and digital health technologies.
Clarify Scope of Use with Clients: Clearly explain the role and limitations of AI tools as part of informed consent and care planning.
Integrate with Clinical Documentation: Ensure AI-generated content meets documentation standards and integrates seamlessly with clinical workflows.
Align with Organizational Policies: Follow employer or agency guidelines around data storage, tool usage, and integration with EHR systems.
Collaborate Across Disciplines: Engage in interdisciplinary discussions and consultations to evaluate and refine AI applications in practice.
Tailor Use to Cultural and Linguistic Needs: Select and adapt AI tools to ensure cultural relevance and language accessibility for diverse populations.
Informed Consent and Client Autonomy. Clearly disclose when and how AI tools are used. Obtain specific, informed consent from clients regarding AI-assisted care.
Confidentiality and Data Security. Ensure compliance with HIPAA, state laws, and professional codes in handling AI-generated or processed client data. Be transparent about data sharing, third-party vendors, and algorithmic data use.
Equity and Bias Mitigation. Recognize and address algorithmic bias that may disproportionately harm historically marginalized or underserved communities. Avoid relying on AI tools that have not been assessed for fairness and equity.
Professional Responsibility and Accountability. Providers retain full responsibility for clinical decisions, regardless of AI input. Do not defer ethical or clinical responsibility to AI systems or vendors.
Transparency and Integrity. Be honest with clients and colleagues about the limitations, capabilities, and sources of AI tools. Document how AI was used in the decision-making process, particularly when it influences diagnoses, care/service plans, or risk assessments.
Do No Harm (Nonmaleficence). Avoid implementing AI tools in ways that could cause psychological, emotional, or systemic harm. Routinely evaluate tools for safety, including unintended adverse effects.
Competence and Scope of Practice. Use AI tools only within your scope of training and licensure. Seek supervision or expert consultation when AI use enters novel or ethically ambiguous territory.