This policy outlines the appropriate and ethical use of Artificial Intelligence (AI) tools within NKY Health to ensure compliance with data privacy policies. It defines key AI-related terms, sets guidelines for the use of AI tools, and emphasizes the protection of proprietary, confidential, and patient health information (PHI).
Artificial Intelligence (AI):
AI refers to the simulation of human intelligence by computer systems. These systems can perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.
Machine Learning (ML):
A subset of AI, machine learning involves the use of algorithms that enable computers to learn from and make predictions or decisions based on data, without being explicitly programmed for each specific task.
Natural Language Processing (NLP):
NLP is a branch of AI that focuses on the interaction between computers and humans through natural language. It enables computers to understand, interpret, and respond to human language, including handwritten text.
Proprietary Information:
Information owned by NKY Health, including research, methodologies, or intellectual property that is not publicly available and is protected from disclosure.
Confidential Information:
Information that is not intended for public disclosure, including but not limited to employee information, patient records, or other sensitive organizational data.
Protected Health Information (PHI):
Protected health information (PHI) is any information in the medical record or designated record set that can be used to identify an individual and that was created, used, or disclosed in the course of providing a health care service such as diagnosis or treatment.
AI Hallucination:
An AI hallucination occurs when an artificial intelligence system, such as a machine learning model or natural language processing tool, generates information or responses that are inaccurate, misleading, or entirely fabricated. This can occur due to limits in the model's understanding, biases in the submitted data, or requests involving overly complex tasks.
The scope of using Artificial Intelligence (AI) in public health encompasses a wide range of applications aimed at improving health outcomes, enhancing decision-making, and optimizing resource allocation. AI can be leveraged for predictive analytics to forecast disease outbreaks, monitor trends, and identify at-risk populations. It also supports data analysis by processing large datasets, uncovering patterns in health data, and generating insights that can inform policy development and health interventions. In addition, AI tools like natural language processing (NLP) can analyze unstructured data such as survey responses or case notes, helping to streamline workflows and improve efficiency. AI can also assist in clinical decision support, providing healthcare professionals with enhanced diagnostic tools and treatment recommendations. However, the use of AI must be carefully managed to ensure compliance with data privacy laws, including HIPAA, particularly when handling protected health information (PHI). The technology should be used ethically, with a focus on safeguarding sensitive data and ensuring that human oversight is maintained in critical health decisions.
AI tools should only be used for approved public health initiatives, research, and departmental functions. All AI use must align with federal, state, and local laws, including HIPAA regulations. AI applications involving sensitive public health data must be approved by the NKY Health Data Modernization Initiative Steering Committee, Programs Manager for Informatics, Data, EPI, and Analytics, or other designee.
No proprietary or confidential documents should be used as input for AI tools, particularly tools that are cloud-based or hosted by third parties, unless expressly authorized and under contract with those vendors. Staff must ensure that any data input into AI systems does not include any non-public materials or sensitive employee information.
All outputs from AI systems must be reviewed by staff to ensure accuracy and compliance with department policies. AI outputs may contain AI hallucinations or factual errors and should not be considered final until thoroughly reviewed by the responsible staff member.
AI tools should not be used in any way that could potentially expose or compromise protected health information unless the tool is fully compliant with HIPAA requirements.
When using AI for healthcare-related analysis, PHI should be de-identified in accordance with HIPAA’s de-identification guidelines before being processed by any AI tool.
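As a minimal illustration of the de-identification step described above, the sketch below removes direct identifiers from a record before it is passed to an AI tool. The field names and record layout are illustrative assumptions, not an official NKY Health schema, and full HIPAA Safe Harbor de-identification covers 18 identifier categories; any real implementation should be validated by a privacy officer.

```python
# Sketch: strip direct identifiers from a record prior to AI processing.
# Field names below are assumptions for illustration, not a real schema.

# Illustrative subset of HIPAA Safe Harbor identifier fields (assumed names).
DIRECT_IDENTIFIERS = {
    "name", "address", "phone", "email", "ssn",
    "medical_record_number", "date_of_birth",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Doe",
    "date_of_birth": "1980-04-02",
    "zip3": "410",           # first three ZIP digits are generally permitted
    "diagnosis": "influenza",
}
clean = deidentify(patient)
```

After this step, only the non-identifying fields (here, `zip3` and `diagnosis`) remain for analysis.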
Before using any third-party AI service, a thorough assessment of the tool’s compliance with HIPAA and data protection standards must be conducted. Business Associate Agreements (BAAs) must be established with third-party vendors if AI tools will handle PHI.
All PHI used by AI tools must be encrypted both at rest and in transit. Access controls and audit logs must be implemented to track who is accessing PHI and AI systems, and to ensure that the use of PHI is appropriately monitored.
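The access-control and audit-log requirement above can be sketched as follows. The role names, usernames, and log format are illustrative assumptions only; a production system would integrate with the department's actual identity provider and tamper-resistant logging.

```python
# Sketch: role-based access control for PHI with an audit trail that records
# every access attempt. Roles and log fields are assumptions for illustration.
import datetime

AUTHORIZED_ROLES = {"epidemiologist", "clinician"}  # assumed role names
audit_log: list[dict] = []

def access_phi(user: str, role: str, record_id: str) -> bool:
    """Check the user's role and log every attempt, granted or denied."""
    allowed = role in AUTHORIZED_ROLES
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "record": record_id,
        "granted": allowed,
    })
    return allowed
```

Note that denied attempts are logged as well as granted ones, so the audit trail supports monitoring of inappropriate access patterns, not just successful use.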
Sensitive or protected data should only be shared and stored using secure methods and should be encrypted whenever required by this policy or applicable regulations.
No AI tools should be used for processing, analyzing, or storing PHI unless the system meets all legal and regulatory standards, including encryption, audit trails, and role-based access controls.
AI tools should not be used to make decisions directly impacting patient care without human oversight.
All employees and contractors using AI tools must undergo training on the ethical use of AI, data protection practices, and HIPAA compliance. Regular updates on AI tools and related security protocols must be provided.
Regular audits will be conducted to ensure compliance with this policy and to evaluate the security of AI tools handling sensitive information.
Violations of this policy may result in disciplinary action, up to and including dismissal from employment or termination of contract, depending on the severity of the breach.
This policy will be reviewed as needed to adapt to changes in AI technology, regulations, or departmental practices.
Any revisions must be approved by the NKY DMI Steering Committee to ensure continued compliance with applicable laws and standards.
m/d/yyyy
First revision