With its rapid evolution, generative AI (GenAI) presents exciting opportunities for researchers, including faster scientific progress, more efficient data analysis, and new insights into intricate problems. However, embracing this groundbreaking technology requires careful consideration due to the associated ethical, security, and responsible use concerns.
As a public research institution, we are committed to fostering an environment that embraces the power of AI while upholding academic integrity, safety, and security. These guidelines are a starting point for navigating the ethical landscape of AI-powered research and offer recommendations for integrating GenAI tools, like ChatGPT, and other AI-assisted technologies into your research practices with integrity and rigor.
Recognizing the rapid advancements in AI, these guidelines are deliberately flexible and responsive. We will regularly revisit and revise them to ensure they align with emerging ethical considerations and evolving regulations set by national organizations and funding agencies like the National Science Foundation (NSF) and the National Institutes of Health (NIH).
We invite the entire university community - faculty, staff, and students alike - to actively engage with these guidelines, seek feedback, participate in discussions, and help shape this ethical roadmap together. For up-to-date information on the university's broader AI initiatives and to stay informed about compliance with external guidelines, please visit PSU's AI Initiatives.
Academic authorship signifies a substantial intellectual contribution and accountability for research content. While AI models excel in specific tasks like text generation and data analysis, they lack the necessary depth of understanding and critical judgment expected of authors.
Why can't AI claim authorship?
Intellectual contribution: Though creative or insightful, AI output often stems from pattern recognition rather than a grasp of underlying concepts. This falls short of the substantial intellectual contribution expected from authors.
Accountability: AI's lack of independent thought and judgment makes it difficult to hold it accountable for its output, raising ethical concerns for attributing authorship. Responsibility is inherent to authorship and cannot readily be placed on AI models in their current state.
Direct involvement: AI lacks the understanding and expertise required for direct involvement in research. It functions primarily as a support tool, distinct from the active participation of human researchers.
Despite authorship limitations, AI offers valuable tools for research writing, including:
Brainstorming: generating ideas and suggesting new topics to spark your creativity.
Outlining and structuring: organizing essays and reports into a coherent framework.
Drafting sections: producing first drafts of parts of your paper to give you a head start.
Paraphrasing and summarizing: rephrasing text or condensing existing information.
Grammar and style checking: polishing the mechanics and tone of your writing.
In short, while AI is a valuable tool, it is not a replacement for your expertise and judgment. Always carefully review and edit AI-generated text to ensure accuracy, originality, and ethical soundness. You should also consult comparable position statements and policies from research publications and organizations, including the Committee on Publication Ethics, the World Association of Medical Editors, Elsevier, and JAMA, for additional guidance.
Yes, GenAI can be a valuable tool for graduate students working on their theses and dissertations. However, responsible and ethical use demands transparency. You must disclose AI utilization clearly and informatively, noting how it has contributed to your work.
Disclosures in the Text:
The specific format for disclosing AI use depends on your field and thesis format. Here are some options:
Materials and Methods section: If the dissertation or thesis has a "Materials and Methods" section, AI usage can be disclosed in this section. This approach is common in scientific disciplines where AI is used for data analysis, experimentation, and generating figures or graphs.
Acknowledgments section: If the dissertation or thesis does not have a "Materials and Methods" section, AI usage can be noted in the acknowledgments section. This is a suitable option for disciplines where AI is primarily used for text generation, editing, or creating figures or graphs to illustrate concepts or findings.
Footnotes: For specific references to AI-generated content or analyses within the text of the thesis or dissertation, footnotes can provide a concise and precise method of disclosure. This approach is beneficial when AI tools play a significant role in specific sections.
Separate statement: Alternatively, a separate statement that explicitly details the use of AI in the research and writing process can be appended to the thesis/dissertation or included as a supplementary document.
What to Disclose:
Regardless of the format, your disclosure should include:
Details of AI Usage: Specify the AI tools used (e.g., data analysis, text generation, editing tools) and their settings, input data, and parameters.
Extent of AI Involvement: Explain how AI was used, whether for generating text, editing content, conducting data analysis, creating figures or graphs, or a combination.
Purpose of AI Utilization: Clarify the intent behind AI involvement, such as improving efficiency, exploring possibilities, or enhancing data analysis.
It is important to note that disclosure requirements extend beyond text generation, data analysis, and figure creation. You should also disclose the use of AI-enabled technologies for experimentation, simulation, or other aspects of the research process.
The following examples illustrate some approaches for documenting AI use in theses and dissertations, enabling readers to comprehend its role:
Example 1 (Text Generation and Editing):
"In Chapter 3, I used the generative AI tool ChatGPT 4 to help generate initial drafts of sections 3.2 and 3.3. I then extensively edited and revised these sections to ensure they accurately reflected my research findings and adhered to academic writing standards. The specific settings used in ChatGPT included a focus on academic tone.”
Example 2 (Data Analysis and Figure Creation):
"The data analysis presented in Chapter 4 was conducted using the AI-powered software package 'StatisticianX'. This software was primarily used for advanced statistical modeling and generating the figures in sections 4.2 and 4.3. However, I conducted the interpretation of the results and selection of models independently and am solely responsible for the conclusions drawn from the analysis."
Example 3 (Combination of Tools):
"Throughout my dissertation research, I utilized various AI tools to enhance my work. Specifically, I used Bard to brainstorm potential research questions and generate outlines for certain sections. For data analysis, I relied on StatisticianX, and I employed the editing capabilities of Grammarly to improve the clarity and conciseness of my writing. In all cases, I carefully reviewed and edited the AI-generated content to ensure accuracy and alignment with my research goals."
Generative AI can streamline the grant-writing process, but responsible use is always paramount. AI excels at producing grammatically flawless text, yet it may require your guiding hand to capture the creativity and originality often sought in winning proposals. Additionally, AI-generated text can sometimes contain factual inaccuracies or hidden biases, necessitating thorough human review and editing before submission.
Before diving into AI-powered grant writing, take the time to review the target agency's policies. Some agencies may have specific guidelines or limitations regarding AI tools. Don't hesitate to reach out to program officers for personalized guidance; they're often a valuable resource for navigating these nuances. For example, NSF's guidelines encourage proposers to indicate in the project description the extent to which, if any, GenAI technology was used and how it was used to develop their proposal. NSF's policy emphasizes that while GenAI can aid in developing proposals, researchers are responsible for the accuracy and authenticity of their submissions, including content developed with the assistance of GenAI tools.
In general, the use of AI in research should be guided by the following principles:
Transparency: Disclose AI usage in publications, presentations, and grant applications, documenting models and algorithms employed.
Accountability: Ensure quality and integrity of AI-generated work, verifying its accuracy, unbiased nature, and ethical foundation.
Human oversight: AI should not autonomously make decisions impacting individuals' lives. Human experts must review and approve AI-generated content.
In short, championing responsible AI in your grant writing cultivates trust in your research and safeguards your reputation, ensuring your proposals stand out for both efficiency and ethical grounding.
While AI tools offer potent support in reviewing grant proposals and manuscripts, their function ought to be clearly defined. They excel at pinpointing patterns, inconsistencies, and potential plagiarism, acting as a first scan and highlighting areas demanding further human scrutiny.
For proposals, AI assists by:
Analyzing budgets for inconsistencies or formatting deviations, alerting reviewers to potential concerns and freeing them to spend more time evaluating research plans and justifications.
Identifying relevant literature by scouring databases for publications reviewers might have missed, granting them a broader understanding of a project's context and potential impact.
Flagging potential ethical concerns, helping reviewers verify adherence to ethical guidelines and standards.
For manuscripts, AI helps by:
Performing plagiarism checks against vast research databases, detecting potential plagiarism or text overlap, especially subtle cases easily missed by human reviewers alone (a minimal sketch of this kind of overlap check follows this list).
Assessing formatting consistency for adherence to journal guidelines, streamlining the review process.
Flagging potential factual errors by analyzing data figures and scientific claims against established databases and knowledge graphs, identifying inconsistencies that might escape human reviewers and ultimately enhancing the accuracy and quality of published research.
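To make the text-overlap idea concrete, here is a minimal Python sketch of the kind of n-gram comparison such tools build on. It is an illustrative toy, not any vendor's actual algorithm; production plagiarism detectors index vast databases and handle paraphrase, quotation, and formatting far more robustly.

    def ngrams(text: str, n: int = 5) -> set:
        """Return the set of n-word sequences in a text."""
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_score(submission: str, source: str, n: int = 5) -> float:
        """Fraction of the submission's n-grams that also appear in the source."""
        sub = ngrams(submission, n)
        return len(sub & ngrams(source, n)) / len(sub) if sub else 0.0

    a = "the rapid evolution of generative AI presents exciting opportunities for researchers"
    b = "generative AI presents exciting opportunities for researchers across many fields"
    print(f"{overlap_score(a, b):.2f}")  # 0.43: heavily shared phrasing worth a closer look

A high score only flags a passage for human scrutiny; judging whether overlap is legitimate quotation, common phrasing, or plagiarism remains a human call.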
While AI offers valuable assistance in flagging potential issues and patterns, its limitations in critical thinking, domain expertise, and nuanced judgment render it insufficient for comprehensive review. Assessing originality, significance, and scientific rigor remains the domain of human experts.
However, harnessing AI's promise of efficiency and insights cannot come at the expense of review integrity and confidentiality. Sharing sensitive research with external AI tools introduces potential vulnerabilities to intellectual property and confidential data. Recognizing these risks, major funding agencies like NSF and NIH have taken decisive steps. NSF restricts reviewers from uploading proposal content to unapproved AI tools, prioritizing review integrity. NIH goes further, prohibiting the use of generative AI in its peer review process entirely.
Given these restrictions and AI's limitations, consult specific agency or journal guidelines on AI disclosure and ethical application in the review process. Ultimately, the final evaluations of scholarly work and grant proposals rest with the discerning eyes of human experts, ensuring adherence to academic standards and agency requirements while appreciating the unique merits of each submission.
Integrating AI into your research demands a nuanced approach: harness its transformative potential while prioritizing ethical principles and the safeguarding of individual privacy. Navigate this intricate landscape carefully, ensuring AI fuels knowledge advancement while adhering to stringent ethical standards and robust privacy protections.
Ethical Considerations:
Bias and Fairness: Examine your data and algorithms with a critical eye. Identify and mitigate potential biases that could taint your results and lead to unfair outcomes. Consult AI and data science experts for guidance in ensuring ethical and unbiased analysis.
Transparency and Explainability: Make your AI usage transparent. Document your chosen methodologies clearly and explain how your tools process data and reach conclusions. Aim for explanations accessible to both technical and non-technical audiences, building trust and demonstrating accountability for your research.
Accountability and Responsibility: Ensure strict adherence to established ethical principles throughout the research process, actively safeguarding against potential harm. This responsibility encompasses the accuracy, potential biases, and ethical considerations inherent in all AI outputs generated within your research.
Human Oversight and Control: AI should complement, not replace, human judgment and decision-making in research. This translates to:
Active monitoring of AI outputs: Regularly review and verify AI outputs for accuracy and relevance.
Understanding AI limitations: Be aware of the tool's limitations, including biases in training data.
Setting clear boundaries for AI use: Define limits on its data access and decision-making autonomy.
Ensuring ethical compliance: Regularly check that AI use aligns with ethical guidelines and research standards.
Using AI as a tool, not a decision-maker: Employ AI to support, not substitute, human intellect and expertise.
Staying informed on AI developments: Keep up with advancements in AI technology and their implications for research methodologies and ethics.
Privacy Considerations:
Data protection and security: Research involving AI and personal data demands robust safeguards. Implement strict data protection and security measures to shield privacy, prevent unauthorized access, and guarantee the secure handling of personal information.
Informed consent and anonymization: Respect participants' rights from the start. Provide clear and explicit informed consent about data collection, use, and sharing in AI-powered research. Minimize risks by anonymizing data whenever possible, ensuring compliance with IRB protocols and adhering to established privacy policies (see When does AI data collection and training fall under Human Subjects Research regulations?).
Integrating AI into academic research has introduced novel intellectual property (IP) challenges, particularly regarding ownership, data licensing, open-source resources, patentability, and collaborative agreements.
Ownership of AI-generated work: One of the primary issues in academic research is determining the ownership of work created by AI. Researchers often develop and train AI models, but the AI system generates the output and results. Depending on institutional policies and agreements, researchers, institutions, or AI developers – and not people who enter prompts – may claim ownership of the AI-generated work.
Data ownership and licensing: AI often relies on large datasets for training and analysis. In academic research, obtaining the right to use these datasets can be subject to licensing agreements and legal constraints. Researchers must be aware of data ownership and licensing terms, ensuring they have the necessary rights to use the data in their research.
Open access and open-source AI tools: Open-source AI tools and libraries have become common in academic research. These tools often come with specific licenses, and researchers need to understand and adhere to those licenses when using such resources. Additionally, researchers should consider whether they must share their AI models and code as part of open-access principles in academia.
Patents and inventions: AI-generated inventions may be eligible for patent protection. Determining inventorship can be complex when AI systems contribute significantly to developing new technologies. Also, entering information into an AI content generator may constitute a public disclosure that jeopardizes patent rights.
Collaboration and licensing agreements: Collaboration between academic institutions, researchers, and industry partners in AI research may involve complex licensing agreements and IP-sharing arrangements. Researchers should be clear about how their contributions are recognized and protected within these collaborations.
While AI offers innovative tools for academic research, maintaining academic integrity and adhering to ethical standards remain paramount. Although Portland State University's Student and Faculty Codes of Conduct don't explicitly mention AI, their core principles of honesty, transparency, and originality still apply. These principles guide researchers in ensuring their work using AI tools and algorithms is authentic, transparent, original, and properly attributed.
Furthermore, researchers must be mindful of the ethical implications of AI and conduct their research with respect for privacy and confidentiality. For detailed information on navigating these principles and potential ethical concerns surrounding AI use, consult PSU's Student and Faculty Codes of Conduct.
The impressive capabilities of generative AI tools like ChatGPT come with a crucial responsibility: safeguarding sensitive and confidential information. These tools process data extensively, requiring the implementation of robust safeguards for research integrity and individual privacy. Here's how you can secure your data:
Minimize Data Exposure:
Shield confidential data: Avoid feeding AI tools with personal identification numbers (PINs), financial information, health records, intellectual property, unpublished research data, or any other legally restricted data.
Share minimal data: Provide only the essential data required for the specific task, avoiding unnecessary exposure of sensitive information. Ask yourself, "Does the AI tool need this specific piece of data to complete its task?" (A minimal redaction sketch follows this list.)
Assume public disclosure: Information shared with AI tools could become public via future model updates or dataset integrations. Treat all data as if it could be disclosed publicly.
Adjust security settings: Review and tighten security settings within the AI tool itself to enhance data protection. Look for options like anonymization, encryption, and access control features.
Consult university policy: Refer to Portland State's Information Security Policy for clear guidelines on restricted data types and acceptable platforms for handling them.
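As one concrete way to practice data minimization, the following Python sketch redacts common identifiers from text before it is shared with an external AI service. The patterns and the redact_for_ai helper are hypothetical illustrations, not a vetted PSU tool; adapt the patterns to the data types your project actually handles, and remember that regex redaction is a first pass, not a guarantee.

    import re

    # Hypothetical patterns for a few common identifier types; extend as needed.
    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }

    def redact_for_ai(text: str) -> str:
        """Replace likely identifiers with placeholder tags before sharing text."""
        for label, pattern in REDACTION_PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    prompt = "Summarize: contact jane.doe@pdx.edu or 503-555-0100 re: subject 412-33-8760."
    print(redact_for_ai(prompt))
    # Summarize: contact [EMAIL REDACTED] or [PHONE REDACTED] re: subject [SSN REDACTED].

Running data through a filter like this before every AI interaction makes "share minimal data" a habit rather than a judgment call made under deadline pressure.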
Maintain Research Integrity:
Review AI outputs: Carefully review AI-generated content before publication or sharing to ensure accuracy, originality, and absence of sensitive information. Look for any unintentional confidential data leaks or biases within the generated text.
Seek guidance: Don't hesitate to consult PSU's IT department for additional advice and support on securely handling research data with AI tools. They can offer guidance on anonymization techniques, secure data-sharing practices, and specific university resources for researchers with sensitive data.
The Common Rule defines research as "a systematic investigation... designed to develop or contribute to generalizable knowledge." In the context of AI, data collection can fall under this definition when it directly aims to generate new knowledge, including insights about AI itself or its applications.
Examples include:
Training an AI model: Collecting patient data to identify patterns for medical diagnosis would qualify as research under the Common Rule.
Evaluating AI tools: Gathering data to assess the effectiveness of an AI-powered language translation tool would also be considered research.
However, the Common Rule offers exemptions for specific situations:
Publicly available data: Social media posts and other generally accessible information typically fall outside the Common Rule's scope.
De-identified data: Data where individuals cannot be readily identified is also exempt.
Applying these exemptions to AI research gets tricky. AI tools often rely on vast datasets that might contain a mix of publicly available and de-identified data, potentially leaving individual privacy inadequately protected.
Recognizing this challenge, the 2018 revision of the Common Rule committed to regularly revisiting the concept of identifiability. This suggests future refinements to the regulatory framework to better address the unique complexities of AI-enabled research. Meanwhile, researchers utilizing AI should prioritize the ethical implications of their data collection practices and actively safeguard individual privacy. This might involve:
Informed consent: Obtaining clear and informed consent from participants whenever needed.
Anonymization techniques: Implementing robust anonymization strategies to protect individual identities (a pseudonymization sketch follows this list).
Data minimization: Limiting data collection and retention to the minimum required for the research.
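As a minimal illustration of the anonymization point above, the Python sketch below pseudonymizes participant records by dropping direct identifiers and replacing the participant ID with a salted hash. The field names are hypothetical, and note the caveat: salted hashing is pseudonymization rather than full anonymization, since re-identification may still be possible from the remaining fields.

    import hashlib
    import secrets

    # Keep the salt out of the released dataset; without it, the hash cannot be
    # reversed by simply guessing participant IDs.
    SALT = secrets.token_hex(16)

    def pseudonymize(record: dict) -> dict:
        """Drop direct identifiers and replace the ID with a salted hash."""
        token = hashlib.sha256((SALT + record["participant_id"]).encode()).hexdigest()[:12]
        kept = {k: v for k, v in record.items()
                if k not in {"participant_id", "name", "email"}}
        return {"pseudo_id": token, **kept}

    record = {"participant_id": "P-0042", "name": "Jane Doe",
              "email": "jane@example.edu", "response_score": 7}
    print(pseudonymize(record))  # {'pseudo_id': '...', 'response_score': 7}

For data that must be truly anonymous, consult IRB staff about stronger techniques such as aggregation, generalization, or differential privacy.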
By taking these steps, researchers can leverage the power of AI responsibly while adhering to ethical principles and protecting the privacy of individuals contributing to their work.
Note: In preparing these guidelines, ChatGPT (GPT-4, OpenAI) was utilized for initial information gathering and assistance in drafting certain sections. The interaction with ChatGPT occurred in November and December 2023. It is acknowledged that ChatGPT's outputs are based on its training data up to April 2023 and do not represent novel research insights or peer-reviewed information.