Quotes from a Survey on Creativity, Writing, and Reading in the Age of AI:
“... generative AI has significantly eroded any sort of trust between readers and writers.”
"... generative AI is a mediocrity generator… AI reworks sentences in the most predictable, pedestrian manner possible and removes any nuance ...."
ICB readers want to learn about your work and they want to read about it in your own voice.
Content
The text below is a draft of ICB's AI policy (pending revisions and final approval).
Generative Artificial Intelligence (genAI) is defined by Wikipedia as a “subfield of artificial intelligence that uses generative models to generate text, images, videos, audio, software code or other forms of data.”
AI-assisted technologies are software applications that use AI algorithms to perform specific tasks; examples include ChatGPT, Google’s Gemini, and Microsoft Copilot.
This journal recognizes that authors may use genAI tools to prepare their manuscript. Allowable uses of genAI for submissions to ICB include gaining insights, reviewing or synthesizing literature, facilitating and supporting data analysis (e.g., identifying suitable code or statistical tests), drafting text, and editing text to improve language and clarity.
Authors who use AI-assisted technology remain responsible for the content and integrity of their manuscript. Producing scientific insights, interpreting data, and drawing conclusions remains the sole responsibility of the authors. Authors must ensure that the manuscript represents the authors’ original ideas, analysis, interpretation, and conclusions.
AI-generated content must not plagiarize, misrepresent, or falsify content.
Authors must ensure that they use AI-assisted technologies in ways that safeguard data privacy, intellectual property, and other rights. Doing so may require authors to check the terms and conditions of any AI tool they use, which may contain clauses that amount to a violation of such rights.
AI-assisted technologies can produce false outcomes, ranging in severity from formatting errors, through content errors such as fabricated citations and facts, to fabrication of evidence in support of desired outcomes. Manuscripts and published articles containing AI-generated errors that substantially alter scientific conclusions will be handled like any other seriously flawed or fraudulent work. If errors are discovered prior to publication, the editors will require authors to address all concerns as part of the revision process. If substantial errors are discovered after publication, the journal may require the authors to issue an erratum or may issue a retraction.
Machine-learning tools. The use of machine-learning tools must be disclosed in the Methods section. The use of machine-learning tools should be reported like any other use of computational tools and must meet professional standards for reproducibility and transparency.
GenAI tools. Authors must disclose the use of generative AI technologies upon submission of their manuscript. Editors and reviewers will judge whether its use is appropriate. Published articles in which undisclosed use of such technologies is subsequently discovered may be retracted. Research articles on the topic of AI (but which do not contain AI-generated content) are not within the scope of this policy.
This journal reserves the right to ask authors who use AI for their corresponding prompts and outputs, which can be supplied as ESM files. Please note that the journal may reject or retract any submission that fails to declare the use of AI at any stage of the review process or that is unable to provide the corresponding outputs.
Disclosure must be included in the cover letter and in the manuscript.
The use of generative AI tools must be disclosed in a separate section at the end of the manuscript as laid out in the Artificial Intelligence Disclosure (AID) Framework (link: https://crln.acrl.org/index.php/crlnews/article/view/26548/34482). An “AID Statement” will appear in the published article that details the manner in which the authors employed AI in their manuscript. Example text below.
Example of a genAI disclosure statement, formatted as its own section at the end of the manuscript text following the AID guidelines; the example is taken from the Artificial Intelligence Disclosure (AID) Framework.
“AID Statement
Artificial Intelligence Tool: ChatGPT v.4o and Microsoft Copilot (University of Waterloo institutional instance); Conceptualization: ChatGPT was used to revise research questions; Data Collection Methods: ChatGPT was used to create the first draft of the survey instrument; Data Analysis: Microsoft Copilot was used to verify identified themes coded from open ended survey responses; Privacy and Security: no identifiable data was shared with ChatGPT during the design of this study, only the University of Waterloo institutional instance of Microsoft Copilot was used to analyze any anonymized research data in compliance with University of Waterloo privacy and security policies; Writing—Review & Editing: ChatGPT was used in the literature review to provide sentence-level revisions and metaphor options; Project Administration: ChatGPT was used to establish a list of tasks and timelines for the study.”
Authors must not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author. Authorship implies responsibilities and tasks that can only be attributed to and performed by humans. Authors are also responsible for ensuring that the work is original, that the stated authors qualify for authorship, and that the work does not infringe third-party rights.
Please note: to protect authors’ rights and the confidentiality of their research, this journal does not currently allow the use of AI-assisted technologies such as ChatGPT or similar services by reviewers or editors in the peer review and manuscript evaluation process. Oxford University Press is actively evaluating compliant AI Tools and may revise this policy in the future.
ICB does not permit the use of generative AI or AI-assisted tools to create or alter images in submitted manuscripts. This prohibition includes enhancing, obscuring, moving, removing, or introducing a specific feature within an image or figure. Adjustments of brightness, contrast, or color balance are acceptable as long as they do not obscure or eliminate any information present in the original. Image forensics tools or specialized software may be applied to submitted manuscripts to identify suspected image irregularities.
The use of AI or AI-assisted tools is permitted only if studying the performance of these tools is part of the research design or research methods (such as machine-learning-assisted imaging approaches used to generate or interpret the underlying research data, for example in the field of biomedical imaging). In this case, such use must be described in a reproducible manner in the Methods section. The description must include an explanation of how the AI or AI-assisted tools were used in the image creation or alteration process, along with the name of the model or tool, version and extension numbers, and manufacturer. Authors should adhere to the AI software’s specific usage policies and ensure correct content attribution. Where applicable, authors may be asked to provide pre-AI-adjusted versions of images and/or the composite raw images used to create the final submitted versions, for editorial assessment.
The use of generative AI or AI-assisted tools in the production of artwork, such as for graphical abstracts, is not permitted. The use of generative AI in the production of cover art may in some cases be allowed if the author obtains prior permission from the journal editor and publisher, can demonstrate that all necessary rights have been cleared for the use of the relevant material, and ensures that there is correct content attribution.
ICB strongly discourages the use of Generative AI or AI-assisted tools to create or edit peer reviews.
Entering text or visuals from a manuscript into a genAI tool may constitute a breach of confidentiality.
ICB relies on reviewers to conduct reviews in accordance with, and in order to uphold, the standards of the journal. While generative AI may offer potential opportunities, please ensure that these types of tools and resources are not used as a substitute for your expert opinion and do not supersede your own judgment.
Maintaining confidentiality both throughout and following the review process is important, so please do not share information about this manuscript, its content, or your review with any person or entity, including Large Language Models (LLMs) and AI tools.
Primary sources
Kosmyna, Nataliya, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, Xian-Hao Liao, Ashly Vivian Beresnitzky, Iris Braunstein, and Pattie Maes. "Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task." arXiv preprint arXiv:2506.08872 (2025).
Peters, Uwe, and Benjamin Chin-Yee. "Generalization bias in large language model summarization of scientific research." Royal Society Open Science 12, no. 4 (2025): 241776.
Policy sources
"Authorship and AI tools" position paper by the Committee on Publication Ethics (COPE)
Opinion pieces and editorials
Why AI writing is so generic, boring, and dangerous: Semantic ablation (blog "The Register," posted Feb 26 2026, written by Claudio Nastruzzi)
Our Survey on Creativity, Writing, and Reading in the Age of AI (blog "Ellipsus," posted April 20 2026, written by Rex Mizrach)