Scribendi Policy on Generative AI
The academic use of generative AI—a technology both transformative and dangerous—requires careful consideration, and this policy clarifies its appropriate use in documents submitted for Scribendi's editing and proofreading services. To protect academic integrity, Scribendi will provide services only for academic work that uses generative AI in an acceptable manner. If the use of generative AI is unacceptable, an order may need to be cancelled.
Acceptable uses:
- To facilitate literature reviews as part of identifying gaps in current research
- To help locate potential new sources for subsequent review and evaluation (never blindly accept an AI-suggested source)
- To support the writing and revision process (but not to generate an entire document, except in particular circumstances; see the unacceptable uses below)

Unacceptable uses:
- To blindly create content (The document must be the original work of the author and not that of generative AI; the thoughts, ideas, and arguments must be those of the author.)
- To create academic documents whose content has been wholly generated by AI
Please see below for more information.
Documents in which generative AI has been utilized appropriately are accepted at Scribendi, though it should be noted that institutions, publishers, and journals will have differing policies on generative AI, with some prohibiting its use and others allowing specific types of use (with varying levels of strictness and severity regarding potential misuse). Their specific policies take priority and should be meticulously followed. The policy at Scribendi clarifies generative AI use solely in relation to clients seeking editing or proofreading.
In a paper, the use of generative AI should be carefully described to ensure transparency and reproducibility. Just as a researcher should note a statistical package used to analyze data (e.g., "We analyzed our data utilizing IBM SPSS Statistics (Version 27)"), the generative AI and its specific uses should be clearly outlined. Some style guides may have further requirements, such as an appendix that includes the specific prompts that were used.
Editorial Action: If it is objectively clear that a client has used AI in an academic document but has not described its use, please provide a margin comment recommending that they follow this best practice. Please do not make subjective evaluations of whether AI was used; if the use is not clear-cut, simply revise the document as normal based on the service requirements.
If specific content or ideas from generative AI are used, they need to be cited and referenced, as with any source. The reference formatting may vary based on a specific style guide or institutional requirements.
Editorial Action: If a client has clearly utilized AI in their academic work but has not cited it, please provide a margin comment recommending that they do so. If this occurs often, the recommendation bears repeating in the Editor Notes as well. We want to do our due diligence in helping clients avoid potential referencing concerns related to AI use. Please do not make subjective evaluations of whether AI was used; if the use is not clear-cut, simply revise the document as normal based on the service requirements.
The unacknowledged use of generative AI should always be avoided, and generative AI should not be used to create unconfirmed content. The document must be the original work of the author and not of generative AI; the thoughts, ideas, and arguments must be those of the author. Content drawn from generative AI has a high level of risk: it may be a fabrication (an AI "hallucination"), objectively incorrect or unsupported, or the unacknowledged work of others, violating copyrights and intellectual property rights.
Editorial Action: If a client indicates they have used generative AI to create the content in an academic document, please report this issue to Customer Service, as you would with a case of large-scale plagiarism. However, it is not Scribendi's place to make subjective determinations or guesses about whether AI was used inappropriately. If a potentially inappropriate use of AI is not clear-cut, we should continue with our work as normal; ultimately, responsibility for the use of AI rests with the author, and it is not Scribendi's role to investigate and police such use. We cannot accuse clients of wrongdoing based on guesswork or subjective appraisal. In support of academic integrity, we can deal with clear-cut cases and provide guidance for clients who may not understand the risks and concerns related to generative AI. Subjective investigations of misuse, however, do not lie within our scope of operations. In such cases, a polite margin comment reminding the author to double-check the text and ensure that all sources and tools have been cited (or a similar relevant suggestion, depending on the context) may be an appropriate option.
Generative AI can completely fabricate information and provide incorrect and unsupported data, even citing and referencing nonexistent sources and publications. Generative AI may provide outdated information; indeed, AI cannot provide any recent information, studies, or breakthroughs that are not found in the AI model's training data, so current AI models are permanently behind the times (e.g., if an AI was trained on data up to the summer of 2021, no material after this point can be considered when generating a response to a prompt). Generative AI can also create text or content that is severely biased or even offensive, either via hallucinations or the repetition of problematic online content.
Editorial Action: If a client has cited an AI source for content that seems unverifiable, it is worthwhile to provide a margin comment suggesting that they double-check and confirm this information against outside sources to guard against AI hallucinations.
Generative AI tools are probabilistic language models that draw on language patterns to predict text and content based on the prompts provided, utilizing the store of data and textual material on which the model was trained. They cannot think; they can neither create new ideas nor accurately synthesize materials to draw new conclusions. By their nature, they cannot provide the original material necessary to meet the academic standard of original research, as they merely reformat previously published material or hallucinate content to fit the prompt.
Editorial Action: If a client cites some claim or conclusion from AI that seems illogical, unclear, or contradictory, please outline the potential concerns in a margin comment.
While generative AI can help with language concerns in the drafting and revision stages, there is a risk that it will change the intended meaning of the text. Generative AI cannot understand an author's intent, only the patterns in the text on which it was trained; it can thus change the meaning of sentences, perhaps drastically, so care should always be taken when generative AI is used.
Editorial Action: If a client has used AI to support their work but the end result seems unclear or problematic, please outline the potential concerns in a margin comment.
There are data security and privacy risks when uploading content via prompts to generative AI. Certain materials should never be uploaded. Data arising from human research participants (e.g., personal information or any information that could be used to identify an individual or group) should not be included in a prompt, as the data may be exposed and subsequently shared to others, violating both research and data-security standards. Confidential information (e.g., contractual, financial, or security information) related to a study, project, person, group, business, or institution should never be uploaded. Breaching non-disclosure agreements and data-security legislation could have serious academic, legal, and financial ramifications. Before uploading any material to generative AI, a thorough risk assessment should be conducted.
Editorial Action: If a client has used AI in a way that is problematic in terms of data privacy and security (and for which they would likely be liable), please outline the potential concerns in a margin comment.
While the legal landscape in relation to generative AI is still evolving, it's clear that authors are responsible, both academically and legally, for the work they produce with the aid of generative AI, including any libel charges or infringement of intellectual property rights or copyrights.
Editorial Action: If a client has utilized AI in a way that might clearly and specifically put them at risk of violating copyrights or intellectual property rights, please outline the potential concerns in a margin comment. All generative AI use carries some general risk in this regard, and such general risk does not need to be outlined when completing an order. However, if there is a clear and identifiable concern with a specific use, please do highlight it for the client.
Scribendi has its own proprietary AI tool, called Scribendi AI, but it is important to note that this tool is not generative AI, such as ChatGPT; rather, it is a predictive AI tool that utilizes machine learning to assist editors with proofreading and editing. Scribendi AI does not operate independently of a human operator and cannot directly make any changes to a document. We guarantee that every document submitted to Scribendi for proofreading or editing is fully edited by a human editor and that we never run text through an AI checker without human oversight. Scribendi AI simply helps editors efficiently find potential issues, such as grammar, punctuation, and spelling errors, allowing them to quickly consider options, make changes, and return an order in a timely manner. The aim is always to find the optimal solution based on the context and the author's intended meaning, and the editor continuously exercises editorial judgment when determining the best revisions to implement throughout a document. It is the editor, not the tool, who reviews every section of text in detail and chooses the revisions to be made. Scribendi AI exists simply to increase speed and efficiency and to ensure consistency; the actual revisions in a document are always the work of the editor.
At Scribendi, generative AI is never used to evaluate a document or its content. All suggestions and critical commentary in an edited document must be the sole work of one of Scribendi's professional editors, and the evaluation of a document's content can never be informed by generative AI. The use of such generative AI technology is prohibited at Scribendi due to the associated risks (e.g., hallucinations, incorrect statements, data security and privacy issues, and copyright and intellectual property concerns). Scribendi editors are also prohibited from using any predictive AI technology (beyond Scribendi AI) on a Scribendi document; we cannot upload any client text to any online platform or tool, as doing so would violate both our data security policy and numerous national and international data regulations.
Our approach to AI use is different for business, personal, and book services than it is for our array of academic services. We do not specifically limit client use of AI with non-academic documents. If AI is used with such documents, an editor should continue with the work and edit the document as usual. However, it may be helpful and pertinent for an editor to point out potential concerns with certain uses of AI, such as potential copyright infringement or the AI hallucination of “factual” material.
Such editorial commentary is encouraged where appropriate and when AI use has been clearly indicated by the client. In such cases, the commentary should always be polite and professional. Particular care should be taken to avoid sounding condescending or accusatory in relation to AI use. The intention of such commentary is to provide information that enables clients to make informed decisions without trying to push them in a particular direction regarding the use of AI in their work.
Last Updated: 09/09/2022