Disclosure of AI Assistance in Peer Review: Transparency, Accountability, and Evolving Norms in Academic Publishing
Introduction
As artificial intelligence tools become increasingly integrated into academic workflows, their presence is no longer limited to authors and editors; it is now extending into peer review. Reviewers are beginning to use AI for tasks such as summarizing manuscripts, checking statistical logic, improving language clarity in their reports, or even drafting portions of feedback. While these tools can enhance efficiency and consistency, they raise a critical question: Should reviewers disclose their use of AI assistance?
This emerging issue sits at the intersection of transparency, ethics, and trust in academic publishing. As peer review remains a cornerstone of scholarly validation, any change in how reviews are produced must be carefully examined.
The Expanding Role of AI in Peer Review
AI tools are increasingly capable of assisting reviewers in multiple ways. For instance, they can help identify inconsistencies in data reporting, flag potential ethical concerns, or suggest relevant literature. For non-native English-speaking reviewers, AI can also help refine the clarity and tone of their comments.
These capabilities offer clear benefits. They can reduce reviewer fatigue, improve the quality of feedback, and accelerate the review process. However, they also raise concerns about over-reliance, loss of critical judgment, and the potential introduction of bias or inaccuracies—especially if AI-generated suggestions are accepted without verification.
Why Disclosure Matters
At the heart of this discussion is the principle of transparency. Peer review is traditionally understood as an expert-driven, human evaluation process. If AI tools are contributing to that evaluation, even partially, stakeholders may reasonably expect disclosure.
Disclosure serves several purposes:
- Maintaining Trust: Authors, editors, and readers need confidence that reviews are grounded in expert judgment. Knowing whether AI was involved helps contextualize the feedback.
- Ensuring Accountability: If errors or biases appear in a review, disclosure helps determine whether they stem from the reviewer or the tool.
- Normalizing Ethical Use: Open acknowledgment of AI use can help establish community norms and reduce stigma or secrecy around these tools.
Without disclosure, there is a risk of creating a “black box” review process, where the origins of feedback are unclear.
What Should Be Disclosed?
A key challenge lies in defining the threshold for disclosure. Not all uses of AI are equal. For example:
- Minor language polishing may be considered acceptable without formal disclosure.
- Substantive contributions—such as generating critiques, summarizing findings, or suggesting decisions—arguably require transparency.
Journals and publishers will need to clarify what constitutes “material AI assistance.” A tiered approach may be useful, distinguishing between assistive, augmentative, and generative uses of AI.
Ethical Risks of Undisclosed AI Use
Failure to disclose AI involvement can lead to several ethical concerns:
- Erosion of Reviewer Responsibility: If reviewers rely heavily on AI-generated insights, they may inadvertently abdicate their critical role as subject-matter experts.
- Propagation of Errors: AI tools can produce confident but incorrect outputs. Without human oversight, these errors can influence editorial decisions.
- Bias Amplification: AI systems may reflect biases present in their training data, potentially affecting fairness in manuscript evaluation.
- Confidentiality Risks: Uploading unpublished manuscripts into third-party AI tools may violate confidentiality agreements, especially if data handling policies are unclear.
These risks highlight why transparency is not just a procedural issue but an ethical necessity.
Current Policy Landscape
At present, policies on AI use in peer review are still evolving. Some publishers explicitly prohibit uploading manuscripts into AI tools due to confidentiality concerns. Others allow limited use but require reviewers to ensure that no sensitive data is exposed.
A growing number of journals are beginning to recommend or mandate disclosure of AI assistance in review reports. However, there is no universal standard, leading to inconsistency across disciplines and publishers.
This lack of harmonization can create confusion for reviewers, particularly those working across multiple journals with differing guidelines.
Balancing Innovation with Integrity
The goal is not to restrict the use of AI entirely, but to integrate it responsibly. AI can be a powerful ally in improving the efficiency and inclusivity of peer review—especially in addressing reviewer fatigue and supporting diverse reviewer pools.
However, its use must be guided by clear principles:
- Human Oversight: AI should support, not replace, expert judgment.
- Transparency: Significant AI contributions should be openly disclosed.
- Confidentiality Protection: Reviewers must ensure that manuscript data is not exposed to unauthorized systems.
- Critical Evaluation: AI outputs should be verified and not accepted uncritically.
By adhering to these principles, the academic community can harness the benefits of AI while safeguarding the integrity of peer review.
Toward Standardized Disclosure Practices
Looking ahead, the development of standardized disclosure frameworks will be essential. Journals could introduce simple statements such as:
- “AI tools were used to assist in language refinement.”
- “AI was used to summarize the manuscript; all evaluative judgments are my own.”
Such statements would provide clarity without placing an excessive burden on reviewers.
Training and awareness will also play a key role. As reviewers become more familiar with AI tools, they will need guidance on ethical usage and disclosure expectations.
Conclusion
The integration of AI into peer review is not a distant possibility—it is already happening. The question is not whether AI should be used, but how it should be used responsibly.
Disclosure of AI assistance represents a crucial step toward maintaining transparency, accountability, and trust in academic publishing. By establishing clear norms and policies now, the scholarly community can ensure that innovation enhances—rather than undermines—the credibility of peer review.
In this evolving landscape, openness will be the foundation that allows both human expertise and artificial intelligence to coexist effectively in the service of rigorous, reliable research evaluation.
