Artificial Intelligence (AI) Use Policy
The Cardiovascular Academy Society has adopted an official policy on the ethical and transparent use of generative artificial intelligence (AI) tools in scholarly publishing, recognizing the increasing impact of these technologies. This policy applies to all stakeholders in the publication process—including authors, reviewers, and editors—and is grounded in the principles set forth by international publication ethics organizations such as the Committee on Publication Ethics (COPE), the International Committee of Medical Journal Editors (ICMJE), and the World Association of Medical Editors (WAME).

Scope of AI Technologies and Potential Risks
Generative AI tools—including large language models (LLMs), chatbots, image generators, and code-writing systems—can be used to produce text, visuals, data, and other forms of content. When used responsibly, such tools can enhance productivity and innovation. However, unsupervised use may lead to factual inaccuracies, fabricated references, plagiarism risks, and threats to research integrity.

Guidelines for Authors
Authors must disclose the use of AI tools clearly and transparently. This disclosure should appear both in the cover letter and within the manuscript itself (e.g., in the Methods or Acknowledgements section), and must state the name of the tool, the purpose for which it was used, and how its output was reviewed.

Because AI tools cannot take responsibility for the content they generate, they may not be listed as authors. The originality, accuracy, and ethical compliance of the content remain the sole responsibility of the human authors.

Authors are expected to verify the accuracy of AI-generated content, check references, and eliminate any potentially plagiarized material.

Visuals manipulated to distort scientific facts, synthetically generated data, and the use of AI without a sound methodological basis are considered ethical violations; manuscripts containing them will not be accepted for publication.

Guidelines for Reviewers
Reviewers must maintain the confidentiality of manuscripts and must not upload manuscript content to any AI platform. Conducting peer review with AI tools poses risks to data security and intellectual property and is therefore prohibited.

However, reviewers may use AI tools in a limited capacity—for example, to improve the clarity or translation of their own written reports. In such cases, AI usage must be explicitly declared in the reviewer comments.

If a reviewer suspects that AI has been used inappropriately or without disclosure, they should promptly inform the editor.

Guidelines for Editors
Editors are responsible for maintaining the confidentiality of submitted manuscripts and peer review reports. Under no circumstances should these materials be processed through AI platforms.

Editorial decisions must be made solely through human judgment.

Editors may use AI tools in a restricted manner, and only for tasks that do not compromise scientific integrity or confidentiality—such as identifying suitable peer reviewers.

In cases of suspected misuse of AI, editors are responsible for initiating an investigation and acting in accordance with relevant policies.

The Cardiovascular Academy Society is committed to monitoring technological developments through the lens of responsible innovation while upholding research integrity and publication ethics. This policy framework will be reviewed and updated regularly.