From Leiden Madtrics
Generative Artificial Intelligence (GAI) tools like ChatGPT are increasingly finding their way into research and scholarly publishing. This trend brings a pressing challenge: how do academics clearly disclose the use of AI in their research workflows? Right now, many disclosures are either too vague (e.g. “We used ChatGPT to improve clarity”) or missing entirely. Such a lack of precision can undermine transparency and reproducibility, making it harder for readers, reviewers, and editors to assess how AI contributed to the work. In response to this challenge, we developed a new approach to standardise how researchers communicate their use of AI.
What is GAIDeT?
GAIDeT stands for Generative Artificial Intelligence Delegation Taxonomy. This framework was created to help researchers formally describe any assistance they received from AI in the course of their research or publishing processes. Unlike other approaches, such as the CRediT taxonomy (focused on human author roles) or the NIST AI Use Taxonomy (covering AI functions in general domains), GAIDeT is designed specifically for documenting the delegation of tasks to AI within research workflows. It combines the stage of research with the precise role AI played, while ensuring that responsibility always stays with the human researcher. GAIDeT provides a structured checklist for disclosing what was delegated to AI, at which stage of the research process, how the AI's output was used, and which AI tool and version were involved.
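To make the checklist concrete, here is a minimal sketch of how the four disclosure elements named above (delegated task, research stage, use of output, tool and version) might be captured as a structured record. The class and field names are illustrative assumptions, not part of the official GAIDeT specification:

```python
from dataclasses import dataclass

@dataclass
class GAIDeTDeclaration:
    """Hypothetical record mirroring the four disclosure elements
    GAIDeT asks for; names are illustrative, not official GAIDeT terms."""
    stage: str           # stage of the research process where AI was used
    delegated_task: str  # what was delegated to the AI
    output_use: str      # how the AI's output was handled by the authors
    tool_version: str    # which AI tool and version were involved

    def to_statement(self) -> str:
        # Render the record as a single disclosure sentence
        return (f"During {self.stage}, {self.delegated_task} was delegated "
                f"to {self.tool_version}; its output was {self.output_use}.")

# Example declaration (illustrative values)
decl = GAIDeTDeclaration(
    stage="manuscript preparation",
    delegated_task="language editing",
    output_use="reviewed and revised by the authors",
    tool_version="ChatGPT (GPT-4)",
)
print(decl.to_statement())
```

A structured record like this, rather than a free-text note, is what makes disclosures comparable across papers and easy for editors to check for completeness.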
Direct to GAIDeT Declaration Generator
