Institutional Review Boards (IRBs) and Generative Artificial Intelligence (GAI)
Has your university's IRB made a plan for GAI?
Imagine you are a researcher. You have sheets and sheets of data from a study that has qualified for “Exempt” status. No identifiable information was collected, and you are sitting on piles of qualitative data: interviews, survey answers, and so on. You have a lot of extra work ahead of you, and you may think to yourself, “Hey! I could use AI for that!”
AI is great at analyzing text and data, and it doesn’t care how much you feed it. It seems like a perfect solution for the tired researcher.
It is tempting, but don’t do it. There are several serious reasons why:
PRIVACY & SECURITY: You have no idea where that information will end up, and even if your data are deidentified, AI is very good at “reidentifying” them (the sketch after this list shows how little that can take). Even if the study contains no private information at all, once it is fed into a Large Language Model (LLM) you don’t know where your study data will go or who might have access to them. Nearly all research involving human subjects, including most exempt studies, involves confidentiality agreements with participants, and you can bet that an LLM and the big tech company behind it are not on the list of those who should have access.
HALLUCINATIONS: We all know that AI can get facts wrong, draw the wrong conclusion, or simply make things up. You don’t want your study results contaminated by AI hallucinations that, given the size of the data set, might be impossible to disentangle from your actual results.
BIAS: You can’t predict the bias that AI might inject into the analysis of your data. You may have worked for weeks or months to carefully craft a study that minimizes bias, and AI can inject bias into it in seconds.
RIGHTS: Depending on which LLM you use, you may or may not retain rights to the information you enter. Most providers’ terms state that they do not claim ownership over what is entered (see, e.g., OpenAI, 2023), but you need to be very careful, even when using proprietary systems.
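To make the reidentification risk concrete, here is a minimal sketch in Python. Every name and record in it is fabricated for illustration; the point is that a handful of quasi-identifiers (ZIP code, birth year, gender) is often enough to re-link an “anonymous” response to a person, and an LLM that has ingested both datasets can make the same linkage without ever being asked to.

```python
# Toy illustration of reidentification via quasi-identifiers.
# All records below are fabricated; no real study data.

deidentified_responses = [
    {"zip": "49931", "birth_year": 1987, "gender": "F",
     "answer": "I felt unsafe reporting problems to my supervisor."},
    {"zip": "49931", "birth_year": 1992, "gender": "M",
     "answer": "The new policy works fine for me."},
]

# A public roster (think staff directory or voter file) sharing the same fields.
public_roster = [
    {"name": "Jane Doe", "zip": "49931", "birth_year": 1987, "gender": "F"},
    {"name": "John Roe", "zip": "49931", "birth_year": 1992, "gender": "M"},
]

def reidentify(responses, roster):
    """Link each 'anonymous' response to roster entries with matching quasi-identifiers."""
    for r in responses:
        matches = [p["name"] for p in roster
                   if (p["zip"], p["birth_year"], p["gender"])
                   == (r["zip"], r["birth_year"], r["gender"])]
        if len(matches) == 1:  # a unique match means the record is effectively reidentified
            print(f"{matches[0]} appears to have said: {r['answer']!r}")

reidentify(deidentified_responses, public_roster)
```

Stripping names from a dataset, in other words, is not the same as making it anonymous.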
Wait! Why haven’t I heard anything about this?
GAI is really new. I know it doesn’t seem all that new because, if you are like me, you have been bathing in GAI since the moment ChatGPT dropped in late November of 2022. But seriously, it has only been a little over a year. That is not much time for the thick sludge of institutional bureaucracy and rule-making to seep toward change. After all, there are many levels of meetings, faculty senate resolutions, institutional review, legal counsel research, and administrative sign-offs to consider. By the time our IRBs catch up, GAI will have evolved into something new, and they will have to catch up all over again. It is a dizzying whirlwind of change in GAI facing a very slow-moving and cautious IRB system. But we can’t wait. We have to urge our universities to consider the issues of GAI right now, and quickly!
Federal Guidance
There is little governmental guidance on GAI in research. An advisory document on the Department of Health and Human Services website, dated July 2022, seems to anticipate some of the issues presented by GAI, but it is not an official policy document, nor does it reflect the current state of GAI (OHRP, 2022). Although the Secretary’s Advisory Committee on Human Research Protections (SACHRP) has been charged with finding solutions to the issue of GAI in human subjects research, its recommendations are forthcoming and have not been adopted by HHS.
The National Science Foundation has addressed only the use of GAI in its merit review process (National Science Foundation, 2023). The National Institutes of Health has likewise addressed GAI only in the context of peer review, where its use is prohibited (Lauer et al., 2023). And the President’s Executive Order on AI does not mention the use of GAI in research at all, except to encourage research that advances AI itself (The White House, 2023).
Peer Institutions
There is much more guidance on the use of GAI in research on human subjects coming from our peer institutions.
The following institutions have issued guidance that speaks directly to GAI use involving human subjects:
· University of Tennessee;
· UC Santa Cruz;
· Michigan Tech;
· Cornell University;
· University of North Carolina, Chapel Hill;
· University of Texas, Arlington; and
· University of Wisconsin, Madison.
The University of Tennessee's Research Integrity & Assurance provides guidance on the use of artificial intelligence (AI) tools in research activities. The policy emphasizes the need for IRB review whenever an AI tool is used to interact with or obtain data generated by or from human research participants. It also recommends limiting the collection of identifiable participant data to the minimum necessary for research activities, whether used with an AI tool or not. Additionally, researchers are advised to routinely evaluate AI tools used in their protocols for bias and to submit a full description of planned interactions between AI tools and participants to the IRB. (Office of Research Integrity & Assurance, 2024)
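Data minimization, at least, is easy to act on today. Below is a minimal, hypothetical sketch (field names invented for illustration, not drawn from any institution’s tooling) of what “collect only the minimum necessary” can look like at the point where records enter an analysis pipeline:

```python
# Hypothetical data-minimization step: keep only the fields the protocol
# actually justifies, so identifiers never reach downstream (AI or other) tools.
FIELDS_NEEDED = {"participant_code", "response_text", "visit_week"}

def minimize(record: dict) -> dict:
    """Drop every field not explicitly justified in the IRB protocol."""
    return {k: v for k, v in record.items() if k in FIELDS_NEEDED}

# Fabricated example record: name and email are discarded at ingestion.
raw = {"participant_code": "P-017", "name": "Jane Doe", "email": "jd@example.com",
       "response_text": "The sessions helped.", "visit_week": 6}
print(minimize(raw))
```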
Michigan Technological University's policy acknowledges the rapid developments in the use of large language model artificial intelligence tools in various research disciplines. The university emphasizes the responsibility of the author for the final content when using these tools and requires compliance with funding sponsor or publisher requirements for their use in research activities. (Michigan Tech, 2024)
UC Santa Cruz (Douglas, 2024) and UW–Madison (UW–Madison Information Technology, 2024) also provide guidance on the use of generative AI tools, emphasizing the importance of data security and privacy. Both institutions prohibit the submission of personal, confidential, or sensitive information, including Personally Identifiable Information (PII), into generative AI services. They also highlight the potential risks associated with using these tools and the need to comply with institutional policies and legal obligations.
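What does “don’t put PII into a generative AI service” look like in practice? Below is a minimal, hypothetical sketch of a local pre-screening step that flags obvious identifier patterns before any text leaves the researcher’s machine. The patterns and function names are my own illustration, not part of any institution’s tooling, and regex matching catches only the most blatant identifiers; it is a seatbelt, not a substitute for IRB review.

```python
import re

# Hypothetical local pre-screen: flag obvious identifier patterns before text
# is sent to any external generative AI service. These regexes catch only the
# most blatant PII; this is an illustration, not a compliance tool.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_for_pii(text: str) -> list[str]:
    """Return the names of any PII patterns found in `text`."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]

def safe_to_submit(text: str) -> bool:
    """Block submission if any obvious identifier pattern is present."""
    hits = screen_for_pii(text)
    if hits:
        print(f"Blocked: possible PII detected ({', '.join(hits)}). Review before submitting.")
        return False
    return True

# Fabricated example: this interview excerpt would be blocked before reaching an LLM.
excerpt = "Participant said to call her at 906-555-0123 with follow-up questions."
safe_to_submit(excerpt)
```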
Cornell University offers a framework for using generative AI across different stages of research. The report provides guidelines and perspectives for appropriate use and encourages the development of AI literacy among researchers. The university suggests periodic updates to guidance on generative AI use due to the rapidly changing landscape. (Office of the Vice President for Research & Innovation, Cornell University, 2023)
The University of Texas, Arlington emphasizes data protection and stipulates that users should be cautious about entering confidential or sensitive data into GAI tools, as these tools may collect and store data as part of their learning process, potentially exposing proprietary or sensitive information to unauthorized parties. There are clear prohibitions on using GAI tools for certain purposes, such as generating content that infringes copyright or other intellectual property rights, or that helps others break laws or institutional policies. (UTA, 2024)
The University of North Carolina, Chapel Hill’s Office of the Provost suggests that researchers clearly and transparently disclose and document the use of generative AI in their research. It is important to protect confidential data, review content before publication, and adhere to existing academic policies. Researchers are also advised to stay alert for phishing and to connect with the appropriate university offices before procuring generative AI tools. (UNC Office of the Provost, 2024)
In summary, the use of Generative Artificial Intelligence is accompanied at each institution by a set of guidelines and policies that aim to ensure responsible and ethical use of these tools, particularly in the context of data protection, content review, compliance, and prohibited uses.
I highly recommend that every institution be as AI-forward as the research institutions mentioned in this article. We need to push for change immediately and carefully craft IRB protocols with GAI in mind. I celebrate the institutions that have gone the extra mile to imagine how and why GAI might challenge confidentiality and integrity in university research, and I thank them for these very useful guidance documents, which give the rest of us a place to begin crafting AI-forward IRB guidelines.
References
CITI Program. (2023, December 6). On Tech Ethics: Impact of Generative AI on Research Integrity. https://about.citiprogram.org/blog/on-tech-ethics-podcast-impact-of-generative-ai-on-research-integrity/
CITI Program. (2024, January 5). On Tech Ethics: Considerations for Using AI in IRB Operations. https://about.citiprogram.org/blog/on-tech-ethics-podcast-considerations-for-using-ai-in-irb-operations/
Defino, T. (2024, January). As AI-Assisted Research Advances, Experts Share Worries, Oversight Strategies; Collaboration Urged. JD Supra. https://www.jdsupra.com/legalnews/as-ai-assisted-research-advances-8741714/
Douglas, M. (2024, January 23). Information security statement on generative AI. UC Santa Cruz News. https://news.ucsc.edu/2024/01/generative-ai-statement.html
Lauer, M., Constant, S., & Wernimont, A. (2023, June 23). Using AI in Peer Review Is a Breach of Confidentiality. Extramural Nexus, National Institutes of Health. https://nexus.od.nih.gov/all/2023/06/23/using-ai-in-peer-review-is-a-breach-of-confidentiality/
Michigan Tech. (2024). Use of Large Language Model Generative Artificial Intelligence Tools in Research Activities. Michigan Technological University. https://www.mtu.edu/research/integrity/artificial-intelligence-tools/
National Science Foundation. (2023, December 14). Notice to research community: Use of generative artificial intelligence technology in the NSF merit review process. https://new.nsf.gov/news/notice-to-the-research-community-on-ai
Office for Human Research Protections (OHRP). (2022, July 25). Considerations for IRB Review of Research Involving Artificial Intelligence. U.S. Department of Health and Human Services. https://www.hhs.gov/ohrp/sachrp-committee/recommendations/attachment-e-july-25-2022-letter/index.html
Office of Research Integrity & Assurance. (2024). Artificial Intelligence (AI) Tools. University of Tennessee, Knoxville. https://research.utk.edu/research-integrity/artificial-intelligence-ai-tools/
Office of the Vice President for Research & Innovation, Cornell University. (2023, December 15). Generative AI in Academic Research: Perspectives and Cultural Norms. Cornell University. https://research-and-innovation.cornell.edu/generative-ai-in-academic-research/
OpenAI. (2023, November 14). Terms of use. https://openai.com/policies/terms-of-use
The White House. (2023, October 30). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
UNC Office of the Provost. (2024). Generative AI Usage Guidance for the Research Community. University of North Carolina at Chapel Hill. https://provost.unc.edu/generative-ai-usage-guidance-for-the-research-community/
University Research Services & Administration. (2023, November 29). IRB Protocols for AI-generated transcripts. Georgia State University. https://ursa.research.gsu.edu/2023/11/29/irb-protocols-for-ai-generated-transcripts/
U.S. Department of Health and Human Services. (2022, June 6). The ABCs of 104: Understanding Exemption Categories [Video].
UTA. (2024). Use of Generative Artificial Intelligence. Faculty & Staff Resources, University of Texas at Arlington. https://resources.uta.edu/research/policies-and-procedures/generative-artificial-intelligence.php
UW–Madison Information Technology. (2024, January 2). Statement on use of generative AI. University of Wisconsin–Madison. https://it.wisc.edu/statement-on-use-of-generative-ai/