Researchers: Social scientists must address ChatGPT’s ethical challenges before using it for research
Authored by: Juliana Rosati
Faculty & Research
08/04/23
A new paper by researchers at Penn’s School of Social Policy & Practice (SP2) and Penn’s Annenberg School for Communication offers recommendations to ensure the ethical use of artificial intelligence resources such as ChatGPT by social work scientists.
Published in the Journal of the Society for Social Work and Research, the article was co-written by Dr. Desmond Upton Patton, Dr. Aviv Landau, and Dr. Siva Mathiyazhagan. Patton, a pioneer in the interdisciplinary fusion of social work, communications, and data science, holds joint appointments at Annenberg and SP2 as the Brian and Randi Schwartz University Professor.
Outlining challenges that ChatGPT and other large language models (LLMs) pose across bias, legality, ethics, data privacy, confidentiality, informed consent, and academic misconduct, the piece provides recommendations in five areas for ethical use of the technology:
- Transparency: Academic writing must disclose how content is generated and by whom.
- Fact-checking: Academic writing must verify information and cite sources.
- Authorship: Social work scientists must retain authorship while using AI tools to support their work.
- Anti-plagiarism: Original sources of ideas and content should be identified and cited.
- Inclusion and social justice: Anti-racist frameworks and approaches should be developed to counteract potential biases of LLMs against authors who are Black, Indigenous, or people of color, and authors from the Global South.
Of particular concern to the authors are the limitations of artificial intelligence in the context of human rights and social justice. “Similar to a bureaucratic system, ChatGPT enforces thought without compassion, reason, speculation, or imagination,” the authors write.
Pointing to the implications of a model trained on existing content, they state, “This could lead to bias, especially if the text used to train it does not represent diverse perspectives or scholarship by under-represented groups. . . . Further, the model generates text by predicting the next word based on the previous words. Thus, it could amplify and perpetuate existing bias based on race, gender, sexuality, ability, caste, and other identities.”
Noting ChatGPT’s potential for use in research assistance, theme generation, data editing, and presentation development, the authors describe the chatbot as “best suited to serve as an assistive tech tool for social work scientists.”
Patton is founding director of SAFELab, a research initiative affiliated with Annenberg and SP2, where Landau is co-director and Mathiyazhagan is associate director. SAFELab examines how to support youth of color in navigating grief and violence in social media environments and is dedicated to researching innovative methods to promote joy and healing in digital contexts.
About Penn’s School of Social Policy & Practice (SP2)
For more than 110 years, the University of Pennsylvania School of Social Policy & Practice (SP2) has been a powerful force for good in the world, working toward social justice and social change through research and practice. SP2 contributes to the advancement of more effective, efficient, and humane human services through education, research, and civic engagement. The School offers five top-ranked, highly respected degree programs along with a range of certificate programs and dual degrees. SP2's transdisciplinary research centers and initiatives — many of them collaborations with Penn's other professional schools — yield innovative ideas and better ways to shape policy and service delivery. The passionate pursuit of social innovation, impact, and justice is at the heart of the School's knowledge-building activities.
People
Desmond Upton Patton, PhD, MSW
Brian and Randi Schwartz University Professor