The Penn State Center for Socially Responsible Artificial Intelligence (CSRAI) has announced the results of its most recent seed-funding competition. The center awarded more than $159,000 to seven interdisciplinary research projects representing eight colleges and campuses.
Each proposal was evaluated by peers for its connection to the center’s mission, intellectual merit and potential for securing external funding. The awards will support the formation of interdisciplinary research teams and early-stage projects that demonstrate strong potential to obtain external funding. Projects are expected to start in spring 2025 and last one to two years.
“We had a record number of submissions this year,” said S. Shyam Sundar, CSRAI director and James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications. “Roughly half the applications had matching funds, showing support from the home units of the principal investigators. Reflecting this, about half the awarded proposals came with over $62,000 of matching funds, thus extending the potential of these seed grants to achieve their objectives and attract significant external funding.”
For the second year in a row, generative AI, particularly large language models, dominated the pool of proposals. However, the seven winning proposals represent a range of topics aligning with different aspects of the center’s mission of promoting socially responsible AI.
“From using AI for tracking climate change adaptation to improving accessibility of Black digital archives, these projects span the spectrum,” said Sundar. “Likewise, the samples targeted for data collection in these projects range from residents of rural Pennsylvania to students in New Zealand.”
Synopses of the awarded proposals are shared below, with additional information available on the center’s website.
“A Future Where Assessing Placentas is Possible for All Births” — This project aims to develop PlacentaVision, an AI-based software tool designed to assess and triage placentas via digital photographs, ensuring timely and equitable clinical care across all birth settings. By addressing disparities in access to expert pathology, particularly in low-resource settings, the project seeks to promote health equity and contribute to socially responsible AI innovation.
- Alison Gernand, College of Health and Human Development
- James Wang, College of Information Sciences and Technology
“Artificial Intelligence Driven Workflows for Assisting in Synthesizing Climate Change Assessment for Water Adaptation Progress” — This project aims to develop the Water Adaptation Synthesis Portal (WASP), an AI-powered system that automates evidence extraction to streamline adaptation tracking in California’s water sector. By leveraging small language models and human-in-the-loop techniques, WASP aims to enhance scalability, reduce researcher time and provide real-time insights into climate adaptation progress.
- Christine Kirchhoff, College of Engineering
- Sarah Rajtmajer, College of Information Sciences and Technology
“Communities in the Loop: Developing AI for Black Digital Archives” — This project aims to advance socially responsible AI in Black archives through an interdisciplinary initiative involving scholars and students from diverse fields at Penn State. Housed in the Center for Black Digital Research, it will lay the groundwork for future funding, prepare students for leadership in ethical AI and archives and address critical needs identified by institutions like the U.S. Library of Congress and the Smithsonian Institution.
- Jim Casey, College of the Liberal Arts
- Christopher Dancy, College of Engineering
- P. Gabrielle Foreman, College of the Liberal Arts
“Comparative Analysis of AI Models and Human Judgments for Evaluation of Student Writing With and Without Non-Normative Use of English Language” — This project aims to develop natural language processing tools for formative assessments that account for linguistic diversity by prioritizing students’ understanding over non-normative language use, such as unconventional grammar or idiomatic expressions. By combining relational networks and human-in-the-loop methods, the study will analyze the impact of linguistic diversity on automated assessments, creating equitable educational tools and setting the stage for competitive funding from the U.S. National Science Foundation.
- Matthew Beckman, Eberly College of Science
- Rebecca Passonneau, College of Engineering
- ChanMin Kim, College of Education
- Dennis Pearl, Eberly College of Science
“Planning for AI Infrastructure for Evidence-to-Impact Pipelines” — This project aims to develop scalable AI infrastructure to enhance the research-to-policy pipeline by supporting actionable scientific discovery and data-driven decision-making while addressing risks of inaccuracy and misinformation. By bridging researchers and advanced AI and machine learning methods, the initiative will lay the foundation for integrating innovative tools into policy-relevant research.
- Max Crowley, College of Health and Human Development
- Jonathan Wright, College of Health and Human Development
- Alexander Winters, College of Health and Human Development
“Toward Robust Annotation and Detection of Logical Fallacies in Online Misinformation” — This project aims to combat misinformation and improve online discourse by detecting logical fallacies in social media posts using a novel annotation scheme that accommodates multiple interpretations. By addressing the challenge of low inter-annotator agreement and applying advanced fallacy detection models, the work seeks to enable early interventions in the spread of fallacious arguments and misinformation.
- Kenneth Huang, College of Information Sciences and Technology
- Dongwon Lee, College of Information Sciences and Technology
“Understanding Rural Health Care Attitudes Toward AI” — This project aims to explore perceptions of AI-mediated health care among rural populations, focusing on factors like socioeconomic status, population density and technology exposure that influence trust and compliance with AI-generated medical recommendations. Using a mixed-methods approach, it aims to identify strategies for building trust through transparency and personalization, contributing to equitable health care access and informing public policy and AI governance frameworks.
- Stephen Hampton, Penn State Harrisburg
- Anthony Buccitelli, Penn State Harrisburg
- Bernice Hausman, College of Medicine
- Jennifer McCormic, College of Medicine
- Vida Abedi, College of Medicine
- Michael McShane, College of Medicine
The Center for Socially Responsible Artificial Intelligence, which launched in 2020, promotes high-impact, transformative AI research and development, while encouraging the consideration of social and ethical implications in all such efforts. It supports a broad range of activities from foundational research to the application of AI to all areas of human endeavor. More information can be found on the CSRAI website.
LAST UPDATED DECEMBER 11, 2024