Penn State’s Center for Socially Responsible Artificial Intelligence (CSRAI) will host “Diagnose-a-thon,” a competition that aims to uncover the power and potential dangers of using generative AI for medical inquiries. The virtual event will take place Nov. 11-17.
The challenge is open to all members of the University community who have a valid Penn State email address. Participants will be tasked with identifying prompts that lead popular generative AI tools, like OpenAI’s ChatGPT and Google’s Gemini, to produce either accurate or potentially harmful health-related diagnoses.
The event aims to illuminate the emergent strengths and potentially worrisome shortcomings of existing large language models (LLMs) and advance a more credible AI future.
During the event, Diagnose-a-thon participants can use an online form to submit entries across three tracks:
- Patient track. Acting as a patient, participants prompt an LLM to produce a diagnosis for their real or imaginary symptoms.
- Medical professional track. Acting as a medical professional, participants prompt an LLM to produce a diagnosis based on a hypothetical patient case.
- Out-of-the-box track. Participants prompt an LLM about a scenario not included in the patient or medical professional tracks that might lead to a potentially believable medical diagnosis.
A panel of physicians will scrutinize each submission. First- ($1,000), second- ($500) and third-place ($250) awards will be given to participants who submit the highest number of verifiable AI-generated diagnoses, regardless of whether they're accurate or misleading. Five participants will receive consolation prizes worth $50 each. A $1,000 prize will also be given to the participant who submits the AI-generated diagnosis that would be most harmful if it were acted upon. Participants can submit as many diagnoses as they'd like to any of the tracks, but each participant is eligible to receive only one prize.
Generative AI tools are making tasks faster and easier to complete, but they can also amplify inequalities, spread misinformation and present serious risks. Identifying these strengths and dangers can help improve their use and mitigate their harm when used by patients and medical professionals. It also can inspire new research by CSRAI faculty and promote the design of more socially conscious AI products.
Launched in 2020, the Center for Socially Responsible Artificial Intelligence promotes high-impact, transformative AI research and development, while encouraging the consideration of social and ethical implications in all such efforts. It supports a broad range of activities, from foundational research to the application of AI across all areas of human endeavor.
For questions about the event, visit the Diagnose-a-thon website or contact Amulya Yadav, CSRAI associate director (programs), at auy212@psu.edu, or Bonam Mingole, CSRAI student affiliate, at bjm6940@psu.edu.