Most People Trust Doctors Over AI but See Promise for Cancer Diagnosis

New research on public attitudes toward AI indicates that most people are reluctant to let ChatGPT and other AI tools diagnose their health condition, but see promise in technologies that use AI to help diagnose cancer. Image for illustration purposes

By Society for Risk Analysis (SRA)

Newswise — Washington, D.C. – New research on public attitudes toward AI indicates that most people are reluctant to let ChatGPT and other AI tools diagnose their health condition, but see promise in technologies that use AI to help diagnose cancer. These and other results of two nationally representative surveys will be presented at the annual meeting of the Society for Risk Analysis Dec. 7-10 in Washington, DC.  

Led by behavioral scientist Dr. Michael Sobolev of the Schaeffer Institute for Public Policy & Government Service at the University of Southern California, and psychologist Dr. Patrycja Sleboda, assistant professor at Baruch College, City University of New York, the study focuses on public perspectives—specifically trust, understanding, potential, excitement, and fear of AI—in the context of cancer diagnosis, one of AI’s most commonly used and impactful applications in medicine. It also examines how these public attitudes vary by demographics, such as age, gender and education.  

The study used data from two nationally representative surveys to assess how personal use of AI tools like ChatGPT and general trust in medical AI relate to the acceptance of an AI-based diagnostic tool for cervical cancer. 

Key findings: 

  • Most people still trust doctors more than AI. Only about 1 in 6 people (17%) said they trust AI as much as a human expert to diagnose health problems.  
  • People who have tried AI (like ChatGPT) feel more positive about AI’s application in medicine. Those who had used AI in their personal life said they understood it better and were more excited and trusting of its use in healthcare. (55.1% of respondents had heard of ChatGPT but not used it, while 20.9% had both heard of and used it.) 
  • People see promise, not danger. When participants learned about an AI tool that helps find early signs of cancer, most thought it had great potential and were more excited than afraid. 

“Our research shows that even a little exposure to AI—just hearing about it or trying it out—can make people more comfortable and trusting of the technology. We know from research that familiarity plays a big role in how people accept new technologies, not just AI,” says Sleboda.  

In the first survey, participants reported whether they had heard of or used AI technologies and responded to questions about their general trust in AI for health diagnoses.  

In the second survey, participants were introduced to a scenario based on a real development, in which a research team built an AI system that analyzes digital images of the cervix to detect precancerous changes (a technology called automated visual evaluation). Participants then rated five elements of acceptance for this diagnostic AI tool on a scale from 1 to 5: understanding, trust, excitement, fear and potential. 

An analysis of the results showed that potential was rated highest when judging the diagnostic AI tool, followed by excitement, trust, understanding and fear. Identifying as male and having a college degree were associated with greater trust in, excitement about, and perceived potential of AI in healthcare. These participants also expressed lower fear of the use of AI overall. 

“We were surprised by the gap between what people said in general about AI and how they felt in a real example,” says Sobolev, who leads the Behavioral Design Unit at Cedars-Sinai Medical Center in Los Angeles, with the goal of advancing human-centered innovation. “Our results show that learning about specific, real-world examples can help build trust between people and AI in medicine.” 

About the Society for Risk Analysis 

The Society for Risk Analysis (SRA) is a multidisciplinary, global organization dedicated to advancing the science and practice of risk analysis. Founded in 1980, SRA brings together researchers, practitioners, and policymakers from diverse fields including engineering, public health, environmental science, economics, and decision theory. The Society fosters collaboration and communication on risk assessment, management, and communication to inform decision-making and protect public well-being. SRA supports a wide range of scholarly activities, publications, and conferences. Learn more at www.sra.org.
