Most People Trust Doctors Over AI but See Promise for Cancer Diagnosis

New research on public attitudes toward AI indicates that most people are reluctant to let ChatGPT and other AI tools diagnose their health condition, but see promise in technologies that use AI to help diagnose cancer. Image for illustration purposes

By Society for Risk Analysis (SRA)

Newswise — Washington, D.C. – New research on public attitudes toward AI indicates that most people are reluctant to let ChatGPT and other AI tools diagnose their health condition, but see promise in technologies that use AI to help diagnose cancer. These and other results of two nationally representative surveys will be presented at the annual meeting of the Society for Risk Analysis, Dec. 7-10 in Washington, D.C.

Led by behavioral scientist Dr. Michael Sobolev of the Schaeffer Institute for Public Policy & Government Service at the University of Southern California, and psychologist Dr. Patrycja Sleboda, assistant professor at Baruch College, City University of New York, the study focuses on public perspectives—specifically trust, understanding, potential, excitement, and fear of AI—in the context of cancer diagnosis, one of AI’s most commonly used and impactful applications in medicine. It also examines how these attitudes vary by demographics such as age, gender, and education.


The study used data from two nationally representative surveys to assess how personal use of AI tools like ChatGPT and general trust in medical AI relate to the acceptance of an AI-based diagnostic tool for cervical cancer. 

Key findings: 

  • Most people still trust doctors more than AI. Only about 1 in 6 people (17%) said they trust AI as much as a human expert to diagnose health problems.  
  • People who have tried AI (like ChatGPT) feel more positive about its application in medicine. Those who had used AI in their personal lives said they understood it better and were more excited about, and more trusting of, its use in healthcare. (55.1% of respondents had heard of ChatGPT but not used it, while 20.9% had both heard of and used it.)
  • People see promise, not danger. When participants learned about an AI tool that helps find early signs of cancer, most thought it had great potential and were more excited than afraid. 

“Our research shows that even a little exposure to AI—just hearing about it or trying it out—can make people more comfortable and trusting of the technology. We know from research that familiarity plays a big role in how people accept new technologies, not just AI,” says Sleboda.  

In the first survey, participants reported whether they had heard of or used AI technologies and responded to questions about their general trust in AI for health diagnoses.  


In the second survey, participants were introduced to a scenario based on a real development: a research team had built an AI system that analyzes digital images of the cervix to detect precancerous changes (a technology called automated visual evaluation). Participants then rated five elements of acceptance for this diagnostic AI tool on a scale from 1 to 5: understanding, trust, excitement, fear, and potential.

An analysis of the results showed that potential was rated highest when judging the diagnostic AI tool, followed by excitement, trust, understanding, and fear. Identifying as male and having a college degree were associated with higher ratings of trust, excitement, and potential for the use of AI in healthcare; these participants also expressed less fear of AI overall.
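For readers who want a concrete sense of how such ratings are summarized, the sketch below uses Python with entirely synthetic data to illustrate the kind of analysis described: computing mean 1-to-5 ratings for the five acceptance elements, ranking them, and comparing one rating across a demographic split. The respondents, group sizes, and numbers here are hypothetical; this is not the researchers' code or data.

```python
# Illustrative sketch only: synthetic 1-5 ratings, not the study's data or code.
import random
from statistics import mean

random.seed(0)
ELEMENTS = ["potential", "excitement", "trust", "understanding", "fear"]

# Hypothetical respondents: each rates all five elements on a 1-5 scale
# and reports whether they hold a college degree.
respondents = [
    {
        "ratings": {e: random.randint(1, 5) for e in ELEMENTS},
        "college_degree": random.random() < 0.4,
    }
    for _ in range(500)
]

# Rank the five acceptance elements by mean rating, highest first.
means = {e: mean(r["ratings"][e] for r in respondents) for e in ELEMENTS}
for element, avg in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"{element:>13}: {avg:.2f}")

# Compare mean trust ratings across a demographic split (college degree).
for has_degree in (True, False):
    group = [r for r in respondents if r["college_degree"] == has_degree]
    label = "degree" if has_degree else "no degree"
    avg_trust = mean(r["ratings"]["trust"] for r in group)
    print(f"trust ({label}): {avg_trust:.2f}")
```

With synthetic uniform ratings, the printed means will all hover near 3; in the actual study, differences like these would be estimated from real responses and tested statistically rather than read off raw means.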

“We were surprised by the gap between what people said in general about AI and how they felt in a real example,” says Sobolev, who leads the Behavioral Design Unit at Cedars-Sinai Medical Center in Los Angeles, with the goal of advancing human-centered innovation. “Our results show that learning about specific, real-world examples can help build trust between people and AI in medicine.”

About the Society for Risk Analysis

The Society for Risk Analysis (SRA) is a multidisciplinary, global organization dedicated to advancing the science and practice of risk analysis. Founded in 1980, SRA brings together researchers, practitioners, and policymakers from diverse fields including engineering, public health, environmental science, economics, and decision theory. The Society fosters collaboration and communication on risk assessment, management, and communication to inform decision-making and protect public well-being. SRA supports a wide range of scholarly activities, publications, and conferences. Learn more at www.sra.org.
