Tuesday, December 9, 2025

Most People Trust Doctors Over AI but See Promise for Cancer Diagnosis

New research on public attitudes toward AI indicates that most people are reluctant to let ChatGPT and other AI tools diagnose their health condition, but see promise in technologies that use AI to help diagnose cancer. Image for illustration purposes

By Society for Risk Analysis (SRA)

Newswise — Washington, D.C. – New research on public attitudes toward AI indicates that most people are reluctant to let ChatGPT and other AI tools diagnose their health conditions, but see promise in technologies that use AI to help diagnose cancer. These and other results of two nationally representative surveys will be presented at the annual meeting of the Society for Risk Analysis, Dec. 7-10 in Washington, D.C.

Led by behavioral scientist Dr. Michael Sobolev of the Schaeffer Institute for Public Policy & Government Service at the University of Southern California, and psychologist Dr. Patrycja Sleboda, assistant professor at Baruch College, City University of New York, the study focuses on public perspectives—specifically trust, understanding, potential, excitement, and fear of AI—in the context of cancer diagnosis, one of AI’s most commonly used and impactful applications in medicine. It also examines how these public attitudes vary by demographics, such as age, gender and education.  


The study used data from two nationally representative surveys to assess how personal use of AI tools like ChatGPT and general trust in medical AI relate to the acceptance of an AI-based diagnostic tool for cervical cancer. 

Key findings: 

  • Most people still trust doctors more than AI. Only about 1 in 6 people (17%) said they trust AI as much as a human expert to diagnose health problems.  
  • People who have tried AI (like ChatGPT) feel more positive about AI’s application in medicine. Those who had used AI in their personal lives said they understood it better and were more excited about, and trusting of, its use in healthcare. (55.1% of respondents had heard of ChatGPT but not used it, while 20.9% had both heard of and used it.) 
  • People see promise, not danger. When participants learned about an AI tool that helps find early signs of cancer, most thought it had great potential and were more excited than afraid. 

“Our research shows that even a little exposure to AI—just hearing about it or trying it out—can make people more comfortable and trusting of the technology. We know from research that familiarity plays a big role in how people accept new technologies, not just AI,” says Sleboda.  

In the first survey, participants reported whether they had heard of or used AI technologies and responded to questions about their general trust in AI for health diagnoses.  


In the second survey, participants were introduced to a scenario based on a real-world development, in which a research team built an AI system that can analyze digital images of the cervix to detect precancerous changes (a technology called automated visual evaluation). Participants then rated, on a scale from 1 to 5, five elements of acceptance for this diagnostic AI tool: understanding, trust, excitement, fear and potential. 

An analysis of the results showed that potential was rated highest when judging a diagnostic AI tool, followed by excitement, trust, understanding and fear. Identifying as male and having a college degree were associated with greater trust in, excitement about and perceived potential of AI in healthcare. These participants also expressed lower fear of the use of AI overall. 

“We were surprised by the gap between what people said in general about AI and how they felt in a real example,” says Sobolev, who leads the Behavioral Design Unit at Cedars-Sinai Medical Center in Los Angeles, with the goal of advancing human-centered innovation. “Our results show that learning about specific, real-world examples can help build trust between people and AI in medicine.” 

About Society for Risk Analysis 

The Society for Risk Analysis (SRA) is a multidisciplinary, global organization dedicated to advancing the science and practice of risk analysis. Founded in 1980, SRA brings together researchers, practitioners, and policymakers from diverse fields, including engineering, public health, environmental science, economics, and decision theory. The Society fosters collaboration and communication on risk assessment, management, and communication to inform decision-making and protect public well-being. SRA supports a wide range of scholarly activities, publications, and conferences. Learn more at www.sra.org.
