
Acceptance of Artificial Intelligence in Healthcare Is Making Inroads


Have you ever asked ChatGPT to create a bio on yourself? Was it spot-on? Somewhat accurate? Total fiction?

ChatGPT and similar programs, including Google’s Med-PaLM 2 and Bard and Microsoft’s Bing Chat, are generative artificial intelligence (AI) programs that use large language models (LLMs) to respond to prompts. ChatGPT, for example, is capable of producing text, images, and other media, such as music and videos. It learns by applying neural network machine learning techniques and generates new data with characteristics similar to the data it was trained on. Similarly, Med-PaLM 2 is an AI program specific to medical applications that uses Google’s LLM to answer health-related questions. This program was the first LLM to perform at an “expert” level on professional medical board examination data sets (MedQA and MedMCQA).1

AI clearly has some useful applications in areas such as customer service and human resources support. But what if ChatGPT or Med-PaLM 2 were to provide healthcare advice to patients and providers? Would you be inclined to trust the information?

Tebra, a healthcare technology company, surveyed 1000 Americans and 500 healthcare professionals about the use of AI in healthcare, and the results may surprise you. Some key takeaways include2:

  • More than 1 in 10 healthcare professionals use AI technologies, and almost 50% intend to adopt these technologies in the future.
  • 8 in 10 Americans believe that AI has the potential to improve the quality of healthcare, reduce healthcare costs, and increase accessibility to healthcare.
  • 1 in 4 Americans would rather talk to an AI chatbot than receive therapy.
  • 1 in 4 Americans would not visit a healthcare provider who refuses to embrace AI technology.

Although 25% of Americans reported that they would not visit a healthcare provider who refuses to embrace AI technology, many still have important reservations about the technology. Chief among these concerns, cited by 53% of respondents, was the belief that AI technology cannot fully replace the expertise and experience of well-trained healthcare providers.2 In addition, almost half (47%) of respondents reported concerns about the reliability of AI-generated information on diagnoses and treatments. Data privacy and security concerns were expressed by 42% of respondents, and 33% were concerned that biased algorithms could result in unfair or discriminatory treatment.2

According to the survey results among healthcare professionals, almost 90% are not currently using AI technologies, but many expressed an intention to do so in the future. When asked how they intend to use it, 52% cited data entry and 42% cited appointment scheduling. These are fairly low-level tasks, but 31% of healthcare providers indicated that they would use AI for diagnosis and treatment purposes.

To assess healthcare professionals’ opinions of the quality of answers to common medical questions from ChatGPT, Bard, and Bing, the Tebra survey posed the same questions to each AI program, and medical professionals then evaluated the answers. A total of 44% of respondents believed ChatGPT gave the best answers of the 3 programs, followed by Bard (42%) and Bing (14%). The healthcare professionals also assessed medical guidance provided by ChatGPT on self-examinations for various cancers and arthritis. After this exercise, 46% of healthcare professionals reported feeling more optimistic about the use of AI in healthcare.

The rapid rise and improvement of AI technology is remarkable and offers potential healthcare benefits, such as time savings, cost savings, and increased accessibility.2 Concerns linger, however, about the ability of this technology to reliably assume tasks that require the expertise and experience of human healthcare providers, and many Americans expressed a basic preference for human interaction.2

The complete results from Tebra’s survey can be found at Perceptions of AI in healthcare: What professionals and the public think.

References

  1. Matias Y, Corrado G. Our latest health AI research updates. Google. March 14, 2023. https://blog.google/technology/health/ai-llm-medpalm-research-thecheckup/. Accessed July 21, 2023.
  2. Tebra. Perceptions of AI in healthcare: What professionals and the public think. April 27, 2023. www.tebra.com/blog/research-perceptions-of-ai-in-healthcare/. Accessed July 19, 2023.