Search Articles

Search Results (1 to 10 of 269 Results)

Is This Chatbot Safe and Evidence-Based? A Call for the Critical Evaluation of Generative AI Mental Health Chatbots

However, there is currently no legal, academic, or industry-agreed standard or method for evaluating these chatbots in a way that enables consumers to be meaningful, active collaborators in their own care. Consumers are therefore left to navigate this landscape without guidance on what makes a chatbot safe and effective.

Acacia Parks, Eoin Travers, Ramesh Perera-Delcourt, Max Major, Marcos Economides, Phil Mullan

J Particip Med 2025;17:e69534

Novel Blended Learning on Artificial Intelligence for Medical Students: Qualitative Interview Study

To address the relevant technologies (ie, natural language processing, large language models, and chatbots), students are introduced to an AI-based smartphone app (Ada Health), which acts as a chatbot to take a symptom-based clinical history and make a suspected diagnosis [47-49]. Students then take a clinical history in groups of two from one of the lecturers, who takes on the role of a patient based on a predefined case vignette.

Zoe S Oftring, Kim Deutsch, Daniel Tolks, Florian Jungmann, Sebastian Kuhn

JMIR Med Educ 2025;11:e65220

Evaluating an AI Chatbot “Prostate Cancer Info” for Providing Quality Prostate Cancer Screening Information: Cross-Sectional Study

While the performance of generative AI chatbots has varied depending on the disease queried, complexity of the query, and brand of chatbot used, these tools show promise for being reliable health information resources in the future [3,5,6].

Otis L Owens, Michael S Leonard

JMIR Cancer 2025;11:e72522

The Effectiveness of a Chatbot Single-Session Intervention for People on Waitlists for Eating Disorder Treatment: Randomized Controlled Trial

However, there were serious concerns about the safety of this chatbot when it was modified for wider public use [30]. More specifically, in June 2023, the Tessa chatbot deviated from its preprogrammed answers to provide dieting and weight loss advice, which can be particularly harmful in eating disorder settings. The incident received a great deal of publicity and commentary, and, to our knowledge, the Tessa chatbot has not been implemented in any public setting since [31].

Gemma Sharp, Bronwyn Dwyer, Alisha Randhawa, Isabella McGrath, Hao Hu

J Med Internet Res 2025;27:e70874

Global Health Care Professionals’ Perceptions of Large Language Model Use in Practice: Cross-Sectional Survey Study

ChatGPT, a chatbot powered by GPT-3/4, was released by OpenAI in November 2022, incorporating billions of parameters that enable it to comprehend and generate human-like text with the capability of context creation. Its intuitive interface and capacity for prompt engineering have enabled diverse applications across domains [2]. In medicine, recent studies have demonstrated ChatGPT’s potential to support clinical decision-making, summarize complex medical data, and streamline documentation processes.

Ecem Ozkan, Aysun Tekin, Mahmut Can Ozkan, Daniel Cabrera, Alexander Niven, Yue Dong

JMIR Med Educ 2025;11:e58801

Building and Beta-Testing Be Well Buddy Chatbot, a Secure, Credible and Trustworthy AI Chatbot That Will Not Misinform, Hallucinate or Stigmatize Substance Use Disorder: Development and Usability Study

One researcher has highlighted additional critical concerns that are specific to health care organizations that seek to use AI chatbots, including a need to align chatbot design and security with Health Insurance Portability and Accountability Act (HIPAA) regulations that govern patient protections in care delivery [21]. Optimizing digital tools for substance use is warranted.

Adam Jerome Salyers, Sheana Bull, Joshva Silvasstar, Kevin Howell, Tara Wright, Farnoush Banaei-Kashani

JMIR Hum Factors 2025;12:e69144

The Effectiveness of a Custom AI Chatbot for Type 2 Diabetes Mellitus Health Literacy: Development and Evaluation Study

The system (Figure 3) was designed around a custom conversational agent chatbot [26] to address several limitations associated with the public OpenAI chatbot interface, particularly with regard to prompt control, user privacy, and the relevance of the source material. To ensure controlled interactions, the chatbot was programmed using a fixed, carefully constructed prompt that guided all responses.
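For illustration only, the minimal Python sketch below shows one way a fixed system prompt can be applied to every user query through the OpenAI Chat Completions API; the prompt wording, model name, and scope rules are assumptions for the example and are not the authors' actual configuration.

```python
# Minimal sketch (not the study's implementation): constrain a chatbot by
# sending the same fixed system prompt with every request. Prompt text,
# model name, and temperature are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical fixed prompt; the study's actual prompt is not given in the excerpt.
FIXED_SYSTEM_PROMPT = (
    "You are a type 2 diabetes education assistant. Answer only questions about "
    "type 2 diabetes self-management in plain language. If a question is outside "
    "this scope or requires personal medical advice, direct the user to their "
    "health care provider."
)

def ask_chatbot(user_message: str) -> str:
    """Send one user message under the fixed system prompt and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": FIXED_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
        temperature=0.2,  # low temperature for more consistent, controlled answers
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_chatbot("What does HbA1c measure?"))
```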

Anthony Kelly, Eoin Noctor, Laura Ryan, Pepijn van de Ven

J Med Internet Res 2025;27:e70131

Development and Systematic Evaluation of a Progressive Web Application for Women With Cardiac Pain: Usability Study

In scenario 1 of cycle 1 (sign-in, chatbot, and Event Profile), participants reported low contrast between the text and the background and a small font size at sign-in. Participants also wanted clarification that the chatbot was not a real person.

Monica Parry, Tony Huang, Hance Clarke, Ann Kristin Bjørnnes, Paula Harvey, Laura Parente, Colleen Norris, Louise Pilote, Jennifer Price, Jennifer N Stinson, Arland O’Hara, Madusha Fernando, Judy Watt-Watson, Nicole Nickerson, Vincenza Spiteri DeBonis, Donna Hart, Christine Faubert

JMIR Hum Factors 2025;12:e57583

Assessing the Quality and Reliability of ChatGPT’s Responses to Radiotherapy-Related Patient Queries: Comparative Study With GPT-3.5 and GPT-4

Reference 4: Evaluation of oropharyngeal cancer information from revolutionary artificial intelligence chatbot
Reference 15: Developing an AI-assisted educational chatbot for radiotherapy using the IBM Watson assistant

Ana Grilo, Catarina Marques, Maria Corte-Real, Elisabete Carolino, Marco Caetano

JMIR Cancer 2025;11:e63677

Primary Technology-Enhanced Care for Hypertension Scaling Program: Trial-Based Economic Evaluation Examining Effectiveness and Cost-Effectiveness Using Real-World Data in Singapore

The program comprises the following three components: (1) remote monitoring of BP with a Bluetooth-enabled BP machine at least once a week, with the readings transmitted to the public primary care clinic through the Health Discovery+ app; (2) care team support, including monthly monitoring of the transmitted readings and contacting the patient via teleconsultation if their condition is not well controlled or needs medication titration; and (3) in-app support with the provision of a digital chatbot through helpful …
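As a rough, hypothetical illustration of the triage step in component (2), the sketch below flags a patient for teleconsultation when transmitted home BP readings suggest inadequate control; the 140/90 mm Hg cutoff and the data fields are assumptions for the example, not values taken from the program.

```python
# Hypothetical sketch of the monthly review of transmitted home BP readings.
# Threshold values and field names are illustrative assumptions only.
from dataclasses import dataclass
from statistics import mean

@dataclass
class BPReading:
    systolic: int   # mm Hg
    diastolic: int  # mm Hg

def needs_teleconsult(readings: list[BPReading],
                      sys_limit: int = 140,
                      dia_limit: int = 90) -> bool:
    """Flag the patient if average home BP over the period exceeds the assumed limits."""
    if not readings:
        return True  # no transmitted readings also warrants follow-up
    avg_sys = mean(r.systolic for r in readings)
    avg_dia = mean(r.diastolic for r in readings)
    return avg_sys >= sys_limit or avg_dia >= dia_limit

# Example: weekly readings over one month
month = [BPReading(150, 92), BPReading(146, 88), BPReading(142, 90), BPReading(138, 86)]
print(needs_teleconsult(month))  # True -> care team contacts the patient
```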

Yi Wang, Shilpa Tyagi, David Wei Liang Ng, Valerie Hui Ying Teo, David Kok, Dennis Foo, Gerald Choon-Huat Koh

J Med Internet Res 2025;27:e59275