Large language models (LLMs) are increasingly being applied in healthcare to support clinical tasks such as medical question answering, patient communication, and summarization of medical records. After analyzing results on platforms like the Open Medical LLM Leaderboard and in peer-reviewed papers, I listed the leading models below.1
Community healthcare LLMs
Community healthcare LLMs are open-source models that are pretrained, fine-tuned, or instruction-tuned on biomedical and clinical data. They are best suited to domain-specific QA and instruction-following tasks.

Source: Open Life Science AI2
Benchmark analysis
- Open-source LLMs like OpenBioLLM-Llama3-70B outperform proprietary models such as GPT-4 and Med-PaLM on the leaderboard's benchmarks.
- Commercial models like GPT-4-base and Med-PaLM-2 achieve high accuracy.
- Smaller models like Mistral-7B-v0.1 demonstrate competitive performance on selected datasets, despite having only ~7B parameters.
Methodology
This open-source medical LLM comparison evaluates models on several datasets, including MedQA (USMLE), PubMedQA, MedMCQA, and the medical/biology subsets of MMLU, covering domains such as clinical knowledge, anatomy, and genetics.
- Model evaluation: The results for GPT-4 and Med-PaLM-2 are sourced from their official publications. As Med-PaLM-2 does not report zero-shot accuracy, its 5-shot performance is used for comparison. All other model results are presented in the zero-shot setting (a minimal zero-shot scoring sketch follows these notes). Gemini’s results are based on findings from a recent Clinical-NLP paper (NAACL 2024).3
- Model inclusion: Only publicly available models (via API or Hugging Face) are eligible.
- Purpose: For research evaluation only. Models are not approved for clinical use.
- Data transparency: Results for proprietary models like GPT-4 and Med-PaLM-2 are sourced from official publications.
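To make the zero-shot setting concrete, here is a minimal sketch that scores one MedQA-style multiple-choice question by comparing the log-likelihood a causal language model assigns to each answer option. The checkpoint name and the example question are placeholders, and this is not the leaderboard's actual evaluation harness.

```python
# Minimal zero-shot multiple-choice scoring sketch (not the leaderboard's actual harness).
# The checkpoint name is a placeholder; substitute any causal LM from Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "your-org/your-medical-llm"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

question = "Which vitamin deficiency causes scurvy?"  # illustrative item, not drawn from MedQA
options = ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"]

def option_logprob(question: str, option: str) -> float:
    """Sum of log-probabilities the model assigns to the option tokens after the prompt."""
    prompt = f"Question: {question}\nAnswer:"
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + " " + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i + 1
    option_ids = full_ids[0, prompt_len:]
    positions = range(prompt_len - 1, full_ids.shape[1] - 1)
    return sum(log_probs[pos, tok].item() for pos, tok in zip(positions, option_ids))

scores = {opt: option_logprob(question, opt) for opt in options}
print(max(scores, key=scores.get))  # zero-shot prediction: highest-likelihood option
```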
Commercial healthcare LLMs
Commercial medical LLMs are typically developed through institutional, multi-step fine-tuning processes and offer capabilities such as clinical reasoning, medical dialogue, summarization, and safety-aligned outputs.
BERT-like (Encoder-only)
Optimized for encoding and representing biomedical text, these models excel at extracting features for tasks such as classification; a minimal feature-extraction sketch follows the table.
Model | Developer | Year | Parameters (B) | Open Source |
---|---|---|---|---|
BioLinkBERT | — | 2022 | 0.34 | ✅ |
MedBERT | Stanford University | 2021 | 0.017 | ✅ |
Health Acoustic Representations (HeAR) | Google | 2024 | 0.31 | ❌ |
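Here is the feature-extraction sketch referenced above: it embeds a clinical sentence with an encoder-only model via Hugging Face transformers. The BioLinkBERT checkpoint name is an assumption; any BERT-style biomedical encoder with the same interface would work, and the resulting sentence vector would then feed a downstream classifier.

```python
# Minimal feature-extraction sketch with an encoder-only biomedical model.
# The checkpoint name is an assumption; any BERT-style encoder works the same way.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "michiyasunaga/BioLinkBERT-base"  # assumed public checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

text = "Patient reports chest pain radiating to the left arm."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token embeddings into a single sentence vector for a downstream classifier.
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # e.g. torch.Size([1, 768])
```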
ChatGPT / LLaMA-like (Decoder, instruction/chat-tuned)
Based on LLaMA-style architectures and optimized for interactive tasks and clinical dialogues.
Model | Developer | Year | Parameters (B) | Open Source |
---|---|---|---|---|
Polaris 3.0 | Hippocratic AI | 2025 | 4200 | ❌ |
MEDITRON-70B | EPFL (Swiss AI Lab) | 2023 | 70 | ✅ |
Me-LLaMA | PhysioNet (multi-institution) | 2024 | 70 | ✅ |
OpenBioLLM | – | 2024 | 70 | ✅ |
Radiology-Llama2 | Meta | 2023 | 70 | ✅ |
PMC-LLaMA | Shanghai AI Lab & SJTU | 2024 | 13 | ✅ |
ChatDoctor | UT Southwestern & collaborators | 2023 | 13 | ✅ |
Asclepius | KAIST & Yonsei Univ. | 2023 | 13 | ✅ |
MedAlpaca | Technical University of Munich | 2023 | 13 | ✅ |
Clinical Camel | University of Toronto (Vector Institute) | 2023 | 13 | ✅ |
GatorTron | NVIDIA & Univ. of Florida | 2021 | 8.9 | ✅ |
Hippocrates (Hippo) | Koç University | 2023 | 7 | ✅ |
GPT / PaLM-like (Decoder-only, generative)
Built similarly to GPT-3 or PaLM, these models are fine-tuned for general-purpose text generation and summarization; a small generation sketch follows the table.
Model | Developer | Year | Parameters (B) | Open Source |
---|---|---|---|---|
Med-PaLM 2 | Google | 2023 | 340 | ❌ |
BioMedLM (formerly PubMedGPT) | Stanford CRFM (MosaicML) | 2022 | 2.7 | ✅ |
BioGPT | Microsoft Research | 2022 | 0.35 | ✅ |
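As a small illustration of the generative use these decoder-only models target, the sketch below runs free-text completion through the transformers text-generation pipeline. It assumes BioGPT's public Hugging Face checkpoint is available; the prompt and generation settings are placeholders.

```python
# Minimal biomedical text-generation sketch with a decoder-only model.
# Assumes BioGPT's public Hugging Face checkpoint ("microsoft/biogpt") is available.
from transformers import pipeline

generator = pipeline("text-generation", model="microsoft/biogpt")

prompt = "Metformin is a first-line treatment for"  # illustrative prompt
outputs = generator(prompt, max_new_tokens=40, do_sample=False)
print(outputs[0]["generated_text"])
```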
Commercial healthcare LLMs vs general-purpose LLMs
This benchmark compares the supervised fine-tuning performance of healthcare LLMs with large general-purpose models (ChatGPT, GPT-4) on medical question-answering tasks.
- MedQA: Multiple-choice medical exam questions based on the United States Medical Licensing Examination (USMLE).
- MedMCQA: Large-scale, multiple-choice question answering (MCQA) dataset designed to address real-world medical entrance exam questions.
- PubMedQA: Biomedical question-answering benchmark using yes/no/maybe answers.

General-purpose LLMs in healthcare
Model | Healthcare use case example | Method used | Open source |
---|---|---|---|
GPT‑4 | Summarizing patient histories from healthcare notes for clinical decision support4 | RAG (Retrieval-Augmented Generation) | ❌ |
Claude 3 | Head and neck cancer diagnosis and treatment planning in oncology board simulations5 | RAG + Prompt Engineering | ❌ |
Qwen 3 | Medical reasoning tasks6 | Continual pretraining + Fine-tuning | ✅ |
Command R+ | Retrieval-augmented pipelines for clinical Q&A and literature review7 | RAG | ❌ |
LLaMA 3 | Hospital discharge summary generation and question answering8 | Continual pretraining + Fine-tuning | ✅ |
These models are general-purpose and typically require domain adaptation to perform clinical tasks accurately. You can adapt them for healthcare by leveraging:
- Continual pretraining on medical data to help the model better understand medical language by exposing it to clinical notes and biomedical literature (like PubMed).
- RAG to pull data from verified clinical documents and produce accurate, grounded responses at runtime (see the RAG sketch below).
- Instruction fine-tuning to teach the model how to answer clinical questions or extract symptoms from text.

Source: Mayo Clinic Proceedings: Digital Health9
For more: LLM fine-tuning and LLM training.
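Here is a minimal sketch of the RAG pattern from the list above: retrieve the most relevant passage from a small set of vetted documents, then ground the model's prompt in it. The document snippets and question are invented, TF-IDF stands in for a production vector store, and the final generation step is left as a placeholder for whichever LLM you use.

```python
# Minimal retrieval-augmented generation (RAG) sketch for clinical Q&A.
# Documents, question, and the final model call are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [  # placeholder "verified" snippets, not real clinical guidance
    "Adults with hypertension are commonly advised to reduce sodium intake.",
    "Metformin is a first-line oral medication for type 2 diabetes.",
    "Annual influenza vaccination is recommended for most adults.",
]
question = "What is a first-line oral medication for type 2 diabetes?"

# 1) Retrieve: rank documents by TF-IDF cosine similarity to the question.
vectorizer = TfidfVectorizer().fit(documents + [question])
doc_vectors = vectorizer.transform(documents)
query_vector = vectorizer.transform([question])
best_doc = documents[cosine_similarity(query_vector, doc_vectors).argmax()]

# 2) Augment: ground the prompt in the retrieved passage.
prompt = (
    "Answer the question using only the context below.\n"
    f"Context: {best_doc}\n"
    f"Question: {question}\nAnswer:"
)

# 3) Generate: send `prompt` to any instruction-tuned LLM (API or local model).
print(prompt)
```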
10 Use Cases of Large Language Models in Healthcare
1- Medical Transcription
LLMs can help create medical transcriptions by:
- Listening to the natural dialogue between a patient and clinician
- Extracting important medical details
- Condensing medical data into compliant medical records that align with the relevant sections of an EHR
Real-life use case – Google’s MedLM can capture and transform the patient-clinician conversation into a medical transcription.10
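Below is a simplified sketch of the extraction-and-condensing steps described above: prompting an LLM to pull structured fields out of a transcript. The transcript is invented and the summarize function is a stand-in for a real LLM call; this is not Google's MedLM pipeline.

```python
# Sketch of turning a patient-clinician transcript into a structured note.
# The transcript is invented; `summarize` is a placeholder for any LLM endpoint.
import json

transcript = (
    "Clinician: What brings you in today?\n"
    "Patient: I've had a sore throat and mild fever for three days.\n"
    "Clinician: Any medication so far?\n"
    "Patient: Just acetaminophen twice a day."
)

EXTRACTION_PROMPT = (
    "Extract the following fields from the transcript as JSON: "
    "chief_complaint, symptom_duration, current_medications, plan.\n\n"
    f"Transcript:\n{transcript}\n\nJSON:"
)

def summarize(prompt: str) -> str:
    """Placeholder for an LLM call; replace with your provider's client."""
    return json.dumps({
        "chief_complaint": "sore throat and mild fever",
        "symptom_duration": "three days",
        "current_medications": ["acetaminophen"],
        "plan": "to be completed by clinician",
    })

note = json.loads(summarize(EXTRACTION_PROMPT))
print(note["chief_complaint"])  # structured fields ready to map onto EHR sections
```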
2- Electronic Health Records (EHR) Enhancement
The proliferation of electronic health records (EHR) has accumulated a vast repository of patient data, which, if mined effectively, can become a goldmine for healthcare improvement.
Real-life use case – Google’s MedLM is also used by BenchSci, Accenture, and Deloitte for electronic health record (EHR) enhancement.
- BenchSci has integrated MedLM into its ASCEND platform to improve the quality of preclinical research.
- Accenture uses MedLM to organize unstructured data from numerous sources, automating human operations that were previously time-consuming and error-prone.
- Deloitte works with MedLM to minimize friction in finding treatment, using an interactive chatbot that helps health plan participants better understand their provider options.11
3- Clinical Decision Support
Large language models can summarize complex medical concepts, providing valuable insights that support the decision-making process.
Real-life use case – Memorial Sloan Kettering Cancer Center uses IBM Watson Oncology to assist oncologists by analyzing patient data and medical literature to recommend evidence-based treatment options.12
4- Medical Research Assistance
LLMs can parse and summarize vast amounts of data and extract key findings from new research, providing synthesized insights. For example, ChatGPT, one of the most widely used LLMs, is commonly applied to text summarization.
Real-life use case – John Snow Labs’ healthcare chatbot helps researchers find relevant scientific papers, extract key insights, and identify research trends. It is particularly valuable for navigating the vast amount of biomedical literature.13
Real-life use case – TidalHealth Peninsula Regional clinicians used the Micromedex with Watson solution for healthcare research, reporting that clinicians received answers in under one minute about 70% of the time.14
5- Automated Patient Communication
Large language models in healthcare can draft informative and compassionate responses to patients’ queries.
Some examples include:
- Medication management and reminders: A chatbot provides patients regular reminders to take their diabetic medication and requests confirmation.
- Health monitoring and follow-up care: A post-operative patient sends their pain and wound status to a chatbot, which assesses whether healing is progressing as expected.
- Informational and educational communication: A patient asks a chatbot how to manage high blood pressure, and the chatbot responds with nutrition and lifestyle tips.
Real-life use case – Boston Children’s Hospital uses Buoy Health, an AI-driven online symptom-checker chatbot that provides patients with instant answers to health-related questions and initial consultations.
The chatbot can triage patients by analyzing their symptoms and advising whether they need to see a doctor.15
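The medication-reminder example above boils down to a simple schedule-and-confirm loop around the chatbot. Here is a toy sketch of that flow; the schedule, messages, and escalation rule are invented, and a real deployment would sit behind consent, identity verification, and clinical oversight.

```python
# Toy sketch of a medication-reminder flow like the example above.
# Schedule and messages are invented; not a production patient-communication system.
from datetime import datetime, time

REMINDER_TIMES = [time(8, 0), time(20, 0)]  # twice-daily doses, for illustration

def due_reminder(now: datetime) -> str | None:
    """Return a reminder message if the current time matches a scheduled dose."""
    for slot in REMINDER_TIMES:
        if now.hour == slot.hour and now.minute == slot.minute:
            return "Time to take your diabetes medication. Reply YES to confirm."
    return None

def handle_reply(reply: str) -> str:
    """Record confirmation or escalate to a human care team member."""
    if reply.strip().upper() == "YES":
        return "Thanks, your dose has been logged."
    return "No confirmation received; a member of your care team will follow up."

print(due_reminder(datetime(2025, 1, 1, 8, 0)))
print(handle_reply("yes"))
```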
6- Predictive Health Outcomes
LLMs can assist in predictive analysis by discerning patterns within data.
Real-life use case – WVU pharmacists using AI to reduce patient readmission rates: WVU pharmacists use a predictive algorithm that leverages LLMs to estimate readmission risk. The approach examines data from electronic health records (EHRs), including patient demographics, clinical history, and socioeconomic determinants of health.
Based on these predictions, the WVU pharmacists identify patients at high risk of readmission and assign care coordinators to follow up with them after discharge, which can help reduce readmission rates.16
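To show the shape of such a readmission-risk model, here is an illustrative sketch in which a conventional logistic-regression classifier stands in for the predictive step. It is not WVU's actual algorithm, and the features and labels are synthetic.

```python
# Illustrative readmission-risk scoring sketch (not WVU's actual algorithm).
# Features and labels are synthetic; a real model needs clinical validation.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: age, prior admissions in the last year, number of active medications
X_train = np.array([[72, 3, 12], [45, 0, 2], [80, 2, 9], [30, 0, 1], [65, 1, 6], [55, 4, 10]])
y_train = np.array([1, 0, 1, 0, 0, 1])  # 1 = readmitted within 30 days (synthetic labels)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

new_patient = np.array([[78, 2, 11]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated 30-day readmission risk: {risk:.0%}")

# Patients above a threshold could be flagged for care-coordinator follow-up.
if risk > 0.5:
    print("Flag for post-discharge follow-up.")
```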
7- Personalized Treatment Plans
LLMs can suggest treatment plans tailored to an individual’s medical history and specific needs. Their ability to distill complex patient narratives into actionable insights can ensure that each patient receives a care plan that’s as unique as their health journey.
Real-life use case – Babylon Health: Babylon Health’s AI chatbot provides individualized health recommendations based on the user’s symptoms and medical history. It engages users in a conversation by asking relevant questions to analyze their issues better and giving tailored recommendations.17
8- Medical Coding and Billing
Large language models can automate audit processes by analyzing patient records and EHRs.
For example, Epic Systems, a major EHR provider, integrates LLMs into its software to assist with coding and billing. The LLMs can monitor for anomalies in access patterns to sensitive patient information or inconsistencies in coding and billing practices.18
However, while promising, LLMs are not yet ready for medical coding: researchers examined how frequently four LLMs (GPT-3.5, GPT-4, Gemini Pro, and Llama2-70b Chat) produced the correct CPT, ICD-9-CM, and ICD-10-CM codes.
Their findings show significant room for improvement: the models frequently generated codes that convey inaccurate information, with a maximum accuracy of 50%.19
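A natural way to measure this is an exact-match comparison between model-generated codes and the billing codes assigned by professional coders. Here is a toy version of such a check; the cases and codes below are invented examples, not the study's data.

```python
# Toy exact-match accuracy check for LLM-generated billing codes.
# Predicted and reference codes are invented examples, not the study's data.
predictions = {          # code the model returned for each case description
    "case_1": "E11.9",   # type 2 diabetes without complications
    "case_2": "I10",     # essential hypertension
    "case_3": "J45.909", # unspecified asthma, uncomplicated
}
references = {           # billing codes assigned by professional coders
    "case_1": "E11.9",
    "case_2": "I10",
    "case_3": "J45.40",
}

correct = sum(predictions[case] == references[case] for case in references)
accuracy = correct / len(references)
print(f"Exact-match coding accuracy: {accuracy:.0%}")  # 67% in this toy example
```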
9- Training and Education
Large language models and generative AI in general can be leveraged as interactive educational tools, elucidating complex concepts or offering clarifications on perplexing topics.
Real-life use case – Oxford Medical Simulation uses LLMs integrated with VR technology to create immersive virtual patient simulations.
These simulations allow students to experience high-pressure scenarios, such as handling a cardiac arrest patient without any real-world consequences.
The LLMs power the virtual patients’ responses, making them more realistic and unpredictable, preparing students for the variability of real clinical environments.20
10- Ethical and Compliance Monitoring
Large Language Models (LLMs) can be employed in healthcare compliance monitoring to ensure adherence to regulations such as HIPAA (Health Insurance Portability and Accountability Act), and GDPR (General Data Protection Regulation).
Real-life use case – FairWarning, a leading provider of patient privacy intelligence, uses LLMs to monitor healthcare organizations for potential privacy violations.
The system scans and analyzes user activity within EHRs to identify potential breaches, such as unauthorized access to patient records.
This helps healthcare providers ensure that all interactions with patient data comply with regulatory requirements.21
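At its core, this kind of monitoring flags accesses that break expected patterns. The sketch below shows a toy rule-based version; the log entries and rules are invented, and real systems like the one described combine such rules with model-based anomaly detection and human review.

```python
# Toy sketch of EHR access-pattern monitoring for privacy compliance.
# Log entries and rules are invented; not FairWarning's actual system.
access_log = [
    {"user": "nurse_a", "patient": "P001", "assigned": True,  "hour": 10},
    {"user": "clerk_b", "patient": "P002", "assigned": False, "hour": 3},
    {"user": "dr_c",    "patient": "P003", "assigned": True,  "hour": 23},
]

def flag_suspicious(entry: dict) -> bool:
    """Flag access by users not assigned to the patient, or at unusual hours."""
    off_hours = entry["hour"] < 6 or entry["hour"] > 22
    return (not entry["assigned"]) or off_hours

for entry in access_log:
    if flag_suspicious(entry):
        print(f"Review access by {entry['user']} to record {entry['patient']}")
```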
Challenges of Large Language Models in Healthcare
Accuracy and reliability
Large language models in healthcare can generate inaccuracies.
For example, when given a medical query, GPT-3.5 incorrectly recommended tetracycline for a pregnant patient, despite correctly explaining its potential harm to the fetus.22
Generalization vs. specialization
Healthcare encompasses a wide range of specialties, each with its nuances. An LLM trained in general medical data might not have the detailed expertise needed for specific medical specialties.
Biases and ethical considerations
Beyond accuracy, there are ethical concerns, like the potential for LLMs to perpetuate biases in the training data. This could result in unequal care recommendations for different demographic groups.
For more details on the challenges of large language models in healthcare, you can check our articles on the risks of generative AI and generative AI ethics.
Benchmark data sources
External Links
- 1. Open Medical-LLM Leaderboard - a Hugging Face Space by openlifescienceai. Open Life Science AI
- 2. Open Medical-LLM Leaderboard - a Hugging Face Space by openlifescienceai. Open Life Science AI
- 3. Clinical NLP Workshop 2024.
- 4. https://medium.com/llmed-ai/summarizing-patient-histories-with-gpt-4-9df42ba6453c
- 5. https://arxiv.org/abs/2403.12140
- 6. https://www.datacamp.com/tutorial/fine-tuning-qwen3
- 7. https://cohere.com/blog/command-r-plus
- 8. https://arxiv.org/abs/2404.04110
- 9. Fine-Tuning Large Language Models for Specialized Use Cases - Mayo Clinic Proceedings: Digital Health.
- 10. Google Launches A Healthcare-Focused LLM.
- 11. How doctors are using Google's new AI models for health care. CNBC
- 12. ResearchGate.
- 13. Medical ChatBot | Healthcare ChatBot | Medical GPT.
- 14. IBM Case Studies.
- 15. Buoy Health - IDHA. Boston Children's Hospital
- 16. WVU pharmacists using AI to help lower patient readmission rates | WVU Today | West Virginia University.
- 17. Babylon's AI-enabled symptom checker added to recently acquired Higi's app | MobiHealthNews.
- 18. Artificial Intelligence | Epic.
- 19. Large Language Models Are Poor Medical Coders — Benchmarking of Medical Code Querying | NEJM AI.
- 20. Oxford Medical Simulation - Virtual Reality Healthcare Training. Oxford Medical Simulation
- 21. Protect Patient Privacy with Imprivata Patient Privacy Intelligence - YouTube.
- 22. https://arxiv.org/pdf/2307.15343
- 23. Medical foundation large language models for comprehensive text analysis and beyond | npj Digital Medicine. Nature Publishing Group UK
- 24. [2311.16079] MEDITRON-70B: Scaling Medical Pretraining for Large Language Models.
- 25. [2305.09617] Towards Expert-Level Medical Question Answering with Large Language Models.
- 26. [2305.09617] Towards Expert-Level Medical Question Answering with Large Language Models.