BioPharmaTrend

Google Expands MedGemma Collection With Multimodal Health AI Models for Open Development

by BiopharmaTrend • July 10, 2025



Google Research has released two new models—MedGemma 27B Multimodal and MedSigLIP—as part of its open-source Health AI Developer Foundations (HAI-DEF) initiative, expanding the MedGemma suite of generative models tailored for medical imaging, text, and electronic health record (EHR) applications. These models are intended to support privacy-preserving, locally deployable AI tools across the healthcare and life sciences sectors.



MedGemma 27B Multimodal builds on earlier MedGemma releases by supporting joint reasoning across longitudinal EHRs and medical images. It achieves 87.7% on the MedQA benchmark, placing it within 3 points of DeepSeek R1 (a larger model) at one-tenth the inference cost, according to Google. Its 4B variant achieves 64.4% on MedQA and generated chest X-ray reports that a US board-certified radiologist judged to be sufficient for patient management in 81% of cases.

Figure: MedGemma prompt examples (source: Google)

MedSigLIP, a 400M-parameter encoder derived from the SigLIP architecture, was trained on diverse imaging modalities including chest X-rays, histopathology, dermatology, and fundus photos. It supports classification, zero-shot labeling, and semantic image retrieval across medical datasets, while retaining general-purpose image understanding.

According to Yossi Matias, VP and Head of Google Research, early adoption examples include:

  • Tap Health (India): testing MedGemma’s reliability in clinical-context-sensitive tasks;
  • Chang Gung Memorial Hospital (Taiwan): studying its performance on traditional Chinese-language medical literature and queries;
  • DeepHealth (USA): evaluating MedSigLIP for chest X-ray triage and nodule detection.

All MedGemma models are designed for accessibility: they can run on a single GPU and are compatible with Hugging Face and Google Cloud’s Vertex AI endpoints. The 4B variants can be adapted for mobile deployment. Google emphasizes that while these models offer strong out-of-the-box performance, they are intended for adaptation and validation in domain-specific settings, not direct clinical use.

Further details are available in Google's technical report.

Topics: AI & Digital   
