- Updated: March 11, 2026
- 7 min read
OpenRad: A Curated Repository of Open-Access AI Models for Radiology
Direct Answer
OpenRad is a curated, open‑access repository that aggregates nearly 1,700 AI models for radiology, standardizing their metadata, providing pretrained weights, and exposing interactive demos. By consolidating scattered research outputs into a searchable platform, OpenRad removes a major barrier to reproducibility, model discovery, and clinical translation of medical imaging AI.
Background: Why This Problem Is Hard
Radiology has become one of the fastest‑adopting domains for deep learning, yet the ecosystem suffers from three interlocking challenges:
- Fragmented distribution: Models are published across GitHub, institutional servers, personal webpages, and supplemental material of papers, making systematic discovery a manual, time‑consuming hunt.
- Inconsistent documentation: Researchers use heterogeneous schemas for describing modality, architecture, training data, and intended clinical use, which hampers automated indexing and comparison.
- Reproducibility gaps: Many papers omit pretrained weights, provide broken links, or lack clear licensing, preventing downstream developers from validating or extending the work.
Existing solutions—generic model hubs (e.g., Hugging Face) or domain‑specific lists on GitHub—address only part of the problem. They either lack a radiology‑specific taxonomy, do not verify that models are actually available, or offer limited search capabilities for clinical sub‑specialties. Consequently, radiologists, AI engineers, and health‑IT decision‑makers spend disproportionate effort just locating a model that matches a particular modality (CT, MRI, X‑ray, US) or use case (segmentation, diagnosis, workflow automation).
What the Researchers Propose
The authors introduce OpenRad, a repository built around the RSNA AI Roadmap JSON schema, which enforces a uniform set of fields for every model entry. The core components of the system are:
- Automated extraction pipeline: A locally hosted large language model (LLM) – gpt‑oss:120b – parses the full text of peer‑reviewed articles and preprints, populating the JSON schema with structured metadata.
- Human verification layer: Ten domain experts review each automatically generated record, correcting errors and confirming the presence of pretrained weights and demo applications.
- Web interface & search engine: A responsive portal that supports keyword search, faceted filters (modality, subspecialty, intended use, verification status, demo availability), and live statistical dashboards.
- Contribution portal: An open submission workflow that lets the community add new models, triggering the same LLM extraction and expert review loop.
By combining high‑capacity LLM extraction with rigorous expert curation, OpenRad aims to deliver a “single source of truth” for radiology AI models, while preserving the flexibility needed for continuous community growth.
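As a rough illustration, a schema‑conformant entry might resemble the following sketch. The field names here are hypothetical; the actual RSNA AI Roadmap JSON schema defines its own keys.

```python
import json

# Hypothetical metadata record in the spirit of the RSNA AI Roadmap
# schema; real field names and values may differ.
record = {
    "title": "Brain tumor segmentation on T2-weighted MRI",
    "modality": "MRI",
    "subspecialty": "neuroradiology",
    "architecture": "transformer",
    "intended_use": "segmentation",
    "training_dataset_size": 1251,
    "license": "Apache-2.0",
    "weights_url": "https://example.org/weights.ckpt",   # placeholder URL
    "demo_url": "https://example.org/demo",              # placeholder URL
    "verification_status": "fully_verified",
}

# Minimal completeness check of the kind the verification layer performs.
REQUIRED = {"title", "modality", "architecture", "license", "weights_url"}

def missing_fields(rec: dict) -> set:
    """Return required fields absent from a record."""
    return REQUIRED - rec.keys()

print(json.dumps(record, indent=2))
print("missing:", missing_fields(record))
```

A record that fails the completeness check would be routed back through the expert review loop rather than indexed.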
How It Works in Practice
Conceptual Workflow
- Literature harvest: The team crawls PubMed, arXiv, and Scopus up to December 2025, retrieving 5,239 records that mention AI for radiology.
- LLM‑driven parsing: Each paper is fed to the gpt‑oss:120b model, which extracts fields such as title, authors, modality, architecture (CNN, transformer, hybrid), training dataset size, licensing, and URLs for code or weights.
- Stability check: For a random subset of 225 papers, the system computes Levenshtein ratios and cosine similarity between repeated extractions, confirming >90 % stability for structured fields.
- Expert review: Ten radiology‑AI specialists validate the LLM output, marking edits as minor (78.5 % of cases) or major, and flagging missing or broken resources.
- Repository ingestion: Verified JSON records are indexed in a PostgreSQL‑backed search engine, exposing facets for fast filtering.
- User interaction: End‑users query the portal, apply filters, and can launch interactive demos directly from the browser or download pretrained weights for local experimentation.
- Community contribution: New submissions follow the same pipeline, ensuring consistency over time.
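The stability check in the workflow above can be sketched with a plain Levenshtein ratio, here defined as one minus the edit distance divided by the longer string's length. This is one common normalization; the paper may use a different one.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance via the standard dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def lev_ratio(a: str, b: str) -> float:
    """Similarity in [0, 1]; 1.0 means identical extractions."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

# Compare a field extracted in two independent LLM runs:
# minor formatting drift yields a high but sub-1.0 ratio.
print(lev_ratio("T2-weighted MRI", "T2 weighted MRI"))
```

Running such a comparison over repeated extractions of the same paper, and averaging per field, gives the per‑attribute stability figures the authors report.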
Key Differentiators
- Domain‑specific schema: Unlike generic model hubs, OpenRad’s schema captures radiology‑centric attributes (e.g., body part, imaging protocol, regulatory status).
- Hybrid automation + expert loop: Purely automated crawlers miss nuanced licensing or demo availability; purely manual curation cannot scale. The hybrid approach balances coverage and accuracy.
- Live statistics: The portal visualizes trends such as dominant architectures (CNNs ≈ 62 %, transformers ≈ 21 %) and modality distribution (MRI models lead with 621 entries), giving stakeholders immediate insight into research focus areas.
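The live‑statistics view boils down to facet counting over the verified records. A minimal sketch with made‑up sample data (the real dashboard draws on the full catalog) might look like:

```python
from collections import Counter

# Toy sample of verified records; real entries come from the repository.
models = [
    {"modality": "MRI", "architecture": "CNN"},
    {"modality": "MRI", "architecture": "transformer"},
    {"modality": "CT", "architecture": "CNN"},
    {"modality": "X-ray", "architecture": "hybrid"},
]

def facet_counts(records, field):
    """Count records per facet value, as a dashboard bar chart would."""
    return Counter(r[field] for r in records)

def facet_percentages(records, field):
    """Same counts expressed as percentages of the total."""
    counts = facet_counts(records, field)
    total = sum(counts.values())
    return {k: round(100 * v / total, 1) for k, v in counts.items()}

print(facet_counts(models, "modality"))        # MRI leads in this toy sample
print(facet_percentages(models, "architecture"))
```

The same counting applied to the full 1,694‑paper corpus yields the architecture and modality distributions quoted above.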
Evaluation & Results
Scope of the Curated Set
After the expert review stage, 1,694 papers remained, each linked to at least one AI model. The final repository includes:
| Attribute | Count / Percentage |
|---|---|
| Total models | ≈ 1,700 |
| Imaging modalities | CT (28 %), MRI (36 %), X‑ray (22 %), Ultrasound (14 %) |
| Dominant architectures | CNN (62 %), Transformer (21 %), Hybrid (17 %) |
| Geographic concentration | China (34 %), United States (31 %), Europe (18 %), Others (17 %) |
| Verification status | Fully verified (78 %), Partially verified (15 %), Unverified (7 %) |
| Interactive demos | Available for 42 % of models |
Stability of Automated Extraction
The authors measured the Levenshtein ratio for each structured field across repeated LLM runs. Ratios exceeded 0.90 for all critical attributes (title, modality, architecture), indicating that the LLM produces highly repeatable outputs. Minor edits—mostly punctuation or URL formatting—accounted for 78.5 % of reviewer changes, confirming that the automated pipeline captures the bulk of required information.
Impact of Curation on Discoverability
To quantify practical benefits, the team conducted a user study with 30 radiology AI developers. Participants were asked to locate a model for “brain tumor segmentation on T2‑weighted MRI”. Using a conventional Google search, the average time to find a usable model was 18 minutes, with a 40 % failure rate. Using OpenRad’s faceted search, the same task took under 3 minutes, and 93 % of participants succeeded. This demonstrates that a well‑structured repository can dramatically reduce the “search friction” that currently hampers AI adoption in clinical settings.
Why This Matters for AI Systems and Agents
OpenRad’s design aligns with the emerging paradigm of AI‑driven agents that need to locate, evaluate, and integrate external models on demand. The repository provides:
- Machine‑readable metadata: Agents can query the JSON API to retrieve model specifications, licensing terms, and performance benchmarks without manual parsing.
- Verified weight availability: Guarantees that downstream pipelines receive functional checkpoints, reducing runtime errors in automated deployment.
- Demo endpoints: Enables rapid “sandbox” execution, allowing agents to perform trial inference before committing resources.
- Standardized taxonomy: Facilitates cross‑modal reasoning (e.g., an agent that orchestrates a CT‑based lung nodule detector together with an X‑ray pneumonia classifier) by providing a common vocabulary.
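An agent‑side lookup could be as simple as fetching and filtering the machine‑readable records. The endpoint and field names below are hypothetical, so treat this as a sketch of the pattern rather than the repository's actual API.

```python
import json
import urllib.request

API_URL = "https://example.org/openrad/models.json"  # hypothetical endpoint

def fetch_models(url: str = API_URL) -> list:
    """Download the full model catalog as a list of JSON records."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def find_models(records, modality=None, intended_use=None, demo_only=False):
    """Filter records the way an orchestration agent might before deployment."""
    hits = []
    for r in records:
        if modality and r.get("modality") != modality:
            continue
        if intended_use and r.get("intended_use") != intended_use:
            continue
        if demo_only and not r.get("demo_url"):
            continue
        hits.append(r)
    return hits

# Offline illustration with an inline catalog instead of a live request:
catalog = [
    {"title": "Lung nodule detector", "modality": "CT",
     "intended_use": "detection", "demo_url": "https://example.org/d1"},
    {"title": "Pneumonia classifier", "modality": "X-ray",
     "intended_use": "diagnosis", "demo_url": None},
]
print(find_models(catalog, modality="CT", demo_only=True))
```

With verified weight URLs in each record, the agent can proceed from this filter step directly to downloading a checkpoint or calling a demo endpoint.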
For enterprises building radiology AI platforms, OpenRad can serve as a plug‑and‑play model marketplace, accelerating time‑to‑value and supporting compliance checks (e.g., verifying that a model’s license permits commercial use). Moreover, the repository’s live analytics help product managers prioritize which architectures or modalities to invest in, based on community adoption trends.
Read more about the repository’s technical underpinnings in the original OpenRad paper.
What Comes Next
Current Limitations
Despite its breadth, OpenRad faces several constraints:
- Temporal lag: The literature crawl stops at December 2025; newer models will require periodic re‑indexing.
- Coverage bias: The geographic concentration in China and the United States may reflect publication practices rather than true research diversity.
- Depth of evaluation: The repository records model architecture and availability but does not store exhaustive performance metrics (e.g., AUC on external test sets), which limits direct comparison.
Future Research Directions
- Integrate automated benchmark pipelines that run each model on standardized public datasets, populating a performance matrix.
- Expand the LLM extraction to include ethical and regulatory annotations (e.g., FDA clearance status, data privacy compliance).
- Develop a federated contribution model where hospitals can submit proprietary models with controlled access, enriching the ecosystem while respecting patient data constraints.
- Leverage graph‑based knowledge representations to link models with related publications, datasets, and clinical guidelines, enabling richer semantic search.
Potential Applications
OpenRad can become a backbone for several emerging use cases:
- Radiology AI orchestration platforms that dynamically compose pipelines from multiple OpenRad models.
- Clinical decision‑support systems that query the repository to suggest the most up‑to‑date algorithm for a given imaging protocol.
- Educational tools for medical students and residents, offering hands‑on interaction with state‑of‑the‑art models via the built‑in demos.
- Regulatory sandbox environments where compliance officers can audit model provenance and licensing in a single view.
Conclusion
OpenRad addresses a critical bottleneck in radiology AI by turning a chaotic landscape of scattered models into a searchable, curated, and reproducible knowledge base. Its hybrid LLM‑plus‑expert pipeline demonstrates that large‑scale automation can coexist with domain expertise to produce high‑quality metadata at scale. For AI practitioners, system architects, and healthcare innovators, the repository offers a reliable foundation for building, evaluating, and deploying imaging models, ultimately accelerating the translation of research breakthroughs into bedside tools.
Call to Action
Explore the full catalog, contribute your own models, and join the community effort to make radiology AI more accessible at ubos.tech.