How Michigan’s AI Model Spots Hidden Heart Disease From a 10-Second EKG
A 10-second electrocardiogram strip costs a few dollars, takes moments to record, and is available in virtually every clinic on the planet. A PET myocardial perfusion scan – the gold-standard test for diagnosing coronary microvascular dysfunction – costs thousands, requires specialized equipment, and is simply unavailable at many hospitals. Researchers at the University of Michigan have now built an AI model that bridges that gap, predicting the expensive scan’s key measurement from the cheap, ubiquitous EKG.
The implications extend well beyond a single condition. Approximately 14 million people visit emergency rooms or clinics with chest pain every year. Many of them have coronary microvascular dysfunction (CMVD) – a condition in which the heart’s smallest blood vessels fail to deliver adequate blood flow – yet their angiograms come back looking perfectly normal. They leave without a diagnosis, without treatment, and at continued risk of a heart attack. The Michigan model offers a realistic path toward catching those patients before they fall through the cracks.
This article breaks down how the model works, what makes its training approach unusual, how it compares to other AI-driven cardiac diagnostics, and what it means for the future of cardiovascular care.
Why Coronary Microvascular Dysfunction Is So Easy to Miss
CMVD is sometimes called the invisible heart disease. Patients present with classic cardiac symptoms – chest pain, shortness of breath, fatigue – but the large coronary arteries that doctors check during angiography appear completely clear. Because the dysfunction exists at the microvascular level, standard imaging misses it entirely.
This diagnostic blind spot has real consequences. Patients may be told nothing is wrong, discharged from the emergency department, and left to manage worsening symptoms without understanding their cause. The condition can lead to heart attacks, heart failure, and death – outcomes that earlier detection could help prevent.
Until now, confirming CMVD required measuring myocardial flow reserve through PET imaging, a resource-intensive process that most community hospitals cannot offer. The University of Michigan model changes the equation by predicting that same flow reserve measurement from a standard EKG.
How the Michigan AI Model Was Built
The research team, led by Dr. Venkatesh L. Murthy at the U-M Health Frankel Cardiovascular Center and U-M Medical School, took a two-stage training approach that sets this model apart from many predecessors.
Stage One: Self-Supervised Learning on 800,000+ EKGs
Rather than starting with expensive, manually labeled data, the team used self-supervised learning (SSL) to pre-train a deep learning vision transformer on more than 800,000 unlabeled EKG waveforms. During this phase, the model taught itself the “electrical language of the heart” – learning to recognize patterns, rhythms, and subtle signal variations without any human telling it what each waveform meant. This massive unsupervised exposure gave the model a rich foundational understanding of cardiac electrical activity.
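The article does not specify which self-supervised objective the team used, but masked reconstruction – hiding random stretches of a waveform and training the model to predict them – is one common SSL pretext task for signals. The sketch below illustrates that idea on a synthetic trace, using simple linear interpolation as a stand-in for the transformer's prediction head; every name and value here is illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a 10-second, 500 Hz single-lead EKG trace.
fs, seconds = 500, 10
t = np.arange(fs * seconds) / fs
signal = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)

def mask_segments(x, n_segments=20, seg_len=50):
    """Hide random windows of the trace; the pretext task is to predict them."""
    masked = x.copy()
    mask = np.zeros(x.size, dtype=bool)
    for start in rng.integers(0, x.size - seg_len, size=n_segments):
        mask[start:start + seg_len] = True
    masked[mask] = 0.0
    return masked, mask

masked, mask = mask_segments(signal)

# Stand-in "prediction head": linear interpolation from the visible samples.
# A real SSL setup would train the transformer to minimize this same loss
# across hundreds of thousands of traces, with no human labels involved.
idx = np.arange(signal.size)
recon = np.interp(idx, idx[~mask], signal[~mask])
pretext_loss = float(np.mean((recon[mask] - signal[mask]) ** 2))
print(f"masked-reconstruction MSE: {pretext_loss:.4f}")
```

The key property this toy captures is that the supervision signal comes from the waveform itself – which is what lets the pre-training stage scale to 800,000+ unlabeled EKGs.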
Stage Two: Fine-Tuning With PET Scan Labels
Once the model had internalized the broad landscape of EKG signals, researchers refined it using a smaller labeled dataset derived from PET myocardial perfusion imaging scans. This calibration step taught the model to connect specific EKG patterns with the myocardial flow reserve values that define CMVD. The result is a system that extends the predictive power of PET imaging to the far simpler and more accessible EKG.
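Conceptually, this stage freezes (or lightly updates) the pretrained encoder and fits a small regression head that maps its embeddings to PET-measured myocardial flow reserve (MFR). The paper's architecture details are not published in this article, so the sketch below is a minimal illustration: a random projection stands in for the frozen encoder, the MFR labels are synthetic, and the head is plain ridge regression.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins: 'frozen_encoder' plays the pretrained transformer, and the
# labels mimic a small PET-measured myocardial flow reserve (MFR) cohort.
# All shapes, values, and names here are illustrative, not from the paper.
n_labeled, trace_len, n_features = 200, 5000, 64
projection = rng.standard_normal((trace_len, n_features)) / np.sqrt(trace_len)

def frozen_encoder(ekg_batch):
    """Map raw traces to the embedding the SSL stage is assumed to provide."""
    return ekg_batch @ projection

ekgs = rng.standard_normal((n_labeled, trace_len))
X = frozen_encoder(ekgs)
true_w = rng.standard_normal(n_features)
mfr = 2.0 + X @ true_w * 0.1 + 0.05 * rng.standard_normal(n_labeled)

# Fine-tuning step: fit a small ridge-regression head on the PET labels.
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ (mfr - mfr.mean()))
pred = X @ w + mfr.mean()
rmse = float(np.sqrt(np.mean((pred - mfr) ** 2)))
print(f"training RMSE on predicted flow reserve: {rmse:.3f}")
```

The division of labor is the point: the expensive PET labels only have to teach the final mapping, because the SSL stage already taught the encoder what EKGs look like.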
As Dr. Murthy put it: “Our model creates a way for clinicians to accurately identify a condition that is notoriously hard to diagnose – and often missed in emergency department visits – using a 10-second EKG strip.”
A Surprising Finding About Stress Tests
One of the study’s most practically significant results challenges a longstanding clinical assumption. The researchers found that performance gains were minimal when comparing EKGs taken during exercise stress tests versus simple resting EKGs.
Performance Across Multiple Cardiac Tasks
The Michigan model was not built as a single-purpose detector. It demonstrated improved diagnostic accuracy across 12 different demographic and clinical prediction tasks – including tasks that previous EKG-AI models could not perform at all. This breadth suggests the self-supervised learning foundation captures generalizable cardiac pathology patterns, making the model a versatile diagnostic platform rather than a narrow tool.
While the original research does not publish granular accuracy percentages for each of the 12 tasks, the team reports that the model significantly outperformed earlier AI systems across nearly every benchmark tested.
How AI-Enhanced EKG Analysis Compares to Conventional Methods
To appreciate the Michigan model’s potential, it helps to understand just how limited conventional EKG interpretation can be – and how dramatically AI improves upon it. A separate prospective study of 1,490 patients with suspected acute coronary syndrome illustrates the gap clearly.
| Metric | AI-Based ECG Interpretation | Conventional ECG Interpretation |
|---|---|---|
| Overall Accuracy (Occlusive MI) | 84% | 42% |
| Sensitivity | 77% | Not specified separately |
| Specificity | 99% | Not specified separately |
| Negative Predictive Value | 98% | Not specified separately |
In that study, the AI system – a CE-certified smartphone-based algorithm – identified occlusive myocardial infarction in 108 of 1,490 patients (7%) and correctly ruled it out in 1,382 cases (93%). Conventional interpretation caught only 42% of occlusive MI cases, meaning more than half went undetected by human readers alone.
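The study does not publish its full confusion matrix, but the reported figures pin it down almost exactly. The counts below are a hypothetical reconstruction chosen to be consistent with the numbers above (108 positive calls, 1,382 rule-outs, N = 1,490) – useful for seeing how sensitivity, specificity, and negative predictive value are actually computed.

```python
def sensitivity(tp, fn):
    """True positive rate: share of actual occlusive MIs the test catches."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: share of non-MI patients correctly cleared."""
    return tn / (tn + fp)

def npv(tn, fn):
    """Negative predictive value: share of negative calls that are correct."""
    return tn / (tn + fn)

# Hypothetical confusion matrix, reverse-engineered to match the reported
# figures; the study itself does not publish these individual counts.
tp, fp, fn, tn = 94, 14, 28, 1354  # 94 + 14 = 108 flagged, 28 + 1354 = 1382 cleared

print(f"sensitivity: {sensitivity(tp, fn):.0%}")  # -> 77%
print(f"specificity: {specificity(tn, fp):.0%}")  # -> 99%
print(f"NPV:         {npv(tn, fn):.0%}")          # -> 98%
```

Note the asymmetry these numbers encode: the algorithm misses roughly a quarter of true occlusions, but when it says a patient is clear, it is almost always right – exactly the profile you want for a rule-out tool.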
Broader scoping review data reinforces these findings. Across multiple studies, AI models for ECG-based MI detection have achieved overall accuracy ranging from 70% to 100%, with select architectures like EfficientNetV2B2 and CNN-multi-VGG reaching perfect scores on specific datasets. A deep neural network trained on more than 900,000 12-lead ECG recordings achieved F1 scores of 0.957 for rhythm disorders, 0.925 for acute coronary syndromes, and 0.893 for conduction abnormalities.
Perhaps most striking: AI successfully flagged MI in 75% of ECGs that human readers had initially interpreted as completely normal. When broadened to include ECGs flagged as abnormal, that figure rose to 86%.
What Experts Recommend for Clinical Deployment
Despite impressive performance numbers, researchers and clinicians emphasize that AI should function as a decision-support tool – not a replacement for clinical judgment. Federico Nani, lead researcher on the ESC Acute CardioVascular Care 2026 study, described the technology as a complement to existing diagnostic pathways that include troponin testing and coronary angiography.
Several best-practice benchmarks are emerging for AI-ECG deployment in clinical settings:
- Negative predictive value above 98% – to safely rule out MI and expedite low-risk patient discharges
- Specificity of 99% or higher – to minimize false positives in high-stakes emergency settings where unnecessary catheterization lab activations carry real costs and risks
- F1 scores of at least 0.893 across major diagnostic categories, with acute coronary syndrome targets of 0.925 or above
- Local validation before deployment – because AI performance varies from hospital to hospital based on patient populations, equipment, and data characteristics
- Integration with biomarkers – combining AI-ECG analysis with troponin levels and other labs to minimize door-to-balloon time for patients who need percutaneous coronary intervention
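In practice, the local-validation step often reduces to checking a site's measured metrics against exactly these bars before an AI-ECG system goes live. A minimal sketch of such a pre-deployment gate, using the benchmark values from the list above (the field names and site metrics are illustrative, not from any published protocol):

```python
# Thresholds taken from the emerging benchmarks listed above;
# the dictionary keys and the site metrics below are hypothetical.
THRESHOLDS = {"npv": 0.98, "specificity": 0.99, "f1_overall": 0.893, "f1_acs": 0.925}

def deployment_gate(local_metrics: dict) -> list:
    """Return the benchmarks a locally validated model fails, if any."""
    return [name for name, bar in THRESHOLDS.items()
            if local_metrics.get(name, 0.0) < bar]

# Two hypothetical hospital validation results:
site_a = {"npv": 0.985, "specificity": 0.992, "f1_overall": 0.91, "f1_acs": 0.93}
site_b = {"npv": 0.970, "specificity": 0.992, "f1_overall": 0.91, "f1_acs": 0.93}

print(deployment_gate(site_a))  # -> [] (all benchmarks met)
print(deployment_gate(site_b))  # -> ['npv'] (below the 98% rule-out bar)
```

The point of encoding the gate explicitly is the last bullet above: the same model can clear the bar at one hospital and fail it at another, so the check has to run on local data, not the vendor's published numbers.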
The scoping review literature also flags important caveats. Many published models report headline accuracy figures that come from relatively small, single-source datasets using validation strategies prone to optimism. Cross-validation was reported in only 57% of studies reviewed, while 36% did not specify their data-splitting method at all. Accuracy often declined under external validation, indicating that generalizability remains a work in progress.
Health Equity and Accessibility
The Michigan model’s reliance on standard EKGs rather than PET imaging creates profound implications for healthcare equity. PET scanners are concentrated in large academic medical centers and wealthy urban areas. Rural hospitals, community clinics, and healthcare systems in developing nations rarely have access. A diagnostic tool that extracts PET-level insights from a device available in virtually every medical facility on earth could democratize detection of a condition that disproportionately goes undiagnosed in underserved populations.
This is not a theoretical benefit. The model was specifically designed to identify patients who would benefit from advanced testing – serving as a triage layer that directs scarce imaging resources toward the patients who need them most, rather than requiring every chest-pain patient to undergo expensive scans.
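The triage logic itself is simple to picture: only patients whose EKG-predicted flow reserve falls below a screening cutoff get routed to confirmatory PET. The cutoff, cohort values, and output format below are all hypothetical – the article does not publish the model's operating threshold.

```python
# Illustrative triage layer; the cutoff and cohort values are hypothetical.
CUTOFF = 2.0  # a low predicted myocardial flow reserve triggers referral

def triage(predicted_mfr: float) -> str:
    return "refer for PET" if predicted_mfr < CUTOFF else "routine follow-up"

cohort = [1.4, 2.6, 1.9, 3.1, 2.2]  # EKG-predicted MFR for five patients
referrals = [mfr for mfr in cohort if triage(mfr) == "refer for PET"]
print(f"{len(referrals)} of {len(cohort)} patients referred for PET")
# -> 2 of 5 patients referred for PET
```

Instead of every chest-pain patient competing for scanner time, only the flagged fraction does – which is what makes the "triage layer" framing more than a metaphor.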
What Comes Next
The Michigan research received funding from multiple National Institutes of Health grants and the Department of Veterans Affairs, placing it within the established U.S. clinical research infrastructure. The broader field is gaining institutional momentum as well: the European Society of Cardiology featured AI-enhanced ECG interpretation as a key theme at ESC Acute CardioVascular Care 2026 and plans to expand the discussion at ESC Congress 2026 in Munich.
Related work at the Mount Sinai Kravis Children's Heart Center has produced an AI model that uses standard ECGs to assess which patients who have undergone surgery for tetralogy of Fallot need additional MRI monitoring – another example of AI extending the diagnostic reach of inexpensive, widely available tests.
Challenges remain. Dataset biases must be addressed. Explainability – the ability for clinicians to understand why an AI flagged a particular EKG – needs improvement. Algorithmic fairness across different demographic groups requires rigorous testing. And large-scale prospective trials are still needed before these tools can be fully endorsed for routine clinical use.
But the trajectory is clear. AI is not replacing cardiologists. It is giving them a faster, sharper lens – one that can see what the human eye has been missing in the squiggly lines of a 10-second EKG strip. For the millions of patients whose heart disease has gone undetected because the right test was too expensive or too far away, that sharper lens could be the difference between a missed diagnosis and a saved life.