Week 10

External Engagement – UF AI Days 2025:
We presented our poster “BeatNet ECG AI: Foundation Model for Cardiac Signal Understanding.” During the event, we met Dr. David Winchester, a UF Health cardiologist, whose insights on ECG morphology and diagnostic workflows reinforced the importance of interpretability and clinical trust in AI-driven cardiology.

Advancing the Foundation Model

This week was a significant milestone for us as we moved from architectural design to actively prototyping our ECG foundation model. Following last week’s design discussions, we concentrated on developing and testing the initial version of our self-supervised pretraining pipeline, turning the idea of contrastive learning from theory into practice.

Our collective goal was to transform unlabeled ECG data into structured representations that are stable across windows from the same patient, so they can serve as the backbone for future landmark detection and disease classification models. The week was defined by rigorous experimentation, cross-validation, and early visualization of emerging cardiac signal embeddings.

Key Accomplishments This Week

Foundation-Model Strategy Defined

We conducted a detailed technical meeting with Dr. Kejun Huang and Dr. Keider Hoyos and finalized the move toward a foundation model built on large-scale ECG corpora such as PTB-XL and MIMIC-III/IV. The model will leverage contrastive pre-training, treating 5-second and 10-second ECG windows from the same patient as positive pairs, while windows from different patients act as negative pairs. This setup encourages embeddings that are stable across windows from the same patient yet remain discriminative between patients.
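To make the pairing scheme concrete, here is a minimal NumPy sketch (the function names `sample_pairs` and `info_nce` and all parameter values are our own illustrative choices, not part of any finalized codebase): two random windows from the same patient form a positive pair, and a batch of embeddings is scored with an InfoNCE-style loss in which the diagonal of the similarity matrix holds the positive pairs.

```python
import numpy as np

def sample_pairs(ecg_by_patient, win_len, rng):
    """Draw two random windows from each patient's recording (a positive pair)."""
    anchors, positives = [], []
    for sig in ecg_by_patient:
        i, j = rng.integers(0, len(sig) - win_len, size=2)
        anchors.append(sig[i:i + win_len])
        positives.append(sig[j:j + win_len])
    return np.stack(anchors), np.stack(positives)

def info_nce(z_a, z_p, temperature=0.1):
    """InfoNCE loss: same-patient embeddings are positives; within the batch,
    every other patient's embedding acts as a negative."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_p = z_p / np.linalg.norm(z_p, axis=1, keepdims=True)
    logits = z_a @ z_p.T / temperature            # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(np.diag(probs)).mean()         # diagonal = positive pairs
```

In a real pipeline the embeddings `z_a`, `z_p` would come from the encoder backbone, and the loss would be backpropagated with an autodiff framework; this standalone version only illustrates the objective.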

Explored Alternate Self-Supervised Approaches

We evaluated potential pre-training routes including masked autoencoder (MAE) learning and distance-map regression, which predicts landmark-wise distance functions rather than explicit waveform peaks.
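A small sketch of the distance-map idea, under the assumption of per-sample landmark annotations (`distance_map` and `recover_landmarks` are hypothetical helpers, not code from our pipeline): instead of regressing sparse peak positions directly, the target at each sample is the distance to the nearest annotated landmark, and landmarks are recovered as the minima of the predicted map.

```python
import numpy as np

def distance_map(sig_len, landmark_idx, fs=500.0):
    """Per-sample distance (in seconds) to the nearest annotated landmark.
    The network regresses this smooth map rather than explicit peak positions."""
    t = np.arange(sig_len)
    landmarks = np.asarray(landmark_idx)
    d = np.abs(t[:, None] - landmarks[None, :]).min(axis=1)
    return d / fs

def recover_landmarks(dmap):
    """Landmarks correspond to the zeros (local minima) of the distance map."""
    return np.where(dmap == 0)[0]
```

The appeal of this formulation is that the target is dense and smooth, which tends to give a better-conditioned regression problem than predicting a handful of isolated time-stamps.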

Semester 2 Modeling Pillars Finalized

Our Semester 2 (Spring 2026) deliverables are now centered around three primary components:

Landmark Detection Model: multi-output regression to identify P/Q/R/S/T time-stamps.

Arrhythmia & Disease Classification: fine-tuning foundation embeddings for AFib, PVC, LBBB/RBBB, and conduction block detection.

Model Distillation for Mobile Deployment: quantization and pruning to adapt the foundation model for Aventusoft’s single-lead (500 Hz) device.
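As a rough illustration of the quantization step in that third pillar (a real deployment would use a framework's quantization toolchain; this standalone NumPy version only shows the underlying arithmetic, and the function names are our own): symmetric int8 post-training quantization maps each weight tensor to integers with a single scale factor.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric int8 post-training quantization of a weight tensor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from the int8 tensor."""
    return q.astype(np.float32) * scale
```

The round-trip error per weight is bounded by half the scale factor, which is why quantization typically costs little accuracy while shrinking the model roughly 4x relative to float32.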

Single-Lead Adaptation Framework

Discussions also clarified strategies for retraining 12-lead models to function effectively with single-lead input, aligning with Aventusoft’s mobile ECG system requirements.

Next Steps: Prototype Implementation

In the coming week, our focus will shift toward implementing the prototype foundation-model pipeline. We will begin with contrastive pre-training using PTB-XL segments to establish the base embedding space and develop an augmentation module capable of simulating inter-patient variability through signal inversion, noise injection, and time-warping. Next, we will visualize these learned embeddings using t-SNE to validate whether the model can effectively cluster normal and arrhythmic patterns. Simultaneously, we plan to benchmark different encoder backbones, such as ResNet and EfficientNet-1D, to identify the most efficient architecture for downstream fine-tuning. Finally, our team will document the complete workflow from pre-training to distillation for inclusion in the upcoming System Level Design Review (SLDR) and coordinate with Aventusoft regarding access to internal anonymized ECG data and available GPU resources.
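The augmentation module described above could start from a sketch like this (the function name and all parameter values are illustrative assumptions, not settled design choices): random signal inversion, Gaussian noise injection, and time-warping by resampling, with a final crop/pad so batch shapes stay fixed.

```python
import numpy as np

def augment(sig, rng, noise_std=0.05, warp=0.1):
    """Randomly invert, add noise to, and time-warp a 1-D ECG window."""
    out = sig.copy()
    if rng.random() < 0.5:                                 # signal inversion
        out = -out
    out = out + rng.normal(0.0, noise_std, out.shape)      # noise injection
    factor = 1.0 + rng.uniform(-warp, warp)                # time-warping
    src = np.linspace(0, len(out) - 1, int(round(len(out) * factor)))
    out = np.interp(src, np.arange(len(out)), out)         # resample
    # crop or zero-pad back to the original length for fixed batch shapes
    if len(out) >= len(sig):
        out = out[:len(sig)]
    else:
        out = np.pad(out, (0, len(sig) - len(out)))
    return out
```

Applying `augment` twice to the same window would yield an additional source of positive pairs for the contrastive objective, complementing the same-patient pairing strategy.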

This week was about transforming technical insight into implementation readiness—laying the foundation for self-supervised ECG intelligence that bridges research innovation with real-world device deployment.

See you next week!
