Week 12

From Pretraining to Explainability: A New Focus for BeatNet

This was a landmark week for Team BeatNet! All of our efforts this week were geared toward one major goal: validating our new foundation model. After spending the last few weeks building our core pipeline, we successfully completed our first full pretraining run. We presented these exciting initial results, which already show the model learning the deep structure of ECGs, to our sponsors, Dr. Kejun Huang and Dr. Keider Hoyos.

This successful review meeting not only validated our technical approach but also gave us a critical new focus for the next sprint: clinical interpretability.

Key Accomplishments This Week

First Pretraining Round Complete

We successfully completed the first round of pretraining using our contrastive learning approach. By teaching the model to recognize ECG segments from the same patient as “positive pairs,” we force it to learn the fundamental, patient-invariant features of a heartbeat. Early results are promising, showing “emerging latent structure” in the embeddings, which validates our overall model setup.
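
To make the idea concrete, here is a minimal NumPy sketch of an InfoNCE-style contrastive loss where segments sharing a patient ID are treated as positive pairs. The function name, temperature, and shapes are illustrative assumptions, not our actual training code:

```python
import numpy as np

def info_nce_loss(embeddings, patient_ids, temperature=0.1):
    """InfoNCE-style contrastive loss: segments from the same patient
    are positives, everything else in the batch is a negative."""
    # L2-normalize so dot products become cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature               # pairwise similarities
    np.fill_diagonal(sim, -np.inf)              # a segment is not its own pair
    # row-wise log-softmax over all other segments in the batch
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    ids = np.asarray(patient_ids)
    pos = (ids[:, None] == ids[None, :]) & ~np.eye(len(ids), dtype=bool)
    # minimize the negative log-probability of the positive pairs
    return -log_prob[pos].mean()
```

Minimizing this loss pulls same-patient embeddings together and pushes different patients apart, which is exactly the patient-invariant structure we want the encoder to learn.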

Strategic Focus on Clinical Interpretability

During our technical review, Dr. Hoyos emphasized that for this tool to be trusted by physicians, it cannot be a “black box.” It is not enough for the AI to be accurate; doctors must understand why it makes a specific diagnosis. Following this guidance, the sponsors approved our plan to integrate explainability modules (such as attention heatmaps) in the next sprint.
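
One simple way such a waveform attention map can be produced is to average a transformer layer's attention weights across heads and upsample them to the ECG's sample resolution. The sketch below assumes a `(heads, tokens)` attention array; the shape and the averaging scheme are illustrative, not our final module:

```python
import numpy as np

def attention_heatmap(attn, signal_len):
    """Upsample per-token attention weights to a per-sample saliency curve
    that a UI can overlay on the raw ECG waveform."""
    w = attn.mean(axis=0)                       # average across attention heads
    w = (w - w.min()) / (np.ptp(w) + 1e-8)      # normalize to [0, 1]
    stride = signal_len // len(w)               # samples covered by each token
    heat = np.repeat(w, stride)                 # expand to sample resolution
    # pad the tail when signal_len is not a multiple of the token count
    return np.pad(heat, (0, signal_len - len(heat)), mode="edge")
```

High values in the returned curve mark the waveform regions the model attended to most, which is the kind of overlay a physician can sanity-check against the actual morphology.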

Full Pipeline Integration and Fine-Tuning Initiated

With the foundation model prototype finalized (combining CNN encoders with a Transformer decoder), we have begun work on the downstream tasks. We have started the first fine-tuning experiments for arrhythmia classification and implemented the pipeline for fiducial landmark (P/Q/R/S/T) regression.
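
As a toy sketch of what these two downstream heads look like on top of the shared encoder: the embedding dimension, class count, and the `LinearHead` helper below are all hypothetical, chosen only to show the shapes involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 128-d encoder embeddings, 5 arrhythmia classes,
# 5 fiducial landmarks (P/Q/R/S/T positions as sample indices).
EMB_DIM, N_CLASSES, N_LANDMARKS = 128, 5, 5

class LinearHead:
    """Minimal linear probe on top of frozen encoder embeddings."""
    def __init__(self, in_dim, out_dim):
        self.W = rng.normal(0.0, 0.01, (in_dim, out_dim))
        self.b = np.zeros(out_dim)
    def __call__(self, z):
        return z @ self.W + self.b

clf_head = LinearHead(EMB_DIM, N_CLASSES)      # arrhythmia logits
reg_head = LinearHead(EMB_DIM, N_LANDMARKS)    # fiducial positions

z = rng.normal(size=(4, EMB_DIM))              # a batch of 4 embeddings
logits, landmarks = clf_head(z), reg_head(z)
```

Both tasks share the same pretrained embeddings; only these small task-specific heads differ, which is what makes the foundation-model approach efficient.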

Deployment & Optimization Research

In parallel, we have been optimizing the model and preparing for deployment. We benchmarked different window lengths (5 s vs. 10 s) to find the sweet spot for context learning. We also continued research on model distillation and lightweight architectures, assessing how we can shrink this model to run efficiently on BeatNet’s embedded system.
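
The window-length trade-off is easy to quantify. This small sketch counts sliding windows per record; the 250 Hz sampling rate and 50% overlap are assumptions for illustration, not our fixed configuration:

```python
def segment_windows(n_samples, fs=250, window_s=10, overlap=0.5):
    """Number of sliding windows a record yields (assumed 250 Hz, 50% overlap)."""
    win = int(window_s * fs)                 # window length in samples
    step = int(win * (1 - overlap))          # hop between window starts
    if n_samples < win:
        return 0                             # record shorter than one window
    return 1 + (n_samples - win) // step
```

Under these assumptions, a one-minute record yields 23 five-second windows but only 11 ten-second windows: the shorter window roughly doubles the number of training examples, at the cost of the longer-range context each example carries.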

Next Steps: Validation, Explainability, and Deployment

With the pretrained model in hand, next week is all about validation and implementation. We will begin cross-validation testing, using the model’s embeddings for both arrhythmia classification and fiducial detection.

A major priority will be implementing the new explainability modules—generating waveform attention maps and fiducial localization overlays. We will also evaluate model compression and quantization strategies for mobile deployment. Finally, all of this will be packaged into our updated slides for the upcoming System-Level Design Review (SLDR), where we will highlight our model’s explainability and a clear path to deployment.
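
To give a flavor of the compression work, here is a minimal sketch of post-training affine 8-bit quantization, one of the strategies we are evaluating. It is framework-agnostic and illustrative only; function names and the uint8 scheme are assumptions, not a committed design:

```python
import numpy as np

def quantize_8bit(w):
    """Affine 8-bit post-training quantization of a weight tensor (sketch)."""
    scale = (w.max() - w.min()) / 255.0           # step size between levels
    zero_point = int(np.round(-w.min() / scale))  # level representing 0.0
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the quantized tensor."""
    return (q.astype(np.float32) - zero_point) * scale
```

Storing weights as uint8 instead of float32 cuts memory by 4x, and the round-trip error stays within a couple of quantization steps, which is the kind of budget an embedded deployment can usually absorb.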

See you next week!
