
From Public Validation to Pipeline Implementation
All of our efforts this week were geared toward one major goal: validating our new foundation-model strategy with the wider UF community and beginning the core implementation of our self-supervised pipeline. After a successful presentation at the Prototype Inspection Day (PID), Team BeatNet is now fully focused on building the components necessary to pre-train our model on large-scale datasets.
This week was about turning plans into concrete action—coding the contrastive learning framework, benchmarking encoder backbones, and preparing scripts for downstream fine-tuning.
Key Accomplishments This Week
Foundation-Model Strategy Defined: Successful Prototype Inspection Day (PID)
We presented our prototype progress and foundation-model strategy to UF faculty and alumni. Judges praised the team’s clear communication and cohesive technical direction. Dr. Chenhao Wang highlighted our clarity, Dr. Tingsao Xiao called our foundation-model approach “intuitive and reasonable,” and Dr. Catia Silva encouraged us to begin embedding implementation as soon as possible.
Contrastive Learning Pipeline Initiated
We began implementing the SimCLR-style contrastive learning pipeline, developing an augmentation module that generates positive pairs through noise, jitter, and scaling—an essential step toward robust patient-invariant embeddings.
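In simplified form, the augmentation module works roughly like the sketch below, with ECG segments as NumPy arrays of shape (channels, samples); the function names and parameter values here are illustrative, not our tuned settings:

```python
import numpy as np

def add_noise(x: np.ndarray, sigma: float = 0.05) -> np.ndarray:
    """Gaussian noise scaled to the segment's own standard deviation."""
    return x + np.random.normal(0.0, sigma * x.std(), size=x.shape)

def jitter(x: np.ndarray, max_shift: int = 20) -> np.ndarray:
    """Random circular shift along the time axis."""
    return np.roll(x, np.random.randint(-max_shift, max_shift + 1), axis=-1)

def scale(x: np.ndarray, low: float = 0.8, high: float = 1.2) -> np.ndarray:
    """Random amplitude scaling."""
    return x * np.random.uniform(low, high)

def positive_pair(x: np.ndarray):
    """Two independent augmentations of the same segment form a positive pair."""
    def view(s):
        for transform in (add_noise, jitter, scale):
            s = transform(s)
        return s
    return view(x), view(x)

# Example: a 12-lead, 10-second segment sampled at 100 Hz
v1, v2 = positive_pair(np.random.randn(12, 1000))
```

In the SimCLR setup, these two views of the same segment are pulled together in embedding space while views of different segments are pushed apart, which is what encourages patient-invariant representations.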
Downstream Pipeline Development
In parallel, we started building the downstream fine-tuning workflow to adapt pretrained embeddings for arrhythmia classification and later for fiducial landmark regression.
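Conceptually, the fine-tuning wrapper looks something like this sketch, assuming a PyTorch encoder that maps a segment to a fixed-length embedding; the class name, embedding dimension, and class count are placeholders:

```python
import torch
import torch.nn as nn

class FineTuneClassifier(nn.Module):
    """Pretrained encoder plus a linear head for arrhythmia classification."""
    def __init__(self, encoder: nn.Module, embed_dim: int, n_classes: int,
                 freeze_encoder: bool = True):
        super().__init__()
        self.encoder = encoder
        if freeze_encoder:  # linear-probe mode: only the head is trained
            for p in self.encoder.parameters():
                p.requires_grad = False
        self.head = nn.Linear(embed_dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(x))

# Illustrative usage (dimensions are placeholders):
# model = FineTuneClassifier(pretrained_encoder, embed_dim=128, n_classes=5)
```

Freezing the encoder gives us a cheap linear probe to sanity-check embedding quality; unfreezing it later enables full fine-tuning, and the same backbone can be reused with a regression head for the fiducial landmark task.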
Encoder Benchmarking
We benchmarked multiple 1D encoder backbones, including ResNet1D and EfficientNet1D, to identify the architecture with the best trade-off between feature-extraction efficiency and downstream transferability.
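A benchmark of this kind starts from two cheap proxies, parameter count and forward-pass latency; the sketch below is simplified, and the backbone constructors in the usage comment are placeholders:

```python
import time
import torch

@torch.no_grad()
def benchmark(model: torch.nn.Module, batch=(32, 12, 1000), n_iters=50):
    """Report parameter count and mean forward-pass time in milliseconds."""
    model.eval()
    x = torch.randn(*batch)
    n_params = sum(p.numel() for p in model.parameters())
    model(x)  # warm-up pass before timing
    start = time.perf_counter()
    for _ in range(n_iters):
        model(x)
    ms = (time.perf_counter() - start) / n_iters * 1e3
    return n_params, ms

# Illustrative usage (backbone objects are placeholders):
# for name, m in {"ResNet1D": resnet1d, "EfficientNet1D": effnet1d}.items():
#     print(name, benchmark(m))
```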
Improved Communication Flow
Following the judges' feedback, the team is also refining its presentation visuals, balancing speaking roles more evenly, and illustrating more clearly how preprocessing, embedding, and disease classification connect.
Next Steps: Pre-training and Visualization
Next week, we will finalize and debug the contrastive learning script and execute the first pre-training runs on the PTB-XL dataset. We will also develop a t-SNE visualization notebook to evaluate whether the embeddings cluster normal and arrhythmic segments, and improve the visualization of time-positional encoding within our Transformer prototype.
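A minimal version of that notebook might look like the following sketch, assuming embeddings already extracted into an (N, D) NumPy array with 0/1 labels for normal versus arrhythmic segments; scikit-learn and matplotlib are our assumed tooling:

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.manifold import TSNE

def plot_embeddings(embeddings: np.ndarray, labels: np.ndarray):
    """Project embeddings to 2D and color by class to inspect clustering."""
    coords = TSNE(n_components=2, perplexity=30, init="pca",
                  random_state=0).fit_transform(embeddings)
    for cls, name in [(0, "normal"), (1, "arrhythmic")]:
        pts = coords[labels == cls]
        plt.scatter(pts[:, 0], pts[:, 1], s=5, label=name)
    plt.legend()
    plt.title("t-SNE of pretrained ECG embeddings")
    plt.show()
```

If the two classes separate into distinct clusters before any fine-tuning, that is early evidence the contrastive pre-training is learning clinically meaningful structure.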
We will also begin preparing documentation for the upcoming System Level Design Review (SLDR) and conduct internal comparisons between baseline and foundation-model performance.
See you next week!