Team Noesys has successfully completed our Final Design Review presentation this week! We presented our emotional analysis system for AGIS AI at the Reitz Union, showcasing our web application’s real-time detection of emotions across audio, video, and text modalities.
Week 15 – Poster and Video Demo
To accompany our FDR presentation, the team developed a poster explaining our project’s background, describing how the emotional analysis system works behind the scenes, and highlighting important system features. We also recorded a video that illustrates our product’s use case…
Week 14 – FDR Preparation and Final System Refinements
This week, we enhanced our web app with action unit detection and recording capabilities while updating it to incorporate our latest audio model. We also introduced speech semantic analysis using 24 nuanced emotional categories, providing more detailed insights. Our team completed…
Week 13 – Enhanced UI and Expanded Emotional Analysis
Our team improved our web application and emotional analysis capabilities this week. We enhanced the web app by adding a dedicated audio window, substantially improving transcript accuracy. We also introduced key moments detection and an emotional state timeline, providing users…
Week 12 – Prototype Inspection Day
Our team presented our prototype at Prototype Inspection Day and received valuable reviewer feedback. We substantially improved our web application interface, adding data exporting capabilities and an LLM-based emotion summary feature. Our audio team achieved an 80% macro-F1 score by…
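For context on the macro-F1 number: macro-F1 is the unweighted mean of the per-class F1 scores, so every emotion class counts equally no matter how often it appears in the test set. A minimal sketch with made-up labels (not our evaluation code), assuming scikit-learn is available:

```python
from sklearn.metrics import f1_score

# Toy ground-truth and predicted emotion labels, purely for illustration.
y_true = ["anger", "fear", "neutral", "neutral", "happiness", "fear"]
y_pred = ["anger", "neutral", "neutral", "neutral", "happiness", "fear"]

# average="macro" computes F1 per class, then takes the unweighted mean,
# so rare emotions are not drowned out by frequent ones.
print(f1_score(y_true, y_pred, average="macro"))
```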
Week 11 – Prototype Refinement and Dataset Expansion
Our team has been busy this week preparing for our Project Implementation Design (PID) presentation. The text modality team created a comprehensive CSV with over 1,000 sentences labeled across seven emotion classes, expanding our training and testing capabilities. We fine-tuned…
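As a rough illustration of what a labeled-sentence file of this kind can look like (the column names, example sentences, and emotion labels below are placeholders, not our actual schema), assuming pandas:

```python
import pandas as pd

# Hypothetical layout for a labeled-sentence CSV with one emotion per row.
df = pd.DataFrame(
    {
        "sentence": [
            "I can't believe we finally won!",
            "Please just leave me alone.",
        ],
        "emotion": ["happiness", "anger"],
    }
)
df.to_csv("emotion_sentences.csv", index=False)

# Reload and check the class balance before splitting into train/test sets.
loaded = pd.read_csv("emotion_sentences.csv")
print(loaded["emotion"].value_counts())
```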
Week 9 – Dataset Creation and Model Improvements
This week, we started creating our emotion-labeled sentence dataset with over 500 entries categorized into our seven emotion classes, which will be used to generate our custom testing dataset. We also developed our demonstration web app, which now implements live emotional…
Week 8 – Late Fusion Demo
This week, our team built our first real-time demo incorporating all modalities with late fusion. As the user speaks, the demo captures visual, audio, and textual information and integrates the predictions from each modality’s model to determine the likelihood…
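For readers curious how late fusion combines the three streams: a minimal sketch (not our actual demo code) that takes a weighted average of the per-class probability vectors from the video, audio, and text models. The emotion label set and weights below are placeholders.

```python
import numpy as np

# Hypothetical emotion label set; placeholder for the actual seven classes.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def late_fusion(video_probs, audio_probs, text_probs, weights=(1.0, 1.0, 1.0)):
    """Weighted average of per-modality class probabilities for one utterance."""
    stacked = np.stack([video_probs, audio_probs, text_probs])
    w = np.asarray(weights, dtype=float)[:, None]
    fused = (stacked * w).sum(axis=0) / w.sum()
    return dict(zip(EMOTIONS, fused))

# Dummy per-modality predictions standing in for the three models' outputs.
rng = np.random.default_rng(0)
video_p, audio_p, text_p = (rng.dirichlet(np.ones(len(EMOTIONS))) for _ in range(3))
fused = late_fusion(video_p, audio_p, text_p)
print(max(fused, key=fused.get))
```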
Week 7 – QRB2
This week, our team presented our QRB2 update to the review committee. During the review, we received valuable feedback that will guide our ongoing development efforts. The committee identified several key areas for improvement in our emotional analysis system. Based…
Week 6 – Late Fusion Testing
This week, our team integrated text and bounding box components into our late fusion model. We verified that our fusion accuracy exceeds that of each individual modality, validating our multi-modal approach. Our audio team completed 1D CNN model evaluations and…
Week 5 – Cross-Dataset Testing
Our audio team successfully migrated the model from TensorFlow to PyTorch and completed grid search optimization for the 1D CNN hyperparameters. The video team focused on CLIP development, working with the text encoder and improving zero-shot accuracy while also running…
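As a rough sketch of hyperparameter grid search of the kind described above (the search space, training routine, and scores below are placeholders, not our actual setup):

```python
from itertools import product
import random

# Hypothetical search space for a 1D CNN; the real ranges differ.
search_space = {
    "kernel_size": [3, 5, 7],
    "num_filters": [32, 64, 128],
    "learning_rate": [1e-3, 1e-4],
}

def evaluate(config):
    """Placeholder: the real routine would train the PyTorch 1D CNN with
    this config and return validation accuracy; a random score stands in."""
    return random.random()

# Try every combination and keep the best-scoring configuration.
best_config, best_score = None, float("-inf")
for values in product(*search_space.values()):
    config = dict(zip(search_space, values))
    score = evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print(best_config, best_score)
```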
Week 4 – Preparing for Fusion
This week, we successfully fine-tuned our CLIP model with an 80-20 train-test split, achieving 84% accuracy. Our audio team made progress by implementing grid search capabilities for the audio modality. We delivered our DFX presentation in class and expanded our…
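For reference, an 80-20 train-test split can be produced along these lines (a generic sketch with placeholder data, not our CLIP fine-tuning pipeline), assuming scikit-learn:

```python
from sklearn.model_selection import train_test_split

# Placeholder samples and labels standing in for the image/label pairs
# used for fine-tuning and held-out evaluation.
samples = list(range(100))
labels = [i % 7 for i in range(100)]  # seven emotion classes

# 80% of the data for fine-tuning, 20% held out for the accuracy measurement;
# stratify keeps the class proportions similar in both splits.
train_x, test_x, train_y, test_y = train_test_split(
    samples, labels, test_size=0.2, stratify=labels, random_state=42
)
print(len(train_x), len(test_x))
```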
Week 3 – QRB1
This week, we presented our progress at QRB1 and received valuable feedback from various coaches. We shared detailed model performance metrics with our liaison, and we’ve achieved our target accuracy goals for audio, transcript, and action unit recognition as specified…
Week 2 – Model Testing and Integration
Our team made significant progress in model development this week. We tested the accuracies of Recurrent Neural Networks and Support Vector Classification for the audio modality and set up EmotionCLIP for accuracy evaluation. We initiated the FG-Net system for Facial…
Week 1 – Spring Semester Kickoff
Our team hit the ground running in our first week of the spring semester. We established new meeting times to accommodate everyone’s schedules and developed our critical path for the semester ahead. After creating our January work breakdown schedule, we…