Blog Posts

From Concept to Completion: The L.A.A.R.K Journey

Final Design Review: Showcasing Our Complete System

This week marked the conclusion of our L.A.A.R.K project as we presented our work during the Final Design Review (FDR). What began as an idea to address fragmented and inefficient facility management systems has now been transformed into a fully integrated, end-to-end platform.

During the presentation, we demonstrated how our system connects the frontend, backend, database, and predictive model into a single unified workflow. The ability to show real-time data flowing through the system — from user input to prediction output — highlighted the completeness and functionality of our solution.

System Demonstration: Turning Data into Insights

A major highlight of our final presentation was the live system demonstration. We showcased:

  • Interactive dashboard with building and fixture-level data
  • Seamless database connectivity using Supabase
  • Integrated machine learning model generating prediction outputs
  • Floorplan interaction with pop-up visualization for fixtures

This demonstrated how our system moves beyond static tracking and enables a more proactive, data-driven approach to lighting asset management.

Final Refinements and Stability

Leading up to the final presentation, our focus was on polishing and stabilizing the system. This included:

  • Debugging integration and UI issues
  • Improving backend response consistency
  • Enhancing frontend clarity for better visualization of results
  • Implementing and refining the floorplan pop-up feature

These final improvements ensured that the system performed smoothly and delivered a clear, reliable user experience during the demonstration.

Reflection: From Idea to Implementation

This project has been a journey from conceptual thinking to full system implementation. One of the biggest learnings was understanding that building individual components is only part of the process — the real challenge lies in integrating them into a cohesive system.

We also learned the importance of balancing technical depth with usability, ensuring that complex model outputs are translated into meaningful insights for end users. Additionally, working with evolving datasets and sponsor feedback helped us develop adaptability and problem-solving skills in a real-world context.

Key Takeaways

This experience reinforced:

  • The importance of full-stack system integration
  • The value of clean, structured data for reliable predictions
  • The need for clear and intuitive user interfaces
  • The impact of strong communication in presenting technical work

Closing Thoughts

With the completion of the Final Design Review, L.A.A.R.K stands as a fully functional prototype that brings together data, analytics, and design into a single intelligent platform.

This project reflects not just the technical work we have accomplished, but also the collaboration, learning, and growth we experienced as a team throughout the semester.


Continuing the Momentum After Prototype Inspection Day


After the progress we made during Prototype Inspection Day, this week felt like a shift toward refining both our system and how we present it. It was less about building new features and more about strengthening the clarity, structure, and overall impact of our work.

Peer Review and Cross-Team Learning

One of the highlights of this week was the peer review for our FDR presentation. Interacting with other teams gave us a fresh perspective on different approaches to similar problems. It was interesting to see how each team structured their ideas and communicated their solutions.

This experience helped us reflect on our own presentation, especially in terms of how clearly we are conveying our problem, solution, and value. It reinforced the idea that strong communication can make a big difference in how a project is perceived.

Feedback from Faculty Coach

We also had a valuable discussion with our faculty coach, which helped us think more critically about our system and documentation. The feedback focused not just on improvements, but on making our work more structured and easier to follow.

This pushed us to refine how we connect different parts of the project, ensuring that the reasoning behind our design choices is clearly explained.

FDR Documentation Refinement

A significant portion of the week was dedicated to working on the FDR documentation. Compared to earlier stages, this felt more like shaping a narrative rather than just compiling technical details.

We focused on clearly linking:

  • the problem we are addressing
  • why it is important
  • how our system provides a solution

The goal was to make the document accessible and easy to understand, even for someone who is not deeply familiar with the technical aspects.

Key Takeaways

This week highlighted the importance of:

  • clear and structured storytelling
  • presenting a strong value proposition
  • thinking from a user and stakeholder perspective

It became clear that a well-built system alone is not enough; how we communicate and justify our work plays an equally important role.

Looking Ahead

Moving forward, we plan to:

  • continue refining the user interface and overall experience
  • improve how we explain and validate our model outputs
  • strengthen the integration between data, visualization, and prediction

Overall, this phase felt like an important step in making our project not just functional, but also clear, polished, and impactful.


Advancing LAARK: From Prototype to a More Complete System

Following our successful Prototype Inspection Day, our team continued refining LAARK with a strong focus on both technical improvements and project communication. This phase marked an important transition—from presenting a functional prototype to enhancing the system into a more complete and user-ready platform.

During Prototype Inspection Day, we demonstrated a fully integrated system with key features such as dashboard insights and predictive capabilities. Building on the feedback we received, we focused on improving the usability and practicality of the platform.

Our key accomplishments during this phase included:

1. Project Introduction Video

We completed a video introduction that demonstrates:

  • The real-world problem LAARK solves
  • A walkthrough of the system
  • Key features and user interactions

This video helps translate complex technical ideas into a format that is easy to understand for a broader audience.


2. Digital Poster Completion

In addition, we finalized our digital poster, which summarizes:

  • Problem motivation
  • System architecture
  • Key innovations
  • Results and impact

The poster serves as a concise and visual representation of our work, making it suitable for presentations, showcases, and evaluations.


3. Reflection: From Feedback to Improvement

Prototype Inspection Day emphasized the importance of:

  • Clear storytelling
  • Strong value proposition
  • User-centered design

This round of improvements reflects how we applied that feedback:

  • The floorplan feature improves usability
  • The video improves accessibility of ideas
  • The poster improves clarity and communication

4. Moving Forward

As we continue developing LAARK, our focus will be on:

  • Further refining the user interface and experience
  • Improving model explanation and validation
  • Strengthening integration between data, visualization, and prediction
  • Delivering a polished and impactful final system


Prototype Inspection Day: Showcasing LAARK

INTRODUCTION

This week marked a significant milestone for our team as we participated in Prototype Inspection Day, one of the key events in the IPPD journey. The event provided us with the opportunity to present our project, LAARK, to multiple panels of judges and receive valuable feedback on our progress.

Prototype Inspection Day is designed for teams to showcase their working systems and gather insights from faculty and industry professionals, helping refine both technical and presentation aspects of the project.

PRESENTING OUR COMPLETE SYSTEM

For this inspection, we presented a fully integrated version of LAARK, demonstrating how all components of our platform come together into a cohesive system.

Our presentation focused on:

  • The problem we are solving
  • The architecture and workflow of our platform
  • A live walkthrough of the system
  • Key features such as dashboard insights and predictive capabilities

Unlike earlier reviews, this milestone required us to move beyond concepts and showcase a functional prototype, highlighting both design and implementation.

ENGAGING WITH THREE DIFFERENT PANELS

One of the most valuable aspects of the day was presenting to three different panels of judges. Each panel brought a unique perspective, allowing us to view our project through different lenses:

  • Technical depth and feasibility
  • Clarity of problem statement and value proposition
  • Usability and overall system design

This iterative presentation process pushed us to continuously adapt and improve how we communicated our ideas.

REFLECTION AS A TEAM

Prototype Inspection Day was not just about presenting our system; it was about learning how to communicate engineering solutions effectively.

Presenting multiple times helped us:

  • Refine our pitch in real-time
  • Identify gaps in our explanation
  • Understand how different audiences interpret our platform

It reinforced the importance of balancing technical depth with clarity and storytelling.

MOVING FORWARD

We will focus on:

  • Strengthening our problem narrative and value proposition
  • Improving model explanation and validation techniques
  • Enhancing the user interface and dashboard clarity
  • Practicing a more structured and confident presentation flow

These improvements will guide us as we prepare for the upcoming milestones.

From Offline Model to Live Dashboard: Closing the Loop in Week 11

This week marked a major turning point for LuminaTech’s L.A.A.R.K Lite platform. What we previously treated as “completed work in separate lanes” (modeling, backend services, and dashboard development) is now converging into a single, functioning pipeline. The key shift this week was moving from having a model that runs successfully on its own to having a model that runs through the system, triggered by backend APIs and surfaced through the frontend experience.

Bringing the Model Into the System

On the modeling side, we completed the final validation checks and packaged the predictive model in a form that can reliably support integration. This step was critical because it turns the model into a reusable system component rather than a one-off notebook result. With the model finalized, we were able to move forward confidently with deployment-style integration.
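As an illustration of what packaging a model into a reusable system component can look like, the sketch below serializes a toy predictor into a single artifact that a backend could load at startup. The class, method names, and weights here are hypothetical placeholders, not our actual model.

```python
import pickle

# Hypothetical sketch: wrapping a trained model as one serializable artifact.
# FixtureFailureModel and predict_risk are illustrative names only.
class FixtureFailureModel:
    def __init__(self, weights):
        self.weights = weights  # learned feature weights

    def predict_risk(self, features):
        # Toy linear score clamped to [0, 1]
        score = sum(w * x for w, x in zip(self.weights, features))
        return max(0.0, min(1.0, score))

model = FixtureFailureModel(weights=[0.4, 0.3, 0.3])

# Serialize once; the backend deserializes at startup and reuses the object
artifact = pickle.dumps(model)
loaded = pickle.loads(artifact)
loaded.predict_risk([0.5, 0.5, 0.5])  # ≈ 0.5
```

The point of this step is that everything the backend needs lives in one artifact, so integration does not depend on re-running a notebook.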

Backend Integration: Inference Through APIs

A major technical milestone this week was integrating the model into the backend layer so that inference can be called through APIs. Instead of running predictions separately, the backend can now trigger inference as part of the platform workflow. This is one of the most important steps toward an MVP because it confirms the platform can support prediction generation as a product feature, not just a research output.
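A minimal sketch of what an inference endpoint's handler might look like is shown below. The request and response field names (`fixture_id`, `features`, `risk_score`) are assumptions for illustration, not our production API contract.

```python
# Hedged sketch of an inference handler the backend could expose via an API.
def predict_endpoint(payload, model):
    """Validate a request body and return a JSON-serializable prediction."""
    features = payload.get("features")
    if not features:
        return {"error": "missing 'features'"}, 400
    score = model(features)
    return {"fixture_id": payload.get("fixture_id"),
            "risk_score": round(score, 3)}, 200

# Toy model standing in for the real predictor
def toy_model(feats):
    return min(1.0, sum(feats) / len(feats))

body, status = predict_endpoint(
    {"fixture_id": "F-101", "features": [0.2, 0.8]}, toy_model
)
# status == 200, body["risk_score"] == 0.5
```

Keeping validation and serialization in the handler, and the model behind a plain callable, is what lets inference become a product feature rather than a standalone script.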

Frontend Progress: Showing Outputs in the Dashboard

In parallel, the team connected most backend endpoints to the frontend dashboard so model outputs can be retrieved and displayed through UI workflows. This establishes the core pipeline structure we’ve been building toward: dataset → model → backend → frontend.

With most connections in place, the remaining work is primarily about tightening reliability, finishing missing links, and validating the flow end-to-end under realistic usage.

Moving Forward

Next week, our focus will shift from integration to stability and demonstration readiness. Key goals include:

  • Completing remaining endpoints required for prediction retrieval and clean response formatting
  • Running full system-level testing across the pipeline (model → backend → frontend)
  • Debugging integration issues and improving UI clarity for interpreting predictions
  • Exploring how to store and serve floor plan data through Supabase to support richer visualization in the dashboard
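On the floor plan question above, one possible row shape for a Supabase table is sketched below. The table name, column names, and normalized-coordinate convention are all assumptions we are still exploring, not a settled schema.

```python
import json

# Hypothetical row shape for a "floorplans" table; names are illustrative.
floorplan_row = {
    "building_id": "BLDG-01",
    "floor": 2,
    "image_url": "https://example.com/plans/bldg01-f2.svg",
    "fixtures": [
        {"fixture_id": "F-201", "x": 0.31, "y": 0.64},  # normalized coords
        {"fixture_id": "F-202", "x": 0.58, "y": 0.22},
    ],
}

# Storing fixture coordinates as a JSON column keeps the schema flexible
payload = json.dumps(floorplan_row)
restored = json.loads(payload)
```

Normalized (0 to 1) coordinates would let the dashboard overlay fixtures on the plan image at any zoom level without rescaling stored data.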

Reflection

This week highlighted the difference between “a working model” and “a working product.” Finalizing the model was important, but integrating it into backend APIs and connecting it to the dashboard is what makes L.A.A.R.K Lite feel operational. We are now at the stage where the system is capable of producing and presenting predictions through the user experience — and the next step is proving that flow is stable, testable, and ready for sponsor-facing demonstration.

Building the Future of Smart Facilities: Inside Team 14’s LAARK Platform

Modern commercial facilities contain thousands of lighting fixtures spread across multiple floors and buildings. Yet in many organizations, these critical assets are still managed using spreadsheets and manual inspection routines. This fragmented approach makes it difficult to monitor fixture performance, predict failures, or plan maintenance efficiently.

To address these challenges, we designed LAARK, an integrated, AI-driven platform for managing a facility’s lighting infrastructure.

The system combines:

  • Facility layout data
  • Lighting fixture metadata
  • Maintenance records
  • Machine learning models

Together, these components enable facility operators to move from reactive maintenance to predictive maintenance.

The LAARK system integrates several technical components:

Front-end layer

  • Interactive dashboards
  • Floor plan visualization
  • Asset management interface

Back-end services

  • API layer for system communication
  • Database for fixture and facility data
  • Integration with building metadata

AI analytics pipeline

  • Data ingestion
  • Model training
  • Predictive inference

This architecture ensures the platform remains scalable while maintaining real-time responsiveness for facility operations.
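The three analytics stages listed above can be sketched minimally as plain functions. The field names and the threshold logic here are toy placeholders chosen for illustration, not the production pipeline.

```python
# Minimal sketch of the three pipeline stages; names and logic are illustrative.
def ingest(raw_records):
    """Data ingestion: keep only records with the fields the model needs."""
    return [r for r in raw_records if "age_months" in r and "usage_hours" in r]

def train(records):
    """Model training: here, just a threshold learned from the data."""
    mean_age = sum(r["age_months"] for r in records) / len(records)
    return {"age_threshold": mean_age}

def infer(model, record):
    """Predictive inference: flag fixtures older than the learned threshold."""
    return "at-risk" if record["age_months"] > model["age_threshold"] else "healthy"

data = ingest([
    {"age_months": 10, "usage_hours": 4000},
    {"age_months": 50, "usage_hours": 12000},
    {"id": "bad"},  # dropped during ingestion
])
model = train(data)
infer(model, {"age_months": 50, "usage_hours": 12000})  # "at-risk"
```

Keeping the stages as separate, composable steps is what lets each one be retrained, retested, or swapped without touching the others.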


Expanding Dataset Diversity and System Integration

This week, our team focused on improving the diversity and coverage of our dataset, which is a key step toward building a more reliable predictive maintenance model. We successfully scraped and processed the TAG Museums dataset provided by our sponsor and extracted relevant information about fixtures and buildings. This new dataset allowed us to introduce additional light fixtures and facility data, increasing the variability and realism of the training data used by our predictive model. By expanding the dataset in this way, we aim to make the model more robust and capable of handling a wider range of real-world lighting environments.

Ensuring Compatibility with Our Existing Pipeline

To ensure compatibility with our existing pipeline, the TAG Museums dataset was standardized and integrated into the same structure used for the ENERGY STAR-based dataset. This will allow the newly collected data to align with the current schema used by the model and backend services. On the system side, we also created a separate user profile for the sponsor within the frontend interface, enabling them to access and explore the dataset directly through the dashboard. At the same time, the frontend continued to be refined so that, in further developments, the fixture-level and building-level information from the new dataset could be correctly displayed within the application.
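The standardization step described above amounts to mapping scraped column names onto the existing schema and dropping anything outside it. The column names on both sides of this sketch are invented for illustration; they are not the actual TAG Museums or ENERGY STAR fields.

```python
# Hedged sketch of standardizing a scraped dataset to an existing schema.
COLUMN_MAP = {
    "Bldg Name": "building_name",
    "Fixture Type": "fixture_type",
    "Install Yr": "install_year",
}

def standardize(record, column_map=COLUMN_MAP):
    """Rename columns and drop anything outside the target schema."""
    return {new: record[old] for old, new in column_map.items() if old in record}

tag_row = {"Bldg Name": "TAG Museum A", "Fixture Type": "LED panel",
           "Install Yr": 2018, "Notes": "scraped"}
standardize(tag_row)
# {'building_name': 'TAG Museum A', 'fixture_type': 'LED panel', 'install_year': 2018}
```

Because both datasets end up in one schema, the model and backend services can consume either source without special-casing where a row came from.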

Moving Forward

The team will begin training and evaluating the predictive model using the expanded dataset, ensuring that the additional fixture diversity improves prediction accuracy and reliability. We will also implement backend endpoints for retrieving model predictions and dataset queries, followed by comprehensive system-level testing of the pipeline, encompassing dataset ingestion, model inference, API communication, and UI visualization. These steps will bring us closer to delivering a complete end-to-end predictive maintenance platform.

QRB-2 Presentation and System Integration Progress

This week marked an important milestone for our team as we presented our QRB-2 (Quarterly Review Board 2) update. The presentation focused on demonstrating the progress we have made in building an end-to-end pipeline for our predictive lighting maintenance system. Over the past week, we validated our tuned model across multiple buildings and scenarios and presented the output format that will be integrated into the user interface. The system now provides meaningful outputs such as a risk score, risk category, and estimated time-to-failure, which will help facility managers make better maintenance decisions. We also continued stabilizing backend APIs and improving logging and error handling to make the system more reliable during integration and testing.
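To make the three outputs above concrete (risk score, risk category, and estimated time-to-failure), the sketch below shows one possible shape for a prediction record. The field names and category thresholds are assumptions for illustration, not the format we presented.

```python
from dataclasses import dataclass, asdict

# Sketch of a prediction record; names and thresholds are assumptions.
@dataclass
class FixturePrediction:
    fixture_id: str
    risk_score: float        # 0.0 (healthy) to 1.0 (likely to fail)
    risk_category: str       # derived from the score
    est_days_to_failure: int

def categorize(score):
    if score >= 0.7:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"

pred = FixturePrediction("F-3021", 0.82, categorize(0.82), est_days_to_failure=45)
asdict(pred)
# {'fixture_id': 'F-3021', 'risk_score': 0.82, 'risk_category': 'high',
#  'est_days_to_failure': 45}
```

A flat, JSON-friendly record like this is easy for the UI to render directly and for facility managers to sort and filter by category.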

Our Development Side

On the development side, our team continued enhancing the React frontend, adding more detailed room and lighting information to the fixture-level views and UI panels. We also summarized and responded to feedback from our project sponsor regarding dataset features and modeling choices. At the same time, work progressed on planning the machine learning model integration with the backend and frontend so that the complete workflow from login and dashboard to building selection and prediction visualization functions would run smoothly. Additional work also included preparing separate data handling for the Shands Hospital and museum datasets, along with improvements to the floorplan viewer and room-management features in the UI.

Moving Forward

Our focus will be on finalizing the dataset and model while completing the full integration of the system components. We also plan to push updated technical documentation to GitHub for sponsor review and ensure all required resources and permissions are available for stable testing. As we progress toward the next phase, our goal is to deliver a fully functional pipeline where data flows seamlessly from the model to the API and into the user interface, enabling real-time predictions and actionable insights for building lighting maintenance.

From Independent Components to a Unified System

This week marked a major milestone for L.A.A.R.K Lite as we successfully transitioned from isolated development modules to a fully connected system. What began as separate frontend, backend, database, and modeling efforts has now evolved into an integrated full-stack prototype.

System Integration: Connecting the Pieces

Our primary focus this week was establishing seamless communication between the frontend, backend, and database layers. Using Supabase as our cloud database solution, we connected our backend APIs to a live database and ensured that the frontend could dynamically fetch and display stored data.

This means that data entered through the user interface is now persistently stored, retrieved in real time, and reflected back in the dashboard — validating our end-to-end system architecture. Seeing live data flow through the application confirmed that our technical stack is functioning cohesively rather than as independent components.
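A small sketch of the "retrieved and reflected back in the dashboard" step: shaping rows, as a database select might return them, into the structure a dashboard view could render. The field names here are assumptions, not our actual Supabase columns.

```python
# Hedged sketch: shape fetched fixture rows for a dashboard view.
rows = [
    {"fixture_id": "F-101", "building": "Main Hall", "risk_score": 0.12},
    {"fixture_id": "F-102", "building": "Main Hall", "risk_score": 0.81},
]

def to_dashboard(rows):
    """Group fixtures by building, highest-risk fixture first."""
    by_building = {}
    for r in sorted(rows, key=lambda r: r["risk_score"], reverse=True):
        by_building.setdefault(r["building"], []).append(r["fixture_id"])
    return by_building

to_dashboard(rows)  # {'Main Hall': ['F-102', 'F-101']}
```

Doing this shaping in one place keeps the UI components simple: they render whatever the transform hands them, and the raw schema can evolve behind it.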

Model Completion and Readiness for Deployment

Alongside system integration, we completed the development of our baseline machine learning model. Using the refined and merged dataset, we finalized feature selection, validated outputs, and ensured the model generates meaningful predictions aligned with our predictive maintenance objectives.

With the model completed and the infrastructure ready, we are now positioned to integrate predictive outputs directly into the dashboard experience.

Reflection: From Architecture to Implementation

This week highlighted the importance of modular design. Because each subsystem — frontend, backend, database, and ML — was developed with clear boundaries, integration was a matter of alignment rather than redesign.

The platform is no longer theoretical. It is operational.

Moving Forward

In the upcoming week, we will focus on:

  • Integrating model predictions into the live dashboard
  • Refining UI interactions for improved usability
  • Enhancing data validation and testing workflows
  • Preparing for MVP-level demonstration

With Supabase powering our database, APIs actively serving data, and our predictive model finalized, L.A.A.R.K Lite is steadily advancing toward a fully functional intelligent lighting asset management system.


Continuing Momentum at LuminaTech

This week at LuminaTech felt like a turning point.

What started as separate pieces (data, models, UI mockups, backend functions) is slowly becoming a connected system. Our whiteboard says “LuminaTech,” but behind that name is real architectural progress.

From Idea to Baseline Model

We began the week by locking down something fundamental: clarity. We finalized our input features, confirmed our target definition, and built a baseline ensemble model to establish our first true performance benchmark. Instead of experimenting in multiple directions, we now have a clear reference point, a measurable starting line.

This baseline gives us something powerful:
a way to prove improvement, not just assume it.

Next step? Hyperparameter tuning. We’ll be running structured search strategies and comparing performance carefully, ensuring that any gain is meaningful and reproducible.
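The "structured search" idea above can be sketched as an exhaustive sweep over a small parameter grid, keeping the best-scoring configuration. The parameter names and the scoring function here are placeholders, not our actual tuning setup, which would use cross-validated scores.

```python
from itertools import product

# Toy sketch of a structured grid search; parameters and scoring are placeholders.
param_grid = {"n_estimators": [50, 100], "max_depth": [3, 5]}

def evaluate(params):
    """Stand-in for cross-validated scoring of one configuration."""
    return params["n_estimators"] * 0.001 + params["max_depth"] * 0.01

best_params, best_score = None, float("-inf")
for values in product(*param_grid.values()):
    params = dict(zip(param_grid.keys(), values))
    score = evaluate(params)
    if score > best_score:
        best_params, best_score = params, score

best_params  # {'n_estimators': 100, 'max_depth': 5}
```

Sweeping every combination against one fixed scoring routine is what makes a gain reproducible: anyone can re-run the grid and land on the same winner.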

Strengthening the Backbone: Backend Progress

While modeling took shape, the backend architecture matured as well. We implemented and organized Supabase Edge Functions to better support workflow logic. Endpoints were structured more cleanly, making communication between the model layer and frontend more predictable and stable.

This is the invisible work that makes everything else possible: the difference between a demo and a system.

Bringing the Floor Plan to Life

On the frontend side, integration continued steadily.

We advanced our React interface and deepened the interaction between the floor plan and fixture-level views. Clicking on a fixture now feels less like navigating static UI and more like interacting with a living system. With updated code pushed to GitHub, collaboration is smoother and version control remains clean as multiple components evolve in parallel.

What’s Next: Moving Toward Full Integration

The upcoming week is focused on refinement and connection.

We’ll:

  • Tune the model and compare baseline vs optimized configurations
  • Stabilize backend responses for UI integration
  • Connect prediction outputs into the frontend flow
  • Begin validating the full pipeline

Data → Model → API → UI

Questions for Alignment

As we move toward deeper integration, we’re seeking clarification from our liaison engineer on two key areas:

  • What format should model outputs take for sponsor visibility?
    Risk score? Category? Time-to-failure?
  • Do we have sufficiently reliable maintenance/failure history to support stronger validation?

These answers will guide how we design both evaluation and user display.

Where We Stand

Hyperparameter tuning is defined.
UI integration is advancing.
Backend endpoints are taking shape.

LuminaTech is transitioning from isolated development into a cohesive predictive maintenance platform.

This week wasn’t just about code; it was about convergence.