Blog Posts

THE FINAL CHAPTER.

It is hard to believe how far this journey has come. What started as early planning sessions, rough ideas, and whiteboard sketches has now turned into a fully developed system, a complete story, and a project we are proud to stand behind.

Week 15 marked our Final Design Review. This was the moment where everything came together in front of our coaches, liaisons, faculty, and peers. It was not just about presenting what we built, but showing the thought process, decisions, iterations, and lessons that shaped the system along the way.

Walking into the review felt different from every presentation before. There was less uncertainty and more confidence. We knew the system. We understood the trade-offs. We had lived through every challenge, from infrastructure delays to data selection to full integration. And this time, we were not just explaining a solution. We were telling the story behind it.

The presentation brought together every piece of our work. The system architecture, the RAG pipeline, the security layers, and the evaluation framework all connected into one cohesive narrative. The feedback we received throughout the semester had shaped this moment, and it showed in how clearly we were able to communicate both the technical depth and the real world impact of our solution.

Alongside the presentation, we also finalized our project poster. Condensing months of work into a single visual format was a challenge in itself. Every section had to be intentional, from architecture diagrams to key results, while still remaining clear and accessible. It became a snapshot of everything we built.

Looking back, the most valuable part of this experience was not just the final product. It was the process. Learning how to navigate ambiguity, adapt to constraints, collaborate as a team, and continuously improve based on feedback. Every delay, every iteration, and every late night contributed to where we are now.

The project is complete. The system is built. The story is told.

And as we walked out of the Final Design Review, there was a quiet sense of accomplishment. Not because everything was perfect, but because it was real, it was thoughtful, and it was ours.

This is not just the end of a project. It is the beginning of everything we take forward from it. 🚀


Week 14: Refining, Reviewing, and Getting Ready

Week 14 was about taking everything we have built and making it better. After completing full system integration last week, the focus shifted to polishing and refining our work, incorporating feedback, and preparing for final delivery.

One of the highlights of this week was participating in the Final Design Review (FDR) peer review session. Having peers evaluate our work provided a perspective we cannot get from within the team. The feedback we received on design clarity, technical depth, and overall presentation quality was constructive and actionable. Hearing how others interpret and engage with our design helped us identify gaps we had grown too familiar to notice.

Building on that momentum, we also completed the draft of our FDR report. This document captures the full scope of our system, architecture decisions, design rationale, and implementation details. Getting the draft done is a significant milestone. It means the technical story of our project is now written down in a structured, reviewable form. The next step is incorporating the peer feedback we gathered and finalizing it for submission.

Our project poster also received meaningful updates this week. Acting on feedback from peers, we improved content clarity, restructured the layout for better flow, and refined the visual presentation. A poster has to communicate quickly and clearly, and this round of revisions brought us noticeably closer to that goal.

Behind the scenes, continuous testing and iterative improvements remained a constant thread through the week. System performance, stability, and reliability are not one-time achievements; they require ongoing attention. Each testing cycle reveals small opportunities to improve, and those improvements add up.

Looking ahead to next week, the priorities are clear: finalize the FDR report, continue system optimization, and prepare for final presentations. The storytelling, demo flow, and key talking points all need to be sharp. We are building toward a presentation that reflects not just what the system does, but the thinking and effort that went into creating it.

The project remains on schedule. Week 14 was about refinement. The system is strong, the documentation is taking shape, and the team is heads-down preparing for the finish line.

The final stretch is here. Time to make it count. 🚀

Week 13: Bringing It All Together

Week 13 felt like everything coming full circle. After weeks of building, testing, and refining, we are now at the stage where the system is not just functional, but complete and ready to be presented as a cohesive product.

One of the biggest milestones this week was completing full end-to-end integration. Every major component of the system is now connected. Frontend, authentication, vector database, and data pipeline are all working together as a unified system. This was the moment where everything we have been building individually finally became one.

Alongside technical progress, we also shifted focus toward how we present our work. We recorded and presented a short project video in class, showcasing the system and its capabilities. Watching our own work from an outside perspective was a different experience. Peer feedback helped us identify areas where the story could be clearer, more engaging, and easier to follow. It was a reminder that building a strong system is only part of the job. Communicating it effectively matters just as much.

We also created the first draft of our project poster. This involved condensing months of work into a format that is both visually clear and technically informative. It is not easy deciding what to include and what to leave out, but this draft gives us a strong starting point for refinement.

At the same time, we continued improving the system itself. Based on feedback from liaison engineers, we focused on stability, usability, and performance. Small improvements across different components are adding up to a more reliable and polished experience.

Looking ahead, next week will be about refinement and final preparation. We will be improving the project video based on feedback, finalizing the poster design, continuing system optimization, and conducting thorough end-to-end testing to ensure everything works as expected.

The project remains on schedule. Week 13 was about convergence. The system is built, the story is taking shape, and everything is coming together for the final stretch.

We are not just finishing a project. We are preparing to present everything we have built. 🚀

Week 12: From Prototype to Product

Week 12 marked an important shift in our journey. We were no longer just building and integrating. We were refining, validating, and shaping the system into something closer to a real product. Prototype Inspection Day made that shift concrete: we shipped and deployed our prototype on Azure and were eager to gather feedback.

Prototype Inspection Day 2 was the highlight of the week. After weeks of parallel development and integration, we presented a fully working and deployed product hosted on Azure. The demo flowed smoothly, and for the first time, the system felt cohesive from end to end, with frontend, authentication, backend, and data pipeline all working together in a single experience.

But what mattered most was the feedback. Judges focused on the areas that take a system from functional to reliable: stability, validation layers, and security. These are not features that stand out in a demo, but they define how trustworthy a system is in real use.

We did not wait to act on it. Soon after PID 2, the team regrouped and began implementing improvements. The focus shifted toward strengthening the system from within, reducing failure points, improving validation checks, and making the entire pipeline more robust.

Midweek, we met with our liaison engineers to walk through the product again. This session gave us a second layer of feedback, more grounded in real world use and expectations. The discussion helped us validate our direction and identify areas where design and implementation could be tightened further.

At the same time, we continued improving integration across the system. Connecting frontend, authentication, and backend components is no longer just about making them work together, but making them work reliably under different scenarios and edge cases.

Looking ahead, the next week will be focused on completing all feedback implementation, optimizing system performance, and refining how we present the system. At this stage, how we communicate the system is just as important as how we build it.

The project remains on schedule. Week 12 was not about adding new features. It was about making what we built stronger, cleaner, and more dependable.

The prototype is no longer just a demo. It is starting to feel like a product. 🚀

Week 11: Integrating the Pieces

Week 11 was all about bringing everything together. The team split into three focused sub-teams, each tackling a critical part of the system: data extraction, database setup, and security integration. This parallel approach allowed us to make progress on multiple fronts without losing focus.

The data extraction team received the target folder and began preprocessing the documents, breaking them into chunks ready for embedding. Meanwhile, the database team configured the Vector Database and started performing vector tests using a golden dataset we had prepared. Seeing the pipeline start to work end-to-end for the first time was exciting; this is the system finally taking shape.
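For readers curious what the chunking step looks like in practice, here is a minimal sketch in Python. The window size and overlap are illustrative placeholders, not our production parameters:

```python
def chunk_text(text, chunk_size=800, overlap=100):
    """Split a document into overlapping character windows for embedding."""
    assert 0 <= overlap < chunk_size
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece.strip():  # skip empty or whitespace-only windows
            chunks.append(piece)
    return chunks
```

The overlap keeps sentences that straddle a chunk boundary from losing context; a real pipeline would typically split on token or sentence boundaries rather than raw characters.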

On the frontend, we implemented Role-Based Access Control through Microsoft Entra ID and enabled “Sign in with Microsoft” functionality using Oelrich Construction credentials. These steps were critical to ensure secure access and proper authentication, making the system enterprise-ready.
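Conceptually, the access control boils down to filtering retrieved documents against the roles carried in the user's Entra ID token. The document shape and role names below are hypothetical, purely to show the idea:

```python
# Hypothetical shape: each indexed document records which roles may see it,
# and user_roles would come from the "roles" claim of the user's Entra ID
# token after "Sign in with Microsoft".
def filter_by_role(documents, user_roles):
    """Drop any retrieved document the user is not authorized to see."""
    allowed = set(user_roles)
    return [d for d in documents if d["allowed_roles"] & allowed]
```

Filtering at retrieval time means an unauthorized document can never reach the model's context, which is stronger than hiding it only in the UI.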

Looking ahead, next week will be focused on completing the full integration. We aim to connect the frontend authentication, vector database, and data ingestion pipeline into a single working prototype for Prototype Inspection Day 2 (PID 2). Vector testing with the golden dataset will also be finalized to ensure reliability and correctness for the demonstration.

The project remains on schedule. Week 11 was a turning point where the individual components of our system started to converge. The pieces are coming together, and the team is ready to deliver a fully integrated prototype. 🚀

Week 9: Turning Documents Into Knowledge

By Week 9, the focus of our work began shifting from infrastructure and deployment to something equally important for any AI system. Data. Not just any data, but the right data.

This week started with a dedicated project work session where the team focused on development progress and took a deeper look at the document ingestion pipeline. Now that our cloud environment is coming together, the next challenge is making sure the system is pulling meaningful information from the company’s document ecosystem.

We continued our weekly coordination meeting with the Comsys team, where we reviewed technical progress and discussed different strategies for ingesting documents from the company’s server structure. These discussions helped clarify the practical constraints around document formats, permissions, and how information is actually organized in the real environment.

Later in the week, we held an important stakeholder meeting with Ashley to review the company’s folder hierarchy. This conversation turned out to be extremely valuable. Instead of guessing which files might be useful, we were able to walk through the actual structure used within the organization and identify which datasets would provide the most meaningful information for our proof of concept.

As we began evaluating documents, we discovered something interesting. Many PDF files contained mostly images rather than extractable text. While these files may be useful for humans, they do not always provide enough textual information for embedding generation within our retrieval pipeline. This insight helped us refine our ingestion strategy and avoid wasting processing time on documents that would contribute little to the system.
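That filtering insight can be captured with a simple heuristic: average extractable characters per page. Page texts would come from a PDF parser such as pypdf; the 200-character floor here is an illustrative guess, not a measured cutoff:

```python
# Heuristic for skipping image-heavy PDFs: if the average amount of
# extractable text per page is tiny, the file will contribute little to
# embedding generation and can be excluded from ingestion.
def has_embeddable_text(page_texts, min_chars_per_page=200):
    """Return True when a document yields enough text to be worth embedding."""
    if not page_texts:
        return False
    avg = sum(len(t.strip()) for t in page_texts) / len(page_texts)
    return avg >= min_chars_per_page
```

A scanned drawing set might yield only a title-block caption per page and fail this check, while meeting minutes sail through; OCR could later rescue the image-heavy files if they prove valuable.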

One dataset quickly stood out as particularly valuable: OAC meeting minutes. These documents capture key project updates, schedule discussions, RFIs, and change orders. In other words, they summarize exactly the type of information someone might want to query when using the system.

Based on this discovery, we selected the Project Management directory as the starting point for our proof of concept ingestion pipeline. Image-heavy folders such as project photo directories will be excluded for now while we focus on structured documents that provide meaningful text content.

Looking ahead, the team will begin implementing the ingestion pipeline using OAC meeting minutes as the initial dataset. Once the pipeline is validated, we plan to expand ingestion to additional directories and continue refining document filtering methods.

The project remains on schedule, and this week helped clarify an important principle. A powerful AI system does not just need infrastructure and models. It needs the right information flowing into it.

Week 9 was about finding that signal in the noise. 🚀

Week 8: Finally in the Cloud

For weeks, Azure access had been the missing piece in our workflow. We had built the system locally, simulated data flows, and prepared the architecture for cloud deployment. Week 8 was the moment that piece finally clicked into place.

After a productive troubleshooting session with the Comsys IT team, we successfully secured the final Azure access credentials on Thursday. What had been a dependency for several weeks suddenly turned into forward momentum. The team immediately began migrating backend scripts from our local development environment into the live Azure cloud stack.

Deployment started almost as soon as access was confirmed. Backend modules that had been running locally were pushed to the cloud environment, allowing us to begin testing system components in the infrastructure where the final product will live. Early backend testing has already begun, focusing on validating communication between modules and confirming that services behave as expected in the cloud environment.

Alongside deployment work, we also started developing the evaluation framework for the system. One of the key goals is to ensure responses generated by the model can be evaluated objectively. To support this, the team began creating specialized prompt templates and scoring rubrics for our LLM-as-a-Judge framework. This will allow us to consistently evaluate the quality, relevance, and accuracy of model outputs as the system evolves.
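To make the idea concrete, here is a hedged sketch of what such a judge setup might look like. The rubric dimensions and template wording are invented for illustration, and the actual call to the judge model is elided:

```python
# Illustrative LLM-as-a-Judge scaffolding: a rubric-bearing prompt template
# plus a parser for the judge model's scored reply.
JUDGE_TEMPLATE = """You are grading an assistant's answer.
Question: {question}
Reference: {reference}
Answer: {answer}
Score each dimension from 1 (poor) to 5 (excellent), one per line:
relevance: <score>
accuracy: <score>
citation: <score>"""

def build_judge_prompt(question, answer, reference):
    return JUDGE_TEMPLATE.format(question=question, answer=answer,
                                 reference=reference)

def parse_scores(judge_reply):
    """Pull 'dimension: score' lines out of the judge model's reply."""
    scores = {}
    for line in judge_reply.splitlines():
        key, sep, value = line.partition(":")
        if sep and value.strip().isdigit():
            scores[key.strip().lower()] = int(value.strip())
    return scores
```

Fixing the rubric in the prompt and parsing a structured reply is what makes judge scores comparable across runs, which matters more than any single dimension's definition.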

The RAG pipeline also continued to evolve this week. We focused on refining the retrieval logic and preparing the system for live data ingestion once integration is complete. Improving retrieval accuracy and handling complex scenarios early will help ensure the system remains reliable when connected to real company data.

Looking ahead, next week will focus on completing the full backend deployment and conducting end-to-end testing within the Azure environment. We will also implement updated logic for handling conflicting data sources in the RAG pipeline so the system can present citations clearly and transparently to users.

Another important task will be translating the existing file permission structure from the current system into a role-based access control model within Azure. This will help ensure the new platform respects the same security hierarchy already used within Oelrich’s systems.

We will also begin structuring blind testing sessions where judges provide queries directly to validate model reasoning and robustness. Combined with our evaluation rubric, this will give us a clearer picture of system performance under realistic conditions.

The project remains on schedule. More importantly, the system has officially taken its first steps into the cloud. After weeks of preparation, the environment we designed for is finally the environment we are building in. 🚀

Week 7: Validation and Forward Motion

Week 7 was a milestone week. QRB2 arrived, and this time it felt different. Less about proving we have a plan and more about demonstrating that the system is real, functional, and evolving.

On February 24, we presented at Qualification Review Board 2. The focus was on the current state of our enterprise AI solution and how far it has progressed since the last review. Instead of just slides and diagrams, this time we brought a working prototype into the conversation.

One of the most impactful moments was demonstrating the model’s ability to detect out-of-scope queries. Showing that the system knows what not to answer is just as important as showing what it can answer. It reinforced that we are not just building a chatbot, but a controlled enterprise tool designed for reliability and guardrails.
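One common way to implement this kind of guardrail is a retrieval-similarity threshold: if no indexed chunk is close enough to the query, the system declines to answer rather than hallucinating. The cosine metric and 0.75 cutoff below are assumptions for illustration, not our exact mechanism:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Illustrative guardrail: treat a query as out of scope when its best
# match against the indexed chunks falls below the threshold.
def is_in_scope(query_vec, chunk_vecs, threshold=0.75):
    """Answer only when at least one chunk is similar enough to the query."""
    return any(cosine(query_vec, v) >= threshold for v in chunk_vecs)
```

An out-of-scope verdict would route the user to a polite refusal instead of the generation step, which is exactly the "knows what not to answer" behavior shown in the demo.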

The committee provided thoughtful feedback, particularly around testing rigor and benchmarking transparency. Rather than seeing this as criticism, we treated it as an opportunity to strengthen our evaluation framework. We immediately began formulating a structured plan to improve blind testing procedures, clarify benchmarking logic, and document our evaluation rubric more thoroughly.

Behind the scenes, there was another critical update. We confirmed with our IT liaison that Azure access will be granted, allowing us to begin deploying scripts and moving into the cloud environment. This has been our largest dependency for weeks, and hearing that it is actively being finalized gave the entire team a boost of momentum.

Looking ahead, we plan to introduce blind testing during future demos where judges provide queries directly to verify model robustness. We will also enhance the RAG pipeline to retrieve conflicting sources and present them transparently with citations, improving trust and user validation. At the same time, we are beginning formal documentation of our prompt templates and LLM-as-a-Judge evaluation criteria to ensure reproducibility and clarity.
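A tiny sketch of the citation idea: each retrieved chunk keeps its source file, conflicting passages appear side by side, and every line carries a numbered reference so the user can validate either claim. The dictionary keys are hypothetical:

```python
# Hedged sketch of transparent citation formatting for retrieved chunks.
def format_with_citations(chunks):
    """chunks: list of {"source": ..., "text": ...} in retrieval order."""
    sources, lines = [], []
    for c in chunks:
        if c["source"] not in sources:
            sources.append(c["source"])
        # Number each passage by the first appearance of its source file.
        lines.append(f'{c["text"]} [{sources.index(c["source"]) + 1}]')
    refs = [f"[{i + 1}] {s}" for i, s in enumerate(sources)]
    return "\n".join(lines + refs)
```

Surfacing both conflicting passages with their sources, rather than silently picking one, is what builds the trust the feedback asked for.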

The project remains on schedule. QRB2 confirmed that we are on track for a successful outcome, and the positive evaluation reinforced that our direction is strong. With funding secured and Azure access nearly finalized, the next phase shifts from refinement to full cloud deployment.

Week 7 was not just about presenting progress. It was about validating it. And now, we move forward with even greater confidence. 🚀

Week 6: Building in Parallel

Week 6 was all about controlled momentum. While we are still waiting on full Azure access and server credentials, the team refused to slow down. Instead of pausing progress, we shifted focus and built everything we could in parallel.

On the backend side, we made significant progress by continuing development in our local environments. We simulated data flows, tested logic pathways, and strengthened the core architecture while waiting for live infrastructure access. It was not the final environment, but it allowed us to refine the backbone of the system without losing time.

At the same time, UI and UX refinements continued steadily. The interface is becoming cleaner and more intuitive with each iteration. Every adjustment is being made with the upcoming worker training phase in mind. The goal is not just functionality, but usability. We want the tool to feel natural for Oelrich employees from the first interaction.

Infrastructure coordination remained a constant effort this week. We maintained active communication with the Oelrich IT team to expedite Azure account provisioning and server access credentials. These remain our highest priority dependencies, as securing access will unlock live data integration and full cloud deployment.

Another major focus this week was beginning preparation for QRB2. Even while technical work continues, we started organizing the narrative for the next Qualification Review Board presentation. The emphasis will be on demonstrating technical progress, infrastructure readiness, and system maturity since QRB1. Preparing early allows us to align our technical execution with the story we will present.

Looking ahead, next week carries important milestones. We plan to finalize and deliver the QRB2 presentation, complete Azure onboarding as soon as access is granted, migrate backend modules from local development to the live Azure stack, and begin integrated testing between backend and frontend services in the cloud environment.

There are still a couple of critical carry-over items. Azure subscription provisioning and server access permissions remain essential for moving from simulation to live integration. While these are external dependencies, we have mitigated delays by advancing UI improvements and backend logic development in parallel.

The project remains on schedule. More importantly, the team has demonstrated adaptability. When infrastructure was temporarily out of reach, we shifted focus and strengthened everything else.

Week 6 was not about waiting. It was about building anyway. 🚀

Week 5: Clarifying the Path Forward

Week 5 was about decisions. Last week laid the backbone, and this week we focused on choosing the path that would define how we move forward. With infrastructure discussions largely settled, our attention shifted to the architecture and security decisions that will carry us through implementation. As a team, we came together for Project Work Day to discuss key decisions that would help shape and strengthen the architecture of our final deliverable.

A major milestone came when the team evaluated server access options for building our Vector Database in Azure. We spent dedicated project time analyzing the trade-offs of different approaches. After weighing complexity, cost, and scalability, we decided to move forward without Azure AI Search and instead implement a simpler, more cost-effective Azure-based database solution. The choice reduced technical risk and clarified the next steps for the team.
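Skipping a managed index like Azure AI Search is viable at proof-of-concept scale because similarity search can be done brute force over embeddings stored in a plain database table. A minimal sketch, with illustrative data shapes:

```python
import math

# Minimal stand-in for a managed vector index: brute-force cosine search
# over (doc_id, embedding) rows pulled from an ordinary database table.
def top_k(query_vec, rows, k=3):
    """rows: list of (doc_id, embedding). Return the k most similar rows."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0
    return sorted(rows, key=lambda r: cos(query_vec, r[1]), reverse=True)[:k]
```

At larger corpus sizes this linear scan becomes the bottleneck, which is precisely the trade-off a managed service or an approximate-nearest-neighbor index resolves; for our document volume the simpler path won on cost and complexity.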

Security planning also took center stage. We designed a role-based access strategy using Microsoft Entra ID, ensuring users only see the information they are authorized to access. These discussions highlighted that security isn’t just a checkbox; it shapes how every component of the system interacts and protects the data it holds.

Communication with project liaisons reinforced alignment. We walked through the architecture decisions, explained the reasoning behind them, and outlined the next steps. Seeing consensus emerge made it clear that the team and stakeholders are moving forward together, confident in the chosen approach.

On the technical side, planning for data integration began in earnest. With the architecture defined, we can start implementing direct server data extraction, set up the Azure database environment, and validate user access controls. Backend integration and workflow testing are queued for next week, keeping the momentum high as we transition from planning to execution.

Looking ahead, the focus will be on implementation. Server access permissions need confirmation, the security configuration requires validation, and integration testing will bring the system to life. By clarifying the architecture this week, we reduced complexity and created a clear path for the team to execute confidently.

Week 5 was about clarity and alignment. The backbone is in place, the path is chosen, and the team is ready to move forward with purpose. 🚀