Blog Posts

Week 6: Building, Testing, Tuning

This week we continued building momentum, with each subteam pushing forward and beginning to produce more measurable results.

The benchmarking team developed and tested new CI (continuous integration) scripts to run benchmarks across multiple target selection strategies and scenario combinations. This gives us clearer insight into execution times and overall system performance under varying conditions.
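To illustrate the idea, here is a minimal sketch of the kind of sweep such a CI script performs: time each run across every strategy-scenario pair. The strategy and scenario names, and the placeholder result, are purely hypothetical; the real scripts use the project's own identifiers and launch the actual orchestrator.

```python
import itertools
import time

# Hypothetical names for illustration only; the real CI scripts use
# the project's own strategy and scenario identifiers.
STRATEGIES = ["greedy_nearest", "greedy_priority", "rl_agent"]
SCENARIOS = ["dense_targets", "sparse_targets"]

def run_benchmark(strategy, scenario):
    """Stand-in for one simulator run; returns (targets_imaged, runtime_s)."""
    start = time.perf_counter()
    # ... a real script would launch the orchestrator here ...
    targets = hash((strategy, scenario)) % 50  # placeholder result
    return targets, time.perf_counter() - start

def sweep():
    """Run every strategy/scenario combination and collect results."""
    results = {}
    for strategy, scenario in itertools.product(STRATEGIES, SCENARIOS):
        results[(strategy, scenario)] = run_benchmark(strategy, scenario)
    return results

if __name__ == "__main__":
    for key, (targets, runtime) in sweep().items():
        print(f"{key}: {targets} targets in {runtime:.4f}s")
```

Collecting every combination in one pass is what makes execution times directly comparable across conditions.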

Meanwhile, the GPU testing team preprocessed and cleaned a satellite imagery dataset, enabling more reliable and consistent pipeline evaluation. Initial results across 500 images are promising: TensorRT achieved a 40% speedup in FP32 with only a 1.5% change in detections, and a 60% speedup in FP16 with just a 2.3% change. These results highlight a strong performance gain with minimal accuracy tradeoff.
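For readers curious how "speedup" and "change in detections" can be computed, here is a small sketch of the two metrics, assuming per-image detection counts and total runtimes from each engine. The example numbers at the bottom are made up for illustration and are not the team's benchmark data.

```python
def detection_change(baseline_counts, optimized_counts):
    """Percent change in total detections between two engine runs.

    baseline_counts / optimized_counts: per-image detection counts,
    e.g. from an FP32 baseline and an optimized TensorRT engine.
    """
    base = sum(baseline_counts)
    opt = sum(optimized_counts)
    if base == 0:
        raise ValueError("baseline produced no detections")
    return 100.0 * abs(opt - base) / base

def speedup(baseline_seconds, optimized_seconds):
    """Throughput speedup as a percentage over the baseline."""
    return 100.0 * (baseline_seconds / optimized_seconds - 1.0)

# Illustrative numbers only: a 10 s baseline vs a 6.25 s optimized
# run is a 60% speedup; 19 vs 20 detections is a 5% change.
print(speedup(10.0, 6.25), detection_change([10, 10], [10, 9]))
```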

On the reinforcement learning side, the team integrated Ray Tune to automate experiment sweeps with detailed logging. This has already enabled initial hyperparameter tuning results and sets the stage for faster iteration and more efficient optimization moving forward.
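Conceptually, a sweep expands a search space into one config per trial, which is exactly what Ray Tune automates (along with scheduling and logging). Below is a pure-Python sketch of that grid expansion; the hyperparameter names and ranges are illustrative, not the team's actual search space.

```python
import itertools

# Illustrative search space only; the actual hyperparameters and
# ranges live in the team's Ray Tune configuration.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3],
    "gamma": [0.95, 0.99],
    "batch_size": [64, 128],
}

def expand_grid(space):
    """Expand a dict of value lists into one config dict per trial,
    the way a grid-search sweep enumerates experiments."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

trials = list(expand_grid(SEARCH_SPACE))
print(f"{len(trials)} trials")  # 3 * 2 * 2 = 12
```

Handing each generated config to a training function, with per-trial logging, is the part Ray Tune takes care of.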

In parallel, Stefano worked with our liaisons to design early concepts for an updated simulator frontend. The proposed interface includes a landing page with clear entry points for running simulations, training, and benchmarking. A very simplified version of the diagrams he created is shown below.

Another productive week! We are looking forward to QRB2 next Tuesday and to continuing our progress.

Week 5: Full Throttle

This week, instead of our usual class, we had an IPPD project work day to keep our project moving forward. The team met over Zoom (see photo below) to collaborate across sub-teams and made great progress.

The GPU Testing Team created FP16 TensorRT engines that produce results similar to the FP32 engines while running about 20% faster, giving us another performance boost for inference tasks.

The Benchmarking Team added the ability to alter the satellite's starting location in the orchestrator for scenario testing. They also finished integrating the lightweight runner into the benchmarking script, making it easier to test and evaluate other tasking methods, and added a benchmark stage to the GitLab CI file that automatically runs a test instance of the benchmarking script when code is merged.

Finally, the RL Training Team constructed a detailed training plan for phase-based model training sequences that can be run either sequentially or concurrently, giving the team flexibility in orchestrating experiments.
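As a sketch of what "altering the satellite starting location" might look like in an orchestrator, here is a small helper that derives a new scenario config from a default one. The config schema and field names here are entirely hypothetical; the real orchestrator defines its own.

```python
import copy

# Hypothetical config structure for illustration; the real
# orchestrator defines its own schema.
DEFAULT_CONFIG = {
    "satellite": {
        "start_lat_deg": 0.0,
        "start_lon_deg": 0.0,
        "altitude_km": 500.0,
    },
    "scenario": "default",
}

def with_start_location(config, lat_deg, lon_deg):
    """Return a copy of the config with a new starting location,
    leaving the original untouched so scenarios stay independent."""
    if not (-90.0 <= lat_deg <= 90.0 and -180.0 <= lon_deg <= 180.0):
        raise ValueError("starting location out of range")
    new = copy.deepcopy(config)
    new["satellite"]["start_lat_deg"] = lat_deg
    new["satellite"]["start_lon_deg"] = lon_deg
    return new

polar_start = with_start_location(DEFAULT_CONFIG, 85.0, 10.0)
```

Deep-copying the default keeps every scenario variant isolated, which matters when the benchmark script runs many scenarios in one process.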

It was a productive week thanks to the IPPD project work day! Each sub-team is moving forward, and we’re excited to see these pieces come together as we continue to scale up.

Week 4: Defining Success

This week, the GPU testing, benchmarking, and RL training teams all advanced. The GPU testing team got TensorRT working with dynamic batch sizes and found that it was much faster than PyTorch! The benchmarking team finished the lightweight runner but realized that the satellite's position must be set per scenario, so further edits are needed in both the simulator's configuration and the lightweight runner. The RL training team tried to run multiple training jobs in parallel on AWS and unfortunately crashed the EC2 instance. They have since pivoted to training sequentially.
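The sequential pivot can be sketched as a simple job runner: run one training job to completion before starting the next, so a single instance's resources are never oversubscribed. The runner and the stand-in jobs below are illustrative, not the team's actual training code.

```python
# Hypothetical job runner illustrating the sequential pivot: one
# training job at a time instead of launching them all in parallel.
def run_sequentially(jobs):
    """Run each zero-argument job to completion before the next.

    Results (or errors) are collected per job so that one failure
    doesn't abort the rest of the sweep.
    """
    results = []
    for job in jobs:
        try:
            results.append(("ok", job()))
        except Exception as exc:
            results.append(("error", exc))
    return results

# Example with stand-in jobs in place of real training runs:
outcomes = run_sequentially([lambda: "phase-1 done", lambda: "phase-2 done"])
print(outcomes)
```

The tradeoff is wall-clock time for stability: total runtime grows linearly with the number of jobs, but peak memory stays at a single job's footprint.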

The image below shows a full day of the satellite's movement. During our conversations about whether the reward should penalize pictures taken in the dark, the team took a step back and realized that a polar sun-synchronous orbit ensures the satellite is never on the dark side of the Earth!

Beyond the progress within sub-teams, the entire team came together to discuss which metrics should determine the trained agent's success over the heuristic methods. We had a great discussion on which metrics should be maximized and which minimized for the best strategy (hopefully the trained agent). We will continue these conversations in our Slack channel and at our weekly meetings with MRSL to formalize what success means.

Week 3: Post QRB 1!

We are fresh off of our QRB 1 presentation, and we received good feedback on our project. The coach feedback was valuable and has already begun guiding our work. For example, we created goals, approved by MRSL, that we will aim to achieve for the rest of the project. We also confirmed the fidelity of our simulation through our weekly liaison meeting. The media below is a snippet from our QRB slides that the judges particularly liked.


The GPU Testing team continued its analysis of PyTorch vs. ONNX models. The Benchmarking team revised scenarios to be more accurate and started building the last large component of the benchmarking script, a lightweight runner. Speaking of lightweight runners, the RL team has its own and has begun using it to experiment with different parameter configurations.

Week 2: Progress in Motion

This week, the Orbiteers continued to build on the strong start to the semester as development efforts ramped up across the project.

A major focus this week was laying the groundwork for more systematic evaluation of our approaches. The team began developing tooling to support large-scale experimentation, making it easier to run and organize multiple variations of training runs. This will allow us to more efficiently explore different configurations and better understand how our system behaves under varying conditions.

In parallel, progress was made on benchmarking heuristic-based approaches. By testing these methods across a range of scenarios and target distributions, such as the distribution shown in the image below, the team is working toward establishing meaningful comparison baselines that will help guide future development and analysis.

Overall, this week was about putting solid foundations in place. With tools taking shape and evaluation efforts underway, the Orbiteers are setting themselves up for a productive stretch ahead!

Week 1: Back in Orbit

This week marked the Orbiteers’ first week back after the December break, and we hit the ground running.

Right before the break, our NVIDIA Jetson Orin Nano Super Developer Kit arrived, and now that we’re back on campus, our resident computer engineer Jack has had the opportunity to start tinkering with it. Initial hands-on exploration with the hardware has begun, and it’s exciting to finally have the platform in hand as we move closer to the next stages of development.

We also held our first team meetings of the new semester, where we discussed our path forward and aligned on priorities for the coming weeks. With the learning process for our tasking agent now stabilized, the team is shifting focus toward exploring parameter tuning and performance refinement. In parallel, we are beginning to systematically characterize heuristic-based approaches to establish strong comparison baselines.

With momentum building and clear goals ahead, it’s full steam ahead for the Orbiteers as we kick off the new semester!

Week 13: System Level Design Review

This week the Orbiteers completed their System Level Design Review (the document and the presentation)!! All the previous work from the Preliminary Design Review, Prototype Inspection Day, and the Peer Review Events led us to this monumental achievement. We presented the current state of our project to MRSL, Dr. Christian Grant, and Ann Marie Cassar-Chupa. We also had the amazing opportunity to connect with industry leaders at the events preceding the presentation and organized by the IPPD team.

Following the presentation, the Orbiteers discussed with MRSL what the immediate goals of the project are prior to the conclusion of the first phase of IPPD and what we can expect for the coming semester. We are all overjoyed with our achievements for this semester and cannot wait to see what is to come! Let’s finish strong Orbiteers! 


Week 12: Almost Friday

We, the Orbiteers, delivered our Peer Review System Level Design Review (PR SLDR) this week to an audience of graduate students and professors. Our presentation was very well received, and the majority of the feedback was positive. Leading up to this event, we emphasized providing better context on our problem statement and breaking down the functionality in each module to facilitate understanding among our peers. Additionally, we developed new greedy tasking strategies to serve as baselines for our future work and resolved a few bugs in our program. We are extremely close to the end of the semester and the SLDR presentation, but are very excited to move headfirst into IPPD 2! Below is a picture of our team during this week’s SLDR practice presentation!


Week 11: The Plot in Us

This was a week of code changes and end-of-semester preparations that kept us quite busy! We implemented plotting for our observation structure (OBS), missed targets, boresight vectors, and training metrics. Each of these will improve explainability in our project by allowing us to examine each step in our system. This is a luxury exclusive to us while our code is still on the ground. Once our work is in space, it will become astronomically more difficult to explore metrics! Below are some example plots of the OBS using Matplotlib and training metrics in TensorBoard, respectively:
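As a taste of the data behind a missed-targets plot, here is a minimal sketch of the prep step: given per-step sets of visible and imaged targets, compute the cumulative missed count that a Matplotlib line plot would display. The data structures and target names are hypothetical, not our actual OBS format.

```python
# Illustrative data prep for a missed-targets plot. Inputs are
# hypothetical per-step sets of visible and imaged target IDs.
def missed_per_step(visible_by_step, imaged_by_step):
    """Return cumulative missed-target counts, one entry per sim step."""
    cumulative, series = 0, []
    for visible, imaged in zip(visible_by_step, imaged_by_step):
        cumulative += len(set(visible) - set(imaged))
        series.append(cumulative)
    return series

series = missed_per_step(
    [{"t1", "t2"}, {"t3"}, {"t4", "t5"}],
    [{"t1"}, {"t3"}, set()],
)
# A series like this is what gets handed to plt.plot() against sim steps.
```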

Additionally, we have been working on our System Level Design Review (SLDR) drafting in preparation for our peer-review SLDR next week. The final SLDR presentation in a few weeks will be our last deliverable of the semester. It is crazy to be looking ahead to the end of IPPD 1 already, but we have loved the experience so far, and are excited to end strong!


Week 10: Prototype Inspection Day

The Orbiteers had a fantastic time showing off our project at Prototype Inspection Day this week! A picture of us setting up our code to demo is shared below. We demoed our simulator prototype and received feedback from six UF faculty coaches with diverse backgrounds and knowledge. Their feedback was insightful and rewarding, and we are thankful for the experience. One noteworthy recommendation was to explain the precise steps for upgrading our prototype from a rule-based heuristic to reinforcement learning, instead of simply stating that we plan to upgrade it. The judges also noted that more information about how we plan to compare and evaluate reinforcement learning strategies would have been helpful.

We made some code changes this week, too. They focused on improving the logging and metrics on the frontend display and the fidelity and organization of the backend simulator. Moving forward, we have more changes planned to keep upgrading our simulator in the coming weeks. We are also going to start working on our System Level Design Review (SLDR) and prepare for the peer review presentation. The end of the semester is coming fast, and the Orbiteers are moving onwards and upwards!