Week 4: Tracking the Path Forward

This week, we made significant progress on both our object detection and object tracking algorithms.

On object detection, with a batch of annotations completed for a new tower, we were able to run our algorithm under conditions that differ from the initial video. Not only are there more antennas on this tower, but the camera settings were different, resulting in a brighter video overall. Adding this footage to our training data should allow our model to function in a wider range of conditions.

We also tested two separate types of model: one that detects any antenna, and another that differentiates antennas viewed from the front from antennas viewed from the side.


Results of the two object detection algorithms.

Both object detection models performed well, with intersection over union (IoU) values above 90%, and above 99% for the model that detects any antenna.
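For reference, intersection over union is simply the area where a predicted box and its ground-truth annotation overlap, divided by the total area the two boxes cover together. A minimal sketch in Python (the corner-coordinate box format here is our assumption for illustration, not necessarily the project's exact evaluation code):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: a predicted box that mostly overlaps the ground truth.
print(iou((10, 10, 50, 50), (12, 12, 50, 50)))  # ~0.90
```

Running the example prints roughly 0.90: the two boxes share about 90% of their combined area.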

We’ve also made headway on the object tracking algorithm. In order to keep track of which data is associated with which antenna, we need to follow each bounding box across the frames of the video.
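To give a flavor of how this kind of matching can work, here is a simplified sketch of an IoU-based tracker, similar in spirit to trackers like SORT and reusing the iou helper from the sketch above; our actual implementation may differ:

```python
import itertools

class IoUTracker:
    """Toy tracker: carry IDs across frames by matching boxes on overlap."""

    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}              # track ID -> box from the last frame
        self._ids = itertools.count()

    def update(self, detections):
        """Match this frame's boxes to known tracks; leftovers get new IDs."""
        updated, unmatched = {}, list(detections)
        for track_id, prev_box in self.tracks.items():
            if not unmatched:
                break
            # Greedily take the detection that overlaps this track the most.
            best = max(unmatched, key=lambda box: iou(prev_box, box))
            if iou(prev_box, best) >= self.iou_threshold:
                updated[track_id] = best   # same antenna, keep its ID
                unmatched.remove(best)
        for box in unmatched:              # anything left starts a new track
            updated[next(self._ids)] = box
        self.tracks = updated
        return updated
```

Note that if an antenna drops out of the detections for even a single frame, a scheme like this hands it a fresh ID when it reappears, which is exactly the failure mode visible in the clip below.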

Clip from the results of the object tracking algorithm.

As you can see, the algorithm does a pretty good job of tracking the antennas, but runs into some issues toward the edges when assigning IDs. It seems that as an antenna rotates toward the point where the model should stop registering it, the detector switches back and forth between detecting and missing it. Each time the antenna reappears, the tracking algorithm assigns it a new ID.

Although this behavior can be reduced by improving the object detection model, guaranteeing consistent results will require a post-processing step that interprets the raw IDs output by the algorithm and maps them to the actual antennas. For now, though, it is promising that the algorithm performs as well as it does.
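As a rough illustration of what that post-processing could look like, here is a sketch that merges a newly assigned raw ID into an existing antenna when it first appears close to where another ID recently vanished. The heuristic and both thresholds are our own placeholders, and it again reuses the iou helper from above:

```python
def remap_ids(frames, iou_threshold=0.5, max_gap=15):
    """Collapse fragmented raw track IDs into stable antenna IDs.

    frames: one dict per video frame, mapping raw track ID -> box.
    Returns the same frames keyed by antenna ID instead.
    """
    canonical = {}   # raw track ID -> antenna ID
    last_seen = {}   # antenna ID -> (frame index, last box)
    remapped = []
    for frame_idx, detections in enumerate(frames):
        out = {}
        for raw_id, box in detections.items():
            if raw_id not in canonical:
                # A brand-new raw ID: did an antenna vanish nearby, recently?
                match = next(
                    (ant for ant, (idx, prev) in last_seen.items()
                     if 0 < frame_idx - idx <= max_gap
                     and iou(prev, box) >= iou_threshold),
                    None,
                )
                canonical[raw_id] = match if match is not None else len(last_seen)
            antenna_id = canonical[raw_id]
            last_seen[antenna_id] = (frame_idx, box)
            out[antenna_id] = box
        remapped.append(out)
    return remapped
```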
