Week 3: Finding Where We Are

This week, we continued our work on the angle calculations, improved our object detection algorithm, and made progress on the 3D tagging.

In the angle calculation algorithm, the next step after finding and matching keypoints is to use those keypoints for calibration and triangulation. The goal of calibration is to determine where in 3D space each picture was taken from; triangulation then determines where in 3D space each point would need to be for it to appear at its observed location in each image.
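As a rough sketch of what the calibration half of that step could look like, here is one common route in OpenCV: estimate the essential matrix from the matched keypoints and decompose it into the relative rotation and translation between the two views. The keypoint arrays and the intrinsic matrix `K` below are placeholders for illustration, not our actual values.

```python
import numpy as np
import cv2

# Placeholder camera intrinsics (focal length and principal point in pixels).
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])

def relative_pose(pts1, pts2, K):
    """Estimate the rotation and translation of camera 2 relative to camera 1.

    pts1, pts2: Nx2 float arrays of matched pixel coordinates in each image.
    """
    # The essential matrix encodes the relative pose between the two views.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    # Decompose it into a rotation R and a unit-length translation direction t.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```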

When a given point appears in two different locations in two images taken from different perspectives, there is only one spot in 3D space where that point can be. If you know the locations from which the images were taken, you can calculate it.
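Here is a minimal sketch of that calculation, assuming the two camera positions are already known in the form of 3x4 projection matrices `P1` and `P2` (intrinsics times [R | t]); the point arrays are placeholders.

```python
import numpy as np
import cv2

def triangulate(P1, P2, pts1, pts2):
    """Triangulate matched points from two views with known projection matrices.

    P1, P2: 3x4 float projection matrices for each image.
    pts1, pts2: 2xN float arrays of matched pixel coordinates.
    """
    # OpenCV returns homogeneous 4xN coordinates; divide by w to get 3D points.
    points_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
    return (points_h[:3] / points_h[3]).T  # Nx3 array of 3D positions
```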

Traditionally, the location of the camera in each perspective is calculated using the points themselves: based on how much the matched points move between the perspectives, you can infer how the camera moved. But these calculations may not be necessary. Because the drone records metadata on its position and orientation using GPS and gyroscopes, we may be able to use that data directly to determine the camera positions.
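A hedged sketch of how that metadata could be turned into a camera position and orientation directly: convert the GPS coordinates into local metres around some origin, turn the recorded angles into a rotation matrix, and assemble a projection matrix. The field names (latitude, longitude, altitude, yaw, pitch, roll) and the intrinsics `K` are assumptions for illustration, not the drone's actual metadata schema. One practical upside of this route is that GPS positions come in real-world units, whereas a pose recovered from the matched points alone is only known up to scale.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def metadata_to_projection(meta, origin, K):
    """Build a 3x4 projection matrix from (assumed) drone metadata fields."""
    # Rough local tangent-plane conversion from degrees of latitude/longitude to metres.
    lat0, lon0, alt0 = origin
    north = (meta["latitude"] - lat0) * 111_320.0
    east = (meta["longitude"] - lon0) * 111_320.0 * np.cos(np.radians(lat0))
    up = meta["altitude"] - alt0
    cam_center = np.array([east, north, up])

    # Camera orientation from the recorded angles (assumed yaw-pitch-roll, in degrees).
    R_world = Rotation.from_euler(
        "ZYX", [meta["yaw"], meta["pitch"], meta["roll"]], degrees=True
    ).as_matrix()

    # World-to-camera rotation and translation, then the projection matrix K [R | t].
    R = R_world.T
    t = -R @ cam_center
    return K @ np.hstack([R, t.reshape(3, 1)])
```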

[Figure: 3D plot of drone positions according to the image metadata]
[Figure: 2D plot of drone GPS coordinates]

The primary concern with using this data is its potential inaccuracy. As you can see in the plots, the points tend to align unnaturally along certain lines, which suggests the values stored in the metadata are rounded, introducing some amount of error. We won’t know how much this error affects the results until we have some initial output, but if those results need improvement, we will know where to look first.
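For a rough sense of scale, here is a back-of-the-envelope check of how much error rounding alone could introduce (the decimal precisions tested are assumptions, since we have not yet confirmed how the metadata is stored):

```python
# One unit in the k-th decimal place of latitude spans roughly 111 km / 10**k on the
# ground, so rounding can shift a position by up to about half of that.
for decimals in (4, 5, 6):
    span = 111_320.0 / 10 ** decimals
    print(f"{decimals} decimals: ~{span:.2f} m per step, up to ~{span / 2:.2f} m of rounding error")
```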
