This week, we continued our work on the angle calculations, improved our object detection algorithm, and made progress on the 3D tagging.
In the angle calculation algorithm, the next step after finding and matching keypoints is to use those keypoints for calibration and triangulation. The goal of calibration is to determine where in 3D space each picture was taken from; triangulation then determines where in 3D space each matched point would have to be in order to appear at its observed location in both images.
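To make the triangulation step concrete, here is a minimal sketch using OpenCV. The intrinsics, camera poses, and pixel coordinates are placeholders rather than values from our pipeline; in practice the projection matrices would come from the calibration step and the pixel coordinates from our matched keypoints.

```python
import numpy as np
import cv2

# Placeholder camera intrinsics (focal length and principal point in pixels).
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])

# Pose of the second camera relative to the first (first camera at the origin).
R = np.eye(3)
t = np.array([[1.0], [0.0], [0.0]])  # e.g. one unit of sideways motion

# Projection matrices P = K [R | t] for each view.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])

# Matched keypoint locations, one column per point (2 x N), in each image.
pts1 = np.array([[1000.0, 500.0], [1100.0, 520.0]]).T
pts2 = np.array([[1020.0, 500.0], [1120.0, 520.0]]).T

# Each column of the result is one 3D point in homogeneous coordinates;
# divide by the fourth component to get ordinary (x, y, z) positions.
points_4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
points_3d = (points_4d[:3] / points_4d[3]).T
print(points_3d)
```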

Traditionally, the camera's location in each perspective is calculated from the points themselves: based on how much the matched points move between the two views, you can infer how the camera moved. But these calculations may not be necessary. Because the drone records position and orientation metadata from its GPS and gyroscopes, we may be able to use that data directly to determine the camera positions.
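For reference, a minimal sketch of that traditional approach (assuming OpenCV, with synthetic points standing in for our real matches) looks roughly like this: the relative pose of the two cameras is estimated purely from how the matched pixels move, via the essential matrix.

```python
import numpy as np
import cv2

# Placeholder intrinsics; in practice these come from camera calibration.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])

# Synthetic 3D scene and a known camera motion, just so the sketch can run.
rng = np.random.default_rng(0)
world = rng.uniform([-2.0, -2.0, 4.0], [2.0, 2.0, 8.0], size=(50, 3))
R_true, _ = cv2.Rodrigues(np.float64([[0.0], [0.1], [0.0]]))  # slight yaw
t_true = np.float64([[0.5], [0.0], [0.0]])                    # sideways shift

def project(points, R, t):
    """Project world points into pixel coordinates for a camera with pose R, t."""
    cam = (R @ points.T + t).T                    # world -> camera frame
    uv = cam[:, :2] / cam[:, 2:3]                 # perspective divide
    return (uv * [K[0, 0], K[1, 1]] + [K[0, 2], K[1, 2]]).astype(np.float32)

pts1 = project(world, np.eye(3), np.zeros((3, 1)))   # stand-ins for our matches
pts2 = project(world, R_true, t_true)

# Estimate the essential matrix from the point motion alone (RANSAC discards
# bad matches), then decompose it into a relative rotation and translation.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R_est, t_est, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

print("Estimated rotation:\n", R_est)
print("Estimated translation direction (scale unknown):\n", t_est)
```

One detail worth noting: the translation recovered this way is only a direction, since the overall scale cannot be determined from the images alone.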


The primary concern with using this data is the potential for inaccuracy. As you can see in the images, the points tend to align along certain lines unnaturally, which suggests that the values stored in the metadata are rounded, introducing some amount of error. We won't know how much this error affects the results until we have some initial output, but if those results need improvement, this gives us a clear area to address.
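As a rough, hypothetical illustration of the rounding concern (the actual precision of the drone's metadata is something we still need to measure), coordinate rounding translates into position error roughly as follows:

```python
import math

# Worst-case error from rounding latitude/longitude to a fixed number of
# decimal places: half of the last retained digit, converted to metres.
# The number of decimal places and the example latitude are assumptions.
METRES_PER_DEG_LAT = 111_320.0  # approximate length of one degree of latitude

def rounding_error_metres(decimal_places, latitude_deg=40.0):
    """Worst-case north/south and east/west error from rounded coordinates."""
    step = 0.5 * 10 ** (-decimal_places)
    north_error = step * METRES_PER_DEG_LAT
    east_error = step * METRES_PER_DEG_LAT * math.cos(math.radians(latitude_deg))
    return north_error, east_error

for places in (4, 5, 6):
    n, e = rounding_error_metres(places)
    print(f"{places} decimal places -> ~{n:.2f} m north/south, ~{e:.2f} m east/west")
```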