Author: Will McCoy
After considering the budget constraints from the previous week, I focused on one of the most expensive subcomponents: the camera array. I realized that we could reduce costs if we designed and fabricated the array ourselves instead of buying COTS cameras and connecting them. Through my research this week and last week, I have become familiar enough with cameras to know that their core component is the image sensor, so I set out to identify the image sensors used in common commercial embedded cameras. I found that many use sensors from OmniVision, such as the OV4689, the OV5640 (Adafruit Camera Breakout), and the OV5647 (Raspberry Pi Camera Module 1). However, newer Raspberry Pi Camera Modules use sensors from Sony, such as the IMX219 (Raspberry Pi Camera Module 2) and the IMX708 (Raspberry Pi Camera Module 3). This gives us a good basis for investigating and evaluating other sensors, though it is quite possible that we will end up using one of these in our design.
Once we identify the sensors, it will be important to determine how best to arrange them. As seen in my last post, I created a graphic to visualize the overlapping FOVs (fields of view) in a circular camera array. Making that graphic was time-consuming, yet the visualization will be important for evaluating every camera array under consideration. Therefore, this week I automated the graphic-generation process, using SFML to create an interactive environment for exploring array FOVs. Below is a screenshot of the output for a circular array of cameras. Note that any 2D configuration can be evaluated easily with this technique.

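To give a sense of how such a tool might be structured, here is a minimal sketch, assuming SFML 2.x, that draws the FOV cones of cameras arranged on a circle. The camera count, per-camera FOV angle, array radius, and drawing range are illustrative placeholders rather than the values from my actual tool, and the interactive parameter controls are omitted for brevity.

```cpp
// Minimal FOV-visualization sketch (SFML 2.x). Parameters below are
// illustrative assumptions, not the project's actual array values.
#include <SFML/Graphics.hpp>
#include <cmath>

int main()
{
    const float PI = 3.14159265f;
    const int   numCameras  = 8;     // assumed number of cameras in the ring
    const float arrayRadius = 60.f;  // px, radius of the circular mount
    const float fovDeg      = 70.f;  // assumed per-camera horizontal FOV
    const float range       = 300.f; // px, how far to draw each FOV cone
    const sf::Vector2f center(400.f, 400.f);

    sf::RenderWindow window(sf::VideoMode(800, 800), "Circular array FOV sketch");

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        window.clear(sf::Color::Black);

        for (int i = 0; i < numCameras; ++i)
        {
            // Each camera sits on the mounting circle and faces radially outward.
            float heading = 2.f * PI * i / numCameras;
            sf::Vector2f camPos = center
                + arrayRadius * sf::Vector2f(std::cos(heading), std::sin(heading));

            // Approximate the FOV cone as a fan of points along an arc.
            const int arcPoints = 20;
            sf::ConvexShape cone(arcPoints + 1);
            cone.setPoint(0, camPos);
            float halfFov = fovDeg * PI / 360.f; // half the FOV, in radians
            for (int j = 0; j < arcPoints; ++j)
            {
                float a = heading - halfFov + (2.f * halfFov) * j / (arcPoints - 1);
                cone.setPoint(j + 1, camPos
                    + range * sf::Vector2f(std::cos(a), std::sin(a)));
            }
            // Translucent fill so overlapping FOVs stand out visually.
            cone.setFillColor(sf::Color(0, 150, 255, 60));
            window.draw(cone);
        }

        window.display();
    }
    return 0;
}
```

Because each cone is drawn with a translucent fill, regions covered by multiple cameras render more saturated, which makes the overlaps between adjacent FOVs immediately visible.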