This week, as we became better acquainted with each other and our abilities, we were finally able to draft our team charter and assign team roles. Here is how we decided to initially architect our team:
With roles assigned, we began work on our first major group assignment: the Product Design Specification (PDS). This is a document that enumerates the project requirements, associates each with a quantitative specification, and uses a technique called a House of Quality to prioritize those specifications. The prioritization yields the Technical Product Measures (TPMs): the most important specifications to track. This system provides an organized framework for ensuring all expectations are met (and that none are missed). Stay tuned this year to watch our plans in the PDS become reality.
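To make the House of Quality idea a little more concrete, here is a minimal sketch of the prioritization math in Python. The requirements, specifications, importance values, and relationship strengths below are made up for illustration (not our actual PDS data): each specification's score is the importance-weighted sum of how strongly it relates to each requirement, and the highest-scoring specifications become the TPMs.

```python
# A minimal sketch of House of Quality prioritization, using made-up
# requirements, specifications, and weights (not our actual PDS data).

# Customer requirements and their relative importance (1-5).
requirements = {
    "Detects approaching vehicles early": 5,
    "Alerts workers quickly": 4,
    "Easy to deploy at a worksite": 3,
}

# Relationship strength between each quantitative specification and each
# requirement (0 = none, 1 = weak, 3 = moderate, 9 = strong).
relationships = {
    "Detection range (m)": {
        "Detects approaching vehicles early": 9,
        "Alerts workers quickly": 3,
        "Easy to deploy at a worksite": 0,
    },
    "Alert latency (ms)": {
        "Detects approaching vehicles early": 3,
        "Alerts workers quickly": 9,
        "Easy to deploy at a worksite": 0,
    },
    "Setup time (min)": {
        "Detects approaching vehicles early": 0,
        "Alerts workers quickly": 0,
        "Easy to deploy at a worksite": 9,
    },
}

# Each specification's priority is the importance-weighted sum of its
# relationship strengths; the top-scoring specifications become the TPMs.
priorities = {
    spec: sum(requirements[req] * strength for req, strength in rels.items())
    for spec, rels in relationships.items()
}

for spec, score in sorted(priorities.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{spec}: {score}")
```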
This week, the Road Watch team met in person for the first time at the Tuesday IPPD class. After becoming acquainted, we took headshots, prepared for our first meeting with our sponsor (FPL), and discussed the project’s Scope of Work (SoW). Over the course of this year, we will be building a system that alerts roadway workers when a vehicle looks like it will collide with them. Building this system will certainly be a non-trivial task, and it will require expertise in mechanical, electrical, and computer engineering; fortunately, we have all of that talent on our team. We are already considering solutions, including LiDAR and computer vision, though we will need to spend much more time researching before we can make any definitive decisions. We look forward to updating you on our progress over the coming year. Please stay tuned.
This week feels like the first real week of IPPD. After being postponed by Hurricane Idalia and working through a lot of the boilerplate assignments at the start of the course, I finally got to meet my team. My first impression is incredibly positive; everyone seems to be a proficient engineer, and I look forward to working with them on this project. Another reason this feels like the first week is that the workload of the course is finally setting in. The best I can do is take everything one week at a time. I’ll do my best to keep you updated here along the way.
The product we are working on is the Florida Power and Light (FPL) Maintenance of Traffic (MoT) Sensor. This is a system that will be placed within a roadway worksite. It will scan the road for traffic that looks like it will crash into the worksite; if it detects this, it will alert the workers in the site. When I heard about this project on team selection day, it was my first choice, because it seemed like the most interesting application of electrical engineering. My first-impression solution was to use a circular phased array of mmWave radar to detect the approaching traffic. However, FPL outlined in its Scope of Work (SoW) that they would like us to use LiDAR and image recognition (I interpret this as computer vision). I feel comfortable with LiDAR; I understand how the technology works, and I feel like if I sat down with the hardware I could have something working in about a weekend. However, I am much more skeptical of the computer vision aspect, because much of it is built on top of machine learning. I haven’t had the best experiences with ML projects in the past, and I’ve found that ML solutions produce too many false positives and false negatives. However, I will keep an open mind for this project, and hopefully I’ll learn something about computer vision along the way.
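To make "detecting traffic that looks like it will crash into the worksite" a little more concrete, here is a rough Python sketch of the kind of geometric check such a system might perform. This is my own illustration with made-up numbers and thresholds, not a design decision: given a tracked vehicle's position and velocity (for example, from LiDAR), estimate how soon its current straight-line path would enter a circular worksite zone, and raise an alert if that time is short.

```python
# Rough sketch (illustration only): project a tracked vehicle's straight-line
# path and estimate the time until it enters a circular worksite zone.

import math

def time_to_worksite(vehicle_pos, vehicle_vel, worksite_center, worksite_radius):
    """Return the time (s) until the vehicle enters the worksite zone,
    or None if its current straight-line path never enters it."""
    # Vector from the vehicle to the worksite center.
    dx = worksite_center[0] - vehicle_pos[0]
    dy = worksite_center[1] - vehicle_pos[1]
    vx, vy = vehicle_vel

    speed_sq = vx * vx + vy * vy
    if speed_sq == 0:
        return None  # vehicle is stationary

    # Solve |pos + t*vel - center|^2 = radius^2 for the smallest t >= 0.
    a = speed_sq
    b = -2 * (dx * vx + dy * vy)
    c = dx * dx + dy * dy - worksite_radius ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # path misses the zone entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None

# Hypothetical example: a vehicle 80 m away closing at 25 m/s toward a
# worksite zone of radius 10 m.
t = time_to_worksite((-80.0, 2.0), (25.0, 0.0), (0.0, 0.0), 10.0)
if t is not None and t < 3.0:  # 3 s alert threshold is an arbitrary placeholder
    print(f"ALERT: vehicle projected to enter the worksite in {t:.1f} s")
```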
An image I drew to explain to my team how LiDAR works, and how we could combine it with cameras to scan an environment. Note the circular spinning LiDAR module in the center, the circular array of cameras surrounding it, and the objects in the scene to be detected.