This week we put all of our effort into setting up a server to handle all of our API fetches and, eventually, the image processing algorithm. We were able to set this up while maintaining compatibility with the app. From here, we are going to improve its security by implementing encryption and authentication.
We have also begun work on the image processing algorithm, using some existing Python libraries. So far, we have been able to detect edges in the radar images and ignore most of the unnecessary detail. Our next steps are to find the centers of some of these clusters and to calculate the distance each one travels from one radar image to the next. This gives us some basic radar processing; from there we can tweak the parameters and continue to improve its effectiveness.
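As a rough illustration of the two steps above (edge detection, then finding a cluster's center), here is a minimal NumPy sketch. The synthetic array, the threshold value, and the helper names are all assumptions for demonstration; our actual pipeline uses existing Python imaging libraries and real radar frames.

```python
import numpy as np

def edge_magnitude(img):
    """Approximate edge strength with simple finite differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def cluster_centroid(img, threshold):
    """Centroid (row, col) of all pixels above an intensity threshold."""
    ys, xs = np.nonzero(img > threshold)
    if len(ys) == 0:
        return None
    return ys.mean(), xs.mean()

# Synthetic "radar frame": one bright 10x10 storm cell on a dark background.
frame = np.zeros((64, 64))
frame[20:30, 40:50] = 1.0

edges = edge_magnitude(frame)                     # bright along the cell's border
center = cluster_centroid(frame, threshold=0.5)   # (24.5, 44.5) for this cell
```

Thresholding before taking the centroid is what lets us ignore the faint background detail and track only the strong precipitation cores.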
We’ve been working diligently on our radar implementation. We are pretty much set on doing some form of image processing, since most weather APIs give us radar images rather than the raw data. This isn’t a problem, though, because they provide several time steps, so with some basic image processing we can find the direction and speed of most weather events.
We’ve started to make progress in accessing the radar information. So far, we’ve been able to fetch radar data from the API and display it in an HTML page. From here, we’re going to incorporate this functionality into the app itself, expand its capabilities, and begin to implement an image processing algorithm on it.
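A display page like the one described can be generated with a few lines of Python. The radar endpoint URL below is hypothetical (the real provider's URL and parameters will differ); the point is just that a static page referencing the radar image is enough for a first test.

```python
from pathlib import Path

# Hypothetical radar tile endpoint -- the real provider URL will differ.
RADAR_URL = "https://api.example-weather.com/radar/latest.png"

def radar_page(url, title="Latest radar"):
    """Return a minimal HTML page that displays a radar image from `url`."""
    return f"""<!DOCTYPE html>
<html lang="en">
  <head><meta charset="utf-8"><title>{title}</title></head>
  <body>
    <h1>{title}</h1>
    <img src="{url}" alt="Radar reflectivity image for the local area">
  </body>
</html>"""

Path("radar.html").write_text(radar_page(RADAR_URL))
```

Note the `alt` text on the image: even in this throwaway test page, keeping a text description alongside the radar graphic is in line with the accessibility goals of the project.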
We have also decided to develop a server to handle most of the API requests, do all of the image processing, and store the API key, so that the app is more secure and runs faster. We are working on this server now. From here, we just need to finish setting up the server and continue to work on the radar.
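The key-handling part of this design can be shown in a small sketch: the app calls our server with no credentials, and the server attaches the secret key when forwarding the request upstream. The upstream host, the `appid` parameter name, and the key value here are all placeholder assumptions, not the real service.

```python
from urllib.parse import urlencode

# Hypothetical upstream weather API; real host and parameter names will differ.
UPSTREAM = "https://api.example-weather.com"
API_KEY = "stored-on-the-server-only"   # never shipped inside the app binary

def upstream_url(path, params):
    """Build the real API URL, attaching the secret key server-side.

    The app sends requests to our server without any key; the server adds
    it here, so the key never appears in the client or in app traffic.
    """
    query = dict(params, appid=API_KEY)
    return f"{UPSTREAM}{path}?{urlencode(query)}"

# e.g. the app asks our server for /radar?lat=28.06&lon=-82.41,
# and the server forwards it upstream as:
url = upstream_url("/radar", {"lat": "28.06", "lon": "-82.41"})
```

This is also where the speed win comes from: the server can cache upstream responses and pre-process radar images once, instead of every phone doing that work itself.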
We completed our QRB1 this week, which gave us a lot of valuable feedback to improve our project. Much of the feedback concerned parts of the project we had not stated clearly: each team member’s explicit role, how we are going to test the app during the testing phase, and how we are going to translate the radar data into a non-visual format. We have already begun to make these points much clearer for the next presentation. Also, the Gantt chart we used in the presentation did not have the months labeled well and was a bit disorganized. We have another, more detailed Gantt chart that we left out of the presentation because we thought it would be too cluttered; we are adjusting it now for use in future presentations.
We are going to continue to improve our presentations and certain parts of our project. At this point, we are fully in development and are expanding the functionality of the project. Our main task at hand is to start implementing our radar algorithm, and that begins with meeting as a team and finalizing our ideas.
The team has been steadily working on the app. We’ve fully integrated the hourly and daily forecast data from the weather API we’re using. Our app now displays highs and lows for the day and updates in real time. This puts us in a nice spot where we can focus our attention on the radar data.
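The highs and lows the app shows reduce down to a simple pass over the hourly data. The JSON shape and field names below are illustrative assumptions; each weather provider names these fields differently.

```python
# Hypothetical shape of one day's hourly forecast; field names vary by provider.
hourly = [
    {"time": "09:00", "temp_f": 61},
    {"time": "12:00", "temp_f": 74},
    {"time": "15:00", "temp_f": 78},
    {"time": "18:00", "temp_f": 69},
]

def daily_high_low(hours):
    """Reduce one day's hourly temperatures to the high/low the app displays."""
    temps = [h["temp_f"] for h in hours]
    return max(temps), min(temps)

high, low = daily_high_low(hourly)   # (78, 61)
```

Re-running this reduction whenever fresh hourly data arrives is what keeps the displayed high/low current in real time.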
For this week, however, we are shifting our attention to next week’s QRB1. We have been discussing what information to include and have begun finalizing it. We’re feeling confident about our project and are eager to present. We look forward to the feedback on how to further improve the work we’ve done.
The Spring semester has just started and the team is eager to continue working! It was great to meet with the team, liaison, and coach again.
Things are already off to a good start. We’ve started working with a weather API that offers hourly and daily forecasts for free, so we’ve begun to integrate it into our app. The process looks to be pretty straightforward. Once this is done, we’re going to shift our focus to the radar aspect of the project. Translating the visual radar information into a non-visual format is going to be our biggest challenge in this project, so we’re going to tackle it ASAP.
This week the team gave their SLDR presentation, and it went pretty well. We were finally able to showcase our prototype to the sponsor company. We’re all pretty proud of the work we’ve done and the research we’ve conducted. The presentation went smoothly, and we got some great questions from everyone. These helped us learn which aspects of the project we should emphasize more and develop further. For example, we are now going to start incorporating more haptic feedback, since this seems to be a strong area of interest.
Alas, we have reached the end of the semester. This semester was full of firsts for us, and we have all grown as professionals and as developers. We’ve all really enjoyed this semester and working with Freedom Scientific for the first half of the program. We are all very eager to continue developing the project and bring the vision to life.
The semester is coming to a close, but we are still working hard to finish strong. This week, we had our peer review for our SLDR presentation. We got some great feedback on it, and a lot of it is really going to improve the quality of our presentation. Perhaps our biggest shortcoming was that some of our slides and diagrams were monochromatic and crowded, so we are promptly adjusting them to make the information more fun and interesting to look at.
We are finishing up our final SLDR report as well, and we’ve begun revising it based on the feedback from our coach. We are quite proud of the work we’ve done this semester, and we’re all quite eager to continue working. Right now, our biggest hurdle is still to find the right weather API to incorporate into our project. However, we have communicated with a few of the companies and have narrowed our choices down to a couple of APIs. Our goal is to have this finalized and implemented in some way in our app by the end of winter break.
This week the team presented their initial prototype to a group of 6 coaches. Our prototype consisted of 3 screens that can be navigated through a navigation bar at the bottom of the screen. Our first screen was a forecast screen that showed the current, hourly, and daily forecasts. Our second screen showed a radar image and displayed text describing what was on the image. Our last screen showed some severe weather alerts for the area. All of the information in our prototype was dummy data; none of it was dynamic or programmatic. We entered it all manually for the sake of demonstration.
Each of the components in our app had alternative text that would be read out when VoiceOver or TalkBack was enabled on the device (these are the screen reader and accessibility functions for iOS and Android, respectively). This allowed us to show how accessible our app would be to users who are blind or have low vision. We also structured the components on each screen so that, traversing them linearly from the top, you get the most relevant and important information first, with relevance gradually decreasing. This ensures that users with these accessibility features enabled do not have to search the screen for the information they want; instead, they can just start from the top and move forward.
Our app also adapts to the display settings of the device, so it changes depending on if the device is set to dark or light mode. The following images are screenshots from our app:
We were really excited to show off our prototype, and we got some really useful feedback about how to improve it. We are eager to keep working on this and bringing the vision to life. Our next steps are to continue to add functionality to the app and bring in some of the weather API data.
This week we focused our time primarily on Minor Report 2. We split up the work amongst ourselves and tackled each component. Once we each finished our parts, we sent it over to our coach for feedback. From there, we just needed to revise it, compile it all together, and proofread it once more. We are currently compiling it to submit tonight.
Once this is completed, the team is going to dedicate their time to fleshing out the prototype. We are going to meet this weekend to get our vision straight, and then we are going to work on each of the different components. Our focus is on having a navigable and elegant UI that accommodates accessibility features and the accessibility design guidelines we researched previously. This is all in preparation for our prototype showcase this upcoming Tuesday.
We all did quite a bit this week. Firstly, we continued to contact weather API companies and are awaiting their responses. Secondly, our coach wanted each of us to implement accessibility features into the app we made last week so we could show it off to her in our weekly meeting, and we each adjusted our apps to meet this. We found that React Native has some built-in features to accommodate accessibility. The main one we implemented was the ability to add alternative text to our components, so that when VoiceOver or TalkBack is enabled on the smartphone, the screen reader reads our custom text whenever the user focuses on a component. This feature is crucial to bringing our vision for this app to life.
We also met early in the week to work out a semi-final design layout for the app. We had some fun trying to come up with an idea that meets all of our design principles while still maintaining a pleasant user experience. Our initial designs are very crude, but they will help us polish up our ideas. We will continue to build on them, and we will make a formal app design diagram soon. The main takeaway from this design is the corner buttons at the top. We are still deciding whether the functionality of these buttons will stay static, or whether it will change depending on which screen they are on.
As we started working on this design, we realized that we still had a few questions about how users prefer to use their apps. We cut our brainstorming session short to design a questionnaire to send to our liaison so that he can distribute it to blind and low vision users. The questionnaire focused on which navigational design features blind and low vision users prefer, and which features they think are missing in most apps. We will use this data to develop our design further.
Finally, we have begun splitting up the tasks for Minor Report 2. We hope to finish a draft of the report early in the week so that we have ample time to improve it and get approval from our coach. We would also like to continue working on our prototype.