Blog Posts

All done! (4/23/21)

We did it! We delivered our FDR presentation, and we think it went really well. We practiced the presentation quite thoroughly and got some last-minute feedback before presenting. All in all, the whole FDR event was very fun and very rewarding. We really enjoyed meeting each other in person for the first time, as well as meeting our awesome coach. There is still some work to be done before we're completely finished, though.

We are currently making the final revisions to the FDR report. Our coach and liaison have given us extensive critiques that have improved the report significantly. Once we are done, we will send it to the coach and liaison for their signatures, and then submit it. We are also commenting all of our code and finalizing the code documentation. This includes inline/block comments, a README file for the GitHub repository, and documentation that explains how the code works. This ensures a smooth handoff to the sponsor company so they can build on our work. All in all, IPPD has been an amazing experience, and we have all grown so much as developers, presenters, writers, and professionals.

Preparing for the FDR (4/16/21)

The team has been hard at work finalizing the poster, video, FDR report, and FDR presentation. Each team member has taken on a different role for each deliverable, and we have been gradually joining everything together. This week we completed our draft of the FDR and have already received feedback from our coach on things to improve, which we are currently incorporating. We have also begun receiving feedback on the poster and are updating it as well. We will continue to improve each of these deliverables as more feedback comes in.

This week we also presented our FDR presentation to our peers. This went well, and the audience enjoyed it. As with our other peer-review presentations, we received feedback from everyone, and we will incorporate their comments before the final presentation. We've also been improving the application and making last-minute changes before we finish. The users who have been testing our app are giving us great feedback, and it has helped the application improve significantly.

Next week is our final FDR presentation. The team is very eager to show off all the work we have done over the last two semesters. We are quite proud of our project, and we look forward to presenting it to everyone.

Project video: https://web.microsoftstream.com/video/cdd24fb4-68a6-42c1-abfc-2512b257f12b

Nearing the end! (4/9/21)

The main highlight of this past week was that we officially deployed our app! With it we included a user manual describing the app's features, as well as a survey to gather user feedback. We distributed the app to Android users through the Expo client and to iOS users through TestFlight. TestFlight has a useful feature that lets users send screenshots of any bugs or issues they find back to the app's developers. We've sent these documents to our liaison, and he has already forwarded them to blind and low-vision people. We expect to have our first results in the coming days, and we will immediately take the feedback into consideration and work to resolve any issues. We are all very excited to see what members of the blind and low-vision community think about our app, and we hope that it can be useful to them.

Other than that, we are currently drafting our FDR report. We've delegated the FDR's sections among the team; as we complete our sections, we'll combine them with everything else we've written and revise as we go.

Finally, this week we also created our initial video pitching our project. We are really happy with how it turned out, and we received a lot of good comments when we presented it to the rest of the program. A few things could use some tweaking, and we will continue to improve it before the final video. Our next steps are to finish writing the FDR, continue improving the app, and tweak our poster and videos. Then comes the FDR event!

User testing and working on FDR (4/2/21)

We are nearing the end of this project. This week we started drafting the project poster that will be shown at the FDR event. In tandem, we've started drafting a storyboard for our video. This video is crucial for conveying the goal, evolution, and outcome of the project to people unfamiliar with it. We are making sure that the video flows well, addresses all major topics, and explains things well enough that a completely unfamiliar viewer can understand the problems that come with designing weather apps for blind and low-vision people, as well as how we went about solving them. Along with this, we are beginning to draft the FDR itself. We have delegated different sections to each member of the team and will compile them as they are completed.

On the project side, we've been ironing out some of the bugs the app has when running on Android devices. Some parts of the app don't display properly when rendered on Android, and a few functions we were using are not supported on Android. We've replaced those functions with equivalent ones and modified the UI so that it is consistent across mobile OSes. The app is nearly in a fully functional, deployable state.
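
Many of these fixes come down to React Native's platform checks. Here is a simplified, illustrative example of the pattern; the specific style values are made up, not ours:

```typescript
import { Platform, StyleSheet } from "react-native";

// Simplified example of the pattern: branch per OS wherever a single
// implementation doesn't render consistently on both platforms.
// The values below are illustrative.
const styles = StyleSheet.create({
  header: {
    paddingTop: Platform.OS === "ios" ? 40 : 24,
    ...Platform.select({
      ios: { fontFamily: "Helvetica" },
      android: { fontFamily: "Roboto" },
    }),
  },
});
```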

In the next day or two, we will finish drafting a user manual and a user experience survey to be sent out alongside the application. These three components will be sent to our liaison to be distributed to blind and low-vision people for hands-on testing. The user manual will show users how the application can be used and what functionality it includes. The survey will gather data on any problems users face while using it, along with any recommendations they have for improvements. As this feedback comes in, we will immediately incorporate the suggestions and improve the app.

Deploying the application (3/26/21)

This week we presented our prototype to a group of IPPD faculty. In preparation, we incorporated the feedback we got last week from the students, which included having both a prerecorded and a live demo of the application. This went over well with the IPPD faculty, who told us that these additions added a lot of clarity and interest to the presentation. They were all very happy with our progress and quite impressed with what we've done so far.

Along with these positive comments, we received some criticisms, which were expected and welcome. The main point of concern was that we haven't yet had any feedback or testing from members of the blind and low-vision community. This is a valid concern, since the semester ends in a month and this is a crucial step. However, we are confident that we will be able to deploy a prototype, receive feedback, and implement any necessary changes to improve the experience. We will have a deployable version of the application done by tomorrow (3/27/21), and we will send it out, along with a TAM and feedback survey, to our liaison to be distributed to blind and low-vision users.

Preparing for deployment and user testing (3/19/21)

This week was quite a useful one for us. We had our practice prototype demo in front of other IPPD students, and we received some useful feedback and criticism about our project. It helped to have audience members without professional experience in the field of study, since that gave us another perspective on our project. It also helped to have a bigger audience than we usually have when presenting to IPPD faculty and staff. The feedback we got was more diverse and covered parts of our project that normally do not get much attention. We are already working on implementing this feedback and improving both our project and our presentation.

We are now preparing for the prototype demo in front of IPPD faculty and staff this upcoming week. We are feeling quite confident in our project and are very excited to show it off. We are adding the last few major features to fully flesh out the vision we had in mind from the beginning: perfecting the image processing algorithm, improving the UI, and adding some quality-of-life settings and design elements so that it looks more complete. We will have these features implemented by Tuesday to show off in our presentation. After that, we will make some last minor tweaks and then deploy. Freedom Scientific already has a list of people in the blind and low-vision community who are eager to test out the app.

Finishing the prototype (3/12/21)

This week we created a presentation to update members of Freedom Scientific on the progress of the project. We were quite excited to show off our image processing algorithm in action. The presentation went quite well: they were impressed with the progress we've made, but they also offered some noteworthy critiques to help us continue to refine the app. We are going to incorporate all of this feedback into our comprehensive presentation to them at the end of the semester.

Since then, we've been focusing on testing the components we've finished, constructing unit tests and integration tests in parallel. We want the eventual prototype build to be as functional and bug-free as possible. We are also continuing to add more settings to the settings screen, and we will begin revamping the UI so that it is as accessible as possible. Finally, we are continuing to tweak the radar screen and how we word the radar data so that it is succinct but informative. We will consult some meteorologists for advice on the best way to word the data.
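
For a flavor of what these tests look like, here is a minimal sketch in the style of a Jest unit test (Jest is the runner commonly used with Expo apps). The helper under test is purely illustrative, not one of our real functions:

```typescript
// Minimal sketch of a unit test; the helper under test is illustrative.
function formatTemperature(celsius: number): string {
  return `${Math.round(celsius)} degrees Celsius`;
}

describe("formatTemperature", () => {
  test("rounds to the nearest whole degree", () => {
    expect(formatTemperature(21.6)).toBe("22 degrees Celsius");
  });

  test("handles temperatures below zero", () => {
    expect(formatTemperature(-3.2)).toBe("-3 degrees Celsius");
  });
});
```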

Getting the app all together! (3/5/21)

We are nearing the end of initial development for the complete app. The pieces are being finalized, and many of the components are being integrated. The image processing algorithm is functionally complete but needs a few minor bug fixes. The algorithm now identifies weather events from the tile images; finds their centers, distances, and angles relative to the user; calculates their rainfall/snowfall rate; and packages all of this up as JSON to send to the server. The server is able to pass this data on to the app, and we are now working on turning the information into language for the user.
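
To make that pipeline concrete, here is a rough sketch of what the per-event payload could look like, written as TypeScript interfaces. The field names and units are our illustration here, not the app's exact schema:

```typescript
// Illustrative sketch of the per-event payload; the actual field names
// and units in our JSON may differ.
interface WeatherEvent {
  type: "rain" | "snow";     // classified from the radar color legend
  centerLat: number;         // latitude of the event's center
  centerLon: number;         // longitude of the event's center
  distanceKm: number;        // distance from the user's location
  bearingDeg: number;        // angle relative to the user (0 = north)
  intensityMmPerHr: number;  // estimated rainfall/snowfall rate
}

// The server responds with the list of events near the user.
interface RadarResponse {
  generatedAt: string;       // timestamp of the radar frame (ISO 8601)
  events: WeatherEvent[];
}
```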

The radar screen and weather alerts screen are coming together as well. As mentioned above, the radar screen is currently gaining the ability to translate the JSON data into English text. It will also display the real-time radar image for low-vision or sighted users who would like it. The weather alerts screen is in its last few steps: the app can already fetch weather alerts from the API, so it is just a matter of improving the UI and formatting.
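
Here is a rough sketch of how that translation could work, reusing the WeatherEvent shape sketched above and using Expo's expo-speech module for the read-aloud step. The phrasing is a draft for illustration, not the app's final wording:

```typescript
import * as Speech from "expo-speech"; // Expo's text-to-speech module

// Sketch: turn one parsed event into a spoken-friendly sentence,
// reusing the WeatherEvent shape from the sketch above.
function describeEvent(e: WeatherEvent): string {
  const kind = e.type === "rain" ? "Rain" : "Snow";
  return `${kind} about ${Math.round(e.distanceKm)} kilometers to the ` +
    `${bearingToCompass(e.bearingDeg)}, falling at roughly ` +
    `${e.intensityMmPerHr.toFixed(1)} millimeters per hour.`;
}

// Map a bearing in degrees (0 = north) to eight compass directions.
function bearingToCompass(deg: number): string {
  const names = ["north", "northeast", "east", "southeast",
                 "south", "southwest", "west", "northwest"];
  return names[Math.round((((deg % 360) + 360) % 360) / 45) % 8];
}

// Example: read one (made-up) event aloud.
Speech.speak(describeEvent({
  type: "rain", centerLat: 29.65, centerLon: -82.32,
  distanceKm: 12.4, bearingDeg: 225, intensityMmPerHr: 3.2,
}));
```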

The settings screen is nearing completion. Most of the desired app customizations are in place, and we are now fixing the bugs. Some of these settings can only be tested once some of the other screens are completed, so this task is currently blocked.
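
For illustration only, a settings screen like this could persist its choices with AsyncStorage. Everything below (the settings shape, the storage key, the helpers) is hypothetical, not necessarily how our screen is implemented:

```typescript
import AsyncStorage from "@react-native-async-storage/async-storage";

// Hypothetical shape of the app's settings; the real options differ.
interface AppSettings {
  speechRate: number;            // how fast descriptions are read aloud
  units: "metric" | "imperial";  // measurement system for distances/rates
  highContrast: boolean;         // higher-contrast UI for low-vision users
}

const SETTINGS_KEY = "app-settings"; // hypothetical storage key

// Persist the settings as a JSON string.
async function saveSettings(s: AppSettings): Promise<void> {
  await AsyncStorage.setItem(SETTINGS_KEY, JSON.stringify(s));
}

// Load previously saved settings, or null on first launch.
async function loadSettings(): Promise<AppSettings | null> {
  const raw = await AsyncStorage.getItem(SETTINGS_KEY);
  return raw ? (JSON.parse(raw) as AppSettings) : null;
}
```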

On Tuesday of next week we are presenting our progress to Freedom Scientific, and we will be demoing our app. We are all really excited and proud of the work we've done. We can't wait to impress the company with how far our application has come, and we are eager to get it into the hands of members of the blind and low-vision community to get their impressions.

Getting close to finishing image processing! (2/26/21)

We are nearing the end of the image processing algorithm! If you recall, we've been able to fetch weather radar images from our API and perform basic edge detection on them. Since then, we've fixed some of the bugs we were having, and the algorithm can now detect “blobs” on the screen (aka rain or snow clouds) and find the “center” of each blob. These are major steps toward finishing the algorithm, since we can now programmatically find the points of interest in the picture. We can also calculate the intensity of each of these events and whether they are rain or snow clouds. From here, we are going to package the most critical information into a data structure to send back to the application for the last bit of processing: turning the information into language for the user to listen to, and calculating the distances of these events relative to the user's specified location.
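
For the curious, here is a simplified sketch of the blob-finding idea, assuming the tile image has already been decoded into a 2D grid of precipitation intensities. Our actual implementation differs in its details:

```typescript
// Simplified sketch of blob detection over a decoded radar tile,
// where grid[r][c] is precipitation intensity (0 = none).
type Point = { row: number; col: number };

function findBlobs(grid: number[][], threshold = 0): Point[][] {
  const rows = grid.length, cols = grid[0].length;
  const seen = grid.map(r => r.map(() => false));
  const blobs: Point[][] = [];

  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      if (seen[r][c] || grid[r][c] <= threshold) continue;
      // Flood-fill one connected region of precipitation pixels.
      const blob: Point[] = [];
      const stack: Point[] = [{ row: r, col: c }];
      seen[r][c] = true;
      while (stack.length > 0) {
        const p = stack.pop()!;
        blob.push(p);
        for (const [dr, dc] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
          const nr = p.row + dr, nc = p.col + dc;
          if (nr >= 0 && nr < rows && nc >= 0 && nc < cols &&
              !seen[nr][nc] && grid[nr][nc] > threshold) {
            seen[nr][nc] = true;
            stack.push({ row: nr, col: nc });
          }
        }
      }
      blobs.push(blob);
    }
  }
  return blobs;
}

// The "center" of a blob is just the mean of its pixel coordinates.
function blobCenter(blob: Point[]): Point {
  const sum = blob.reduce(
    (a, p) => ({ row: a.row + p.row, col: a.col + p.col }),
    { row: 0, col: 0 });
  return { row: sum.row / blob.length, col: sum.col / blob.length };
}
```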

Progress continues on the other aspects of the application as well. We've been developing and applying more unit tests to make sure everything works as intended. The settings screen is progressing toward our vision of the desired customizable features. The server is fully set up and running, and only needs some further tweaking to run the image processing algorithm and send the results back to the application. Once these pieces are in place, the app will be in an alpha state ready for user feedback. We will continue to fix bugs and refine the UI design.
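
As a sketch of the kind of endpoint that tweaking involves (Express is used purely for illustration here, and the route and helper names are hypothetical):

```typescript
import express from "express"; // Express is used purely for illustration

// Stand-in for the real image-processing pipeline.
async function processRadarTiles(lat: number, lon: number): Promise<object[]> {
  return []; // the real version returns the detected weather events
}

const app = express();

// Hypothetical endpoint: given the user's location, run the image
// processing over the latest radar tiles and return the event list.
app.get("/radar-events", async (req, res) => {
  const lat = Number(req.query.lat);
  const lon = Number(req.query.lon);
  if (Number.isNaN(lat) || Number.isNaN(lon)) {
    res.status(400).json({ error: "lat and lon query params are required" });
    return;
  }
  const events = await processRadarTiles(lat, lon);
  res.json({ generatedAt: new Date().toISOString(), events });
});

app.listen(3000);
```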

Interpreting radar information (2/19/21)

The team has been actively fleshing out the radar data interpretation algorithm, and we've made some decent strides in processing radar images. For one, we can now eliminate noise and study only the key weather events on the radar image. We can also outline these events and find points that lie inside them. The images are being fetched from the API, so with a little more sophistication we'll be able to test how it works in real time. Our next goal is to programmatically identify what each weather event is and what it is going to do. We've already come up with ideas for how to do this, so it's just a matter of bringing them to life.
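
As a rough illustration of the noise-removal step: keep a pixel only if its intensity clears a minimum cutoff, so stray specks drop out and only significant precipitation remains. The cutoff below is arbitrary, not our tuned value:

```typescript
// Rough illustration: zero out pixels below a cutoff so only
// significant precipitation remains. The cutoff is arbitrary here.
function removeNoise(grid: number[][], cutoff: number): number[][] {
  return grid.map(row => row.map(v => (v >= cutoff ? v : 0)));
}

// Example: drop anything below intensity 2 in a tiny test grid.
const cleaned = removeNoise(
  [[0, 1, 0],
   [3, 5, 1],
   [0, 4, 0]],
  2,
);
// cleaned === [[0, 0, 0], [3, 5, 0], [0, 4, 0]]
```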

Besides the image processing, we've also been making progress fleshing out the app itself. We're developing the settings and weather alerts screens in parallel, and we've begun writing unit tests to ensure these features work as intended. Once those are complete, we will run integration tests on the entire app, and by that point the image processing will be nearing completion.