Today we will be presenting for the last time at the Reitz! Our final project hype video is seen below:
The team is glad to be graduating, but at the same time, we are also sad. These past two semesters have been an amazing time together as one of the smaller teams of only 4 members! The team has grown close over these past 8 months and will continue to stay in touch as we all go on to graduate school. Altogether, we’ve learned a lot with this project:
We’ve developed a deeper understanding of coding involving websites, along with coding together on a long-term project.
We’ve learned how to tailor the depth of content in our presentations to the audience we’re presenting to.
We’ve become more aware of how to make our presentations accessible to those with visual impairments.
We would like to shout out our amazing liaison, Mr. Sriram Ramanathan, who helped guide the team through the best and worst of our project’s ups and downs. Without our liaison, our presentations would not have been as polished or as well tailored to visually impaired individuals. We would also like to shout out our wonderful coach, Dr. Idalis Villanueva, who has been phenomenal about arranging meetings with University professors and with the DRC. Without our coach, we would not have had such a fleshed-out project or completed user testing. All in all, this team could not have done as much without our coach and liaison; we appreciate you both!
This week, we started preparing for our final presentation. We will be giving it on the 19th of April, which is only two weeks away! Our recent progress has mainly been in IPPD paperwork and refining our software, as the code is nearly done. This week, we also finished a first draft of the video that we will play during our final presentation. We had some bugs that prevented us from testing last week, but we were able to fix them and complete user testing on April 9th. With these testing results, we will be able to complete some more IPPD paperwork. Testing has been difficult to start, as we needed to find blind or visually impaired participants and work with them in person. Fortunately, some generous University of Florida students have volunteered their time in exchange for some delicious food!
This week we went to present our software to Freedom Scientific! The event was fun but also stressful; in the days leading up to it, we were making last-minute changes to the software and presentation to make sure they were the best they could be! We arrived at the company early in the morning so we could practice the dual virtual/in-person presentation and also get a tour of the company! It was interesting to see how the office splits between working in person (mechanical team) and online (software team); seeing this adaptability that came about due to COVID in real life was fascinating. One really cool thing we saw during the tour was the Freedom Scientific server! I’d never seen a server that took up a huge room before, so it was really interesting to see. Most people don’t learn how to use or maintain a server that large in any class; most of it is learned at the company you work for!
At the end of the day, we had an amazing time, presented our software without a hitch thanks to our last-minute modifications, met a lot of amazing employees, and ate free food! We enjoyed this outing to Freedom Scientific and appreciate that IPPD was able to support us in going down to Clearwater!
This week, we want to shout out James Datray for being a JAWS script guru and helping us enable tandem use of our software with JAWS running in the background! This was a huge hurdle to overcome and had to be cleared before we could move forward with the project.
Furthermore, we reached out to the visually impaired individuals who completed the survey we sent to the DRC (Disabilities Resource Center), and we will begin testing next week to receive feedback on our software’s interface. Currently, the software implements only 12 JAWS shortcuts, so testing may differ from testing against a full implementation of more than 300 JAWS shortcuts. However, we mainly want feedback on what the participants think of the software, whether they would use it in the future, and any adjustments that might be needed. The testing scenario uses a basic website structure, and we will ask the participants to locate specific elements on the page in two conditions (with our software and without). We hope to get a lot of feedback, and we are looking forward to what they have to say!
In prior weeks, we have worked on intercepting keystrokes from JAWS. This week, James Datray from Freedom Scientific advised us on how best to implement keystroke interception. He suggested that we use JAWS scripts to obtain information about the virtual cursor on a webpage. From the virtual cursor we can obtain several important types of information, including but not limited to the HTML tag, XML element, and MSAA object. Fortunately, Mr. Datray gave us an example script that we can model our solution after. Once this is complete, we will be one step closer to our third prototype, which we are excited to present to Freedom Scientific. We will also be visiting their offices in Clearwater, FL on March 28, and will enjoy presenting our work to their engineers.
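To give a sense of what our side of this looks like, here is a minimal sketch of receiving virtual-cursor details from a JAWS script. The message format, field names, and parsing function are purely illustrative assumptions, not the actual payload Mr. Datray’s script uses:

```python
from dataclasses import dataclass

@dataclass
class CursorInfo:
    """Virtual-cursor details a JAWS script could forward to our server.
    (Field names are illustrative; the real script's payload may differ.)"""
    html_tag: str      # e.g. "h1", "a", "input"
    xml_element: str   # element name from the page's markup
    msaa_object: str   # MSAA role, e.g. "ROLE_SYSTEM_LINK"

def parse_cursor_message(message: str) -> CursorInfo:
    """Parse a simple 'tag|xml|msaa' string into a CursorInfo record."""
    tag, xml, msaa = message.split("|")
    return CursorInfo(html_tag=tag, xml_element=xml, msaa_object=msaa)
```

A delimited-string format is just one easy option for script-to-server messages; JSON or another structured encoding would work equally well.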
This week we started on a second version of intent recognition. We recognize that the human mind is difficult to model and that we will need a reasonable approximation for our project to be viable. Currently, we assume that a user’s intent can be determined by a few factors: time spent on an element, whether the action involves input into the webpage (such as typing in a field), and potentially some others. We may consider an action to be an end-action if a user spends two standard deviations longer on an element than they typically do. This means very few actions are end-actions; however, it prevents the issue of annoying users with incorrect nudges.
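The two-standard-deviation rule described above can be sketched as follows. This is a minimal illustration under our own assumptions (function name, history format, and example numbers are ours, not the project’s actual code):

```python
import statistics

def is_end_action(dwell_times, current_dwell, num_stddevs=2.0):
    """Flag the current element as a likely end-action if the user
    lingered on it far longer than their typical dwell time.

    dwell_times: the user's past per-element dwell times, in seconds.
    current_dwell: time spent on the current element, in seconds.
    """
    if len(dwell_times) < 2:
        return False  # not enough history to estimate a baseline
    mean = statistics.mean(dwell_times)
    stdev = statistics.stdev(dwell_times)
    return current_dwell > mean + num_stddevs * stdev

# Example: a user who typically spends about one second per element.
history = [0.8, 1.0, 1.2, 0.9, 1.1]
is_end_action(history, 1.3)  # brief pause: not an end-action
is_end_action(history, 5.0)  # long pause: likely an end-action
```

Requiring two standard deviations deliberately trades recall for precision, matching the goal of rarely nudging a user incorrectly.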
This week there were three major events! The first was our Qualification Review Board (QRB) presentation on Tuesday; this was our second presentation with the same coaches from last month’s QRB. We presented a new and improved slide deck with better transitions and explanations of our project and progress. Overall, the coaches gave helpful feedback, which we will implement for our final presentation at the end of the semester.
The second major event of the week was when our liaison, Mr. Sriram Ramanathan, came up from Clearwater for an in-person meeting to get a better feel for our program. Furthermore, we talked in depth about our trip to Freedom Scientific next month and how we will gear our presentation to the different audience there. All in all, it was good to see our liaison in person rather than over Zoom! I believe the team works better in person with Mr. Ramanathan.
Lastly, we have created a promotional flyer to send out to UF’s Disabilities Resource Center! The plan is to send out this flyer by next week and begin user testing right after spring break, with roughly 10–20 visually impaired user testers giving us feedback on our program. This step is crucial, as user-friendliness is almost as critical as the program working correctly!
Last week, we met with Department Chair Juan Gilbert, an expert on intent recognition. We discussed several features that may be useful for recognizing intent, primarily time. We believe a few features are sufficient for recognizing intent: time, whether the user uses the back button, and whether the user selects something. We recognize that different thresholds may make sense for different users, but we believe these three features are sufficient for complete intent recognition. We also spoke with him about user testing and learned that recording users is not ideal, since it requires significant paperwork. On Tuesday, we went to Tyummi for boba.
This week was focused on the intent recognition component of our software. More specifically, we met with Dr. Juan Gilbert, an expert in intent recognition and the chair of UF’s computer science department. This meeting was exceedingly beneficial for the team, as we have been struggling to get the intent recognition module ready for our presentation at the end of the semester.
During this meeting, we were told that our original intent recognition, a time-based approach, is a good start, but it can be better! This surprised us, as we thought the time-based approach would be thrown out the window entirely and replaced with a machine learning element. While that is what we expected Dr. Gilbert to suggest, we honestly didn’t know how to implement the machine learning aspect, since our company wouldn’t be able to provide any data for the model to learn from. Hearing that the time-based approach would work was therefore amazing! Dr. Gilbert suggested the following improvements:
1. Have the time-based threshold adapt to the user.
2. Have the threshold also change depending on the element the user is currently on.
3. Identify if the user went back to an element; if so, that is an indicator of intent.
4. Identify if the user selected or repeated a certain section; if so, that is also an indicator of intent.
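The suggestions above can be sketched together in one place. This is a hedged illustration, not our shipped code: the class and method names, the exponential-moving-average baseline, and the 1.5× factor are all our own assumptions about one way to realize Dr. Gilbert’s advice:

```python
from collections import defaultdict

class IntentRecognizer:
    """Sketch of an adaptive, per-element time threshold plus
    back-navigation and re-selection as extra intent signals."""

    def __init__(self, alpha=0.2, factor=1.5):
        self.alpha = alpha    # smoothing weight for the running average
        self.factor = factor  # how far above baseline counts as intent
        # Per-element-type baseline dwell time (seconds), adapting to the user.
        self.avg_dwell = defaultdict(lambda: 1.0)

    def observe(self, element_type, dwell):
        """Update this user's baseline for an element type (suggestions 1 and 2)."""
        avg = self.avg_dwell[element_type]
        self.avg_dwell[element_type] = (1 - self.alpha) * avg + self.alpha * dwell

    def shows_intent(self, element_type, dwell, went_back=False, reselected=False):
        """Combine the timing signal with the back/repeat signals (3 and 4)."""
        if went_back or reselected:
            return True  # either behavioral signal alone indicates intent here
        return dwell > self.factor * self.avg_dwell[element_type]
```

Treating any single signal as sufficient is just one policy; a weighted combination of the signals would be a natural refinement.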
Overall, the team is happy that the intent recognition module can be fixed with a simpler solution than previously expected! This upcoming week we hope to start implementing these suggestions, and within the next couple of weeks we hope to move on to beta testing to get feedback on the intent recognition side of our software!
Intent recognition – the software recognizes the information being sent in and completes an action (organization, etc.).
This week, we had our first success with keystroke interception, which enables us to work on the next component of our project. Keystroke interception involves obtaining raw keystroke data, including the time a key was pressed down and the time it rebounded. By obtaining raw keystroke data, we are able to remove, or at least reduce, the requirement of using a pseudo-JAWS. In the coming weeks, we will use JAWS in combination with our web server, and may need to implement additional hotkeys so that we can track the user’s location and provide relevant recommendations. We may meet with a graduate student of Dr. Villanueva’s in the next week or two to work on developing a better intent recognition algorithm. Currently, we do not think it is feasible to complete intent recognition by the end of IPPD; however, we believe it is possible to outline in depth an algorithm that a future team can implement.
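To illustrate what the raw press/release times give us once captured, here is a small sketch that derives timing features from a recorded event stream. The event format and function name are our own illustrative assumptions (the capture hook itself is not shown):

```python
def keystroke_features(events):
    """Derive timing features from raw keystroke events.

    events: list of (key, press_time, release_time) tuples, in seconds,
    ordered by press time, as an interception layer might record them.
    """
    # How long each key was held down before it rebounded.
    holds = [release - press for _, press, release in events]
    # Gap between releasing one key and pressing the next.
    gaps = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return {"hold_times": holds, "inter_key_gaps": gaps}

# Example: a user presses "h" then "j" (common JAWS navigation keys).
sample = [("h", 0.00, 0.08), ("j", 0.50, 0.57)]
features = keystroke_features(sample)
```

Features like these are exactly the kind of input a dwell-time-based intent recognizer could consume downstream.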