Blog Posts

Concept Combination

Our focus this week was to write the PDR Report and prepare the PDR Presentation. One of the most important pieces of content is the selection and combination of possible concepts. We continued to research possible options based on the concept generation from two weeks ago. After weighing the pros and cons, the following choices were made.

  • Hardware
The Azure Kinect DK in its low-profile housing

The primary basis for determining a hardware structure is the Azure Kinect DK (1A). It is an all-in-one solution that comprises all the necessary features and more. Due mainly to cost concerns, the team also evaluated recreating the Azure Kinect DK with only the necessary components.

  • Software
Conceptual model of gesture recognition

We believe that two palms are sufficient for single-person interaction to meet all currently known control needs (select, pan, zoom, and rotate if necessary). Additionally, because recognizing too many palms may disrupt the single-user experience, the initial prototype concept involves identifying the first two palms within a certain distance and ignoring additional palms that may appear later.
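As a rough illustration of this concept, the sketch below shows how the "first two palms" rule might be implemented. This is only our assumption of the logic, not a final design; the function and threshold names are hypothetical, and the distance units depend on the sensor.

```python
# Hypothetical sketch: lock onto the first two palms detected within a
# distance threshold and ignore any additional palms that appear later.

MAX_TRACKED = 2           # single-user interaction needs at most two palms
DISTANCE_THRESHOLD = 1.5  # assumed distance limit from the sensor

def select_palms(detections, tracked):
    """detections: list of (palm_id, distance) pairs from the sensor.
    tracked: set of palm ids already locked in (mutated in place)."""
    for palm_id, distance in detections:
        if palm_id in tracked:
            continue                      # already tracking this palm
        if len(tracked) >= MAX_TRACKED:
            break                         # ignore late-arriving palms
        if distance <= DISTANCE_THRESHOLD:
            tracked.add(palm_id)          # lock onto this palm
    # only pass along detections for the palms we are tracking
    return [(pid, d) for pid, d in detections if pid in tracked]
```

A real prototype would also need a rule for dropping palms from `tracked` when they leave the frame so a new user can take over, but that detail is beyond this concept sketch.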

Developing an Architecture

This week, our focus was to plan out an architecture for our prototype. We started with the existing items:

  • Touch Table
  • Internal Windows PC
  • Intuiface software

We then created two major blocks for the hardware and software subsystems. The functional block diagram includes multiple options for implementing our software prototype. The main options are to use hand gestures to:

  1. Control the touch table’s cursor (regardless of software)
  2. Control the Intuiface software’s elements directly
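To make the cursor-control option more concrete, here is a minimal sketch of how two tracked palm positions could be turned into pan and zoom commands. This is our assumption of one possible mapping, not the chosen implementation; all names are hypothetical.

```python
import math

def palm_to_controls(prev, curr):
    """Map two palm positions across frames to pan/zoom commands.
    prev, curr: ((x1, y1), (x2, y2)) palm centers in screen coordinates."""
    # pan: movement of the midpoint between the two palms
    prev_mid = ((prev[0][0] + prev[1][0]) / 2, (prev[0][1] + prev[1][1]) / 2)
    curr_mid = ((curr[0][0] + curr[1][0]) / 2, (curr[0][1] + curr[1][1]) / 2)
    pan = (curr_mid[0] - prev_mid[0], curr_mid[1] - prev_mid[1])

    # zoom: ratio of the distance between the palms (pinch-style gesture)
    prev_dist = math.dist(prev[0], prev[1])
    curr_dist = math.dist(curr[0], curr[1])
    zoom = curr_dist / prev_dist if prev_dist else 1.0
    return pan, zoom
```

For example, both palms sliding right together produces a pure pan, while spreading the palms apart produces a zoom factor greater than 1.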

Additionally, two major concepts have been introduced for the hardware. Either we will use an all-in-one solution like the Microsoft Azure Kinect DK or implement the sensors individually.

See the figures below for more detail!

Complete Architecture

Concept Generation

This week, our focus was to develop possible concepts for the prototype design. The concept generation process was split up between hardware and software. See the table below for a summary of our concept ideas.

 Software

A possible software implementation option is Google’s open-source gesture recognition package built on TensorFlow Lite and MediaPipe. The package recognizes a variety of one-handed gestures, and our test recognition results were accurate. This package is not the only option; the team will consider additional open-source packages to aid development. Another option is the gesture recognition package from Tencent, which appears to be complete and supports various systems and application environments.

https://google.github.io/mediapipe/solutions/hands

 Hardware

Three main options exist for the overall hardware architecture. All-in-one solutions like the Azure Kinect DK already have all the necessary sensing equipment (and more) built into one set-top box. This is the most expensive and most powerful option. Simplified solutions like the OmniVision image sensors have some of the necessary sensing hardware and are much less expensive; in this case, the prototype would likely require additional hardware. Creating the entire system from individual hardware components would be the most complex option to design (though likely the cheapest to produce). Many other hardware-related decisions will stem from this overall hardware architecture concept. In the individual component case, concept finalization will require deciding on the types of sensors and microcontroller(s). Essentially, this method would attempt to recreate an all-in-one solution like the Azure Kinect DK but with only the necessary components.

Concept Combination Table

Introducing our Team Name & Logo

This week, our team finalized a logo and name to represent ourselves. So, in addition to some project updates below, we will introduce the background behind Team FLUX.

FLUX = FL + UX

The name “FLUX” stems from the core of our project: user experience. In technological contexts, user experience is often abbreviated to “UX”. We combined this with “FL” to represent our home base at the University of Florida. Another layer to this name is the association between “flux” and motion. Our project’s objective is to be completely motion-controlled and dynamic. Thus, we decided to name ourselves FLUX.

To represent this name, Ziyang drafted a logo that embraces the technological background of our project. See our final logo below.

Project Updates

We talked to our liaison to go over the Project Design Specifications as well as the House of Quality table. We explained what we saw as the ‘wants’ and ‘needs’ of the project and confirmed them with our liaison. We received positive feedback on both documents, letting us know that we are on the right track with the sponsor’s scope of work.

Project Summary & Updates

The Touchless Touchscreen project is sponsored by the Harn Museum of Art. Our project sponsor is Eric Segal, Director of Education, and our liaison engineer is Matt Herring, Director of Museum Technology. Our coach is Dr. Andrea Goncher, a lecturer in the Department of Engineering Education. The purpose of our project is to create a safe and accessible learning experience for all museumgoers. Through our engineering background, we aim to find creative solutions to the new and unexpected obstacles created by COVID-19. The goal is to create a digital, interactive museum exhibit that is powered by movement rather than physical contact.

So far, we have had two weekly meetings with our coach, Dr. Andrea Goncher. Additionally, we had our kickoff meeting with our sponsor and liaison at the Harn Museum of Art. We have developed a team charter and analyzed the scope of work provided by our sponsor. The next few weeks will include lots of brainstorming and developing initial designs and concepts for the project. Stay tuned!