Blog Posts

New Year, Fresh Start

Hey all, the team is back! After the holiday break, we are ready to keep working on our exciting project. During the first week, the team, coach Peters, and Dr. Winesett decided on a new schedule for our weekly meetings. During the first meeting, we established a short-term goal for the team to kick-start the development of our simulation system.

A Promising Start

Our team was able to achieve a real-time animation process by the end of last semester. This year, we have already made important progress on the animation component of our project: we are now able to record animations and reuse them on different characters. There are still some limitations, but we are confident that we can tackle or work around them.

Recorded animation sample

We can’t wait to present our plan for the project at the first QRB event and look forward to all the constructive ideas we might receive there.

End of The First Chapter: SLDR Event

On Tuesday, December 8th, we had the final event and another milestone for the team, the System Level Design Review (SLDR), to finish off the first semester in the IPPD program.

The team presented our progress for the project over the semester, including requirements and specifications, software architecture, the prototype design, the project plan, and the risk assessment.

After the presentation, we were thrilled that our sponsor Dr. Winesett approved of our work. We also received some constructive feedback from the other coaches.

We want to thank our coach Prof. Peters, our sponsor Dr. Winesett, Dr. Latorre, all the IPPD staff and all the other teams and judges who participated in the program for an amazing experience in such a difficult time. We are looking forward to the next semester.

Milestone: Prototype Inspection Day

On Nov 17th, 2020, the team reached another milestone: the Prototype Inspection Day (PID). During the event, we presented the early-stage prototypes of the two main components of our project: animation and user interface.

Meet Nathan

Nathan, the leading role of our demo

For the animation prototype, we played a video demonstrating our progress on motion capture and animation. Although this is still an early-stage implementation, the results are quite promising.

Screenshot of the demo video

The UI

For the user interface, we designed a functional wireframe to demonstrate the basic flow of using our system.

The mock-up game page


We were excited to receive positive feedback from the judges that attended our presentation. The team will continue the hard work and move forward with the same energy.

The Chosen One

Motion capture is the key to our entire project. Without a major game studio's budget and equipment, the team decided, after exploring all available options, to use the Intel RealSense Depth Camera as our main tool for motion capture.

The Intel RealSense Depth Camera

Ignoring the horrible cable management in the background, you will see our leading actor: the depth camera. With the help of the Intel RealSense SDK, here’s the basic idea of how the camera works.

The camera collects data, computes the distance between objects and the camera itself, and displays the result in different colors. This is just the camera’s basic functionality. With some third-party SDKs, we can achieve things like skeleton tracking and ultimately motion capture. We will demonstrate those in our prototype for the animation component of our project.
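As a rough illustration of that depth-to-color idea (just a minimal sketch, not actual RealSense SDK code; the range values and colors are our own assumptions), each depth reading can be mapped to a color by blending between a "near" color and a "far" color:

```python
def depth_to_color(depth_m, near=0.3, far=4.0):
    """Map a depth reading in meters to an RGB tuple:
    near objects render red, far objects render blue."""
    # Clamp the reading into the assumed working range, then normalize to [0, 1].
    t = (min(max(depth_m, near), far) - near) / (far - near)
    # Linear blend: t = 0 -> (255, 0, 0), t = 1 -> (0, 0, 255).
    return (round(255 * (1 - t)), 0, round(255 * t))

# A whole depth frame is just this mapping applied per pixel.
frame = [[0.3, 1.5, 4.0]]  # hypothetical one-row depth frame, in meters
colored = [[depth_to_color(d) for d in row] for row in frame]
```

The real SDK does this per pixel on full frames, but the principle is the same: distance in, color out.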


After weeks of preparation, last Tuesday, October 20th, our team NeuroMediSim delivered its PRELIMINARY DESIGN REVIEW presentation to our sponsor Dr. Winesett, with our coach Prof. Peters and IPPD director Dr. Latorre in the audience. This was the first milestone our team reached on its IPPD journey.

Screenshot taken during the presentation

During the presentation, the team presented its research results, the concept generation process, and a proposed combination of concepts as the solution to our project. Dr. Winesett and coach Peters gave the team valuable feedback and insights to further improve our solution and future presentations.

One down, more to go

The PDR presentation is the first milestone for the team, and there are plenty more to come. Next, we are heading into the prototyping stage. The team is excited to move forward and will keep working toward the best possible end product.


So you have seen our concept generation table from the previous post. You may wonder why we chose certain solutions over the others. This post will give you a detailed explanation.


For our system model, we have a few actors, as well as the I/O that is moved by each function. For our initial architecture, here is a description of the functions, as well as their actors and I/O:

  • Dr. Winesett and Medical Workers will provide reference footage to the RealSense camera.
  • The RealSense camera will provide video and depth data to Unity.
  • Unity will interpret and animate the data, sending the new data to the System.
  • The User will provide input to the System, such as choosing an answer option.
  • The System will provide information to the User in the form of education, I/O response, and assessment feedback.
  • The System will provide a software package to the Distributor.
  • The Distributor will provide the same software package to the User.
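To make the hand-offs above explicit, the flow can be written down as producer, artifact, consumer edges. This is a hypothetical sketch for illustration only; none of these names come from actual project code:

```python
# Each tuple: (producer, artifact handed off, consumer),
# transcribed from the data-flow list above.
DATA_FLOW = [
    ("Medical Workers", "reference footage", "RealSense camera"),
    ("RealSense camera", "video + depth data", "Unity"),
    ("Unity", "animated data", "System"),
    ("User", "input (e.g. an answer choice)", "System"),
    ("System", "education / feedback", "User"),
    ("System", "software package", "Distributor"),
    ("Distributor", "software package", "User"),
]

def consumers_of(producer):
    """List every actor that receives an artifact from `producer`."""
    return [consumer for p, _, consumer in DATA_FLOW if p == producer]
```

Writing the architecture this way makes it easy to check, for instance, that everything the System produces actually reaches the User, directly or through the Distributor.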

For concept generation, we included 5 sub-functions with multiple concept choices.

The reasons

The first sub-function is Animation, which is the interpretation and animation of the data sent by the camera. The concept choices are the software that can be used. We chose Unity because there are many plugins for taking 3D video information into Unity, as well as tools for an interface.

The second sub-function is Interface, which is the UI system that the User will interact with to provide input and receive feedback. The concept choices are the assets or software that can be used. The choices are Qt, Quizlet, Visual Studio, Web Design, and Unity. We chose Unity because it can also handle Animation. This means we can build the entire system in the same software. Unity can create both executables and web applets, making it flexible.

The third sub-function is Data Source, which is the original data sample(s) that Unity will have to interpret into animation. The choices are Preexisting Video, Medical Workers, and Future Actors. Preexisting Video is video of seizures that can be provided by Dr. Winesett. Medical Workers are volunteers who understand the types of seizures and would act for a 3D camera. Future Actors would also act for a 3D camera, but in a post-design phase. We chose Medical Workers because they would be an immediate source of information and the resulting footage includes a depth channel. The Preexisting Video does not include depth, so we would need different software to interpret it and the result may not be as robust. We may still need to pivot and design the system to work with future 3D data from future actors if the data from the Medical Workers is insufficient.

The fourth sub-function is the Camera, which is the 3D camera that will be used to record the motion data. The choices are Vicon, Kinect, and Intel RealSense. We chose the Intel RealSense depth camera because it is reasonably priced and the Kinect is no longer in production. It also has a good number of software options and plugins that allow the data it produces to be transferred to Unity easily.

The last sub-function is Data Analysis, which is the software that will be used to synthesize the data from the 3D Camera and move it into Unity. The options are Vicon, OpenPose, MOKKA, Kinovea, and Tracker. Vicon is a suite of software from Vicon that would work well with the Vicon camera. OpenPose is a free, open-source project that can be easily used with the Intel RealSense camera to interpret the footage and bring it into Unity. MOKKA (Motion Kinematic & Kinetic Analyzer) is free software that would provide data from a 3D camera. Kinovea is a paid software that would work well with the Intel RealSense camera and would be able to transfer that data to Unity. Tracker is a free video analysis tool that would work well with 2D footage to create 3D data. We chose OpenPose because it is free, we chose to use the RealSense camera, and we chose to use Unity.
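For a sense of what consuming OpenPose output could look like, here is a minimal sketch. It assumes OpenPose's JSON export format, in which each detected person carries a flat `pose_keypoints_2d` list of x, y, confidence triples; the function and sample names are our own:

```python
import json

def parse_pose_keypoints(openpose_json):
    """Group OpenPose's flat [x, y, confidence, ...] list into
    one (x, y, confidence) triple per tracked joint, per person."""
    doc = json.loads(openpose_json)
    people = []
    for person in doc.get("people", []):
        flat = person["pose_keypoints_2d"]
        # Step through the flat list three values at a time.
        joints = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
        people.append(joints)
    return people

# Hypothetical two-joint sample in OpenPose's export shape.
sample = '{"people": [{"pose_keypoints_2d": [10.0, 20.0, 0.9, 15.0, 25.0, 0.8]}]}'
```

Per-joint triples like these are the kind of skeleton data we would then feed into Unity to drive a character rig.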


This week, our team is focusing on exploring and comparing potential solutions to our system design.

Concept Generation

Concept generation table

After discussion with our coach and sponsor, we have decided on the general frame and structure of our project. Because of its ability to integrate interface and animation, Unity is our choice of development engine. After researching the availability, extensibility, and support aspects of different depth cameras, we agreed on the Intel RealSense Camera as our choice of data-capturing camera. We will be working with Dr. Winesett and the medical workers to film live demonstrations of seizure symptoms and generate depth data to be used in the animation process.


Big news this week! Team 14 finally has a proper name and so it is time for a proper introduction.


After brainstorming and trying our best to stay away from copyright problems, we finally have a winner for our group name: NeuroMediSim. The name captures the two most essential aspects of our project: neurology and simulation. Our brilliant Marshall designed the logo to perfectly represent the team and the project itself.

Our logo

A game for professionals

Many neurologic conditions present visual clues that, to an experienced neurologist, lead to an almost instant diagnosis. Unfortunately, opportunities to learn these conditions are limited because many of them are relatively rare. Our team’s objective is to design and create a simulation game with realistic scenarios where the participants must utilize their knowledge to determine the type of neurologic condition from visual clues, as they would in real-life situations. So how are we planning to achieve this?

Motion Capture

Our team is exploring all available and practical means of motion capture to generate data from real-life demonstrations and then translate it into 3D animation for the most accurate game experience. It may not be as cool as shooting the Avengers, but still, pretty exciting, right?

Skeleton tracking


This Week …

Meeting with Sponsor

During this week’s meeting, Dr. Winesett joined us for the first time and shared his perspective on the project. He explained the core purpose of the project and presented us with some resources and examples of the topic to help us form a more concrete idea about the system we are designing.

Team Member Meeting

The team had a second meeting to work on assignments and discuss team-building issues. During the meeting, we decided on the name of the team and came up with a general idea for the team logo.

Coming Up …

MoCap Technology Exploration

The team is researching practical and efficient methods of motion capture and animation. We hope to demonstrate the options during next week’s meeting to get feedback from the coach and sponsor.

Team Presentation

The team logo design is underway. We will finish before next week’s class and be ready for the presentation.

Blog Page Update

We will be updating our team’s Blog page with correct team information and possibly embellishing it with images.
