Blog Posts

Welcome to 2021

Welcome back!

At the end of last semester, Pythax delivered a complete presentation of their system-level design to a panel of liaison engineers. The feedback from that presentation was positive and valuable, and the team was finally able to move out of the planning phase.

Now that the winter break is over, Pythax is ready to roll with some new sprints! The first of these includes implementing features for disabling questions on the game board and developing the backend for question hints. New hints for old challenges are also being generated at this step, and the proposed UI changes from the previous semester are beginning to be implemented.

The GitLab repository is starting to see activity with new branches, commits, merges, and all the other fun stuff that comes along with version control. So far the team is mostly on target for meeting their deadlines, which bodes well for the first Quality Review Board (QRB1) of the semester.

Commits per day of the month on the Pythax cyber jeopardy repository

We’re looking forward to a productive and eventful semester. Stay tuned for more updates on our progress! From us to you, Happy New Year, and go Gators! 

Final Presentation of Fall

We finally did it! The semester is coming to a close and Pythax has finally delivered their final presentation to the folks at Raytheon Technologies (RTX). 

The presentation itself went great, since we had implemented changes based on the feedback and criticism from our recent peer review. After presenting, there were no real questions asked, which normally might be a sign that nobody was paying attention, but in this case all the interested parties responded with positive feedback rather than questions. 

It seems like everyone is just as excited as we are for us to start the actual development of this thing beginning next semester. We at Pythax cannot wait to showcase what we’re really capable of, and we hope it exceeds expectations. 

Until then, it’s been a great semester and we’re very grateful to everyone who has helped us along the way. 

Team Pythax delivering their final presentation of the fall semester.

System Level Design Report – Peer Review

As the end of the semester approaches, Pythax is looking forward to our final presentation of all the work we’ve been doing these past few months. To prepare, we drafted our final report and put together a presentation outlining the most important material from that report. We presented this in class and gained some feedback from other teams and their respective coaches, which we will discuss here.

Firstly there was the issue of our slides themselves. In all of our prior presentations this semester we did our best to have a thematic set of slides which would hopefully help us stand out and keep folks’ attention. However, the dark background we used for the slides was unpopular and some audience members complained that it hurt their eyes to read white-on-dark. This was a relatively easy fix to make but it drastically changed the appearance of our content for the final presentation. Ultimately this was for the better, as our final presentation looked a lot cleaner and more professional, and this allowed us to focus harder on the content rather than the stylization.

The second glaring criticism we received was that we focused too heavily on what aspects of the project we decided not to keep. Our intention here was to show the changes that we’ve made and to showcase our progress to the final product, but instead it came off as confusing and folks didn’t really get a good sense of what our project truly was. To fix this, all mention of past work was removed from the slides, and a new video was recorded of the prototype to showcase some of its better and newer features. 

All in all, the criticisms we received were extremely helpful in getting Pythax’s final presentation to the level of quality that it needed to be. We thank all who gave feedback along the way at the various stages of concept generation and presentation!

Prototype Inspection Day

Pythax recently had to generate a series of mockups and prototypes of the two concept ideas we’ve been considering thus far: the dueling snakes theme and the Tron theme. 

Some of these changes started out as conceptual. After creating a series of low-fidelity wireframes to get a sense of where we wanted to put new features, we began to consider themes and designs for each concept. Eventually this led to two prototypes created with the web application Figma, as well as some changes actually implemented in the system’s code, which gives us a good head start. 

After these prototype ideas were compiled into a slideshow, Pythax presented them to a panel of faculty coaches from the other teams in order to receive feedback on various aspects of the ideas. Through this process we found that certain color schemes were quite unpopular. We had initially intended to use a red-blue color scheme for the dueling snakes theme, which was hard on the eyes of some audience members. Ultimately, the Tron theme beat out the dueling snakes theme for its clean UI, balanced color scheme, and faithfulness to the original concept of the system. 

Being able to get this feedback from the judges was an enormous help to the team, as it allowed us to streamline our later presentations to focus on a single concept, which allowed the features themselves to get more of the limelight. 

Workspace @ Hatchery

Big news! Pythax now has a physical workspace, located inside UF’s Innovation Hub. Thanks to Raytheon, we’ve been able to set ourselves up in an office space surrounded by some great startups and innovative companies. Inside the workspace we’ve got a projector, 3 computers for developing, 2 servers, a whiteboard, and plenty of coffee to keep us going. 

We have 24/7 access to this space, which will be extremely helpful for late night crunches to make rapid progress. We’ve been given some guidelines as to what we can and cannot do with the equipment that’s been provided to us, and it seems that we’ll have quite a lot of freedom to do whatever we need with the tools at our disposal.

We’ll be following a very rigorous set of procedures while working in this space to prevent any unnecessary pandemic risks. This involves wiping down every surface that’s been touched, wearing masks when more than one person is present, and maintaining at least 8 feet of physical distance. Each team member is also completing bi-weekly COVID-19 tests and screenings to ensure nobody is present at the Innovation Hub while infected.

PDR Presentation – RTX

We did it!

Well, maybe not what you think we mean by “it.” The final project is still on our distant horizon, but we’re now one step closer, because we finally presented our preliminary design report to our liaison engineer at Raytheon! It was extremely well-received and we got some solid positive feedback, which we’ll look into here. 

Firstly, we gained some further insight into what happens in a typical CTF when Raytheon hosts one. Apparently, some of the challenges can be “brute-forced”: on occasion, contestants will try a long list of answers hoping that eventually one of them will work. This doesn’t really support anyone’s learning, so the idea came up that we should have a way of catching someone doing this. Also, the way things currently stand, if someone submits a long string of random characters that happens to contain the correct answer, they will get credit for solving the challenge. We want to prevent this so nobody can just dump a dictionary on each question to pick up easy points. 
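As a rough sketch of how both issues might be handled (the function and field names here are hypothetical, not the platform’s actual API), answer checking could use an exact comparison after normalization instead of a substring test, and a simple sliding-window rate limit could catch brute-force guessing:

```python
import time
from collections import defaultdict, deque

MAX_ATTEMPTS = 5       # hypothetical threshold per challenge
WINDOW_SECONDS = 60    # hypothetical sliding window

_attempts = defaultdict(deque)  # keyed by (team_id, challenge_id)

def is_correct(submission: str, answer: str) -> bool:
    # Exact match after normalization: a flag buried inside a longer
    # dump of random characters no longer earns credit.
    return submission.strip().lower() == answer.strip().lower()

def allow_attempt(team_id, challenge_id, now=None):
    # Reject a team that submits more than MAX_ATTEMPTS guesses on one
    # challenge within the window -- a likely brute-force attempt.
    now = time.monotonic() if now is None else now
    window = _attempts[(team_id, challenge_id)]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_ATTEMPTS:
        return False
    window.append(now)
    return True
```

A rejected attempt could also be logged so admins can see which teams are guessing rather than solving.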

We were able to ask our sponsor who the CTF is truly intended for. Turns out, it’s for pretty much anyone who signs up! Apparently it isn’t used as a strict Q&A tool; it’s more of a capstone event covering specific content from their course’s curriculum. While the idea of making the CTF a running system that teaches students as they use it was brought up and piqued their interest, we’ll likely be sticking with the platform as a capstone assessment so we can focus our efforts on other foundational changes. Most often, there are 4 teams of about 3 people each, and never more than 13 teams at once. The fact that teams can work together and students can teach each other is one of the most valuable aspects of the system, and we intend to maintain that, if not improve upon it.

This process also brought some new ideas and topics to the table. All-in-all, the ball is rolling on this project and Pythax is eagerly looking forward to being yet another step closer to our final project.

PDR Presentation – Peers

We had the chance to do an initial presentation of our preliminary design report to the rest of the IPPD class, and got some super valuable feedback from it! Without going into excessive detail regarding our presentation itself, we’ll dive into some of that feedback here and explore the changes that we made for the real thing.

First thing out of the way, we have to admit our presentation skills all could use some work. This was a pretty common response from our classmates and other audience members. But the feedback is valuable, and we definitely improved upon that moving into the actual PDR presentation with our sponsor. 

Secondly, some of the criticism we received concerned the formatting and layout of our slides. Initially we sought to have a stylized slide background and a thematic color scheme, but this made the slides harder to read, and there was too much contrast for some viewers’ eyes. One commenter actually said they had to switch to a different tab because our presentation was giving them a headache. We changed this for the final PDR presentation as well, and the slides looked much cleaner. 

Lastly, we found that we were slightly lacking in user stories on our slides. A user story in software development is a written account of an interaction with the app, from the active perspective of the user. Our peers had a decent number of these, and we updated our presentation to meet that standard.

Concept Generation

Once we narrowed down our critical sub-functions in the functional architecture generation step, we were able to identify the key areas to direct our creative efforts. 

To list them:

  • Add new challenges (specifically, allow functionality for an admin to do this)
  • Display gameboard background
  • Display gameboard icons
  • Display challenge
  • Display admin panel
  • Display user interface
  • Modify game settings (again, allowing an admin to make the changes)

We then brainstormed for a while in each of these areas to come up with any ideas we could for improving upon and meeting the goals set by the sponsor. We won’t list them here because of how numerous they are. However, we did narrow our scope after this step by combining our ideas from each category into several cohesive concepts, the most promising of which we will list here.

Our first concept features a Tron-like color scheme and iconography. Of course, we cannot use any copyrighted material, so the theming would only be loosely inspired by Tron rather than completely ripping off that intellectual property. This concept was unique in that it would allow users to spend earned points to pay for hints on tougher challenges. Also, admins, and optionally students, could receive feedback based on questions answered correctly and incorrectly. More gamification elements would be added to this concept so students would feel rewarded for their work. The database would also be modified in this concept to prevent question addition/removal from affecting other questions present in the database.
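A minimal sketch of how that points-for-hints economy could work — all names here are illustrative placeholders, not the actual implementation:

```python
class InsufficientPoints(Exception):
    """Raised when a team cannot afford a hint."""

def purchase_hint(team, challenge, hint_index=0):
    # Deduct the hint's cost from the team's score the first time it is
    # unlocked; repeat views of an already-unlocked hint are free.
    hint = challenge["hints"][hint_index]
    key = (challenge["id"], hint_index)
    if key in team["unlocked"]:
        return hint["text"]
    if team["points"] < hint["cost"]:
        raise InsufficientPoints(
            f"need {hint['cost']}, have {team['points']}")
    team["points"] -= hint["cost"]
    team["unlocked"].add(key)
    return hint["text"]
```

Charging once per hint keeps the strategic trade-off (spend points now vs. bank them) without punishing a team for re-reading a hint it already bought.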

The next concept we decided to carry into the prototyping phase has a dueling snakes theme. The inspiration for this theme is similar to that of our logo: the primary language we’ll be using is Python, and the competitive nature of CTFs could be well-represented by an image of fighting snakes. This concept would also allow admins to add challenges as raw text; presently, challenges have to be entered in HTML syntax. More focus would be put toward balancing the experience with a good spread of easy and difficult challenges, and these would influence the style as well as the design of the gameboard. This concept would also give users more responsive feedback, such as a small animation indicating a correctly completed challenge. As for the database, changes would be made to ensure that wiping the database doesn’t break the game or environment. 
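One way raw-text entry could work is to escape admin input automatically before it’s stored or rendered, so plain text — angle brackets, ampersands, line breaks and all — displays safely without the admin writing any markup. A sketch using Python’s standard library (the function name is ours, not the platform’s):

```python
import html

def raw_text_to_html(text: str) -> str:
    # Escape HTML metacharacters so pasted plain text can't be
    # misread as markup, then preserve the admin's line breaks.
    return html.escape(text).replace("\n", "<br>\n")
```

For example, an admin could paste `if x < y & z:` straight into a challenge description and it would render literally instead of breaking the page.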

Functional Architecture

Before any work can be done to come up with ideas for this Capture the Flag platform, we have to identify the key players. Not just the players in the game, but all the entities which make up the system and interact with it. 

The actors we identified for this step are as follows. Naturally, the user is a prime actor, as is the administrator of the CTF. The non-human actors include the database, web browser, operating system, and server. 

These actors work together in the system through various input-output interactions. One such interaction is a query, in which the database handles a request for pertinent information and returns that data to the requesting party, be it the user, an admin, or one of the other actors listed previously. Other input-output interactions include use of the mouse and keyboard, web browser events, and HTTP requests/responses.

After identifying all of these important functions and actors, we were able to generate a graphic that maps out the system to a fairly intricate level of detail. It’s included below, but beware! The complexity makes it a bit difficult to parse.

From this process, we were able to identify some critical sub-functions. Some of these include the ability of an administrator to add new challenges to the game, the ability of the system to display the gameboard and background graphics, as well as all other UI elements, and the ability of an admin to modify game settings. These critical sub-functions were used to help guide our concept generation process, focusing our ideation to meet specific goals and improve upon the most important features of the system.

In doing this, we have generated a functional architecture, which outlines all the interactions between various elements of the application and the people using it.

Why CTF?

As you might already know, the folks at Raytheon Technologies (RTX) are using their Capture The Flag Jeopardy as a capstone assessment for some of their cybersecurity-themed internal training courses. But why use a CTF? Why not just test the students the traditional way? To answer some of these questions, we did some digging to figure out what benefits a CTF could provide as a learning device. 

The obvious benefit of a CTF as a learning tool is that it requires students to be engaged in what they’re doing. Having more interaction than a standard paper test allows the students to feel more invested in their learning process. This does present a couple of issues to keep in mind, however. 

Firstly, there’s the issue of competition. The competitive nature of a CTF environment could discourage new learners. A lack of confidence in one’s own abilities could prevent people from even trying at all, which would be a big problem for a learning device. Luckily, there are some solutions to this problem. A team-based approach lets new learners rely on more experienced teammates for guidance while still contributing their fair share. Going further, providing hints and analytics to users has been found to be a good means of encouraging less-experienced users to participate. 

The second problem is that of engagement. How do we make learning interesting and fun? As it turns out, making a game out of the process is the way to go. More interactivity, personalization, and other game elements significantly increase the entertainment value of a learning device, and with that comes an increase in user engagement and success. To elaborate a little further, providing points to users for completing challenges and allowing users to spend these points on hints is an example of in-game economics that provide a sense of strategy to the learning experience. Lastly, game balance is a solid means of increasing engagement for those aforementioned early learners. Being able to provide easier questions for those users and harder questions for more-experienced learners is key.