HCI Research

STEM vocabulary is vital to academic development in a technological society. In our research, we aimed to create a gamified learning tool that would engage users and help them learn more STEM vocabulary across different disciplines. We first conducted interviews with current and former students who had taken STEM courses to gather their views on gamified learning, along with information about their study habits and interests. We followed the interviews with a survey to corroborate our data. Using this data, we created a prototype, which we then tested with participants to uncover major usability problems. After iterating, we tested the prototype again to validate our changes. Once we had our finalized prototype, we conducted an A/B test to compare two alternative paths for an issue we ran into during testing. Ultimately, we produced a prototype of a gamified learning tool called "4STEM" that aims to stimulate learning and be both enjoyable and useful to students.

This project was made possible by the hard work of three HCI students who helped me develop the learning tool over four months. Below is an overview of the methods we used to complete the project.

Problem

STEM concepts can be challenging to learn without a solid grasp of the breadth and depth of the relevant vocabulary. One of the main features of the learning tool is its ability to adjust the game's difficulty level based on a student's progress. The team therefore needed research support for the design of the learning tool's interface and interactions.
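To make the adaptive-difficulty idea concrete, here is a minimal Python sketch of how quiz-based placement and progress-based adjustment could work. The level scheme, thresholds, and function names are hypothetical illustrations, not the game's actual algorithm.

    # Hypothetical sketch of progress-based difficulty; the thresholds and
    # level scheme are illustrative assumptions, not the game's actual logic.

    def placement_level(quiz_score: float) -> int:
        """Map a pre-game quiz score (0.0 to 1.0) to a starting level."""
        if quiz_score < 0.4:
            return 1  # beginner vocabulary
        if quiz_score < 0.75:
            return 2  # intermediate
        return 3      # advanced

    def adjust_level(level: int, recent_accuracy: float) -> int:
        """Nudge difficulty up or down based on a student's recent answers."""
        if recent_accuracy > 0.85:
            return min(level + 1, 3)  # player is winning; raise difficulty
        if recent_accuracy < 0.50:
            return max(level - 1, 1)  # player is struggling; ease off
        return level

    # Example: a student scoring 0.6 on the placement quiz starts at level 2,
    # then moves up to level 3 after answering 9 of 10 words correctly.
    level = adjust_level(placement_level(0.6), 0.9)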


My roles in this project:
Scrum master
UX researcher
Descriptive statistics

Tools:
Miro, Adobe XD, Illustrator, Google Docs, Slack, SPSS, RStudio

Interactive Prototype

Before Research / After Research

Python Conference 

Before we conducted our research, I created a scientific poster for a Python conference to present the game's algorithm to the Python community. During this time, I worked closely with the lead developer to understand the logic of the algorithm and iterated on ideas to make a complicated, technical process look simple and easy for everyone to understand. Creating the poster helped me understand how the game would work, and it gave me ideas on how to approach the research.

Method 1 : Focus Group

The primary purpose of the focus group was to listen to and learn from K-12 students about what makes a game engaging, and to learn how students use games at school. Since the participants were minors, we created a consent form jointly with parents and teachers.

All the feedback collected from the study was incredibly valuable for the research and development of the learning tool. Nevertheless, the team wanted to focus on different academic levels, so we decided to continue the study with high school and university students. In the end, we concluded that the core aspects of the game would be similar across academic levels and that age would not be a factor influencing the game design, as we were researching students' learning methods.

Method 2: Interviews

We recruited participants at the DePaul University library and through friends. All participants were between 20 and 27 years of age and had taken a STEM course within the last few years.

After obtaining informed consent, we followed an interview script to ask participants about 1) their experience with technologies in the classroom, and 2) their past and present experience with learning software.

Ten interviews were conducted in cafes and at the DePaul University library, and two were conducted in the researchers' homes. Each interview took about fifteen minutes to complete. Afterward, we transferred all the data to a spreadsheet and then into Miro (an online whiteboarding tool) to build an affinity diagram. Below is an example of the Excel sheet we created to collect our findings.

Affinity Map

We used two methods to analyze the data: 1) organizing interview responses by section of the interview script in a spreadsheet to find the most common issues, and 2) creating an affinity diagram to find patterns across participants.
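As a concrete illustration of the spreadsheet step, tallying the most common issue per section takes only a few lines of Python with pandas. The file name and column names here are hypothetical placeholders; the actual sheet was organized by the sections of our interview script.

    import pandas as pd

    # Hypothetical export of the coded interview spreadsheet: one row per
    # coded response, with columns participant, section, issue.
    responses = pd.read_csv("interview_responses.csv")

    # Most common issue within each section of the interview script.
    top_issues = responses.groupby("section")["issue"].agg(
        lambda s: s.value_counts().idxmax()
    )
    print(top_issues)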

Method 3: Survey

The third method was an online survey built with Google Forms. We used it to ask participants about their experiences with STEM classes, their study methods, and their gaming experience.

Goals
● Learn about which learning tools participants have used in the past
● Understand how people use these learning tools and how often they use learning tools in general
● Understand users' opinions on gamified learning tools
● Understand users' experiences with specific learning functionalities

Participants
We recruited participants through the DePaul participant pool, and all participants were 18 years or older.

Procedure
Participants followed the steps in the survey. We asked participants questions to help us better understand their opinions on STEM vocabulary learning tools in a gamified context. The survey took approximately 10 minutes to complete.

After collecting responses from 25 participants, we transferred all the data to a spreadsheet, analyzed it, and transferred the insights to Miro to build a flow chart.

Data Analysis
We used three methods to analyze the data: 1) organizing survey responses by section to find the most common issues and insights, 2) creating a flow chart, and 3) creating a conceptual model. We used both the survey data and the interview data to build personas for our prototype.

Personas

Both the interview and survey data were used to build our personas and a conceptual model that visualizes the flow of actions that need to take place as students navigate through the game. With the data collected, we analyzed students' interactions across different levels, from the beginning of a session to the time the students logged off. Finally, we reviewed the proposal with the developer to confirm that the features considered for the game were feasible to implement.

Method 4: Mid-Fidelity Prototype and User Testing

Our fourth method was to develop a mid-fidelity prototype based on the results from the interviews and surveys. Along with the prototype, we conducted a usability test to evaluate our concepts.

Goals
● Use the initial low-fi prototype handed to us and develop a mid-fi prototype
● Find major usability problems
● Fix interaction logic and strengthen functionality

Participants
After developing the mid-fi prototype, we recruited participants through family and friends to participate in the usability test. All participants were between the ages of 24 and 30 years old.

Procedure
We developed a usability test composed of seven usability tasks to evaluate seven of the core functions of the game:
1) log in as a guest
2) create a profile
3) play a game
4) add a friend
5) compete with a friend
6) change the subjects of the game
7) get hints on the game

The testing took approximately 30-35 minutes per participant. After the test, we encoded the participants' results into a spreadsheet (see Appendix 9) to later analyze for insights and resolve usability issues in a new mid-fi prototype.
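A simple way to structure such a results sheet is one row per participant per task. The column scheme below is a hypothetical illustration of how successes, times, and confidence ratings can be encoded and summarized, not a copy of the actual Appendix 9 layout.

    import pandas as pd

    # Illustrative encoding of usability results: one row per participant
    # per task (columns are hypothetical, not the actual appendix layout).
    results = pd.DataFrame(
        [
            ("P1", "log in as a guest", True, 22, 5),
            ("P1", "create a profile", True, 48, 5),
            ("P2", "add a friend", False, 95, 2),
        ],
        columns=["participant", "task", "success", "time_sec", "confidence"],
    )

    # Per-task summaries used when prioritizing fixes.
    print(results.groupby("task")[["time_sec", "confidence"]].mean())
    print(results.groupby("task")["success"].mean())  # completion rate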

Method 5: Iteration: Mid-Fidelity Prototype and Usability Test

Our fifth method was to iterate on the previous mid-fi prototype based on the results of the first usability test and to conduct a second usability test.

Goals
● Find major usability problems
● Fix interaction logic and strengthen functionality

Participants
After iterating on the mid-fi prototype, we recruited 10 participants to take part in the second usability test. All participants were between the ages of 20 and 30 years.

Procedure
We made changes to the prototype based on the results of the first usability test to increase functionality and fix the significant usability problems we ran into. This expanded our prototype to 56 screens, up from the original 26. We then tested the participants using the same usability script with one added question, asking them to look up the meaning of a word in the dictionary. After the test, we encoded the participants' results into a spreadsheet to later analyze for insights and resolve usability issues in a new mid-fi prototype.

Data Analysis
We organized the results of each participant per task to find the most common usability issues and insights. These were then taken into consideration when adjusting the mid-fi prototype.

Method 6: Finalizing Mid-Fidelity Prototype and A/B Testing

Based on the second round of usability testing, we created a third and final iteration of the prototype. With this prototype, we also conducted an A/B test to decide which path we should keep for the "challenge a friend" feature. We based the A/B test on the following research question and hypotheses: Is there any difference in time to completion between the two locations of the "VS" button used to challenge a friend?

Hypothesis
H0: There is no difference in time to completion between the two locations of the invite-a-friend button.
H1: There is a difference in time to completion between the two locations of the invite-a-friend button.

Goal
● Determine which path is more natural for participants to navigate to challenge a friend

Participants
We recruited eight participants through family and friends to participate in the A/B test. Each researcher tested two participants: one with the A prototype and the other with the B prototype.

Procedure
We gave the participants the task and timed them as they completed it, as well as counted the number of clicks they made and the number of errors. Afterward, we had them rate their confidence and satisfaction with the task on a Likert scale.

Data Analysis
We recorded each participant's time on the task for both the A and B variants and computed the group means, then performed an independent-samples t-test on the data.
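As a sketch of this analysis, an independent-samples t-test over the recorded times takes only a few lines of Python. The times below are placeholders for illustration, not the actual measurements.

    from scipy import stats

    # Placeholder task-completion times in seconds (four participants per variant).
    times_a = [12.0, 20.5, 30.0, 44.5]  # A: "VS" button on the dashboard
    times_b = [25.0, 28.5, 33.0, 37.5]  # B: "VS" button on the game page

    t_stat, p_value = stats.ttest_ind(times_a, times_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # compare p to alpha = .05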

Results

Interviews
Our participants believe that STEM subjects are essential, but they find the classes difficult overall. Some of the issues we found related to a lack of understanding caused by the complexity of the topics, as well as a lack of interest in the subject. With this in mind, we focused our findings on three areas: 1) students' motivation when learning STEM vocabulary, 2) current games facilitating the learning process of abstract topics, and 3) stimulating intrinsic learning.

a. Students' motivation when learning STEM vocabulary
● Students understand the importance of STEM courses. Many methods currently help them learn, including YouTube videos, flashcards, and writing down words multiple times. Interestingly enough, none of our participants were able to name a specific game that helps them learn STEM vocabulary.

b. Current games facilitating the learning process of abstract topics
● Most students want to win, and if they find a game too difficult, they can lose interest and will not be willing to play it again. Consequently, the game needs to assess students' knowledge before they start playing. Games also need to be entertaining enough to keep students engaged in the learning process.
● Most students agree that games have the ability to engage students in learning difficult topics.

c. Stimulating intrinsic learning
● Participants mentioned that they would like to play with friends or with others through social media. By adding a social context to the game, students can challenge each other or collaborate on difficult topics.
● Social aspects and competition stimulate intrinsic learning. When students play against each other, they are motivated to learn faster to be able to keep up with other players. However, we also found that students want to spend time playing alone, as well, meaning that social aspects should be optional.

Surveys
Through our survey responses, we found that all but one participant had used a learning platform, either currently or in the past. The majority of our participants (77.8%) had used an online learning platform such as Khan Academy or Lynda, which shows their willingness to learn on a web-based application.

We found four common features that most participants were interested in through their learning and gaming experiences:

1) access to a dictionary, 2) the ability to play both alone or with friends, 3) a simple, straightforward game design with graphical elements, and 4) the ability to gain points and exchange those points for help within the game.

First Round of Usability Testing
In the first round of user testing, results varied across both tasks and participants. Our most successful tasks were the 1st, 2nd, and 6th, which were the most straightforward activities requiring the fewest steps. All participants rated those with the highest level of confidence (5).

We ran into two main issues:
● Task 1: Participants did not fully understand why they were being asked questions after logging in and just wanted to skip ahead into the game.
● Task 6: Participants did not understand what the icons meant.

Tasks 3-1 and 3-2 also received high confidence ratings (4), yet participants had criticisms. We wanted them to enter the game by clicking the pin on the map, yet most entered through the games listed at the top of the screen. Those who did enter through the map were unsure whether they had done so correctly, as it dropped them into a game of word search instead of hangman. On task 3-2, two participants noted that they hesitated because they thought clicking the "I don't know" button would cause them to fail the game.

Tasks 4-1, 4-2, and 5 were our worst-scoring and longest-duration tasks. One or two participants failed each of them. One of the main issues behind the failures was that not every portion of the prototype was clickable, which confused participants when trying to challenge a friend and when trying to change the subject. While trying to change the subject in task 5, participants were also unsure why the option was located on the Dictionary page and mostly ended up clicking there by mistake. While trying to challenge friends in task 4-2, participants noted that they expected to challenge them through the friend button on the dashboard rather than having to go into the game.

The Second Round of Usability Testing and Statistical Testing
Based on the results from the first round of testing, we iterated our prototype and ran a second round of testing to see whether the usability issues had improved or been fixed. The major issues we still ran into were participants commenting that the dashboard icons were too similar, noting that there seemed to be too many paths to challenge a friend, and being unsure whether they had finished a task because there was no confirmation page after adding a friend.

Overall, almost every task improved in time and confidence levels. Figure 1 shows the average duration of each task in the first round of testing, while Figure 2 shows the average duration of each task in the second round. We performed a t-test comparing task durations across the two rounds and found a statistically significant difference at the .05 level (p ≈ .028), indicating that the changes significantly improved the usability and ease of the prototype.
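A minimal sketch of this comparison, assuming the per-task average durations from Figures 1 and 2 were paired by task (the original analysis may have used a different t-test variant, and the durations below are placeholders, not the measured values):

    from scipy import stats

    # Placeholder average durations in seconds for the seven tasks,
    # paired by task across the two rounds of testing.
    round1 = [35.0, 42.0, 60.0, 88.0, 75.0, 30.0, 55.0]
    round2 = [28.0, 30.0, 45.0, 70.0, 52.0, 25.0, 40.0]

    t_stat, p_value = stats.ttest_rel(round1, round2)  # paired t-test
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # significant if p < .05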

A/B Testing

The figure below shows the time in seconds it took each participant to finish the task, for both the A test and the B test. After conducting a t-test with SPSS and RStudio, we found no significant difference between the dashboard (M = 26.75, SD = 21) and the game page (M = 31, SD = 11.5) at the .05 alpha level (t(6) = -.35, p = .74).
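These reported values can be reproduced directly from the summary statistics, assuming four participants per variant (which the even split of eight participants implies):

    from scipy import stats

    # Reproduce the reported t-test from the summary statistics above.
    t_stat, p_value = stats.ttest_ind_from_stats(
        mean1=26.75, std1=21.0, nobs1=4,  # A: dashboard
        mean2=31.0, std2=11.5, nobs2=4,   # B: game page
    )
    print(f"t(6) = {t_stat:.2f}, p = {p_value:.2f}")
    # ≈ t(6) = -0.36, p = 0.73, consistent with the reported
    # T(6) = -.35, p = .74 after rounding.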

Discussion

Our goal for this project was to create a gamified, web-based learning tool that was both useful and fun for students to use. We wanted to give students a helpful way to learn STEM vocabulary, where they would be challenged yet still invested in their learning process.

During our interview process, we gathered valuable information that helped drive the project. Our participants described helpful study habits they used, such as copying down information for repetition, using flashcards, or watching YouTube videos that explain the topics. We integrated these methods into our prototype to make it feel familiar and engaging, basing our games and hints on what students already know to help them with the learning curve.

From our interviews, we also found that students were motivated to keep engaging with a game the more they won. It was essential to make the games challenging, but not so challenging that players would lose interest. One meaningful way we decided to handle this was a quick quiz before users start playing, to determine their current level of knowledge of the topic and launch them at the appropriate level.

In the interviews, participants also expressed interest in learning both alone and with friends. Based on prior literature, we agreed that an interactive social option could help encourage learning, which is why we wanted to give players the option of challenging friends, as well as a leaderboard so they would stay motivated to move up in rank by completing more levels.

Using our results from the interviews, we built our survey to ask participants about certain game features that would interest them. Half of our participants marked that they had used tools that had integrated online dictionaries. We took this as a suggestion to make sure users could look up definitions of words they came across in the games. Our survey responses further supported the data from the interviews.

In terms of game design, our participants remarked that they preferred a straightforward game design with nice visual elements. We used this information when it came to building our dashboard for the game; we wanted all the information to be clearly laid out for the player while also keeping them on a journey-based game where they successfully pass levels and move on to the next one.

Throughout our three prototype iterations, we made many changes to address the usability issues found in testing. During the first round, we ran into issues with the questions shown after log-in, confusion over the game's layout, confusion over how to correctly enter the game, and users being unsure whether they had completed a task. Taking these findings into consideration, we iterated the prototype. During our second round of testing, our usability issues were noticeably improved or fixed. We still found problems with users not being able to orient themselves on the map and figure out exactly where to click to enter the game without reading the label. Although we were not able to test a third time, we updated the design in the hope of fixing these issues; if we were to do future testing, this would be one of the areas of focus.

For our A/B testing, our findings were based on a small participant pool (n = 8). Although we did not find a significant difference between the two paths, the results might have been different with a larger pool, so we recommend at least doubling the number of participants in future tests. Based on our results and previous comments from user testing, we decided to keep both pathways for inviting a friend: players can challenge friends from within the game and from the main screen, minimizing the number of clicks regardless of where they are.

Overall, we produced a prototype inspired by users' comments and the results obtained from user testing. We also gave a presentation to Itay to showcase our findings and propose our design. The prototype will be handed off to developers, and the game is expected to be completed by the end of the year or in early 2021.