Maze is a user testing and discovery platform used for gaining early insights to validate your product ideas and bring confidence to the decision-making process pre-development. In this article, we'll walk through our process and experience with Maze, as well as the pros and the cons.
Starting a new project in Maze gives you two options: Maze User Testing, which focuses on design validation, and Maze Discovery, which is used for gaining early insights like market research and idea validation. For our needs, we decided to go with Maze User Testing, though depending on the project and/or client, Discovery may be the better fit.
After inserting the link for your prototype, you will be asked to create your first task. There are a variety of block types to choose from. It was great to have this many options because it allowed us to gain additional insights not only in the area of quantitative, data-driven feedback, but also qualitative feedback.
Maze provided easy-to-find resources that helped guide us in creating non-leading missions. When creating a new block, the user is also given the option to read more, to understand how to correctly use each block type for their test.
Throughout our testing process, we found that it wasn't immediately evident how direct success, indirect success, and bounce rate tie into the overall usability score. It would be really great in the future to have an onboarding-type experience that helps users understand how to achieve a higher usability score, rather than discovering how usability works after the test is complete. Of course, there are ways to look into this by searching through Maze's FAQ, but something more front and center could be a nice addition.
In order to increase your usability score, it's important to think through all of the expected paths a user might take. Your score is negatively affected when a user bounces or takes an indirect path. Most users want to explore, so make sure to account for multiple expected paths a user may take to complete their mission. As you learn from your tests and iterate on them, you will discover paths that were previously unexpected, helping improve your usability score.
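To make the mechanics concrete, here is a minimal sketch of how direct success, indirect success, and bounces could roll up into a per-mission score. The function name and weights are purely our own illustration for reasoning about the trade-offs, not Maze's published formula:

```python
def mission_score(direct: int, indirect: int, bounced: int) -> float:
    """Hypothetical mission score out of 100, from tester outcome counts.

    Assumed weighting (illustrative only): direct successes count
    fully, indirect successes count half, and bounces count zero.
    """
    total = direct + indirect + bounced
    if total == 0:
        return 0.0
    return 100 * (direct + 0.5 * indirect) / total

# Example: 20 direct successes, 10 indirect, 5 bounces.
# Adding an "expected path" that converts 5 indirect testers to
# direct would raise the score without changing tester behavior.
print(round(mission_score(20, 10, 5), 1))
```

Under any weighting of this general shape, converting indirect successes into expected (direct) paths raises the score, which is why accounting for more of the routes testers actually take pays off.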
We really loved how easy it was to update our prototype before sending it out to users for testing. Each time a new path is added to your Figma prototype, you can simply hit "refresh my prototype" for the latest version. We found that sometimes this required us to refresh our entire page for the updated prototype to load.
Sourcing testers in Maze was great considering that the cost per tester was only $3. At this price, though, there were trade-offs. We found that there weren't as many options for ensuring that testers were chosen in context to the app we were testing. Compared to a platform like usertesting.com, our ability to source testers based on the context of the project was a bit limited, but we were also able to produce results in a much more timely manner, which helped us meet the deadline of the product release with confidence. In Maze, testers are sourced using Amazon MTurk. Upon purchasing testers, you are asked to enter how many testers are needed, along with sex and age range. Getting about 25-35 testers in a matter of hours was huge for us and really sped up our design validation process.
After testers are ordered and your test goes live, you can actively see the number of testers who have completed your test and how this might impact the value of your test. Additionally, you can source testers yourself by sharing a link on your social networks or sending out links independently. This may help with gaining some additional testers who are more specific to the product experience you're testing.
For a first-time user, we recommend conducting additional rounds of testing to understand what affects your usability score. We also found that asking qualitative questions provided us different insights from simply following quantitative data. Maze not only allows you to gain valuable data-driven insights, but it also gives you the tools to find out how people actually feel about the experience.
It's very easy to assess data. There is conveniently more than one way to present your results. You can get the overview of your test from an internal perspective, or present it as an overview with the "go to report" button located in the top navigation.
After your test has been stopped and results recorded, Maze lets the user move through each mission and prioritize feedback. This is a really nice feature considering that some feedback may be caused by outliers. It also allows you to see your feedback from an internal perspective before viewing results as a report.
Using the Maze Report is a great way to summarize your findings to your team, and to your clients. It starts you off with an initial screen that shows how many responses (testers) you had, as well as your usability score.
Each mission in your report starts out with some basic stats: misclick rate, average duration, success rate, and bounce rate. This is a great way to quickly recap and give an overview of how users performed.
Maze also flags screens to rework, screens to check on, and great screens. This type of analysis was very valuable to us because it helped us realize where people were getting stuck. For example, one screen had an icon that was intended to act as an illustration, but users assumed they could tap on it to find what they were looking for. This immediately helped us make the decision to remove the icon and come up with a different solution.
Mission paths are a great visual indicator of which expected routes your users are taking. Dark blue represents the "selected path", light blue represents the "expected path", and red represents "tester drop-off". It would've been great for this part of the report to highlight which of the "selected paths" are also indirect success paths. With a large number of paths and screen variations, it's nice to see how these two are correlated.
The report also goes on to show a success analysis for each mission, as well as a usability breakdown.
As mentioned earlier in the article, it can be a little unclear how your usability score is calculated in Maze. Below, we've provided a link that explains a bit more about how everything works.
Iterating in Maze is easy for the most part. If you happen to create a completely new prototype with a new link, you'll unfortunately need to create a new project. Aside from this, you can easily duplicate previous tests and add in new paths or questions, etc.
For those looking to rapidly gain design validation look no further! Maze had very few downsides. If anything, we are excited to see how the product evolves in the future. We'd love to see more detail added for sourcing testers, but all in all it was a great experience and helped us make the design decisions that were vital to creating a good user experience.
Through the data we were provided, it was easy to make decisions and act on user feedback where we saw patterns or trends. We ended up conducting more tests than anticipated because of the valuable insights we were receiving. Our decision-making process after receiving the results was highly driven by heatmaps, by where users were getting stuck and bouncing, and by the indirect paths that we hadn't previously expected or accounted for.
Based on the quantitative and qualitative feedback that we received, it was relatively easy for us to boost our usability score from 76 to 90.