In our “UX Testing Methodologies” article, we talked about the role UX (or usability) testing plays for a business and looked at how large the lost profit can be when testing is not carried out. In this article, we analyze in detail some of the methods that relate directly to users’ task performance, and we take a closer look at what to do with the test results.
You can use each of the methods below at any level of prototype fidelity: with a paper prototype of the resource as well as with a highly detailed version on a web platform. A simple paper version, which can be drawn in a few hours, is suitable for analyzing a simple user flow. However, complex logic involving scripts, animations, and all the elements of visual design is best tested on interactive prototypes.
Corridor Testing – a small group of users tests your resource one at a time, while the moderator records completion times and user actions using a dedicated testing app.
Remote Testing – the moderator communicates with users remotely over any communication channel while they perform their tasks on the website, and the conversation is recorded.
Remote Unmoderated Testing – users perform tasks without a moderator present and without communicating with one. Everything happens within a dedicated program: it gives the user tasks to perform on the website (find the contact page or a subscription form, etc.), and how to accomplish them is entirely up to the user. The program records the time spent and any difficulties that arose in the process.
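To make the idea concrete, here is a minimal Python sketch (with hypothetical names and fields) of the kind of data such an unmoderated-testing program records per task: the time spent, whether the task was completed, and any difficulties logged along the way.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TaskResult:
    task: str             # task given to the user, e.g. "find the contact page"
    seconds: float        # time the user spent on the task
    completed: bool       # whether the user finished the task
    issues: list = field(default_factory=list)  # difficulties logged along the way

class UnmoderatedSession:
    """Minimal sketch of what an unmoderated-testing tool records."""

    def __init__(self):
        self.results = []
        self._task = None
        self._issues = []
        self._start = None

    def start_task(self, task):
        self._task, self._issues = task, []
        self._start = time.perf_counter()

    def log_issue(self, note):
        self._issues.append(note)

    def finish_task(self, completed=True):
        elapsed = time.perf_counter() - self._start
        self.results.append(TaskResult(self._task, elapsed, completed, self._issues))
```

Real tools capture far more (clicks, scroll depth, screen recordings), but the per-task structure above is the core of what the moderator later analyzes.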
Expert Assessment – engaging an expert in the area where you aim to identify problems. For example, for a shoe-selling website, you can invite an experienced salesperson who will explain what buyers are most interested in; for a cosmetics website, a beautician, etc. Such specialists can highlight the problem areas because they already know the target audience well.
Big Data Testing. This option suits large companies such as Google or Yandex. Essentially, it is the launch of a beta version of a product “to the world”. The website or application then collects massive traffic within a day, which allows us to test hypotheses on a large amount of data, spot all the problem areas, and fix them.
Now let's take a look at some methods related to the respondent/moderator interaction.
Observation. The method’s peculiarity is that the moderator does not communicate with the respondent at all; he only observes the respondent’s actions and analyzes them. After finishing work with the website, the respondent fills out a questionnaire, and the moderator uses his notes to interpret the answers correctly. This matters because, by the end of the survey, people typically don’t remember precisely what actions they performed on the website or how simple or complicated those actions were.
Shadow Method. It involves three participants working simultaneously: a respondent, a moderator, and an expert. The respondent, as in the previous method, performs the tasks; the expert comments on his actions so that nothing is missed; the moderator takes notes.
Thinking Aloud. It is no different from the previous two in terms of the respondent’s freedom of action; however, with this method, the respondent voices all his actions out loud. It has been observed that this helps the user perform tasks more attentively, but at the same time it blurs the picture of the user’s natural behavior.
Retrospective. Combines the Observation and Thinking Aloud methods. It is time-consuming but gives a deep understanding of user behavior. With this method, the respondent first performs all the tasks and then watches a video of his actions, commenting on why he acted that way.
Dialogue. The respondents and the moderator communicate during the test, ask follow-up questions, and discuss impressions of the product. This method has proven itself best for qualitative research at the prototype and concept stages.
The choice of a method depends directly on the goal at hand. For example, problems with the content can be identified cost-effectively through remote unmoderated testing, while an expert can help confirm or disprove a hypothesis about technical issues.
Upon completion of the UX testing, we analyze both the quantitative and the qualitative results. Quantitative results include the percentage of users who successfully completed the scenario, how satisfied they are with the interface, and so on. Qualitative results include user reviews: what they liked and what they disliked. Finally, we deliver a list of errors found on the resource, along with recommendations on how to fix them.
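As an illustration, the two quantitative metrics mentioned above can be computed from raw session records like this (a minimal Python sketch; the data and field layout are hypothetical):

```python
# Hypothetical session records: (user_id, completed_scenario, satisfaction_1_to_5)
sessions = [
    ("u1", True, 4),
    ("u2", False, 2),
    ("u3", True, 5),
    ("u4", True, 3),
]

def success_rate(sessions):
    """Percentage of users who successfully completed the scenario."""
    done = sum(1 for _, completed, _ in sessions if completed)
    return 100.0 * done / len(sessions)

def avg_satisfaction(sessions):
    """Average interface-satisfaction score on a 1-5 scale."""
    return sum(score for _, _, score in sessions) / len(sessions)

print(success_rate(sessions))      # 75.0 for this sample
print(avg_satisfaction(sessions))  # 3.5 for this sample
```

Commercial testing platforms compute these numbers automatically, but the underlying arithmetic is exactly this simple.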
Then two significant steps follow:
1. Implementation of Real Changes to the Interface. The changes need to be implemented so that all the work done above makes sense and your resource works much more effectively. For example, if it wasn’t clear to users what the “send OTP” button does, you need to rename it. If 70% of users didn’t find the “register” button, it should be made brighter and moved elsewhere. It sometimes happens that users don’t complete a purchase because they are afraid that clicking “next” will immediately charge them. In such a case, add a note below the button explaining that the customer will not be charged yet and that this is just the next step of the ordering process.
2. Repeated Testing is a critical stage that answers the central question: does the website now perform its functions fully? But the key role is still played by the implementation of our recommendations: if nothing is changed, nothing will change.
In 2018, web traffic shifted toward smartphones. Nowadays, 52% of users access the Internet from mobile devices rather than from computers. We are witnessing rapid growth of mobile versions of websites and all kinds of applications, as well as the development of specialized tools for UX testing on desktop devices and for mobile UX testing.
There is no fundamental difference between testing a website and testing an application: in both cases, usability and user flow are examined. However, in mobile application testing, more attention is paid to download speed, because for those on the mobile Internet it is a critical parameter. The main practical difference between these two types of testing is the tools used in the process.
Desktop tools: Google Analytics, PageSpeed Insights, Plerdy, AskUsers, User Testing, Usability Tools, Usability Hub, Optimal Workshop, Feng-GUI, etc.
Mobile tools: Crashlytics, Adjust, AppsFlyer, HockeyApp, Sensor Tower, Woopra, Amplitude, AppLyzer, Clicktale, etc.
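As a tiny illustration of the download-speed checks mentioned above, here is a generic timing helper in Python (the URL in the usage comment is hypothetical; dedicated tools like PageSpeed Insights measure far more than raw download time):

```python
import time

def time_fetch(fetch):
    """Time an arbitrary fetch callable and return (result, elapsed_seconds).

    In a real check, `fetch` would download the page, ideally over the
    network conditions your mobile users actually experience.
    """
    start = time.perf_counter()
    result = fetch()
    return result, time.perf_counter() - start

# Hypothetical usage:
# import urllib.request
# body, seconds = time_fetch(lambda: urllib.request.urlopen("https://example.com").read())
# print(f"Page downloaded in {seconds:.2f}s")
```

Measuring from a real mobile connection (or a throttled one) matters: a page that loads instantly on office Wi-Fi can still lose mobile users.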
Testing Increases Revenues. Forrester research confirmed that every $1 invested in UX testing returns $100, provided that the necessary changes are implemented in addition to the surveys. The reason is simple: the better the usability, the higher the conversion rate.
Many successful modern companies pay close attention to so-called “continuous development”: they keep improving their interface even when it has no obvious problems. For instance, Booking.com runs up to 1000 A/B tests daily.
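For readers curious how an A/B test result is judged, one common approach is a two-proportion z-test, sketched below in Python (the traffic numbers are invented; commercial testing platforms automate this step):

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B convert differently from A?

    conv_a/conv_b: number of conversions; n_a/n_b: number of visitors.
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Invented example: variant A converts 120/2000 visitors, variant B 160/2000.
z, p = ab_test_z(120, 2000, 160, 2000)
```

If the two-sided p-value falls below a chosen threshold (conventionally 0.05), the difference in conversion between the variants is unlikely to be chance.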
No Loss. Here is an example: you have a website, and you decide to increase traffic by spending money on advertising. A large number of new users visit the site, and two outcomes are possible. If you have poor UX, annoying pop-ups, or inconvenient navigation, all this traffic goes straight to the competitors, and you lose the money invested in advertising.
Without UX testing, 35% of customers leave the website without ordering anything.
If you have conducted UX testing and implemented the necessary changes, the increased traffic is being converted into profit.
Saved Time. UX tests simplify the website design and allow it to run twice as fast, with no errors and a truly successful interface.
The Ability to Revive Failed Startups. The number of IT startups in the world grows each year, and their combined value is measured in billions of dollars. At least $150 million of this sum goes to unsuccessful projects. While good UX shows at the start whether a startup has a future, high-quality UX testing lets you revive a promising project that has survived a failed launch.
A quality website UX is made up of seemingly trivial details. UX testing identifies problems even in the most unexpected components: from the placement and color of a button to the name of a menu item and the wording on the feedback form.
These components also include:
- page loading speed
- correct categorization of goods in the catalog (by color, operating principle, etc.)
- smooth scrolling on the page
- an error-free footer
- unified design and alignment of elements, blocks, etc.
But even the highest-quality testing will yield benefits only if you implement the entire list of advised improvements and fix the indicated errors. Then the resource will become much more attractive to customers, and conversion will grow naturally.
Would you like to order competent UX testing for a product at any stage of its creation? Write to us. Our experience and specialists are at your service.