Evaluating UX for Dementia UK, Part 2: Participants

Part 2: Choosing Test Participants

Introduction

In my first article of this series, I talked through the process I followed to interpret my brief and define goals for my UX evaluation. With that established, I now move on to deciding how to recruit participants.

Identifying Participants

It is common to select participants primarily based on their age or gender; however, this vastly oversimplifies the approach. By doing so, I could end up overlooking or excluding some of a service’s representative users.

I went back to my brief and reviewed my previously stated goals to draw out additional distinguishing characteristics that could affect a person’s experience of using Dementia UK’s website.


Empathising With Your Participants

I started by asking myself some questions to prompt my brainstorming process:

  • What would drive a user to be looking for information on the Dementia UK website?
  • How have they arrived at the Dementia UK website? Are they starting their journey on the homepage before following an information scent to satisfy their information needs, or have they landed directly on a content page from a search engine?
  • Is the user looking for information for themselves or for someone they know? Would this alter their information needs?

These are only a small number of examples, but the purpose of asking myself these types of questions was to better understand who is visiting Dementia UK’s website, from which I could define realistic categories. These categories then acted as my guide for recruiting participants: I frequently checked back against them to ensure the people I recruited were representative of each one.

To supplement this, I also reviewed the content on Dementia UK’s website, systematically inspecting the menu systems, labels and content in each section to get a feel for whom the information had been designed. This is a useful, low-cost technique if you are unable to interview your stakeholders or other subject matter experts for their input.

Categorising Your Participants

From my analysis and content review, I identified several categories, but to manage the complexity I reduced these to three, which I felt still covered the representative website users:

  1. Non-medical professionals, looking for medical information to self-diagnose symptoms, support friends or family, or find practical information or support to assist in the care of someone experiencing dementia.
  2. Non-medical professionals who are interested in raising funds or volunteering to support the charity’s work. Users in this category may be part of the first category too.
  3. Medical professionals, interested in supporting the charity’s work by volunteering to become an Admiral Nurse (a specialised care role provided by the charity, geared towards helping people with dementia). Users in this category may be part of the first two categories too.

I’m Ready To Recruit Participants… How Many Do I Need?

To decide on an appropriate number of participants, I based my calculation on the three user categories I had previously defined. Instead of aiming to recruit a magic number (5? 10? 15? 20?), I decided to recruit several participants to cover each of my representative groups. This gave me confidence that my test data would reflect a broad spectrum of the system’s intended users. It would also help me justify, both to myself and to my stakeholders, the basis of my findings when I came to present my results.


“[For] user research that is effective, you’ll need to recruit participants who represent your (potential) users. These participants should possess characteristics found in your eventual customers — the people in your target group.” — Ditte Hvas Mortensen (2)

Just In Case…

This small-scale study was part of the Human-Computer Interaction Design master’s programme I am studying at City, University of London, so I was relying on the goodwill of my volunteer participants to give up their spare time. For this reason, I was keenly aware that some participants might need to change their plans or drop out. To mitigate the risk of any tests not taking place, I recruited a couple of additional participants. This had the double benefit of providing more observational test data and, in the early stages of the evaluation process, helping me refine the execution of my tests and my data capture technique.

Another benefit of recruiting and testing additional participants is that it helped me assess whether I had reached data saturation with the participants I had already tested.

Lessons Learned

  1. Define and categorise your users, then recruit participants who cover these categories. Spend time analysing the information needs of your users; this will give you a clearer idea of your user groups, which you can then use to guide your recruitment. It will also help you justify your recruitment plan when you begin your analysis and reporting.
  2. Recruit participants to cover your representative user categories rather than aiming to satisfy a magic number. You will be observing users who better represent the range of your system’s user population, which will result in more realistic scenarios. You will also be able to link your findings back to realistic categories of your user population.
  3. Recruit more participants than you need. With the best will in the world, your participants are helping you out by taking part. If you are recruiting volunteers who are juggling study or work, accept that life may get in the way and a few last-minute cancellations might occur. Having more participants than you think you need means your extra observation material may help you surface additional insights during your data analysis. At worst, if a couple of your participants do have to drop out, you will still maintain your intended quantity of users and the quality of your collected data will not be compromised.

Next

In my next article, I will explain the process I used to select my usability evaluation methods.

Further Reading

  1. Hoa Loranger, Checklist for Planning Usability Studies (2016)
  2. Ditte Hvas Mortensen, The Basics of Recruiting Participants for User Research (2020)
  3. Jared Spool, Seven Common Usability Testing Mistakes (2005)