Every senior UX Researcher needs to be mixed methods

[Image: AI-generated art of a woman in rainbow colors alongside data visualizations and graphs.]
More art I created thanks to Midjourney… Everyone should use this AI art program.

“Kris, I need to establish a metric baseline for the [new product] that we are implementing.” — says a stakeholder on my team
“Ok, that sounds reasonable. Do we have a list of people to send it out to?” — says me, the optimistic researcher
“Yes, I can ask the PM.” — mentions the stakeholder
“Great, how many users does this new product have?” — I curiously inquire…
“Well, right now, we don’t have any users.”

It feels like it was only last week (how cliché) that I was the only UX researcher on a team of 20+ product designers. Unfortunately, at the time, and for the first half of my tenure there, I wasn’t great at flat-out denying the need for a project. Initially, instead of turning down certain projects, I would safely push out the timeline a couple of weeks. My go-to line was, “We can definitely evaluate this, but there are a lot of requests right now, so we’ll have to revisit this in a couple of weeks.” At the time, it felt like a pretty safe move: as the only UX Researcher, I could always say I needed more support, and until that happened, we would keep delaying projects. It wasn’t until the later part of my stint that I started to realize that not all UX research requests are equal in value.

I had an epiphany when that stakeholder wanted to run a survey on a brand-new product that had no users. I had plenty of requests from other stakeholders, but hey, at least this one clarified that they had the names of around 4 prospective users who could eventually use the product. The research goals were still pretty precarious: they wanted to run a survey with a sample size of 4 users to establish a baseline using a UX metric. A baseline of what? No one uses the product yet! How are we going to measure a product baseline if the people we want to survey are only prospective users?

I really wanted to say, “What the hell are we trying to accomplish here?” So I pushed back a bit, and then I found out the true reason they wanted to run the study: it was part of their yearly review goals to have UXR run X number of their projects, and they were waiting until November, right before the holiday season and new year, to do it. They knew deep down that the methodology and request were not a high business priority, but they needed to hit their HR goals… and that changed my viewpoint. I felt bad that they were in a tight situation with goals they couldn’t reach, but I firmly told them there wasn’t much value in running the survey, and that, given the current priority of projects, we should really question the need for this one. I wasn’t going to be another check mark for a pointless OKR. Luckily for me (and thanks to a supportive manager), I could mostly deny the request. Instead of pushing it back 2 weeks, I pushed it back to February, when the product would be more fleshed out and actually used by someone.

It dawned on me that I could flat-out deny projects, or push back, when they lacked scale. I started to understand that although there can be value in doing research on every project, not every project gains an equal amount of value from it. It’s my responsibility to politely turn them down…

Quant methods by their very nature provide answers to questions of scale and causation… Qual provides coherence and participant focus, but lacks scale and causation. — Sam Ladner, Mixed Methods

As a Senior UX researcher running your own program, or being in charge of which requests get worked on, you need to determine how much of an impact your research will have. Don’t be a “yes” person; stand up for what makes the most research sense… It’s not easy. As a UX person, you are probably empathetic to people, but you are also a researcher. Researchers look at trends and data. Researchers are analytical. Researchers advance topics. The best way to be a researcher, while keeping your empathy (and hatred of conflict) in check, is to look at statistical trends and derive insights from them.

It’s one thing to be a junior or mid-level UX researcher and to take requests ad nauseam, but once you become a senior or lead level, you must know the scale of impact your insights will have on the user. Looking at feature usage statistics, pulling tables of data from server logs, and then cleaning/organizing the data to analyze it are three vital techniques that all UXRs should strive to have. You don’t have to be a data scientist; you just need to know where it makes sense to apply quantitative methods to prioritize a project need. You are a researcher. A researcher in every established field, from Sociology to Physics, knows how to use statistics; UX Research should absolutely be no different. If you are reading this and consider yourself a qualitative researcher, I promise that it’s not as hard or intimidating as you think, especially if you get a streamlined, focused education in behavioral statistics.

My example was a pretty basic scenario where anyone who could count to 4 (well, technically 0) could understand the scale (4 prospective users). There are ways to dive deeper. Let’s imagine a different scenario: A product manager wants to do research on pain points related to an import feature. They suggest recruiting a representative sample of users and propose interviewing 15 people.

Here’s one way to approach the request.

Usually, you can work with an engineer or PM to pull data on feature usage, depending on what’s been instrumented for data collection (this is a huge factor). Imagine there’s a product that has an import feature, and you want to see how often users transfer data into that platform. You can get a spreadsheet of the import feature and how often people have brought data in. Pretty simple, right? Well, now let’s say you’re interested in a specific user type, and you want to segment that import usage data by users’ demographic and subscription level. We’re adding some sophistication, and the tables might be spread out in multiple places. Usually there’s an identifier, or key, for each unique user that you can match the tables on in Excel. Now let’s say you wanted to segment your users not only by demographic and subscription level, but also by how often they use a specific page in a different part of your platform. Now there’s a lot of table matching to do. But the data you wind up with can support a finding like this:

The import feature is utilized the most by users who live in NY, have a platinum-level subscription, and are using the delivery page. To cut down on unnecessary research resources, we should focus only on this segment. There’s a tremendous drop-off in usage for other user groups, so to represent our users’ pain points, we could potentially run only 8 interviews with a very specific sample of users.
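If you’re curious what that table matching can look like outside of Excel, here’s a minimal sketch in Python/pandas. The file names, columns, and segments are hypothetical, invented for illustration; the point is just the pattern of counting events per user, joining tables on a shared user key, and rolling things up into segments.

```python
import pandas as pd

# Hypothetical exports pulled with help from an engineer or PM.
# Column names and files are made up for this sketch.
imports = pd.read_csv("import_events.csv")            # one row per import: user_id, timestamp
profiles = pd.read_csv("user_profiles.csv")           # user_id, region, subscription_level
page_views = pd.read_csv("delivery_page_views.csv")   # one row per delivery-page visit: user_id, timestamp

# How often each user triggered the import feature
import_counts = (
    imports.groupby("user_id")
           .size()
           .reset_index(name="import_count")
)

# How often each user visited the delivery page
delivery_counts = (
    page_views.groupby("user_id")
              .size()
              .reset_index(name="delivery_page_visits")
)

# Match the tables on the unique user identifier (the "key" mentioned above)
merged = (
    import_counts
    .merge(profiles, on="user_id", how="left")
    .merge(delivery_counts, on="user_id", how="left")
    .fillna({"delivery_page_visits": 0})
)

# Segment import usage by region and subscription level,
# alongside how much each segment touches the delivery page
segments = (
    merged.groupby(["region", "subscription_level"])
          .agg(users=("user_id", "nunique"),
               total_imports=("import_count", "sum"),
               avg_delivery_visits=("delivery_page_visits", "mean"))
          .sort_values("total_imports", ascending=False)
)

print(segments.head(10))
```

A table like `segments` is usually enough to see where usage drops off and, in turn, which user segment is actually worth recruiting for interviews.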

Techniques like these help senior-level researchers determine which projects to focus on, and which not to.
