Democratizing research is still a debated topic. There doesn’t seem to be a clear answer about whether or not it should be done, and from what I can tell, that’s because people are talking past each other. If you simplify the debate into the Yays and the Nays, the discussion is oddly skewed. People against democratization say it takes too much time and effort, produces little value, and causes a good amount of harm. Interestingly, even those in favor admit the possible dangers but argue that the value to the practice and the organization outweighs them. The stakes of this argument have gotten higher given recent global industry and technological shifts, so I wanted to disentangle the different arguments to get a bit of clarity.
Generally, the disagreements about democratization seem to come from two sources:
- How people explicitly or implicitly define the term “democratization”
- What problem democratization is being used to solve
Let’s break them down.
If you are a researcher at an organization, or you work with or around some, you’re probably familiar with the concept of democratization. In a nutshell, it involves a researcher or research team teaching non-researchers (designers, product managers, engineers, etc.) to do some researchy things. These partners get anything from guides and templates to more hands-on support, like crash courses and workshops, so that they can take on work a researcher would normally be responsible for, with varying degrees of competency.
The differences in how people define “democratization” boil down to a few things:
- What types of research are democratized? (Formative? Generative? Evaluative?)
- What phase of the research process are non-researchers involved in? (Planning? Building? Data collection? Analysis and synthesis? Reporting?)
- How much of the research process is done ONLY by non-researchers, without a researcher involved? (Different setups can range from a close partnership to the non-researchers tackling research solo)
You can visualize it as a spectrum from no democratization at all, to whatever it means to “fully” democratize a job function.
There must be a good reason that people smarter than you and me think it’s a good idea to have people do work they’re not well-trained for. The origin of the democratization conversation is pretty similar across all organizations with an R&D function: more resources and staffing are allocated to the D than the R, so researchers don’t have enough bandwidth to do everything asked of them and become a bottleneck for other work.
At any large company that produces a product or service, ratios of researchers to designers or engineers often fall between 1:15 and 1:5 (Kelsey Kingman cites a 1:8 ratio at Fidelity), with the larger tech companies usually staffing proportionally more researchers (the layoffs of 2022 and 2023 have likely made these ratios even less favorable). It is not unheard of to be the only researcher at a small to medium-sized company. Research teams run this thin because one “unit” of research tends to support, inform, and enable a broader swath of other work.
Because research is cool like that.
The problem is, organizations sometimes have a difficult time understanding exactly how many “units” of research are needed to accomplish what they want to do, and, doing what businesses naturally do, they hire the minimum they think they need. The result is that researchers regularly face a reality that every working professional experiences at some point in their career: there is too much work to do and too little time.
This is the moment when the decision usually happens. The most straightforward options that usually come to mind to address the bottleneck are:
1. Say “no” to the work, risking alienating partners and inviting rogue researchers who do research on their own without anyone else knowing
2. Say “yes” to the work, overloading the research team even more, straining team morale, and risking burnout and costly mistakes
3. Democratize the work, offloading the work onto the people asking for research, empowering them to move forward without as much effort from researchers
It’s not hard to understand why the idea of democratization is attractive. On its surface, it seems like it could be an “everybody wins” scenario. In some cases, it gets close…
One approach to democratization is essentially a structured form of cross-team collaboration: bringing stakeholders along for the research ride and broadcasting the results beyond the immediate team for maximum reach. These processes are set up to ensure that research is relevant to stakeholders’ needs, to give stakeholders and partners a sense of ownership over the work, and to build more support for doing the work itself.
Basically, this approach seeks to maximize the research impact, not to lighten the load. Researchers are still in charge of either executing or reviewing the work at every stage, but non-research partners get a “behind the curtain” look at how the sausage is made.
At the cost of more time and effort to accommodate a bigger working team, “collaborative democratization” has enough benefits that it’s a relatively common practice:
- More output: More research gets done than would be possible without stakeholder involvement
- Greater impact: With more stakeholders involved in research from the planning to analysis phases, the results can be more relevant and valuable to the organization
- Upskilled partners: Stakeholders get a better understanding of the research process and where insights come from, and they grow more skillful at asking good questions
- Recognition: More involvement of different parts of the organization in research means more visibility of research to the organization
We’ve reached the end of the helpful parts of democratization, which fall somewhere in the middle of the spectrum. The endpoints are obviously not feasible, and possibly imaginary: I have never met a researcher who does all the research in a silo without involving anyone else, nor one who is living the easy life after giving their job away entirely (if you know any, please introduce me). However, I want to emphasize the risks of what I will call “extreme democratization,” where stakeholders take on part of the research process without any guidance or oversight from the research team. Want me to do research? I won’t do it for you, but I will teach you how to do it yourself so you don’t have to wait for me.
It’s the corporate version of teaching others to fish, which is problematic when your job is to be the fisherman.
Giving up too much control over research comes at a cost:
- Continuous investment: Even the laziest researchers won’t just tell someone else to do their job with no guidance. Teaching others to do something new requires a lot of time and effort and must be done repeatedly
- Lower quality: Research done by non-researchers will be worse than the same work done by a professional; mistakes are guaranteed and can lead to confusion, bad decisions, and even legal problems. One counterpoint is that “research” will happen without researchers no matter what, so there is an argument for making people better at it rather than trying to prevent them from doing it
- Unwillingness: Non-researchers have their own work to do, and might not want the extra job added to their busy schedules
- Career risk: Giving others the responsibility to do your job communicates at some level that you are replaceable by a mostly untrained human (or an AI)
Extreme democratization (giving away full control of part of your job) is clearly not a good idea, but even collaborative democratization is questionable when it is used to solve for a lack of bandwidth. Even in a perfect world, where the costs of democratizing are smaller than the costs of doing the work yourself, democratization only addresses the surface issue, not the root causes behind the lack of resources dedicated to research. Here are a couple of very real contributors to that universal problem:
- No one understands the process or purpose of research: There is no clear process for doing research at your organization, so it defaults to a service model where people ask for whatever research they think they need, usually to validate what they already want to do.
- Researchers aren’t well-organized: Research says yes to too much work partly due to a lack of good project management. If it’s not clear how much time and effort things take, and it’s also not clear what matters most, how can you ever justify saying no to anything?
- Researchers forget what they are paid for: Research doesn’t effectively communicate its impact on the organization. Data, insights, and recommendations are the direct outputs of the work, and reports document them. But nobody gets money to grow a team because of how many reports they publish. Teams get funding because of how much value they bring to the business, however the business measures that. Money is usually a good starting point, but it’s not the only factor.
In case you are looking for a “do I do it or not?” type of answer, here’s a handy diagram outlining my perspective:
tl;dr:
- If you’re trying to improve your research impact, involve your partners and stakeholders! Just make sure it’s a collaboration and not a handoff
- Don’t use democratization to solve a lack of bandwidth. Even if you do it well, you probably won’t save any time