Erika Hall: What Design Research Is and Isn’t
Erika Hall has been a voice of reason in the design community for over two decades. She is the author of Conversational Design and the industry-standard Just Enough Research. She is the co-founder and Director of Strategy at Mule. In this interview, she discusses the origin of her work, misconceptions about design research, and what it actually takes for organizations to learn and make better decisions.
Interview by Jonah Ginsburg, Director of DesignLabs.
Words from the interviewer are in bold italics.
JG: I really enjoyed reading Just Enough Research. It’s been incredibly helpful for me and many of the people I work with. What’s the backstory behind the book? What was the catalyst for writing it?
EH: When I got my first agency job, it was during one of those boom-and-bust cycles. I’d come out of a tech publishing company that had spun out a startup, so I had already “speed-run” a few different types of companies. I happened to get an agency job through a friend, and I immediately realized it was the kind of work I wanted to do.
Because they were hiring so fast, I was seated next to a researcher who had just come out of academia. He was an anthropologist and ethnographer by background. He was essentially a mentor to the whole team. So a few years later, when we started Mule Design, that way of working felt normal to me. But when we worked with clients, they would say, “Can we just skip the research part and get to the design?” I thought that was nonsense. You can’t make a thing until you know what you’re supposed to make. Even illustration should be research-based.
I was so tired of having the same conversation over and over again. I looked for reference material to point people to, but I didn’t find anything. The design research community at the time was small, and while I had friends who had written great books, they were 600 or 800 pages long and cost over $60. You can’t hand a client a giant, expensive book to convince them that research doesn’t have to take a lot of time or money. I wrote the book I wished existed. The title came to me first: you just have to do enough research to make the decisions you’re going to make.
JG: What do you think causes that typical feeling of wanting to skip over the research? What are the common misconceptions people have about it?
EH: There are so many misconceptions about design research. “Research” is a confusing word. People think it will feel like homework. The two biggest misconceptions are that design research is the same thing as academic research, and that you’re doing it to get “an answer.”
Design research shouldn’t be an academic exercise to generate a paper; it’s about learning what you need to learn in order to make a better-informed decision.
The decision is central. Once you talk through it like that—asking clients if they want to make good decisions or if they want to waste time and money—they get it. You do research so you don’t end up setting fire to a lot of money. The output you want is the best possible decision given your available time.
Another challenge is that most information about design research today comes from people trying to sell you software. They want you to believe that using the tool is the work. Organizations buy the most expensive or robust platform because they conflate the tool with the practice. You can do this work with a pencil and paper. The most useful approaches often don’t have a marketing budget because they don’t have a thing to sell.
JG: Do you have examples of what might go wrong without design research?
EH: Negative examples are much easier to find because if you do design research well, the result is often not doing something stupid or offensive. Success comes when the people creating a service truly understand the context and constraints.
Research isn’t a vending machine for insight. It all depends on your goals. What are your goals? And how did you choose those goals? The worst thing about “design thinking” when it’s practiced at a shallow level is that it doesn’t interrogate the goals. If you come in with a bad goal, research won’t help you.
A classic negative example is Walmart’s “Project Impact” from around 2009. They hired executives from Target and did a survey asking customers if they wanted the stores to be less cluttered. Of course, people said yes. They used that “bad” research to justify a massive redesign to attract customers during a financial crisis. They lost over a billion dollars, not counting the cost of redesigning the stores, and eventually had to undo it.
Every time you see a startup launch some nonsense that nobody wants, that’s a case for research.
A good exercise is to just walk around and observe the world. Look at the things that are good and working, and ask: why are the good things like that? Then look at the things that are frustrating, bad, or broken, and ask: why are the bad things like that? You can find out why pretty quickly. But people just don’t do that.
JG: How does research in startups differ from research in more established organizations?
EH: This connects to the larger issue of financialization. In many startups, the real customer is the investor. Success is based on crafting a narrative to get more funding. At Mule, we generally have a rule against working with startups because they often don’t actually want research; they’re just “trying stuff.” It’s gambling versus science. If you don’t have a reality-based business model, research only gets in the way because real-world material conditions harsh your storytelling. We’ve seen a shift recently from huge research budgets to massive layoffs because these businesses are just selling a narrative to investors.
But research should look the same for them. They should be asking, “What information are we missing in order to reduce risk and increase the chance of success?”
JG: Let’s say, at some point, a startup has to confront reality and generate revenue from real customers. What would need to change organizationally? How does an organization learn how to learn?
EH: The basis of everything is collaboration. If you don’t have functional, collaborative decision-making, you cannot bring new knowledge into the organization. I see organizations bring in PhDs and specialists, but the decision-making culture is still one person’s word against another’s.
Researchers are often confused about why their work is being ignored. It’s because the organization isn’t set up to metabolize new information. If people aren’t allowed to ask questions or criticize ideas, research is just window dressing. I often see situations where the people making decisions will cherry-pick any data that supports what they already want to do. The only way to fix that is to have the organization commit to evidence-based collaborative decision-making.
JG: What’s the role of “Research Ops” and research infrastructure in turning an organization more evidence-based?
EH: You cannot operationalize a practice until you have a functional practice. If an organization has a broken culture and throws Ops at it, you just end up with a lot of repeatable processes for things that will be ignored. You have to start with a commitment from the top that it is safe to ask questions and perform critical thinking.
That said, it’s absolutely great to have people who can see across an organization: something that functions like a corpus callosum, connecting the different parts of the organization through a research practice. Fantastic. But that alone doesn’t create the culture.
JG: Are there team structures that are more conducive to building the right kind of learning culture? And how do external partners fit in?
EH: It depends on the business, but ideally, all learning functions should be grouped together. What doesn’t work is having market research, user research, data science, and analytics all siloed. If the “quant” people and the “qual” people are fighting for legitimacy in the eyes of leadership, you won’t have good learning. The organization should operate like one brain.
I also hate the phrase “democratization of research.” Democracy is when you distribute decision-making power. Usually, when I hear “democratization of research,” it’s just shitty delegation.
If you can do the work, you can learn the research tasks. People who are designers or technologists or product managers can do research. You don’t need a PhD to do the vast majority of product research or to figure out what your copy should sound like. I do wish everyone had taken a course in statistics, though. Most decision-makers who demand quantitative data don’t actually understand statistics; they just want a number to justify what they already want to do.
External partners like Mule are helpful when an organization has a lot of data but no shared understanding. We operate like a nimble management consultancy. We can go in and settle fights because we can’t be fired for asking the questions everyone else is afraid to ask. When design went in-house, the practice changed because job number one for an internal employee is not getting fired. To do good design, you have to be able to tell the truth, even if it’s “the emperor has no clothes.”
JG: What’s the story behind people’s bias toward quantitative research?
EH: It goes back to wanting an unambiguous answer. But you can’t measure what you don’t understand. You need to understand a phenomenon in the world (qualitative) before you can quantify it (quantitative).
In daily life, everybody knows how to do this. I use the vacation planning example all the time. If you’re planning a trip, you look at photos, you talk to friends who’ve been to Hawaii or Cancun, you compare prices, and you look at the weather. You do this whole mixed-methods research project to make sure you don’t regret your decision or end up on a crappy vacation. Everybody knows how to do that. But you put those same people in a corporate structure and they suddenly want “the one answer” that feels secure.
Many people talk about “hypothesis-driven research,” but you have to observe the world before you can even form a hypothesis. Measuring feels secure and depersonalized, whereas qualitative work is terrifying because you don’t know what you’re going to find. You might find out your whole strategy is wrong. Most dysfunctional research approaches are just people managing their own discomfort.
JG: What can teams do to get beyond discomfort-management and start actually learning?
EH: It all depends on what decision you’re making, what your goals are, and what you already know. The exercise I recommend is getting your team in a room and making a list of what you actually know versus what you assume or hope to be true. You have to separate out the load-bearing assumptions that have no evidentiary basis. A couple of years ago, I was the foreperson of the jury in a murder trial; I used this exercise, and it worked great.
But it can be scary. I’ve worked with clients who claim to know something only to realize later that it was only true for them personally.
Sometimes we’re working on something very complex and nuanced, where we can’t afford to get it wrong—where getting it wrong might mean harming a population or losing a tremendous amount of money. The amount of research should scale with the amount of risk and the number of unknowns.
When you reframe it from “we need to do research” to “we need to learn” then you can get a lot more creative with your study plan. For example, you may not always have to do original research yourself. There’s so much work that’s already been done. You can read existing studies or good journalism. If you have the critical thinking skills to evaluate sources, the world is your research repository.