Reforming Bad Research Habits: My Resolutions for 2020

twigandfish · Published in twig+fish · 4 min read · Jan 14, 2020

Photo by Ross Findon on Unsplash

It’s a new year and a new decade. How about if we set some UX/human-centered research resolutions?

There is a cycle of bad research habits and sloppiness that deflates our practice and creates perennial problems whose origins are hard to identify.

For example, nearly all researchers have resource problems (no time, no money, no people power). While it’s easy to blame our organizations for a lack of good planning, we also need to take some of the responsibility ourselves. If all researchers are experiencing these issues, perhaps there is something we are not doing well enough to mitigate them.

So what are some of the bad practices I have observed that lead to this deflation? They are often practical things: habits that come easily to the researcher but remain obscure to non-researchers.

Objectives are insights, and insights are objectives.

When I collaborate with Meena Kothandaraman of twig+fish research practice, we always begin our projects with an alignment exercise using the NCredible Framework, where we learn about the questions teams have about the people they serve and the offerings they produce. In these work sessions, we often see overlap between how people talk about their objectives and their projected insights. For instance, “our customers need to feel trust and confidence” is a common refrain that can appear both in an objective and as the outcome of a research program (the insight). The harm? Confusion, boredom (don’t we all need trust and confidence?), and a lack of descriptive direction to orient research implications (what we do now that we know this).

Researchers need to be more attentive to the language we and our organizations use in our objectives and insights. Clearly described objectives guide our work (methods, recruiting strategy, analysis approaches) and give us purpose. Clearly identified insights inspire and inform our collaborating teams on how to consider the human-centered perspective in their work.

Building the thing, because we are asked to build it.

“We need to use this research to develop some personas,” is a common request/demand. While the request may not be so misplaced (maybe they do need a tool like personas), leading with it obscures other kinds of research outcomes that might serve the team better. Perhaps the team actually needs opportunity spaces, thematic insights, or even just a moment away from their workstations and in the field.

Researchers may not be asking our teams to think critically about what they need to learn and how those learnings will be put to use. Personas, for example, are a tangible outcome that most non-researchers generally understand and can imagine using. However, personas require a certain amount of rich, qualitative data, alignment on meaningful categories to include, and effective socialization (including knowing when to stop using them). This post-study work may not be on the radar of our non-research team members, but once they learn about these efforts, they might realize that personas are not the most useful outcome after all.

Recruiting confusion.

I am often presented with a list of recruiting criteria riddled with assumptions about demographics and psychographics. Upon completion of the data collection, the client then wants to know about each of the distinct recruited slices of that population, and how they compare and contrast. For instance, we might recruit a 50/50 split of “users” and “non-users” to learn about product-agnostic behaviors (let’s say our product is a scheduling app, but we want to learn about scheduling generally). The client wants to know “how users and non-users compare.” Sure, comparison could be an interesting facet to examine, but we already know a key factor that makes their behaviors different: one group is using the app, and the other is not. Reporting on that is less interesting than looking for common themes across scheduling behaviors generally, and the influence of tool adoption on those behaviors.

Researchers may not be doing enough to help our teams and clients understand the various ways we can and should identify, select, and achieve variety in our samples. The result is often an assumption that recruiting criteria translate directly into reporting buckets, and that assumption does not always produce serviceable insights.

With these few observations in mind, I have set a few “resolutions” for my research practice.

  1. Be intentional and scrutinizing about language. I am going to help my clients clarify their objectives and insights, and teach them to identify and avoid superfluous, subjective language that does not clearly describe what that knowledge is for.
  2. Simplify and communicate foundational knowledge. I am going to think about the nuances and principles of study design that my clients may not understand deeply (such as recruiting) and build time into my engagements to coach them.
  3. Focus more on the organization’s view of research. I am going to really listen for signals of an organization’s perspective on research benefits, and identify how my work fits in. (For instance, is research the team that helps others get their work done, or is it a strategic function?)

It’s on us to eliminate or mitigate our own bad habits. What are you going to work on this year (or decade)?

Best of luck in 2020!

twig+fish

a human-centered research consultancy that empowers teams to practice empathy