Product team asks a question. Researcher does a study and hands off a long-form, academic-style report, but it’s hard to read and makes no clear argument for what action to take. The team ignores the report and goes with their gut, and the researcher is frustrated that the team isn’t addressing the top issues from the study.
Later, a colleague suggests that the researcher should change up his reporting style and try to “sell” the results more. The researcher demurs, saying that researchers have a responsibility to adhere to certain standards for reporting data. The cycle repeats.
Maybe you recognize this scenario. Maybe you’ve been the team member with a report you couldn’t use. Or maybe you’ve been the researcher, delivering meticulous reports that never seem to land right at your company. If so, you’re not alone — this scenario crops up over and over again, especially for conscientious researchers who have recently made the transition from academia to industry.
Regardless of where you fit in the scenario, we all benefit when we have common expectations about how data should be communicated. Most non-researchers are comfortable using data casually to support arguments. But for academically trained researchers, that kind of casual use doesn’t come as easily. Researchers are trained to have special respect for data. We can sometimes feel that by interpreting data too heavily, or by arguing for a particular course of action, we’re committing a violation of our professional code of ethics.
Having a code of ethics is essential to the credibility of the profession, but we shouldn’t adhere blindly to a set of standards without thinking about why. Is it possible for a researcher to have impact at his or her company while still respecting the data? To answer this, we need to start with the source of our reporting standards: academia.
Where academic reporting guidelines come from
Academic research has one primary goal: to further the state of knowledge by supporting and refuting theories according to the scientific method. Academic reporting helps us do this by 1) discussing results in a neutral, objective manner, and 2) providing enough detail that someone else can review or replicate the findings. Ideally, scientists shouldn’t have pet theories, and they should be ready to have their results challenged at any time, because that’s what helps us determine which theory stands up best.
So for academic research to be successful, we must strive to avoid any errors or omissions in reporting our studies, because doing otherwise risks introducing false signals that could support the wrong theories. Complex reporting guidelines reduce this type of error by providing painstaking detail for verification and replication.
Corporate research has different goals
Corporations approach research differently. They’re generally not interested in supporting or refuting scientific theories — they just need enough information to effectively guide decisions and generate profit. They may construct models, but only as a basis for prediction and strategizing (as opposed to advancing the state of knowledge).
The goals in industry are not those of academia, and this means that corporate research needs different things to be successful. Yes, research should be trustworthy and accurate, but in industry this needs to be balanced with timeliness and actionability. An accurate result is worthless if no one knows how to use it, or if no one bothers to read it because they’re struggling to interpret your F-values.
“But I was taught to always report results in this format.”
As discussed earlier, detailed reporting serves a specific function in academia, which is to permit review and replication (thus cutting down on the long-term risk of error). But this need is reduced in industry because research tends to be shorter-lived and the practice of exact replication is uncommon.
In industry, minor error can often be better tolerated than in academia. This is partly due to the speed of innovation, where taking the time to perfect a report to academic standards can mean the difference between releasing before or after a major competitor. There can also be opportunities to correct error over the course of a release cycle. In many industry environments, decisions made based on research undergo additional testing and improvement, whether via RITE methodology, A/B testing, or the ultimate testbed of the market. (This is the whole idea behind the “fail fast” approach championed by many Lean UX advocates.) The exhaustive, detailed reporting we do in academia can harm this process by slowing delivery of results and obscuring actionable insights.
I would argue that the unique “covenant” for researchers is conducting solid, trustworthy research, not so much reporting it in a specific way. Yes, we must represent results honestly, but at this point the researcher is no different from a practitioner of any other discipline. There is nothing inherently wrong with a persuasive researcher, provided he or she does not misrepresent the data and readily provides access to it when needed, so that others can draw their own conclusions.
You can balance persuasion and transparency
The basics of persuasion and business communication are the same for researchers and non-researchers, so I won’t get into those here. But it is always important to allow research findings to be scrutinized and challenged. When communicating with other researchers, a footnote with p-values, effect sizes, and the like can often invite that scrutiny without distracting from the main argument. You can also provide access to raw data (a standard practice from academia that translates well to industry; incidentally, it’s also part of the UXPA code of conduct).
When communicating with non-researchers (who are usually your primary audience), you’re obliged to provide adequate guidance for interpreting findings, in ways the audience can understand. Generalizations, video clips, verbatims, and examples should be representative of the sample. If a result has low certainty, this needs to be called out. Established facts need to be distinguished from opinions and educated guesses. Recommendations should be prioritized with a good understanding of the team’s goals, and the presentation itself should be structured with a good understanding of the executive audience. This sort of guidance is much more helpful and usable for product teams than an arcane methods section.
The other variable to consider is the shelf life of the research. The more likely it is that people will return to your results over time, the less ability you have to foresee who will be looking at them and how they’ll want to use them. This means it’s safer to include more of the scientific metadata (i.e., method and analysis details) that others can use to evaluate the findings and conclusions themselves. But this should still be in a form that can be interpreted by an audience with minimal effort.
The truth is that research findings must be interpreted by someone in order to be useful. Even if you succeed in delivering a completely neutral, bias-free report to a PM, designer, or VP, the recipient will then use those results to make an argument of his or her own (likely with a weaker grasp of the study than yours). The only alternative is for your results to be ignored.
Our guidelines for reporting results in academic journals serve a valuable function in that environment, but are not the only standard for ethical reporting of data. Like any member of a product team, we as researchers need to know what we’re doing, report it honestly, and persuade others of the right action to take. There doesn’t have to be a conflict between persuasive communication and transparent reporting of data.
Thoughts? Disagreement? Feel free to send me an email.