Product team asks a question. Researcher does a study and hands off a long-form academic-style report, but it’s hard to read and there are no clear arguments for what action to take. The team ignores the report and goes with their gut, and the researcher is frustrated that the team isn’t addressing the top issues from the study.
Later, a colleague suggests that the researcher should change up his reporting style and try to “sell” the results more. The researcher demurs, saying that researchers have a responsibility to adhere to certain standards for reporting data. The cycle repeats.
Maybe you recognize this scenario. Maybe you’ve been the team member with a report you couldn’t use. Or maybe you’ve been the researcher, delivering meticulous reports that never seem to land right at your company. If so, you’re not alone — this scenario crops up over and over again, especially for conscientious researchers who have recently made the transition from academia to industry.
Regardless of where you fit in the scenario, we all benefit when we have common expectations about how data should be communicated. Most non-researchers are comfortable using data casually to support arguments. But for academically trained researchers, that casual use can be harder. Researchers are trained to treat data with special respect. We can sometimes feel that by interpreting data too heavily, or by arguing for a particular course of action, we're violating our professional code of ethics.
Having a code of ethics is essential to the credibility of the profession, but we shouldn't adhere blindly to a set of standards without asking why they exist. Is it possible for a researcher to have impact at his or her company while still respecting the data? To answer this, we need to start with the source of our reporting standards: academia.