A Beginner’s Guide to Reading Scientific Research

Scientific journal articles can be incredibly intimidating to read, even for other scientists. Heck, I have a Ph.D. in a research science and have authored scientific papers, but sometimes I look at a research report outside my field of study and just go, “Nope, can’t decipher this.”

Learning to read them is an important skill, however, in today’s environment of what I call “research sensationalism.” This is where the popular media gets hold of a scientific research report and blows the findings WAY out of proportion, usually while misrepresenting what the researchers actually did and/or found. You know what I’m talking about.

Unfortunately, you can’t trust popular media reports about scientific research studies. Too often, it’s shockingly evident that the people writing these reports (a) aren’t trained to evaluate scientific research, and (b) are just parroting whatever newswire release they got that morning with no apparent fact-checking.

Thus, if staying informed is important to you—or you just want to be able to shut down all the fearmongers in your life—you need to learn how to read the original journal articles and form your own judgments. You don’t have to become an expert in every scientific field, nor a statistician, to do so. With a little know-how, you can at least decide if the popular media reports seem accurate and if any given study is worth your time and energy.

Where to Begin

First things first, locate the paper. If it’s behind a paywall, try searching Google Scholar to see if you can find it somewhere else. Sometimes authors upload PDFs to their personal webpages, for example.

Ten years ago, I would have told you to check the journal’s reputation next. Now, with so many journals with different publishing standards popping up all the time, it’s hard to keep up. More and more researchers are choosing to publish in newer open-access journals for various reasons.

Ideally, though, you want to see that the paper was peer reviewed. This means that it at least passed the hurdle of other academics agreeing that it was worth publishing. This is not a guarantee of quality, however, as any academic can tell you. If a paper isn’t peer reviewed, that’s not an automatic dismissal, but it’s worth noting.

Next, decide what type of paper you’re dealing with:

Theoretical papers

  • Authors synthesize what is “known” and offer their own interpretations and suggestions for future directions.
  • Rarely the ones getting popular press.
  • Great if you want to know the new frontiers and topics of debate in a given field.

Original research, aka empirical research

  • Report the findings of one or more studies in which the researchers gather data, analyze it, and present their findings.
  • Encompasses a wide variety of methods, including ethnographic and historical data, observational research, and laboratory-based studies.

Meta-analyses & systematic reviews

  • Attempt to pool or summarize the findings of a group of studies on the same topic to understand the big picture.
  • Combining smaller studies increases the number of people studied and the statistical power. It can also “wash out” minor problems in individual studies.
  • Only as good as the studies going into them. If there are too few studies, or existing studies are of poor quality, pooling them does little. Usually these types of reports include a section describing the quality of the data.
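The power argument above comes down to simple arithmetic: the standard error of an estimate shrinks with the square root of the sample size, so pooling several small studies yields a more precise estimate than any one of them alone. A minimal sketch in Python (all numbers hypothetical, chosen only for illustration):

```python
import math

# Toy illustration: the standard error (SE) of a mean is sd / sqrt(n),
# so pooling five studies of 40 people each (n = 200 total) gives a
# tighter estimate than a single study of 40.
sd = 10.0                           # assumed standard deviation of the outcome
se_single = sd / math.sqrt(40)      # one study, n = 40
se_pooled = sd / math.sqrt(5 * 40)  # five pooled studies, n = 200

print(f"SE, single study: {se_single:.2f}")  # ~1.58
print(f"SE, pooled:       {se_pooled:.2f}")  # ~0.71
```

Of course, this only works if the pooled studies are measuring the same thing in comparable ways, which is exactly why the quality section of a meta-analysis matters.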

Since popular media articles usually focus on empirical research papers, that’s what I’ll focus on today. Meta-analyses and reviews tend to be structured in the same way, so this applies to them as well.

Evaluating Empirical Research

Scientists understand that even the best-designed studies will have issues. It’s easy to pick apart and criticize any study, but issues alone don’t make a study worthless. As a smart reader, part of your job is to learn to recognize the flaws in a study, not to tear it down necessarily, but to put the findings in context.

For example, there is always a trade-off between real-world validity and experimental control. When a study is conducted in a laboratory—whether on humans, mice, or individual cells—the researchers try to control (hold constant) as many variables as possible except the ones in which they are interested. The more they control the environment, the more confident they can be in their findings… and the more artificial the conditions.

That’s not a bad thing. Well-controlled experiments with random assignment, known as randomized controlled trials, are the best method we have of establishing causality. Ideally, though, they’d be interpreted alongside other studies, such as observational studies that detect the same phenomenon out in the world and other experiments that replicate the findings.

NO STUDY IS EVER MEANT TO STAND ON ITS OWN. If you take nothing else from this post, remember that. There is no perfect study. No matter how compelling the results, a single study can never be “conclusive,” nor should it be used to guide policy or even your behavioral choices. Studies are meant to build on one another and to contribute to a larger body of knowledge that as a whole leads us to better understand a phenomenon.

Reading a Scientific Journal Article

Most journal articles follow the same format: Abstract, Introduction, Methods, Results, Discussion/Conclusions. Let’s go through what you should get out of each section, even if you’re not a trained research scientist.

The Abstract succinctly describes the purpose, methods, and main findings of the paper. Sometimes you’ll see advice to skip the abstract. I disagree. The abstract can give you a basic idea of whether the paper is interesting to you and if it is likely to be (in)comprehensible.

DO NOT take the abstract at face value though. Too often the abstract oversimplifies or even blatantly misrepresents the findings. The biggest mistake you can make is reading only the abstract. It is better to skip it altogether than to read it alone.

The Introduction describes the current research question, i.e., the purpose of the study. The authors review past literature and set up why their study is interesting and needed. It’s okay to skim the intro.

While reading the introduction:

  • Make a note of important terms and definitions.
  • Try to summarize in your own words what general question the authors are trying to address. If you can, also identify the specific hypothesis they are testing. For example, the question might be how embarrassment affects people’s behavior in social interactions, and the specific hypothesis might be that people are more likely to insult people online when they feel embarrassed.
  • You might choose to look up other studies cited in the introduction.

The Methods should describe exactly what the researchers did in enough detail that another researcher could replicate it. Methods can be dense, but I think this is the most important section in terms of figuring out how much stock you should be putting in the findings.

While reading the methods, figure out:

  • Who/what were the subjects in this study? Animals, humans, cells?
  • If this is a human study, how were people selected to participate? What are their demographics? How well does the sample represent the general population or the population of interest?
  • What type of study is this?
    • Observational: observing their subjects, usually in the natural environment
    • Questionnaire/survey: asking the subject questions such as opinion surveys, behavioral recall (e.g., how well they slept, what they ate), and standardized questionnaires (e.g., personality tests)
    • Experimental: researchers manipulate one or more variables and measure the effects
  • If this is an experiment, is there a control condition—a no-treatment condition used as a baseline for comparison?
  • How were the variables operationalized and measured? For example, if the study is designed to compare low-carb and high-carb diets, how did the researchers define “low” and “high”? How did they figure out what people were eating?

Some red flags that should give you pause about the reliability of the findings are:

  • Small or unrepresentative sample (although “small” can be relative).
  • Lack of a control condition in experimental designs.
  • Variables operationalized in a way that doesn’t make sense, for example “low-carb” diets that include 150+ grams of carbs per day.
  • Variables measured questionably, as with the Food Frequency Questionnaire.

The Results present the statistical analyses. This is unsurprisingly the most intimidating section for a lot of people. You don’t need to understand statistics to get a sense of the data, however.

While reading the results:

  • Start by looking at any tables and figures. Try to form your own impression of the findings.
  • If you aren’t familiar with statistical tests, do your best to read what the authors say about the data, paying attention to which effects they are highlighting. Refer back to the tables and figures and see if what they’re saying jibes with what you see.
  • Pay attention to the real magnitude of any differences. Just because two groups are statistically different or something changes after an intervention doesn’t make it important. See if you can figure out in concrete terms how much the groups differed, for example. If data are only reported in percentages or relative risk, be wary of drawing firm conclusions.
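To see why relative numbers can mislead, here’s a toy calculation (hypothetical numbers, not from any particular study). A headline-grabbing “cuts risk in half!” can correspond to a tiny absolute change:

```python
# Hypothetical treatment that "cuts risk in half"
baseline_risk = 0.002  # 2 in 1,000 untreated people develop the condition
treated_risk = 0.001   # 1 in 1,000 treated people develop it

relative_risk_reduction = (baseline_risk - treated_risk) / baseline_risk
absolute_risk_reduction = baseline_risk - treated_risk

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")   # 50%
print(f"Absolute risk reduction: {absolute_risk_reduction:.3%}")   # 0.100%
print(f"People treated per case avoided: {1 / absolute_risk_reduction:.0f}")  # 1000
```

Same data, very different impressions. That’s why a results section that reports only relative risk deserves extra scrutiny.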

It can take a fair amount of effort to decipher a results section. Sometimes you have to download supplementary data files to get the raw numbers you’re looking for.

The Discussion or Conclusions section summarizes what the study was about. The authors offer their interpretation of the data, going into detail about what they think the results actually mean. They should also discuss the limitations of the study.

While reading the discussion:

  • Use your own judgment to decide if you think the authors are accurately characterizing their findings. Do you agree with their interpretation? Are they forthcoming about the limitations of their study?

Red flags:

  • Concrete statements like “proved.” Hypotheses can be supported, not proven.
  • Talking in causal terms when the data are correlational! As I said above, well-controlled experimental designs are the only types of research that can possibly speak to causal effects. Questionnaire, survey, and historical data can tell you that variables are potentially related, but they say nothing about what causes what. Anytime authors use words like “caused,” “led to,” or “_[X]_ increased/decreased _[Y]_” about variables they didn’t manipulate in their study, they are either being sloppy or intentionally misleading.
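To make the correlation-versus-causation point concrete, here’s a toy simulation (all variables hypothetical): two outcomes driven by a shared third factor end up strongly correlated even though neither causes the other.

```python
import random

random.seed(42)

# Classic confounding example: ice cream sales and sunburns are both
# driven by temperature, so they correlate despite no causal link
# between them.
n = 1000
temp = [random.gauss(20, 5) for _ in range(n)]
ice_cream = [t * 2.0 + random.gauss(0, 3) for t in temp]   # driven by temp
sunburns = [t * 0.5 + random.gauss(0, 2) for t in temp]    # also driven by temp

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"Ice cream vs. sunburns: r = {corr(ice_cream, sunburns):.2f}")  # strongly positive
```

A correlational study of these two variables would find a robust association, but “ice cream causes sunburns” would obviously be the wrong conclusion, which is exactly the trap causal language sets.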

What about Bias?

Bias is tricky. Even the best-intentioned scientists can fall victim to bias at all stages of the research process. You certainly want to know who funded the study and whether the researchers have any conflicts of interest. That doesn’t mean you should flatly dismiss every study that could potentially be biased, but it’s important to note and keep in mind. Journal papers should list conflicts of interest.

Solicit Other Opinions

Once you feel like you have your own opinion about the research, see what other knowledgeable people you trust have to say. I have a handful of people I trust for opinions—Mark, of course, Chris Kresser, and Robb Wolf being a few. Besides fact-checking yourself, this is a good way to learn more about what to look for when reading original research.

To be clear, I don’t think it’s important that you read every single study the popular media grabs hold of. It’s often okay just to go to your trusted experts and see what they say. However, if a report has you really concerned, or your interest is particularly piqued, this is a good skill to have.

Remember my admonition: No study is meant to stand alone. That means don’t put too much stock in any one research paper. It also means don’t dismiss a study because it’s imperfect, narrow in scope, or you can otherwise find flaws. This is how science moves forward—slowly, one (imperfect) study at a time.

That’s it for today. Share your questions and observations below, and thanks for reading.

About the Author

Lindsay Taylor, Ph.D., is a senior writer and community manager for Primal Nutrition, a certified Primal Health Coach, and the co-author of three keto cookbooks.

As a writer for Mark’s Daily Apple and the leader of the thriving Keto Reset and Primal Endurance communities, Lindsay’s job is to help people learn the whats, whys, and hows of leading a health-focused life. Before joining the Primal team, she earned her master’s and Ph.D. in Social and Personality Psychology from the University of California, Berkeley, where she also worked as a researcher and instructor.

Lindsay lives in Northern California with her husband and two sports-obsessed sons. In her free time, she enjoys ultra running, triathlon, camping, and game nights. Follow along on Instagram @theusefuldish as Lindsay attempts to juggle work, family, and endurance training, all while maintaining a healthy balance and, most of all, having fun in life. For more info, visit lindsaytaylor.co.


13 thoughts on “A Beginner’s Guide to Reading Scientific Research”


  1. Thank you for writing this. It’s great. That relative risk vs absolute risk thing really trips people up and I think it’s the most abused part of statistics. Except maybe for sample size.

    I am extremely lucky that I found a frontier in science very early on. It forced me to look at and evaluate science articles at an early age. Everyone says we’re living in a time when too much information is coming at us from all directions. But that’s actually a blessing. Every time you feel like it’s overwhelming, the ability to evaluate too much info at once improves. Trust yourself.

    Reading studies is like building muscle. There will always be someone with bigger muscles and a better training plan, but that doesn’t mean you shouldn’t train. PubMed Central® (PMC) is entirely paywall-free, and so is Academia.edu. Academia also publishes master’s theses and the like, which are not peer reviewed but often contain background information that makes other studies more readable.

  2. Just as dangerous as popular media “research sensationalism” is when the researchers themselves are pushing an unsupported (possibly sensationalistic) point of view, masquerading as good science.

  3. I’m surprised that I read all the way through your piece without finding anything about the Scientific Method. It’s powerful, but rarely used, and studies that do not use it cannot be considered “science”. For example, statistical studies are not necessarily scientific. Statistics are simply a tool.

    The Method demands that you try to disprove your hypothesis, not gather evidence to prove it. It is possible to disprove a hypothesis, but you cannot prove it. The best you can do is “fail to reject” a hypothesis, which is a huge leap forward in itself.

    You must embrace your critics, because they keep you honest. Science is never settled and the best scientists are their own harshest critics. After all, the goal is not to be fooled, and the researcher is the easiest person to fool.

    1. Yes. The concept of the null hypothesis is tricky to grasp, even for a lot of researchers as you know. I am writing a follow-up to this post where I touch on this, though. Thanks for your comment!

  4. This is an excellent article, thank you!

    I’ve been editing research articles for a major US nursing journal for nearly 20 years, and I’d add two more points to consider:

    1 – Be aware of “predatory” publishers. The journals they publish exercise NO standards whatsoever and charge authors a fee. If an author is willing to pay, they’ll publish anything. PubMed is pretty good at weeding these journals out, but some get through. They often have fancy-sounding names, but there are giveaways, including poor grammar and spelling. Most reputable journals at least copyedit.

    2 – If possible, find out if the journal fact-checks and edits its research articles. Many, if not most, do not! The journal I work for puts articles through BOTH peer review and rigorous editing. Peer review is not editing, and peer reviewers don’t look at articles through a fine lens. I catch significant errors in the way data are reported and interpreted in 80% to 90% of the articles I edit. The authors are always grateful. It’s scary to see how many articles make it into the literature without anyone really fact-checking anything.

    1. Yes, and it’s so hard for the average person to tell the difference, unfortunately!

  5. Also statistically significant vs. clinically significant.

    Things can be very statistically significant without being clinically significant. Clinically significant means significant to the individual. A four-day extended life span at major expense, or the possibility of life-changing “side effects”[1] or major lifestyle changes, is not worth it.

    [1] Drugs simply have effects, some of which we intend and some we don’t; labeling the unintended ones “side effects” is a purely subjective call.

    1. Yes, another important point that I’ll be talking about in the follow-up to this post 😉