
What is Research Methodology? Definition, Types, and Examples


Research methodology 1,2 is a structured and scientific approach used to collect, analyze, and interpret quantitative or qualitative data to answer research questions or test hypotheses. A research methodology is like a plan for carrying out research and helps keep researchers on track by limiting the scope of the research. Several aspects must be considered before selecting an appropriate research methodology, such as research limitations and ethical concerns that may affect your research.

The research methodology section in a scientific paper describes the different methodological choices made, such as the data collection and analysis methods, and why these choices were selected. The reasons should explain why the methods chosen are the most appropriate to answer the research question. A good research methodology also helps ensure the reliability and validity of the research findings. There are three types of research methodology—quantitative, qualitative, and mixed-method, which can be chosen based on the research objectives.

What is research methodology?

A research methodology describes the techniques and procedures used to identify and analyze information regarding a specific research topic. It is a process by which researchers design their study so that they can achieve their objectives using the selected research instruments. It includes all the important aspects of research, including research design, data collection methods, data analysis methods, and the overall framework within which the research is conducted. While these points can help you understand what is research methodology, you also need to know why it is important to pick the right methodology.


Having a good research methodology in place has the following advantages: 3

  • Helps other researchers who may want to replicate your research; the explanations will be of benefit to them.
  • You can easily answer any questions about your research if they arise at a later stage.
  • A research methodology provides a framework and guidelines for researchers to clearly define research questions, hypotheses, and objectives.
  • It helps researchers identify the most appropriate research design, sampling technique, and data collection and analysis methods.
  • A sound research methodology helps researchers ensure that their findings are valid and reliable and free from biases and errors.
  • It also helps ensure that ethical guidelines are followed while conducting research.
  • A good research methodology helps researchers in planning their research efficiently, by ensuring optimum usage of their time and resources.


Types of research methodology

There are three types of research methodology based on the type of research and the data required. 1

  • Quantitative research methodology focuses on measuring and testing numerical data. This approach is good for reaching a large number of people in a short amount of time. This type of research helps in testing the causal relationships between variables, making predictions, and generalizing results to wider populations.
  • Qualitative research methodology examines the opinions, behaviors, and experiences of people. It collects and analyzes words and textual data. This research methodology requires fewer participants but can be more time consuming, because considerable time is spent with each participant. This method is used in exploratory research where the research problem being investigated is not clearly defined.
  • Mixed-method research methodology uses the characteristics of both quantitative and qualitative research methodologies in the same study. This method allows researchers to validate their findings, verify if the results observed using both methods are complementary, and explain any unexpected results obtained from one method by using the other method.

What are the types of sampling designs in research methodology?

Sampling 4 is an important part of a research methodology and involves selecting a representative sample of the population to conduct the study, making statistical inferences about them, and estimating the characteristics of the whole population based on these inferences. There are two types of sampling designs in research methodology—probability and nonprobability.

  • Probability sampling

In this type of sampling design, a sample is chosen from a larger population using some form of random selection, that is, every member of the population has a known, nonzero chance of being selected. The different types of probability sampling are:

  • Systematic —sample members are chosen at regular intervals. It requires selecting a random starting point and a fixed sampling interval, after which every kth member of the population is selected. Because the selection pattern is predefined, it is one of the least time-consuming methods.
  • Stratified —researchers divide the population into smaller groups that don’t overlap but represent the entire population. While sampling, these groups can be organized, and then a sample can be drawn from each group separately.
  • Cluster —the population is divided into clusters based on parameters like age, sex, or location; entire clusters are then randomly selected, and members of the chosen clusters form the sample.
  • Nonprobability sampling

In this type of sampling design, participants are selected by nonrandom criteria, so not every member of the population has a chance of being included. The different types of nonprobability sampling are:

  • Convenience —selects participants who are most easily accessible to researchers due to geographical proximity, availability at a particular time, etc.
  • Purposive —participants are selected at the researcher’s discretion. Researchers consider the purpose of the study and the understanding of the target audience.
  • Snowball —already selected participants use their social networks to refer the researcher to other potential participants.
  • Quota —while designing the study, the researchers decide how many people with which characteristics to include as participants. The characteristics help in choosing people most likely to provide insights into the subject.
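As an illustration of the mechanics, systematic sampling can be sketched in a few lines of Python. This is a hypothetical sketch, not a standard library routine: the function name, the sampling frame, and the sample size are all assumptions made for the example.

```python
import random

def systematic_sample(population, sample_size):
    """Pick a random starting point, then select every k-th member,
    where k is the sampling interval (population size // sample size)."""
    k = len(population) // sample_size   # sampling interval
    start = random.randrange(k)          # random start within the first interval
    return [population[i] for i in range(start, len(population), k)][:sample_size]

# A hypothetical sampling frame of 100 members
frame = list(range(1, 101))
sample = systematic_sample(frame, 10)
print(len(sample))  # 10 members, spaced exactly 10 apart
```

Because the interval is fixed in advance, every draw after the starting point is mechanical, which is why the method is considered quick to execute.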

What are data collection methods?

During research, data are collected using various methods depending on the research methodology being followed and the research methods being undertaken. Both qualitative and quantitative research have different data collection methods, as listed below.

Qualitative research 5

  • One-on-one interviews: Helps the interviewers understand a respondent’s subjective opinion and experience pertaining to a specific topic or event
  • Document study/literature review/record keeping: Researchers’ review of already existing written materials such as archives, annual reports, research articles, guidelines, policy documents, etc.
  • Focus groups: Constructive discussions that usually include a small sample of about 6-10 people and a moderator, to understand the participants’ opinion on a given topic.
  • Qualitative observation : Researchers collect data using their five senses (sight, smell, touch, taste, and hearing).

Quantitative research 6

  • Sampling: The most common type is probability sampling.
  • Interviews: Commonly telephonic or done in-person.
  • Observations: Structured observations are most commonly used in quantitative research. In this method, researchers make observations about specific behaviors of individuals in a structured setting.
  • Document review: Reviewing existing research or documents to collect evidence for supporting the research.
  • Surveys and questionnaires: Surveys can be administered both online and offline depending on the requirement and sample size.


What are data analysis methods?

The data collected using the various methods for qualitative and quantitative research need to be analyzed to generate meaningful conclusions. These data analysis methods 7 also differ between quantitative and qualitative research.

Quantitative research involves a deductive method for data analysis where hypotheses are developed at the beginning of the research and precise measurement is required. The methods include statistical analysis applications to analyze numerical data and are grouped into two categories—descriptive and inferential.

Descriptive analysis is used to describe the basic features of different types of data to present it in a way that ensures the patterns become meaningful. The different types of descriptive analysis methods are:

  • Measures of frequency (count, percent, frequency)
  • Measures of central tendency (mean, median, mode)
  • Measures of dispersion or variation (range, variance, standard deviation)
  • Measure of position (percentile ranks, quartile ranks)
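All four groups of descriptive measures above can be computed with Python's standard `statistics` module. The data set below is invented purely for illustration.

```python
import statistics

data = [4, 8, 6, 5, 3, 8, 9, 4, 8, 7]  # hypothetical sample data

# Measures of frequency: how often each value occurs
freq = {x: data.count(x) for x in sorted(set(data))}

# Measures of central tendency
mean = statistics.mean(data)      # 6.2
median = statistics.median(data)  # 6.5
mode = statistics.mode(data)      # 8 (the most frequent value)

# Measures of dispersion
value_range = max(data) - min(data)  # 6
st_dev = statistics.stdev(data)      # sample standard deviation

# Measure of position: quartile cut points (25th, 50th, 75th percentiles)
quartiles = statistics.quantiles(data, n=4)

print(mean, median, mode, value_range)
```

Descriptive output like this is usually the first step of quantitative analysis, before any inferential tests are run.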

Inferential analysis is used to make predictions about a larger population based on the analysis of data collected from a representative sample of that population. This analysis is used to study the relationships between different variables. Some commonly used inferential data analysis methods are:

  • Correlation: To understand the relationship between two or more variables.
  • Cross-tabulation: Analyze the relationship between multiple variables.
  • Regression analysis: Study the impact of independent variables on the dependent variable.
  • Frequency tables: To understand the frequency of data.
  • Analysis of variance: To test whether the means of two or more groups differ significantly in an experiment.

Qualitative research involves an inductive method for data analysis where hypotheses are developed after data collection. The methods include:

  • Content analysis: For analyzing documented information from text and images by determining the presence of certain words or concepts in texts.
  • Narrative analysis: For analyzing content obtained from sources such as interviews, field observations, and surveys. The stories and opinions shared by people are used to answer research questions.
  • Discourse analysis: For analyzing interactions with people considering the social context, that is, the lifestyle and environment, under which the interaction occurs.
  • Grounded theory: Involves hypothesis creation by data collection and analysis to explain why a phenomenon occurred.
  • Thematic analysis: To identify important themes or patterns in data and use these to address an issue.
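As a rough sketch of the counting step in content analysis, the presence of predefined concepts in text can be tallied programmatically. The transcript excerpts and coding frame below are invented for illustration; real content analysis involves much more careful coding and interpretation.

```python
import re
from collections import Counter

# Hypothetical interview excerpts (illustrative data only)
responses = [
    "I feel anxious when deadlines approach, but support from peers helps.",
    "Peer support made the workload feel manageable despite the anxiety.",
]

# Concepts of interest (an assumed coding frame for this example)
concepts = ["anxious", "anxiety", "support", "peer", "peers"]

# Tokenize all responses and count word occurrences
tokens = Counter(
    word
    for text in responses
    for word in re.findall(r"[a-z']+", text.lower())
)

# Presence count of each concept across the corpus
presence = {c: tokens[c] for c in concepts}
print(presence)
```

In practice, counts like these are only the starting point; the researcher still interprets what the recurring concepts mean in context.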

How to choose a research methodology?

Here are some important factors to consider when choosing a research methodology: 8

  • Research objectives, aims, and questions —these would help structure the research design.
  • Review existing literature to identify any gaps in knowledge.
  • Check the statistical requirements —if data-driven or statistical results are needed then quantitative research is the best. If the research questions can be answered based on people’s opinions and perceptions, then qualitative research is most suitable.
  • Sample size —sample size can often determine the feasibility of a research methodology. For a large sample, less effort- and time-intensive methods are appropriate.
  • Constraints —constraints of time, geography, and resources can help define the appropriate methodology.
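When statistical precision drives the sample-size decision, one widely used approach is Cochran's formula for large populations; a minimal sketch, with conventional default values assumed for the example:

```python
import math

def cochran_sample_size(z, p, e):
    """Cochran's formula for the minimum sample size of a large population:
    n = z^2 * p * (1 - p) / e^2
    z: z-score for the desired confidence level
    p: estimated proportion with the attribute (0.5 = maximum variability)
    e: desired margin of error"""
    return math.ceil(z**2 * p * (1 - p) / e**2)

# 95% confidence (z = 1.96), maximum variability (p = 0.5), 5% margin of error
n = cochran_sample_size(1.96, 0.5, 0.05)
print(n)  # 385
```

Tightening the margin of error to 3% with the same inputs raises the requirement to over a thousand participants, which is often what makes a methodology infeasible for small teams.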


How to write a research methodology?

A research methodology should include the following components: 3,9

  • Research design —should be selected based on the research question and the data required. Common research designs include experimental, quasi-experimental, correlational, descriptive, and exploratory.
  • Research method —this can be quantitative, qualitative, or mixed-method.
  • Reason for selecting a specific methodology —explain why this methodology is the most suitable to answer your research problem.
  • Research instruments —explain the research instruments you plan to use, mainly referring to the data collection methods such as interviews, surveys, etc. Here as well, a reason should be mentioned for selecting the particular instrument.
  • Sampling —this involves selecting a representative subset of the population being studied.
  • Data collection —involves gathering data using several data collection methods, such as surveys, interviews, etc.
  • Data analysis —describe the data analysis methods you will use once you’ve collected the data.
  • Research limitations —mention any limitations you foresee while conducting your research.
  • Validity and reliability —validity helps identify the accuracy and truthfulness of the findings; reliability refers to the consistency and stability of the results over time and across different conditions.
  • Ethical considerations —research should be conducted ethically. The considerations include obtaining consent from participants, maintaining confidentiality, and addressing conflicts of interest.

Streamline Your Research Paper Writing Process with Paperpal  

The methods section is a critical part of a research paper; it allows other researchers to understand your findings and replicate your work in their own research. However, it is usually also the most difficult section to write. This is where Paperpal can help you overcome writer’s block and create the first draft in minutes with Paperpal Copilot, its secure generative AI feature suite.  

With Paperpal you can get research advice, write and refine your work, rephrase and verify the writing, and ensure submission readiness, all in one place. Here’s how you can use Paperpal to develop the first draft of your methods section.  

  • Generate an outline: Input some details about your research to instantly generate an outline for your methods section 
  • Develop the section: Use the outline and suggested sentence templates to expand your ideas and develop the first draft.  
  • Paraphrase and trim: Get clear, concise academic text with paraphrasing that conveys your work effectively and word reduction to fix redundancies. 
  • Choose the right words: Enhance text by choosing contextual synonyms based on how the words have been used in previously published work.  
  • Check and verify text: Make sure the generated text showcases your methods correctly, has all the right citations, and is original and authentic. 

You can repeat this process to develop each section of your research manuscript, including the title, abstract and keywords. Ready to write your research papers faster, better, and without the stress? Sign up for Paperpal and start writing today!

Frequently Asked Questions

Q1. What are the key components of research methodology?

A1. A good research methodology has the following key components:

  • Research design
  • Data collection procedures
  • Data analysis methods
  • Ethical considerations

Q2. Why is ethical consideration important in research methodology?

A2. Ethical consideration is important in research methodology to assure readers of the reliability and validity of the study. Researchers must clearly mention the ethical norms and standards followed during the conduct of the research and also mention if the research has been cleared by any institutional board. The following 10 points are the important principles related to ethical considerations: 10

  • Participants should not be subjected to harm.
  • Respect for the dignity of participants should be prioritized.
  • Full consent should be obtained from participants before the study.
  • Participants’ privacy should be ensured.
  • Confidentiality of the research data should be ensured.
  • Anonymity of individuals and organizations participating in the research should be maintained.
  • The aims and objectives of the research should not be exaggerated.
  • Affiliations, sources of funding, and any possible conflicts of interest should be declared.
  • Communication in relation to the research should be honest and transparent.
  • Misleading information and biased representation of primary data findings should be avoided.


Q3. What is the difference between methodology and method?

A3. Research methodology is different from a research method, although both terms are often confused. Research methods are the tools used to gather data, while the research methodology provides a framework for how research is planned, conducted, and analyzed. The latter guides researchers in making decisions about the most appropriate methods for their research. Research methods refer to the specific techniques, procedures, and tools used by researchers to collect, analyze, and interpret data, for instance surveys, questionnaires, interviews, etc.

Research methodology is, thus, an integral part of a research study. It helps ensure that you stay on track to meet your research objectives and answer your research questions using the most appropriate data collection and analysis tools based on your research design.


  1. Research methodologies. Pfeiffer Library website. Accessed August 15, 2023. https://library.tiffin.edu/researchmethodologies/whatareresearchmethodologies
  2. Types of research methodology. Eduvoice website. Accessed August 16, 2023. https://eduvoice.in/types-research-methodology/
  3. The basics of research methodology: A key to quality research. Voxco. Accessed August 16, 2023. https://www.voxco.com/blog/what-is-research-methodology/
  4. Sampling methods: Types with examples. QuestionPro website. Accessed August 16, 2023. https://www.questionpro.com/blog/types-of-sampling-for-social-research/
  5. What is qualitative research? Methods, types, approaches, examples. Researcher.Life blog. Accessed August 15, 2023. https://researcher.life/blog/article/what-is-qualitative-research-methods-types-examples/
  6. What is quantitative research? Definition, methods, types, and examples. Researcher.Life blog. Accessed August 15, 2023. https://researcher.life/blog/article/what-is-quantitative-research-types-and-examples/
  7. Data analysis in research: Types & methods. QuestionPro website. Accessed August 16, 2023. https://www.questionpro.com/blog/data-analysis-in-research/#Data_analysis_in_qualitative_research
  8. Factors to consider while choosing the right research methodology. PhD Monster website. Accessed August 17, 2023. https://www.phdmonster.com/factors-to-consider-while-choosing-the-right-research-methodology/
  9. What is research methodology? Research and writing guides. Accessed August 14, 2023. https://paperpile.com/g/what-is-research-methodology/
  10. Ethical considerations. Business research methodology website. Accessed August 17, 2023. https://research-methodology.net/research-methodology/ethical-considerations/



Research Methodology Guide: Writing Tips, Types, & Examples


No dissertation or research paper is complete without the research methodology section. Since this is the chapter where you explain how you carried out your research, this is where all the meat is! Here’s where you clearly lay out the steps you have taken to test your hypothesis or research problem.

Through this blog, we’ll unravel the complexities and meaning of research methodology in academic writing, from its fundamental principles and ethics to the diverse types of research methodology in use today. Alongside offering research methodology examples, we aim to guide you on how to write research methodology, ensuring your research endeavors are both impactful and impeccably grounded!


Let’s first take a closer look at a simple research methodology definition:

Defining what is research methodology

Research methodology is the set of procedures and techniques used to collect, analyze, and interpret data to understand and solve a research problem. Methodology in research not only includes the design and methods but also the basic principles that guide the choice of specific methods.

Grasping the concept of methodology in research is essential for students and scholars, as it demonstrates the thorough and structured method used to explore a hypothesis or research question. Understanding the definition of methodology in research aids in identifying the methods used to collect data. Be it through any type of research method approach, ensuring adherence to the proper research paper format is crucial.

Now let’s explore some research methodology types:

Types of research methodology

1. Qualitative research methodology

Qualitative research methodology is aimed at understanding concepts, thoughts, or experiences. This approach is descriptive and is often utilized to gather in-depth insights into people’s attitudes, behaviors, or cultures. Qualitative research methodology involves methods like interviews, focus groups, and observation. The strength of this methodology lies in its ability to provide contextual richness.

2. Quantitative research methodology

Quantitative research methodology, on the other hand, is focused on quantifying the problem by generating numerical data or data that can be transformed into usable statistics. It uses measurable data to formulate facts and uncover patterns in research. Quantitative research methodology typically involves surveys, experiments, or statistical analysis. This methodology is appreciated for its ability to produce objective results that are generalizable to a larger population.

3. Mixed-Methods research methodology

Mixed-methods research combines both qualitative and quantitative research methodologies to provide a more comprehensive understanding of the research problem. This approach leverages the strengths of both methodologies to provide a deeper insight into the research question of a research paper.

Research methodology vs. research methods

The research methodology or design is the overall strategy and rationale that you used to carry out the research. Whereas, research methods are the specific tools and processes you use to gather and understand the data you need to test your hypothesis.

Research methodology examples and application

To further understand research methodology, let’s explore some examples of research methodology:

a. Qualitative research methodology example: A study exploring the impact of author branding on author popularity might utilize in-depth interviews to gather personal experiences and perspectives.

b. Quantitative research methodology example: A research project investigating the effects of a book promotion technique on book sales could employ a statistical analysis of profit margins and sales before and after the implementation of the method.

c. Mixed-Methods research methodology example: A study examining the relationship between social media use and academic performance might combine both qualitative and quantitative approaches. It could include surveys to quantitatively assess the frequency of social media usage and its correlation with grades, alongside focus groups or interviews to qualitatively explore students’ perceptions and experiences regarding how social media affects their study habits and academic engagement.

These examples highlight the meaning of methodology in research and how it guides the research process, from data collection to analysis, ensuring the study’s objectives are met efficiently.

Importance of methodology in research papers

When it comes to writing your study, the methodology in research papers or a dissertation plays a pivotal role. A well-crafted methodology section of a research paper or thesis not only enhances the credibility of your research but also provides a roadmap for others to replicate or build upon your work.

How to structure the research methods chapter

Wondering how to write the research methodology section? Follow these steps to create a strong methods chapter:

Step 1: Explain your research methodology

At the start of a research paper, you would have provided the background of your research and stated your hypothesis or research problem. In this section, you will elaborate on your research strategy. 

Begin by restating your research question and proceed to explain what type of research you opted for to test it. Depending on your research, here are some questions you can consider: 

a. Did you use qualitative or quantitative data to test the hypothesis? 

b. Did you perform an experiment where you collected data or are you writing a dissertation that is descriptive/theoretical without data collection? 

c. Did you use primary data that you collected or analyze secondary research data or existing data as part of your study? 

These questions will help you establish the rationale for your study on a broader level, which you will follow by elaborating on the specific methods you used to collect and understand your data. 

Step 2: Explain the methods you used to test your hypothesis 

Now that you have told your reader what type of research you’ve undertaken for the dissertation, it’s time to dig into specifics. State what specific methods you used and explain the conditions and variables involved. Explain what the theoretical framework behind the method was, what samples you used for testing it, and what tools and materials you used to collect the data. 

Step 3: Explain how you analyzed the results

Once you have explained the data collection process, explain how you analyzed and studied the data. Here, your focus is simply to explain the methods of analysis rather than the results of the study. 

Here are some questions you can answer at this stage: 

a. What tools or software did you use to analyze your results? 

b. What parameters or variables did you consider while understanding and studying the data you’ve collected? 

c. Was your analysis based on a theoretical framework? 

Your mode of analysis will change depending on whether you used a quantitative or qualitative research methodology in your study. If you’re working within the hard sciences or physical sciences, you are likely to use a quantitative research methodology (relying on numbers and hard data). If you’re doing a qualitative study, in the social sciences or humanities, your analysis may rely on understanding language and socio-political contexts around your topic. This is why it’s important to establish what kind of study you’re undertaking at the onset. 

Step 4: Defend your choice of methodology 

Now that you have gone through your research process in detail, you’ll also have to make a case for it. Justify your choice of methodology and methods, explaining why it is the best choice for your research question. This is especially important if you have chosen an unconventional approach or you’ve simply chosen to study an existing research problem from a different perspective. Compare it with other methodologies, especially ones attempted by previous researchers, and discuss what contributions using your methodology makes.  

Step 5: Discuss the obstacles you encountered and how you overcame them

No matter how thorough a methodology is, it doesn’t come without its hurdles. This is a natural part of scientific research that is important to document so that your peers and future researchers are aware of it. Writing in a research paper about this aspect of your research process also tells your evaluator that you have actively worked to overcome the pitfalls that came your way and you have refined the research process. 

Tips to write an effective methodology chapter

1. Remember who you are writing for. Keeping sight of the reader/evaluator will help you know what to elaborate on and what information they are already likely to have. You’re condensing months’ worth of research into just a few pages, so you should omit basic definitions and information about general phenomena people already know.

2. Do not give an overly elaborate explanation of every single condition in your study. 

3. Skip details and findings irrelevant to the results.

4. Cite references that back your claim and choice of methodology. 

5. Consistently emphasize the relationship between your research question and the methodology you adopted to study it. 

To sum it up, what is methodology in research? It’s the blueprint of your research, essential for ensuring that your study is systematic, rigorous, and credible. Whether your focus is on qualitative research methodology, quantitative research methodology, or a combination of both, understanding and clearly defining your methodology is key to the success of your research.

Once you write the research methodology and complete writing the entire research paper, the next step is to edit your paper. As experts in research paper editing and proofreading services, we’d love to help you perfect your paper!

Here are some other articles that you might find useful: 

  • Essential Research Tips for Essay Writing
  • How to Write a Lab Report: Examples from Academic Editors
  • The Essential Types of Editing Every Writer Needs to Know
  • Editing and Proofreading Academic Papers: A Short Guide
  • The Top 10 Editing and Proofreading Services of 2023



Instant insights, infinite possibilities

Research report guide: Definition, types, and tips

Last updated: 5 March 2024

From successful product launches or software releases to planning major business decisions, research reports serve many vital functions. They can summarize evidence and deliver insights and recommendations to save companies time and resources. They can reveal the most value-adding actions a company should take.

However, poorly constructed reports can have the opposite effect! Taking the time to learn established research-reporting rules and approaches will equip you with in-demand skills. You’ll be able to capture and communicate information applicable to numerous situations and industries, adding another string to your bow.

  • What are research reports?

A research report is a collection of contextual data, gathered through organized research, that provides new insights into a particular challenge (which, for this article, is business-related). Research reports are a time-tested method for distilling large amounts of data into a narrow band of focus.

Their effectiveness often hinges on whether the report provides:

  • Strong, well-researched evidence
  • Comprehensive analysis
  • Well-considered conclusions and recommendations

Though the topic possibilities are endless, an effective research report keeps a laser-like focus on the specific questions or objectives the researcher believes are key to achieving success. Many research reports begin as research proposals, which usually include the need for a report to capture the findings of the study and recommend a course of action.

Depending on the study, this can include:

  • A description of the research method used, e.g., qualitative, quantitative, or other
  • Statistical analysis
  • Causal (or explanatory) research (i.e., research identifying relationships between two variables)
  • Inductive research, also known as ‘theory-building’
  • Deductive research, such as that used to test theories
  • Action research, where the research is actively used to drive change

  • Importance of a research report

Research reports can unify and direct a company's focus toward the most appropriate strategic action. Of course, spending resources on a report takes up some of the company's human and financial resources. Choosing when a report is called for is a matter of judgment and experience.

Some development models used heavily in the engineering world, such as Waterfall development, are notorious for over-relying on research reports. With Waterfall development, there is a linear progression through each step of a project, and each stage is precisely documented and reported on before moving to the next.

The pace of the business world is often faster than the speed at which report authors can produce and disseminate their findings. So how do companies strike the right balance between creating and acting on research reports?

The answer lies, again, in the report's defined objectives. By paring down your most pressing interests and those of your stakeholders, your research and reporting skills will be the lenses that keep your company's priorities in constant focus.

Honing your company's primary objectives can save significant amounts of time and align research and reporting efforts with ever-greater precision.

Some examples of well-designed research objectives are:

  • Proving whether or not a product or service meets customer expectations
  • Demonstrating the value of a service, product, or business process to your stakeholders and investors
  • Improving business decision-making when faced with a lack of time or other constraints
  • Clarifying the relationship between a critical cause and effect for problematic business processes
  • Prioritizing the development of a backlog of products or product features
  • Comparing business or production strategies
  • Evaluating past decisions and predicting future outcomes

  • Features of a research report

Research reports generally require a research design phase, where the report author(s) determine the most important elements the report must contain.

Just as there are various kinds of research, there are many types of reports.

Here are the standard elements of almost any research-reporting format:

Report summary. A broad but comprehensive overview of what readers will learn in the full report. Summaries are usually no more than one or two paragraphs and address all key elements of the report. Think of the key takeaways your primary stakeholders will want to know if they don’t have time to read the full document.

Introduction. Include a brief background of the topic, the type of research, and the research sample. Consider the primary goal of the report, who is most affected, and how far along the company is in meeting its objectives.

Methods. A description of how the researcher carried out data collection, analysis, and final interpretations of the data. Include the reasons for choosing a particular method. The methods section should strike a balance between clearly presenting the approach taken to gather data and discussing how it is designed to achieve the report's objectives.

Data analysis. This section contains interpretations that lead readers through the results relevant to the report's thesis. If there were unexpected results, include here a discussion on why that might be. Charts, calculations, statistics, and other supporting information also belong here (or, if lengthy, as an appendix). This should be the most detailed section of the research report, with references for further study. Present the information in a logical order, whether chronologically or in order of importance to the report's objectives.

Conclusion. This should be written with sound reasoning, often containing useful recommendations. The conclusion must be backed by a continuous thread of logic throughout the report.

  • How to write a research paper

With a clear outline and robust pool of research, a research paper can start to write itself, but what's a good way to start a research report?

Research report examples are often the quickest way to gain inspiration for your report. Look for the types of research reports most relevant to your industry and consider which makes the most sense for your data and goals.

The research report outline will help you organize the elements of your report. One of the most time-tested report outlines is the IMRaD structure:

  • Introduction
  • Methods
  • Results
  • ...and Discussion

Pay close attention to the most well-established research reporting format in your industry, and consider your tone and language from your audience's perspective. Learn the key terms inside and out; incorrect jargon could easily harm the perceived authority of your research paper.

Along with a foundation in high-quality research and razor-sharp analysis, the most effective research reports will also demonstrate well-developed:

  • Internal logic
  • Narrative flow
  • Conclusions and recommendations
  • Readability, striking a balance between simple phrasing and technical insight

How to gather research data for your report

The validity of research data is critical. Because the research phase usually occurs well before the writing phase, you normally have plenty of time to vet your data.

However, research reports could involve ongoing research, where report authors (sometimes the researchers themselves) write portions of the report alongside ongoing research.

One such research-report example would be an R&D department that knows its primary stakeholders are eager to learn about a lengthy work in progress and any potentially important outcomes.

However you choose to manage the research and reporting, your data must meet robust quality standards before you can rely on it. Vet any research with the following questions in mind:

  • Does it use statistically valid analysis methods?
  • Do the researchers clearly explain their research, analysis, and sampling methods?
  • Did the researchers provide any caveats or advice on how to interpret their data?
  • Have you gathered the data yourself or were you in close contact with those who did?
  • Is the source biased?

Usually, flawed research methods become more apparent the further you get through a research report.

It's perfectly natural for good research to raise new questions, but the reader should have no uncertainty about what the data represents. There should be no doubt about matters such as:

  • Whether the sampling or analysis methods were based on sound and consistent logic
  • What the research samples are and where they came from
  • The accuracy of any statistical functions or equations
  • Validation of testing and measuring processes

When does a report require design validation?

A robust design validation process is often a gold standard in highly technical research reports. Design validation ensures the objects of a study are measured accurately, which lends more weight to your report and makes it valuable to more specialized industries.

Product development and engineering projects are the most common research-report examples that typically involve a design validation process. Depending on the scope and complexity of your research, you might face additional steps to validate your data and research procedures.

If you’re including design validation in the report (or report proposal), explain and justify your data-collection processes. Good design validation builds greater trust in a research report and lends more weight to its conclusions.

Choosing the right analysis method

Just as the quality of your report depends on properly validated research, a useful conclusion requires the most contextually relevant analysis method. This means comparing different statistical methods and choosing the one that makes the most sense for your research.

Most broadly, research analysis comes down to quantitative or qualitative methods (respectively: values measurable by a number vs subjectively qualified values). There are also mixed research methods, which merge hard data with qualitative assessments while still reaching a cohesive set of conclusions.

Some of the most common analysis methods in research reports include:

  • Significance testing (aka hypothesis testing), which compares test and control groups to determine how likely it is that the observed data resulted from random chance.
  • Regression analysis, to establish relationships between variables, control for extraneous variables, and support correlation analysis.
  • Correlation analysis (aka bivariate testing), a method to identify and determine the strength of linear relationships between variables. It’s effective for detecting patterns in complex data, but care must be taken not to confuse correlation with causation.

With any analysis method, it’s important to justify your choice in the report. You should also provide estimates of statistical accuracy (e.g., the p-value or confidence level) for any quantifiable data analysis.

This requires a commitment to the report's primary aim. For instance, this may be achieving a certain level of customer satisfaction by analyzing the cause and effect of changes to how service is delivered. Even better, use statistical analysis to calculate which change is most positively correlated with improved levels of customer satisfaction.
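As a rough illustration of significance testing, the comparison of a test and control group can be sketched as a simple permutation test in plain Python. Everything here is hypothetical: the satisfaction scores, group sizes, and iteration count are made up for the example, and a real report would name the exact test used.

```python
import random
import statistics

def permutation_test(test, control, n_iter=10_000, seed=42):
    """Approximate the p-value of the observed difference in group means:
    the share of random relabellings of the pooled data that produce a
    mean difference at least as extreme as the one observed."""
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    observed = statistics.mean(test) - statistics.mean(control)
    pooled = list(test) + list(control)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        relabelled = (statistics.mean(pooled[:len(test)])
                      - statistics.mean(pooled[len(test):]))
        if abs(relabelled) >= abs(observed):
            hits += 1
    return hits / n_iter

# Hypothetical satisfaction scores (1-10) for a test and a control group
test_group = [8, 9, 7, 8, 9, 8, 7, 9]
control_group = [6, 7, 5, 6, 7, 6, 5, 6]
p_value = permutation_test(test_group, control_group)
```

A small p-value (conventionally below 0.05) suggests the observed difference is unlikely to be the product of random chance alone; stating the threshold and method up front is part of justifying your analysis choice.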

  • Tips for writing research reports

There's endless good advice for writing effective research reports, and it almost all depends on the subjective aims of the people behind the report. Due to the wide variety of research reports, the best tips will be unique to each author's purpose.

Consider the following research report tips in any order, and take note of the ones most relevant to you:

  • No matter how in-depth or detailed your report might be, provide a well-considered, succinct summary. At the very least, give your readers a quick and effective way to get up to speed.
  • Pare down your target audience (e.g., other researchers, employees, laypersons, etc.), and adjust your voice for their background knowledge and interest levels.
  • For all but the most open-ended research, clarify your objectives, both for yourself and within the report.
  • Leverage your team members’ talents to fill in any knowledge gaps you might have. Your team is only as good as the sum of its parts.
  • Justify why your research proposal’s topic will endure long enough to derive value from the finished report.
  • Consolidate all research and analysis functions onto a single user-friendly platform. There's no reason to settle for less than developer-grade tools suitable for non-developers.

What's the format of a research report?

The research-reporting format is how the report is structured: a framework the authors use to organize their data, conclusions, arguments, and recommendations. The format heavily shapes how the report's outline develops, because it dictates the overall structure and order of information (based on the report's goals and research objectives).

What's the purpose of a research-report outline?

A good report outline gives form and substance to the report's objectives, presenting the results in a readable, engaging way. For any research-report format, the outline should create momentum along a chain of logic that builds up to a conclusion or interpretation.

What's the difference between a research essay and a research report?

There are several key differences between research reports and essays:

Research report:

  • Ordered into separate sections
  • More commercial in nature
  • Often includes infographics
  • Heavily descriptive
  • More self-referential
  • Usually provides recommendations

Research essay:

  • Does not rely on research report formatting
  • More academically minded
  • Normally text-only
  • Less detailed
  • Omits discussion of methods
  • Usually non-prescriptive



What Is Research Methodology?


If you’re new to formal academic research, it’s quite likely that you’re feeling a little overwhelmed by all the technical lingo that gets thrown around. And who could blame you – “research methodology”, “research methods”, “sampling strategies”… it all seems never-ending!

In this post, we’ll demystify the landscape with plain-language explanations and loads of examples (including easy-to-follow videos), so that you can approach your dissertation, thesis or research project with confidence. Let’s get started.

Research Methodology 101

  • What exactly research methodology means
  • What qualitative , quantitative and mixed methods are
  • What sampling strategy is
  • What data collection methods are
  • What data analysis methods are
  • How to choose your research methodology
  • Example of a research methodology


What is research methodology?

Research methodology simply refers to the practical “how” of a research study. More specifically, it’s about how a researcher systematically designs a study to ensure valid and reliable results that address the research aims, objectives and research questions. That is, how the researcher went about deciding:

  • What type of data to collect (e.g., qualitative or quantitative data )
  • Who  to collect it from (i.e., the sampling strategy )
  • How to  collect  it (i.e., the data collection method )
  • How to  analyse  it (i.e., the data analysis methods )

Within any formal piece of academic research (be it a dissertation, thesis or journal article), you’ll find a research methodology chapter or section which covers the aspects mentioned above. Importantly, a good methodology chapter explains not just what methodological choices were made, but also why they were made. In other words, the methodology chapter should justify the design choices, by showing that the chosen methods and techniques are the best fit for the research aims, objectives and research questions.

So, it’s the same as research design?

Not quite. As we mentioned, research methodology refers to the collection of practical decisions regarding what data you’ll collect, from who, how you’ll collect it and how you’ll analyse it. Research design, on the other hand, is more about the overall strategy you’ll adopt in your study. For example, whether you’ll use an experimental design in which you manipulate one variable while controlling others. You can learn more about research design and the various design types here .


What are qualitative, quantitative and mixed-methods?

Qualitative, quantitative and mixed-methods are different types of methodological approaches, distinguished by their focus on words, numbers or both. This is a bit of an oversimplification, but it’s a good starting point for understanding.

Let’s take a closer look.

Qualitative research refers to research which focuses on collecting and analysing words (written or spoken) and textual or visual data, whereas quantitative research focuses on measurement and testing using numerical data . Qualitative analysis can also focus on other “softer” data points, such as body language or visual elements.

It’s quite common for a qualitative methodology to be used when the research aims and research questions are exploratory in nature. For example, a qualitative methodology might be used to understand people’s perceptions about an event that took place, or a political candidate running for president.

In contrast, a quantitative methodology is typically used when the research aims and research questions are confirmatory in nature. For example, a quantitative methodology might be used to measure the relationship between two variables (e.g. personality type and likelihood to commit a crime) or to test a set of hypotheses.

As you’ve probably guessed, the mixed-method methodology attempts to combine the best of both qualitative and quantitative methodologies to integrate perspectives and create a rich picture. If you’d like to learn more about these three methodological approaches, be sure to watch our explainer video below.

What is sampling strategy?

Simply put, sampling is about deciding who (or where) you’re going to collect your data from. Why does this matter? Well, generally it’s not possible to collect data from every single person in your group of interest (this is called the “population”), so you’ll need to engage a smaller portion of that group that’s accessible and manageable (this is called the “sample”).

How you go about selecting the sample (i.e., your sampling strategy) will have a major impact on your study. There are many different sampling methods you can choose from, but the two overarching categories are probability sampling and non-probability sampling.

Probability sampling involves using a completely random sample from the group of people you’re interested in. This is comparable to throwing the names of all potential participants into a hat, shaking it up, and picking out the “winners”. By using a completely random sample, you’ll minimise the risk of selection bias and the results of your study will be more generalisable to the entire population.

Non-probability sampling, on the other hand, doesn’t use a random sample. For example, it might involve using a convenience sample, which means you’d only interview or survey people that you have access to (perhaps your friends, family or work colleagues), rather than a truly random sample. With non-probability sampling, the results are typically not generalisable.
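To make the distinction concrete, here is a minimal Python sketch of the two approaches. The population and sample size are hypothetical, invented purely for illustration:

```python
import random

# Hypothetical population of 1,000 potential participants
population = [f"participant_{i}" for i in range(1, 1001)]

# Probability sampling: every member has an equal chance of selection,
# like drawing 50 names at random out of a hat.
rng = random.Random(7)  # fixed seed so the sketch is reproducible
probability_sample = rng.sample(population, k=50)

# Non-probability (convenience) sampling: take whoever is easiest to
# reach -- here, simply the first 50 names on the list. Results from
# such a sample are typically not generalisable to the population.
convenience_sample = population[:50]
```

The random draw gives every participant the same chance of inclusion, which is exactly what makes the results generalisable; the convenience slice systematically favours whoever happens to sit at the front of the list.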

To learn more about sampling methods, be sure to check out the video below.

What are data collection methods?

As the name suggests, data collection methods simply refer to the way in which you go about collecting the data for your study. Some of the most common data collection methods include:

  • Interviews (which can be unstructured, semi-structured or structured)
  • Focus groups and group interviews
  • Surveys (online or physical surveys)
  • Observations (watching and recording activities)
  • Biophysical measurements (e.g., blood pressure, heart rate, etc.)
  • Documents and records (e.g., financial reports, court records, etc.)

The choice of which data collection method to use depends on your overall research aims and research questions , as well as practicalities and resource constraints. For example, if your research is exploratory in nature, qualitative methods such as interviews and focus groups would likely be a good fit. Conversely, if your research aims to measure specific variables or test hypotheses, large-scale surveys that produce large volumes of numerical data would likely be a better fit.

What are data analysis methods?

Data analysis methods refer to the methods and techniques that you’ll use to make sense of your data. These can be grouped according to whether the research is qualitative  (words-based) or quantitative (numbers-based).

Popular data analysis methods in qualitative research include:

  • Qualitative content analysis
  • Thematic analysis
  • Discourse analysis
  • Narrative analysis
  • Interpretative phenomenological analysis (IPA)
  • Visual analysis (of photographs, videos, art, etc.)

Qualitative data analysis begins with data coding, after which an analysis method is applied. In some cases, more than one analysis method is used, depending on the research aims and research questions. In the video below, we explore some common qualitative analysis methods, along with practical examples.

Popular data analysis methods in quantitative research include:

  • Descriptive statistics (e.g., means, medians, modes)
  • Inferential statistics (e.g., correlation, regression, structural equation modelling)

How do I choose a research methodology?

As you’ve probably picked up by now, your research aims and objectives have a major influence on the research methodology . So, the starting point for developing your research methodology is to take a step back and look at the big picture of your research, before you make methodology decisions. The first question you need to ask yourself is whether your research is exploratory or confirmatory in nature.

If your research aims and objectives are primarily exploratory in nature, your research will likely be qualitative and therefore you might consider qualitative data collection methods (e.g. interviews) and analysis methods (e.g. qualitative content analysis). 

Conversely, if your research aims and objectives are looking to measure or test something (i.e. they’re confirmatory), then your research will quite likely be quantitative in nature, and you might consider quantitative data collection methods (e.g. surveys) and analyses (e.g. statistical analysis).

Designing your research and working out your methodology is a large topic, which we cover extensively on the blog. For now, however, the key takeaway is that you should always start with your research aims, objectives and research questions (the golden thread). Every methodological choice you make needs to align with those three components.

Example of a research methodology chapter

In the video below, we provide a detailed walkthrough of a research methodology from an actual dissertation, as well as an overview of our free methodology template .


Learn More About Methodology

  • Triangulation: The Ultimate Credibility Enhancer. Triangulation is one of the best ways to enhance the credibility of your research. Learn about the different options here.

  • Research Limitations 101: What You Need To Know. Learn everything you need to know about research limitations (AKA limitations of the study). Includes practical examples from real studies.

  • In Vivo Coding 101: Full Explainer With Examples. Learn about in vivo coding, a popular qualitative coding technique ideal for studies where the nuances of language are central to the aims.

  • Process Coding 101: Full Explainer With Examples. Learn about process coding, a popular qualitative coding technique ideal for studies exploring processes, actions and changes over time.

  • Qualitative Coding 101: Inductive, Deductive & Hybrid Coding. Inductive, deductive and abductive qualitative coding approaches explained.



WALLACE

Well explained. Now I know my research methodology will be qualitative and exploratory. Thank you so much, keep up the good work

GEORGE REUBEN MSHEGAME

Well explained, thank you very much.

Ainembabazi Rose

This is good explanation, I have understood the different methods of research. Thanks a lot.

Kamran Saeed

Great work…very well explanation

Hyacinth Chebe Ukwuani

Thanks Derek. Kerryn was just fantastic!

Great to hear that, Hyacinth. Best of luck with your research!

Matobela Joel Marabi

Its a good templates very attractive and important to PhD students and lectuter

Thanks for the feedback, Matobela. Good luck with your research methodology.

Elie

Thank you. This is really helpful.

You’re very welcome, Elie. Good luck with your research methodology.

Sakina Dalal

Well explained thanks

Edward

This is a very helpful site especially for young researchers at college. It provides sufficient information to guide students and equip them with the necessary foundation to ask any other questions aimed at deepening their understanding.

Thanks for the kind words, Edward. Good luck with your research!

Ngwisa Marie-claire NJOTU

Thank you. I have learned a lot.

Great to hear that, Ngwisa. Good luck with your research methodology!

Claudine

Thank you for keeping your presentation simples and short and covering key information for research methodology. My key takeaway: Start with defining your research objective the other will depend on the aims of your research question.

Zanele

My name is Zanele I would like to be assisted with my research , and the topic is shortage of nursing staff globally want are the causes , effects on health, patients and community and also globally

Oluwafemi Taiwo

Thanks for making it simple and clear. It greatly helped in understanding research methodology. Regards.

Francis

This is well simplified and straight to the point

Gabriel mugangavari

Thank you Dr

Dina Haj Ibrahim

I was given an assignment to research 2 publications and describe their research methodology? I don’t know how to start this task can someone help me?

Sure. You’re welcome to book an initial consultation with one of our Research Coaches to discuss how we can assist – https://gradcoach.com/book/new/ .

BENSON ROSEMARY

Thanks a lot I am relieved of a heavy burden.keep up with the good work

Ngaka Mokoena

I’m very much grateful Dr Derek. I’m planning to pursue one of the careers that really needs one to be very much eager to know. There’s a lot of research to do and everything, but since I’ve gotten this information I will use it to the best of my potential.

Pritam Pal

Thank you so much, words are not enough to explain how helpful this session has been for me!

faith

Thanks this has thought me alot.

kenechukwu ambrose

Very concise and helpful. Thanks a lot

Eunice Shatila Sinyemu 32070

Thank Derek. This is very helpful. Your step by step explanation has made it easier for me to understand different concepts. Now i can get on with my research.

Michelle

I wish i had come across this sooner. So simple but yet insightful

yugine the

really nice explanation thank you so much

Goodness

I’m so grateful finding this site, it’s really helpful…….every term well explained and provide accurate understanding especially to student going into an in-depth research for the very first time, even though my lecturer already explained this topic to the class, I think I got the clear and efficient explanation here, much thanks to the author.

lavenda

It is very helpful material

Lubabalo Ntshebe

I would like to be assisted with my research topic : Literature Review and research methodologies. My topic is : what is the relationship between unemployment and economic growth?

Buddhi

Its really nice and good for us.

Ekokobe Aloysius

THANKS SO MUCH FOR EXPLANATION, ITS VERY CLEAR TO ME WHAT I WILL BE DOING FROM NOW .GREAT READS.

Asanka

Short but sweet.Thank you

Shishir Pokharel

Informative article. Thanks for your detailed information.

Badr Alharbi

I’m currently working on my Ph.D. thesis. Thanks a lot, Derek and Kerryn, Well-organized sequences, facilitate the readers’ following.

Tejal

great article for someone who does not have any background can even understand

Hasan Chowdhury

I am a bit confused about research design and methodology. Are they the same? If not, what are the differences and how are they related?

Thanks in advance.

Ndileka Myoli

concise and informative.

Sureka Batagoda

Thank you very much

More Smith

How can we site this article is Harvard style?

Anne

Very well written piece that afforded better understanding of the concept. Thank you!

Denis Eken Lomoro

Am a new researcher trying to learn how best to write a research proposal. I find your article spot on and want to download the free template but finding difficulties. Can u kindly send it to my email, the free download entitled, “Free Download: Research Proposal Template (with Examples)”.

fatima sani

Thank too much

Khamis

Thank you very much for your comprehensive explanation about research methodology so I like to thank you again for giving us such great things.

Aqsa Iftijhar

Good very well explained.Thanks for sharing it.

Krishna Dhakal

Thank u sir, it is really a good guideline.

Vimbainashe

so helpful thank you very much.

Joelma M Monteiro

Thanks for the video it was very explanatory and detailed, easy to comprehend and follow up. please, keep it up the good work

AVINASH KUMAR NIRALA

It was very helpful, a well-written document with precise information.

orebotswe morokane

how do i reference this?

Roy

MLA Jansen, Derek, and Kerryn Warren. “What (Exactly) Is Research Methodology?” Grad Coach, June 2021, gradcoach.com/what-is-research-methodology/.

APA Jansen, D., & Warren, K. (2021, June). What (Exactly) Is Research Methodology? Grad Coach. https://gradcoach.com/what-is-research-methodology/

sheryl

Your explanation is easily understood. Thank you

Dr Christie

Very help article. Now I can go my methodology chapter in my thesis with ease

Alice W. Mbuthia

I feel guided ,Thank you

Joseph B. Smith

This simplification is very helpful. It is simple but very educative, thanks ever so much

Dr. Ukpai Ukpai Eni

The write up is informative and educative. It is an academic intellectual representation that every good researcher can find useful. Thanks

chimbini Joseph

Wow, this is wonderful long live.

Tahir

Nice initiative

Thembsie

thank you the video was helpful to me.

JesusMalick

Thank you very much for your simple and clear explanations I’m really satisfied by the way you did it By now, I think I can realize a very good article by following your fastidious indications May God bless you

G.Horizon

Thanks very much, it was very concise and informational for a beginner like me to gain an insight into what i am about to undertake. I really appreciate.

Adv Asad Ali

very informative sir, it is amazing to understand the meaning of question hidden behind that, and simple language is used other than legislature to understand easily. stay happy.

Jonas Tan

This one is really amazing. All content in your youtube channel is a very helpful guide for doing research. Thanks, GradCoach.

mahmoud ali

research methodologies

Lucas Sinyangwe

Please send me more information concerning dissertation research.

Amamten Jr.

Nice piece of knowledge shared….. #Thump_UP

Hajara Salihu

This is amazing, it has said it all. Thanks to Gradcoach

Gerald Andrew Babu

This is wonderful,very elaborate and clear.I hope to reach out for your assistance in my research very soon.

Safaa

This is the answer I am searching about…

realy thanks a lot

Ahmed Saeed

Thank you very much for this awesome, to the point and inclusive article.

Soraya Kolli

Thank you very much I need validity and reliability explanation I have exams

KuzivaKwenda

Thank you for a well explained piece. This will help me going forward.

Emmanuel Chukwuma

Very simple and well detailed Many thanks

Zeeshan Ali Khan

This is so very simple yet so very effective and comprehensive. An Excellent piece of work.

Molly Wasonga

I wish I saw this earlier on! Great insights for a beginner(researcher) like me. Thanks a mil!

Blessings Chigodo

Thank you very much, for such a simplified, clear and practical step by step both for academic students and general research work. Holistic, effective to use and easy to read step by step. One can easily apply the steps in practical terms and produce a quality document/up-to standard

Thanks for simplifying these terms for us, really appreciated.

Joseph Kyereme

Thanks for a great work. well understood .

Julien

This was very helpful. It was simple but profound and very easy to understand. Thank you so much!

Kishimbo

Great and amazing research guidelines. Best site for learning research

ankita bhatt

hello sir/ma’am, i didn’t find yet that what type of research methodology i am using. because i am writing my report on CSR and collect all my data from websites and articles so which type of methodology i should write in dissertation report. please help me. i am from India.

memory

how does this really work?

princelow presley

perfect content, thanks a lot

George Nangpaak Duut

As a researcher, I commend you for the detailed and simplified information on the topic in question. I would like to remain in touch for the sharing of research ideas on other topics. Thank you

EPHRAIM MWANSA MULENGA

Impressive. Thank you, Grad Coach 😍

Thank you Grad Coach for this piece of information. I have at least learned about the different types of research methodologies.

Varinder singh Rana

Very useful content with easy way

Mbangu Jones Kashweeka

Thank you very much for the presentation. I am an MPH student with the Adventist University of Africa. I have successfully completed my theory and starting on my research this July. My topic is “Factors associated with Dental Caries in (one District) in Botswana. I need help on how to go about this quantitative research

Carolyn Russell

I am so grateful to run across something that was sooo helpful. I have been on my doctorate journey for quite some time. Your breakdown on methodology helped me to refresh my intent. Thank you.

Indabawa Musbahu

thanks so much for this good lecture. student from university of science and technology, Wudil. Kano Nigeria.

Limpho Mphutlane

It’s profound easy to understand I appreciate

Mustafa Salimi

Thanks a lot for sharing superb information in a detailed but concise manner. It was really helpful and helped a lot in getting into my own research methodology.

Rabilu yau

Comment * thanks very much

Ari M. Hussein

This was sooo helpful for me thank you so much i didn’t even know what i had to write thank you!

You’re most welcome 🙂

Varsha Patnaik

Simple and good. Very much helpful. Thank you so much.

STARNISLUS HAAMBOKOMA

This is very good work. I have benefited.

Dr Md Asraul Hoque

Thank you so much for sharing

Nkasa lizwi

This is powerful thank you so much guys

I am nkasa lizwi doing my research proposal on honors with the university of Walter Sisulu Komani I m on part 3 now can you assist me.my topic is: transitional challenges faced by educators in intermediate phase in the Alfred Nzo District.

Atonisah Jonathan

Appreciate the presentation. Very useful step-by-step guidelines to follow.

Bello Suleiman

I appreciate sir

Titilayo

wow! This is super insightful for me. Thank you!

Emerita Guzman

Indeed this material is very helpful! Kudos writers/authors.

TSEDEKE JOHN

I want to say thank you very much, I got a lot of info and knowledge. Be blessed.

Akanji wasiu

I want present a seminar paper on Optimisation of Deep learning-based models on vulnerability detection in digital transactions.

Need assistance

Clement Lokwar

Dear Sir, I want to be assisted on my research on Sanitation and Water management in emergencies areas.

Peter Sone Kome

I am deeply grateful for the knowledge gained. I will be getting in touch shortly as I want to be assisted in my ongoing research.

Nirmala

The information shared is informative, crisp and clear. Kudos Team! And thanks a lot!

Bipin pokhrel

hello i want to study

Kassahun

Hello!! Grad coach teams. I am extremely happy in your tutorial or consultation. i am really benefited all material and briefing. Thank you very much for your generous helps. Please keep it up. If you add in your briefing, references for further reading, it will be very nice.

Ezra

All I have to say is, thank u gyz.

Work

Good, l thanks

Artak Ghonyan

thank you, it is very useful


Uncomplicated Reviews of Educational Research Methods

Writing a Research Report


This review covers the basic elements of a research report. It is a general guide to what you will see in journal articles or dissertations. The format assumes a mixed methods study, but you can leave out either the quantitative or the qualitative sections if you used only a single methodology.

This review is divided into sections for easy reference. There are five MAJOR parts of a Research Report:

1. Introduction
2. Review of Literature
3. Methods
4. Results
5. Discussion

As a general guide, the Introduction, Review of Literature, and Methods together should make up about one third of your paper, the Results another third, and the Discussion the final third.

Section 1: Cover Sheet (APA-format cover sheet; optional, if required)

Section 2: Abstract (a basic summary of the report, including sample, treatment, design, results, and implications; ≤ 150 words; optional, if required)
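The 150-word ceiling on the abstract is easy to check mechanically. A minimal sketch, using a made-up placeholder abstract (not from any real report):

```python
# Quick sanity check for the 150-word abstract limit.
# The abstract text below is an invented placeholder.
abstract = ("This hypothetical study examined the effect of class size on "
            "reading achievement in one district.")

word_count = len(abstract.split())
print(word_count, word_count <= 150)  # 15 True
```

A whitespace split is a rough word count; word processors may count slightly differently, so leave some margin.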

Section 3: Introduction (1–3 paragraphs)
•    Basic introduction
•    Supportive statistics (can be from periodicals)
•    Statement of Purpose
•    Statement of Significance

Section 4: Research question(s) or hypotheses
•    An overall research question (optional)
•    Quantitative-based hypotheses
•    Qualitative-based research questions
Note: You will generally have more than one, especially if using hypotheses.

Section 5: Review of Literature
▪    Should be organized by subheadings
▪    Should adequately support your study using supporting, related, and/or refuting evidence
▪    Is a synthesis, not a collection of individual summaries

Section 6: Methods
▪    Procedure: Describe data gathering or participant recruitment, including IRB approval
▪    Sample: Describe the sample or dataset, including basic demographics
▪    Setting: Describe the setting, if applicable (generally only in qualitative designs)
▪    Treatment: If applicable, describe, in detail, how you implemented the treatment
▪    Instrument: Describe, in detail, how you implemented the instrument, and the reliability and validity associated with it
▪    Data Analysis: Describe the type of procedure (t-test, interviews, etc.) and software (if used)
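To make the Data Analysis item concrete, here is a hedged sketch of the kind of procedure a Methods section might name ("an independent-samples t-test, run in Python"). The scores, group names, and the `independent_t` helper are all invented for illustration; in practice you would name the actual software package you used.

```python
# Hypothetical illustration of a data-analysis procedure: a pooled-variance
# independent-samples t-test. The scores below are invented placeholder data,
# not results from any real study.
import statistics as st

def independent_t(group_a, group_b):
    """Pooled-variance (equal-variance) independent-samples t statistic."""
    na, nb = len(group_a), len(group_b)
    mean_diff = st.mean(group_a) - st.mean(group_b)
    # st.variance() is the sample variance (n - 1 denominator)
    pooled_var = ((na - 1) * st.variance(group_a) +
                  (nb - 1) * st.variance(group_b)) / (na + nb - 2)
    return mean_diff / (pooled_var * (1 / na + 1 / nb)) ** 0.5

treatment_scores = [78, 85, 90, 72, 88, 81, 79, 93]
control_scores = [70, 75, 82, 68, 77, 74, 71, 80]

t = independent_t(treatment_scores, control_scores)
df = len(treatment_scores) + len(control_scores) - 2
print(f"t({df}) = {t:.2f}")  # t(14) = 2.85 for this placeholder data
```

The Methods section would report the test, the software, the degrees of freedom, and the significance criterion, not the code itself.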

Section 7: Results
▪    Restate Research Question 1 (quantitative)
▪    Describe results
▪    Restate Research Question 2 (qualitative)
▪    Describe results

Section 8: Discussion
▪    Restate the overall research question
▪    Describe how the results, taken together, answer the overall question
▪    Describe how the results confirm or contrast the literature you reviewed

Section 9: Recommendations (if applicable, generally related to practice)

Section 10: Limitations
▪    Discuss, in several sentences, the limitations of this study
▪    Research design (overall, then the limitations of each component separately)
▪    Sample
▪    Instrument(s)
▪    Other limitations

Section 11: Conclusion (A brief closing summary)

Section 12: References (APA format)


About Research Rundowns

Research Rundowns was made possible by support from the Dewar College of Education at Valdosta State University .


USC Libraries • Research Guides

Organizing Your Social Sciences Research Paper

6. The Methodology

The methods section describes the actions taken to investigate a research problem and the rationale for applying the specific procedures or techniques used to identify, select, process, and analyze information, thereby allowing the reader to critically evaluate a study’s overall validity and reliability. The methodology section of a research paper answers two main questions: How was the data collected or generated? And how was it analyzed? The writing should be direct and precise and always written in the past tense.

Kallet, Richard H. "How to Write the Methods Section of a Research Paper." Respiratory Care 49 (October 2004): 1229-1232.

Importance of a Good Methodology Section

You must explain how you obtained and analyzed your results for the following reasons:

  • Readers need to know how the data was obtained because the method you chose affects the results and, by extension, how you interpreted their significance in the discussion section of your paper.
  • Methodology is crucial for any branch of scholarship because an unreliable method produces unreliable results and, as a consequence, undermines the value of your analysis of the findings.
  • In most cases, there are a variety of different methods you can choose to investigate a research problem. The methodology section of your paper should clearly articulate the reasons why you have chosen a particular procedure or technique.
  • The reader wants to know that the data was collected or generated in a way that is consistent with accepted practice in the field of study. For example, if you are using a multiple choice questionnaire, readers need to know that it offered your respondents a reasonable range of answers to choose from.
  • The method must be appropriate to fulfilling the overall aims of the study. For example, you need to ensure that you have a large enough sample size to be able to generalize and make recommendations based upon the findings.
  • The methodology should discuss the problems that were anticipated and the steps you took to prevent them from occurring. For any problems that do arise, you must describe the ways in which they were minimized or why they do not meaningfully affect your interpretation of the findings.
  • In the social and behavioral sciences, it is important to always provide sufficient information to allow other researchers to adopt or replicate your methodology. This information is particularly important when a new method has been developed or an innovative use of an existing method is utilized.

Bem, Daryl J. Writing the Empirical Journal Article. Psychology Writing Center. University of Washington; Denscombe, Martyn. The Good Research Guide: For Small-Scale Social Research Projects . 5th edition. Buckingham, UK: Open University Press, 2014; Lunenburg, Frederick C. Writing a Successful Thesis or Dissertation: Tips and Strategies for Students in the Social and Behavioral Sciences . Thousand Oaks, CA: Corwin Press, 2008.

Structure and Writing Style

I.  Groups of Research Methods

There are two main groups of research methods in the social sciences:

  • The empirical-analytical group approaches the study of the social sciences in a manner similar to how researchers study the natural sciences. This type of research focuses on objective knowledge, research questions that can be answered yes or no, and operational definitions of variables to be measured. The empirical-analytical group employs deductive reasoning that uses existing theory as a foundation for formulating hypotheses to be tested. This approach is focused on explanation.
  • The interpretative group of methods is focused on understanding phenomena in a comprehensive, holistic way. Interpretive methods focus on analytically disclosing the meaning-making practices of human subjects [the why, how, or by what means people do what they do], while showing how those practices give rise to observable outcomes. Interpretive methods allow you to recognize your connection to the phenomena under investigation. However, the interpretative group requires careful examination of variables because it focuses more on subjective knowledge.

II.  Content

The introduction to your methodology section should begin by restating the research problem and underlying assumptions underpinning your study. This is followed by situating the methods you used to gather, analyze, and process information within the overall “tradition” of your field of study and within the particular research design you have chosen to study the problem. If the method you choose lies outside of the tradition of your field [i.e., your review of the literature demonstrates that the method is not commonly used], provide a justification for how your choice of methods specifically addresses the research problem in ways that have not been utilized in prior studies.

The remainder of your methodology section should describe the following:

  • Decisions made in selecting the data you have analyzed or, in the case of qualitative research, the subjects and research setting you have examined,
  • Tools and methods used to identify and collect information, and how you identified relevant variables,
  • The ways in which you processed the data and the procedures you used to analyze that data, and
  • The specific research tools or strategies that you utilized to study the underlying hypothesis and research questions.

In addition, an effectively written methodology section should:

  • Introduce the overall methodological approach for investigating your research problem . Is your study qualitative or quantitative or a combination of both (mixed method)? Are you going to take a special approach, such as action research, or a more neutral stance?
  • Indicate how the approach fits the overall research design . Your methods for gathering data should have a clear connection to your research problem. In other words, make sure that your methods will actually address the problem. One of the most common deficiencies found in research papers is that the proposed methodology is not suitable to achieving the stated objective of your paper.
  • Describe the specific methods of data collection you are going to use , such as, surveys, interviews, questionnaires, observation, archival research. If you are analyzing existing data, such as a data set or archival documents, describe how it was originally created or gathered and by whom. Also be sure to explain how older data is still relevant to investigating the current research problem.
  • Explain how you intend to analyze your results . Will you use statistical analysis? Will you use specific theoretical perspectives to help you analyze a text or explain observed behaviors? Describe how you plan to obtain an accurate assessment of relationships, patterns, trends, distributions, and possible contradictions found in the data.
  • Provide background and a rationale for methodologies that are unfamiliar for your readers . Very often in the social sciences, research problems and the methods for investigating them require more explanation/rationale than widely accepted rules governing the natural and physical sciences. Be clear and concise in your explanation.
  • Provide a justification for subject selection and sampling procedure . For instance, if you propose to conduct interviews, how do you intend to select the sample population? If you are analyzing texts, which texts have you chosen, and why? If you are using statistics, why is this set of data being used? If other data sources exist, explain why the data you chose is most appropriate to addressing the research problem.
  • Provide a justification for case study selection . A common method of analyzing research problems in the social sciences is to analyze specific cases. These can be a person, place, event, phenomenon, or other type of subject of analysis that are either examined as a singular topic of in-depth investigation or multiple topics of investigation studied for the purpose of comparing or contrasting findings. In either method, you should explain why a case or cases were chosen and how they specifically relate to the research problem.
  • Describe potential limitations . Are there any practical limitations that could affect your data collection? How will you attempt to control for potential confounding variables and errors? If your methodology may lead to problems you can anticipate, state this openly and show why pursuing this methodology outweighs the risk of these problems cropping up.
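The guidance above on justifying a sampling procedure can be made concrete with a small sketch. This is a hypothetical illustration, not a prescribed method: a simple random sample drawn from an invented frame of 500 participant IDs, with the seed fixed so the draw is reproducible, which is exactly the kind of detail a methodology section should record.

```python
# Hypothetical sketch of a documented, reproducible sampling procedure:
# simple random sampling without replacement from a sampling frame.
# The frame of 500 participant IDs is invented for illustration.
import random

frame = [f"P{i:03d}" for i in range(1, 501)]  # hypothetical sampling frame
rng = random.Random(42)                        # seed reported in the write-up
sample = rng.sample(frame, k=50)               # n = 50, without replacement

print(len(sample), len(set(sample)))
```

Reporting the frame, the sample size, the selection mechanism, and (where software is used) the seed lets another researcher reproduce or audit the selection.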

NOTE: Once you have written all of the elements of the methods section, subsequent revisions should focus on how to present those elements as clearly and as logically as possible. The description of how you prepared to study the research problem, how you gathered the data, and the protocol for analyzing the data should be organized chronologically. For clarity, when a large amount of detail must be presented, information should be presented in sub-sections according to topic. If necessary, consider using appendices for raw data.

ANOTHER NOTE: If you are conducting a qualitative analysis of a research problem , the methodology section generally requires a more elaborate description of the methods used as well as an explanation of the processes applied to gathering and analyzing of data than is generally required for studies using quantitative methods. Because you are the primary instrument for generating the data [e.g., through interviews or observations], the process for collecting that data has a significantly greater impact on producing the findings. Therefore, qualitative research requires a more detailed description of the methods used.

YET ANOTHER NOTE: If your study involves interviews, observations, or other qualitative techniques involving human subjects, you may be required to obtain approval from the university's Office for the Protection of Research Subjects before beginning your research. This is not a common procedure for most undergraduate-level student research assignments. However, if your professor states you need approval, you must include a statement in your methods section that you received official endorsement and adequate informed consent from the office and that there was a clear assessment and minimization of risks to participants and to the university. This statement informs the reader that your study was conducted in an ethical and responsible manner. In some cases, the approval notice is included as an appendix to your paper.

III.  Problems to Avoid

Irrelevant Detail
The methodology section of your paper should be thorough but concise. Do not provide any background information that does not directly help the reader understand why a particular method was chosen, how the data was gathered or obtained, and how the data was analyzed in relation to the research problem [note: analyzed, not interpreted! Save how you interpreted the findings for the discussion section]. With this in mind, the page length of your methods section will generally be less than any other section of your paper except the conclusion.

Unnecessary Explanation of Basic Procedures
Remember that you are not writing a how-to guide about a particular method. You should make the assumption that readers possess a basic understanding of how to investigate the research problem on their own and, therefore, you do not have to go into great detail about specific methodological procedures. The focus should be on how you applied a method, not on the mechanics of doing a method. An exception to this rule is if you select an unconventional methodological approach; if this is the case, be sure to explain why this approach was chosen and how it enhances the overall process of discovery.

Problem Blindness

It is almost a given that you will encounter problems when collecting or generating your data, or that gaps will exist in existing data or archival materials. Do not ignore these problems or pretend they did not occur. Often, documenting how you overcame obstacles can form an interesting part of the methodology; it demonstrates to the reader that you can provide a cogent rationale for the decisions you made to minimize the impact of any problems that arose.

Literature Review

Just as the literature review section of your paper provides an overview of sources you have examined while researching a particular topic, the methodology section should cite any sources that informed your choice and application of a particular method [i.e., the choice of a survey should include any citations to the works you used to help construct the survey].

It’s More than Sources of Information!

A description of a research study's method should not be confused with a description of the sources of information. Such a list of sources is useful in and of itself, especially if it is accompanied by an explanation about the selection and use of the sources. The description of the project's methodology complements a list of sources in that it sets forth the organization and interpretation of information emanating from those sources.


Writing Tip

Statistical Designs and Tests? Do Not Fear Them!

Don't avoid a quantitative approach to analyzing your research problem just because you fear the idea of applying statistical designs and tests. A qualitative approach, such as conducting interviews or a content analysis of archival texts, can yield exciting new insights about a research problem, but it should not be undertaken simply because you have a disdain for running a simple regression. A well-designed quantitative research study can often be accomplished in clear and direct ways, whereas a similar study of a qualitative nature usually requires considerable time to analyze large volumes of data and substantial effort to create new paths for analysis where none previously existed.


Another Writing Tip

Knowing the Relationship Between Theories and Methods

There can be multiple meanings associated with the terms "theories" and "methods" in social sciences research. A helpful way to delineate between them is to understand "theories" as representing different ways of characterizing the social world when you research it and "methods" as representing different ways of generating and analyzing data about that social world. Framed in this way, all empirical social sciences research involves theories and methods, whether they are stated explicitly or not. However, while theories and methods are often related, it is important that, as a researcher, you deliberately separate them in order to avoid your theories playing a disproportionate role in shaping what outcomes your chosen methods produce.

Introspectively engage in an ongoing dialectic between the application of theories and methods to help enable you to use the outcomes from your methods to interrogate and develop new theories, or ways of framing conceptually the research problem. This is how scholarship grows and branches out into new intellectual territory.


Yet Another Writing Tip

Methods and the Methodology

Do not confuse the terms "methods" and "methodology." As Schneider notes, a method refers to the technical steps taken to do research. Descriptions of methods usually include defining and stating why you have chosen specific techniques to investigate a research problem, followed by an outline of the procedures you used to systematically select, gather, and process the data [remember to always save the interpretation of data for the discussion section of your paper].

The methodology refers to a discussion of the underlying reasoning why particular methods were used . This discussion includes describing the theoretical concepts that inform the choice of methods to be applied, placing the choice of methods within the more general nature of academic work, and reviewing its relevance to examining the research problem. The methodology section also includes a thorough review of the methods other scholars have used to study the topic.

Bryman, Alan. "Of Methods and Methodology." Qualitative Research in Organizations and Management: An International Journal 3 (2008): 159-168; Schneider, Florian. “What's in a Methodology: The Difference between Method, Methodology, and Theory…and How to Get the Balance Right?” PoliticsEastAsia.com. Chinese Department, University of Leiden, Netherlands.

  • Last Updated: Sep 4, 2024 9:40 AM
  • URL: https://libguides.usc.edu/writingguide


What is research methodology?


The basics of research methodology

  • Why do you need a research methodology?
  • What needs to be included?
  • Why do you need to document your research method?
  • What are the different types of research instruments?
  • Qualitative / quantitative / mixed research methodologies
  • How do you choose the best research methodology for you?
  • Frequently asked questions about research methodology
  • Related articles

When you’re working on your first piece of academic research, there are many different things to focus on, and it can be overwhelming to stay on top of everything. This is especially true of budding or inexperienced researchers.

If you’ve never put together a research proposal before or find yourself in a position where you need to explain your research methodology decisions, there are a few things you need to be aware of.

Once you understand the ins and outs, handling academic research in the future will be less intimidating. We break down the basics below:

A research methodology encompasses the way in which you intend to carry out your research. This includes how you plan to tackle things like collection methods, statistical analysis, participant observations, and more.

You can think of your research methodology as a formula. One part will be how you plan on putting your research into practice, and another will be why you feel this is the best way to approach it. Your research methodology is ultimately a methodical and systematic plan to resolve your research problem.

In short, you are explaining how you will take your idea and turn it into a study, which in turn will produce valid and reliable results that are in accordance with the aims and objectives of your research. This is true whether your paper plans to make use of qualitative methods or quantitative methods.

The purpose of a research methodology is to explain the reasoning behind your approach to your research: you'll need to support your collection methods, methods of analysis, and other key points of your work.

Think of it like writing a plan or an outline for what you intend to do.

When carrying out research, it can be easy to go off-track or depart from your standard methodology.

Tip: Having a methodology keeps you accountable and on track with your original aims and objectives, and gives you a suitable and sound plan to keep your project manageable, smooth, and effective.

With all that said, how do you write out your standard approach to a research methodology?

As a general plan, your methodology should include the following information:

  • Your research method.  You need to state whether you plan to use quantitative analysis, qualitative analysis, or mixed-method research methods. This will often be determined by what you hope to achieve with your research.
  • Explain your reasoning. Why are you taking this methodological approach? Why is this particular methodology the best way to answer your research problem and achieve your objectives?
  • Explain your instruments.  This will mainly be about your collection methods. There are various instruments you can use, such as interviews, physical surveys, and questionnaires. Your methodology will need to detail your reasoning for choosing a particular instrument for your research.
  • What will you do with your results?  How are you going to analyze the data once you have gathered it?
  • Advise your reader.  If there is anything in your research methodology that your reader might be unfamiliar with, you should explain it in more detail. For example, you should give any background information to your methods that might be relevant or provide your reasoning if you are conducting your research in a non-standard way.
  • How will your sampling process go?  What will your sampling procedure be and why? For example, if you will collect data by carrying out semi-structured or unstructured interviews, how will you choose your interviewees and how will you conduct the interviews themselves?
  • Any practical limitations?  You should discuss any limitations you foresee being an issue when you’re carrying out your research.

In any dissertation, thesis, or academic journal article, you will find a chapter or section dedicated to explaining the research methodology of the person who carried out the study, also referred to as the methodology section of the work.

A good research methodology will explain what you are going to do and why, while a poor methodology will lead to a messy or disorganized approach.

You should also be able to justify in this section your reasoning for why you intend to carry out your research in a particular way, especially if it might be a particularly unique method.

Having a sound methodology in place can also help you with the following:

  • When another researcher at a later date wishes to try and replicate your research, they will need your explanations and guidelines.
  • In the event that you receive any criticism or questioning on the research you carried out at a later point, you will be able to refer back to it and succinctly explain the how and why of your approach.
  • It provides you with a plan to follow throughout your research. When you are drafting your methodology approach, you need to be sure that the method you are using is the right one for your goal. This will help you with both explaining and understanding your method.
  • It affords you the opportunity to document from the outset what you intend to achieve with your research, from start to finish.

A research instrument is a tool you will use to help you collect, measure and analyze the data you use as part of your research.

The choice of research instrument will usually be yours to make as the researcher and will be whichever best suits your methodology.

There are many different research instruments you can use in collecting data for your research.

Generally, they can be grouped as follows:

  • Interviews (either as a group or one-on-one). You can carry out interviews in many different ways. For example, your interview can be structured, semi-structured, or unstructured. The difference between them is how formal the set of questions is that is asked of the interviewee. In a group interview, you may choose to ask the interviewees to give you their opinions or perceptions on certain topics.
  • Surveys (online or in-person). In survey research, you are posing questions in which you ask for a response from the person taking the survey. You may wish to have either free-answer questions such as essay-style questions, or you may wish to use closed questions such as multiple choice. You may even wish to make the survey a mixture of both.
  • Focus Groups.  Similar to the group interview above, you may wish to ask a focus group to discuss a particular topic or opinion while you make a note of the answers given.
  • Observations.  This is a good research instrument to use if you are looking into human behaviors. Different ways of researching this include studying the spontaneous behavior of participants in their everyday life, or something more structured. A structured observation is research conducted at a set time and place where researchers observe behavior as planned and agreed upon with participants.

These are the most common ways of carrying out research, but it is really dependent on your needs as a researcher and what approach you think is best to take.

It is also possible to combine a number of research instruments if this is necessary and appropriate in answering your research problem.
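To give a concrete sense of how straightforward summarizing data from one of these instruments can be, the sketch below tallies responses to a closed (Likert-scale) survey question in Python; the response data are invented for illustration:

```python
from collections import Counter

# Hypothetical responses to a single closed survey question
responses = [
    "agree", "strongly agree", "neutral", "agree", "disagree",
    "agree", "strongly agree", "agree", "neutral", "agree",
]

# Tally each answer option and report its share of all responses
tally = Counter(responses)
for option, count in tally.most_common():
    share = 100 * count / len(responses)
    print(f"{option}: {count} ({share:.0f}%)")
```

Closed questions produce data that can be counted and compared directly, which is one reason they pair so naturally with a quantitative methodology.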

There are three different types of methodologies, and they are distinguished by whether they focus on words, numbers, or both.

  • Quantitative. This methodology focuses on measuring and testing numerical data. When using this form of research, your objective will usually be to confirm something, such as testing a set of hypotheses. Typical instruments: surveys, tests, existing databases.

  • Qualitative. Qualitative research is a process of collecting and analyzing non-numerical data, such as words and text. This form of research methodology is often used where the aim of the research is exploratory, for example, when trying to understand human actions in a sociology or psychology study. Typical instruments: observations, interviews, focus groups.

  • Mixed-method. A mixed-method approach combines both of the above. The quantitative approach provides definitive facts and figures, while the qualitative methodology adds an interesting human aspect to the research. Where you can use a mixed method, it can produce some incredibly interesting results, because the data are both precise and exploratory at the same time.

➡️ Want to learn more about the differences between qualitative and quantitative research, and how to use both methods? Check out our guide for that!

If you've done your due diligence, you'll have an idea of which methodology approach is best suited to your research.

It’s likely that you will have carried out considerable reading and homework before you reach this point and you may have taken inspiration from other similar studies that have yielded good results.

Still, it is important to consider different options before setting your research in stone. Exploring different options available will help you to explain why the choice you ultimately make is preferable to other methods.

If your research problem requires you to gather large volumes of numerical data to test hypotheses, a quantitative research method is likely to provide you with the most usable results.

If instead you’re looking to try and learn more about people, and their perception of events, your methodology is more exploratory in nature and would therefore probably be better served using a qualitative research methodology.

It helps to always bring things back to the question: what do I want to achieve with my research?

Once you have conducted your research, you need to analyze it. Here are some helpful guides for qualitative data analysis:

➡️  How to do a content analysis

➡️  How to do a thematic analysis

➡️  How to do a rhetorical analysis
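For a taste of what the simplest form of content analysis looks like in practice, the sketch below counts occurrences of predefined theme keywords in an interview excerpt. Both the transcript and the keyword codebook are invented, and real content analysis requires a far more careful coding scheme than keyword matching:

```python
import re
from collections import Counter

# Invented interview excerpt and a toy coding scheme of theme keywords
transcript = (
    "I felt stressed before the exam, but my friends helped me cope. "
    "Support from family also reduced my stress during revision."
)
codebook = {
    "stress": ["stressed", "stress"],
    "support": ["helped", "support", "family", "friends"],
}

# Count how often each theme's keywords appear (case-insensitive, whole words)
words = re.findall(r"[a-z]+", transcript.lower())
counts = Counter()
for theme, keywords in codebook.items():
    counts[theme] = sum(words.count(k) for k in keywords)
print(counts)
```

Even this crude tally hints at how qualitative material can be coded into categories and then summarized, bridging the qualitative and quantitative approaches described above.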

Research methodology refers to the techniques used to find and analyze information for a study, ensuring that the results are valid and reliable and that they address the research objective.

Data can typically be organized into four different categories or methods: observational, experimental, simulation, and derived.

Writing a methodology section is a process of introducing your methods and instruments, discussing your analysis, providing more background information, addressing your research limitations, and more.

Your research methodology section will need a clear research question and proposed research approach. You'll need to add a background, introduce your research question, write your methodology and add the works you cited during your data collecting phase.

The research methodology section of your study will indicate how valid your findings are and how well-informed your paper is. It also assists future researchers planning to use the same methodology, who want to cite your study or replicate it.


Chapter 11: Presenting Your Research

Writing a Research Report in American Psychological Association (APA) Style

Learning Objectives

  • Identify the major sections of an APA-style research report and the basic contents of each section.
  • Plan and write an effective APA-style research report.

In this section, we look at how to write an APA-style empirical research report, an article that presents the results of one or more new studies. Recall that the standard sections of an empirical research report provide a kind of outline. Here we consider each of these sections in detail, including what information it contains, how that information is formatted and organized, and tips for writing each section. At the end of this section is a sample APA-style research report that illustrates many of these principles.

Sections of a Research Report

Title page and abstract.

An APA-style research report begins with a  title page . The title is centred in the upper half of the page, with each important word capitalized. The title should clearly and concisely (in about 12 words or fewer) communicate the primary variables and research questions. This sometimes requires a main title followed by a subtitle that elaborates on the main title, in which case the main title and subtitle are separated by a colon. Here are some titles from recent issues of professional journals published by the American Psychological Association.

  • Sex Differences in Coping Styles and Implications for Depressed Mood
  • Effects of Aging and Divided Attention on Memory for Items and Their Contexts
  • Computer-Assisted Cognitive Behavioural Therapy for Child Anxiety: Results of a Randomized Clinical Trial
  • Virtual Driving and Risk Taking: Do Racing Games Increase Risk-Taking Cognitions, Affect, and Behaviour?

Below the title are the authors’ names and, on the next line, their institutional affiliation—the university or other institution where the authors worked when they conducted the research. As we have already seen, the authors are listed in an order that reflects their contribution to the research. When multiple authors have made equal contributions to the research, they often list their names alphabetically or in a randomly determined order.

In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as “cute.” They usually take the form of a play on words or a well-known expression that relates to the topic under study. Here are some examples from recent issues of the journal Psychological Science.

  • “Smells Like Clean Spirit: Nonconscious Effects of Scent on Cognition and Behavior”
  • “Time Crawls: The Temporal Resolution of Infants’ Visual Attention”
  • “Scent of a Woman: Men’s Testosterone Responses to Olfactory Ovulation Cues”
  • “Apocalypse Soon?: Dire Messages Reduce Belief in Global Warming by Contradicting Just-World Beliefs”
  • “Serial vs. Parallel Processing: Sometimes They Look Like Tweedledum and Tweedledee but They Can (and Should) Be Distinguished”
  • “How Do I Love Thee? Let Me Count the Words: The Social Effects of Expressive Writing”

Individual researchers differ quite a bit in their preference for such titles. Some use them regularly, while others never use them. What might be some of the pros and cons of using cute article titles?

For articles that are being submitted for publication, the title page also includes an author note that lists the authors’ full institutional affiliations, any acknowledgments the authors wish to make to agencies that funded the research or to colleagues who commented on it, and contact information for the authors. For student papers that are not being submitted for publication—including theses—author notes are generally not necessary.

The  abstract  is a summary of the study. It is the second page of the manuscript and is headed with the word  Abstract . The first line is not indented. The abstract presents the research question, a summary of the method, the basic results, and the most important conclusions. Because the abstract is usually limited to about 200 words, it can be a challenge to write a good one.

Introduction

The  introduction  begins on the third page of the manuscript. The heading at the top of this page is the full title of the manuscript, with each important word capitalized as on the title page. The introduction includes three distinct subsections, although these are typically not identified by separate headings. The opening introduces the research question and explains why it is interesting, the literature review discusses relevant previous research, and the closing restates the research question and comments on the method used to answer it.

The Opening

The  opening , which is usually a paragraph or two in length, introduces the research question and explains why it is interesting. To capture the reader’s attention, researcher Daryl Bem recommends starting with general observations about the topic under study, expressed in ordinary language (not technical jargon)—observations that are about people and their behaviour (not about researchers or their research; Bem, 2003 [1] ). Concrete examples are often very useful here. According to Bem, this would be a poor way to begin a research report:

Festinger’s theory of cognitive dissonance received a great deal of attention during the latter part of the 20th century (p. 191)

The following would be much better:

The individual who holds two beliefs that are inconsistent with one another may feel uncomfortable. For example, the person who knows that he or she enjoys smoking but believes it to be unhealthy may experience discomfort arising from the inconsistency or disharmony between these two thoughts or cognitions. This feeling of discomfort was called cognitive dissonance by social psychologist Leon Festinger (1957), who suggested that individuals will be motivated to remove this dissonance in whatever way they can (p. 191).

After capturing the reader’s attention, the opening should go on to introduce the research question and explain why it is interesting. Will the answer fill a gap in the literature? Will it provide a test of an important theory? Does it have practical implications? Giving readers a clear sense of what the research is about and why they should care about it will motivate them to continue reading the literature review—and will help them make sense of it.

Breaking the Rules

Researcher Larry Jacoby reported several studies showing that a word that people see or hear repeatedly can seem more familiar even when they do not recall the repetitions—and that this tendency is especially pronounced among older adults. He opened his article with the following humorous anecdote:

A friend whose mother is suffering symptoms of Alzheimer’s disease (AD) tells the story of taking her mother to visit a nursing home, preliminary to her mother’s moving there. During an orientation meeting at the nursing home, the rules and regulations were explained, one of which regarded the dining room. The dining room was described as similar to a fine restaurant except that tipping was not required. The absence of tipping was a central theme in the orientation lecture, mentioned frequently to emphasize the quality of care along with the advantages of having paid in advance. At the end of the meeting, the friend’s mother was asked whether she had any questions. She replied that she only had one question: “Should I tip?” (Jacoby, 1999, p. 3)

Although both humour and personal anecdotes are generally discouraged in APA-style writing, this example is a highly effective way to start because it both engages the reader and provides an excellent real-world example of the topic under study.

The Literature Review

Immediately after the opening comes the  literature review , which describes relevant previous research on the topic and can be anywhere from several paragraphs to several pages in length. However, the literature review is not simply a list of past studies. Instead, it constitutes a kind of argument for why the research question is worth addressing. By the end of the literature review, readers should be convinced that the research question makes sense and that the present study is a logical next step in the ongoing research process.

Like any effective argument, the literature review must have some kind of structure. For example, it might begin by describing a phenomenon in a general way along with several studies that demonstrate it, then describing two or more competing theories of the phenomenon, and finally presenting a hypothesis to test one or more of the theories. Or it might describe one phenomenon, then describe another phenomenon that seems inconsistent with the first one, then propose a theory that resolves the inconsistency, and finally present a hypothesis to test that theory. In applied research, it might describe a phenomenon or theory, then describe how that phenomenon or theory applies to some important real-world situation, and finally suggest a way to test whether it does, in fact, apply to that situation.

Looking at the literature review in this way emphasizes a few things. First, it is extremely important to start with an outline of the main points that you want to make, organized in the order that you want to make them. The basic structure of your argument, then, should be apparent from the outline itself. Second, it is important to emphasize the structure of your argument in your writing. One way to do this is to begin the literature review by summarizing your argument even before you begin to make it. “In this article, I will describe two apparently contradictory phenomena, present a new theory that has the potential to resolve the apparent contradiction, and finally present a novel hypothesis to test the theory.” Another way is to open each paragraph with a sentence that summarizes the main point of the paragraph and links it to the preceding points. These opening sentences provide the “transitions” that many beginning researchers have difficulty with. Instead of beginning a paragraph by launching into a description of a previous study, such as “Williams (2004) found that…,” it is better to start by indicating something about why you are describing this particular study. Here are some simple examples:

Another example of this phenomenon comes from the work of Williams (2004).

Williams (2004) offers one explanation of this phenomenon.

An alternative perspective has been provided by Williams (2004).

We used a method based on the one used by Williams (2004).

Finally, remember that your goal is to construct an argument for why your research question is interesting and worth addressing—not necessarily why your favourite answer to it is correct. In other words, your literature review must be balanced. If you want to emphasize the generality of a phenomenon, then of course you should discuss various studies that have demonstrated it. However, if there are other studies that have failed to demonstrate it, you should discuss them too. Or if you are proposing a new theory, then of course you should discuss findings that are consistent with that theory. However, if there are other findings that are inconsistent with it, again, you should discuss them too. It is acceptable to argue that the  balance  of the research supports the existence of a phenomenon or is consistent with a theory (and that is usually the best that researchers in psychology can hope for), but it is not acceptable to  ignore contradictory evidence. Besides, a large part of what makes a research question interesting is uncertainty about its answer.

The Closing

The  closing  of the introduction—typically the final paragraph or two—usually includes two important elements. The first is a clear statement of the main research question or hypothesis. This statement tends to be more formal and precise than in the opening and is often expressed in terms of operational definitions of the key variables. The second is a brief overview of the method and some comment on its appropriateness. Here, for example, is how Darley and Latané (1968) [2] concluded the introduction to their classic article on the bystander effect:

These considerations lead to the hypothesis that the more bystanders to an emergency, the less likely, or the more slowly, any one bystander will intervene to provide aid. To test this proposition it would be necessary to create a situation in which a realistic “emergency” could plausibly occur. Each subject should also be blocked from communicating with others to prevent his getting information about their behaviour during the emergency. Finally, the experimental situation should allow for the assessment of the speed and frequency of the subjects’ reaction to the emergency. The experiment reported below attempted to fulfill these conditions. (p. 378)

Thus the introduction leads smoothly into the next major section of the article—the method section.

The  method section  is where you describe how you conducted your study. An important principle for writing a method section is that it should be clear and detailed enough that other researchers could replicate the study by following your “recipe.” This means that it must describe all the important elements of the study—basic demographic characteristics of the participants, how they were recruited, whether they were randomly assigned, how the variables were manipulated or measured, how counterbalancing was accomplished, and so on. At the same time, it should avoid irrelevant details such as the fact that the study was conducted in Classroom 37B of the Industrial Technology Building or that the questionnaire was double-sided and completed using pencils.

The method section begins immediately after the introduction ends with the heading “Method” (not “Methods”) centred on the page. Immediately after this is the subheading “Participants,” left justified and in italics. The participants subsection indicates how many participants there were, the number of women and men, some indication of their age, other demographics that may be relevant to the study, and how they were recruited, including any incentives given for participation.

Figure 11.1. Three ways of organizing an APA-style method section. Long description available.

After the participants section, the structure can vary a bit. Figure 11.1 shows three common approaches. In the first, the participants section is followed by a design and procedure subsection, which describes the rest of the method. This works well for methods that are relatively simple and can be described adequately in a few paragraphs. In the second approach, the participants section is followed by separate design and procedure subsections. This works well when both the design and the procedure are relatively complicated and each requires multiple paragraphs.

What is the difference between design and procedure? The design of a study is its overall structure. What were the independent and dependent variables? Was the independent variable manipulated, and if so, was it manipulated between or within subjects? How were the variables operationally defined? The procedure is how the study was carried out. It often works well to describe the procedure in terms of what the participants did rather than what the researchers did. For example, the participants gave their informed consent, read a set of instructions, completed a block of four practice trials, completed a block of 20 test trials, completed two questionnaires, and were debriefed and excused.

In the third basic way to organize a method section, the participants subsection is followed by a materials subsection before the design and procedure subsections. This works well when there are complicated materials to describe. This might mean multiple questionnaires, written vignettes that participants read and respond to, perceptual stimuli, and so on. The heading of this subsection can be modified to reflect its content. Instead of “Materials,” it can be “Questionnaires,” “Stimuli,” and so on.

The results section is where you present the main results of the study, including the results of the statistical analyses. Although it does not include the raw data—individual participants’ responses or scores—researchers should save their raw data and make them available to other researchers who request them. Several journals now encourage the open sharing of raw data online.

Although there are no standard subsections, it is still important for the results section to be logically organized. Typically it begins with certain preliminary issues. One is whether any participants or responses were excluded from the analyses and why. The rationale for excluding data should be described clearly so that other researchers can decide whether it is appropriate. A second preliminary issue is how multiple responses were combined to produce the primary variables in the analyses. For example, if participants rated the attractiveness of 20 stimulus people, you might have to explain that you began by computing the mean attractiveness rating for each participant. Or if they recalled as many items as they could from a study list of 20 words, did you count the number correctly recalled, compute the percentage correctly recalled, or perhaps compute the number correct minus the number incorrect? A third preliminary issue is the reliability of the measures. This is where you would present test-retest correlations, Cronbach’s α, or other statistics to show that the measures are consistent across time and across items. A final preliminary issue is whether the manipulation was successful. This is where you would report the results of any manipulation checks.
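Scoring decisions like these are easy to automate. The following Python sketch uses invented data (the item scores, participant counts, and function names are all hypothetical) to show two of the computations mentioned above: combining item responses into one mean score per participant, and estimating internal consistency with Cronbach’s α.

```python
import statistics

def cronbach_alpha(item_columns):
    """Cronbach's alpha for a list of item-score columns.

    item_columns: k lists, each holding one item's scores
    across the same n participants.
    """
    k = len(item_columns)
    sum_item_vars = sum(statistics.pvariance(col) for col in item_columns)
    totals = [sum(scores) for scores in zip(*item_columns)]  # per participant
    return k / (k - 1) * (1 - sum_item_vars / statistics.pvariance(totals))

# Hypothetical data: 4 questionnaire items rated by 5 participants.
item_scores = [
    [4, 5, 3, 5, 4],
    [4, 4, 3, 5, 4],
    [5, 5, 2, 5, 3],
    [4, 5, 3, 4, 4],
]

# Combine multiple responses into one primary variable per participant.
participant_means = [statistics.mean(p) for p in zip(*item_scores)]

print(participant_means)                      # [4.25, 4.75, 2.75, 4.75, 3.75]
print(round(cronbach_alpha(item_scores), 2))  # 0.89
```

Population variances (`pvariance`) are used throughout; the formula yields the same α as long as the same variance estimator is applied to both the items and the totals.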

The results section should then tackle the primary research questions, one at a time. Again, there should be a clear organization. One approach would be to answer the most general questions and then proceed to answer more specific ones. Another would be to answer the main question first and then to answer secondary ones. Regardless, Bem (2003) [3] suggests the following basic structure for discussing each new result:

  • Remind the reader of the research question.
  • Give the answer to the research question in words.
  • Present the relevant statistics.
  • Qualify the answer if necessary.
  • Summarize the result.

Notice that only Step 3 necessarily involves numbers. The rest of the steps involve presenting the research question and the answer to it in words. In fact, the basic results should be clear even to a reader who skips over the numbers.

The discussion is the last major section of the research report. Discussions usually consist of some combination of the following elements:

  • Summary of the research
  • Theoretical implications
  • Practical implications
  • Limitations
  • Suggestions for future research

The discussion typically begins with a summary of the study that provides a clear answer to the research question. In a short report with a single study, this might require no more than a sentence. In a longer report with multiple studies, it might require a paragraph or even two. The summary is often followed by a discussion of the theoretical implications of the research. Do the results provide support for any existing theories? If not, how can they be explained? Although you do not have to provide a definitive explanation or detailed theory for your results, you at least need to outline one or more possible explanations. In applied research—and often in basic research—there is also some discussion of the practical implications of the research. How can the results be used, and by whom, to accomplish some real-world goal?

The theoretical and practical implications are often followed by a discussion of the study’s limitations. Perhaps there are problems with its internal or external validity. Perhaps the manipulation was not very effective or the measures not very reliable. Perhaps there is some evidence that participants did not fully understand their task or that they were suspicious of the intent of the researchers. Now is the time to discuss these issues and how they might have affected the results. But do not overdo it. All studies have limitations, and most readers will understand that a different sample or different measures might have produced different results. Unless there is good reason to think they would have, however, there is no reason to mention these routine issues. Instead, pick two or three limitations that seem like they could have influenced the results, explain how they could have influenced the results, and suggest ways to deal with them.

Most discussions end with some suggestions for future research. If the study did not satisfactorily answer the original research question, what will it take to do so? What new research questions has the study raised? This part of the discussion, however, is not just a list of new questions. It is a discussion of two or three of the most important unresolved issues. This means identifying and clarifying each question, suggesting some alternative answers, and even suggesting ways they could be studied.

Finally, some researchers are quite good at ending their articles with a sweeping or thought-provoking conclusion. Darley and Latané (1968) [4], for example, ended their article on the bystander effect by discussing the idea that whether people help others may depend more on the situation than on their personalities. Their final sentence is, “If people understand the situational forces that can make them hesitate to intervene, they may better overcome them” (p. 383). However, this kind of ending can be difficult to pull off. It can sound overreaching or just banal and end up detracting from the overall impact of the article. It is often better simply to end when you have made your final point (although you should avoid ending on a limitation).

The references section begins on a new page with the heading “References” centred at the top of the page. All references cited in the text are then listed in the format presented earlier. They are listed alphabetically by the last name of the first author. If two sources have the same first author, they are listed alphabetically by the last name of the second author. If all the authors are the same, then they are listed chronologically by the year of publication. Everything in the reference list is double-spaced both within and between references.

Appendices, Tables, and Figures

Appendices, tables, and figures come after the references. An appendix is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the studies that are included in a meta-analysis. Each appendix begins on a new page. If there is only one, the heading is “Appendix,” centred at the top of the page. If there is more than one, the headings are “Appendix A,” “Appendix B,” and so on, and they appear in the order they were first mentioned in the text of the report.

After any appendices come tables and then figures. Tables and figures are both used to present results. Figures can also be used to illustrate theories (e.g., in the form of a flowchart), display stimuli, outline procedures, and present many other kinds of information. Each table and figure appears on its own page. Tables are numbered in the order that they are first mentioned in the text (“Table 1,” “Table 2,” and so on). Figures are numbered the same way (“Figure 1,” “Figure 2,” and so on). A brief explanatory title, with the important words capitalized, appears above each table. Each figure is given a brief explanatory caption, where (aside from proper nouns or names) only the first word of each sentence is capitalized. More details on preparing APA-style tables and figures are presented later in the book.

Sample APA-Style Research Report

Figures 11.2, 11.3, 11.4, and 11.5 show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many high-level and low-level style conventions can be seen here too.

""

Key Takeaways

  • An APA-style empirical research report consists of several standard sections. The main ones are the abstract, introduction, method, results, discussion, and references.
  • The introduction consists of an opening that presents the research question, a literature review that describes previous research on the topic, and a closing that restates the research question and comments on the method. The literature review constitutes an argument for why the current study is worth doing.
  • The method section describes the method in enough detail that another researcher could replicate the study. At a minimum, it consists of a participants subsection and a design and procedure subsection.
  • The results section describes the results in an organized fashion. Each primary result is presented in terms of statistical results but also explained in words.
  • The discussion typically summarizes the study, discusses theoretical and practical implications and limitations of the study, and offers suggestions for further research.
  • Practice: Look through an issue of a general interest professional journal (e.g., Psychological Science). Read the opening of the first five articles and rate the effectiveness of each one from 1 (very ineffective) to 5 (very effective). Write a sentence or two explaining each rating.
  • Practice: Find a recent article in a professional journal and identify where the opening, literature review, and closing of the introduction begin and end.
  • Practice: Find a recent article in a professional journal and highlight in a different colour each of the following elements in the discussion: summary, theoretical implications, practical implications, limitations, and suggestions for future research.

Long Descriptions

Figure 11.1 long description: Table showing three ways of organizing an APA-style method section.

In the simple method, there are two subheadings: “Participants” (which might begin “The participants were…”) and “Design and procedure” (which might begin “There were three conditions…”).

In the typical method, there are three subheadings: “Participants” (“The participants were…”), “Design” (“There were three conditions…”), and “Procedure” (“Participants viewed each stimulus on the computer screen…”).

In the complex method, there are four subheadings: “Participants” (“The participants were…”), “Materials” (“The stimuli were…”), “Design” (“There were three conditions…”), and “Procedure” (“Participants viewed each stimulus on the computer screen…”). [Return to Figure 11.1]

  • Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.), The compleat academic: A practical guide for the beginning social scientist (2nd ed.). Washington, DC: American Psychological Association.
  • Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 8(4), 377–383.

A type of research article which describes one or more new empirical studies conducted by the authors.

The page at the beginning of an APA-style research report containing the title of the article, the authors’ names, and their institutional affiliation.

A summary of a research study.

The third page of a manuscript containing the research question, the literature review, and comments about how to answer the research question.

An introduction to the research question and explanation for why this question is interesting.

A description of relevant previous research on the topic being discussed and an argument for why the research is worth addressing.

The end of the introduction, where the research question is reiterated and the method is commented upon.

The section of a research report where the method used to conduct the study is described.

The section of a research report where the main results of the study, including the results of the statistical analyses, are presented.

Section of a research report that summarizes the study's results and interprets them by referring back to the study's theoretical background.

Part of a research report which contains supplemental material.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


  • Open access
  • Published: 07 September 2020

A tutorial on methodological studies: the what, when, how and why

  • Lawrence Mbuagbaw   ORCID: orcid.org/0000-0001-5855-5461 1 , 2 , 3 ,
  • Daeria O. Lawson 1 ,
  • Livia Puljak 4 ,
  • David B. Allison 5 &
  • Lehana Thabane 1 , 2 , 6 , 7 , 8  

BMC Medical Research Methodology volume  20 , Article number:  226 ( 2020 ) Cite this article


Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.


The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig. 1.

Figure 1. Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise the quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p-values in baseline tables in randomized trials published in high-impact journals [ 26 ]; Chen et al. described adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. described the effect of editors’ implementation of CONSORT guidelines on the reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines, including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
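As a rough illustration of the difference between these two sampling strategies, here is a short Python sketch. The sampling frame, group labels, and PMIDs are entirely made up; the point is only that a stratified draw guarantees equal group sizes where a simple random draw does not.

```python
import random

random.seed(1)  # fix the seed so the sample is reproducible

# Hypothetical sampling frame: 300 eligible reports, 40 of them Cochrane reviews.
frame = [{"pmid": 10000 + i,
          "group": "Cochrane" if i < 40 else "non-Cochrane"}
         for i in range(300)]

# Simple random sample of 60: the small Cochrane group may be under-represented.
simple = random.sample(frame, 60)

def stratified_sample(records, key, n_per_group):
    """Draw n_per_group records at random from each level of `key`."""
    groups = {}
    for record in records:
        groups.setdefault(record[key], []).append(record)
    return [r for members in groups.values()
            for r in random.sample(members, n_per_group)]

# Stratified sample: exactly 30 Cochrane and 30 non-Cochrane reports.
stratified = stratified_sample(frame, "group", 30)
```

Fixing the random seed and reporting it, as above, keeps the sampling step transparent and reproducible for readers of the methodological study.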

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and help to avoid duplication of effort [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (as of 21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include the delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in scholarly journals could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

Comparing two groups

Determining a proportion, mean or another quantifier

Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
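The confidence-interval approach sizes the sample so that a proportion (e.g. the prevalence of a reporting item) is estimated with a chosen precision. A minimal sketch of that logic using the normal approximation; the 50% prevalence and 5-percentage-point margin below are illustrative values, not figures from the cited study:

```python
import math

def sample_size_for_proportion(p, margin, z=1.96):
    """Number of articles needed to estimate a proportion assumed to be
    near p, to within +/- margin, at ~95% confidence (z = 1.96)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Estimating a reporting-item prevalence assumed near 50%,
# to within 5 percentage points:
print(sample_size_for_proportion(0.5, 0.05))  # 385
```

Assuming p = 0.5 maximizes p(1 - p) and therefore gives a conservative (largest) sample size when the true proportion is unknown.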

Q: What should I call my study?

A: Other terms that have been used to describe or label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative, and allow for appropriate indexing. Nomenclature that should be avoided includes “systematic review”, as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. used a preplanned methodology) or used “systematic” sampling (i.e. a sampling approach that selects units at specific intervals) [ 32 ]. Either meaning of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing but not very informative, as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ] and is indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most such scenarios.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles from a specific source (e.g. the Cochrane Library) may share reporting standards. Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section “What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including marginal, fixed-effects or mixed-effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimating equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p-values, unduly narrow confidence intervals, and biased estimates [ 45 ].
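The cost of ignoring clustering can be illustrated with the design effect, DEFF = 1 + (m - 1) * ICC, which inflates the variance and shrinks the effective sample size. A sketch with invented numbers; the cluster size and intraclass correlation below are assumptions for illustration only:

```python
def design_effect(cluster_size, icc):
    """Variance inflation factor from clustering: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n, cluster_size, icc):
    """Number of articles contributing independent information."""
    return n / design_effect(cluster_size, icc)

# 500 articles drawn from journals contributing ~10 articles each,
# with an assumed within-journal correlation (ICC) of 0.05:
print(round(effective_sample_size(500, 10, 0.05), 1))  # 344.8
```

Analyzing the 500 articles as if independent would overstate the information actually available, which is one way unduly narrow confidence intervals arise.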

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ] and should therefore be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid advances in machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. However, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].
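Consistency between duplicate extractors is often summarized with a chance-corrected agreement statistic such as Cohen's kappa. A minimal sketch for binary extraction items; the two rating vectors are invented for illustration:

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two extractors on binary items
    (1 = item recorded as present, 0 = absent)."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # raw agreement
    p_a, p_b = sum(rater_a) / n, sum(rater_b) / n
    p_exp = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
    return (p_obs - p_exp) / (1 - p_exp)

extractor_1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
extractor_2 = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
print(round(cohens_kappa(extractor_1, extractor_2), 2))  # 0.6
```

Items on which the extractors disagree can then be resolved by discussion or by a third reviewer.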

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but one whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.

Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].

Source of funding and conflicts of interest: Some studies have found that funded studies are reported better [ 56 , 57 ], while others have not [ 53 , 58 ]. The presence of funding may indicate the availability of resources to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrants assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry-funded studies were reported better [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]

Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].

Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].

Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].

Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].

Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards are higher there. However, selecting journals by JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, methodological standards vary.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though most methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research, including the Cumulative Index to Nursing & Allied Health Literature (CINAHL), have subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. In the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection bias and confounding. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing only high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journal endorsement of reporting guidelines. Confounding bias can be addressed by restriction, matching, or statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators, who include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p-values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used: in a methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Other methodological studies use statistical adjustment; for example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
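One simple form of statistical adjustment for a single categorical confounder is stratification with a Mantel-Haenszel pooled odds ratio. The sketch below uses invented 2x2 tables, e.g. funding vs. complete reporting, stratified by whether the journal endorses a reporting guideline; none of the counts come from the studies cited above:

```python
def mantel_haenszel_or(strata):
    """Pooled odds ratio across strata; each stratum is a 2x2 table
    (a, b, c, d) = (exposed with outcome, exposed without outcome,
                    unexposed with outcome, unexposed without outcome)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Invented counts: stratum 1 = endorsing journals, stratum 2 = non-endorsing
strata = [(20, 10, 15, 25), (8, 12, 10, 30)]
print(round(mantel_haenszel_or(strata), 2))  # 2.69
```

Regression-based adjustment, as in the Zhang et al. example, generalizes this idea to several confounders at once.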

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. Investigators must ensure that their sample truly represents the target population, either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate, justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine (n = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM (n = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM (n = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered high impact. Riado Minguez et al. used first-quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will depend on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance on what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

What is the aim?

Methodological studies that investigate bias

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies that investigate quality (or completeness) of reporting

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Methodological studies that investigate the consistency of reporting

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

Methodological studies that investigate factors associated with reporting

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies that investigate methods

Methodological studies may also be used to describe or compare methods, and the factors associated with their use. Muller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Methodological studies that summarize other methodological studies

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Methodological studies that investigate nomenclature and terminology

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

Other types of methodological studies

In addition to the types of methodological studies mentioned above, there may exist other types not captured here.

What is the design?

Methodological studies that are descriptive

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].

Methodological studies that are analytical

Some methodological studies are analytical: “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease” [ 89 ]. In the case of methodological studies, all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
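A null hypothesis of equal proportions, as in the Tricco et al. example, can be tested with a pooled two-proportion z-test. A minimal sketch; the counts below are hypothetical, not the published data:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 60/100 non-Cochrane vs 45/100 Cochrane reviews "positive"
z = two_proportion_z(60, 100, 45, 100)
print(round(z, 2))  # 2.12
```

Since |z| > 1.96, this illustrative difference would be significant at the 5% level; with real data one would also report a confidence interval for the difference in proportions.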

What is the sampling strategy?

Methodological studies that include the target population

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies (n = 103) [ 30 ].

Methodological studies that include a sample of the target population

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, in journals with a certain ranking, or on a given topic. Systematic sampling can also be used when random sampling is challenging to implement.
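The two sampling approaches can be sketched as follows; the frame of 100 article identifiers and the fixed seed are illustrative choices, with the seed making the random sample reproducible:

```python
import random

def simple_random_sample(frame, k, seed=2020):
    """Simple random sample of k reports from the sampling frame."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    return sorted(rng.sample(frame, k))

def systematic_sample(frame, k):
    """Every (len(frame) // k)-th report, starting from the first."""
    step = len(frame) // k
    return frame[::step][:k]

article_ids = list(range(1, 101))  # 100 eligible reports
print(systematic_sample(article_ids, 10))  # [1, 11, 21, 31, 41, 51, 61, 71, 81, 91]
```

In practice, the starting point for systematic sampling is often chosen at random within the first interval to avoid periodicity bias.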

What is the unit of analysis?

Methodological studies with a research report as the unit of analysis

Many methodological studies use a research report (e.g. the full manuscript or the abstract of a study) as the unit of analysis, and inferences can be made at the study level. Both published and unpublished research-related reports can be studied, including articles, conference abstracts, registry entries, etc.

Methodological studies with a design, analysis or reporting item as the unit of analysis

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig.  2 .

Fig. 2 A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Availability of data and materials

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

PICOT: Participants, Intervention, Comparison, Outcome, Timeframe

PRISMA: Preferred Reporting Items of Systematic reviews and Meta-Analyses

SWAR: Studies Within a Review

SWAT: Studies Within a Trial

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.

Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.

Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008.

Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.

Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.

Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.

Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.

Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.

Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–1195.

Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.

Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.

Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.

Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.

Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.

The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.

Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.

Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.

Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.


Acknowledgements

This work did not receive any dedicated funding.

Author information

Authors and Affiliations

Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada

Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane

Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada

Lawrence Mbuagbaw & Lehana Thabane

Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Lawrence Mbuagbaw

Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia

Livia Puljak

Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA

David B. Allison

Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada

Lehana Thabane

Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada

Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada


Contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Lawrence Mbuagbaw.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20, 226 (2020). https://doi.org/10.1186/s12874-020-01107-7


Received: 27 May 2020

Accepted: 27 August 2020

Published: 07 September 2020

DOI: https://doi.org/10.1186/s12874-020-01107-7


Keywords

  • Methodological study
  • Meta-epidemiology
  • Research methods
  • Research-on-research

BMC Medical Research Methodology

ISSN: 1471-2288


Research Reports: Definition and How to Write Them


Research reports span a vast range of topics, but each focuses on communicating information about a particular topic to a niche target market. The primary motive of research reports is to convey the integral details of a study so that marketers can consider them while designing new strategies.

Certain events, facts, and other information based on incidents need to be relayed to the people in charge, and research reports are the most effective communication tool for doing so. Ideal research reports are extremely accurate in the information they offer, with a clear objective and conclusion, and they follow a clean, structured format to relay information effectively.

What are Research Reports?

Research reports are recorded data prepared by researchers or statisticians after analyzing information gathered through organized research, typically surveys or qualitative methods .

A research report is a reliable source for recounting the details of a conducted study. It is most often considered a true testimony of all the work done to gather the specifics of the research.

The various sections of a research report are:

  • Background/Introduction
  • Implemented Methods
  • Results based on Analysis
  • Discussion


Components of Research Reports

Research is imperative when launching a new product, service, or feature. Markets today are extremely volatile and competitive due to new entrants every day, who may or may not offer effective products. To stay relevant in such a market, an organization needs to make the right decisions at the right time, with updated products that satisfy customer demands.

The details of a research report may change with the purpose of the research, but its main components remain constant. The market researcher's approach also influences the style in which reports are written. Here are the main components of a productive research report:

  • Research Report Summary: The entire objective, along with an overview of the research, should be included in a summary that is a couple of paragraphs in length. All the components of the research are explained briefly under the report summary. It should be interesting enough to capture all the key elements of the report.
  • Research Introduction: There is always a primary goal that the researcher is trying to achieve through a report. In the introduction section, the researcher can cover answers related to this goal and establish a thesis that the report will strive to answer in detail. This section should answer an integral question: “What is the current situation of the goal?” After the research was conducted, did the organization conclude the goal successfully, or is it still a work in progress? Provide such details in the introduction of the research report.
  • Research Methodology: This is the most important section of the report, where all the essential information lies. Readers can gather data on the topic and assess the quality of the provided content, and other market researchers can verify the approach. This section therefore needs to be highly informative, with each aspect of the research discussed in detail. Information should be presented in chronological order according to its priority and importance, and researchers should include references wherever they draw on existing techniques.
  • Research Results: A short description of the results, along with the calculations conducted to achieve the goal, forms this section. The detailed exposition of the data analysis is usually carried out in the discussion part of the report.


  • Research Discussion: The results are discussed in detail in this section, along with a comparative analysis of any reports that exist in the same domain. Any abnormality uncovered during the research is deliberated here as well. While writing the report, the researcher must connect the dots on how the results will apply in the real world.
  • Research References and Conclusion: Conclude all the research findings, and cite every author, article, or other content piece from which references were taken.
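As a rough sketch, the component order above can be turned into a reusable report outline. Everything here beyond the section names is invented for illustration: the `report_skeleton` helper, the placeholder text, and the sample title.

```python
# Minimal sketch: generate a Markdown skeleton for a research report
# using the components described above. The helper name and sample
# title are illustrative, not part of any standard.
SECTIONS = [
    "Research Report Summary",
    "Research Introduction",
    "Research Methodology",
    "Research Results",
    "Research Discussion",
    "Research References and Conclusion",
]

def report_skeleton(title: str) -> str:
    """Return a Markdown outline with one heading per report component."""
    lines = [f"# {title}", ""]
    for section in SECTIONS:
        lines += [f"## {section}", "", "_Draft this section._", ""]
    return "\n".join(lines)

print(report_skeleton("Customer Churn Survey, Q3"))
```

Running the script prints an outline that can be pasted into a draft; renaming or reordering the entries in `SECTIONS` adapts it to a company's approved report format.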


15 Tips for Writing Research Reports

Writing research reports the wrong way can lead to all the effort going down the drain. Here are 15 tips for writing impactful research reports:

  • Prepare the context before starting to write, and start from the basics: This was always taught to us in school – be well prepared before taking a plunge into new topics. The order of survey questions might not be the ideal or most effective order for writing research reports. The idea is to start with a broader topic and work towards a more specific one, focusing on a conclusion that the research should support with facts. The most difficult thing in reporting, without a doubt, is to start. Start with the title and the introduction, then document the first discoveries and continue from there. Once the information is well documented, a general conclusion can be written.
  • Keep the target audience in mind while selecting a format that is clear, logical, and obvious to them: Will the research report be presented to decision makers or to other researchers? What are the general perceptions around the topic? This requires care and diligence; a researcher needs a significant amount of information before starting to write. Be consistent with the wording, the numbering of the annexes, and so on. Follow the company's approved format for delivering research reports, and demonstrate how the project aligns with the objectives of the company.
  • Have a clear research objective: A researcher should read the entire proposal again and make sure the data provided contributes to the objectives raised from the beginning. Remember that speculations are for conversations, not for research reports; researchers who speculate directly call their own research into question.
  • Establish a working model: Each study must have an internal logic, which will have to be established in the report and in the evidence. A researcher's worst nightmare is to be required to write a research report and realize that key questions were not included.


  • Gather all the information about the research topic. Who are the competitors of our customers? Talk to other researchers who have studied the subject, and learn the language of the industry. Misuse of terms can discourage readers of a research report from reading further.
  • Read aloud while writing. While reading the report, if the researchers hear something inappropriate, for example, if they stumble over the words when reading them, surely the reader will too. If an idea cannot be put into a single sentence, it is too long and must be changed so that the idea is clear to everyone.
  • Check grammar and spelling. Without a doubt, good practices help readers understand the report. Use verbs in the present tense, which makes the results sound more immediate. Find new words and other ways of saying things, and have fun with the language whenever possible.
  • Discuss only the discoveries that are significant. If some data are not really significant, do not mention them. Remember that not everything is truly important or essential within a research report.


  • Stick to the survey questions. For example, do not say that the people surveyed “were worried” about a research issue when there are different degrees of concern.
  • Make graphs clear enough to explain themselves. Do not let graphs lead the reader to make mistakes: give them a title, and include the labels, the size of the sample, and the correct wording of the question.
  • Be clear with messages. A researcher should write every section of the report with accuracy in both details and language.
  • Be creative with titles. Particularly in segmentation studies, choose names “that give life to research”. Such names can survive long after the initial investigation.
  • Create an effective conclusion: The conclusion is the most difficult part of a research report to write, but it is an incredible opportunity to excel. Make a precise summary; it sometimes helps to start the conclusion with something specific, then describe the most important part of the study, and finally present the implications of the findings.
  • Get a couple more pairs of eyes to read the report. Writers have trouble detecting their own mistakes, yet they are responsible for what is presented. Ensure the report has been reviewed by colleagues or friends before sending out the final draft.
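The graph advice above can be sketched as a quick pre-publication check. The `missing_chart_fields` helper, the required field names, and the example chart are all invented for illustration:

```python
# Minimal sketch: flag missing chart metadata before a graph goes into a
# report. The required fields mirror the tip above (title, question
# wording, sample size, labels); the field names are illustrative.
REQUIRED_FIELDS = ("title", "question", "sample_size", "axis_label")

def missing_chart_fields(chart: dict) -> list:
    """Return the required metadata fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not chart.get(f)]

# Hypothetical chart spec: the axis label was forgotten.
chart = {
    "title": "Concern about data privacy",
    "question": "How concerned are you about data privacy?",
    "sample_size": 412,
}
print(missing_chart_fields(chart))  # -> ['axis_label']
```

A check like this can run over every chart spec in a report before delivery, so no graph reaches the reader without its title, question wording, and sample size.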


Research Report: Definition, Types + [Writing Guide]

busayo.longe

One of the reasons for carrying out research is to add to the existing body of knowledge. Therefore, when conducting research, you need to document your processes and findings in a research report. 

With a research report, it is easy to outline the findings of your systematic investigation and any gaps needing further inquiry. Knowing how to create a detailed research report will prove useful when you need to conduct research.  

What is a Research Report?

A research report is a well-crafted document that outlines the processes, data, and findings of a systematic investigation. It is an important document that serves as a first-hand account of the research process, and it is typically considered an objective and accurate source of information.

In many ways, a research report can be considered as a summary of the research process that clearly highlights findings, recommendations, and other important details. Reading a well-written research report should provide you with all the information you need about the core areas of the research process.

Features of a Research Report 

So how do you recognize a research report when you see one? Here are some of the basic features that define a research report. 

  • It is a detailed presentation of research processes and findings, and it usually includes tables and graphs. 
  • It is written in a formal language.
  • A research report is usually written in the third person.
  • It is informative and based on first-hand verifiable information.
  • It is formally structured with headings, sections, and bullet points.
  • It always includes recommendations for future actions. 

Types of Research Report 

Research reports are classified based on two things: the nature of the research and the target audience.

Nature of Research

  • Qualitative Research Report

This is the type of report written for qualitative research . It outlines the methods, processes, and findings of a qualitative method of systematic investigation. In educational research, a qualitative research report provides an opportunity for one to apply his or her knowledge and develop skills in planning and executing qualitative research projects.

A qualitative research report is usually descriptive in nature. Hence, in addition to presenting details of the research process, you must also create a descriptive narrative of the information.

  • Quantitative Research Report

A quantitative research report is a type of research report that is written for quantitative research. Quantitative research is a type of systematic investigation that pays attention to numerical or statistical values in a bid to find answers to research questions. 

In this type of research report, the researcher presents quantitative data to support the research process and findings. Unlike a qualitative research report that is mainly descriptive, a quantitative research report works with numbers; that is, it is numerical in nature. 

Target Audience

Also, a research report can be said to be technical or popular based on the target audience. If you’re dealing with a general audience, you would need to present a popular research report, and if you’re dealing with a specialized audience, you would submit a technical report. 

  • Technical Research Report

A technical research report is a detailed document that you present after carrying out industry-based research. This report is highly specialized because it provides information for a technical audience; that is, individuals with above-average knowledge in the field of study. 

In a technical research report, the researcher is expected to provide specific information about the research process, including statistical analyses and sampling methods. Also, the use of language is highly specialized and filled with jargon. 

Examples of technical research reports include legal and medical research reports. 

  • Popular Research Report

A popular research report is one for a general audience; that is, for individuals who do not necessarily have any knowledge in the field of study. A popular research report aims to make information accessible to everyone. 

It is written in very simple language, which makes it easy to understand the findings and recommendations. Examples of popular research reports are the information contained in newspapers and magazines. 

Importance of a Research Report 

  • Knowledge Transfer: As already stated above, one of the reasons for carrying out research is to contribute to the existing body of knowledge, and this is made possible with a research report. A research report serves as a means to effectively communicate the findings of a systematic investigation to all and sundry.  
  • Identification of Knowledge Gaps: With a research report, you’d be able to identify knowledge gaps for further inquiry. A research report shows what has been done while hinting at other areas needing systematic investigation. 
  • In market research, a research report would help you understand the market needs and peculiarities at a glance. 
  • A research report allows you to present information in a precise and concise manner. 
  • It is time-efficient and practical because, in a research report, you do not have to spend time detailing the findings of your research work in person. You can easily send out the report via email and have stakeholders look at it. 

Guide to Writing a Research Report

A lot of detail goes into writing a research report, and getting familiar with the different requirements would help you create the ideal research report. A research report is usually broken down into multiple sections, which allows for a concise presentation of information.

Structure and Example of a Research Report

  • Title

This is the title of your systematic investigation. Your title should be concise and point to the aims, objectives, and findings of the research report.

  • Table of Contents

This is like a compass that makes it easier for readers to navigate the research report.

  • Abstract

An abstract is an overview that highlights all important aspects of the research, including the research method, data collection process, and research findings. Think of an abstract as a summary of your research report that presents pertinent information in a concise manner.

An abstract is always brief – typically 100-150 words – and goes straight to the point. The focus of your research abstract should be the 5Ws and 1H format – What, Where, Why, When, Who, and How.

  • Introduction

Here, the researcher highlights the aims and objectives of the systematic investigation as well as the problem which the systematic investigation sets out to solve. When writing the report introduction, it is also essential to indicate whether the purposes of the research were achieved or would require more work.

In the introduction section, the researcher specifies the research problem and also outlines the significance of the systematic investigation. The researcher is also expected to define any jargon and terminology used in the research.

  • Literature Review

A literature review is a written survey of existing knowledge in the field of study. In other words, it is the section where you provide an overview and analysis of different research works that are relevant to your systematic investigation. 

It highlights existing research knowledge and areas needing further investigation, which your research has sought to fill. At this stage, you can also hint at your research hypothesis and its possible implications for the existing body of knowledge in your field of study. 

  • An Account of Investigation

This is a detailed account of the research process, including the methodology, sample, and research subjects. Here, you are expected to provide in-depth information on the research process including the data collection and analysis procedures. 

In a quantitative research report, you’d need to provide information on the surveys, questionnaires, and other quantitative data collection methods used in your research. In a qualitative research report, you are expected to describe the qualitative data collection methods used, including interviews and focus groups.

  • Results

In this section, you are expected to present the results of the systematic investigation.

  • Discussion

This section further explains the findings of the research outlined earlier. Here, you are expected to present a justification for each outcome and show whether the results are in line with your hypotheses or whether other research studies have come up with similar results.

  • Conclusions

This is a summary of all the information in the report. It also outlines the significance of the entire study. 

  • References and Appendices

This section contains a list of all the primary and secondary research sources. 

Tips for Writing a Research Report

  • Define the Context for the Report

As is obtainable when writing an essay, defining the context for your research report would help you create a detailed yet concise document. This is why you need to create an outline before writing so that you do not miss out on anything. 

  • Define your Audience

Writing with your audience in mind is essential as it determines the tone of the report. If you’re writing for a general audience, you would want to present the information in a simple and relatable manner. For a specialized audience, you would need to make use of technical and field-specific terms. 

  • Include Significant Findings

The idea of a research report is to present an abridged version of your systematic investigation. In your report, you should exclude irrelevant information while highlighting only important data and findings.

  • Include Illustrations

Your research report should include illustrations and other visual representations of your data. Graphs, pie charts, and relevant images lend additional credibility to your systematic investigation.

  • Choose the Right Title

A good research report title is brief, precise, and contains keywords from your research. It should provide a clear idea of your systematic investigation so that readers can grasp the entire focus of your research from the title. 

  • Proofread the Report

Before publishing the document, give it a second look to verify the information. If you can, get someone else to go through the report too, and you can also run it through proofreading and editing software.

How to Gather Research Data for Your Report  

  • Understand the Problem

Every research project aims at solving a specific problem or set of problems, and this should be at the back of your mind when writing your research report. Understanding the problem will help you filter the information you have and include only important data in your report.

  • Know what your report seeks to achieve

This is somewhat similar to the point above because, in some way, the aim of your research report is intertwined with the objectives of your systematic investigation. Identifying the primary purpose of writing a research report would help you to identify and present the required information accordingly. 

  • Identify your audience

Knowing your target audience plays a crucial role in data collection for a research report. If your research report is specifically for an organization, you would want to present industry-specific information or show how the research findings are relevant to the work that the company does. 

  • Create Surveys/Questionnaires

A survey is a research method that is used to gather data from a specific group of people through a set of questions. It can be either quantitative or qualitative. 

A survey is usually made up of structured questions, and it can be administered online or offline. However, an online survey is a more effective method of research data collection because it helps you save time and gather data with ease. 

You can seamlessly create an online questionnaire for your research on Formplus. With the multiple sharing options available in the builder, you would be able to administer your survey to respondents in little or no time.

Formplus also has a report summary tool that you can use to create custom visual reports for your research.

Step-by-step guide on how to create an online questionnaire using Formplus  

  • Sign into Formplus

In the Formplus builder, you can easily create different online questionnaires for your research by dragging and dropping preferred fields into your form. To access the Formplus builder, you will need to create an account on Formplus. 

Once you do this, sign in to your account and click on Create new form to begin. 

  • Edit Form Title: Click on the field provided to input your form title, for example, “Research Questionnaire.”
  • Edit Form: Click on the edit icon to edit the form.
  • Add Fields: Drag and drop preferred form fields into your form in the Formplus builder inputs column. There are several field input options for questionnaires in the Formplus builder.
  • Edit fields
  • Click on “Save”
  • Form Customization: With the form customization options in the form builder, you can easily change the outlook of your form and make it more unique and personalized. Formplus allows you to change your form theme, add background images, and even change the font according to your needs. 
  • Multiple Sharing Options: Formplus offers various form-sharing options, which enable you to share your questionnaire with respondents easily. You can use the direct social media sharing buttons to share your form link to your organization’s social media pages. You can also send out your survey form as email invitations to your research subjects. If you wish, you can share your form’s QR code or embed it on your organization’s website for easy access.

Conclusion  

Always remember that a research report is just as important as the actual systematic investigation because it plays a vital role in communicating research findings to everyone else. This is why you must take care to create a concise document summarizing the process of conducting any research. 

In this article, we’ve outlined essential tips to help you create a research report. When writing your report, you should always have the audience at the back of your mind, as this would set the tone for the document. 



BMC Med Res Methodol

A tutorial on methodological studies: the what, when, how and why

Lawrence Mbuagbaw

1 Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON Canada

2 Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario L8N 4A6 Canada

3 Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Daeria O. Lawson

Livia Puljak

4 Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000 Zagreb, Croatia

David B. Allison

5 Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN 47405 USA

Lehana Thabane

6 Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON Canada

7 Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON Canada

8 Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON Canada

Associated Data

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.

The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 – 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 – 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Fig. 1 Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 – 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise the quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p-values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. describe adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. describe the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been the cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines, including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
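As a rough sketch, the simple random and stratified sampling strategies above can be implemented with Python's standard library. The sampling frame, group labels, and sample sizes here are purely hypothetical:

```python
import random

random.seed(42)  # fix the seed so the drawn sample can be reported and reproduced

# Hypothetical sampling frame: article records retrieved by a systematic search
frame = [{"id": i, "group": "Cochrane" if i % 5 == 0 else "non-Cochrane"}
         for i in range(1000)]

# Simple random sample: every record has the same probability of selection
simple_sample = random.sample(frame, k=100)

# Stratified sample: draw equal numbers from each group so the smaller
# group (here, Cochrane reviews) is not underrepresented
strata = {}
for record in frame:
    strata.setdefault(record["group"], []).append(record)
stratified_sample = [rec for group in strata.values()
                     for rec in random.sample(group, k=50)]

print(len(simple_sample), len(stratified_sample))  # 100 100
```

With a simple random sample, the Cochrane group (20% of this hypothetical frame) would contribute only about 20 records; the stratified draw guarantees 50 per group.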

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and they help avoid duplication of efforts [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in a scholarly journal could deposit their study protocol in a publicly available repository, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

  • Comparing two groups
  • Determining a proportion, mean or another quantifier
  • Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
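A confidence-interval (precision) based sample size for estimating a proportion can be sketched with the standard normal approximation. This is a generic illustration, not El Dib et al.'s actual calculation; the assumed prevalence and margin of error are hypothetical:

```python
import math

def sample_size_for_proportion(p, margin, z=1.96):
    """Number of articles needed to estimate a proportion assumed to be p
    within +/- margin, at ~95% confidence (normal approximation:
    n = z^2 * p * (1 - p) / margin^2, rounded up)."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# e.g. to estimate the prevalence of a reporting practice assumed to be
# near 50% (the most conservative choice) within 5 percentage points:
n = sample_size_for_proportion(0.5, 0.05)
print(n)  # 385
```

Assuming p = 0.5 maximizes p(1 - p) and therefore gives the most conservative (largest) sample size when the true proportion is unknown.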

Q: What should I call my study?

A: Other terms that have been used to describe or label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review” – as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section: “What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including marginal, fixed or mixed effects regression models with appropriate computation of standard errors [44]. For example, Kosa et al. used generalized estimating equations to account for correlation of articles within journals [15]. Not accounting for clustering could lead to incorrect p-values, unduly narrow confidence intervals, and biased estimates [45].
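To see why ignoring clustering understates uncertainty, here is a minimal sketch. The journal names, reporting rates and cluster sizes are all simulated; a real analysis would use the regression approaches cited above (e.g. generalized estimating equations), but even this toy comparison of a naive versus a cluster-level standard error illustrates the problem:

```python
import math
import random
from statistics import mean, stdev

# Toy data: articles nested within journals; outcome = 1 if an article is
# adequately reported. Journal names and reporting rates are simulated.
random.seed(42)
journals = {f"journal_{j}": random.uniform(0.3, 0.9) for j in range(10)}
articles = [(name, 1 if random.random() < rate else 0)
            for name, rate in journals.items() for _ in range(30)]

outcomes = [y for _, y in articles]
p = mean(outcomes)

# Naive SE treats all 300 articles as independent observations.
se_naive = math.sqrt(p * (1 - p) / len(outcomes))

# Cluster-level SE treats the journal as the independent unit:
# the standard error of the 10 journal-level proportions.
cluster_means = [mean([y for name, y in articles if name == j]) for j in journals]
se_cluster = stdev(cluster_means) / math.sqrt(len(cluster_means))

print(f"overall proportion adequately reported: {p:.3f}")
print(f"naive SE: {se_naive:.4f}  cluster-level SE: {se_cluster:.4f}")
```

When journals differ substantially in their reporting rates, the cluster-level standard error is typically larger than the naive one, which is exactly the "unduly narrow confidence intervals" problem described above.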

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [19]. Data extraction errors in turn affect the effect estimate [46] and should therefore be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid advances with machine learning and natural language processing technologies to support researchers with screening and data extraction [47, 48]. Experience also plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced ones [46, 49].
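Duplicate extraction is only useful if discrepancies are actually detected and resolved. A minimal sketch of such a reconciliation step might look like the following; the article identifiers, field names and values are all hypothetical:

```python
# Two reviewers extract the same fields from each article; disagreements
# are flagged for consensus discussion. All records here are hypothetical.
extractor_a = {
    "smith2019": {"design": "RCT", "n": 120, "blinded": True},
    "jones2020": {"design": "cohort", "n": 450, "blinded": False},
}
extractor_b = {
    "smith2019": {"design": "RCT", "n": 120, "blinded": True},
    "jones2020": {"design": "cohort", "n": 540, "blinded": False},  # typo: 540 vs 450
}

def find_discrepancies(a, b):
    """Return (article, field, value_a, value_b) for every disagreement."""
    issues = []
    for article in sorted(set(a) | set(b)):
        rec_a, rec_b = a.get(article, {}), b.get(article, {})
        for field in sorted(set(rec_a) | set(rec_b)):
            if rec_a.get(field) != rec_b.get(field):
                issues.append((article, field, rec_a.get(field), rec_b.get(field)))
    return issues

for article, field, va, vb in find_discrepancies(extractor_a, extractor_b):
    print(f"{article}.{field}: extractor A={va!r}, extractor B={vb!r}")
```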

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias assessment is most useful for determining the certainty that can be placed in a study's effect measure. In methodological studies, it may not serve this purpose, as effect measures are often not the primary goal. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but its intrinsic value in methodological studies is not obvious. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [50], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [51].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

  • Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.
  • Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].
  • Source of funding and conflicts of interest: Some studies have found that funded studies report better [56, 57], while others have found no such difference [53, 58]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrants assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [59]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry-funded studies were better reported [60]. Kan et al. examined the association between industry funding and "positive trials" (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [61]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [62].
  • Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 – 67 ].
  • Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].
  • Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].
  • Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].
  • Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards will be higher there. However, restricting to high-JIF journals may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all "high impact", methodological standards vary.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though much methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research, including the Cumulative Index to Nursing & Allied Health Literature (CINAHL), have defined subject headings specific to methodological research (e.g. "research methodology"). Alternatively, one could conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [71]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [72]. In the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make the sample differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing only high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p -values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
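As a toy numerical illustration of confounding, and of stratification as one simple form of adjustment, consider the following sketch. The counts are entirely hypothetical: funded articles are concentrated in guideline-endorsing journals, which report better, so a crude funded-versus-unfunded comparison shows a gap that disappears within strata:

```python
# Hypothetical records illustrating confounding: each tuple is
# (funding, journal guideline endorsement, 1 if reporting was complete).
records = (
    [("funded", "endorses", 1)] * 40 + [("funded", "endorses", 0)] * 10 +
    [("funded", "none", 1)] * 5 + [("funded", "none", 0)] * 5 +
    [("unfunded", "endorses", 1)] * 8 + [("unfunded", "endorses", 0)] * 2 +
    [("unfunded", "none", 1)] * 20 + [("unfunded", "none", 0)] * 20
)

def proportion(rows):
    """Proportion of rows with complete reporting."""
    return sum(y for *_, y in rows) / len(rows)

funded = [r for r in records if r[0] == "funded"]
unfunded = [r for r in records if r[0] == "unfunded"]

# Crude comparison ignores the confounder and shows a 19-point gap.
print(f"crude: funded {proportion(funded):.2f} vs unfunded {proportion(unfunded):.2f}")

# Stratifying on endorsement (holding the confounder constant) removes the gap.
for stratum in ("endorses", "none"):
    f = proportion([r for r in funded if r[1] == stratum])
    u = proportion([r for r in unfunded if r[1] == stratum])
    print(f"{stratum}: funded {f:.2f} vs unfunded {u:.2f}")
```

The crude proportions are 0.75 vs 0.56, yet within each endorsement stratum funded and unfunded articles report equally well; the apparent funding effect is entirely due to where funded articles are published.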

With regard to external validity, researchers conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be made explicit. For example, findings from methodological studies of trials published in high impact cardiology journals cannot be assumed to apply to trials in other fields. Investigators must also ensure that their sample truly represents the target population, either by (a) conducting a comprehensive and exhaustive search, or (b) using an appropriate and justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n  = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n  = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n  = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias, which may stem from discrepancies in the included research reports [79], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

  • 1. What is the aim?

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both "quality of conduct" and "quality of reporting", such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [80]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [81]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [82].

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies may also be used to describe or compare methods, and to examine the factors associated with their use. For example, Muller et al. described the methods used for systematic reviews and meta-analyses of observational studies [86].

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

In addition to the previously mentioned types of methodological studies, there may exist others not captured here.

  • 2. What is the design?

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].

Some methodological studies are analytical, wherein "analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease" [89]. In the case of methodological studies, all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome (from trial registry to published manuscript) and study covariates. They found that larger and more recent studies were more likely to have agreement [15]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [90].
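A comparison like the Tricco et al. example, which tests the null hypothesis that two proportions are equal, can be sketched as a two-proportion z-test. The counts below are hypothetical, not the actual study data:

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test of H0: p1 == p2, using the pooled proportion.

    Returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Hypothetical counts: positive conclusions among non-Cochrane vs Cochrane reviews.
z, p = two_proportion_z_test(70, 100, 50, 100)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With 70/100 versus 50/100 positive conclusions, the test rejects equality at conventional thresholds; a real analysis would also need to consider clustering and confounding, as discussed earlier in this tutorial.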

  • 3. What is the sampling strategy?

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, or in journals with a certain ranking or on a topic. Systematic sampling can also be used when random sampling may be challenging to implement.
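The random and systematic sampling strategies described above can be sketched in a few lines; the sampling frame of article identifiers here is hypothetical:

```python
import random

# Hypothetical sampling frame: 1,000 eligible article identifiers.
frame = [f"ARTICLE-{i:04d}" for i in range(1000)]
sample_size = 50
random.seed(1)

# Simple random sample of research reports.
random_sample = random.sample(frame, sample_size)

# Systematic sample: every k-th report starting from a random offset.
k = len(frame) // sample_size      # sampling interval: 1000 // 50 = 20
start = random.randrange(k)        # random start within the first interval
systematic_sample = frame[start::k]

print(len(random_sample), len(systematic_sample))
```

Systematic sampling is attractive when the frame is only available as an ordered list (e.g. search results), but it assumes the ordering is unrelated to the outcome of interest.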

  • 4. What is the unit of analysis?

Many methodological studies use a research report (e.g. the full manuscript or the abstract portion of the study) as the unit of analysis, in which case inferences can be made at the study level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries, etc.

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig.  2 .

Fig. 2 A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Acknowledgements

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials
EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe
GRADE: Grading of Recommendations, Assessment, Development and Evaluations
PICOT: Participants, Intervention, Comparison, Outcome, Timeframe
PRISMA: Preferred Reporting Items for Systematic reviews and Meta-Analyses
SWAR: Studies Within a Review
SWAT: Studies Within a Trial

Authors’ contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Funding

This work did not receive any dedicated funding.

Availability of data and materials

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

RMIT University Library - Learning Lab

Methodology section in a report

Method/Methodology

The method section of a report details how the research was conducted, the research methods used and the reasons for choosing those methods. It should outline:

  • the participants and research methods used, e.g. surveys/questionnaires, interviews
  • references to other relevant studies.

The methodology is a step-by-step explanation of the research process. It should be factual and is mainly written in the past tense.

Sample Methodology

The research used a quantitative methodology based on the approach advocated by Williams (2009). This study was conducted by questionnaire and investigated university teaching staff attitudes to the use of mobile phones in tutorials (see Appendix 1). The questionnaire used Likert scales to assess social attitudes (Jones 2007) to student mobile phone use and provided open-ended responses for additional comments. The survey was voluntary and anonymous. A total of 412 questionnaires were distributed online to randomly selected staff from each of the three colleges within the university. The completed questionnaires were returned by email.

  • 'Describe' marks text describing how the research was done.
  • 'Refer' marks text referring to relevant reading/literature.

[Describe: The research used a quantitative methodology based on the approach advocated by Williams (2009).] [Refer: This study was conducted by questionnaire and investigated university teaching staff attitudes to the use of mobile phones in tutorials (see Appendix 1). The questionnaire used Likert scales to assess social attitudes (Jones 2007) to student mobile phone use and provided open-ended responses for additional comments.] [Describe: The survey was voluntary and anonymous. A total of 412 questionnaires were distributed online to randomly selected staff from each of the three colleges within the university. The completed questionnaires were returned by email.]


Research Methodology: Writing a Research Report

  • September 2021
  • In book: Research Methodology in Social Sciences (A Short Manual) (pp. 177)
  • Publisher: New Delhi: Corvette Press

Harish K Thakur, Himachal Pradesh University


Types of Research Designs Compared | Guide & Examples

Published on June 20, 2019 by Shona McCombes. Revised on June 22, 2023.

When you start planning a research project, developing research questions and creating a  research design , you will have to make various decisions about the type of research you want to do.

There are many ways to categorize different types of research. The words you use to describe your research depend on your discipline and field. In general, though, the form your research design takes will be shaped by:

  • The type of knowledge you aim to produce
  • The type of data you will collect and analyze
  • The sampling methods, timescale and location of the research

This article takes a look at some common distinctions made between different types of research and outlines the key differences between them.

Table of contents

  • Types of research aims
  • Types of research data
  • Types of sampling, timescale, and location
  • Other interesting articles

The first thing to consider is what kind of knowledge your research aims to contribute.

  • Basic vs. applied research: do you want to expand scientific understanding (basic) or solve a practical problem (applied)?
  • Exploratory vs. explanatory research: how much is already known about your research problem? Are you conducting initial research on a newly-identified issue (exploratory), or seeking precise conclusions about an established issue (explanatory)?
  • Inductive vs. deductive research: is there already some theory on your research problem that you can use to develop hypotheses (deductive), or do you want to propose new theories based on your findings (inductive)?


The next thing to consider is what type of data you will collect. Each kind of data is associated with a range of specific research methods and procedures.

  • Primary vs. secondary research: primary data is collected first-hand by the researcher, while secondary data has already been collected by someone else (e.g., in government or scientific publications). How much data is already available on your topic? Do you want to collect original data or analyze existing data?
  • Quantitative vs. qualitative research: is your research more concerned with measuring something (quantitative) or interpreting something (qualitative)? You can also create a research design that has elements of both.
  • Descriptive vs. experimental research: descriptive research gathers data without intervening, while experimental research manipulates variables. Do you want to identify characteristics and patterns, or test causal relationships between variables?

Finally, you have to consider three closely related questions: how will you select the subjects or participants of the research? When and how often will you collect data from your subjects? And where will the research take place?

Keep in mind that the methods that you choose bring with them different risk factors and types of research bias . Biases aren’t completely avoidable, but can heavily impact the validity and reliability of your findings if left unchecked.

  • Probability vs. non-probability sampling: probability sampling allows you to generalize to a wider population, while non-probability sampling allows you to draw conclusions only about the specific group studied. Do you want to produce knowledge that applies to many contexts or detailed knowledge about a specific context (e.g. in a case study)?
  • Cross-sectional vs. longitudinal studies: cross-sectional studies collect data at a single point in time, while longitudinal studies collect data repeatedly over an extended period. Is your research question focused on understanding the current situation or tracking changes over time?
  • Field research vs. laboratory research: field research takes place in real-world settings, while laboratory research takes place in controlled environments. Do you want to find out how something occurs in the real world or draw firm conclusions about cause and effect? Laboratory experiments have higher internal validity but lower external validity.
  • Fixed design vs. flexible design: in a fixed research design the subjects, timescale and location are set before data collection begins, while in a flexible design these aspects may evolve during the research process. Do you want to test hypotheses and establish generalizable facts, or explore concepts and develop understanding? For measuring, testing and making generalizations, a fixed research design is typically preferred.

Choosing between all these different research types is part of the process of creating your research design , which determines exactly how your research will be conducted. But the type of research is only the first step: next, you have to make more concrete decisions about your research methods and the details of the study.

Read more about creating a research design

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Cite this Scribbr article

If you want to cite this source, you can use the citation below.

McCombes, S. (2023, June 22). Types of Research Designs Compared | Guide & Examples. Scribbr. Retrieved September 9, 2024, from https://www.scribbr.com/methodology/types-of-research/


Shona McCombes



Methodological Reports

​​​Articles on methodological issues in survey research specifically dealing with the GSS as well as more general problems.

Sparkman, Rachel; Wells, Brian M.; Norling-Ruggles, Abigail; Schapiro, Benjamin; Bautista, Rene


This report examines two survey item experiments conducted on the 2021 and 2022 General Social Survey (GSS) evaluating the web-based implementation of multiple historical GSS questions given the increased use of web questionnaires in these years. The first experiment examines the use of a grid-format response for two sets of items with shared question stems, response options, and conceptual topics. The second experiment examines displaying response categories for historically volunteered survey responses (e.g., “no opinion,” “stay as is,” “somewhat strong,” etc.). We analyze data from the GSS 2021 and 2022 to explore differences in these web-based wording experiments, investigate comparisons to the 2018 GSS for select grid and volunteered response experiment variables, and conduct logistic regression analyses to determine if select demographic characteristics are more likely to predict a volunteered response.

The high-level findings are as follows:  

  • There are few significant differences in estimates between grid and non-grid questions for web cases and among all modes.
  • The presence of historically volunteered responses in the web questionnaire significantly increases the usage of these categories for nearly every item examined.
  • There are no consistent demographic predictors for providing a volunteered response, but significant predictors are related either directly or indirectly to the question construct.
  • Compared to 2018, the grid experiment variables have inconsistent results. In the volunteered response experiment, the additional response option increases the prevalence of volunteered responses already given to interviewers in in-person and telephone interviews.

The grid and volunteered response experiments allow methodological innovation in the historical GSS with the introduction of web and a multimode design. We posit that the combination of the non-gridded and gridded versions (e.g., ABANY, ABANYG) is reasonable and should have minimal impact on findings. However, the analytic approach for the volunteered experiments is less clear and researchers should use caution when combining variables (e.g., GRASSNV, GRASSV). The GSS is conducting experiments in 2024 to further test impacts.
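At its core, the report's grid vs non-grid comparison asks whether response distributions differ between randomly assigned arms. A minimal sketch of such a check, using made-up counts and a hand-computed Pearson chi-square statistic (this is not the report's actual data or code):

```python
# A chi-square test of homogeneity on hypothetical counts: does the
# response distribution differ between the standalone and grid formats?

def chi_square_stat(table):
    """Pearson chi-square statistic for an r x k contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand
            stat += (obs - exp) ** 2 / exp
    return stat

# Illustrative (made-up) counts: agree / neutral / disagree
standalone = [310, 120, 170]
grid       = [305, 130, 165]

stat = chi_square_stat([standalone, grid])
df = (2 - 1) * (3 - 1)          # (rows - 1) * (cols - 1) = 2
critical = 5.991                # chi-square 95th percentile, df = 2
print(f"chi2 = {stat:.3f}, reject at 5%: {stat > critical}")
```

With counts this similar, the statistic stays well below the critical value, mirroring the report's finding of few significant grid/non-grid differences.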

GSS years: 2021, 2022

Wells, Brian M.; Christian, Leah; Bautista, Rene; Lafia, Sara; Davern, Michael

Multimode data collection methodologies can help address survey challenges such as declining rates of participation, rising costs, and increasing needs for more timely data collection. However, the order of survey modes as part of a data collection protocol can have an impact on how effective a design can be. This brief explores how the sequential ordering of web and face-to-face (FTF) in a nationally representative survey may impact response rates, key trends, and overall costs. In 2022, the General Social Survey (GSS) was fielded as a multimode study where respondents were randomly assigned to one of two data collection sequences in an experimental design. The first sequence used FTF as the primary mode and then invited all nonrespondents to complete the survey on the web (FTF-first). The second sequence started with a push-to-web (PTW) methodology and then invited a subsample of nonrespondents to complete the survey in a FTF interview (Web-first). Our analyses found that both sequences produced comparable results and neither sequence achieved a better response rate. For costs, the Web-first sequencing was more cost effective per completed interview, but the PTW follow-up in the FTF-first sequence increased response rates at a lower cost and did not require subsampling of nonrespondents.

GSS years: 2022

Morgan, Stephen L.

Some longstanding questionnaire items of the GSS have gender-specific wordings, and the attitudes that they are assumed to measure could be elicited with gender-neutral wordings. This report details and evaluates a 2021 and 2022 experiment that introduces gender-neutral wording alternatives, implemented using a random half-sample assignment of two questionnaire forms. The estimated treatment effects are small, both substantively and with respect to their standard errors. The report recommends that the alternative gender-neutral wordings become the standard wordings for their respective items beginning with the 2026 GSS.

Wells, Brian M.; Sparkman, Rachel

This report details the inclusion of AmeriSpeak® panelists as an oversample population in the 2022 General Social Survey (GSS) and the implications of including Black, Hispanic, and Asian oversample from this sample source. This report provides an overview of the AmeriSpeak sample and its properties relevant for the 2022 GSS. We examine how the AmeriSpeak oversample cases compare to the baseline GSS sample and how they impact estimates at the population and oversampled group levels.

The high-level findings are as follows:

• The AmeriSpeak cases exhibit some demographic differences from their baseline counterparts, but often improve representation, particularly for racial and ethnic subgroups (e.g., South American Hispanic groups, Chinese).

• Given the AmeriSpeak sample only completed the GSS on the web, there are some differences in substantive responses consistent with previous GSS work suggesting sensitivity to mode.

• U.S. population estimates exhibit minimal differences with or without the AmeriSpeak oversample.

• Including the Black and Hispanic oversamples minimally changes the overall estimates for their respective subpopulations, but including the Asian oversample produces large changes in Asian subpopulation estimates, given that the oversample accounts for a majority of the total Asian sample.

The AmeriSpeak oversample offers increased sample sizes for Black, Hispanic, and Asian respondents in the 2022 GSS Cross-section. In particular, the sample size for Asian respondents more than doubles with the inclusion of the oversample given their low prevalence in the population. While the Asian subpopulation estimates see more movement than their Black and Hispanic counterparts, we see improved representation for Asian subgroups, suggesting a potential improvement in estimation more broadly given the small initial sample size. Researchers are encouraged to conduct their own research to determine additional impacts of including the AmeriSpeak oversample.

Wells, Brian M.; Seeskin, Zachary H.; Ihde, Amy

This report describes a new set of post-stratification weights available for users of the 1972-2018 General Social Survey (GSS) cross-sectional surveys to help improve nonresponse bias adjustment. The weight derivation follows the approach applied to 2021 and 2022 GSS Cross-sections. Use of these weights results in weighted totals that, for each GSS cross-sectional sample, equal marginal control totals from the U.S. Census Bureau estimates for education, sex, marital status, age, region of the country, race, U.S. born status, and Hispanic origin when available. NORC recommends that GSS data users use this new weight for all analyses in the future. These weights also: (a) correct for the form assignment errors reported in GSS Methodological Report 36 for 1978, 1980, 1982, 1983, 1984, and 1985; (b) correct for the ballot-and-form assignment errors reported in GSS Methodological Report 134 for 2002, 2010, 2012, 2016, and 2018; and (c) support person-level analyses of the combined main and Black oversamples for 1982 and 1987. Given the global trend of declining response rates over the past several years, the use of auxiliary data, such as U.S. Census totals for nonresponse adjustment, is important for improving representativeness of estimates with respect to key demographic characteristics. In addition, this report examines the impact of using the poststratification weights across all GSS cross-sections. The majority of estimate differences observed include poststratification variables and their close correlates.
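Post-stratification weights of this kind are commonly produced by raking (iterative proportional fitting): base weights are rescaled until weighted margins match external control totals on each dimension in turn. The sketch below is a toy illustration with assumed data and two dimensions, not NORC's production weighting code:

```python
# Minimal raking sketch: scale base weights until weighted margins match
# external control totals for each raking dimension.

def rake(weights, categories, control_totals, iters=50):
    """categories: dict var -> per-case category labels.
    control_totals: dict var -> dict category -> target weighted total."""
    w = list(weights)
    for _ in range(iters):
        for var, labels in categories.items():
            # current weighted total per category of this variable
            totals = {}
            for wi, lab in zip(w, labels):
                totals[lab] = totals.get(lab, 0.0) + wi
            # scale each case's weight by target / current for its category
            w = [wi * control_totals[var][lab] / totals[lab]
                 for wi, lab in zip(w, labels)]
    return w

# Four cases, two raking dimensions (sex and region), equal base weights.
weights = [1.0, 1.0, 1.0, 1.0]
categories = {
    "sex":    ["m", "m", "f", "f"],
    "region": ["n", "s", "n", "s"],
}
control_totals = {
    "sex":    {"m": 60.0, "f": 40.0},
    "region": {"n": 55.0, "s": 45.0},
}
raked = rake(weights, categories, control_totals)
# After convergence, weighted margins match the controls on both dimensions.
m_total = sum(wi for wi, lab in zip(raked, categories["sex"]) if lab == "m")
print(round(m_total, 2))
```

The production weights described here rake on many more dimensions (education, sex, marital status, age, region, race, U.S. born status, Hispanic origin), but the mechanics are the same.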

GSS years: 1972-2018

Wells, Brian M.; Davern, Michael; Ihde, Amy; Bautista, Rene

Methodological Reports, NORC Working Paper

The General Social Survey (GSS), a biennial nationally representative survey of the U.S. adult population, has employed subsampling since 2004. Approximately halfway through the field period in years prior to 2020, half of the remaining cases are randomly subsampled for a more focused follow-up, while the other cases are dropped. Subsampling in the GSS has helped to improve response rates and to achieve cost and sample size efficiencies (O’Muircheartaigh and Eckman 2007). This paper explores the extent to which subsampled (or late) respondents vary from non-subsampled (or early) respondents in GSS 2014, 2016, and 2018. We first examine the demographic characteristics of early and late respondents. Second, we explore substantive differences between the two groups on key analytic variables (e.g., attitudes toward premarital sex, abortion, the death penalty, gun regulation, marijuana legalization, national spending priorities). Finally, we examine differences between early and late respondents on key GSS analytic variables controlling for demographic differences using multivariate logistic regression. Our investigations over three years of the GSS suggest that some demographic and substantive differences between early and late respondents exist, consistent with previous GSS research (Smith 2006). Our results also suggest that most of the differences on key analytic variables do not persist after controlling for demographic characteristics in multivariate logistic regression models. This finding is consistent with past research on interviewer-administered surveys that find that late respondents are not different from early responders on most variables net of demographic characteristics (e.g., Keeter et al. 2006). Differences found between the 2014, 2016, and 2018 analyses emphasize the need for continued research related to subsampling in the GSS.
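The final analytic step, checking whether early/late differences persist net of demographics, amounts to a logistic regression with a late-respondent indicator plus demographic controls. Below is a self-contained sketch on made-up, balanced data (not the GSS variables or the authors' model): the outcome depends on a demographic indicator but not on lateness, so the fitted lateness coefficient should be near zero.

```python
import math

def fit_logit(X, y, lr=0.1, steps=5000):
    """Logistic regression via batch gradient ascent on the log-likelihood."""
    n, k = len(X), len(X[0])
    beta = [0.0] * (k + 1)                     # [intercept, b1, ..., bk]
    for _ in range(steps):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            z = beta[0] + sum(b * x for b, x in zip(beta[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            grad[0] += yi - p
            for j, x in enumerate(xi):
                grad[j + 1] += (yi - p) * x
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

# Balanced illustrative data: outcome rate depends on the age indicator
# (30% vs 70%) but is identical for early and late respondents.
X, y = [], []
for age in (0, 1):
    for late in (0, 1):
        ones = 7 if age else 3
        for i in range(10):
            X.append([age, late])
            y.append(1 if i < ones else 0)

beta = fit_logit(X, y)
# Age coefficient is large; the "late" coefficient converges to zero.
print([round(b, 2) for b in beta])
```

This is the qualitative pattern the report describes: an apparent early/late difference can vanish once the demographic driver is in the model.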

GSS years: 2014, 2016, 2018

This memo describes a new set of post-stratification weights available for users of the 2000–2018 GSS cross-sectional surveys. The weight derivation follows the approach applied to the previously released 2021 GSS Cross-section, for which post-stratification weights were developed to improve nonresponse bias adjustment, given the impact of the COVID-19 pandemic on survey operations and response rate. Use of these weights results in weighted totals of each GSS cross-sectional sample that equal marginal control totals from the U.S. Census Bureau estimates for education, sex, marital status, age, region of the country, race, Hispanic origin, and U.S. born status. These weights also correct for the ballot and ballot-and-form assignment errors reported in GSS Methodological Report 134 for 2002, 2010, 2012, 2016, and 2018. The use of auxiliary data such as U.S. Census totals for nonresponse adjustment is important for improving representativeness of estimates with respect to key demographic characteristics, given the global trend of declining response rates over the past several years.

GSS years: 2002, 2010, 2012, 2016, 2018

Christian, Leah; Davis, Nicholas; Lancaster, Caroline; Paddock, Susan

As previously reported, an unintended overlap between respondent selection and questionnaire assignment procedures in GSS surveys created an association between questionnaire version (ballot and form) and age order in some households in the historical GSS data. This assignment error occurred in data years 2002, 2010, 2012, 2016 and 2018.  This methodological report describes the equivalence testing to compare original and corrected estimates for each category and each variable for all questions in the affected years. The analysis shows that the error differences due to the assignment error are overall relatively small and corrected weights have been provided for data users.

GSS years: 2002, 2010, 2012, 2016, 2018

Cowan, Sarah K.; Hout, Michael; Perett, Stuart

Long-running surveys need a systematic way to reflect social change and to keep items relevant to respondents, especially when they ask about controversial subjects; otherwise, social change threatens the items’ validity. We propose a protocol for updating measures that preserves content and construct validity. First, substantive experts articulate the current and anticipated future terms of debate. Then survey experts use this substantive input and their knowledge of existing measures to develop and pilot a large battery of new items. Third, researchers analyze the pilot data to select items for the survey of record. Finally, the items appear on the survey-of-record, available to the whole user community. Surveys-of-record have procedures for changing content that determine if the new items appear just once or become part of the core. We provide the example of developing new abortion attitude measures in the General Social Survey. Current questions ask whether abortion should be legal under varying circumstances. The new abortion items ask about morality, access, state policy, and interpersonal dynamics. They improve content and construct validity and add new insights into Americans’ abortion attitudes.

Morgan, Stephen L.; Lee, Jiwon

In this report, we present strategies for constructing weights to adjust for attrition in the GSS treble panel. We offer Stata code for the construction of the weights that we explain, as well as data files of weights that researchers may wish to adopt for their own use. 

GSS years: 2006, 2008, 2010, 2012, 2014

(no abstract provided)

GSS years: 2012, 2014, 2016, 2018

Smith, Tom W.

Methodological Report, Chicago, NORC, 2019

* Please note this is an updated version of MR009 (1979). The problem of underrepresentation of males on the GSS reflects the nonresponse tendency of males, possibly exacerbated by female interviewers. Surveys using full probability sampling generally have an underrepresentation of males.

Smith, Tom W.; Son, Jaesok

GSS years: 2016

Stephen L. Morgan

For the 2020 GSS, a review of the free expression items  suggested revisions to “a Muslim clergyman who preaches hatred of the United States,” as part of a broader effort by the GSS Board to reassess all GSS items that are gender-specific in some way. Two gender-neutral alternatives were discussed, “an Islamic cleric who preaches hatred of the United States” and “an Islamic religious leader who preaches hatred of the United States.” For the reasons detailed below, it is possible that a switch to “an Islamic cleric who preaches hatred of the United States” could prompt an undesirable discontinuity in response patterns, beyond what could be expected to result from a gender-neutral substitution. If some GSS respondents are more likely to suspect  that the referenced Islamic cleric has a connection to terrorism, the elicited response may be a mixture of opposition to free expression and a perceived fear of physical violence, with more weighting on the latter. In contrast, “an Islamic religious leader who preaches hatred of the United States” may be preferable, if it is the case that GSS respondents are no more likely to infer a threat to their security than is the case for “a Muslim clergyman who preaches hatred of the United States.” In this report, I offer two sets of results to inform decisions about the questionnaire for the 2020 GSS. First, to set the background, I use GSS data from 2008 through 2018 to summarize levels and changes in attitudes toward free expression for all six existing reference individuals. Second, I offer results from a three-armed experiment that compares “Muslim clergyman” to the two alternatives of “Islamic cleric” and “Islamic religious leader.” The experimental data were collected over the web in January and February of 2019 as part of the AmeriSpeak panel.

On the 2016 General Social Survey (GSS), two question-wording experiments were conducted testing variant versions of core GSS items on job satisfaction (SATJOB) and the co-residence of adult children and their parents (AGED). This report details the findings of these experiments.

Smith, Tom W.; Kim, Jibum

Surveys are conducted using many different modes (e.g., face-to-face, mail, telephone, Internet). Because different modes have different error structures, it is very important to understand the advantages and disadvantages associated with each mode. In recent years there have been major changes in the modes typically utilized in surveys. In particular, there have been increases in the use of computers in data collection, self-administration, and mixed-mode designs. The implications of these and future changes are considered.

This paper documents an update to the Erikson, Goldthorpe & Portocarero (EGP) social class schema, first proposed in 1987. Due to the backcoding of the 2010 US Census Occupational Classification, it is now possible to treat all GSS cases according to the same occupational codes, which can then be linked to updated EGP codes. Validity is ascertained using the 2012 American Community Survey’s occupational classification. Please note that this methodological report also includes a .do file for adding EGP codes to a Stata GSS datafile, as well as an OCC10-to-EGP crosswalk, included below with download links.

GSS years: 1972-2016

http://gss.norc.org/Documents/other/occ10-to-egp-class-crosswalk.csv http://gss.norc.org/Documents/other/code-for-egp-crosswalk.do
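The linked .do file targets Stata; an analogous merge in Python might look like the sketch below. The column names and codes here are illustrative assumptions, not the actual contents of the linked CSV.

```python
import csv, io

# Stand-in for the downloaded occ10-to-egp crosswalk file; the real file
# lives at the URL above, and its column names may differ.
crosswalk_csv = """occ10,egp
10,1
4700,3
6200,8
"""

# Build a lookup table: OCC10 code -> EGP class.
crosswalk = {row["occ10"]: row["egp"]
             for row in csv.DictReader(io.StringIO(crosswalk_csv))}

# Stand-in for GSS cases, each carrying a 2010 Census occupation code.
cases = [{"id": 1, "occ10": "10"}, {"id": 2, "occ10": "6200"}]
for case in cases:
    case["egp"] = crosswalk.get(case["occ10"])  # None if unmatched

print(cases)
```

The same two steps, load the crosswalk, then merge on the occupation code, are what the provided .do file performs for a Stata GSS datafile.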

Hout, Michael; Smith, Tom W.; Marsden, Peter V.

Methodological Report, Chicago, NORC

The 2012 GSS included a popular prestige rating (Smith and Son 2014). A sample of 1,001 individuals, first interviewed in 2008 and included in the GSS panel, rated 90 occupations each; a rotation of occupations among respondents resulted in ratings for 860 occupational titles, most of which could be assigned to one of the 840 codes in the 2010 Standard Occupational Classification (SOC). This methodological report explains how we collected the ratings and converted them into prestige scores and a socioeconomic index for each of the 539 occupational categories of the Census Bureau's coding scheme now used in the GSS.
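A common way ladder ratings become 0-100 prestige scores is to map the nine rungs of the rating ladder to 0, 12.5, ..., 100 and average across raters. The sketch below illustrates that general approach with hypothetical ratings; the report itself documents the exact 2012 scoring procedure.

```python
# Illustrative prestige scoring: rung r of a 9-rung ladder maps to
# (r - 1) * 12.5, i.e. rung 1 -> 0 and rung 9 -> 100; an occupation's
# score is the mean over its raters.

def prestige_score(ratings):
    """ratings: iterable of rung numbers 1..9 for one occupational title."""
    return sum((r - 1) * 12.5 for r in ratings) / len(ratings)

# Hypothetical ratings for one occupational title from four raters.
score = prestige_score([5, 6, 7, 5])
print(score)
```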

GSS years: 2012, 2014

Smith, Tom W.; Laken, Faith; Son, Jaesok

Methodological Report, Chicago, NORC, 1, 2014

Given the magnitude and seriousness of gun violence, it is important to have accurate and reliable information on the possession and use of firearms in the United States. This report examines one crucial element, the level of and trends in household and personal gun ownership. First, the report considers methodological issues concerning the measurement of gun ownership. Second, it examines trends in gun ownership. Third, it evaluates the nexus of these two factors, the impact of methodological issues on the measurement of trends gun ownership. Finally, it considers what ancillary trend data on crime, hunting, household size, and number of guns available suggest about trends in gun ownership.

GSS years: 1972-2014

Smith, Tom W.; Son, Jaesok

GSS years: 2012

Smith, Tom W.; Kim, Jibum

GSS years: N/A

Kim, Jibum; Son, Jaesok; Kwok, Peter K.; Kang, Jeong-han; Laken, Faith; Daquilanea, Jodie; Shin, Hee-Choon; Smith, Tom W.

Methodological Report, Chicago, NORC, 2012

Using the 40 years of the General Social Survey (GSS), we investigate the long-term trend and the correlates of family and personal income nonresponse. Family and personal income nonresponse has increased slightly, by about 5 percentage points from 1974 to 2010 (9% to 13% in family income; 7% to 12% in personal income). While family income nonresponse was equivalently attributed to “Don’t Know” and “Refused,” personal income nonresponse was mainly attributed to “Refused.” We found very similar correlates of family and personal income nonresponse: being older, female, married, or self-employed; not answering the number of earners; being uncooperative; living in the East; and being surveyed in recent periods. In addition, based on the interviewer’s evaluation, uncooperative respondents are less likely to respond “Don’t Know” than “Refused,” and respondents with poor comprehension are more likely to respond “Don’t Know” than “Refused.” Our findings suggest that we need to distinguish “Refused” from “Don’t Know” if we aim to better understand income nonresponse, and that we should consider paradata to evaluate the cognitive processing of income nonresponse.

GSS years: 1972-2012

Hout, Michael; Hastings, Orestes P.

Methodological Report, Chicago, NORC, 6, 2012

We assess the reliability and stability of core items in the General Social Survey using Alwin’s (2007) implementation of Heise’s (1969) model. Of 265 core items examined we find mostly positive results. Eighty items (over 30 percent) have reliability coefficients greater than 0.85; another 84 (32 percent) have reliability coefficients between 0.70 and 0.85. Facts are generally more reliable than other items. Stability was slightly higher, overall, in the 2008-2010 period than the 2006-2008 period. The economic recession of 2007-09 and the election of Barack Obama in 2008 altered the social context in ways that may have contributed to instability.
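The Heise (1969) model used here identifies reliability and stability separately from the three cross-wave correlations of a three-wave panel: assuming equal reliability at each wave and lag-1 stability, reliability = r12·r23/r13. A small sketch with made-up correlations (not GSS estimates):

```python
# Heise's (1969) three-wave decomposition: observed correlation r_ij equals
# reliability times true-score stability, so the three correlations of a
# three-wave panel identify both quantities.

def heise(r12, r23, r13):
    reliability = r12 * r23 / r13
    stability_12 = r13 / r23   # true-score stability, wave 1 -> 2
    stability_23 = r13 / r12   # true-score stability, wave 2 -> 3
    return reliability, stability_12, stability_23

# Illustrative correlations for one item across three panel waves.
rel, s12, s23 = heise(0.60, 0.63, 0.54)
print(round(rel, 2), round(s12, 2), round(s23, 2))
```

With these inputs the item's modest observed correlations decompose into a reliability of 0.70 and much higher true-score stabilities, which is the distinction the report's item-by-item assessment draws.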

GSS years: 2006, 2008, 2010

Methodological Report, Chicago, NORC, 5, 2011

GSS years: 2006, 2008

Methodological Report, Chicago, NORC, 5, 2010

Methodological Report, Chicago, NORC, 11, 2009

GSS years: 2002, 2004, 2006, 2008

Methodological Report, Chicago, NORC, 2010

GSS years: 2006, 2008

Methodological Report, Chicago, NORC, 2009

GSS years: 2008

MR113 2006-2008 General Social Survey Panel Validation

Smith, Tom W.; Sokolowski, John

Methodological Report, Chicago, NORC, 2008

GSS years: 2002, 2008

Malhotra, Neil; Krosnick, Jon A.; Haertel, Edward

Methodological Report, Chicago, NORC, 8, 2007

Social scientists in many disciplines have used the GSS's ten-item Wordsum vocabulary test to study the causes and consequences of vocabulary knowledge and related constructs. In adding up the number of correct answers to yield a test score, researchers have implicitly assumed that the ten items all reflect a single, underlying construct and that each item deserves equal weight when generating the total score. In this paper, we report evidence suggesting that extracting the unique variance associated with each word and measuring the latent construct only with the variance shared among all indicators strengthens the validity of the index. We also report evidence suggesting that Wordsum could be improved by adding words of moderate difficulty to accompany the existing questions that are either quite easy or quite difficult. Previous studies that used Wordsum should be revisited in light of these findings, because their results might change when a more optimal analytic method is used.

GSS years: 1974-2004

Smith, Tom W.; Kim, Seokho

Methodological Report, Chicago, NORC, 4, 2007

The new Baylor Religion Survey (BRS) is an important addition to the available data on religion in contemporary America ('American Piety,' 2006). Few national surveys have included so many valuable questions on the religious background, beliefs, and behaviors of adult Americans. The BRS is a fruitful source for expanding our knowledge about religion and the initial analysis that accompanied the release of the data last Fall has already made a major contribution to the sociology of religion ('American Piety' 2006; 'American Piety 2005,' 2006; Dougherty, Johnson, and Polson, 2006; Dougherty, Johnson, and Polson, 2007; 'Losing My Religion,' 2006).

GSS years: 2006

Methodological Report, Chicago, NORC, 2007

Kim, Jibum; Smith, Tom W.; Kang, Jeong-han; Sokolowski, John

Methodological Report, Chicago, NORC, 2006

GSS years: 1984-2002

GSS years: 1984-2004

Methodological Report, Chicago, NORC, 2005

Several probes were added to the 2004 GSS to see if the declining Protestant population was due to the data being hidden in Christian or Inter/non-denominational categories. Very few cases were found, but the 2006 GSS will attempt to resolve the status of several dozen cases.

GSS years: 1972-2004

The large rise in multiple ethnic mentions was due to the change in mode to CAPI. Although this change was noted, there was no significant change in the distribution of ethnicities.

GSS years: 2002, 2004

Switching from a 4-category to a 5-category health scale does not change explanatory power but would prevent trends in these categories from being reliably estimated across scales. Changing from 4 to 5 also shifts the distribution at the positive end, but not at the negative end.

NHIS, FQES, National Health and Nutrition Examination Survey II

Using CAPI (introduced in 2002) allows researchers to compare HEF variables and reconcile conflicting information while still in the field. Cleaning and consistency checks will be made of HEF variables to obtain more accurate data.

GSS years: 1980-2004

Hout, Michael

Methodological Report, Chicago, NORC, 2004

GSS years: 1972-2002

Kim, Jibum; Smith, Tom W.; Kim, Seokho; Kang, Jeong-han; Berktold, Jennifer

GSS years: 2000, 2002

Smith, Tom W.; Michael, J. Michael

Methodological Report, Chicago, NORC, 2003

GSS years: 1984, 2000, 2002

GSS years: 2002

GSS years: 1988-2002

GSS years: 1972-2000

Methodological Report, Chicago, NORC, 10, 2001

GSS years: 2001

Marsden, Peter V.

Methodological Report, Chicago, NORC, 6, 2001

Name generators, used for measuring egocentric networks in surveys, are complex questions that make substantial demands on respondents and interviewers alike. They are therefore vulnerable to interviewer effects, which arise when interviewers administer questions differently in ways that affect responses, in particular the number of names elicited. Van Tilburg (1998) found significant interviewer effects on network size in a study of elderly Dutch respondents; that study included an instrument with seven name generators, the complexity of which may have accentuated interviewer effects. This article examines a simpler single-generator elicitation instrument administered in the 1998 General Social Survey (GSS). Interviewer effects on network size as measured by this instrument are smaller than those found by Van Tilburg, but only modestly so. Variations in the network size of respondents within interviewer caseloads (estimated using a single-item "global" measure of network size and an independent sample of respondents) reduce but do not explain interviewer effects on the name generator measure. Interviewer differences remain significant after controls for between-interviewer differences in the sociodemographic composition of respondent pools. Further insight into the sources of interviewer effects may be obtained by monitoring respondent-interviewer interactions for differences in how name generators are administered.

GSS years: 2000

Methodological Report, Chicago, NORC, 7, 2001

Methodological Report, Chicago, NORC, 1999

GSS years: 1988-1998

Methodological Report, Chicago, NORC, 12, 2002

GSS years: 1998

Methodological Report, Chicago, NORC, 1998

The relationship between educational attainment and age/cohort is curvilinear, not negative as the historical trend might indicate, for two reasons: the advent of associate degrees and the time required to obtain graduate degrees. Control variables (for age/cohort) straighten out otherwise confounding relationships.

GSS years: 1990, 1991, 1993, 1994, 1996

ANES 1984-1991

Methodological Report, Chicago, NORC, 1997

GSS years: 1994, 1996

Methodological Report, Chicago, NORC, 2, 1997

GSS years: 1996

Smith, Tom W.; Shin, Hee-Choon; Xiaoxi, Tong

Methodological Report, Chicago, NORC, 2, 1996

There were few sample frame effects associated with NORC's shift from the 1980 to the 1990 Census. Also, design effects based on the 1990 sample frame are of the same nature and magnitude as indicated by previous research.

GSS years: 1993

Methodological Report, Chicago, NORC, 12, 1995

GSS years: 1972-1994

Methodological Report, Chicago, NORC, 6, 1995

This article reviews questions in the GSS asking a respondent's race or ethnicity and proposes several methods in which these measures could be refined.

Methodological Report, Chicago, NORC, 1995

The GSS recently shortened the length of its core and, as a result, the context of many items changed. Few context effects were caused by these shifts.

GSS years: 1994

GSS years: 1972-1993

Methodological Report, Chicago, NORC, 8, 1994

GSS years: 1975-1993

Methodological Report, Chicago, NORC, 3, 1994

Response differences in two GSS and one ISSP spending scale were slight, except for spending on the environment, which showed a context effect. In all, the spending scales are generally answered in a consistent and meaningful manner.

GSS years: 1990

Methodological Report, Chicago, NORC, 1980

The 1980 General Social Survey and the American National Study by SRC are compared to determine house effects. The difference in the frequency of don't-know categories between the two surveys is the most significant house effect. Also, differences in the timing of interviews and the training of interviewers cause variation in the data.

GSS years: 1980

Methodological Report, Chicago, NORC, 1985

Studies of voting behavior and other political matters in the fifties developed a picture of the American electorate that was startlingly at odds with the basic assumption of a rational citizenry as formulated in classic democratic theory. In general, the low or defective levels of conceptualization, information, participation, attitude constraint, and consistency were seen as indicating a very underdeveloped level of political thought and weak or disorganized political attitudes. In particular, inconsistency in attitudes over time was interpreted as indicating an abundance of non-attitudes. In this paper, we will review the literature on non-attitudes. We will examine how the concept of non-attitudes compares with rival explanations of mass belief systems and evaluate the conceptual and empirical appropriateness of competing formulations. We will then consider the implications of these findings for survey design and analysis in general.

GSS years: 1972, 1973, 1974, 1975, 1976, 1977, 1978

Schaeffer, Nora Cate

Methodological Report, Chicago, NORC, 1982

Variations in the wording of the child qualities items were examined in order to determine the degree of male bias present. This bias is present in both variations, but to a lesser degree in the "child" item than in the "he" item.

Objective measures of ethnicity and nationality may be inaccurate because substantial numbers of people do not know where their ancestors were born, have multiple nationalities, or do not adopt the ethnicity suggested by place of birth. Generally a combination of behavioral, natal, and subjective approaches is most effective, and even a simple subjective measure may be more effective than an objective one.

GSS years: 1972, 1973, 1974, 1975, 1976, 1977, 1978, 1980

Conflicts found in the work supervision and self-employment items stem from: (1) borderline cases which include both elements, (2) answering the question for one's spouse rather than self, and (3) misinterpretation of the supervision question.

Methodological Report, Chicago, NORC, 12, 1993

Response differences between a GSS scale and the ISSP scales result from measurement differences and group descriptors. Differences in question wording, particularly in the descriptors used for the government, produced different responses, suggesting that the two labels connoted different meanings.

GSS years: 1991

Methodological Report, Chicago, NORC, 7, 1993

John Brehm incorrectly weighted the data when comparing the CPS and the GSS. This article shows that the GSS, when properly weighted, is very similar to the CPS in all demographics except for gender (See also Brehm, 324).

GSS years: 1978-1988

CPS 1978, 1980, 1982, 1984, 1986, 1988; NES

Changes in the alignment of categories or a shift in scale can have significant response effects in surveys. The physical layout of a survey can also affect interviewers, and hence data quality, through confusing skip patterns or through the amount of physical space available for recording open-ended detail.

ISSP 1987; Gallup 1954; Wirthlin; Gordon Black; ORC; Yankelovich and Hart-Tetter 1990; NORC 1954

Methodological Report, Chicago, NORC, 1992

GSS years: 1972-1991

Respondents with non-attitudes and middle-of-the-road attitudes are attracted to the +1 rather than the -1 response category on a ten-point scalometer. Endpoints, especially the negative endpoint, disproportionately attract responses. Changes in the scalometer, such as including 0 as a midpoint and pointing out all ten or eleven response categories, might cause fewer response effects and more accurate ratings.

GSS years: 1974-1991

Gallup 1953-1973

This article compares the differences in the respondents to the 1991 GSS and the 1992 reinterview, and describes three differences. First, respondents to the reinterview were more upscale than the original interviewees. Second, non-response was higher among those who were uncooperative in the first interview. Third, non-response was higher among non-voters.

Nakao, Keiko; Treas, Judith

Following Duncan's procedures new socioeconomic indexes are calculated based on the GSS/NORC study of occupational prestige.

GSS years: 1989

Census 1980

This article describes the addition of 93 variables from the HEF for all full-probability surveys from 1975-1992.

Methodological Report, Chicago, NORC, 1991

Ethno-religious composition, political composition, density, and the composition of a network by friends or co-members of organizations are measured with relatively high reliability, while some measures, such as sex composition, remain problematic even when the number of alters grows quite large. The sensitivity of reliability estimates to differences in instrument design is examined using design variations in the surveys studied.

GSS years: 1985, 1987, 1988

Northern California Community Study 1977-1978

Missing information is a greater problem for income than for any other demographic, and non-response has increased over time. However, non-response in the GSS is less than in many other studies.

Census 1960, 1970; CPS 1973; Income Survey Development Program 1978; Survey Income and Program Participation 1983; NES 1988; ISSP 1989

Methodological Report, Chicago, NORC, 1990

The authors construct a new prestige scale for the 1980 Census Occupational classification. Using 1989 GSS data, 740 occupations were ranked according to social prestige.

Nakao, Keiko; Hodge, Robert W.; Treas, Judith

Methodological Report, Chicago, NORC, 10, 1990

Prestige scores for all occupations developed from the national surveys in the 1960s have been widely used by researchers in the social sciences. The change in the 1980 Census classification of occupations necessitates updating the prestige scale accordingly. New scores can be obtained either by reworking the old scores or by collecting new data. In this paper, we argue for the latter choice based on methodological, substantive, and theoretical considerations. The plan to collect occupational assessments from a nationally representative sample of 1,500 Americans in the 1989 NORC General Social Survey will also be outlined.

Methodological Report, Chicago, NORC, 9, 1990

GSS years: 1988, 1989

Methodological Report, Chicago, NORC, 12, 1989

Missing data on father's occupation may have a small impact on intergenerational comparisons with intergenerational associations weaker for missing data. Possible corrections for missing data may include using mother's and spouse's work information.

GSS years: 1972-1988

Methodological Report, Chicago, NORC, 9, 1989

Difficulty in studying order effects stems from the number of potential causal influences, competing explanations, interaction effects with question type, question specificity, question vagueness, question centrality, response type, history, administration, conflicting attitudes, and other effects. Promising solutions include split ballots, think aloud procedures, follow-up questions, probes on other dimensions, and various experimental designs.

GSS years: 1976, 1978, 1980

SRC 1979-80; NORC 1987; Greater Cincinnati Surveys 1983-84; DAS 1971

Though most of the data on sexual behavior and attitudes from the 1988 and 1989 GSS appear valid and reliable, caution still needs to be used when examining the 1988 responses of male homosexuals and the gender gap in the reported number of partners. (See also No. 2954??)

GSS years: 1988, 1989, 1990

Ligon, Ethan

Methodological Report, Chicago, NORC, 1989

Two new GSS income measures (REALINC and RINCOME) constructed from current GSS variables for household and respondent income correct for changes in the price level across years.

BLS 1983; CPS 1980

Alwin, Duane F.

Methodological Report, Chicago, NORC, 2, 1989

In this paper I discuss several of the difficulties involved in estimating the reliability of survey measurement. Reliability is defined on the basis of classical true-score theory, as the correlational consistency of multiple measures of the same construct, net of true change. This concept is presented within the framework of a theoretical discussion of the sources of error in survey data and the design requirements for separating response variation into components representing such response consistency and measurement errors. Discussion focuses on the potential sources of random and nonrandom errors, including "invalidity" of measurement, the term frequently used to refer to components of method variance. Problems with the estimation of these components are enumerated and discussed with respect to both cross-sectional and panel designs. Empirical examples are given of the estimation of the quantities of interest, which are the basis of a discussion of the interpretational difficulties encountered in reliability estimation. Data are drawn from the ISR's Quality of Life surveys, the National Election Studies and the NORC's General Social Surveys. The general conclusion is that both cross-sectional and panel estimates of measurement reliability are desirable, but for the purposes of isolating the random component of error, panel designs are probably the most advantageous.

GSS years: 1973, 1974

Alwin, Duane F.; Krosnick, Jon A.

GSS years: 1973-1988

Methodological Report, Chicago, NORC, 12, 1988

Open-ended coding errors for occupation are frequent, but fortunately the mistakes do not differ much from correct codings, and thus do not greatly affect analysis. More attention is needed in training coders and devising coding schemes.

GSS years: 1988

Methodological Report, Chicago, NORC, 11, 1988

For the sexual behavior items on the 1988 GSS, there is little evidence of non-response bias and attitudes and behaviors appear somewhat consistent. Though reports of sexless marriages can reasonably be explained, the data on number of partners and male homosexuals is questionable. (See also No. 2953??)

Methodological Report, Chicago, NORC, 9, 1988

By recoding various demographics, GSS respondents can be classified according to the government definition of poverty.

Changes in GSS measurement procedures have distorted some trends in variables across time. These effects are identified, and in cases of extreme distortion, corrections are suggested.

Gallup 1976

Methodological Report, Chicago, NORC, 8, 1988

Rasinski, Kenneth A.

Methodological Report, Chicago, NORC, 5, 1988

Past attempts at explaining the effect of question wording on responses to survey questions have stressed the ability of question wording to persuade and influence the respondent, resulting in attitude change. This paper promotes an alternative view, which is that even small changes in wording often shift the meaning of the question and thus affect the way the respondent thinks about the issue. Analyses of question wording experiments on the 1984, 1985, and 1986 General Social Surveys were conducted to examine the effect of wording changes on public support for various types of government spending. Consistent wording effects were found across the three years. An examination of the effects of wording changes and of their interaction with respondent individual differences led to two conclusions: (1) even minor wording changes can alter the meaning of a survey question, and (2) this effect is not limited to individuals with lower levels of education or with less stable attitudes.

GSS years: 1984, 1985, 1986

Schuman, Howard; Scott, Jacqueline

Methodological Report, Chicago, NORC, 5, 1989

Two split-ballot experiments, one on DK filtering and one on agreeing response set, were included in the GSS in 1974 and replicated in 1982. Response effects occurred in each experiment in 1974 and were generally replicated in 1982, but the effects do not interact with time. 

GSS years: 1972, 1982

Methodological Report, Chicago, NORC, 2, 1988

The GSS's switch from a rotation to a split ballot design offers advantages of maintaining one year intervals for variables, ease in judging rate of change in items, and applying econometric time series analysis and testing for context effects. Disadvantages include more sampling variability, complications in representation of time in analysis, and the possibility of introducing new context effects.

GSS years: 1984

Methodological Report, Chicago, NORC, 8, 1987

GSS years: 1973-1987

Test/retest consistency varies by attributes of the respondent, and this variation is largely a function of reliability differentials between groups.

GSS years: 1972, 1973, 1974, 1978

The Commission's analysis of public opinion is methodologically unsound and therefore substantively suspect.

GSS years: 1972-1986

Gallup 1977, 1985; Yankelovich 1975 1976 1977 1982; Commission on Obscenity and Pornography 1970; Newsweek/Gallup 1985

Duncan, Greg J.; Groskind, Fred

Methodological Report, Chicago, NORC, 5, 1987

Low benefit responses to the 1986 Welfare Vignette supplement are probably not due to misunderstanding the questions. It is also a mistake to assume that respondent-designated incomes were intended to be net benefits.

GSS years: 1986

Krosnick, Jon A.; Alwin, Duane F.

Methodological Report, Chicago, NORC, 3, 1987

Respondents often choose merely satisfactory answers to survey questions when the cognitive and motivational demands of choosing optimal answers are high. Satisficing is more prevalent among people with less cognitive sophistication, though it is no more prevalent among people for whom the topic of a question is low in salience and/or personal importance.

Methodological Report, Chicago, NORC, 1986

Disinterest, lack of reading ability, and difficulty in judgment and comprehension lead to nonresponse to the welfare vignettes. Bias was small and related to nonresponse-associated variables. Respondents also made marking mistakes. Despite these problems, the vignettes worked well, with a small amount of error.

Methodological Report, Chicago, NORC, 1987

This paper discusses the use of survey supplements, factors influencing supplement attrition and nonresponse bias, and attrition and nonresponse bias on the 1985 ISSP Supplement. Overall supplement attrition was moderate and not random. Attrition was higher among those who were politically uninterested, less educated, less likely to discuss problems and socialize with others, Northeasterners, isolationists, and those who dislike foreign countries.

GSS years: 1984, 1985

OCG I & II 1962, 1973; BSA 1983-1985; Opinion and ISV 1976; Civil Liberties Survey 1978; NORC 1964

Burt, Ronald S.

Methodological Report, Chicago, NORC, 6, 1987

GSS years: 1985

Burt, Ronald S.; Marsden, Peter V.; Rossi, Peter H.

Building on the GSS network items, the authors propose several changes and improvements which could be used to make a standard set of network items for survey research. This set would be efficient, reliable, and valid.

DAS 1966; Northern California Communities Study 1977

Burt, Ronald S.; Guilarte, Miguel G.

The idea of structural balance is used to suggest quantitative intervals between relationship strength response categories in the GSS network data. In contrast to an assumption of equal intervals between the categories of relationship strength, the intervals appear quite uneven.

Methodological Report, Chicago, NORC, 6, 1986

The people identified as important discussion partners in the GSS network data were cited in order of strength of relationship with respondent: the first cited person having the strongest relation, the second having the next strongest, and so on. Order effects on closeness and contact frequency are described in the context of network size and relation content.

Smith, Tom W.; Peterson, Bruce L.

An unintended overlap between respondent selection and form assignment procedures in GSS surveys from 1978 to 1985 created an association between form and age order in some households. This led to an association between form and various variables linked to age order. A weight was developed to compensate for the assignment bias and achieve random distribution of affected variables across forms.

GSS years: 1973-1985

Overall, proxy reports for spouses were as accurate as self-reports, probably because the attributes measured (religion, education, occupation, etc.) were major, basic demographics. Significantly higher levels of non-response were found for proxy reports, but the level of missing data was nevertheless negligible.

GSS years: 1972-1978, 1980, 1982-1985

Alteration of the GSS content by the addition or deletion of items, by the switching of items from permanent to rotating status, or by switching items from one rotation to another hampers keeping measurement conditions constant and therefore increases the possibility that true change will be confounded with measurement effects.

GSS years: 1972-1985

The term welfare consistently produces more negative evaluations than does the term poor, illustrating the major impact different words can have on response patterns.

SRC 1972, 1974, 1976, 1982; MAP 1968, 1982; Harris 1972 1976; Gallup 1976; Yankelovich

Methodological Report, Chicago, NORC, 1984

This is an argument for obtaining network data in the General Social Survey.

Peterson, Bruce L.

This report on the 1984 GSS experiment comparing the effect of varying the number of response categories concludes that the inter-item correlations are not appreciably different in the seven-point version of the confidence question than in the traditional three-point item.

The report is a preliminary analysis of eight methodological experiments and adaptations in the 1984 General Social Survey: New Denominational codes; intra-item order effects, child qualities; sex of child, child qualities; spend priorities; confidence variation in response categories; bible fundamentalism, two trends; Images of God, two scales; and order effect of grace.

GSS years: 1972-1984

Gallup; ANES; NORC

Dempsey, Glenn R.

Methodological Report, Chicago, NORC, 9, 1984

The purpose of these two experiments on the 1983 GSS was to determine whether U.S. and European scaling techniques could measure political ideology and social status in the U.S. in similar ways. POLVIEWS tends to have stronger correlations with political and social attitudes than does POLVIEWX (European scale). CLASS, the standard GSS question also correlates higher than the European counterpart RANK.

GSS years: 1972-1982

Methodological Report, Chicago, NORC, 1983

When question order was reversed so that questions on valued qualities of children came before those on abortion, support for abortion decreased. Although a split ballot in 1983 failed to confirm the effect of altered question order, that may be the result of lower overall support of abortion.

GSS years: 1977, 1978, 1980, 1983

Methodological Report, Chicago, NORC, 4, 1983

Ranking and rating techniques for measuring parental socialization values are found to be similar with respect to ordering aggregate preferences. However, ranked measures account for appreciably more variance in the latent variable, self-direction versus conformity.

Durall 1946; Schuman and Presser, 1981

Methodological Report, Chicago, NORC, 10, 1984

GSS years: 1980, 1982

Results from the 1982 GSS experiment show that non-affective dimensions such as importance, information, and firmness, along with open-ended questions, added to issues like support for or opposition to the ERA and abortion, can discriminate the attitude constraint between two related measures.

Methodological Report, Chicago, NORC, 6, 1983

Clustering scale items together increases inter-item correlations, but has no clear impact on the relationship between the scale and independent variables.

GSS years: 1973, 1974, 1975, 1976, 1977, 1978, 1980, 1982

GSS years: 1972, 1973, 1982

Methodological Report, Chicago, NORC, 7, 1982

Voter turnout and candidate voted for are difficult variables to reliably measure. Voting is consistently over-reported and votes for winners are usually exaggerated.

ANES 1968, 1972, 1976, 1980; CPS 1968 1972, 1976, 1980

Methodological Report, Chicago, NORC, 5, 1982

Order effects are a poorly understood phenomenon in survey research. There are many different types with distinct causes. Conditional order effects, in which the variation occurs mostly or completely among those giving a particular response to the antecedent question, are examined in depth.

Methodological Report, Chicago, NORC, 1981

Respondents who contradict themselves on abortion items actually disapprove of abortion. The approving response to the general item is best considered an error in grasping the connection between the general and the situational items.

GSS years: 1977, 1978, 1980

Ethnicity is the most difficult of all background variables to measure, as language, religion, race, nationality and culture must be pieced together. About one quarter of Americans are either over- or under-identifiers of their ancestors. The ability of ethnicity to explain attitudes drops with immigrant generation, though it remains significant even after several generations.

SRC 1978; ANES 1978; Census of Canada 1971

In general, item nonresponse is higher for the less educated. The reverse is true however on obscure and fictive questions without filters. With filters, the obscure and fictive questions show no association between item nonresponse and education.

ANES 1956, 1958, 1960, 1980

Various methods of measuring the impact of non-response bias on survey estimates are examined. It is concluded that there is no simple, general, or accurate way of measuring it. Further research is encouraged.

There is an apparent contradiction between the disapproving responses to the general hitting question and the more specific subquestions. This contradiction is due in part to differences in education and achievement.

GSS years: 1973, 1975, 1976, 1978

Methodological Report, Chicago, NORC, 1979

* Please note there is a version updated in 2009 (MR009a). The problem of underrepresentation of males on the GSS reflects the nonresponse tendency of males, possibly exacerbated by female interviewers. Surveys using full probability sampling generally have an underrepresentation of males.

Census 1970, 1972-78; CPS 1975-77; CNS 1973-74; ANES 1972-78

Smith, Tom W.; Stephenson, C. Bruce

Methodological Report, Chicago, NORC, 1979

The authors explain various techniques to determine measurement error in opinion surveys. Focusing on test/retest experiments, they conclude that the problems of distinguishing measurement error from true change are sufficiently fundamental and sufficiently complex that they must be attacked with various techniques.

Stephenson, C. Bruce

Methodological Report, Chicago, NORC, 4, 1979

Probability sampling with quotas (PSQ) overrepresents large households. Both PSQ and full probability sampling (FP) underrepresent people from large households. Also, PSQ underrepresents men who are working full-time. Finally, difficult respondents may be underrepresented more seriously in the PSQ sample. However, FP underrepresents men and urbanites.

GSS years: 1972, 1973, 1974, 1975, 1976

Methodological Report, Chicago, NORC, 3, 1980

Ethnicity is a difficult attribute to measure. It can be determined for about 78 percent of all non-blacks when measured subjectively and for about 85 percent when determined subjectively and natally. A lack of ethnic affiliation is related to being a member of the old stock, host culture; having low education and social standing; and poor transmission of family information between generations.

GSS years: 1972, 1973, 1974, 1975, 1976, 1977

CPS 1972; SRC 1972, 1974, 1978

Methodological Report, Chicago, NORC, 1978

This report examines the response rates of NORC and SRC and finds that on the GSS the causes of non-response are explicit refusals, unavailability, and a small residual group of sick or otherwise uninterviewable people. The mixture of non-responses appears to differ between the GSS's and SRC's surveys, although total response rates are nearly identical.

SRC 1972-78

Methodological Report, Chicago, NORC, 4, 1984

Review of the GSS size-of-place codes resolved a suspected sampling-frame artifact but uncovered miscoded size-of-place variables. Fortunately, the magnitude of the misclassifications is minimal.

Both full probability and block-quota sampling techniques overrepresent people from small households. This bias can be eliminated by weighting the number of eligible respondents per household. The distortions caused by this bias fortunately appear to be small.

While house effects are not an insurmountable and pervasive survey problem, they do affect survey response particularly in the area of the don't know response level.

Stouffer 1954; NORC 1960; Gallup 1971-76 (14); Roper 1971, 1973; SRC 1972, 1974-76

Christian, Leah; Paddock, Susan; Blumerman, Lisa; Bautista, Rene; Davern, Michael

Methodological Report, Chicago, NORC, 1981

Differences in survey procedures, i.e., format, wording, placement, and order, artificially increase the variation of responses to questions on institutional confidence. Also, the concept of confidence is somewhat vague and allows for fluctuations that complicate an analysis of opinions on confidence. All in all, many of the inter- and intra-survey changes in trends are true fluctuations.



2nd Edition

Research and Education An Introduction to Methods, Approaches and Processes

Description

This core text, now in its second edition, provides an easy-to-read, comprehensive introduction to educational research that will develop your understanding of research strategies, theories and methods, giving you the confidence and enthusiasm to discuss and write about your research effectively.

Specifically written for undergraduate education studies students, the book guides you through the process of planning a research project, the different research methods available and how to carry out your research and write it up successfully. Highlighting the theoretical and methodological debates and discussing important ethical and practical considerations, the book is structured to help you tackle all the different aspects of your project from writing your literature review, designing a questionnaire and analysing your data to the final writing up. This new edition is updated throughout with activities, case studies and further reading lists.

New chapters include:

  • Mixed-methods research
  • Narrative inquiry
  • Creative and visual research methods
  • Extended chapter on research with children and vulnerable groups

Part of the Foundations of Education Studies series, this timely new edition is essential reading for students undertaking a research methods course or a piece of educational research.

Table of Contents

Introduction

1. Approaches to educational research

Part I: Planning an education research project        

2. Choosing a topic and writing a proposal 

3. Reviewing the literature

4. Sampling

5. Data analysis

6. Writing up your research project  

Part II: Research strategies        

8. Mixed-methods research

9. Case studies

10. Ethnography

11. Action research

12. Narrative inquiry

Part III: Methods of data collection      

13. Questionnaires        

14. Interviews and focus groups

15. Observations

16. Documents   

17. Creative and visual research methods    

Part IV: Theorising research      

18. Using theories and concepts in educational research  

19. Evaluating methods

20. Ethical issues in educational research    

21. Research with children and vulnerable groups

Sam Shields is Senior Lecturer in Education at Newcastle University, UK.

Alina Schartner is Senior Lecturer in Applied Linguistics at Newcastle University, UK.

Will Curtis is Professor and Deputy Pro-Vice Chancellor (Education, Quality and Standards) at the University of Warwick, UK.

Mark Murphy is Reader in Educational Leadership and Policy at the University of Glasgow, UK.



In Tied Presidential Race, Harris and Trump Have Contrasting Strengths, Weaknesses

Methodology

Table of contents

  • Other findings: An uncertain election outcome, the more critical candidate, Trump and the 2020 election
  • Voting preferences among demographic groups
  • Support for Harris, Trump among voters and nonvoters in recent elections
  • How Harris and Trump supporters see their vote
  • Do voters think it’s clear who will win?
  • Most voters cite several issues as very important to their vote
  • Changes in confidence in candidates on issues, following Biden’s departure from race 
  • Do voters see the candidates as ‘too personally critical’?
  • Do the candidates make you feel proud, hopeful, uneasy or angry?
  • How the candidates make Harris and Trump supporters feel
  • How men and women view the impact of the candidates’ genders
  • Views of the candidates’ races and ethnicities
  • Views of the candidates’ ages among younger and older voters
  • Views of the values and goals of the other candidate’s supporters
  • Should the president work with the opposing party in Congress?
  • Top economic concerns: Food and consumer prices, housing costs
  • Acknowledgments
  • The American Trends Panel survey methodology

Data in this report comes from Wave 153 of the American Trends Panel (ATP), Pew Research Center’s nationally representative panel of randomly selected U.S. adults. The survey was conducted from Aug. 26 to Sept. 2, 2024. A total of 9,720 panelists responded out of 10,645 who were sampled, for a survey-level response rate of 91%.

The cumulative response rate accounting for nonresponse to the recruitment surveys and attrition is 3%. The break-off rate among panelists who logged on to the survey and completed at least one item is less than 1%. The margin of sampling error for the full sample of 9,720 respondents is plus or minus 1.3 percentage points.
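The reported ±1.3-point margin is larger than the roughly ±1.0 points that simple random sampling would imply for n = 9,720, because weighting inflates sampling variance. A short sketch of that calculation follows; the design effect of 1.7 is an illustrative assumption used to reproduce the reported figure, not Pew's published value:

```python
import math

def margin_of_error(n, deff=1.0, z=1.96, p=0.5):
    """Margin of sampling error in percentage points at 95% confidence,
    inflated by a design effect (deff) to account for weighting."""
    return 100 * z * math.sqrt(deff * p * (1 - p) / n)

# Unweighted simple random sample of 9,720: about +/- 1.0 points.
print(round(margin_of_error(9_720), 1))            # -> 1.0
# A hypothetical design effect near 1.7 yields the reported +/- 1.3.
print(round(margin_of_error(9_720, deff=1.7), 1))  # -> 1.3
```

The `p=0.5` default is the conservative worst case; any other proportion gives a smaller margin.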

SSRS conducted the survey for Pew Research Center via online (n=9,440) and live telephone (n=280) interviewing. Interviews were conducted in both English and Spanish.

To learn more about the ATP, read “ About the American Trends Panel .”

Panel recruitment

Since 2018, the ATP has used address-based sampling (ABS) for recruitment. A study cover letter and a pre-incentive are mailed to a stratified, random sample of households selected from the U.S. Postal Service's Computerized Delivery Sequence File. This Postal Service file has been estimated to cover 90% to 98% of the population. Within each sampled household, the adult with the next birthday is selected to participate. Other details of the ABS recruitment protocol have changed over time but are available upon request. Prior to 2018, the ATP was recruited using landline and cellphone random-digit-dial surveys administered in English and Spanish.

A national sample of U.S. adults has been recruited to the ATP approximately once per year since 2014. In some years, the recruitment has included additional efforts (known as an “oversample”) to improve the accuracy of data for underrepresented groups. For example, Hispanic adults, Black adults and Asian adults were oversampled in 2019, 2022 and 2023, respectively.

Sample design

The overall target population for this survey was noninstitutionalized persons ages 18 and older living in the United States. All active panel members were invited to participate in this wave.

Questionnaire development and testing

The questionnaire was developed by Pew Research Center in consultation with SSRS. The web program used for online respondents was rigorously tested on both PC and mobile devices by the SSRS project team and Pew Research Center researchers. The SSRS project team also populated test data that was analyzed in SPSS to ensure the logic and randomizations were working as intended before launching the survey.

All respondents were offered a post-paid incentive for their participation. Respondents could choose to receive the post-paid incentive in the form of a check or a gift code to Amazon.com. Incentive amounts ranged from $5 to $20 depending on whether the respondent belonged to a part of the population that is harder or easier to reach. Differential incentive amounts were designed to increase panel survey participation among groups that traditionally have low survey response propensities.

Data collection protocol

The data collection field period for this survey was Aug. 26-Sept. 2, 2024. Surveys were conducted via self-administered web survey or by live telephone interviewing.

For panelists who take surveys online: Postcard notifications were mailed to a subset on Aug. 26. Survey invitations were sent out in two separate launches: soft launch and full launch. Sixty panelists were included in the soft launch, which began with an initial invitation sent on Aug. 26. All remaining English- and Spanish-speaking sampled online panelists were included in the full launch and were sent an invitation on Aug. 27.

[Table: Invitation and reminder dates for web respondents, ATP Wave 153]

Panelists participating online were sent an email invitation and up to two email reminders if they did not respond to the survey. ATP panelists who consented to SMS messages were sent an SMS invitation with a link to the survey and up to two SMS reminders.

For panelists who take surveys over the phone with a live interviewer: Prenotification postcards were mailed on Aug. 21, and reminder postcards were mailed on Aug. 26. Soft launch took place on Aug. 26 and involved dialing until a total of five interviews had been completed. All remaining English- and Spanish-speaking sampled phone panelists’ numbers were dialed throughout the remaining field period. Panelists who take surveys via phone can receive up to six calls from trained SSRS interviewers.

Data quality checks

To ensure high-quality data, Center researchers performed data quality checks to identify respondents showing patterns of satisficing. This includes checking whether respondents left questions blank at very high rates or always selected the first or last answer presented. Based on these checks, seven ATP respondents were removed from the survey dataset prior to weighting and analysis.
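
The two screening rules described above can be sketched as follows. This is a simplified illustration with invented data, thresholds, and layout, not the Center's actual procedure:

```python
# Flag respondents showing satisficing patterns: very high rates of
# blank answers, or always selecting the first or last option shown.
# Each answer is a (option_position, n_options) tuple, or None if blank.
# The 50% blank threshold is illustrative, not Pew's actual cutoff.

def flag_satisficers(responses, blank_threshold=0.5):
    flagged = set()
    for rid, answers in responses.items():
        blanks = sum(1 for a in answers if a is None)
        if blanks / len(answers) >= blank_threshold:
            flagged.add(rid)
            continue
        answered = [a for a in answers if a is not None]
        if answered and all(pos == 1 for pos, _ in answered):
            flagged.add(rid)  # straight-lined the first option
        elif answered and all(pos == n for pos, n in answered):
            flagged.add(rid)  # straight-lined the last option
    return flagged

data = {
    "r1": [(1, 4), (1, 5), (1, 3)],        # always picks option 1
    "r2": [(2, 4), None, (3, 5), (1, 3)],  # ordinary respondent
    "r3": [None, None, None, (2, 4)],      # 75% blank
}
flagged = flag_satisficers(data)  # {"r1", "r3"}
```

In practice such rules are applied alongside manual review, since a respondent can legitimately skip many items or hold consistent views that happen to sit at one end of a scale.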

The ATP data is weighted in a process that accounts for multiple stages of sampling and nonresponse that occur at different points in the panel survey process. First, each panelist begins with a base weight that reflects their probability of recruitment into the panel. These weights are then calibrated to align with the population benchmarks in the accompanying table to correct for nonresponse to recruitment surveys and panel attrition. If only a subsample of panelists was invited to participate in the wave, this weight is adjusted to account for any differential probabilities of selection.

Among the panelists who completed the survey, this weight is then calibrated again to align with the population benchmarks identified in the accompanying table and trimmed at the 1st and 99th percentiles to reduce the loss in precision stemming from variance in the weights. Sampling errors and tests of statistical significance take into account the effect of weighting.
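
Calibration to population benchmarks is commonly done by raking (iterative proportional fitting). A minimal sketch, with invented respondents, base weights, and targets — the ATP's actual weighting uses many more dimensions, plus the base-weight and trimming steps described above:

```python
# Raking: repeatedly scale weights so that the weighted margin for each
# category matches its population benchmark share, cycling through the
# weighting dimensions until the margins stabilize.
# Respondents, base weights, and targets below are invented.

def rake(rows, weights, targets, n_iter=50):
    w = list(weights)
    total = sum(w)
    for _ in range(n_iter):
        for dim, shares in targets.items():
            for cat, share in shares.items():
                idx = [i for i, r in enumerate(rows) if r[dim] == cat]
                cur = sum(w[i] for i in idx)
                if cur > 0:
                    for i in idx:
                        w[i] *= share * total / cur
    return w

rows = [
    {"sex": "F", "age": "18-49"},
    {"sex": "F", "age": "50+"},
    {"sex": "M", "age": "18-49"},
    {"sex": "M", "age": "50+"},
]
targets = {"sex": {"F": 0.52, "M": 0.48},
           "age": {"18-49": 0.55, "50+": 0.45}}
w = rake(rows, [1.0] * 4, targets)
# weighted share of women is now 0.52, of ages 18-49 is 0.55
```

A production version would follow this with the trimming step the report describes, capping weights at the 1st and 99th percentiles to limit variance.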

[Table: American Trends Panel weighting dimensions]

The following table shows the unweighted sample sizes and the error attributable to sampling that would be expected at the 95% level of confidence for different groups in the survey.

[Table: Sample sizes and margins of error, ATP Wave 153]
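
For a proportion, the margin of error at the 95% level of confidence is typically computed with a design effect that accounts for the variance added by weighting. A generic sketch — the sample size and design effect here are hypothetical, not Wave 153's actual values:

```python
import math

# 95% margin of error, in percentage points, for an estimated
# proportion p from a weighted sample of size n. deff is the design
# effect; deff = 1 corresponds to an unweighted simple random sample,
# and p = 0.5 gives the most conservative (largest) margin.
def margin_of_error(n, deff=1.0, p=0.5, z=1.96):
    return z * math.sqrt(deff * p * (1 - p) / n) * 100

# e.g., a hypothetical subgroup of n = 1,000 with a design effect of 1.3:
moe = margin_of_error(1000, deff=1.3)  # ~3.5 percentage points
```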

Sample sizes and sampling errors for other subgroups are available upon request. In addition to sampling error, one should bear in mind that question wording and practical difficulties in conducting surveys can introduce error or bias into the findings of opinion polls.

Dispositions and response rates

Table shows Final dispositions, ATP Wave 153

Validated voters

Members of Pew Research Center’s nationally representative American Trends Panel were matched to public voting records from national commercial voter files in an attempt to find records for voting in the 2016 and 2020 general elections. Validated voters are citizens who told us in a post-election survey that they voted in a given election and have a record for voting in that election in a commercial voter file. Nonvoters are citizens who were not found to have a record of voting in any of the voter files or told us they did not vote.

In an effort to accurately locate official voting records, up to three commercial voter files were searched for each panelist. The number of commercial files consulted varied by when a panelist was recruited to the ATP. Three files were used for panelists recruited in 2022 or before, while one file was used for panelists recruited in 2023. Altogether, files from four different vendors were used, including two that serve conservative and Republican organizations and campaigns, one that serves progressive and Democratic organizations and campaigns, and one that is nonpartisan.

Additional details and caveats about the validation of votes in 2016 and 2020 can be found in these methodological reports:

  • An examination of the 2016 electorate, based on validated voters
  • Validated voters methodology

© Pew Research Center 2024

1. AAPOR Task Force on Address-based Sampling. 2016. "AAPOR Report: Address-based Sampling."
2. Email [email protected].
3. The ATP does not use routers or chains in any part of its online data collection protocol, nor are they used to direct respondents to additional surveys.
4. Postcard notifications for web panelists are sent to 1) panelists who were recruited within the last two years and 2) panelists recruited prior to the last two years who opt to continue receiving postcard notifications.

Report materials

  • September 2024 Presidential Preference Detailed Tables
  • Questionnaire



U.S. Food and Drug Administration


Public | In Person

FDA-CDER and HESI | Nitrosamine Ames Data Review and Method Development Workshop, October 15-16, 2024

FDA White Oak Campus, Building 2, Room 2031, 10903 New Hampshire Avenue, Silver Spring, MD.

Background:

The Food and Drug Administration (FDA) Center for Drug Evaluation and Research (CDER) and the Health & Environmental Sciences Institute (HESI) are co-sponsoring the Nitrosamine Ames Data Review and Method Development Workshop, which convenes subject matter experts to review data and discuss recommended test conditions for Enhanced Ames Testing of nitrosamines. Both FDA and HESI have a mutual interest in addressing this scientific problem and publicly disseminating scientific knowledge.

This workshop will review the nitrosamine Ames data generated by the U.S. Food and Drug Administration/National Center for Toxicological Research (US FDA/NCTR), the European Medicines Agency's (EMA) MutaMind project, and the collaborative HESI Genetic Toxicology Technical Committee (GTTC) ring trial, each of which has been testing nitrosamine compounds across various Ames assay conditions. Key aspects of the Ames assay as it pertains to nitrosamines will be addressed through data presentations and panel discussions from these three research programs and additional invited speakers.

Goals and Objectives:

  • Provide a forum for open discussion between regulators and industry on the data generated using enhanced Ames assay methodologies to assess the mutagenic potential of nitrosamines.
  • Present and hold open discussions on specific questions related to the generated data.
  • Discuss recommendations on the relevant and most predictive test conditions of the Ames test for more reliable prediction of the mutagenic potential of nitrosamines.
  • Discuss the correlation of Ames assay results for nitrosamine drug substance related impurities (NDSRIs) and in vivo carcinogenicity/mutagenicity.
  • Generate recommendations for publication in a scientific journal and make them publicly available.

Who Should Attend:

This workshop is intended for scientists from EMA, FDA, HESI, industry, and contract research organizations, as well as other regulatory agencies, who have been directly involved in generating and evaluating Ames assay data for nitrosamines across various assay conditions. The primary audience includes leading academic experts, pharmaceutical companies, regulatory agencies, and nonprofit organizations, along with scientists involved in the assessment of nitrosamine impurities and regulatory scientists who have conducted research on Ames assay assessment of nitrosamines or who evaluate such data. Attendance is limited to invited participants involved in generating data related to enhanced Ames testing.

Contacts:

Connie Chen, Senior Scientific Program Manager, Health & Environmental Sciences Institute (HESI), 740 15th Street NW, Suite 600, Washington, DC 20005. Tel: 202-652-8404

Aisar Atrakchi, Supervisor, Pharmacology/Toxicology, CDER/Office of New Drugs, Food and Drug Administration, 10903 New Hampshire Avenue, White Oak Building 22, Silver Spring, MD 21029. Tel: 301-796-1036. Email: [email protected]

Timothy J. McGovern, Associate Director, Pharmacology/Toxicology, CDER/Office of New Drugs, Food and Drug Administration, 10903 New Hampshire Avenue, White Oak Building 22, Silver Spring, MD 21029. Tel: 240-402-0477. Email: [email protected]


Vital Signs: Suicide Rates and Selected County-Level Factors — United States, 2022

Early Release / September 10, 2024 / 73

Alison L. Cammack, PhD 1 ; Mark R. Stevens, MSPH 1 ; Rebecca B. Naumann, PhD 1 ; Jing Wang, MD 1 ; Wojciech Kaczkowski, PhD 1 ; Jorge Valderrama, PhD 2 ; Deborah M. Stone, ScD 1 ; Robin Lee, PhD 1

What is already known about this topic?

In 2022, approximately 49,000 persons died by suicide in the United States. A comprehensive approach that addresses health-related community factors, such as health care access, social and community context, and economic stability, could help prevent suicide.

What is added by this report?

Suicide rates were lowest in counties with the highest health insurance coverage, broadband Internet access, and income. These factors were more strongly associated with lower suicide rates in some groups that are disproportionately affected by suicide.

What are the implications for public health practice?

Implementing programs, practices, and policies that improve the conditions in which persons are born, grow, live, work, and age might be an important component of suicide prevention efforts. Decision-makers, government agencies, and communities can work together to address community-specific needs and save lives.

[Figure: Five actions to lower suicide risk: strengthen, create, improve, promote, and teach.]

Introduction: Approximately 49,000 persons died by suicide in the United States in 2022, and provisional data indicate that a similar number died by suicide in 2023. A comprehensive approach that addresses upstream community risk and protective factors is an important component of suicide prevention. A better understanding of the role of these factors is needed, particularly among disproportionately affected populations.

Methods: Suicide deaths were identified in the 2022 National Vital Statistics System. County-level factors, identified from federal data sources, included health insurance coverage, household broadband Internet access, and household income. Rates and levels of factors categorized by tertiles were calculated and presented by race and ethnicity, sex, age, and urbanicity.

Results: In 2022, the overall suicide rate was 14.2 per 100,000 population; rates were highest among non-Hispanic American Indian or Alaska Native (AI/AN) persons (27.1), males (23.0), and rural residents (20.0). On average, suicide rates were lowest in counties in the top one third of percentage of persons or households with health insurance coverage (13.0), access to broadband Internet (13.3), and income >100% of the federal poverty level (13.5). These factors were more strongly associated with lower suicide rates in some disproportionately affected populations; among AI/AN persons, suicide rates in counties in the highest tertile of these factors were approximately one half the rates of counties in the lowest tertile.

Conclusions and Implications for Public Health Practice: Higher levels of health insurance coverage, household broadband Internet access, and household income in communities might play a role in reducing suicide rates. Upstream programs, practices, and policies detailed in CDC’s Suicide Prevention Resource for Action can be implemented by decision-makers, government agencies, and communities as they work together to address community-specific needs and save lives.

Introduction

In 2022, approximately 49,000 persons died by suicide in the United States (age-adjusted suicide rate = 14.2 per 100,000 population), and provisional data indicate a similar number of persons died by suicide in 2023 ( 1 ). Suicide was the second leading cause of death among persons aged 10–34 years in 2022 ( 1 ). Several demographic groups are disproportionately affected by suicide in the United States ( 2 ). These groups include males, rural residents, and persons from certain racial and ethnic groups, particularly non-Hispanic American Indian or Alaska Native (AI/AN) persons ( 1 ).

Suicide rates have increased during the last 20 years and remain high ( 1 ): on average one person dies by suicide every 11 minutes ( 1 ). However, despite these concerning data, suicide is a preventable public health problem. Suicide prevention requires a comprehensive public health approach that addresses multiple modifiable suicide risk and protective factors at the individual, relationship, community, and societal levels ( 3 ). Such an approach includes implementation of upstream policies, programs, and practices to prevent persons from reaching a crisis point, and downstream prevention focused on treatment, crisis intervention, and postvention (i.e., activities that reduce risk and promote healing in suicide loss survivors after a suicide has taken place).

A number of nonmedical factors that affect health outcomes, often described as social determinants of health, play an important role in shaping upstream suicide prevention efforts ( 4 ). These factors are the conditions in which persons are born, grow, work, live, and age.* For example, insurance coverage, access to broadband Internet, and higher household income might decrease suicide risk by improving health care access, increasing job opportunities, and providing access to sources of support and information ( 5 – 7 ). However, although evidence of associations between higher levels of these factors and reduced suicide risk exists ( 5 – 7 ), this evidence is more limited among groups disproportionately affected by suicide. To guide opportunities for prevention, CDC examined differences in suicide rates according to three specific county-level factors, overall and within demographic groups: 1) health insurance coverage, 2) broadband Internet access, and 3) income.

Ascertainment of Suicide Deaths

Suicide deaths from the 2022 National Vital Statistics System (NVSS) mortality files were identified using the International Classification of Diseases, Tenth Revision underlying cause of death codes X60–X84, Y87.0, and U03. † , § Demographic factors were extracted, including data on decedent race and ethnicity (i.e., AI/AN, Asian and Native Hawaiian or Pacific Islander [Asian and NH/PI], ¶ Black or African American [Black], White, Hispanic or Latino [Hispanic], and multiracial), sex, and age group (10–24,** 25–44, 45–64, and ≥65 years). Hispanic decedents could be of any race; all other racial and ethnic groups were non-Hispanic. Decedent county of residence was linked to the 2023 U.S. Department of Agriculture Rural-Urban Continuum Codes and categorized as urban or rural. ††
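
The case definition above can be expressed as a simple code filter. A sketch, assuming underlying-cause codes arrive as strings such as "X73" or "U03.9"; this is an illustration of the stated ranges, not NVSS processing code:

```python
# Check whether an ICD-10 underlying-cause-of-death code falls within
# the suicide definition used in this report: X60-X84, Y87.0, or U03
# (including U03 subcodes). Input is a code string such as "X73".

def is_suicide_code(code):
    code = code.strip().upper()
    if code.startswith("X") and code[1:3].isdigit():
        return 60 <= int(code[1:3]) <= 84
    return code == "Y87.0" or code.startswith("U03")
```

Applying this predicate across the mortality file's underlying-cause field would select the suicide deaths counted in the analysis.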

County-Level Factors

Three county-level factors (health insurance coverage, broadband Internet access, and household income) were measured and linked with decedent county of residence. These three factors were selected based on published literature and their relevance to multiple suicide prevention strategies, including those in CDC’s Suicide Prevention Resource for Action ( 3 ). Health insurance coverage was assessed as the percentage of persons in the county who had health insurance, measured using 2021 Small Area Health Insurance Estimates (SAHIE). §§ Broadband Internet access was defined as the percentage of households in the county that had a broadband Internet subscription, measured using 5-year estimates from the 2018–2022 American Community Survey. ¶¶ Income level was derived from the percentage of persons in the county with household incomes >100% of the federal poverty level, measured using 2022 Small Area Income and Poverty Estimates.*** Counties were categorized into tertiles of each individual factor (i.e., counties with the highest, middle, and lowest third for percentage of persons or households with a factor). †††
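
Tertile assignment by statistical ranking, as described in the footnote on the cutoffs, can be sketched like this (the county coverage values are invented):

```python
# Split counties into tertiles of a factor by rank: 0 = lowest third,
# 1 = middle third, 2 = highest third. Ranking the raw values before
# any rounding mirrors the report's note that tertile groups do not
# overlap even when their rounded cutoffs appear to.

def tertiles(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    labels = [0] * len(values)
    n = len(values)
    for rank, i in enumerate(order):
        labels[i] = min(rank * 3 // n, 2)
    return labels

# hypothetical county-level health insurance coverage percentages:
coverage = [85.2, 91.8, 78.4, 93.1, 88.0, 90.5]
groups = tertiles(coverage)  # [0, 2, 0, 2, 1, 1]
```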

Data Analysis

Suicide rates (suicide deaths per 100,000 population) were calculated by tertiles of health insurance coverage, household broadband internet access, and household income, overall and by demographic subgroups. Rates were calculated using U.S. postcensal single race estimates of the July 1, 2022, residential population as denominators. Age-adjusted rates were calculated by the direct method, §§§ using the 2000 U.S. standard population. Differences (examined for each factor individually) in suicide rates between the counties in the highest and lowest tertiles for each factor and counties in the intermediate and lowest tertiles for each factor were compared using Z-tests when the number of suicide deaths was ≥100; p-values <0.05 were considered statistically significant. When the number of suicide deaths was <100, differences in rates were considered significant if CIs, based on a gamma distribution, did not overlap. Rate ratios (RRs) were also computed to quantify associations between levels of factors and suicide rates (i.e., RRs for counties in the highest versus lowest tertiles of factors and RRs for counties in the intermediate versus lowest tertiles of factors). Analyses were conducted using SAS software (version 9.4; SAS Institute) and R software (version 4.4.0; The R Foundation). This activity was reviewed by CDC, deemed not research, and was conducted consistent with applicable federal law and CDC policy. ¶¶¶
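
The direct method of age adjustment weights each age group's crude rate by that group's share of the standard population. A sketch with invented death counts, populations, and standard-population shares (the real calculation uses the official 2000 U.S. standard population weights):

```python
# Direct age adjustment: the adjusted rate is the sum of age-specific
# crude rates weighted by the standard population's age distribution.
# All figures below are invented for illustration.

def age_adjusted_rate(deaths, population, standard):
    std_total = sum(standard.values())
    adjusted = 0.0
    for group in deaths:
        crude = deaths[group] / population[group] * 100_000
        adjusted += crude * standard[group] / std_total
    return adjusted

deaths     = {"10-24": 120, "25-44": 400, "45-64": 380, "65+": 200}
population = {"10-24": 900_000, "25-44": 1_200_000,
              "45-64": 1_000_000, "65+": 600_000}
standard   = {"10-24": 0.28, "25-44": 0.30, "45-64": 0.26, "65+": 0.16}

rate = age_adjusted_rate(deaths, population, standard)  # ~28.9 per 100,000
```

Because the standard weights are fixed, adjusted rates are comparable across counties or groups with different age structures.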

Suicide Deaths and Rates, Overall and by Demographic Factors

In 2022, a total of 49,476 suicides occurred in the United States (age-adjusted rate = 14.2 per 100,000 population) ( Table 1 ). Among all racial and ethnic groups, the highest rates were among AI/AN persons (27.1), followed by White persons (17.6); approximately 75% of all suicides were among White persons (37,481). The suicide rate among males (23.0) was nearly four times that among females (5.9) and was higher among rural residents (20.0) than among urban residents (13.4). By age group, rates were highest among persons aged 25–44 (18.9) and 45–64 years (19.0).

Suicide Rates by County-Level Factors

Overall, average suicide rates were inversely related to each of the three county-level factor tertiles ( Figure 1 ). Suicide rates were highest in counties in the lowest tertile of health insurance coverage (16.4), broadband Internet access (19.2), and household income (15.2), followed by counties in the intermediate tertiles (14.3, 16.5, and 14.8, respectively). The lowest suicide rates occurred in counties in the highest tertiles (13.0, 13.3, and 13.5, respectively). These findings correspond to 26%, 44%, and 13% lower suicide rates in counties in the highest versus lowest tertiles of health insurance coverage, broadband Internet access, and household income, respectively.****
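
The percent differences quoted here follow the report's footnote formula, which divides the rate gap by the highest-tertile rate. Applying it to the rounded rates above reproduces the figures (the report itself used unrounded rates, so its values differ slightly):

```python
# Percent by which the lowest-tertile rate exceeds the highest-tertile
# rate: (rate_lowest - rate_highest) / rate_highest * 100, using the
# rounded rates quoted in the text.

def pct_lower(rate_lowest, rate_highest):
    return (rate_lowest - rate_highest) / rate_highest * 100

insurance = pct_lower(16.4, 13.0)  # ~26%
broadband = pct_lower(19.2, 13.3)  # ~44%
income    = pct_lower(15.2, 13.5)  # ~13%
```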

Suicide Rates and RRs by County-Level Factors and Demographic Groups

Among AI/AN persons, White persons, males, and adults aged 25–44 years, suicide rates were significantly lower among those who lived in counties in the highest and intermediate tertiles for health insurance coverage, broadband Internet access, and income than they were among persons who lived in counties in the lowest tertiles for these factors ( Table 2 ). The magnitude of the RRs (i.e., rate in counties in the highest tertile compared with rate in counties in the lowest tertile) tended to be lowest (indicating that presence of the factor was most protective) in these groups and was particularly low for AI/AN persons, for whom the RRs ranged from 0.44 to 0.49 for counties in the highest versus the lowest factor tertiles ( Figure 2 ). In other demographic groups, suicide rates were less consistently associated with these factors. For example, among females living in the lowest-income tertile counties, suicide rates were similar to those among females living in the highest-income tertile counties (RR = 0.98), and a similar pattern was observed among Black persons with respect to health insurance coverage (RR = 1.03). ††††

These findings highlight the importance of three county-level factors (health insurance coverage, household broadband Internet access, and household income) in relation to suicide rates. Overall, suicide rates in counties with higher levels of health insurance coverage, household broadband Internet access, and household income were lower than rates in counties with lower levels of these factors. There are several potential explanations for how these factors might protect against suicide. Health insurance might facilitate access to mental health services, as well as primary care and crisis intervention ( 8 , 9 ). Broadband Internet, recently referred to as a superdeterminant of health ( 10 ), can connect persons to job prospects, opportunities for social connectedness and support, and expanded access to medical services via telehealth ( 7 , 10 ). Living in higher-income communities is associated with ability to meet basic needs, such as food security and housing stability ( 11 , 12 ).

In addition, this analysis found that overall, higher suicide rates continue to affect certain sociodemographic groups, including rural residents, males, and AI/AN and White populations. For some sociodemographic groups included in the analyses, especially AI/AN persons, the three county-level factors examined might be particularly important. These findings are especially meaningful considering that some of these groups, such as AI/AN persons, are more likely to live in communities with lower levels of these factors, including broadband Internet access ( 13 ). The finding that higher levels of the three assessed factors are more strongly related to lower suicide rates among AI/AN persons and males aligns with previous studies examining economic factors ( 14 , 15 ). In contrast, the factors considered in this analysis were less clearly linked with suicide rates for some groups, such as Black persons. Other risk factors or protective factors not examined in this report might be more relevant among these populations. Additional community or societal factors, such as indicators of structural racism and stigma and norms around help-seeking, might influence the relationship between county-level factors and decreased suicide risk in certain populations ( 16 , 17 ). These findings highlight the need to examine risk and protective factors within populations and incorporate the findings of such research into suicide prevention practices.

A comprehensive approach to suicide prevention that targets both upstream and downstream prevention can promote these factors. This approach is laid out in the new 2024 National Strategy for Suicide Prevention ( https://www.hhs.gov/nssp ), which specifically highlights the importance of upstream prevention strategies. CDC’s Suicide Prevention Resource for Action ( https://www.cdc.gov/suicide/resources/prevention.html ) aligns with the National Strategy and describes policies, programs, and practices with the best available evidence that states and territories, tribes, and communities can implement to address suicide risk and protective factors at the individual, relationship, community, and societal levels ( 3 ). Relevant upstream strategies include strengthening economic supports (e.g., strengthening household financial security, such as through the Supplemental Nutrition Assistance Program and stabilizing housing), improving access and delivery of suicide care (e.g., Zero Suicide §§§§ ), promoting healthy connections (e.g., community engagement), teaching coping and problem-solving skills, and creating protective environments (e.g., creating healthy organizational policies and culture). These strategies are being implemented in populations disproportionately affected by suicide through CDC’s Comprehensive Suicide Prevention Program (CSP) ( https://www.cdc.gov/suicide/programs/csp.html ). For example, in addition to conducting a public health campaign to reduce stigma and training providers in hospital and emergency departments on suicide prevention approaches, the CSP recipient in Vermont is specifically supporting rural populations, including farmers, through peer support networks and increasing providers’ abilities to reach and deliver tele-mental health to these populations using telehealth. 
The CSP recipient in Colorado is not only working with counties and local organizations to promote connectedness for populations at high risk for suicide and providing gatekeeper trainings to help identify and connect persons at risk for suicide with the support services they need but is also working to strengthen community factors that protect against suicide by developing partnerships to support economic stability initiatives, such as food security, affordable housing, and transportation ( https://www.cdc.gov/suicide/csp-profiles/index.html ).

Limitations

The findings in this report are subject to at least five limitations. First, although these findings highlight associations between health insurance coverage, household broadband Internet access, household income, and decreased suicide rates, this study had an ecologic design and thus did not make causal inferences. The possibility of confounding other than by demographic factors was not addressed. Second, it was not possible to examine some disproportionately affected populations, including veterans, persons with disabilities, and sexual and gender minorities ( 2 ). Third, factors were measured at the county level; smaller geographic units (e.g., official U.S. census tracts) might better represent communities and be more closely associated with reduced suicide risk ( 18 ). Fourth, rates by race and ethnicity could reflect underreporting of deaths in the vital statistics data, particularly for AI/AN and Hispanic persons, thereby underestimating rates in these populations ( 19 , 20 ). Finally, other county-level factors that might be relevant to suicide prevention were not examined in this analysis.

Implications for Public Health Practice

Improving the conditions where persons are born, grow, work, live, and age might reduce suicide deaths ( 4 ). Decision-makers, government agencies, and communities can work together to implement programs, practices, and policies that increase access to health insurance and broadband Internet and promote economic supports; this approach is especially important for populations disproportionately affected by suicide. Combined with downstream actions that support persons at increased or immediate risk for suicide (e.g., crisis care or the 988 Suicide & Crisis Lifeline; https://www.988lifeline.org ), an upstream approach that promotes these factors might be an important component of suicide prevention. More attention to such upstream strategies that prevent suicide crises before they start has the potential to accelerate public health’s ability to save lives.

Acknowledgment

Shikhar Kumar, Guidehouse.

Corresponding author: Alison L. Cammack, [email protected] .

1 Division of Injury Prevention, National Center for Injury Prevention and Control, CDC; 2 Guidehouse, McLean, Virginia.

All authors have completed and submitted the International Committee of Medical Journal Editors form for disclosure of potential conflicts of interest. No potential conflicts of interest were disclosed.

* https://www.cdc.gov/about/priorities/why-is-addressing-sdoh-important.html

† https://www.cdc.gov/nchs/nvss/deaths.htm

§ To incorporate data from all 50 states, vital records from Connecticut supplemented NVSS files. This strategy was necessary for analyses that incorporated county-level measures because 2022 NVSS county information is classified based on Connecticut’s eight counties, but all U.S. Census Bureau products from 2022 forward only contain Connecticut’s nine planning regions as county-equivalents. To fill this gap, Connecticut vital statistics provided data for persons who died by suicide in Connecticut, representing 377 of 398 suicide deaths among Connecticut residents.

¶ Asian and NH/PI were combined because the number of deaths for NH/PI alone would have yielded suppressed rates.

** Suicide deaths among persons aged <10 years were suppressed because of low death counts.

†† The U.S. Department of Agriculture urbanicity scheme was used because it is the most current urbanicity scheme. Rural-Urban Continuum Codes 1–3 were coded as urban, and Codes 4–9 were coded as rural. https://www.ers.usda.gov/data-products/rural-urban-continuum-codes

§§ SAHIE measures any type of health insurance coverage. SAHIE estimates reflect county estimates of health insurance coverage among persons aged <65 years because health insurance coverage among persons aged ≥65 years is nearly universal. All ages were included in analyses of overall rates and by race and ethnicity, sex, and urbanicity because subanalyses of the ≥65 years age group demonstrated associations between county-level health insurance coverage and suicide rates. https://www.census.gov/programs-surveys/sahie.html

¶¶ https://www.census.gov/programs-surveys/acs

*** https://www.census.gov/programs-surveys/saipe.html

††† The county tertile cutoffs for the percentage of residents or households with a given factor were as follows: health insurance coverage: 53.7%–87.0%, 87.1%–91.7% and 91.7%–97.6%; broadband Internet access: 36.0%–80.6%, 80.6%–86.0% and 86.0%–100%; and income >100% of the federal poverty level: 57.6%–83.9%, 84.0%–88.3% and 88.4%–96.9%. Percentages were rounded to one decimal place for readability, but groups do not overlap; statistical ranking was used to split counties into tertile groups before rounding.

§§§ https://wonder.cdc.gov/wonder/help/ucd-expanded.html#Age%20Adjustment

¶¶¶ 45 C.F.R. part 46, 21 C.F.R. part 56; 42 U.S.C. Sect. 241(d); 5 U.S.C. Sect. 552a; 44 U.S.C. Sect. 3501 et seq.

**** Percent reduction was calculated by the formula: ([rate for highest tertile of factor – rate for tertile level of factor] / rate for highest tertile of factor) × 100, using exact, unrounded rates.

†††† RRs were calculated using exact, unrounded rates.
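The two footnote formulas can be written directly (a sketch with hypothetical rates, not values from the tables):

```python
def percent_reduction(rate_highest_tertile, rate_other_tertile):
    """([rate for highest tertile - rate for tertile level] / rate for highest tertile) x 100."""
    return (rate_highest_tertile - rate_other_tertile) / rate_highest_tertile * 100

def rate_ratio(rate_a, rate_b):
    """RR computed from exact, unrounded rates."""
    return rate_a / rate_b

# Hypothetical unrounded rates per 100,000:
print(percent_reduction(20.0, 15.0))  # 25.0
print(rate_ratio(20.0, 15.0))         # ~1.33
```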

§§§§ https://zerosuicide.edc.org

  • CDC. National Center for Health Statistics mortality data on CDC WONDER. Atlanta, GA: US Department of Health and Human Services, CDC; 2024. Accessed July 25, 2024. https://wonder.cdc.gov/mcd.html
  • CDC. Suicide prevention: health disparities in suicide. Atlanta, GA: US Department of Health and Human Services, CDC; 2024. https://www.cdc.gov/suicide/disparities/index.html
  • CDC. Suicide prevention resource for action. Atlanta, GA: US Department of Health and Human Services, CDC; 2022. https://www.cdc.gov/suicide/resources/prevention.html
  • Pirkis J, Gunnell D, Hawton K, et al. A public health, whole-of-government approach to national suicide prevention strategies. Crisis 2023;44:85–92. https://doi.org/10.1027/0227-5910/a000902 PMID:36960842
  • Steelesmith DL, Fontanella CA, Campo JV, Bridge JA, Warren KL, Root ED. Contextual factors associated with county-level suicide rates in the United States, 1999 to 2016. JAMA Netw Open 2019;2:e1910936. https://doi.org/10.1001/jamanetworkopen.2019.10936 PMID:31490540
  • Kerr WC, Kaplan MS, Huguet N, Caetano R, Giesbrecht N, McFarland BH. Economic recession, alcohol, and suicide rates: comparative effects of poverty, foreclosure, and job loss. Am J Prev Med 2017;52:469–75. https://doi.org/10.1016/j.amepre.2016.09.021 PMID:27856114
  • Benda NC, Veinot TC, Sieck CJ, Ancker JS. Broadband internet access is a social determinant of health! Am J Public Health 2020;110:1123–5. https://doi.org/10.2105/AJPH.2020.305784 PMID:32639914
  • Sommers BD, Gawande AA, Baicker K. Health insurance coverage and health—what the recent evidence tells us. N Engl J Med 2017;377:586–93. https://doi.org/10.1056/NEJMsb1706645 PMID:28636831
  • Walker ER, Cummings JR, Hockenberry JM, Druss BG. Insurance status, use of mental health services, and unmet need for mental health care in the United States. Psychiatr Serv 2015;66:578–84. https://doi.org/10.1176/appi.ps.201400248 PMID:25726980
  • Bauerly BC, McCord RF, Hulkower R, Pepin D. Broadband access as a public health issue: the role of law in expanding broadband access and connecting underserved communities for better health outcomes. J Law Med Ethics 2019;47(Supp 2):39–42. https://doi.org/10.1177/1073110519857314 PMID:31298126
  • Leonard T, Hughes AE, Donegan C, Santillan A, Pruitt SL. Overlapping geographic clusters of food security and health: where do social determinants and health outcomes converge in the U.S? SSM Popul Health 2018;5:160–70. https://doi.org/10.1016/j.ssmph.2018.06.006 PMID:29998188
  • Hepburn P, Rutan DQ, Desmond M. Beyond urban displacement: suburban poverty and eviction. Urban Aff Rev 2023;59:759–92. https://doi.org/10.1177/10780874221085676
  • US Census Bureau. Broadband access in tribal areas lags rest of the nation. Washington, DC: US Department of Commerce, US Census Bureau; 2024. https://www.census.gov/library/stories/2024/06/broadband-access-tribal-areas.html
  • Reeves A, McKee M, Stuckler D. Economic suicides in the Great Recession in Europe and North America. Br J Psychiatry 2014;205:246–7. https://doi.org/10.1192/bjp.bp.114.144766 PMID:24925987
  • Akee R, Feir D, Gorzig MM, Myers S Jr. Native American “deaths of despair” and economic conditions. Res Soc Stratification Mobility 2024;89:100880. https://doi.org/10.1016/j.rssm.2023.100880
  • Alvarez K, Polanco-Roman L, Samuel Breslow A, Molock S. Structural racism and suicide prevention for ethnoracially minoritized youth: a conceptual framework and illustration across systems. Am J Psychiatry 2022;179:422–33. https://doi.org/10.1176/appi.ajp.21101001 PMID:35599542
  • Misra S, Jackson VW, Chong J, et al. Systematic review of cultural aspects of stigma and mental illness among racial and ethnic minority groups in the United States: implications for interventions. Am J Community Psychol 2021;68:486–512. https://doi.org/10.1002/ajcp.12516 PMID:33811676
  • Rehkopf DH, Buka SL. The association between suicide and the socio-economic characteristics of geographical areas: a systematic review. Psychol Med 2006;36:145–57. https://doi.org/10.1017/S003329170500588X PMID:16420711
  • Arias E, Heron M, Hakes J; National Center for Health Statistics. The validity of race and Hispanic-origin reporting on death certificates in the United States: an update. Vital Health Stat 2 2016;2:1–21. PMID:28436642
  • Arias E, Xu J, Curtin S, Bastian B, Tejada-Vera B. Mortality profile of the non-Hispanic American Indian or Alaska Native population, 2019. Natl Vital Stat Rep 2021;70:1–27. https://doi.org/10.15620/cdc:110370 PMID:34842523
Demographic group Suicide deaths Rate*
Race and ethnicity†,§
AI/AN 650 27.1
Asian and NH/PI 1,554 7.1
Black or African American 3,826 8.9
White 37,481 17.6
Hispanic or Latino 5,122 8.1
Multiracial 682 10.5
Sex
Female 10,203 5.9
Male 39,273 23.0
Age group, yrs¶,**,††
10–24 6,533 10.0
25–44 16,848 18.9
45–64 15,645 19.0
≥65 10,438 18.1
Urbanicity¶¶
Urban 40,096 13.4
Rural 9,359 20.0

Abbreviations: AI/AN = American Indian or Alaska Native; NH/PI = Native Hawaiian or Pacific Islander.

* Suicide deaths per 100,000 population.

† Age-adjusted rates, as described at https://wonder.cdc.gov/wonder/help/ucd-expanded.html#Age-Adjusted%20Rates . Hispanic or Latino (Hispanic) decedents could be of any race; all other racial and ethnic groups were non-Hispanic.

§ Race or ethnicity was missing for 161 deaths.

¶ Crude rates.

** Age was missing for three deaths.

†† Data for persons aged <10 years were suppressed because of low death counts.

§§ Age-adjusted rates (calculated via the direct method, using the 2000 U.S. standard population) used 10 age group categories for age adjustment: 0–4, 5–14, 15–24, 25–34, 35–44, 45–54, 55–64, 65–74, 75–84, and ≥85 years. National Vital Statistics System data were used for all states except Connecticut, where state vital records were used (data were provided for 377 of 398 suicide deaths among Connecticut residents).

¶¶ Rural-Urban Continuum Codes 1–3 were coded as urban, and Codes 4–9 were coded as rural. https://www.ers.usda.gov/data-products/rural-urban-continuum-codes/

FIGURE 1 . Suicide rates,* by tertiles of selected county-level factors † , § , ¶ , ** — National Vital Statistics System, †† United States, 2022

Abbreviation: FIPS = Federal Information Processing Standard.

* Age-adjusted rates (calculated via direct method, using 2000 U.S. standard population) used 10 categories for age adjustment: 0–4, 5–14, 15–24, 25–34, 35–44, 45–54, 55–64, 65–74, 75–84, and ≥85 years.
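The direct method referenced here can be sketched as follows (illustrative only: three invented age groups and stand-in standard-population weights, not the 10-group 2000 U.S. standard population the analysis used):

```python
def age_adjusted_rate(deaths, population, standard_pop):
    """Direct method: age-specific rates weighted by standard-population shares."""
    total_std = sum(standard_pop)
    adjusted = sum((d / p) * (s / total_std)
                   for d, p, s in zip(deaths, population, standard_pop))
    return adjusted * 100_000  # per 100,000 population

deaths = [10, 50, 40]                       # invented age-specific death counts
population = [200_000, 300_000, 100_000]    # invented age-specific populations
standard_pop = [250_000, 350_000, 150_000]  # stand-in standard-population weights
print(round(age_adjusted_rate(deaths, population, standard_pop), 1))  # 17.4
```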

† Percentage of persons with health insurance coverage. Connecticut and the Valdez-Cordova Census Area, Alaska, and its updated split FIPS codes (Chugach and Copper River Census Areas, Alaska) were excluded. Data were not available for Kalawao County, Hawaii. Data for 2021 are available at https://www.census.gov/programs-surveys/sahie.html .

§ Percentage of households with a broadband Internet subscription. Valdez-Cordova Census Area, Alaska, and its updated split FIPS codes (Chugach and Copper River Census Areas, Alaska) were excluded. Five-year estimates (2018–2022) are available at https://www.census.gov/programs-surveys/acs .

¶ Percentage of persons living in a household with income >100% of the federal poverty level. Valdez-Cordova Census Area, Alaska, and its updated split FIPS codes (Chugach and Copper River Census Areas, Alaska) were excluded. Data were not available for Kalawao County, Hawaii. Data for 2022 are available at https://www.census.gov/programs-surveys/saipe.html .

** The county tertile cutoffs for the percentage of residents or households with a given factor were as follows: health insurance coverage: 53.7%–87.0%, 87.1%–91.7%, and 91.7%–97.6%; broadband Internet access: 36.0%–80.6%, 80.6%–86.0%, and 86.0%–100%; and income >100% of the federal poverty level: 57.6%–83.9%, 84.0%–88.3%, and 88.4%–96.9%. Percentages were rounded to one decimal place for readability, but groups do not overlap; statistical ranking was used to split counties into tertile groups before rounding.

†† Data from state vital records were used for 377 of 398 suicide deaths among Connecticut residents.

Characteristic   Lowest tertile   Intermediate tertile   Highest tertile
                 Deaths   Rate    Deaths   Rate          Deaths   Rate
Health insurance coverage¶,**,††,§§
AI/AN 377 35.0 188 24.5*** 85 15.4***
Asian and NH/PI 243 8.0 444 6.8*** 851 7.0
Black or African American 1,151 9.0 1,393 8.7 1,246 9.2
White 9,855 22.2 11,809 18.6*** 15,513 15.1***
Hispanic or Latino 1,979 9.0 1,777 7.7*** 1,325 7.5***
Multiracial 139 10.2 191 9.8 348 11.2
Female 2,782 6.7 3,189 5.8*** 4,135 5.6***
Male 10,984 26.5 12,667 23.3*** 15,316 20.9***
10–24 1,874 11.6 2,172 10.2*** 2,446 9.1***
25–44 4,768 22.1 5,537 19.0*** 6,409 17.1***
45–64 4,211 21.2 4,883 18.8*** 6,408 17.9***
≥65 2,911 20.6 3,260 18.5*** 4,185 16.5***
Urban 10,396 15.3 12,947 13.5*** 16,403 12.4***
Rural 3,370 21.1 2,909 20.1 3,048 18.8***
Broadband Internet access**,****
AI/AN 261 41.0 138 29.7*** 251 19.3***
Asian and NH/PI 17 8.2 100 7.3 1,435 7.0
Black or African American 267 8.3 843 9.7*** 2,711 8.8
White 3,371 22.7 8,009 19.8*** 26,086 16.5***
Hispanic or Latino 296 9.5 824 8.9 3,999 7.9***
Multiracial 40 13.5 84 9.9 558 10.6
Female 758 7.2 1,905 6.3*** 7,536 5.7***
Male 3,503 31.4 8,125 27.0*** 27,623 21.3***
10–24 582 13.5 1,219 10.6*** 4,725 9.6***
25–44 1,482 28.4 3,510 23.5*** 11,846 17.2***
45–64 1,259 22.8 3,107 21.2*** 11,273 18.1***
≥65 937 21.5 2,191 19.8*** 7,310 17.3***
Urban 1,080 16.7 6,012 14.8*** 33,001 13.0***
Rural 3,181 20.3 4,018 19.9 2,158 19.7
Income >100% of the federal poverty level**,§§,††††
AI/AN 343 37.9 159 22.6*** 148 18.5***
Asian and NH/PI 174 6.9 499 7.3 879 7.0
Black or African American 1,216 9.1 1,359 9.1 1,246 8.7
White 7,036 20.0 13,196 19.1*** 17,234 15.8***
Hispanic or Latino 1,082 8.2 2,085 8.1 1,952 8.1
Multiracial 95 9.4 212 9.6 375 11.4
Female 1,949 5.9 3,544 6.0 4,706 5.8
Male 8,026 25.1 14,027 23.9*** 17,198 21.4***
10–24 1,398 10.4 2,227 10.0 2,901 9.8
25–44 3,648 21.3 6,092 19.8*** 7,098 17.2***
45–64 2,905 19.2 5,533 19.7 7,201 18.3***
≥65 2,020 18.6 3,716 18.7 4,702 17.4***
Urban 6,271 13.3 14,020 13.8*** 19,802 13.1
Rural 3,704 20.5 3,551 20.1 2,102 18.9***

Abbreviations: AI/AN = American Indian or Alaska Native; FIPS = Federal Information Processing Standard; NH/PI = Native Hawaiian or Pacific Islander.

* Data from state vital records were used for 377 of 398 suicide deaths among Connecticut residents.

† The county tertile cutoffs for the percentage of residents or households with a given factor were as follows: health insurance coverage: 53.7%–87.0%, 87.1%–91.7%, and 91.7%–97.6%; broadband Internet access: 36.0%–80.6%, 80.6%–86.0%, and 86.0%–100%; and income >100% of the federal poverty level: 57.6%–83.9%, 84.0%–88.3%, and 88.4%–96.9%. Percentages were rounded to one decimal place for readability, but groups do not overlap; statistical ranking was used to split counties into tertile groups before rounding.

§ Suicide deaths per 100,000 population.

¶ Percentage of persons with health insurance coverage. Data for 2021 are available at https://www.census.gov/programs-surveys/sahie.html .

** Valdez-Cordova Census Area, Alaska, and its updated split FIPS codes (Chugach and Copper River Census Areas, Alaska) were excluded.

†† Connecticut was excluded.

§§ Data were not available for Kalawao County, Hawaii.

¶¶ Age-adjusted rates (calculated via the direct method, using the 2000 U.S. standard population) used 10 categories for age adjustment: 0–4, 5–14, 15–24, 25–34, 35–44, 45–54, 55–64, 65–74, 75–84, and ≥85 years. Hispanic or Latino decedents could be of any race; all other racial and ethnic groups were non-Hispanic.

*** p<0.05 for difference from counties in the lowest tertile of the factor, based on a Z-test when there were >100 deaths. When there were <100 deaths, differences in rates were considered significant if CIs based on a gamma distribution did not overlap.

††† Crude rates.

§§§ Data for persons aged <10 years were suppressed because of low death counts.

¶¶¶ Rural-Urban Continuum Codes 1–3 were coded as urban, and Codes 4–9 were coded as rural. https://www.ers.usda.gov/data-products/rural-urban-continuum-codes/

**** Percentage of households with a broadband Internet subscription. Five-year estimates (2018–2022) are available at https://www.census.gov/programs-surveys/acs .

†††† Percentage of persons living in a household with income >100% of the federal poverty level. Data for 2022 are available at https://www.census.gov/programs-surveys/saipe.html .
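The Z-test mentioned in the significance footnote is not spelled out in the text; a common form for comparing two mortality rates (our assumption of the standard NCHS approximation SE ≈ R/√D, not necessarily the authors' exact implementation) looks like this, using the White lowest- vs. highest-tertile insurance figures from the table:

```python
import math

def rate_z(rate1, deaths1, rate2, deaths2):
    """Z statistic for a rate difference, with SE(R) approximated as R / sqrt(D)."""
    se1 = rate1 / math.sqrt(deaths1)
    se2 = rate2 / math.sqrt(deaths2)
    return (rate1 - rate2) / math.hypot(se1, se2)

# White decedents: lowest tertile (22.2 per 100,000; 9,855 deaths) vs.
# highest tertile (15.1 per 100,000; 15,513 deaths) of county insurance coverage.
z = rate_z(22.2, 9_855, 15.1, 15_513)
print(abs(z) > 1.96)  # True -> p < 0.05
```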

FIGURE 2 . Associations between selected county-level factors* , † , § , ¶ and suicide rates,** by demographic group †† , §§ , ¶¶ — National Vital Statistics System,*** United States, 2022 †††

Abbreviations: AI/AN = American Indian or Alaska Native; FIPS = Federal Information Processing Standard; NH/PI = Native Hawaiian or Pacific Islander.

* Percentage of persons with health insurance coverage. Connecticut and the Valdez-Cordova Census Area, Alaska, and its updated split FIPS codes (Chugach and Copper River Census Areas, Alaska) were excluded. Data were not available for Kalawao County, Hawaii. Data for 2021 are available at https://www.census.gov/programs-surveys/sahie.html .

† Percentage of households with a broadband Internet subscription. Valdez-Cordova Census Area, Alaska, and its updated split FIPS codes (Chugach and Copper River Census Areas, Alaska) were excluded. Five-year estimates (2018–2022) are available at https://www.census.gov/programs-surveys/acs .

§ Percentage of persons living in a household with income >100% of the federal poverty level. Valdez-Cordova Census Area, Alaska, and its updated split FIPS codes (Chugach and Copper River Census Areas, Alaska) were excluded. Data for 2022 are available at https://www.census.gov/programs-surveys/saipe.html .

¶ The county tertile cutoffs for the percentage of residents or households with a given factor were as follows: health insurance coverage: 53.7%–87.0%, 87.1%–91.7%, and 91.7%–97.6%; broadband Internet access: 36.0%–80.6%, 80.6%–86.0%, and 86.0%–100%; and income >100% of the federal poverty level: 57.6%–83.9%, 84.0%–88.3%, and 88.4%–96.9%. Percentages were rounded to one decimal place for readability, but groups do not overlap; statistical ranking was used to split counties into tertile groups before rounding.

** Rates for race and ethnicity, sex, and urbanicity were age-adjusted (calculated via the direct method, using the 2000 U.S. standard population), with 10 categories used for age adjustment: 0–4, 5–14, 15–24, 25–34, 35–44, 45–54, 55–64, 65–74, 75–84, and ≥85 years. Crude rates were used for age-stratified groups.

†† Hispanic or Latino (Hispanic) decedents could be of any race; all other racial and ethnic groups were non-Hispanic.

§§ Persons aged <10 years were not included in age-stratified rate ratios because of low death counts.

¶¶ Rural-Urban Continuum Codes 1–3 were coded as urban, and Codes 4–9 were coded as rural. https://www.ers.usda.gov/data-products/rural-urban-continuum-codes/

*** Data from state vital records were used for 377 of 398 suicide deaths among Connecticut residents.

††† The x-axis is plotted on the log scale.

Suggested citation for this article: Cammack AL, Stevens MR, Naumann RB, et al. Vital Signs : Suicide Rates and Selected County-Level Factors — United States, 2022. MMWR Morb Mortal Wkly Rep. ePub: 10 September 2024. DOI: http://dx.doi.org/10.15585/mmwr.mm7337e1 .

MMWR and Morbidity and Mortality Weekly Report are service marks of the U.S. Department of Health and Human Services. Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of Health and Human Services. References to non-CDC sites on the Internet are provided as a service to MMWR readers and do not constitute or imply endorsement of these organizations or their programs by CDC or the U.S. Department of Health and Human Services. CDC is not responsible for the content of pages found at these sites. URL addresses listed in MMWR were current as of the date of publication.

