In survey research, response rate, also known as completion rate or return rate, is the number of people who answered the survey divided by the number of people in the sample. It is usually expressed as a percentage. The term is also used in direct marketing to refer to the number of people who respond to an offer.
The general consensus in academic survey research is to choose one of the six definitions summarized by the American Association for Public Opinion Research (AAPOR). These definitions are endorsed by the National Research Council and the Journal of the American Medical Association, among other recognized institutions. They are:
- Response Rate 1 (RR1) - or the minimum response rate, is the number of complete interviews divided by the number of interviews (complete plus partial) plus the number of non-interviews (refusals and break-offs plus non-contacts plus others) plus all cases of unknown eligibility (unknown if housing unit, plus unknown, other).
- Response Rate 2 (RR2) - is defined as RR1, but counts partial interviews as respondents.
- Response Rate 3 (RR3) - estimates what proportion of the cases of unknown eligibility is actually eligible. Cases estimated to be ineligible are excluded from the denominator. The estimation method must be explicitly stated with RR3.
- Response Rate 4 (RR4) - allocates cases of unknown eligibility as in RR3, but also includes partial interviews as respondents, as in RR2.
- Response Rate 5 (RR5) - is a special case of RR3 in that it assumes there are no eligible cases among the cases of unknown eligibility, or covers the rare case in which there are no cases of unknown eligibility at all. RR5 is only appropriate when it is valid to assume that none of the unknown cases are eligible, or when there are no unknown cases.
- Response Rate 6 (RR6) - makes the same assumptions as RR5 and also includes partial interviews as respondents. RR6 represents the maximum response rate.
The six AAPOR definitions vary in whether partially completed surveys are counted and in how the researcher handles cases of unknown eligibility. Definition #1, for example, does NOT include partially completed surveys in the numerator, while definition #2 does. Definitions #3-6 deal with the uncertainty surrounding potential respondents who could not be contacted. For example, suppose there was no answer at 10 of the doors of households you tried to survey. Perhaps 5 of those households are known to be eligible for your survey because the neighbors told you who lives there, but the other 5 are completely unknown: maybe the occupants match your target population, maybe they do not. Whether these unknown cases count against your response rate depends on the definition you use.
Example: if 1,000 surveys are sent by mail and 257 are successfully completed and returned, then the response rate is 25.7% (the sketch below applies all six AAPOR definitions to a fuller, hypothetical set of case dispositions).
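As an illustration, the following Python sketch computes all six rates for one hypothetical set of final case dispositions. The counts and the eligibility estimate e are invented for this example (chosen to total 1,000 cases so that RR1 reproduces the 25.7% of the mail example above); the formulas themselves are those of the AAPOR Standard Definitions.

```python
# Hypothetical final case dispositions for a survey of 1,000 sampled cases
# (the counts and the eligibility estimate e are invented for illustration).
I = 257     # complete interviews
P = 30      # partial interviews
R = 280     # refusals and break-offs
NC = 200    # non-contacts
O = 33      # other non-interviews
UH = 120    # unknown if the address is an eligible housing unit
UO = 80     # unknown eligibility, other
e = 0.5     # estimated share of unknown-eligibility cases that are eligible
            # (the estimation method must be reported alongside RR3/RR4)

non_interviews = R + NC + O
unknown = UH + UO

RR1 = I / ((I + P) + non_interviews + unknown)             # minimum response rate
RR2 = (I + P) / ((I + P) + non_interviews + unknown)
RR3 = I / ((I + P) + non_interviews + e * unknown)
RR4 = (I + P) / ((I + P) + non_interviews + e * unknown)
RR5 = I / ((I + P) + non_interviews)
RR6 = (I + P) / ((I + P) + non_interviews)                 # maximum response rate

for name, rate in (("RR1", RR1), ("RR2", RR2), ("RR3", RR3),
                   ("RR4", RR4), ("RR5", RR5), ("RR6", RR6)):
    print(f"{name}: {rate:.1%}")   # RR1 here is 25.7%, matching the mail example
```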
Importance
The survey response rate is the number of people who were interviewed divided by the total number of people in the sample who were eligible to participate and should have been interviewed. A low response rate can give rise to sampling bias if nonresponse is unequal among participants with respect to exposure and/or outcome. Such bias is known as nonresponse bias.
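To see how nonresponse bias arises, here is a minimal simulation (all numbers are invented) in which the probability of responding depends on the attribute being measured, so the respondents are not a random subset and the respondent-only estimate drifts away from the true population value.

```python
import random

random.seed(0)

# Hypothetical population of 100,000 people. 30% have the attribute being
# measured, and people WITH the attribute are more likely to answer the survey,
# so nonresponse is not random with respect to the outcome.
N = 100_000
population = []
for _ in range(N):
    has_attribute = random.random() < 0.30           # true prevalence: 30%
    p_respond = 0.60 if has_attribute else 0.20      # response depends on the attribute
    responds = random.random() < p_respond
    population.append((has_attribute, responds))

respondent_values = [y for y, r in population if r]

true_prevalence = sum(y for y, _ in population) / N
survey_estimate = sum(respondent_values) / len(respondent_values)

print(f"response rate:       {len(respondent_values) / N:.1%}")  # ~32%
print(f"true prevalence:     {true_prevalence:.1%}")             # ~30%
print(f"respondent estimate: {survey_estimate:.1%}")             # ~56%, biased upward
```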
For many years, a survey's response rate was viewed as an important indicator of survey quality. Many observers presumed that higher response rates assure more accurate survey results (Aday 1996; Babbie 1990; Backstrom and Hursh 1963; Rea and Parker 1997). But because measuring the relation between nonresponse and the accuracy of a survey statistic is complex and expensive, until recently few studies had been designed to provide rigorous empirical evidence documenting the consequences of lower response rates.
Such studies have finally been conducted in recent years, and several have concluded that the expense of increasing a survey's response rate often is not justified by the resulting difference in survey accuracy.
One early example of such a finding was reported by Visser, Krosnick, Marquette and Curtin (1996), who showed that surveys with lower response rates (near 20%) yielded more accurate measurements than surveys with higher response rates (near 60 or 70%). In another study, Keeter et al. (2006) compared results of a 5-day survey employing the Pew Research Center's usual methodology (with a 25% response rate) with results from a more rigorous survey conducted over a much longer field period and achieving a higher response rate of 50%. In 77 out of 84 comparisons, the two surveys yielded results that were statistically indistinguishable. Among the items that manifested significant differences across the two surveys, the differences in the proportion of people giving a particular answer ranged from 4 percentage points to 8 percentage points.
A study by Curtin et al. (2000) tested the effect of lower response rates on estimates of the Index of Consumer Sentiment (ICS). They assessed the impact of excluding respondents who initially refused to cooperate (which reduces the response rate by 5-10 percentage points), respondents who required more than five calls to complete the interview (reducing the response rate by about 25 percentage points), and those who required more than two calls (a reduction of about 50 percentage points). They found no effect of excluding these respondent groups on ICS estimates based on monthly samples of hundreds of respondents. For yearly estimates, based on thousands of respondents, the exclusion of people who required more calls (though not of initial refusers) had a very small effect.
Holbrook et al. (2005) assessed whether lower response rates are associated with less demographic representativeness of a sample. Examining the results of 81 national surveys with response rates varying from 5 percent to 54 percent, they found that surveys with much lower response rates were less demographically representative within the range examined, but only slightly so.
Choung et al. (2013) looked at the response rate of a community-based survey of mailed questionnaires about functional gastrointestinal disorders. The response rate of their community survey was 52%. They then took a random sample of 428 responders and 295 nonresponders for medical record abstraction and compared nonresponders with responders. They found that responders had a significantly higher body mass index and more health-care-seeking behavior for non-GI problems. However, except for diverticulosis and skin diseases, there was no significant difference between responders and nonresponders in terms of gastrointestinal symptoms or specific medical diagnoses.
Dvir and Gafni (2018) examined whether consumer response rates are influenced by the amount of information provided. In a series of large-scale web experiments (n = 535 and n = 27,900), they compared variants of a marketing web page (also called a landing page), focusing on how the amount of content affects users' willingness to provide their e-mail address (a behavior termed conversion rate in marketing). The results showed significantly higher response rates on the shorter pages, indicating that, contrary to previous work, not all response-rate theory holds online.
However, despite these recent research studies, a higher response rate is still preferable because the missing data are not random. There is no satisfactory statistical solution for dealing with missing data that may not be random. Assuming an extreme bias in the respondents is one recommended way of handling a low survey response rate. A high response rate (>80%) from a small random sample is preferable to a low response rate from a large sample.
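One way to apply such an extreme-bias assumption is a simple worst-case bound: every nonrespondent is treated as if they had all answered one way, or all the other. The sketch below (the percentages are hypothetical) shows how much wider the resulting bounds become when the response rate is low.

```python
def worst_case_bounds(p_yes_respondents: float, response_rate: float) -> tuple[float, float]:
    """Bounds on the population proportion saying "yes" when nonrespondents are
    assumed to be either all "no" (lower bound) or all "yes" (upper bound)."""
    lower = p_yes_respondents * response_rate
    upper = p_yes_respondents * response_rate + (1.0 - response_rate)
    return lower, upper

# 60% of respondents say "yes"; compare a high and a low response rate.
for rr in (0.85, 0.25):
    lo, hi = worst_case_bounds(0.60, rr)
    print(f"response rate {rr:.0%}: population proportion between {lo:.1%} and {hi:.1%}")
# At an 85% response rate the bounds are fairly tight (51%-66%);
# at a 25% response rate they are nearly uninformative (15%-90%).
```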
See also
- Response rate ratio
References