Introduction
Response rates are widely reported to have declined for many types of surveys over the past decade, especially for random-digit-dial (RDD) surveys (e.g., Curtin, Presser, and Singer 2005; Steeh et al. 2001). One outgrowth of this decline (particularly problematic for RDD studies) is anxiety in the social science research community about the validity of analyses based on surveys with low response rates: at what point, for example, are such surveys judged unacceptable as valid research because of their low response rates? And, not coincidentally, will journal editors increasingly reject manuscripts that are based on surveys with low response rates? Or will major journals instead expand their publication standards to include multiple measures of nonresponse bias and data quality rather than focusing solely on response rates? More simply, do low response rates matter at all (yet) to journal editors in the social science, health, and statistics fields?
Anecdotally, some of our colleagues hold fast to the perception that it is harder to get studies published if they fail to meet acceptable response rate standards. However, these same individuals readily admit that they do not have an accurate picture of what standards regarding data quality or survey error, if any, journal editors impose when considering manuscripts that report results based on survey data. To our knowledge, there is no literature that directly addresses the standards journal editors use to judge the quality of surveys in manuscript submissions. In this paper, we report the results of a survey of editors of social science (e.g., sociology, psychology, and political science/survey research), health, education, marketing research, and statistics research journals, in which we asked about the standards and considerations, both de facto and de jure, used when deciding whether to accept a manuscript for publication, and in particular about the importance of response rates in those decisions.
Methods
To build a sample of journal editors for our study, we employed a multi-stage approach. First, we constructed an expert panel of highly published authors across a wide variety of disciplines and asked them to list the journals in their respective fields most likely to publish articles using survey data. Second, we searched the Web of Science (which includes the Science Citation Index Expanded, Social Sciences Citation Index, and Arts & Humanities Citation Index) for the keywords “survey,” “response rate,” and “survey and response rate” to identify journals that yielded “hits” on those keywords, and we counted the number of matching articles for each journal. We then compared these counts against the expert-identified journals and retained the top four to six journals in each discipline. This process yielded 33 journals for inclusion in the study: four to six journals in each of seven disciplines (education, health, marketing research, political science/survey research, psychology, sociology, and statistics) most likely to publish articles reporting survey data.
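To make the selection logic concrete, the following minimal sketch (in Python, with hypothetical journal names and data structures; the actual tally was compiled by hand from the Web of Science search results rather than generated by code) illustrates counting keyword hits per journal and retaining the top expert-identified journals in each discipline.

```python
from collections import Counter, defaultdict

# Illustrative inputs only: each tuple represents one article returned by a
# Web of Science keyword search, tagged with its journal and discipline.
articles = [
    ("Journal A", "sociology"),
    ("Journal A", "sociology"),
    ("Journal B", "sociology"),
    ("Journal C", "psychology"),
    ("Journal C", "psychology"),
    ("Journal D", "psychology"),
]

# Journals named by the expert panel, keyed by discipline (hypothetical).
expert_journals = {
    "sociology": {"Journal A", "Journal B"},
    "psychology": {"Journal C", "Journal D"},
}

# Count keyword "hits" per journal within each discipline.
hit_counts = defaultdict(Counter)
for journal, discipline in articles:
    hit_counts[discipline][journal] += 1

# Retain up to six expert-identified journals per discipline, ranked by hits.
MAX_PER_DISCIPLINE = 6
selected = {
    discipline: [
        journal
        for journal, _ in counts.most_common()
        if journal in expert_journals.get(discipline, set())
    ][:MAX_PER_DISCIPLINE]
    for discipline, counts in hit_counts.items()
}

print(selected)
# e.g. {'sociology': ['Journal A', 'Journal B'], 'psychology': ['Journal C', 'Journal D']}
```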
Finally, we used each journal's web pages to identify its editors. For each journal, we selected the editor-in-chief; if multiple individuals were listed as “editor” or “co-editor,” we selected up to two for inclusion in the sample. We then used a random number generator to select up to two associate editors for each journal. (Because of their different position in the editorial hierarchy, we hypothesized that associate editors might hold views on response rate standards that differ from those of editors-in-chief.) Our final sample totaled 109 individuals: 42 editors-in-chief and 67 associate editors.
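As a schematic illustration of this selection step only (hypothetical names and rosters; not the procedure as actually carried out), the sketch below takes up to two editors-in-chief or co-editors per journal and draws up to two associate editors at random.

```python
import random

# Hypothetical editorial rosters (illustrative names only).
journal_editors = {
    "Journal A": {
        "editors_in_chief": ["Editor 1"],
        "associate_editors": ["AE 1", "AE 2", "AE 3"],
    },
    "Journal B": {
        "editors_in_chief": ["Editor 2", "Editor 3"],
        "associate_editors": ["AE 4"],
    },
}

rng = random.Random(42)  # arbitrary seed, for reproducibility of the sketch
sample = []
for journal, roster in journal_editors.items():
    # Include the editor(s)-in-chief, up to two per journal.
    sample.extend(roster["editors_in_chief"][:2])
    # Randomly select up to two associate editors per journal.
    k = min(2, len(roster["associate_editors"]))
    sample.extend(rng.sample(roster["associate_editors"], k))

print(sample)
```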
We compiled names, affiliations, and contact information (address, telephone number, and e-mail address) for all sample members from each journal's web pages and from the websites of their affiliated agencies or universities. Because we were able to identify an e-mail address for the vast majority of sample members (99 percent), we chose e-mail as the primary means of contact.
In October 2005, sample members were sent an e-mail that described the study and provided the URL for a web survey, along with a username and password. A hard-copy questionnaire packet was mailed a few days later. Approximately two weeks later, we sent a reminder by e-mail and a hard-copy postcard, followed approximately one week later by a final survey request via e-mail. The website remained open for sample members to participate for six weeks.
Table 1 displays the number of journals selected by discipline, along with sample size, number of respondents, and response rate. We selected six journals each in the political science/survey research and psychology fields, five in the health field, and four each in the education, marketing research, sociology, and statistics fields. Across these 33 journals, our overall eligible sample of editors and associate editors totaled 91 [1]; in the end, 39 sample members responded to the survey, for an overall response rate of 42.9 percent (AAPOR RR1). Editors-in-chief responded at a higher rate (52.8 percent) than associate editors (36.6 percent). Of the 39 completed surveys, 24 (62 percent) were completed via the web and 15 (38 percent) on hard copy.
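For reference, this overall figure is simply the number of completed surveys divided by the eligible sample (a simplified statement of the AAPOR RR1 calculation, assuming all 91 eligible cases were of known eligibility; the eligibility adjustments are described in note [1]):

\[
\text{Response rate} = \frac{\text{completed surveys}}{\text{eligible sample members}} = \frac{39}{91} \approx 0.429 = 42.9\ \text{percent}.
\]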
Results
We were interested in understanding the submission and publication decision-making process of the journal editors surveyed. To establish a context for this decision-making, we first asked about the volume of manuscripts based on survey data and about the acceptance rates for this kind of article. The majority of journals included in our sample do indeed publish articles using survey data; however, approximately 7 out of 10 respondents indicated that less than half of the submissions to their journal presented survey data (Table 2). Respondents indicated that 50 percent or fewer of all submissions are accepted, and manuscripts that present survey data are accepted at rates similar to those of other articles: approximately three in ten respondents noted that less than 10 percent of submissions presenting survey data were accepted, 61 percent indicated that 10 to 25 percent were accepted, and only 9 percent indicated that 26 to 50 percent were accepted (Table 2).
We then asked whether response rate is an important part of the publication decision for articles that present survey data: just under 90 percent of respondents indicated that response rate is either somewhat or very important in such decisions (Table 2). The only respondents to report that response rates were not at all important in publication decisions (11 percent) came from statistics journals (results not shown). We also asked how often response rate is a major reason for rejection: only 3 percent of respondents indicated that submissions are rejected primarily due to low response rates most of the time, 69 percent indicated this occurred some of the time, and 29 percent indicated that submissions were never rejected primarily due to low response rate (Table 2).
Next, we asked what kinds of standards journals had used in publication decisions for articles that present survey data. Table 3 shows that all respondents indicated that their journals do not have written standards for response rates that articles presenting survey data must meet. That said, 13 percent of respondents indicated that their journal does have unwritten standards or “rules of thumb” for minimally acceptable response rates. Given the documented decline in response rates noted above, we wondered whether journals’ standards might have changed or adapted over time. This does not appear to be the case: approximately 97 percent of respondents indicated that any response rate standards that did apply to submissions to their journal have not changed in the past 10 years (Table 3). Political science/survey research respondents (14 percent) were the only ones to report such a change (results not shown).
Because we surmised that most journals would not have written or documented response rate standards, we also asked respondents to tell us about other measures of survey quality that they use to make publication recommendations, and then to rate the importance of those factors relative to response rate. We grouped their open-ended responses into 10 categories:
- sampling, including design, plan, and technique;
- questionnaire design, including measurement design and innovative design;
- representativeness, including reliability, response rate, and generalizability;
- theoretical framework;
- policy implications, including importance and timeliness of the research;
- nonresponse, including missing data and bias;
- sponsorship, including data collection organization and author;
- relevance to the organization, journal, or readership;
- data collection methods and analysis; and
- other (which included things such as originality and overall quality of the research).
Table 4 shows that sampling (22 percent), questionnaire design (20 percent), data collection methods and analysis (18 percent), and representativeness (14 percent) were the four most frequently cited measures of quality, other than response rate, considered in publication decisions.
Discussion
We set out to answer several questions about response rates in journal articles. Specifically, we wanted to test the perception, held among many of our colleagues, that journals view unfavorably articles based on analyses of surveys with lower response rates. To that end, we asked whether there were, indeed, standards in use by journal editors for accepting or rejecting potential publications based on response rates.
We found that the journals in our sample do publish articles that present survey data, but, as with other types of submissions, no more than half of the submissions presenting survey data are accepted. While journal editors overwhelmingly (approximately 90 percent) say that response rate is at least somewhat important in publication decision-making, this view appears to be loosely applied: there are no written standards or conventions for either reporting response rate information or setting minimum thresholds. While a distinct minority of our respondents (13 percent) told us that they used a “rule of thumb,” the application of such rules resulted in publications based on surveys with widely varying response rates (16 to 91 percent). Moreover, even if such unwritten standards do exist and are applied consistently, they have not changed in at least the last 10 years, according to our data.
While our sample of journals and journal editors does not span the entire universe of social science publications, it is certainly wide-ranging enough to support some conclusions. First, the perception among social science researchers that journals weight response rates heavily in the manuscript review process appears to be unfounded. Most journal editors seem to rely more on a gut feeling, assessing a manuscript’s worth or merit in terms of broader or more global concepts, such as sample or questionnaire design, than on specific measures of survey quality such as the response rate. Second, journal editors appear to be either unaware of, or not overly concerned about, the decline in response rates, at least insofar as publication decisions are concerned, with one notable exception: at least some political science and survey research journal editors suggested that they were, indeed, changing their standards (or, at least, their “rules of thumb”).
[1] After the initial survey requests were distributed, three journals were replaced because sample members informed us that the journal did not publish articles using survey data; we replaced them with the journals next on the list from our Web of Science keyword search. Two editors-in-chief and three associate editors were replaced during data collection after we were notified that the selected editor or associate editor was no longer serving in that role for the selected journal.