Background
The Federal Employee Viewpoint Survey (FEVS) is an organizational climate survey administered by the U.S. Office of Personnel Management (OPM). Launched in 2002 as the Federal Human Capital Survey (FHCS), the survey was administered biennially until 2010, when it was renamed the FEVS and moved to an annual administration. The Web-based survey is sent to employees from 85 agencies via a personalized link in an email, and weekly reminders are sent to nonrespondents over the course of a six-week field period. The survey instrument consists predominantly of attitudinal items posed on a five-point Likert-type response scale, for example ranging from “Strongly Disagree” to “Strongly Agree,” and taps a diverse range of constructs, such as job satisfaction, engagement, and perceptions of senior leadership within the agency.
The FEVS sampling frame is derived from the Statistical Data Mart of the Enterprise Human Resources Integration (EHRI-SDM), an expansive personnel database maintained by OPM. The sample size has increased markedly over the survey’s existence, from approximately 200,000 in 2002 to approximately 900,000 in 2016, an increase attributable to participating agencies’ desire for progressively deeper reporting and analyses within the organization (Berry 2012). As detailed in OPM (2015), each agency provides hierarchically structured work-unit identifiers for its roster of FEVS-eligible[1] employees, which are used both to create sampling strata and to report survey results. If the sample size necessitated by the agency’s stratification scheme reaches 75% or more of the agency’s population, a census is conducted instead. In FEVS 2016, a census was conducted in 71 of the 85 participating agencies.
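To make the determination rule concrete, the following is a minimal sketch, assuming the required sample size has already been computed from the agency’s stratification scheme; the function and variable names are ours, not OPM’s.

```python
# A minimal sketch of the census-versus-sample rule described above, assuming
# the required sample size is computed elsewhere from the agency's strata;
# names are illustrative, not OPM's.

def conduct_census(required_sample_size: int, population_size: int,
                   threshold: float = 0.75) -> bool:
    """Census the agency when the required sample reaches 75% of its population."""
    return required_sample_size >= threshold * population_size

# Example: an agency of 4,000 employees whose stratification demands 3,100 completes
print(conduct_census(3100, 4000))  # True  -> census
print(conduct_census(2000, 4000))  # False -> stratified sample
```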
Parker (2011) reviews some of the inherent and perceived advantages and disadvantages of conducting a census versus sampling in organizational climate surveys. While acknowledging that sampling is generally more cost-effective and, with proper planning and execution, yields sufficiently representative results, she cautions that employees not selected may feel excluded in certain circumstances. In addition, she notes the tendency for lay consumers of survey results to ascribe more credibility to data derived from a census. We can attest to this tendency in the FEVS context. For example, Stier (2016) asserts that transitioning to an annual FEVS census “would enhance the usefulness of the survey as an oversight and accountability tool for Congress and offer more value to agency leaders and managers who are using the survey to improve satisfaction and commitment within their organizations.”
Numerous other stakeholders have likewise lobbied for the FEVS to be administered as an annual census. Among the motivating factors, which include senior leaders’ desire for the optics of a survey giving each and every employee an equal say, is widespread speculation that offering every individual the opportunity to participate will boost response rates. As noted in Lewis and Hess (2015), response rates to the FEVS have been gradually declining, a trend the FEVS administration team is keen on reversing. Thus, if the assumed effect could be shown to exist and budget constraints permitted, the FEVS administration team would pursue a change in its sampling strategy to allow all participating agencies to conduct a census annually.
To the best of our knowledge, no published research has investigated the relationship between response rates in organizational climate surveys and whether or not the workforce was censused. Within the purview of the FEVS, the first formal investigation into the matter was Lewis et al. (2016), who exploited the natural experiment (Wooldridge 2012) resulting from a sequence of sample design changes between 2011 and 2013. With few exceptions, every agency conducted a census in FEVS 2012, but not in the two adjacent administrations, FEVS 2011 and FEVS 2013. A by-product was a cohort of agencies that transitioned into and out of a census administration, which could be compared against a cohort of agencies that conducted a census continuously from FEVS 2011 to FEVS 2013. Using what Wooldridge (2012) terms a first-differenced estimator, Lewis et al. (2016) found marginally significant evidence that a census is linked with higher response rates. That study was limited, however, in that it was retrospective and observational. In this paper, we present results from a follow-up study employing a prospective research design free of that limitation.
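For context, the first-differenced comparison can be sketched as follows; the notation is ours, and the exact specification of Lewis et al. (2016) is not reproduced here.

```latex
% A hedged sketch of a first-differenced comparison (Wooldridge 2012);
% the notation is ours, not that of Lewis et al. (2016).
\[
  \Delta y_a = y_{a,\,2012} - y_{a,\,2011},
  \qquad
  \hat{\delta} = \overline{\Delta y}_{\mathrm{sample}\to\mathrm{census}}
               - \overline{\Delta y}_{\mathrm{always\ census}},
\]
% where $y_{a,t}$ denotes agency $a$'s response rate in FEVS year $t$.
% $\hat{\delta}$ contrasts the cohort that moved from a sample in 2011 to a
% census in 2012 against the cohort censused in both years; the symmetric
% 2012-to-2013 contrast captures the transition out of a census.
```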
This paper is organized as follows. After a brief review of the pertinent literature, we posit two hypotheses about the relative impacts of informing individuals whether the agency’s workforce was sampled or censused. We then report results from an FEVS 2016 email wording experiment carried out to test these hypotheses. The paper concludes with a summary and discussion of limitations.
Two Hypotheses Regarding the Relative Impacts of a Census Versus a Sample
In their review of the theory of diffusion of responsibility, Barron and Yechiam (2002) cite two studies relevant to the question of whether a census might be associated with a response rate increase. The first is Darley and Latané (1969), who found that one’s motivation to help is tempered when others are perceived as able to help. The second is Diekmann (1985), who discovered in a similar vein that individuals in a game setting were less likely to volunteer to help the greater good of the group if they knew someone else had already volunteered. Viewing the solicitation to participate in an organizational climate survey such as the FEVS as a request for help (i.e., to provide opinions and perspectives used to drive organizational improvements), our first hypothesis is that employees will be less inclined to participate in the survey if made aware that all other employees in the organization have also been asked to participate.
A corollary of the argument calling for an FEVS census is that surveying only a sample of an organization’s employees is somehow associated with lower response rates. Here, too, the literature seems to suggest otherwise. Groves et al. (1992) outline a set of psychological constructs that factor into an individual’s decision to participate in a survey, maintaining that individuals may be more likely to comply if the requestor highlights the scarcity of the opportunity with appeals such as “not all employees have been given the chance to participate” or “only a select number of employees like you were randomly selected.” Indeed, Porter and Whitcomb (2003) found this strategy to be effective in a survey of high school students. As such, our second hypothesis stipulates that informing the employee that he or she is part of a random sample will result in a response rate increase.
Data and Methods
To test the two competing hypotheses laid out in the previous section, during the FEVS 2016 administration that ran from April 26 to June 16, we systematically manipulated the messaging embedded in reminder emails sent to 242,717 employees from four agencies: two conducting a census and two conducting a sample. The two censused agencies were the Department of the Interior (DOI) and the Environmental Protection Agency (EPA); the two sampled agencies were the Department of Homeland Security (DHS) and the Department of Veterans Affairs (VA). All employees in these agencies received the same initial survey invitation. Thereafter, one-half of the employees were randomly assigned to a condition where the merits of a census or a sample, respectively, were heavily emphasized in subsequent reminders. We refer to this as pro-census or pro-sample wording. The other half was assigned to a condition not mentioning the sampling strategy, which we refer to as neutral wording. Three examples are provided in the Appendix.
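The following is a minimal sketch of the within-agency random assignment, assuming a simple 50/50 split over employee records; the identifiers, condition labels, and seed are illustrative only.

```python
import random

# A minimal sketch of the within-agency 50/50 random assignment described
# above; employee identifiers, condition labels, and the seed are hypothetical.
def assign_conditions(employee_ids, census_agency: bool, seed: int = 2016):
    rng = random.Random(seed)
    ids = list(employee_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    emphasis = "pro-census" if census_agency else "pro-sample"
    # First half receives reminders emphasizing the sampling strategy;
    # the remainder receives the neutral reminder wording.
    return {**{i: emphasis for i in ids[:half]},
            **{i: "neutral" for i in ids[half:]}}

# Example: assign ten (hypothetical) employees of a censused agency
print(assign_conditions(range(10), census_agency=True))
```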
Our key outcome measure of interest is the response rate, which we report in accordance with the RR1 definition of the American Association for Public Opinion Research (AAPOR 2016). To classify a return as complete, we used the same rule described in OPM (2015), whereby the respondent must have answered at least 21 (i.e., 25%) of the 84 nondemographic survey items. Employees determined to have left their position (e.g., retired, took a job in the private sector) during the six-month lag between the time the sampling frame was produced and the start of the survey field period were considered ineligible and were removed from the denominator of the response rate calculation.
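The following is a minimal sketch of this calculation; it simplifies AAPOR RR1 to completes over eligible cases, and the field names are hypothetical.

```python
# A minimal sketch of the response rate calculation described above: a return
# counts as complete with at least 21 of the 84 nondemographic items answered,
# and employees who left their position are ineligible. Field names are
# hypothetical, and AAPOR RR1 is simplified to completes over eligible cases.

def response_rate(records):
    eligible = [r for r in records if not r["left_position"]]
    completes = sum(1 for r in eligible if r["items_answered"] >= 21)
    return completes / len(eligible)

# Example: four frame records, one of which is an ineligible leaver
frame = [
    {"items_answered": 84, "left_position": False},  # complete
    {"items_answered": 10, "left_position": False},  # too few items answered
    {"items_answered": 0,  "left_position": False},  # nonrespondent
    {"items_answered": 30, "left_position": True},   # ineligible, excluded
]
print(response_rate(frame))  # 1/3 ~= 0.3333
```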
Results
Table 1 summarizes results from the FEVS 2016 email wording experiment. Emphasizing a census produced mixed results, so our first hypothesis was not upheld. A slightly positive effect (0.4 percentage points) was observed for DOI, but a negative effect (−0.8 percentage points) was observed for EPA. Because DOI is much larger than EPA, the net result was a marginal response rate increase of 0.2 percentage points, which was not statistically significant (t = 0.33; p = 0.3704). On the other hand, our second hypothesis was upheld: informing the employee that he or she was one of a select number of individuals sampled to participate had a positive effect on response rates for both DHS (0.3 percentage points) and VA (1.4 percentage points). Combined, the overall response rate increase was 0.9 percentage points. Although modest in magnitude, the increase was statistically significant (t = 2.42; p = 0.0078).
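As an illustration of the underlying comparison, the following is a hedged sketch of a one-sided two-proportion test. The pooled z-test below is a simplified stand-in for the test actually used (reported above as t statistics), and the counts are hypothetical, chosen only to show the calculation.

```python
from math import sqrt
from scipy import stats

# A hedged sketch of a one-sided two-proportion comparison of response rates
# between the emphasized and neutral wording groups. The statistics reported
# in the text are t-based, so the exact variance estimator likely differs;
# this pooled z-test and the counts below are illustrative assumptions.

def one_sided_proportion_test(completes_t, n_t, completes_c, n_c):
    p_t, p_c = completes_t / n_t, completes_c / n_c
    p_pool = (completes_t + completes_c) / (n_t + n_c)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    return z, stats.norm.sf(z)  # one-sided p-value

z, p = one_sided_proportion_test(20_900, 50_000, 20_450, 50_000)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```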
Discussion
The purpose of this paper was to present results from an experiment carried out during FEVS 2016 to investigate the veracity of speculation within the FEVS community that response rates could be boosted if all employees within an agency were given the opportunity to participate. Relatedly, we sought to assess whether the sampling strategy currently implemented for 14 of the 85 participating agencies was negatively impacting response rates. The experimental design called for randomly apportioning employees from four agencies – two conducting a census and two conducting a sample – into two groups. The first group received email reminders emphasizing either that the agency was reaching out to all employees in the form of a census or, where applicable, that only a sample of employees had been granted the opportunity to participate. The second group, serving as a within-agency control, received email reminders that did not mention the sampling strategy.
Our findings do not support the prevailing notion that a higher level of response could be achieved if the FEVS were to census the workforces of all participating agencies. There was no substantive increase in response rates observed when highlighting to respondents that a census was being undertaken. On the other hand, we did observe a modest, albeit statistically significant, response rate increase when emphasizing to the employee that he or she was part of a randomly selected sample.
Despite the large sample size of approximately 250,000 employees, our study was limited in scope in that it focused on only four of the 85 participating agencies. However, these four agencies represent roughly 25% of the overall FEVS 2016 sample and were purposefully selected for the experiment because of their expressed desire to conduct a census in administrations following FEVS 2012. We acknowledge it would have been preferable to first randomly assign a larger number of agencies to be either sampled or censused before randomly assigning employees therein to the two messaging conditions. We opted against this, however, out of concern that breaking away from the established sample-versus-census determination rules detailed in OPM (2015) could have influenced results by inadvertently advertising the experiment.
The primary practical implication of our research is that, as was asserted in Groves et al. (1992) and previously demonstrated empirically by Porter and Whitcomb (2003), emphasizing the scarcity of the survey opportunity can help improve response rates. The improvement may be slight and likely varies depending on the target population, survey topic, and the medium of emphasis. For example, whereas Porter and Whitcomb (2003) found an effect of about 6 percentage points, we observed an effect of about 1 percentage point. While there could be other factors at play, one possible explanation of the more muted effect we observed is that it is harder to convey the scarcity of the opportunity to participate in a large-scale, highly publicized annual survey like the FEVS. With its current 50% marginal sampling rate, 1 out of every 2 individuals in the target population is sampled. Hence, individuals are just as likely not to be given the opportunity to participate as they are to be afforded the opportunity.
Disclaimer
The opinions, findings, and conclusions expressed in this article are those of the authors and do not necessarily reflect those of the U.S. Office of Personnel Management.
Appendix: Examples of Email Messaging Conditions
- Example of a pro-census email message
Inspire change through your participation in the Federal Employee Viewpoint Survey!
What matters most to you as a Federal employee? If you had the opportunity to speak directly with your agency’s senior leaders, what would you say?
To get the most comprehensive view possible about what’s working well in <AGENCY NAME> and what areas need improvement, we are reaching out to each and every employee. All voices are important!
If you have not yet completed the 2016 FEVS, take this opportunity to fill out the survey. This is your chance to voice your opinions and let your leadership know which issues are most critical to you.
<URL HERE>
If the link does not take you directly to the survey, copy and paste the following into a browser window:
<URL HERE>
Please DO NOT forward this e-mail since it contains your personalized link to the survey.
Please reply to this message if you have any questions or difficulties accessing the survey, or call our Survey Support Center toll free at: 1-855-OPM-FEVS (1-855-676-3387).
- Example of a pro-sample email message
Inspire change through your participation in the Federal Employee Viewpoint Survey!
What matters most to you as a Federal employee? If you had the opportunity to speak directly with your agency’s senior leaders, what would you say?
We know you are busy, but your opinions are very important. Only a select number of <AGENCY NAME> employees have been asked to participate. Your answers to the survey will represent both you and your colleagues who were not selected to participate.
If you have not yet completed the 2016 FEVS, take this opportunity to fill out the survey. This is your chance to voice your opinions and let your leadership know which issues are most critical to you.
<URL HERE>
If the link does not take you directly to the survey, copy and paste the following into a browser window:
<URL HERE>
Please DO NOT forward this e-mail since it contains your personalized link to the survey.
Please reply to this message if you have any questions or difficulties accessing the survey, or call our Survey Support Center toll free at: 1-855-OPM-FEVS (1-855-676-3387).
- Example of a neutral message
Inspire change through your participation in the Federal Employee Viewpoint Survey!
What matters most to you as a Federal employee? If you had the opportunity to speak directly with your agency’s senior leaders, what would you say?
If you have not yet completed the 2016 FEVS, take this opportunity to fill out the survey. This is your chance to voice your opinions and let your leadership know which issues are most critical to you.
<URL HERE>
If the link does not take you directly to the survey, copy and paste the following into a browser window:
<URL HERE>
Please DO NOT forward this e-mail since it contains your personalized link to the survey.
Please reply to this message if you have any questions or difficulties accessing the survey, or call our Survey Support Center toll free at: 1-855-OPM-FEVS (1-855-676-3387).
[1] The target population includes permanently employed, full- or part-time, nonpolitical, nonseasonal, civilian personnel on board the agency at least six months prior to the start of the data collection field period.