Surveys are likely the most common method of data collection, and they are especially well suited to obtaining information on aspects of human experience that are not observable by others. For that reason, surveys are often used to collect data on emotional experiences, such as trauma and grief. Recent research has begun to examine the impact of surveys with emotionally challenging content and has demonstrated that most individuals who participate in studies of this type experience no significant negative effects. For example, a year after the 2001 World Trade Center attack, Boscarino et al. (2004) surveyed more than 2,300 people in New York City. Fifteen percent of their participants said the survey was emotionally upsetting, but less than 2 percent were still upset after it was completed. Among individuals who had sought some form of mental health treatment, 28 percent reported the survey was stressful, but only 3 percent of this group was still upset at survey completion. Individuals with existing post-traumatic stress disorder (PTSD), depression, or anxiety were most likely to find the survey experience stressful, suggesting increased vulnerability for them as subjects. Overall, however, 75 percent of the participants reported the survey was generally a positive experience. Likewise, in a study conducted in New York City during the first year after the attack, Galea et al. (2005) reported similar findings: 13 percent of respondents reported being upset by survey questions, but only 1 percent remained upset after survey completion. Less than 1 percent of those who completed the survey were interested in assistance from a counselor (in addition to a small number who did not finish the survey). Also consistent with Boscarino et al. (2004), respondents with mental health symptoms were more likely to be upset, but most tolerated the survey very well.
Others have explored the effects of survey research in the context of injuries from accidents, assault, or domestic violence. Walker et al. (1997) reported on 500 women who, after completing questionnaires on sexual and physical abuse, were asked about the benefit, expected upset, and regret they experienced as a result of participation. Over 25 percent reported some benefit, but 13 percent felt completing the study was more upsetting than expected. However, only 5 percent said they would not participate again if they had known beforehand how they would feel. Distress was associated with PTSD symptoms and with more exposures to traumatic events in childhood. In a second study by this group, 1,174 women completed a survey that included items about childhood abuse and maltreatment; a subset of 252 participants was also interviewed (Newman, Walker, and Gelfand 1999). Only 10 percent of survey respondents reported unexpected upset, and these individuals had higher levels of PTSD symptoms. Many (23 percent of those who completed the questionnaire and 86 percent of those interviewed) reported benefit, and only 5 percent reported regret for having participated. These appraisals of benefit and regret were generally stable over the 48 hours post-interview. One study examined the amount of time needed to “get over” the distress associated with discussing an upsetting experience of the respondent’s choosing; the median was 30 minutes, with a range of 0–72 hours (Labott et al. 2013).
We can conclude from the studies above that in general populations, most people tolerate emotionally challenging surveys well, but those with premorbid depression or PTSD are more likely to experience significant distress. To look specifically at this issue, Carlson et al. (2003) completed trauma-focused interviews with 233 psychiatric inpatients and measured psychiatric symptoms. In this sample, 66 of the individuals were very upset by the questions, and 17 stopped the interview because they were upset. Upset ratings were significantly correlated with symptoms of depression, dissociation, PTSD, self-destructiveness, abuse, and aggression. The authors described this as a “worst-case scenario” because of the high levels of psychiatric problems in the population and the relationship to distress during participation.
In this article, we summarize methods that researchers can use to avoid or manage negative emotional reactions that may occur in survey respondents. There are several points in the research process at which strategies to protect human subjects from emotional distress can be implemented. We review safety protections for use during the consent process, at eligibility screening, during the survey itself, after data collection is completed, and through the use of Data Safety Monitoring Boards (DSMBs).
There are two main ways to utilize the consent process to protect subjects from emotional distress: (1) informing subjects of potential risks so that they can choose not to participate and (2) letting potential respondents know that if they choose to participate now, they can also choose to skip questions or to stop participating later. Informing subjects of potential risks of the research during the consent process and also making efforts to minimize them is required for all federally funded social research (Department of Health and Human Services 1991). Potential subjects are always told the research is voluntary, and the nature of potential risks is disclosed, as well as their likelihood and acceptability (National Bioethics Advisory Commission 1998). Research not funded by the federal government is not necessarily held to these standards, but many organizations that receive any federal support extend these requirements to all of their research activities.
One of the concerns in survey research has been that the probability and magnitude of potential risks associated with emotional surveys were unknown. While little research has been done on this issue specifically, as noted above, the literature that does exist suggests that the probability of serious emotional reactions is low and that any negative reactions that do occur are likely to be short in duration, with no long-term effects. Some groups, however, are at higher risk. Therefore, potential risks need to be described to potential respondents using information on the specific survey content as well as the specific population being studied. For example, a survey about reactions to the death of a loved one is likely to be significantly more distressing (and therefore associated with greater emotional risk) for someone who is depressed or has recently lost a loved one than for someone who has not had these experiences. Presenting these details allows potential respondents to assess their individual risk and make an informed choice about participating. One concern, however, is that an explicit presentation of the potential risks to vulnerable participants may create anxiety and decrease the likelihood of participation, much as past research has demonstrated that confidentiality assurances are not always experienced as comforting (Singer, Hippler, and Schwarz 1992).
A second way to protect subjects at the time of formal consent is to remind them of their right to skip questions and to withdraw from the research later. For this to be a useful protection, respondents need to feel comfortable that they can change their minds about participation at any time. For example, the formal consent document could remind potential respondents that “Your participation is completely voluntary. If you agree to participate, you are able to change your mind and withdraw from the study at any time during the interview.” Newman et al. noted that extensive informed consent procedures seemed to mitigate upset and regret for their participants (Newman, Walker, and Gelfand 1999). It would likewise be expected that additional care and reassurance about their rights would make respondents who wish to withdraw after initially consenting more comfortable doing so.
Eligibility screening is often used to screen out those thought to be most at risk for emotional harm. Using screening in this manner, a researcher studying reactions to suicide might develop procedures to exclude those who have experienced a recent suicide in their family or social network. While this may be an effective way to ensure that human subjects are protected from negative emotional reactions that could be harmful, the strategy has problematic scientific and ethical implications. From a scientific perspective, if a researcher is studying reactions to suicide, those who have had this experience recently may have the best information to provide; by making individuals with a recent personal experience of suicide ineligible, the researcher fails to collect data from the most relevant group of respondents. From an ethical standpoint, for the researcher to decide a priori that certain individuals should be ineligible is somewhat paternalistic, as it denies potential participants the ability to make their own autonomous and informed decisions about participation. In studies with at-risk participants, only small numbers report that they would not have agreed to participate had they known beforehand about the distress they would experience during the survey (Griffin et al. 2003; Ruzek and Zatzick 2003), suggesting that even at-risk participants are willing, and generally able, to participate without significant harm.
Even if one can argue that screening out the most at-risk subjects is appropriate from a human subjects protection standpoint, the procedure has implications for the generalizability of the results. If it is determined, however, that the participation of at-risk respondents is necessary to obtain the relevant data, then other procedures need to be in place to protect them as they participate.
Methods that can be used to protect human subjects during data collection include (1) interviewer training procedures, (2) check-in procedures to monitor how respondents are feeling emotionally during the interview itself, and (3) formal scripts to assess needs and determine appropriate action.
Interviewer Training
Interviewers are typically trained in general human subjects issues and in the administration of the specific survey. In research on sensitive topics, some researchers have developed and incorporated training procedures to improve interviewers’ sensitivity to potential emotional reactions of subjects (Campbell et al. 2009), as well as their ability to manage any strong emotional reactions. In some cases, these training procedures can be quite extensive. In one recent survey, respondents were asked to discuss a personally upsetting event (Labott et al. 2013). Prior to data collection, interviewers spent several hours in training that included didactics and practice in the assessment and management of emotional distress. This training covered verbal and nonverbal cues that could be “observed” during a telephone interview (e.g., crying, sniffling, vocal changes), and interviewers were then trained in the actions to take if any of these behaviors occurred (the check-in procedures and safety scripts described below).
Check-in Procedures
A second method to protect human subjects involves specific procedures during the interview to monitor the respondent’s emotional state as the distressing topic is being discussed. In this approach, items designed to check on the respondent’s emotional state are interspersed with survey items at intervals throughout the survey. Respondents can be asked if they are OK, if they would like to take a short break, if they would like to reschedule the remainder of the interview, or if they need support. Check-in items of this sort decrease the likelihood that an interviewer will miss a nonverbal cue. If the respondent reports being OK, the survey continues; if the respondent indicates some distress, further assessment can determine the actions necessary to address it (see Safety Scripts below).
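To make the interleaving concrete, the following Python sketch shows one way periodic check-in items could be woven into survey administration. The item wording, the interval, and the response handling are all hypothetical illustrations, not the instruments used in the studies cited here.

```python
# Minimal sketch of interleaving check-in items with survey items.
# The interval, item text, and response options below are hypothetical
# examples of the approach, not an actual study instrument.

CHECK_IN_INTERVAL = 10  # pose a check-in item after every 10 survey items


def administer(survey_items, ask, assess_distress):
    """Run through survey_items, inserting periodic check-ins.

    `ask` poses one item and returns the reply; `assess_distress` runs a
    follow-up assessment (e.g., a safety script) and returns one of
    "continue", "break", "reschedule", or "stop".
    """
    responses = []
    for i, item in enumerate(survey_items, start=1):
        responses.append(ask(item))
        if i % CHECK_IN_INTERVAL == 0:
            reply = ask("Are you doing OK, or would you like a short break?")
            if reply != "OK":
                action = assess_distress()
                if action in ("reschedule", "stop"):
                    # Interview ends here; data collected so far is retained.
                    return responses, action
    return responses, "complete"
```

The point of the design is that the branch to `assess_distress` is triggered by a scripted item at fixed intervals, so detection of distress does not depend solely on the interviewer noticing a cue.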
Safety Scripts
Traditionally, if a respondent is distressed, informal measures have been used to address it; for example, the interviewer talks with the respondent to see if he or she is OK, or calls a supervisor to do so. Another way to protect the respondent during survey administration is to develop safety scripts that the interviewer can use to formally assess a respondent’s emotional distress and then take appropriate action. In the Labott et al. (2013) study, certain responses were flagged (e.g., a comment suggesting the respondent was thinking about suicide). If a respondent gave one of these responses, the survey stopped, and the interviewer immediately began a different set of items designed to assess the respondent’s situation more formally. Each of these assessments consisted of a few questions that then led to other protective actions on the part of the interviewer. For example, if a respondent made a comment about suicide, the interviewer asked whether the respondent was actively considering suicide at present. If so, the individual was asked to provide information so that 911 could be called, and one of the investigators (a clinical psychologist) was called to follow up with the respondent immediately. (If procedures like this are used, it is important to make respondents aware during the consent process that confidentiality could be breached in certain cases.) If the respondent was not acutely suicidal, he or she was asked whether the psychologist should follow up, whether the respondent preferred to contact her on his or her own, or whether a list of community resources for support should be sent. If the safety assessment indicated the subject was not acutely at risk, the interviewer then determined whether the respondent would continue with the survey, reschedule to complete it at a later time, or stop completely.
One major benefit of scripts of this type is that interviewers who are not mental health professionals do not need to make their own judgment calls about risk, and these situations can be handled in a standardized way that has been pre-determined by the researchers.
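The branching logic of a script of this kind can be sketched in a few lines of Python. The flag terms, question wording, and action labels below are hypothetical stand-ins for the pre-determined content a research team would write, not the actual Labott et al. (2013) script.

```python
# Minimal sketch of a pre-scripted safety assessment triggered by a
# flagged response. Flag terms, questions, and actions are hypothetical
# illustrations of the branching described above.

FLAG_TERMS = {"suicide", "hurt myself"}  # responses that pause the survey


def is_flagged(response: str) -> bool:
    """Return True if a free-text response contains a flagged term."""
    return any(term in response.lower() for term in FLAG_TERMS)


def run_safety_script(ask):
    """Branching assessment; `ask` poses one question and returns the reply."""
    if ask("Are you actively considering suicide right now?") == "yes":
        # Acute risk: pre-determined emergency actions replace the survey,
        # and the interviewer makes no judgment call of his or her own.
        return ["obtain location for 911",
                "psychologist follows up immediately"]
    # Not acutely at risk: offer graduated support options, then
    # determine the disposition of the remaining survey items.
    option = ask("Would you like a follow-up call from the psychologist, "
                 "to contact her yourself, or a list of community resources?")
    disposition = ask("Would you like to continue, reschedule, or stop?")
    return [f"support: {option}", f"survey: {disposition}"]
```

Because every branch is written out in advance, a lay interviewer simply reads the next scripted question and records the reply; the decision rules themselves were fixed by the researchers before data collection began.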
Even when interviews are administered by mental health professionals, scripts may be helpful for assessing the respondent’s emotional situation. For example, Draucker, Martsolf, and Poole (2009) provided a protocol used to assess distress, suicide, and homicide in a study of dating violence. This protocol demonstrates the entire spectrum of options available to interviewers if respondents became distressed, e.g., stop the interview, provide the hospital emergency room number, have a psychologist contact the subject, or call an ambulance to aid the respondent. While protocols of this type aid in the consistent management of distress in respondents, they can also yield data to inform future work on these topics.
Traditionally, methods to assess and address emotional distress have been applied after the survey is completed. Some researchers have used quantitative measures to debrief respondents and assess distress. For example, some have asked respondents for a rating of distress after the survey (Griffin et al. 2003; Parslow et al. 2000; Walker et al. 1997). Others have used items from the Reactions to Research Participation Questionnaire (Kassam-Adams and Newman 2002) to assess emotional reactions and other aspects of research participation, e.g., voluntariness and regret (Kassam-Adams and Newman 2005; Ruzek and Zatzick 2003).
While ratings of emotional distress are useful from a research perspective, the real human subjects benefit lies in then providing resources to help respondents manage that distress. Some researchers have asked post-survey items about upset during the survey and continued distress afterward, and then offered respondents a call from a counselor either immediately or as soon as possible (Galea et al. 2005), or options including an immediate call from a counselor, a hotline number, or information about nearby counseling services (Boscarino et al. 2004).
Data Safety Monitoring Boards
Clinical trials use Data Safety Monitoring Boards/Plans to monitor human subject safety during trials, although this method has rarely been used in survey research. In some cases, however, it would be appropriate to set up a board of professionals who could review data on subject reactions during an emotional survey and determine whether any changes to procedures are needed (including stopping data collection if significant risks or harms appear to be present). One study developed a data safety plan that involved outside review of procedures and respondent safety issues (Labott et al. 2013). These researchers selected two consultants who were not involved in data collection for the study, one with extensive human subjects experience and one with expertise in PTSD. During data collection, the PI reviewed with one of the consultants, on a weekly basis, any situations in which safety scripts were accessed and their outcomes. In this way, an independent consultant could review safety procedures and recommend changes if they seemed warranted.
Recent work has begun to address emotional risks to those who participate in surveys on distressing topics. While most individuals tolerate such surveys well, individuals with pre-existing PTSD or depression may experience more significant distress that could be harmful. However, in survey research, the potential harms are difficult to measure and have been studied only minimally. This paper describes strategies that have been or could be used to protect respondents in surveys on emotional topics. These strategies can be implemented throughout the research: during the consent process, at eligibility screening, during data collection, and post-survey. Data Safety Monitoring Boards can also be used to review risks as data collection proceeds. At present, researchers need to consider both the content of a survey and the characteristics of the subject population to determine which human subject protection procedures are most relevant for a specific study. Future research would do well to delineate the utility of these strategies in specific situations with at-risk samples.
We would like to thank Marni Basic of the Survey Research Laboratory for her helpful comments regarding an earlier draft of this paper. This project was supported by Award Number R21NR010595 from the National Institute of Nursing Research. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Nursing Research or the National Institutes of Health.