Survey Practice
Articles
Vol. 11, Issue 2, 2018 | March 12, 2018 EDT

I Don’t Know. The Effect of Question Polarity on No-opinion Answers

Naomi Kamoen, Jasper Van de Pol, André Krouwel, Claes De Vreese, Bregje Holleman
Keywords: question polarity, no-opinion, VAAs
https://doi.org/10.29115/SP-2018-0017
Kamoen, Naomi, Jasper Van de Pol, André Krouwel, Claes De Vreese, and Bregje Holleman. 2018. “I Don’t Know. The Effect of Question Polarity on No-Opinion Answers.” Survey Practice 11 (2). https://doi.org/10.29115/SP-2018-0017.

Abstract

A new type of political attitude survey that has gained popularity in Europe and in the United States is the voting advice application (VAA). VAAs provide users with a voting advice based on their answers to a set of attitude questions. In the calculation of this advice, no-opinion answers are excluded. We tested the hypothesis that negative VAA questions lead to more no-opinion answers than their positive equivalents. In a field experiment, visitors (N = 41,505) of a VAA developed for the municipality of Utrecht in the Netherlands were randomly assigned to one of the versions of the tool in which the polarity of 16 questions was manipulated. Results do not show an overall effect of question polarity. This overall null finding appears to be caused by contrasting effects for two subtypes of negative questions: explicit negatives (e.g., not allow) yield more no-opinion answers than their positive counterparts (e.g., allow) do, while the reverse holds for implicit negatives (e.g., forbid).

Introduction

Research from the 1940s onward has shown that respondents more often answer “no” or “disagree” to negative questions than “yes” or “agree” to positive ones (e.g., Rugg 1941). This holds both for questions with an explicit sentence negation such as not (“The government cannot cut down on social work”; e.g., Holleman et al. 2016), and for questions with an implicit negation, containing a word with negative valence (cf. Warriner et al. 2013), such as forbid (“Do you think the government should forbid the showing of X-rated movies”; e.g., Schuman and Presser 1981/1996). Hence, someone’s opinion about an attitude object seems to be more positive when the question is phrased negatively (for an overview, see Kamoen et al. 2013).

These question polarity effects on the mean substantive answers have sparked a debate on which question wording is best (e.g., Chessa and Holleman 2007; Holleman 2006; also see discussions about unipolar versus bipolar questions, e.g., Friborg et al. 2006; Saris et al. 2010). No-opinion answers are an important proxy for (a lack of) data quality, as survey respondents frequently choose such answers to indicate comprehension problems (e.g., Deutskens et al. 2004; Kamoen and Holleman 2017). The current research therefore investigates the effect of question polarity on nonsubstantive answers: the proportion of no-opinion answers. To the best of our knowledge, no-opinion answers have not been analyzed as a dependent variable in polarity research before. This is probably because survey respondents shy away from providing no-opinion answers (Krosnick and Presser 2010), which means that a large sample is needed to demonstrate any effect.

The Complexity of Positive vs. Negative Questions

Survey handbooks acknowledge the advantages of mixing positive and negative wording in sets of questions in order to “alert inattentive respondents that item content varies” (Swain, Weathers, and Niedrich 2008, 116) and also to detect straightliners (e.g., Sudman and Bradburn 1982; Weisberg 2005). Yet they also warn against using negative questions in abundance (e.g., Dijkstra and Smit 1999; Dillman et al. 2009; Korzilius 2000). This is because negative questions are more difficult to process than their positive counterparts. Outside of a survey context, it has been shown repeatedly that negatives take more processing effort than their positive equivalents (e.g., Clark 1976; Hoosain 1973; Sherman 1973). Horn (1989, 168) summarizes: “all things being equal, a negative sentence takes longer to process and is less accurately recalled and evaluated relative to a fixed state of affairs than the corresponding positive sentence”. This holds both for sentences that include an explicit negation (e.g., not happy/happy) and for sentences that include an implicit negative, and it generalizes across morphological markedness (e.g., unhappy/happy vs. sad/happy) and across semantic types such as verbs (e.g., forget/remember) and contradictory adjectives (e.g., absent/present) (Clark and Clark 1977). The presumed cause of these processing differences is that negatives must be converted into positives before they can be understood (see Clark 1976; Kaup et al. 2006).

A second reason for survey handbooks to advise against the use of negative questions is that the answers to negative questions are relatively hard to interpret. This is because it is counterintuitive for respondents to answer ‘no’ or ‘disagree’ to indicate that they favor an attitude object (Dillman, Smyth, and Christian 2009). For questions with an explicit negative, this confusion is particularly large, because in daily language use a no-answer to a question with an explicit negation indicates agreement with the negated statement. For example, one would probably answer No, asylum seekers should not be allowed to indicate agreement with the statement Do you think the government should not allow any more asylum seekers (example taken from Dijkstra and Smit 1999, 84). In a survey context, however, a yes-answer is the desired response to indicate agreement. This causes difficulties in interpreting the meaning of yes/no and agree/disagree answers to questions with an explicit negation. On top of that, questions with explicit negatives sometimes generate invalid responses, because fast responders miss the negative term and therefore provide a response that does not match their opinion (e.g., Dillman et al. 2009).

Taken together, based on survey handbooks and linguistic research, we may assume that negative questions are more difficult to process than their positive counterparts. As no-opinion answers are an important proxy for question complexity (e.g., Deutskens et al. 2004; Kamoen and Holleman 2017), we expect more no-opinion answers for negative questions than for positive ones. We test this hypothesis in the context of a specific type of survey called a voting advice application (VAA). VAAs are online tools that help users determine which party to vote for at election time. These tools have become increasingly popular in Europe over the past decades, reaching up to 40% of the electorate in countries such as the Netherlands (see Marschall 2014). In a VAA, users express their attitudes toward a set of survey questions about political issues. These questions are formulated by a commercial or government-funded VAA developer, in dialogue with the political parties running in the election. Based on the match between the user’s answers and the parties’ issue positions, the tool subsequently provides a personalized voting advice. In the calculation of this advice, no-opinion answers are excluded (De Graaf 2010; Krouwel, Vitiello, and Wall 2012). As VAA developers want to base their voting advice on as many VAA questions as possible, this makes an investigation of the effect of question polarity on the proportion of no-opinion answers practically relevant too: it would be problematic if one wording led to more no-opinion answers than another. This is especially true since several studies have shown that the VAA voting advice has an impact on users’ vote choice (e.g., Andreadis and Wall 2014; Wall et al. 2012).

Method

Design and Materials

During the Dutch municipal elections of March 2014, we conducted a real-life field experiment with a VAA developed for the municipality of Utrecht, the fourth-largest city in the Netherlands with 258,087 inhabitants. In collaboration with VAA developer Kieskompas and all 17 political parties running in the elections, four experimental versions of Kieskompas Utrecht were constructed in addition to the standard (benchmark) version.[1]

In the experimental VAA versions, the polarity of the question was varied for 16 out of 30 questions (see Figures 1 and 2 for an example). These manipulations can be divided into two types. A total of 10 questions contained an explicit sentence negation (e.g., ‘The municipality can cut down/cannot cut down on social work’). The remaining 6 manipulations contained an implicit negative, that is, a word with negative valence (e.g., The requirement of a building permit for one’s own house should remain to exist/be abolished). Research shows that language users can easily distinguish between words with positive versus negative valence (Hamilton and Deese 1971), and all the implicit negative terms used in the current research scored high on negative valence in an empirical study (Warriner, Kuperman, and Brysbaert 2013).[2] All experimental materials can be found in the supplemental materials.

The manipulated questions were distributed across the VAA versions in such a way that each VAA contained an equal number of positive and negative items. All of the Kieskompas Utrecht visitors (N = 41,505) were randomly assigned to either the standard Kieskompas version, or to one of the experimental versions.[3]

Figure 1 Example of a positively worded question: The cars that are most polluting (cars older than Diesel Euro 3 and Gas Euro 0) should be allowed in the city center. The concise translation of the response categories is: “completely agree, agree, neutral, not agree, completely not agree, no-opinion”.
Figure 2 Example of a negatively worded question: The cars that are most polluting (cars older than Diesel Euro 3 and Gas Euro 0) should be banned from the city center. The concise translation of the response categories is: “completely agree, agree, neutral, not agree, completely not agree, no-opinion”.

Participants

Kieskompas Utrecht was launched on February 18, 2014. Between February 18 and March 19 (Election Day), the tool was visited 41,505 times.[4] For the purposes of the current study, we focus on those VAA users who were randomly assigned to one of the experimental versions of Kieskompas, which means that VAA users who were assigned to the benchmark version (N = 7,812) were excluded. This was necessary because the benchmark version differed in more than one respect from the four experimental versions; for instance, it did not contain headings above the questions.

In addition, we only took into account those VAA users who were 18 or older (and hence eligible to vote), for whom it took longer than 2 minutes to fill out all 30 statements, and who did not show straight-lining behavior (i.e., reported the same answer to each and every statement). This cleaning method is similar to the one used in Van de Pol et al. (2014). Cleaning the data led to the exclusion of another 2,581 cases, which means that 31,112 Kieskompas users were included in the analyses.
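
As an illustration of these cleaning rules, the sketch below applies the three exclusion criteria. This is not the authors’ code; the DataFrame and column names ("age", "completion_seconds", "a1".."a30") are hypothetical stand-ins for the Kieskompas log data.

```python
import pandas as pd

def clean_vaa_data(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the three exclusion rules described in the text."""
    answer_cols = [f"a{i}" for i in range(1, 31)]         # the 30 statements
    eligible = df["age"] >= 18                            # old enough to vote
    not_too_fast = df["completion_seconds"] > 120         # longer than 2 minutes
    # Straight-liners give the same answer to every statement.
    straightliner = df[answer_cols].nunique(axis=1) == 1
    return df[eligible & not_too_fast & ~straightliner]
```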

In our final sample, the male/female division is about equal (50.7% female). The mean age is 37.3 years (SD = 13.8). VAA users are fairly highly educated (the median category was higher vocational education or university bachelor) and rather interested in politics (mean of 3.3 on a 5-point scale, SD = 0.83). These imbalances with respect to educational level and political interest are very common for samples of VAA users (Marschall 2014).

In order to check the randomization, we compared the experimental versions with respect to age (F(3, 22207) = 1.58; p = .19), gender (χ²(3) = 2.90; p = .41), educational level (χ²(3) = 11.90; p = .85), and interest in politics (F(3, 21990) = 0.34; p = .80). As none of these tests showed a difference between conditions, there is no reason to assume that there are a priori differences between the VAA users in the experimental conditions.
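
Such randomization checks can be run with standard tools. The sketch below mirrors the logic (ANOVA for continuous covariates, chi-square tests for categorical ones); the `version`, `age`, and `gender` columns are assumptions, and this is an illustration rather than the authors’ analysis script.

```python
from scipy.stats import chi2_contingency, f_oneway
import pandas as pd

def check_randomization(df: pd.DataFrame) -> None:
    """Compare experimental versions on background covariates."""
    # ANOVA across versions for a continuous covariate such as age.
    age_by_version = [g["age"].dropna() for _, g in df.groupby("version")]
    f_stat, p_age = f_oneway(*age_by_version)
    # Chi-square test of independence for a categorical covariate.
    table = pd.crosstab(df["version"], df["gender"])
    chi2, p_gender, dof, _ = chi2_contingency(table)
    print(f"age: F = {f_stat:.2f}, p = {p_age:.2f}")
    print(f"gender: chi2({dof}) = {chi2:.2f}, p = {p_gender:.2f}")
```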

Measurement and Analyses

To analyze the effect of statement polarity on the proportion of no-opinion answers, we constructed a binary variable indicating whether the VAA user provided a no-opinion answer (0) or a substantive answer (1) to each of the 16 manipulated questions. This binary dependent variable was subsequently predicted in the logit multilevel model displayed in Equation 1 below. In this model, Y_jk indicates whether or not individual j (j = 1, 2, …, 31,112) gives a substantive answer to question k (k = 1, 2, …, 16). In the fixed part of the model, two cell means (Searle 2006) are estimated: one for positive and one for negative question wordings.

To estimate these, dummy variables (D_POS and D_NEG) are created that take the value 1 when the observation matches the corresponding wording type. Using these dummies, two logit proportions are estimated (β_1 and β_2), which may vary between persons (u_1j, u_2j) and questions (v_0k).[5] The model assumes that the no-opinion answers are nested within items and within respondents at the same time, so a cross-classified model is in operation (Quené and Van den Bergh 2004, 2008). Please note that while the person variance is estimated separately for positive and negative wordings, the question variance is estimated only once; this is a constraint of the model. All residuals are normally distributed with an expected value of zero and variances of, respectively, S²_u1, S²_u2, and S²_v0.

Equation 1:

$$\operatorname{logit}\left(Y_{jk}\right) = D_{\mathrm{POS}(jk)}\left(\beta_1 + u_{1j}\right) + D_{\mathrm{NEG}(jk)}\left(\beta_2 + u_{2j}\right) + v_{0k}$$
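
To make the model concrete, the following self-contained simulation generates data according to Equation 1. The parameter values are taken from Table 1 for illustration only; this is a sketch of the data-generating process, not the authors’ estimation code (which fits a cross-classified logit model to the observed answers).

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items = 1000, 16
beta_pos, beta_neg = 2.88, 2.88                # logit cell means (Table 1, row 1)
u1 = rng.normal(0.0, np.sqrt(2.86), n_users)   # person effects, positive wordings
u2 = rng.normal(0.0, np.sqrt(2.67), n_users)   # person effects, negative wordings
v0 = rng.normal(0.0, np.sqrt(0.91), n_items)   # question effects (pooled)
d_pos = rng.integers(0, 2, size=(n_users, n_items))   # 1 = positive wording shown

# Equation 1: the dummy switches between the positive and negative cell mean,
# each with its own person effect; the question effect is shared.
logit = (d_pos * (beta_pos + u1[:, None])
         + (1 - d_pos) * (beta_neg + u2[:, None])
         + v0[None, :])
p_substantive = 1.0 / (1.0 + np.exp(-logit))
y = rng.binomial(1, p_substantive)             # 1 = substantive answer
# The marginal rate falls below logistic(2.88) ~= 0.947 because averaging the
# logistic over the random effects pulls the population mean toward 0.5.
print(f"simulated share of substantive answers: {y.mean():.3f}")
```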

Results

The first row in Table 1 shows the mean proportion of substantive answers across the 16 manipulated statements. A comparison of positive and negative wordings shows no effect of question polarity on the proportion of nonsubstantive answers (χ² = 0.04; df = 1; p = .95).

Table 1 Parameter estimates of the multilevel models used for estimating the effect of question polarity (N respondents = 31,112).

                               % substantive answers (logit; SE)           S² respondents (SE)          S² items (SE)
                               Positive              Negative              Positive       Negative      Pooled
All items (N items = 16)       94.7% (2.88; 0.24)    94.7% (2.88; 0.24)    2.86 (0.04)    2.67 (0.04)   0.91 (0.32)
Explicit negatives (N = 10)    94.2%** (2.79; 0.33)  93.9% (2.74; 0.33)    2.21 (0.05)    2.01 (0.04)   1.09 (0.49)
Implicit negatives (N = 6)     95.4%** (3.03; 0.19)  95.9% (3.16; 0.19)    2.89 (0.08)    3.23 (0.09)   0.21 (0.12)

Note Table 1. For the sake of presentational clarity, the mean answers are given both as percentages and as the logits used in the analysis (in parentheses); a higher percentage means that more substantive answers were provided. The variances are expressed in logits only, with standard errors in parentheses.
** = p < .001 for the positive vs. negative contrast

As we did not observe an overall polarity effect, we also explored the effect of question polarity for each of the two types of negatives separately. The second row of Table 1 shows the polarity effect for explicit negatives (N = 10). In line with prior expectations, we observed that negative questions generate more no-opinion answers than their positive equivalents (χ² = 14.00; df = 1; p < .001). The size of this effect, however, is tiny relative to both the between-person standard deviation (Cohen’s d = 0.03) and the between-question standard deviation (Cohen’s d = 0.05). Put differently, the odds of providing a no-opinion answer are about 5% higher for negative questions than for positive ones.

The third row of Table 1 displays the effect of question polarity for the subset of items containing an implicit negative (N = 6). For this subset, too, an effect of question polarity is observed, albeit in the opposite direction: contrary to expectation, positive questions yield more no-opinion answers than their negative equivalents (χ² = 35.5; df = 1; p < .001). The size of this effect is small compared to the differences between respondents (Cohen’s d = 0.07), and somewhat larger, though still small, compared to the differences between items (Cohen’s d = 0.28). The odds of providing a no-opinion answer are roughly 14% higher for positive questions than for negative ones.
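
The percentages and odds statements above follow directly from the logit cell means in Table 1. The following back-of-the-envelope check (our own arithmetic, not part of the authors’ analysis script) reproduces them:

```python
import math

def prop(logit: float) -> float:
    """Convert a logit cell mean into a proportion of substantive answers."""
    return 1 / (1 + math.exp(-logit))

# Explicit negatives (Table 1, row 2): positive 2.79, negative 2.74.
print(f"{prop(2.79):.3f} vs {prop(2.74):.3f}")   # ~0.942 vs ~0.939 substantive
print(f"{math.exp(2.79 - 2.74):.3f}")            # OR ~1.05: odds of a no-opinion
                                                 # answer ~5% higher when negative
# Implicit negatives (row 3): positive 3.03, negative 3.16.
print(f"{math.exp(3.16 - 3.03):.3f}")            # OR ~1.14: ~14% higher when positive
```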

Conclusion

The current research investigated, in the context of an online VAA, whether the proportion of no-opinion answers depends on an important question characteristic: the choice of a positive or a negative statement wording. Across a set of 16 manipulated questions, we find no overall effect of question polarity. This is contrary to expectations, because survey handbooks (e.g., Dijkstra and Smit 1999; Dillman et al. 2009; Korzilius 2000) as well as linguistic research (Clark 1976; Kaup, Ludtke, and Zwaan 2006) point out that negative questions and their answers are structurally more difficult to comprehend than their positive counterparts. We did observe polarity effects when analyzing the two types of negatives separately. For questions including an explicit negation (e.g., not or none), we observed more no-opinion answers for the negative question versions than for their positive equivalents. The reverse was true for the set of implicit negatives (e.g., forbid/allow): for these pairs, the positive wording generated more no-opinion answers.

Discussion

We can only speculate about the reasons for the unexpected finding that implicit negatives yield fewer no-opinion answers than their positive equivalents. One explanation is that implicit negatives are actually easier to process than their positive counterparts. This explanation is consistent with work on the forbid/allow asymmetry. A semantic analysis of forbid and allow questions has shown that the meaning of forbid questions is well-defined, as forbidding always refers to “…an act of inserting a barrier, and to a force dynamic pattern which brings about change”, whereas allow questions are more ambiguous, because allowing “may imply causing (removing a barrier) as well as letting (not inserting a barrier)” (Holleman 2000, 186). In line with this work, the answers to a set of forbid questions have been shown to be more homogeneous, and therefore more reliable, than answers to a comparable set of allow questions (Holleman 2006). Hence, if these results generalize to other contrast pairs, the results of the present study can be explained by implicit negatives having a clearer meaning than their positive equivalents.

Alternatively, not (just) the linguistic form, but the match between the linguistic form and the status quo of the question topic may explain our findings. In our experimental materials, the items including an implicit negative (e.g., forbid/allow) always related to situations where the status quo matched the positive wording (e.g., circuses with animals are currently allowed). By contrast, for items including an explicit negation (e.g., may/may not cut down), the negative wording always matched the status quo (e.g., there are currently no cut-downs on art and culture). This means that we can also rephrase our results as follows: irrespective of the linguistic form, wordings representing the current state of affairs generate more no-opinion answers than wordings that represent change. The idea that the appropriateness of a linguistic form in its usage context determines processing complexity is consistent with theories from pragmatic linguistics (e.g., Sperber and Wilson’s Relevance Theory, 1995). According to this work, language users want to make their contribution to a discourse as informative as is required for the purposes of the exchange. In a political attitude context, wordings that represent change with respect to the status quo are probably more informative than wordings that describe the current state of affairs. This is because most citizens have little knowledge of the exact political issues at stake (Delli Carpini and Keeter 1996), whereas they do have knowledge about the status quo, as they encounter it in their daily lives (Lupia 1992). To disentangle these two explanations, we propose a future study in which both the question wording (positive, implicit negative, explicit negative) and the status quo (issue X is currently allowed in municipality 1 and forbidden in municipality 2) are varied.

Although we cannot yet fully explain the current findings, they are clearly relevant for survey and VAA practice. In a VAA context, the answers users give to the political attitude questions directly influence the voting advice (De Graaf 2010; Krouwel, Vitiello, and Wall 2012). As the voting advice affects vote choices (e.g., Andreadis and Wall 2014), we believe our results are important for VAA developers, even though the statistical size of the observed effects is small. Taking the odds ratio as a standard of comparison, about 1 in 20 no-opinion answers can be avoided if explicit negations matching the status quo are replaced by positive wordings representing change. Hence, if there is a political discussion about whether or not new houses should be built in a certain area, a question wording such as “New houses should be built in area X” should be preferred over the negative phrasing “There should be no new houses built in area X”. In addition, roughly 1 in 7 no-opinion answers can be avoided when positive wordings that match the status quo are replaced by implicit negatives describing change. Hence, if there is a debate about whether taxes on housing should continue to exist, one can better ask respondents to react to the statement “Taxes on housing should be abolished” rather than “Taxes on housing should remain to exist”.

Moreover, our results are also relevant for the broader context of political attitude surveys and surveys on policy issues, as these surveys include questions very similar to the ones found in VAAs. Consider, for example, questions in the Eurobarometer (e.g., asking whether respondents are for or against a European economic and monetary union with one single currency, the euro; Eurobarometer 2015, QA18.1), or popular polls in newspapers and other media (e.g., Do you think Ukraine should become a member of the EU?; https://www.burgercomite-eu.nl/peiling-maurice-de-hond/). The only difference between a VAA and these other political attitude survey contexts might be the type of respondents: VAA users may be more motivated to fill out the questionnaire, as they are rewarded with a personalized advice (Holleman, Kamoen, and De Vreese 2013). Moreover, the VAA users in our sample appeared to be rather highly educated and fairly interested in politics. We know from Krosnick’s work on survey satisficing (e.g., Krosnick 1991) that the more motivated, highly educated, and interested in the survey topic respondents are, the smaller the effects of various wording variations on reported attitudes. Hence, if VAA users are indeed more motivated (though see Baka et al. 2012 and Kamoen and Holleman 2017 for studies suggesting otherwise), highly educated, and interested, polarity effects may be even larger in these other political attitude contexts.

Overall, we conclude that questions about political issues can best be phrased in terms of a change with respect to the status quo, and that question designers should not shy away from using implicit negatives to do so; this technique can reduce the proportion of no-opinion answers. For example, if a country is currently in the European monetary union, it is better to ask whether the country should leave this union than to ask about staying in it.

Funding

This work was supported by the Dutch Science Foundation (NWO), grant number 321-89-003.


  1. The project description was approved prior to fielding the study by Utrecht University, the Dutch Science Foundation, and the Utrecht City Council. Visitors always entered Kieskompas Utrecht voluntarily, and they could stop filling out the VAA at any point in time.

  2. Warriner et al. (2013) report valence scores on a scale from 1 (negative) to 9 (positive) for 13,915 English words; the scores were assigned by human raters. Across all reported words, the average valence score is 5.06. The implicit negative terms used in the current research (forbid, ban, stop, abolish, and force) had valence scores between 2.82 and 4.73. For the positive equivalents (allow, decide for yourself, remain to exist, maintain, continue), scores ranged between 5.61 and 6.39.

  3. A total of 13 out of the 16 manipulated questions also contained a manipulation of issue framing, operationalized by variation in the heading above the question (left-wing or right-wing). So, in fact, these questions were manipulated following a 2 (question polarity: positive or negative) x 2 (heading: left-wing or right-wing) design. As the effect of question polarity did not interact with the effect of the headings, we decided to report the effect of the headings elsewhere (authors, under review).

  4. It is impossible to check whether all visitors are unique users, because monitoring IP addresses would violate VAA users’ privacy. Even if IP addresses were available, it would be impossible to distinguish unique users on that basis, because multiple users may access the tool from the same IP address and the same user may access the tool from various IP addresses. If the same users filled out the VAA twice (or more), they would again be assigned randomly to a VAA version and receive a VAA consisting of positive and negative questions. We expect that multiple usage might decrease effect sizes, but it will not affect the direction of the effects.

  5. Please note that the model implies that there is variance due to the interaction between respondent and item. However, because the dependent variable in the model is binomial, this variance is not estimated, as it is fixed once the mean proportions are known. The interaction variance can be approximated by applying the formula p * (1 - p), in which p represents the estimated proportion of no-opinion answers for positive and negative questions, respectively.
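
As a worked illustration of the approximation in note 5 (our own arithmetic, not reported by the authors), the overall estimate in Table 1 implies an interaction variance of roughly 0.05 on the proportion scale:

```python
# Footnote 5's binomial approximation, evaluated at the overall Table 1
# estimate (illustrative arithmetic, not the authors' code).
p_no_opinion = 1 - 0.947                  # ~5.3% no-opinion answers overall
print(p_no_opinion * (1 - p_no_opinion))  # ~0.050: approximated interaction variance
```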

References

Andreadis, I., and M. Wall. 2014. “The Impact of Voting Advice Applications on Vote Choice.” In Matching Voters with Parties and Candidates: Voting Advice Applications in Comparative Perspective, edited by D. Garzia and S. Marschall, 115–28. Colchester, UK: ECPR Press.
Baka, A., L. Figgou, and V. Triga. 2012. “‘Neither Agree, nor Disagree’: A Critical Analysis of the Middle Answer Category in Voting Advice Applications.” International Journal of Electronic Governance 5 (3/4): 244. https://doi.org/10.1504/ijeg.2012.051306.
Chessa, A.G., and B.C. Holleman. 2007. “Answering Attitudinal Questions: Modelling the Response Process Underlying Contrastive Questions.” Applied Cognitive Psychology 21 (2): 203–25. https://doi.org/10.1002/acp.1337.
Clark, H.H. 1976. Semantics and Comprehension. The Hague, Netherlands: Mouton.
Clark, H.H., and E.V. Clark. 1977. Psychology and Language: An Introduction to Psycholinguistics. New York: Harcourt Brace Jovanovich.
Cohen, J. 1988. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates.
De Graaf, J. 2010. “The Irresistible Rise of Stemwijzer.” In Voting Advice Applications in Europe: The State of the Art, edited by L. Cedroni and D. Garzia, 35–46. Napoli, Italy: Scriptaweb.
Delli Carpini, M.X., and S. Keeter. 1996. What Americans Know about Politics and Why It Matters. New Haven, CT: Yale University Press.
Deutskens, E., K. De Ruyter, M. Wetzels, and P. Oosterveld. 2004. “Response Rate and Response Quality of Internet-Based Surveys: An Experimental Study.” Marketing Letters 15 (1): 21–36.
Dijkstra, W., and J. Smit. 1999. Onderzoek met vragenlijsten: Een praktische handleiding [Research with Questionnaires: A Practical Guide]. Amsterdam, Netherlands: VU Uitgeverij.
Dillman, D.A., J.D. Smyth, and L.M. Christian. 2009. Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method. Hoboken, NJ: Wiley.
Friborg, O., M. Martinussen, and J.H. Rosenvinge. 2006. “Likert-Based versus Semantic Differential-Based Scorings of Positive Psychological Constructs: A Psychometric Comparison of Two Versions of a Scale Measuring Resilience.” Personality and Individual Differences 40 (5): 873–84.
Hamilton, H.W., and J. Deese. 1971. “Does Linguistic Marking Have a Psychological Correlate?” Journal of Verbal Learning and Verbal Behavior 10 (6): 707–14.
Holleman, B.C. 2000. The Forbid/Allow Asymmetry: On the Cognitive Mechanisms Underlying Wording Effects in Surveys. Amsterdam, Netherlands: Rodopi.
———. 2006. “The Meanings of ‘Yes’ and ‘No’: An Explanation for the Forbid/Allow Asymmetry.” Quality and Quantity 40 (1): 10–38.
Holleman, B.C., N. Kamoen, A. Krouwel, J. Van de Pol, and C.H. De Vreese. 2016. “Positive vs. Negative: The Impact of Question Polarity in Voting Advice Applications.” PLOS ONE 11 (10).
Holleman, B.C., N. Kamoen, and C.H. De Vreese. 2013. “Stemadvies via internet: Antwoorden, attitudes en stemintenties [Voting Advice via the Internet: Answers, Attitudes, and Vote Intentions].” Tijdschrift voor Taalbeheersing 35 (1): 25–46.
Hoosain, R. 1973. “The Processing of Negation.” Journal of Verbal Learning and Verbal Behavior 12 (6): 618–26.
Horn, L.R. 1989. A Natural History of Negation. Chicago: University of Chicago Press.
Kamoen, N., and B.C. Holleman. 2017. “I Don’t Get It: Response Difficulties in Answering Political Attitude Statements in Voting Advice Applications.” Survey Research Methods 11 (2): 125–40.
Kamoen, N., B.C. Holleman, and H. Van den Bergh. 2013. “Positive, Negative, and Bipolar Questions: The Effect of Question Polarity on Ratings of Text Readability.” Survey Research Methods 7 (3): 181–89. http://dspace.library.uu.nl/handle/1874/287515.
Kaup, B., J. Ludtke, and R.A. Zwaan. 2006. “Processing Negated Sentences with Contradictory Predicates: Is a Door That Is Not Open Mentally Closed?” Journal of Pragmatics 38 (7): 1033–50. https://doi.org/10.1016/j.pragma.2005.09.012.
Korzilius, H. 2000. De kern van het survey-onderzoek [The Essence of Survey Research]. Assen, Netherlands: Van Gorcum.
Krosnick, J.A. 1991. “Response Strategies for Coping with the Cognitive Demands of Attitude Measures in Surveys.” Applied Cognitive Psychology 5 (3): 213–36.
Krosnick, J.A., and S. Presser. 2010. “Questionnaire Design.” In Handbook of Survey Research, edited by P.V. Marsden and J.D. Wright, 2nd ed., 263–314. Bingley, UK: Emerald Group Publishing.
Krouwel, A.P.M., T. Vitiello, and M. Wall. 2012. “The Practicalities of Issuing Vote Advice: A New Methodology for Profiling and Matching.” International Journal of Electronic Governance 5 (3–4): 223–43.
Lupia, A. 1992. “Busy Voters, Agenda Control, and the Power of Information.” American Political Science Review 86 (2): 390–403. https://doi.org/10.2307/1964228.
Marschall, S. 2014. “Profiling Users.” In Matching Voters with Parties and Candidates: Voting Advice Applications in Comparative Perspective, edited by D. Garzia and S. Marschall, 93–106. Colchester, UK: ECPR Press.
Quené, H., and H. Van den Bergh. 2004. “On Multilevel Modeling of Data from Repeated Measures Designs: A Tutorial.” Speech Communication 43 (1–2). https://doi.org/10.1016/j.specom.2004.02.004.
———. 2008. “Examples of Mixed-Effects Modeling with Crossed Random Effects and with Binomial Data.” Journal of Memory and Language 59 (4): 413–25. https://doi.org/10.1016/j.jml.2008.02.002.
Rugg, D. 1941. “Experiments in Wording Questions: II.” Public Opinion Quarterly 5 (1): 91–92. https://doi.org/10.1086/265467.
Saris, W., M. Revilla, J.A. Krosnick, and E.M. Shaeffer. 2010. “Comparing Questions with Agree/Disagree Response Options to Questions with Construct-Specific Response Options.” Survey Research Methods 4 (1): 61–79.
Schuman, H., and S. Presser. 1981. Questions and Answers in Attitude Surveys: Experiments on Form, Wording, and Context. London, England: Academic Press.
Searle, S.R. 2006. Linear Models for Unbalanced Data. New York: Wiley.
Sherman, M.A. 1973. “Bound to Be Easier? The Negative Prefix and Sentence Comprehension.” Journal of Verbal Learning and Verbal Behavior 12 (1): 76–84. https://doi.org/10.1016/S0022-5371(73)80062-3.
Sperber, D., and D. Wilson. 1995. Relevance: Communication and Cognition. 2nd ed. Oxford, UK: Blackwell.
Sudman, S., and N.M. Bradburn. 1982. Asking Questions: A Practical Guide to Questionnaire Design. San Francisco, CA: Jossey-Bass.
Swain, S.D., D. Weathers, and R.W. Niedrich. 2008. “Assessing Three Sources of Misresponse to Reversed Likert Items.” Journal of Marketing Research 45 (1): 116–31. https://doi.org/10.1509/jmkr.45.1.116.
Van de Pol, J., B.C. Holleman, N. Kamoen, A. Krouwel, and C.H. De Vreese. 2014. “Beyond Young, Higher Educated Males: A Typology of VAA Users.” Journal of Information Technology and Politics 11 (4): 397–411. https://doi.org/10.1080/19331681.2014.958794.
Wall, M., A.P.M. Krouwel, and T. Vitiello. 2012. “Do Voters Follow the Recommendations of Voter Advice Application Websites? A Study of the Effect of Kieskompas.nl on Its Users’ Vote Choices in the 2010 Dutch Legislative Elections.” Party Politics 20 (3): 416–28. https://doi.org/10.1177/1354068811436054.
Warriner, A.B., V. Kuperman, and M. Brysbaert. 2013. “Norms of Valence, Arousal, and Dominance for 13,915 English Lemmas.” Behavior Research Methods 45 (4): 1191–1207. https://doi.org/10.3758/s13428-012-0314-x.
Weisberg, H.F. 2005. The Total Survey Error Approach: A Guide to the New Science of Survey Research. Chicago: University of Chicago Press.
