Published: January 23, 2025 EDT

Visual design effects in a cross-national context: Are Indian and US respondents differently affected by the number of text boxes?

Ingrid Arts, Rens van de Schoot, Katharina Meitinger
Keywords: web probing, list-style boxes, web surveys, visual design, cross-cultural research
https://doi.org/10.29115/SP-2024-0034
Arts, Ingrid, Rens van de Schoot, and Katharina Meitinger. 2025. “Visual Design Effects in a Cross-National Context: Are Indian and US Respondents Differently Affected by the Number of Text Boxes?” Survey Practice 18 (January). https://doi.org/10.29115/SP-2024-0034.

Abstract

Research indicates that multiple text boxes for list-style open-ended questions and probes yield a better response quality than a single box. However, these findings are based on single-country studies from Europe and the US. In the current cross-national study, we evaluate the stability of visual design effects across two countries (US and India), and three languages (American-English, Indian-English, Hindi). In our web survey, we implemented a between-subject design comparing hard nonresponse, soft nonresponse, mismatches, and the number of themes across 1 and 5 answer box designs at two specific probes. The survey was fielded in the US (N=415) and India (N=537) between December 2022 and May 2023. In India, the survey was available in English (N=327) and Hindi (N=247). We found that in India and the US, the number of substantive themes significantly increased with multiple answer boxes. For one of the probes, multiple answer boxes also significantly reduced soft nonresponse and mismatches. Only for hard nonresponse were significant differences between languages found.

Introduction

Web surveys make it possible to reach respondents across the globe. At the same time, surveys that cover many cultures and languages need to work equally well for all respondents (Leitgöb et al. 2023). Whereas the role of optimal translation procedures is widely acknowledged (Behr 2023), the question of how survey design might differently impact respondents from different cultures, even if the content of the question is the same, has so far received less attention.

Open-ended questions have recently gained importance in survey methodology (Neuert et al. 2021; Singer and Couper 2017). Different open-ended questions exist that elicit information from respondents; for a classification, see Couper et al. (2011). List-style open-ended questions request short responses that capture aspects respondents think of regarding a specific issue (Keusch 2014; Meitinger and Kunz 2024). Web probing is the application of probing techniques from cognitive interviewing to assess the validity and comparability of survey questions (Behr et al. 2017). Probes are follow-up open-ended questions that ask respondents to provide additional information on the closed target question (Beatty and Willis 2007). Different probe types exist that ask for details in a question, for example, specific, comprehension, or category selection probes (Behr et al. 2017). Specific probes (SP) “focus on a particular detail of a term, on specific aspects that got activated in the context of a given question” (Behr et al. 2017, 5). If presented with multiple text boxes, they are labeled list-style SP; see Figures 1 and 2 for examples.

Figure 1. Example of a specific probe
Figure 2. Example of a list-style specific probe

Since respondents have to verbalize their answers, open-ended questions and probes increase the response burden, which can be alleviated by optimal visual design (Smyth et al. 2009). Optimal visual design improves the response quality of open-ended questions in general (e.g., Emde 2014), of list-style open-ended questions (Hofelich Mohr, Sell, and Lindsay 2016; Keusch 2014; Smith 2019), and of SP (Behr et al. 2014; Kunz and Meitinger 2022; Meitinger and Kunz 2024). An important visual cue is the number of text boxes. Multiple text boxes clarify that multiple themes are expected (Meitinger and Kunz 2024). In studies on open-ended questions, multiple text boxes increased the number of themes (Emde and Fuchs 2012; Hofelich Mohr, Sell, and Lindsay 2016; Keusch 2014; Smyth et al. 2009). However, multiple text boxes also intensify the perceived response burden (Meitinger and Kunz 2024), which can increase item nonresponse (Fuchs 2013; Smyth et al. 2009).

For list-style SP, Meitinger and Kunz (2024) compared one text box with three, five, and ten list-style boxes in a German study. More boxes elicited more themes and reduced mismatches (i.e., respondents providing answers to a different probe type than the one requested), although the increase in themes was not linear. The experimental versions did not significantly differ regarding hard nonresponse (HNR, i.e., the respondent leaves the answer box empty) and soft nonresponse (SNR, i.e., don’t knows, refusals, or random letter combinations; Kaczmirek, Meitinger, and Behr 2017).

The overall recommendation based on previous research is to provide multiple answer boxes. However, all previous studies manipulating the number of text boxes were single-country studies from Europe (Fuchs 2013; Keusch 2014; Meitinger and Kunz 2024) or the US (Hofelich Mohr, Sell, and Lindsay 2016; Smyth et al. 2009) that did not assess cultural variation in this visual design effect. Respondents from different cultures potentially react differently to visual input (Cenek and Cenek 2015; Cyr, Head, and Larios 2010; Würz 2006). The cognitive processes involved in answering survey questions (comprehension, retrieving information, forming a judgement, reporting the results) are likely to be influenced by visual design (Meitinger and Kunz 2024). Visual attention and cognitive style have been reported to differ between cultures (Čeněk, Tsai, and Šašinka 2021; Chua, Boland, and Nisbett 2005; Ji and Yap 2016; Šašinková et al. 2023), and differences in visual perception potentially influence response behavior (Tourangeau, Couper, and Conrad 2004).

Since the goal of web probing is to assess and improve the comparability and validity of survey items (Behr et al. 2017), the method and its visual design should perform equally well across countries, even in cultures that are very distinct from the US and European context. American culture is often described as individualistic and focused on success, with a direct communication style (Hofstede 1980; Lewis 2006). In contrast, Indian culture is collectivistic and hierarchical, with a polite and indirect communication style (Hofstede 1980; Nishimura, Nevgi, and Tella 2008). India is currently the most populous country in the world (Worldbank, n.d.) and already has a long-standing survey culture (Lau, Marks, and Gupta 2018). Furthermore, India is a multi-lingual country, which makes it possible to also test the effect of language (proficiency). To the best of our knowledge, web probing has so far never been applied in India.

First, since the perceived response burden increases with multiple text boxes, more text boxes can demotivate respondents and increase item nonresponse (Fuchs 2013; Smyth et al. 2009). Therefore, it is expected that both hard nonresponse and soft nonresponse increase with the number of text boxes. Moreover, since multiple text boxes can clarify the response task (Meitinger and Kunz 2024), it is expected that multiple text boxes reduce mismatches while increasing the number of themes mentioned.

Second, the visual design effect is expected to be moderated by cultural mechanisms, as cultures differ in their communication style (Hall 1976). Low-context cultures, such as the US, use direct communication that emphasizes explicitness and clarity of the message and are more individualistic (Hofstede 1980). High-context cultures, such as in India, are often collectivistic (Hofstede 1980), and communication is more indirect (Kapoor et al. 2003) and influenced by human relations and hierarchy (Nishimura, Nevgi, and Tella 2008). One relevant concept is face, “the positive social value a person effectively claims for himself by the line others assume he has taken during a particular contact” (Goffman 1967, 5). Face influences interpersonal communication (Bresnahan and Zhu 2017) and is found to differ per culture (Baig, Ting-Toomey, and Dorjee 2014; Yabuuchi 2004). Specifically, in collectivistic cultures preserving face is paramount because losing it affects the entire group (Merkin 2017). In India, face is closely related to izzat (honor) and sharam (shame) (Soni 2012) and balances the wants and needs of the individual against those of the group (Baig, Ting-Toomey, and Dorjee 2014). Moreover, face affects politeness (O’Driscoll 2017), and rules of politeness can affect response behavior. In some cultures, not answering a question is considered impolite, and respondents are more likely to opt for SNR instead of HNR (Meitinger, Behr, and Braun 2021). Furthermore, Indians tend to reduce disagreements with softened negative statements and apologies or by delaying their answer when disagreeing or refusing (Al-Sallal and Ahmed 2022; Valentine 1994). Therefore, the visual design effect might be counteracted by conversational norms in the Indian context, and we hypothesize that the visual design effect on soft nonresponse is weaker for Indian than for American respondents.

Third, language proficiency affects response behavior (Schwarz, Oyserman, and Peytcheva 2010); for example, second-language users often have a smaller vocabulary, a lower conceptual understanding of the depth of words (Kieffer and Lesaux 2015), and give less detailed responses (Walsh et al. 2013). India is a multi-lingual country with two official languages of government communication: Hindi and English (“Indian Constitution Part XVII,” n.d.), and about 10% of Indians speak English, often as a second or third language (Indian Census 2011). Therefore, we expect the visual design effect on the number of themes mentioned to be lowest for Indian English respondents. Lastly, due to lower language proficiency of Indian respondents who answer in English, we expect the visual design effect on mismatches to be lowest for Indian English because the clarifying effect of multiple text boxes is counteracted by a potential misunderstanding of the probe wording.

The current study extends the research by Meitinger and Kunz (2024) on visual design effects and compares a 1-box SP with a 5-box SP in two countries (US and India) to study the influence of culture and language (American-English, Indian-English, and Hindi) on response quality.

Data and Methods

Data collection and sample

Data were collected on Amazon MTurk in the US and India from December 2022 to May 2023. The intended sample size was 1,200 respondents, evenly distributed per language, gender, and age group (18–30, 31–50, and 50+ years old). We offered American and Indian respondents who completed the survey a compensation of $2.50, roughly corresponding to the minimum wage in the US (U.S. government, n.d.). Furthermore, we restricted participation to MTurk users with a HIT approval rate of 90% or higher. We implemented IP HUB in our survey (Kennedy et al. 2020) to block users from outside India or the US and users of a proxy or virtual private server. We also used reCAPTCHA v2 and v3 in Qualtrics for bot protection. Indian respondents could participate in Hindi and English. For detailed documentation, codebooks, and the data, see Arts, van de Schoot, and Meitinger (2024).

In total, 459 American respondents and 683 Indian respondents (English: 373, Hindi: 310) answered the survey. In our study, we excluded respondents with extreme survey duration (Berger and Kiefer 2021) and respondents who did not answer both target questions, which left 989 responses (American-English, AE: 415, Indian-English, IE: 327, Hindi, IH: 247). See Table 1 for sample size, demographics, and completion time by experimental group. Due to significant differences in age and gender (age: F(2, 983)=5.92, p=.015; gender: χ2(2)=10.20, p=.006), we included both variables as covariates in our analysis.
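
To make the cleaning steps concrete, the following is a minimal R sketch of the two exclusion rules and the covariate checks described above, run on simulated data; the column names (duration_min, tq1, tq2, language, age, gender) and the specific outlier fence are illustrative assumptions, not the authors' actual pipeline.

```r
# Minimal sketch of the exclusion rules and covariate checks (simulated data;
# column names and the outlier fence are assumptions, not the authors' code).
set.seed(1)
raw <- data.frame(
  duration_min = c(rlnorm(300, meanlog = 2.9, sdlog = 0.3), runif(10, 90, 240)),
  tq1      = sample(c(1:5, NA), 310, replace = TRUE),
  tq2      = sample(c(1:5, NA), 310, replace = TRUE),
  language = sample(c("AE", "IE", "IH"), 310, replace = TRUE),
  age      = sample(18:70, 310, replace = TRUE),
  gender   = sample(c("woman", "man"), 310, replace = TRUE)
)

# 1) Drop respondents with extreme survey durations (one of several outlier
#    rules compared by Berger and Kiefer 2021; a Tukey fence is used here).
fence <- quantile(raw$duration_min, c(.25, .75)) + c(-1.5, 1.5) * IQR(raw$duration_min)
keep_duration <- raw$duration_min >= fence[1] & raw$duration_min <= fence[2]

# 2) Drop respondents who did not answer both target questions.
keep_targets <- !is.na(raw$tq1) & !is.na(raw$tq2)

clean <- raw[keep_duration & keep_targets, ]

# Covariate checks reported in the text: age by language version (one-way
# ANOVA) and gender by language version (chi-squared test).
summary(aov(age ~ language, data = clean))
chisq.test(table(clean$language, clean$gender))
```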

Experimental design

Based on a between-subjects design, respondents were randomly assigned to a 1-box or a 5-box condition at the beginning of the survey. The experiments were implemented in a questionnaire on environmental concerns, identical to questions from Module 5 of the World Values Survey (WVS 2014). We added the following target and probing questions:

TQ1: “To which extent do you agree or disagree with the following statement? I would give part of my income if I were certain that the money would be used to prevent environmental pollution.”

SP1: “Which type(s) of environmental pollution were you thinking of when you answered the previous question?”

TQ2: “How serious do you consider the problem of global warming or the greenhouse effect to be for the world as a whole?”

SP2: “Which problems relating to global warming or the greenhouse effect did you think of when answering the previous question?”

SP1 was implemented at the beginning of the survey, and SP2 was implemented in the middle of the survey.

Table 1. Sample composition and duration per language version (AE = American English, IE = Indian English, IH = Hindi)

| | AE 1-box | AE 5-box | IE 1-box | IE 5-box | IH 1-box | IH 5-box |
|---|---|---|---|---|---|---|
| N | 213 | 202 | 175 | 152 | 125 | 121 |
| Gender (% women) | 49.8 | 49.0 | 42.3 | 42.8 | 38.4 | 35.5 |
| Age (M, SD) | 36.0 (12.0) | 37.3 (12.3) | 36.5 (10.7) | 35.8 (9.5) | 34.1 (8.0) | 34.9 (8.3) |
| Education (% high) | 93.0 | 91.0 | 96.0 | 95.4 | 92.8 | 94.2 |
| Median completion time (minutes) | 15.6 | 17.2 | 19.9 | 21.9 | 18.2 | 20.3 |

Coding procedure

Based on the probe responses, we developed two code schemata that distinguish between substantive themes and methodological issues (e.g., mismatches, nonresponse types); see Appendix A and B for the coding schemata. Coding was done by two student assistants who received specific training. Intercoder reliability was high (SP1: 98%, SP2: 97%; Holsti 1969).
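
The reported percentages presumably refer to Holsti's coefficient of reliability, which is simple percent agreement between the two coders; for reference, it can be written as follows (a standard formula, not reproduced from the article).

```latex
% Holsti's coefficient of reliability (percent agreement):
% M        = number of coding decisions on which both coders agree
% N_1, N_2 = number of coding decisions made by coder 1 and coder 2
CR = \frac{2M}{N_1 + N_2}
```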

Indicators

We distinguish between HNR (i.e., empty answer box) and SNR, where respondents provide an answer that is insufficient for coding (see Kaczmirek, Meitinger, and Behr 2017). Mismatches (MM) occur when respondents provide an answer to a different probe type than requested, e.g., a comprehension probe instead of a specific probe (Behr et al. 2017). The number of themes is the number of substantive themes that each respondent wrote in all text boxes of a probe.
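
Purely as an illustration of how these indicators relate to the coded responses, the sketch below derives them in R from a made-up coding layout; the column names and code labels are invented for this example and do not reproduce the coding schemata in Appendix A and B.

```r
# Illustrative only: deriving HNR, SNR, mismatches, and the number of themes
# from a hypothetical coding layout (one row per respondent-probe, codes
# separated by semicolons). Labels are invented, not the authors' schema.
coded <- data.frame(
  probe = c("SP1", "SP1", "SP1", "SP2"),
  codes = c("",                                   # empty box
            "DONT_KNOW",                          # insufficient for coding
            "AIR_POLLUTION;WATER_POLLUTION",      # two substantive themes
            "MISMATCH"),                          # answer to a different probe type
  stringsAsFactors = FALSE
)

nonsubstantive <- c("DONT_KNOW", "REFUSAL", "GIBBERISH", "MISMATCH")
code_list <- strsplit(coded$codes, ";")

coded$HNR    <- coded$codes == ""                                  # hard nonresponse
coded$SNR    <- vapply(code_list, function(x) length(x) > 0 &&
                         all(x %in% c("DONT_KNOW", "REFUSAL", "GIBBERISH")), logical(1))
coded$MM     <- vapply(code_list, function(x) "MISMATCH" %in% x, logical(1))
coded$themes <- vapply(code_list, function(x) sum(!x %in% nonsubstantive), integer(1))
coded
```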

Analysis

To test our hypotheses, we conducted a two-way ANCOVA (number of themes) and logistic regressions (SNR, MM). Due to the low prevalence of HNR, we refrained from conducting a logistic regression but reported a Fisher’s exact test instead. In the case of a significant language or interaction effect, we conducted post hoc tests. In all analyses, we added age and gender as covariates. Analyses were performed using R version 4.3.1 (R Core Team 2013).
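
A minimal R sketch of the three types of models named above, fit to simulated data, is shown below; the variable names, the factor coding, and the exact model specification (e.g., how the box-by-language interaction and the covariates enter) are assumptions for illustration, not the authors' code.

```r
# Sketch of the analysis models on simulated data (variable names, factor
# coding, and model details are illustrative assumptions).
set.seed(2)
n <- 976
d <- data.frame(
  boxes    = factor(sample(c("1-box", "5-box"), n, replace = TRUE)),
  language = factor(sample(c("AE", "IE", "IH"), n, replace = TRUE),
                    levels = c("AE", "IE", "IH")),   # American English as reference
  age      = sample(18:70, n, replace = TRUE),
  gender   = factor(sample(c("woman", "man"), n, replace = TRUE)),
  snr      = rbinom(n, 1, 0.20),   # soft nonresponse indicator
  mm       = rbinom(n, 1, 0.08),   # mismatch indicator
  hnr      = rbinom(n, 1, 0.03),   # hard nonresponse indicator
  themes   = rpois(n, 2)           # number of substantive themes
)

# Two-way ANCOVA for the number of themes, with age and gender as covariates
summary(aov(themes ~ boxes * language + age + gender, data = d))

# Logistic regressions for soft nonresponse and mismatches
glm_snr <- glm(snr ~ boxes * language + age + gender, family = binomial, data = d)
glm_mm  <- glm(mm  ~ boxes * language + age + gender, family = binomial, data = d)
exp(cbind(OR = coef(glm_snr), confint.default(glm_snr)))  # odds ratios with CIs

# Hard nonresponse is too rare for a logistic regression: Fisher's exact tests
fisher.test(table(d$language, d$hnr))
fisher.test(table(d$boxes, d$hnr))
```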

Results

Hard nonresponse

HNR was low for all languages and box conditions; see Table 2. For SP1, Fisher’s exact test showed a significant difference between languages (p<.001) but not between numbers of boxes (p=.591). Fisher’s exact pairwise comparisons showed significant differences between AE and IE (p=.037) and between AE and IH (p<.001), but not between IE and IH (p=.052). For SP2, HNR was slightly higher, but Fisher’s exact test showed no significant differences between languages (p=.445) or box conditions (p=.742).
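
The pairwise comparisons could be obtained, for example, by running Fisher's exact test on each pair of language versions, as sketched below on the simulated data frame d from the previous snippet; whether the authors adjusted these p-values for multiple testing is not stated, so no correction is applied here.

```r
# Sketch of pairwise Fisher's exact tests for hard nonresponse between
# language versions (reuses the simulated data frame d from the sketch above).
pairs <- combn(levels(d$language), 2, simplify = FALSE)
pairwise_p <- sapply(pairs, function(p) {
  sub <- droplevels(d[d$language %in% p, ])
  fisher.test(table(sub$language, sub$hnr))$p.value
})
names(pairwise_p) <- sapply(pairs, paste, collapse = " vs ")
pairwise_p
```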

Table 2. Descriptives of different indicators by language and box condition for the two probes. Cells show N (%); for substantive themes, cells show means.

| Indicator | Probe | AE 1-box | AE 5-box | IE 1-box | IE 5-box | IH 1-box | IH 5-box |
|---|---|---|---|---|---|---|---|
| HNR | SP1 | 0 (0.0) | 0 (0.0) | 1 (0.6) | 3 (2.0) | 4 (3.2) | 6 (5.0) |
| | SP2 | 12 (5.6) | 7 (3.4) | 4 (2.3) | 5 (3.3) | 5 (4.0) | 5 (4.1) |
| SNR | SP1 | 67 (31.5) | 30 (14.9) | 44 (25.1) | 41 (27.0) | 37 (37.6) | 22 (18.2) |
| | SP2 | 30 (9.4) | 17 (8.4) | 24 (13.7) | 12 (7.9) | 12 (9.6) | 11 (9.1) |
| Mismatches | SP1 | 35 (16.4) | 14 (6.9) | 9 (5.1) | 2 (1.3) | 11 (8.8) | 1 (0.8) |
| | SP2 | 15 (7.0) | 10 (5.0) | 11 (6.3) | 8 (5.3) | 12 (9.6) | 6 (5.0) |
| Substantive themes (mean) | SP1 | 1.73 | 3.83 | 1.93 | 3.76 | 1.46 | 3.68 |
| | SP2 | 2.81 | 3.26 | 2.83 | 3.03 | 2.59 | 3.29 |

Soft nonresponse

Overall, respondents opted more often for SNR than HNR, in particular in the 1-box condition and for SP1; see Table 2. For SP1, SNR was substantially reduced for AE and IH responses in the 5-box condition but was high and stable across conditions for IE responses. SNR significantly differed (see Table 3) by number of text boxes (OR=0.38, p<.001) but not by language (IE: p=.213, IH: p=.257). There was a significant interaction effect for the IE 5-box term relative to the AE 1-box reference (OR=2.92, p=.003) but not for the IH 5-box term (p=.995). For SP2, SNR did not significantly differ by number of boxes (p=.730), language (IE: p=.282, IH: p=.712), or interaction (IE 5-box: p=.372, IH 5-box: p=.801).

Table 3. Logistic regression results for soft nonresponse and mismatches. Cells show odds ratio [CI] and p-value.

| Predictor | Soft nonresponse, SP1 | Soft nonresponse, SP2 | Mismatches, SP1 | Mismatches, SP2 |
|---|---|---|---|---|
| # Boxes (ref. 1-box) | 0.38 [0.23–0.61], <.001 | 0.89 [0.44–1.75], .730 | 0.40 [0.20–0.76], .007 | 0.67 [0.28–1.51], .341 |
| Language: Indian English (ref. American English) | 0.75 [0.47–1.18], .213 | 1.42 [0.75–2.72], .282 | 0.30 [0.13–0.62], .002 | 0.85 [0.37–1.90], .696 |
| Language: Indian Hindi (ref. American English) | 1.32 [0.82–2.11], .257 | 0.86 [0.38–1.85], .712 | 0.55 [0.25–1.10], .103 | 1.37 [0.61–3.05], .437 |
| Age | 0.99 [0.97–1.00], .142 | 0.97 [0.95–0.99], .012 | 1.00 [0.98–1.02], .896 | 1.00 [0.97–1.02], .994 |
| Gender | 1.43 [1.07–1.92], .016 | 0.71 [0.45–1.09], .119 | 1.49 [0.91–2.45], .112 | 0.81 [0.48–1.36], .432 |
| # Boxes × Language: Indian English 5-boxes | 2.92 [1.45–5.93], .003 | 0.63 [0.23–1.72], .372 | 0.61 [0.08–2.88], .571 | 1.25 [0.35–4.39], .728 |
| # Boxes × Language: Indian Hindi 5-boxes | 1.02 [0.47–2.21], .955 | 1.15 [0.38–3.54], .801 | 0.21 [0.01–1.30], .161 | 0.71 [0.18–2.59], .605 |
| N | 976 | 976 | 976 | 976 |
| R² Tjur | 0.039 | 0.013 | 0.044 | 0.005 |

Mismatches

For both probes and all language versions, mismatches were more frequent in the 1-box than in the 5-box condition. For SP1, mismatches significantly differed by number of text boxes (OR=0.40, p=.007) and partly by language (IE: OR=0.30, p=.002; IH: p=.103). There were no significant interaction effects (IE 5-box: p=.571, IH 5-box: p=.161). For SP2, there were no significant main effects (language: IE: p=.696, IH: p=.437; box condition: p=.341) and no significant interaction effects (IE 5-box: p=.728, IH 5-box: p=.605).

Number of themes

Across all languages and probes, respondents mentioned more themes in the 5-box than in the 1-box condition; see Table 2 and Figure 3. For both probes, the number of boxes exerted a significant and large effect on the number of themes (SP1: F(1, 567)=470.08, p<.001, ω²=0.17; SP2: F(1, 586)=11.56, p=.001, ω²=0.12), but language version did not (SP1: F(2, 567)=2.91, p=.055; SP2: F(2, 586)=0.18, p=.836), and neither did the interaction effect (SP1: F(2, 567)=1.43, p=.241; SP2: F(2, 586)=1.19, p=.306).
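
For reference, the omega-squared effect size reported here is commonly computed from the ANOVA table as below; this is the standard formula rather than anything given in the article.

```latex
% Standard omega-squared for an ANOVA effect (not taken from the article):
\omega^2 = \frac{SS_{\text{effect}} - df_{\text{effect}} \, MS_{\text{error}}}
                {SS_{\text{total}} + MS_{\text{error}}}
```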

Figure 3. Profile plot for the number of themes

Discussion and conclusion

In this study, we compared a 1-box with a 5-box design for list-style SP across three language versions: American English, Indian English, and Hindi. We implemented the experiment at two probes to evaluate whether previous one-country studies can be extended to more distant cultural contexts, potentially providing some support for a universal mechanism for response behavior. The overall results point in this direction because the 5-box design significantly reduced mismatches for SP1 and significantly increased the number of themes for both probes.

Our findings align with Meitinger and Kunz (2024), who conducted their study in Germany. Additionally, SNR was significantly lower in the 5-box condition. Although this effect runs in the opposite direction to what we hypothesized and diverges from previous research (Fuchs 2013), it provides even more support for using a 5-box design. Contrary to the other indicators, we found significant differences across language versions for HNR at SP1; HNR did not significantly differ by number of boxes. Due to the low HNR prevalence, we had to refrain from using a logistic regression, and the covariates age and gender could not be considered. Therefore, we cannot clearly disentangle the effects of language, gender, and age across language versions.

For SP2, most indicators showed similar patterns as for SP1, but only the number of themes significantly increased with five text boxes. Differences between probe responses can probably be linked to the placement in the questionnaire: SP2 was asked further along in the questionnaire, meaning respondents were more familiar with the topic and the response task but may also have experienced survey fatigue (Scanlon 2019).

Our second research goal was understanding whether cultural mechanisms and language proficiency moderate the visual design effect. We hypothesized that rules of politeness counteract the visual design effect on SNR in the Indian context. For SP1, we found a significant interaction effect between language and visual design. SNR was lower in the 5-box condition than in the 1-box condition for AE and IH but not for IE responses. The pattern was reversed for SP2. The probe topic seems more relevant for SNR variations across visual design versions than cultural differences. Language proficiency also did not moderate the visual design effect on the number of themes and mismatches.

Our research findings underline the importance of optimal visual design for questionnaire developers. In our case, multiple text boxes clarified the task for respondents, reducing nonresponse and mismatches and increasing the number of themes mentioned. These effects were not significantly moderated by cultural mechanisms or language. Only for HNR did we find significant differences between language versions. Therefore, we recommend implementing multiple text boxes for list-style open-ended questions and specific probes in cross-national studies. Although we studied the US and India, similar effects are likely to appear in single-country studies with a multicultural and multilingual sample (e.g., in the US or Canada).

Limitations and future research

This survey did not use data from a probability panel but recruited respondents on Amazon MTurk. However, we proactively addressed potential data issues during data collection and analysis. In addition, using a nonprobability panel is a common procedure in web probing studies, since the goal is to assess the validity and comparability of survey measures, not to make statistical inferences about the general population (Behr et al. 2017).

In this study, highly educated[1] respondents are overrepresented in all language versions. This might lead to more people understanding the question, and thus to a reduction of nonresponse and mismatches. We expect lower educated respondents to rely more on the visual design than higher educated respondents. Therefore, the effect might be less pronounced in our study than in a study with a higher proportion of lower educated respondents.

We also implemented our experiment with two questions on environmental attitudes. Future research should replicate our experiment with other topics to rule out topic-specific effects. In our study, we focused on one aspect of visual design (i.e., the number of answer boxes). Future research could assess, for example, the influence of text box size or color. In addition, effect sizes and model fit were relatively low, implying that adding more (culturally influenced) variables to the model might provide additional insights.


Acknowledgements

We thank Kyra Girán and Vidhi Ramnarain for their coding efforts. We would also like to thank the editors and reviewers from Survey Practice for their useful comments and suggestions.

Lead author contact information

Ingrid Arts, Department of Methods & Statistics, Utrecht University
Utrecht, Netherlands
i.j.m.arts@uu.nl


  1. Education according to ISCED classification (UNESCO Institute for Statistics 2015). High: completed high school and higher (ISCED 4 and higher), low: unfinished high school or lower (ISCED 3 and lower).

Submitted: October 11, 2024 EDT

Accepted: December 16, 2024 EDT

References

Al-Sallal, R. E., and M. O. Ahmed. 2022. “The Role of Cultural Background in the Use of Refusal Strategies by L2 Learners.” International Journal of Society, Culture and Language. https://doi.org/10.22034/ijscl.2022.550928.2596.
Arts, I., R. van de Schoot, and K. Meitinger. 2024. “A Bilingual Dataset for Testing Web Probing in the US and India: The Example of Measures of Environmental Concern.” Journal of Open Psychology Data 12 (1). https://doi.org/10.5334/jopd.113.
Baig, N., S. Ting-Toomey, and T. Dorjee. 2014. “Intergenerational Narratives on Face: A South Asian Indian American Perspective.” Journal of International and Intercultural Communication 7 (2): 127–47. https://doi.org/10.1080/17513057.2014.898362.
Beatty, P. C., and G. B. Willis. 2007. “Research Synthesis: The Practice of Cognitive Interviewing.” Public Opinion Quarterly 71 (2): 287–311. https://doi.org/10.1093/poq/nfm006.
Behr, D. 2023. “What to Consider and Look out for in Questionnaire Translation (GESIS Survey Guidelines) (Version 1.0).” GESIS - Leibniz Institute for the Social Sciences. https://doi.org/10.15465/GESIS-SG_EN_043.
Behr, D., W. Bandilla, L. Kaczmirek, and M. Braun. 2014. “Cognitive Probes in Web Surveys: On the Effect of Different Text Box Size and Probing Exposure on Response Quality.” Social Science Computer Review 32 (4): 524–33. https://doi.org/10.1177/0894439313485203.
Behr, D., K. Meitinger, M. Braun, and L. Kaczmirek. 2017. “Web Probing – Implementing Probing Techniques from Cognitive Interviewing in Web Surveys with the Goal to Assess the Validity of Survey Questions (GESIS Survey Guidelines) (Version 1.0).” GESIS - Leibniz Institute for the Social Sciences. https://doi.org/10.15465/GESIS-SG_EN_023.
Berger, A., and M. Kiefer. 2021. “Comparison of Different Response Time Outlier Exclusion Methods: A Simulation Study.” Frontiers in Psychology 12:675558. https://doi.org/10.3389/fpsyg.2021.675558.
Bresnahan, M., and Y. Zhu. 2017. “Interpersonal Communication and Relationships across Cultures.” In Intercultural Communication, edited by L. Cheng, 199–217. Boston: de Gruyter. https://doi.org/10.1515/9781501500060-009.
Cenek, J., and S. Cenek. 2015. “Cross-Cultural Differences in Visual Perception.” Journal of Education Culture and Society 1:187–206. https://doi.org/10.15503/jecs20151.187.206.
Čeněk, J., J.-L. Tsai, and Č. Šašinka. 2021. “Correction: Cultural Variations in Global and Local Attention and Eye-Movement Patterns during the Perception of Complex Visual Scenes: Comparison of Czech and Taiwanese University Students.” PLOS ONE 16 (2): e0247219. https://doi.org/10.1371/journal.pone.0247219.
Chua, H. F., J. E. Boland, and R. E. Nisbett. 2005. “Cultural Variation in Eye Movements during Scene Perception.” Proceedings of the National Academy of Sciences 102 (35): 12629–33. https://doi.org/10.1073/pnas.0506162102.
Couper, M. P., C. Kennedy, F. G. Conrad, and R. Tourangeau. 2011. “Designing Input Fields for Non-Narrative Open-Ended Responses in Web Surveys.” Journal of Official Statistics 27 (1): 65–85.
Cyr, D., M. Head, and H. Larios. 2010. “Colour Appeal in Website Design within and across Cultures: A Multi-Method Evaluation.” International Journal of Human-Computer Studies 38 (1–2): 1–21. https://doi.org/10.1016/j.ijhcs.2009.08.005.
Emde, M. 2014. “Open-Ended Questions in Web Surveys. Using Visual and Adaptive Questionnaire Design to Improve Narrative Response.” Doctoral thesis, Technische Universität Darmstadt. http://tuprints.ulb.tu-darmstadt.de/id/eprint/4219.
Emde, M., and M. Fuchs. 2012. “Using Adaptive Questionnaire Design in Open-Ended Questions: A Field Experiment.” Paper presented at the American Association for Public Opinion Research (AAPOR) 67th Annual Conference, San Diego, USA, May 17–20.
Fuchs, M. 2013. “Dynamic Visual Design for List-Style Open-Ended Questions.” Paper presented at the 68th AAPOR conference, Boston, MA, May 16–19.
Goffman, E. 1967. Interaction Ritual: Essays on Face-to-Face Behavior. 1st ed. New York: Doubleday.
Hall, E. T. 1976. Beyond Culture. New York: Doubleday.
Hofelich Mohr, A., A. Sell, and T. Lindsay. 2016. “Thinking Inside the Box: Visual Design of the Response Box Affects Creative Divergent Thinking in an Online Survey.” Social Science Computer Review 34 (3): 347–59. https://doi.org/10.1177/0894439315588736.
Hofstede, G. 1980. Culture’s Consequences: International Differences in Work-Related Values. Beverly Hills, CA: Sage Publications, Inc.
Indian Census. 2011. “Indian Census.” https://censusindia.gov.in/census.website/.
“Indian Constitution Part XVII.” n.d. Accessed May 7, 2024. https://www.mea.gov.in/Images/pdf1/Part17.pdf.
Ji, L.-J., and S. Yap. 2016. “Culture and Cognition.” Current Opinion in Psychology 8:105–11. https://doi.org/10.1016/j.copsyc.2015.10.004.
Kaczmirek, L., K. Meitinger, and D. Behr. 2017. “Higher Data Quality in Web Probing with EvalAnswer: A Tool for Identifying and Reducing Nonresponse in Open-Ended Questions.” GESIS Papers. https://doi.org/10.21241/SSOAR.51100.
Kapoor, S., P. C. Hughes, J. R. Baldwin, and J. Blue. 2003. “The Relationship of Individualism–Collectivism and Self-Construals to Communication Styles in India and the United States.” International Journal of Intercultural Relations 27 (6): 683–700. https://doi.org/10.1016/j.ijintrel.2003.08.002.
Kennedy, R., S. Clifford, T. Burleigh, P. D. Waggoner, R. Jewell, and N. J. G. Winter. 2020. “The Shape of and Solutions to the MTurk Quality Crisis.” Political Science Research and Methods 8 (4): 614–29. https://doi.org/10.1017/psrm.2020.6.
Keusch, F. 2014. “The Influence of Answer Box Format on Response Behavior on List-Style Open-Ended Questions.” Journal of Survey Statistics and Methodology 2 (3): 305–22. https://doi.org/10.1093/jssam/smu007.
Kieffer, M. J., and N. K. Lesaux. 2015. “Knowledge of Words, Knowledge about Words: Dimensions of Vocabulary in First and Second Language Learners in Sixth Grade.” Reading and Writing 25:347–73. https://doi.org/10.1007/s11145-010-9272-9.
Kunz, T., and K. Meitinger. 2022. “A Comparison of Three Designs for List-Style Open-Ended Questions in Web Surveys.” Field Methods 34 (4): 303–17. https://doi.org/10.1177/1525822X221115831.
Lau, C. Q., E. Marks, and A. K. Gupta. 2018. “Survey Research in India and China.” In Advances in Comparative Survey Methods, edited by T. P. Johnson, B. Pennell, I. A. L. Stoop, and B. Dorer, 1st ed., 583–96. Wiley. https://doi.org/10.1002/9781118884997.ch28.
Leitgöb, H., D. Seddig, T. Asparouhov, D. Behr, E. Davidov, K. De Roover, S. Jak, et al. 2023. “Measurement Invariance in the Social Sciences: Historical Development, Methodological Challenges, State of the Art, and Future Perspectives.” Social Science Research 110:102805. https://doi.org/10.1016/j.ssresearch.2022.102805.
Lewis, R. D. 2006. When Cultures Collide: Leading across Cultures. 3rd ed. Boston: Nicholas Brealey International.
Meitinger, K., D. Behr, and M. Braun. 2021. “Using Apples and Oranges to Judge Quality? Selection of Appropriate Cross-National Indicators of Response Quality in Open-Ended Questions.” Social Science Computer Review 39 (3): 434–55. https://doi.org/10.1177/0894439319859848.
Meitinger, K., and T. Kunz. 2024. “Visual Design and Cognition in List-Style Open-Ended Questions in Web Probing.” Sociological Methods & Research 53 (2): 940–67. https://doi.org/10.1177/00491241221077241.
Merkin, R. S. 2017. Saving Face in Business: Managing Cross-Cultural Interactions. 1st ed. New York: Palgrave Macmillan.
Neuert, C., K. Meitinger, D. Behr, and M. Schonlau. 2021. “Editorial: The Use of Open-Ended Questions in Surveys.” Methods, Data, Analyses: A Journal for Quantitative Methods and Survey Methodology (mda) 15 (1): 3–6.
Nishimura, S., A. Nevgi, and S. Tella. 2008. “Communication Style and Cultural Features in High/Low Context Communication Cultures: A Case Study of Finland, Japan and India.” Paper presented at the Ainedidaktiikan symposiumi, Helsinki, Finland, April 8.
O’Driscoll, J. 2017. “Face and (Im)Politeness.” In The Palgrave Handbook of Linguistic (Im)Politeness, edited by J. Culpeper, M. Haugh, and D. Z. Kádár, 89–118. London: Palgrave Macmillan. https://doi.org/10.1057/978-1-137-37508-7_5.
R Core Team. 2013. “R: A Language and Environment for Statistical Computing.” Vienna, Austria: R Foundation for Statistical Computing. http://www.R-project.org/.
Šašinková, A., J. Čeněk, P. Ugwitz, J.-L. Tsai, I. Giannopoulos, D. Lacko, Z. Stachoň, J. Fitz, and Č. Šašinka. 2023. “Exploring Cross-Cultural Variations in Visual Attention Patterns inside and Outside National Borders Using Immersive Virtual Reality.” Scientific Reports 13 (1): 18852. https://doi.org/10.1038/s41598-023-46103-1.
Scanlon, P. J. 2019. “The Effects of Embedding Closed-Ended Cognitive Probes in a Web Survey on Survey Response.” Field Methods 31 (4): 328–43. https://doi.org/10.1177/1525822X19871546.
Schwarz, N., D. Oyserman, and E. Peytcheva. 2010. “Cognition, Communication, and Culture: Implications for the Survey Response Process.” In Survey Methods in Multinational, Multiregional, and Multicultural Contexts, edited by J. A. Harkness, M. Braun, B. Edwards, T. P. Johnson, L. Lyberg, P. Ph. Mohler, B. Pennell, and T. W. Smith, 1st ed., 175–90. Hoboken, NJ: Wiley & Sons, Inc. https://doi.org/10.1002/9780470609927.ch10.
Singer, E., and M. P. Couper. 2017. “Some Methodological Uses of Responses to Open Questions and Other Verbatim Comments in Quantitative Surveys.” Methods, Data, Analyses 11 (2): 115–34. https://doi.org/10.12758/MDA.2017.01.
Smith, T. W. 2019. “Optimizing Questionnaire Design in Cross-National and Cross-Cultural Surveys.” In Advances in Questionnaire Design, Development, Evaluation and Testing, edited by P. Beatty, D. Collins, L. Kaye, J. L. Padilla, G. Willis, and A. Wilmot, 471–92. Hoboken, NJ: John Wiley & Sons, Inc. https://doi.org/10.1002/9781119263685.
Smyth, J. D., D. A. Dillman, L. M. Christian, and M. McBride. 2009. “Open-Ended Questions in Web Surveys.” Public Opinion Quarterly 73 (2): 325–37. https://doi.org/10.1093/poq/nfp029.
Soni, S. 2012. “‘Izzat’ and the Shaping of the Lives of Young Asians in Britain in the 21st Century.” Doctoral thesis, University of Birmingham. https://etheses.bham.ac.uk/id/eprint/4078/1/Soni13PhD.pdf.
Tourangeau, R., M. P. Couper, and F. Conrad. 2004. “Spacing, Position, and Order: Interpretive Heuristics for Visual Features of Survey Questions.” Public Opinion Quarterly 38 (6): 368–93. https://doi.org/10.1093/poq/nfh035.
UNESCO Institute for Statistics. 2015. “International Standard Classification of Education: Fields of Education and Training 2013 (ISCED-F 2013) Detailed Field Descriptions.” UNESCO Institute for Statistics. https://doi.org/10.15220/978-92-9189-179-5-en.
U.S. government. n.d. “Minimum Wage US.” Accessed April 30, 2024. https://www.usa.gov/minimum-wage.
Valentine, T. M. 1994. “When ‘No’ Means ‘Yes’: Agreeing and Disagreeing in Indian English Discourse.” Paper presented at the International Conference on World Englishes Today, Urbana, IL, March 31–April 2.
Walsh, T., P. Nurkka, H. Petrie, and J. Olsson. 2013. “The Effect of Language in Answering Qualitative Questions in User Experience Evaluation Web-Surveys.” In Proceedings of the 25th Australian Computer-Human Interaction Conference: Augmentation, Application, Innovation, Collaboration, November 25-25, Adelaide, Australia. https://doi.org/10.1145/2541016.2541049.
Worldbank. n.d. “Population India.” Worldbank. Accessed November 19, 2024. https://databank.worldbank.org/views/reports/reportwidget.aspx?Report_Name=CountryProfile&Id=b450fd57&country=IND.
Würz, E. 2006. “Intercultural Communication on Web Sites: A Cross-Cultural Analysis of Web Sites from High-Context Cultures and Low-Context Cultures.” Journal of Computer-Mediated Communication 11:274–99. https://doi.org/10.1111/j.1083-6101.2006.00013.x.
WVS. 2014. “World Values Survey: Round Five—Country-Pooled Datafile Version.” http://www.worldvaluessurvey.org/WVSDocumentationWV5.jsp.
Yabuuchi, A. 2004. “Face in Chinese, Japanese, and U.S. American Cultures.” Journal of Asian Pacific Communication 14 (2): 261–97. https://doi.org/10.1075/japc.14.2.05yab.
