As I mentioned in Thursday’s post, I am about to step away from my research for a month (I did one more interview on Friday, but transcription will have to wait). Since I arrived here in early August and am finishing up in May 2016, this month marks the halfway point of my research grant. As such, this is a timely moment to revisit my initial research goals, review where this research has taken me, and consider where I will be heading during the first half of 2016.
Here is the inaugural post in which I defined my research agenda on the eve of my flight to New Delhi from Chicago. Upon rereading that post, I notice that the research topic I outlined was quite technical in nature: how the issue of systematic sampling error manifests itself in the context of Indian public opinion research, and how Indian public opinion researchers grapple with it. Systematic sampling error is the phenomenon by which certain population groups are more or less likely to be sampled than others, leading to biased estimates of public opinion. To study how this phenomenon manifests itself in Indian public opinion measurement, I envisioned myself working alongside Indian researchers at three organizations, asking them focused — and potentially invasive — questions about the protocols of their research, and observing how those protocols worked out in the field.
In the last four months, I participated in the questionnaire design, training, and analysis of CSDS’ post-poll survey for the Bihar election. I observed fieldwork for exit polls conducted by CVoter during the third phase of this election in Patna. I observed fieldwork for a market research survey being conducted in Delhi by Impetus Research. I interviewed some of India’s leading journalists and pollsters on this and other topics. I published an op-ed in The Hindu and presented at an IPSA conference in West Bengal reflecting on my experiences observing and participating in the design and implementation of Bihar surveys.
Where does this leave me? What have I learned about instances of systematic sampling error in India? How do researchers address this challenge?
It turns out that systematic sampling error is not a methodological challenge that bedevils Indian researchers. In fact, researchers here are quite attuned to the challenge of reaching a sample that demographically reflects the population being studied. They have different means of going about it, and some make more deliberate efforts than others, but the methods do not vary much from one research organization to another. To reach more female respondents, more female interviewers are used. To reach more Muslim respondents, more Muslim interviewers are used. When the demographic profile of the sample does not exactly match that of the population being studied, weighting is used to make up for the difference. This is no different from how public opinion researchers deal with such challenges in the United States.
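The weighting adjustment described above can be sketched in a few lines. This is a minimal illustration, not any organization's actual procedure: each respondent is weighted by the ratio of their group's population share to its sample share, and all of the numbers below are invented for the example.

```python
# Hypothetical sketch of post-stratification weighting: when a sample's
# demographic profile diverges from the population's, each respondent is
# weighted by (population share / sample share) of their group.
# All figures here are invented for illustration, not real survey data.

def poststratify(sample_shares, population_shares):
    """Compute a weight for each demographic group."""
    return {g: population_shares[g] / sample_shares[g] for g in sample_shares}

def weighted_yes_share(responses, weights):
    """Weighted proportion of 'yes' answers; responses are (group, answer) pairs."""
    total = sum(weights[g] for g, _ in responses)
    yes = sum(weights[g] for g, ans in responses if ans == "yes")
    return yes / total

# Hypothetical survey: women are 48% of the population but only 40% of the sample.
sample_shares = {"men": 0.60, "women": 0.40}
population_shares = {"men": 0.52, "women": 0.48}
weights = poststratify(sample_shares, population_shares)

# 100 respondents: 60 men (30 say yes), 40 women (30 say yes).
responses = ([("men", "yes")] * 30 + [("men", "no")] * 30
             + [("women", "yes")] * 30 + [("women", "no")] * 10)

unweighted = sum(1 for _, ans in responses if ans == "yes") / len(responses)  # 0.60
weighted = weighted_yes_share(responses, weights)  # 0.62, since women are weighted up
```

Because women in this toy sample say "yes" at a higher rate and are underrepresented, weighting nudges the estimate up from 60% to 62%.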
As I alluded to in both my Hindu op-ed and my IPSA presentation, there is one demographic variable for which systematic sampling error may be slightly biasing samples: caste. I have a suspicion, unsubstantiated thus far, that upper castes might be systematically overrepresented in many survey samples. Census data, the most recent of which comes from 2011, offers district-level figures on the share of women and of every religious group. But when it comes to caste, only the proportions of Scheduled Tribes (STs) and Scheduled Castes (SCs) are known. We do not really know how many Other Backward Classes (OBCs), Extremely Backward Classes (EBCs), or upper castes there are in each state, since the last comprehensive caste census was conducted by the British in 1931. The Indian government did a Socio Economic and Caste Census in 2011, but those data have not yet been made available.
The uncomfortable reality, then, is that survey researchers must rely on estimates of how many Brahmins, or Kurmis, or Yadavs, or Jats, etc., are present in each state when determining whether the demographic profiles of their samples are representative. They can look at previous survey samples to see if the proportions match up reasonably well over time. But if a systematic bias towards overrepresenting or underrepresenting one caste has affected all survey samples, it would be impossible to tell. We need a caste census to know for sure. I am hoping to investigate this during the second half of my research, if I have access to enough surveys, by looking at how often and by how much surveys fail to predict election results. If there is a systematic bias towards upper castes, one would expect survey predictions to be off the mark more often when 1) the upper-caste vote is more consolidated behind one party, 2) the share of upper-caste voters in a state is larger, or 3) both.
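The check I have in mind could be sketched roughly as follows. To be clear, everything in this snippet (the upper-caste shares and the prediction errors) is an invented placeholder standing in for data I have not yet collected; the point is only the shape of the test, not its result.

```python
# A rough sketch of the proposed check: if survey samples over-represent
# upper castes, prediction errors should tend to be larger where the
# upper-caste share of the electorate is bigger. The shares and errors
# below are invented placeholders, not real election data.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One (estimated upper-caste share, |predicted - actual| vote-share error)
# pair per survey, all hypothetical.
surveys = [(0.10, 1.5), (0.15, 2.0), (0.20, 3.1), (0.25, 3.8), (0.30, 5.2)]
shares = [s for s, _ in surveys]
errors = [e for _, e in surveys]

r = pearson_r(shares, errors)  # a strongly positive r would be consistent with the bias
```

A real analysis would also need to control for the consolidation of the upper-caste vote, since a diffuse upper-caste vote could mask the bias even where that group is overrepresented.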
But it still must be pointed out that if such a bias exists, its effect would probably be marginal. The broader challenge to conducting public opinion research in India is that the resources for doing high-quality research are often not available. This is especially the case when it comes to the academic study of political and social phenomena using survey data. The Centre for the Study of Developing Societies conducts high-quality election studies — see the Bihar election study I worked on, for example — but there do not appear to be other organizations that do such studies in India. Much, if not most, public opinion research here is funded by media outlets that are interested first in predicting election winners, not in understanding the underlying factors behind such wins, and certainly not in building a comprehensive database of political data on India’s electorate.
Because these media outlets generally do not understand what differentiates good survey research from bad, researchers who cut corners can do quite well for themselves. This is especially the case since no standard set of methodological reporting requirements exists. Researchers do not have to report the margin of error, sampling methodology, sample size, or sample demographic profile. Without such requirements, it is more difficult to distinguish good polls from bad ones. Unfortunately, the reputation of all Indian pollsters suffers as a result.
Pollsters, generally speaking, like to talk about their research in a language of dispassionate objectivity. They like to think of themselves as scientists in a laboratory, placing the social and political phenomena of the world under a powerful microscope. As a mindset for approaching their research, this is the right one.
But it is also something of a deception. To do this research, researchers need to obtain resources from those who will fund it. Those funders will have their own incentives for doing the research, and might direct researchers to cut, add, or change questions. They might ask researchers to cut methodological corners for the sake of cutting costs. Some pollsters will be more receptive to such pressures than others. Some sponsors will give pollsters more autonomy than others. Such decisions underline how survey research is a tool to understand social and political phenomena, but still is shaped by the social and political (and economic!) constraints of that world.
For my two cents, survey research is a valuable and precise means of studying our societies. But it still has limits. When survey researchers talk up their craft as if it were an exact science, rather than a precise but imperfect tool of measurement, they build up expectations that ultimately cannot be met. This is especially the case when the topic under study is the volatile and evolving world of Indian politics, which political scientists are still struggling to comprehend.
I am not sure exactly how to incorporate such “big picture” thoughts into my research. Perhaps they are beyond my bailiwick. Nonetheless, they are thoughts I am left with at the end of 2015.
Happy holidays and a joyful new year to all of you.