Close contest: A man watching television waits for the release of exit polls at an appliance store in Ajmer on June 1. AFP
What methodology should be followed for accuracy? How do we know that a survey has done its homework? What are the red flags? When are exit polls conducted? Do respondents open up on their choice? In a close election, is it more difficult to get seats and voter shares right?
SRINIVASAN RAMANI
The story so far:
Several exit polls predicted the return of the Bharatiya Janata Party-led National Democratic Alliance (NDA) to power, with a tally of more than 300 seats for the BJP alone. One pollster, Today's Chanakya, predicted 400 seats (plus or minus 15) for the NDA, and another, Axis My India, said the NDA would win an average of 381 seats. As the results showed, all these polls were way off the mark.
What were the vote share projections?
The CSDS-Lokniti post-poll survey predicted a vote share of 46% for the NDA and 35% for the INDIA bloc (excluding the Left, the Trinamool Congress in West Bengal and the AAP in Punjab), with an error margin of 3.08 percentage points. The results show that the NDA bagged 292 seats (a 43.63% vote share) and the INDIA bloc 205 seats (excluding the Trinamool Congress, which won 29 seats) with a 37% vote share. CSDS-Lokniti did not project seats for the alliances but predicted that the NDA would return with a majority. Its vote share figures were within the error margin, though about 2.5 points too high for the NDA. Axis My India projected 47% for the NDA and 39% for the INDIA bloc; the results showed that it overestimated the NDA's vote share beyond its error margin. C-Voter projected a tally of 353 to 383 seats for the NDA on a 45.3% vote share, and 38.9% for the BJP alone, which was 2.3 points higher than the party's actual vote share of 36.56%. Its figures for the INDIA bloc were also roughly 2.4 points lower than the actual mark. While its vote shares were within error margins nationally, its seat tallies were way off across several States.
What are exit polls?
Opinion polls are sample surveys in which a cross section of the electorate is randomly chosen and interviewed about their choice of party or candidates. These polls can be conducted either in person or remotely, as with telephonic surveys. Exit polls ask voters about their choice right after they have cast their vote, often just outside the polling booth. Some pollsters prefer "post-poll surveys", which are conducted at voters' residences after voting. CSDS-Lokniti's poll is a post-poll survey; other surveys, such as Axis My India's, were exit polls.
Did methodology matter in the way exit polls got the numbers wrong?
For exit polls to be accurate, certain factors have to be kept in mind: the sample size, how the sample is selected, the manner in which the survey is conducted, and how the sample is weighted against estimates of the population.
The sample has to be representative; beyond that, its sheer size matters little as long as it is large enough to statistically predict the winner. If the sample is randomly chosen and large enough to accurately detect a candidate winning more than 40-45% of the vote, which is generally the case in Indian elections, then a representative sample of around 20,000-odd respondents is enough to predict winners in a country with a voting population of close to 100 crore. One can conduct larger surveys, even with lakhs of respondents, but the key is good representation. CSDS-Lokniti's total sample size was 19,663 across 23 States and 193 parliamentary constituencies, while Axis My India's was 5,82,574 across all 543 constituencies. Yet the former got its vote share predictions within its error margins while the latter did not.
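The point that sample size matters far less than representativeness can be checked with the textbook margin-of-error formula for simple random sampling. This is a simplification (real surveys apply design effects for clustering and stratification), but it shows why the population size barely enters the picture:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of a 95% confidence interval for an estimated vote
    share under simple random sampling; p = 0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

# The formula does not involve the electorate's size at all: a sample of
# ~20,000 already pins a national vote share to within about 0.7 points.
moe_small = margin_of_error(20_000)
# A sample nearly 30 times larger buys only a modest further gain,
# and none at all if the sample is unrepresentative.
moe_large = margin_of_error(582_574)
```

Note that both figures are far tighter than the roughly 3-point error margins pollsters actually report, precisely because real samples are clustered and weighted rather than purely random.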
For good representation, the sample has to be chosen randomly (to avoid bias) and in a stratified manner (to avoid missing out any section of the population). The ideal way of choosing a random but stratified sample is to use electoral rolls to identify respondents. Once sampling is done and the list of respondents is identified, they need to be weighted on the basis of the representation of sections in the population: the percentages of women, Dalits, minorities, the majority population, and urban versus rural voters. After a representative sample has been prepared, interviews are conducted. Ideally, a face-to-face interview works best, conducted in the respondent's own language.
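As a toy illustration of the stratified selection described above (not any pollster's actual procedure; the strata and shares are invented for the example), drawing from an electoral roll might look like this:

```python
import random

def stratified_sample(roll, stratum_of, n, population_shares, seed=1):
    """Draw a sample of roughly size n whose strata match the given
    population shares. `roll` is a list of voters; `stratum_of` maps
    a voter to a stratum label such as 'urban' or 'rural'."""
    rng = random.Random(seed)
    by_stratum = {}
    for voter in roll:
        by_stratum.setdefault(stratum_of(voter), []).append(voter)
    sample = []
    for stratum, share in population_shares.items():
        # Random draw *within* each stratum keeps the selection unbiased
        # while guaranteeing every section of the population is covered.
        sample.extend(rng.sample(by_stratum[stratum], round(n * share)))
    return sample

# Hypothetical roll: 700 rural and 300 urban voters, sampled 70:30.
roll = [("rural", i) for i in range(700)] + [("urban", i) for i in range(300)]
sample = stratified_sample(roll, lambda v: v[0], 100,
                           {"rural": 0.7, "urban": 0.3})
```

In practice pollsters stratify on several dimensions at once (gender, caste group, religion, urban/rural), but the principle is the same.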
Do respondents reveal their choice?
There is a high probability that many respondents, especially those from marginalised sections, either do not reveal their voting preferences or need to build a measure of trust with the surveyor before opening up about their choices. It is possible that the pollsters who got this election wrong either under-sampled marginalised voters, or that their surveyors were not trusted enough by respondents to elicit truthful answers, or were simply misled by them.
What happens after the surveys?
Once the survey is done, the results should be matched against estimated demographic information. If 12 respondents out of 100 are Dalits and the actual proportion of Dalits in that population is 15%, the responses of the 12 can be uniformly weighted up so that they count for 15% of the sample. But if only 39 women in a sample of 100 are interviewed, extrapolating the views of those 39 to the roughly 48% that women form of the population would be problematic, as women do not vote as a single category. This could be one reason the Axis My India poll got its estimates wrong: the men-to-women representation in its sample was 69 to 31.
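The weighting step described here can be sketched as a simple post-stratification calculation; the Dalit figures below are the article's own illustration:

```python
def poststrat_weights(sample_counts, population_shares):
    """Give every respondent in a group the same multiplier, chosen so
    that the group's weighted share of the sample matches its share
    of the population."""
    n = sum(sample_counts.values())
    return {group: population_shares[group] * n / count
            for group, count in sample_counts.items()}

# 12 Dalit respondents out of 100, against an actual 15% share:
# each Dalit respondent's answer is counted 1.25 times.
w = poststrat_weights({"dalit": 12, "other": 88},
                      {"dalit": 0.15, "other": 0.85})
```

The caveat in the passage applies directly: a uniform weight assumes the group votes roughly alike, which is a reasonable approximation for a small shortfall but a poor one when a 31% female sample is stretched to stand in for 48% of the electorate.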
Most of the pollsters who had tied up with TV channels used their surveys to predict seat shares; CSDS-Lokniti did not. Vote shares can be converted to seat shares in different ways. The most commonly used method is to assess the swing in vote share for a particular party from previous elections, in a State, more accurately in a particular region, or, if the sampling allows the pollster to do so, in a particular constituency. The swing for or against a party, set against the same for its opponents, provides the basis for judging whether an incumbent will be returned from a particular constituency, or whether a party can retain a certain number of seats in a region or in a State as a whole.
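A minimal version of this conversion, assuming a uniform swing across a region, can be sketched as follows. This is the simplest possible model; the parties and numbers are hypothetical, and actual pollster models are more elaborate:

```python
def project_winner(prev_shares, swing):
    """Apply a uniform swing (in percentage points, per party) to a
    constituency's previous vote shares and return the projected winner."""
    projected = {party: share + swing.get(party, 0.0)
                 for party, share in prev_shares.items()}
    return max(projected, key=projected.get)

# Hypothetical constituency: incumbent A won 45-41 last time. A surveyed
# 3-point swing away from A and 2 points toward B flips the seat,
# even though A's projected share (42%) is still substantial.
winner = project_winner({"A": 45.0, "B": 41.0}, {"A": -3.0, "B": 2.0})
```

Run across every constituency in a region, such a model turns a small vote-share error into a large seat-tally error in close contests, which is one reason seat projections fail even when vote shares are roughly right.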
As veteran psephologist and media personality Prannoy Roy points out in his book, The Verdict, written with Dorab R. Sopariwala, some pollsters look at swings from previous polls and the "index of opposition unity" to determine the margin of victory for a particular candidate and so predict seat shares from vote shares. None of the pollsters who tied up with major television channels got their vote-to-seat conversions right. Since none of them has revealed its "secret sauce" — the conversion process — it is difficult to ascertain why they got it wrong.
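The "index of opposition unity" popularised by Roy and Sopariwala is the largest opposition party's vote taken as a percentage of the combined opposition vote. A quick sketch, with invented vote shares:

```python
def index_of_opposition_unity(opposition_shares):
    """IOU: the largest opposition party's vote as a percentage of the
    combined opposition vote. Values near 100 mean a consolidated
    opposition; low values mean a divided field that helps the incumbent."""
    total = sum(opposition_shares)
    return 100.0 * max(opposition_shares) / total

# A divided opposition polling 30 + 10 + 10 has an IOU of 60;
# the same 50-point bloc contesting as one party would score 100.
iou = index_of_opposition_unity([30.0, 10.0, 10.0])
```

Under first-past-the-post, a ruling party with well under half the vote can sweep seats when the IOU is low, which is why the index feeds into margin-of-victory estimates.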
Is a close election difficult to predict?
It is evident that pollsters in India mostly get the winner and the seat shares close to reality when the outcome is decisive. When elections are this close, pollsters are rarely accurate on vote and seat shares. Whether a polling agency has done a good survey is clear from what it reveals about its methodology: the sample size, the mode of survey, the representativeness of the sample, and the built-in error margins. If a survey does not reveal these, it should not be taken seriously.