Advocacy research on elections


You know how the left wants to ban guns, right? I’ve got something I want to ban, and I think the benefit to society would be greater: the use of statistics by lawyers. Lawyering isn’t a science that seeks truth, it’s a form of advocacy that seeks victory at any price, so when lawyers are given a pile of numbers they invariably sift through them in order to find the ones that bolster their case, at the expense of truth, objectivity, and anything else that gets in the way, like the rest of the numbers.

We had a good object lesson in this tendency in the arguments Tribe and Rosenbaum put to the Ninth Circuit on voting machines, and in the amicus briefs of Rick Hasen, the textbook author who never fails to mention his book titles in the briefs he files.

The ink isn’t dry on the California recall, and these masters of obfuscation are already jumping up and down screaming that the election was flawed by punch-card voting systems. The evidence: fewer votes were cast on question 1, the go/no-go on Davis, than on question 2, the choice among 135 possible replacements.

Excuse me, but this argument is ridiculous on its face. If there were something defective about the system, question 2 should have had fewer votes cast than question 1, since it was much harder to find your candidate among the six pages and much easier to over-punch. Question 1 was right at the top, with “Yes” and “No” plainly marked.

The way they get themselves into this tizzy is remarkable. See Mickey Kaus:

The Brady Hunch: Punch-card foe Henry Brady of Berkeley now claims that 176,000 votes were lost in the recall election due to punch-card balloting systems.

Or Michael McDonald:

… touchscreen voters had smaller undervote rates than the punchcard and optical scan voters, and also had a rate smaller than the exit poll indicated.

Or Steven Hertzberg:

Our preliminary calculations show that Question #1 was either not marked by the voter or recorded by the equipment in 7.7% of the ballots cast on these machines. The average “not counted/marked” rate for the remaining voting systems is 2.3%, with the next highest rate being the Optech optical scanner at 4.35%.

Or Textbook Hasen:

Mickey Kaus comes down hard here on Henry Brady’s most recent statistics regarding the extent of unintentional undervotes caused by punch cards, but other preliminary analyses have reached the same conclusion:

Of course, it’s not too surprising that multiple advocates using the same flawed method would reach the same conclusion. In this case, the advocates, all predisposed to believe that The Man uses punch cards to deprive minorities of their voting rights, all compared the “undervote” on question 1 with a statewide exit poll, and found that the voters in punchcard counties recorded fewer votes than the statewide exit poll predicted they should.

Duh. Voters in punch card counties voted against the recall and for Bustamante more than the statewide exit poll predicts they should, but I don’t see anybody complaining about that. Why is that?

One form of polling — such as an exit poll — is only useful as a calibration on another form of polling — the election — if it’s more accurate. Certainly, a statewide poll doesn’t tell us anything about the propensity of voters in particular areas to vote one way or another. A county-by-county exit poll would be more useful, but none of the critics offers one. Instead, we get an analysis that lumps all counties with similar voting systems together:

                                       Punch card   Touch screen   Optical scan
a) Actual missing votes                   6.3%          1.5%           2.7%
b) Intended non-votes (exit poll)         2.9%          1.4%           2.5%
Estimate of “missing vote” (a minus b)    3.4%          0.1%           0.2%

Blumenthal, according to Hasen, reasons that punch-card voting systems denied 160,000 people their right to vote on recall question 1.

I think this is erroneous. According to the exit poll, 57% of the voters intended to vote Yes on the recall, and 43% No. But the statewide totals, according to the Sec’y of State, are 55.3% Yes and 44.7% No.
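The arithmetic here is simple enough to check directly. Here's a quick illustrative script using only the figures quoted above; it's not an independent analysis, just the subtraction made explicit:

```python
# Quick check of the figures quoted above (all numbers come from the post;
# this is illustrative arithmetic, not an independent analysis).

# Undervote rates by equipment type, from the table:
actual_missing = {"punch card": 6.3, "touch screen": 1.5, "optical scan": 2.7}
intended_nonvote = {"punch card": 2.9, "touch screen": 1.4, "optical scan": 2.5}

# The "missing vote" estimate is just row (a) minus row (b):
missing_vote = {k: round(actual_missing[k] - intended_nonvote[k], 1)
                for k in actual_missing}
print(missing_vote)  # {'punch card': 3.4, 'touch screen': 0.1, 'optical scan': 0.2}

# Statewide recall result vs. exit poll -- a gap nobody is complaining about:
exit_poll_yes, official_yes = 57.0, 55.3
print(round(exit_poll_yes - official_yes, 1))  # 1.7: poll overstated Yes by 1.7 points
```

Note that the 1.7-point statewide gap between the exit poll and the official result is an order of magnitude larger than the touch-screen and optical-scan residuals in the table.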

Now I’d be willing to bet that the election results are more accurate overall than the exit polls, given the methods and all that. Exit polls are face-to-face, and people in that situation are inclined to say what they’re supposed to say, not what they really did.

On a statewide basis, if we’re to take the exit poll as gospel, then the voting equipment must have had a pro-Davis bias built into it across the state; and if there really was an anti-Davis bias in the punch-card systems, it would simply help balance the system overall. But nobody claims that.

So don’t believe any analysis of the election results that doesn’t do these things:

1) Discuss the inaccuracy of exit polling.
2) Make a county-by-county comparison of exit polls and actual results.
3) Compare voting rates on question 1 with question 2.
4) Fully disclose the author’s bias.
5) Discuss the county demographics and party registration.

If they’re not all there, you were swindled.

A good analysis would go county-by-county, comparing exit polls with actual results and correcting for the bias in exit polls. It’s not that hard to do this kind of analysis, but I’m willing to bet that Brady, Hasen, Blumenthal, et al., won’t do it; they’ll be too busy screaming “Bias!” to get around to proving any.
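The shape of such a county-by-county analysis can be sketched in a few lines. Every county name and figure below is a hypothetical placeholder; a real analysis would use per-county exit polls and official tallies, which none of the critics has offered. The point is only the structure: subtract an estimated statewide exit-poll bias before attributing any residual undervote to the equipment.

```python
# A minimal sketch of the county-by-county comparison proposed above.
# All county names and figures are HYPOTHETICAL placeholders.

counties = {
    # county: (exit_poll_undervote_pct, actual_undervote_pct, system)
    "County A": (2.8, 6.1, "punch card"),
    "County B": (1.5, 1.6, "touch screen"),
    "County C": (2.4, 2.9, "optical scan"),
}

# Estimate a statewide exit-poll bias from all counties, then subtract it
# before blaming any residual undervote on the voting equipment.
bias = sum(actual - poll for poll, actual, _ in counties.values()) / len(counties)

for name, (poll, actual, system) in counties.items():
    residual = (actual - poll) - bias
    print(f"{name} ({system}): residual undervote {residual:+.1f} points")
```

Demeaning the residuals this way is the crudest possible bias correction; the point is that any honest version of the analysis has to make some such correction explicit rather than treat the exit poll as ground truth.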

Lawyers and numbers; they don’t mix.

(By the way, I voted for the recall and for Arnie.)

UPDATE: Hasen’s article on the recall in Findlaw is pretty light on specifics, and long on exaggeration (“Issa poured millions into the recall”). The one specific claim he makes about voting equipment is wrong:

…Los Angeles and Alameda counties are fairly comparable counties in terms of political leanings and ethnic makeup, yet nearly 9 percent of voters in Los Angeles did not cast a recordable vote on the first part of the recall, compared to less than one percent of voters in Alameda, which used an electronic touch screen system.

In LA County, 49.1% voted for the recall, vs. 30% in Alameda county. These two counties are clearly not comparable in any meaningful way.

SOME MORE UPDATES: Neither Calblog nor xrlq is very impressed with Prof. Hasen’s analysis of the recall election.

3 thoughts on “Advocacy research on elections”

  1. Excellent effort on all that. But it’s not just lawyers who misuse statistics. It’s you, me, lawyers, and anyone else who breathes and lives in this country, especially if it’s someone who’s out to prove that the use of statistics is no good.

    The media (including fringe and political weblogs) are going to do just what you have done here: use numbers to prove any points, and ignore any numbers that don’t support theories.

    The lawyers (and the left) in California aren’t the only ones fudging numbers.

  2. “In LA County, 49.1% voted for the recall, vs. 30% in Alameda county. These two counties are clearly not comparable in any meaningful way.”

    Ah, but they’re both red! What more do you want?

    Sarcasm aside, here’s another way to look at it. L.A. looks red all by itself, but turns green as soon as you pair it up with any one of the five smaller counties that border it (Ventura, Kern, San Bernardino, Riverside or Orange). Try doing that with Alameda + anything.

  3. The thing about Hasen’s voting rights beef that amazes me is the blind assumption that exit polls are more accurate than real polls. Surely there’s a reason we vote in secret and all that.
