A good deal of traditional democratic theory leads us to expect more from national elections than they can possibly provide. We expect elections to reveal the “will” or the preferences of a majority on a set of issues. This is one thing elections rarely do, except in an almost trivial fashion.
— Robert A. Dahl, A Preface to Democratic Theory (1956, 131)
Robert Dahl (1956, 1) began A Preface to Democratic Theory by acknowledging that “there is no democratic theory— there are only democratic theories.” Nevertheless, he noted (1956, 34–35) that “running through the whole history of democratic theories is the identification of ‘democracy’ with political equality, popular sovereignty, and rule by majorities”— a notion of “Populistic Democracy” he associated with such diverse thinkers as Aristotle, Locke, Rousseau, Jefferson, de Tocqueville, and Lincoln.
Dahl’s notion of “Populistic Democracy” corresponds closely with what we have called the “folk theory” of democracy. In particular, its emphasis on popular sovereignty requires that “whenever policy choices are perceived to exist, the alternative selected and enforced as governmental policy is the alternative most preferred by the members” of the relevant political community (Dahl 1956, 37). But how might popular sovereignty in this sense actually come about? Theorists in this tradition and practitioners alike have focused on two primary mechanisms— electoral competition and “direct democracy” in the form of popular initiatives or referenda. We consider these two mechanisms in this and the following chapter, respectively.
As the Gilded Age of the late 19th century came to an end, aristocratic English observer James Bryce (1894, 923) portrayed Americans marching “with steady steps” toward a new stage in the evolution of government— “Government by Public Opinion”— in which “the will of the people acts directly and constantly upon its executive and legislative agents.” Populism as a political movement crested in the United States in the 1890s; but the Progressive Era that followed was also characterized by high enthusiasm for popular democracy as a broad political ideal. Historian Charles Beard (1912, 14), who could be clear-eyed and even cynical about the political motives of the Founders, nevertheless expressed the conviction that “every branch of law that has been recast under the influence of popular will has been touched with enlightenment and humanity.” He argued that the era’s new political institutions— including the initiative, referendum, and recall— would surely produce still better government in the future through greater democratization. In the same spirit, John Dewey (1927, 146) insisted that it was no “mystic faith” but “a well-attested conclusion from historic facts” that government can serve the people only when “the community itself shares in selecting its governors and determining their policies.” Even the famously acerbic journalist and political observer H. L. Mencken (1916, 19) expressed his skepticism about democracy in colorfully populist terms, defining it as “the theory that the common people know what they want, and deserve to get it good and hard.”
By the middle of the 20th century the populist ideal was firmly established in both American political culture and scholarly understanding of democracy. Political thinkers who resisted “the identification of ‘democracy’ with political equality, popular sovereignty, and rule by majorities” felt compelled to explain why. As we noted in the previous chapter, Joseph Schumpeter (1942, 250) prefaced his own theory of democracy with a scathing critique of the unrealism of a “classical doctrine” in which democracy “realizes the common good by making the people itself decide issues through the election of individuals who are to assemble in order to carry out its will.” Dahl (1956) was less dismissive of “populistic” democracy, but emphasized the importance of alternative “Madisonian” and “polyarchal” conceptions. Even William Riker, who would later castigate populism as a “totalitarian sleight-of-hand . . . used to justify coercion in the name of temporary or spurious majorities” (1982, 13–14), wrote in a mid-century American government textbook (1953, 91–92) that “truly responsible government is only possible when elections are so conducted that a choice of men is a decision on policy.” He added that plebiscites in the Soviet Union “are a façade . . . because the structure of government does not permit elections to influence policy making.”
Ironically, the decades in which Schumpeter and the early Dahl and Riker wrote also gave rise to two major intellectual challenges to the folk theory. One of these was a logical challenge stemming from the theoretical work of economists studying collective choice. Duncan Black (1948; 1958), Kenneth Arrow (1951), and Anthony Downs (1957) all made fundamental contributions to a theory of democracy focusing on the translation of individual preferences into collective choices through voting. While their work gave the populist ideal a much clearer and more definite form than it had previously had, it also revealed unexpected difficulties in the very notion of popular sovereignty— difficulties severe enough to provoke Riker’s subsequent rejection of “coercion in the name of temporary or spurious majorities.”
The second formidable mid-century challenge to the populist ideal came from sociologists and political scientists harnessing the new technology of survey research to the study of public opinion and electoral politics. Time and time again, they found that the opinions and behavior of ordinary citizens comported poorly with expectations derived from democratic theory as they understood it— that is, from the folk theory. For example, a team from Columbia University conducted pathbreaking empirical studies of voting behavior in the 1940 and 1948 presidential elections (Lazarsfeld, Berelson, and Gaudet 1948; Berelson, Lazarsfeld, and McPhee 1954). They produced a long list of contrasts between democratic ideals and their own findings regarding voters’ motivations, knowledge, and reasoning. “The democratic citizen is expected to be well informed about political affairs,” they wrote (Berelson, Lazarsfeld, and McPhee 1954, 308). “He is supposed to know what the issues are, what their history is, what the relevant facts are, what alternatives are proposed, what the party stands for, what the likely consequences are. By such standards the voter falls short.” As we will show, subsequent research by a great many other scholars has come to very similar conclusions.
In the remainder of this chapter we take up both these challenges to the folk theory of democracy. We begin with the logical challenge, then turn to the empirical challenge.
The “Spatial Model” of Voting and Elections
The most systematic and sophisticated instantiation of the populist ideal is the “spatial model” of voting and elections. Although the model has been a mainstay of political science for the past half century, it was originally formulated primarily by economists— perhaps because the intellectual framework of economics meshed naturally with “the liberal view” that “the aim of democracy is to aggregate individual preferences into a collective choice in as fair and efficient a way as possible,” as David Miller (1992, 55) put it. Miller acknowledged in a footnote that some readers might object to the limited focus on “one strand of liberalism— the importance it attaches to individual preferences and their expression”; however, he argued that that strand “prevails in contemporary liberal societies, where democracy is predominantly understood as involving the aggregation of independently formed preferences.” Thus, in effect, the goal of the spatial model was to give mathematical form to the folk theory of democracy.
In the canonical version of the spatial model, due primarily to Anthony Downs (1957), the political “space” consists of a single ideological dimension on which feasible policies are arrayed from left to right. Each voter is represented by an ideal point along this dimension reflecting the policy she prefers to all others. Each party is represented by a platform reflecting the policy it will enact if elected. Voters are assumed to maximize their ideological satisfaction with the election outcome by voting for the parties closest to them on the ideological dimension. Parties are assumed to maximize their expected payoff from office-holding by choosing the platforms most likely to get them elected.
1. Downs (1957, 115) attributed the spatial “apparatus” to Harold Hotelling, who briefly sketched a political application in the course of his spatial analysis of economic competition (1929, 54–55).
2. Downs wrote of parties rather than candidates, but rendered the distinction irrelevant by assuming “complete agreement on goals among the members of an office-seeking coalition” (1957, 26).
3. In multiparty systems, voters may maximize their satisfaction with the outcome by voting “strategically” for a party further from their ideal point, depending on other voters’ choices. Downs recognized this fact and worried that “if each attempts to take into account the diversity of preferences, and therefore votes only after calculating how others will vote, the process of calculation becomes too complicated for him to handle” (1957, 154). Kedar (2005) argued that the calculations need not be very demanding. But in any case, this complication does not arise in the simple two-party case considered here.
In the simplest case, where there are just two parties, this framework is sufficient to derive a striking and substantively important prediction: both parties will adopt identical platforms corresponding to the median of the distribution of voters’ ideal points. This so-called median voter theorem is a special case of a more general result regarding collective choice with single-peaked preferences (Black 1948). Since a platform (and resulting government policy) located at the median of the distribution of voters’ ideal points has a smaller average distance from all voters’ ideal points than any other feasible policy, the median voter theorem seems to imply that the mere fact of electoral competition will ensure that voters’ preferences are as well satisfied in a utilitarian sense as they possibly can be. Thus the voters enjoy responsive government regardless of which party wins any given election.
4. Under the assumption that everyone votes for the party closer to her, a party located at the median will defeat any alternative located to the left of the median (by winning the median voter and everyone to her right) or to the right of the median (by winning the median voter and everyone to her left). Downs (1957, 116–117) barely acknowledged this result before turning to an “improved” version of the model in which “elastic demand” and the threat of abstention by extremists induce the parties to offer distinct platforms. In that case, the predictions of the model are quite sensitive to the distribution of voters’ ideal points and to auxiliary assumptions regarding the bases of abstention (Downs 1957, 117–122), as much subsequent research confirmed.
5. In the context of the spatial model, a voter’s preferences are “single-peaked” if she prefers any platform to the left of her ideal point to any other platform further to the left, and any platform to the right of her ideal point to any other platform further to the right. The symmetric case considered here, in which voters’ preferences are a simple monotonic function of distance, satisfies this condition.
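To make the median voter logic concrete, here is a minimal simulation sketch. It is our illustration rather than anything in the original analyses: the distribution of ideal points and the grid of challengers are arbitrary assumptions. It verifies numerically that, with proximity voting, a platform at the median ideal point defeats any rival platform in a pairwise majority vote.

```python
import numpy as np

# Minimal sketch (our illustration; distribution and grid are arbitrary
# assumptions): with single-peaked, distance-based preferences, a platform
# at the median voter's ideal point beats any challenger by majority vote.
rng = np.random.default_rng(0)
ideal_points = rng.normal(loc=0.0, scale=1.0, size=10_001)  # one ideal point per voter
median_platform = np.median(ideal_points)

def pairwise_winner(a, b, ideals):
    """Each voter backs the closer platform; return the majority winner."""
    votes_for_a = np.abs(ideals - a) < np.abs(ideals - b)
    return a if votes_for_a.sum() > len(ideals) / 2 else b

challengers = np.linspace(-3.0, 3.0, 601)
assert all(
    pairwise_winner(median_platform, c, ideal_points) == median_platform
    for c in challengers
)
print(f"platform at the median ({median_platform:.3f}) defeats all challengers tested")
```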
In addition to being elegant and normatively attractive, the spatial theory seemed to provide a compelling explanation for the ideologically muted politics of mid-20th-century America: candidates and parties were moderate and (to a casual observer) largely indistinguishable, apparently due to the natural centripetal tendencies of a smoothly running majoritarian system. And when more extreme candidates emerged on the political scene— Barry Goldwater in 1964, George McGovern in 1972— they were trounced at the polls, just as the theory suggested they should be. By the 1980s, it seemed apparent to many scholars that “the spatial theory of voting provides important insights into real-world voting,” and more specifically that “the center of voter opinion exerts a powerful force over election results” (Enelow and Hinich 1984, 217, 221).
Subsequent work has elaborated the canonical spatial model in a variety of important ways— for example, by allowing for probabilistic voting behavior, nonspatial “valence” factors such as charisma and incumbency, parties motivated by policy as well as office seeking, constraints on parties’ platforms (for example, due to historical legacies), and uncertainty in voters’ perceptions of parties’ platforms.6 For our purposes here, the most important elaboration replaced the unidimensional ideological spectrum with a multidimensional policy space (Davis and Hinich 1966; 1967). Reducing all of politics to a single ideological dimension was plainly at odds with empirical evidence suggesting that most citizens in the 1950s had distinct— indeed, virtually uncorrelated— views about economic, social, and foreign policies (Stokes 1963, 370). Thus, as Otto Davis, Melvin Hinich, and Peter Ordeshook (1970, 429) acknowledged, “if spatial models are to retain descriptive and predictive value, they must allow for more than one dimension of conflict and taste.”
6. Enelow and Hinich (1984), Austen-Smith and Banks (1999), and Grofman (2004) have provided useful syntheses and reviews.
In his critique of Downs’s spatial model, Donald Stokes (1963, 370–371) referred to the axiom of unidimensionality as its “most evident— and perhaps least fundamental” point of unrealism, suggesting that “it might well be dispensed with.” From a purely technical standpoint Stokes was right; within a few years of his writing, the assumption of unidimensionality was dispensed with. Unfortunately, this technical advance turned out to generate considerable conceptual difficulties for a model doing double duty as an empirical theory of electoral politics and a normative theory of populist democracy.
One of the most striking virtues of the canonical one-dimensional spatial model is that it identified a unique, normatively attractive and seemingly feasible solution to the problem of aggregating individual preferences into a “democratic” policy choice— the policy located at the “ideal point” of the median voter. However, in their influential “expository development” of the spatial theory, Davis, Hinich, and Ordeshook (1970, 427, 428) noted “an important distinction between the unidimensional and multi-dimensional cases”: positions preferred by a majority of voters to every alternative position, “in general, do not exist for a multi-dimensional world.” As they observed, “The possibility that such a paradox exists poses a problem for majority decision-making.” Thus, even if voters’ preferences in every issue domain are single-peaked, as the spatial model assumes, there may be no policy platform with a logical claim to represent “the will of the majority,” much less “the will of the people.”
Davis, Hinich, and Ordeshook (1970, 438) described “conditions that guarantee the dominance of a single position for any number of dimensions”— the symmetry and unimodality of the electorate’s preference density in multidimensional space (Plott 1967). However, they carefully noted “the eminent restrictiveness of these conditions” and concluded that “one should not presume the existence of dominant positions” in multidimensional models. Subsequent analyses along similar lines (Kramer 1973; McKelvey 1976; Schofield 1983) richly justified that caution by producing a series of so-called chaos theorems demonstrating with increasing mathematical generality that the sufficient conditions are exceedingly fragile; once the distribution of voters’ ideal points deviates from multidimensional symmetry, it is very likely that any feasible policy will be beatable by some other feasible policy in a straight majority vote.
7. The nonobvious nature of these results is underlined by Brian Barry’s (1970, 138) cheerful assumption that identifying equilibrium strategies in a multidimensional setting would prove to be straightforward: “Without an even distribution of voters the solution is more difficult, but it is still clear that the parties would come together where they divide the votes between them equally.” The problem is that no such convergent strategy is an equilibrium; at least one party could increase its vote share by deviating from it.
This “problem for majority decision-making,” as Davis, Hinich, and Ordeshook called it, is a manifestation of the “paradox of voting” explored by a long line of social choice theorists, including the Marquis de Condorcet in the 18th century and Charles Dodgson (Lewis Carroll) in the 19th century. Kenneth Arrow’s (1951) much broader general possibility theorem demonstrated that any collective decision-making process satisfying certain reasonable-sounding conditions— not just majority rule— must be subject to similar difficulties. Arrow’s theorem established with mathematical rigor that what many people seemed to want— a reliable “democratic” procedure for aggregating coherent individual preferences to arrive at a coherent collective choice— was simply, logically, unattainable. One commentator, Charles Plott (1976, 511–512), referred rather melodramatically to Arrow’s theorem and related theoretical work as “a gigantic cavern into which fall almost all of our ideas about social actions. Almost anything . . . anyone has ever said about what society wants or should get is threatened with internal inconsistency. It is as though people have been talking for years about a thing that cannot, in principle, exist, and a major effort is needed to see what objectively remains from the conversations.” Of course, Arrow’s theorem and the multidimensional spatial model are specific formulations of the problem of collective choice within the narrow framework of the folk theory of democracy, not the sum total of what “anyone has ever said about what society wants or should get.” Nonetheless, the remarkable fact that the populist ideal turned out to be logically incoherent within this simple, seemingly congenial theoretical framework did spur “a major effort” to reassess processes of preference aggregation in democratic politics.
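The underlying paradox is easy to exhibit directly. The sketch below is our illustration of the standard textbook example, not any particular author’s data: three voters hold individually coherent rankings over three alternatives, yet pairwise majority voting cycles, so no alternative is unbeaten.

```python
from itertools import combinations

# Minimal sketch (our illustration): Condorcet's paradox. Each voter has a
# perfectly transitive ranking, yet the majority preference is cyclic.
rankings = [
    ["A", "B", "C"],  # voter 1 prefers A to B to C
    ["B", "C", "A"],  # voter 2 prefers B to C to A
    ["C", "A", "B"],  # voter 3 prefers C to A to B
]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    return sum(r.index(x) < r.index(y) for r in rankings) > len(rankings) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"{winner} beats {loser}")
# Prints: A beats B, C beats A, B beats C -- a cycle with no majority winner.
```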
Subsequent work by political scientists (Shepsle 1979; Riker 1980; Calvert 1995) has attempted to specify and explain how specific political institutions shape collective choices in the absence of “equilibria of tastes” (Riker 1980) derivable directly from individual preferences. For example, germaneness rules, committee specialization, and conference procedures in legislatures may simplify policy choices sufficiently to ensure the existence of stable equilibria even in settings where the distribution of preferences alone would leave any outcome potentially susceptible to being overturned through agenda manipulation and multidimensional log- rolling.
In the realm of electoral politics, Richard Johnston and colleagues (1992, 3) acknowledged that “the image of the electorate which dominates popular discussion” is one of “the people as a free-standing body, with its own indomitable collective opinion.” But, in an election, the collective opinion of the people can be expressed only with reference to a very limited menu of alternative party platforms— in effect, a restriction analogous to the institutional rules facilitating the avoidance of disequilibrium in the legislative realm. As E. E. Schattschneider (1942, 52) famously put it, the electorate is “a sovereign whose vocabulary is limited to two words, ‘Yes’ and ‘No.’” Thus, Johnston and his colleagues argued for a broader view of the electoral process encompassing not only “how voters choose” but also “how parties and leaders shape the alternatives from which the choice is made.”
Empirical scholarship on agenda-setting processes and institutions has been stimulating and fruitful; but it has done little to fill the gaping hole identified by Plott in the normative logic of popular sovereignty. If collective choices depend crucially on “how parties and leaders shape the alternatives from which the choice is made,” and on the detailed workings of political institutions that are themselves ultimately created by collective choices, in what sense can those choices be said to reflect popular will? The choice of institutional structures is subject to Arrow’s theorem, too. So far, the spatial modeling tradition has produced no satisfactory solution to this conundrum.
Some democratic theorists have attempted to sidestep the logical incoherence revealed by Arrow’s theorem by supposing that deliberation rather than formal agenda-setting procedures might reduce complex multidimensional decisions to a single dimension or to a sequence of unidimensional decisions, conveniently satisfying the technical assumptions for the existence of a majoritarian equilibrium.8 For example, Miller (1992) suggested that democratic deliberation might mitigate incoherence by generating widespread consensus about how to locate the various alternatives along a single dimension, or about how to separate the issue into independent unidimensional components. Writing two decades later, Jack Knight and James Johnson (2011, 149) observed that “it is by now a commonplace among interpreters of social choice theory that political argument might well work precisely to induce just such constraints . . . by allowing relevant constituencies to sort out, and hopefully reduce, the dimensions over which they disagree.”
8. For example, Mackie (2003, 191) argued that voting on issues one dimension at a time would produce an outcome corresponding to “the intersection of medians,” which he characterized as “a normatively attractive point of aggregate subjective welfare.” However, the identification of that point hinges on an arbitrary choice of axes in the multidimensional issue space; alternative decompositions of the same multidimensional preferences into different, presumably equally valid packages of separate issues will in general result in different, presumably equally attractive outcomes. In any case, strategic voters will often want to logroll across dimensions, exchanging their votes on issues they care little about for support on more important issues. That brings Arrow’s theorem back in, making Mackie’s proposal untenable in practice even if the structure of the issue space is taken as given.
Christian List and his colleagues (2013) used before-and-after comparisons of opinion from “deliberative polls” (Fishkin 2009) to provide some empirical support for this suggestion. On a variety of issues, participants were somewhat more likely to express single-peaked (unidimensional) preferences after participating in deliberation than before. However, whether that should be reassuring would seem to depend crucially on the quality of deliberation leading to the reorganization of opinion, and on how frequently such reorganization actually occurs in more typical political settings.
The closest Knight and Johnson came to a concrete example of how deliberation might “induce . . . constraints” on the dimensionality of collective choice was to suggest (2011, 163) that “the parties to some controversy might agree that what is at stake is a matter of national defense rather than, say, facilitating interstate commerce (e.g., as in the argument over the U.S. interstate highway system).” But on what basis might they come to that agreement? And why should we be reassured if they do, since the agreement is patently mistaken? The interstate highway system is clearly a matter of both national defense and interstate commerce.
Just as with formal rules for simplifying collective choices, informal agreements struck in the course of democratic deliberation may be more or less sensible and broadly accepted. But, just as with formal agenda-setting procedures, any particular agreement to divide an inherently multidimensional issue into unidimensional pieces will shape the outcome of collective choice in some way that is, from a purely populist standpoint, fundamentally arbitrary. Thus, deliberation provides no easy escape from the theoretical challenge posed by Arrow’s theorem.
Formal theories of collective choice thus turned out to be a mixed blessing for the folk theory of democracy. On one hand, the simple unidimensional spatial model proposed by Downs provided an elegant and largely comforting account of the political significance of electoral competition. As John Zaller (2012, 623) put it, “The economic theory of democracy has great curb appeal: The rationally ignorant median voter gets what he wants without much effort.” On the other hand, the multidimensional spatial model— and the theory of social choice more generally— has no curb appeal at all: even a perfectly rational, highly informed median voter does not get what she wants. That result raised fundamental logical problems for the populist ideal by calling into question how any sort of electoral process could reliably aggregate potentially complex individual preferences into a coherent “will of the people.”
Public Opinion and Political Ideology
Zaller’s allusion to the “rationally ignorant median voter” fused two distinct aspects of Downs’s “economic theory of democracy.” One is the unidimensional spatial model of electoral competition, in which parties have strong incentives to converge on the ideological “ideal point” of the median voter. The other is Downs’s analysis of political information costs, which led him to conclude that, because of “the infinitesimal role which each citizen’s vote plays in deciding the election,” the returns to acquiring political information “are so low that many rational voters [will] refrain from purchasing any political information per se” (Downs 1957, 258). Thus, “A large percentage of citizens— including voters— do not become informed to any significant degree on the issues involved in elections, even if they believe the outcomes to be important” (Downs 1957, 298).
Unfortunately, for the spatial model of electoral competition to work, “rationally ignorant” voters do need some political information. In particular, if they are to succeed in voting for the party closest to them they need to know their own preferences and the platforms of the competing parties regarding “the issues involved in elections.” The voters’ own preferences, especially, are often simply taken for granted in the populist theory of democracy. But what if voters don’t really know what they want? In that case, the folk theory of democracy, and the spatial model in particular, loses its starting point.
One telling indication that this foundation of the folk theory may be shakier than it appears is the fact that expressed political attitudes can be remarkably sensitive to seemingly innocuous variations in question wording or context. For example, 63% to 65% of Americans in the mid-1980s said that the federal government was spending too little on “assistance to the poor”; but only 20% to 25% said that it was spending too little on “welfare” (Rasinski 1989, 391). “Welfare” clearly had deeply negative connotations for many Americans, probably because it stimulated rather different mental images than “assistance to the poor” (Gilens 1999). Would additional federal spending in this domain have reflected the will of the majority, or not? We can suggest no sensible way to answer that question.
It seems tendentious to insist that “welfare” and “assistance to the poor” denoted different policies, and that Americans carefully opposed the former while supporting the latter. However, even if that distinction is accepted, qualitatively similar framing effects appear in cases where the substantive distinction between alternative frames is even more tenuous. For example, in three separate experiments conducted in the mid-1970s, almost half of Americans said they would “not allow” a communist to give a speech, while only about one-fourth said they would “forbid” him or her from doing so (Schuman and Presser 1981, 277). In the weeks leading up to the 1991 Gulf War, almost two-thirds of Americans were willing to “use military force,” but fewer than half were willing to “engage in combat,” and fewer than 30% were willing to “go to war” (Mueller 1994, 30). Framing more abstract quantitative choices in different but mathematically equivalent ways also produces predictable— and sometimes dramatic— differences in results (Pruitt 1967; Tversky and Kahneman 1981).
The psychological indeterminacy of preferences revealed by these “framing effects” (Kahneman, Slovic, and Tversky 1982) and question-wording experiments calls into question the most fundamental assumption of populist democratic theory— that citizens have definite preferences to be elicited and aggregated through some well-specified process of collective choice (Bartels 2003). In this respect, modern cognitive psychology has sharpened and reinforced concerns about the quality of public opinion raised by critics of democracy from Plato to the pioneering survey researchers of the 1940s and 1950s.
The first rigorous scientific portrait of the American voter, by Bernard Berelson and his colleagues at Columbia University, found that “the voter falls short” of displaying the motivation, knowledge, and rationality expected by “traditional normative theory” (Berelson, Lazarsfeld, and McPhee 1954, 308, 306). “On the issues of the campaign,” the Columbia scholars found (Berelson, Lazarsfeld, and McPhee 1954, 309, 311), “there is a considerable amount of ‘don’t know’— sometimes reflecting genuine indecision, more often meaning ‘don’t care.’ ” Voters consistently misperceived where candidates stood on the important issues of the day and exaggerated the extent of public support for their favorite candidates. And vote choices were “relatively invulnerable to direct argumentation” and “characterized more by faith than by conviction and by wishful expectation rather than careful prediction of consequences.”
Several years later, in a landmark study of The American Voter, Angus Campbell and his colleagues at the University of Michigan described “the general impoverishment of political thought in a large proportion of the electorate.” They acknowledged that “many people know the existence of few if any of the major issues of policy,” much less how the competing parties and candidates might address them (Campbell et al. 1960, 543, 168, 170). Shifts in election outcomes, they concluded, were largely attributable to defections from long-standing partisan loyalties by relatively unsophisticated voters with little grasp of issues or ideology.
Philip Converse’s (1964) essay on “The Nature of Belief Systems in Mass Publics” provided an even more devastating and influential portrait of the political thinking of ordinary citizens. Employing the growing store of data collected by the Michigan Survey Research Center, Converse concluded that many citizens “do not have meaningful beliefs, even on issues that have formed the basis for intense political controversy among elites for substantial periods of time” (Converse 1964, 245).
9. Kinder and Kalmoe (n.d., chaps. 1–2) noted that “The Nature of Belief Systems in Mass Publics” had been cited almost 700 times in 2013 alone, its 50th year in print.
Converse’s evidence was of three kinds. First, he scrutinized respondents’ answers to open-ended questions about political parties and candidates for evidence that they understood and spontaneously employed the ideological concepts at the core of elite political discourse. He found that about 3% of voters were clearly classifiable as “ideologues,” with another 12% qualifying as “near-ideologues”; the vast majority of voters (and an even larger proportion of nonvoters) seemed to think about parties and candidates in terms of group interests or the “nature of the times,” or in ways that conveyed “no shred of policy significance whatever” (Converse 1964, 217–218; also Campbell et al. 1960, chap. 10).
10. We see this classification of citizens in less hierarchical terms than Converse did. As we show in chapter 10, the well-informed people who disproportionately occupy the top rungs of his scale are “ideologues” in some of the unfortunate senses of that term as well.
Second, Converse assessed the degree of organization of political belief systems, as measured by statistical correlations between responses to related policy questions. Could respondents give consistently liberal or consistently conservative responses? He found only modest correlations (averaging just .23) among domestic policy views (regarding employment, aid to education, and federal housing), similarly modest correlations among foreign policy views (regarding foreign economic aid, soldiers abroad, and isolationism), and virtually no correlation between views across these two domains. Nor were specific policy views strongly correlated with party preferences (averaging just .07). In each case, the corresponding correlations were much higher for a sample of congressional candidates responding to related but more specific policy questions. Converse (1964, 228) interpreted these results as providing strong support for the hypothesis that “constraint among political idea-elements begins to lose its range very rapidly once we move from the most sophisticated few toward the ‘grass roots.’”
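The measure at work here is simple: “constraint” is the average pairwise correlation among responses to related items. The sketch below is purely illustrative, with synthetic data and an arbitrarily weak common component chosen to land near the modest levels Converse reported; none of the numbers are his.

```python
import numpy as np

# Minimal sketch (our illustration; all data are simulated): "constraint"
# measured as the average pairwise correlation among related policy items.
rng = np.random.default_rng(0)
n_respondents = 1_000

# Three hypothetical items sharing only a weak common component.
common = rng.normal(size=n_respondents)
items = np.column_stack(
    [0.5 * common + rng.normal(size=n_respondents) for _ in range(3)]
)

corr = np.corrcoef(items, rowvar=False)
avg = corr[np.triu_indices(3, k=1)].mean()
print(f"average inter-item correlation: {avg:.2f}")  # roughly .20 by construction
```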
Converse himself recognized that “constraint among political idea-elements”— especially across issue domains— was primarily a matter of social learning rather than logical reasoning. Critics at the time and since have pointed out that there may be nothing particularly sophisticated about parroting the specific combination of issue positions defining a conventional ideology or party line. Perhaps ordinary citizens’ issue preferences lacked “constraint” because they had thoughtfully constructed their own personal political belief systems transcending conventional ideologies and party lines?
Alas, this argument ran aground on Converse’s third set of analyses, which assessed the stability of citizens’ attitudes regarding specific issues. Converse gauged “the stability of belief elements” by tracking the same people’s responses to the same questions across three separate interviews conducted at two-year intervals between 1956 and 1960. Successive responses to the same questions turned out to be remarkably inconsistent. The correlation coefficients measuring the temporal stability of responses for any given issue from one interview to the next ranged from a bit less than .50 down to a bit less than .30, suggesting that issue views are “extremely labile for individuals over time” (Converse 1964, 240–241). In marked contrast, expressions of party identification were much more stable, with correlations from one survey to the next exceeding .70. Converse (1964, 241) inferred that parties “are more central within the political belief systems of the mass public than are the policy ends that the parties are designed to pursue.”
11. Some of this temporal instability no doubt reflects measurement error due to the inevitable vagueness of survey questions (Achen 1975). Moreover, people may bring different relevant considerations to bear in answering the same question in successive interviews, producing unstable responses even as their underlying stores of relevant considerations remain unchanged (Zaller 1992). In the first years after Converse wrote, the responses of sophisticated people (as measured by formal education) seemed to be almost as unstable as those of less sophisticated people, suggesting that the inevitable noise in survey questions was at fault. However, later studies using better measures of political sophistication (based on respondents’ demonstrated factual knowledge about politics) have generally found the opinions of more sophisticated people to be a good deal more stable. For example, Kinder and Kalmoe (n.d., ms. 70) calculated that the average stability of responses to five issue questions in the 1992–1996 American National Election Studies panel survey ranged from .60 among the best-informed quintile of respondents down to .25 in the bottom information quintile. Again, we want to emphasize that it is a mistake to view people at the top of the information scale as sophisticated independent thinkers, as we demonstrate in the remainder of this book. The point is simply that they are more consistent in the positions they espouse from one survey to the next.
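The measurement-error point in the note above is easy to verify numerically. In this sketch (our illustration; the noise level is an arbitrary assumption), latent attitudes are perfectly stable across interview waves, yet noisy responses alone pull the wave-to-wave correlation down into the range Converse observed.

```python
import numpy as np

# Minimal sketch (our illustration; noise level is an arbitrary assumption):
# perfectly stable latent attitudes, observed through noisy survey responses,
# can produce low wave-to-wave correlations even with zero true change.
rng = np.random.default_rng(0)
n = 1_000
latent = rng.normal(size=n)      # true attitudes, identical in both waves
noise_sd = 1.5                   # response error from vague questions

wave1 = latent + noise_sd * rng.normal(size=n)
wave2 = latent + noise_sd * rng.normal(size=n)
r = np.corrcoef(wave1, wave2)[0, 1]
print(f"wave-to-wave correlation: {r:.2f}")  # roughly .3 despite zero true change
```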
Converse’s essay set off a very substantial debate about his methodology and interpretations. Perhaps no single argument he advanced was fully persuasive. But even most critics agreed with his conclusion: the political “belief systems” of ordinary citizens bore little resemblance to the ideal embodied in the folk theory of democracy. As Kinder and Kalmoe (n.d., ms. 13) summarized current scholarly understanding, “Genuine ideological identification— an abiding dispositional commitment to an ideological point of view— turns out to be rare. Real liberals and real conservatives are found in impressive numbers only in the higher echelons of political society, confined to the comparatively few who are deeply and seriously engaged in political life.” For most ordinary citizens, ideology is— at best— a byproduct of more basic partisan and group loyalties. Thus, as Kinder and Kalmoe (n.d., ms. 12) noted, “Americans are much more resolute in their identification with party than they are in their identification with ideology.”
Research in other countries has generally produced similar portraits of democratic citizens. Their ideological self-placements are often driven more by partisanship than by policy positions (Inglehart and Klingemann 1976). Indeed, left-right terms are sometimes meaningful primarily as alternate names for the political parties— often, names that the parties themselves have taught the voters (Arian and Shamir 1983). Even in France, the presumed home of ideological politics, Converse and Pierce (1986, chap. 4) found that most voters did not understand political “left” and “right.” When citizens do understand the terms, they may still be uncertain or confused about where the parties stand on the left-right dimension (Butler and Stokes 1974, 323–337). Perhaps as a result, their partisan loyalties and issue preferences are often badly misaligned. In a 1968 survey in Italy, for example, 50% of those who identified with the right-wing Monarchist party took left-wing policy positions (Barnes 1971, 170).
Lest younger readers be tempted to suppose that this sort of confusion is a remnant of an older and less sophisticated political era (or an artifact of older and less sophisticated scholarly analysis), we note that careful recent studies have repeatedly turned up similar findings. For example, Elizabeth Zechmeister (2006, 162) found “striking, systematic differences . . . both within and across the countries” in the conceptions of “left” and “right” offered by elite private college students in Mexico and Argentina, while André Blais (personal communication) found half of German voters unable to place the party called “Die Linke”— the Left— on a left-right scale.
12. Even among professional politicians and intellectuals, the meaning and salience of ideology sometimes vary greatly with the political context. For example, the specific policy preferences of Latin American legislators typically explain less than 10% of the variance in their left-right self-placements (Zechmeister 2010, 105–110). In France, Converse and Pierce (1986, 129–132) found that “left” sounded good even to rightist deputies. Terms like “left” and “right” or “liberal” and “conservative,” when they make sense at all, often turn out to represent partisan commitments as much or more than issue positions. Thus, ideological language need not have much genuine ideological content, even among elites.
This rather bleak portrait of public opinion has provoked a good deal of resistance among political scientists, and a variety of concerted attempts to overturn or evade the findings of the classic Columbia and Michigan studies. In the 1970s, for example, some scholars claimed to have discovered The Changing American Voter, a much more issue-oriented and ideologically consistent specimen than the earlier studies had portrayed (Nie, Verba, and Petrocik 1976). Unfortunately, further scrutiny revealed that most of the apparent change could be attributed to changes in the questions voters were being asked rather than to more elevated political thinking. When 1970s voters were asked the old questions, their responses displayed little more consistency or sophistication than they had in the 1950s (Bishop, Oldendick, and Tuchfarber 1978a; 1978b; Brunk 1978; Sullivan et al. 1979).
13. Bartels (2010) summarized these and other developments in the scholarly study of American electoral behavior.
Other scholars have argued that overarching ideological convictions are unnecessary because citizens can derive meaningful policy preferences from somewhat narrower “core values” such as equal opportunity, limited government, or traditional morality (Feldman 1988; Goren 2001). Citizens’ allegiances (or antipathies) to these values do tend to be somewhat more stable than their specific policy preferences. However, they are a good deal less stable than the phrase “core values” would seem to imply, being significantly colored by party identification and even by short-term vote intentions (Goren 2005; McCann 1997).
Since the 1980s, the American political system has seen a substantial increase in partisan polarization, with Democratic elites becoming more clearly and consistently liberal and Republican elites more clearly and consistently conservative (Poole and Rosenthal 2007; Theriault 2008; Mann and Ornstein 2012). As a result, rank-and-file partisans have increasingly come to adopt ideological labels consistent with their partisanship (Layman, Carsey, and Horowitz 2006; Hetherington 2009; Levendusky 2009). Has this solved Converse’s problem? Alas, voters’ policy preferences seem to have become only modestly more “ideologically coherent” as a result, with the average correlation between pairs of policy preferences increasing from a paltry .16 in 1972 to a slightly less paltry .20 in 2012. Not much has changed.
14. Kinder and Kalmoe (n.d., ms. 45–46) helpfully calculated that if “ideological constraint continues to increase into the indefinite future” at the same modest rate, “the American public’s views on policy would eventually come to approximate the degree of structure shown by partisan elites today”—in about 300 years.
Similarly, when a group of scholars half a century later painstakingly replicated many of the specific analyses presented in The American Voter using survey data from 2000 and 2004, they found no change in most respects, and only glacial improvements in the remainder (Lewis-Beck et al. 2008). Voters, it seems, are what they are, and not what idealistic proponents of popular sovereignty might wish them to be. The folk theory is of little use in understanding actual democratic politics.
Thus, most contemporary scholars of public opinion have come to accept, at least in broad outline, Converse’s portrait of democratic citizens. Kinder and Kalmoe (n.d., ms. 61–62) conclude that “Converse’s conclusion of ideological innocence still stands. . . . Educational transformation, party polarization, revolutionary changes in information dissemination, fundamental alterations in gender and race relations: impressive as these changes have been, equally impressive is how little visible effect they have had on how the American electorate understands politics.”
15. Converse’s own subsequent assessments of these issues (1990; 2000; 2006) were broadly consistent with Kinder and Kalmoe’s. (The last of these assessments appeared as part of an extensive symposium in Critical Review on “democratic competence,” which also reprinted Converse’s original essay.)
Political Ignorance, Heuristics, and “The Miracle of Aggregation”
Confusion regarding political ideology is just the tip of a large iceberg of political unawareness. Michael Delli Carpini and Scott Keeter (1996) surveyed responses to hundreds of specific factual questions in U.S. opinion surveys over the preceding 50 years to provide an authoritative summary of What Americans Know about Politics and Why It Matters. In 1952, Delli Carpini and Keeter found, only 44% of Americans could name at least one branch of government. In 1972, only 22% knew something about Watergate. In 1985, only 59% knew whether their own state’s governor was a Democrat or a Republican. In 1986, only 49% knew which one nation in the world had used nuclear weapons (Delli Carpini and Keeter 1996, 70, 81, 74, 84). Delli Carpini and Keeter (1996, 270) concluded from these and scores of similar findings that “large numbers of American citizens are woefully underinformed and that overall levels of knowledge are modest at best.” Robert Luskin (2002, 282) put the same conclusion rather more colorfully, observing that most people “know jaw- droppingly little about politics.”
Here, too, it is striking how little seems to have changed in the decades since survey research began to shed systematic light on the nature of public opinion. Changes in the structure of the mass media have allowed people with an uncommon taste for public affairs to find an unprecedented quantity and variety of political news; but they have also allowed people with more typical tastes to abandon traditional newspapers and television news for round-the-clock sports, pet tricks, or pornography, producing an increase in the variance of political information levels but no change in the average level of political information (Baum and Kernell 1999; Prior 2007). Similarly, while formal education remains a strong predictor of individuals’ knowledge about politics, substantial increases in American educational attainment have produced little apparent increase in overall levels of political knowledge. When Delli Carpini and Keeter (1996, 17) compared responses to scores of factual questions asked repeatedly in opinion surveys over the past half century, they found that “the public’s level of political knowledge is little different today than it was fifty years ago. Given the ample reasons to expect changing levels of knowledge over the past fifty years, this finding provides the strongest evidence for the intractability of political knowledge and ignorance.” Ilya Somin (2013, 192) concluded from a more recent survey along similar lines that “widespread political ignorance is a serious problem for democracy,” and questioned “whether the modern electorate even comes close to meeting the requirements of democratic theory.”
Some critics of this perspective have supposed that opinion surveys significantly underestimate people’s political knowledge by providing insufficient motivation for them to answer questions correctly (Prior and Lupia 2008; Bullock et al. 2013). Unfortunately, insufficient motivation is endemic to mass politics, not an artifact of opinion surveys; we do not doubt that voters would be better informed if they were paid to learn political facts, but that seems impractical (and, judging by the results of these studies, extremely expensive). Others imagine that “visual political knowledge”— recognizing the faces of political figures but not their names— provides “a different road to competence” (Prior 2014); but adding photographs to the ballot would raise significant additional problems of voter bias (Todorov et al. 2005; Lawson et al. 2010; Olivola and Todorov 2010; Lenz and Lawson 2011).
Most attempts to “redeem” the electorate have taken a different tack, acknowledging that voters are generally inattentive and uninformed but denying that the quality of their political decisions suffers much as a result. For example, formal theorists have proposed versions of the spatial model of elections in which the usual postulate that voters are fully informed is loosened somewhat. Unfortunately, even “uninformed” voters in these models know a great deal more than most real voters do. One influential spatial model of elections with “uninformed” voters (McKelvey and Ordeshook 1985; 1986) posited that all voters know the distribution of voters’ ideal points, informed voters know the candidates’ positions exactly, and uninformed voters know the levels of “informed” support for candidates (from poll data) and the left-right order of the candidates’ positions, from which they then proceed to infer the candidates’ positions on the basis of spatial theory. Alas, as we have seen, most voters do not know what political “left” and “right” mean, much less know what informed voters think. Thus, few if any of these cheery assumptions are likely to hold in practice.
In the early 1990s, a spate of books with such reassuring titles as The Reasoning Voter (Popkin 1991), Reasoning and Choice (Sniderman, Brody, and Tetlock 1991), and The Rational Public (Page and Shapiro 1992) argued that voters could use “information shortcuts” or “heuristics” to make rational electoral choices even though they lacked detailed knowledge about candidates and policies. These shortcuts could take a variety of forms, including “cues” from trusted individuals or groups, inferences derived from political or social stereotypes, or generalizations from personal experience or folk wisdom.
Sociologists and political scientists have long recognized that citizens sometimes take “cues” from better informed friends, relatives, neighbors, or coworkers (Katz and Lazarsfeld 1955; Huckfeldt and Sprague 1995; Mutz 2006). The writer Calvin Trillin once had such an arrangement. “Mrs. Trillin, Alice, gave Cyprus to Mr. Trillin for his birthday; for the next 12 months, she would think about Cyprus and he wouldn’t have to. Mr. Trillin, for Christmas, gave Mrs. Trillin Iran. Neither of them was willing to take over thinking about the SALT [disarmament] talks” (Leonard 1982). The very humor of the story points to one limitation of this defense of democracy: while Mr. and Mrs. Trillin were sufficiently well informed (and, we hope, politically compatible) to make such a division of labor feasible and efficient, most citizens don’t have a Cyprus watcher in the house and would not know what to make of advice about Cyprus if they had it.
The literature on “heuristics” in political science is an odd stepchild of the corresponding literature in psychology. Psychologists have devoted exhaustive attention to the biases in judgment produced by reliance on specific, identifiable heuristics. For example, the classic collection of essays edited by Daniel Kahneman, Paul Slovic, and Amos Tversky (1982) included reports on “belief in the law of small numbers,” “shortcomings in the attribution process,” “egocentric biases in availability and attribution,” “the illusion of control,” and “overconfidence in case-study judgments,” among other topics. It also included a series of essays on “corrective procedures” intended to mitigate the effects of these various biases and shortcomings.
Political scientists, by comparison, have typically been much more likely to view “heuristics” as a boon to democracy. We suspect, along with James Kuklinski and Paul Quirk (2000, 154), that this enthusiasm has much to do with the fact that “the notion of a competent citizenry is normatively attractive. It buttresses efforts to expand citizen participation and credits the citizenry for some of American democracy’s success.”
When students of political heuristics have defined the tasks of citizens sufficiently clearly for concrete performance benchmarks to be meaningful, they have tended to present those tasks in such highly simplified form that all of the difficulties of real political inference are abstracted away (Lupia and McCubbins 1998). More often, observed preferences and behavior are deemed “rational” simply because they look reasonable or seem to be influenced by plausibly relevant considerations. In one of the most colorful examples of a political “information shortcut,” Samuel Popkin argued that Mexican-American voters had good reason to be suspicious of President Gerald Ford when he made the mistake, during a Texas primary campaign appearance, of trying to down a tamale without first removing its cornhusk wrapper. According to Popkin (1991, 3), “Showing familiarity with a voter’s culture is an obvious and easy test of ability to relate to the problems and sensibilities of the ethnic group and to understand and care about them.” An obvious and easy test, yes. An accurate basis for inferring Ford’s sensitivities toward Mexican-Americans? We have no idea.
Lacking any objective standard for distinguishing reliable political cues from unreliable ones, some scholars have simply asked whether uninformed citizens— using whatever “information shortcuts” are available to them— manage to mimic the preferences and choices of better informed people. Alas, statistical analyses of the impact of political information on policy preferences have produced ample evidence of substantial divergences between the preferences of relatively uninformed and better informed citizens (Delli Carpini and Keeter 1996, chap. 6; Althaus 1998). Similarly, when ordinary people are exposed to intensive political education and conversation on specific policy issues, they often change their minds (Luskin, Fishkin, and Jowell 2002; Sturgis 2003).
Parallel analyses of voting behavior have likewise found that uninformed citizens cast significantly different votes than those who were better informed. For example, Bartels (1996) estimated that actual vote choices fell about halfway between what they would have been if voters had been fully informed and what they would have been if everyone had picked candidates by flipping coins. Richard Lau and David Redlawsk (1997; 2006) analyzed the same elections using a less demanding criterion for assessing “correct” voting. (They took each voter’s partisanship, policy positions, and evaluations of candidate performance as given, setting aside the fact that these, too, may be subject to errors and biases.) They found that about 70% of voters, on average, chose the candidate who best matched their own expressed preferences. Lau and Redlawsk (2006, 88, 263) wondered, “Is 70 percent correct enough?”
16. The phrase “fully informed” is a misnomer here, since Bartels’s imputations of “fully informed” voting behavior were based on observed variations in voting behavior across a five-point summary measure of survey respondents’ general level of information about politics and public affairs. It seems safe to assume that even respondents at the top of this information scale were, in reality, far from being “fully informed.” Thus, the effects of low political knowledge on voting behavior were almost certainly underestimated. Bartels (1990) provided a more detailed discussion of political interests, political enlightenment, and the logic and potential applications of the imputation strategy.
Answering that question requires a careful assessment of the extent to which “incorrect” votes skew election outcomes. Optimism about the competence of democratic electorates has often been bolstered (at least among political scientists) by appeals to what Converse (1990) dubbed the “miracle of aggregation”— an idea formalized by the Marquis de Condorcet more than 200 years ago and forcefully argued with empirical evidence by Benjamin Page and Robert Shapiro (1992). Condorcet demonstrated mathematically that if several jurors make independent judgments of a suspect’s guilt or innocence, a majority are quite likely to judge correctly even if every individual juror is only modestly more likely than chance to reach the correct conclusion. Applied to electoral politics, Condorcet’s logic suggests that the electorate as a whole may be much wiser than any individual voter.
The crucial problem with this mathematically elegant argument is that it does not work very well in practice. Real voters’ errors are quite unlikely to be statistically independent, as Condorcet’s logic requires. When thousands or millions of voters misconstrue the same relevant fact or are swayed by the same vivid campaign ad, no amount of aggregation will produce the requisite miracle; individual voters’ “errors” will not cancel out in the overall election outcome, especially when they are based on constricted flows of information (Page and Shapiro 1992, chaps. 5, 9). If an incumbent government censors or distorts information regarding foreign policy or national security, the resulting errors in citizens’ judgments obviously will not be random. Less obviously, even unintentional errors by politically neutral purveyors of information may significantly distort collective judgment, as when statistical agencies or the news media overstate or understate the strength of the economy in the run- up to an election (Hetherington 1996).
17. Formal theorists have also raised questions regarding the logical underpinnings of the argument, which typically hinge on the assumption that voters behave “sincerely” rather than strategically (Austen-Smith and Banks 1996; Feddersen and Pesendorfer 1998).
18. What we have in mind is that voters’ errors can be correlated and thus not independently distributed— not the random white noise that Condorcet assumed. At the same time, voters’ judgments typically depend on irrelevancies that do not reflect incumbent competence, so that election outcomes are not predictable by rational considerations— elections are “random” in that sense. Thus “nonrandom” (correlated) voter errors can lead to “random” (unpredictable from rational considerations) election outcomes.
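A small simulation makes the contrast in note 18 concrete. In the sketch below (a hypothetical illustration with parameter values of our choosing), each voter is correct with probability 0.52. When errors are purely idiosyncratic, majorities are virtually always correct; when a shared shock, such as a misleading campaign ad, shifts every voter’s probability in the same direction, majorities err in roughly a third of elections.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voters, n_elections = 10_001, 1_000

# Independent errors: every voter correct with probability 0.52; errors
# are idiosyncratic and cancel in the aggregate (Condorcet's setting).
votes = rng.random((n_elections, n_voters)) < 0.52
print((votes.mean(axis=1) > 0.5).mean())   # ~1.00: majorities nearly always right

# Correlated errors: a common shock (a misleading ad, a distorted statistic)
# shifts every voter's probability in the same direction in each election.
shock = rng.normal(0.0, 0.05, n_elections)
p = np.clip(0.52 + shock, 0.0, 1.0)
votes = rng.random((n_elections, n_voters)) < p[:, None]
print((votes.mean(axis=1) > 0.5).mean())   # ~0.66: majorities err about a third of the time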
Bartels (1996) estimated how well the overall outcomes of six presidential elections matched what they would have been if every voter had been “fully informed.” The average discrepancy between the actual popular vote and the hypothetical “fully informed” outcome of each election amounted to three percentage points— more than enough to swing a close contest. Related analyses of voting behavior in Sweden (Oscarsson 2007), Canada (Blais et al. 2009), Denmark (Hansen 2009), and many other countries (Arnold 2012) have found similar effects of information on aggregate election outcomes. Thus the lack of political knowledge matters— not only for individual voters, but also for entire electorates, the policies they favor, and the parties they elect.
The Illusion of “Issue Voting”
The spatial theory of voting cast “issue proximity” as both the primary determinant of voters’ choices and the primary focus of candidates’ campaign strategies. Over the course of the 1960s and 1970s, this theoretical development was gradually but powerfully translated into empirical analyses of voting behavior. In the authoritative American National Election Studies conducted by the University of Michigan, questions regarding issues of public policy were increasingly recast as seven-point “issue scales” directly inspired by spatial theory. Survey respondents were invited to “place” themselves, candidates, and parties on each issue dimension. The proliferation of issue scales provided ample raw material for naive statistical analyses relating vote choices to “issue proximities” calculated by comparing respondents’ own positions on these issue scales with the positions they attributed to the competing candidates or parties.
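The proximity calculations underlying such analyses are mechanically simple. The sketch below uses our own illustrative coding, scoring a candidate by the negative average absolute distance between a respondent’s self-placements and the placements she assigns to that candidate (squared distance is an equally common choice in this literature).

```python
import numpy as np

def issue_proximity(self_placements, perceived_candidate_positions):
    """Negative mean absolute distance between a respondent's self-placements
    and the positions she attributes to a candidate on seven-point issue
    scales (1-7). Higher values mean the candidate is perceived as closer."""
    s = np.asarray(self_placements, dtype=float)
    c = np.asarray(perceived_candidate_positions, dtype=float)
    return -np.nanmean(np.abs(s - c))   # nanmean skips scales left unplaced

# A respondent at 2 on one scale and 6 on another, comparing two candidates:
voter = [2, 6]
print(issue_proximity(voter, [3, 5]))   # -1.0: the closer candidate
print(issue_proximity(voter, [6, 2]))   # -4.0: the more distant candidate
```

Note that both inputs come from the same respondent; as the next paragraphs explain, that is precisely what makes the resulting correlations causally ambiguous.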
The causal ambiguity inherent in statistical analyses of this sort was clear to scholars of voting behavior by the early 1970s. Richard Brody and Benjamin Page (1972; Page and Brody 1972) outlined three distinct interpretations of the positive correlation between “issue proximities” and vote choices. The first, policy-oriented evaluation, corresponds to the conventional interpretation of issue voting in the folk theory of democracy: prospective voters observe the candidates’ policy positions, compare them to their own policy preferences, and choose a candidate accordingly. The second, persuasion, involves prospective voters altering their own issue positions to bring them into conformity with the issue positions of the candidate or party they favor. The third, projection, involves prospective voters convincing themselves that the candidate or party they favor has issue positions similar to their own (and perhaps also that disfavored candidates or parties have dissimilar issue positions) whether or not this is in fact the case. In both the second and third cases, “issue proximity” is a consequence of the voter’s preference for a specific candidate or party, not a cause of that preference.
Brody and Page (1972, 458) wrote, “We need some means for examining the potential for ‘persuasion’ and for ‘projection’ and of estimating them as separate processes. . . . If the estimation of policy voting is important to the understanding of the role of the citizen in a democracy— and theorists of democracy certainly write as if it is— then any procedure which fails to control for projection and persuasion will be an undependable base upon which to build our understanding.” Brody and Page’s clear warning was followed by some resourceful attempts to resolve the causal ambiguity they identified (Jackson 1975; Markus and Converse 1979; Page and Jones 1979; Franklin and Jackson 1983). Unfortunately, those attempts mostly served to underline the extent to which the conclusions drawn from such analyses rested on fragile and apparently untestable statistical assumptions. Perhaps most dramatically, back-to-back articles by Markus and Converse (1979) and by Page and Jones (1979) in the same issue of the American Political Science Review estimated complex simultaneous-equation models relating partisanship, issue proximity, and assessments of candidates’ personalities using the same data from American National Election Studies surveys, but came to very different conclusions about the bases of voting behavior. If two teams of highly competent analysts asking essentially similar questions of the same data could come to such different conclusions, it seemed clear that the results of such exercises must depend at least as much on the analysts’ theoretical preconceptions and associated statistical assumptions as on the behavior of voters.
In light of this apparent impasse, many scholars of voting behavior have preferred to sidestep the causal ambiguity plaguing the relationship between issue positions and votes by reverting to simple single-equation models in which issue positions can affect vote choices but not vice versa. In effect, they have relied on the assumptions of the folk theory of democracy rather than empirical evidence to resolve the problem raised by Brody and Page. For example, Stephen Ansolabehere, Jonathan Rodden, and James Snyder (2008) cumulated responses to dozens of specific issue questions in American National Election Studies surveys into just two broad (“economic” and “moral”) issue positions, discarding all of the remaining variation in specific issue responses as attributable to “measurement error.” They imputed issue positions for voters who had none, or simply dropped them from the analysis. Then they imposed a model in which the observed relationship between issue positions and vote choices (net of partisanship and ideology) was attributed entirely to issue voting, with no allowance for persuasion or group identity effects. Unsurprisingly, they reported finding “stable policy preferences” and “strong evidence of issue voting” (Ansolabehere, Rodden, and Snyder 2008, 229).
19. Some analysts have mitigated the resulting problem of bias due to projection by substituting sample average perceptions of the candidates’ issue positions for individual respondents’ own perceptions (e.g., Aldrich, Sullivan, and Borgida 1989; Erikson and Romero 1990; Alvarez and Nagler 1995). While this approach has the considerable virtue of reducing bias due to projection, it does nothing to mitigate bias due to persuasion; to the extent that voters adopt issue positions consistent with those of parties or candidates they support for other reasons, they will still (misleadingly) appear to be engaging in issue voting. Moreover, substituting sample average perceptions of the candidates’ issue positions for respondents’ own perceptions sacrifices a good deal of theoretical coherence, since it is very difficult to see how or why voters would compare their own issue positions to other people’s perceptions of the candidates’ positions, ignoring their own perceptions.
20. Freeder, Lenz, and Turney (2014) provided a reinterpretation of this evidence more consistent with Converse’s (1964) view and our own.
Subsequent work by Gabriel Lenz (2009; 2012) provided substantial additional grounds for skepticism regarding inferences of this sort. Lenz used repeated interviews with the same individuals to show that persuasion plays a large role— and policy-oriented evaluation remarkably little role— in accounting for observed associations between issue positions and votes. That is, candidate choices determine issue positions, not vice versa.
In the 2000 presidential campaign, for example, candidate George W. Bush advocated allowing individual citizens to invest Social Security funds in the stock market, thereby catapulting a previously obscure policy proposal into the political limelight. Much of the news coverage and advertising in the final month of the campaign focused on the candidates’ contrasting stands on the issue; in a typical “battleground” media market, the two candidates together ran about 200 ads touching on Social Security privatization just in the final week before Election Day (Johnston, Hagen, and Jamieson 2004, 153–159). And, sure enough, the statistical relationship between voters’ views on Social Security privatization and their preferences for Bush or Al Gore (holding constant party identification) more than doubled over the course of the campaign.
This is exactly the sort of shift we might expect if voters were attending to the political debate, weighing the competing candidates’ policy platforms, and formulating their vote intentions accordingly. However, Lenz’s more detailed analysis employing repeated interviews with the same people demonstrated that this substantial increase in the apparent electoral impact of views about Social Security privatization was almost entirely illusory— due not to changes in vote intentions, but to Bush and Gore supporters learning their preferred candidate’s position on the issue and then adopting it as their own. As Lenz (2012, 59) put it, “the increase in media and campaign attention to this issue did almost nothing to make people whose position was the same as Bush’s more likely to vote for Bush than they already were.”
On issue after issue— ranging from support for public works in 1976 and defense spending in 1980 to European integration in Britain to nuclear power in the Netherlands in the wake of the Chernobyl reactor meltdown— Lenz’s analyses provided substantial evidence of vote-driven changes in issue positions but little or no evidence of issue-driven changes in candidate or party preferences. As John Zaller (2012, 617) put it, “Partisan voters take the positions they are expected as partisans to take, but do not seem to care about them.” Lenz (2012, 235) characterized these findings as “disappointing” for “scholars who see democracy as fundamentally about voters expressing their views on policy,” noting that the “inverted” relationship between issue positions and votes seemed to leave politicians with “considerable freedom in the policies they choose.”
Elections and Public Policy
Almost all of the scholarly evidence we have considered thus far regarding public opinion and electoral behavior focuses on the attitudes and votes of individual citizens. Fortunately, we are not limited to individual-level analyses of public opinion and voting behavior. We can also attempt to assess directly how elections shape democratic politics. Does issue voting compel both parties to adopt policy positions close to those of the median voter, as the spatial theory of elections implies?
As we have seen, U.S. presidential elections in the post–World War II era— and especially the landslide defeats suffered by Barry Goldwater in 1964 and George McGovern in 1972— seemed to comport rather well with the predictions of the spatial theory. However, more systematic research on U.S. presidential elections has suggested that Goldwater and McGovern’s losses had less to do with their issue positions than with election-year economic conditions; ideological “extremism” probably cost them just a few percentage points of the popular vote (Bartels and Zaller 2001; Cohen and Zaller 2012). More generally, the impact of candidates’ policy stands on election outcomes— at least over the range of policy stands observed in modern presidential elections— seems to be quite modest. As Zaller (2012, 616) put it, “the penalty for extremism, if real, is not large.”
The broad analysis of U.S. public policy in Robert Erikson, Michael MacKuen, and James Stimson’s The Macro Polity (2002, 303–311) similarly underlines the failure of issue voting to discipline politicians in the manner suggested by the spatial theory of elections. Erikson, MacKuen, and Stimson measured the ideological tenor of policy activity in each branch of Congress and the White House over more than 40 years. They found that policy outcomes shifted substantially when partisan control shifted from Democrats to Republicans or from Republicans to Democrats. The public’s “policy mood” (Erikson, MacKuen, and Stimson 2002, 194–205) also influenced policy regardless of which party was in control, but that effect was small by comparison. For example, the estimated impact on White House policy activity of moving from the most conservative “policy mood” recorded in four decades to the most liberal “policy mood” was only about one-third as large as the estimated impact of replacing a typical Republican president with a typical Democrat. The estimated effects of partisan control on congressional policy activity were even larger. The implication is that citizens affect public policy— insofar as they affect it at all— almost entirely by voting out one partisan team and replacing it with another.
If the election of a Republican or Democratic president itself provided a reliable signal of the public’s “policy mood,” the resulting swing to right or left in policy outcomes might be characterized as a reflection of “majority rule” (though not in the sense suggested by the median voter theorem). The authors of The Macro Polity argued that presidential election outcomes are strongly affected by the public’s “policy mood” (Erikson, MacKuen, and Stimson 2002, chap. 7). However, their statistical analyses required delicate controls for the prevailing balance of partisan loyalties in the electorate and the (inferred) ideological positions of the competing candidates. Subsequent analyses have found the apparent impact of “policy mood” evaporating once election-year economic conditions are taken into account (Cohen and Zaller 2012, table 3). Meanwhile, scholars attempting to forecast presidential election outcomes (e.g., Abramowitz 2012; Erikson and Wlezien 2012; Hibbs 2012) have generally been content to ignore “policy mood,” issue preferences, and ideology— a telling indication that these factors are of relatively little importance in determining who wins.
Studies of Congress likewise find that the policy preferences of citizens in a given state or district are only modestly predictive of election outcomes— and that Democrats and Republicans routinely take very different stands once they are elected, even when they represent states or districts with very similar political views. Both of these points are clear in figure 2.1, which relates the overall roll call voting record of each member of the House of Representatives in the 112th Congress (2011–2013) to the policy preferences of his or her constituents. (Republican members are denoted by diamonds and Democrats by circles.) District preferences are measured using a 12-item scale including liberal-conservative self-identification, beliefs about climate change, and support for the Affordable Care Act, domestic spending, the Iraq War, gays in the military, gun control, affirmative action, environmental protection, defense spending, a path to citizenship for illegal immigrants, and abortion. The scale runs from 0 (most liberal) to 100 (most conservative), but the range of district averages is much narrower, from 24.5 to 59.0.
21. We summarize each representative’s entire roll call voting record using an index developed by Keith Poole and Howard Rosenthal (2007). Their (first-dimension) DW-NOMINATE scores represent “ideal points” that account as accurately as possible for each representative’s entire roll call voting record under the assumption of spatial voting. The scale on which the ideal points are measured is arbitrary, but they are conventionally normalized to run from −1 for the most liberal member of the House to +1 for the most conservative member. However, due to constraints imposed by the DW-NOMINATE algorithm on the movement of each representative on the scale from one Congress to the next, the range of actual scores in figure 2.1 is from −0.729 to 1.376.
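The spatial-voting assumption underlying these scores can be stated very compactly: a legislator votes yea whenever the yea outcome is closer to her ideal point than the nay outcome. The sketch below illustrates the rule in one dimension; it is only a caricature, since the actual DW-NOMINATE estimation recovers ideal points and outcome locations jointly from observed votes and uses a probabilistic utility model rather than this deterministic rule.

```python
def spatial_vote(ideal_point, yea_position, nay_position):
    """Deterministic one-dimensional spatial voting: support the closer outcome."""
    if abs(ideal_point - yea_position) < abs(ideal_point - nay_position):
        return "yea"
    return "nay"

# A legislator at -0.3 facing a bill at 0.1 with the status quo at 0.6:
print(spatial_vote(-0.3, 0.1, 0.6))   # "yea": the bill is closer than the status quo
```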
22. These data are from surveys conducted in 2010 and 2012 by the Internet survey firm YouGov as part of the Cooperative Congressional Election Study (CCES). There were a total of 52,464 respondents in 2010 and 51,661 in 2012. The combined sample size in each congressional district ranged from 88 to 515 and averaged 239. YouGov employs opt-in recruiting, but uses matching and weighting to produce representative samples of adult U.S. citizens (Vavreck and Rivers 2008). Additional information regarding the CCES surveys is available at http://projects.iq.harvard.edu/cces/home.
23. Factor analysis produced a single dimension with factor loadings ranging from .78 for liberal-conservative self-identification to .50 for abortion. The resulting weights of the individual survey items are .176 for liberal-conservative self-identification, .161 for beliefs about climate change, .104 for domestic spending, .083 for support for the Affordable Care Act, .076 for environmental protection, .072 for affirmative action, .067 for gun control, .059 for the Iraq War, .058 for defense spending, .054 for gays in the military, .051 for abortion, and .039 for a path to citizenship for illegal immigrants.
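Given the item weights in note 23, which sum to one, a district’s score is simply a weighted average of its item means. A minimal sketch, assuming each item has already been coded from 0 (most liberal) to 100 (most conservative); the variable names are ours:

```python
# Weights reported in note 23; they sum to 1.0, so a weighted average of
# items coded 0-100 stays on the 0-100 scale described in the text.
WEIGHTS = {
    "lib_con_self_id": .176, "climate_change": .161, "domestic_spending": .104,
    "affordable_care_act": .083, "environment": .076, "affirmative_action": .072,
    "gun_control": .067, "iraq_war": .059, "defense_spending": .058,
    "gays_in_military": .054, "abortion": .051, "path_to_citizenship": .039,
}

def district_score(item_means):
    """Weighted average of a district's item means (each coded 0-100)."""
    return sum(WEIGHTS[item] * mean for item, mean in item_means.items())

# A hypothetical district whose residents average 60 on every item scores ~60:
print(district_score({item: 60 for item in WEIGHTS}))   # ~60.0
```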
The most liberal congressional districts in the country, at the far left of figure 2.1, invariably elected Democrats to the House in 2010. (These were overwhelmingly urban and mostly majority-minority districts.) At the opposite extreme, the most conservative districts almost all elected Republicans. However, for districts in the broad middle of the political spectrum, election outcomes were a rather unreliable reflection of citizens’ policy preferences. Moderately liberal districts (in the second quartile of the national distribution) elected Republicans 46% of the time, while moderately conservative districts (in the third quartile) elected Democrats 25% of the time.
24. The corresponding relationship between U.S. Senate election outcomes and constituents’ policy preferences is even weaker. The difference may reflect the fact that Senate elections tend to involve more publicity and campaign spending than House elections, making voters more susceptible to being swayed by candidate-specific factors unrelated to policy (Krasno 1997).
The modest correlation between constituents’ preferences and election outcomes implies substantial variation in representation, given the gulf in roll call voting behavior between Republicans and Democrats representing similar districts in figure 2.1. Nor is it the case that representatives won election in what looked like uncongenial districts by catering closely to citizens’ preferences at the expense of their own (or their parties’) convictions. The dotted lines in figure 2.1 summarize the separate linear relationships between the conservatism of each party’s representatives and their constituents’ preferences. Within each party, there is a modest positive relationship between constituents’ preferences and House members’ roll call votes. However, the magnitudes of those relationships are dwarfed by the distance between the two lines, which represents the expected difference in conservatism between Republican and Democratic members representing districts with identical public opinion.
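The dotted lines just described are ordinary within-party least-squares fits. A minimal sketch, assuming arrays of district preference scores, members’ DW-NOMINATE scores, and a Boolean party indicator (all names illustrative):

```python
import numpy as np

def within_party_fits(district_pref, nominate, is_republican):
    """Fit separate least-squares lines relating members' roll call scores to
    district preferences within each party (the dotted lines in figure 2.1).
    Returns {party: (slope, intercept)}; the vertical gap between the two
    fitted lines at a given district preference is the expected difference
    in conservatism between members of the two parties facing identical
    constituency opinion."""
    fits = {}
    for party, mask in (("R", is_republican), ("D", ~is_republican)):
        slope, intercept = np.polyfit(district_pref[mask], nominate[mask], 1)
        fits[party] = (slope, intercept)
    return fits
```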
Clearly, Republican and Democratic members of Congress representing constituents with similar preferences behaved in very different ways. Whether these differences were produced by differences in the representatives’ personal ideological convictions or party pressures or other factors is, for our purposes here, irrelevant. The key point is that representatives’ voting behavior was not strongly constrained by their constituents’ views. Elections do not force successful candidates to reflect the policy preferences of the median voter, as Downsian logic implies.
25. Since the analysis presented in figure 2.1 employs incommensurate measures of representatives’ voting behavior and constituents’ preferences, we are not able to say whether Republican representatives were more conservative than their constituents or Democratic representatives were more liberal or both.
The pattern of partisan polarization evident in figure 2.1 is not a fluke attributable to a particular congressional session or opinion survey. Indeed, a historical analysis of every Congress going back to the 1870s (using presidential election returns as proxies for constituents’ preferences) suggests that similar differences in expected roll call voting patterns between Republicans and Democrats representing similar constituencies have been fairly common in the past 140 years (Bartels, Clinton, and Geer forthcoming).
26. For example, Bartels (2008, 256) documented a qualitatively similar pattern for the U.S. Senate in the late 1980s and early 1990s, and Joshua Clinton (2006, 401) did the same for the U.S. House of Representatives in 1999–2000.
27. Bartels, Clinton, and Geer’s (forthcoming) analysis suggests that the intense partisan polarization of congressional roll call voting since 1994 (above and beyond what could be accounted for by differences in district preferences) was matched in the period from 1874 through 1920. By comparison, the period from the mid-1930s through the mid-1970s was one of consistently low partisan polarization by this measure.
Scholars of comparative politics sometimes argue that the pattern of alternating partisan extremism that has characterized the American political system through much of its history is absent or attenuated in multiparty systems where legislative seats are allocated through proportional representation. According to G. Bingham Powell (2000, 243), for example, “proportional systems are more successful in getting governments (and even more so the influential policymakers) close to the median citizen.” However, assessments of this sort depend on untested assumptions about how parties’ positions get translated into policies under different institutional arrangements. More direct assessments of patterns of responsiveness have found little consistent difference between proportional and majoritarian systems in how closely actual policy outcomes correspond to median voters’ preferences (e.g., Kang and Powell 2010; Bartels 2015). Perhaps some other institutional arrangement would work better; we do not know. But at present, there is little evidence to suggest that changes in electoral institutions will be sufficient to ensure popular control of public policy through electoral competition.
Conclusion
The folk theory of electoral democracy— the notion that elections can “reveal the ‘will’ or the preferences of a majority on a set of issues,” as Dahl (1956, 131) put it— has played a central role in both political science and popular thinking about politics. Scholars have elegantly codified the populist ideal in the “spatial theory” of elections. The behavior it enshrines as normative, “issue voting,” is widely regarded as a hallmark of good citizenship. Indeed, a conscientious voter nowadays can choose among a variety of websites inviting her to answer a series of policy questions and be told which candidates to support— apparently on the assumption that nothing other than the candidates’ policy positions (and just these policy positions) should matter to her. The social science theorizing and the cultural norm are both derived directly from the folk theory of democracy.
28. See, for example, http://votesmart.org/voteeasy/, http://www.ontheissues.org, http://www.votehelp.org/, http://selectsmart.com/politics.html, http://www.quizrocket.com/political-party-quiz.
Unfortunately, as we have seen, this populist ideal in both its scientific and popular incarnations suffers from grave logical and practical problems. Both the remarkable theoretical insights of Arrow (1951) and his successors and the seminal empirical research of Converse (1964) and many others punched significant holes in the romantic populist notion of democracy. Although a great deal of subsequent scholarly effort has been devoted to recasting, circumscribing, or rejecting their claims, repeated attempts to sidestep the theoretical and empirical deficiencies of the populist ideal have failed, leaving the “strong challenge to democratic hopes” (Kinder 2003, 15) posed by modern social science fundamentally intact.
These scientific findings have had little effect on practical politics. Joseph Schumpeter (1942, 250) argued— perhaps wishfully— that “today it is difficult to find any student of social processes who has a good word for” the simplistic notions of the folk theory. Nevertheless, he added, “action continued to be taken on that theory all the time it was being blown to pieces. The more untenable it was being proved to be, the more completely it dominated official phraseology and the rhetoric of the politician” (Schumpeter 1942, 249). More than seven additional decades of demolition work have done little to alter that picture.
Indeed, periodic frustration with the apparent failure of elections to faithfully translate “the will of the people” into public policy has prompted repeated attempts to wrench American political practice into closer accordance with the folk theory. The principal goal of these efforts has been to constrain or even bypass those whom the reformers blamed for their disappointments— professional politicians. Perhaps most radically, reformers in the Progressive Era attempted “to restore the absolute sovereignty of the people” (Bourne 1912) by circumventing the traditional electoral process altogether, instituting “direct democracy” via initiatives and referendums. We turn next to the promise and pitfalls of these repeated attempts to impose an idealistic theory on the recalcitrant reality of American democracy.