7. The Importance of Research Design

Gary King, Robert O. Keohane, and Sidney Verba

 

Receiving five serious reviews in this symposium is gratifying and confirms our belief that research design should be a priority for our discipline. We are pleased that our five distinguished reviewers appear to agree with our unified approach to the logic of inference in the social sciences, and with our fundamental point: that good quantitative and good qualitative research designs are based fundamentally on the same logic of inference. The reviewers raise virtually no objections to the main practical contribution of our book: our many specific procedures for avoiding bias, getting the most out of qualitative data, and making reliable inferences.

 

However, the reviews make clear that although our book may be the latest word on research design in political science, it is surely not the last. We are taxed for failing to include important issues in our analysis and for dealing inadequately with some of what we included. Before responding to the reviewers’ most direct criticisms, let us explain what we emphasize in Designing Social Inquiry and how it relates to some of the points raised by the reviewers.

 

WHAT WE TRIED TO DO

 

Designing Social Inquiry grew out of our discussions while coteaching a graduate seminar on research design, reflecting on job talks in our department, and reading the professional literature in our respective subfields. Although many of the students, job candidates, and authors were highly sophisticated qualitative and quantitative data collectors, interviewers, soakers and pokers, theorists, philosophers, formal modelers, and advanced statistical analysts, many nevertheless had trouble defining a research question and designing the empirical research to answer it. The students proposed impossible fieldwork to answer unanswerable questions. Even many active scholars had difficulty with the basic questions: What do you want to find out? How are you going to find it out? And above all, how would you know if you were right or wrong?

 

We found conventional statistical training to be only marginally relevant to those with qualitative data. We even found it inadequate for students with projects amenable to quantitative analysis, since social science statistics texts rarely focus on research design in observational settings. With a few important exceptions, the scholarly literatures in quantitative political methodology and other social science statistics fields treat existing data and their problems as given. As a result, these literatures largely ignore research design and, instead, focus on making valid inferences through statistical corrections to data problems. This approach has led to some dramatic progress; but it slights the advantage of improving research design to produce better data in the first place, which almost always improves inferences more than the necessarily after-the-fact statistical solutions.

 

This lack of focus on research design in social science statistics is as surprising as it is disappointing, since some of the most historically important works in the more general field of statistics are devoted to problems of research design (see, e.g., Fisher 1935, The Design of Experiments). Experiments in the social sciences are relatively uncommon, but we can still have an enormous effect on the value of our qualitative or quantitative information, even without statistical corrections, by improving the design of our research. We hope our book will help move these fields toward studying innovations in research design.

 

We culled much useful information from the social science statistics literatures and qualitative methods fields. But for our goal of explicating and unifying the logic of inference, both literatures had problems. Social science statistics focuses too little on research design, and its language seems arcane if not impenetrable. The numerous languages used to describe methods in qualitative research are diverse, inconsistent in jargon and methodological advice, and not always helpful to researchers. We agree with David Collier that aspects of our advice can be rephrased into some of the languages used in the qualitative methods literature or into that used by quantitative researchers. We hope our unified logic and, as David Laitin puts it, our “common vocabulary” will help foster communication about these important issues among all social scientists. But we believe that any coherent language could be used to convey the same ideas.

 

We demonstrated that “the differences between the quantitative and qualitative traditions are only stylistic and are methodologically and substantively unimportant” (KKV 4). Indeed, much of the best social science research can combine quantitative and qualitative data, precisely because there is no contradiction between the fundamental processes of inference involved in each. Sidney Tarrow asks whether we agree that “it is the combination of quantitative and qualitative” approaches that we desire (95 this volume). We do. But to combine both types of data sources productively, researchers need to understand the fundamental logic of inference and the more specific rules and procedures that follow from an explication of this logic.

 

Social science, both quantitative and qualitative, seeks to develop and evaluate theories. Our concern is less with the development of theory than with theory evaluation: how to use the hard facts of empirical reality to form scientific opinions about the theories and generalizations that are the hoped-for outcome of our efforts. Our social scientist uses theory to generate observable implications, then systematically applies publicly known procedures to infer from evidence whether what the theory implied is correct. Some theories emerge from detailed observation, but they should be evaluated with new observations, preferably ones that had not been gathered when the theories were being formulated. Our logic of theory evaluation stresses maximizing leverage: explaining as much as possible with as little as possible. It also stresses minimizing bias. Lastly, though it cannot eliminate uncertainty, it encourages researchers to report estimates of the uncertainty of their conclusions.

 

Theory and empirical work, from this perspective, cannot productively exist in isolation. We believe that it should become standard practice to demand clear implications of theory, and observations checking those implications, derived through a method that minimizes bias. We hope that Designing Social Inquiry helps to “discipline political science” in this way, as David Laitin recommends; and we hope, along with James Caporaso, that “improvements in measurement accuracy, theoretical specification, and research should yield a smaller range of allowable outcomes consistent with the predictions made” (1995: 459).

 

Our book also contains much specific advice, some of it new and some at least freshly stated. We explain how to distinguish systematic from nonsystematic components of phenomena under study and focus explicitly on trade-offs that may exist between the goals of unbiasedness and efficiency (KKV chap. 2). We discuss causality in relation to counterfactual analysis and what Paul Holland (1986) calls the “fundamental problem of causal inference” and consider possible complications introduced by thinking about causal mechanisms and multiple causality (KKV chap. 3). Our discussion of counterfactual reasoning is, we believe, consistent with Donald Campbell’s “quasi-experimental” emphasis (Campbell and Stanley 1963); and we thank James Caporaso for clarifying this.

 

We pay special attention in chapter 4 to issues of what to observe: how to avoid confusion about what constitutes a “case” and, especially, how to avoid or limit selection bias. We show that selection on values of explanatory variables does not introduce bias but that selection on values of dependent variables does so; and we offer advice to researchers who cannot avoid selecting on dependent variables.
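
Because the asymmetry in this claim is easy to get backwards, a small simulation may help. The sketch below is our illustration, not anything from KKV; the distributions, cutoffs, and sample size are arbitrary assumptions, chosen only to show the regression slope surviving selection on the explanatory variable but attenuating under selection on the dependent variable.

```python
# Minimal sketch (illustrative, not from KKV): truncating a sample on the
# explanatory variable x leaves the least-squares slope roughly unbiased,
# while truncating on the dependent variable y biases it toward zero.
import numpy as np

rng = np.random.default_rng(0)
n, true_beta = 100_000, 1.0            # arbitrary sample size and true slope
x = rng.normal(size=n)
y = true_beta * x + rng.normal(size=n)

def ols_slope(x, y):
    # Least-squares slope of y on x: cov(x, y) / var(x).
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

print(ols_slope(x, y))                # full sample: close to 1.0
print(ols_slope(x[x > 0], y[x > 0]))  # selection on x: still close to 1.0
print(ols_slope(x[y > 0], y[y > 0]))  # selection on y: well below 1.0 (biased)
```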

 

We go on in chapter 5 to show that while random measurement error in dependent variables does not bias causal inferences (although it does reduce efficiency), measurement error in explanatory variables biases results in predictable ways. We also develop procedures for correcting these biases even when measurement error is unavoidable. In that same chapter, we undertake a sustained analysis of endogeneity (i.e., when a designated “dependent variable” turns out to be causing what you thought was your “explanatory variable”) and omitted variable bias, as well as how to control research situations so as to mitigate these problems. In the final chapter, we specify ways to increase the information in qualitative studies that can be used to evaluate theories; we show how this can be accomplished without returning to the field for additional data collection. Throughout the book, we illustrate our propositions not only with hypothetical examples but with reference to some of the best contemporary research in political science.
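
For readers who want the algebra behind the measurement-error claim, the classical errors-in-variables result, stated here in generic textbook notation rather than KKV’s, makes the asymmetry explicit:

$$
y = \beta x^* + \varepsilon, \qquad x = x^* + u, \qquad
\operatorname{plim}\,\hat{\beta}_{\mathrm{OLS}}
= \beta\,\frac{\sigma_{x^*}^2}{\sigma_{x^*}^2 + \sigma_u^2} < \beta,
$$

so random error $u$ in the explanatory variable attenuates the estimated effect toward zero in a predictable way, while random error added to the dependent variable simply joins the disturbance term, costing efficiency but not unbiasedness.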

 

This statement of our purposes and fundamental arguments should put some of the reviewers’ complaints about omissions into context. Our book is about doing empirical research designed to evaluate theories and learn about the world, that is, to make inferences; it is not about generating theories to evaluate. We believe that researchers who understand how to evaluate a theory will generate better theories: theories that are not only more internally consistent but that also have more observable implications (are more at risk of being wrong) and are more consistent with prior evidence. If, as Laitin suggests, our single-mindedness in driving home this argument led us implicitly to downgrade the importance of such matters as concept formation and theory creation in political science, this was not our intention.

 

Designing Social Inquiry repeatedly emphasizes the attributes of good theory. How else to avoid omitted variable bias, choose causal effects to estimate, or derive observable implications? We did not offer much advice about what is often called the “irrational nature of discovery,” and we leave it to individual researchers to decide what theories they feel are worth evaluating. We do set forth some criteria for choosing theories to evaluate, in terms of their importance to social science and to the real world, but our methodological advice about research design applies to any type of theory. We come neither to praise nor to bury rational-choice theory, nor to make an argument in favor of deductive over inductive theory. All we ask is that whatever theory is chosen be evaluated by the same standards of inference. Ronald Rogowski’s favorite physicist, Richard Feynman, explains clearly how to evaluate a theory (which he refers to as a “guess”): “If it disagrees with [the empirical evidence], it is wrong. In that simple statement is the key to science. It does not make any difference how beautiful your guess is. It does not make any difference how smart you are, who made the guess, or what his name is; if it disagrees with [the empirical evidence] it is wrong. That is all there is to it” (1965: 156).

 

One last point about our goal: we want to set a high standard for research but not an impossible one. All interesting qualitative and quantitative research yields uncertain conclusions. We think that this fact ought not to be dispiriting to researchers but should rather caution us to be aware of this uncertainty, remind us to make the best use of data possible, and energize us to continue the struggle to improve our stock of valid inferences about the political world. We show that uncertain inferences are every bit as scientific as more certain ones so long as they are accompanied by honest statements of the degree of uncertainty entailed in each conclusion.

 

OUR ALLEGED ERRORS OF OMISSION

 

The major theme of what may seem to be the most serious criticism offered above is stated forcefully by Ronald Rogowski. He fears that “devout attention” to our criteria would “paralyze, rather than stimulate, scientific inquiry.” One of Rogowski’s arguments, echoed by Laitin, is that we are too obsessed with increasing the amount of information we can bring to bear on a theory and therefore fail to understand the value of case studies. The other major argument, made by both Rogowski and Collier, is that we are too critical of the practice of selecting observations according to values of the dependent variable and that we would thereby denigrate major work that engages in this practice. We consider these arguments in turn.

 

Science as a Collective Enterprise

 

Rogowski argues that we would reject several classic case studies in comparative politics. We think he misunderstands these studies and misses our distinction between a “single case” and a collection of observations. Consider two works that he mentions, The Politics of Accommodation, by Arend Lijphart (1975 [1968]), and The Nazi Seizure of Power, by William Sheridan Allen (1965). Good research designs are rarely executed by individual scholars isolated from prior researchers. As we say in our book, “A single observation can be useful for evaluating causal explanations if it is part of a research program. If there are other observations, perhaps gathered by other researchers, against which it can be compared, it is no longer a single observation” (KKV 211; see also sections 1.2.1 and 4.4.4, the latter devoted entirely to this point). Rogowski may have overlooked these passages. If we did not emphasize the point sufficiently, we are grateful for the opportunity to stress it here.

 

Lijphart: The Case Study That Broke the Pluralist Camel’s Back

 

What was once called pluralist theory by David Truman and others holds that divisions along religious and class lines make polities less able to resolve political arguments via peaceful means through democratic institutions. The specific causal hypothesis is that the existence of many cross-cutting cleavages increases the level of social peace and, thus, of stable, legitimate democratic government.

 

In The Politics of Accommodation, Arend Lijphart (1975 [1968]) sought to estimate this causal effect. In addition to prior literature, he had evidence from only one case, the Netherlands. He first found numerous observable implications of his descriptive hypothesis that the Netherlands had deep class and religious cleavages, relatively few of which were cross-cutting. Then, surprisingly from the perspective of pluralist theory, he found considerable evidence from many levels of analysis that the Netherlands was an especially stable and peaceful democratic nation. These descriptive inferences were valuable contributions to social science and important in and of themselves, but Lijphart also wished to study the broader causal question.

 

In isolation, a single study of the Netherlands, conducted only at the level of the nation at one point in time, cannot produce a valid estimate of the causal effect of cross-cutting cleavages on the degree of social peace in a nation. But Lijphart was not working in isolation. As part of a community of scholars, he had the benefit of Truman and others having collected many prior observations. By using this prior work, Lijphart could and did make a valid inference. Prior researchers had either focused only on countries with the same value of the explanatory variable (many cross-cutting cleavages) or selected cases on the basis of values of the dependent variable (high social conflict). Previous researchers therefore made invalid inferences. Lijphart measured social peace for the other value of the explanatory variable (few cross-cutting cleavages) and, by using his data in combination with that which came before, made a valid inference.

 

Lijphart’s classic study is consistent with our model of good research design. As he stressed repeatedly in his book, Lijphart was contributing to a large scholarly literature. As such, he was not trying to estimate a causal effect from a single observation; nor was he selecting on his dependent variable. Harvesting relevant information from others’ data, although often overlooked, may often be the best way to obtain it.

 

By ignoring the place of Lijphart’s book in the literature to which it was contributing, Rogowski is unable to recognize the nature of its contribution. Rogowski’s alternative explanation for the importance of this book and the others he mentions, namely that “(1) all of them tested, relied on, or proposed, clear and precise theories; and (2) all focused on anomalies” (95 this volume), suggests one of many possible strategies for choosing topics to research; but it is of almost no help with practical issues of research design or ascertaining whether a theory is right or wrong. Indeed, the only way to determine whether something is an anomaly in the first place is to follow a clear logic of scientific inference and theory evaluation, such as that provided in Designing Social Inquiry.

 

Allen: Distinguishing History from Social Science

 

The Nazi Seizure of Power is an account of life in an ordinary German community. Allen is not a social scientist: in his book, he proposes no generalization, evaluates no theory, and does not refer to the scholarly literatures on Nazi Germany; rather, he zeroes in on the story of what happened in one small place at a crucial moment in history, and he does so brilliantly. In our terms, he is describing historical detail and occasionally also conducting very limited descriptive inference. We emphasize the importance of such work: “Particular events such as the French Revolution or the Democratic Senate primary in Texas may be of intrinsic interest: they pique our curiosity, and if they were preconditions for subsequent events (such as the Napoleonic Wars or Johnson’s presidency) we may need to know about them to understand those later events” (KKV 36).

 

In our view, social science must go further than Allen. The social scientist must make descriptive or causal inferences, thus seeking explanation and generalization. Indeed, we think even Rogowski would not accept Allen’s classic work of history as a dissertation in political science. Allen’s work is, however, not irrelevant to the task of explanation and generalization that is of interest to us. In the hands of a good social scientist, who could place Allen’s work within an intellectual tradition, it becomes a single case study in the framework of many others. This, of course, suggests one traditional and important way in which social scientists can increase the amount of information they can bring to bear on a problem: read the descriptive case-study literature.

 

THE PERILS OF AVOIDING SELECTION BIAS

 

We agree with David Collier’s observation that, if our arguments concerning selection bias are sustained, then “a small improvement in methodological self-awareness can yield a large improvement in scholarship” (1995: 461). Indeed, because qualitative researchers generally have more control over the selection of their observations than over most other features of their research designs, selection is an especially important concern (a topic to which we devote most of our chapter 4).

 

Rogowski believes that we would criticize Peter Katzenstein’s (1985) Small States in World Markets or Robert Bates’s (1981) Markets and States in Tropical Africa as inadmissibly selecting on the dependent variable. We address each book in turn.

 

Katzenstein: Distinguishing Descriptive Inference from Causal Inference

 

Peter Katzenstein’s (1985) Small States in World Markets makes some important descriptive inferences. For example, Katzenstein shows that small European states responded flexibly and effectively to the economic challenges that they faced during the forty years after World War II; and he distinguishes between what he calls “liberal and social corporatism” as two patterns of response. But many of Katzenstein’s arguments also imply causal claims: that in Western Europe “small size has facilitated economic openness and democratic corporatism” (1985: 80), and that in the small European states, weak landed aristocracies, relatively strong urban sectors, and strong links between country and city led to cross-class compromise in the 1930s, creating the basis for postwar corporatism (1985: chap. 4).

 

Katzenstein seeks to test the first of these causal claims by comparing economic openness in small and large states (1985: 86, table 1). To evaluate the second hypothesis, he compares cross-class compromise in six small European states characterized by weak landed aristocracies and strong urban sectors, with the relative absence of such compromise in five large industrialized countries and Austria, which had different values on these explanatory variables. Much of his analysis follows the rules of scientific inference we discuss: selecting cases to vary the value of the explanatory variables, specifying the observable implications of theories, and seeking to determine whether the facts meet theoretical expectations.

 

But Katzenstein fudges the issue of causal inference by disavowing claims to causal validity: “Analyses like this one cannot meet the exacting standards of a social science test that asks for a distinction between necessary and sufficient conditions, a weighting of the relative importance of variables, and, if possible, a proof of causality” (1985: 138). However, estimating causal inferences does not require a “distinction between necessary and sufficient conditions, a weighting of the relative importance of variables,” or an absolute “proof” of anything. Katzenstein thus unnecessarily avoids causal language, and the explicit attention to the logic of inference that accompanies it. As we explain in our book, “avoiding causal language when causality is the real subject of investigation either renders the research irrelevant or permits it to remain undisciplined by the rules of scientific inference” (KKV 76).

 

Remaining inexplicit about causal inference makes some of Katzenstein’s claims ambiguous or unsupported. For example, his conclusion seems to argue that small states’ corporatist strategies are responsible for their postwar economic success. But because of the selection bias induced by his decision to study only successful cases, Katzenstein cannot rule out an important alternative causal hypothesis: that any of a variety of other factors accounts for this uniform pattern. For instance, the postwar international political economy may have been benign for small, developed countries in Europe. If so, corporatist strategies may have been unrelated to the degree of success experienced by small European states.

 

In the absence of variation in the strategies of his states, valid causal inferences about their effects remain elusive. Had Katzenstein been more attentive to the problems of causal inference that we discuss, he would have been able to claim causal validity in some limited instances, such as when he had variation in his explanatory and dependent variables (as in the 1930s analysis). More importantly, he would also have been able to improve his research design so that valid causal inferences were also possible in many other areas.

 

Rogowski is not correct in inferring that we would dismiss the significance of Small States in World Markets. Its descriptions are rich and fascinating, it elaborates insightful concepts such as liberal and social corporatism, and it provides some evidence for a few causal inferences. It is a fine book, but we believe that more explicit attention to the logic of inference could have made it even better.

 

Bates: How to Identify a Dependent Variable

 

Rogowski claims that Robert Bates’s purpose in Markets and States was to explain economic failure in tropical African states, and that by choosing only states with failed economies and low agricultural production, Bates biased his inferences. If agricultural production were Bates’s dependent variable, Rogowski would be correct, since (as we argue in Designing Social Inquiry; see also Collier 1995) using, but not correcting for, this type of case selection does bias inferences. However, low agricultural production was, in fact, not Bates’s dependent variable.

 

Bates’s book makes plain his two dependent variables: (1) the variations in public policies promulgated by African states and (2) differences in the group relations between the farmer and the state in each country. Both variables vary considerably across his cases. Bates also proposed several explanatory variables, which he derived from his preliminary descriptive inferences. These include (1) whether state marketing boards were founded by the producers or by alliances between government and trading interests, (2) whether urban or rural interests dominated the first postcolonial government, (3) the degree of governmental commitment to spending programs, (4) the availability of nonagricultural sources for governmental funds, and (5) whether the crops produced were for food or export. These explanatory variables do vary, and they helped account for the variations in public policy and state-farmer relations that Bates observed.

 

As such, Bates did not select his observations so they had a constant value for his dependent variable. Moreover, he did not stop at the national level of analysis, for which he had a small number of cases and relatively little information. Instead, he offered numerous observable implications of the effects of these explanatory variables at other levels of analysis within each country. As with many qualitative studies, Bates had a small number of cases but an immense amount of information. We believe one of the reasons Bates’s study is, and should be, so highly regarded is that it is an excellent example of a qualitative study that conforms to the rules of scientific inference. In sum, Rogowski says that Bates wrote an excellent book that we would reject. If the book were as Rogowski describes it, we very well might reject it. Since it is not, and indeed is a good example of our logic of research design, we join Rogowski in applauding it.

 

TRIANGULAR CONCLUSIONS

 

We conclude by emphasizing a point that is emphasized both in Designing Social Inquiry and in the reviews. We often suggest procedures that qualitative researchers can use to increase the amount of information they bring to bear on evaluating a theory. This is sometimes referred to as “increasing the number of observations.” As all our reviewers recognize, we do not expect researchers to increase the number of full-blown case studies in order to conduct a large-N statistical analysis: our point is not to make quantitative researchers out of qualitative researchers. In fact, most qualitative studies already contain a vast amount of information. Our point is that appropriately marshaling all the thick description and rich contextualization in a typical qualitative study to evaluate a specific theory or hypothesis can produce a very powerful research design. Our book demonstrates how to design research in order to collect the most useful qualitative data, and how to restructure it even after data collection is finished, to turn qualitative information into ways of evaluating a specific theory. We explain how researchers can do this by collecting more observations on their dependent variable, by observing the same variable in another context, or by observing another dependent variable that is an implication of the same theory. We also show how one can design theories to produce more observable implications that then put the theory at risk of being wrong more often and more easily.

 

This brings us to Sidney Tarrow’s suggestions for using the comparative advantages of both qualitative and quantitative researchers. Tarrow is interested specifically in how unsystematic and systematic variables and patterns interact, and seems to think that principles could be derived to determine what unsystematic events to examine. We think that this is an interesting question for any historically sensitive work. Many unsystematic, nonrepeated events occur, a few of which may alter the path of history in significant ways; and it would be useful to have criteria to determine how these events interact with systematic patterns. We expect that our discussions of scientific inference could help in identifying which apparently random, but critical, events to study in specific instances, and we are confident that our logic of inference will help determine whether these inferences are correct; Tarrow or others may be able to use the insights from qualitative researchers to specify them more clearly. We would look forward to a book or article that presented such criteria.

 

Another major point made by Tarrow is that all appropriate methods to study a question should be employed. We agree; a major theme of our book is that there is a single unified logic of inference. Hence it is possible effectively to combine different methods. However, the issue of triangulation that Tarrow so effectively raises is not the use of different logics or methods, as he argues, but the triangulation of diverse data sources trained on the same problem. Triangulation involves data collected at different places, sources, times, levels of analysis, or perspectives, data that might be quantitative, or might involve intensive interviews or thick historical description. The best method should be chosen for each data source. But more data are better. Triangulation, then, refers to the practice of increasing the amount of information brought to bear on a theory or hypothesis, and that is what our book is about.

------------------------

1. Editors’ note: This chapter is reprinted from the 1995 symposium on Designing Social Inquiry, published in the American Political Science Review. In this chapter, the authors respond to arguments developed in three additional articles in the APSR symposium that are reprinted in the present volume: those by Rogowski, Tarrow, and (reprinted in part) Collier. King, Keohane, and Verba likewise respond here to the two other articles in the symposium, by Laitin (1995) and Caporaso (1995), to which reference is made in the present volume, but which are not included here. The full original citation for this chapter is Gary King, Robert O. Keohane, and Sidney Verba (1995) “The Importance of Research Design in Political Science.” American Political Science Review 89, no. 2 (June): 475–81. The table of contents, preface, and chapter 1 of Designing Social Inquiry are available at pup.princeton.edu/titles/5458.html.

2. To clarify further, we note that the definition of an “experiment” is investigator control over the assignment of values of explanatory variables to subjects. Caporaso emphasizes also the value of random assignment, which is desirable in some situations (but not in others; see KKV 124–28) and sometimes achievable in experiments. (Random selection and a large number of units are also desirable and also necessary for relatively automatic unbiased inferences, but experimenters are rarely able to accomplish either.) A “quasi-experiment” is an observational study with an exogenous explanatory variable that the investigator does not control. Thus, it is not an experiment. Campbell’s choice of the word “quasi-experiment” reflected his insight that observational studies follow the same logic of inference as experiments. Thus, we obviously agree with Campbell’s and Caporaso’s emphases and ideas and only pointed out that the word “quasi-experiment” adds another word to our lexicon with no additional content. It is a fine idea, much of which we have adopted; but it is an unnecessary category.

3. Telling researchers to “choose better theories” is not much different from telling them to choose the right answer: it is correct but not helpful. Many believe that deriving rules for theory creation is impossible (e.g., Popper, Feynman), but we see no compelling justification for this absolutist claim. As David Laitin correctly emphasizes, “the development of formal criteria for such an endeavor is consistent with the authors’ goals.”

4. Lijphart also went to great lengths to clarify the precise theory he was investigating, because it was widely recognized that the concept of pluralism was often used in conflicting ways, none clear or concrete enough to be called a theory. Ronald Rogowski’s description of pluralism as a “powerful, deductive, internally consistent theory” (97 this volume) is surely the first time it has received such accolades.

5. Selection problems are easily misunderstood. For example, Caporaso claims that “if selection biases operate independently of one’s hypothesized causal variable, it is a threat to internal validity; if these same selection factors interact with the causal variable, it is a threat to external validity” (1995: 460). To see that this claim is false, note, as Collier reemphasizes, that Caporaso’s “selection factors” can also be seen as an omitted variable. But omitted variables cannot cause bias if they are independent of your key causal variable. Thus, although the distinction between internal and external validity is often useful, it is not relevant to selection bias in the way Caporaso describes.
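
The footnote’s point can be checked against the standard omitted-variable-bias formula, given here in generic textbook notation (not the symposium’s own). If the true model is $y = \beta x + \gamma z + \varepsilon$ and $z$ is omitted from the regression, then

$$
\operatorname{plim}\,\hat{\beta}_{\mathrm{OLS}}
= \beta + \gamma\,\frac{\operatorname{Cov}(x, z)}{\operatorname{Var}(x)},
$$

so the bias term vanishes whenever the omitted selection factor $z$ is uncorrelated with the causal variable $x$, exactly as argued above.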

6. Subsequently, Bates pursued the same research program. For example, in Essays on the Political Economy of Rural Africa he evaluated his thesis for two additional areas, colonial Ghana and Kenya (1983: chap. 3). So Bates did exactly what we recommend: having developed his theory in one domain, he extracted its observable implications and moved to other domains to see whether he would observe what the theory led him to expect.

 
