The task of social science is to provide true causal explanations of our most pressing social problems. And when social science works this way—taking into account competing hypotheses about the causes of our social problems, and testing them against all the evidence that we are able to garner from research and experiment—it can be very, very good. But even good social science cannot, and should not, determine public policy in and of itself. For even when it reveals the true causes of our social problems, the information that it provides is only one component of what goes into making public policy decisions. What we should do about these problems is always a separate question whose answer will, and should, be determined by a mix of the competing concerns, beliefs, goals, and interests that we collectively have about them. Good social science can inform our decisions about these matters, but it cannot make them for us. This is the way things are and, I think, the way things ought to be.
Social science can, in this sense, be driven by public policy. For our most pressing public policy concerns can obviously set the agenda for research in the social sciences. They can tell us which social problems we should study, and what kinds of social science research the public should support. But there are, all too obviously, other ways in which social science research can be driven by public policy that are not so good, that make it difficult for us to find true causal explanations, that may actually be bad—and, in some cases, may even be ugly.
For despite all that talk about consensus, anyone who has been closely involved with science knows that there are, on any day of the week, competent and well-credentialed scientists who disagree sharply about the causes of our most pressing social problems. These disagreements may track our policy disagreements about what we should do about them. But they are not always, or even often, the result of those policy disagreements. One way in which social science research can go wrong is when its financial sponsors, be they government or private agencies, choose to fund only those social scientists who they know are predisposed toward causal explanations that support their own public policy goals and agendas. Such funding decisions may not always result in bad social science—or in false causal explanations instead of true ones—but their potential for such results should be clear.
The panelists in my IF Science Project recognized this potential. One of the conceptual possibilities they developed suggested that it has become so great that we should now treat scientists more like lawyers than objective and disinterested researchers. They said that even our best scientists are likely to produce theories in accord with their biases, and that even the best-intentioned of our increasingly politicized funding agencies are likely to sponsor their work for this reason. We should thus actually expect our scientists to make the best case for their sponsors’ policy goals and agendas, instead of continuing to treat their theories and recommendations as the product of objective and disinterested research.
This would be fine—so long as the funding agencies were willing to support scientists with a wide range of different biases and allow their research to fight it out in the critical give and take of scientific debate.
This would be the more traditional scientific approach. But another way in which social science can and does go wrong is when social scientists deliberately ignore hypotheses and evidence that run contrary to their sponsors’ policy goals and agendas. This is not so much the result of their natural biases as of their deliberate suppression of undesired hypotheses and evidence. It is a betrayal of science, of the scientific method, and of the faith and money that the public invests in it. And when public policy drives social science in this way, it can be very, very ugly.