Institutional Review Boards (IRBs) have become a ubiquitous presence on the landscape of America's higher education system. IRBs initially arose in response to major breaches of research ethics in biomedical research, such as the infamous Tuskegee experiments, which led to major physical harm to research subjects. To prevent the recurrence of similar debacles, regulators came to require preapproval by an independent review panel of any research that posed a risk of substantial harm to research subjects. And thus the IRB was born. Over time the jurisdiction of IRBs has expanded beyond this traditional focus on biomedical research to cover myriad fields of research, including social science research that involves human subjects. Since then, complaints about the operation of IRBs have spread almost as rapidly as their jurisdiction. IRBs have been widely criticized as wasteful, obstructionist, unresponsive to researchers' needs, and, in the end, ineffective at protecting research safety and ethics.
Born from sound motives and administered by earnest and increasingly professional administrators, IRBs today are a source of delay and frustration with little to show for the costs that they impose. One is thus led to ask, "Why is it that the smart and conscientious people on IRBs are so prone to making such poor decisions?" As a recovering government bureaucrat myself,1 I hope I can provide some insight into governmental and other bureaucracies, such as academic bureaucracies. And as an economist, I know that when obviously competent people consistently produce suboptimal outcomes, the first places to look are the incentives and institutions that govern their decision-making. The thesis of this article is that IRBs are fundamentally bureaucracies, and that this bureaucratic structure explains much of their frequent suboptimal decision-making. The poor performance of IRBs is thus not a consequence of the individuals who compose them, but rather a reflection of their bureaucratic nature. The bureaucratic nature of IRBs appears to do nothing to improve the decisions that they make, while being the source of many of their problems. Governmental bureaucracies have been much studied in recent decades, and there is a well-developed literature discussing the promise and pathologies of governmental bureaucracies as well as their resistance to reform.2 Legions of business experts have dedicated themselves to improving the operation of corporate bureaucracies. Academic bureaucracies, by contrast, have been little studied. In a recent book, Ryan Amacher and Roger Meiners suggest that academics are reluctant to turn their examination to their own nests. They observe, "Little scholarly work has been done about the problems of universities as bureaucracies.
Given the mass of academic publications, this is a bit odd, but it is difficult to be objective about things in which we are intimately involved."3 They speculate that it may be "easier to think about how the highway department works or the incentives in the Pentagon than it is to think about our own bailiwick."4 Yet "[c]olleges are no less bureaucracies than are highway departments and local public schools."5 This Article will begin to remedy this oversight by sketching a theory of academic bureaucracy while also showing the ways in which the bureaucratic nature of IRBs structures their decision-making outcomes. It is the thesis of this Article that the frequent suboptimal decision-making outcomes of IRBs result from the institutions and incentives that have been created within universities and IRBs. Understanding the problems that hamper IRB effectiveness and how to reform them requires an understanding of their underlying bureaucratic structure. Borrowing from the literature on the operation of governmental bureaucracy, this Article will take a first step toward modeling academic bureaucracies with an eye toward improving IRB operation. This is, of course, merely a first step, but it is hoped that future scholars will take up the task of a more detailed examination of the bureaucratic structure of IRBs. Perhaps there are some situations where IRBs add value to the research process by providing protection to human subjects that could not be provided by a less cumbersome and expensive regulatory system. If such examples exist, however, these beneficial outcomes almost certainly come despite, rather than because of, the bureaucratic structures that will be discussed here. When IRBs function well, it is likely because of the ability and sound judgment of their members rather than their bureaucracy.
Where they function poorly, it often is attributable to the pathologies of academic bureaucracies that distract well-meaning and able IRB professionals from their primary purpose. The IRB bureaucracy and rules add little individual value to overall decision-making outcomes, although the bureaucratic elements of IRBs may multiply the overall negative effects caused by incompetent or bad-faith IRB members. As will be seen, IRBs are essentially "lawless" institutions, subject to few rules constraining their discretion, little outside oversight and regulation, and dependent on their own self-imposed judgment to act appropriately. At best, this system of self-regulation works tolerably well, but rarely does it perform very well, and quite frequently it malfunctions partially or completely. In their certainty that they are a necessary institutional check on the otherwise self-interested incentives of researchers, IRBs and university leaders ignore the fact that IRB members themselves also have self-interested incentives and thus may create their own distinct abuses. And in insisting that professional self-regulation is an inadequate check on researchers' self-interest, IRBs have constructed and implemented a regime dependent purely on their own standards of self-regulation and self-restraint. IRBs have come to see themselves as a necessary check to protect human subjects from the predations of researchers, but in so doing they have shed any checks and balances on their own power and in many cases have come to glorify their own narrow agenda at the expense of researchers' needs. A proper account of IRB functioning is a tale of two bureaucracies--federal bureaucracy in the form of the basic mandates established by the Office for Human Research Protections (OHRP) of the Department of Health and Human Services, and university bureaucracy in the decisions and actions of university administrators.
The general rules established by OHRP are very broad, however, and leave most discretion in the operation of IRBs to each university. Governmental bureaucracies, especially OHRP and HHS, do play some direct role in the pathologies of IRBs, mainly through HHS's problematic relationship with the free speech protections of the First Amendment. For purposes of this Article, however, governmental bureaucracies are more important analytically and analogically, as the theory of governmental bureaucracy is much better established than the theory of academic bureaucracy and can thus provide a measure of guidance in modeling the behavior of academic bureaucracies such as IRBs. Understanding the bureaucratic nature and internal dynamics of IRBs is a necessary first step toward their reform. Merely documenting the inefficiencies of IRB bureaucracies and calling for reform will do little to improve matters without a coherent plan for reform that takes account of how we have reached the current state of affairs. The remainder of this Article thus proceeds as follows. Part I briefly reviews the performance of IRBs, specifying their objectives and their failure to accomplish them efficiently. Part II summarizes the insights of scholars who have analyzed governmental bureaucracies, focusing particularly on those aspects of the institutions of governmental bureaucracy that are relevant to the subsequent discussion of academic bureaucracies. Part III provides a model of academic bureaucracies, focusing on the incentives of the individual actors who operate within these institutions and the external constraints imposed upon them. The discussion in Part III will suggest that university IRBs have come to adopt many of the worst characteristics of governmental bureaucracies--risk aversion, tunnel vision, a failure to consider the full costs of their institutional regime, and an indifference to values outside their purview, such as the precepts of the First Amendment.

I. COSTS, BENEFITS AND IRB DECISION-MAKING

The social value of regulation, whether by an IRB or a governmental regulator, can be measured as the social benefit minus the total costs of the regulatory system, including error costs and administrative costs. Error costs arise from the system's failure to sort proposals accurately, and come in two varieties: "Type I" and "Type II" errors, or false positives and false negatives. A Type I error, or "false positive," is the rejection of a research proposal that should have been accepted; a Type II error, or "false negative," is the acceptance of a proposal that should have been rejected. Error costs are minimized by jointly minimizing the costs of Type I and Type II errors, as measured by their frequency and the severity of the harm that results from their occurrence. Administrative costs refer to the simple transaction costs of operating the system, including paperwork burdens and the opportunity cost of researchers' and administrators' time spent shepherding a research project through the IRB approval process. Increased marginal administrative costs are socially justifiable only to the extent that they improve substantive IRB decision-making outcomes, e.g., the minimization of error costs. On the other hand, marginal administrative costs that do not improve IRB decision-making are simply unnecessary deadweight economic loss and should be avoided if possible.
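The framework just described can be put in toy arithmetic. The sketch below uses entirely hypothetical numbers (the error rates, harm magnitudes, and administrative costs are invented for illustration, not drawn from any study); the point is only that the socially optimal regime minimizes the sum of error costs and administrative costs, so a stricter review regime can be worse overall despite committing fewer Type II errors.

```python
# Toy illustration of the error-cost framework. All figures are
# hypothetical; the aim is to show that total social cost, not any
# single component, is the relevant measure of a review regime.

def total_social_cost(type1_rate, type1_harm, type2_rate, type2_harm, admin_cost):
    """Expected cost of a review regime, per proposal reviewed."""
    error_cost = type1_rate * type1_harm + type2_rate * type2_harm
    return error_cost + admin_cost

# A lax regime: cheap to run, but it lets more dangerous research through.
lax = total_social_cost(type1_rate=0.01, type1_harm=100,
                        type2_rate=0.10, type2_harm=500, admin_cost=5)

# A strict regime: fewer false negatives, but many false positives
# (beneficial research blocked or delayed) and heavy paperwork.
strict = total_social_cost(type1_rate=0.20, type1_harm=100,
                           type2_rate=0.01, type2_harm=500, admin_cost=50)

print(lax, strict)  # 56.0 75.0 -- here the stricter regime is costlier overall
```

With these invented numbers, the strict regime nearly eliminates Type II errors yet still imposes a higher total social cost, because its added false positives and paperwork outweigh the harm it averts.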

First, consider error costs. In the context of IRB decision-making, a Type I error occurs when a proposed project by a researcher is incorrectly rejected, unduly delayed, or detrimentally modified by an IRB that improperly overstates the risk to human subjects from the research. Thus, if the risk to human subjects is incorrectly perceived to be higher than is actually the case, or the restrictions imposed by the IRB on the researcher are more severe or costly than necessary to protect the participants, this constitutes a false positive by the IRB. A Type II error, or false negative, occurs when dangerous research is permitted to go forward without appropriate protections for human subjects or, in extreme cases where no protection would suffice, is permitted to go forward at all. In addition to the frequency of errors, the severity of harm caused by either type of error is relevant as well. False negatives can lead to harm to subjects by permitting dangerous research to occur. But false positives also lead to harm by delaying potentially life-saving or other beneficial treatments, increasing their costs, or even preventing their development. The objective is to minimize the total costs of Type I and Type II errors. A second component of costs is administrative expense, including applications and paperwork, IRB inspection and verification, and, when necessary, the cost of coordinating more than one IRB and researcher, all of which are required by the complex institutional nature of university IRBs. At the margin, higher administrative costs are justified if the investment improves the accuracy of the system, reducing Type I and Type II errors by more than the increase in administrative costs. It may be possible to devise an error-free regulatory regime, but perfect results can come only at a prohibitive cost in terms of lost research time, administrators' salaries, legal fees, and the like.
In such situations, the optimal regulatory regime may be one that permits some degree of error at the margin if the costs of this small error are justified by a savings in administrative costs. Similarly, it might be possible for society to expend substantially more resources to pursue the goal of perfection in the criminal justice system in terms of distinguishing innocent from guilty defendants through more extended trials, more exhaustive appeals processes, and the like. Nonetheless, it seems fairly obvious that it would be unwise from a social perspective to seek "perfect justice" in every case of jaywalking or illegal parking, as the delay, congestion of the court system, and opportunity cost of judges, lawyers, jurors, and the like would not justify pursuit of the marginal gains. On the other hand, society could decide the guilt or innocence of a given defendant by the toss of a coin. Such a regime would have minimal administrative costs, but the error costs of this system, convicting many innocents and acquitting many perpetrators, would be intolerable. For purposes of this Article, it will be assumed that the goal of the regulatory regimes in question is to maximize the net social benefits of the governance of academic research by minimizing the sum of the costs of Type I and Type II errors and administrative costs. "Cost," as used here, is defined broadly to include the opportunity cost of the researcher's productive time and energy spent seeking regulatory approval, the opportunity cost of IRB administrators' and members' time spent reviewing applications, as well as the lost value of socially beneficial research deterred, prevented, or reduced in value by the intervention of IRBs.
Virtually all commentators who have studied the IRB system as it currently operates, including those at this Symposium, believe that the costs of the system as it operates today vastly exceed the benefits, especially when the opportunity cost of researchers' and IRB panelists' time is taken into account. There is little hard evidence that IRBs, as they are currently composed, create more than trivial amounts of public value in terms of reducing the risk of dangerous or unethical research, or that less burdensome alternatives could not perform the same functions more efficiently. There are few documented cases of IRBs preventing seriously dangerous research projects from being conducted, and of those cases where IRBs have prevented real harm, it is possible that the harm might still have been averted by a less expensive and less restrictive regulatory system. By contrast, there are several well-known examples of IRB lapses that permitted dangerous research to occur, notwithstanding compliance with the onerous IRB process. At the same time, there are many examples of innocent researchers caught in the Kafkaesque world of IRB procedures.6 Consider each set of costs in greater detail. The administrative cost to researchers of simply shepherding a research proposal through IRB approval seems unduly high in light of the actual benefits produced by this review system. Indeed, I have been unable to locate any empirical evidence that even suggests that the IRB review process as it currently operates improves the accuracy of review, that is, reduces the frequency of Type I and Type II errors. Critics of IRBs have noted that completion of forms has become something of an obsession with many IRBs, often to the detriment of their central mission of protecting subjects from harm.
The Center for Advanced Study at the University of Illinois observes that "IRB's blizzard of paperwork is getting in the way of their fundamental mission: to protect the dignity and well being of human subjects."7 The authors further note that "priorities have shifted from fundamental ethical questions to regulatory documentation. Money, the time of researchers and IRB personnel, and good faith are squandered when IRBs require researchers to comply with policies that are seen as window dressing and busy work."8 The most common revision IRBs require of research proposals is the modification of consent forms.9 Illustrating the imposition of seemingly senseless costs, they report that the IRB at the University of Illinois required undergraduates to obtain consent forms before performing assigned practice interviews with their families.10 An editorial in Nature notes the effect of these priorities: "[R]esearchers and institutions alike are drowning in paperwork. Sadly, much of the documentation bears little relation to the realities of research, for it accomplishes little beyond slowing down legitimate experiments and making them harder to administer."11 The Editors conclude, "The work still goes on--just more slowly, and at greater cost in time and money."12 The paperwork burdens of seeking approval from the IRB at the outset are substantial, requiring numerous forms and often repeated interactions with IRB members to gain approval of research questions and protocols. One study found that the median amount spent by academic medical centers operating IRBs was almost $750,000 annually.13 IRB staff dedicated more time to general administration (29%) than on any other single set of tasks, including review and approval of research protocols (27% of their time) or monitoring of compliance (12%).14 At the same time, there appear to be few benefits to these bureaucratic hurdles and paperwork obligations in terms of reducing the frequency or magnitude of harm. 
Although IRBs frequently require and suggest wording changes in consent forms, studies find that the changes often make the forms less understandable, not more.15 One study found that consent forms undergoing IRB approval garnered a median of 46.5 changes (ranging from 3 to 160) in their language, most of which were alterations in wording without effect on meaning.16 Over 11% of the changes introduced an error into the form; and of the forms approved, 80% had at least one error and two-thirds had an error in the description of the research protocol or in a required consent form element.17 Of the errors in the approved forms, 27.4% were substantive, such as deletions of significant side effects, major errors in the description of study procedures, or the complete removal of a required section of the consent form (such as the right to withdraw from the study).18 Moreover, as a result of the IRB process, consent forms became less readable: the language grew more complicated and technical, leaving people with lower reading levels unable to understand the forms' terms. Over time, the minimum reading level required to understand the forms increased by almost one year.19 Given the attention paid to the wording of consent forms, it is ironic that IRBs appear not to conduct any follow-up to determine whether the consent forms accomplish the stated goal of accurately measuring the subject's consent.20 This lack of accurate feedback about the marginal costs imposed by the IRB process can result in substantial costs with few offsetting benefits.
Burris and Moss observe that IRBs "have no reliable way of knowing whether changes that appear sensible within the members' frame of reference are feasible or reasonable to implement."21 Some 88% of the wording changes required by IRBs, for instance, merely change the language, not the meaning of the words used, and many others actually make the forms less readable and introduce errors.22 But even a merely cosmetic wording change requires researcher and staff time. Burris and Moss observe that "small" changes or "simple" reporting requirements imposed by an IRB "may actually entail a great deal of time and trouble [and] retooling or invention by investigators to deal with a very minor risk."23 For instance, in one case of international research, hospitals in the foreign country where the research was to take place had a well-established "opt-out" protocol for permitting postmortems--i.e., upon being admitted, a patient was presumed to permit a postmortem unless he opted out.24 The American IRB, however, demanded written opt-in consent for a postmortem, even though there was no evidence that the opt-out regime inadequately protected the patients' consent. Apparently considered a "minor" change by an American IRB, this new requirement added dramatically to the cost and complexity of the project with little offsetting benefit. As Burris and Moss conclude:

Sometimes changes are necessary but there is no threshold of importance specified for [IRB] intervention, and no ready means for researchers to contest problematic requirements. Whether the required change concerns a research topic, consent forms, or something else, consideration should be given to the question of whether [IRBs] could more frequently distinguish between desirable or advisable changes and those that are truly indispensable.25

Because IRBs make no effort to measure either the costs or benefits of their process, they have no idea whether their required changes are cost-justified or even effective.26 In addition, co-authored research generally requires redundant approval from the IRB at every institution with which the researchers are affiliated. Thus, whenever one researcher's IRB changes something, "even if it was trivial," all of the researchers have to go back and get the rest of the IRBs to agree.27 Another researcher observed, "If one IRB makes a change then the other has to make the same change and so you have to go through a revolving door and the process can take up to a year."28 Where researchers from multiple institutions collaborate, and thus multiple IRBs are involved, the time for procuring approval may rise from months to over a year. In fact, at some institutions IRBs are divided into subject-matter specializations, requiring researchers from within the institution itself to garner approval from multiple IRBs. A study of a single multi-center medical study involving nine IRBs found that 17% of the total research budget was spent procuring approval from the multiple IRBs asserting jurisdiction over the study, and that eighteen months were required for approval.29 Each IRB mandated multiple edits in the formatting and wording of the consent and survey forms, and each change required a redundant round of approvals from the other IRBs involved. In the end, the substance of the protocol was left largely unchanged and the required wording changes had no real effect on the protection of the subjects involved.30 Moreover, field research is dynamic, not static, so any change in research protocols as part of the ongoing research requires a new battery of paperwork and review. Although this duplication imposes substantial costs on researchers, there is no evidence that requiring review of the same protocol by multiple IRBs meaningfully adds to safety or provides other benefits.
From a social welfare perspective, it is utterly arbitrary that the number of panels reviewing a given research protocol depends on the number of researchers and institutions involved. If multiple reviews of a particular protocol by multiple independent IRBs actually added value, then they should be required even for single-author, single-institution research. At the very least, the number of independent panels reviewing a given research protocol should reflect the relative riskiness of the project, not the coincidence of how many researchers and institutions are associated on the project. Instead, some potentially risky research is reviewed only once, while other low-risk protocols are reviewed redundantly. It is hard to find any explanation for this illogical allocation of resources other than the reflexive bureaucratic mandate that each institution have its own IRB that reviews research proposals by researchers affiliated with that institution. Many of the larger costs of IRB overreach may be difficult to notice or measure, because they consist of the research that is never even proposed because of the researcher's fear of the cost and delay associated with seeking IRB review--i.e., "the paths not taken." "These may include," observe Bledsoe and Sherin,
the undergraduates diverted from original research toward a library thesis, the research grants not written, the decision to devote a career to studying union workers rather than children, the decision to avoid majoring in a field in which an honors thesis can be disseminated without IRB review, the choice of a graduate field that will not require IRB oversight.31

Journalism professor Margaret Blanchard, for instance, stated that she planned to switch from studying twentieth century media history to nineteenth century media history because "you do not have to worry about the IRB when you work in the nineteenth century."32 "A better formula for demoralizing graduate students and faculty members could not be imagined," Blanchard concludes. "A better formula for stultifying research is beyond contemplation. That formula is today in place, thanks to the IRB."33 Researchers report that the particularly onerous obligations imposed by IRBs on inter-institution collaborations have led some researchers to cease collaborating with colleagues at other institutions.34 Others have even refused to collaborate with other researchers within the same institution where doing so would require approval by multiple IRBs, such as at institutions that have multiple specialized IRBs for different subject matters.35

In addition, as the scope of IRB review has increased, a growing number of faculty members have been dragooned into serving as IRB reviewers, essentially as glorified volunteers, distracting them from their research as well.36 IRB members generally consider their participation on the IRB to be "onerous" and "a thankless task."37 They are poorly compensated for their time and risk the wrath of their colleagues for participating in a bureaucratic enterprise with little evident benefit.38 One study conducted during the early 1980s estimated the total cost of the IRB process as running into the tens of millions of dollars annually, including the opportunity cost to researchers and IRB members of reviewing and meeting on proposals.39 A recent study found that researchers spent a median of thirty hours to complete the IRB approval process for a given application and that the median approval time exceeded three months.40 In short, IRB "review is no regulatory free lunch."41 Moreover, the paperwork burden seems to be growing rapidly; as one expert observes, routine surveys and demographic analyses are being challenged more aggressively and "IRBs are making a fetish of consent forms."42 Perhaps there are situations where IRB intervention has prevented disaster or otherwise improved the ethics of a given research program--although documented cases are few, especially in the social sciences.43 But there is widespread agreement that the current IRB system is deeply dysfunctional and that its costs substantially exceed its benefits, especially when compared to alternative regulatory regimes that could generate all or substantially all of the benefits of the IRB regime at much lower cost. Indeed, I have not located any disinterested empirical study that suggests that the benefits of IRBs, as currently operating, exceed the costs that they impose.
Many of the criticisms of the overly bureaucratized IRB system have been present for some time, yet the problem seems to be worsening rather than improving, leading to skepticism that the system will right itself of its own accord in the near future.

II. GOVERNMENTAL BUREAUCRACIES

For many years economists and other researchers implicitly treated a governmental bureaucracy essentially as a "black box" institution mechanistically seeking to maximize social welfare. Typically, it was assumed that the observation of a market failure or some other social imperfection could be cured at low cost through the intervention of unbiased and public-spirited governmental experts. Researchers rarely looked into what went on inside the bureaucracy, or the incentives and behavior of those making decisions within it. Over time, however, it was recognized that this view of governmental bureaucracy was overly simple and failed to accurately model and predict governmental behavior. As a result, scholars devised more realistic models of governmental behavior rooted in an examination of the incentives and institutions that provide the opportunities and constraints on individual governmental regulators. This Part will focus on four attributes of governmental bureaucracies that appear to be relevant in understanding IRB behavior: (1) "empire building" tendencies, (2) undue risk aversion, (3) poor ability to measure the marginal costs and benefits of actions, including discounting externalized costs, and (4) selection bias and tunnel vision on regulatory mission. Each of these four factors contributes to poor decision-making in governmental bureaucracies and may explain similar problems in academic bureaucracies.

A. "Empire-Building"

Modern analysis of governmental bureaucracy begins with the assumption that individuals who serve as governmental regulators are motivated by the same fundamental impulses as all people, including those who labor in the private sector.44 Economists assume that the primary motivation for human action, whether in the governmental or private sector, is the pursuit of rational self-interest. In practice this impulse may take many forms. In some instances it may be the pursuit of narrow financial self-interest, the stereotyped vision of homo economicus often associated with neoclassical economics. But it is neither necessary nor accurate to define rational self-interest so narrowly. In other situations, the impulse may lead a person to seek leisure rather than pecuniary income, for instance, if a particular job permits long lunches, "goofing off," or work-free weekends when compared to a higher-paying alternative. Others may manifest the impulse toward rational self-interest through the pursuit of power or prestige (such as politicians or federal judges), or job security (such as tenured professors) as opposed to maximizing their income. Whatever its expression, however, it is assumed that the fundamental motivation of an individual working as a governmental regulator is the same in essence as that of a person working outside government. Given this assumption of the constancy of human nature, modern models of governmental regulation no longer posit intervention by public-spirited experts acting through a "black box" process. Instead, regulators' decisions as to when and how to intervene are understood to result from the interplay between the pursuit of their own self-interests and the constraints and incentives created by the institutional framework in which they operate.
One model of regulation views governmental actors as "empire builders," seeking to maximize the power of their agencies and the size of their agency budgets.45 The economic theory of bureaucracy posits that those who manage governmental agencies will seek to maximize agency power, influence, and budget size as a means to the end of increasing the individual power, prestige, income, and security of the agency's decision-makers.46 It is not necessary for current purposes to distinguish among the more fine-grained questions of which motivation is most prominent--for example, income, prestige, power, or job security. All of these individual motivations will be grouped under the general heading of bureaucratic "empire-building," meaning a desire to maximize the influence and prestige of the individual regulator through increasing the power, jurisdiction, and budget of the regulatory agency itself.47 This is not to say that government bureaucrats consciously seek to engage in empire-building or that they are motivated solely and consciously by a desire to maximize the size of their agency in order to increase their personal power, prestige, and salary. Some do--at least, based on my experience as a senior governmental employee.48 But it is not necessary to assume that these empire-building tendencies are conscious to accept that the phenomenon occurs. Rather, it is necessary only to assume that bureaucrats are at heart the same as all other human beings, no better and no worse, and that regardless of their individual vision of the good life, they seek more wealth and power rather than less, as more resources are a valuable means to accomplish whatever ends they seek.49 Moreover, governmental officials generally believe that they are advancing the public good by increasing the authority and responsibility of their agency, with an increase in the budget and power of the agency as an accompanying consequence.
Governmental officials genuinely believe, or come to believe, in the inherent necessity of the vigorous assertion of agency power to protect the public, assuming that "if we don't do it no one will" even when this assumption is factually wrong. This suggests that there is a natural tendency for bureaucracies to grow over time as regulators assert jurisdiction over new areas of activity thought to need greater oversight. At the same time, there seems to be no countervailing tendency for bureaucracies to shrink as their responsibilities become obsolete, and when they do reduce their regulatory scope it is usually under external pressure and with no small degree of resistance.50 Thus, even if the proximate motivation is not individual self-interest, agency officials nonetheless will appear to act as if self-interested. There need not be a conscious effort to engage in empire-building, but simply the ubiquitous human tendency to prefer more to less and to seek to increase the material well-being of oneself and loved ones.

B. Undue Risk Aversion

A second characteristic of bureaucratic decision-making is a tendency toward inefficient risk aversion.51 Although Type I and Type II errors are equally problematic from a social perspective, as noted, individual governmental regulators tend to confront asymmetric costs between these two types of errors. Consider the United States Food and Drug Administration. If the FDA approves a drug prematurely or erroneously, then those who take the drug may be harmed or killed. But if the FDA regulatory approval process unduly delays or deters a life-saving drug from being marketed, this too will result in harm by permitting otherwise preventable harm to occur. From the perspective of social welfare, regulators should be risk-neutral and should weigh the opportunity costs of delayed approval in preventing suffering or saving lives equally with the costs to those who might be injured by premature approval of a drug. 
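The asymmetry at work here can be stated compactly. The following stylized notation is an illustrative sketch of my own, not drawn from the underlying literature:

```latex
% Social loss from drug regulation weighs both error types equally:
%   C_I  = expected harm from a Type I error (approving a harmful drug)
%   C_II = expected harm from a Type II error (delaying a beneficial drug)
L_{\text{social}} = C_I + C_{II}

% The individual regulator, by contrast, bears asymmetric private
% penalties k_I and k_II for the two error types. Because Type I harms
% are visible and identifiable while Type II harms fall on statistical,
% unidentified victims, the regulator effectively minimizes
L_{\text{private}} = k_I \, C_I + k_{II} \, C_{II}, \qquad k_I \gg k_{II}
```

On this sketch, minimizing the private loss rather than the social loss leads the regulator to overweight the risk of premature approval relative to the risk of harmful delay--the inefficient risk aversion the text describes.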
From the perspective of the individual administrator, however, the costs of erroneously approving a harmful drug will be much higher in terms of criticism, bad publicity, and adverse career consequences. Because those injured by dangerous drugs are tangible and identifiable, whereas those who would have benefited from swifter drug approval are abstract and unidentifiable, a decision-maker is likely to suffer greater costs from the former errors than the latter. Thus, even though the social costs of Type I and Type II errors are conceptually identical, the private cost to the regulator of approving a drug that harms identifiable people is higher than the private cost of delaying helpful drugs, which harms largely unknown people. As a result, bureaucrats can be expected to be inefficiently risk averse in their decision-making. Empirical studies tend to support this hypothesis.52

C. Marginalism and Cost Externalization

Third, bureaucracies will tend to have a poor understanding of the marginal costs and marginal benefits of their regulations. In the private market, firms seek to set the supply of a good at the point where the marginal revenue generated by the sale of an additional unit is equal to the marginal cost of manufacturing that unit. Similarly, consumers purchase goods or engage in other activities so long as the marginal utility gained from purchasing an additional unit exceeds the marginal opportunity cost of doing so, in terms of money and time. Prices send signals to producers and consumers about the relative scarcity of goods and thus provide incentives to conserve on relatively more-expensive goods and services.53 Thus, if I eat one hamburger, in deciding whether to order a second hamburger I will weigh the marginal benefit to me of the hamburger (the enjoyment and nutrition I receive) against the marginal cost (money, time, and calories). 
The restaurant makes a similar calculation in deciding how to price its hamburgers so as to maximize the marginal net revenue from selling hamburgers. Regulators, by contrast, face no comparable price signals. They will also tend to underestimate the costs of their regulations because so many of the administrative and compliance costs are borne by private parties rather than by the regulatory agency itself. Efforts have been made to establish proxies to enable regulators to calculate the full marginal costs and marginal benefits of regulations, such as cost-benefit analysis, or the Paperwork Reduction Act, which attempts to calculate the external administrative burden imposed by various regulations. These efforts afford at best crude estimates of the marginal costs and benefits of government regulations. Yet the substantial effort expended trying to generate even these crude measurements illustrates the information vacuum that regulators confront in trying to determine the marginal costs and benefits of their regulations. Regulation by governmental bureaucracy may also "crowd out" alternative regulatory mechanisms (such as private market solutions) and stifle innovation in providing similar regulatory protections through alternative institutions. A common justification for the expansion of the scope of government regulation is the posited necessity to respond to market failures, such as the need to regulate unsafe consumer products or services. This narrow focus, however, ignores the myriad private mechanisms available to address those problems. For consumer products, for instance, third-party inspection and verification institutions such as Underwriter's Laboratory provide many of the same services as government safety regulators, often according to much stricter safety standards.54 Other quality assurance mechanisms include the value of brands and trademarks,55 money-back guarantees, and private tort and contract actions. 
Providing unsafe products or services can result in substantial financial penalties through reduced consumer sales and a decline in the firm's stock price,56 and those financial losses often greatly exceed civil liability and government penalties and fines.

D. Selection Bias and Tunnel Vision on Regulatory Mission

A fourth tendency of regulators is to focus narrowly on the importance of their regulatory agenda at the expense of alternative policy goals and social ends. Environmental regulators, for instance, are often called to their jobs with a sense of mission and tend to focus narrowly on the pursuit of environmental policy goals at the expense of other important social goals, such as economic growth.57 This should not be surprising--organizations will tend to attract those with a comparative advantage in performing the obligations of the job. As economist Frank Knight colorfully put the point, "[T]he probability of the people in power being individuals who would dislike the possession and exercise of power is on a level with the probability that an extremely tender-hearted person would get the job of whipping master in a slave plantation."58 For similar reasons it will be relatively rare to find in the permanent bureaucracy of the government those with antiregulation dispositions. There are a variety of reasons why this is so. There are two basic components to the remuneration from a job: monetary income and non-monetary psychic income, such as a feeling of "doing good" in the world. Those who derive job satisfaction from engaging in regulation or are more comfortable with employing coercion will gain higher psychic income from such a job than those who do not. As a result, economic theory predicts that the pro-regulation individual will be willing to accept a lower monetary wage because of the job satisfaction she receives, meaning that at any given wage governmental jobs will tend to be filled by those with a bias toward regulation. 
For example, at the margin, an ardent environmentalist will be more willing to enact and enforce environmental regulations than one who lacks that sense of mission. Thus, governmental bureaucracies will tend to be staffed disproportionately with those who believe most strongly in their regulatory mission and will tend to push that mission further than might be socially optimal. Regulators and interest groups that share the regulators' vision can thus use the power of the government to pursue their subjective ends and impose costs on others, in essence imposing a "political externality."59 Moreover, the trappings of government tend to reinforce this tendency to believe in the mission of the organization and the necessity for its expansion. A recent example of regulatory tunnel vision illustrates the clash between safety concerns on one hand and First Amendment values on the other. Acting in the name of safety and public health, the FDA has historically imposed heavy restrictions on the rights of manufacturers to communicate truthful information about the safety and health benefits of many products, even enforcing strict prior restraint limitations.60 In several recent cases the FDA was taken to task for its undue infringement on First Amendment values, and in particular, for ignoring the potential benefits to consumers and marketplace competition of the provision of truthful information about products.61 In pursuing its mission of protecting consumers from the dangers of false health claims about products, the FDA improperly ignored the harmful effects of its regulations on the First Amendment commercial-speech rights of businesses, as well as the harm to consumers from the denial of useful health information. Governmental regulators also tend to be subject to selection bias in the range of matters that they see. Consider an example taken from my government service at the Federal Trade Commission. 
E-commerce and Internet auctions (such as through eBay) are rapidly growing areas of the economy. As the volume of e-commerce activity rises, it is natural that Internet fraud and contractual misunderstandings will rise as well, at least in absolute terms, and predictably the number of FTC complaints about alleged online fraud has risen as well. Nonetheless, eBay recently stated that "[a]pproximately 0.01 per cent of transactions end in a confirmed case of fraud."62 Moreover, many of these problems probably could have been avoided quite easily by consumers through simple precautionary devices such as using credit cards or other secure payment systems (such as PayPal) rather than checks or cash, or by dealing only with high-reputation sellers.63 Governmental regulators, however, generally will hear only from the 100 in 1 million consumers whose transactions end in fraud, rather than the 999,900 whose transactions do not go awry. News reporters and oversight officials likewise will disproportionately hear from victims of fraud. As a result, there will be a tendency for regulators to inflate the public perception of the problem and to impose increased regulations in an effort to control it. Given the low level of fraud already present, as well as the comparative efficiency of self-help by consumers relative to government enforcement in most cases, increased governmental regulation in a situation such as this is unlikely to reduce fraud substantially. On the other hand, new regulations could increase the costs of all transactions, including the vast majority of nonproblematic transactions. Nonetheless, because of selection bias, governmental regulators will tend to discount these external costs of their regulations in decision-making.

III. ACADEMIC BUREAUCRACIES

There has been surprisingly little research modeling the institutions and incentive structure of academic bureaucracies, especially from an economic perspective. 
This Part will take the economic model of government bureaucracy described in Part II and consider whether it explains the behavior of university administrators. Examining the internal organization of universities, it is apparent that many of the characteristics of governmental bureaucracies are equally present in academic bureaucracies.

Universities, like governmental agencies, are nonprofit entities. Unlike governmental agencies, universities are not monopolists; universities participate in a market, albeit an imperfect one, in which they must compete with other institutions for resources, faculty, and students. Yet they still face attenuated competition and modest profit-and-loss accounting constraints.64 Many universities have substantial endowments that soften market discipline. Much of the competition among universities centers on reputation, which is notoriously "sticky" over time, leading to weakened market signals. Reputation is also a network good: once established, a reputation tends to become entrenched. Reputation also is highly correlated with financial resources, so financial strength is self-reinforcing in any sort of market competition. Moreover, there seem to be economies of scale in maintaining IRBs and other regulatory bureaucracies; their increased overhead and administrative costs thus disproportionately burden smaller and less wealthy universities relative to larger, wealthier ones, as smaller schools can less well afford this diversion of resources from productive ends.65 In general, the size and expense of administrative bureaucracies in universities have grown substantially in recent decades. In part, the growing accumulation of government regulations (employment, civil rights, environmental, etc.) has necessitated the creation of new administrative staff to deal with these new responsibilities. 
With respect to university bureaucracies such as IRBs, at least some of the growth in their internal administrative burden has been spurred by governmental regulations.66 In addition, the preoccupation of IRBs with paperwork and forms has been promoted by a regime of "fear" of governmental oversight, "[f]ear by the institution that it will be 'out of compliance' with one or more aspects of the paperwork, and so subject to penalty upon audit (be that by the NIH, the Office for Human Research Protection, the US Department of Agriculture, or whatever other organization is involved)."67 For example, with respect to Institutional Animal Care and Use Committees (IACUCs), the analogue to IRBs for research involving animal subjects, reviews must take place before the grant application is even permitted to be peer-reviewed by the National Institutes of Health (NIH).68 Thus, review of the research must take place even if the application is denied (as most are), resulting in a pointless waste of time, energy, and money for researchers, and no benefit to animal welfare. Indeed, because it diverts the attention of IACUCs from reviewing research that will actually occur, it may be counterproductive to animal welfare. Nonetheless, the regulation was imposed by the NIH as the result of political pressure from purported animal-welfare lobbyists, and its costs are borne by researchers and universities. The analogy to political bureaucracies may be especially apt for public universities, which face external political pressures in addition to internal agency slack, and which are also subsidized by tax revenues and thus may face weaker competitive incentives to operate efficiently. 
Amacher and Meiners observe that, in general, public universities tend to be larger than private universities, which they attribute to the political nature of public universities.69 Large state universities tend to get greater tax subsidies and, in turn, have more political constituents and alumni to pressure state legislators for subsidies. Wealthy private universities (such as Harvard or Yale) could expand to 40,000 students, yet they do not, suggesting that it is external political dynamics, not economic efficiencies from economies of scale, that have driven the expansion of modern state "mega-universities." One study estimated that public colleges employ approximately 40% more labor than private colleges with the same size capital stock, suggesting that the subsidies and political protections that insulate state universities from competition are reflected in less efficient delivery of services.70 Coates and Humphreys found that academic administrators act consistently with the predictions of the Niskanen model, as manifested in their willingness to enlarge their student bodies in order to increase their own budgets, and thereby the power and prestige of senior administrators.71 The remainder of this Part will thus borrow from the theory developed about governmental bureaucracies to explore its applicability to university administrators generally and to IRBs specifically. It will be suggested that this model may explain why university IRBs have become so dysfunctional: university bureaucrats face the same opportunities and incentives as governmental bureaucrats toward empire-building, tunnel vision, selection bias, agency costs, and the other institutional problems that undermine the effectiveness of governmental bureaucracies.

A. Empire-Building

By law, IRB approval is mandated only for federally funded research involving human subjects; thus, projects that are not federally funded are not required to undergo IRB review. At the University of Chicago, Richard Shweder reports, 80% of social science projects are not federally funded.72 Nonetheless, the university, like virtually every other, has imposed the requirement of IRB approval on all research involving human subjects, whether federally funded or not. Perhaps the defining feature of IRBs has been this tendency to expand well beyond their originally designed scope and purpose and to sweep under their wings vast areas of research, including oral history and other low-risk fields. In addition, IRBs appear increasingly prone to straying beyond their defined role of protecting participants, meddling instead in methodological issues such as measurement techniques or questions of statistical power, where they have little expertise to add.73 Indeed, the consensus is that this growth in the scope and responsibilities of IRBs lies at the root of their various problems, from mission creep to the inability to distinguish true threats from minor paperwork obligations. IRBs often justify their mission creep in terms reminiscent of government bureaucracies--for example, "if we don't do it, no one else will." IRBs were originally established to deal with the problem of self-interest by researchers that might blind them to the ethical consequences and dangers of their research activities. IRBs seize upon the purported distorting effects of researchers' self-interest to justify the "need" to expand into a wide range of activities beyond medical experiments with a true risk of harm to subjects.74 In fact, it is not clear why the social sciences should be swept under the IRB umbrella in the first place. As Kevin Haggerty has observed, "It is unclear . . . 
that there ever was any demonstrated need for a formal bureaucratic system of ethical oversight for social scientific research."75 He adds,
[T]he origins of research ethics oversight for the social sciences in the United States donot [sic] resemble anything approaching a careful consideration of the objective requirements for such a system. Instead, the behavioral sciences were included almost as an afterthought to a [sic] ethics regulatory system that was being fashioned for biomedical research.76

The incentives for empire-building by IRBs today appear to be quite strong. As described by various members at the Symposium at which this Article was presented, the past several years have seen tremendous growth in what can be termed the "IRB industry." It was reported that at Northwestern the Office for the Protection of Research Subjects grew from two full-time professionals in the late 1990s to 25 professionals and an administrative staff of 20 last year.77 And as IRBs have become more aggressive in asserting themselves, the power and prestige of IRB directors have increased. As the size, scope, and importance of IRB operations have increased, experienced IRB directors are in heavy demand, and one can assume that salaries have increased commensurately.78 There is reported to be a growing IRB conference circuit. From the perspective of individual IRB directors, the incentives within the academic bureaucracy generally tend toward an assertion by IRBs of enlarged influence, as this tends to increase the director's power and, in turn, the director's salary, feeding the growth of the IRB industry. Moreover, there appear to be no effective external checks on any improper empire-building tendencies. Although this independence from external constraint may be useful in protecting IRB authority to monitor researchers, it also removes constraints on arbitrary or self-interested IRB decision-making. I am not aware of any reported instances in which an IRB's assertion of jurisdiction or imposition of conditions on research was overruled by a higher university official, to whom the IRB presumably reports. IRBs are thus essentially "lawless" institutions, in the sense that they are subject to no real external constraints or checks. Governmental bureaucracies, at least, are accountable to the elected branches of government, the executive and legislature, and governed by a set of coherent procedures. 
IRBs, however, seem to be a law largely unto themselves, making ethical and social welfare assessments with few discernible standards and little external oversight. Again, it should be emphasized that venal or selfish motivations need not be attributed to IRB administrators to conclude that they will exhibit a tendency toward empire-building behavior. Instead, the expansion of IRBs may have come about through wholly innocent and public-spirited motivations. IRB directors seem genuinely to believe that their intervention is necessary to protect research participants from exploitation by researchers, even where a reasonable assessment would conclude that the risk is trivial and intangible, as is the case with much social science research. As with governmental bureaucracies, the relentless tendency of IRBs seems to be to expand over time, and there appears to be no internal mechanism to require them to reevaluate their range of responsibilities once acquired.

B. Undue Risk Aversion

Like governmental safety regulation, the IRB system should aim to minimize total costs--the sum of Type I errors, Type II errors, and administrative costs. But also like governmental regulators, individual IRB members face private incentives to act in an unduly risk-averse fashion. In high-profile cases at Johns Hopkins University and the University of Pennsylvania, universities and IRBs have been blasted by the media and the public for failing to prevent research that eventually resulted in injury to subjects.79 On the other hand, there seems to be little institutional or public praise for responsive IRBs that work to protect the needs of researchers against overly cautious safety fears. Major safety breakdowns are observable and lead to major negative publicity for IRB heads. 
By contrast, it is difficult to measure the costs of research that is prevented, reduced in value, or made more expensive to conduct than would otherwise have been the case. In short, the incentives pulling on IRB administrators strongly incline them toward unduly risk-averse behavior and toward increasing costs on researchers, leading unduly cautious IRB administrators to regulate inefficiently.80 Risk aversion may be even more pronounced with respect to social science research, where the benefits are much less tangible than for biomedical research. Researchers face a collective action problem in persuading IRBs to balance risks and benefits more efficiently. Collectively, researchers have incentives to band together to seek a more efficient balancing of risks and benefits. Individually, however, each researcher faces incentives to free ride on the efforts of others, but since each has the same incentive, all do. Thus, no one has an incentive to fight unless he must, and almost all will instead try to get through with the least amount of friction, expense, and delay. Alternatively, researchers may pursue a strategy of evading the system (such as by proceeding as though the requirement of IRB approval does not apply to one's research) or of simple resignation to the IRB's dictates, no matter how senseless, so long as the cost is tolerable. One commenter has observed that rather than attempting to fight the system, "medical scientists have adopted a policy of 'weary, self-resigned compliance coupled with minor or major evasion.'"81 Social scientists have responded similarly. "Many of my colleagues do not want to challenge the IRB, because they are concerned about drawing attention to themselves and their work," reports journalism professor Margaret Blanchard. "They fear such attention will lead to further supervision by the IRB and more restrictions on their work. 'Don't rock the boat,' they say. 'Let's keep a low profile. 
Maybe the IRB will not cause problems for our particular research project.'"82 Social scientists also have complained that their difficulties are exacerbated by the fact that IRB directors at most institutions are drawn from the ranks of biomedical researchers and so may not adequately understand the research practices or social importance of social science research.83 Fighting the demands of an overzealous IRB can be a draining and distracting experience for researchers. For instance, J. Michael Bailey found himself in the crosshairs of his university IRB as a result of dubious politically motivated charges brought against him.84 Bailey was dragged before his university's IRB where, for over a year, he was forced to defend himself against scurrilous accusations of academic misconduct. Elizabeth Loftus similarly found herself embroiled in a draining and career-damaging dispute with her university's IRB.85 At the University of Illinois, the IRB threatened to block publication of an article by an assistant professor in The Kenyon Review about a creative writing class with fictionalized students at a fictionalized school, even though the subject matter of the article itself probably did not even require IRB approval in the first place.86 The threat of intimidation and delay, and therefore the incentive to capitulate, is probably greatest for graduate students and junior professors, who face the pressures of limited research budgets and ticking clocks for completion of their research.87 In fact, there are reports not only that these fears have redirected graduate students to alternative research programs in order to avoid the cost and delay of IRB approval for their preferred research, but also that professors have encouraged their students to make such changes.88 One scholar spent months negotiating with her university administration regarding how to secure consent from the subjects of her study, native healers who provide Hispanic communities with medical 
advice, prescriptions, and treatments. Because many of these are illegal immigrants or could be charged with practicing medicine without a license, the researcher refused to secure from them signed papers of informed consent. By the time she acquired partial approval from her IRB, "a major portion of the funds budgeted for transcription and translation were no longer available" and "her graduate students were frustrated in their apprenticeships."89 Scholars may also find much of a sabbatical research leave squandered waiting for an IRB to approve a proposal. Other scholars try to avoid IRB review in order to avoid having their research proposals reviewed by individuals unfamiliar with their field of study.90 Given the cost, difficulty, and delay of IRB compliance, more vulnerable researchers may simply turn away from their preferred course of research and opt for less controversial research for which IRB approval will be easier to secure.91 Indeed, IRBs have chilled entire categories of research by prohibiting scholars from exploring sensitive topics, such as criminal records or more general archives that may contain embarrassing information. One leading historian notes that prohibiting questions on embarrassing or illegal topics could, for instance, bar interviews with civil rights activists who have routinely broken the law in acts of civil disobedience.92 Another oral historian has observed that "there is the possibility, or perhaps even the hint, that . . . controversial, difficult, or challenging topics cannot be addressed in historical research."93 Bledsoe and Sherin refer to this process of the "stifling or transformation" of research in the face of IRB pressures and practices as "consensual censorship."94 As the burden and delay of IRB compliance rises, this provides an incentive for researchers to turn away from research on controversial topics. 
One researcher describes the decision of one particularly talented student not to perform an ethnographic study of AIDS activists in Philadelphia as the student's sensible decision "to do a statistically significant, but dull, survey of the relationship between healthy eating habits and extracurricular activities of college students, rather than a controversial ethnographic study of AIDS activists that might never be approved [by the IRB]."95

C. Marginalism and Cost Externalization

A third way in which IRBs resemble governmental bureaucracies is their inability to measure marginal costs and benefits accurately, as well as their tendency to underestimate costs because many of those costs are externalized onto private actors. IRB administrators have no reliable mechanism for judging the marginal benefits and marginal costs of their mandates. Governmental regulators at least have crude proxies for measuring costs and benefits, such as cost-benefit analysis. IRBs, by contrast, appear to have no measures at all. Complying with IRB mandates can be quite time-consuming and tedious for researchers, yet there is no indication that IRB administrators take this into account. One researcher states, "'It takes so much unnecessary time. Every year, even with the same application, IRB members come up with some bit of minutia they want me to change . . . . It is way beyond what is helpful. It is very frustrating and time-consuming.'"96 IRBs also make little effort to reduce the redundancy and time burden imposed on researchers in completing forms. One researcher describes the need to "'write nonredundant answers to redundant questions even though they could put all [of the questions] in one question,'"97 such as a standard questionnaire requirement to answer separate questions about protecting different subsets of human subjects rather than simply asking about all of the participants in one question. 
IRBs lack even crude metrics for measuring the marginal costs and benefits of their administrative requirements or the relative frequency of Type I and Type II errors. As noted, most of an IRB's attention is focused on the satisfactory completion of paperwork requirements rather than the underlying substance of the research under review. Researchers report that the combination of a growth in IRB responsibilities with greater IRB attention to minutiae has significantly lengthened the time required to receive IRB approval.98 The time required is even longer when a researcher is operating in a less developed part of the world, such as Africa, where fax machines and photocopiers are rare, as IRBs still require the customary blizzard of paperwork. Illustrating an extreme example of cost externalization, some researchers report that they were prohibited from providing possibly life-saving medical treatment to patients in Africa because they were unable to secure signed consent forms in advance.99

An additional important cost of IRBs, albeit one difficult to measure, lies in their tendency to crowd out alternative forms of research ethics and to stifle the development of alternative institutions that might generate the putative benefits of IRBs (ethical research) without the high costs that accompany regulation by IRB. Some researchers acknowledge that in some ways the requirement of IRB preapproval has raised the ethical level of their research, especially when it comes to securing consent from research participants.100 But there are unintended consequences of this regulatory regime as well. First, IRBs' undue focus on paperwork and documenting consent may inculcate in researchers a sort of "legalistic" form of ethics that focuses on compliance with the letter of IRB consent requirements, rather than the underlying ethical issues that may be implicated by particular research.101 Moreover, just as the undue attention paid to paperwork and consent forms tends to divert an IRB's attention from substantive ethical issues to managerial matters like the proper documentation of consent, it distracts researchers as well, obscuring their focus on substantive ethical duties. Second, because IRBs hold the power to overrule the ethical judgments of researchers, those same researchers may be tempted to accept the ethical judgment of the IRB in place of their own independent judgment of the ethics of a given research project and to subordinate their own ethical questions to the outcomes of the IRB process. An official Canadian study has referred to this tendency--defining ethical obligations as equivalent to compliance with the IRB process--as the "bureaucratization of ethics."102 "By treating research ethics as equivalent to the [IRB] process," the study observes, "research institutions and sponsors have narrowed the scope of ethical concerns to front-end approvals of research proposals, thus ignoring what happens outside the process. 
But given that this reduction of research ethics to [IRB] approval, feedback mechanisms in the organization will provide the misleading reassurance that all is well."103 Third, the IRB monopoly on ethics has a tendency to stifle alternative institutions that might secure equal or greater protection of research ethics and human subjects without the costs and problems of IRB approval. Kevin Haggerty identifies "a combination of measures that are both less constitutionally objectionable and bureaucratically onerous"104 such as "mandatory ethics training for graduate students, ethics accreditation for researchers, the re-invigoration of discipline-specific codes of research ethics, and exemptions for broad fields of research that pose few ethical concerns."105 Researchers and institutions accused of ethical violations also face "potentially large sanctions imposed by the market in the form of lost opportunities for funding or collaboration, and serious harm to reputation" and even possible legal liability.106 The Berkeley journalism school, which may be disposed to guard its First Amendment rights more jealously than academics in other fields,107 regulates low-risk research through faculty members who can enforce the rules through disciplinary and department codes of behavior.108 Many observers note that journalists in fact face many of the same challenges as other researchers, and have accordingly devised a number of ethical codes and practices to resolve those tensions.

D. Selection Bias and Tunnel Vision on Regulatory Mission

IRBs also act like governmental bureaucracies in their tendency to pursue their regulatory objectives with a tunnel vision that causes them to overlook alternative social values. Abstract scholarly values of free speech and free inquiry may be especially vulnerable to sacrifice in the name of IRB safety protocols.
This tendency of IRB administrators to glorify their mission when weighed against other values is natural, but the danger is multiplied in the IRB context as a result of the largely unreviewable discretion afforded to IRB administrators in vetoing or modifying research protocols. IRB decision-makers are subject to no real oversight of the fairness or accuracy of their decisions, even when they decide to quash a given research protocol. The rise of the IRB "industry" and the increasing professionalization of IRB administration may even bring about a heightened degree of censorship. The requirement of IRB preapproval for research applies only to federally-funded research. Nonetheless, universities have extended the mandate to all research involving human subjects, even though the vast majority of social science research is not federally funded. This requirement is typically justified by the belief that IRBs are a necessary part of a system of checks and balances to curb the self-interest of researchers, whose natural inclinations may lead them to discount improperly the dangers to human subjects from their research. But the argument that IRBs are necessary to counterbalance the self-interest of researchers is somewhat ironic in that there are no institutional checks to balance the self-interested expansionist tendencies of IRB administrators. Thus, in a classic example of the problem of "who
watches the watchers," IRB administrators arrogate to themselves the power to oversee all research involving human subjects, even the vast majority of social science research that is not federally funded, while lacking any similar oversight for themselves. There is no reason to believe that IRB administrators are more saintly in exercising unreviewable powers than any other human beings. Indeed, there is reason to believe that IRB administrators may be as prone to abusing their authority as the researchers that they oversee. As this Symposium has stressed, the IRB preapproval process for research is fundamentally a form of censorship and prior restraint, where the IRB weighs the perceived benefits of the proposed research against the costs to the individual participants in the study and "the community" more generally. The basis of the federal requirement of IRB preapproval for biomedical research is to protect participants from the risk of verifiable physical or psychological harm that might result as a byproduct of the research itself. Over time, however, IRBs have come to define their mandate more broadly, to sweep in subjective and ill-defined fears of hurt feelings, embarrassment, and emotional discomfort. Protecting subjects from these minor harms may or may not be a valid policy goal--some research is designed specifically to deal with how people respond to public embarrassment or certain types of social disapproval. Some degree of minor embarrassment or hurt feelings may be an inherent byproduct of some scholarly research, such as scholarly explorations of criminal activity or political corruption.
Moreover, by providing heightened protection for "vulnerable populations" such as drug abusers, minorities, or prisoners, IRBs may be stifling research that may benefit those groups.109 IRB standards that prevent research on such topics may thus fundamentally be at odds with the research itself, leading to various forms of evasion of IRB requirements by researchers or ad hoc exceptions to the rules by IRB administrators to permit certain kinds of research despite their articulated policies. Evasion is possible in many cases because formal advance IRB approval is required only for those who receive federal grants or private funding managed by a university. But for nonsponsored or privately funded research, "IRB compliance is on the honor system."110 The IRB can report the wayward researcher to government or university officials--if it later discovers the breach--but this self-reporting mechanism leaves much room for evasion. Evasion by researchers may take many forms. Sometimes minor evasion is intentional, such as when rapidly-developing research opportunities arise and it is impossible to acquire IRB preapproval; in such cases, the researcher calculates that it is better to "ask forgiveness rather than permission." 
Given the rapidly-changing nature of field research and the clunkiness of IRBs in keeping up with the challenges, one anthropologist confesses that IRBs "'turn everyone into a low-level cheater' in much the way that unreasonably low speed limits encourage disrespect for traffic laws."111 An anthropologist studying members of Berlin's Turkish minority simply refused to seek IRB approval before accepting an invitation that arose to talk with children in a Koran class.112 Some researchers assume that the requirement of IRB approval does not apply to their research, perhaps because they believe that it only applies if they receive outside funding for their research.113 Still others balance the trivial risks of their project against the absurdity, high expense, and delay of the system, rationally calculate that they are unlikely to be detected or severely punished, and decide simply to risk forgoing approval and deal with the situation if they are later caught.114 But the combination of vaguely-defined, unrealistic, aspirational rules with essentially unreviewable discretion presents a more troubling risk. IRB approval can turn on the wholly subjective opinions of IRB members as to whether the injured sensitivities and ruffled feathers in any given case are justified by the social value of the research. Thus a neutral process of review according to known and established rules can be supplanted by an opaque process of approval according to the whims and subjective assessments of IRB reviewers. The risk of political taint in IRB decision-making is especially troubling.
One study asked human-subjects committees to review hypothetical proposals that were identical in their proposed treatment of human subjects, but differed in their sociopolitical sensitivity and level of ethical concerns.115 The study found that proposals on nonsensitive topics with no ethical concerns were approved 95% of the time, but that proposals on sensitive issues, also with no ethical problems, were approved only 40–50% of the time. Moreover, hypothetical studies on the effects of reverse discrimination were significantly more likely to be rejected than studies purporting to investigate discrimination.116 The standard rationale provided for rejecting sensitive proposals was "methodological" problems, rather than ethical problems--even though the hypothetical studies were identical except for the sensitivity of the topics, so the same methodological problems should have been present in all, if any, of the studies.117 In fact, in the study, proposals on sensitive topics with no ethical problems were rejected at the same rate as proposals with ethical violations such as deception.118 The authors conclude from their findings:
[T]he conclusion that can be drawn from the narratives for the proposals that did not contain ethically problematic procedures is that IRBs found the sensitive proposals to be socially objectionable, especially the reverse discrimination ones, and invoked whatever reasons were most convenient to justify their decisions to deny approval. When deception or lack of debriefing was part of the protocol, these ethical problems were given as the reason for the nonapproval. When no ethically problematic procedures were present, however, methodological objections were raised that had not been raised in the other cases.119

According to the researchers, therefore, "the primary reason for the rejection of sensitive proposals was the potential political impact of the proposed findings, for example, discrediting Affirmative Action policies."120 They further speculate that this decision reflects "shared liberal values" among IRB members.121 This consideration of the political impact of research flatly violates federal regulations, which expressly prohibit such considerations.122 As Carol Tavris has observed, "The growing power of IRBs in academia, along with the increasing number of restrictions on free speech in the politically correct name of 'speech codes' and 'conduct codes' . . . is perilous for independent scientific inquiry."123

Basing IRB decisions on such a vague and capacious metric of "harm," such as upsetting subjective sensitivities, thus fundamentally amounts to making these decisions according to no predictable rules at all and makes it virtually impossible to conduct any sort of controversial research without arbitrary and ad hoc exceptions to the rule. It may also run the risk of trivializing threats of real physical harm by treating them as mere differences of degree, rather than kind, from minor and subjective harms such as hurt feelings.124 Similarly, it is not practicable for social science researchers to pursue research on some topics while also conforming to IRB regulations. As such evasion by researchers becomes more routine, it threatens to bring about "capricious" governance by IRB administrators--one in which the violation of the rules is common, but where only some violators are punished, and, moreover, where the most vulnerable or marginal members of the academy are those most likely to be punished. In turn, the perception that the rules are unfair or arbitrary may further reduce voluntary compliance.125 As with governmental bureaucracies, the problem of regulatory tunnel vision in academic bureaucracies is exacerbated by self-selection by administrators for such positions. At root, requiring IRB preapproval for research is fundamentally a form of prior restraint censorship that prohibits researchers from conducting research without demonstrating to the censor that the expected social benefits of the research exceed the social costs.126 It follows that those who will be the most effective censors are those who suffer the least disutility from exercising the powers of censorship to veto research proposals.
Thus, the job will tend to attract willing, rather than reluctant, censors.127 One of the participants at the Symposium was the IRB administrator at one of the country's most prestigious research institutions.128 In his written remarks prepared for the conference, this IRB director considered a hypothetical academic research project that investigated the response of Muslims to the controversial "Danish cartoons" of Muhammad that spawned rioting in some parts of the world, or the response of Jews to anti-Semitic cartoons published in Iran. This person commented:

I believe we can assess how [the institution] might react to a research study in this area. Let us hypothesize that a well-meaning member of the faculty wanted to show the cartoons from Denmark and the anti-Semitic ones featured in Iranian newspapers to congregants at local mosques and synagogues. I contend that the risk of doing harm to the institution and larger community might be greater than the putative positive results of such scholarship. Freedom of speech may be the issue, but [the university] clearly views it within its purview to act in its best interest.129

124 In this sense, IRB decision-making is similar to government regulators in economies that are subject to overly-intrusive regulations. Heavy economic regulation inevitably spawns violations of the rule of law and the predictability that it embodies, as the sheer weight of regulations and their contradictory mandates makes it practicably impossible for those regulated to comply with all the rules. As a result, regulators are forced to make ad hoc and arbitrary exceptions to the rules to make it possible for ordinary economic activity to occur. See Todd J. Zywicki, The Rule of Law, Freedom, and Prosperity, 10 SUP. CT. ECON. REV. 1 (2003).
125 See Brian C. Martinson et al., Scientists' Perception of Organizational Justice and Self-Reported Misbehaviors, 1 J. EMPIRICAL RES. HUM. RES. ETHICS 51, 54 (2006); see also Burris & Moss, supra note 15, at 50.
126 Philip Hamburger, The New Censorship: Institutional Review Boards, 2004 SUP. CT. REV. 271 (2005).
127 Similarly, those who enforce campus speech codes appear not to be overly concerned about principles of fairness and due process in punishing violations of their mandates. See KORS & SILVERGLATE, supra note 123.
128 Because this individual is not publishing his remarks, I am quoting them without identification.

The dismissive reference to the "putative positive results" makes clear the view of this IRB administrator that he plainly considers any benefits from such research to be trivial. There is no effort to provide objective justification for the administrator's subjective dismissal of the hypothetical studies' possible benefits, nor criteria for weighing these benefits against "the harm to the institution and larger community." Nor does he indicate that there would be any effort to assess the objective validity of subjective claims of psychological injury by participants in the study. Such a standard is also prone to obvious abuse--for instance, in deciding which political communities have sensitivities that "count" in this inquiry.130 What's more, the participants in the hypothetical survey are consenting adults who can simply refuse to participate in the study, sparing themselves the anguish of being exposed to the offending cartoons. It is disconcerting that an IRB administrator at a major university would so casually approve of censorship on the basis of little more than concerns about the subjective sensibilities of "the larger community" and possible "harm to the university" from permitting controversial research. This tendency to mix improper political concerns in the IRB approval process is exacerbated by the very makeup of IRBs. The regulations governing IRB composition provide:

Each IRB shall have at least five members, with varying backgrounds to promote complete and adequate review of research activities commonly conducted by the institution. The IRB shall be sufficiently qualified through the experience and expertise of its members, and the diversity of the members, including consideration of race, gender, and cultural backgrounds and sensitivity to such issues as community attitudes, to promote respect for its advice and counsel in safeguarding the rights and welfare of human subjects.131
129 Unpublished paper presented at Symposium (emphasis added).
130 For instance, it appears that IRBs are more willing to permit questions that expose discrimination against minorities than concerns about reverse discrimination, suggesting different attitudes toward which political communities "count." See supra notes 115–21 and accompanying text.
131 45 C.F.R. § 46.107(a) (2006). In addition, IRBs must make every "nondiscriminatory effort" to ensure that no IRB "consists entirely of men or entirely of women." Id.

The obvious intent of this requirement is to have "community members" bring their subjective perspectives to bear on the question of the ethics of proposed research. This further implies that research that is thought to be perfectly fine at one institution could be judged improper at some other institution whose IRB is drawn from a different community or which has members with different "cultural backgrounds and sensitivity to such issues as community attitudes." Ethics is far from an exact science. But it is hard to see why a scholar's opportunity to conduct research should depend on such subjective factors as community standards and cultural sensitivities, especially when the research may be conducted in very different communities (such as foreign countries) and will be published in a technical academic journal with national or international circulation. Indeed, given the comprehensive nature of the research covered by the IRB requirements, it is hard to see how community standards should be relevant to whether research may be conducted. Moreover, as has been noted, when co-researchers reside at different institutions, they are generally required by internal IRB regulations to secure approval from the board at every institution, seemingly raising the potential for contradictory IRB determinations arising from the wholly subjective assessments of different reviewers drawn from different cultural backgrounds and applying different sensitivities to community standards.
In fact, basing the decision to permit or publish research on the subjective assessments of community members may also raise First Amendment concerns.132 The tunnel vision problem is further exacerbated by the biomedical orientation of the IRB design framework and the domination of IRBs at most universities by biomedical researchers with little appreciation or understanding of social science research.133 As one social science researcher complained, "[T]he IRB is a medical model IRB and doesn't understand social science research. They have concerns that reflect a lack of understanding about things outside of clinical trials. All of my social science colleagues agree with my viewpoint on this."134 This combination of the possibility of politically-influenced decision-making together with the essentially unreviewable discretion of IRBs to prevent or modify research presents a serious threat to free speech. As noted by Ceci et al., it appears to be quite easy for politically-motivated decisions to be concealed as questions about methodology or procedure.135 The combination of vague substantive standards with limited procedural protections generates the fear that IRBs will prevent social science research on politically controversial topics.

CONCLUSION

This paper has sought to answer the question, "If IRBs are composed of so many intelligent and conscientious people, why are they so prone to making poor decisions?" One contributing factor appears to be the increasingly bureaucratized nature of IRB structure and administration. An IRB, like any other safety regulatory body, should seek to maximize the net public benefits of its actions, protecting participant safety while minimally interfering with researchers' needs. Over time, however, IRBs have become increasingly expensive, sluggish, and prone to errors of focusing on trivialities rather than central ethical concerns.
The bureaucratic structure and incentives of IRBs may help to explain why this evolution has occurred. A particular concern is the essentially "lawless" nature of modern IRBs, which rely wholly on self-regulation while intrusively regulating research by others. Although they justify their intrusive oversight of all research as a necessary check on the individual self-interest of researchers, they fail to recognize that IRBs are prone to the same problems of self-interest and tunnel vision of which they accuse researchers. Understanding the bureaucratic nature of IRBs is a necessary first step toward their reform. Just as efforts to "reinvent government" have done little actually to reduce the cost of governmental bureaucracy or increase its effectiveness, repeated studies and complaints about the inefficiencies of IRB bureaucracies will, standing alone, do little to reform the system. It is necessary to start with an accurate model of how the current situation came to be before we can devise a plan to improve it.

132 See, e.g., Freedman v. Maryland, 380 U.S. 51 (1965) (describing rules governing censorship of obscene movies by local licensing boards). There is substantial evidence that different IRBs reach divergent results on the propriety of research protocols. See, e.g., Mary Terrell White & Jennifer Gamm, Informed Consent for Research on Stored Blood and Tissue Samples: A Survey of Institutional Review Board Practices, 9 ACCOUNTABILITY RES. 1 (2002); Jon Mark Hirshon et al., Variability in Institutional Review Board Assessment of Minimal-Risk Research, 9 ACADEMIC EMERGENCY MED. 1417 (2002); Thomas O. Stair et al., Variation in Institutional Review Board Responses to a Standard Protocol for a Multicenter Clinical Trial, 8 ACADEMIC EMERGENCY MED. 636 (2001). That consideration of subjective assessments of local communities might result in divergent outcomes was recognized in establishing the federal IRB scheme. One of the advisors to the National Commission was Robert J. Levine, who observes that permitting these differential outcomes was deliberate. See ROBERT J. LEVINE, ETHICS AND REGULATION OF CLINICAL RESEARCH 342 (1986). OHRP stresses that the determination is subjective and subject to local variation: "The risk/benefit assessment is not a technical one valid under all circumstances; rather, it is a judgment that often depends upon prevailing community standards and subjective determinations of risk and benefit. Consequently, different IRBs may arrive at different assessments of a particular risk/benefit ratio." OFFICE FOR PROTECTION FROM RESEARCH RISKS, NATIONAL INSTITUTES OF HEALTH, PROTECTING HUMAN RESEARCH SUBJECTS: INSTITUTIONAL REVIEW BOARD GUIDEBOOK 3–8 (1993). I would like to thank Philip Hamburger for his insight on the issues discussed in this paragraph.

133 Burris & Moss, supra note 15, at 42–43. The IRB Director quoted above who participated in the Symposium was trained as a biomedical researcher. See supra note 128 and accompanying text.
134 Id. at 43 (quoting anonymous survey respondent).
135 See supra notes 115–21 and accompanying text.