Though we have heard laments for decades that American democracy is sliding into idiocracy, never has more ink been spilled on the subject than during the Trump era. The argument goes that instead of a politics driven by the passions of the masses, run like reality TV, and debated at 280 characters, we need a return to sobriety — we need experts, not amateurs, to run things.
In his 2017 book The Death of Expertise, Tom Nichols, U.S. Naval War College professor and self-described Never Trumper, laments the turn of American politics toward “the worship of its own ignorance.” And libertarian-leaning Georgetown professor Jason Brennan writes in his book Against Democracy that “when it comes to politics, some people know a lot, most people know nothing, and many people know less than nothing.” Voters generally don’t know which party controls Congress, what major policy debates are about, or how federal spending is allocated. Brennan proposes the idea of “epistocracy,” a system where political power accrues more to the educated and knowledgeable — meaning, in practice, disenfranchisement schemes such as reviving literacy tests for voting, expanded to include basic economics and political science. Meanwhile, Parag Khanna, a TED Talker who describes himself as a “geopolitical futurist,” argues in his 2017 book Technocracy in America, “America has more than enough democracy. What it needs is more technocracy — a lot more…. Technocratic government is built around expert analysis and long-term planning rather than narrow-minded and short-term populist wins.”
For the Trump-era new right, meanwhile, experts are members of a corrupt ruling class, leftist gremlins aligned with the “deep state.” They work to undermine elected officials, the interests of the American people, and their democratic will.
This populist backlash feeds into a general wariness of scientific expertise. Although Americans continue to hold scientists in high esteem in the abstract, our public discourse is shot through with disagreement, distrust, and cynicism about scientific findings, from vaccines to climate change to genetically modified foods. Moreover, the public has begun to lose faith in the institutions that promulgate this expertise, and in attempts to shape policy based upon it — what Ari N. Schulman has called a “crisis of scientific authority.” Some of this mistrust is clearly earned. For instance, the replication crisis threatens to erode central planks of the scientific enterprise, such as peer review and statistical hypothesis testing. The sciences hardest hit also have the most obvious impact on public policy and our daily lives: economics, sociology, psychology, nutrition, and medicine.
All of this raises the question of what role expertise can and should play in our democratic republic. We need expertise to create policies that promote the public interest — while also ensuring that decisions are not simply delegated to those with no accountability. We need to guard against government by the ignorant — while also guarding against the hubris of experts ignorant of values that transcend their narrow domains. As Nichols observes, “Experts need to remember, always, that they are servants and not the masters of a democratic society and a republican government.” We need experts, including scientists, to inform government — but we must also avoid rule by experts. Now, more than ever, we need a model of expertise that is compatible with democratic accountability.
Against Technocracy
The term “technocracy” originated in the late nineteenth century and was reimagined and popularized by Howard Scott, who founded the Technocracy Inc. organization in New York in 1932. Inspired by the works of Thorstein Veblen, Scott believed that central planning by apolitical technicians could rationalize the economy by replacing the price system with a kind of “energy accounting,” whereby goods and services are allocated according to the amount of energy required to produce them. The technocracy movement represented this balance between consumption and production with its logo, a red-and-white yin and yang symbol. Of course, the concept of rule by scientific experts has deeper roots, from the writings of Francis Bacon and Auguste Comte to early twentieth-century progressive visions of applying science to government, culture, and industry. In our own time the concept of technocracy has come to take on a more nebulous and politically charged meaning, conjuring images of the Davos class, paternalistic Eurocrats, and Kafkaesque bureaucracies.
Technocracy, understood as rule by experts, covers a spectrum of governance models, from Oskar R. Lange’s centrally planned economy — criticized by Friedrich Hayek — to Congress’s delegation of its lawmaking power to administrative agencies such as the National Highway Traffic Safety Administration, which issues the Federal Motor Vehicle Safety Standards. What all forms of technocracy share is the idea that experts constitute a class apart — individuals with a special type of knowledge not possessed by the lay person, allowing for the effective manipulation of social and economic behavior to obtain favorable outcomes.
Broadly speaking, technocracy is problematic on three counts. First, there are moral and political criticisms of technocracy, which stress its anti-democratic and elitist nature. These criticisms come from across the political spectrum and are compelling to just about anyone with strong democratic sympathies.
That technocracy and democracy are in tension is obvious, since democracy requires that all citizens have an equal say. In a representative democracy such as ours, governance is delegated to a small number of elected officials, who are kept in check and held accountable through the ballot box and other mechanisms like the free press. Here the tension often arises from a lack of transparency and explicability — ordinary citizens do not or cannot know how or why particular kinds of expertise inform policymaking — or from an imbalance of power between elected officials and their constituents on the one hand, and the unelected experts who influence the lawmaking process on the other. Or democratic leaders may simply lack the incentives to consistently draw on expertise to advance the common good, so that experts are only invoked opportunistically.
A serious moral criticism of technocracy is that it can too easily encourage the abuse of state power. Some of the worst human rights violations by the government in American history came as a result of the eugenics movement, which was driven by scientists seeking to exercise rational control over human reproduction. Or consider technocratic urban planners like Robert Moses, who drove thousands from their communities with “urban renewal” and highway construction projects, the details of which were decided by unaccountable planners rather than democratically elected officials. And in illiberal regimes, technocratic aspirations can become catastrophic — the Soviet Holodomor, the Great Chinese Famine.
There are also practical criticisms of technocracy: that it simply does not work well, or is hard if not impossible to implement in practice. Practical criticisms may be neutral with respect to technocracy’s moral dangers, stressing instead the historical failures and unexpected consequences of technocratic policies, such as the disastrous demographic effects of China’s one-child policy, which the government is now trying to counteract through new technocratic incentives to increase fertility.
Finally, epistemological criticisms of technocracy reject its picture of knowledge as an oversimplification of how complex social systems function. Offered by political thinkers as politically wide-ranging as the classical liberal Hayek and the leftist anarchist James C. Scott, these criticisms emphasize the ineliminable role of know-how and local and tacit knowledge in social practice and behavior — types of knowledge that cannot easily be captured in the statistical aggregates needed for central planning. Consider factory work. Scott describes a “work-to-rule strike,” in which workers “reverted to following the inefficient procedures specified by engineers, knowing that it would cost the company valuable time and quality, rather than continuing the more expeditious practices they had long ago devised on the job.” The workers “achieve the practical effect of a walkout while remaining on the job and following their instructions to the letter.” Scott takes this to illustrate the point — emphasized by Hayek in his attack on central planning — that even allegedly “rote” labor involves informal know-how that cannot be captured by formal rules or standardized processes.
A related criticism is that experts — especially when they are under pressure from or endowed with substantial political power — are susceptible to bias or self-interest. If experts are no exception to the rule that power corrupts, then how can they be trusted to rule dispassionately and disinterestedly, as technocracy requires? As economist Glen Weyl wrote in his August 2019 essay “Why I Am Not a Technocrat”:
… formal systems of knowledge creation always have their limits and biases. They always leave out important considerations that are only discovered later and that often turn out to have a systematic relationship to the limited cultural and social experience of the groups developing them….
Technocracy divorced from the need for public communication and accountability is thus a dangerous ideology that distracts technical experts from the valuable role they can play by tempting them to assume undue, independent power and influence.
We are right, then, to reject technocracy — but that does not require rejecting expertise in governance. Scientific and technical expertise, in particular, are indispensable for governing and making policy in a society such as ours, one pervaded and shaped by science and technology. Expertise is dangerous only when wedded to technocracy — when experts are entrusted with political power and subject to little or no democratic accountability.
Experts in the Executive
If you want experts to play a role in politics, you have to decide where in the system they live, what they’re empowered to do, and what forms of democratic accountability they should be subject to. In the United States government, most experts currently live in executive branch agencies and are fairly insulated from democratic accountability. This includes an army of technical experts — there are 300,000 federal workers classified as having Science, Technology, Engineering, and Math occupations — in every imaginable subject, from network engineers at the Federal Communications Commission, to ecologists at the Environmental Protection Agency, to economists at the Securities and Exchange Commission, to particle physicists at the Nuclear Regulatory Commission, to aeronautical engineers and cosmologists at NASA, to marine biologists at the National Oceanic and Atmospheric Administration, to microbiologists, epidemiologists, and pharmacologists at the Department of Health and Human Services. Some of these experts serve in purely advisory roles, others conduct research, while still others help form regulatory policies.
The inclusion of experts in executive agencies is not a problem in itself, and is even necessary. As the R Street Institute’s Philip Wallach argues, in a republic of 325 million, it is both unrealistic and unfair to expect that all elected officials will be experts, or that all experts will be elected officials. The overwhelming volume and specificity of regulatory minutiae — covering a vast array of often highly technical issues — is far too complex for Congress to manage entirely on its own. Moreover, insulation from political forces can sometimes be a useful way to ward off abuse and corruption. For instance, an agency like the Federal Election Commission, which enforces campaign finance law, is probably better off with some political independence. Nor are all regulatory processes entirely technocratic, as federal agencies must generally collect and consider public comments on new proposed regulations. Congress may also invoke its powers, such as through the Congressional Review Act, to overrule proposed regulations (although it rarely chooses to do so).
Congress’s delegation to expert bureaucracies becomes problematic, however, when these agencies, wielding lawlike powers, act unaccountably and beyond the reach of elected officials. As the Cato Institute’s Gene Healy puts it, “our Constitution’s Framers preferred to leave national policy in the hands of bums you can vote out instead of bums you can’t.” The Constitution empowers Congress, not the executive branch, to make law, and it makes Congress accountable to the people. “In republican government,” wrote the author of Federalist No. 51, “the legislative authority necessarily predominates.”
In practice, however, things don’t work this way. Berin Szóka explains in a recent Cato Unbound article:
In theory, the Constitution vests the legislative power solely in the hands of Congress, the Executive branch implements or enforces the laws, and the Judicial branch resolves disputes about what the laws mean. In practice, Congress “delegates” the vast bulk of essentially legislative decisionmaking to a sprawling system of administrative agencies, some “independent” and some within the executive branch, and it’s these agencies that do the vast bulk of policymaking.
As Senator Ben Sasse (R-Neb.) put it in a 2018 Wall Street Journal article: “In the U.S. system, the legislative branch is supposed to be the center of politics. Why isn’t it? For the past century, more legislative authority has been delegated to the executive branch every year.”
Arguably, the populism that characterizes so much of our politics today is, in part, a response to the slow drift of power away from the people’s representatives, particularly in Congress, to the administrative state. This delegation of Congress’s legislative duties has effectively transformed major policy issues, from health care reform to energy policy, into technical matters to be settled by expert agencies, rather than matters of political moment to be disputed by democratically elected representatives in Congress. This expert capture, in turn, exacerbates the sentiment — and the reality — that many of the pressing problems of social and political life are out of reach of ordinary citizens and their elected representatives. In its way, President Trump’s paranoid frustration with the “deep state” reflects popular sentiment that technocracy has gone too far.
Congress’s Weakness
With the growth of the administrative state, Congress has seemed increasingly ineffective at legislating, and conservative lawmakers who fretted about executive overreach during the Obama years have done little to reassert Congress’s prerogative. The severe state of congressional dysfunction is hard to miss. At 18 percent, Congress’s job approval rating is well under half of President Trump’s 42 percent.
The perception that Congress is inept and ineffective is perhaps most acute in matters of science and technology. As Senator Ron Johnson (R-Wis.) has put it, “Most of us are Gilligan; there aren’t a whole lot of Professors.” For illustration, just watch members of Congress question Mark Zuckerberg in an attempt to learn how Facebook works, or watch Google’s Sundar Pichai explain at a congressional hearing that his company doesn’t make the iPhone.
Is it realistic or even desirable to put Congress at the helm of scientific and technical policy? Do we want to be governed by legislative-branch Gilligans rather than executive-branch professors?
It is important to remember that Congress is not a technocratic body. Not only is it not populated with technical experts (it has only two scientists and eleven engineers among its members), it was also designed to be inefficient. This is because Congress is, or at least is supposed to be, a site of deliberation, disagreement, and conflict, directly responsive to democratic pressures, capable of accommodating dissent. Congress’s current dysfunction arguably arises not because members won’t get along and work together, but because the institution no longer functions as a genuinely deliberative body — where disagreements, especially passionate disputes over fundamental values, are heard and compromises reached. A core problem with Congress today is that there is too little disagreement, not too much. Pluralism and the absence of a strong majority are political goods, even if they don’t always issue in the policies favored by technical experts.
Yet there is a case to be made that Congress is actually suffering from an undersupply of expertise, which can be a vital ingredient for a robust deliberative process. Lacking this capacity, Congress has an incentive to defer to interest groups, outside organizations, and executive agencies. According to a new report by Harvard political scholars on how to bolster Congress’s capacity to legislate on science and technology issues, “Congress does not give itself the human capital and funding necessary to be an effective co-equal branch of the federal government.” With less in-house expertise, Congress has delegated more and more to experts in federal agencies.
The expansive administrative apparatus of the executive branch suffers much more from the problems of technocracy than does Congress, which remains more pluralistic and inclusive. Unfortunately, as executive agencies have grown ever more preoccupied with technical matters, Congress has become less capable of keeping up. This imbalance was already identified in the 1960s, when members of Congress started to raise concerns about their inability to keep pace with the executive branch, particularly on matters of science and technology. As Representative George P. Miller (D-Cal.) remarked in a 1963 hearing in the House Committee on Science and Astronautics:
We are not the rubber stamps of the administrative branch of the Government. Whereas we will be guided, we want to take the advice of competent people within the administration … nevertheless we recognize our responsibility to the people and the necessity for making some independent judgments. This is the thing we are trying to get at when we do not particularly have the facilities nor the resources that the executive department of the Government has.
Out of these discussions came a reassertion of congressional capacity in the early 1970s. The Legislative Reorganization Act of 1970, which reformed a number of congressional procedures, also expanded what came to be called the Congressional Research Service, an agency dedicated to offering policy and legal analysis to Congress. The Technology Assessment Act of 1972 established the Office of Technology Assessment (OTA), whose unique role was to help Congress understand the real and potential impacts of new technologies. And the creation of the Congressional Budget Office in 1975 strengthened Congress’s control of the budget. Together, these efforts provided a counterbalance to the executive branch, in part by providing Congress with experts it could rely on for delivering information on scientific and technical matters.
But in recent decades, the balance of power has tipped even further toward the administrative state, and thus toward technocracy. There is now, however, a growing conversation in Washington about how to tip the balance back toward Congress.
Part of that conversation is the idea of reviving a congressional technology assessment office. It has featured in some recent legislative actions in Congress, such as an appropriations bill that includes $6 million in funding to revive the agency (as of this writing, it is unclear whether this provision of the bill will move forward), and in the Office of Technology Assessment Improvement and Enhancement Act sponsored by Representative Mark Takano (D-Cal.) and Senator Thom Tillis (R-N.C.). The idea has also appeared in the platforms of two presidential candidates, Senator Elizabeth Warren (D-Mass.) and Andrew Yang. A new Science, Technology Assessment, and Analytics team, based in the Government Accountability Office, has also been formed to assist Congress, taking on part of OTA’s original mission.
While there has been bipartisan support, the idea of creating a new expert body is still a cause for anxiety to many on the right, who fear it might institutionalize rule by experts. But a closer look at the agency’s two decades of assistance to Congress will help us to understand why a body of experts working for elected officials, rather than telling them what to do, need not succumb to the risk of technocracy — and may even serve to counter it.
Experts for Congress
From 1974 to 1995, the Office of Technology Assessment served as a science and technology think tank within Congress. It provided, in the words of Representative Emilio Daddario (D-Conn.), “independent means of obtaining necessary and relevant technological information for the Congress, without having to depend almost solely on the executive branch.” Daddario, who had proposed the legislation creating the OTA and went on to run the office for its first few years, believed that “it is only with this capability that Congress can assure its role as an equal branch in our federal structure.”
Rather than outsourcing decision-making to experts, the office’s role was to empower democratically elected representatives with authoritative information about the tradeoffs of different policy approaches, leaving the ultimate political judgments to them. Importantly, it did not make hard recommendations, write legislation, or set policy. A 1982 draft of an agency handbook reinforced this framing for its staffers:
It is the people, not the experts, who are the ultimate decisionmakers; and it is the Congress, not scientists, who translate information into policy. OTA’s mandate is thus to lay out options, not recommendations; and to expand rather than limit the options which Congress can choose.
In other words, the agency was not a technocratic body, since its technology assessment was helping a political process led by elected officials, rather than placing lawmaking power in the hands of experts.
In its last year, the Office of Technology Assessment had a budget of $37 million in today’s dollars, and nearly two hundred full-time equivalent employees. Its core product was its technology assessments — authoritative, multi-disciplinary, expert-reviewed reports that assessed the probable short- and long-term effects of emerging technologies. At the request of Congress, the OTA produced reports on aging nuclear plants, protection of digital medical records, chemical weapons, civilian satellite systems, police body armor, the Human Genome Project, pest control, and the neuroscience of mental disorders, to name a few.
These reports empowered Congress with better information so that it was less reliant on expert input from industry groups, think tanks, or executive agencies — sources that have a tendency to omit key facts or put a spin on the information they provide. The policy impacts of the reports have been wide-ranging. They helped members of Congress in developing new legislation, throwing out existing proposals, and preparing for hearings. OTA analyses helped pave the way for policies that shaped the digital age, including on electronic health record privacy, encryption, and auctions to allocate the wireless spectrum. For instance, its 1985 report Electronic Surveillance and Civil Liberties highlighted the lack of a legal framework for “electronic mail,” culminating in Congress extending greater privacy protections to electronic communications with the passage of the Electronic Communications Privacy Act the following year.
Despite the agency’s usefulness, it was not free from populist criticism. Conservative pundit Donald Lambro blasted the agency in a chapter of his 1980 book Fat City, criticizing it for producing overly technical and duplicative reports that nobody reads — a waste of taxpayer money. In fact, there is good evidence that people did read them. In 1980 alone, the Government Printing Office sold 48,000 OTA reports. One could still argue the office’s work was duplicative of federal agencies. However, such duplication was by design: The point of the Office of Technology Assessment was to provide Congress with the capability for independent analyses, free of distortion by the interests of the executive. It functioned as a sort of counterpart to the White House Office of Science and Technology Policy, just as the Congressional Budget Office functions as Congress’s counterpart to the executive branch’s Office of Management and Budget. In a system of checks and balances, some degree of duplication is necessary.
Even if the agency didn’t make policy recommendations and tried to be objective, bias still threatened to creep in. In its early days, Republicans voiced concerns that the office would be captured by Democrats — particularly under the influence of Senator Ted Kennedy — and that the general orientation of its staff was leftward. While it had built a reputation for nonpartisanship in the 1980s and 1990s, these lingering conservative concerns were not paranoid. A 1993 self-assessment of the OTA by its own staff suggested that its reports skewed toward “increased Federal intervention rather than market solutions or greater delegation of responsibility to state and local governments.”
Partisan bias does not seem to have been a concern shared by all Republicans, however. Senator Orrin Hatch (R-Utah) said on the Senate floor in 1995 that the office was “the one arm of Congress that does give us, to the best of their ability, unbiased, scientific and technical expertise that we could not otherwise get.” And Senator Chuck Grassley (R-Iowa) lauded it as “one of the few truly neutral sources of information for the Congress.”
The Office of Technology Assessment served as a model for structuring legislative science advice in other parts of the world. Since the 1980s, nearly two dozen different technology assessment offices have been created in Europe. These borrowed from the OTA model, but adapted it to local circumstances and developed new approaches.
Yet the agency’s achievements and influence were not enough to keep it from being defunded in 1995 as part of sweeping cuts to Congress made by the new Republican majority. The primary motivation for these cuts — which reversed gains made in the early 1970s for countering executive power — was to provide the moral authority to make deep cuts elsewhere in government, including the proposed elimination of cabinet-level departments. Heritage Foundation scholar David M. Mason testified that year that the office did “good work and useful work” but killing it “will make the job of eliminating other government functions far easier.”
Defunding the agency, with its meager budget, was better for conservative symbolism than for directly constraining federal spending. In a debate in the Senate in 1995, Senator Grassley highlighted a few of the OTA’s accomplishments:
OTA actually helped us save money. OTA’s study of the Social Security Administration plan to purchase computers saved $368 million. OTA’s cautions … about the Synthetic Fuel Corporation helped to secure $60 billion of savings…. OTA’s studies of preventive services for Medicare have assisted legislative decisions for the past 15 years — studies of pneumonia vaccines and pap smears that showed Medicare would save money by paying for these medical services for the elderly, and Medicare patients would save money. Both proposals passed as legislation. OTA’s work on nuclear power plants has played a central role in eliminating poorly conceived and burdensome regulations on the U.S. power industry.
While the office’s defunding, along with other cuts to Congress, was an attempt at shrinking government, the overall trend has been the reverse, with the federal government continuing to balloon in size and scope. More importantly, Congress effectively gave away a powerful tool for conducting research on technical questions relevant to a range of policy issues.
Making Expertise More Democratic
While the Office of Technology Assessment was not technocratic, it also wasn’t perfect. Reimagining it today might help us build democratic institutions, both within and outside of Congress, that better embody a non-technocratic, or perhaps post-technocratic, view of expertise.
Technology assessment, as practiced by the OTA, partly contributed to the myth that scientific and technical experts operate in a value-neutral, apolitical space — that scientific experts are by definition impartial. As Richard Sclove wrote in the 2010 report “Reinventing Technology Assessment”:
In striving to produce studies that would be perceived as unbiased, the OTA sometimes contributed to the misleading impression that public policy analysis can be objective or value-free. However, whether or not there are ever circumstances in which objectivity is attainable or even conceivable — and those are enduringly contested questions in philosophy — assuredly objectivity is not achievable in the time-limited, interest-laden, hothouse atmosphere of legislative or other governmental advising.
According to what is sometimes called the “linear model” of scientific expertise, scientific practice takes place in a self-contained and value-free context, and its results are then delivered over into the political arena. The question then becomes whether and to what extent the lay public — legislators and voters — listen to the experts.
But philosophers, historians, and sociologists of science have long scorned the idea that science is an entirely value-free enterprise. Rather than a class apart, scientists are enmeshed in the same moral, political fabric as the rest of us. One need not believe that scientific knowledge is “socially constructed” to recognize that social and political values and contexts shape science — especially those scientific domains, such as medicine, ecology, and economics, that have the most direct implications for public policy.
This is most obvious when it comes to applying scientific knowledge to the messy realm of democratic politics, in which divergent viewpoints and value systems are often in open conflict. As Daniel Sarewitz points out, “Particular sets of facts may stand out as particularly compelling, coherent, and useful in the context of one set of values and interests, yet in another appear irrelevant to the point of triviality.”
A reimagined technology assessment office should be open about the role that values play in the policymaking process and even in science itself. It should also be unafraid to consider the social and ethical implications of the technical issues it examines. This approach could help guard against accusations of hidden bias by abandoning the implausible idea that technology assessment is value-neutral, while ensuring that there are mechanisms in place to prevent rank partisanship.
This does not mean technology assessment should put its thumb on the scale for particular policy prescriptions so much as invite disagreement from diverse stakeholders over both means and ends. In doing so, it can be a counterbalance to the technocratic tendencies of expertise in the executive branch, where disagreements between diverse viewpoints are more likely to be swept under the rug in the interest of advancing an agenda, whether of the administration or the experts themselves. That experts will inevitably have biases arising from their own political prejudices or prior commitments is a good reason to worry about experts acting with little democratic accountability. But it is not a reason to be wary of using experts in Congress — it is precisely a reason to have them there, to inform democratically accountable lawmaking and to foster productive disagreement.
Conservatives are right to worry about the threat of technocracy, about undue deference to experts to run our government. But to a significant extent that threat is already upon us, in the form of a Congress that has willingly handed over its legislative duties to the distended bureaucracy of a bloated executive branch. The solution is not to dispense with experts, whom we sorely need. The legislature has for too long been hampered by a lack of informed advice as it deliberates on technical questions. A new technology assessment agency at the disposal of Congress would help to restore the vitality of this atrophied body — its ability to legislate well, and its willingness to legislate at all. And it would put expertise back where it belongs: in the service of officials directly accountable to the citizenry.
Zach Graves is the head of policy at Lincoln Network and an associate fellow at the R Street Institute. M. Anthony Mills is director of science policy at the R Street Institute.
Zach Graves and M. Anthony Mills, “Reviving Expertise in a Populist Age,” The New Atlantis, Number 60, Fall 2019, pp. 22-34.