Wednesday, April 16, 2014

The End of Employment

Nothing is easier, as the Long Descent begins to pick up speed around us, than giving in to despair—and nothing is more pointless. Those of us who are alive today are faced with the hugely demanding task of coping with the consequences of industrial civilization’s decline and fall, and saving as many as possible of the best achievements of the last few centuries so that they can cushion the descent and enrich the human societies of the far future.  That won’t be easy; so?  The same challenge has been faced many times before, and quite often it’s been faced with relative success.

The circumstances of the present case are in some ways more difficult than past equivalents, to be sure, but the tools and the knowledge base available to cope with them are almost incomparably greater. All in all, factoring in the greater challenges and the greater resources, it’s probably fair to suggest that the challenge of our time is about on a par with other eras of decline and fall.  The only question that still remains to be settled is how many of the people who are awake to the imminence of crisis will rise to the challenge, and how many will fail to do so.

The suicide of peak oil writer Mike Ruppert two days ago puts a bit of additional emphasis on that last point. I never met Ruppert, though we corresponded back in the days when his “From The Wilderness” website was one of the few places on the internet that paid any attention at all to peak oil, and I don’t claim to know what personal demons drove him to put a bullet through his brain. Over the last eight years, though, as the project of this blog has brought me into contact with more and more people who are grappling with the predicament of our time, I’ve met a great many people whose plans for dealing with a postpeak world amount to much the same thing.  Some of them are quite forthright about it, which at least has the virtue of honesty.  Rather more of them conceal the starkness of that choice behind a variety of convenient evasions, the insistence that we’re all going to die soon anyway being far and away the most popular of these just now.

I admit to a certain macabre curiosity about how that will play out in the years ahead. I’ve suspected for a while now, for example, that the baby boomers will manage one final mediagenic fad on the way out, and the generation that marked its childhood with coonskin caps and hula hoops and its puberty with love beads and Beatlemania will finish with a fad for suicide parties, in which attendees reminisce to the sound of the tunes they loved in high school, then wash down pills with vodka and help each other tie plastic bags over their heads. Still, I wonder how many people will have second thoughts once every other option has gone whistling down the wind, and fling themselves into an assortment of futile attempts to have their cake when they’ve already eaten it right down to the bare plate. We may see some truly bizarre religious movements, and some truly destructive political ones, before those who go around today insisting that they don’t want to live in a deindustrial world finally get their wish.

There are, of course, plenty of other options. The best choice for most of us, as I’ve noted here in previous posts, follows a strategy I’ve described wryly as “collapse first and avoid the rush”: getting ahead of the curve of decline, in other words, and downshifting to a much less extravagant lifestyle while there’s still time to pick up the skills and tools needed to do it competently. Despite the strident insistence from defenders of the status quo that anything less than business as usual amounts to heading straight back to the caves, it’s entirely possible to have a decent and tolerably comfortable life on a tiny fraction of the energy and resource base that middle class Americans think they can’t possibly do without. Mind you, you have to know how to do it, and that’s not the sort of knowledge you can pick up from a manual, which is why it’s crucial to start now and get through the learning curve while you still have the income and the resources to cushion the impact of the inevitable mistakes.

This is more or less what I’ve been saying for eight years now. The difficulty at this stage in the process, though, is that a growing number of Americans are running out of time. I don’t think it’s escaped the notice of many people in this country that despite all the cheerleading from government officials, despite all the reassurances from dignified and clueless economists, despite all those reams of doctored statistics gobbled down whole by the watchdogs-turned-lapdogs of the media and spewed forth undigested onto the evening news, the US economy is not getting better.  Outside a few privileged sectors, times are hard and getting harder; more and more Americans are slipping into the bleak category of the long-term unemployed, and a great many of those who can still find employment work at part-time positions for sweatshop wages with no benefits at all.

Despite all the same cheerleading, reassurances, and doctored statistics, furthermore, the US economy is not going to get better: not for more than brief intervals by any measure, and not at all if “better”  means returning to some equivalent of America’s late 20th century boomtime. Those days are over, and they will not return. That harsh reality is having an immediate impact on some of my readers already, and that impact will only spread as time goes on. For those who have already been caught by the economic downdrafts, it’s arguably too late to collapse first and avoid the rush; willy-nilly, they’re already collapsing as fast as they can, and the rush is picking up speed around them as we speak.

For those who aren’t yet in that situation, the need to make changes while there’s still time to do so is paramount, and a significant number of my readers seem to be aware of this. One measure of that is the number of requests for personal advice I field, which has gone up steeply in recent months. Those requests cover a pretty fair selection of the whole gamut of human situations in a failing civilization, but one question has been coming up more and more often of late: the question of what jobs might be likely to provide steady employment as the industrial economy comes apart.

That’s a point I’ve been mulling over of late, since its implications intersect the whole tangled web in which our economy and society are snared just now. In particular, it assumes that the current way of bringing work together with workers, and turning the potentials of human mind and muscle toward the production of goods and services, is likely to remain in place for the time being, and it’s becoming increasingly clear to me that this won’t be the case.

It’s important to be clear on exactly what’s being discussed here. Human beings have always had to produce goods and services to stay alive and keep their families and communities going; that’s not going to change. In nonindustrial societies, though, most work is performed by individuals who consume the product of their own labor, and most of the rest is sold or bartered directly by the people who produce it to the people who consume it. What sets the industrial world apart is that a third party, the employer, inserts himself into this process, hiring people to produce goods and services and then selling those goods and services to buyers.  That’s employment, in the modern sense of the word; most people think of getting hired by an employer, for a fixed salary or wage, to produce goods and services that the employer then sells to someone else, as the normal and natural state of affairs—but it’s a state of affairs that is already beginning to break down around us, because the surpluses that make that kind of employment economically viable are going away.

Let’s begin with the big picture. In any human society, whether it’s a tribe of hunter-gatherers, an industrial nation-state, or anything else, people apply energy to raw materials to produce goods and services; this is what we mean by the word “economy.” The goods and services that any economy can produce are strictly limited by the energy sources and raw materials that it can access.

A principle that ecologists call Liebig’s law of the minimum is relevant here: the amount of anything  that a given species or ecosystem can produce in a given place and time is limited by whichever resource is in shortest supply. Most people get that when thinking about the nonhuman world; it makes sense that plants can’t use extra sunlight to make up for a shortage of water, and that you can’t treat soil deficient in phosphates by adding extra nitrates. It’s when you apply this same logic to human societies that the mental gears jam up, because we’ve been told so often that one resource can always be substituted for another that most people believe it without a second thought.
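Liebig’s law can be stated in a few lines of arithmetic: divide each available resource by the amount of it that one unit of output consumes, and the smallest quotient sets the ceiling. A toy sketch in Python, with all numbers invented for the sake of the example:

```python
# Toy illustration of Liebig's law of the minimum: output is capped by
# whichever resource is scarcest relative to what one unit of output
# requires. All quantities below are invented for illustration.

def liebig_output(available, per_unit):
    """Maximum units of output, given the stock of each resource and
    the amount of each resource one unit of output consumes."""
    return min(available[r] / per_unit[r] for r in per_unit)

# A hypothetical economy: raw materials and labor to spare, energy short.
available = {"energy": 100, "iron": 500, "labor": 400}
per_unit  = {"energy": 10,  "iron": 5,   "labor": 8}

print(liebig_output(available, per_unit))  # 10.0 -- energy is the limiting factor
```

Note that piling up more iron or more labor leaves the answer unchanged; only more energy raises it, which is the whole point of the law.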

What’s going on here, though, is considerably more subtle than current jargon reflects. Examine most of the cases of resource substitution that find their way into economics textbooks, and you’ll find that what’s happened is that a process of resource extraction that uses less energy on a scarcer material has been replaced by another process that takes more energy but uses more abundant materials. The shift from high-quality iron ores to low-grade taconite that reshaped the iron industry in the 20th century, for example, was possible because ever-increasing amounts of highly concentrated energy could be put into the smelting process without making the resulting iron too expensive for the market.

The point made by this and comparable examples is applicable across the board to what I’ve termed technic societies, that subset of human societies—ours is the first, though probably not the last—in which a large fraction of total energy per capita comes from nonbiological sources and is put to work by way of  machines rather than human or animal muscles.  Far more often than not, in such societies, concentrated energy is the limiting resource. Given an abundant enough supply of concentrated energy at a low enough price, it would be possible to supply a technic society with raw materials by extracting dissolved minerals from seawater or chewing up ordinary rock to get a part per million or so of this or that useful element. Lacking that—and there are good reasons to think that human societies will always be lacking that—access to concentrated energy is where Liebig’s law bites down hard.

Another way to make this same point is to think of how much of any given product a single worker can make in a day using a set of good hand tools, and comparing that to the quantity of the same thing that the same worker could make using the successive generations of factory equipment, from the steam-driven and belt-fed power tools of the late 19th century straight through to the computerized milling machines and assembly-line robots of today. The difference can be expressed most clearly as a matter of the amount of energy being applied directly and indirectly to the manufacturing process—not merely the energy driving the tools through the manufacturing process, but the energy that goes into  manufacturing and maintaining the tools, supporting the infrastructure needed for manufacture and maintenance, and so on through the whole system involved in the manufacturing process.

Maverick economist E.F. Schumacher, whose work has been discussed in this blog many times already, pointed out that the cost per worker of equipping a workplace is one of the many crucial factors that  mainstream economic thought invariably neglects. That cost is usually expressed in financial terms, but underlying the abstract tokens we call money is a real cost in energy, expressed in terms of the goods and services that have to be consumed in the process of equipping and maintaining the workplace. If you have energy to spare, that’s not a problem; if you don’t, on the other hand, you’re actually better off using a less complex technology—what Schumacher called “intermediate technology” and the movement in which I studied green wizardry thirty years ago called “appropriate technology.”

The cost per worker of equipping a workplace, in turn, also has a political dimension—a point that Schumacher did not neglect, though nearly all other economists pretend that it doesn’t exist. The more costly it is to equip a workplace, the more certain it is that workers won’t be able to set themselves up in business, and the more control the very rich will then have over economic production and the supply of jobs. As Joseph Tainter pointed out in The Collapse of Complex Societies, social complexity correlates precisely with social hierarchy; one of the functions of complexity, in the workplace as elsewhere, is thus to maintain existing social pecking orders.

Schumacher’s arguments, though, focused on the Third World nations of his own time, which had very little manufacturing capacity at all—most of them, remember, had been colonies of European empires, assigned the role of producing raw materials and buying finished products from the imperial center as part of the wealth pump that drove them into grinding poverty while keeping their imperial overlords rich. He focused on advising client nations on how to build their own economies and extract themselves from the political grip of their former overlords; those nations’ governments were usually all too eager to import high-tech factories which their upper classes inevitably controlled. The situation is considerably more challenging when your economy is geared to immense surpluses of concentrated energy, and the supply of energy begins to run short—and of course that’s the situation we’re in today.

Even if it were just a matter of replacing factory equipment, that would be a huge challenge, because all those expensive machines—not to mention the infrastructure that manufactures them, maintains them, supplies them, and integrates their products into the wider economy—count as sunk costs, subject to what social psychologists call the “Concorde fallacy,” the conviction that it’s less wasteful to keep on throwing money into a failing project than to cut your losses and do something else. The real problem is that it’s not just factory equipment; the entire economy has been structured from the ground up to use colossal amounts of highly concentrated energy, and everything that’s been invested in that economy since the beginning of the modern era thus counts as a sunk cost to one degree or another.

What makes this even more challenging is that very few people in the modern industrial world actually produce goods and services for consumers, much less for themselves, by applying energy to raw materials. The vast majority of today’s employees, and in particular all those who have the wealth and influence that come with high social status, don’t do this.  Executives, brokers, bankers, consultants, analysts, salespeople—well, I could go on for pages: the whole range of what used to be called white-collar jobs exists to support the production of goods and services by the working joes and janes managing all the energy-intensive machinery down there on the shop floor. So does the entire vast maze of the financial industry, and so do the legions of government bureaucrats—local, state, and federal—who manage, regulate, or oversee one or another aspect of economic activity.

All these people are understandably just as interested in keeping their jobs as the working joes and janes down there on the shop floor, and yet the energy surpluses that made it economically viable to perch such an immensely complex infrastructure on top of the production of goods and services for consumers are going away. The result is a frantic struggle on everyone’s part to make sure that the other guy loses his job first. It’s a struggle that all of them will ultimately lose—as the energy surplus needed to support it dwindles away, so will the entire system that’s perched on that high but precarious support—and so, as long as that system remains in place, getting hired by an employer, paid a regular wage or salary, and given work and a workplace to produce goods and services that the employer then sells to someone else, is going to become increasingly rare and increasingly unrewarding. 

That transformation is already well under way. Nobody I know personally who works for an employer in the sense I’ve just outlined is prospering in today’s American economy.  Most of the people I know who are employees in the usual sense of the word are having their benefits slashed, their working conditions worsened, their hours cut, and their pay reduced by one maneuver or another, and the threat of being laid off is constantly hovering over their heads.  The few exceptions are treading water and hoping to escape the same fate. None of this is accidental, and none of it is merely the result of greed on the part of the very rich, though admittedly the culture of executive kleptocracy at the upper end of the American social pyramid is making things a good deal worse than they might otherwise be.

The people I know who are prospering right now are those who produce goods and services for their own use, and provide goods and services directly to other people, without having an employer to provide them with work, a workplace, and a regular wage or salary. Some of these people have to stay under the radar screen of the current legal and regulatory system, since the people who work in that system are trying to preserve their own jobs by making life difficult for those who try to do without their services. Others can do things more openly. All of them have sidestepped as many as possible of the infrastructure services that are supposed to be part of an employee’s working life—for example, they aren’t getting trained at universities, since the US academic industry these days is just another predatory business sector trying to keep itself afloat by running others into the ground, and they aren’t going to banks for working capital for much the same reason. They’re using their own labor, their own wits, and their own personal connections with potential customers, to find a niche in which they can earn the money (or barter for the goods) they need or want.

I’d like to suggest that this is the wave of the future—not least because this is how economic life normally operates in nonindustrial societies, where the vast majority of people in the workforce are directly engaged in the production of goods and services for themselves and their own customers.  The surplus that supports all those people in management, finance, and so on is a luxury that nonindustrial societies don’t have. In the most pragmatic of economic senses, collapsing now and avoiding the rush involves getting out of a dying model of economics before it drags you down, and finding your footing in the emerging informal economy while there’s still time to get past the worst of the learning curve.

Playing by the rules of a dying economy, that is, is not a strategy with a high success rate or a long shelf life. Those of my readers who are still employed in the usual sense of the term may choose to hold onto that increasingly rare status, but it’s not wise for them to assume that such arrangements will last indefinitely; using the available money and other resources to get training, tools, and skills for some other way of getting by would probably be a wise strategy. Those of my readers who have already fallen through the widening cracks of the employment economy will have a harder row to hoe in many cases; for them, the crucial requirement is getting access to food, shelter, and other necessities while figuring out what to do next and getting through any learning curve that might be required.

All these are challenges; still, like the broader challenge of coping with the decline and fall of a civilization, they are challenges that countless other people have met in other places and times. Those who are willing to set aside currently popular fantasies of entitlement and the fashionable pleasures of despair will likely be in a position to do the same thing this time around, too.

Wednesday, April 09, 2014

The Four Industrial Revolutions

Last week’s post on the vacuous catchphrases that so often substitute for thought in today’s America referenced only a few examples of the species under discussion.  It might someday be educational, or at least entertaining, to write a sequel to H.L. Mencken’s The American Credo, bringing his choice collection of thoughtstoppers up to date with the latest fashionable examples; still, that enticing prospect will have to wait for some later opportunity. In the meantime, those who liked my suggestion of Peak Oil Denial Bingo will doubtless want to know that cards can now be downloaded for free.

What I’d like to do this week is talk about another popular credo, one that plays a very large role in blinding people nowadays to the shape of the future looming up ahead of us all just now. In an interesting display of synchronicity, it came up in a conversation I had while last week’s essay was still being written. A friend and I were talking about the myth of progress, the facile and popular conviction that all human history follows an ever-ascending arc from the caves to the stars; my friend noted how disappointed he’d been with a book about the future that backed away from tomorrow’s challenges into the shelter of a comforting thoughtstopper:  “Technology will always be with us.”

Let’s take a moment to follow the advice I gave in last week’s post and think about what, if anything, that actually means. Taken in the most literal sense, it’s true but trivial. Toolmaking is one of our species’ core evolutionary strategies, and so it’s a safe bet that human beings will have some variety of technology or other as long as our species survives. That requirement could just as easily be satisfied, though, by a flint hand axe as by a laptop computer—and a flint hand axe is presumably not what people who use that particular thoughtstopper have in mind.

Perhaps we might rephrase the credo, then, as “modern technology will always be with us.” That’s also true in a trivial sense, and false in another, equally trivial sense. In the first sense, every generation has its own modern technology; the latest up-to-date flint hand axes were, if you’ll pardon the pun, cutting-edge technology in the time of the Neanderthals.  In the second sense, much of every generation’s modern technology goes away promptly with that generation; whichever way the future goes, much of what counts as modern technology today will soon be no more modern and cutting-edge than eight-track tape players or Victorian magic-lantern projectors. That’s as true if we get a future of continued progress as it is if we get a future of regression and decline.

Perhaps our author means something like “some technology at least as complex as what we have now, and fulfilling most of the same functions, will always be with us.” This is less trivial but it’s quite simply false, as historical parallels show clearly enough. Much of the technology of the Roman era, from wheel-thrown pottery to central heating, was lost in most of the western Empire and had to be brought in from elsewhere centuries later.  In the dark ages that followed the fall of Mycenean Greece, even so simple a trick as the art of writing was lost, while the history of Chinese technology before the modern era is a cycle in which many discoveries made during the heyday of each great dynasty were lost in the dark age that followed its decline and fall, and had to be rediscovered when stability and prosperity returned. For people living in each of these dark ages, technology comparable to what had been in use before the dark age started was emphatically not always with them.

For that matter, who is the “us” that we’re discussing here? Many people right now have no access to the technologies that middle-class Americans take for granted. For all the good that modern technology does them, today’s rural subsistence farmers, laborers in sweatshop factories, and the like might as well be living in some earlier era. I suspect our author is not thinking about such people, though, and the credo thus might be phrased as “some technology at least as complex as what middle-class people in the industrial world have now, providing the same services they have come to expect, will always be available to people of that same class.” Depending on how you define social classes, that’s either true but trivial—if “being middle class” equals “having access to the technology today’s middle classes have,” no middle class people will ever be deprived of such a technology because, by definition, there will be no middle class people once the technology stops being available—or nontrivial but clearly false—plenty of people who think of themselves as middle class Americans right now are losing access to a great deal of technology as economic contraction deprives them of their jobs and incomes and launches them on new careers of downward mobility and radical impoverishment.

Well before the analysis got this far, of course, anyone who’s likely to mutter the credo “Technology will always be with us” will have jumped up and yelled, “Oh for heaven’s sake, you know perfectly well what I mean when I use that word! You know, technology!”—or words to that effect. Now of course I do know exactly what the word means in that context: it’s a vague abstraction with no real conceptual meaning at all, but an ample supply of raw emotional force.  Like other thoughtstoppers of the same kind, it serves as a verbal bludgeon to prevent people from talking or even thinking about the brittle, fractious, ambivalent realities that shape our lives these days. Still, let’s go a little further with the process of analysis, because it leads somewhere that’s far from trivial.

Keep asking a believer in the credo we’re discussing the sort of annoying questions I’ve suggested above, and sooner or later you’re likely to get a redefinition that goes something like this: “The coming of the industrial revolution was a major watershed in human history, and no future society of any importance will ever again be deprived of the possibilities opened up by that revolution.” Whether or not that turns out to be true is a question nobody today can answer, but it’s a claim worth considering, because history shows that enduring shifts of this kind do happen from time to time. The agricultural revolution of c. 9000 BCE and the urban revolution of c. 3500 BCE were both decisive changes in human history.  Even though there were plenty of nonagricultural societies after the first, and plenty of nonurban societies after the second, the possibilities opened up by each revolution were always options thereafter, when and where ecological and social circumstances permitted.

Some 5500 years passed between the agricultural revolution and the urban revolution, and since it’s been right around 5500 years since the urban revolution began, a case could probably be made that we were due for another. Still, let’s take a closer look at the putative third revolution. What exactly was the industrial revolution? What changed, and what future awaits those changes?
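For what it’s worth, the interval arithmetic here checks out; a two-line sanity check, using the approximate dates given above and ignoring the missing year zero for simplicity:

```python
# Approximate dates of the two earlier revolutions, as given in the text.
# BCE dates are written as negative years; the missing year zero is ignored.
agricultural = -9000   # agricultural revolution, c. 9000 BCE
urban        = -3500   # urban revolution, c. 3500 BCE
now          = 2014    # the year this essay was written

print(urban - agricultural)   # 5500 -- years between the first two revolutions
print(now - urban)            # 5514 -- right around 5500 years since the urban revolution
```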

That’s a far more subtle question than it might seem at first glance, because the cascade of changes that fit under the very broad label “the industrial revolution” weren’t all of a piece. I’d like to suggest, in fact, that there was not one industrial revolution, but four of them—or, more precisely, three and a half. Lewis Mumford’s important 1934 study Technics and Civilization identified three of those revolutions, though the labels he used for them—the eotechnic, paleotechnic, and neotechnic phases—shoved them into a linear scheme of progress that distorts many of their key features. Instead, I propose to borrow the same habit people use when they talk about the Copernican and Darwinian revolutions, and name the revolutions after individuals who played crucial roles in making them happen.

First of all, then—corresponding to Mumford’s eotechnic phase—is the Baconian revolution, which got under way around 1600. It takes its name from Francis Bacon, who was the first significant European thinker to propose that what he called natural philosophy and we call science ought to be reoriented away from the abstract contemplation of the cosmos, and toward making practical improvements in the technologies of the time. Such improvements were already under way, carried out by a new class of “mechanicks” who had begun to learn by experience that building a faster ship, a sturdier plow, a better spinning wheel, or the like could be a quick route to prosperity, and encouraged by governments eager to cash in new inventions for the more valued coinage of national wealth and military victory.

The Baconian revolution, like those that followed it, brought with it a specific suite of technologies. Square-rigged ships capable of  long deepwater voyages revolutionized international trade and naval warfare; canals and canal boats had a similar impact on domestic transport systems. New information and communication media—newspapers, magazines, and public libraries—were crucial elements of the Baconian technological suite, which also encompassed major improvements in agriculture and in metal and glass manufacture, and significant developments in the use of wind and water power, as well as the first factories using division of labor to allow mass production.

The second revolution—corresponding to Mumford’s paleotechnic phase—was the Wattean revolution, which got started around 1780. This takes its name, of course, from James Watt, whose redesign of the steam engine turned it from a convenience for the mining industry to the throbbing heart of a wholly new technological regime, replacing renewable energy sources with concentrated fossil fuel energy and putting that latter to work in every economically viable setting. The steamship was the new vehicle of international trade, the railroad the corresponding domestic transport system; electricity came in with steam, and so did the telegraph, the major new communications technology of the era, while mass production of steel via the Bessemer process had a massive impact across the economic sphere.

The third revolution—corresponding to Mumford’s neotechnic phase—was the Ottonian revolution, which took off around 1890. I’ve named this revolution after Nikolaus Otto, who invented the four-cycle internal combustion engine in 1876 and kickstarted the process that turned petroleum from a source of lamp fuel to the resource that brought the industrial age to its zenith. In the Ottonian era, international trade shifted to diesel-powered ships, supplemented later on by air travel; the domestic transport system was the automobile; the rise of vacuum-state electronics made radio (including television, which is simply an application of radio technology) the major new communications technology; and the industrial use of organic chemistry, turning petroleum and other fossil fuels into feedstocks for plastics, gave the Ottonian era its most distinctive materials.

The fourth, partial revolution, which hadn’t yet begun when Mumford wrote his book, was the Fermian revolution, which can be dated quite precisely to 1942 and is named after Enrico Fermi, the designer and builder of the first successful nuclear reactor.  The keynote of the Fermian era was the application of subatomic physics, not only in nuclear power but also in solid-state electronic devices such as the transistor and the photovoltaic cell. In the middle years of the 20th century, a great many people took it for granted that the Fermian revolution would follow the same trajectory as its Wattean and Ottonian predecessors: nuclear power would replace diesel power in freighters, electricity would elbow aside gasoline as the power source for domestic transport, and nucleonics would become as important as electronics as a core element in new technologies yet unimagined.

Unfortunately for those expectations, nuclear power turned out to be a technical triumph but an economic flop.  Claims that nuclear power would make electricity too cheap to meter ran face first into the hard fact that no nation anywhere has been able to have a nuclear power industry without huge and ongoing government subsidies, while nuclear-powered ships were relegated to the navies of very rich nations, which didn’t have to turn a profit and so could afford to ignore the higher construction and operating costs. Nucleonics turned out to have certain applications, but nothing like as many or as lucrative as the giddy forecasts of 1950 suggested.  Solid state electronics, on the other hand, turned out to be economically viable, at least in a world with ample fossil fuel supplies, and made the computer and the era’s distinctive communications medium, the internet, economically viable propositions.

The Wattean, Ottonian, and Fermian revolutions thus had a core theme in common. Each of them relied on a previously untapped energy resource—coal, petroleum, and uranium, respectively—and set out to build a suite of technologies to exploit that resource and the forms of energy it made available. The scientific and engineering know-how that was required to manage each power source then became the key toolkit for the technological suite that unfolded from it; from the coal furnace, the Bessemer process for making steel was a logical extension, just as the knowledge of hydrocarbon chemistry needed for petroleum refining became the basis for plastics and the chemical industry, and the same revolution in physics that made nuclear fission reactors possible also launched solid state electronics—it’s not often remembered, for example, that Albert Einstein got his Nobel prize for understanding the process that makes PV cells work, not for the theory of relativity.

Regular readers of this blog will probably already have grasped the core implication of this common theme. The core technologies of the Wattean, Ottonian, and Fermian eras all depend on access to large amounts of specific nonrenewable resources.  Fermian technology, for example, demands fissile material for its reactors and rare earth elements for its electronics, among many other things; Ottonian technology demands petroleum and natural gas, along with a range of other resources; Wattean technology demands coal and iron ore. It’s sometimes possible to substitute one set of materials for another—say, to process coal into liquid fuel—but there’s always a major economic cost involved, even when the substitute resource is ample, inexpensive, and not needed for some other purpose.

In today’s world, by contrast, the resources needed for all three technological suites are being used at breakneck rates and thus are either already facing depletion or will do so in the near future. When coal has already been mined so heavily that sulfurous, low-energy brown coal—the kind that miners in the 19th century used to discard as waste—has become the standard fuel for coal-fired power plants, for example, it’s a bit late to talk about a coal-to-liquids program to replace any serious fraction of the world’s petroleum consumption: the attempt to do so would send coal prices soaring to economy-wrecking heights.  Richard Heinberg has pointed out in his useful book Peak Everything, for that matter, that a great deal of the coal still remaining in the ground will take more energy to extract than it will produce when burnt, making it an energy sink rather than an energy source.

Thus we can expect very large elements of Wattean, Ottonian, and Fermian technologies to stop being economically viable in the years ahead, as depletion drives up resource costs and the knock-on effects of the resulting economic contraction force down demand. That doesn’t mean that every aspect of those technological suites will go away, to be sure.  It’s not at all unusual, in the wake of a fallen civilization, to find “orphan technologies” that once functioned as parts of a coherent technological suite, still doing their jobs long after the rest of the suite has fallen out of use.  Just as Roman aqueducts kept bringing water to cities in the post-Roman dark ages whose inhabitants had neither the resources nor the knowledge to build anything of the kind, it’s quite likely that (say) hydroelectric facilities in certain locations will stay in use for centuries to come, powering whatever electrical equipment can be maintained or built from local resources, even if the people who tend the dams and use the electricity have long since lost the capacity to build turbines, generators, or dams at all.

Yet there’s another issue involved, because the first of the four industrial revolutions I’ve discussed in this essay—the Baconian revolution—was not dependent on nonrenewable resources.  The suite of technologies that unfolded from Francis Bacon’s original project used the same energy sources that everyone in the world’s urban-agricultural societies had been using for more than three thousand years: human and animal muscle, wind, water, and heat from burning biomass. Unlike the revolutions that followed it, to put the same issue in a different but equally relevant way, the Baconian revolution worked within the limits of the energy budget the Earth receives each year from the Sun, instead of drawing down stored sunlight from the Earth’s store of fossil carbon or its much more limited store of fissile isotopes.  The Baconian era simply used that annual solar budget in a more systematic way than previous societies managed, by directing the considerable intellectual skills of the natural philosophers of the day toward practical ends.

Because of their dependence on nonrenewable resources, the three later revolutions were guaranteed all along to be transitory phases. The Baconian revolution need not be, and I think that there’s a noticeable chance that it will not be. By that I mean, to begin with, that the core intellectual leap that made the Baconian revolution possible—the scientific method—is sufficiently widespread at this point that with a little help, it may well get through the decline and fall of our civilization and become part of the standard toolkit of future civilizations, in much the same way that classical logic survived the wreck of Rome to be taken up by successor civilizations across the breadth of the Old World.

Still, that’s not all I mean to imply here. The specific technological suite that developed in the wake of the Baconian revolution will still be viable in a post-fossil fuel world, wherever the ecological and social circumstances will permit it to exist at all. Deepwater maritime shipping, canal-borne transport across nations and subcontinents, mass production of goods using the division of labor as an organizing principle, extensive use of wind and water power, and widespread literacy and information exchange involving print media, libraries, postal services, and the like, are all options available to societies in the deindustrial world. So are certain other technologies that evolved in the post-Baconian era, but fit neatly within the Baconian model: solar thermal technologies, for example, and those forms of electronics that can be economically manufactured and powered with the limited supplies of concentrated energy a sustainable society will have on hand.

I’ve suggested in previous posts here, and in my book The Ecotechnic Future, that our current industrial society may turn out to be merely the first, most wasteful, and least durable of what might best be called “technic societies”—that is, human societies that get a large fraction of their total energy supply from sources other than human and animal muscle, and support complex technological suites on that basis. The technologies of the Baconian era, I propose, offer a glimpse of what an emerging ecotechnic society might look like in practice—and a sense of the foundations on which the more complex ecotechnic societies of the future will build.

When the book mentioned at the beginning of this essay claimed that “technology will always be with us,” it’s a safe bet that the author wasn’t thinking of tall ships, canal boats, solar greenhouses, and a low-power global radio net, much less the further advances along the same lines that might well be possible in a post-fossil fuel world. Still, it’s crucial to get outside the delusion that the future must either be a flashier version of the present or a smoldering wasteland full of bleached bones, and start to confront the wider and frankly more interesting possibilities that await our descendants.

***************
Along these same lines, I’d like to remind readers that this blog’s second post-peak oil science fiction contest has less than a month left to run. Those of you who are still working on stories need to get them finished, posted online, and linked to a comment on this blog before May 1 to be eligible for inclusion in the second After Oil anthology. Get ‘em in!

Wednesday, April 02, 2014

Mentats Wanted, Will Train

The theme of last week’s post here on The Archdruid Report—the strategy of preserving or reviving technologies for the deindustrial future now, before the accelerating curve of decline makes that task more difficult than it already is—can be applied very broadly indeed. Just now, courtesy of the final blowoff of the age of cheap energy, we have relatively easy access to plenty of information about what worked in the past; some other resources are already becoming harder to get, but there’s still time and opportunity to accomplish a great deal.

I’ll be talking about some of the possibilities as we proceed, and with any luck, other people will get to work on projects of their own that I haven’t even thought of. This week, though, I want to take Gustaf Erikson’s logic in a direction that probably would have made the old sea dog scratch his head in puzzlement, and talk about how a certain set of mostly forgotten techniques could be put back into use right now to meet a serious unmet need in contemporary American society.

The unmet need I have in mind is unusually visible just now, courtesy of the recent crisis in the Ukraine. I don’t propose to get into the whys and wherefores of that crisis just now, except to note that since the collapse of the Austro-Hungarian Empire, the small nations of eastern Europe have been grist between the spinning millstones of Russia and whichever great power dominates western Europe. It’s not a comfortable place to be; Timothy Snyder’s terse description of 20th century eastern Europe as “bloodlands” could be applied with equal force to any set of small nations squeezed between empires, and it would take quite a bit of unjustified faith in human goodness to think that the horrors of the last century have been safely consigned to the past.

The issue I want to discuss, rather, has to do with the feckless American response to that crisis. Though I’m not greatly interested in joining the chorus of American antigovernment activists fawning around Vladimir Putin’s feet these days, it’s fair to say that he won this one. Russia’s actions caught the United States and EU off balance, secured the Russian navy’s access to the Black Sea and the Mediterranean, and boosted Putin’s already substantial popularity at home. By contrast, Obama came across as amateurish and, worse, weak.  When Obama announced that the US retaliation would consist of feeble sanctions against a few Russian banks and second-string politicians, the world rolled its eyes, and the Russian Duma passed a resolution scornfully requesting Obama to apply those same sanctions to every one of its members.

As the crisis built, there was a great deal of talk in the media about Europe’s dependence on Russian natural gas, and the substantial influence over European politics that Russia has as a result of that unpalatable fact. It’s a major issue, and unlikely to go away any time soon; around a third of the natural gas that keeps Europeans from shivering in the dark each winter comes from Russian gas fields, and the Russian government has made no bones about the fact that it could just as well sell that gas to somebody to Russia’s south or east instead. It was in this context that American politicians and pundits started insisting at the top of their lungs that the United States had a secret weapon against the Sov—er, Russian threat: exports of abundant natural gas from America, which would replace Russian gas in Europe’s stoves, furnaces, and power plants.

As Richard Heinberg pointed out trenchantly a few days back in a typically spot-on essay, there’s only one small problem with this cozy picture: the United States has no spare natural gas to export.  It’s a net importer of natural gas, as it typically burns over a hundred billion more cubic feet of gas each month than it produces domestically.  What’s more, even according to the traditionally rose-colored forecasts issued by the EIA, it’ll be 2020 at the earliest before the United States has any natural gas to spare for Europe’s needs. Those forecasts, by the way, blithely assume that the spike in gas production driven by the recent fracking bubble will just keep on levitating upwards for the foreseeable future; if this reminds you of the rhetoric surrounding tech stocks in the runup to 2000, housing prices in the runup to 2008, or equivalent phenomena in the history of any other speculative swindle you care to name, let’s just say you’re not alone.

According to those forecasts that start from the annoying fact that the laws of physics and geology do actually apply to us, on the other hand, the fracking boom will be well into bust territory by 2020, and those promised torrents of natural gas that will allegedly free Europe from Russian influence will therefore never materialize at all. At the moment, furthermore, boasting about America’s alleged surplus of natural gas for export is particularly out of place, because US natural gas inventories currently in storage are less than half their five-year average level for this time of year, having dropped precipitously since December. Since all this is public information, we can be quite confident that the Russians are aware of it, and this may well explain some of the air of amused contempt with which Putin and his allies have responded to American attempts to rattle a saber that isn’t there.

Any of the politicians and pundits who participated in that futile exercise could have found out the problems with their claim in maybe two minutes of internet time.  Any of the reporters and editors who printed those claims at face value could have done the same thing. I suppose it’s possible that the whole thing was a breathtakingly cynical exercise of Goebbels’ “Big Lie” principle, intended to keep Americans from noticing that Obama’s people armed themselves with popguns for a shootout at the OK Corral. I find this hard to believe, though, because the same kind of thinking—or, more precisely, nonthinking—is so common in America these days.

It’s indicative that my post here two weeks ago brought in a bumper crop of the same kind of illogic. My post took on the popular habit of using the mantra “it’s different this time” to insist that the past has nothing to teach us about the present and the future. Every event, I pointed out, has some features that set it apart from others, and other features that it shares in common with others; pay attention to the common features and you can observe the repeating patterns, which can then be adjusted to take differences into account.  Fixate on the differences and deny the common features, though, and you have no way to test your beliefs—which is great if you want to defend your beliefs against reasonable criticism, but not so useful if you want to make accurate predictions about where we’re headed.

Did the critics of this post—and there were quite a few of them—challenge this argument, or even address it? Not in any of the peak oil websites I visited. What happened instead was that commenters brandished whatever claims about the future are dearest to their hearts and then said, in so many words, “It’s different this time”—as though that somehow answered me. It was quite an impressive example of sheer incantation, the sort of thing we saw not that long ago when Sarah Palin fans were trying to conjure crude oil into America’s depleted oilfields by chanting “Drill, baby, drill” over and over again. I honestly felt as though I’d somehow dozed off at the computer and slipped into a dream in which I was addressing an audience of sheep, who responded by bleating “But it’s different this ti-i-i-i-ime” in perfect unison.

A different mantra sung to the same bleat, so to speak, seems to have been behind the politicians and pundits, and all that nonexistent natural gas they thought was just waiting to be exported to Europe. The thoughtstopping phrase here is “America has abundant reserves of natural gas.” It will doubtless occur to many of my readers that this statement is true, at least for certain values of that nicely vague term “abundant,” just as it’s true that every historical event differs in at least some way from everything that’s happened in the past, and that an accelerated program of drilling can (and in fact did) increase US petroleum production by a certain amount, at least for a while. The fact that each of these statements is trivially true does not make any of them relevant.

That is to say, a remarkably large number of Americans, including the leaders of our country and the movers and shakers of our public opinion, are so inept at the elementary skills of thinking that they can’t tell the difference between mouthing a platitude and having a clue.

I suppose this shouldn’t surprise me as much as it does. For decades now, American public life has been dominated by thoughtstoppers of this kind—short, emotionally charged declarative sentences, some of them trivial, some of them incoherent, none of them relevant and all of them offered up as sound bites by politicians, pundits, and ordinary Americans alike, as though they meant something and proved something. The redoubtable H.L. Mencken, writing at a time when such things were not quite as universal in the American mass mind as they have since become, called them “credos.”  It was an inspired borrowing from the Latin credo, “I believe,” but its relevance extends far beyond the religious sphere.

Just as plenty of believing Americans in Mencken’s time liked to affirm their fervent faith in the doctrines of whatever church they attended without having the vaguest idea of what those doctrines actually meant, a far vaster number of Americans these days—religious, irreligious, antireligious, or concerned with nothing more supernatural than the apparent capacity of Lady Gaga’s endowments to defy the laws of gravity—gladly affirm any number of catchphrases about which they seem never to have entertained a single original thought. Those of my readers who have tried to talk about the future with their family and friends will be particularly familiar with the way this works; I’ve thought more than once of providing my readers with Bingo cards marked with the credos most commonly used to silence discussions of our future—“they’ll think of something,” “technology can solve any problem,” “the world’s going to end soon anyway,” “it’s different this time,” and so on—with some kind of prize for whoever fills theirs up first.

The prevalence of credos, though, is only the most visible end of a culture of acquired stupidity that I’ve discussed here in previous posts, and Erik Lindberg has recently anatomized in a crisp and thoughtful blog post. That habit of cultivated idiocy is a major contributor to the crisis of our age, but a crisis is always an opportunity, and with that in mind, I’d like to propose that it’s time for some of us, at least, to borrow a business model from the future, and start getting prepared for future job openings as mentats.

In Frank Herbert’s iconic SF novel Dune, as many of my readers will be aware, a revolt against computer technology centuries before the story opened led to a galaxywide ban on thinking machines—“Thou shalt not make a machine in the image of a human mind”—and a corresponding focus on developing human capacities instead of replacing them with hardware. The mentats were among the results: human beings trained from childhood to absorb, integrate, and synthesize information. Think of them as the opposite end of human potential from the sort of credo-muttering couch potatoes who seem to make up so much of the American population these days:  ask a mentat if it really is different this time, and after he’s spent thirty seconds or so reviewing the entire published literature on the subject, he’ll give you a crisp first-approximation analysis explaining what’s different, what’s similar, which elements of each category are relevant to the situation, and what your best course of action would be in response.

Now of course the training programs needed to get mentats to this level of function haven’t been invented yet, but the point still stands: people who know how to think, even at a less blinding pace than Herbert’s fictional characters manage, are going to be far better equipped to deal with a troubled future than those who haven’t.  The industrial world has been conducting what amounts to a decades-long experiment to see whether computers can make human beings more intelligent, and the answer at this point is a pretty firm no. In particular, computers tend to empower decision makers without making them noticeably smarter, and the result by and large is that today’s leaders are able to make bad decisions more easily and efficiently than ever before. That is to say, machines can crunch data, but it takes a mind to turn data into information, and a well-trained and well-informed mind to refine information into wisdom.

What makes a revival of the skills of thinking particularly tempting just now is that the bar is set so low. If you know how to follow an argument from its premises to its conclusion, recognize a dozen or so of the most common logical fallacies, and check the credentials of a purported fact, you’ve just left most Americans—including the leaders of our country and the movers and shakers of our public opinion—way back behind you in the dust. To that basic grounding in how to think, add a good general knowledge of history and culture and a few branches of useful knowledge in which you’ve put some systematic study, and you’re so far ahead of the pack that you might as well hang out your shingle as a mentat right away.

Now of course it may be a while before there’s a job market for mentats—in the post-Roman world, it took several centuries for those people who preserved the considerable intellectual toolkit of the classical world to find a profitable economic niche, and that required them to deck themselves out in tall hats with moons and stars on them. In the interval before the market for wizards opens up again, though, there are solid advantages to be gained by the sort of job training I’ve outlined, unfolding from the fact that having mental skills that go beyond muttering credos makes it possible to make predictions about the future that are considerably more accurate than the ones guiding most Americans today.

This has immediate practical value in all sorts of common, everyday situations these days. When all the people you know are rushing to sink every dollar they have in the speculative swindle du jour, for example, you’ll quickly recognize the obvious signs of a bubble in the offing, walk away, and keep your shirt while everyone else is losing theirs. When someone tries to tell you that you needn’t worry about energy costs or shortages because the latest piece of energy vaporware will surely solve all our problems, you’ll be prepared to ignore him and go ahead with insulating your attic, and when someone else insists that the Earth is sure to be vaporized any day now by whatever apocalypse happens to be fashionable that week, you’ll be equally prepared to ignore him and go ahead with digging the new garden bed. 

When the leaders of your country claim that an imaginary natural gas surplus slated to arrive six years from now will surely make Putin blink today, for that matter, you’ll draw the logical conclusion, and get ready for the economic and political impacts of another body blow to what’s left of America’s faltering global power and reputation. It may also occur to you—indeed, it may have done so already—that the handwaving about countering Russia is merely an excuse for building the infrastructure needed to export American natural gas to higher-paying global markets, which will send domestic gas prices soaring to stratospheric levels in the years ahead; this recognition might well inspire you to put a few extra inches of insulation up there in the attic, and get a backup heat source that doesn’t depend either on gas or on gas-fired grid electricity, so those soaring prices don’t have the chance to clobber you.

If these far from inconsiderable benefits tempt you, dear reader, I’d like to offer you an exercise as the very first step in your mentat training.  The exercise is this: the next time you catch someone (or, better yet, yourself) uttering a familiar thoughtstopper about the future—“It’s different this time,” “They’ll think of something,” “There are no limits to what human beings can achieve,” “The United States has an abundant supply of natural gas,” or any of the other entries in the long and weary list of contemporary American credos—stop right there and think about it. Is the statement true? Is it relevant? Does it address the point under discussion?  Does the evidence that supports it, if any does, outweigh the evidence against it? Does it mean what the speaker thinks it means? Does it mean anything at all?

There’s much more involved than this in learning how to think, of course, and down the road I propose to write a series of posts on the subject, using as raw material for exercises more of the popular idiocies behind which America tries to hide from the future. I would encourage all the readers of this blog to give this exercise a try, though. In an age of accelerating decline, the habit of letting arbitrary catchphrases replace actual thinking is a luxury that nobody can really afford, and those who cling to such things too tightly can expect to be blindsided by a future that has no interest in playing along with even the most fashionable credos.

*******************
In not unrelated news, I’m pleased to report that the School of Economic Science will be hosting a five week course in London on Economics, Energy and Environment, beginning April 29 of this year, based in part on ideas from my book The Wealth of Nature. The course will finish up with a conference on June 1 at which, ahem, I’ll be one of the speakers. Details are at www.eeecourse.org.

Wednesday, March 26, 2014

Captain Erikson's Equation

I have yet to hear anyone in the peak oil blogosphere mention the name of Captain Gustaf Erikson of the Åland Islands and his fleet of windjammers.  For all I know, he’s been completely forgotten now, his name and accomplishments packed away in the same dustbin of forgotten history as solar steam-engine pioneer Augustin Mouchot, his near contemporary. If so, it’s high time that his footsteps sounded again on the quarterdeck of our collective imagination, because his story—and the core insight that committed him to his lifelong struggle—both have plenty to teach about the realities framing the future of technology in the wake of today’s era of fossil-fueled abundance.

Erikson, born in 1872, grew up in a seafaring family and went to sea as a ship’s boy at the age of nine. At 19 he was the skipper of a coastal freighter working the Baltic and North Sea ports; two years later he shipped out as mate on a windjammer for deepwater runs to Chile and Australia, and eight years after that he was captain again, sailing three- and four-masted cargo ships to the far reaches of the planet. A bad fall from the rigging in 1913 left his right leg crippled, and he left the sea to become a shipowner instead, buying the first of what would become the 20th century’s last major fleet of windpowered commercial cargo vessels.

It’s too rarely remembered these days that the arrival of steam power didn’t make commercial sailing vessels obsolete across the board. The ability to chug along at eight knots or so without benefit of wind was a major advantage in some contexts—naval vessels and passenger transport, for example—but coal was never cheap, and the long stretches between coaling stations on some of the world’s most important trade routes meant that a significant fraction of a steamship’s total tonnage had to be devoted to coal, cutting into the capacity to haul paying cargoes. For bulk cargoes over long distances, in particular, sailing ships were a good deal more economical all through the second half of the 19th century, and some runs remained a paying proposition for sail well into the 20th.

That was the niche that the windjammers of the era exploited. They were huge—up to 400 feet from stem to stern—square-rigged, steel-hulled ships, fitted out with more than an acre of canvas and miles of steel-wire rigging.  They could be crewed by a few dozen sailors, and hauled prodigious cargoes:  up to 8,000 tons of Australian grain, Chilean nitrate—or, for that matter, coal; it was among the ironies of the age that the coaling stations that allowed steamships to refuel on long voyages were very often kept stocked by tall ships, which could do the job more economically than steamships themselves could. The markets where wind could outbid steam were lucrative enough that at the beginning of the 20th century, there were still thousands of working windjammers hauling cargoes across the world’s oceans.

That didn’t change until bunker oil refined from petroleum ousted coal as the standard fuel for powered ships. Petroleum products carry much more energy per pound than even the best grade of coal, and the better grades of coal were beginning to run short and rise accordingly in price well before the heyday of the windjammers was over. A diesel-powered vessel had to refuel less often, devote less of its tonnage to fuel, and cost much less to operate than its coal-fired equivalent. That’s why Winston Churchill, as head of Britain’s Admiralty, ordered the entire British Navy converted from coal to oil in the years just before the First World War, and why coal-burning steamships became hard to find anywhere on the seven seas once the petroleum revolution took place. That’s also why most windjammers went out of use around the same time; they could compete against coal, but not against dirt-cheap diesel fuel.

Gustaf Erikson went into business as a shipowner just as that transformation was getting under way. The rush to diesel power allowed him to buy up windjammers at a fraction of their former price—his first ship, a 1,500-ton bark, cost him less than $10,000, and the pride of his fleet, the four-masted Herzogin Cecilie, set him back only $20,000.  A tight rein on operating expenses and a careful eye on which routes were profitable kept his firm solidly in the black. The bread and butter of his business came from shipping wheat from southern Australia to Europe; Erikson’s fleet and the few other windjammers still in the running would leave European ports in the northern hemisphere’s autumn and sail for Spencer Gulf on Australia’s southern coast, load up with thousands of tons of wheat, and then race each other home, arriving in the spring—a good skipper with a good crew could make the return trip in less than 100 days, hitting speeds upwards of 15 knots when the winds were right.

There was money to be made that way, but Erikson’s commitment to the windjammers wasn’t just a matter of profit. A sentimental attachment to tall ships was arguably part of the equation, but there was another factor as well. In his latter years, Erikson was fond of telling anyone who would listen that a new golden age for sailing ships was on the horizon:  sooner or later, he insisted, the world’s supply of coal and oil would run out, steam and diesel engines would become so many lumps of metal fit only for salvage, and those who still knew how to haul freight across the ocean with only the wind for power would have the seas, and the world’s cargoes, all to themselves.

Those few books that mention Erikson at all like to portray him as the last holdout of a departed age, a man born after his time. On the contrary, he was born before his time, and lived too soon. When he died in 1947, the industrial world’s first round of energy crises were still a quarter century away, and only a few lonely prophets had begun to grasp the absurdity of trying to build an enduring civilization on the ever-accelerating consumption of a finite and irreplaceable fuel supply. He had hoped that his sons would keep the windjammers running, and finish the task of getting the traditions and technology of the tall ships through the age of fossil fuels and into the hands of the seafarers of the future. I’m sorry to say that that didn’t happen; the profits to be made from modern freighters were too tempting, and once the old man was gone, his heirs sold off the windjammers and replaced them with diesel-powered craft.

Erikson’s story is worth remembering, though, and not simply because he was an early prophet of what we now call peak oil. He was also one of the very first people in our age to see past the mythology of technological progress that dominated the collective imagination of his time and ours, and glimpse the potentials of one of the core strategies this blog has been advocating for the last eight years.

We can use the example that would have been dearest to his heart, the old technology of wind-powered maritime cargo transport, to explore those potentials. To begin with, it’s crucial to remember that the only thing that made tall ships obsolete as a transport technology was cheap abundant petroleum. The age of coal-powered steamships left plenty of market niches in which windjammers were economically more viable than steamers.  The difference, as already noted, was a matter of energy density—that’s the technical term for how much energy you get out of each pound of fuel; the best grades of coal have only about half the energy density of petroleum distillates, and as you go down the scale of coal grades, energy density drops steadily.  The brown coal that’s commonly used for fuel these days provides, per pound, rather less than a quarter the heat energy you get from a comparable weight of bunker oil.
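To make the energy-density comparison concrete, here is a small sketch in Python. The MJ/kg figures are rough assumed values, chosen only to reproduce the ratios quoted above; real figures vary widely by fuel grade, and engine efficiency (which favored diesel still further) is ignored.

```python
# Rough, illustrative energy densities in MJ/kg. These are assumed figures
# picked to match the ratios in the text, not authoritative data.
ENERGY_DENSITY = {
    "petroleum_distillate": 45.0,
    "best_coal": 23.0,   # roughly half the distillate figure
    "bunker_oil": 40.0,
    "brown_coal": 10.0,  # under a quarter of the bunker oil figure
}

def fuel_mass_tons(voyage_energy_mj: float, fuel: str) -> float:
    """Tons of a given fuel needed to supply a fixed voyage energy budget."""
    return voyage_energy_mj / ENERGY_DENSITY[fuel] / 1000.0  # 1 ton = 1000 kg

# A hypothetical voyage requiring 4e7 MJ of fuel energy:
for fuel in ("bunker_oil", "best_coal", "brown_coal"):
    print(f"{fuel}: {fuel_mass_tons(4e7, fuel):,.0f} tons")
```

The point of the sketch is simply that a coal-burning ship has to give up several times as much cargo tonnage to fuel as an oil-burner does, which is exactly the margin the windjammers exploited.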

As the world’s petroleum reserves keep sliding down the remorseless curve of depletion, in turn, the price of bunker oil—like that of all other petroleum products—will continue to move raggedly upward. If Erikson’s tall ships were still in service, it’s quite possible that they would already be expanding their market share; as it is, it’s going to be a while yet before rising fuel costs will make it economical for shipping firms to start investing in the construction of a new generation of windjammers.  Nonetheless, as the price of bunker oil keeps rising, it’s eventually going to cross the line at which sail becomes the more profitable option, and when that happens, those firms that invest in tall ships will profit at the expense of their old-fashioned, oil-burning rivals.

Yes, I’m aware that this last claim flies in the face of one of the most pervasive superstitions of our time, the faith-based insistence that whatever technology we happen to use today must always and forever be better, in every sense but a purely sentimental one, than whatever technology it replaced. The fact remains that what made diesel-powered maritime transport standard across the world’s oceans was not some abstract superiority of bunker oil over wind and canvas, but the simple reality that for a while, during the heyday of cheap abundant petroleum, diesel-powered freighters were more profitable to operate than any of the other options.  It was always a matter of economics, and as petroleum depletion tilts the playing field the other way, the economics will change accordingly.

All else being equal, if a shipping company can make larger profits moving cargoes by sailing ships than by diesel freighters, coal-burning steamships, or some other option, the sailing ships will get the business and the other options will be left to rust in port. It really is that simple. The point at which sailing vessels become economically viable, in turn, is determined partly by fuel prices and partly by the cost of building and outfitting a new generation of sailing ships. Erikson’s plan was to do an end run around the second half of that equation, by keeping a working fleet of windjammers in operation on niche routes until rising fuel prices made it profitable to expand into other markets. Since that didn’t happen, the lag time will be significantly longer, and bunker fuel may have to price itself entirely out of certain markets—causing significant disruptions to maritime trade and to national and regional economies—before it makes economic sense to start building windjammers again.
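The economics described above can be reduced to a toy break-even calculation. Every number below is hypothetical and exists only to show the shape of the comparison: a diesel freighter’s profits fall linearly as fuel prices rise, a windjammer’s do not, so somewhere there is a crossover price.

```python
# Toy break-even sketch; all figures are invented for illustration.
FREIGHT_REVENUE = 500_000    # revenue per voyage, either ship
DIESEL_FIXED = 150_000       # non-fuel cost per diesel voyage
SAIL_FIXED = 250_000         # per-voyage cost under sail (larger crew, rigging upkeep)
FUEL_TONS = 800              # bunker fuel burned per diesel voyage
DIESEL_VOYAGES, SAIL_VOYAGES = 6, 4  # voyages per year (diesel is faster)

def annual_profit_diesel(fuel_price_per_ton: float) -> float:
    per_voyage = FREIGHT_REVENUE - DIESEL_FIXED - FUEL_TONS * fuel_price_per_ton
    return DIESEL_VOYAGES * per_voyage

def annual_profit_sail() -> float:
    return SAIL_VOYAGES * (FREIGHT_REVENUE - SAIL_FIXED)

def crossover_price(step: float = 1.0) -> float:
    """Lowest fuel price at which sail out-earns diesel, found by scanning."""
    price = 0.0
    while annual_profit_diesel(price) >= annual_profit_sail():
        price += step
    return price
```

With these invented figures the crossover lands at $230 a ton; change the assumptions and the crossover moves, but it never disappears, which is the whole of the argument.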

It’s a source of wry amusement to me that when the prospect of sail transport gets raised, even in the greenest of peak oil circles, the immediate reaction from most people is to try to find some way to smuggle engines back onto the tall ships. Here again, though, the issue that matters is economics, not our current superstitious reverence for loud metal objects. There were plenty of ships in the 19th century that combined steam engines and sails in various combinations, and plenty of ships in the early 20th century that combined diesel engines and sails the same way.  Windjammers powered by sails alone were more economical than either of these for long-range bulk transport, because engines and their fuel supplies cost money, they take up tonnage that can otherwise be used for paying cargo, and their fuel costs cut substantially into profits as well.

For that matter, I’ve speculated in posts here about the possibility that Augustin Mouchot’s solar steam engines, or something like them, could be used as a backup power source for the windjammers of the deindustrial future. It’s interesting to note that the use of renewable energy sources for shipping in Erikson’s time wasn’t limited to the motive power provided by sails; coastal freighters of the kind Erikson skippered when he was nineteen were called “onkers” in Baltic Sea slang, because their windmill-powered deck pumps made a repetitive “onk-urrr, onk-urrr” noise. Still, the same rule applies; enticing as it might be to imagine sailors on a becalmed windjammer hauling the wooden cover off a solar steam generator, expanding the folding reflector, and sending steam down belowdecks to drive a propeller, whether such a technology came into use would depend on whether the cost of buying and installing a solar steam engine, and the lost earning capacity due to hold space being taken up by the engine, was less than the profit to be made by getting to port a few days sooner.

Are there applications where engines are worth having despite their drawbacks? Of course. Unless the price of biodiesel ends up at astronomical levels, or the disruptions ahead along the curve of the Long Descent cause diesel technology to be lost entirely, tugboats will probably have diesel engines for the foreseeable future, and so will naval vessels; the number of major naval battles won or lost in the days of sail because the wind blew one way or another will doubtless be on the minds of many as the age of petroleum winds down. Barring a complete collapse in technology, in turn, naval vessels will no doubt still be made of steel—once cannons started firing explosive shells instead of solid shot, wooden ships became deathtraps in naval combat—but most others won’t be; large-scale steel production requires ample supplies of coke, which is produced by roasting coal, and depletion of coal supplies in a postpetroleum future guarantees that steel will be much more expensive compared to other materials than it is today, or than it was during the heyday of the windjammers.

Note that here again, the limits to technology and resource use are far more likely to be economic than technical. In purely technical terms, a maritime nation could put much of its arable land into oil crops and use that to keep its merchant marine fueled with biodiesel. In economic terms, that’s a nonstarter, since the advantages to be gained by it are much smaller than the social and financial costs that would be imposed by the increase in costs for food, animal fodder, and all other agricultural products. In the same way, the technical ability to build an all-steel merchant fleet will likely still exist straight through the deindustrial future; what won’t exist is the ability to do so without facing prompt bankruptcy. That’s what happens when you have to live on the product of each year’s sunlight, rather than drawing down half a billion years of fossil photosynthesis:  there are hard economic limits to how much of anything you can produce, and increasing production of one thing pretty consistently requires cutting production of something else. People in today’s industrial world don’t have to think like that, but their descendants in the deindustrial world will either learn how to do so or perish.

This point deserves careful study, as it’s almost always missed by people trying to think their way through the technological consequences of the deindustrial future. One reader of mine who objected to talk about abandoned technologies in a previous post quoted with approval the claim, made on another website, that if a deindustrial society can make one gallon of biodiesel, it can make as many thousands or millions of gallons as it wants.  Technically, maybe; economically, not a chance.  It’s as though you made $500 a week and someone claimed you could buy as many bottles of $100-a-bottle scotch as you wanted; in any given week, your ability to buy expensive scotch would be limited by your need to meet other expenses such as food and rent, and some purchase plans would be out of reach even if you ignored all those other expenses and spent your entire paycheck at the liquor store. The same rule applies to societies that don’t have the windfall of fossil fuels at their disposal—and once we finish burning through the fossil fuels we can afford to extract, every human society for the rest of our species’ time on earth will be effectively described in those terms.
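The scotch arithmetic above can be written out explicitly; the function is hypothetical and exists only to make the budget constraint visible.

```python
# Budget-constraint arithmetic from the scotch analogy; numbers from the text.
def affordable_bottles(weekly_income: float, other_expenses: float,
                       bottle_price: float = 100.0) -> int:
    """How many bottles fit in the budget after other expenses are met."""
    disposable = max(weekly_income - other_expenses, 0.0)
    return int(disposable // bottle_price)

# On $500 a week with $400 going to food and rent, one bottle is the limit;
# even ignoring every other expense, five is the hard ceiling.
print(affordable_bottles(500, 400))  # → 1
print(affordable_bottles(500, 0))    # → 5
```

The same constraint scales up from households to whole societies: annual energy income caps annual production, no matter what the technical capacity looks like on paper.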

The one readily available way around the harsh economic impacts of fossil fuel depletion is the one that Gustav Erikson tried, but did not live to complete—the strategy of keeping an older technology in use, or bringing a defunct technology back into service, while there’s still enough wealth sloshing across the decks of the industrial economy to make it relatively easy to do so.  I’ve suggested above that if his firm had kept the windjammers sailing, scraping out a living on whatever narrow market niche they could find, the rising cost of bunker oil might already have made it profitable to expand into new niches; there wouldn’t have been the additional challenge of finding the money to build new windjammers from the keel up, train crews to sail them, and get ships and crews through the learning curve that’s inevitably a part of bringing an unfamiliar technology on line.

That same principle has been central to quite a few of this blog’s projects. One small example is the encouragement I’ve tried to give to the rediscovery of the slide rule as an effective calculating device. There are still plenty of people alive today who know how to use slide rules, plenty of books that teach how to crunch numbers with a slipstick, and plenty of slide rules around. A century down the line, when slide rules will almost certainly be much more economically viable than pocket calculators, those helpful conditions might not be in place—but if people take up slide rules now for much the same reasons that Erikson kept the tall ships sailing, and make an effort to pass skills and slipsticks on to another generation, no one will have to revive or reinvent a dead technology in order to have quick accurate calculations for practical tasks such as engineering, salvage, and renewable energy technology.

The collection of sustainable-living skills I somewhat jocularly termed “green wizardry,” which I learned back in the heyday of the appropriate tech movement in the late 1970s and early 1980s, passed on to the readers of this blog in a series of posts a couple of years ago, and have now explored in book form as well, is another case in point. Some of that knowledge, more of the attitudes that undergirded it, and nearly all the small-scale, hands-on, basement-workshop sensibility of the movement in question has vanished from our collective consciousness in the years since the Reagan-Thatcher counterrevolution foreclosed any hope of a viable future for the industrial world. There are still enough books on appropriate tech gathering dust in used book shops, and enough in the way of living memory among those of us who were there, to make it possible to recover those things; another generation and that hope would have gone out the window.

There are plenty of other possibilities along the same lines. For that matter, it’s by no means unreasonable to plan on investing in technologies that may not be able to survive all the way through the decline and fall of the industrial age, if those technologies can help cushion the way down. Whether or not it will still be possible to manufacture PV cells at the bottom of the deindustrial dark ages, as I’ve been pointing out since the earliest days of this blog, getting them in place now on a home or local community scale is likely to pay off handsomely when grid-based electricity becomes unreliable, as it will.  The modest amounts of electricity you can expect to get from this and other renewable sources can provide critical services (for example, refrigeration and long-distance communication) that will be worth having as the Long Descent unwinds.

That said, all such strategies depend on having enough economic surplus on hand to get useful technologies in place before the darkness closes in. As things stand right now, as many of my readers will have had opportunity to notice already, that surplus is trickling away. Those of us who want to help make a contribution to the future along those lines had better get a move on.

Wednesday, March 19, 2014

American Delusionalism, or Why History Matters

One of the things that reliably irritates a certain fraction of this blog’s readers, as I’ve had occasion to comment before, is my habit of using history as a touchstone that can be used to test claims about the future. No matter what the context, no matter how wearily familiar the process under discussion might be, it’s a safe bet that the moment I start talking about historical parallels, somebody or other is going to pop up and insist that it really is different this time.
 
In a trivial sense, of course, that claim is correct. The tech stock bubble that popped in 2000, the real estate bubble that popped in 2008, and the fracking bubble that’s showing every sign of popping in the uncomfortably near future are all different from each other, and from every other bubble and bust in the history of speculative markets, all the way back to the Dutch tulip mania of 1637. It’s quite true that tech stocks aren’t tulips, and bundled loans backed up by dubious no-doc mortgages aren’t the same as bundled loans backed up by dubious shale leases—well, not exactly the same—but in practice, the many differences of detail are irrelevant compared to the one crucial identity.  Tulips, tech stocks, and bundled loans, along with South Sea Company shares in 1720, investment trusts in 1929, and all the other speculative vehicles in all the other speculative bubbles of the last five centuries, different as they are, all follow the identical trajectory:  up with the rocket, down with the stick.

That is to say, those who insist that it’s different this time are right where it doesn’t matter and wrong where it counts. I’ve come to think of the words “it’s different this time,” in fact, as the nearest thing history has to the warning siren and flashing red light that tells you that something is about to go very, very wrong. When people start saying it, especially when plenty of people with plenty of access to the media start saying it, it’s time to dive for the floor, cover your head with your arms, and wait for the blast to hit.

With that in mind, I’d like to talk a bit about the recent media flurry around the phrase “American exceptionalism,” which has become something of a shibboleth among pseudoconservative talking heads in recent months. Pseudoconservatives? Well, yes; actual conservatives, motivated by the long and by no means undistinguished tradition of conservative thinking launched by Edmund Burke in the late 18th century, are interested in, ahem, conserving things, and conservatives who actually conserve are about as rare these days as liberals who actually liberate. Certainly you won’t find many of either among the strident voices insisting just now that the last scraps of America’s democracy at home and reputation abroad ought to be sacrificed in the service of their squeaky-voiced machismo.

As far as I know, the phrase “American exceptionalism” was originally coined by none other than Josef Stalin—evidence, if any more were needed, that American pseudoconservatives these days, having no ideas of their own, have simply borrowed those of their erstwhile Communist bogeyman and stood them on their heads with a Miltonic “Evil, be thou my good.”  Stalin meant by it the opinion of many Communists in his time that the United States, unlike the industrial nations of Europe, wasn’t yet ripe for the triumphant proletarian revolution predicted (inaccurately) by Marx’s secular theology. Devout Marxist that he was, Stalin rejected this claim with some heat, denouncing it in so many words as “this heresy of American exceptionalism,” and insisting (also inaccurately) that America would get its proletarian revolution on schedule. 

While Stalin may have invented the phrase, the perception that he thus labeled had considerably older roots. In a previous time, though, that perception took a rather different tone than it does today. A great many of the leaders and thinkers of the United States in its early years, and no small number of the foreign observers who watched the American experiment in those days, thought and hoped that the newly founded republic might be able to avoid making the familiar mistakes that had brought so much misery onto the empires of the Old World. Later on, during and immediately after the great debates over American empire at the end of the 19th century, a great many Americans and foreign observers still thought and hoped that the republic might come to its senses in time and back away from the same mistakes that doomed those Old World empires to the misery just mentioned. These days, by contrast, the phrase “American exceptionalism” seems to stand for the conviction that America can and should make every one of those same mistakes, right down to the fine details, and will still somehow be spared the logically inevitable consequences.

The current blind faith in American exceptionalism, in other words, is simply another way of saying “it’s different this time.”  Those who insist that God is on America’s side when America isn’t exactly returning the favor, like those who have less blatantly theological reasons for their belief that this nation’s excrement emits no noticeable odor, are for all practical purposes demanding that America must not, under any circumstances, draw any benefit from the painfully learnt lessons of history.  I suggest that a better name for the belief in question might be "American delusionalism;" it’s hard to see how this bizarre act of faith can do anything other than help drive the American experiment toward a miserable end, but then that’s just one more irony in the fire.

The same conviction that the past has nothing to teach the present is just as common elsewhere in contemporary culture. I’m thinking here, among other things, of the ongoing drumbeat of claims that our species will inevitably be extinct by 2030. As I noted in a previous post here, this is yet another expression of the same dubious logic that generated the 2012 delusion, but much of the rhetoric that surrounds it starts from the insistence that nothing like the current round of greenhouse gas-driven climate change has ever happened before.

That insistence bespeaks an embarrassing lack of knowledge about paleoclimatology. Vast quantities of greenhouse gases being dumped into the atmosphere over a century or two? Check; the usual culprit is vulcanism, specifically the kind of flood-basalt eruption that opens a crack in the earth many miles in length and turns an area the size of a European nation into a lake of lava. The most recent of those, a smallish one, happened about 16 million years ago in the Columbia River basin of eastern Washington and Oregon states.  Further back, in the Toarcian, Aptian, and Cenomanian-Turonian ages of the Mesozoic, that same process on a much larger scale boosted atmospheric CO2 levels to three times the present figure and triggered what paleoclimatologists call "super-greenhouse events." Did those cause the extinction of all life on earth? Not hardly; as far as the paleontological evidence shows, it didn’t even slow the brontosaurs down.

Oceanic acidification leading to the collapse of calcium-shelled plankton populations? Check; those three super-greenhouse events, along with a great many less drastic climate spikes, did that. The ocean also contains very large numbers of single-celled organisms that don’t have calcium shells, such as blue-green algae, which aren’t particularly sensitive to shifts in the pH level of seawater; when such shifts happen, these other organisms expand to fill the empty niches, and everybody further up the food chain gets used to a change in diet. When the acidification goes away, whatever species of calcium-shelled plankton have managed to survive elbow their way back into their former niches and undergo a burst of evolutionary radiation; this makes life easy for geologists today, who can figure out the age of any rock laid down in an ancient ocean by checking the remains of foraminifers and other calcium-loving plankton against a chart of what existed when.

Sudden climate change recently enough to be experienced by human beings? Check; most people have heard of the end of the last ice age, though you have to read the technical literature or one of a very few popular treatments to get some idea of just how drastically the climate changed, or how fast.  The old saw about a slow, gradual warming over millennia got chucked into the dumpster decades ago, when ice cores from Greenland upset that particular theory. The ratio between different isotopes of oxygen in the ice laid down in different years provides a sensitive measure of the average global temperature at sea level during those same years. According to that measure, at the end of the Younger Dryas period about 11,800 years ago, global temperatures shot up by 20° F. in less than a decade.

Now of course that didn’t mean that temperatures shot up that far evenly, all over the world.  What seems to have happened is that the tropics barely warmed at all, the southern end of the planet warmed mildly, and the northern end experienced a drastic heat wave that tipped the great continental ice sheets of the era into rapid collapse and sent sea levels soaring upwards. Those of my readers who have been paying attention to recent scientific publications about Greenland and the Arctic Ocean now have very good reason to worry, because the current round of climate change has most strongly affected the northern end of the planet, too, and scientists have begun to notice historically unprecedented changes in the Greenland ice cap. In an upcoming post I plan on discussing at some length what those particular historical parallels promise for our future, and it’s not pretty.

Oh, and the aftermath of the post-Younger Dryas temperature spike was a period several thousand years long when global temperatures were considerably higher than they are today. The Holocene Hypsithermal, as it’s called, saw global temperatures peak around 7° F. higher than they are today—about the level, that is, that’s already baked into the cake as a result of anthropogenic emissions of greenhouse gases.  It was not a particularly pleasant time. Most of western North America was desert, baked to a crackly crunch by drought conditions that make today’s dry years look soggy; much of what’s now, at least in theory, the eastern woodland biome was dryland prairie, while both coasts got rapidly rising seas with a side order of frequent big tsunamis—again, we’ll talk about those in the upcoming post just mentioned. Still, you’ll notice that our species survived the experience.

As those droughts and tsunamis might suggest, the lessons taught by history don’t necessarily amount to "everything will be just fine." The weird inability of the contemporary imagination to find any middle ground between business as usual and sudden total annihilation has its usual effect here, hiding the actual risks of anthropogenic climate change behind a facade of apocalyptic fantasies. Here again, the question "what happened the last time this occurred?" is the most accessible way to avoid that trap, and the insistence that it’s different this time and the evidence of the past can’t be applied to the present and future puts that safeguard out of reach.

For a third example, consider the latest round of claims that a sudden financial collapse driven by current debt loads will crash the global economy once and for all. That sudden collapse has been predicted year after weary year for decades now—do any of my readers, I wonder, remember Dr. Ravi Batra’s The Great Depression of 1990?—and its repeated failure to show up and perform as predicted seems only to strengthen the conviction on the part of believers that this year, like some financial equivalent of the Great Pumpkin, the long-delayed crash will finally put in an appearance and bring the global economy crashing down.

I’m far from sure that they’re right about the imminence of a crash; the economy of high finance these days is so heavily manipulated, and so thoroughly detached from the real economy where real goods and services have to be produced using real energy and resources, that it’s occurred to me more than once that the stock market and the other organs of the financial sphere might keep chugging away in a state of blissful disconnection from the rest of existence for a very long time to come. Still, let’s grant for the moment that the absurd buildup of unpayable debt in the United States and other industrial nations will in fact become the driving force behind a credit collapse, in which drastic deleveraging will erase trillions of dollars in notional wealth. Would such a crash succeed, as a great many people are claiming just now, in bringing the global economy to a sudden and permanent stop?

Here again, the lessons of history provide a clear and straightforward answer to that question, and it’s not one that supports the partisans of the fast-crash theory. Massive credit collapses that erase very large sums of notional wealth and impact the global economy are hardly a new phenomenon, after all. One example—the credit collapse of 1930-1932—is still just within living memory; the financial crises of 1873 and 1893 are well documented, and there are dozens of other examples of nations and whole continents hammered by credit collapses and other forms of drastic economic crisis. Those crises have had plenty of consequences, but one thing that has never happened as a result of any of them is the sort of self-feeding, irrevocable plunge into the abyss that current fast-crash theories require.

The reason for this is that credit is merely one way by which a society manages the distribution of goods and services. That’s all it is. Energy, raw materials, and labor are the factors that have to be present in order to produce goods and services.  Credit simply regulates who gets how much of each of these things, and there have been plenty of societies that have handled that same task without making use of a credit system at all. A credit collapse, in turn, doesn’t make the energy, raw materials, and labor vanish into some fiscal equivalent of a black hole; they’re all still there, in whatever quantities they were before the credit collapse, and all that’s needed is some new way to allocate them to the production of goods and services.

This, in turn, governments promptly provide. In 1933, for example, faced with the most severe credit collapse in American history, Franklin Roosevelt temporarily nationalized the entire US banking system, seized nearly all the privately held gold in the country, unilaterally changed the national debt from "payable in gold" to "payable in Federal Reserve notes" (which amounted to a technical default), and launched a flurry of other emergency measures.  The credit collapse came to a screeching halt, famously, in less than a hundred days. Other nations facing the same crisis took equally drastic measures, with similar results. While that history has apparently been forgotten across large sections of the peak oil blogosphere, it’s a safe bet that none of it has been forgotten in the corridors of power in Washington DC and elsewhere in the world.

More generally, governments have an extremely broad range of powers that can be used, and have been used, in extreme financial emergencies to stop a credit or currency collapse from terminating the real economy. Faced with a severe crisis, governments can slap on wage and price controls, freeze currency exchanges, impose rationing, raise trade barriers, default on their debts, nationalize whole industries, issue new currencies, allocate goods and services by fiat, and impose martial law to make sure the new economic rules are followed to the letter, if necessary, at gunpoint. Again, these aren’t theoretical possibilities; every one of them has actually been used by more than one government faced by a major economic crisis in the last century and a half. Given that track record, it requires a breathtaking leap of faith to assume that if the next round of deleveraging spirals out of control, politicians around the world will simply sit on their hands, saying "Whatever shall we do?" in plaintive voices, while civilization crashes to ruin around them.

What makes that leap of faith all the more curious is that in the runup to the economic crisis of 2008-9, the same claims of imminent, unstoppable financial apocalypse we’re hearing today were being made—in some cases, by the same people who are making them today.  (I treasure a comment I fielded from a popular peak oil blogger at the height of the 2009 crisis, who insisted that the fast crash was upon us and that my predictions about the future were therefore all wrong.) Their logic was flawed then, and it’s just as flawed now, because it dismisses the lessons of history as irrelevant and therefore fails to take into account how the events under discussion play out in the real world.

That’s the problem with the insistence that this time it really is different: it disables the most effective protection we’ve got against the habit of thought that cognitive psychologists call "confirmation bias," the tendency to look for evidence that supports one’s pet theory rather than seeking the evidence that might call it into question. The scientific method itself, in the final analysis, is simply a collection of useful gimmicks that help you sidestep confirmation bias.  That’s why competent scientists, when they come up with a hypothesis to explain something in nature, promptly sit down and try to think up as many ways as possible to disprove the hypothesis.  Those potentials for disproof are the raw materials from which experiments are designed, and only if the hypothesis survives all experimental attempts to disprove it does it take its first step toward scientific respectability.

It’s not exactly easy to run controlled double-blind experiments on entire societies, but historical comparison offers the same sort of counterweight to confirmation bias. Any present or future set of events, however unique it may be in terms of the fine details, has points of similarity with events in the past, and those points of similarity allow the past events to be taken as a guide to the present and future. This works best if you’ve got a series of past events, as different from each other as any one of them is from the present or future situation you’re trying to predict; if you can find common patterns in the whole range of past parallels, it’s usually a safe bet that the same pattern will recur again.

Any time you approach a present or future event, then, you have two choices: you can look for the features that event has in common with other events, despite the differences of detail, or you can focus on the differences and ignore the common features.  The first of those choices, it’s worth noting, allows you to consider both the similarities and the differences.  Once you’ve got the common pattern, it then becomes possible to modify it as needed to take into account the special characteristics of the situation you’re trying to understand or predict: to notice, for example, that the dark age that will follow our civilization will have to contend with nuclear and chemical pollution on top of the more ordinary consequences of decline and fall.

If you start from the assumption that the event you’re trying to predict is unlike anything that’s ever happened before, though, you’ve thrown away your chance of perceiving the common pattern. What happens instead, with monotonous regularity, is that pop-culture narratives such as the sudden overnight collapse beloved of Hollywood screenwriters smuggle themselves into the picture, and cement themselves in place with the help of confirmation bias. The result is the endless recycling of repeatedly failed predictions that plays so central a role in the collective imagination of our time, and has helped so many people blind themselves to the unwelcome future closing in on us.