Saturday, August 4, 2018

A Candid Conversation With Vince Gilligan on ‘Better Call Saul’

Perhaps the most surprising thing about Better Call Saul – other than the fact that many Breaking Bad fans have said they prefer the spinoff, and even the ones who disagree don’t find that a ludicrous notion – is how it’s become beloved for the exact opposite reason that its creators expected it to be.

Vince Gilligan and Peter Gould — and for that matter, all of us at home — assumed the fun of the prequel would be in spending more time with Bob Odenkirk in the role of Walter White’s shyster lawyer Saul Goodman; it was a way for the show to fill in blanks in the Heisenberg-verse. Instead, most of what makes the show great involves the man he used to be: slick but largely well-meaning lawyer Jimmy McGill, who has the depth and emotional resonance that Saul lacks. The longer we spend with this version of the character – which he still is at the start of Season Four, premiering on August 6th – the less we want to see of Goodman or even Walt himself.

I recently spoke with Gilligan about those early days when he and Gould — who became sole showrunner this year while Gilligan largely focused on developing other ideas — had to wonder if they’d made a terrible mistake. He also talked about the painful process of figuring out how Saul could work, the gradual insertion of other Breaking Bad characters into the spinoff and a lot more. (With occasional kibitzing from Gould and some other writers, since Gilligan will be the first to tell you that he has a terrible memory for detail.)

It took you and Peter a while to figure out what the show was. At what point did you say to yourselves, “Wait a minute, this is actually good? This isn’t just a folly that we’ve done, to keep everyone together?”

We would never put anything on that we had worked less than 100 percent on. Having said that, I didn’t know it would come together. I knew it would be the product of a lot of hard work and a lot of talent, in front of and behind the camera. I thought at worst, we would create something that was admirable and a perfectly legitimate attempt at a show. But I didn’t realize it would be as successful as it is in terms of a fully jelled world, a full totality of creation … [one] that is as satisfying as it is.

When we first started concocting the idea of doing a spinoff, we literally thought it’d be a half-hour show. It’d be something akin to Dr. Katz, where it’s basically Saul Goodman in his crazy office with the Styrofoam columns and he’s visited every week by a different stand-up comic. It was basically, I guess, legal problems. We talked about that for a day or two. And then Peter Gould and I realized, we don’t know anything about the half-hour idiom. And then we thought, okay, well, so it’s an hour … but it’s going to be a really funny hour. I said, “Breaking Bad is about 25-percent humor, 75-percent drama and maybe this will be the reverse of that.” Well this thing, especially in Season Four, is every bit as dramatic as Breaking Bad ever was. I just didn’t see any of that coming. I didn’t know how good it would all be. I really didn’t.

It’s amazing how hard it was to get it right.

The question we should’ve asked ourselves from the beginning is: “Is Saul Goodman an interesting enough character to build a show around?” And the truth is, we came to the conclusion, after we already had the deal in hand [and] AMC and Sony had already put up the money, “I don’t think we have a show here, because I don’t think we have a character who could support a show.” He’s a great flavoring, he’s a wonderful saffron that you sprinkle on your risotto. But you don’t want to eat a bowl full of saffron, you gotta have the rice, you know? You gotta have the substance.

And it dawned on us that this character seemed so comfortable in his own skin. Peter and I do not possess those kinds of personalities. We thought, “Regardless of how much comedy is in it, how do you find drama in a guy who’s basically okay with himself?” So then we thought, “Well, who was he before he was Saul Goodman?”

Because the show is named Better Call Saul, we thought that we had to get to this guy quick or else people will accuse us of false advertising — a bait and switch. Then lo and behold, season after season went by and it dawned on us, we don’t want to get to Saul Goodman … and that’s the tragedy.

If we had thought all of this from the get-go, that would have made us very smart. But as it turns out, we’re very plodding and dumb, and it takes forever to figure this stuff out. Which is why we’re perfectly matched for a TV schedule versus a movie schedule, because you got to get it right the first time when you’re writing a movie. It took us forever to get it right. (...)

Going in, did you expect to be featuring as many Breaking Bad characters as you have? Did you assume at some point we would get to Gus, for instance?

We always assumed we’d get to Gus — I think we thought we might get to him quicker. Just speaking for myself and no one else, as sort of the first fan of both shows: I thought we’d have gotten to Walt or Jesse by this point. I’m greedy to see all of these characters. I thought we would see plenty of Breaking Bad characters. I didn’t know we’d dig as deep for some of them as we have.

We’ve gotten a great deal of satisfaction from seeing, for instance, that the real estate agent who shows Mike and [his] daughter the new house was a real estate agent in Breaking Bad, who had the run-in with Marie. Little shout-outs like that, we love for two reasons. We love those Easter eggs for the really astute students of Breaking Bad. And we also knew that that young woman was such a wonderful actress and so much fun to work with on Breaking Bad. When someone did a great job for us on a previous show, we love to pay ’em back by having them on the new show. Which is not to say that we’ll get every single one of those folks, even though we’d love to. There’s probably plenty we’ll never get to, just for lack of time, lack of episodes … but it’s fun to be able to do that.

When you say you expected to get to Walt and Jesse by now, do you mean the Rosencrantz and Guildenstern Are Dead approach to the Breaking Bad years? Or just that you would have seen what they were up to in this time period?

I thought we would have touched base with them already. But having said that, it makes perfect sense that we haven’t yet touched base with them. Just being in the writers’ room, you realize that there’s a lot to do before that happens — if and when it does happen. I don’t even want to promise that it will. It’s like what I was saying a minute ago: You play the cards that you’ve dealt yourself. There’s no point in cheating in solitaire. That’s a weird analogy, but ultimately, a pretty good one. You can cheat in solitaire, but there’s nothing satisfying about cheating in solitaire.

And the analogy holds when you get to the writers’ room with Better Call Saul. You can change the character’s history, you can have it be that Walter White never comes into it, but it wouldn’t ultimately be satisfying. And when you play the cards out correctly and you see that it’s time to bring Walter White in, for instance, it’s a wonderfully satisfying moment. If you force it, if you cheat the cards, if you bring them in just because folks are demanding it or expecting it, and you kind of bullshit the character’s way into the show, it’s just not going to satisfy anybody. I believe that in my heart.

Has the show evolved and become good enough to the point where it doesn’t need Walter White?

Maybe. I mean, it would be satisfying to see Walt. Not to see him shoehorned in — that would not satisfy me. But to see the character properly arrive at a nexus point with Better Call Saul. That’d be wonderful … [though] it’s very possible it won’t happen if it doesn’t feel properly arrived at. And yes, I believe that Better Call Saul is so much its own creation now, its own thing. It absolutely stands on its own.

We’re enjoying this overlap between Breaking Bad and Better Call Saul that we’re continuing to arrive at. But there’s a version of the show where you don’t see it as Breaking Bad stuff at all. Where, for instance, we leave out Mike Ehrmantraut, because he barely ever interacts with Jimmy McGill anymore. We could just stick with the Jimmy McGill story: him, Kim Wexler, Howard Hamlin, all of that stuff. We could have a perfectly satisfying show. But we feel like we’re giving the fans two shows for the price of one. It really does feel like two TV shows in one now.

When Breaking Bad was coming to an end, this was already in the works to some degree. But was there a part of you thinking, “Alright, this show is ending. This is the best thing I’ve ever done, it’s the best thing I will ever do, my career has peaked. What do I do now?”

That’s exactly why I did this, because I was thinking those thoughts exactly as you just put it: “This is the best thing I’m ever going to do. This is the height of my creative life, my career, it’s never going to get any better than Breaking Bad.” And that’s why I wanted to get right into something else, because I was still only 48, 49 years old, I didn’t want to stop working. I knew in my heart if I took six months off, because everyone said I needed a vacation, then six months would go by, the world would’ve moved on — and worst of all, I would’ve been paralyzed creatively. I would have said to myself, “Okay, time to do something else now. What is it? What’s the next big thing?” And then I would just freeze up, because I would come up with an idea, thinking, “Oh, that’s fun.” And then the editing portion of my brain, which I’ve given too loud a voice over the years, would say: “It’s not to the level of Breaking Bad.”

The best thing I could’ve done personally was to just jump headlong into a show that, admittedly, we didn’t fully understand. Once we really got into it, we thought, “Oh man, we got nothing here.” And then luckily, we just kept banging at it until we figured it out, with the help of a lot of great writers. But the smartest thing I ever did was to keep moving.

And Breaking Bad … the beauty of it is, some people are always going to love Breaking Bad more. But I run into people every day now who say Better Call Saul is their favorite of the two. I love hearing that. I don’t know where I fall personally on that scale, that continuum — I try not to choose. I don’t have children, but this is as close as I’ll ever get to having children. I find it hard to choose between them. But I’m just glad they both exist.

by Alan Sepinwall, Rolling Stone |  Read more:
Image: Nicole Wilder/AMC/Sony Pictures Television

Friday, August 3, 2018

Tom Petty

[ed. Straight into Darkness  See also: 400 Days (Documentary - highly recommended)]

There was a little girl, I used to know her
I still think about her, time to time
There was a moment when I really loved her
Then one day the feeling just died

We went straight into darkness
Out over the line
Yeah straight into darkness
Straight into night

I remember flying out to London
I remember the feeling at the time
Out the window of the 747
Man there was nothing, only black sky

We went straight into darkness
Out over the line
Yeah straight into darkness
Straight into night

Oh give it up to me I need it
Girl, I know a good thing when I see it
Baby wrong or right I mean it
I don't believe the good times are all over
I don't believe the thrill is all gone
Real love is a man's salvation
The weak ones fall the strong carry on

Straight into darkness
Out over the line
Yeah straight into darkness
Straight into night


Ernst Haas

This Is My Nerf Blaster, This Is My Gun

One late spring day in April, several years ago—one of the last breezy afternoons before the suffocating summer humidity would descend on the rolling green hills of central Virginia—I went to visit friends in Charlottesville. I was on a break from Gaza at the time, where I’d been living for a year and a half while working on a security project for an NGO and reporting on the Israeli-Palestinian conflict for the Virginia Quarterly Review. I’d grown used to the simmering sounds of war; I would hear the thump of Hamas and Islamic Jihad mortars during my afternoon runs and would wake to my windows rattling as Israeli gunboats fired at Palestinian fishermen. Still, I remained hypervigilant—ready to fight, or flee, at any second.

As I approached my friends’ doorstep, I was suddenly caught in an ambush of foam darts, and I looked down to see their seven-year-old son, Jack, grinning behind an azalea bush, aiming his Nerf blaster at my chest.

“Gotcha!” Jack shouted, before sprinting off behind the house in a flash of spindly limbs and towheaded glee.

Jack’s ambushes became a ritual we’d reenact every time I visited. Jack’s first blaster was a Nite Finder, a pistol that fired single foam darts with rubber tips, and had a mock laser sight mounted in front of the trigger assembly, mimicking the emerging fashion in tactical handguns. It was made of gray-and-yellow molded plastic, and though the blaster’s grip bore some resemblance to the sweep of a real semiautomatic pistol grip, it would’ve been more at home on the set of Lost in Space than Die Hard. A few years later, when Jack and his family moved to Nebraska, he got a Nerf rifle called the N-Strike Alpha Trooper CS-18, which featured a detachable stock and a magazine that held 18 foam darts. It had a charging handle on the barrel like a pump shotgun, allowing for rapid fire and a max range of 35 feet—which meant Jack could hide around the side of the house and get me coming down the driveway.

Last year, when Jack was 13 and I was 35, I had the honor of teaching him the fundamentals of firearms safety at a range near my home in Bozeman, Montana, using the same Marlin .22 rifle I’ve had since my 10th or 11th birthday. I remember when my dad and I first brought that rifle home: Running my hands over the smooth, dark-stained wood stock, and the fascination I felt whenever I slid it out of its khaki-colored soft case, the delightful clack of the bolt sliding home and locking down. There was no kick, and wearing earplugs, the shots sounded like bursts from an air compressor—but all the same, the rifle was not a toy. When I put the stock to my shoulder and the scope in front of my eye, I immediately felt more grown-up. Jack clearly did as well, treating the gun with respect and seriousness.

I spent years working as a war correspondent, and for a good portion of the past year I have been reporting on the National Rifle Association’s fear-mongering, gun culture, and the crisis of gun violence in America. Until recently, I had never read too far into our Nerf play, mine and Jack’s, and I had never heard people link Nerf blasters to real violence the way they did with violent video games and movies. But in an era of mass shootings, I’ve started to reconsider the banality of Nerf blasters and other toy guns.

Over the past two decades, Nerf has upped the ante on the power and functionality of its blasters. One model shoots foam balls up to 100 feet per second—fast enough to sting bare skin. Some models, such as the “Doomlands” series, are cartoonish in their appearance, taking the concept of mega firepower to gonzo levels. Others, like the N-Strike models, have become increasingly streamlined, drawing closer to the souped-up tactical firearms that now dominate the real gun market, namely the endless variations on the popular AR-15.

Do toys like these play any part in the fetishizing of guns? Do they blur the line between fantasy and reality, helping to inspire mass shooters like Nikolas Cruz and Dylann Roof? Or are they just good, clean, foam fun? I don’t know if it’s possible to answer those questions, but I know one thing unequivocally: if the kinds of blasters that Nerf offers today had existed when I was little, I would have been completely, hopelessly enthralled.

Nerf’s deep dive into imaginative gunplay began humbly in 1989, when the company introduced Blast-a-Ball, a pair of simple plastic tubes with plunger handles on one end that could launch foam balls up to 30 feet. Nerf called it the “shoot ’em, dodge ’em, catch ’em” game, and, from the very beginning, it was clear that Nerf did not intend for its new toy to be enjoyed alone—each box came with two blasters.

I was born in 1981, and I remember playing with those original ball blasters, but the Nerf products that really took my suburban Washington, D.C., neighborhood by storm were the company’s foam footballs. The Turbo was about four-fifths the size of a leather pigskin, which made it easy to throw spirals. In 1991, the same year that Nerf introduced the Vortex—a whistling football with rocket fins—the company also launched the Bow ’n’ Arrow, a blaster in the shape of a bow that fired large foam missiles. Nerf dominated the birthday-party scene that year. Now, almost 30 years later, Nerf balls appear to have been overshadowed by its toy weapons.

Since their debut in the late 1980s, Nerf blasters have evolved into sophisticated toys capable of rapid fire, some models sporting what are known (on real guns) as high-capacity magazines, each holding a dozen rounds or more—in some cases, as many as 200. Nerf has sometimes looked to historical gun models for inspiration, like the Nerf Zombie Strike SlingFire Blaster, which uses the lever-action reload of the .30-30 Winchester Model 94 rifle, with dashes of fluorescent green and orange to diminish its verisimilitude. The overall aesthetic of Nerf’s blaster lineup remains playful and sci-fi, with wild color schemes and plenty of high-visibility orange, especially on the business end of the barrels. But anyone with a remotely trained eye can see that Nerf’s newer models are edging closer to the features of what are commonly known as assault weapons.

The expiration of the 1994 Assault Weapons Ban in 2004—along with a 2005 law that protected firearms manufacturers from lawsuits—contributed to a period of furious growth in the firearms market. Sales of handguns more than quadrupled between 1999 and 2016 (spiking in 2013 after the Sandy Hook Elementary School shooting, in anticipation of incoming gun-control legislation). Firearms imports into the United States also increased fivefold. After ten years of restrictions, manufacturers were now free to market a seemingly limitless array of military-style semiautomatic rifles and accessories, benefiting from the free advertising of the wars in Iraq and Afghanistan. At gun shows and in a proliferating number of firearms publications and enthusiast websites, hunting rifles and shotguns took a backseat to variations on the AR-15, the AK-47, and the Bullpup, a close-quarters combat rifle favored by the Israeli and British militaries.

Nerf appears to have taken notice of both the marketing and design tactics of the firearms industry over the years. The most obvious parallel between Nerf’s newer blasters and their deadly cousins is their focus on modularity. A seemingly infinite spectrum of accessories has made semiautomatic “black rifles” such as the AR-15 a hit among enthusiasts of real firearms, spurring enormous growth in aftermarket products. Similarly, recent upgrades to Nerf products have allowed for the reconfiguration of the company’s rifle-style blasters into pistols, and the addition of the Picatinny rail offers users the opportunity to mount accessories such as flashlights, bipods, and red-dot sights.

The company’s Modulus series includes a lineup of accessories that are obviously toy versions of the real add-ons beloved by black-rifle enthusiasts, including foregrips that mount under barrels, faux laser sights, collapsible stocks, and long-range barrel extenders. Certain battery-operated models are even capable of automatic fire, and some kids have figured out how to “bump fire” their nonautomatic models the same way you can bump fire a semiautomatic rifle: by hooking your finger around the trigger and moving the entire rifle back and forth.

Though there were lots of toy guns on the market that looked real when I was a kid, the opposite was not true: the only real guns I ever saw or handled were unmistakably not toys. They were made of black or polished steel and smooth, stained wood. (If they had plastic on them at all, it was black.) But just as Nerf seems to have co-opted the infinite accessorizing possibilities of the actual firearms industry, owners of AR-15s are sending their guns to third-party customizers to incorporate more playful features into their design: a gun can be anodized in virtually any color, or have a custom wrap applied featuring Star Wars and Marvel themes.

Growing up, my friends and I had toy-gun arsenals that would’ve equipped us for any conflict from the Revolutionary War to Vietnam: long-barreled muskets purchased on field trips to Colonial Williamsburg, chrome six-shooters, cork popguns and rubber-band shooters, and battery-operated squirt guns that looked like exact replicas of the TEC-9 and MAC-10. A company called Zap It sold guns shaped like miniature Uzis, which shot blood-colored ink that would stain clothes briefly, then quickly fade and disappear. (An ad from the late 1980s shows a kid popping out from behind a door to shoot the mailman. A few seconds later, his dad shoots him from behind the cover of the morning paper.)

The first toy gun I remember playing with was a chrome cap gun in the shape of a .45 pistol. I was so young I don’t even remember holding it for the first time, but it stayed in my toy bin well into my middle-school years. It had been my dad’s when he was a kid in the 1950s and had plastic grips with real stippling and fired caps from a roll, which meant there was real smoke. It smelled musty and oxidized, like everything else that came out of my Nana’s basement in Missouri—a smell I associated with a grandfather and great-uncles I had never known, who’d fought in the trenches of WWI and on the seas of the North Atlantic, in the Pacific, and across Europe in WWII.

I knew boys who weren’t allowed to play with toy guns at all. Our grandparents were part of the Greatest Generation, survivors of epic struggles that earned them awe and reverence bordering on fear. But we were the children of the Baby Boomers—a generation sent to fight in Vietnam, a confusing conflict with no clear objectives that killed and maimed young draftees by the tens of thousands. Many young people came out of the 1960s committed to breaking the cycle of macho violence by emphasizing nonviolent play at home. When they had kids of their own—my generation, somewhere between Gen Xers and millennials—they forbade backyard war games and the props they thought were necessary to play them.

These attempts were futile. Whenever I was at the house of a friend who wasn’t allowed to have toy guns, we used our fingers for pistols and sticks for rifles. We made machine-gun noises and explosions with our mouths, imagining bullets kicking up dust around the enemy fortifications, smoke and splintered timber rising skyward in theatrical columns.

by Elliott Woods, Topic | Read more:
Image: Greg Marinovich

California Burning

On the northwestern edge of Los Angeles, where I grew up, the wildfires came in late summer. We lived in a new subdivision, and behind our house were the hills, golden and parched. We would hose down the wood-shingled roof as fire crews bivouacked in our street. Our neighborhood never burned, but others did. In the Bel Air fire of 1961, nearly five hundred homes burned, including those of Burt Lancaster and Zsa Zsa Gabor. We were all living in the “wildland-urban interface,” as it is now called. More subdivisions were built, farther out, and for my family the wildfire threat receded.

Tens of millions of Americans live in that fire-prone interface today—the number keeps growing—and the wildfire threat has become, for a number of political and environmental reasons, immensely more serious. In LA, fire season now stretches into December, as grimly demonstrated by the wildfires that burned across Southern California in late 2017, including the Thomas Fire, in Santa Barbara County, the largest in the state’s modern history. Nationally, fire seasons are on average seventy-eight days longer than they were in 1970, according to the US Forest Service. Wildfires burn twice as many acres as they did thirty years ago. “Of the ten years with the largest amount of acreage burned in the United States,” Edward Struzik notes in Firestorm: How Wildfire Will Shape Our Future, nine have occurred since 2000. Individual fires, meanwhile, are bigger, hotter, faster, more expensive and difficult to fight, and more destructive than ever before. We have entered the era of the megafire—defined as a wildfire that burns more than 100,000 acres.

In early July 2018, there were twenty-nine large uncontained fires burning across the United States. “We shouldn’t be seeing this type of fire behavior this early in the year,” Chris Anthony, a division chief at the California Department of Forestry and Fire Protection, told The New York Times. It had been an unusually dry winter and spring in much of the West, however, and by the end of June three times as much land had already burned in California as burned in the first half of 2017, which was the state’s worst fire year ever. On July 7, my childhood suburb, Woodland Hills, was 117 degrees. On the UCLA campus, it was 111 degrees. Wildfires broke out in San Diego and up near the Oregon border, where a major blaze closed Interstate 5 and killed one civilian. The governor, Jerry Brown, declared yet another state of emergency in Santa Barbara County.

How did this happen? One part of the story begins with a 1910 wildfire, known as the Big Burn, that blackened three million acres in Idaho, Montana, and Washington and killed eighty-seven people, most of them firefighters. Horror stories from the Big Burn seized the national imagination, and Theodore Roosevelt, wearing his conservationist’s hat, used the catastrophe to promote the Forest Service, which was then new and already besieged by business interests opposed to public management of valuable woodlands. The Forest Service was suddenly, it seemed, a band of heroic firefighters. Its budget and mission required expansion to prevent another inferno.

The Forest Service, no longer just a land steward, became the federal fire department for the nation’s wildlands. Its policy was total suppression of fires—what became known as the 10 AM rule. Any reported fire would be put out by 10 AM the next day, if possible. Some experienced foresters saw problems with this policy. It spoke soothingly to public fears, but periodic lightning-strike fires are an important feature of many ecosystems, particularly in the American West. Some “light burning,” they suggested, would at least be needed to prevent major fires. William Greeley, the chief of the Forest Service in the 1920s, dismissed this idea as “Paiute forestry.”

But Native Americans had used seasonal burning for many purposes, including hunting, clearing trails, managing crops, stimulating new plant growth, and fireproofing areas around their settlements. The North American “wilderness” encountered by white explorers and early settlers was in many cases already a heavily managed, deliberately diversified landscape. The total suppression policy of the Forest Service and its allies (the National Park Service, for instance) was exceptionally successful, reducing burned acreage by 90 percent, and thus remaking the landscape again—creating what Paul Hessburg, a research ecologist at the Forest Service, calls an “epidemic of trees.”

Preserving trees was not, however, the goal of the Forest Service, which worked closely with timber companies to clear-cut enormous swaths of old-growth forest. (Greeley, when he left public service, joined the timber barons.) The idea was to harvest the old trees and replace them with more efficiently managed and profitable forests. This created a dramatically more flammable landscape. Brush and woodland understory were no longer being cleared by periodic wildfires, and the trees in second-growth forest lacked the thick, fire-adapted bark of their old-growth predecessors. As Stephen Pyne, the foremost American fire historian, puts it, fire could “no longer do the ecological work required.” Fire needs fuel, and fire suppression was producing an unprecedented amount of wildfire fuel.

Climate change, meanwhile, has brought longer, hotter summers and a series of devastating droughts, priming landscapes to burn. Tree-killing insects such as the mountain pine beetle thrive in droughts and closely packed forests. The most recent outbreak of bark-beetle infestation, the largest ever recorded, has destroyed billions of trees in fourteen western states and much of western Canada. Dead trees make fine kindling for a megafire.

Invasive species also contribute. The sagebrush plains of the Great Basin, which spreads across six states in the Intermountain West, are being transformed by cheatgrass (Bromus tectorum), a weed that arrived in contaminated grain seed from Eurasia in the nineteenth century. Cheatgrass is highly flammable, grows rapidly, and is nearly indestructible. It has a fire return interval—the typical time between naturally occurring fires—of less than five years. Sagebrush, which is slow to reestablish itself after a fire, is unable to compete. Cheatgrass, with its ferocious fire cycle of burning and quick regeneration, now infests fifty million acres of the sagebrush steppe. Farther south, cheatgrass and other invasive weeds are threatening the towering saguaro cactus and, in California, the Joshua tree.

Nonnative species can also be a fire risk when they are deliberately introduced. Portugal has been tormented by wildfires, including an inferno last summer that killed more than sixty people, partly because of the flammability of eucalyptus, which is native to Australia and has become the mainstay of the national wood industry, transforming the Portuguese countryside, according to an environmental engineer who spoke to The New York Times, “from a pretty diverse forest into a big eucalyptus monoculture.”

In the United States, exurban and rural property development in the wildland-urban interface has been, perhaps, the final straw—or at least another lighted match tossed on the pile. Most wildfires that threaten or damage communities are caused by humans. Campfires, barbecues, sparks from chainsaws, lawnmowers, power lines, cars, motorcycles, cigarettes—the modes of inadvertent ignition in a bone-dry landscape are effectively limitless. To say nothing of arson. Houses and other structures become wildfire fuel, and vulnerable communities hugely complicate forest management and disaster planning. In his panoramic 2017 book Megafire, the journalist (and former firefighter) Michael Kodas observes pithily that “during the century in which the nation attempted to exclude fire from forests, they filled with homes.”

Starting around the 1960s, the Forest Service and its sister agencies, including the National Park Service, did eventually come to see some of the deep flaws in the policy of total fire suppression. The virtues of “prescribed burning”—deliberately set, carefully planned fires, usually in the late fall or early spring, meant to reduce the amount of fuel and the risk of wildfires—had become blindingly obvious. Still, prescribed burns were, and are, a hard sell. People don’t like to see forest fires or grass fires, particularly not anywhere near their homes. Downwind communities hate the smoke, quite understandably. Politicians lose their nerve.

On rare occasions, a prescribed burn escapes the control of firefighters, and those disasters tend to be remembered. The 2000 Cerro Grande Fire, in New Mexico, started out as a prescribed burn. It escaped, destroyed four hundred homes, and nearly burned down the Los Alamos nuclear research facility. Political support for prescribed burning took a heavy hit. Bruce Babbitt, then secretary of the interior, suspended all federal prescribed burning west of the 100th meridian, which basically meant the entire West.

For backcountry fires, the wisdom of “let it burn” also slowly became clear to forest managers. National parks started calling wildfires that didn’t threaten lives or structures “prescribed natural fires.” Firefighters might herd a blaze in the direction they wanted it to go, but would otherwise let it run its course. This enlightened policy hasn’t always survived political pressure either. In 1988, a drought year in the West, hundreds of wildfires erupted in Yellowstone National Park. President Ronald Reagan denounced the wait-and-see response of firefighters as “cockamamie.” His interior secretary, Donald Hodel, ordered the park’s officials to fight the fires.

“Prescribed natural fires” were abandoned, and as many as nine thousand firefighters fought the Yellowstone megafire, which burned for four months. John Melcher, a Montana senator, told The New York Times, “They’ll never go back to this policy. From now on the policy will be putting the fire out when they see the flames.” The Yellowstone effort cost $120 million ($250 million in 2018 dollars). Cool weather and autumn snow ultimately put out the fires. Surprisingly few animals perished, and the land soon began to regenerate. The “let burn” policy took somewhat longer to recover.

This alternation between firefighting and wildfire risk reduction continues. But since wildfires are getting steadily worse, stop-it-now firefighting always gets more funding. The Forest Service spent 16 percent of its budget on fire suppression in 1995. In 2015, it spent $2.6 billion—more than half of its budget. In Stephen Pyne’s formulation, we’re getting more bad fires and fewer good fires. As resources are drained from the forest management side, the buildup of dangerous, unhealthy forests continues, fueling more terrible fires, many of which will need to be fought.

Into this breach, a small army of private contractors has streamed, ready to feed firefighters, wash their clothes, and rent them, at prices sure to make a taxpayer’s eyes water, anything from helicopters to bulldozers to twelve-stall shower trailers. Politicians, never eager to tramp along on a smoky prescribed burn or wade into the woods with crews doing mechanical brush-thinning, are generally happier to be seen calling for military aircraft, say, to drop retardant on a raging blaze. Firefighters call these “air shows.” Aviation has an important part to play in certain types of fire suppression, usually early in the course of a wildfire, but commanders on the ground have learned that it can also be necessary to let the governor or congressperson appear to be riding to the rescue of his constituents with a fleet of C-130s, no matter how expensive and unhelpful they may be. In Southern California, Representative Duncan Hunter, whose district is near San Diego, is known as the region’s leading wildfire showboat.

In Wildfire, the journalist Heather Hansen embeds herself in an elite crew of wildland firefighters based in Boulder, Colorado. The crew, known as Station 8, primarily works in the wildland-urban interface of Boulder and the surrounding Rocky Mountain Front Range, but its members also travel to every corner of the country to help fight wildfires. It’s a reciprocal arrangement—when they need help on a fire at home, hotshots from those far-flung places will show up. Hansen learns and shares some fire science and fire history, filling in the background of the current crisis. She describes the crew’s punishing training and their powerful camaraderie, and recounts their stories of fires fought, disasters survived, lessons learned.

Then she goes out on a prescribed burn near the edge of Boulder. It’s an eighty-five-acre open ridge on city-owned property, a small project, but not far from thousands of homes. The crew’s preparation has been long and meticulous, including outreach to the neighborhood. “You’re a hero when you put out fire but not when you start one, especially if something goes wrong,” the fire operations manager, Brian Oliver, tells her:
Boulder is a very smart community, a lot of PhDs, and they understand what we’re trying to do with the fuels reduction and the thinning…. In theory they are very supportive and receptive, but then, “Wait, you’re going to light fire on purpose? That’s weird. We don’t want you to do that.” Or it’s, “We want you to do it but we don’t want to be impacted.” As soon as the smell of smoke gets in their window it’s, “What are you guys doing? I can’t believe this, you’re terrible. My curtains smell like smoke; who’s going to pay for my dry-cleaning?” Or “Yeah I support prescribed burns but this is the trail I run on every day. You’re ruining my workout.”
The prescribed burn on the ridge is tense, unexpectedly dramatic. Dozens of firefighters from surrounding stations show up to help. Station 8 has made a close study of the diurnal wind patterns on the ridge and kept its own weather station up on the site for two months, but the wind this morning is fluky. They do a test burn, then quickly shut it down when an unexpected scrap of south wind puffs. They try again, and the wind whips harder. Oliver orders it shut down again, and this time it takes several minutes of furious water-spraying and hacking at burning stumps to put out the small test fire. That’s it. The burn is a no-go. Maybe they’ll get this ridge next year. Fire crews in today’s drought-plagued West have to work with “laughably small burn windows,” Hansen says, referring to the periods in which prescribed burns can be safely attempted. The burn windows in Boulder amount to eleven days a year.

by William Finnegan, NYRB | Read more:
Image: Joe Sohm/Visions of America

This Japanese Shrine Has Been Torn Down And Rebuilt Every 20 Years for the Past Millennium

Every 20 years, locals tear down the Ise Jingu grand shrine in Mie Prefecture, Japan, only to rebuild it anew. They have been doing this for around 1,300 years. Some records indicate the Shinto shrine is up to 2,000 years old. The process of rebuilding the wooden structure every couple of decades has helped preserve the original architect’s design against the otherwise eroding effects of time. “Its secret isn’t heroic engineering or structural overkill, but rather cultural continuity,” writes the Long Now Foundation. (...)

Japan for Sustainability’s Junko Edahiro describes the history of the ceremony at length and reports on the upcoming festivities:
This is an important national event. Its underlying concept — that repeated rebuilding renders sanctuaries eternal — is unique in the world. 
The Sengu is such a large event that preparations take over eight years, four years alone just to prepare the timber.
Locals take part in a parade to transport the prepared wood along with white stones—two per person—which they place in sacred spots around the shrine. In addition to reinvigorating spiritual and community bonds, the tradition keeps Japanese artisan skills alive. The shrine’s visitor site describes this aspect of the Shikinen Sengu ceremony:
It also involves the wish that Japanese traditional culture should be transmitted to the next generation. The renewal of the buildings and of the treasures has been conducted in the same traditional way ever since the first Shikinen Sengu had been performed 1300 years ago. Scientific developments make manual technology obsolete in some fields. However, by performing the Shikinen Sengu, traditional technologies are preserved.
As Edahiro describes, local people often take part in the ceremony several times over the course of their lives. “I saw one elderly person who probably has experienced these events three or four times saying to young people who perhaps participated in the event as children last time, ‘I will leave these duties to you next time,’” she recalls. “I realized that the Sengu ceremony also plays a role as a ‘device’ to preserve the foundations of traditions that contribute to happiness in people’s lives.”

by Rachel Nuwer, Smithsonian | Read more:
Image: N Yotarou
[ed. The author László Krasznahorkai took part in the ritual rebuilding of a Shinto shrine. There he witnessed ancient tradition, and the toll it takes. For one disciple, “his job is to plane this piece of hinoki cypress, and he planes it all day. And the master comes at the end of the day and he throws it away. And he keeps on planing and planing it…until the master decides that it’s OK. That’s tradition. But there’s no nostalgia in that.” (The Economist). 

Also, from the JFS Junko Edahiro link (above): As many as 10,000 Japanese cypress trees are needed each time the Jingu sanctuaries of Ise are rebuilt. How have people secured so many Japanese cypress trees every 20 years?

The Jingu Shrine itself owns a large parcel of land 5,500 hectares in extent, and over 90 percent of this land is covered in forest. This forest, called the "Misoma-yama," was created as a result of learning from experience in the past. Timber was formerly taken from this forest to use for the Sengu rebuilding ceremony as well as for firewood. In the Edo Period (1603-1867), about 7 to 9 million people (about the same number as in modern times) came to worship at Jingu Shrine every year. Firewood was needed for these pilgrims, who normally stayed near the site for several days. As a result the local forest was increasingly exploited, and the timber resource became depleted.

During the Edo Period, the central government (the shogunate) designated a forest in the Kiso area owned by the Owari clan in today's Nagano Prefecture to supply timber to Jingu Shrine. However, toward the end of the Edo Period, this forest became Imperial property, and after World War II it was designated a national forest. Jingu Shrine is given priority in purchasing timber for the Sengu ceremony from this forest, but it is not the only buyer of this rather expensive timber.

Thus it became more and more difficult for Jingu Shrine to depend entirely on domestic resources for the Sengu rebuilding ceremony. This possibility was foreseen by shrine staff, who started taking action 90 years ago. Thinking that the shrine should have its own forest to provide timber for reconstruction, the shrine secretariat ("Jingu Shicho," part of the Interior Ministry) formed a forest management plan during the Taisho Period (1912 - 1926), and started planting trees. At the time, the nominal purpose of the project was said to be landscape conservation and enhancement of the water resource recharging function of the Isuzu River, but Japanese cypresses were also planted on southern slopes.

This afforestation plan encompassed a 200-year time-scale, and aimed to start semi-permanently supplying all the timber for the Sengu ceremony from Shrine-owned forest within 200 years. This plan made it possible to obtain one-fourth of the necessary timber for this year's Sengu ceremony from Shrine lands. This proportion will increase every 20 years. Although the remainder must be purchased from other domestic sources, shrine forests are expected to be able to provide all the timber for future reconstruction ceremonies earlier than originally planned.

The Sengu is such a large event that preparations take over eight years, four years alone just to prepare the timber. Logs are soaked in a lumber pond for two years after felling, a method known as "underwater drying," used to leach extraneous oil out of the logs. The logs are then stacked outside for a year to acclimatize them to the severities of the four seasons. It takes another year to saw them into shape, and finally to cover them with Japanese paper to keep them in good condition until the ceremony. This long curing process strengthens the timber, prevents it from warping or cracking, and prepares it to play its proper part in the ceremony with its central concept of protecting life.]

Thursday, August 2, 2018

The “Next” Financial Crisis and Public Banking as the Response

In this episode of The Hudson Report, we speak with Michael Hudson about the implications of the flattening yield curve, the possibility of another global financial crisis, and public banking as an alternative to the current system.

Paul Sliker: Michael Hudson welcome back to another episode of The Hudson Report.

Michael Hudson: It’s good to be here again.

Paul Sliker: So, Michael, over the past few months the IMF has been sending warning signals about the state of the global economy. There are a bunch of different macroeconomic developments that signal we could be entering into another crisis or recession in the near future. One of those elements is the yield curve, which shows the difference between short-term and long-term borrowing rates. Investors and financial pundits of all sorts are concerned about this, because since 1950 every time the yield curve has flattened, the economy has tanked shortly thereafter.

Can you explain what the yield curve signifies, and if all these signals I just mentioned are forecasting another economic crisis?

Michael Hudson: Normally, borrowers have to pay only a low rate of interest for a short-term loan. If you take a longer-term loan, you have to pay a higher rate. The longest term loans are for mortgages, which have the highest rate. Even for large corporations, the longer you borrow – that is, the later you repay – the pretense is that the risk is much higher. Therefore, you have to pay a higher rate on the pretense that the interest-rate premium is compensation for risk. Banks and the wealthy get to borrow at lower rates.

Right now what’s happened is that the short-term rates you can get by putting your money in Treasury bills or other short-term instruments are even higher than the long-term rates. That’s historically unnatural. But it’s not really unnatural at all when you look at what the economy is doing.

You said that we’re entering into a recession. That’s just flat wrong. The economy’s been in a recession ever since 2008, as a result of what President Obama did by bailing out the banks and not the economy at large.

Since 2008, people have talked about how GDP is growing. Especially in the last few quarters, you have the media saying, “Look, we’ve recovered. GDP is up.” But if you look at what they count as GDP, you find a primer on how to lie with statistics.

The largest element of fakery is a category that is imputed – that is, made up – for rising rents that homeowners would have to pay if they had to rent their houses from themselves. That’s about 6 percent of GDP right there. Right now, as a result of the 10 million foreclosures that Obama imposed on the economy by not writing down the junk mortgage debts to realistic values, companies like Blackstone have come in and bought up many of the properties that were forfeited. So now there are fewer homes available to buy. Rents are going up all over the country. Homeownership has dropped by about 10 percent since 2008, and that means more people have to rent. When more people have to rent, rents go up. And when rents go up, people lucky enough to have kept their homes report these rising rental values to the GDP statisticians.

If I had to pay rent for the house that I have, I could charge as much money as renters down the street have to pay – for instance, for houses that were bought up by Blackstone. Rents are going up and up. This actually is a rise in overhead, but it’s counted as rising GDP. That confuses income and output with overhead costs.

The other great jump in GDP has been people paying more money to the banks as penalties and fees for arrears on student loans and mortgage loans, credit card loans and automobile loans. When they fall into arrears, the banks get to add a penalty charge. The credit-card companies make more money on arrears than they do on interest charges. This is counted as providing a “financial service,” defined as the amount of revenue banks make over and above their borrowing charges.

The statistical pretense is that they’re taking the risk on making loans to debtors that are going bad. They’re cleaning up on profits on these bad loans, because the government has guaranteed the student loans, including the higher penalty charges. They’ve guaranteed the mortgage loans made by the FHA – Fannie Mae and the other groups – that the banks are getting penalty charges on. So what’s reported as GDP growth is actually more and more people in trouble, along with rising housing costs. What’s good for the GDP here is awful for the economy at large! This is bad news, not good news.

As a result of this economic squeeze, investors see that the economy is not growing. So they’re bailing out. They’re taking their money and running.

If you’re taking your money out of bonds and out of the stock market because you worry about shrinking markets, lower profits and defaults, where are you going to put it? There’s only one safe place to put your money: short-term Treasuries. You don’t want to buy a long-term Treasury bond, because if interest rates go up, the bond price falls. So you want to buy short-term Treasury bonds. The demand for these is so great that Bogle’s Vanguard fund management company will only let small investors buy ten thousand dollars’ worth at a time for their 401K funds.
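Hudson’s aside that bond prices fall when interest rates rise follows from simple present-value arithmetic, and the effect is much larger for long maturities than short ones. A minimal sketch (illustrative figures, not from the interview):

```python
def bond_price(face, coupon_rate, ytm, years):
    """Price of a fixed-coupon bond: the sum of its discounted annual
    coupons plus the discounted face value repaid at maturity."""
    coupon = face * coupon_rate
    coupons_pv = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    face_pv = face / (1 + ytm) ** years
    return coupons_pv + face_pv

# A 10-year bond with a 3% coupon trades at par when market yields are 3%:
at_par = bond_price(1000, 0.03, 0.03, 10)      # ≈ 1000.00

# If market yields rise one point to 4%, the same bond's price drops:
after_rate_rise = bond_price(1000, 0.03, 0.04, 10)  # ≈ 918.89
```

The holder of the long bond loses roughly 8 percent of its value on a one-point rate rise, while a bill maturing in a few months barely moves, which is why investors who expect rates to climb park their money in short-term Treasuries.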

The reason small to large investors are buying short term treasuries is to park their money safely. There’s nowhere else to put it in the real economy, because the real economy isn’t growing.

What has grown is debt. It’s grown larger and larger. Investors are taking their money out of state and local bonds because state and local budgets are broke as a result of pension commitments. Politicians have cut taxes in order to get elected, so they don’t have enough money to keep up with the pension fund contributions that they’re supposed to make.

This means that the likelihood of a break in the chain of payments is rising. In the United States, commercial property rents are in trouble. We’ve discussed that before on this show. As the economy shrinks, stores are closing down. That means that the owners who own commercial mortgages are falling behind, and arrears are rising.

Also threatening is what Trump is doing. If his protectionist policies interrupt trade, you’re going to see companies being squeezed. They’re not going to make the export sales they expected, and will pay more for imports.

Finally, banks are having problems if they hold Italian government bonds. Germany is unwilling to use European funds to bail them out. Most investors expect Italy to exit the euro in the next three years or so. It looks like we’re entering a period of anarchy, so of course people are parking their money in the short term. That means that they’re not putting it into the economy. No wonder the economy isn’t growing.

Dante Dallavalle: So to be clear: a rise in demand for these short-term Treasuries is an indication that investors and businesses find too much risk in the economy as it stands now to be investing in anything more long-term.

Michael Hudson: That’s exactly right.

Dante Dallavalle: OK. So we have prominent economists and policymakers, like Geithner, Bernanke, Paulson, etc., making the point that we need not worry about a future crisis in the near term, because our regulatory infrastructure is more sound now than it was in the past, for instance before 2008. I know you’ve talked a lot about the weak nature of financial regulation both here at home in the United States and internationally. What are the shortcomings of Dodd-Frank? Haven’t recent policies gutting certain sections of the law made us more vulnerable, not less, to crises in the future?

Michael Hudson: Well, you asked two questions. First of all, when you talk about Geithner and Bernanke – the people who wrecked the economy – what they mean by “more sound” is that the government is going to bail out the banks again at public expense.

It cost $4.3 trillion last time. They’re willing to bail out the banks all over again. In fact, the five largest banks have grown much larger since 2008, because they were bailed out. Depositors and companies think that if a bank is so crooked that it grows so fast that it’s become too big to fail, they had better take their money out of the local bank and put it in the crooked big bank, because that’s going to be bailed out – because the government can’t afford to let it go under.

The pretense was that Dodd Frank was going to regulate them, by increasing the capital reserves that banks had to have. Well, first of all, the banks have captured the regulatory agencies. They’re in charge of basically approving Federal Reserve members, and also members of the local and smaller bank regulatory agencies. So you have deregulators put in charge of these agencies. Second, bank lobbyists have convinced Congress to de-tooth the Dodd Frank Act.

For instance, banks are very heavily into derivatives. That’s what brought down AIG in 2008. These are bets on which way currencies or interest rates will go. There are trillions of dollars nominally of bets that have been placed. They’re not regulated if a bank does this through a special-purpose entity, especially if it does it through those that are in Britain. That’s where AIG’s problems were in 2008. So the banks basically have avoided having to back up capital against making a bad bet.

If you have bets over where trillions of dollars of securities, interest rates, bonds and currencies are going to go, somebody is going to be on the losing side. And someone on the losing side of these bets is going to go under, like Lehman Brothers did. They’re not going to be able to pay their customers. You’re going to have rolling defaults.

You’ve also had Trump de-tooth the Consumer Financial Protection Bureau. So the banks say, well, let’s do what Wells Fargo did. Their business model is fraud, but their earnings are soaring. They’re growing a lot, and they’ve paid only a tiny penalty for cheating their customers and making billions of dollars off it. So more banks are jumping on the high-risk consumer exploitation bandwagon. That’s certainly not helping matters.

Michael Palmieri: So, Michael, we’ve talked a little bit about the different indicators that point towards a financial crisis. It’s also clear from what you just stated that, from a regulatory standpoint, the U.S. is extremely vulnerable. Back in 2008, many argue, there was a huge opportunity lost in terms of transforming our private banking system into a publicly owned banking system. Recently the Democracy Collaborative published a report titled The Crisis Next Time: Planning for Public Ownership as an Alternative to Corporate Bailouts. That was put out by Thomas Hanna. He was calling for a transition from private to public banking. He also made the point, which you’ve made in earlier episodes, that it’s not a question of if another financial crisis is going to occur, but when. Can you speak a little bit about how public banking as an alternative would differ from the current corporate private banking system we have today?

Michael Hudson: Sure. I’m actually part of the Democracy Collaborative. The best way to think about this is to suppose that back in 2008, Obama and Wall Street bagman Tim Geithner had not blocked Sheila Bair from taking over Citigroup and other insolvent banks. She wrote that Citigroup had gambled with its money and was incompetent, even outright crooked. She wanted to take it over.

Now suppose that Citibank had been taken over by the government and operated as a public bank. How would a public bank have operated differently from Citibank?

For one thing, a public entity wouldn’t make loans for corporate takeovers and raids. It wouldn’t lend to payday loan sharks. Instead, it would open local branches so that people didn’t have to go to payday loan sharks, but could borrow from a local bank branch or a post office bank in the communities that are redlined by the big banks.

A public entity wouldn’t make gambling loans for derivatives. What a public bank would do is what’s called the vanilla, bread-and-butter operation of serving small depositors, savers and consumers. You let them have checking accounts, you clear their checks, you pay their bills automatically, but you don’t make gambling and financial loans.

Banks have sort of turned away from small customers. They’ve certainly turned away from the low-income neighborhoods, and they’re not even lending to businesses anymore. More and more American companies are issuing their own commercial paper to avoid the banks. In other words, a company will issue an IOU itself, and pay more interest than pension funds or mutual funds can get from the banks. So the money funds such as Vanguard are buying commercial paper from these companies, because the banks are not making these loans.

So a public bank would do what banks are supposed to do productively, which is to help finance basic production and basic consumption, but not financial gambling at the top where all the risk is. That’s the business model of the big banks, and some will lose money and crash like in 2008. A public bank wouldn’t make junk mortgage loans. It wouldn’t engage in consumer fraud. It wouldn’t be like Wells Fargo. It wouldn’t be like Citibank. What is needed is obvious: a bank whose business plan is not exploitation of consumers, not fraud and not gambling. That basically is the case for public ownership.

Paul Sliker: Michael as we’re closing this one out, I know you’re going to hate me for asking this question. But you were one of the few economists to predict the last crisis. What do you think is going to happen here? Are we looking at another global financial crisis and when do you think, if so, that might be coming?

Michael Hudson: We’re emphatically not looking for “another” global crisis, because we’re in the same crisis! We’re still in the 2008 crisis! This is the middle stage of that crisis. The crisis was caused by not writing down the bad debts, which means the bad loans, especially the fraudulent loans. Obama kept these junk mortgage loans and outright fraud on the books – and richly rewarded the banks in proportion to how badly and recklessly they had lent.

The economy’s been limping along ever since. They say there’s been a recovery, but even with the fake lying with statistics – with a GDP rise – the so-called “recovery” is the slowest that there’s been at any time since World War II. If you break down the statistics and look at what is growing, it’s mainly the financial and real estate sector, and monopolies like health care that raise the costs of living and crowd out spending in the real economy.

So this is the same crisis that we were in then. It’s never been fixed, and it can’t be fixed until you get rid of the bad-debt problem. The bad debts require restructuring the way in which pensions are paid – paying them out of current income, not financializing them. The economy has to be de-financialized, but I don’t see that on the horizon for a while. That’s why I think that rather than a new crisis, there will be a slow shrinkage until there’s a break in the chain of payments. Then they’re going to call that the crisis.

by Yves Smith and Michael Hudson, Naked Capitalism |  Read more:

Zero 7 ft. Sia and Sophie Barker

Comcast, Charter Dominate US; Telcos “Abandoned Rural America”

You already knew that home broadband competition is sorely lacking through much of the US, but a new report released today helps shed more light on Americans who have just one choice for high-speed Internet.

Comcast is the only choice for 30 million Americans when it comes to broadband speeds of at least 25Mbps downstream and 3Mbps upstream, the report says. Charter Communications is the only choice for 38 million Americans. Combined, Comcast and Charter offer service in the majority of the US, with almost no overlap.

Yet many Americans are even worse off, living in areas where DSL is the best option. AT&T, Verizon, and other telcos still provide only sub-broadband speeds over copper wires throughout huge parts of their territories. The telcos have mostly avoided upgrading their copper networks to fiber—except in areas where they face competition from cable companies.

These details are in "Profiles of Monopoly: Big Cable and Telecom," a report by the Institute for Local Self-Reliance (ILSR). The full report should be available at this link today.

“Market is broken”

"The broadband market is broken," the report's conclusion states. "Comcast and Charter maintain a monopoly over 68 million people. Some 48 million households (about 122 million people) subscribe to these cable companies, whereas the four largest telecom companies combined have far fewer subscribers—only 31.6 million households (about 80.3 million people). The large telecom companies have largely abandoned rural America—their DSL networks overwhelmingly do not support broadband speeds—despite years of federal subsidies and many state grant programs." (...)

Comcast and Charter

Comcast, the nation's biggest cable company and broadband provider, offers service to about 110 million people in 39 states and Washington, DC.

"All of these people have access to broadband-level service through Comcast Xfinity, but about 30 million of these people have no other option for broadband service," the ILSR wrote.

Comcast’s broadband subscribers number 25.5 million households, or about 64.8 million people, based on the average US household size of 2.54 people.

Charter, the second biggest cable company after Comcast, offers service to 101 million people in 45 states. Some 22.5 million households, covering about 57.2 million people, subscribe to Charter Internet, according to the numbers cited by the ILSR.
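The people figures here are straightforward arithmetic on the subscriber counts, scaled by the average US household size of 2.54 that the report uses. A quick check (illustrative code, not from the article):

```python
# Average US household size cited in the ILSR report.
AVG_HOUSEHOLD_SIZE = 2.54

def households_to_people(households_millions):
    """Convert subscriber households (in millions) to an estimated
    number of people (in millions) using the average household size."""
    return households_millions * AVG_HOUSEHOLD_SIZE

comcast_people = households_to_people(25.5)  # ≈ 64.8 million
charter_people = households_to_people(22.5)  # ≈ 57.2 million
```

The same scaling turns the combined 48 million cable households into the roughly 122 million people the report's conclusion cites.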

Like Comcast, Charter offers broadband-level speeds throughout its territory. "About 38 million [people in Charter territory] have no other option for broadband service," the report said.

Comcast and Charter generally don't compete against each other. They have a combined territory covering about 210 million people, yet the companies' overlapping service territory covers only about 1.5 million people, according to the Form 477 data cited by the ILSR. The overlap is mostly in Florida, where Charter purchased Bright House Networks, and may be overstated because an entire census block is counted as served even if an ISP offers service to just one resident in the block.

by Jon Brodkin, Ars Technica |  Read more:
Image: ILSR

Wednesday, August 1, 2018

Losing Earth: The Decade We Almost Stopped Climate Change

Editor’s Note. This narrative by Nathaniel Rich is a work of history, addressing the 10-year period from 1979 to 1989: the decisive decade when humankind first came to a broad understanding of the causes and dangers of climate change. Complementing the text is a series of aerial photographs and videos, all shot over the past year by George Steinmetz. With support from the Pulitzer Center, this two-part article is based on 18 months of reporting and well over a hundred interviews. It tracks the efforts of a small group of American scientists, activists and politicians to raise the alarm and stave off catastrophe. It will come as a revelation to many readers — an agonizing revelation — to understand how thoroughly they grasped the problem and how close they came to solving it. Jake Silverstein

The world has warmed more than one degree Celsius since the Industrial Revolution. The Paris climate agreement — the nonbinding, unenforceable and already unheeded treaty signed on Earth Day in 2016 — hoped to restrict warming to two degrees. The odds of succeeding, according to a recent study based on current emissions trends, are one in 20. If by some miracle we are able to limit warming to two degrees, we will only have to negotiate the extinction of the world’s tropical reefs, sea-level rise of several meters and the abandonment of the Persian Gulf. The climate scientist James Hansen has called two-degree warming “a prescription for long-term disaster.” Long-term disaster is now the best-case scenario. Three-degree warming is a prescription for short-term disaster: forests in the Arctic and the loss of most coastal cities. Robert Watson, a former director of the United Nations Intergovernmental Panel on Climate Change, has argued that three-degree warming is the realistic minimum. Four degrees: Europe in permanent drought; vast areas of China, India and Bangladesh claimed by desert; Polynesia swallowed by the sea; the Colorado River thinned to a trickle; the American Southwest largely uninhabitable. The prospect of a five-degree warming has prompted some of the world’s leading climate scientists to warn of the end of human civilization.

Is it a comfort or a curse, the knowledge that we could have avoided all this?

Because in the decade that ran from 1979 to 1989, we had an excellent opportunity to solve the climate crisis. The world’s major powers came within several signatures of endorsing a binding, global framework to reduce carbon emissions — far closer than we’ve come since. During those years, the conditions for success could not have been more favorable. The obstacles we blame for our current inaction had yet to emerge. Almost nothing stood in our way — nothing except ourselves.

Nearly everything we understand about global warming was understood in 1979. By that year, data collected since 1957 confirmed what had been known since before the turn of the 20th century: Human beings have altered Earth’s atmosphere through the indiscriminate burning of fossil fuels. The main scientific questions were settled beyond debate, and as the 1980s began, attention turned from diagnosis of the problem to refinement of the predicted consequences. Compared with string theory and genetic engineering, the “greenhouse effect” — a metaphor dating to the early 1900s — was ancient history, described in any Introduction to Biology textbook. Nor was the basic science especially complicated. It could be reduced to a simple axiom: The more carbon dioxide in the atmosphere, the warmer the planet. And every year, by burning coal, oil and gas, humankind belched increasingly obscene quantities of carbon dioxide into the atmosphere.

Why didn’t we act? A common boogeyman today is the fossil-fuel industry, which in recent decades has committed to playing the role of villain with comic-book bravado. An entire subfield of climate literature has chronicled the machinations of industry lobbyists, the corruption of scientists and the propaganda campaigns that even now continue to debase the political debate, long after the largest oil-and-gas companies have abandoned the dumb show of denialism. But the coordinated efforts to bewilder the public did not begin in earnest until the end of 1989. During the preceding decade, some of the largest oil companies, including Exxon and Shell, made good-faith efforts to understand the scope of the crisis and grapple with possible solutions.

Nor can the Republican Party be blamed. Today, only 42 percent of Republicans know that “most scientists believe global warming is occurring,” and that percentage is falling. But during the 1980s, many prominent Republicans joined Democrats in judging the climate problem to be a rare political winner: nonpartisan and of the highest possible stakes. Among those who called for urgent, immediate and far-reaching climate policy were Senators John Chafee, Robert Stafford and David Durenberger; the E.P.A. administrator, William K. Reilly; and, during his campaign for president, George H.W. Bush. As Malcolm Forbes Baldwin, the acting chairman of the president’s Council for Environmental Quality, told industry executives in 1981, “There can be no more important or conservative concern than the protection of the globe itself.” The issue was unimpeachable, like support for veterans or small business. Except the climate had an even broader constituency, composed of every human being on Earth.

It was understood that action would have to come immediately. At the start of the 1980s, scientists within the federal government predicted that conclusive evidence of warming would appear on the global temperature record by the end of the decade, at which point it would be too late to avoid disaster. More than 30 percent of the human population lacked access to electricity. Billions of people would not need to attain the “American way of life” in order to drastically increase global carbon emissions; a light bulb in every village would do it. A report prepared at the request of the White House by the National Academy of Sciences advised that “the carbon-dioxide issue should appear on the international agenda in a context that will maximize cooperation and consensus-building and minimize political manipulation, controversy and division.” If the world had adopted the proposal widely endorsed at the end of the ’80s — a freezing of carbon emissions, with a reduction of 20 percent by 2005 — warming could have been held to less than 1.5 degrees.

A broad international consensus had settled on a solution: a global treaty to curb carbon emissions. The idea began to coalesce as early as February 1979, at the first World Climate Conference in Geneva, when scientists from 50 nations agreed unanimously that it was “urgently necessary” to act. Four months later, at the Group of 7 meeting in Tokyo, the leaders of the world’s seven wealthiest nations signed a statement resolving to reduce carbon emissions. Ten years later, the first major diplomatic meeting to approve the framework for a binding treaty was called in the Netherlands. Delegates from more than 60 nations attended, with the goal of establishing a global summit meeting to be held about a year later. Among scientists and world leaders, the sentiment was unanimous: Action had to be taken, and the United States would need to lead. It didn’t.

The inaugural chapter of the climate-change saga is over. In that chapter — call it Apprehension — we identified the threat and its consequences. We spoke, with increasing urgency and self-delusion, of the prospect of triumphing against long odds. But we did not seriously consider the prospect of failure. We understood what failure would mean for global temperatures, coastlines, agricultural yield, immigration patterns, the world economy. But we have not allowed ourselves to comprehend what failure might mean for us. How will it change the way we see ourselves, how we remember the past, how we imagine the future? Why did we do this to ourselves? These questions will be the subject of climate change’s second chapter — call it The Reckoning. There can be no understanding of our current and future predicament without understanding why we failed to solve this problem when we had the chance.

That we came so close, as a civilization, to breaking our suicide pact with fossil fuels can be credited to the efforts of a handful of people, among them a hyperkinetic lobbyist and a guileless atmospheric physicist who, at great personal cost, tried to warn humanity of what was coming. They risked their careers in a painful, escalating campaign to solve the problem, first in scientific reports, later through conventional avenues of political persuasion and finally with a strategy of public shaming. Their efforts were shrewd, passionate, robust. And they failed. What follows is their story, and ours.

Part One 1979–1982

The first suggestion to Rafe Pomerance that humankind was destroying the conditions necessary for its own survival came on Page 66 of the government publication EPA-600/7-78-019. It was a technical report about coal, bound in a coal-black cover with beige lettering — one of many such reports that lay in uneven piles around Pomerance’s windowless office on the first floor of the Capitol Hill townhouse that, in the late 1970s, served as the Washington headquarters of Friends of the Earth. In the final paragraph of a chapter on environmental regulation, the coal report’s authors noted that the continued use of fossil fuels might, within two or three decades, bring about “significant and damaging” changes to the global atmosphere.

Pomerance paused, startled, over the orphaned paragraph. It seemed to have come out of nowhere. He reread it. It made no sense to him. Pomerance was not a scientist; he had graduated from Cornell 11 years earlier with a degree in history. He had the tweedy appearance of an undernourished doctoral student emerging at dawn from the stacks. He wore horn-rimmed glasses and a thickish mustache that wilted disapprovingly over the corners of his mouth, though his defining characteristic was his gratuitous height, 6 feet 4 inches, which seemed to embarrass him; he stooped over to accommodate his interlocutors. He had an active face prone to breaking out in wide, even maniacal grins, but in composure, as when he read the coal pamphlet, it projected concern. He struggled with technical reports. He proceeded as a historian might: cautiously, scrutinizing the source material, reading between the lines. When that failed, he made phone calls, often to the authors of the reports, who tended to be surprised to hear from him. Scientists, he had found, were not in the habit of fielding questions from political lobbyists. They were not in the habit of thinking about politics.

Pomerance had one big question about the coal report. If the burning of coal, oil and natural gas could invite global catastrophe, why had nobody told him about it? If anyone in Washington — if anyone in the United States — should have been aware of such a danger, it was Pomerance. As the deputy legislative director of Friends of the Earth, the wily, pugnacious nonprofit that David Brower helped found after resigning from the Sierra Club a decade earlier, Pomerance was one of the nation’s most connected environmental activists. That he was as easily accepted in the halls of the Dirksen Senate Office Building as at Earth Day rallies might have had something to do with the fact that he was a Morgenthau — the great-grandson of Henry Sr., Woodrow Wilson’s ambassador to the Ottoman Empire; great-nephew of Henry Jr., Franklin D. Roosevelt’s Treasury secretary; second cousin to Robert, district attorney for Manhattan. Or perhaps it was just his charisma — voluble, energetic and obsessive, he seemed to be everywhere, speaking with everyone, in a very loud voice, at once. His chief obsession was air. After working as an organizer for welfare rights, he spent the second half of his 20s laboring to protect and expand the Clean Air Act, the comprehensive law regulating air pollution. That led him to the problem of acid rain, and the coal report.

He showed the unsettling paragraph to his office mate, Betsy Agle. Had she ever heard of the “greenhouse effect”? Was it really possible that human beings were overheating the planet?

Agle shrugged. She hadn’t heard about it, either.

That might have been the end of it, had Agle not greeted Pomerance in the office a few mornings later holding a copy of a newspaper forwarded by Friends of the Earth’s Denver office. Isn’t this what you were talking about the other day? she asked.

Agle pointed to an article about a prominent geophysicist named Gordon MacDonald, who was conducting a study on climate change with the Jasons, the mysterious coterie of elite scientists to which he belonged. Pomerance hadn’t heard of MacDonald, but he knew all about the Jasons. They were like one of those teams of superheroes with complementary powers that join forces in times of galactic crisis. They had been brought together by federal agencies, including the C.I.A., to devise scientific solutions to national-security problems: how to detect an incoming missile; how to predict fallout from a nuclear bomb; how to develop unconventional weapons, like plague-infested rats. The Jasons’ activities had been a secret until the publication of the Pentagon Papers, which exposed their plan to festoon the Ho Chi Minh Trail with motion sensors that signaled to bombers. After the furor that followed — protesters set MacDonald’s garage on fire — the Jasons began to use their powers for peace instead of war.

There was an urgent problem that demanded their attention, MacDonald believed, because human civilization faced an existential crisis. In “How to Wreck the Environment,” a 1968 essay published while he was a science adviser to Lyndon Johnson, MacDonald predicted a near future in which “nuclear weapons were effectively banned and the weapons of mass destruction were those of environmental catastrophe.” One of the most potentially devastating such weapons, he believed, was the gas that we exhaled with every breath: carbon dioxide. By vastly increasing carbon emissions, the world’s most advanced militaries could alter weather patterns and wreak famine, drought and economic collapse.

In the decade since then, MacDonald had been alarmed to see humankind begin in earnest to weaponize weather — not out of malice, but unwittingly. During the spring of 1977 and the summer of 1978, the Jasons met to determine what would happen once the concentration of carbon dioxide in the atmosphere doubled from pre-Industrial Revolution levels. It was an arbitrary milestone, the doubling, but a useful one, as its inevitability was not in question; the threshold would most likely be breached by 2035. The Jasons’ report to the Department of Energy, “The Long-Term Impact of Atmospheric Carbon Dioxide on Climate,” was written in an understated tone that only enhanced its nightmarish findings: Global temperatures would increase by an average of two to three degrees Celsius; Dust Bowl conditions would “threaten large areas of North America, Asia and Africa”; access to drinking water and agricultural production would fall, triggering mass migration on an unprecedented scale. “Perhaps the most ominous feature,” however, was the effect of a changing climate on the poles. Even a minimal warming “could lead to rapid melting” of the West Antarctic ice sheet. The ice sheet contained enough water to raise the level of the oceans 16 feet.

The Jasons sent the report to dozens of scientists in the United States and abroad; to industry groups like the National Coal Association and the Electric Power Research Institute; and within the government, to the National Academy of Sciences, the Commerce Department, the E.P.A., NASA, the Pentagon, the N.S.A., every branch of the military, the National Security Council and the White House.

Pomerance read about the atmospheric crisis in a state of shock that swelled briskly into outrage. “This,” he told Betsy Agle, “is the whole banana.”

Gordon MacDonald worked at the federally funded Mitre Corporation, a think tank that works with agencies throughout the government. His title was senior research analyst, which was another way of saying senior science adviser to the national-intelligence community. After a single phone call, Pomerance, a former Vietnam War protester and conscientious objector, drove several miles on the Beltway to a group of anonymous white office buildings that more closely resembled the headquarters of a regional banking firm than the solar plexus of the American military-industrial complex. He was shown into the office of a brawny, soft-spoken man in blocky, horn-rimmed frames, who extended a hand like a bear’s paw.

“I’m glad you’re interested in this,” MacDonald said, sizing up the young activist.

“How could I not be?” Pomerance said. “How could anyone not be?”

MacDonald explained that he first studied the carbon-dioxide issue when he was about Pomerance’s age — in 1961, when he served as an adviser to John F. Kennedy. Pomerance pieced together that MacDonald, in his youth, had been something of a prodigy: In his 20s, he advised Dwight D. Eisenhower on space exploration; at 32, he became a member of the National Academy of Sciences; at 40, he was appointed to the inaugural Council on Environmental Quality, where he advised Richard Nixon on the environmental dangers of burning coal. He monitored the carbon-dioxide problem the whole time, with increasing alarm.

MacDonald spoke for two hours. Pomerance was appalled. “If I set up briefings with some people on the Hill,” he asked MacDonald, “will you tell them what you just told me?”

Thus began the Gordon and Rafe carbon-dioxide roadshow. Beginning in the spring of 1979, Pomerance arranged informal briefings with the E.P.A., the National Security Council, The New York Times, the Council on Environmental Quality and the Energy Department, which, Pomerance learned, had established an Office of Carbon Dioxide Effects two years earlier at MacDonald’s urging. The men settled into a routine, with MacDonald explaining the science and Pomerance adding the exclamation points. They were surprised to learn how few senior officials were familiar with the Jasons’ findings, let alone understood the ramifications of global warming. At last, having worked their way up the federal hierarchy, the two went to see the president’s top scientist, Frank Press.

Press’s office was in the Old Executive Office Building, the granite fortress that stands on the White House grounds just paces away from the West Wing. Out of respect for MacDonald, Press had summoned to their meeting what seemed to be the entire senior staff of the president’s Office of Science and Technology Policy — the officials consulted on every critical matter of energy and national security. What Pomerance had expected to be yet another casual briefing assumed the character of a high-level national-security meeting. He decided to let MacDonald do all the talking. There was no need to emphasize to Press and his lieutenants that this was an issue of profound national significance. The hushed mood in the office told him that this was already understood. (...)

MacDonald’s voice was calm but authoritative, his powerful, heavy hands conveying the force of his argument. He was a geophysicist trapped in the body of an offensive lineman — he had turned down a football scholarship to Rice in order to attend Harvard — and seemed miscast as a preacher of atmospheric physics and existential doom. His audience listened in bowed silence. Pomerance couldn’t read them. Political bureaucrats were skilled at hiding their opinions. Pomerance wasn’t. He shifted restlessly in his chair, glancing between MacDonald and the government suits, trying to see whether they grasped the shape of the behemoth that MacDonald was describing.

MacDonald’s history concluded with Roger Revelle, perhaps the most distinguished of the priestly caste of government scientists who, since the Manhattan Project, advised every president on major policy; he had been a close colleague of MacDonald and Press since they served together under Kennedy. In a 1957 paper written with Hans Suess, Revelle concluded that “human beings are now carrying out a large-scale geophysical experiment of a kind that could not have happened in the past nor be reproduced in the future.” Revelle helped the Weather Bureau establish a continuous measurement of atmospheric carbon dioxide at a site perched near the summit of Mauna Loa on the Big Island of Hawaii, 11,500 feet above the sea — a rare pristine natural laboratory on a planet blanketed by fossil-fuel emissions. A young geochemist named Charles David Keeling charted the data. Keeling’s graph came to be known as the Keeling curve, though it more closely resembled a jagged lightning bolt hurled toward the firmament. MacDonald had a habit of tracing the Keeling curve in the air, his thick forefinger jabbing toward the ceiling.

After nearly a decade of observation, Revelle had shared his concerns with Lyndon Johnson, who included them in a special message to Congress two weeks after his inauguration. Johnson explained that his generation had “altered the composition of the atmosphere on a global scale” through the burning of fossil fuels, and his administration commissioned a study of the subject by his Science Advisory Committee. Revelle was its chairman, and its 1965 executive report on carbon dioxide warned of the rapid melting of Antarctica, rising seas, increased acidity of fresh waters — changes that would require no less than a coordinated global effort to forestall.

In 1974, the C.I.A. issued a classified report on the carbon-dioxide problem. It concluded that climate change had begun around 1960 and had “already caused major economic problems throughout the world.” The future economic and political impacts would be “almost beyond comprehension.” Yet emissions continued to rise, and at this rate, MacDonald warned, they could see a snowless New England, the swamping of major coastal cities, as much as a 40 percent decline in national wheat production, the forced migration of about one-quarter of the world’s population. Not within centuries — within their own lifetimes.

“What would you have us do?” Press asked.

The president’s plan, in the wake of the Saudi oil crisis, to promote solar energy — he had gone so far as to install 32 photovoltaic panels on the roof of the White House to heat his family’s water — was a good start, MacDonald thought. But Carter’s plan to stimulate production of synthetic fuels — gas and liquid fuel extracted from shale and tar sands — was a dangerous idea. Nuclear power, despite the recent tragedy at Three Mile Island, should be expanded. But even natural gas and ethanol were preferable to coal. There was no way around it: Coal production would ultimately have to end.

The president’s advisers asked respectful questions, but Pomerance couldn’t tell whether they were persuaded. The men all stood and shook hands, and Press led MacDonald and Pomerance out of his office. After they emerged from the Old Executive Office Building onto Pennsylvania Avenue, Pomerance asked MacDonald what he thought would happen.

Knowing Frank as I do, MacDonald said, I really couldn’t tell you.

In the days that followed, Pomerance grew uneasy. Until this point, he had fixated on the science of the carbon-dioxide issue and its possible political ramifications. But now that his meetings on Capitol Hill had concluded, he began to question what all this might mean for his own future. His wife, Lenore, was eight months pregnant; was it ethical, he wondered, to bring a child onto a planet that before much longer could become inhospitable to life? And he wondered why it had fallen to him, a 32-year-old lobbyist without scientific training, to bring greater attention to this crisis.

Finally, weeks later, MacDonald called to tell him that Press had taken up the issue. On May 22, Press wrote a letter to the president of the National Academy of Sciences requesting a full assessment of the carbon-dioxide issue. Jule Charney, the father of modern meteorology, would gather the nation’s top oceanographers, atmospheric scientists and climate modelers to judge whether MacDonald’s alarm was justified — whether the world was, in fact, headed to cataclysm.

Pomerance was amazed by how much momentum had built in such a short time. Scientists at the highest levels of government had known about the dangers of fossil-fuel combustion for decades. Yet they had produced little besides journal articles, academic symposiums, technical reports. Nor had any politician, journalist or activist championed the issue. That, Pomerance figured, was about to change. If Charney’s group confirmed that the world was careering toward an existential crisis, the president would be forced to act.

by Nathaniel Rich, NY Times | Read more:
Image: Daniel Becker
[ed. See also: We Were Warned and How Not to Talk About Climate Change]