Monday, August 6, 2018

The World’s Most Peculiar Company

When I was growing up, my family, like many others, got the Hammacher Schlemmer catalog delivered to our apartment. We never actually ordered anything from it, but I liked to daydream about belonging to a family who did. Whereas my real parents’ mail-order shopping was limited to the occasional windbreaker from Lands’ End, my imaginary Hammacher mom and dad purchased hovercraft, personal submarines, and giant floating trampolines with abandon. They knew how to party.

Between 1983 and 2005, there was a Hammacher Schlemmer store at the foot of Tribune Tower on Michigan Avenue. I remember gawking outside, thinking that the stones and bricks embedded in the building’s façade—Colonel McCormick’s prized samples from the Great Pyramid, Notre Dame Cathedral, and beyond—were just part of the store’s inventory. After all, if anyone were going to sell a section of the Parthenon, it would be Hammacher Schlemmer.

The store closed its doors, but the company is still around, headquartered in the northwest suburbs. And it continues to publish its signature catalog, as it has for the past 137 years. Hammacher Schlemmer mails out 50 million of them a year, in fact. It’s the longest-running catalog in American history.

These mail-order catalogs of bizarre gadgets, esoteric tchotchkes, and peculiar wellness treatments adhere to the same format and style as the ones delivered to my family’s apartment more than 20 years ago. With few exceptions, the four items per page are laid out in a quadrant, each with a photo, a dense block of explanatory text, and, most famously, a descriptive title. Open the 2018 spring catalog supplement and you’ll find the Genuine Handmade Irish Shillelagh, the 911 Instant Speakerphone, the Clarity Enhancing Sunglasses, and the Closet Organizing Trouser Rack all on one page.

In the age of Amazon, few things represent an ethos more diametrically opposed to the “everything store” than the Hammacher Schlemmer catalog. Typing “socks” into Amazon’s search bar yields a seemingly infinite number of options. But the Hammacher Schlemmer spring catalog supplement offers only the Best Circulation Enhancing Travel Socks and the Plantar Fasciitis Foot Sleeves, 45 pages apart. There are no algorithmically predicted product placements or targeted suggestions.

The mere existence of Hammacher Schlemmer these days invites some fair, yet pointed, questions. Who’s buying this stuff? immediately pops to mind. As does: How has the company lasted this long? And: What kind of person sees the Wearable Mosquito Net and thinks, I must have this?

For much of its history, Hammacher Schlemmer was a distinctly New York brand. It still maintains its only physical store on East 57th Street in Manhattan, but the headquarters have been in the Chicago area since merchandiser and collectible-plate magnate J. Roderick MacArthur (of the MacArthur “genius” grant family) bought the company and relocated it in 1981. As the home of catalog pioneers Montgomery Ward and Sears, Roebuck, Chicago was a natural fit for the nation’s most august purveyor of the mail-order medium.

You can find Hammacher Schlemmer’s offices on a broad stretch of Milwaukee Avenue in Niles. The first thing you see when you walk through the double glass doors of the former car dealership is a sunken indoor park, where ferns surround a gurgling stream. A series of displays in the carpeted lobby off the atrium documents the company’s history. One is dedicated to Hammacher Schlemmer’s “notable patrons,” including Steve Jobs, Marilyn Monroe, and Queen Elizabeth II.

Past those displays you’ll come to the Wall of Firsts, a long row of framed posters depicting various objects that debuted in the pages of the catalog. It begins with the First Pop Up Toaster (1931) and proceeds to such advents as the First Electric Food Blender (1934) and the First Microwave Oven (1968). It loses a little steam in the 2010s, thanks to items like the First Fashionista Christmas Tree (2012), yet finishes strong with the First Wellness Monitor Wristband (2015)—a Fitbit, though Hammacher Schlemmer won’t tell you that.

Hammacher Schlemmer’s policy has long been to remove product logos and brand names from its catalog. In the 1980s and ’90s, this was just another example of the retailer’s quirks, a vague gesture toward the privilege of ignorance: Just give me the best vacuum, I don’t care who makes it or how much it costs.

But these days there’s a more practical reason. Stephen Farrell, Hammacher Schlemmer’s director of merchandising, leads the team of buyers responsible for filling out the company’s eclectic inventory. He says the no-brand-name strategy is “particularly relevant today,” as Hammacher Schlemmer hopes to prevent people from simply searching for the products on Amazon and buying them there. (About 45 percent of the catalog inventory is exclusive to Hammacher Schlemmer. “We would prefer nothing is on Amazon,” Farrell tells me, though he says it’s not a deal breaker.)

For example, Hammacher Schlemmer features an item it calls the Barber Eliminator. Per the catalog: “The unit is moved through your hair while accommodating the contours of your pate.” It took me 20 minutes to find the electric razor on Amazon under its official name: the Conair Even Cut Rotary Hair Cutting System. It’s $20 cheaper on Amazon, though it doesn’t come with the lifetime guarantee Hammacher includes with all its products. This is a feature that seemingly everyone I encounter in Niles is eager to tell me about, usually along with the question of whether or not I have heard the story about the poop Roomba.

The folks at Hammacher Schlemmer love the poop Roomba story. It goes like this: In 2016, a man in Little Rock, Arkansas, purchased a robotic vacuum from Hammacher Schlemmer. One evening, while on its automatic timer, the Roomba encountered a pile of puppy excrement and proceeded to spread and spray dog feces all over the house as it traveled along its algorithmically determined route. The man’s Facebook post about the ordeal went viral (359,709 shares, as of this writing), and in it he gives “mad props to Hammacher Schlemmer” for making good on its lifetime guarantee and issuing a full $400 refund.

I can’t imagine the Barber Eliminator getting into any similar kind of trouble, but it carries the same guarantee nonetheless. Were I in the market for an at-home haircutting device, I’m not sure page 32 of Hammacher Schlemmer’s spring catalog supplement would be the first place I’d look for it, but that’s not the point. The catalog tries to sell the item’s purpose (the elimination of my barber) before the product itself. The goal is to persuade page flippers to enter the DIY haircut market right then and there, when they’re least expecting it.

by Nick Greene, Chicago Magazine |  Read more:
Image: Ryan Segedi

Sunday, August 5, 2018

Everything's Not O.K.

The story of my professional hockey career isn’t a pretty one. It’s not overflowing with highlight-reel goals or big-game hat tricks.

For the 11 years I played in the NHL, between 2000 and 2011, I was mainly known as a tough guy. I was a fighter, a thug — someone you wouldn’t want to mess with unless you were looking to get punched in the face.

But let me be more specific. You want to know how I played the game?

I tried to hurt people.

That’s what I was there for. A lot of people don’t want to hear that, but it’s the honest truth. So, yes, for instance, I would try to injure you if that was the difference between winning and losing a hockey game. I’d do whatever was asked of me. And I can tell you that, yes, coaches do actually sometimes tap you on the back and tell you to get out there on the ice and fight. Whether you want to believe it or not, it happens.

And I was always game — right there at a moment’s notice, ready to oblige.

I’d do it for my team, and, as weird as it sounds, for … the game. Because as best I could tell, being tough, and one guy knocking the snot out of another guy, and showing no mercy, well … those things had always been part of our sport.

I had it in my head that there was a specific way that hockey needed to be played. And there was a level of honor to it, a certain pride that came along with kicking some ass.

It’s what I did, and it paid the bills and allowed me to support my family. But I never loved it. (...)

The thing about hockey is that it’s a fast game. Things happen in the blink of an eye. People are flying around. And when you get your bell rung, it’s not like everything stops. You know what I mean? You just keep playing. That’s how it works.

And it wasn’t really my coaches who pushed me to be that way. I expected it from myself. It was the only way I knew — me basically doing what I thought I was supposed to do, and what I saw everyone else doing. Push through, ignore the pain, finish out the shift, all that shit. It was all second nature to me.

So I’m definitely not looking to blame my coaches or anyone else for all those head hits I took over the years and never really said anything about.

I did it to myself. No doubt.

But over time, all those hits to the head … they add up. And when you look back on it, honestly, it’s hard not to shake your head at how bad things actually were. (...)

I was always hurting. And in order for me to carry on, I had to mask all that pain.

At one point during my career, I was taking so many painkillers and other drugs on a daily basis that I started to not even be able to recognize the person I had become.

Trainers always had painkillers. So I took them. Often. And it just escalated from there. Eventually I couldn’t get as many as I wanted, and so I started buying them from people on the street. Just more and more and more.

After a while, each day, and even entire chunks of the season, became almost like a daze. I was so medicated, and it began to get pretty frightening for me. So I decided that I needed to do something. I got my courage up, and got my shit together, and found a way to tell some people with the team I was playing for that I had a problem. It took everything I had in me to do that, but the response I received when I spoke to people was really uplifting. Everyone I talked to was so understanding. Every single person said they were there for me, and that they wanted to get me the help I needed.

A few weeks later, after the season had ended, I was back home in Nobleton, Ontario, at the old town hall, helping my folks set up for my sister’s buck and doe party before her wedding, when the phone rang.

One of my buddies had seen my name on the ESPN ticker.

“Nick, what the hell, man? I can’t believe it.”

I had no clue what he was talking about.

Turns out that less than a month after I’d gone to my team and asked for help, I got traded away to another city.

Honestly. (...)

By the time I finally started getting help for everything I had been doing to try and ease the pain, I was already in my 30s. And at that point I was basically drinking and self-medicating and doing drugs nonstop. I stayed away from heroin, but other than that everything else was pretty much fair game.

I was a zombie, man. It’s not easy to admit that, but I really, really was.

And anytime I’d drink, I would almost always move on to drugs.

At the tail end of my career, I really, genuinely thought that I was going to die one night during the season. It’s hard to talk about, for sure, but … I had stayed up late doing an obscene amount of coke and things just got out of control. After a while my heart felt like it was going to burst out of my chest. I couldn’t get it to slow down. Nothing I did worked. It was probably the most scared I’ve ever been in my life.

I was playing for the Flyers at the time, and we had a morning skate I needed to be at in a few hours. So it was either go to the hospital and check in without anyone noticing or getting word about what had gone down, and then somehow get my ass to practice in the morning … or tell the trainer what had happened and try to make a change.

Basically, it was: Keep putting on an act, or come clean.

You’d maybe think it would have been an easy decision. Like, You were about to die. Get help. What the fuck? Stop living like this. Immediately. But I can tell you that, at the time, it was one of the hardest decisions I’d ever had to make. I agonized over it. Because I knew if I told the trainer, I was going to get in a ton of trouble.

But you know what, though? I fucking told the trainer.

Somehow I landed on the right call. And that was absolutely huge for me.

The Flyers and Paul Holmgren, who was the GM in Philly at the time, didn’t judge me or make me feel like an outcast when they found out. They sent me to rehab and pledged their support. They looked out for me. Even though I hadn’t been looking out for myself.

And to this day, I honestly believe Paul saved my life back then.

If I had been somewhere else, and they had just traded me away … I’d probably be dead.

Actually, there’s no doubt about it. I wouldn’t be sitting here today writing this thing if that had happened. That’s for sure.

I’d be six feet under.

The problem for me since then has been that rehab just hasn’t worked.

When the Flyers sent me, just a few months before I retired, I got off the painkillers and stopped using drugs. And eventually I even stopped drinking, too. But things just kept getting worse and worse for me mentally. A year and a half after I got sober, I was experiencing depression and anxiety worse than anything I’d ever felt before. I was sad all the time, and I’d constantly be on edge — sweating, shaking, nervous, having panic attacks. I’d call family members or friends and just be sobbing for no reason, and making no sense because I was in full-on panic mode. Then, on the day to day, it was almost like a constant state of having the wind knocked out of you. Like walking around your whole life unable to breathe.

I was completely clean, and looking healthy again, and at the same time … I was such a mess on the inside that I couldn’t even leave the house.

Since then, I’ve been to two more drug-and-alcohol rehab places. The NHL paid for me to do that, and I commend the league for it. But … I just never got any relief. Those places work for lots of people, and I think that’s great. But with me, I could only get so far with them because they just never really addressed the root problems. They just dealt with what was apparent on the surface.

In some ways, I guess that’s not too surprising, because the types of issues I’ve been dealing with … I don’t know, they’re just not as obvious as some other medical problems. My ankle isn’t broken, you know? There’s no cast to sign. I can’t show my injury to you. And lots of times it’s hard to even describe it. So I can’t really even prove it to you, either.

Depression, anxiety, mental-health issues … that sort of stuff can seem invisible sometimes to those on the outside, but it’s worse than anything else I’ve ever dealt with. It can make you unbelievably sad to the point where you’re crying your eyes out. And then, the next day, you’ll just be so angry that you’re almost out of control. With me, there have been times when the anger has been so bad that I legitimately worried that I might hurt someone, or that I’d injure myself. But when family members, people I truly love and care about, would ask me what was going on, or why I was so mad … I wouldn’t really be able to tell them. I honestly wasn’t even sure.

And, like, AA meetings are supposed to somehow fix something so deep-seated?

That’s fantasyland stuff, right there.

But any time I reached out to the league, or to the players’ union doctors about mental-health issues, that’s all I’d hear. They basically just told me that I was an addict, and that I should sign up for some self-help groups — and that what I actually, really needed was to go do 90 meetings in 90 days.

Over time it became increasingly frustrating, because I tried everything they told me to do … and the depression and anxiety hadn’t gone away.

It’s just not as simple as going to some meetings. You know what I mean?

The kind of stuff I’m talking about here — the things that eventually became too much for those guys I played with who are no longer with us today — just runs so much deeper than some fucking self-help meetings at the neighborhood YMCA.

by Nick Boynton, The Players’ Tribune |  Read more:
Image: Christopher Szagola/Icon Sportswire

Saturday, August 4, 2018

Capitalism Killed Our Climate Momentum, Not “Human Nature”

This Sunday, the entire New York Times Magazine will be composed of just one article on a single subject: the failure to confront the global climate crisis in the 1980s, a time when the science was settled and the politics seemed to align. Written by Nathaniel Rich, this work of history is filled with insider revelations about roads not taken that, on several occasions, made me swear out loud. And lest there be any doubt that the implications of these decisions will be etched in geologic time, Rich’s words are punctuated with full-page aerial photographs by George Steinmetz that wrenchingly document the rapid unraveling of planetary systems, from the rushing water where Greenland ice used to be to massive algae blooms in China’s third largest lake.

The novella-length piece represents the kind of media commitment that the climate crisis has long deserved but almost never received. We have all heard the various excuses for why the small matter of despoiling our only home just doesn’t cut it as an urgent news story: “Climate change is too far off in the future”; “It’s inappropriate to talk about politics when people are losing their lives to hurricanes and fires”; “Journalists follow the news, they don’t make it — and politicians aren’t talking about climate change”; and of course: “Every time we try, it’s a ratings killer.”

None of the excuses can mask the dereliction of duty. It has always been possible for major media outlets to decide, all on their own, that planetary destabilization is a huge news story, very likely the most consequential of our time. They always had the capacity to harness the skills of their reporters and photographers to connect abstract science to lived extreme weather events. And if they did so consistently, it would lessen the need for journalists to get ahead of politics because the more informed the public is about both the threat and the tangible solutions, the more they push their elected representatives to take bold action.

Which is why it was so exciting to see the Times throw the full force of its editorial machine behind Rich’s opus — teasing it with a promotional video, kicking it off with a live event at the Times Center, and pairing it with educational materials.

That’s also why it is so enraging that the piece is spectacularly wrong in its central thesis.

According to Rich, between the years of 1979 and 1989, the basic science of climate change was understood and accepted, the partisan divide over the issue had yet to cleave, the fossil fuel companies hadn’t started their misinformation campaign in earnest, and there was a great deal of global political momentum toward a bold and binding international emissions-reduction agreement. Writing of the key period at the end of the 1980s, Rich says, “The conditions for success could not have been more favorable.”

And yet we blew it — “we” being humans, who apparently are just too shortsighted to safeguard our future. Just in case we missed the point of who and what is to blame for the fact that we are now “losing earth,” Rich’s answer is presented in a full-page callout: “All the facts were known, and nothing stood in our way. Nothing, that is, except ourselves.”

Yep, you and me. Not, according to Rich, the fossil fuel companies who sat in on every major policy meeting described in the piece. (Imagine tobacco executives being repeatedly invited by the U.S. government to come up with policies to ban smoking. When those meetings failed to yield anything substantive, would we conclude that the reason is that humans just want to die? Might we perhaps determine instead that the political system is corrupt and busted?)

This misreading has been pointed out by many climate scientists and historians since the online version of the piece dropped on Wednesday. Others have remarked on the maddening invocations of “human nature” and the use of the royal “we” to describe a screamingly homogenous group of U.S. power players. Throughout Rich’s accounting, we hear nothing from those political leaders in the Global South who were demanding binding action in this key period and after, somehow able to care about future generations despite being human. The voices of women, meanwhile, are almost as rare in Rich’s text as sightings of the endangered ivory-billed woodpecker — and when we ladies do appear, it is mainly as long-suffering wives of tragically heroic men.

All of these flaws have been well covered, so I won’t rehash them here. My focus is the central premise of the piece: that the end of the 1980s presented conditions that “could not have been more favorable” to bold climate action. On the contrary, one could scarcely imagine a more inopportune moment in human evolution for our species to come face to face with the hard truth that the conveniences of modern consumer capitalism were steadily eroding the habitability of the planet. Why? Because the late ’80s was the absolute zenith of the neoliberal crusade, a moment of peak ideological ascendancy for the economic and social project that deliberately set out to vilify collective action in the name of liberating “free markets” in every aspect of life. Yet Rich makes no mention of this parallel upheaval in economic and political thought.

When I delved into this same climate change history some years ago, I concluded, as Rich does, that the key juncture when world momentum was building toward a tough, science-based global agreement was 1988. That was when James Hansen, then director of NASA’s Goddard Institute for Space Studies, testified before Congress that he had “99 percent confidence” in “a real warming trend” linked to human activity. Later that same month, hundreds of scientists and policymakers held the historic World Conference on the Changing Atmosphere in Toronto, where the first emission reduction targets were discussed. By the end of that same year, in November 1988, the United Nations’ Intergovernmental Panel on Climate Change, the premier scientific body advising governments on the climate threat, held its first session.

But climate change wasn’t just a concern for politicians and wonks — it was watercooler stuff, so much so that when the editors of Time magazine announced their 1988 “Man of the Year,” they went for “Planet of the Year: Endangered Earth.” The cover featured an image of the globe held together with twine, the sun setting ominously in the background. “No single individual, no event, no movement captured imaginations or dominated headlines more,” journalist Thomas Sancton explained, “than the clump of rock and soil and water and air that is our common home.”

(Interestingly, unlike Rich, Sancton didn’t blame “human nature” for the planetary mugging. He went deeper, tracing it to the misuse of the Judeo-Christian concept of “dominion” over nature and the fact that it supplanted the pre-Christian idea that “the earth was seen as a mother, a fertile giver of life. Nature — the soil, forest, sea — was endowed with divinity, and mortals were subordinate to it.”)

When I surveyed the climate news from this period, it really did seem like a profound shift was within grasp — and then, tragically, it all slipped away, with the U.S. walking out of international negotiations and the rest of the world settling for nonbinding agreements that relied on dodgy “market mechanisms” like carbon trading and offsets. So it really is worth asking, as Rich does: What the hell happened? What interrupted the urgency and determination that was emanating from all these elite establishments simultaneously by the end of the ’80s?

Rich concludes, while offering no social or scientific evidence, that something called “human nature” kicked in and messed everything up. “Human beings,” he writes, “whether in global organizations, democracies, industries, political parties or as individuals, are incapable of sacrificing present convenience to forestall a penalty imposed on future generations.” It seems we are wired to “obsess over the present, worry about the medium term and cast the long term out of our minds, as we might spit out a poison.”

When I looked at the same period, I came to a very different conclusion: that what at first seemed like our best shot at lifesaving climate action had in retrospect suffered from an epic case of historical bad timing. Because what becomes clear when you look back at this juncture is that just as governments were getting together to get serious about reining in the fossil fuel sector, the global neoliberal revolution went supernova, and that project of economic and social reengineering clashed with the imperatives of both climate science and corporate regulation at every turn.

The failure to make even a passing reference to this other global trend that was unfolding in the late ’80s represents an unfathomably large blind spot in Rich’s piece. After all, the primary benefit of returning to a period in the not-too-distant past as a journalist is that you are able to see trends and patterns that were not yet visible to people living through those tumultuous events in real time. The climate community in 1988, for instance, had no way of knowing that they were on the cusp of the convulsive neoliberal revolution that would remake every major economy on the planet.

But we know. And one thing that becomes very clear when you look back on the late ’80s is that, far from offering “conditions for success [that] could not have been more favorable,” 1988-89 was the worst possible moment for humanity to decide that it was going to get serious about putting planetary health ahead of profits.

by Naomi Klein, The Intercept |  Read more:
Image: Saul Loeb/AFP/Getty Images
[ed. Part 1: here.]

A Candid Conversation With Vince Gilligan on ‘Better Call Saul’

Perhaps the most surprising thing about Better Call Saul – other than the fact that many Breaking Bad fans have said they prefer the spinoff, and even the ones who disagree don’t find that a ludicrous notion – is how it’s become beloved for the exact opposite reason that its creators expected it to be.

Vince Gilligan and Peter Gould — and for that matter, all of us at home — assumed the fun of the prequel would be in spending more time with Bob Odenkirk in the role of Walter White’s shyster lawyer Saul Goodman; it was a way for the show to fill in blanks in the Heisenberg-verse. Instead, most of what makes the show great involves the man he used to be: slick but largely well-meaning lawyer Jimmy McGill, who has the depth and emotional resonance that Saul lacks. The longer we spend with this version of the character – which he still is at the start of Season Four, premiering on August 6th – the less we want to see of Goodman or even Walt himself.

I recently spoke with Gilligan about those early days when he and Gould — who became sole showrunner this year while Gilligan largely focused on developing other ideas — had to wonder if they’d made a terrible mistake. He also talked about the painful process of figuring out how Saul could work, the gradual insertion of other Breaking Bad characters into the spinoff and a lot more. (With occasional kibitzing from Gould and some other writers, since Gilligan will be the first to tell you that he has a terrible memory for detail.)

It took you and Peter a while to figure out what the show was. At what point did you say to yourselves, “Wait a minute, this is actually good? This isn’t just a folly that we’ve done, to keep everyone together?”

We would never put anything on that we had worked less than 100 percent on. Having said that, I didn’t know it would come together. I knew it would be the product of a lot of hard work and a lot of talent, in front of and behind the camera. I thought at worst, we would create something that was admirable and a perfectly legitimate attempt at a show. But I didn’t realize it would be as successful as it is in terms of a fully jelled world, a full totality of creation … [one] that is as satisfying as it is.

When we first started concocting the idea of doing a spinoff, we literally thought it’d be a half-hour show. It’d be something akin to Dr. Katz, where it’s basically Saul Goodman in his crazy office with the Styrofoam columns and he’s visited every week by a different stand-up comic. It was basically, I guess, legal problems. We talked about that for a day or two. And then Peter Gould and I realized, we don’t know anything about the half-hour idiom. And then we thought, okay, well, so it’s an hour … but it’s going to be a really funny hour. I said, “Breaking Bad is about 25-percent humor, 75-percent drama and maybe this will be the reverse of that.” Well this thing, especially in Season Four, is every bit as dramatic as Breaking Bad ever was. I just didn’t see any of that coming. I didn’t know how good it would all be. I really didn’t.

It’s amazing how hard it was to get it right.

The question we should’ve asked ourselves from the beginning: “Is Saul Goodman an interesting enough character to build a show around?” And the truth is, we came to the conclusion, after we already had the deal in hand [and] AMC and Sony had already put up the money, “I don’t think we have a show here, because I don’t think we have a character who could support a show.” He’s a great flavoring, he’s a wonderful saffron that you sprinkle on your risotto. But you don’t want to eat a bowl full of saffron, you gotta have the rice, you know? You gotta have the substance.

And it dawned on us that this character seemed so comfortable in his own skin. Peter and I do not possess those kinds of personalities. We thought, “Regardless of how much comedy is in it, how do you find drama in a guy who’s basically okay with himself?” So then we thought, “Well, who was he before he was Saul Goodman?”

Because the show is named Better Call Saul, we thought that we had to get to this guy quick or else people will accuse us of false advertising — a bait and switch. Then lo and behold, season after season went by and it dawned on us, we don’t want to get to Saul Goodman … and that’s the tragedy.

If we had thought all of this from the get-go, that would have made us very smart. But as it turns out, we’re very plodding and dumb, and it takes forever to figure this stuff out. Which is why we’re perfectly matched for a TV schedule versus a movie schedule, because you got to get it right the first time when you’re writing a movie. It took us forever to get it right. (...)

Going in, did you expect to be featuring as many Breaking Bad characters as you have? Did you assume at some point we would get to Gus, for instance?

We always assumed we’d get to Gus — I think we thought we might get to him quicker. Just speaking for myself and no one else: I thought we’d have gotten to Walt or Jesse by this point, as sort of the first fan of both shows. I’m greedy to see all of these characters. I thought we would see plenty of Breaking Bad characters. I didn’t know we’d dig as deep for some of them, as we have.

We’ve gotten a great deal of satisfaction from seeing, for instance, that the real estate agent who shows Mike and [his] daughter the new house was the real estate agent in Breaking Bad who had the run-in with Marie. Little shout-outs like that, we love for two reasons. We love those Easter eggs for the really astute students of Breaking Bad. And we also know that that young woman was a wonderful actress and so much fun to work with on Breaking Bad. We love when someone did a great job for us on a previous show, to pay ’em back by having them on the new show. Which is not to say that we’ll get every single one of those folks, even though we’d love to. There’s probably plenty we’ll never get to, just for lack of time, lack of episodes … but it’s fun to be able to do that.

When you say you expected to get to Walt and Jesse by now, do you mean the Rosencrantz and Guildenstern Are Dead approach to the Breaking Bad years? Or just that you would have seen what they were up to in this time period?

I thought we would have touched base with them already. But having said that, it makes perfect sense that we haven’t yet touched base with them. Just being in the writers’ room, you realize that there’s a lot to do before that happens — if and when it does happen. I don’t even want to promise that it will. It’s like what I was saying a minute ago: You play the cards that you’ve dealt yourself. There’s no point in cheating in solitaire. That’s a weird analogy, but ultimately, a pretty good one. You can cheat in solitaire, but there’s nothing satisfying about cheating in solitaire.

And the analogy holds when you get to the writers’ room with Better Call Saul. You can change the character’s history, you can have it be that Walter White never comes into it, but it wouldn’t ultimately be satisfying. And when you play the cards out correctly and you see that it’s time to bring Walter White in, for instance, it’s a wonderfully satisfying moment. If you force it, if you cheat the cards, if you bring them in just because folks are demanding it or expecting it, and you kind of bullshit the character’s way into the show, it’s just not going to satisfy anybody. I believe that in my heart.

Has the show evolved and become good enough to the point where it doesn’t need Walter White?

Maybe. I mean, it would be satisfying to see Walt. Not to see him shoehorned in — that would not satisfy me. But to see the character properly arrive at a nexus point with Better Call Saul. That’d be wonderful … [though] it’s very possible it won’t happen if it doesn’t feel properly arrived at. And yes, I believe that Better Call Saul is so much its own creation now, its own thing. It absolutely stands on its own.

We’re enjoying this overlap between Breaking Bad and Better Call Saul that we’re continuing to arrive at. But there’s a version of the show where you don’t see it as Breaking Bad stuff at all. Where, for instance, we leave out Mike Ehrmantraut, because he barely ever interacts with Jimmy McGill anymore. We could just stick with the Jimmy McGill story: him, Kim Wexler, Howard Hamlin, all of that stuff. We could have a perfectly satisfying show. But we feel like we’re giving the fans two shows for the price of one. It really does feel like two TV shows in one now.

When Breaking Bad was coming to an end, this was already in the works to some degree. But was there a part of you thinking, “Alright, this show is ending. This is the best thing I’ve ever done, it’s the best thing I will ever do, my career has peaked. What do I do now?”

That’s exactly why I did this, because I was thinking those thoughts exactly as you just put it: “This is the best thing I’m ever going to do. This is the height of my creative life, my career, it’s never going to get any better than Breaking Bad.” And that’s why I wanted to get right into something else, because I was still only 48, 49 years old, I didn’t want to stop working. I knew in my heart if I took six months off, because everyone said I needed a vacation, then six months would go by, the world would’ve moved on — and worst of all, I would’ve been paralyzed creatively. I would have said to myself, “Okay, time to do something else now. What is it? What’s the next big thing?” And then I would just freeze up, because I would come up with an idea, thinking, “Oh, that’s fun.” And then the editing portion of my brain, which I’ve given too loud a voice over the years, would say: “It’s not to the level of Breaking Bad.”

The best thing I could’ve done personally was to just jump headlong into a show that, admittedly, we didn’t fully understand. Once we really got into it, we thought, “Oh man, we got nothing here.” And then luckily, we just kept banging at it until we figured it out, with the help of a lot of great writers. But the smartest thing I ever did was to keep moving.

And Breaking Bad … the beauty of it is, some people are always going to love Breaking Bad more. But I run into people every day now who say Better Call Saul is their favorite of the two. I love hearing that. I don’t know where I fall personally on that scale, that continuum — I try not to choose. I don’t have children, but this is as close as I’ll ever get to having children. I find it hard to choose between them. But I’m just glad they both exist.

by Alan Sepinwall, Rolling Stone |  Read more:
Image: Nicole Wilder/AMC/Sony Pictures Television

Friday, August 3, 2018

Tom Petty


Tom Petty
via:
[ed. Straight into Darkness. See also: 400 Days (documentary - highly recommended)]

There was a little girl, I used to know her
I still think about her, time to time
There was a moment when I really loved her
Then one day the feeling just died

We went straight into darkness
Out over the line
Yeah straight into darkness
Straight into night

I remember flying out to London
I remember the feeling at the time
Out the window of the 747
Man there was nothing, only black sky

We went straight into darkness
Out over the line
Yeah straight into darkness
Straight into night

Oh give it up to me I need it
Girl, I know a good thing when I see it
Baby wrong or right I mean it
I don't believe the good times are all over
I don't believe the thrill is all gone
Real love is a man's salvation
The weak ones fall the strong carry on

Straight into darkness
Out over the line
Yeah straight into darkness
Straight into night


Ernst Haas
via:

This Is My Nerf Blaster, This Is My Gun

One late spring day in April, several years ago—one of the last breezy afternoons before the suffocating summer humidity would descend on the rolling green hills of central Virginia—I went to visit friends in Charlottesville. I was on a break from Gaza at the time, where I’d been living for a year and a half while working on a security project for an NGO and reporting on the Israeli-Palestinian conflict for the Virginia Quarterly Review. I’d grown used to the simmering sounds of war; I would hear the thump of Hamas and Islamic Jihad mortars during my afternoon runs and would wake to my windows rattling as Israeli gunboats fired at Palestinian fishermen. Still, I remained hypervigilant—ready to fight, or flee, at any second.

As I approached my friends’ doorstep, I was suddenly caught in an ambush of foam darts, and I looked down to see their seven-year-old son, Jack, grinning behind an azalea bush, aiming his Nerf blaster at my chest.

“Gotcha!” Jack shouted, before sprinting off behind the house in a flash of spindly limbs and towheaded glee.

Jack’s ambushes became a ritual we’d reenact every time I visited. Jack’s first blaster was a Nite Finder, a pistol that fired single foam darts with rubber tips, and had a mock laser sight mounted in front of the trigger assembly, mimicking the emerging fashion in tactical handguns. It was made of gray-and-yellow molded plastic, and though the blaster’s grip bore some resemblance to the sweep of a real semiautomatic pistol grip, it would’ve been more at home on the set of Lost in Space than Die Hard. A few years later, when Jack and his family moved to Nebraska, he got a Nerf rifle called the N-Strike Alpha Trooper CS-18, which featured a detachable stock and a magazine that held 18 foam darts. It had a charging handle on the barrel like a pump shotgun, allowing for rapid fire and a max range of 35 feet—which meant Jack could hide around the side of the house and get me coming down the driveway.

Last year, when Jack was 13 and I was 35, I had the honor of teaching him the fundamentals of firearms safety at a range near my home in Bozeman, Montana, using the same Marlin .22 rifle I’ve had since my 10th or 11th birthday. I remember when my dad and I first brought that rifle home: Running my hands over the smooth, dark-stained wood stock, and the fascination I felt whenever I slid it out of its khaki-colored soft case, the delightful clack of the bolt sliding home and locking down. There was no kick, and wearing earplugs, the shots sounded like bursts from an air compressor—but all the same, the rifle was not a toy. When I put the stock to my shoulder and the scope in front of my eye, I immediately felt more grown-up. Jack clearly did as well, treating the gun with respect and seriousness.

I spent years working as a war correspondent, and for a good portion of the past year I have been reporting on the National Rifle Association’s fear-mongering, gun culture, and the crisis of gun violence in America. Until recently, I had never read too far into our Nerf play, mine and Jack’s, and I had never heard people link Nerf blasters to real violence the way they did with violent video games and movies. But in an era of mass shootings, I’ve started to reconsider the banality of Nerf blasters and other toy guns.

Over the past two decades, Nerf has upped the ante on the power and functionality of its blasters. One model shoots foam balls up to 100 feet per second—fast enough to sting bare skin. Some models, such as the “Doomlands” series, are cartoonish in their appearance, taking the concept of mega firepower to gonzo levels. Others, like the N-Strike models, have become increasingly streamlined, drawing closer to the souped-up tactical firearms that now dominate the real gun market, namely the endless variations on the popular AR-15.

Do toys like these play any part in the fetishizing of guns? Do they blur the line between fantasy and reality, helping to inspire mass shooters like Nikolas Cruz and Dylann Roof? Or are they just good, clean, foam fun? I don’t know if it’s possible to answer those questions, but I know one thing unequivocally: if the kinds of blasters that Nerf offers today had existed when I was little, I would have been completely, hopelessly enthralled.

Nerf’s deep dive into imaginative gunplay began humbly in 1989, when the company introduced Blast-a-Ball, a pair of simple plastic tubes with plunger handles on one end that could launch foam balls up to 30 feet. Nerf called it the “shoot ’em, dodge ’em, catch ’em” game, and, from the very beginning, it was clear that Nerf did not intend for its new toy to be enjoyed alone—each box came with two blasters.

I was born in 1981, and I remember playing with those original ball blasters, but the Nerf products that really took my suburban Washington, D.C., neighborhood by storm were the company’s foam footballs. The Turbo was about four-fifths the size of a leather pigskin, which made it easy to throw spirals. In 1991, the same year that Nerf introduced the Vortex—a whistling football with rocket fins—the company also launched the Bow ‘n’ Arrow, a blaster in the shape of a bow that fired large foam missiles. Nerf dominated the birthday-party scene that year. Now, almost 30 years later, Nerf balls appear to have been overshadowed by its toy weapons.

Since their debut in the late 1980s, Nerf blasters have evolved into sophisticated toys capable of rapid fire, some models sporting what are known (on real guns) as high-capacity magazines, each holding a dozen rounds or more—in some cases, as many as 200. Nerf has sometimes looked to historical gun models for inspiration, like the Nerf Zombie Strike SlingFire Blaster, which uses the lever-action reload of the .30-30 Winchester Model 94 rifle, with dashes of fluorescent green and orange to diminish its verisimilitude. The overall aesthetic of Nerf’s blaster lineup remains playful and sci-fi, with wild color schemes and plenty of high-visibility orange, especially on the business end of the barrels. But anyone with a remotely trained eye can see that Nerf’s newer models are edging closer to the features of what are commonly known as assault weapons.

The expiration of the 1994 Assault Weapons Ban in 2004—along with a 2005 law that protected firearms manufacturers from lawsuits—contributed to a period of furious growth in the firearms market. Sales of handguns more than quadrupled between 1999 and 2016 (spiking in 2013 after the Sandy Hook Elementary School shooting, in anticipation of incoming gun-control legislation). Firearms imports into the United States also increased fivefold. After ten years of restrictions, manufacturers were now free to market a seemingly limitless array of military-style semiautomatic rifles and accessories, benefiting from the free advertising of the wars in Iraq and Afghanistan. At gun shows and in a proliferating number of firearms publications and enthusiast websites, hunting rifles and shotguns took a backseat to variations on the AR-15, the AK-47, and the Bullpup, a close-quarters combat rifle favored by the Israeli and British militaries.

Nerf appears to have taken notice of both the marketing and design tactics of the firearms industry over the years. The most obvious parallel between Nerf’s newer blasters and their deadly cousins is their focus on modularity. A seemingly infinite spectrum of accessories has made semiautomatic “black rifles” such as the AR-15 a hit among enthusiasts of real firearms, spurring enormous growth in aftermarket products. Similarly, recent upgrades to Nerf products have allowed for the reconfiguration of the company’s rifle-style blasters into pistols, and the addition of the Picatinny rail offers users the opportunity to mount accessories such as flashlights, bipods, and red-dot sights.

The company’s Modulus series includes a lineup of accessories that are obviously toy versions of the real add-ons beloved by black-rifle enthusiasts, including foregrips that mount under barrels, faux laser sights, collapsible stocks, and long-range barrel extenders. Certain battery-operated models are even capable of automatic fire, and some kids have figured out how to “bump fire” their nonautomatic models the same way you can bump fire a semiautomatic rifle: by hooking your finger around the trigger and moving the entire rifle back and forth.

Though there were lots of toy guns on the market that looked real when I was a kid, the opposite was not true: the only real guns I ever saw or handled were unmistakably not toys. They were made of black or polished steel and smooth, stained wood. (If they had plastic on them at all, it was black.) But just as Nerf seems to have co-opted the infinite accessorizing possibilities of the actual firearms industry, owners of AR-15s are sending their guns to third-party customizers to incorporate more playful features into their design: a gun can be anodized in virtually any color, or have a custom wrap applied featuring Star Wars and Marvel themes.

Growing up, my friends and I had toy-gun arsenals that would’ve equipped us for any conflict from the Revolutionary War to Vietnam: long-barreled muskets purchased on field trips to Colonial Williamsburg, chrome six-shooters, cork popguns and rubber-band shooters, and battery-operated squirt guns that looked like exact replicas of the TEC-9 and MAC-10. A company called Zap It sold guns shaped like miniature Uzis, which shot blood-colored ink that would stain clothes briefly, then quickly fade and disappear. (An ad from the late 1980s shows a kid popping out from behind a door to shoot the mailman. A few seconds later, his dad shoots him from behind the cover of the morning paper.)

The first toy gun I remember playing with was a chrome cap gun in the shape of a .45 pistol. I was so young I don’t even remember holding it for the first time, but it stayed in my toy bin well into my middle-school years. It had been my dad’s when he was a kid in the 1950s and had plastic grips with real stippling and fired caps from a roll, which meant there was real smoke. It smelled musty and oxidized, like everything else that came out of my Nana’s basement in Missouri—a smell I associated with a grandfather and great-uncles I had never known, who’d fought in the trenches of WWI and on the seas of the North Atlantic, in the Pacific, and across Europe in WWII.

I knew boys who weren’t allowed to play with toy guns at all. Our grandparents were part of the Greatest Generation, survivors of epic struggles that earned them awe and reverence bordering on fear. But we were the children of the Baby Boomers—a generation sent to fight in Vietnam, a confusing conflict with no clear objectives that killed and maimed young draftees by the tens of thousands. Many young people came out of the 1960s committed to breaking the cycle of macho violence by emphasizing nonviolent play at home. When they had kids of their own—my generation, somewhere between Gen Xers and millennials—they forbade backyard war games and the props they thought were necessary to play them.

These attempts were futile. Whenever I was at a friend’s house who wasn’t allowed to have toy guns, we used our fingers for pistols and sticks for rifles. We made machine-gun noises and explosions with our mouths, imagining bullets kicking up dust around the enemy fortifications, smoke and splintered timber rising skyward in theatrical columns of smoke.

by Elliott Woods, Topic | Read more:
Image: Greg Marinovich

California Burning

On the northwestern edge of Los Angeles, where I grew up, the wildfires came in late summer. We lived in a new subdivision, and behind our house were the hills, golden and parched. We would hose down the wood-shingled roof as fire crews bivouacked in our street. Our neighborhood never burned, but others did. In the Bel Air fire of 1961, nearly five hundred homes burned, including those of Burt Lancaster and Zsa Zsa Gabor. We were all living in the “wildland-urban interface,” as it is now called. More subdivisions were built, farther out, and for my family the wildfire threat receded.

Tens of millions of Americans live in that fire-prone interface today—the number keeps growing—and the wildfire threat has become, for a number of political and environmental reasons, immensely more serious. In LA, fire season now stretches into December, as grimly demonstrated by the wildfires that burned across Southern California in late 2017, including the Thomas Fire, in Santa Barbara County, the largest in the state’s modern history. Nationally, fire seasons are on average seventy-eight days longer than they were in 1970, according to the US Forest Service. Wildfires burn twice as many acres as they did thirty years ago. “Of the ten years with the largest amount of acreage burned in the United States,” Edward Struzik notes in Firestorm: How Wildfire Will Shape Our Future, “nine have occurred since 2000.” Individual fires, meanwhile, are bigger, hotter, faster, more expensive and difficult to fight, and more destructive than ever before. We have entered the era of the megafire—defined as a wildfire that burns more than 100,000 acres.

In early July 2018, there were twenty-nine large uncontained fires burning across the United States. “We shouldn’t be seeing this type of fire behavior this early in the year,” Chris Anthony, a division chief at the California Department of Forestry and Fire Protection, told The New York Times. It has been an unusually dry winter and spring in much of the West, however, and by the end of June three times as much land had already burned in California as burned in the first half of 2017, which was the state’s worst fire year ever. On July 7, my childhood suburb, Woodland Hills, was 117 degrees. On the UCLA campus, it was 111 degrees. Wildfires broke out in San Diego and up near the Oregon border, where a major blaze closed Interstate 5 and killed one civilian. The governor, Jerry Brown, has declared yet another state of emergency in Santa Barbara County.

How did this happen? One part of the story begins with a 1910 wildfire, known as the Big Burn, that blackened three million acres in Idaho, Montana, and Washington and killed eighty-seven people, most of them firefighters. Horror stories from the Big Burn seized the national imagination, and Theodore Roosevelt, wearing his conservationist’s hat, used the catastrophe to promote the Forest Service, which was then new and already besieged by business interests opposed to public management of valuable woodlands. The Forest Service was suddenly, it seemed, a band of heroic firefighters. Its budget and mission required expansion to prevent another inferno.

The Forest Service, no longer just a land steward, became the federal fire department for the nation’s wildlands. Its policy was total suppression of fires—what became known as the 10 AM rule. Any reported fire would be put out by 10 AM the next day, if possible. Some experienced foresters saw problems with this policy. It spoke soothingly to public fears, but periodic lightning-strike fires are an important feature of many ecosystems, particularly in the American West. Some “light burning,” they suggested, would at least be needed to prevent major fires. William Greeley, the chief of the Forest Service in the 1920s, dismissed this idea as “Paiute forestry.”

But Native Americans had used seasonal burning for many purposes, including hunting, clearing trails, managing crops, stimulating new plant growth, and fireproofing areas around their settlements. The North American “wilderness” encountered by white explorers and early settlers was in many cases already a heavily managed, deliberately diversified landscape. The total suppression policy of the Forest Service and its allies (the National Park Service, for instance) was exceptionally successful, reducing burned acreage by 90 percent, and thus remaking the landscape again—creating what Paul Hessburg, a research ecologist at the Forest Service, calls an “epidemic of trees.”

Preserving trees was not, however, the goal of the Forest Service, which worked closely with timber companies to clear-cut enormous swaths of old-growth forest. (Greeley, when he left public service, joined the timber barons.) The idea was to harvest the old trees and replace them with more efficiently managed and profitable forests. This created a dramatically more flammable landscape. Brush and woodland understory were no longer being cleared by periodic wildfires, and the trees in second-growth forest lacked the thick, fire-adapted bark of their old-growth predecessors. As Stephen Pyne, the foremost American fire historian, puts it, fire could “no longer do the ecological work required.” Fire needs fuel, and fire suppression was producing an unprecedented amount of wildfire fuel.

Climate change, meanwhile, has brought longer, hotter summers and a series of devastating droughts, priming landscapes to burn. Tree-killing insects such as the mountain pine beetle thrive in droughts and closely packed forests. The most recent outbreak of bark-beetle infestation, the largest ever recorded, has destroyed billions of trees in fourteen western states and much of western Canada. Dead trees make fine kindling for a megafire.

Invasive species also contribute. The sagebrush plains of the Great Basin, which spreads across six states in the Intermountain West, are being transformed by cheatgrass (Bromus tectorum), a weed that arrived in contaminated grain seed from Eurasia in the nineteenth century. Cheatgrass is highly flammable, grows rapidly, and is nearly indestructible. It has a fire return interval—the typical time between naturally occurring fires—of less than five years. Sagebrush, which is slow to reestablish itself after a fire, is unable to compete. Cheatgrass, with its ferocious fire cycle of burning and quick regeneration, now infests fifty million acres of the sagebrush steppe. Farther south, cheatgrass and other invasive weeds are threatening the towering saguaro cactus and, in California, the Joshua tree.

Nonnative species can also be a fire risk when they are deliberately introduced. Portugal has been tormented by wildfires, including an inferno last summer that killed more than sixty people, partly because of the flammability of eucalyptus, which is native to Australia and has become the mainstay of the national wood industry, transforming the Portuguese countryside, according to an environmental engineer who spoke to The New York Times, “from a pretty diverse forest into a big eucalyptus monoculture.”

In the United States, exurban and rural property development in the wildland-urban interface has been, perhaps, the final straw—or at least another lighted match tossed on the pile. Most wildfires that threaten or damage communities are caused by humans. Campfires, barbecues, sparks from chainsaws, lawnmowers, power lines, cars, motorcycles, cigarettes—the modes of inadvertent ignition in a bone-dry landscape are effectively limitless. Let’s say nothing of arson. Houses and other structures become wildfire fuel, and vulnerable communities hugely complicate forest management and disaster planning. In his panoramic 2017 book Megafire, the journalist (and former firefighter) Michael Kodas observes pithily that “during the century in which the nation attempted to exclude fire from forests, they filled with homes.”

Starting around the 1960s, the Forest Service and its sister agencies, including the National Park Service, did eventually come to see some of the deep flaws in the policy of total fire suppression. The virtues of “prescribed burning”—deliberately set, carefully planned fires, usually in the late fall or early spring, meant to reduce the amount of fuel and the risk of wildfires—had become blindingly obvious. Still, prescribed burns were, and are, a hard sell. People don’t like to see forest fires or grass fires, particularly not anywhere near their homes. Downwind communities hate the smoke, quite understandably. Politicians lose their nerve.

On rare occasions, a prescribed burn escapes the control of firefighters, and those disasters tend to be remembered. The 2000 Cerro Grande Fire, in New Mexico, started out as a prescribed burn. It escaped, destroyed four hundred homes, and nearly burned down the Los Alamos nuclear research facility. Political support for prescribed burning took a heavy hit. Bruce Babbitt, then secretary of the interior, suspended all federal prescribed burning west of the 100th meridian, which basically meant the entire West.

For backcountry fires, the wisdom of “let it burn” also slowly became clear to forest managers. National parks started calling wildfires that didn’t threaten lives or structures “prescribed natural fires.” Firefighters might herd a blaze in the direction they wanted it to go, but would otherwise let it run its course. This enlightened policy hasn’t always survived political pressure either. In 1988, a drought year in the West, hundreds of wildfires erupted in Yellowstone National Park. President Ronald Reagan denounced the wait-and-see response of firefighters as “cockamamie.” His interior secretary, Donald Hodel, ordered the park’s officials to fight the fires.

“Prescribed natural fires” were abandoned, and as many as nine thousand firefighters fought the Yellowstone megafire, which burned for four months. John Melcher, a Montana senator, told The New York Times, “They’ll never go back to this policy. From now on the policy will be putting the fire out when they see the flames.” The Yellowstone effort cost $120 million ($250 million in 2018 dollars). Cool weather and autumn snow ultimately put out the fires. Surprisingly few animals perished, and the land soon began to regenerate. The “let burn” policy took somewhat longer to recover.

This alternation between firefighting and wildfire risk reduction continues. But since wildfires are getting steadily worse, stop-it-now firefighting always gets more funding. The Forest Service spent 16 percent of its budget on fire suppression in 1995. In 2015, it spent $2.6 billion—more than half of its budget. In Stephen Pyne’s formulation, we’re getting more bad fires and fewer good fires. As resources are drained from the forest management side, the buildup of dangerous, unhealthy forests continues, fueling more terrible fires, many of which will need to be fought.

Into this breach, a small army of private contractors has streamed, ready to feed firefighters, wash their clothes, and rent them, at prices sure to make a taxpayer’s eyes water, anything from helicopters to bulldozers to twelve-stall shower trailers. Politicians, never eager to tramp along on a smoky prescribed burn or wade into the woods with crews doing mechanical brush-thinning, are generally happier to be seen calling for military aircraft, say, to drop retardant on a raging blaze. Firefighters call these “air shows.” Aviation has an important part to play in certain types of fire suppression, usually early in the course of a wildfire, but commanders on the ground have learned that it can also be necessary to let the governor or congressperson appear to be riding to the rescue of his constituents with a fleet of C-130s, no matter how expensive and unhelpful they may be. In Southern California, Representative Duncan Hunter, whose district is near San Diego, is known as the region’s leading wildfire showboat.

In Wildfire, the journalist Heather Hansen embeds herself in an elite crew of wildland firefighters based in Boulder, Colorado. The crew, known as Station 8, primarily works in the wildland-urban interface of Boulder and the surrounding Rocky Mountain Front Range, but its members also travel to every corner of the country to help fight wildfires. It’s a reciprocal arrangement—when they need help on a fire at home, hotshots from those far-flung places will show up. Hansen learns and shares some fire science and fire history, filling in the background of the current crisis. She describes the crew’s punishing training and their powerful camaraderie, and recounts their stories of fires fought, disasters survived, lessons learned.

Then she goes out on a prescribed burn near the edge of Boulder. It’s an eighty-five-acre open ridge on city-owned property, a small project, but not far from thousands of homes. The crew’s preparation has been long and meticulous, including outreach to the neighborhood. “You’re a hero when you put out fire but not when you start one, especially if something goes wrong,” the fire operations manager, Brian Oliver, tells her:
Boulder is a very smart community, a lot of PhDs, and they understand what we’re trying to do with the fuels reduction and the thinning…. In theory they are very supportive and receptive, but then, “Wait, you’re going to light fire on purpose? That’s weird. We don’t want you to do that.” Or it’s, “We want you to do it but we don’t want to be impacted.” As soon as the smell of smoke gets in their window it’s, “What are you guys doing? I can’t believe this, you’re terrible. My curtains smell like smoke; who’s going to pay for my dry-cleaning?” Or “Yeah I support prescribed burns but this is the trail I run on every day. You’re ruining my workout.”
The prescribed burn on the ridge is tense, unexpectedly dramatic. Dozens of firefighters from surrounding stations show up to help. Station 8 has made a close study of the diurnal wind patterns on the ridge and kept its own weather station up on the site for two months, but the wind this morning is fluky. They do a test burn, then quickly shut it down when an unexpected scrap of south wind puffs. They try again, and the wind whips harder. Oliver orders it shut down again, and this time it takes several minutes of furious water-spraying and hacking at burning stumps to put out the small test fire. That’s it. The burn is a no-go. Maybe they’ll get this ridge next year. Fire crews in today’s drought-plagued West have to work with “laughably small burn windows,” Hansen says, referring to the periods in which prescribed burns can be safely attempted. The burn windows in Boulder amount to eleven days a year.

by William Finnegan, NYRB | Read more:
Image: Joe Sohm/Visions of America

This Japanese Shrine Has Been Torn Down And Rebuilt Every 20 Years for the Past Millennium

Every 20 years, locals tear down the Ise Jingu grand shrine in Mie Prefecture, Japan, only to rebuild it anew. They have been doing this for around 1,300 years. Some records indicate the Shinto shrine is up to 2,000 years old. The process of rebuilding the wooden structure every couple of decades has helped preserve the original architect’s design against the otherwise eroding effects of time. “Its secret isn’t heroic engineering or structural overkill, but rather cultural continuity,” writes the Long Now Foundation. (...)

Japan for Sustainability’s Junko Edahiro describes the history of the ceremony at length and reports on the upcoming festivities:
This is an important national event. Its underlying concept — that repeated rebuilding renders sanctuaries eternal — is unique in the world. 
The Sengu is such a large event that preparations take over eight years, four years alone just to prepare the timber.
Locals take part in a parade to transport the prepared wood along with white stones—two per person—which they place in sacred spots around the shrine. In addition to reinvigorating spiritual and community bonds, the tradition keeps Japanese artisan skills alive. The shrine’s visitor site describes this aspect of the Shikinen Sengu ceremony:
It also involves the wish that Japanese traditional culture should be transmitted to the next generation. The renewal of the buildings and of the treasures has been conducted in the same traditional way ever since the first Shikinen Sengu was performed 1,300 years ago. Scientific developments make manual technology obsolete in some fields. However, by performing the Shikinen Sengu, traditional technologies are preserved.
As Edahiro describes, local people often take part in the ceremony several times over the course of their lives. “I saw one elderly person who probably has experienced these events three or four times saying to young people who perhaps participated in the event as children last time, ‘I will leave these duties to you next time,’” she recalls. “I realized that the Sengu ceremony also plays a role as a ‘device’ to preserve the foundations of traditions that contribute to happiness in people’s lives.”

by Rachel Nuwer, Smithsonian | Read more:
Image: N Yotarou
[ed. The author László Krasznahorkai took part in the ritual rebuilding of a Shinto shrine. There he witnessed ancient tradition, and the toll it takes. For one disciple, “his job is to plane this piece of hinoki cypress, and he planes it all day. And the master comes at the end of the day and he throws it away. And he keeps on planing and planing it…until the master decides that it’s OK. That’s tradition. But there’s no nostalgia in that.” (The Economist). 

Also, from the JFS Junko Edahiro link (above): As many as 10,000 Japanese cypress trees are needed each time the Jingu sanctuaries of Ise are rebuilt. How have people secured so many Japanese cypress trees every 20 years?

The Jingu Shrine itself owns a large parcel of land, 5,500 hectares in extent, and over 90 percent of this land is covered in forest. This forest, called the "Misoma-yama," was created as a result of learning from experience in the past. Timber was formerly taken from this forest to use for the Sengu rebuilding ceremony as well as for firewood. In the Edo Period (1603-1867), about 7 to 9 million people (about the same number as in modern times) came to worship at Jingu Shrine every year. Firewood was needed for these pilgrims, who normally stayed near the site for several days. As a result, the local forest was increasingly exploited, and the timber resource became depleted.

During the Edo Period, the central government (the shogunate) designated a forest in the Kiso area owned by the Owari clan in today's Nagano Prefecture to supply timber to Jingu Shrine. However, toward the end of the Edo Period, this forest became Imperial property, and after World War II it was designated a national forest. Jingu Shrine is given priority in purchasing timber for the Sengu ceremony from this forest, but it is not the only buyer of this rather expensive timber.

Thus it became more and more difficult for Jingu Shrine to depend entirely on domestic resources for the Sengu rebuilding ceremony. This possibility was foreseen by shrine staff, who started taking action 90 years ago. Thinking that the shrine should have its own forest to provide timber for reconstruction, the shrine secretariat ("Jingu Shicho," part of the Interior Ministry) formed a forest management plan during the Taisho Period (1912-1926), and started planting trees. At the time, the nominal purpose of the project was said to be landscape conservation and enhancement of the water resource recharging function of the Isuzu River, but Japanese cypresses were also planted on southern slopes.

This afforestation plan encompassed a 200-year time-scale, and aimed to start semi-permanently supplying all the timber for the Sengu ceremony from Shrine-owned forest within 200 years. This plan made it possible to obtain one-fourth of the necessary timber for this year's Sengu ceremony from Shrine lands. This proportion will increase every 20 years. Although the remainder must be purchased from other domestic sources, shrine forests are expected to be able to provide all the timber for future reconstruction ceremonies earlier than originally planned.

As noted above, preparations for the Sengu take over eight years, four of them devoted to the timber alone. Logs are soaked in a lumber pond for two years after felling, a method known as "underwater drying" that leaches extraneous oil out of the logs. The logs are then stacked outside for a year to acclimatize them to the severities of the four seasons. It takes another year to saw them into shape, and finally to cover them with Japanese paper to keep them in good condition until the ceremony. This long curing process strengthens the timber, prevents it from warping or cracking, and prepares it to play its proper part in a ceremony whose central concept is the protection of life.]

Thursday, August 2, 2018

The “Next” Financial Crisis and Public Banking as the Response

In this episode of The Hudson Report, we speak with Michael Hudson about the implications of the flattening yield curve, the possibility of another global financial crisis, and public banking as an alternative to the current system.

Paul Sliker: Michael Hudson welcome back to another episode of The Hudson Report.

Michael Hudson: It’s good to be here again.

Paul Sliker: So, Michael, over the past few months the IMF has been sending warning signals about the state of the global economy. There are a bunch of different macroeconomic developments that signal we could be entering into another crisis or recession in the near future. One of those elements is the yield curve, which shows the difference between short-term and long-term borrowing rates. Investors and financial pundits of all sorts are concerned about this, because since 1950 every time the yield curve has flattened, the economy has tanked shortly thereafter.

Can you explain what the yield curve signifies, and if all these signals I just mentioned are forecasting another economic crisis?

Michael Hudson: Normally, borrowers pay only a low rate of interest for a short-term loan. If you take a longer-term loan, you have to pay a higher rate. The longest-term loans are for mortgages, which carry the highest rates. Even for large corporations, the longer you borrow – that is, the later you repay – the higher the risk is presumed to be. So you pay a higher rate on the pretense that the interest-rate premium is compensation for risk. Banks and the wealthy get to borrow at lower rates.

Right now what’s happened is that the short-term rates you can get by putting your money in Treasury bills or other short-term instruments are even higher than the long-term rates. That’s historically unnatural. But it’s not really unnatural at all when you look at what the economy is doing.
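To make the mechanics concrete, here is a minimal sketch in Python of the "term spread" investors watch. The yield figures are hypothetical placeholders, not market data, and the half-point cutoff for calling a curve "flat" is an arbitrary illustration.

```python
# Hypothetical Treasury yields (percent) by maturity; placeholders only.
yields = {
    "3m": 2.0,   # short-term bill
    "2y": 2.6,
    "10y": 2.9,  # long-term bond
}

# The term spread is the long rate minus the short rate.
spread = yields["10y"] - yields["2y"]

if spread < 0:
    print(f"Inverted curve ({spread:+.2f} pts): short rates exceed long rates")
elif spread < 0.5:   # arbitrary illustrative cutoff
    print(f"Flattening curve ({spread:+.2f} pts): little reward for lending long")
else:
    print(f"Normal curve ({spread:+.2f} pts): long rates well above short rates")
```

The situation Hudson describes is the first branch: short rates have climbed above long rates, so the spread goes negative.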

You said that we’re entering into a recession. That’s just flat wrong. The economy’s been in a recession ever since 2008, as a result of what President Obama did by bailing out the banks and not the economy at large.

Since 2008, people talk about how GDP is growing. Especially in the last few quarters, you have the media saying, “Look, we’ve recovered. GDP is up.” But if you look at what they count as GDP, you find a primer on how to lie with statistics.

The largest element of fakery is a category that is imputed – that is, made up – for rising rents that homeowners would have to pay if they had to rent their houses from themselves. That’s about 6 percent of GDP right there. Right now, as a result of the 10 million foreclosures that Obama imposed on the economy by not writing down the junk mortgage debts to realistic values, companies like Blackstone have come in and bought up many of the properties that were forfeited. So now there are fewer homes that are available to buy. Rents are going up all over the country. Homeownership has dropped by about 10 percent since 2008, and that means more people have to rent. When more people have to rent, the rents go up. And when rents go up, people lucky enough to have kept their homes report these rising rental values to the GDP statisticians.

If I had to pay rent for the house that I own, I could be charged as much as renters down the street have to pay – for instance, for houses that were bought up by Blackstone. Rents are going up and up. This actually is a rise in overhead, but it’s counted as rising GDP. That confuses income and output with overhead costs.
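To see how an imputed category can lift the headline number, here is a minimal sketch with invented figures; only the roughly 6 percent share comes from the interview, everything else is a placeholder.

```python
# Invented economy, for illustration only.
# Suppose imputed owner-occupied rent is about 6% of reported GDP,
# the share Hudson cites for this made-up ("imputed") category.
reported_gdp = 20_000.0                       # $bn, hypothetical
imputed_rent = 0.06 * reported_gdp            # the imputation
transactions = reported_gdp - imputed_rent    # actual market activity

# If market rents rise 10%, the imputation rises with them, so reported
# GDP "grows" even though no additional output was bought or sold.
new_reported = transactions + imputed_rent * 1.10
growth = new_reported / reported_gdp - 1
print(f"Imputed rent: {imputed_rent:,.0f} of {reported_gdp:,.0f} reported")
print(f"GDP 'growth' from rising rents alone: {growth:.2%}")  # about 0.60%
```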

The other great jump in GDP has been people paying more money to the banks as penalties and fees for arrears on student loans and mortgage loans, credit card loans and automobile loans. When they fall into arrears, the banks get to add a penalty charge. The credit-card companies make more money on arrears than they do on interest charges. This is counted as providing a “financial service,” defined as the amount of revenue banks make over and above their borrowing charges.

The statistical pretense is that they’re taking the risk on making loans to debtors that are going bad. They’re cleaning up on profits on these bad loans, because the government has guaranteed the student loans, including the higher penalty charges. They’ve guaranteed the mortgage loans made by the FHA – Fannie Mae and the other groups – that the banks are getting penalty charges on. So what’s reported as GDP growth is actually more and more people in trouble, along with rising housing costs. What’s good for the GDP here is awful for the economy at large! This is bad news, not good news.
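Here is a minimal sketch of the accounting definition quoted above, bank "output" measured as revenue over and above borrowing charges; all the numbers are invented for illustration.

```python
# Invented bank, for illustration. Under the definition above, measured
# "financial services" output = revenue minus the bank's own funding cost.
interest_income = 50.0   # $bn, hypothetical
penalty_fees    = 15.0   # late fees and arrears charges on distressed borrowers
funding_cost    = 30.0   # what the bank pays to borrow

output_core     = interest_income - funding_cost
output_reported = interest_income + penalty_fees - funding_cost

print(f"Measured output without penalty income: {output_core:.1f}")
print(f"Measured output with penalty income:    {output_reported:.1f}")
# Every dollar of penalties charged to borrowers in arrears shows up
# here as extra GDP, which is the distortion Hudson is describing.
```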

As a result of this economic squeeze, investors see that the economy is not growing. So they’re bailing out. They’re taking their money and running.

If you’re taking your money out of bonds and out of the stock market because you worry about shrinking markets, lower profits and defaults, where are you going to put it? There’s only one safe place to put your money: short-term Treasuries. You don’t want to buy a long-term Treasury bond, because if interest rates go up, the bond price falls. So you want to buy short-term Treasury bonds. The demand for these is so great that Bogle’s Vanguard fund management company will only let small investors buy ten thousand dollars’ worth at a time for their 401(k) funds.

The reason investors small and large are buying short-term Treasuries is to park their money safely. There’s nowhere else to put it in the real economy, because the real economy isn’t growing.

What has grown is debt. It’s grown larger and larger. Investors are taking their money out of state and local bonds because state and local budgets are broke as a result of pension commitments. Politicians have cut taxes in order to get elected, so they don’t have enough money to keep up with the pension fund contributions that they’re supposed to make.

This means that the likelihood of a break in the chain of payments is rising. In the United States, commercial property rents are in trouble. We’ve discussed that before on this show. As the economy shrinks, stores are closing down. That means that the property owners who owe commercial mortgages are falling behind, and arrears are rising.

Also threatening is what Trump is doing. If his protectionist policies interrupt trade, you’re going to see companies being squeezed. They’re not going to make the export sales they expected, and will pay more for imports.

Finally, banks are having problems if they hold Italian government bonds. Germany is unwilling to use European funds to bail them out. Most investors expect Italy to exit the euro in the next three years or so. It looks like we’re entering a period of anarchy, so of course people are parking their money in the short term. That means that they’re not putting it into the economy. No wonder the economy isn’t growing.

Dante Dallavalle: So, to be clear: a rise in demand for these short-term Treasuries indicates that investors and businesses see too much risk in the economy as it stands to invest in anything longer-term.

Michael Hudson: That’s exactly right.

Dante Dallavalle: OK. So we have prominent economists and policymakers, like Geithner, Bernanke, Paulson, etc., making the point that we need not worry about a future crisis in the near term, because our regulatory infrastructure is more sound now than it was in the past, for instance before 2008. I know you’ve talked a lot about the weak nature of financial regulation both here at home in the United States and internationally. What are the shortcomings of Dodd-Frank? Haven’t recent policies gutting certain sections of the law made us more vulnerable, not less, to crises in the future?

Michael Hudson: Well, you asked two questions. First of all, when you talk about Geithner and Bernanke – the people who wrecked the economy – what they mean by “more sound” is that the government is going to bail out the banks again at public expense.

It cost $4.3 trillion last time. They’re willing to bail out the banks all over again. In fact, the five largest banks have grown much larger since 2008, because they were bailed out. Depositors and companies reason that if a bank is so crooked that it has grown too big to fail, they had better take their money out of the local bank and put it in the crooked big bank, because that’s the one that will be bailed out – the government can’t afford to let it go under.

The pretense was that Dodd-Frank was going to regulate the banks by increasing the capital reserves they had to hold. Well, first of all, the banks have captured the regulatory agencies. They’re in charge of basically approving Federal Reserve members, and also members of the local and smaller bank regulatory agencies. So you have deregulators put in charge of these agencies. Second, bank lobbyists have convinced Congress to de-tooth the Dodd-Frank Act.

For instance, banks are very heavily into derivatives. That’s what brought down AIG in 2008. These are bets on which way currencies or interest rates will go. There are nominally trillions of dollars of bets that have been placed. They’re not regulated if a bank makes them through a special-purpose entity, especially one based in Britain. That’s where AIG’s problems were in 2008. So the banks have basically avoided having to hold capital against making a bad bet.

If you have bets over where trillions of dollars of securities, interest rates, bonds and currencies are going to go, somebody is going to be on the losing side. And someone on the losing side of these bets is going to go under, like Lehman Brothers did. They’re not going to be able to pay their customers. You’re going to have rolling defaults.

You’ve also had Trump de-tooth the Consumer Financial Protection Bureau. So the banks say, well, let’s do what Wells Fargo did. Their business model is fraud, but their earnings are soaring. They’re growing a lot, and they’ve paid only a tiny penalty for cheating their customers and making billions of dollars off it. So more banks are jumping on the high-risk consumer-exploitation bandwagon. That’s certainly not helping matters.

Michael Palmieri: So, Michael, we’ve talked a little bit about the different indicators that point towards a financial crisis. It’s also clear from what you just stated that, from a regulatory standpoint, the U.S. is extremely vulnerable. Back in 2008, many argue, a huge opportunity was lost in terms of transforming our private banking system into a publicly owned banking system. Recently the Democracy Collaborative published a report by Thomas Hanna titled The Crisis Next Time: Planning for Public Ownership as an Alternative to Corporate Bailouts, calling for a transition from private to public banking. He also made the point, which you’ve made in earlier episodes, that it’s not a question of if another financial crisis is going to occur, but when. Can you speak a little bit about how public banking as an alternative would differ from the current corporate private banking system we have today?

Michael Hudson: Sure. I’m actually part of the Democracy Collaborative. The best way to think about this is to suppose that back in 2008, Obama and Wall Street bagman Tim Geithner had not blocked Sheila Bair from taking over Citigroup and other insolvent banks. She wrote that Citigroup had gambled with money, was incompetent, and outright crooked. She wanted to take it over.

Now suppose that Citibank had been taken over by the government and operated as a public bank. How would a public bank have operated differently from Citibank?

For one thing, a public entity wouldn’t make loans for corporate takeovers and raids. It wouldn’t lend to payday loan sharks. Instead, it would open local branches so that people didn’t have to go to payday loan sharks, but could borrow from a local bank branch or a post office bank in the communities that are redlined by the big banks.

A public entity wouldn’t make gambling loans for derivatives. What a public bank would do is what’s called the vanilla, bread-and-butter business of serving small depositors, savers and consumers. You let them have checking accounts, you clear their checks, you pay their bills automatically, but you don’t make gambling and financial loans.

Banks have sort of turned away from small customers. They’ve certainly turned away from the low-income neighborhoods, and they’re not even lending to businesses anymore. More and more American companies are issuing their own commercial paper to avoid the banks. In other words, a company will issue an IOU itself, paying more interest than pension funds or mutual funds can get from the banks. So money funds such as Vanguard’s are buying commercial paper from these companies, because the banks are not making these loans.

So a public bank would do what banks are supposed to do productively, which is to help finance basic production and basic consumption – not financial gambling at the top, where all the risk is. That’s the business model of the big banks, and some will lose money and crash like in 2008. A public bank wouldn’t make junk mortgage loans. It wouldn’t engage in consumer fraud. It wouldn’t be like Wells Fargo. It wouldn’t be like Citibank. What is needed is a bank whose business plan is not exploitation of consumers, not fraud, and not gambling. That, basically, is the case for public ownership.

Paul Sliker: Michael as we’re closing this one out, I know you’re going to hate me for asking this question. But you were one of the few economists to predict the last crisis. What do you think is going to happen here? Are we looking at another global financial crisis and when do you think, if so, that might be coming?

Michael Hudson: We’re emphatically not looking at “another” global crisis, because we’re in the same crisis! We’re still in the 2008 crisis! This is the middle stage of that crisis. The crisis was caused by not writing down the bad debts – the bad loans, especially the fraudulent loans. Obama kept these junk mortgage loans and outright fraud on the books – and richly rewarded the banks in proportion to how badly and recklessly they had lent.

The economy’s been limping along ever since. They say there’s been a recovery, but even with the statistical fakery – the GDP rise – the so-called “recovery” is the slowest there’s been at any time since World War II. If you break down the statistics and look at what is growing, it’s mainly the financial and real estate sector, and monopolies like health care that raise the cost of living and crowd out spending in the real economy.

So this is the same crisis that we were in then. It’s never been fixed, and it can’t be fixed until you get rid of the bad-debt problem. The bad debts require restructuring the way pensions are paid – paying them out of current income, not financializing them. The economy has to be de-financialized, but I don’t see that on the horizon for a while. That’s why I think that rather than a new crisis, there will be a slow shrinkage until there’s a break in the chain of payments. Then they’ll call that the crisis.

by Yves Smith and Michael Hudson, Naked Capitalism | Read more:

Zero 7 ft. Sia and Sophie Barker

Comcast, Charter Dominate US; Telcos “Abandoned Rural America”

You already knew that home broadband competition is sorely lacking through much of the US, but a new report released today helps shed more light on Americans who have just one choice for high-speed Internet.

Comcast is the only choice for 30 million Americans when it comes to broadband speeds of at least 25Mbps downstream and 3Mbps upstream, the report says. Charter Communications is the only choice for 38 million Americans. Combined, Comcast and Charter offer service in the majority of the US, with almost no overlap.

Yet many Americans are even worse off, living in areas where DSL is the best option. AT&T, Verizon, and other telcos still provide only sub-broadband speeds over copper wires throughout huge parts of their territories. The telcos have mostly avoided upgrading their copper networks to fiber—except in areas where they face competition from cable companies.

These details are in "Profiles of Monopoly: Big Cable and Telecom," a report by the Institute for Local Self-Reliance (ILSR). The full report should be available at this link today.

“Market is broken”

"The broadband market is broken," the report's conclusion states. "Comcast and Charter maintain a monopoly over 68 million people. Some 48 million households (about 122 million people) subscribe to these cable companies, whereas the four largest telecom companies combined have far fewer subscribers—only 31.6 million households (about 80.3 million people). The large telecom companies have largely abandoned rural America—their DSL networks overwhelmingly do not support broadband speeds—despite years of federal subsidies and many state grant programs." (...)

Comcast and Charter

Comcast, the nation's biggest cable company and broadband provider, offers service to about 110 million people in 39 states and Washington, DC.

"All of these people have access to broadband-level service through Comcast Xfinity, but about 30 million of these people have no other option for broadband service," the ILSR wrote.

Comcast's broadband subscribers number 25.5 million households, or about 64.8 million people, based on the average US household size of 2.54 people.

Charter, the second biggest cable company after Comcast, offers service to 101 million people in 45 states. Some 22.5 million households, covering about 57.2 million people, subscribe to Charter Internet, according to the numbers cited by the ILSR.

Like Comcast, Charter offers broadband-level speeds throughout its territory. "About 38 million [people in Charter territory] have no other option for broadband service," the report said.

Comcast and Charter generally don't compete against each other. They have a combined territory covering about 210 million people, yet the companies' overlapping service territory covers only about 1.5 million people, according to the Form 477 data cited by the ILSR. The overlap is mostly in Florida, where Charter purchased Bright House Networks, and may be overstated because an entire census block is counted as served even if an ISP offers service to just one resident in the block.
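The population figures above follow from one simple conversion, households multiplied by average household size; here is a minimal sketch reproducing the arithmetic from the subscriber counts the report cites.

```python
# Average US household size used by the ILSR report.
PEOPLE_PER_HOUSEHOLD = 2.54

subscribers_m = {        # households, in millions, from the report
    "Comcast": 25.5,
    "Charter": 22.5,
}

for isp, households in subscribers_m.items():
    people = households * PEOPLE_PER_HOUSEHOLD
    print(f"{isp}: {households}M households = about {people:.2f}M people")

# Comcast: 25.5 * 2.54 = 64.77, the report's "about 64.8 million people"
# Charter: 22.5 * 2.54 = 57.15, the report's "about 57.2 million people"
```

The same multiplier reproduces the conclusion's combined figures: 48 million households works out to about 122 million people, and 31.6 million households to about 80.3 million.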

by Jon Brodkin, Ars Technica |  Read more:
Image: ILSR