Wednesday, October 31, 2018

If You See This Symbol on Your Favorite Costco Item, Stock Up Now

Big box store members are already smitten with perks like these little-known benefits of having a Costco card. But then there are the products themselves. Not only does Costco have unbeatable prices, but it also carries top-quality options. So when you find a favorite item, it can be devastating to realize it’s no longer carried the next time you stop by.

Part of the reason Costco’s prices are so cheap is that it only carries a limited number of products. Of course, that also means that the store won’t hang onto an item that isn’t selling when it could replace it with something more appealing. Luckily, there’s an easy way to find out if your favorite product is about to be discontinued.

Take a look at the upper right corner of a Costco price tag. If you see an asterisk, that’s a sign that the wholesale store won’t be restocking the item. Maybe the product hasn’t been selling well, or maybe the manufacturer upped its prices. Either way, it could be your last shot to get your hands on that item in-store—at least for now. To start, see if any of these 15 Costco must-buy products have the special price tag.

Even if a product disappears from the shelves temporarily, it might pop up again in the future. For instance, seasonal items like holiday gift wrap or certain foods might not appear until the next year, but they’ll show up again once they’re back in season. Still, if you have your sights on a nonperishable item that you know you’ll use up within a few months, might as well stock up on more to make it through the year. Next, find out the 15 secrets Costco employees won’t tell you.

by Marissa Laliberte, Reader's Digest | Read more:
Image: John Greim/REX/Shutterstock
[ed. Be sure to click on the links (which include other links) for additional Costco insights. See also: coupons for additional savings at Costco Insider.]

Tuesday, October 30, 2018

Everything But The Girl


Reworked Video - Peter Lindbergh's shoot for Lancome starring Isabella Rossellini.
Lyrics

When Is Screen Addiction Actually Addiction?

Addiction is one of those things where, the more you learn about it, the more terrifying it gets. For instance, some studies suggest it can impede your ability to manage pain in your body and even enjoy chocolate or sex. For years or decades.

And anyone who follows brain science knows that brain plasticity is pretty hip these days. Now we know it lasts way into old age and can do some pretty amazing things. But it’s not unlimited, especially during crucial developmental periods. In fact, there is some evidence that regular teenage drug users lose their plasticity – their ability to create new connections in the brain – which can change the way the brain is wired.

Connections in the brain are a little like roads. And you can only build so many over the landscape. This may account for some cognitive deficits observed in regular drug users. Drug addiction, it seems, may hoard all the roads for itself, which can be devastating for a teen who is building the roads she will use the rest of her life.

But what is addiction, though? How do you know if you are addicted? I spend at least 10 hours per day in front of a screen – am I addicted? Can you get addicted to Microsoft Word? Or Facetime? My sister has an unhealthy obsession with the NFL and the Patriots in particular. Is she addicted to Tom Brady (yeah right, in her dreams)? I smoked cigarettes on and off for years – just at parties and on top of rocks, mind you. Is that addiction?

Yes, actually, that one probably is. I would literally kill the next person I saw if it would allow me to smoke again. I doubt I’d even feel all that bad about it.

Now, no one is saying that screen addiction – if it even is a true addiction – is just like heroin addiction or even smoking. In one case, you are getting hooked on your own internal reward chemicals and in the other you are hooked on a chemical that is tailor made to hook you. Certainly being hooked on screen time is healthier for your liver and lungs than alcohol, drugs, or tobacco.

“We don’t think that every child that is given lots of screen time will show ADHD or will become a screen time addict,” says Susan Ferguson, an addiction expert at the University of Washington, who was involved in the mouse work. “We just don’t know how addictive it is.”

Screen time may not be as addictive as traditional drugs, but measuring addictiveness is notoriously hard to do. Indeed, gambling, running, and sex can all be addictive. And tobacco addiction is arguably more powerful than harder drugs, without considering any other factors.

Man, I could really use a smoke. Anyone want to split one with me?

Anyway, the real test of addiction is how it affects your life. Does it negatively impact you? Experts disagree over the particulars on this, which sounds like petty bickering at first, but then you realize that how you define addiction vastly changes the scale of the problem. One definition puts it at half a percent of the population while another puts it at ten percent. That’s tens of millions of people in the United States.

Mark Griffiths, a British addiction expert at Nottingham Trent University (which is very different from its rival, Nottingham University, a fact I only learned after I had gotten it wrong, sadly), had one of my favorite lists. He says addiction a) becomes the most – or almost the most – important thing in your life, b) changes your mood considerably, c) pushes you to get ever more of it, d) triggers withdrawal if you don’t, e) often triggers relapses into addictive cycles, and, most importantly, f) causes conflict.

That last one can be between you and loved ones or you and yourself. It can also be terrifying. I heard stories of families ripped apart and lives ruined by something as stupid as a smart phone. Though, I suppose it’s no more stupid than online poker, methamphetamines, or some weird leaf that’s dried out, crushed up, lit on fire, and inhaled through a filter.

God, I would kill for a cigarette right now.

Anyway, Griffiths’ list isn’t too strange, but he is unusual in that he insists that all criteria be met before using the label “addiction,” ensuring that very few people cross that threshold. And it’s interesting. Take my smoking – only b), c), maybe d), and e) really applied. I guess I have friends who experienced a) and that’s why quitting was so much harder for them.

But f) is the one that really catches my eye, because there was really only conflict later in my life, once I realized how bad it was for me. That’s when I felt conflicted about it. So, paradoxically, it was only when I saw it as an addiction that it might have actually become an addiction.

There is another reason to have such strict criteria for addiction. It cordons off a few people who really need to focus on this as a truly life-destroying problem, ideally with professional help. It’s hard to know how many people fit into this category, but it’s likely to be less than one percent of users (given that gambling addiction, which is better studied, hovers around one percent and screen addiction does not seem to have risen to that level yet).

This creates a wide, crucial space for those who have a problem but aren’t technically addicted. Those of us who sense that maybe screens have crept a little too far into our lives but snort derisively when someone calls us an addict. It kind of frees screen addiction from the controversy of “addiction.”

Like me and my smoking. I wasn’t addicted, according to Griffiths’ list, but it was a problem. And, while I would go months without a smoke, it was always there in the back of my mind as something I’d like to do. And if I had a stressful day or was out with buddies – bang! – I was smoking as soon as I could.

According to experts I talked to, this is where many people are with screen addiction. They like it a lot, do it too much, kinda know it, but manage their lives just fine. And like me and cigarettes, they will have to make a choice. Independent of labels and stigma, is this something that is lessening our quality of life? Is this causing damage to us?

by Erik Vance, The Last Word on Nothing |  Read more:
Image: Brian Moore, Mister Guy 11

What I Learned About Life at My 30th College Reunion

On the weekend before the opening gavel of what’s being dubbed the Harvard affirmative-action trial, a record-breaking 597 of my fellow members of the class of ’88 and I, along with alumni from other reunion classes, were seated in a large lecture hall, listening to the new president of Harvard, Lawrence Bacow, address the issue of diversity in the admissions process. What he said—and I’m paraphrasing, because I didn’t record it—was that he could fill five whole incoming classes with valedictorians who’d received a perfect score on the SAT, but that’s not what Harvard is or will ever be. Harvard tries—and succeeds, to my mind—to fill its limited spots with a diversity not only of race and class but also of geography, politics, interests, intellectual fields of study, and worldviews.

I loved my four years at Harvard, largely because of the diversity of its student body. I don’t love the fact—now made public through the trial but previously understood by all of us to be true—that the kids whose parents donate buildings are given preferential treatment over those whose parents don’t. But I understand why the development office, which allows the university to give a free ride to any student whose family makes less than $65,000 a year, might encourage such a practice, which is hardly unique to Harvard. I also don’t love the fact that the Harvard fight song is still “Ten Thousand Men of Harvard,” in a school populated by at least as many women as men, and yet hearing its opening notes can still make me deeply nostalgic. Moreover, I am appalled that all-male final clubs—fraternity-like eating clubs in which the sons of America’s privileged class have traditionally gathered—still exist on campus (albeit with sanctions) without commensurate opportunities, with rare exceptions, for women, minorities, and others, but I also call some of their alumni members my closest friends.

Intelligence, it has been said, is the ability to hold two opposing ideas at the same time and still function, and if universities could be said to have one overriding goal as institutions of higher learning, it is to teach their students this critical skill, Harvard no more than others. Seeing the coin from either of its two sides has never been more important, particularly now, in this nuance-lacking era of divisiveness and nationalism. It’s no wonder that in fascist regimes, the intellectuals are always the first to be silenced.

I believe in the benefits of diversity, even if it means choosing an immigrant kid with a lower-than-usual SAT score (for Harvard) but other stellar qualities, like Thang Q. Diep, Harvard class of ’19, whose application has been trotted out by the lawsuit for all to see. And I’m also aware, as a Jew, that Harvard’s diversity initiative was first put into motion as a way to keep the university’s burgeoning Jewish population in check. I can hold both of these truths—diversity is good; the roots of diversity in the admissions process were prejudiced against my own people—and not only still be able to function but also to see that sometimes good results can come from less-than-good intentions.

Because the point of diversity on a college campus, no matter its less-than-honorable roots, is not to count how many brown faces versus how many white and black faces a school has. It is to provide a rainbow of politics and upbringings and thought processes and understandings that might teach us, through our differences, how similar we are.

Though we all went to the same school, and Harvard’s name likely opened doors for many of us, at the end of the day—or at the end of 30 years since graduation, in this case—what was so fascinating about meeting up with my own richly diverse class during reunion was that no matter our original background, no matter our current income or skin color or struggles or religion or health or career path or family structure, the common threads running through our lives had less to do with Harvard and more with the pressing issues of being human.

Life does this. To everyone. No matter if or where they go to college. At a certain point midway on the timeline of one’s finite existence, the differences between people that stood out in youth take a backseat to similarities, with that mother of all universal themes—a sudden coming to grips with mortality—being the most salient. Not that this is an exhaustive list, but here are 30 simple shared truths I discovered at my 30th reunion of Harvard’s class of 1988.
  1. No one’s life turned out exactly as anticipated, not even for the most ardent planner.
  2. Every classmate who became a teacher or doctor seemed happy with the choice of career.
  3. Many lawyers seemed either unhappy or itching for a change, with the exception of those who became law professors. (See No. 2 above.)
  4. Nearly every single banker or fund manager wanted to find a way to use accrued wealth to give back (some had concrete plans, some didn’t), and many, at this point, seemed to want to leave Wall Street as soon as possible to take up some sort of art.
  5. Speaking of art, those who went into it as a career were mostly happy and often successful, but they had all, in some way, struggled financially.
  6. They say money can’t buy happiness, but in an online survey of our class just prior to the reunion, those of us with more of it self-reported a higher level of happiness than those with less.
  7. Our strongest desire, in that same pre-reunion class survey—over more sex and more money—was to get more sleep.
  8. “Burning Down the House,” our class’s favorite song, by the Talking Heads, is still as good and as relevant in 2018 as it was blasting out of our freshman dorms.
  9. Many of our class’s shyest freshmen have now become our alumni class leaders, helping to organize this reunion and others.
  10. Those who chose to get divorced seemed happier, post-divorce.
  11. Those who got an unwanted divorce seemed unhappier, post-divorce.
  12. Many classmates who are in long-lasting marriages said they experienced a turning point, when their early marriage suddenly transformed into a mature relationship. “I’m doing the best I can!” one classmate told me she said to her husband in the middle of a particularly stressful couples’-therapy session. From that moment on, she said, he understood: Her imperfections were not an insult to him, and her actions were not an extension of him. She was her own person, and her imperfections were what made her her. Sometimes people forget this, in the thick of marriage.
by Deborah Copaken, The Atlantic |  Read more:
Image: Dave Kotinsky/Getty

Jerry S., Naknek River, Alaska 2018
via: Jerry
[ed. Good going, buddy.]

Monday, October 29, 2018

Around the Clock at Pike Place Market

The Morning Rollout

On Pike Place Market time, 8:45 isn’t exactly early. The fish market guys were here at 5:30 to break down halibut; the flower stalls had their buckets filled with peonies and irises by 7. The main arcade is mostly empty, save the strains of a busker’s morning warmup on her violin. But at the farthest reach of the North Arcade, the swarm of vendors is seven people deep and spreading. Still more have climbed on the worn concrete slab countertops for a better view. There’s a slight fizz of anticipation in the cool morning air as everyone trains their gaze on a whiteboard that’s clearly been tacked up for ages.

Beyond the permanent produce stands known as high stalls are the flat tables for farmers who claim their specific turf via reservation the day before. The rest are for the day-stall vendors, who make anything from cheese boards to bongs to liners for rubber boots. Every morning for half a century they report for roll call, a ritual jockeying for the most desirable stall their seniority will allow, an old-school Wall Street trading floor reimagined with more flowing beards and fleece. But also a genuine sense of community; some of these men and women have spent decades side by side at these tables.

Posted over the board are the names of the 230 people eligible to sell crafts in Pike Place Market. They’re listed in order of seniority, from Bob Crew of Metamorphosis Leathers, who has been here 40 years, down to the handful of newcomers approved a few weeks ago. Technically guys like Crew have the market’s equivalent of tenure; other senior vendors can dart up and lay claim to their preferred location with blue dry-erase marker—provided they do it before Zack Cook rings the bell at 9am sharp to start the day’s roll call.

Cook, bearded and baby faced beneath his Pike Place Market cap, usually works with the farmers (he even visits their fields to confirm they actually grow their own wares). Today it’s his turn as designated market master, ready to rattle off the names of the vendors assembled before him, in order of seniority, and record their requested location. The faster this goes, the more time everyone has to set up before Cook does his compliance rounds at 11.

Munko? Two sixteen!

Seppa? Bridge sixty-three!

Parriott? Fourteen out, please.

Yocco? Two twenty-four dogleg.

The covered arcade spots go first, then the bridge; given the sunny forecast, outside berths go fast, too. At last, Cook hits the final name on his list. He counts down from five, a last call for anyone who wants to change their spot. His blue marker records the time on the board: 9:18am. Roll call over, the vendors scatter to bring their part of the market to life. Racks of art and textiles start to bloom in the gray spaces almost instantly.

Roughly 40,000 people will pass through Pike Place Market today. Most of them will never know about roll call, or the 49-page rulebook that shapes who sells what here, and where. By 11am, the arcade fills up. A remarkably tanned couple in shorts and matching San Francisco sweatshirts break their resolute stride to browse Stone City Farm’s goat-milk soaps; two girls in Boise State softball team gear examine the felt Donut Cats at the MarninSaylor booth. A display of art photographs nearly covers the all-powerful whiteboard, hiding in plain sight until the day is done.

Afternoons and Busker Tunes

At the corner of First and Stewart, just a block from Pike Place Market, a woman in a lavender sweatshirt and leggings turns to me with a sheepish expression as we await the green light. She knows locals must roll their eyes at what she’s about to ask. “Am I close to the first Starbucks?”

The man in a mint green button-down next to us turns as we step into the crosswalk—“Don’t feel bad; I just moved here yesterday.”

Just head down to that brick street right there, I tell her, and you can’t miss all the Starbucks groupies taking photos. Except, it’s lunch hour: Crowds clog every sidewalk.

At Corner Produce, a vendor who’s a dead ringer for Zach Galifianakis hews off a chunk of a red Jazz apple and offers it to a young woman, along with his finely honed sales pitch: “It’s the jazziest of apples.” Beyond them, and beyond the wall of Instagrammers, is the institution that’s both ambassador and bellwether for the entire market. Once upon a time the vendors here spoke an assortment of languages; now it’s the visitors. Two guys chattering in German weave past a slow-moving tour group whose leader narrates in Japanese. A new slate of musicians takes its turn at the market’s designated busking sites; lively bluegrass permeates the air.

Simply follow the lodestar of the Public Market Center clock, and you can’t miss the Pike Place Fish Market guys. If a big convention of lawyers is in town, they know it. If a cruise ship just docked at Pier 66, its passengers will soon proliferate around the counter. When they do, Jaison Scott is ready.

He and his comrades in aprons are the Flying Wallendas of fishmongering, walking a daily high-wire of salmon-throwing showmanship and legitimate commerce, conducted quickly, and with knives, in very close quarters. “Love the people” is the mantra they repeat among themselves, a reminder of how seriously they take their role as most visitors’ introduction to Seattle. Scott knows how to say assorted phrases in Italian or Tagalog—“most of them ‘I love you,’ to little old ladies”—to break the ice, and hopefully turn audience members into actual customers. He can also spot the regulars, with their canvas bags and “I need something, then I want to get the hell out” look in their eye.

Scott’s mother worked here for longtime owner Johnny Yokoyama in her youth, then ran the counter at the old Wonder Freeze just down the arcade up until the day Scott was born in 1972. When he was an infant, Yokoyama’s mom, Helen, would babysit Scott, tucked in a banana box while his mom worked nearby. That was kind of a thing here, thanks to the market’s particular combination of multiple generations and a preponderance of empty fruit crates. It would be another decade before the day care opened on the lower level.

Down the arcade, Lina C. Fronda arranges tomatoes at high stall no. 7, across from Lowell’s. She emigrated from the Philippines in 1963 for an arranged marriage at age 23; her husband was 59 and brought her to work here the day after her flight touched down in Seattle. “He commanded me like a little kid,” she recalls. “When he address me, he say, ‘Hey kid—do this.’ ” The next year, her infant son Donnie joined her, tucked in a banana box. Now Lina’s 78 and she’s still here, bustling around the stand in patterned yoga pants, washing heads of cabbage. So is Donnie; he handles customers.

Just across the bricks of Pike Place, Mila Apostol’s daughter Joy helps her into one of the high counter seats at Oriental Mart. Apostol also came here from the Philippines and first rented this space in 1971; the market reminded her of the one in her hometown. Originally she sold round, flat baskets while her youngest son dozed in, yes, a banana box. Now her children—and their children—work here. Joy runs the shop; Apostol’s other daughter, Leila, holds court over the six-burner range, beneath an unruly assortment of handwritten house rules—“We do not accept difficult customers, so know your role!”—designed to manage crowds, but also put diners on notice that the hospitality here is as home style as the food.

Leila makes whatever she feels like each day, in summer often the dinuguan stew that’s a hit with Filipino cruise ship workers and flight crews, but pretty much always the sinigang, the brisk tamarind soup she’s adapted with salmon collars. She goes through so many collars that she has to get them from two nearby markets—Pure Fish and Jaison Scott and his crew at Pike Place Fish.

Scott cruises past on breaks, sometimes just to say hi to the women he’s known since he was a kid and peek at the pans of adobo and pancit. “I’m always snacking there, just rice and whatever she has.”

At high noon, when it can take an eternity to traverse a single block, it’s easy to mistake tourists for the dominant narrative of this place. But even amid the crowds, you can see evidence of a generations-old community from the corner of your eye: The day stall vendors who chat between customers; the fishmonger who dashes in on a break to kiss Mila on the cheek.

“We like to support each other here,” says Leila, as she attends to a pan of plump longanisa sausages. “I buy from them; they eat over here. It all kind of works out.” She looks up and spies the trio of twentysomethings directly in front of her display of food. “You have any questions? Oh, not right now?” Even the most oblivious loiterer can’t miss the tinge of reproach in her voice.

by Allecia Vermillion, Seattle Met | Read more:
Image: Amber Fouts

"Preferences", My Ass

A coffee shop near my office recently went out of business. No surprise: It was across the street from a Starbucks. But it reminded me that the idea of the Free Market satisfying consumer “preferences” is nonsense.

When Big Box stores or corporate chains show up in town, and the “mom and pop” stores disappear, many people will lament the arrival of the chain. But they will be informed that what has happened is a gain in economic efficiency. Yes, Walmart is gigantic and can undercut the “mom and pop” stores’ prices. But the people of the town had a choice: They could have had higher prices and “small business” or lower prices and “big business.” The people chose lower prices. Criticizing what happened means criticizing the freely-made choice of consumers. Should they have been forced to pay higher prices and go to a store they clearly didn’t prefer?

Now, I want to leave aside all issues about whether big business Creates Jobs and whether those jobs are better or worse. Here I want to focus on a narrow contention: the idea that if Starbucks comes to town and puts another coffee shop out of business, it necessarily reflects the preferences of coffee-drinkers in the community and whatever price-coffee combination Starbucks was offering was more appealing to more people.

This isn’t necessarily so, though. When Starbucks moves in, it could charge exactly the same prices as Friendly Neighborhood Coffee. And perhaps, with prices the same, Starbucks coffee being garbage, and the company surviving solely on branding and convenience, only a fraction of the coffee-drinkers would switch to Starbucks. The majority of them stick with FNC. Starbucks may still put FNC out of business! As we know, it’s far cheaper for Starbucks to produce each individual cup of coffee than it is for FNC, thanks to economies of scale. They order greater quantities of ingredients and paper cups and such, so each one is far, far cheaper. Starbucks will make far more money charging $5 for a latte than FNC will. If FNC’s marginal profits are quite low, even a small amount of defection of its customer base to Starbucks could kill it. Every day, more people in the community choose FNC over Starbucks. The Starbucks is half-empty, the FNC is three-quarters full. It doesn’t matter. FNC’s going down, and then its customers are left with Starbucks from then on.
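
To make the margin arithmetic concrete, here is a minimal sketch with purely made-up numbers (the figures are assumptions of mine, not anything from the article): a shop with thin per-cup margins and fixed rent and wages can flip from profit to loss when even a modest share of its customers defects, which is all a deep-pocketed chain needs.

```python
# Hypothetical numbers, chosen only to illustrate the argument above -- none of them
# come from the article.
def monthly_profit(cups_sold, price, cost_per_cup, fixed_costs):
    """Profit = per-cup margin times volume, minus rent, wages, and other fixed costs."""
    return cups_sold * (price - cost_per_cup) - fixed_costs

# Friendly Neighborhood Coffee before Starbucks arrives: thin margins, barely solvent.
before = monthly_profit(cups_sold=6000, price=5.00, cost_per_cup=3.50, fixed_costs=8500)
# The same shop after a modest 15 percent of its customers defect to the chain.
after = monthly_profit(cups_sold=5100, price=5.00, cost_per_cup=3.50, fixed_costs=8500)

print(before)  # 6000 * 1.50 - 8500 =  500.0  -> a small monthly profit
print(after)   # 5100 * 1.50 - 8500 = -850.0  -> a monthly loss, even though 85% stayed
```

In this toy scenario, five-sixths of the town still prefers FNC, yet FNC closes anyway, while the chain, with lower per-cup costs and deeper pockets, can run the same arithmetic indefinitely.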

Here’s the implication: Given two options, A and B, most people in a town may choose B, yet end up getting stuck with A. The story we’re told to justify the effect of chains on small businesses is that they end up satisfying a community more. But they can easily result in the opposite. Thanks to the concentration of corporate power, we get an undemocratic result. Now, in the example I used, the chain didn’t even need to undercut prices in order to massacre the competition. It’s enough just to peel away a few of their customers who have some minor preference for the chain for other reasons. (Although theoretically, zero people could favor the chain over the small business, and the small business still goes broke. If there is a sufficient proportion of the population that basically chooses where they go at random, and half of those people end up going to A and half to B, then the entire population could be comprised of People Who Don’t Care One Way Or The Other and People Who Prefer B, and the town ends up with A.) (...)

The idea of “letting the market decide” is that businesses compete for customers, and people choose the products or services they like the most, and those products or services win. It seems very logical at first. But differential amounts of wealth and power will mean it doesn’t happen that way. The Amazon bookstore can come to town and shut down the little community bookshop even if most people still shopped local. In fact, Amazon can ruin pretty much any small business it wants! All it needs to do is lure enough people away. It will win every competition. Once you have concentrated corporate power, it may no longer be true that the triumphant business offers the best “quality of deal.” (And of course, once it has killed every small business, it can always start offering a substantially worse quality of deal than those others offered, but people are stuck with it forever, because nobody can possibly hope to compete. Amazon is on its way to being virtually monopolistic, and once it succeeds, there will be no such thing whatsoever as “free choice” among competing products. This is one of the reasons people are talking about reviving strong anti-trust enforcement!)

Most defenses of the existing concentrations of economic power rest on fables. They treat the market as if it were a world of lemonade stands rather than gigantic behemoths that will eat you alive the moment you compete with them. If you invent a nifty new type of coffee that most people like better, they can threaten you with a carrot/stick approach: Sell the rights to it to us, or we will ruin you. Then, they may not even sell the device, even if everyone wants one! (This forms the plot of the charming 1950s British film The Man In The White Suit, in which a chemist invents a suit that never gets dirty, and both soap manufacturers and labor unions—people had unions back then—stop the thing from reaching the public. The usual reply here is that if there was a market for it, some venture capitalist would obviously invest, but nine-dozen millionaires won’t be able to stop an extremely determined Jeff Bezos.)

For the purposes of this example, I have completely accepted the foundational premises of mainstream economics, which I actually totally reject. (The concept of “rationality,” the lack of distinction between price and value, etc.) But even if we pretend to be strict utilitarians, and care only about “the maximization of well-being units,” Starbucks coming to town can clearly make your community worse off.

by Nathan J. Robinson, Current Affairs |  Read more:
Image: Zhang Peng—LightRocket/Getty Images via

Paxton Chadwick, Under the Surface
via:

Sunday, October 28, 2018

Defensible Space

“Megafires” are now a staple of life in the Pacific Northwest, but how we talk about them illustrates the tension at the heart of the western myth itself.

In the Pacific Northwest, people are beginning to refer to the month of August as “smoke season.” For most of this past August, for example, the Methow Valley in Washington State was choked with smoke from the Crescent Mountain fire to the southwest and from the McLeod Fire to the north. The Okanogan County post offices and community centers were offering free particulate respirator masks, and fire progression maps were updated daily and posted outside the town halls. Local businesses offered 10 percent off to all firefighting personnel, who were camped in tents on the sprawling rodeo grounds outside of town. Helicopters with drop buckets of water and red fire retardant were constantly overhead. And at dinner, everyone’s cell phone rang at once with fire updates from the county.

The irony is that, when I was growing up there, August was the month that could be most relied upon for sunny weather. But in August of 2014, the massive Carlton Complex wildfire burned nearly 260,000 acres of Okanogan County and destroyed 363 homes, making it the largest single fire in state history. In August of 2015, the Okanogan Complex fires burned over 300,000 acres, killed three U.S. Forest Service firefighters, and forced the evacuation of several towns. My parents were evacuated for several days in 2015, and this summer, I helped them dust ashes from the vegetables in the garden. On particularly bad days, the sun shone red and the air smelled like campfires and hurt your lungs. Not being able to see the mountains hurt your heart.

Smoke season is not exactly new, for the forests of the West have always burned. But the scale of these huge wildfires—“megafires,” they are called—has grown, due to a complex interplay of increased human habitation in and near the forests, the multifaceted effects of climate change, and the long practice of fire suppression rather than fire management by the U.S. Forest Service. While wildfires are a constant of the forests’ ecology, the once-exceptional burns have now become routine.

So routine, in fact, that researchers now study the mental health effects of prolonged exposure to the “smoke apocalypse.” Last summer, New York Times contributing opinion writer (and, like me, a Pacific Northwesterner) Lindy West described smoke-blanketed Seattle, four hours southwest of Okanogan County, as filled with “the claustrophobia, the tension, the suffocating, ugly air,” and rightly pointed to it as a phenomenon exacerbated by climate change. “In Seattle, in a week or so, a big wind will come and give us our blue sky back,” she wrote. “Someday, though, it won’t.”

Indeed, friends of my parents are talking about moving away. Those who stay long for the smoke to clear and for the summer sky to be as blue as it once was. But this nostalgia is worth attending to, for how we talk about the wildfires is also how we talk about the West. The idea of the West—as region, ideology, national mythos—is all about desiring the authentic in a landscape of inauthenticity, about safely yearning for something never there in the first place, about obscuring violence with romance.

Since the landscape of the West is indelibly shaped by its own story, talking about land in the West always contains a moral. How we talk about the wildfires illustrates the tension at the heart of the western myth itself, one that will need to collapse from its own weight if we ever hope to see the sky for what it truly is. And each summer now, that sky is on fire.

Forest fires are an intrinsic part of our world’s carbon-rich ecology. Ecosystems such as Washington’s thick central and eastern forests are reliant on fire to help liberate nutrients in the soil, open the tree cones that need heat to release their seeds, clear out unwanted underbrush, and produce a healthily shifting mosaic of micro-ecologies on the forest floor. Fire is also one of the oldest—and perhaps the most determinative—parts of the human world, and native economies used it to transform the North American landscape well before Europeans arrived. Native peoples turned forests into grassland and savannah, cleared and carefully curated forest vegetation and fauna to better hunt and gather, and even practiced fire prevention and, when necessary, fought wildfires.

Living among wildfire smoke is also not new, especially in the Northwest as settlements formed in the drainages and valleys of mountains where smoke tends to pool. During the big fires—1865, when a million acres burned from the Olympics to the Sierras, the Tillamook cycle, which burned from 1933 until 1951 in a series of reburns—smoke was endemic to the Pacific Northwest. In the 1880s, smoke was reportedly so thick through the summer and fall seasons that geological survey crews in the Cascades had to abandon their work.

Yet today Washington State has more homes in fire-prone wildland areas—known as the “wildland-urban interface,” or WUI—than anywhere else in the country. There is estimated to be a 40 percent increase in homes in the WUI between 2001 and 2030, with no sign of such development abating, despite the megafires. New developments have no mandatory review procedures to assess wildfire risk. The Okanogan County Comprehensive Plan on managing growth, for example, released just after the Carlton Complex fires in 2014, didn’t include a single concrete guideline or requirement. Instead, it is up to each individual property owner to reduce risk on their own land.

As a result, state and federal firefighters have to actively suppress fires—not merely manage them—in order to save homes (which they do with remarkable and laudable precision). This suppression leaves forests overly dense and ready to burn while the increased presence of people also makes fires much more likely: in the dry tinderbox of southern California, for instance, 95 percent of fires are started by human activity.

The reigning ethos of development is, of course, private property: let people do what they like on their own land. There is a byzantine patchwork of environmental regulations and land usage laws at the county, state, and federal level, but these are largely geared toward managing growth rather than suppressing it. “I’m not real big on over-regulating people,” Andy Hover, one of the current Okanogan County Commissioners, said in the middle of this year’s fire season. “Rules and regulations are kind of like—well, is that really what we want?”

Whether or not “we” really want rules and regulations in the West is the historically vexed question that has driven the development of the West since colonial settlement. Despite its mythic ethos of self-reliance, independence, and rugged autonomy, a massive influx of federal funds and intervention has always been necessary for non-Native settlers to live in the West. The federal government funded decades of military campaigns and genocidal wars against indigenous people to clear the land. Federal land grants of over 100 million acres, tax incentives, and government loans all helped build the transcontinental railroads, which both opened the West to increased settlement and built the power of banks and finance on Wall Street. The Homestead Act of 1862 offered free land to white farmers if they agreed to “improve” it for five years; and the Dawes Severalty Act in 1887 broke up the grants of reservation land initially sanctioned for Native Americans. One name for this, popularized by historian Frederick Jackson Turner, is the frontier thesis; another is manifest destiny. Yet another is imperialism. Its legacy continues in the approach to western housing developments today: what was once held in common is nominally and culturally understood as the preserve of the individual yet underwritten by the federal government.

Today, more than half of the U.S. Forest Service’s budget goes to fighting wildfires and, increasingly, keeping them away from people’s private property.

So while fire season is not new, it still feels new to many of us who are used to seeing summer mountain skies where the blue was so vast it humbled even the mountains at its edge. It feels new when the hills you’ve driven through for years are lined with blackened, charred trunks, and the old and chipping Smokey Bear sign, just across the street from the tiny U.S. Forest Service office in the Methow Valley, continually points to the color-coded scale of today’s fire danger: red for EXTREME. (...)

I was reading a book about wildfires in a local bakery in Winthrop when a contractor who rents firefighting equipment to the Forest Service gamely tried to pick me up. But because this is a western story, instead of offering me his phone number, he offered me a pamphlet on how to defend my home from wildfire.

“Defensible space,” I learned, is the goal behind any wildfire preparedness campaign. It denotes the area between a house and an oncoming fire that has been managed by the homeowner to reduce wildfire risk and provide firefighters with a clear space of operations. Defensible space has become the watchword of private programs such as Firewise USA®, a partnership between a nonprofit organization and federal agencies that teaches property owners how to “adapt to living with wildfire” and prepare their homes for fire risk.

Creating defensible space involves clearing excessive vegetation (shrubs, dense clusters of trees, dried grass) from around the house and replacing it with well-irrigated lawn or flowerbeds, as well as surrounding your home with fire-resistant materials to deflect burning embers. Depending on your particular vegetation type and the percent of slope on which your house rests, you will need between 30 and 200 feet of defensible space surrounding your home.

The idea of defensible space strikes me as an intrinsically western one. It has taken a tremendous amount of government money, environmental engineering, and colonial violence for there to be such a thing as “private property” in the West, and for people to live out their—historically speaking—absurd fantasies of independence and self-reliance, to create their own western defensible space. And yet still, for the one third of the United States that lives in the wildland-urban interface, each house in each subdivision attempts to surround itself by its own barrier of self-created defensible space, each pretending to be self-reliant yet in need of massive federal funds for power, water, roads, and firefighting.

by Jessie Kindig, Boston Review |  Read more:
Image: Ashley Siple

Who Shot the Sheriff?

Goings-on in the Tivoli Gardens: A Brief History of Seven Killings

Bob Marley had called a break during a band rehearsal at his house on the evening of 3 December 1976 when two cars pulled up and seven or more gunmen got out. One found his way to the kitchen, where Marley was eating a grapefruit, and opened fire. A bullet scraped his chest before hitting his upper arm, and four or five hit his manager, Don Taylor, who was standing between him and the doorway. The keyboard player’s girlfriend saw ‘a kid’ with his eyes squeezed shut emptying a pistol into the rehearsal area. The lead guitarist, an American session man on his first visit to Jamaica, took cover behind a flight case. The bass player and others – accounts vary as to how many – dived into a metal bathtub. Marley’s wife, Rita, was hit in the driveway while trying to get their children out and went down with a bullet fragment in her scalp. There were shouts: ‘Did you get him?’ ‘Yeah! I shot him!’ Then police arrived to investigate the gunfire and the attackers took off.

The manager had to be flown to Miami for surgery, but all the victims survived, and while each of the gunmen gets killed in A Brief History of Seven Killings, the novel restages the assault on Marley’s house with eight shooters, most of whom get given names: Josey Wales, Weeper, Bam-Bam, Demus, Heckle and Funky Chicken, plus ‘two man from Jungle, one fat, one skinny’. (‘Jungle’ is a nickname for one of the many social housing developments that sprang up in Kingston in the 1960s and 1970s.) The killings in the title of Marlon James’s novel – a novel that’s built around the attempt on Marley’s life much as Don DeLillo’s Libra (1988) and James Ellroy’s American Tabloid (1995) are built around the Kennedy assassination – turn out, after hundreds of pages, to be modelled on a massacre carried out years later in an American crack house, allegedly by Lester Coke, a Kingston gang boss who burned to death, in unexplained circumstances, in a high-security prison cell in 1992. His son and heir, Christopher ‘Dudus’ Coke, is the man the Jamaican army and police were looking for when they killed at least 73 civilians in a raid on the Tivoli Gardens estate in West Kingston in 2010. So there are more than enough killings to go around.

James begins his story with the build-up to Marley’s shooting and ends with the burning of Josey Wales, the character corresponding to Lester Coke, with a Dudus-like figure ready in the wings. (A sequel was projected early on, but I wouldn’t be surprised if it got slowed down by James’s work on a script for HBO, which bought the screen rights to the novel in April.) He has no trouble constructing a plausible narrative connecting the attack to many aspects of Jamaican history, and in outline his plot sticks closely, especially in its opening stages, to the facts and testimony and rumours gathered up by Timothy White, an American music journalist who periodically updated his 1983 biography of Marley, Catch a Fire, until his death in 2002. The characters are all freely imagined even when they’re filling the roles of real people, with the exception of Marley, who’s seen only through the eyes of a range of first-person narrators, and whose stage time is judiciously rationed. He’s referred to throughout as ‘the Singer’, though James doesn’t tie himself in knots for the sake of consistency: a character called Alex Pierce, a writer for Rolling Stone whose research seems to be a fantasticated version of White’s, urges himself at one point to ‘head back to Marley’s house’.

Marley isn’t left blank, exactly: we hear quite a lot about his under-the-table philanthropy, his physical beauty, his politico-religious worldview, and about the sniffiness with which he was viewed by the small, determinedly self-improving black middle class, which wasn’t at first thrilled by the outside world’s interest in some ‘damn nasty Rasta’, all ‘ganja smell and frowsy arm’, as an angry mother puts it. Other characters do impressions of foreign music-business types – ‘You reggae dudes are far out, man, got any gawn-ja?’ – or fulminate about Eric Clapton, who drunkenly shared his views on ‘wogs’ and ‘fucking Jamaicans’ with an audience in Birmingham in August 1976, two years after he had his first American number one with a cover of Marley’s ‘I Shot the Sheriff’. (‘He think naigger boy never going read the Melody Maker.’) But animating a pre-mythic Marley, ‘outside of him being in every frat boy’s dorm room’, as James put it in an interview last year, isn’t the first order of business. ‘The people around him, the ones who come and go,’ Alex the journalist muses, ‘might actually provide a bigger picture than me asking him why he smokes ganja. Damn if I’m not fooling myself I’m Gay Talese again.’

‘The ones who come and go’, in James’s telling, include a young woman called Nina Burgess, who’s had a one night stand with Marley; Barry Diflorio, a CIA man; and Alex. The rest are gangsters, and the bigger picture they open up is a view from the ground of the working relationship between organised crime and Jamaican parliamentary politics. Marley’s shooting is a good device for getting at that, because no one seriously disputes that it was triggered by the 1976 election campaign, then the most violent in the country’s history, contested by two sons of the light-skinned post-independence elite: Michael Manley, the leader of the social democratic People’s National Party, and Edward Seaga, the leader of the conservative Jamaica Labour Party. The Jamaican system of ‘garrisons’ – social housing estates, usually built over bulldozed shantytowns, run by ‘dons’ on behalf of one or other of the parties – was up and running by the 1970s, with Tivoli Gardens, a pet project of Seaga’s and his electoral power base, as exhibit A. The novel reimagines it as ‘Copenhagen City’, perhaps to emphasise the contrast between the name’s promise of Scandinavian sleekness and the reality of votes delivered by armed enforcers.

Marley wasn’t faking it when he sang about his memories of a similarly downtrodden ‘government yard’, and didn’t need instruction on the dons’ multiple roles as providers of stuff the state wasn’t supplying, such as arbitration and policing of sorts, on top of their function as political goons and in workaday criminal enterprises. After he’d become a national celebrity in the 1960s, he sometimes played host to Claudie Massop, the JLP gang boss of Tivoli Gardens, whom he’d known as a child. Massop’s counterpart in the novel is called Papa-Lo. James casts him as an enforcer of the old school, still capable of murdering a schoolboy when necessary but sick at heart and out of his depth in an increasingly vicious electoral struggle. Papa-Lo’s younger ally, who calls himself Josey Wales after the Clint Eastwood character (Lester Coke himself operated as ‘Jim Brown’ in tribute to the only African-American star of The Dirty Dozen), is better adapted to the shifting state of affairs. Josey is made to seem dangerous not so much because he’s irretrievably damaged by previous rounds of slum clearance, gang warfare and police brutality – so is everyone around him – as because he’s attuned to goings-on in the wider world.

The opportunities Josey sees come from the external pressures that made the 1976 election, in the eyes of many participants, a Cold War proxy conflict. Manley’s PNP government, in power since 1972, had annoyed the bauxite companies, Washington and large swathes of local elite opinion with its leftish reforms and friendliness to Cuba. Manley blamed a rise in political shootouts and some of the country’s economic setbacks on a covert destabilisation campaign, and the Americans were widely understood – thanks partly to the writings of Philip Agee, a CIA whistleblower – to be shipping arms and money to Seaga’s JLP. Seaga’s supporters countered by putting it about that Castro was training the other side’s gunmen, and portrayed the sweeping police powers introduced by Manley’s government as a step towards a one-party state. Either way, no one was badly off for guns and grievances when Manley offered himself for re-election. ‘The world,’ Papa-Lo says, ‘now feeling like the seven seals breaking one after the other. Hataclaps’ – from ‘apocalypse’ – ‘in the air.’

Marley dropped a hint about his stance towards all this in one of the less cryptic lines on Rastaman Vibration, released eight months before the election: ‘Rasta don’t work for no CIA.’ Formal politics, he felt, belonged to Babylon, the modern materialist society, and he tried to keep his distance from it. But he was suspected, with some reason, of supporting the PNP. Both party leaders took an interest in the kinds of constituency Marley spoke for, and kept an ear to the ground when it came to popular culture. Seaga, early on in his career, had produced a few ska recordings in West Kingston, some of them featuring Marley’s mentor Joe Higgs. Manley, not to be outdone, had visited Ethiopia and returned with – in White’s words – ‘an elaborate miniature walking stick’, a gift from Haile Selassie, to show Rasta voters. Back in 1971 he had also pressed Marley into joining an explicitly PNP-oriented Carnival of Stars tour to warm up his first campaign. And in 1976 his people issued Marley with a pressing invitation to play a free concert in the name of national unity. It was to take place shortly before the election with an eye to overshadowing a JLP campaign event, and it’s what Marley was rehearsing for when, two days before the concert, the shooters arrived.
by Christopher Tayler, LRB |  Read more:
Image: Jonathan Player/Shutterstock via Rolling Stone
[ed. Netflix apparently has a new "docuseries" out about the 1976 attempted assassination of Bob Marley - Who Shot the Sheriff? (the subject of Marlon James’ Booker Prize-winning novel A Brief History of Seven Killings... one of the most violent novels I’ve read since Cormac McCarthy’s Blood Meridian or Bolaño’s 2666). A tough read.]

“Bohemian Rhapsody” Is the Least Orgiastic Rock Bio-Pic

Extra teeth. That was the secret of Freddie Mercury, or, at any rate, of the singular sound he made. In “Bohemian Rhapsody,” a new bio-pic about him, Mercury (Rami Malek) reveals all: “I was born with four more incisors. More space in my mouth, and more range.” Basically, he’s walking around with an opera house in his head. That explains the diva-like throb of his singing, and we are left to ponder the other crowd-wooing rockers of his generation; do they, too, rely upon oral eccentricity? Is it true that Rod Stewart’s vocal cords are lined with cinders, and that Mick Jagger has a red carpet instead of a tongue? What happens inside Elton John’s mouth, Lord knows, although “Rocketman,” next year’s bio-pic about him, will presumably spill the beans.

“Bohemian Rhapsody” starts with the Live Aid concert, in 1985. That was the talent-heavy occasion on which Queen, fronted by Mercury, took complete command of Wembley Stadium and, it is generally agreed, destroyed the competition. We then flip back to 1970, and to the younger Freddie—born Farrokh Bulsara, in Zanzibar, and educated partly at a boarding school in India, but now dwelling in the London suburbs. This being a rock movie, his parents are required to be conservative and stiff, and he is required to vex them by going out at night to see bands.

If the film is to be trusted (and one instinctively feels that it isn’t), the birth of Queen was smooth and unproblematic. Mercury approaches two musicians, Roger Taylor (Ben Hardy) and Brian May (Gwilym Lee), in a parking lot, having enjoyed their gig; learns that their group’s lead singer has defected; and, then and there, launches into an impromptu audition for the job. Bingo! The resulting lineup, now graced with John Deacon (Joseph Mazzello) on bass, lets rip onstage, with Freddie tearing the microphone from its base to create the long-handled-lollipop look that will stay with him forever. Queen already sounds like Queen, and, before you know it, the boys have a manager, a contract, an album, and a cascade of wealth. It’s that easy. As for their first global tour, it is illustrated by the names of cities flashing up on the screen—“Tokyo,” “Rio,” and so forth, in one of those excitable montages which were starting to seem old-fashioned by 1940.

As a film, “Bohemian Rhapsody” is all over the place. So is “Bohemian Rhapsody” as a song, yet somehow, by dint of shameless alchemy and professional stamina, it coheres; the movie shows poor Roger Taylor doing take after take of the dreaded “Galileo!” shrieks, bravely risking a falsetto-related injury in the cause of art. Anyone hoping to be let in on Queen’s trade secrets will feel frustrated, although I liked the coins that rattled and bounced on the skin of Taylor’s drum, and it’s good to watch Deacon noodle a new bass riff—for “Another One Bites the Dust”—purely to stop the other band members squabbling. The later sections of the story, dealing with Mercury’s AIDS diagnosis, are carefully handled, but most of the film is stuffed with lumps of cheesy rock-speak (“We’re just not thinking big enough”; “I won’t compromise my vision”), and gives off the delicious aroma of parody. When Mercury tries out the plangent “Love of My Life” on the piano, it’s impossible not to recall the great Nigel Tufnel, in “This Is Spinal Tap” (1984), playing something similar in D minor, “the saddest of all keys,” and adding that it’s called “Lick My Love Pump.”

The funniest thing about the new film is that its creation was clearly more rocklike than anything to be found in the end product. Bryan Singer, who is credited as the director, was fired from the production last year and replaced by Dexter Fletcher, although some scenes appear to have been directed by no one at all, or perhaps by a pizza delivery guy who strayed onto the set. The lead role was originally assigned to Sacha Baron Cohen (a performance of which we can but dream), although Malek, mixing shyness with muscularity, and sporting a set of false teeth that would make Bela Lugosi climb back into his casket, spares nothing in his devotion to the Mercurial. The character’s carnal wants, by all accounts prodigious, are reduced to the pinching of a waiter’s backside, plus the laughable glance that Freddie receives from a bearded American truck driver at a gas station as he enters the bathroom. With its PG-13 rating, and its solemn statements of faith in the band as a family, “Bohemian Rhapsody” may be the least orgiastic tribute ever paid to the world of rock. Is this the real life? Nope. Is this just fantasy? Not entirely, for the climax, quite rightly, returns us to Live Aid—to a majestic restaging of Queen’s contribution, with Malek displaying his perfect peacock strut in front of the mob. If only for twenty minutes, Freddie Mercury is the champion of the world.

by Anthony Lane, New Yorker |  Read more:
Image: Zohar Lazar 

Friday, October 26, 2018

The Great Risk Shift

To many economic commentators, insecurity first reared its ugly head in the wake of the financial crisis of the late-2000s. Yet the roots of the current situation run much deeper. For at least 40 years, economic risk has been shifting from the broad shoulders of government and corporations onto the backs of American workers and their families.

This sea change has occurred in nearly every area of Americans’ finances: their jobs, their health care, their retirement pensions, their homes and savings, their investments in education and training, their strategies for balancing work and family. And it has affected Americans from all demographic groups and across the income spectrum, from the bottom of the economic ladder almost to its highest rungs.

I call this transformation “The Great Risk Shift” — the title of a book I wrote in the mid-2000s, which I’ve recently updated for a second edition. My goal in writing the book was to highlight a long-term trend toward greater insecurity, one that began well before the 2008 financial crisis but has been greatly intensified by it.

I also wanted to make clear that the Great Risk Shift wasn’t a natural occurrence — a financial hurricane beyond human control. It was the result of deliberate policy choices by political and corporate leaders, beginning in the late 1970s and accelerating in the 1980s and 1990s. These choices shredded America’s unique social contract, with its unparalleled reliance on private workplace benefits. They also left existing programs of economic protection more and more threadbare, penurious and outdated — and hence increasingly incapable of filling the resulting void.

To understand the change, we must first understand what is changing. Unique among rich democracies, the United States fostered a social contract based on stable long-term employment and widespread provision of private workplace benefits. As the figure below shows, our government framework of social protection is indeed smaller than those found in other rich countries. Yet when we take into account private health and retirement benefits — mostly voluntary, but highly subsidized through the tax code — we have an overall system that is actually larger in size than that of most other rich countries. The difference is that our system is distinctively private.


This framework, however, is coming undone. The unions that once negotiated and defended private benefits have lost tremendous ground. Partly for this reason, employers no longer wish to shoulder the burdens they took on during more stable economic times. In an age of shorter job tenure and contingent work, as Monica Potts will describe in her forthcoming contribution to this series, employers also no longer highly value the long-term commitments to workers that these arrangements reflected and fostered.

Of course, policymakers could have responded to these changes by shoring up existing programs of economic security. Yet at the same time as the corporate world was turning away from an older model of employment, the political world was turning away from a longstanding approach to insecurity known as “social insurance.” The premise of social insurance is that widespread economic risks can be dealt with effectively only through institutions that spread their costs across rich and poor, healthy and sick, able-bodied and disabled, young and old.

Social insurance works like any other insurance program: We pay in — in this case, through taxes — and, in return, are offered a greater degree of protection against life’s risks. The idea is most associated with FDR, but, from the 1930s well into the 1970s, it was promoted by private insurance companies and unionized corporations, too. During this era of rising economic security, both public and private policymakers assumed that a dynamic capitalist economy required a basic foundation of protection against economic risks.

That changed during the economic and political turmoil of the late 1970s. With the economy becoming markedly more unequal and conservatives gaining political ground, many policy elites began to emphasize a different credo — one premised on the belief that social insurance was too costly and inefficient and that individuals should be given “more skin in the game” so they could manage and minimize risks on their own. Politicians began to call for greater “personal responsibility,” a dog whistle that would continue to sound for decades.

Instead of guaranteed pensions, these policymakers argued, workers should have tax-favored retirement accounts. Instead of generous health coverage, they should have high-deductible health plans. Instead of subsidized child care or paid family leave, they should receive tax breaks to arrange for family needs on their own. Instead of pooling risks, in short, companies and government should offload them.

The transformation of America’s retirement system tells the story in miniature. Thirty years ago, most workers at larger firms received a guaranteed pension that was protected from market risk. These plans built on Social Security, then at its peak. Today, such “defined-benefit” pensions are largely a thing of the past. Instead, private workers lucky enough to get a pension receive “defined-contribution” plans such as 401(k)s — tax-favored retirement accounts, first authorized in the early 1980s, that require no employer contributions and provide no guaranteed benefits. Meanwhile, Social Security has gradually declined as a source of secure retirement income for workers even as private guaranteed retirement income has been in retreat.

The results have not been pretty. We will not be able to assess the full extent of the change until today’s youngest workers retire. But according to researchers at Boston College, the share of working-age households at risk of being financially unprepared for retirement at age 65 has jumped from 31 percent in 1983 to more than 53 percent in 2010. In other words, more than half of working-age households are on track to reach retirement without enough savings to maintain their standard of living in old age.

Guaranteed pensions have not been the only casualty of the Great Risk Shift. At the same time as employers have raced away from safeguarding retirement security, health insurance has become much less common in the workplace, even for college-educated workers. Indeed, coverage has risen in recent years only because more people have become eligible for Medicare and Medicaid and for subsidized plans outside the workplace under the Affordable Care Act. As late as the early 1980s, 80 percent of recent college graduates had health insurance through their job; by the late 2000s, the share had fallen to around 60 percent. And, of course, the drop has been far greater for less educated workers.

In sum, corporate retrenchment has come together with government inaction — and sometimes government retrenchment — to produce a massive transfer of economic risk from broad structures of insurance onto the fragile balance sheets of American households. Rather than enjoying the protections of insurance that pools risk broadly, Americans are increasingly facing economic risks on their own, and often at their peril.

The erosion of America’s distinctive framework of economic protection might be less worrisome if work and family were stable sources of security themselves. Unfortunately, they are not. The job market has grown more uncertain and risky, especially for those who were once best protected from its vagaries. Workers and their families now invest more in education to earn a middle-class living. Yet in today’s postindustrial economy, these costly investments are no guarantee of a high, stable, or upward-sloping earnings path. [ed. See also: A Follow-Up on the Reasons for Prime Age Labor Force Non-Participation]

Meanwhile, the family, a sphere that was once seen as solely a refuge from economic risk, has increasingly become a source of risk in its own right. Although median wages have essentially remained flat over the last generation, middle-income families have seen stronger income growth, with their real median incomes rising around 13 percent between 1979 and 2013. Yet this seemingly hopeful statistic masks the reality that the whole of this rise comes from women working many more hours outside the home than they once did. Indeed, without the increased work hours and pay of women, middle-class incomes would have fallen between 1979 and 2013.

by Jacob S. Hacker, TPM |  Read more:
Image: Christine Frapech/TPM

Unfair Advantage

Every year Americans make more and more purchases online, many of them at Amazon.com. What shoppers don’t see when browsing the selections at Amazon are the many ways the online store is transforming the economy. Our country is losing small businesses. Jobs are becoming increasingly insecure. Inequality is rising. And Amazon plays a key role in all of these trends.

Stacy Mitchell believes Amazon is creating a new type of monopoly. She says its founder and CEO, Jeff Bezos, doesn’t want Amazon to merely dominate the market; he wants it to become the market.

Amazon is already the world’s largest online retailer, drawing so much consumer Web traffic that many other retailers can compete only by becoming “Amazon third-party sellers” and doing business through their competitor. It’s a bit like the way downtown shops once had to move to the mall to survive — except in this case Amazon owns the mall, monitors the other businesses’ transactions, and controls what shoppers see.

From early in her career Mitchell has focused on retail monopolies. During the 2000s she researched the predatory practices and negative impacts of big-box stores such as Walmart. Her 2006 book, Big-Box Swindle: The True Cost of Mega-Retailers and the Fight for America’s Independent Businesses, documented the threat these supersized chains pose to independent local businesses and community well-being. (stacymitchell.com)

Now Amazon is threatening to overtake Walmart as the biggest retailer in the world. Mitchell says she occasionally shops at Amazon herself, when there’s something she can’t find locally, but this hasn’t stopped her from being a vocal critic of the way the company uses its monopoly power to stifle competition. She’s among a growing number of advocates who are calling for more vigorous enforcement of antitrust laws.
(...)

Frisch: Many consumers welcome Amazon as a wonderful innovation that makes shopping more convenient, but you say the corporation has a “stranglehold” on commerce. Why?

Mitchell: Without many of us noticing, Amazon has become one of the most powerful corporations in the U.S. It is common to talk about Amazon as though it were a retailer, and it certainly sells a lot of goods — more books than any other retailer online or off, and it will soon be the top seller of clothing, toys, and electronics. One of every two dollars Americans spend online now goes to Amazon. But to think of Amazon as a retailer is to miss the true nature of this company.

Amazon wants to control the underlying infrastructure of commerce. It’s becoming the place where many online shoppers go first. Even just a couple of years ago, most of us, when we wanted to buy something online, would type the desired product into a search engine. We might search for New Balance sneakers, for example, and get multiple results: sporting-goods stores, shoe stores, and, of course, Amazon. Today more than half of shoppers are skipping Google and going directly to Amazon to search for a product. This means that other companies, if they want access to those consumers, have to become sellers on Amazon. We’re moving toward a future in which buyers and sellers no longer meet in an open public market, but rather in a private arena that Amazon controls.

From this commanding position Amazon is extending its reach in many directions. It’s building out its shipping and package-delivery infrastructure, in a bid to supplant UPS and the U.S. Postal Service. Its Web-services division powers much of the Internet and handles data storage for entities ranging from Netflix to the CIA. Amazon is producing hit television shows and movies, publishing books, and manufacturing a growing share of the goods it sells. It’s making forays into healthcare and finance. And with the purchase of Whole Foods, it’s beginning to extend its online presence into the physical world. (...)

Frisch: We hear a lot about the power of “disruptive” ideas and technologies to transform our society. Amazon seems like the epitome of a disrupter.

Mitchell: Because Amazon grew alongside the Internet, it’s easy to imagine that the innovations and conveniences of online shopping are wedded to it. They aren’t. Jeff Bezos would prefer that we believe Amazon’s dominance is the inevitable result of innovation, and that to challenge the company’s power would mean giving up the benefits of the Internet revolution. But history tells us that when monopolies are broken up, there’s often a surge of innovation in their wake.

Frisch: You don’t think e-commerce in itself is a problem?

Mitchell: No. There’s no reason why making purchases through the Internet is inherently destructive. I do think a world without local businesses would be a bad idea, because in-person, face-to-face shopping generates significant social and civic benefits for a community. But lots of independent retailers have robust e-commerce sites, including my local bookstore, hardware store, and several clothing retailers. Being online gives customers another way to buy from them. We can even imagine a situation in which many small businesses might sell their wares on a single website to create a full-service marketplace. It wouldn’t be a problem as long as the rules that govern that website are fair, the retailers are treated equally, and power isn’t abused.

Frisch: But that’s not the case with Amazon?

Mitchell: No. As search traffic migrates to Amazon, independent businesses face a Faustian bargain: Do they continue to hang their shingle on a road that is increasingly less traveled or do they become an Amazon seller? It’s no easy decision, because once you become a third-party seller, 15 percent of your revenue typically goes to Amazon — more if you use their warehouse and fulfillment services. Amazon also uses the data that it gleans from monitoring your sales to compete against you by offering the same items. And it owns the customer relationship, particularly if you use Amazon’s fulfillment services — meaning you store your goods in its warehouses and pay it to handle the shipping. In that case, you cannot communicate with your customer except through Amazon’s system, and Amazon monitors those communications. If you go out of bounds, it can suspend you as a seller.
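[ed. To make the arithmetic of that bargain concrete, here is a rough, illustrative sketch of how the referral fee Mitchell cites, plus an assumed per-unit fulfillment charge, eats into a seller’s margin. The 15 percent figure comes from the interview; the item price, product cost, and fulfillment fee below are invented numbers for illustration only.]

```python
# Illustrative only: rough margin math for a hypothetical third-party seller.
# The 15% referral fee is the figure cited in the interview; every other number
# is an assumption, not data about any real seller or the marketplace's actual fees.

def seller_margin(price, product_cost, referral_rate=0.15, fulfillment_fee=0.0):
    """Return (profit, margin) for one unit sold through the marketplace."""
    referral_fee = price * referral_rate  # the marketplace's cut of the sale price
    profit = price - product_cost - referral_fee - fulfillment_fee
    return profit, profit / price

# Hypothetical item: retails for $25, costs $15 to source.
print(seller_margin(25.00, 15.00))                        # referral fee only
print(seller_margin(25.00, 15.00, fulfillment_fee=4.00))  # with an assumed fulfillment charge
```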

Frisch: What’s out of bounds? Let’s say a customer wants to know which product would be better, A or B. Can a seller tell them?

Mitchell: You’re allowed to respond to that question, but if, in the process of responding, you violate Amazon’s rules, you can be suspended from Amazon and see your livelihood disappear. An example of this is a small company that made custom-designed urns for ashes.

Frisch: For people who’ve been cremated?

Mitchell: Yes. They sold these urns through their website and also through Amazon. A customer contacted the urn maker through Amazon to ask about engraving. The company responded truthfully that there was no way to place an order for engraving through Amazon, but it could be done through the company’s website. Within twenty-four hours the urn maker got slapped down by Amazon. The rules for third-party sellers say you can never give a customer a URL, because Amazon does not want that customer going anywhere else — even in a case where Amazon can’t provide what the customer wants.

An independent retailer’s most valuable assets are its knowledge of products and ability to spot trends. Once you become a seller on Amazon, you forfeit your expertise to them. They use your sales figures to spot the latest trends. Researchers at Harvard Business School have found that when you start selling through Amazon, within a short time Amazon will have figured out what your most popular items are and begun selling them itself. Amazon is now producing thousands of products, from batteries to blouses, under its own brands. It’s copying what other companies are selling and then giving its own products top billing in its search results. For example, a company called Rain Design in San Francisco made a popular laptop stand and built a business selling it through Amazon. A couple of years ago Rain Design found that Amazon had introduced a nearly identical product. The only difference was that the company’s raindrop logo had been swapped for Amazon’s smiling arrow. (...)

Frisch: You’ve characterized Amazon as a throwback to the age of the robber barons. How so?

Mitchell: The robber barons were nineteenth-century industrialists who dominated industries like oil and steel. During the Gilded Age, toward the end of the nineteenth century, these industrialists gained control of a technology that was opening up a new way of doing business: the railroad. They used their command of the rails to disadvantage their competitors. John D. Rockefeller, who ran Standard Oil, for example, conspired with the railroad magnate Cornelius Vanderbilt to charge competing oil companies huge sums to ship their product by rail. The first antitrust laws were written in response to industrialists’ attempts to control access to the market.

It’s striking how similar this history is to what Amazon has done: a new technology comes along that gives people a novel way to bring their wares to market, but a single company gains control over it and uses that power to undermine competitors and create a monopoly.

Amazon sells nearly half of all print books and has more than 80 percent of the e-book market. That’s enough to make it a gatekeeper: if Amazon suppresses a book in its search results or removes the book’s BUY button, as it has done during disputes with certain publishers, it causes that book’s sales to plummet. That is a monopoly.

Frisch: When did the Gilded Age monopolies get broken up?

Mitchell: A turning point came in the 1930s, during Franklin D. Roosevelt’s second term as president. Roosevelt concluded that corporate concentration was impeding the economy by closing off opportunity and slowing job and wage growth. So he set about dusting off the nation’s antitrust policies and using them to go after monopolies. This aggressive approach lasted for decades. Republican and Democratic presidents alike talked about the importance of fighting monopolies.

Then in the 1970s a group of legal and economic scholars, led by Robert Bork, argued that corporate consolidation should be allowed to go unchecked as long as consumer prices stayed low. The Reagan administration embraced this view. Under Reagan the antitrust laws were left intact, but the way the antitrust agencies interpreted and enforced them was radically altered. Antitrust policy was stripped of its original purpose and power. Subsequent administrations, including Democratic ones, followed suit.

All of the concerns that used to drive antitrust enforcement have collapsed into a single concern: low prices. But we aren’t just consumers. We’re workers who need to earn a living. We’re small businesspeople. We’re innovators and inventors. As the economy has grown more consolidated, with fewer and fewer companies dominating just about every industry, one consequence is lower wages. Economic consolidation means workers have fewer options for employment. This appears to be a big reason why wages have been stagnant now for decades. We should also remember that our antitrust laws, at their heart, are about protecting democracy. Amazon shouldn’t be allowed to decide which books succeed or fail, which companies are allowed to compete. (...)

Frisch: Before you took on Amazon, you helped galvanize community opposition to Walmart. Why should people be against the big-box retailer coming to their town?

Mitchell: Walmart’s pitch to communities is always that it will offer low prices and create jobs and tax revenue. Particularly for smaller communities, this seems like a great deal. But an overwhelming majority of research has found that Walmart is much more of an extractive force. Poverty actually rises in places where Walmart opens a store.

Independent businesses, on the other hand, help communities thrive, because they buy many goods and services locally. When a small business needs an accountant, it’s likely to hire someone nearby. When it needs a website, it hires a local web designer. It banks at the local bank and advertises on the local radio station. It also tends to carry more local and regional products. An independent bookstore, for example, might feature local authors prominently.

Economic relationships often involve other types of relationships, too. When you shop at a small business, you’re dealing with your neighbors. You’re buying from someone whose kids go to school with your kids. That matters for the health of communities.

When Walmart comes in, it systematically wipes out a lot of those relationships. Instead of circulating locally, most dollars spent at the Walmart store leave the community. You’re left with fewer jobs than you had to start with, and they’re low-wage positions.

by Tracy Frisch and Stacy Mitchell, The Sun |  Read more:
Image: uncredited

Tech to Blame for Ever-Growing Repair Costs

It's hard to remove a part from a new car without coming across a wire attached to it. As tech grows to occupy every spare corner of the car, many buyers might not realize that all that whiz-bang stuff is going to make collision repair an absolute bear.

Even seemingly minor damage to a vehicle’s front end can incur repair costs nearing $3,000, according to new research from AAA. The study looked at three solid sellers in multiple vehicle segments, including a small SUV, a midsize sedan and a pickup truck, and estimated repair costs using original-equipment list prices and an established average for technician labor rates.

Let's use AAA's examples for some relatable horror stories. Mess up your rear bumper? Well, if you have ultrasonic parking sensors or radar back there, it could cost anywhere from $500 to $2,000 to fix. Knock off a side mirror equipped with a camera as part of a surround-view system? $500 to $1,100. (...)

AAA wasn’t the first group to realize how nuts these costs can get. On a recent episode of Autoline, the CEO of a nonprofit focused on collision-repair education pointed out that a front-corner collision repair on a Kia K900 could cost as much as $34,000. Sure, it’s a low-production luxury sedan, but is anyone truly ready to drop $34,000 on a car that starts around $50,000?

by Andrew Krok, CNET |  Read more:
Image: AAA

Thursday, October 25, 2018


David Michael Bowers, State of the nation
via:

Nominating Oneself for the Short End of a Tradeoff

I’ve gotten a chance to discuss The Whole City Is Center with a few people now. They remain skeptical of the idea that anyone could “deserve” to have bad things happen to them, based on their personality traits or misdeeds.

These people tend to imagine the pro-desert faction as going around, actively hoping that lazy people (or criminals, or whoever) suffer. I don’t know if this passes an Intellectual Turing Test. When I think of people deserving bad things, I think of them having nominated themselves to get the short end of a tradeoff.

Let me give three examples:

1. Imagine an antidepressant that works better than existing antidepressants, one that consistently provides depressed people real relief. Taken as prescribed, it has few side effects and people do well. If ground up, snorted, and taken at ten times the prescribed dose – something nobody could do by accident, something you have to really be trying to get wrong – it acts as a passable heroin substitute, you can get addicted to it, and it will ruin your life.

The antidepressant is popular and gets prescribed a lot, but a black market springs up, and however hard the government works to control it, a lot of it gets diverted to abusers. Many people get addicted to it and their lives are ruined. So the government bans the antidepressant, and everyone has to go back to using SSRIs instead.

Let’s suppose the government is being good utilitarians here: they calculated out the benefit from the drug treating people’s depression, and the cost from the drug being abused, and they correctly determined the costs outweighed the benefits.
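[ed. A minimal sketch of the utilitarian bookkeeping described here: tally the benefit to depressed patients against the harm to abusers and compare the totals. Every number below is an invented placeholder meant only to show the shape of the calculation, not to estimate anything real.]

```python
# Toy expected-utility comparison for the ban-or-allow decision in example 1.
# All figures are invented placeholders that reproduce the structure of the
# argument (benefit to patients vs. harm to abusers), not empirical estimates.

patients_helped     = 100_000  # depressed people who benefit if the drug stays available
benefit_per_patient = 1.0      # utility gained per treated patient (arbitrary units)

abusers         = 20_000       # people who misuse the drug if it stays available
harm_per_abuser = 6.0          # utility lost per ruined life (arbitrary units)

utility_if_allowed = patients_helped * benefit_per_patient - abusers * harm_per_abuser
utility_if_banned  = 0.0       # everyone goes back to SSRIs; treat that as the baseline

print("allow:", utility_if_allowed)  # -20000.0 with these placeholders
print("ban:  ", utility_if_banned)   # 0.0, so the calculus favors the ban, as in the example
```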

But let’s also suppose that nobody abuses the drug by accident. The difference between proper use and abuse is not subtle. Everybody who knows enough to know anything about the drug at all has heard the warnings. Nobody decides to take ten times the recommended dose of antidepressant, crush it, and snort it, through an innocent mistake. And nobody has just never heard the warnings that drugs are bad and can ruin your life.

Somebody is going to get the short end of the stick. If the drug is banned, depressed people will lose access to relief for their condition. If the drug is permitted, recreational users will continue having the opportunity to destroy their lives. And we’ve posited that the utilitarian calculus says that banning the antidepressant would be better. But I still feel, in some way, that the recreational users have nominated themselves to get the worse end of this tradeoff. Depressed people shouldn’t have to suffer because you see a drug that says very clearly on the bottle “DO NOT TAKE TOO MUCH OF THIS YOU WILL GET ADDICTED AND IT WILL BE TERRIBLE” and you think “I think I shall take too much of this”.

(this story is loosely based on the history of tianeptine in the US)

2. Suppose you’re in a community where some guy is sexually harassing women. You tell him not to and he keeps doing it, because that’s just the kind of guy he is, and it’s unclear if he can even stop himself. Eventually he does it so much that you kick him out of the community.

Then one of his friends comes to you and says “This guy harassed one woman per month, and not even that severely. On the other hand, kicking him out of the community costs him all of his friends, his support network, his living situation, and his job. He is a pretty screwed-up person and it’s unclear he will ever find more friends or another community. The cost to him of not being in this community is actually greater than the cost of being harassed is to a woman.”

Somebody is going to have their lives made worse. Either the harasser’s life will be worse because he’s kicked out of the community. Or women’s lives are worse because they are being harassed. Even if I completely believe the friend’s calculation that kicking him out will bring more harm on him than keeping him would bring harm to women, I am still comfortable letting him get the short end of the tradeoff.

And this is true even if we are good determinists and agree he only harasses somebody because of an impulse control problem secondary to an underdeveloped frontal lobe, or whatever the biological reason for harassing people might be.

(not going to bring up what this story is loosely based on, but it’s not completely hypothetical either)

3. Sometimes in discussions of basic income, someone expresses concern that some people’s lives might become less meaningful if they didn’t have a job to give them structure and purpose.

And I respond “Okay, so those people can work, basic income doesn’t prohibit you from working, it just means you don’t have to.”

And they object “But maybe these people will choose not to work even though work would make them happier, and they will just suffer and be miserable.”

Again, there’s a tradeoff. Somebody’s going to suffer. If we don’t grant basic income, it will be people stuck in horrible jobs with no other source of income. If we do grant basic income, it will be people who need work to have meaning in their lives, but still refuse to work. Since the latter group has a giant door saying “SOLUTION TO YOUR PROBLEMS” wide open in front of them but refuses to take it, I find myself sympathizing more with the former group. That’s true even if some utilitarian were to tell me that the latter group outnumbers them.

I find all three of these situations joining the increasingly numerous ranks of problems where my intuitions differ from utilitarianism. What should I do?

One option is to dismiss them as misfirings of the heuristic “expose people to the consequences of their actions so that they are incentivized to take the right action”. I’ve tried to avoid that escape by specifying in each example that even when they’re properly exposed and incentivized the calculus still comes out on the side of making the tradeoff in their favor. But maybe this is kind of like saying “Imagine you could silence this one incorrect person without any knock-on effects on free speech anywhere else and all the consequences would be positive, would you do it?” In the thought experiment, maybe yes; in the real world this either never happens, or never happens with 100% certainty, or never happens in a way that’s comfortably outside whatever Schelling fence you’ve built for yourself. I’m not sure I find that convincing because in real life we don’t treat “force people to bear the consequences of their actions” as a 100% sacred principle that we never violate.

Another option is to dismiss them as people “revealing their true preferences”, eg if the harasser doesn’t stop harassing women, he must not want to be in the community that much. But I think this operates on a really sketchy idea of revealed preference, similar to the Caplanian one where if you abuse drugs that just means you like drugs so there’s no problem. Most of these situations feel like times when that simplified version of preferences breaks down.

A friend reframes the second situation in terms of the cost of having law at all. It’s important to be able to make rules like “don’t sexually harass people”, and adding a clause saying “…but we’ll only enforce these when utilitarianism says it’s correct” makes them less credible and creates the opportunity for a lot of corruption. I can see this as a very strong answer to the second scenario (which might be the strongest), although I’m not sure it applies much to the first or third.

I could be convinced that my desire to let people who make bad choices nominate themselves for the short end of tradeoffs is just the utilitarian justifications (about it incentivizing behavior, or it revealing people’s true preferences) crystallized into a moral principle. I’m not sure if I hold this moral principle or not. I’m reluctant to accept the ban-antidepressant, tolerate-harasser, and repeal-basic-income solutions, but I’m also not sure what justification I have for not doing so except “Here’s a totally new moral principle I’m going to tack onto the side of my existing system”.

But I hope people at least find this a more sympathetic way of understanding when people talk about “desert” than a caricatured story where some people just need to suffer because they’re bad.

by Scott Alexander, Slate Star Codex |  Read more:
[ed. I don't know what Scott's been doing in psychiatry these days since moving to SF, but his blog has benefited greatly. See also: Cognitive Enhancers: Mechanisms and Tradeoffs.]