Wednesday, May 10, 2017

Johnny Depp: A Star in Crisis and the Insane Story of His "Missing" Millions

Early one afternoon in October 2012, Jake Bloom and Joel Mandel left their respective Beverly Hills offices, slipped into their luxury cars and embarked on the roughly 30-minute journey to the Hollywood Hills compound of their client, Johnny Depp. Bloom was a rumpled and graying lawyer whose disheveled style camouflaged an intellect exercised on behalf of such luminaries as Martin Scorsese and Sylvester Stallone. Mandel, then in his early 50s, was a tall, rather amiable accountant who favored loose-fitting jeans and looser-fitting shirts, sartorial code designed to assure his clients he was just another boy in their band as well as a top-flight business manager steeped in the arcana of arbitrage and amortization.

Both men had been close to Depp for years. Bloom, indeed, was such a confidant to the actor that he had even joined him for an induction ceremony into the Comanche nation when he played Tonto in The Lone Ranger; as for Mandel, he had accompanied Depp to his three-island property in the Bahamas, atolls Mandel had helped his client buy for a total of $5.35 million.

These men were part of Depp's inner circle, at least as far as any lawyer or accountant could belong to the inner circle of an artist this mercurial, one with a skull-and-crossbones tattoo on his leg and "Death is certain" scrawled beneath it, whose soul mates were such creative titans as Marlon Brando, Keith Richards and Hunter S. Thompson — the journalist whose ashes Depp fired from a cannon hauled to the top of a 153-foot tower, a tribute for which the actor says he paid $5 million.

Leaving their cars that day, the advisers approached one of Depp's five houses on a dead-end stretch of North Sweetzer Avenue. A modernist affair that was simply referred to as 1480, the building had been converted into a recording studio and was an appendage to an eight-bedroom, castle-like mansion once owned by music producer Berry Gordy. One of the star's two omnipresent assistants led the men in, past a painting that British artist Banksy had created for Depp, and into a den, where the actor was leaning back in a slightly battered chair, surrounded by dozens upon dozens of classic guitars.

After the obligatory small talk, the visitors got to the point: Depp's cash flow had reached a crisis point, they declared. Even though the star had become wildly wealthy (later, Mandel would claim Depp earned more than $650 million in the 13-plus years he had been represented by The Management Group, the company Mandel had started in 1987 with his brother Robert), there just wasn't enough liquid money to cover Depp's $2 million in monthly bills.

Without a fire sale, Depp — then arguably the biggest star in Hollywood and certainly one of the best paid, thanks to the Pirates of the Caribbean franchise — would never be able to meet his obligations. Not the payments on his portfolio of real estate around the world. Not the impulse purchases such as the three Leonor Fini paintings he had bought from a Manhattan gallery (the first two for $320,000, the third as a $245,000 gift for then-girlfriend Amber Heard). Not the $3.6 million he paid annually for his 40-person staff. Not the $350,000 he laid out each month to maintain his 156-foot yacht. And not the hundreds of thousands of dollars he paid to sustain his ex-partner, Vanessa Paradis, and their children, Lily-Rose and Jack.

by Stephen Galloway, Ashley Cullins, Hollywood Reporter | Read more:
Image: uncredited via:

Ogata Kenzan, Tea bowl with pines (ca. 1700)
via:

Tuesday, May 9, 2017

Bird calls
via:

How Homeownership Became the Engine of American Inequality

The son of a minister, Ohene Asare grew up poor. His family immigrated from Ghana when he was 8 and settled down in West Bridgewater, Mass., a town 30 miles south of Boston, where he was one of the few black students at the local public school. “It was us and this Jewish family,” Asare remembered. “It was a field day.” His white classmates bullied him, sometimes using racial slurs. His father transferred Asare when he was 14 to Milton Academy, which awarded Asare a scholarship that covered tuition and board. His parents still had to take out loans worth about $20,000 for his living expenses. But the academy set Asare up for future success. He and his wife, Régine Jean-Charles, whom he got to know at Milton, are in their late 30s. She is a tenured professor of romance languages and literature at Boston College, and Asare is a founder of Aesara, a consulting and technology company.

Two years ago, the couple bought a new home. Set on a half-acre lot that backs up to conservation land in Milton, Mass., the 2,350-square-foot split-level has four bedrooms, three bathrooms, an open-concept kitchen and dining area, a finished basement, hardwood floors and beautiful touches throughout, like the Tennessee marble fireplace and hearth. It cost $665,000. “This is the nicest house I’ve ever lived in,” Asare told me.

Asare and Jean-Charles have four children and earn roughly $290,000 a year, which puts them in the top 5 percent of household incomes in the country. After renting for the first years of their marriage, they participated in a home buyers’ program administered by the nonprofit Neighborhood Assistance Corporation of America. The program allowed Asare and Jean-Charles to purchase their first home in 2009 for $360,000 with a 10 percent down payment, half of what is typically required. In 2015, they sold it for $430,000. There is a reason so many Americans choose to develop their net worth through homeownership: It is a proven wealth builder and savings compeller. The average homeowner boasts a net worth ($195,400) that is 36 times that of the average renter ($5,400).

Asare serves on the advisory board for HomeStart, a nonprofit focused on ending and preventing homelessness. Like most organizations, HomeStart is made up of people at various rungs on the economic ladder. Asare sits near the top; his salary exceeds that of anyone on staff at the nonprofit he helps advise. When Crisaliz Diaz was a staff member at HomeStart, she was at the other end of the ladder. She earned $38,000 a year, putting her near the bottom third of American household incomes. A 26-year-old Latina with thick-rimmed glasses, Diaz rents a small two-bedroom apartment in Braintree, Mass., an outer suburb of Boston. Her two sons, Xzayvior and Mayson — Zay and May, she calls them — share a room plastered with Lego posters and Mickey Mouse stickers. Her apartment is spare and clean, with ceiling tiles you can push up and views of the parking lot and busy street.

When Diaz moved in four years ago, the rent was $1,195 a month, heat included, but her landlord has since raised the rent to $1,385 a month, which takes 44 percent of her paycheck. Even with child-support payments and side jobs, she still doesn’t bring in enough to pay her regular bills. She goes without a savings account and regularly relies on credit cards to buy toilet paper and soap. “There’s no stop to it,” she told me. “It’s just a consistent thing.”

Diaz receives no housing assistance. She has applied to several programs, but nothing has come through. The last time Boston accepted new applications for rental-assistance Section 8 vouchers was nine years ago, when for a few precious weeks you were allowed to place your name on a very long waiting list. Boston is not atypical in that way. In Los Angeles, the estimated wait time for a Section 8 voucher is 11 years. In Washington, the waiting list for housing vouchers is closed indefinitely, and over 40,000 people have applied for public housing alone. While many Americans assume that most poor families live in subsidized housing, the opposite is true; nationwide, only one in four households that qualifies for rental assistance receives it. Most are like Diaz, struggling without government help in the private rental market, where housing costs claim larger and larger chunks of their income.

Almost a decade removed from the foreclosure crisis that began in 2008, the nation is facing one of the worst affordable-housing shortages in generations. The standard of “affordable” housing is that which costs roughly 30 percent or less of a family’s income. Because of rising housing costs and stagnant wages, slightly more than half of all poor renting families in the country spend more than 50 percent of their income on housing costs, and at least one in four spends more than 70 percent. Yet America’s national housing policy gives affluent homeowners large benefits; middle-class homeowners, smaller benefits; and most renters, who are disproportionately poor, nothing. It is difficult to think of another social policy that more successfully multiplies America’s inequality in such a sweeping fashion.

Consider Asare and Diaz. As a homeowner, Asare benefits from tax breaks that Diaz does not, the biggest being the mortgage-interest deduction — or MID, in wonk-speak. All homeowners in America may deduct mortgage interest on their first and second homes. In 2015, Asare and Jean-Charles claimed $21,686 in home interest and other real estate deductions, which saved them $470 a month. That’s roughly 15 percent of Diaz’s monthly income. That same year, the federal government dedicated nearly $134 billion to homeowner subsidies. The MID accounted for the biggest chunk of the total, $71 billion, with real estate tax deductions, capital gains exclusions and other expenditures accounting for the rest. That number, $134 billion, was larger than the entire budgets of the Departments of Education, Justice and Energy combined for that year. It is a figure that exceeds half the entire gross domestic product of countries like Chile, New Zealand and Portugal.

Recently, Gary Cohn, the chief economic adviser to President Trump, heralded his boss’s first tax plan as a “once-in-a-generation opportunity to do something really big.” And indeed, Trump’s plan represents a radical transformation in how we will fund the government, with its biggest winners being corporations and wealthy families. But no one in his administration, and only a small (albeit growing) group of people in either party, is pushing to reform what may very well be the most regressive piece of social policy in America. Perhaps that’s because the mortgage-interest deduction overwhelmingly benefits the sorts of upper-middle-class voters who make up the donor base of both parties and who generally fail to acknowledge themselves to be beneficiaries of federal largess. “Today, as in the past,” writes the historian Molly Michelmore in her book “Tax and Spend,” “most of the recipients of federal aid are not the suspect ‘welfare queens’ of the popular imagination but rather middle-class homeowners, salaried professionals and retirees.” A 15-story public housing tower and a mortgaged suburban home are both government-subsidized, but only one looks (and feels) that way. It is only by recognizing this fact that we can begin to understand why there is so much poverty in the United States today.

When we think of entitlement programs, Social Security and Medicare immediately come to mind. But by any fair standard, the holy trinity of United States social policy should also include the mortgage-interest deduction — an enormous benefit that has also become politically untouchable.

The MID came into being in 1913, not to spur homeownership but simply as part of a general policy allowing businesses to deduct interest payments from loans. At that time, most Americans didn’t own their homes and only the rich paid income tax, so the effects of the mortgage deduction on the nation’s tax proceeds were fairly trivial. That began to change in the second half of the 20th century, though, because of two huge transformations in American life. First, income tax was converted from an elite tax to a mass tax: In 1932, the Bureau of Internal Revenue (precursor to the I.R.S.) processed fewer than two million individual tax returns, but 11 years later, it processed over 40 million. At the same time, the federal government began subsidizing homeownership through large-scale initiatives like the G.I. Bill and mortgage insurance. Homeownership grew rapidly in the postwar period, and so did the MID.

By the time policy makers realized how extravagant the MID had become, it was too late to do much about it without facing significant backlash. Millions of voters had begun to count on getting that money back. Even President Ronald Reagan, who oversaw drastic cuts to housing programs benefiting low-income Americans, let the MID be. Subsequent politicians followed suit, often eager to discuss reforms to Social Security and Medicare but reluctant to touch the MID, even as the program continued to grow more costly: By 2019, MID expenditures are expected to exceed $96 billion.

by Matthew Desmond, NY Times | Read more:
Image: Damon Casarez

“Well, there’s your problem right there—you need to sauté the onions in white wine before adding the ginger.”

Is the Gig Economy Working?

Not long ago, I moved apartments, and beneath the weight of work and lethargy a number of small, nagging tasks remained undone. Some art work had to be hung from wall moldings, using wire. In the bedroom, a round mirror needed mounting beside the door. Just about anything that called for careful measuring or stud-hammering I had failed to get around to—which was why my office walls were bare, no pots yet dangled from the dangly-pot thing in the kitchen, and my bedside shelf was still a doorstop. There are surely reasons that some of us resist being wholly settled, but when the ballast of incompletion grew too much for me I logged on to TaskRabbit to finish what I had failed to start.

On its Web site, I described the tasks I needed done, and clicked ahead. A list of fourteen TaskRabbits appeared, each with a description of skills and a photograph. Many of them wore ties. I examined one called Seth F., who had done almost a thousand tasks. He wore no tie, but he had a ninety-nine-per-cent approval rating. “I’m a smart guy with tools. What more can you want?” he’d written in his profile. He was listed as an Elite Tasker, and charged fifty-five dollars an hour. I booked him for a Wednesday afternoon.

TaskRabbit, which was founded in 2008, is one of several companies that, in the past few years, have collectively helped create a novel form of business. The model goes by many names—the sharing economy; the gig economy; the on-demand, peer, or platform economy—but the companies share certain premises. They typically have ratings-based marketplaces and in-app payment systems. They give workers the chance to earn money on their own schedules, rather than through professional accession. And they find toeholds in sclerotic industries. Beyond TaskRabbit, service platforms include Thumbtack, for professional projects; Postmates, for delivery; Handy, for housework; Dogvacay, for pets; and countless others. Home-sharing services, such as Airbnb and its upmarket cousin onefinestay, supplant hotels and agencies. Ride-hailing apps—Uber, Lyft, Juno—replace taxis. Some on-demand workers are part-timers seeking survival work, akin to the comedian who waits tables on the side. For growing numbers, though, gigging is not only a living but a life. Many observers see it as something more: the future of American work.

Seth F.—the “F” stood for Flicker—showed up at my apartment that Wednesday bearing a big backpack full of tools. He was in his mid-forties, with a broad mouth, brown hair, and ears that stuck out like a terrier’s beneath a charcoal stocking cap. I poured him coffee and showed him around.

“I have molding hooks and wire,” I said, gesturing with unfelt confidence at some coils of translucent cord. “I was thinking they could maybe hang . . .” It struck me that I lacked a vocabulary to address even the basics of the job; I swirled my hands around the middle of the wall, as if blindfolded and turned loose in a strange room.

Seth F. seemed to gather that he was dealing with a fool. He offered a decision tree pruned to its stump. “Do you want them at eye level?” he asked.

“Eye level sounds great,” I said.

Seth F. had worked for TaskRabbit for three years, he told me as he climbed onto my kitchen stool—“like twenty-one years in normal job time.” In college, he had sold a screenplay to Columbia Pictures, and the film, though never made, launched his career. He wrote movies for nine years, and was well paid and sought after, but none of his credited work made it to the big screen, so he took a job as a senior editor at Genre, a now defunct gay magazine, where he covered the entertainment industry. He liked magazine work, but was not a true believer. “I’m one of those people, I think, who has to change jobs frequently,” he told me. He got a master’s degree in education, and taught fourth grade at Spence and at Brooklyn Friends. Fourteen years in, a health condition flared up, leaving his calendar checkered with days when it was hard to work. He’d always found peculiar joy in putting together IKEA furniture, so he hired himself out as an assembly wiz: easy labor that paid the bills while he got better. He landed on TaskRabbit.

“There are so many clients, I rarely get bored,” he told me. He was feeding cord through the molding hooks to level my pictures. At first, he said, hourly rates at TaskRabbit were set through bidding, but taskers now set their own rates, with the company claiming thirty per cent. A constellation of data points—how quickly he answers messages, how many jobs he declines—affect his ranking when users search the site. He took as many jobs as he could, generating about eighty paid hours each month. “The hardest part is not knowing what your next paycheck is from,” he told me.

Seth F. worked quickly. Within an hour, he had hung six frames from the molding over my couches. Sometimes, he confessed, his jobs seem silly: he was once booked to screw in a light bulb. Other work is harder, and strange. Seth F. has been hired to assemble five jigsaw puzzles for a movie set, to write articles for a newspaper in Alaska, and to compose a best-man speech to be delivered by the brother of the groom, whom he had never met. (“The whole thing was about, ‘In the future, we’re going to get to know each other better,’ ” he explained.) Casper, the mattress company, booked him to put sheets on beds; Oscar, the health-insurance startup, had him decorate its offices for Christmas.

As we talked, his tone warmed. I realized that he probably visited strangers several times a day, meting out bits of himself, then moving on, often forever, and I considered what an odd path through professional experience that must be. He told me that he approached the work with gratitude but little hope.

“These are jobs that don’t lead to anything,” he said, without looking up from his work. “It doesn’t feel”—he weighed the word—“sustainable to me.”

by Nathan Heller, New Yorker | Read more:
Image: Janine Ilvonen

Monday, May 8, 2017

My So-Called (Instagram) Life

“You’re like a cartoon character,” he said. “Always wearing the same thing every day.”

He meant it as an intimate observation, the kind you can make only after spending a lot of time getting to know each other. You flip your hair to the right. You only eat ice cream out of mugs. You always wear a black leather jacket. I know you.

And he did know me. Rather, he knew the caricature of me that I had created and meticulously cultivated. The me I broadcast to the world on Instagram and Facebook. The witty, creative me, always detached and never cheesy or needy.

That version of me got her start online as my social media persona, but over time (and I suppose for the sake of consistency), she bled off the screen and overtook my real-life personality, too. And once you master what is essentially an onstage performance of yourself, it can be hard to break character.

There was a time when I allowed myself to be more than what could fit onto a 2-by-4-inch screen. When I wasn’t so self-conscious about how I was seen. When I embraced my contradictions and desires with less fear of embarrassment or rejection.

There was a time when I swore in front of my friends and said grace in front of my grandmother. When I wore lipstick after seeing “Clueless,” and sneakers after seeing “Remember the Titans.” When I flipped my hair every way, ate ice cream out of anything, and wore coats of all types and colors.

Since then, I have consolidated that variety — scrubbed it away, really — to emerge as one consistently cool girl: one face, two arms, one black leather jacket.

And so it was a validation of sorts when Joe fell for her, the me in the leather jacket. He was brilliant, the funniest guy in our TV writing program, and my ideal cool counterpart. I could already see us on screen; we made sense.

Best of all, he thought he liked me more than I liked him, and that was perfect too, because it gave me the upper hand. I was above love, above emotional complication, dedicated to higher pursuits.

Periodically Joe would confront me about this imbalance. We would meet at a park on Second Avenue and 10th Street, and he would tell me that I drove him crazy, that he couldn’t be as removed as me.

And, of course, the truth was that I wasn’t removed at all. Over the many months we were together, as we went from being friends to more than friends, I had fallen for him completely. The singular syllable of his name had started to feel permanently tucked between my molars and was always on my mind.

But I was reluctant to change my character midseason and become someone who was more open and, God forbid, earnest about love. He had fallen for the cool, detached me, so that’s who I remained. And he got bored.

That’s the way it goes with half-hour TV shows. Consistency can become boring. The will-they-or-won’t-they characters have to get together, and at that point the show is closing in on its finale. It’s all in the build, and when that becomes tired, the show gets canceled.

Like an allergic reaction to becoming unloved, my Instagram account went into overdrive, all aimed at one audience member: Joe. Through hundreds of screens, I was screaming at him: “I’m here! I’m funny! I’m at that fish taco place I showed you!”

The likes I got from my followers did little to quell my crushing need for Joe’s cyberapproval. “Like me again, like me again,” became my subconscious mantra.

But he didn’t like me, and each time he didn’t, the heartache felt like a warm bullet exploding in my gut. I would lie on the couch and clutch my stomach so tightly it was as if I were trying to expel the shrapnel from my throat. I knew no one else could extract it for me because no one knew it was there.

I was embarrassed for the people I saw who pined publicly on Instagram, but I also envied them. They were showered with support, with reassurance. If they were not completely cured, at least the illness seemed to run a shorter course.

Meanwhile, every time I twisted my spine, I felt that warm bullet scraping my insides. I was scared it might fossilize there and become permanently embedded.

In an effort to self-soothe, I wrote letters to Joe — actual, physical letters, pen to notepad — that felt like some ancient ritual, using my whole hand and not just my thumbs. Staring at his cowlick in class, I would write down everything I wanted: for him to critique my writing, to stroke my hair while we watched “Curb Your Enthusiasm” on his ugly futon, to read his plays and believe I was moving ever closer to his core.

Rather than give him any of these letters, I burned them, trying and failing to cremate that side of myself.

Day by day, hour by hour, my Instagram feed became more manic, nasty and petulant. Posts that were once meant as romantic gestures became tiny, pixelated middle fingers.

Joe began to notice, but instead of magically falling back in love with me, he became hurt and angry. I was inexplicably cold to him, posting photos of parties I threw that he wasn’t invited to, pictures of me abroad where I hadn’t told him I was studying, and pieces of art I made but hadn’t shared with him.

In return, he sent me messages of unvarnished honesty: “Why didn’t you invite me?” “Why are you being like this?”

Oh, it’s just who I am. I am fun, I feel nothing and I have completely forgotten you.

And so it went, and I kept at the beautiful box I was crafting for myself. A shoe box covered in stickers and fake jewels. The kind you would make for a pet parakeet you have to bury. I would dream about Joe at night, and in the morning I would post something silvery and eye catching. It was always just tinfoil, though, not truth. And I prayed no one would notice.

I posted a photo of me standing next to a shirt that said “The World Shook at Adam’s bar mitzvah, 1995,” with a witty caption about simpler times, before global warming. A girl who follows me, with whom I’ve spoken only a handful of times, told me it was so “on brand.”

My brand, specifically: funny, carefree, unromantic, a realist.

by Clara Dollar, NY Times |  Read more:
Image: Brian Rea
[ed. I hope this is parody.]

via:

Forbidden Questions?

24 Key Issues That Neither the Washington Elite Nor the Media Consider Worth Their Bother

Donald Trump's election has elicited impassioned affirmations of a renewed commitment to unvarnished truth-telling from the prestige media. The common theme: you know you can’t trust him, but trust us to keep dogging him on your behalf. The New York Times has even unveiled a portentous new promotional slogan: “The truth is now more important than ever.” For its part, the Washington Post grimly warns that “democracy dies in darkness,” and is offering itself as a source of illumination now that the rotund figure of the 45th president has produced the political equivalent of a total eclipse of the sun. Meanwhile, National Public Radio fundraising campaigns are sounding an increasingly panicky note: give, listener, lest you be personally responsible for the demise of the Republic that we are bravely fighting to save from extinction.

If only it were so. How wonderful it would be if President Trump’s ascendancy had coincided with a revival of hard-hitting, deep-dive, no-holds-barred American journalism. Alas, that’s hardly the case. True, the big media outlets are demonstrating both energy and enterprise in exposing the ineptitude, inconsistency, and dubious ethical standards, as well as outright lies and fake news, that are already emerging as Trump era signatures. That said, pointing out that the president has (again) uttered a falsehood, claimed credit for a nonexistent achievement, or abandoned some position to which he had previously sworn fealty requires something less than the sleuthing talents of a Sherlock Holmes. As for beating up on poor Sean Spicer for his latest sequence of gaffes -- well, that’s more akin to sadism than reporting.

Apart from a commendable determination to discomfit Trump and members of his inner circle (select military figures excepted, at least for now), journalism remains pretty much what it was prior to November 8th of last year: personalities built up only to be torn down; fads and novelties discovered, celebrated, then mocked; “extraordinary” stories of ordinary people granted 15 seconds of fame only to once again be consigned to oblivion -- all served with a side dish of that day’s quota of suffering, devastation, and carnage. These remain journalism’s stock-in-trade. As practiced in the United States, with certain honorable (and hence unprofitable) exceptions, journalism remains superficial, voyeuristic, and governed by the attention span of a two-year-old.

As a result, all those editors, reporters, columnists, and talking heads who characterize their labors as “now more important than ever” ill-serve the public they profess to inform and enlighten. Rather than clearing the air, they befog it further. If anything, the media’s current obsession with Donald Trump -- his every utterance or tweet treated as “breaking news!” -- just provides one additional excuse for highlighting trivia, while slighting issues that deserve far more attention than they currently receive.

To illustrate the point, let me cite some examples of national security issues that presently receive short shrift or are ignored altogether by those parts of the Fourth Estate said to help set the nation’s political agenda. To put it another way: Hey, Big Media, here are two dozen matters to which you’re not giving faintly adequate thought and attention.

1. Accomplishing the “mission”: Since the immediate aftermath of World War II, the United States has been committed to defending key allies in Europe and East Asia. Not long thereafter, U.S. security guarantees were extended to the Middle East as well. Under what circumstances can Americans expect nations in these regions to assume responsibility for managing their own affairs? To put it another way, when (if ever) might U.S. forces actually come home? And if it is incumbent upon the United States to police vast swaths of the planet in perpetuity, how should momentous changes in the international order -- the rise of China, for example, or accelerating climate change -- affect the U.S. approach to doing so?

2. American military supremacy: The United States military is undoubtedly the world’s finest. It’s also far and away the most generously funded, with policymakers offering U.S. troops no shortage of opportunities to practice their craft. So why doesn’t this great military ever win anything? Or put another way, why in recent decades have those forces been unable to accomplish Washington’s stated wartime objectives? Why has the now 15-year-old war on terror failed to result in even a single real success anywhere in the Greater Middle East? Could it be that we’ve taken the wrong approach? What should we be doing differently?

3. America’s empire of bases: The U.S. military today garrisons the planet in a fashion without historical precedent. Successive administrations, regardless of party, justify and perpetuate this policy by insisting that positioning U.S. forces in distant lands fosters peace, stability, and security. In the present century, however, perpetuating this practice has visibly had the opposite effect. In the eyes of many of those called upon to “host” American bases, the permanent presence of such forces smacks of occupation. They resist. Why should U.S. policymakers expect otherwise?

4. Supporting the troops: In present-day America, expressing reverence for those who serve in uniform is something akin to a religious obligation. Everyone professes to cherish America’s “warriors.” Yet such bountiful, if superficial, expressions of regard camouflage a growing gap between those who serve and those who applaud from the sidelines. Our present-day military system, based on the misnamed All-Volunteer Force, is neither democratic nor effective. Why has discussion and debate about its deficiencies not found a place among the nation’s political priorities?

5. Prerogatives of the commander-in-chief: Are there any military actions that the president of the United States may not order on his own authority? If so, what are they? Bit by bit, decade by decade, Congress has abdicated its assigned role in authorizing war. Today, it merely rubberstamps what presidents decide to do (or simply stays mum). Who does this deference to an imperial presidency benefit? Have U.S. policies thereby become more prudent, enlightened, and successful?

6. Assassin-in-chief: A policy of assassination, secretly implemented under the aegis of the CIA during the early Cold War, yielded few substantive successes. When the secrets were revealed, however, the U.S. government suffered considerable embarrassment, so much so that presidents forswore politically motivated murder. After 9/11, however, Washington returned to the assassination business in a big way and on a global scale, using drones. Today, the only secret is the sequence of names on the current presidential hit list, euphemistically known as the White House “disposition matrix.” But does assassination actually advance U.S. interests (or does it merely recruit replacements for the terrorists it liquidates)? How can we measure its costs, whether direct or indirect? What dangers and vulnerabilities does this practice invite?

7. The war formerly known as the “Global War on Terrorism”: What precisely is Washington’s present strategy for defeating violent jihadism? What sequence of planned actions or steps is expected to yield success? If no such strategy exists, why is that the case? How is it that the absence of strategy -- not to mention an agreed upon definition of “success” -- doesn’t even qualify for discussion here?

by Andrew J. Bacevich, TomDispatch | Read more:

Too Much Information

One of the few detectable lies in David Foster Wallace's books occurs in his essay on the obscure '90s-era American tennis prodigy Michael Joyce, included in Wallace's first nonfiction anthology, A Supposedly Fun Thing I'll Never Do Again. Apart from some pages in his fiction, it's the best thing he wrote about tennis—better even than his justly praised but disproportionately famous piece on Roger Federer—precisely because Joyce was a journeyman, an unknown, and so offered Wallace's mind a white canvas. Wallace had almost nothing to work with on that assignment: ambiguous access to the qualifying rounds of a Canadian tournament, a handful of hours staring through chain link at a subject who was both too nice to be entertaining and not especially articulate. Faced with what for most writers would be a disastrous lack of material, Wallace looses his uncanny observational powers on the tennis complex, drawing partly on his knowledge of the game but mainly on his sheer ability to consider a situation, to revolve it in his mental fingers like a jewel whose integrity he doubts. In the mostly empty stadium he studies the players between matches. "They all have the unhappy self-enclosed look of people who spend huge amounts of time on planes and waiting around in hotel lobbies," he writes, "the look of people who have to create an envelope of privacy around them with just their expressions." He hears the "authoritative pang" of tour-tight racket strings and sees ball boys "reconfigure complexly." He hits the practice courts and watches players warm up, their bodies "moving with the compact nonchalance I've since come to recognize in pros when they're working out: the suggestion is one of a very powerful engine in low gear."

The lie comes at the start of the piece, when Wallace points out a potential irony of what he's getting ready to do, namely write about people we've never heard of, who are culturally marginal, yet are among the best in the world at a chosen pursuit. "You are invited to try to imagine what it would be like to be among the hundred best in the world at something," Wallace says. "At anything. I have tried to imagine; it's hard."

What's strange is that this was written in 1996—by then, Wallace had completed his genre-impacting second novel, Infinite Jest, as well as the stories, a couple already considered classic, in the collection Girl with Curious Hair. It's hard to believe he didn't know that he was indeed among the hundred best at a particular thing, namely imaginative prose, and that there were serious people ready to put him among an even smaller number. Perhaps we should assume that, being human, he knew it sometimes and at other times feared it wasn't true. Either way, the false modesty—asking us to accept the idea that he'd never thought of himself as so good and had proposed the experiment naively—can't help reading as odd. Which may itself be deliberate. Not much happens by accident in Wallace's stuff; his profound obsessive streak precluded it. So could it be there's something multilayered going on with sport as a metaphor for writing—even more layers than we expect? It does seem curious that Wallace chose, of all the players, one named Joyce, whose "ethnic" Irishness Wallace goes out of his way to emphasize, thereby alluding to an artist whose own fixation on technical mastery made him a kind of grotesque, dazzling but isolated from healthful, human narrative concerns. Certainly Wallace played textual games on that level.

Here's a thing that is hard to imagine: being so inventive a writer that when you die, the language is impoverished. That's what Wallace's suicide did, two and a half years ago. It wasn't just a sad thing, it was a blow. (...)

It's hard to do the traditional bio-style paragraph about Wallace for readers who, in this oversaturated mediascape, don't know who he was or why he mattered, because you keep flashing on his story "Death Is Not the End," in which he parodies the practice of writing the traditional bio-style paragraph about writers, listing all their honors and whatnot, his list becoming inexplicably ridiculous as he keeps naming the prizes, and you get that he's digging into the frequent self-congratulating silliness of the American literary world, "a Lannan Foundation Fellowship, [...] a Mildred and Harold Strauss Living Award from the American Academy and Institute of Arts and Letters...a poet two separate American generations have hailed as the voice of their generation." Wallace himself had many of the awards on the list, including "a 'Genius Grant' from the prestigious MacArthur Foundation." Three novels, three story collections, two books of essays, the Roy E. Disney Professorship of Creative Writing at Pomona College...

When they say that he was a generational writer, that he "spoke for a generation," there's a sense in which it's almost scientifically true. Everything we know about the way literature gets made suggests there's some connection between the individual talent and the society that produces it, the social organism. Cultures extrude geniuses the way a beehive will make a new queen when its old one dies, and it's possible now to see Wallace as one of those. I remember well enough to know it's not a trick of hindsight, hearing about and reading Infinite Jest for the first time, as a 20-year-old, and the immediate sense of: This is it. One of us is going to try it. The "it" being all of it, to capture the sensation of being alive in a fractured superpower at the end of the twentieth century. Someone had come along with an intellect potentially strong enough to mirror the spectacle and a moral seriousness deep enough to want to in the first place. About none of his contemporaries—even those who in terms of ability could compete with him—can one say that they risked as great a failure as Wallace did.

by John Jeremiah Sullivan, GQ |  Read more:
[ed. From the archive: Readers of this blog know I'm an unabashed DFW fan, and so, please excuse another review of his posthumous book The Pale King, which I somehow managed to miss the first time around.]

Sunday, May 7, 2017

Now THAT Was Music

Some of us are more susceptible than others, but eventually it happens to us all. You know what I’m talking about: the inability to appreciate new music – or at least, to appreciate new music the way we once did. There’s a lot of disagreement about why exactly this happens, but virtually none about when. Call it a casualty of your 30s, the first sign of a great decline. Recently turned 40, I’ve seen it happen to me – and to a pretty significant extent – but refuse to consider myself defeated until the moment I stop fighting.

I’ve been fighting it for more than 10 years now, with varying degrees of vigour and resolve. Sometimes the fight becomes too much – one tires of the small victories that never break open into anything larger – and the spirit flags. I continually if not consistently stay abreast of what’s deemed the best of the new – particularly in rap and rock and R&B (which I stubbornly and unapologetically refer to, like a true devotee of its 1960s incarnation, as ‘soul’). These ventures into the current and contemporary have reaped dividends so small, they can be recounted – will be recounted – with no trouble at all.

But why should I care? Why should any of us care? Maybe it’s about the fear of becoming what we’ve always loathed: someone reflexively and guiltlessly willing to serve up a load of things-were-better-in-my-day, one of the most facile and benighted of all declarations. If you take pride in regarding yourself as culturally current, always willing to indulge the best of everything wherever it’s found, such taste blockages can be pretty frustrating, even embarrassing. And that hoary old consolation for the erectile dysfunction of the slightly older – ‘It happens to everyone’ – is no consolation at all.

For one thing, it doesn’t happen to everyone. Musicians seem particularly immune, for obvious reasons, and so do certain types of journalists, for reasons touched on in the paragraph above. Still, it’s a very real phenomenon, as real as anything that transpires in the mind. Famously, something similar happens to us with sports, particularly spectator sports, and at a much younger age. But no one really feels too badly about that, because of the inherent meaninglessness of watching other humans engage in physical activity. It’s like ruing the day you ever stopped liking porn. But music is different. Denounce the music of the present day, and you’ve instantly become a walking, talking, (barely) breathing cliché, ripe for ridicule, a classic figure of parody and invective.

It doesn’t happen to everyone, but it could certainly happen to you.

It’s axiomatic in our culture that a sense of wonder is something to be encouraged in others and coveted for ourselves. But a sense of wonder is dependent on an ability to experience surprise, and if as an adult you’re still surprised by certain things, then you haven’t been keeping up the way you should.

Most of us stop responding to new music because we know better. You can read that sentence and its last word any way you want; it’s still going to apply. But even if we don’t know better, per se, we still know just as good, and so we know enough to understand that it’s been done before, whatever this is we’re listening to. All of which is another way of saying: you lose your virginity only once.

This is only compounded by another factor, and it’s something I’ve never seen or heard mentioned in any discussion of this topic. It has to do with the callowness (perceived and real) of musicians younger than ourselves. As something that by its very nature appeals to our emotions, music requires that we be emotionally engaged. This can be a very difficult thing to achieve on behalf of someone who hasn’t endured as much of the world as we have.

I’m talking here about music made by those who were younger than us when we first heard them. Anybody who listens to a Beatles song today is listening to a song made by people in their 20s, but we don’t mind – we seldom even notice – because we were younger than that when we first heard the Beatles – or at least, we were younger than the living Beatles were then.

I’m not saying it makes sense, any more than emotions themselves make sense. But there’s no denying their validity. The best music achieves its effects by realising a bittersweet tension – a bit of melancholy touched with exuberance, or vice versa. This requires soul, and something resembling wisdom, and it requires the listener’s complicity, too. More than with any other art form, music requires that its consumer not just appreciate adroit execution but take ownership of a sensibility. I’m not saying it’s impossible with musicians younger than ourselves – it’s happened to me many times. But it’s certainly rare, because for the effect to work – the way it works for me whenever I hear Sleater-Kinney’s Jumpers (2005) or the Decemberists’ Here I Dreamt I Was an Architect (2002), both of which I first encountered well into my 30s – it’s because there are absolutely no weaknesses in the songs’ construction, and because the musicians manage to achieve an old-souled wistfulness and longing that transcend their youth.

More important than any of this is the adult’s safety within his identity. No longer casting about for an anthem, no longer trying on identities like new clothes, the well-adjusted adult is far less likely to succumb to the sound of a musician’s soul, unless it’s a sound that got to him before his ultimate emancipation.

The early-30s solidification of this soul is part of a process begun much earlier, when one is hitting adolescence. In an article headlined ‘Forever Young? In Some Ways, Yes’ (2011) in The New York Times, the cultural historian David Hajdu noticed something shared among a dozen or so legendary musicians then turning 70: they had all ‘turned 14 around 1955 and 1956, when rock ’n’ roll was first erupting’.

He took his hunch and drew it out a little further, with compelling results. Bob Dylan and Paul McCartney both had their heads turned around by Elvis when they were precisely 14 years old; Sidney Bechet, Jimmie Rodgers and Fletcher Henderson – all ‘future innovators of vernacular, cross-racial music’ – were 14 in 1911 when Irving Berlin’s Alexander’s Ragtime Band was released; Billie Holiday and Frank Sinatra turned 14 in 1929, the year Rudy Vallée codified the art of crooning; and Bruce Springsteen, Stevie Wonder, Gene Simmons and Billy Joel turned 14 right around the time that the Beatles played The Ed Sullivan Show in 1964.

It’s simply not realistic to expect someone to respond to music with such life-defining fervour more than once. And it’s not realistic, either, to expect someone comfortable with his personality to be flailing about for new sensibilities to adopt. I’ve always been somewhat suspicious of those who truly do, as the overused phrase has it, listen to everything. Such schizophrenic tastes seem not so much a symptom of well-roundedness as of an unstable sense of self. Liking everything means loving nothing. If you’re so quick to adopt new sentiments and their expression, then how serious were you about the ones you pushed aside to accommodate them?

Oh yeah, and one more thing: music today fucking sucks.

by Lary Wallace, Aeon |  Read more:
Image: via:
[ed. Wikipedia should use this photo of Neil Young under the heading: 'Grumpy Old Men'. Just kidding...! love you Neil.]

Press the Button

The little tablet at the end of my table at Olive Garden glows brighter than the too-bright lights of the restaurant around it, shuffling through a slideshow of pastas and wine.

The tablet is a Ziosk, made by the Dallas company formerly known as TableTop Media. In the past five years, Ziosks and their main competitor, Presto tablets, made by a Silicon Valley company called E la Carte, have trickled onto the tables of many of America’s great suburban chains. Today, you can find them in nearly five thousand restaurants across the country, in Chili’s and Outbacks, in Red Robins and Applebee’s.

A few swipes, a couple cautious pokes, and I’ve ordered a glass of pinot grigio for me, a pinot noir for my date, plus a three-app platter (mozz sticks, stuffed shrooms, fried calamari) to share.

Then Jessica, our server, stops by. She asks if she can get us started with any drinks or appetizers. There’s an awkward pause, like when an acquaintance asks after a recent ex. Waving toward the tablet, I explain we’ve already ordered. I feel guilty that the device could steal her job, but she doesn’t seem to mind.

Maybe that’s because the tablet can’t carry food, or handle cash, or convey to the kitchen that I might like my linguine di mare with sauce on the side and meatballs instead of shrimp. It definitely can’t compliment my date’s haircut, or make a joke about the traffic, or answer my questions about whether or not it likes working at a place with tablets on every table. But it can take a normal order, and lets me pay with a card at the precise moment I want to leave, and then fill out a little survey about my meal. If I happen to have a kid with me, or realize with sudden revulsion that I can no longer stand to even look across the table at my companion, I could even pay an extra $1.99 for access to the tablet’s library of games, and then crush some trivia while I wait for my bottomless breadsticks to be replenished.

In the fancier precincts of the food-service world, where watching a barista spend four minutes prepping a pour-over coffee is a customer’s idea of a good time, robots might not seem like the future of food culture. But spend some time at the restaurants where the majority of Americans eat every day, and you’ll catch a distinct whiff of automation in the air.

Andrew Puzder, former CEO of the company that owns Carl’s Jr. and Hardee’s (and Donald Trump’s humiliatingly rejected nominee for secretary of labor), has been leading the charge, loudly trumpeting the benefits of replacing front-of-house workers with self-service kiosks at every chance, specifically in response to what he claims are the business-crippling threats of higher minimum wages and—of course—Obamacare. Machines, you see, don’t need to get paid or go to the hospital. And as he told Business Insider last year, “they’re always polite, they always upsell, they never take a vacation, they never show up late, and there’s never a slip and fall, or an age-, sex-, or race discrimination case.”

Given job creators’ distaste for organic employees, it’s easy to see how automation might play out in Quick-Service Restaurants, or QSRs—the industry term for both fast-food operations like Hardee’s and slightly more upscale “fast casual” restaurants, like Chipotle. You already have to stand in line, order your own food, and then (in most cases) pick your order up at the counter when it’s ready. Pop a couple kiosks up front, maybe let people order on their phones, and bingo, you’ve automated away all the cashiers. This process is already under way: Panera Bread has had kiosks for years, McDonald’s has started to test them out at certain stores, and Wendy’s announced in February that it plans to install bots at one thousand locations by the end of 2017.

But the tabletop tablets at the Olive Gardens and Outbacks of the world seem like a stranger fit. In the industry jargon, these are “casual dining” restaurants, table-service operations that aren’t quite as fancy as “fine dining” restaurants. Technically, any cheapish place with full service falls into the casual dining bucket, from greasy spoons to dim sum palaces, but the big chains make up about half of the category, and the bigger companies in the ecosystem—like Darden Restaurants, which owns Olive Garden, and DineEquity, which owns Applebee’s and IHOP—are some of the biggest employers in the country. Could touch screens swipe away half of those jobs?

by Sam Dean, Lucky Peach |  Read more:
Image: Erik Carter

Saturday, May 6, 2017

Al Jarreau

Magic Carpet Ride

It’s a thriving, little-known industry catering to the planet’s richest and most demanding travelers—European billionaires, Arabian heads-of-state, and CEOs who want to jet between continents as if they’d never left their trophy penthouses.

Welcome to the world of airline conversions, where artisanal project managers reconfigure big commercial aircraft into custom-designed, airborne luxury suites where the boss or monarch can huddle in a posh conference lounge that seats 40 colleagues.

“The big appeal is that a new generation of private jetliners, for the first time, can fly anywhere in the world without refueling,” says Stephen Vella, founder of Kestrel Aviation Management, a leader in the conversions field. “The world’s wealthiest no longer have to stop on their way to China.” And they’re willing to pay hundreds of millions of dollars for the regal convenience.

Kestrel introduced its latest offering––an individually-tailored version, of course––in late May at the business aviation air show in Geneva. Though the jetliner was already sold to a Chinese conglomerate for close to $350 million, representatives for sheiks, princes, and tech billionaires toured Kestrel’s creation, a converted Boeing BBJ787-8 Dreamliner. It’s the first entry in the hottest new area in the conversions field: medium-sized commercial planes refitted for private owners. The wide-body 787 has major advantages over the two categories of jetliners previously favored by the ultra-wealthy and heads of state. Traditionally, the bottom end of the market has consisted of the narrow-body Boeing 737 and Airbus A320. Dominating the top end is the super-sized 747, the behemoth that’s long transported U.S. presidents as Air Force One and the jetliner of choice for Middle Eastern monarchs.

While 737 and A320 private jets travel a maximum of 9.5 to 10 hours, or around 4,300 miles––the distance, say, from New York to Warsaw––on a tank of jet fuel, the converted twin-engine 787 can carry its privileged passengers nonstop between any two cities in the world, no matter the distance. For example, the 787 covers the 9,200 miles from Los Angeles to Dubai, a 17.5-hour journey, with fuel to spare. To be sure, many customers will still prefer the 747’s superior size; it’s twice as roomy as the 787. But the 787's globe-spanning range makes it a tempting choice even for traditional 747 customers. The 787 can fly as far nonstop, or even a bit farther, than the new generation Boeing 747-8. Its twin engines burn less than half as much fuel on the same trip as the four-engine 747. Hence, the 787 is far cheaper to operate.

The 787 entered commercial service in late 2011, followed by the A350 in early 2015. But it’s only now that the two models are being refitted for private use. They are the first aircraft to be primarily built from carbon fiber composite, rather than the traditional aluminum and steel construction, making them far lighter than previous jetliners of similar size. Because of its slender weight, the 787 can pack far more fuel than other mid-sized planes with the same passenger and baggage loads. Even with a brimming tank, it still weighs a lot less at takeoff than, say, the Boeing 767 it is gradually replacing. The lower aircraft weight and greater fuel efficiency helps the 787 travel farther on each gallon of fuel.

Its composite frame, however, presented Vella with new challenges in “going private.” In the 737 or A330, Vella would drill into the aluminum bulkheads and attach cabinets and other wall-units with rivets. But that system doesn’t work so well with carbon fiber. So Vella had to secure wall-to-ceiling cabinets and credenzas to aluminum rails running along the floors that, in commercial versions, anchor the rows of seats.

Vella had to work with Boeing on the structural modifications. The 787s flown by Delta or Japan Airlines are equipped with a single satellite communications system used by the pilots. Vella had Boeing cut a hole in the aircraft’s roof and install a satellite antenna and radome to deliver cellphone and internet service thousands of feet over any spot on the globe.

That’s just one of the abundant luxury features unveiled at the 787’s Geneva debut. Passengers enter into a grand circular foyer adorned with cherry hardwood floors and walls sheathed in leather. The two main salons in the 2,400-square-foot interior are the dining and conference rooms, equipped with a row of coffee tables that, at the flick of a switch, rise and unfold into a long banquet table, and the main lounge featuring first-class-style, lay-flat armchairs, and twin divans that merge electrically into a daybed. The 40 passengers and 7 cabin crew can access WiFi for their iPads and laptops, and make calls on their smartphones over GSM, at any time and at any altitude. TV shows are streamed live via internet onto the five giant TV screens. Billionaire couples can retreat to the master bedroom suite, a sanctuary offering what Vella calls “a California king-sized bed” and a dual-sink vanity clad in Italian marble.

by Shawn Tully, Fortune | Read more:
Image: Kestrel Aviation Management

Berkeley Author George Lakoff Says, ‘Don’t Underestimate Trump’

George Lakoff, retired UC Berkeley professor and author of Don’t Think of an Elephant, is one of a very few people in Berkeley who does not underestimate Donald Trump. “Trump is not stupid,” he tells anyone who will listen. “He is a super salesman, and he knows how to change your brain and use it to his advantage.”

In fact, Lakoff predicted a year ago that Trump would win with 47% of the vote. (The actual total was 46%.) Lakoff even told Hillary Clinton’s campaign and PAC staffers how to counteract Trump’s message. But they couldn’t hear him.

As far back as 2006, Lakoff saw the writing on the wall. “A dark cloud of authoritarianism looms over the nation,” he wrote in his book Thinking Points, A Progressive’s Handbook. ”Radical conservatives have taken over the reins of government and have been controlling the terms of the political debate for many years.” The progressives couldn’t hear him, either.

Lakoff’s message is simple, but it is couched in the language of cognitive linguistics and neuroscience. The problem is that political candidates rely on pollsters and PR people, not linguists or neuroscientists. So when Lakoff repeatedly says that “voters don’t vote their self-interest, they vote their values,” progressive politicians continually ignore him. His ideas don’t fit in with their worldview, so they can’t hear him.

But a worldview is exactly what Lakoff is talking about. “Ideas don’t float in the air, they live in your neuro-circuitry,” Lakoff said. Each time ideas in our neural circuits are activated, they get stronger. And over time, complexes of neural circuits create a frame through which we view the world. “The problem is, that frame is unconscious,” Lakoff said. “You aren’t aware of it because you don’t have access to your neural circuits.” So what happens when you hear facts that don’t fit in your worldview is that you can’t process them: you might ignore them, or reject or attack them, or literally not hear them.

This theory explains why even college-educated Trump voters could ignore so many facts about their candidate. And it also explains why progressives have been ignoring Lakoff’s findings for more than two decades. Progressives are still living in the world of Descartes and the Enlightenment, Lakoff said, a neat world governed by the rules of logic. Descartes said, “I think therefore I am,” but Lakoff claims that we are embodied beings and that 98 percent of thought is unconscious.

Our thoughts are chemical in nature, and occur within the confines of a physical body: we are not 100 percent rational beings.

So if you are going to craft a message that can reach people who disagree with you, you have to understand their subconscious worldview. Lakoff calls this worldview a “frame,” and claims that Republicans have done a much better job with framing over the past 30 or 40 years. Republicans understand the narrative that governs many people in this country, and they target their message directly to that worldview. Democrats, on the other hand, ignore the worldview and focus instead on rationality, facts and policies.

It is a myth that the truth will set us free, Lakoff said. Case in point: Hillary Clinton’s well-thought-out policy positions vs. Donald Trump’s tweets. The tweets had one central and fact-free message: “Make America great again.” Clinton’s message was more detailed and fact-based, but also much more diffuse. Heavy on Enlightenment, short on metaphor. “I spoke to people at the center of Hillary Clinton’s campaign in 2016, and told them they were doing everything they could to lose,” Lakoff said. “It didn’t make any difference. People are who they are, and they were going to do things their way. I could see the disaster happening the entire year.”

Lakoff started teaching linguistics at UC Berkeley in 1972 and retired as the Richard and Rhoda Goldman Distinguished Professor of Cognitive Science and Linguistics in 2016. Since his retirement, he has spent much of his time traveling around the country, giving talks and interviews. He has written or co-authored 11 books, and is at work on another. Lakoff is the kind of professor who will tell you, in answer to a question, that he wrote a 500-page book about that very topic. “I wrote two 500-page books and three 600-page books,” he adds, laughing. “I like to be thorough.”

In non-academic circles, Lakoff is best known for his slim book Don’t Think of an Elephant. The book, recently reprinted, was a New York Times best-seller when it first came out in 2004, after the “disaster” of the George W. Bush election. Don’t Think of an Elephant was mostly a compilation of essays, and the main point was that trying to use Republican’s language and theories against them is counter-productive.

“What George has done is tie the question of political belief to cognitive science,” said Lawrence Rosenthal, chair and lead researcher of the UC Berkeley Center for Right-Wing Studies. “He understands that the way to get at people’s political opinions is by talking about values, rather than specific arguments about specific issues. He believes conservatives are much better at this than liberals and have been for a very long time. They have a much better track record of crafting political appeals by way of the appropriate value statements for their audience.”

The reason Democrats have such a hard time with Lakoff’s message, Rosenthal said, “is because George is going up against something very deep-rooted, something that goes back to the Enlightenment. He would argue that the Enlightenment approach to political persuasion was never appropriate… Every time I hear a political candidate say the word ‘percent,’ I think of ‘Oh God, they haven’t read George’.”

Lakoff gave a talk recently at the Center for Right-Wing Studies and pointed out that students who become Democratic operatives tend to study political science, statistics and demographics in college. “Students who lean Republican study marketing.” “And that’s his point,” Rosenthal said. “It’s a very different way of thinking.”

Lakoff’s core finding revolves around the metaphor of family. He claims there are two core beliefs about the role of families in society, and the belief one holds determines whether one is conservative or liberal. Moderates are people in the middle who are able to hold some ideas from both sides, and being able to understand and persuade them is crucial to winning any election.

Conservatives believe in what Lakoff calls the “strict father family,” while progressives believe in a “nurturant parent family.” In the strict father family, father knows best and he has the moral authority. The children and spouse have to defer to him, and when they disobey, he has the right to punish them so they will learn to do the right thing.

“The basic idea is that authority is justified by morality, and that, in a well-ordered world, there should be a moral hierarchy in which those who have traditionally dominated should dominate,” Lakoff said. “The hierarchy is God above man; man above nature; the rich above the poor; employers above employees; adults above children; Western culture above other cultures; our country above other countries. The hierarchy also extends to men above women, whites above nonwhites, Christians above non-Christians, straights above gays.” Since this is seen as a “natural” order, it is not to be questioned.

Trump and those crafting the Republican message play straight into this strict father worldview, which is accepted in many parts of the country. Even traditionally Democratic groups such as union members and Hispanics include members who are strict fathers at home or in their private life, Lakoff says. The Republican message plays well with them.

The nurturant parent family, on the other hand, believes that children are born good and can be made better. Both parents are responsible for raising children, and their role is to nurture their children and raise them to nurture others. Empathy and responsibility toward your child also extend to empathy and responsibility toward those who are less powerful, or suffering from pollution or disease, or are marginalized in some way.

While Lakoff is an unabashed Berkeley progressive, he said Democrats are decades behind in understanding how to frame issues in a way that can reach swing voters.

by Daphne White, Berkeleyside | Read more:
Image: Daphne White

A Guide to Escaping Facebook’s Evil Clutches

Earlier this week, leaked documents revealed Facebook can identify when teenagers feel “stressed”, “defeated” and “overwhelmed” and could use this information to target advertisements. According to The Australian, the social network told a top Australian bank that they could monitor young users’ emotional states and target them when they’re feeling insecure. Facebook claimed the report was misleading. Headlines ensued.

While it’s disturbing that Facebook can – and according to one ex-employee, does – do this, the technology involved isn’t the stuff of a harrowing dystopian novel. The report says Facebook can determine when young people feel “anxious”, “nervous”, “stupid”, “silly”, “useless” and a “failure” – to which, duh. It can most likely tell this because these are literally options on Facebook’s “Feeling” button (yep, even “useless”), which allows users to post a status about their emotional state. The most shocking thing about the report, then, is that teenagers are bothering to tell Facebook how they feel at all.

Facebook is dead. Not only do headlines like the above surface every week about the network’s dodgy dystopian dealings (here’s a list of every Facebook controversy in 2016), the site also simply isn’t cool. No one likes Facebook. No one wants to be on Facebook. But we all keep using it.

Why? A Twitter search for the words “want to delete Facebook but” reveals a myriad of reasons. Some don’t want to lose pictures, others need to keep in touch with friends and family, others need it for their jobs, or to remember birthdays. Facebook is incredibly troubling – but it’s also incredibly useful, meaning all too often the “Delete Account” button remains untouched.

So what do you do if you don’t want Facebook to turn you into a puppet for its mind-control games, but still also really want to look at Sarah from Year Nine’s new baby to remind you that oh my God, babies really can look that weird?

1. Review what Facebook knows about you

Are you ready to be shocked? Visit www.facebook.com/ads/preferences to find out everything that Facebook knows about you (and uses to send you ads).

Under “Your interests”, click through the tabs such as “Hobbies and activities” and “Shopping and fashion” to view the very specific things Facebook knows. It can be very eerie (it knows I like the colour red, chicken nuggets, and Harry Potter) and also hilariously wrong (it thinks I like Prince Charles, a singular “eyelash”, and the sport curling).

Whether it’s right or wrong, the sheer amount Facebook knows is sure to unnerve. Under the “Your information” tab (scroll down from “Your interests”), the site knows what it defines as “Your categories” – things such as your political leanings, the devices you use, and how many close friends have their birthdays coming up. It knows that I have housemates, am a millennial, and am “close friends of ex-pats”.

by Amelia Tate, The New Statesman | Read more:
Image: Getty/Facebook/New Statesman

At $495, Lonzo Ball’s ZO2 Sneakers Have Tastemakers Saying No Thanks

At the Flight Club sneaker store just south of Union Square on Thursday night, eager customers perused the gleaming shelves, hunting for classic kicks. The big names — the players whose signature shoes are most highly sought — are the ones you would expect: Kevin Durant, LeBron James and, even after all these years, Michael Jordan.

At the same time, the larger world was trying to wrap its head around the ZO2, Lonzo Ball’s first signature shoe, which he had announced hours earlier in a video released to Slam magazine.

Ball, the former U.C.L.A. point guard who is expected to be a top-three pick in June’s N.B.A. draft, had declined contracts with the big sneaker companies: Nike, Under Armour and Adidas. Instead, he placed his chips on Big Baller Brand, the company founded by his father, LaVar Ball, for the frank purpose of maintaining control over the merchandise revenue generated by his three sons, Lonzo, 19; LiAngelo, 18, a U.C.L.A. commit; and LaMelo, 15, who scored 92 points in a high school game last season.

LaVar Ball’s opening bid to shoe companies some weeks ago was a marketing deal with all three sons worth $1 billion. The companies reportedly declined. Last month, a Nike executive called LaVar Ball “the worst thing to happen to basketball in the last 100 years.” Translation: Nike doesn’t like when someone rejects the business model that has enabled the company to dominate the multibillion-dollar sneaker and apparel industry.

Then, on Thursday, came Big Baller Brand’s ZO2, the least expensive version of which costs $495. Yes, four hundred ninety-five dollars. For comparison’s sake, the most recent signature shoe of James, basketball’s biggest star and best player, began retailing last year at $175; it carries the Nike swoosh.

While the shoe and the Ball family buzzed all over social media on Thursday, the sneaker intelligentsia were lined up in Flight Club’s consignment area. These are the people who camp out on street corners to get first crack at new releases and then sell them to Flight Club for a cut of the subsequent resale. The leading edge of sneaker cool, they help decide which shoes will be on the shelves of Flight Club and stores like it 10 and 25 years from now. And they had reached a verdict on the ZO2: No thanks.

“I wouldn’t buy those,” said T.Q. Jones, who wore Nike Prestos as he waited in the consignment line.

Haitham Khan — who was rocking a Comme des Garçons hoodie, a Supreme bag and blush-colored Common Projects sneakers — made a bold statement: “I can answer your question: No one’s going to buy them.”

This conclusion matched those of industry experts, who nonetheless marveled — through laughter — at what Bob Dorfman, a sports marketing expert at Baker Street Advertising, labeled “the brazenness, the audacity, the ego” of LaVar Ball, the father.

Matt Powell, a sports industry analyst at NPD Group, estimated Big Baller Brand would sell 10,000 pairs, which he described as a “rounding error” given the 400 million pairs of shoes Nike made last year.

“If you did it in snakeskin and pixie dust, it might cost $500,” Powell added.

Sneaker culture is shaped by substance as much as flash. Flight Club, for instance, prominently features sneakers linked to long-retired stars like Scottie Pippen, Patrick Ewing and Jordan not merely because those shoes are aesthetically pleasing, but because they are connected to incredible basketball talents.

As Jones said of Lonzo Ball: “I don’t know if he’s going to be a star.”

by Marc Tracy, NY Times | Read more:
Image: Big Baller Brand

Bo Bartlett, The Promised Land
via: