Friday, August 17, 2018

How TripAdvisor Changed Travel

Should one be so unlucky as to find oneself, as I did, lying awake in bed in the early hours of the morning in a hostel in La Paz, Bolivia, listening anxiously to the sound of someone trying to force their way into one’s room, one could do worse than to throw a chair under the doorknob as a first line of defence. But this is not what I did. Instead, I held my breath and waited until the intruder, ever so mercifully, abandoned his project and sauntered down the hall. The next morning, when I raised the incident with the hostel employee at the front desk, he said the attempted intrusion had just been an innocent mistake, a misdirected early-morning wake-up call gone wrong, and what was the big deal, anyway? Fuming, I turned to the highest authority in the world of international travel, the only entity to which every hotel, restaurant, museum and attraction in the world is beholden: I left the hostel a bad review on TripAdvisor.

TripAdvisor is where we go to praise, criticise and purchase our way through the inhabited world. It is, at its core, a guestbook, a place where people record the highs and lows of their holiday experiences for the benefit of hotel proprietors and future guests. But this guestbook lives on the internet, where its contributors continue swapping advice, memories and complaints about their journeys long after their vacations have come to an end.

Every month, 456 million people – about one in every 16 people on earth – visit some tentacle of TripAdvisor.com to plan or assess a trip. For virtually every place, there exists a corresponding page. The Rajneeshee Osho International Meditation Resort in Pune, India, has 140 reviews and a 4 out of 5 rating, Cobham Service Station on the M25 has 451 reviews and a rating of 3.5, while Wes Anderson’s fictional Grand Budapest Hotel currently has 358 reviews and a rating of 4.5. (At the top of the page, there is a message from TripAdvisor: “This is a fictional place, as seen in the movie The Grand Budapest Hotel. Please do not try to book a visit here.”)

Over its two decades in business, TripAdvisor has turned an initial investment of $3m into a $7bn business by figuring out how to provide a service that no other tech company has quite mastered: constantly updated information about every imaginable element of travel, courtesy of an ever-growing army of contributors who provide their services for free. Browsing through TripAdvisor’s 660m reviews is a study in extremes. As a kind of mirror of the world and all its wonders, the site can transport you to the most spectacular landmarks, the finest restaurants, the most “adrenaline-pumping” water parks, the greatest “Hop-On Hop-Off Experiences” that mankind has ever devised. Yet TripAdvisor reviews are also a ruthless audit of the earth’s many flaws. For every effusive review of the Eiffel Tower (“Worth the hype at night,” “Perfect Backdrop!”), there is another that suggests it is a blight on the face of the earth (“sad, ugly, don’t bother”; “similar to the lobby of a big Vegas casino, but outside”).

TripAdvisor is to travel as Google is to search, as Amazon is to books, as Uber is to cabs – so dominant that it is almost a monopoly. Bad reviews can be devastating for business, so proprietors tend to think of them in rather violent terms. “It is the marketing/PR equivalent of a drive-by shooting,” Edward Terry, the owner of a Lebanese restaurant in Weybridge, UK, wrote in 2015. Marketers call a cascade of online one-star ratings a “review bomb”. Likewise, positive reviews can transform an establishment’s fortunes. Researchers studying Yelp, one of TripAdvisor’s main competitors, found that a one-star increase meant a 5-9% increase in revenue. Before TripAdvisor, the customer was only nominally king. After, he became a veritable tyrant, with the power to make or break lives. In response, the hospitality industry has lawyered up, and it is not uncommon for businesses to threaten to sue customers who post negative reviews.

As the so-called “reputation economy” has grown, so too has a shadow industry of fake reviews, which can be bought, sold and traded online. For TripAdvisor, this trend amounts to an existential threat. Its business depends on having real consumers post real reviews. Without that, says Dina Mayzlin, a professor of marketing at the University of Southern California, “the whole thing falls apart”. And there have been moments, over the past several years, when it looked like things were falling apart. One of the most dangerous things about the rise of fake reviews is that they have also endangered genuine ones – as companies like TripAdvisor raced to eliminate fraudulent posts from their sites, they ended up taking down some truthful ones, too. And given that user reviews can go beyond complaints about bad service and peeling wallpaper, to much more serious claims about fraud, theft and sexual assault, their removal becomes a grave problem.

Thus, in promising a faithful portrait of the world, TripAdvisor has, like other tech giants, found itself in the unhappy position of becoming an arbiter of truth, of having to determine which reviews are real and which are fake, which are accurate and which are not, and how free speech on its platform should be regulated. It is hard to imagine that when CEO Stephen Kaufer and his co-founders were sitting in a pizza restaurant in a suburb of Boston 18 years ago dreaming up tripadvisor.com, they foresaw their business growing so powerful and so large that they would find themselves tangled up in the kinds of problems that vex the minds of the world’s most brilliant philosophers and legal theorists. From the vantage point of 2018, one of the company’s early mottos now seems comically naive: “Get the truth and go.”

Many of the difficult questions the company faces are also questions about the nature of travel itself, about what it means to enter unknown territory, to interact with strangers, and to put one’s trust in them. These are all things that one also does online – it is no coincidence that some of the earliest analogies we used to talk about the digital world (“information superhighway”, “electronic frontier”) tended to belong to the vocabulary of travel. In this sense, the story of TripAdvisor, one of the least-examined and most relied-upon tech companies in the world, is something like a parable of the internet writ large.
***
The travel guide is an ancient genre, one that has never been far removed from the questions that trouble TripAdvisor LLC. For nearly all of human history, people have wanted to know everything about where they were going before they got there. The Greek geographer Pausanias is often credited with authoring the first travel guide, his Description of Greece, sometime in the second century AD. Over 10 books, he documented the sights and stories of his native land. Of Lake Stymphalia, in Corinth, for example, Pausanias writes: “In the Stymphalian territory is a spring, from which the emperor Hadrian brought water to Corinth … at one time man-eating birds bred on it, which Heracles is said to have shot down.” Today, on TripAdvisor, Lake Stymphalia gets a meagre rating of 3.5, below the average of 4: “It is more like a swampy marshland … there isn’t really anywhere to chill out and relax so we didn’t stay long,” writes one reviewer. Beneath this review, and beneath all TripAdvisor reviews, is a disclaimer: “This review is the subjective opinion of a TripAdvisor member and not of TripAdvisor LLC.”

When TripAdvisor was founded, in 2000 – six years after Amazon, four years before Facebook and Yelp – consumer reviews were still thought of as a risky endeavour for businesses, a losing bet. Amazon first allowed customers to post reviews in 1995, but it was a controversial move that some critics derided as retail suicide. When TripAdvisor launched, it did so as a simple aggregator of guidebook reviews and other established sources, keeping its distance from the unpredictable world of crowd-sourced content.

Kaufer envisaged TripAdvisor as an impartial referee, providing “reviews you can trust”, as one of its former taglines promised. But as an experiment, in February 2001, he and his partners created a way for consumers to post their own reviews. The first-ever review was of the Captain’s House Inn, on Cape Cod, which received four “bubbles”. (TripAdvisor uses “bubbles” rather than stars to evaluate companies to avoid confusing its ratings with more conventional luxury hotel ratings.)

Soon, Kaufer noticed that users were gravitating away from expert opinion and towards the crowdsourced reviews, so he abandoned his original concept and began focusing exclusively on collecting original consumer input. He hoped that selling ads on the site would be enough to keep the company afloat, but when it became clear that this wasn’t bringing in enough money, his team shifted to a new model. From late 2001, every time a visitor clicked on a link to a given hotel or restaurant, TripAdvisor would charge the business a small fee for the referral. Within three months, the company was making $70,000 a month, and in March 2002, it broke even. “I think they call it a pivot now,” Kaufer said in 2014. “I called it running for my life back then.”

By 2004, TripAdvisor had 5 million unique monthly visitors. That year, Kaufer sold TripAdvisor to InterActiveCorp (IAC), the parent company of the online travel company Expedia, for $210m in cash, but stayed on as CEO. It seemed like a good deal at the time – as Kaufer told Harvard Business School’s student newspaper in 2013, none of the founders were previously wealthy, so the windfall was a “life-changing event”. But he eventually regretted selling out so early on: “In hindsight, this was the stupidest move I ever made!”

For the next few years, TripAdvisor continued to grow, hiring more than 400 new employees around the world, from New Jersey to New Delhi. By 2008, it had 26 million monthly unique visitors and a yearly profit of $129m; by 2010, it was the largest travel site in the world. To cement its dominance, TripAdvisor began buying up smaller companies that focused on particular elements of travel. Today, it owns 28 separate companies that together encompass every imaginable element of the travel experience – not just where to stay and what to do, but also what to bring, how to get there, when to go, and whom you might meet along the way. Faced with such competition, traditional guidebook companies have struggled to keep up. In 2016, Fodor’s, one of the most established American travel guide companies, was bought by a company called Internet Brands.

Over time, hoteliers largely accepted that TripAdvisor wasn’t going away, even as they watched it turn their industry upside down. “The online world has changed pretty much every industry, but hospitality beyond recognition,” Peter Ducker, chief executive of the Institute of Hospitality, told me. “For a long time when [TripAdvisor] first came out, hoteliers didn’t like it. We didn’t want to air our dirty laundry in public,” he said. Now, though, “hotels have learned that a) it’s not going away, so get over it, and b) you can use it to your advantage … They use good TripAdvisor ratings in their marketing materials, because to a lot of the public, that means more than a star rating, more than a government accreditation. It transcends borders.”
***
By 2011, TripAdvisor was drawing 50 million monthly visitors, and its parent company, IAC, decided that the time had come to spin it out as a separate, publicly traded entity. Its IPO that December valued the company at $4bn, but shares fell on the first day of trading. TripAdvisor was in new and uncertain territory, and no one knew how the company would fare on its own.

TripAdvisor had become a tech giant, but its leadership did not quite realise that yet. The year it went public was the final year that TripAdvisor published its annual lists of the “Top 10 Dirtiest Hotels” in the US and Europe. A couple of months before the IPO, Kenneth Seaton, owner of what had been voted “America’s dirtiest hotel” (the Grand Resort Hotel & Convention Center, in Pigeon Forge, Tennessee), filed a lawsuit against TripAdvisor for defamation, claiming $10m in damages. The suit was tossed out in 2012, after the judge ruled that any review posted to TripAdvisor is an opinion and therefore protected under the first amendment. Seaton appealed, but the original verdict was upheld on the grounds that the use of the word “dirtiest” could not count as defamation as it was no more than “rhetorical hyperbole”. TripAdvisor won the legal battle, but it still decided to scrub the “dirtiest” list from its site. “We want to stay more on the positive side,” Kaufer told the New York Times.

In 2012, the media behemoth Liberty Interactive purchased $300m in TripAdvisor shares. TripAdvisor had become an established giant of the travel industry, an inevitable part of even the most cursory vacation planning. As the company sought to clean up its public profile, its audience grew, but so did the pressure to turn a profit. “When [platforms] start to commercialise, it changes the DNA,” says Rachel Botsman, a lecturer at Oxford University’s Saïd Business School who has chronicled the rise of the reputation economy. “When that happens, it’s a problem.” Many of the website’s most loyal users now feel aggrieved by the way the site has changed.

by Linda Kinstler, The Guardian |  Read more:
Image: AFP/Getty/Guardian Design

Arizona Students' Stand on Gun Control Switches to Voter Registration

Four months ago, hundreds of Arizona students staged a die-in on the floor of their state Capitol to protest for stricter gun laws.

Now, many of those same students are working on a new campaign: registering their high school classmates to vote, with the goal of voting out the politicians who have blocked the passage of gun safety laws.

“This entire thing is led by mostly kids who can’t vote yet,” said Jordan Harb, 17, one of the organizers of March for Our Lives Arizona, a group of teenage gun violence prevention advocates running a statewide voter registration program.

Harb himself will not be old enough to vote this November. But that has not stopped him and his fellow teenage activists from leading an intensive campaign to shift the balance of power in the midterm elections.

To vote National Rifle Association-backed candidates out of office, a coalition of gun violence prevention groups has launched a $1.75m campaign to register 50,000 young voters before this November’s midterm elections. Part of that money is going to nearly a dozen local groups, including March for Our Lives Phoenix, who are working to register 18- and 19-year-olds to vote.

Lower youth voter turnout in midterm elections tends to favor Republican candidates, who have blocked the passage of stricter federal gun control laws for decades. But gun violence prevention activists are trying to change that dynamic by bringing a wave of young voters to the polls.

The Our Lives Our Vote campaign is backed by Everytown for Gun Safety and Giffords, two gun violence prevention groups, and NextGen America, an advocacy group founded by the billionaire Tom Steyer, a major Democratic donor. The coalition says it has registered 27,000 voters through online and mail-in voter registration drives, focusing on 10 states where National Rifle Association-backed politicians are on the ballot. It’s now dedicating $600,000 to local groups organizing voter registration drives, including two groups run by high school students.

Since the 1999 Columbine high school shooting, and then the 2012 Sandy Hook elementary school shooting, schools across the United States added drills to prepare students for how to respond if an attacker with a gun targeted their school.

The school shooting at Marjory Stoneman Douglas high school in Parkland, Florida, this February, which left 17 people dead, sparked an unprecedented wave of youth gun control protests across the country. Thousands of schools nationwide held walkouts to protest against government inaction on preventing school shootings. The March for Our Lives, organized by student survivors from Parkland, inspired hundreds of rallies and marches worldwide, some with tens or even hundreds of thousands of participants.

After the Parkland shooting, students who had grown up with “active shooter” drills as a normal part of their lives had suddenly had enough.

by Lois Beckett, The Guardian |  Read more:
Image: Evelyn Hockstein
[ed. It's the only way.]

Wednesday, August 15, 2018

Remembering Anthony Bourdain as Only His Fixers Could

Michiko Zentoh was Anthony Bourdain’s first fixer. A freelance television producer in Japan, she worked with Bourdain on the initial two episodes of his first series, A Cook’s Tour, which were set in Tokyo and the onsen towns of Atami and Yugawara. It was 2000, and Bourdain was no longer working the same kind of schedule at New York’s Les Halles brasserie as he had before writing his best-selling Kitchen Confidential. Yet in those early shows it’s clear he still thinks of himself as a chef first, expertly evaluating a piece of bluefin and remarking on how much he’d like to get an octopus he sees at Tsukiji Fish Market back into the kitchen. What Zentoh remembers most from those days is his enthusiasm. “He told me, ‘I feel like I won the lottery,’” she recalls. “He spent so many years never leaving the kitchen and now he was traveling the world.”

Bourdain’s enthusiasm is evident in those early episodes. The characteristic intonation is there, but his voice seems an octave or two higher, and as he delights in a kaiseki meal or struggles through a bowl of mucilaginous nattō, there’s a sweetness to his demeanor, a naïveté, that belies the confidence of later years. He’s the quintessential innocent abroad—eager for new experiences but left vulnerable by them, too. On-screen, he admits to feeling intimidated, not only by the sumo wrestlers whose practice sessions he attends but even by the bullet train, where the crew shot him eating a bento lunch of eel. “He was very modest, very cautious about protocol,” Zentoh says. At one point she corrected his bowl handling, gently suggesting that he stop using both palms to cup it. “He asked me at every step, ‘Am I doing it right?’ He was the opposite of arrogant.”

He was also the opposite of profligate. Although at age 44 Bourdain was able, he said, to open a savings account for the first time in his life with the proceeds from Kitchen Confidential, budgets during A Cook’s Tour remained tight. Bourdain traveled in the same van as the rest of the small team, and their accommodations, if not dives, weren’t exactly posh. Zentoh recalls staying in a hotel with rooms so tiny Bourdain barely had room for luggage. “That’s why the geisha in the second episode are so old,” she says. “We couldn’t afford younger ones.”

Behind every bite of Moroccan sheep testicle or sip of high-octane Georgian chacha that Anthony Bourdain took on-screen was a fixer like Zentoh. Before the start of any shoot, from Reykjavík to Congo, the chef turned television star’s production company, Zero Point Zero, hired a local—usually a freelance journalist, or producer—to suggest segment ideas, set up shoots, get permissions, act as Bourdain’s interpreter, and occasionally appear on camera. These fixers may not have written the scripts or edited the footage, but they ultimately played a significant role in what viewers saw on-screen. And because, for the few days or weeks that a shoot lasted, most were also thrust into this suddenly intimate relationship with someone they knew only from TV, they possess a view onto the man that few share.

When news spread in early June that Bourdain had committed suicide at age 61, the shock, rippling across social media, felt seismic. It wasn’t just that he was so influential a figure, though countless viewers learned to eat—lustfully and catholically—from him, and there are legions of chefs today who were drawn to the profession, for better and for worse, by the pirate-ship approach to the kitchen he so vividly described. Nor was it simply the fact of his celebrity, though after nearly two decades spent crisscrossing the globe for his television series, he was recognized on the street everywhere from Beijing to Buenos Aires. It wasn’t even the confounding tragedy of his suicide, that he might choose to end a life so seemingly enviable. Rather, the thing that made his death so terribly traumatic to so many was the loss of connection. It was the loss of a real, if fleeting, sense that Bourdain somehow found time and space for an actual human moment with every person who ever cooked him a meal or even interrupted one to ask for a selfie.

For those who fixed for him, it was so often more than just a moment. Fixing is among the lowest jobs on the production hierarchy, and yet Bourdain not only treated his fixers well, but engaged with them, soliciting their insight into whatever place and people he had landed among that week and gradually coming to call several of them friends. Though most of them never met one another, they formed a sort of unspoken international network, these people who helped Bourdain know the world more deeply and who, in turn, were shaped by his way of experiencing it.

When Matt Walsh began working for No Reservations in 2005, Bourdain’s enthusiasm and curiosity were the first qualities the fixer noticed. An American journalist living in Hong Kong, Walsh had seen A Cook’s Tour, recognized the similarities between the emerging star’s New Jersey heritage and his Long Island own, and decided he wanted to have the kind of fun Bourdain seemed to be having. He pitched himself to No Reservations’ producers and was soon leading Bourdain to a roast-duck restaurant in Beijing and a family meal in Chengdu. “It was all new to him, and he was really hungry,” says Walsh. “He wanted to see it all, do it all, taste it all.”

And imbibe it all. Bourdain made no secret of his predilections. “The Tony we used to work with back then was always laughing and drinking. We got loaded all the time,” says Walsh. “By the end of some nights we were all a little slurry.”

His fixers from those early years recall Bourdain as especially happy when he was having the kind of experience that allowed him to connect with a place and its people. After the Khmer Rouge largely destroyed Cambodia’s train system, locals used what they called lorries or norries—basically a platform on wheels, outfitted with a rudimentary engine and a hand brake—to travel the rails in areas where there were no roads. On a shoot there in 2010, the crew took one out for a meal with a family in the rice fields. “It was pouring rain, but it didn’t matter,” Walsh recalls. After “riding back through those electric-green rice paddies, having smoked a lot of weed, with the wind [from] going 30 kilometers an hour—the sensation of all that. I looked at Tony and the expression on his face was exactly what I was feeling: it doesn’t get better than this.”

The lorry trip exemplifies the kind of authentic experience that Bourdain craved and that he attempted to bring to his show. For No Reservations’ second season, Zentoh was charged with coming up with a segment that took the crew to Japan’s Kiso Valley. The only dates available for the shoot fell during Obon, a holiday typically celebrated with family, but the fixer managed to wrangle an invitation to join three generations of the family that cares for the country’s sacred hinoki trees. “Tony started drinking shochu and sake with the head of the family,” Zentoh recalls. “After a while, he turned to us and said, ‘Forget about the shoot. I don’t care. I just want to drink with this guy. I want to be 100 percent there.’ That’s why people liked him—he showed up.”

He was also utterly authentic in his own responses. “Tony didn’t do fake,” Zentoh says. “He really would eat what was on the plate, drink what was in the glass.” He would try anything, but if he didn’t like, say, a bite of dried sea-cucumber liver that elicited an “I don’t need to try that again,” he wouldn’t pretend otherwise.

No Reservations gave Bourdain the space to express not only his political and social beliefs, but his artistic passions as well. Lucio Mollica first worked with Bourdain on the Naples episode that aired in 2011. By then, the crew had already produced a Rome episode intended as an homage to Fellini. In Naples, he wanted to shoot in the neighborhood where the film Gomorrah, released a couple of years earlier, had been set. “He wasn’t only a fine connoisseur of Italian cuisine, but of Italian culture, and Italian cinema,” Mollica says. “His knowledge of that was amazing.”

Yet as he made aspects of the show more closely in his own image, others slipped from him. As the crew grew, they increasingly had the budget to stay in nicer hotels. The pressure to produce had increased, too. “As the budget got bigger, the amount of content that was needed grew as well, and we had so little time,” Zentoh says. “It was a brutal schedule for the production team. The whole experience was like a goose being made into foie gras. Tony had no time to digest anything—not the food or the experience.”

At the time Bourdain was well on his way to becoming internationally famous. “I met him about halfway into this journey,” Mollica says. “He wasn’t so famous in Italy then.” Still, the Italian fixer glimpsed a hint of what Bourdain was losing during that first shoot. “It was a Sunday in Naples, and all the places we wanted to bring him were closed. Finally someone asked the driver, ‘Where are you eating?’ And he said, ‘My mom’s house.’ So we all went there, to the driver’s mom’s house, this tiny apartment in the historic part of town. Tony came over when lunch was ready, and stayed for three hours. She made ragù. We had been eating in these fantastic restaurants up and down the beautiful Amalfi Coast. But that was the happiest I saw him.”

In 2012, Bourdain announced he was moving from the Travel Channel to CNN to launch Parts Unknown. By all accounts, he was giddily excited about the opportunities the new show and the network’s resources would afford him; within the first few years, he would shoot episodes in Libya, Tanzania, and Iran. But even to a new fixer such as Alex Roa, a local producer who worked with Bourdain on shoots in Mexico City, Oaxaca, and Cuernavaca in 2014, it was evident that the demands—and the constant attention—were weighing on him. “I think it was not only the demands of the job, but also the intensity of it, the constant traveling and being away—in that moment—from his daughter,” says Roa. “Every episode demanded so much of him, because that was the way he was.”

By then, the eating was the least of it. “He told me that food is just a way to get into people’s bodies and minds,” Roa recalls. “It was a way to talk to someone, to get them to go deeper.” The more superficial food-porn stuff was losing its allure. In Oaxaca, when a director wanted to shoot Tony buying and eating tamales, he was frustrated, Roa says. “He just said, ‘That’s horrible. Do you know how many times I’ve done this before?’” In Mexico City a chef at the Four Seasons hotel where Bourdain was staying so wanted to cook for him that he sent word he was going to close a room of the restaurant for him; Bourdain’s response, according to Roa, was a polite but conversation-ending “No thanks.”

Were the fame, the pressure, and weariness from all that travel—and all that food—getting to him? Bourdain remained the consummate professional. “We had to ask his driver to delay and make detours so that he wouldn’t show up too early,” the fixer says. But he didn’t seem to be having as much fun. “He only went out with us one night during the whole 10 days,” Roa recalls. “Otherwise, he would just show up for a call, do the shoot, and go straight back to the hotel. He’d stay in and order room service.”

by Lisa Abend, Vanity Fair |  Read more:
Image: William Mebane

Should I do a PhD?

There are lots of good reasons for deciding to do a PhD. Deepening your knowledge of a subject you love is an excellent one. Wondering what to do with the next three years of your life and finding out your university will pay you to stay isn’t so bad either. But seeing it as a fast track to a cushy academic job probably shouldn’t be one of them.

PhDs are often glamourised in popular culture. If you grew up watching Friends, you might recall Ross Geller celebrating getting tenure at New York University. Getting tenure in a US university means you are virtually impossible to fire. Your university trusts in your intellectual brilliance to the extent that it’s willing to give you total academic freedom to research what you want. In short, it sounds like a dream.

Unfortunately, that’s exactly what it is. If Ross were a real person and not a fictional character, he wouldn’t have been celebrating getting tenure at about 30 years old – unless he were a palaeontology prodigy. Instead, he’d be on his first or second postdoc, possibly in underpaid, insecure employment. He would also probably be so busy writing research grant applications he’d have no time to hang around in a coffee shop. If – in a decade’s time – he eventually secured a permanent academic position, he’d be one of only 3.5% of his science PhD cohort who did.

The problem with the academic dream is that the pipeline is broken. Employing lots of PhD students is a great deal for universities – they’re a source of inexpensive academic labour for research and teaching. But it’s not such a great deal for the students themselves. The oversupply of PhDs perpetuates the illusion that there are a lot of academic jobs around. There aren’t – and competition for the few that there are is fierce.

The oversupply of early career researchers means they often feel exploited by their universities. According to the University and College Union, which represents lecturers, more than three-quarters of junior academics are on precarious or zero-hours contracts. Meanwhile, competition for research funding and power-imbalanced relationships between supervisors and junior researchers can make labs and libraries ripe for bullying.

The result, according to recent research from the Royal Society and the Wellcome Trust, is that academia is one of the worst careers for stress. Nearly four in 10 academics have reported experiencing mental health conditions.

So why do so many intelligent people, who would probably do fantastically well in alternative careers, put themselves through this? Because being an academic can be one of the world’s best jobs. You might get to push the boundaries of knowledge in an area you’re passionate about, work in international teams comprising the world’s greatest minds, and produce work with visible social impact – whether that’s through lecturing students or seeing your research inform policy.

But is it worth it for the majority of PhD students, who’ll never become academics? In some countries, such as the US and Germany, PhDs are increasingly seen not just as a conveyor belt to an academic job, but as an important high-level qualification that leads to a diverse range of careers. In certain industries in the UK, such as science and pharmaceuticals, demand for PhD graduates is growing as their emphasis on research increases.

But at present, a PhD qualification isn’t essential for most jobs. In some industries, a PhD might even set you back, as business leaders often see them as driving a largely pointless three-year wedge between an undergraduate degree and an entry-level position. This is often compounded by unhelpful careers advice from academic supervisors uninterested in the world outside academia.

But doing a PhD in most cases might not hinder your career either. And, if you’re an undergraduate, you certainly won’t be the only one to drift into a three-year stipend while you work out what comes next. Even if you’re not willing to slog it out in pursuit of a professorship, there’s evidence of an earnings premium, in some subjects more than others. In 2010, 3.5 years after graduation, 72% of doctoral graduate respondents were earning more than £30,000 compared with 22% of first-degree graduates.

by Rachel Hall, The Guardian | Read more:
Image: Alamy

Elizabeth Warren Has a Plan to Save Capitalism

Elizabeth Warren has a big idea that challenges how the Democratic Party thinks about solving the problem of inequality.

Instead of advocating for expensive new social programs like free college or health care, she’s introducing a bill Wednesday, the Accountable Capitalism Act, that would redistribute trillions of dollars from rich executives and shareholders to the middle class — without costing a dime.

Warren’s plan starts from the premise that corporations that claim the legal rights of personhood should be legally required to accept the moral obligations of personhood.

Traditionally, she writes in a companion op-ed for the Wall Street Journal, “corporations sought to succeed in the marketplace, but they also recognized their obligations to employees, customers and the community.” In recent decades they stopped, in favor of a singular devotion to enriching shareholders. And that’s what Warren wants to change.

The new energy on the left is all about making government bigger and bolder, an ideal driven by a burgeoning movement toward democratic socialism. It’s inspired likely 2020 Democratic contenders to draw battle lines around how far they’d go to change the role of government in American life.

Warren supports expanding many of the programs in play, and she’s voted to do so. But the rollout of her bill suggests that as she weighs whether to get into the presidential race, she’ll focus on how to prioritize workers in the American economic system while leaving businesses as the primary driver of it.

Warren wants to eliminate the huge financial incentives that entice CEOs to flush cash out to shareholders rather than reinvest in businesses. She wants to curb corporations’ political activities. And for the biggest corporations, she’s proposing a dramatic step that would ensure workers and not just shareholders get a voice on big strategic decisions.

Warren hopes this will spur a return to greater corporate responsibility, and bring back some other aspects of the more egalitarian era of American capitalism post-World War II — more business investment, more meaningful career ladders for workers, more financial stability, and higher pay.

As much as Warren’s proposal is about ending inequality, it’s also about saving capitalism.

The Accountable Capitalism Act — real citizenship for corporate persons

The conceit tying together Warren’s ideas is that if corporations are going to have the legal rights of persons, they should be expected to act like decent citizens who uphold their fair share of the social contract and not act like sociopaths whose sole obligation is profitability — as is currently conventional in American business thinking.

Warren wants to create an Office of United States Corporations inside the Department of Commerce and require any corporation with revenue over $1 billion — only a few thousand companies, but a large share of overall employment and economic activity — to obtain a federal charter of corporate citizenship.

The charter tells company directors to consider the interests of all relevant stakeholders — shareholders, but also customers, employees, and the communities in which the company operates — when making decisions. That could concretely shift the outcome of some shareholder lawsuits but is aimed more broadly at shifting American business culture out of its current shareholders-first framework and back toward something more like the broad ethic of social responsibility that took hold during WWII and continued for several decades.

Business executives, like everyone else, want to have good reputations and be regarded as good people but, when pressed about topics of social concern, frequently fall back on the idea that their first obligation is to do what’s right for shareholders. A new charter would remove that crutch, and leave executives accountable as human beings for the rights and wrongs of their own decisions.

More concretely, United States Corporations would be required to allow their workers to elect 40 percent of the membership of their board of directors.

Warren also tacks on a couple of more modest ideas. One is to limit corporate executives’ ability to sell shares of stock that they receive as pay — requiring that such shares be held for at least five years after they were received, and at least three years after a share buyback. The aim is to disincentivize stock-based compensation in general as well as the use of share buybacks as a tactic for executives to maximize their own pay.

The other proposal is to require corporate political activity to be authorized specifically by both 75 percent of shareholders and 75 percent of board members (many of whom would be worker representatives under the full bill), to ensure that corporate political activity truly represents a consensus among stakeholders, rather than C-suite class solidarity.

It’s easy to imagine the restrictions on corporate political activity and some curbs on stock sales shenanigans becoming broad consensus points for congressional Democrats, and even part of a 2019 legislative agenda if the midterms go well. But the bigger ideas about corporate governance would be a revolution in American business practice to undo about a generation’s worth of shareholder supremacy.

The rise of shareholder capitalism

The conceptual foundations of the current version of American capitalism are found in Milton Friedman’s well-titled 1970 New York Times Magazine article “The Social Responsibility of Business Is to Increase its Profits.”

Friedman meant this provocative thesis quite literally. In his view, which has since become the dominant perspective in American law and finance, corporate shareholders should be understood to own the company and its executives should be seen as their hired help. The shareholders, as individuals, can obviously have a variety of goals they favor in life. But their common goal is to maximize the value of their shares.

Therefore, for executives to set aside shareholder profits in pursuit of some other goal like environmental protection, racial justice, community stability, or simple common decency would be a form of theft. If reformulating your product to be more addictive or less healthy increases sales, then it’s not only permissible but actually required to do so. If closing a profitable plant and outsourcing the work to a low-wage country could make your company even more profitable, then it’s the right thing to do.

Friedman allows that executives are obligated to follow the law — an important caveat — establishing a conceptual framework in which policy goals should be pursued by the government, while businesses pursue the prime business directive of profitability.

One important real-world complication that Friedman’s article largely neglects is that business lobbying does a great deal to determine what the laws are. It’s all well and good, in other words, to say that businesses should follow the rules and leave worrying about environmental externalities up to the regulators. But in reality, polluting companies invest heavily in making sure that regulators underregulate — and it seems to follow from the doctrine of shareholder supremacy that if lobbying to create bad laws is profitable for shareholders, corporate executives are required to do it.

On the flip side, an investor-friendly policy regime was supposed to supercharge investment, creating a more prosperous economy for everyone. The question is whether that’s really worked out.

The economics of shareholder supremacy

(...) Since 80 percent of the value of the stock market is owned by about 10 percent of the population and half of Americans own no stock at all, this has been a huge triumph for the rich. Meanwhile, CEO pay has soared as executive compensation has been redesigned to incentivize shareholder gains, and the CEOs have delivered. Gains for shareholders and greater inequality in pay have led to a generation of median compensation lagging far behind economy-wide productivity, with higher pay mostly captured by a relatively small number of people rather than being broadly shared.

Investment, however, has not soared. In fact, it’s stagnated.

Whether one sees this as a cause or a consequence of poor growth outcomes is up for debate, but the Warren view is that fundamentally, shareholder supremacy is a cause of poor economic performance by starving the business sector of funds that would otherwise be used to invest in equipment or training or simply to pay people more and increase their purchasing power.

But while on an optimistic view, stakeholder capitalism would produce stronger long-run growth and higher living standards for the vast majority of the population, there’s no getting around the fact that Warren’s proposal would be bad — really bad — for rich people. That’s a fight her team says she welcomes. (...)

In exchange, the laboring majority would make important gains.

Most obviously, the large share of the private sector workforce that is employed by companies with more than $1 billion in revenue would gain a measure of democratic control over the future of their workplace. That wouldn’t make tough business decisions around automation, globalization, scheduling, family responsibilities, etc. go away, but it would ensure that the decisions are made with a balanced set of interests in mind.

Studies from Germany’s experience with codetermination indicate that it leads to less short-termism in corporate decision-making and much higher levels of pay equality, while other studies demonstrate positive results on productivity and innovation.

One intuitive way of thinking about the proposal is that under the American system of shareholder supremacy, an executive increases his pay by finding ways to squeeze workers as hard as possible — kicking out the surplus to shareholders and then watching his stock-linked compensation soar. That’s brought America to the point where CEOs make more than 300 times as much as rank-and-file workers at big companies.

by Matthew Yglesias, Vox |  Read more:
Image: Chip Somodevilla/Getty Images
[ed. I can't wait to give her my vote.]

Tuesday, August 14, 2018

Pat Metheny (feat. Anna Maria Jopek and Pedro Aznar)


What the Year 2050 Has in Store for Humankind

Part one: Change is the only constant

Humankind is facing unprecedented revolutions, all our old stories are crumbling and no new story has so far emerged to replace them. How can we prepare ourselves and our children for a world of such unprecedented transformations and radical uncertainties? A baby born today will be thirty-something in 2050. If all goes well, that baby will still be around in 2100, and might even be an active citizen of the 22nd century. What should we teach that baby that will help him or her survive and flourish in the world of 2050 or of the 22nd century? What kind of skills will he or she need in order to get a job, understand what is happening around them and navigate the maze of life?

Unfortunately, since nobody knows how the world will look in 2050 – not to mention 2100 – we don’t know the answer to these questions. Of course, humans have never been able to predict the future with accuracy. But today it is more difficult than ever before, because once technology enables us to engineer bodies, brains and minds, we can no longer be certain about anything – including things that previously seemed fixed and eternal.

A thousand years ago, in 1018, there were many things people didn’t know about the future, but they were nevertheless convinced that the basic features of human society were not going to change. If you lived in China in 1018, you knew that by 1050 the Song Empire might collapse, the Khitans might invade from the north, and plagues might kill millions. However, it was clear to you that even in 1050 most people would still work as farmers and weavers, rulers would still rely on humans to staff their armies and bureaucracies, men would still dominate women, life expectancy would still be about 40, and the human body would be exactly the same. Hence in 1018, poor Chinese parents taught their children how to plant rice or weave silk, and wealthier parents taught their boys how to read the Confucian classics, write calligraphy or fight on horseback – and taught their girls to be modest and obedient housewives. It was obvious these skills would still be needed in 1050.

In contrast, today we have no idea how China or the rest of the world will look in 2050. We don’t know what people will do for a living, we don’t know how armies or bureaucracies will function, and we don’t know what gender relations will be like. Some people will probably live much longer than today, and the human body itself might undergo an unprecedented revolution thanks to bioengineering and direct brain-computer interfaces. Much of what kids learn today will likely be irrelevant by 2050.

At present, too many schools focus on cramming information. In the past this made sense, because information was scarce, and even the slow trickle of existing information was repeatedly blocked by censorship. If you lived, say, in a small provincial town in Mexico in 1800, it was difficult for you to know much about the wider world. There was no radio, television, daily newspapers or public libraries. Even if you were literate and had access to a private library, there was not much to read other than novels and religious tracts. The Spanish Empire heavily censored all texts printed locally, and allowed only a dribble of vetted publications to be imported from outside. Much the same was true if you lived in some provincial town in Russia, India, Turkey or China. When modern schools came along, teaching every child to read and write and imparting the basic facts of geography, history and biology, they represented an immense improvement.

In contrast, in the 21st century we are flooded by enormous amounts of information, and even the censors don’t try to block it. Instead, they are busy spreading misinformation or distracting us with irrelevancies. If you live in some provincial Mexican town and you have a smartphone, you can spend many lifetimes just reading Wikipedia, watching TED talks, and taking free online courses. No government can hope to conceal all the information it doesn’t like. On the other hand, it is alarmingly easy to inundate the public with conflicting reports and red herrings. People all over the world are but a click away from the latest accounts of the bombardment of Aleppo or of melting ice caps in the Arctic, but there are so many contradictory accounts that it is hard to know what to believe. Besides, countless other things are just a click away, making it difficult to focus, and when politics or science look too complicated it is tempting to switch to funny cat videos, celebrity gossip or porn.

In such a world, the last thing a teacher needs to give her pupils is more information. They already have far too much of it. Instead, people need the ability to make sense of information, to tell the difference between what is important and what is unimportant, and above all to combine many bits of information into a broad picture of the world.

In truth, this has been the ideal of western liberal education for centuries, but up till now even many western schools have been rather slack in fulfilling it. Teachers allowed themselves to focus on shoving data while encouraging pupils “to think for themselves”. Due to their fear of authoritarianism, liberal schools had a particular horror of grand narratives. They assumed that as long as we give students lots of data and a modicum of freedom, the students will create their own picture of the world, and even if this generation fails to synthesise all the data into a coherent and meaningful story of the world, there will be plenty of time to construct a good synthesis in the future. We have now run out of time. The decisions we will take in the next few decades will shape the future of life itself, and we can take these decisions based only on our present world view. If this generation lacks a comprehensive view of the cosmos, the future of life will be decided at random.

Part two: The heat is on

Besides information, most schools also focus too much on providing pupils with a set of predetermined skills such as solving differential equations, writing computer code in C++, identifying chemicals in a test tube or conversing in Chinese. Yet since we have no idea how the world and the job market will look in 2050, we don’t really know what particular skills people will need. We might invest a lot of effort teaching kids how to write in C++ or how to speak Chinese, only to discover that by 2050 AI can code software far better than humans, and a new Google Translate app enables you to conduct a conversation in almost flawless Mandarin, Cantonese or Hakka, even though you only know how to say “Ni hao”.

So what should we be teaching? Many pedagogical experts argue that schools should switch to teaching “the four Cs” – critical thinking, communication, collaboration and creativity. More broadly, schools should downplay technical skills and emphasise general-purpose life skills. Most important of all will be the ability to deal with change, to learn new things and to preserve your mental balance in unfamiliar situations. In order to keep up with the world of 2050, you will need not merely to invent new ideas and products – you will above all need to reinvent yourself again and again.

For as the pace of change increases, not just the economy, but the very meaning of “being human” is likely to mutate. In 1848, the Communist Manifesto declared that “all that is solid melts into air”. Marx and Engels, however, were thinking mainly about social and economic structures. By 2048, physical and cognitive structures will also melt into air, or into a cloud of data bits.

In 1848, millions of people were losing their jobs on village farms, and were going to the big cities to work in factories. But upon reaching the big city, they were unlikely to change their gender or to add a sixth sense. And if they found a job in some textile factory, they could expect to remain in that profession for the rest of their working lives.

By 2048, people might have to cope with migrations to cyberspace, with fluid gender identities, and with new sensory experiences generated by computer implants. If they find both work and meaning in designing up-to-the-minute fashions for a 3D virtual-reality game, within a decade not just this particular profession, but all jobs demanding this level of artistic creation might be taken over by AI. So at 25, you introduce yourself on a dating site as “a twenty-five-year-old heterosexual woman who lives in London and works in a fashion shop.” At 35, you say you are “a gender-non-specific person undergoing age-adjustment, whose neocortical activity takes place mainly in the NewCosmos virtual world, and whose life mission is to go where no fashion designer has gone before”. At 45, both dating and self-definitions are so passé. You just wait for an algorithm to find (or create) the perfect match for you. As for drawing meaning from the art of fashion design, you are so irrevocably outclassed by the algorithms, that looking at your crowning achievements from the previous decade fills you with embarrassment rather than pride. And at 45, you still have many decades of radical change ahead of you.

Please don’t take this scenario literally. Nobody can really predict the specific changes we will witness. Any particular scenario is likely to be far from the truth. If somebody describes to you the world of the mid-21st century and it sounds like science fiction, it is probably false. But then if somebody describes to you the world of the mid-21st century and it doesn’t sound like science fiction – it is certainly false. We cannot be sure of the specifics, but change itself is the only certainty.

Such profound change may well transform the basic structure of life, making discontinuity its most salient feature. From time immemorial, life was divided into two complementary parts: a period of learning followed by a period of working. In the first part of life you accumulated information, developed skills, constructed a world view, and built a stable identity. Even if at 15 you spent most of your day working in the family’s rice field (rather than in a formal school), the most important thing you were doing was learning: how to cultivate rice, how to conduct negotiations with the greedy rice merchants from the big city and how to resolve conflicts over land and water with the other villagers. In the second part of life you relied on your accumulated skills to navigate the world, earn a living, and contribute to society. Of course, even at 50 you continued to learn new things about rice, about merchants and about conflicts, but these were just small tweaks to well-honed abilities.

By the middle of the 21st century, accelerating change plus longer lifespans will make this traditional model obsolete. Life will come apart at the seams, and there will be less and less continuity between different periods of life. “Who am I?” will be a more urgent and complicated question than ever before.

by Yuval Noah Harari, Wired |  Read more:
Image: Britt Spencer
[ed. See also: Get with the Programme]

Monday, August 13, 2018

Broken Time


It was supposed to be the best day of Richard “Blue” Mitchell’s life, but June 30, 1958, turned out to be one of the worst. The trumpeter had been summoned to New York City from Miami for a recording session with Julian “Cannonball” Adderley, an old friend who was being hailed as the hottest alto sax player since Charlie Parker.

But things started going wrong even before Mitchell arrived at Reeves Sound Studios on East Forty-Fourth Street. First, his luggage went astray en route from Florida. Then there was a surprise waiting for him in the control room: Miles Davis, one of his musical heroes, who had taken the extraordinary step of composing a new melody as a gift to Cannonball. Mitchell was supposed to play Miles’s part.

That wasn’t going to be easy, because the tune, called “Nardis,” was anything but a standard workout on blues-based changes. The melody had a haunting, angular, exotic quality, like the “Gypsy jazz” that guitarist Django Reinhardt played with the Hot Club de France in the 1930s. And it didn’t exactly swing, but unfurled at its own pace, like liturgical music for some arcane ritual. For three takes, the band diligently tried to make it work, but Mitchell couldn’t wrap his head around it, particularly under Miles’s intimidating gaze. The producer of the session, legendary Riverside Records founder Orrin Keepnews, ended up scrapping the night’s performances entirely.

The next night was more productive. After capturing tight renditions of “Blue Funk” and “Minority,” the quintet took two more passes through “Nardis,” yielding a master take for release, plus a credible alternate. But the arrangement still sounded stiff, and the horns had a pinched, sour tone.

Only one man on the session, Miles would say later, played the tune “the way it was meant to be played.” It was the shy, unassuming piano player, not yet twenty-eight years old. His name was Bill Evans.

And that might have been the end of “Nardis.” Miles never recorded the tune himself—the fate suffered by another of his originals, “Mimosa,” recorded once by Herbie Hancock and never heard from again. In this case, however, the lack of a definitive performance by the composer created a kind of musical vacuum that other players have hastened to fill. Despite its inauspicious debut, the tune has become one of the most frequently recorded modern jazz standards, played in an impressive variety of settings ranging from piano trios, to Latin jazz combos, to ska-jazz ensembles, to a full orchestra featuring players from the US Air Force. For some musicians, “Nardis” becomes an object of fascination—an earworm that can be expelled only by playing it.

Though superb versions of “Nardis” have been recorded by everyone from tenor sax titan Joe Henderson to bluegrass guitar virtuoso Tony Rice, no one embodied its melodic potential more than Bill Evans. For him, Miles’s serpentine melody was a terrain he never tired of exploring. For more than twenty years, Evans played it nearly every night with his trios, often as the show-stopping climax of the second set. Indeed, he became so closely associated with the tune that some of his fans dispute that Miles actually wrote it, insisting that Evans deserves the credit. It’s certainly true that “Nardis” radically evolved over the course of Evans’s career, morphing into new forms, reinventing itself, and achieving new levels of poignancy as it became inextricably entwined with the arc of Evans’s turbulent life.

For this listener, “Nardis” has become a full-on musical obsession. I have more than ninety official and bootleg recordings of the tune stored in the cloud, ranked in a fluid and continually updated order of preference, so they follow me wherever I go. In my travels as a writer, I use “Nardis” as a litmus test of musical competence: if I see a jazz band in a bar or a busker taking requests, I inevitably suggest it. (If they’ve never heard of it, I understand that they must be new at this game.) By now I’ve heard so many different interpretations, in such a far-flung variety of settings, that a Platonic ideal of the melody resides in my mind untethered to any actual performance. It’s as if “Nardis” were always going on somewhere, with players dropping in and out of a musical conversation beyond space and time.

Evans once told a friend that a musician should be able to maintain focus on a single tone in his mind for at least five minutes—and in playing like this, he achieved a nearly mystical immersion in the music: a state of pure, undistracted concentration. Even before writers like Jack Kerouac and Gary Snyder made Buddhism a subject of popular fascination in America, Evans saw parallels between meditative practice and the keen, alert state that jazz improvisation demands, when years of work on perfecting tone and technique suddenly drop away and a direct channel opens up between the musician’s brain and his or her fingers. He listened to other pianists closely, but rather than imitate a player like Bud Powell, he would try to extract the essence of Powell’s approach and apply it to different types of material. “It’s more the mind ‘that thinks jazz’ than the instrument ‘that plays jazz’ which interests me,” Evans told an interviewer.

By maintaining a singularly intense focus on “Nardis” over the course of his career, Evans managed to turn the melody that had frustrated “Blue” Mitchell that night in 1958 into a vehicle for dependably accessing “the mind that thinks jazz,” like a homegrown form of meditation that could be performed on a piano bench before rapt audiences in clubs night after night. By bringing the story of Evans’s quest for a kind of jazz samadhi to light, I hope to understand the enduring hold that “Nardis” has on the ever-widening circle of musicians who play it, while reckoning with my own personal fixation.

Pale, bespectacled, and soft-spoken, Bill Evans looked more like a graduate student of theology than a hard-swinging jazzman. He was already working for Miles full-time on the night he recorded “Nardis” for Cannonball. He had been recommended for the job by George Russell, an avant-garde composer whose book of music theory, The Lydian Chromatic Concept of Tonal Organization, was a decisive influence on Miles’s modal conceptions of jazz in the late 1950s.

When Russell first mentioned Evans’s name, Miles asked, “Is he white?”

“Yeah,” Russell replied.

“Does he wear glasses?”

“Yeah.”

“I know that motherfucker,” Miles said. “I heard him at Birdland—he can play his ass off.” Indeed, the first time Evans played a beginner’s intermission set at the Village Vanguard—Max Gordon’s basement club, the Parnassus of jazz—the pianist was astonished to look up and see the legendary trumpeter standing there, listening intently.

After being invited to sit in with Miles’s sextet at a bar called the Colony Club in Bedford-Stuyvesant, Brooklyn, Evans got the gig, though he was in for several more rounds of hazing before being allowed to play alongside Cannonball Adderley, John Coltrane, Paul Chambers, Philly Joe Jones, and Miles himself, all at the peak of their powers. At one point, Miles, in his inimitably raspy voice, told the wan young pianist that to prove his devotion to the music, he would have to “fuck” his bandmates, “because we all brothers and shit.” Evans wandered off for fifteen minutes to entertain the possibility, before telling Miles that while he wanted to make everyone happy, he just couldn’t do it. The sly trumpeter grinned and said, “My man!”

Still, the ribbing continued. Miles would counter Evans’s musical suggestions by saying, “Man, cool it. We don’t want no white opinions.” At the same time, the trumpeter became the young pianist’s staunchest advocate, saying that he “played the piano the way it should be played,” and comparing his supremely expressive touch on the keys to “sparkling water cascading down from some clear waterfall.” He would sometimes call Evans and ask him to just set the handset down and leave the line open while Evans played piano at home.

But Miles’s hard-core fans continued to shun Evans. They saw a white nerd evicting the beloved Red Garland from the prestigious keyboard chair at a time when black pride and appreciation of jazz as a distinctively black cultural form were ascendant. For months, while his bandmates got thunderous ovations after solos, Evans got the silent treatment, which reinforced his self-doubt. In his eagerness to be regarded as an equal, he accepted a first fix of heroin from Philly Joe, whom Evans respected more than any drummer on earth. He also began dating a chic young black woman, Peri Cousins, for whom he wrote one of his sprightly early originals, “Peri’s Scope.” Cousins observed how quickly the drug filled a crucial role in Evans’s existence, providing a buffer between his acute sensitivity and the realities of life on the road. “When he came down, when he kicked it, which he did on numerous occasions, the world was—I don’t know how to say it—too beautiful,” she said. “It was too sharp for him. It’s almost as if he had to blur the world for himself by being strung out.”

On Kind of Blue, widely regarded as the greatest jazz recording ever made, Evans became a conduit of that unbearable beauty, mapping a middle path between Russell’s Lydian concepts, Miles’s unerring sense of swing, and the luminous romanticism of Ravel and Debussy. His leads on “So What” and “Flamenco Sketches” seem geological, like majestic cliff faces carved outside of time.

By the time he recorded the tracks on Kind of Blue, however, Evans had already decided to leave Miles’s band. After his baptism of fire on the road, he was physically, mentally, and spiritually exhausted, but he also felt more confident about pursuing his own vision. He had a specific goal in mind: achieving a level of communication in a piano trio that would enable all three players to make creative statements and respond to one another conversationally, without any of them being obliged to explicitly state the beat. This approach came to be known as “broken time,” because no player was locked into a traditional time-keeping role; instead the one was left to float, in an implied pulse shared by all the players. Evans compared broken time to the kind of typography in which the raised letters are visible only in the shadows they cast.

That kind of collective sympathy, akin to three-way telepathy, demanded major commitment from the trio, and required high levels of personal chemistry. Evans met the perfect fellow travelers in two young musicians named Scott LaFaro and Paul Motian.

Ironically, the two men came into Evans’s band on the wings of the worst gig the pianist had ever played, a three-week stand at Basin Street East, on East Forty-Ninth Street, working opposite Benny Goodman. The “King of Swing” was enjoying a revival of interest, and his band was getting the red-carpet treatment, with VIPs arriving in limousines and lavish champagne dinners on the house, while the members of Evans’s trio bought their own Cokes at the bar. Occasionally they’d play a set only to discover that their mikes had been turned off. Evans ran through a series of illustrious accompanists that month as each man decided he could no longer take the abuse. (Philly Joe split when the club owner told Evans to stop letting him take solos.) But when LaFaro and Motian sat in, Evans felt things start to click, and he would look back on the three-week ordeal as a karmic process of eliminating the wrong players from the trio.

Boyishly handsome, six years younger than the pianist, and confident to the point of arrogance, LaFaro was the brazen yang to Evans’s ascetic yin. He spent hours every day commandeering attic rooms and hotel basements to practice, and restrung his instrument with nylon-wrapped strings years before they became standard, which enabled him to get a guitar-like tone and articulation in the upper register. “He was freer than free jazz,” Ornette Coleman said. “Scotty was just a natural, played so naturally, had a love of creation. I’m not only talking about music, but being human. I would say he was closer to a mystic.” Like Evans, who pored over volumes by Plato, Thomas Merton, Jean-Paul Sartre, and Jiddu Krishnamurti at home, LaFaro was intuitively attracted to Zen. The two men spent hours discussing philosophy on the road.

Motian came of age providing a solid four-four foundation for classic horn men like Coleman Hawkins and Roy Eldridge, but had also proved his versatility and ability to handle advanced musical concepts by supporting avant-garde players like George Russell, Thelonious Monk, and Lennie Tristano. He could swing harder with a pair of brushes than most guys could with a whole kit, and he instantly gravitated toward the concept of broken time, which gave an unprecedented amount of expressive freedom to him and his bandmates. This style proved so influential that it has become nearly ubiquitous in jazz even outside of the piano-trio context, though it requires an intense level of dedication.

The enduring effect of Miles’s endorsement ensured Evans a steady stream of gigs, and the three men made a pact: no matter what opportunities came up, their primary commitment for the next phase of their lives would be to the trio.

A warbly-sounding bootleg reel recorded at Birdland in 1960 shows the Bill Evans Trio distilling “Nardis” down to its essence and making it swing. After Evans authoritatively states the theme, he plays rollicking variations on it with LaFaro and Motian close behind. Then LaFaro takes an astonishing lead, climbing the neck of his instrument to make those nylon strings ring. As each member of the trio explores the implications of the melody, the other players lay out or step in as appropriate, so that the whole trio becomes a unified organism, “thinking” jazz as naturally as breathing.

When Evans first met LaFaro, he said, “There was so much music in him, he had a problem controlling it… Ideas were rolling out on top of each other; he could barely handle it. It was like a bucking horse.” On the road with the trio, LaFaro would learn to handle that bucking horse without taming it, expanding the range of possibility for every bass player who followed. He attained such a level of rapport with Evans that tears would come to Motian’s eyes on the bandstand.

As word spread that something special was happening in Evans’s trio, their days of having to buy their own Cokes ended. They began appearing on bills with top-ranked groups, including Miles’s bands, and other musicians flocked to see them on off nights. Keepnews wrote of “a definite feeling in the air… the almost mystical aura that marks the arrival of an artist.”

The constant touring, however, was tough on the pianist, who developed chronic hepatitis in tandem with his raging addiction. The cover photographs on Evans’s LPs became a time lapse of his physical degeneration. On the back of Undercurrent, Evans is depicted urbanely perusing a score with guitarist Jim Hall, a Band-Aid on his right wrist marking the spot where his needle went awry.

Evans was a polite junkie. For decades, he kept tabs on how much money he owed various friends, and he always endeavored to pay them back, even if his benefactor had long forgotten the debt. But among the people disturbed by his accelerating decline was the fearlessly outspoken LaFaro, who had no problem confronting the pianist in the bluntest terms. “You’re fucking up the music,” he would say. “Look in the mirror!”

It was in this combative atmosphere that Evans made his second attempt to commit “Nardis” to vinyl, at Bell Sound Studios, on February 2, 1961, under Keepnews’s watchful eye. Though Keepnews gamely tried to keep everyone’s spirits up, the whole session seemed jinxed, with Evans and LaFaro openly arguing about the pianist’s drug use and Evans suffering a splitting headache. By the time the ordeal was over, both the players and the producer assumed that the tapes would be quietly filed away and never released. “We had a very, very bad feeling,” Evans recalled. “We felt there was nothing happening.”

Listening back, however, everyone was shocked to discover how well the trio had played. Upon the album’s release, Explorations was hailed by critics for its bold, unsentimental reinvention of well-worn standards like “Sweet and Lovely” and “How Deep Is the Ocean,” the dynamism of the group’s interactions, and the sublime sensitivity of Evans’s phrasing and voicings. Humbled by the inadequacy of his own ability to judge how well the session had gone, Evans began to think of “the mind that thinks jazz” as something larger than the consciousness of any individual musician, as if the music organized itself at a higher order of awareness that wasn’t always discernible to the players. The rendition of “Nardis” that appears on the album, a refinement of the arrangement that the trio had been playing on the road, became the default canonical version in the absence of a Miles original—the basis for twenty years of Evans’s performances, and for hundreds of interpretations by others.

Ben Sidran once suggested, somewhat implausibly, that Miles told him that the name of the tune had something to do with nuclear energy. Others have suggested that it’s just a sound, like Charlie Parker’s “Klactoveedsedstene.” Perhaps the most amusing explanation (though it’s almost certainly apocryphal) was offered by bassist Bill Crow in Ted Gioia’s compendium The Jazz Standards. One night when Evans was playing with Miles, Crow reported, a fan requested a tune that the pianist felt was beneath him. “I don’t play that crap,” Evans replied. “I’m an artist”—with Evans’s nasal New Jersey drawl doing the work of eliding the phrase into the song’s cryptic title. (...)

Unlike most of the acts in Keepnews’s stable at Riverside, the young Evans often resisted his producer’s promptings to make a new record, feeling that he had nothing new to say. (That would change later in his career when loan sharks threatened to break his fingers.) But perhaps sensing that the trio had attained an extraordinary level of empathy, the pianist agreed to allow an engineer to record the output of the whole last day of a two-week run, five sets in total, running the gamut from Evans’s unbearably poignant exploration of classic ballads like “I Loves You, Porgy” and “My Man’s Gone Now” to cooking modal workouts on angular modern tunes like “Solar” and “Milestones.” LaFaro’s vibrant declarations danced around Evans’s richly harmonized lines as Motian kept the whole thing swinging with subtle shadings and accents. (...)

For Evans, wrote jazz critic Whitney Balliett, improvisation was “a contest between his intense wish to practice a wholly private, inner-ear music and an equally intense wish to express his jubilation at having found such a music within himself.” In the trio with LaFaro and Motian, Evans finally found the support he needed to take that music as deep as he felt it to be, and by doing so deepened the subjective possibilities of jazz itself.

Ten nights later, after a few beers, LaFaro and a friend decided it was worth driving eighty miles to Geneva, New York, where another friend had a good stereo. For the next several hours they drank coffee and listened to records, including Bartók’s “The Miraculous Mandarin” and the trio’s Explorations, which LaFaro was proud to show off.

After hearing Chet Baker sing his mournful “Grey December,” LaFaro remarked that Baker had it all—talent, movie-star looks, recording contracts—but because of his addiction, he had ended up another jazz casualty, his teeth knocked out while attempting to buy drugs in Sausalito, ruining his embouchure. LaFaro called Baker “an American tragedy.” The young bass player and his friend were invited to spend the night, but they turned down the invitation, having chores at home.

Driving east on Route 5-20, LaFaro fell asleep at the wheel. The car careened onto the shoulder, struck a tree, and burst into flames. The twenty-five-year-old bassist and his friend died instantly.

by Steve Silberman, The Believer |  Read more:
Video: YouTube
[ed. Nice to see this retrospective, but sad also. I was deeply into jazz for a long time and Bill Evans was (and still is) my favorite (especially, The Village Vanguard Sessions). Also, Pat Metheny, Joe Pass (who I got to chat with after setting up his stage), McCoy Tyner and of course, the Duke (Ellington). See also: Peri's Scope]

Gilberto Gil and Jorge Ben

Sunday, August 12, 2018

For Voters Sick of Money in Politics, a New Pitch: No PAC Money Accepted

Like many political candidates, Dean Phillips spends hours each day fund-raising and thanking his donors. But because he refuses to accept PAC money from corporations, unions or other politicians, he has adopted a unique approach.

“Norbert?” he asked on the doorstep of a man who’d donated $25 to his campaign. “I’m here with goodies!”

Mr. Phillips, who is running for Congress in the suburbs of Minneapolis, handed over a gift bag containing a T-shirt and bumper sticker. The exchange was recorded in a video that was shared later with his supporters to encourage them to contribute as well. Norbert Gernes, an 80-year-old retiree, was impressed.

“We desperately need to get the money out of the political system,” he said in an interview afterward. “Because I don’t think we have a Congress that’s representing the people any more.”

Campaign finance was once famously dismissed by Mitch McConnell, the Senate majority leader, as being of no greater concern to American voters than “static cling.” But since the Supreme Court’s Citizens United decision in 2010 opened the floodgates for unrestricted political spending, polls have shown that voters are growing increasingly bitter about the role of money in politics.

The issue is now emerging in midterm races around the country, with dozens of Democrats rejecting donations from political action committees, or PACs, that are sponsored by corporations or industry groups. A handful of candidates, including Mr. Phillips, are going a step further and refusing to take any PAC money at all, even if it comes from labor unions or fellow Democrats.

Rather than dooming the campaigns, these pledges to reject PAC money have become central selling points with voters. And for some of the candidates, the small donations are adding up.

In Minnesota, Mr. Phillips, a Democrat, has raised more than $2.3 million, 99 percent of it from individuals, and has used his no-PAC-money pledge to mount a formidable challenge in a district that Republicans have held since 1961. His opponent, Representative Erik Paulsen, who sits on the powerful House Ways and Means Committee, has raised $3.6 million, more than half of it from PACs.

In Texas, Representative Beto O’Rourke, a Democrat running to unseat Senator Ted Cruz, has raised more than $23 million in this election cycle — considerably more than Mr. Cruz — without accepting any PAC money.

“It’s a major theme of the campaign,” said Chris Evans, Mr. O’Rourke’s communications director. “People want to know that you are going to respond to them and their interests, and not the most recent check you received.”

In Pennsylvania, Conor Lamb, a Democrat who pledged not to take corporate PAC money, eked out a victory in a special election in March in a district that President Trump won by 20 points in 2016. In Ohio, another Democrat running in a red district, Daniel O’Connor, made the same pledge, and performed so well in a special election earlier this month that the race is still too close to call.

A recent Pew report found that 75 percent of the public said “there should be limits on the amount of money individuals and organizations” can spend on political campaigns.

“Poll after poll is showing that money in politics has more traction today than it has had in my lifetime,” said Meredith McGehee, executive director of Issue One, a nonpartisan advocacy group concerned with ethics and accountability, who has been working on the campaign finance issue for decades.

Under current federal rules, a candidate’s campaign cannot accept more than $2,700 from any individual donor or $5,000 from any single PAC. Groups known as Super PACs, however, can legally receive and spend unlimited amounts to influence a race, as long as they do not coordinate their activity directly with a candidate’s campaign. (...)

Candidates can do this in part because of a sharp rise in giving by small donors.

In the last midterm election year, 2014, some 1.5 million small donors contributed a total of $335 million to Democratic campaigns across the country through ActBlue, an online platform that raises money for Democrats. This time around, about 3.8 million small donors have already contributed more than $1 billion, and are on pace to exceed $1.5 billion before Election Day in November, according to Erin Hill, ActBlue’s executive director. The average donation is $33.85.

That’s good news for Mr. Phillips in Minnesota, who has staked his candidacy on the proposition that voters care about who he takes money from.

On the campaign trail, he ties nearly every issue back to campaign finance. When people complain to him about the high cost of drugs or health care, he tells them that corporate influence is to blame.

“Can anybody guess how much the big pharma industry spent on lobbying last year?” he asked a group of small business owners at a round-table discussion. “Take a guess.”

He answered his own question. “Two hundred and forty million,” he said, adding: “They all but pooh-poohed any legislation that would allow Medicare to negotiate prescription drug prices.”

His message is particularly potent because his opponent, Mr. Paulsen, has taken in the sixth-largest haul from PACs out of the 435 members of the House of Representatives, according to the Center for Responsive Politics. (...)

Mr. Paulsen’s campaign has tried to make an issue of Mr. Phillips’ wealth.

“Dean Phillips is a hypocrite spending his vast inherited wealth on his campaign, which he’s padded with investments in the very things he campaigns against,” said John-Paul Yates, Mr. Paulsen’s campaign manager.

According to Federal Election Commission filings, Mr. Phillips has contributed less than $6,000 of his own money to the campaign, and given less than $30,000 worth of in-kind donations, including the use of a pontoon boat for campaigning on Lake Minnetonka.

Mr. Phillips says his family fortune is what opened his eyes to the way money influences politics, after he began hearing from candidates who were eager to enlist him as a major donor.

“I watched the Hillary Clinton campaign, and recognized that it was so predicated on spending time with wealthy donors and not spending time in middle-class neighborhoods and rural areas,” he said.

Don Kuster, who said he has ticked the box for Mr. Paulsen in every previous election, now volunteers for Mr. Phillips’s campaign. He drives the pontoon boat and has held a meet-the-candidate party at his home, which was attended by about sixty Republicans.

“I asked him ‘What’s your thing?’ and he said, ‘Campaign finance reform,’” Mr. Kuster recalled from his first conversation with Mr. Phillips. “He said, ‘I’m not taking any PAC money. I’m not taking it from the Sierra Club. I’m not taking money from Planned Parenthood. I want to be able to make my own decisions.’ I thought, ‘Ok, that’s something I can support.’”

by Farah Stockman, NY Times | Read more:
Image: Jenn Ackerman

Google and the Resurgence of Italian Design

Once upon a time, we had products that were colorful, in shapes that were quirky, whimsical, and expressive. Interesting! And then, almost every tech product became white, silver, gray, black, flat, square, round, and minimalist. Boring.

But there are hints that this is changing. And one of the leaders of this change is, somewhat improbably, Google.

Anybody who’s been paying attention knows that Google has been pretty serious about hardware for a while, and at least as serious about doing quality design in hardware, software, and overall experiences. What fascinates me is the unexpectedly expressive direction the company has taken with its design language.

Google’s emerging design language (obviously still a work in progress) is reminiscent of the Italian branch of industrial design, which we haven’t seen much of in the last two or three decades. Instead, the German branch has dominated. It’s characterized by clean geometric shapes (cubes, cylinders), white and black glossy colors, and smooth unadorned surfaces. Think Bauhaus and — especially — Braun.

It’s a cliché at this point to draw parallels between Braun and Apple’s design languages. But nevertheless, aside from its own detour into the more expressive realm of Italian-inspired design in the late 1990s and early 2000s (original iMac, toilet-seat-shaped original iBook, etc.), Apple has hugely influenced other tech companies to follow the minimalist, German-flavored tradition.

This isn’t a knock against the Braun style — there are many beautiful products that have sprung from that well. It’s just that diversity is the spice of life, and tech products have become too much of a monoculture, stylistically. (...)

With Google’s launch of the Pixel 2 phone, wireless earbuds, VR headsets, Clips camera, and Home products, I was delighted to see touches of color and form that can clearly be traced back to the Italian branch of design, circa 1960s and ’70s. In particular, the pioneering company Olivetti, best known for its famous Valentine typewriter, but which had an incredible run of groundbreaking designs, including the world’s first programmable desktop computer (the Programma 101 of 1964).


In this same vein, I was struck by the forms, the playful use of color, and the novel material choices on Google’s new Daydream View VR headset. It’s an even more intimate product in that it mounts to your head and creates a new world in front of your eyes. So it’s only appropriate that it takes on a feel of fashion and clothing, bringing a new soft material to a technological object and heathered colors that are a stark contrast to the edgy gaming aesthetic that dominates VR. It almost announces “This is VR for the rest of us.”

Google’s Home products, which debuted in 2016, began the motif of cloth-covered surfaces that the Daydream extends. The new Home update continues that motif, with bolder use of colors, and a new “fun-size” puck version. Early on there was some joking that Home looks like an air freshener, but it was clearly all part of the plan to make the product blend into the home and stick out less like a sore thumb.

by Adam Richardson, Medium |  Read more:
Images: Olivetti/Google
[ed. As long as it doesn't descend into kitsch, a fine line.]