Thursday, September 20, 2018

Life in the Spanish City That Banned Cars

People don’t shout in Pontevedra – or they shout less. With all but the most essential traffic banished, there are no revving engines or honking horns, no metallic snarl of motorbikes or the roar of people trying to make themselves heard above the din – none of the usual soundtrack of a Spanish city.

What you hear in the street instead are the tweeting of birds in the camellias, the tinkle of coffee spoons and the sound of human voices. Teachers herd crocodiles of small children across town without the constant fear that one of them will stray into traffic.

“Listen,” says the mayor, opening the windows of his office. From the street below rises the sound of human voices. “Before I became mayor, 14,000 cars passed along this street every day. More cars passed through the city in a day than there are people living here.”

Miguel Anxo Fernández Lores has been mayor of the Galician city since 1999. His philosophy is simple: owning a car doesn’t give you the right to occupy the public space.

“How can it be that the elderly or children aren’t able to use the street because of cars?” asks César Mosquera, the city’s head of infrastructures. “How can it be that private property – the car – occupies the public space?”

Lores became mayor after 12 years in opposition, and within a month had pedestrianised all 300,000 sq m of the medieval centre, paving the streets with granite flagstones.

“The historical centre was dead,” he says. “There were a lot of drugs, it was full of cars – it was a marginal zone. It was a city in decline, polluted, and there were a lot of traffic accidents. It was stagnant. Most people who had a chance to leave did so. At first we thought of improving traffic conditions but couldn’t come up with a workable plan. Instead we decided to take back the public space for the residents and to do this we decided to get rid of cars.”

They stopped cars crossing the city and got rid of street parking, as people looking for a place to park is what causes the most congestion. They closed all surface car parks in the city centre and opened underground ones and others on the periphery, with 1,686 free places. They got rid of traffic lights in favour of roundabouts, extended the car-free zone from the old city to the 18th-century area, and used traffic calming in the outer zones to bring the speed limit down to 30km/h.

The benefits are numerous. On the same streets where 30 people died in traffic accidents from 1996 to 2006, only three died in the subsequent 10 years, and none since 2009. CO2 emissions are down 70%, nearly three-quarters of what were car journeys are now made on foot or by bicycle, and, while other towns in the region are shrinking, central Pontevedra has gained 12,000 new inhabitants. Also, withholding planning permission for big shopping centres has meant that small businesses – which elsewhere have been unable to withstand Spain’s prolonged economic crisis – have managed to stay afloat.

Lores, a member of the leftwing Galician Nationalist Bloc, is a rarity in the solidly conservative northwestern region. Pontevedra, population 80,000, is the birthplace of Mariano Rajoy, the former Spanish prime minister and leader of the rightwing People’s party. However, the mayor says Rajoy has never shown any interest in an urban scheme that has earned his native city numerous awards.

Naturally, it hasn’t all gone off without a hitch. People don’t like being told they can’t drive wherever they want, but Lores says that while people claim it as a right, in fact what they want are privileges.

“If someone wants to get married in the car-free zone, the bride and groom can come in a car, but everyone else walks,” he says. “Same with funerals.”

by Stephen Burgen, The Guardian |  Read more:
Image: Luis Pereiro Gomez

Wednesday, September 19, 2018

Steely Dan: More Than Just a Band


The NFL’s Very Profitable Existential Crisis

Consider the curious case of the National Football League: It’s the largest single entertainment property in the U.S., a $14 billion per year attention-sucking machine with a steady hold on the lives of tens of millions. And its future is now in widespread doubt.

Ratings for regular-season games fell 17 percent over the past two years, according to Nielsen, and after one week of play in the new season, viewership has been flat. February marked the third-straight year of audience decline for the Super Bowl and the smallest audience since 2009. Youth participation in tackle football, meanwhile, has declined by nearly 22 percent since 2012 in the face of an emerging scientific consensus that the game destroys the brains of its players. Once a straightforward Sunday diversion, the NFL has become a daily exercise in cognitive dissonance for fans and a hotly contested front in a culture war that no longer leaves space for non-combatants.

To many outside observers, this looks like the end of an era. “The NFL probably peaked two years ago,” says Andrew Zimbalist, a professor of economics at Smith College who specializes in the business of sports. “It’s basically treading water.”

Yet even a middling franchise, the Carolina Panthers, sold in May for a league record $2.3 billion. Advertisers spent a record $4.6 billion for spots during NFL games last season, as well as an all-time high $5.24 million per 30 seconds of Super Bowl time. The reason is clear: In 2017, 37 of the top 50 broadcasts on U.S. television were NFL games, including four of the top five.

The Green Bay Packers, the only NFL team that shares financial statements with the public, has posted revenue increases for 15 straight seasons. Leaguewide revenue has grown more than 47 percent since 2012. Commissioner Roger Goodell’s official target is $25 billion in revenue by 2027, or roughly 6 percent annual growth.

“The business of the NFL is very strong and continues to get stronger,” says Marc Ganis, president of the consulting firm Sportscorp Ltd., and an unofficial surrogate for league owners. “It’s a great time to own an NFL franchise,” says Atlanta Falcons owner and Home Depot co-founder Arthur Blank.

The dominant sport in America has become Schrödinger’s league, both doomed and doing better than ever at the same time. This is a guide to how the NFL reached its remarkable moment of contradiction.

Early in August, during an otherwise unremarkable day of training camp for the Minnesota Vikings, a safety for the team put on a black baseball cap with a message across the front: “Make football violent again.” Andrew Sendejo, who plays one of the game’s most violent positions with exceptional violence, was protesting a new NFL rule that bans players from initiating contact with their helmets. When asked what he thought of the new rule, Sendejo replied, “I don’t.”

Until two years ago, the NFL officially denied any link between football and increased risk of degenerative brain disease. That changed when Jeff Miller, the league’s senior vice president for health and safety, told members of Congress that there is “certainly” a link between the sport and diseases such as chronic traumatic encephalopathy, which has been found in the brains of more than 100 former NFL players and is linked to mood swings, depression, impulsiveness, memory loss, and in a handful of cases, suicide. “I think the broader point, and the one that your question gets to, is what that necessarily means—and where do we go from here with that information,” Miller said in response to a question from a congresswoman.

The question now is whether football can be played safely and still be football. In the short run, the NFL has to worry about ruining the fun for the group of people, including Trump, who see football as a vital tool in forging American manhood. As far as they’re concerned, any effort to subtract violence from the game and improve safety is a threat to the country.

“If we lose football, we lose a lot in America. I don’t know if America can survive,” David Baker, president of the Pro Football Hall of Fame, said in January. A few months later, North Carolina’s head football coach Larry Fedora echoed his sentiments: “I fear that the game will be pushed so far from what we know that we won’t recognize it 10 years from now. And if it does, our country will go down, too.”

In the long run, though, the NFL also has to worry that the widespread, lasting damage to players will alienate fans. “The CTE issue is the biggest challenge facing the NFL,” says Chris Nowinski, a former Harvard University football player and professional wrestler who started the Concussion Legacy Foundation. “If they don’t change—and change soon—their legends will keep being diagnosed with the disease and it will turn people off.”

At the moment, CTE can only be diagnosed post-mortem, by slicing into brain tissue. Researchers at Boston University, working with brains donated by families, have found that at least 10 percent of deceased NFL players suffered from the disease. Once scientists find a way to diagnose CTE in the living, which researchers expect to be possible in fewer than five years, Nowinski believes that this number is bound to double or triple: “If some day you knew that half the players you are watching on the field already have this disease, would you be comfortable watching?”

This year the Concussion Legacy Foundation launched a campaign called “Flag Football Under 14,” based on the research that shows one of the biggest predictors of CTE is the number of years spent playing tackle football. Parents, by the looks of it, were already getting the message. Since 2012, according to annual data compiled by the Sports & Fitness Industry Association (SFIA), the number of children aged 6 to 17 playing tackle football dropped 22 percent, to just above 3 million. In a study published in JAMA Pediatrics this year, researchers found that the fall in participation coincides closely with the rise of media coverage of football’s links to traumatic brain injuries.

The attention to brain injury risks turning football playing into a regional pursuit. In New England, according to SFIA data, the number of players has decreased by 61 percent in the past decade.

Bob Broderick, co-founder of football pad company Xtech, says he has spoken to nearly 2,000 high schools in the past few years and the appetite for youth football remains undiminished in Texas and the rest of the Southeast. “Whether you want to call it a religion, culture, or way of life, that’s the way it is down there,” he says. His most common problem is parents who want pads in smaller sizes for younger kids. “I bet you, in the last month, I’ve turned away 300 kids because we don’t make a product that’s small enough.”

It’s not clear that youth football’s shrinking footprint matters much for the health of the NFL. “The vast majority of people who watch the NFL have never played tackle football in their lives,” says Ganis. As long as elite players keep coming through the college ranks, he says, the league will be fine. And if the next generation’s Tom Brady opts to play baseball, who’s going to notice?

“The reality is that football is such a fun game for fans and a good game for TV,” says Nowinski, the anti-concussion activist, “that even if the quality was slightly worse, it would still be a massively popular enterprise.”

Jerry Richardson, the 82-year-old fast-food magnate who had owned the Carolina Panthers for a quarter century, was forced to sell the team earlier this year following revelations that he had sexually harassed team employees. Richardson, who had been one of the NFL’s most powerful owners, was a prime example of the old boys’ club that runs the league. The ownership ranks include the CEO of a truck-stop chain that has been accused by federal prosecutors of cheating customers out of fuel rebates, the scion of a heating and air conditioning fortune with a DUI on his record, and several heirs to oil money. They are not necessarily the group one would choose to steer an enterprise into the chaotic future of sports and entertainment in America.

But there’s no shortage of new economy billionaires lining up to replace them, just as hedge fund chief David Tepper did with his $2.3 billion takeover of the Panthers. The fury that now surrounds these men, and they are mostly men, is both a test of their power and a testament to it. As much as they might long for the days before CTE was a household term, Kaepernick was a civil rights hero, and Trump was president, they’re happy to be in the middle of the conversation. It’s proof that they still matter.

by Ira Boudway and Eben Novy-Williams, Bloomberg | Read more:
Image: Getty

[ed. Welcome back to school.]
via:

How Asia Got Crazy Rich

True to its title, Crazy Rich Asians features two hours of Asian people doing crazy and rich things. They purchase million-dollar-plus earrings; they fly helicopters to a bachelor party hosted on a floating container ship; and they host a wedding in an interior botanical garden, in which the bride walks down the aisle knee-deep in an artificial creek. Based upon Singaporean-American novelist Kevin Kwan’s 2013 novel, the film centers on a middle-class Chinese American economics professor, Rachel Chu, who travels back to her boyfriend Nick Young’s childhood home in Singapore and is introduced to his friends and their unfathomably opulent lifestyles. Its central tension pits Rachel’s American-bred individuality against the traditional, familial piety of Nick’s mother, Eleanor, who insists upon keeping the largest real estate and financial empire in the southeast Asian city-state within the families of the Singaporean elite.

The film has enjoyed substantial critical approval and been rewarded by box office numbers. For its champions, it succeeds in widening the Hollywood universe to include an underrepresented American minority group, portraying it in exceedingly optimistic terms. Many have echoed the director’s claim that “it’s not a movie, it’s a movement.” For its critics, the film is a disappointing foray into representation, obeying romantic-comedy formulae at the expense of saying something edgier about Asian-American life.

What is shared between these views is the choice to judge this film solely upon the basis of its portrayal of Asia, Asians, and Asian Americans, without a history or even acknowledgment of how they became so “crazy rich” in the first place. Without dismissing the film’s significance for so many, it should be recognized that the “Crazy Rich” and “Asian” in its title are performing different roles in the story. On the one hand, “Asian” provides political cover to “Crazy Rich,” as the film markets itself as a celebration of diversity rather than a celebration of the elite in an age of historic inequality, including within Asia and for Asian Americans themselves. On the other hand, neither is the “Crazy Rich” incidental, for to be wealthy is what marks the Asian characters as modern and relatable, even endearing.

This comes out clearly when Kwan’s story is contrasted against Amy Tan’s The Joy Luck Club. That older film drew upon stories from the life of Tan’s mother, spent in Republican-era Shanghai (1911–1949), and it featured stock imagery from turn-of-the-century China: opium dens, concubinage and rape, arranged marriages, and foot binding. I can recall such scenes because they have been seared into my brain since I was 9 years old, dragged to the theater by my Taiwan-raised yet pro-China parents (an important distinction these days), and made slightly nauseous imagining the world my grandparents had left behind. The Joy Luck Club suggests that strong family bonds were what helped Chinese women weather and ultimately escape an oppressive, traditional society. Crazy Rich Asians turns that idea on its head. The conflict between Rachel and Eleanor conveys that strong family bonds are obstacles to empowerment for a new cosmopolitan Chinese diaspora that values individualism and romance. There is an implied historical process here, then, from old Asia as the antithesis of western individualism transformed dramatically into a new Asia embodying the future of capitalism.

The film has also come under criticism for presenting only a narrow slice of the Asian experience. Despite casting ethnic Japanese, Korean, Malay, and Filipino actors, it is ultimately rooted in the international history of the Chinese diaspora and its particular brands of capitalism. It also focuses exclusively upon the diaspora’s most elite segments.

But Crazy Rich Asians was written as something loosely inspired by Kwan’s own lived experiences, and the result is a story that has more nuance than most English-language works about the Chinese diaspora. Rather than chide him for not writing a more inclusive story, it seems more useful to ask why Kwan’s tale, based upon his idiosyncratic childhood as the scion of a Singaporean banking family, has resonated so strongly with a wider audience. What has it meant in the past, and what does it mean today, to celebrate Asian wealth? (...)

After independence in 1959, Singapore briefly attempted to unify with Malaysia to pursue a leftist strategy of national development via import substitution industrialization. But in 1965, Singapore separated again and joined a handful of small capitalist Asian countries in projects of export-led growth, inviting foreign investment, and promoting labor-intensive light industries to move up the global value chain. They were eventually dubbed the “four tiger” or “little dragon” economies: Taiwanese televisions, South Korean cars, Hong Kong wigs, and Singaporean semiconductors.

The “four tigers” era was deemed an economic miracle, marked by relatively egalitarian development and low unemployment. By the late ’70s and ’80s, they were facing diminishing returns. Rather than follow Japan, South Korea, and Taiwan into high-tech manufacturing, Singapore pivoted into invisible exports, offering those other economies the services of accounting, legal work, and management. The government also encouraged Singaporean capital to look abroad and invest in poorer Asian countries such as Indonesia, Vietnam, Malaysia, and China, while it opened the doors for migrant workers from South Asia and other low-wage regions. It has since become a hub for international finance, but new growth has come at the cost of widening inequality.

In this sense, Singapore is not a new type of society. A century before Asian industrialization, similar patterns of inequality and patrimonial capitalism animated the celebrated novels about the European bourgeoisie, like Mansfield Park and Buddenbrooks. What those dense family dramas demonstrated was that capitalism is not just a static marketplace but also entails long processes of wealth accumulation marked by different phases and logics. A charitable reading for Crazy Rich Asians is that it is doing for the late 20th-century Chinese diaspora what those novels did for the bourgeoisie of Western Europe.

The most prominent family in Kwan’s story are the Youngs, whose original fortune dates back to Nick’s Chinese-born great-grandmother, presumably at the turn of the 20th century. The Youngs got in on the ground floor of an older, Victorian-era wealth, viewed by its caretakers as sociologically distinct from the newer elites found across the Asia-Pacific. The unstated irony is that owning lots of land in Singapore—and Malaysia and China, not to mention London and Hawaii—made the Young family this fabulously wealthy only because the rest of Asia, along with its nouveau riche, made the region so economically productive in recent decades. These tensions across geography and generation appear at the margins of the romantic plot. Nick’s cousin explains to Rachel that in Asia’s richest circles, you will find Hong Kongers, “Taiwan Tycoons,” and “Beijing Billionaires.” These families are not equals. In the novel, Eleanor initially mistakes Rachel for the heiress to a Taiwan plastics company, which Eleanor calculates as “very new money, made in the seventies and eighties, most likely.” A more palpable clash emerges from the story of Nick’s fabulously wealthy cousin, a real-estate investor, and her rocky marriage to a middle-class software engineer who frequently takes business trips to Shenzhen, China—Shenzhen, of course, a symbol of China’s own movement up the global value chain since the 1980s, having absorbed light industry and electronics manufacturing from the “four tigers.”

The film’s producers allegedly sought to minimize the book’s details of specific stereotypes between Asian groups, wary of alienating unfamiliar audience members. But the distinctions are inescapable throughout the story, and the story in fact would make little sense without them. (...)

All this is to say that for most observers in America by now, “Asia” has shed much of its earlier connotation as land of opium and concubinage, instead symbolizing the latest elite to ascend onto the world stage. For many American audiences, depictions of luxurious Singaporean parties will appear less as shocking revelation than as confirmation of a vague sense that the global economy is in transition. As satisfying as the Calthorpe hotel scene was, it is difficult to ignore just how much it mirrored “Yellow Peril” discourses by reductively portraying Chinese diasporic capitalists as a powerful and international economic force. It also points to the need to go beyond the very American, very management-inspired idea of “diversity” that would equate this film with “ethnic” movies centered on Black or Latinx American life. If modern racial categories have historically functioned as a way to make social inequality in market societies appear rooted in nature, then it follows that each of these groups has been typologized in different ways, owing to their different histories. The historic racist narrative of Black Americans was that they were lazy and undeserving of social mobility. The current narrative of Asian Americans is that they are too mobile, drilled in math and piano at an early age, hence unfair competition. This contrast in forms of racism should have been made clear, for instance, once journalists began openly to pit Black against Asian students in education policy debates. In this context, one wonders how the film will be received by the anti-globalization left or right. There is already a creeping sentiment of “Yellow Peril” in the US today, shared by all sides, suspicious of Chinese capital, labor, and college enrollments. The film borrows many of the same tropes but casts them in an innocent and humorous light. It is walking a fine line. Perhaps this is why Rachel must resolve the film’s encounters with the Singaporean capitalist sublime by insisting upon her individual desire, threatening to walk away from Nick’s family in the name of love, reassuring the audience that she may be Chinese by heritage but at heart remains unmistakably American.

The result is a certain ambivalence about Crazy Rich Asians and its reception. The film embodies an effort by the Asian diaspora to assert greater power in Hollywood, but many of them are already powerful economically, something that made both the story and its commercial success possible. It is fully understandable why the Asian diaspora is pushing for a formal equality with the European and American bourgeoisie before them; why the suggestion that Asians cannot also have the good life is a type of double standard or just textbook racism. But the substance of that equality takes the form of a highly destructive social behavior: endless wealth accumulation for its own sake, embodied in finance and real estate. So while the “four tigers” epoch successfully redistributed global wealth in a relatively egalitarian manner—as did other state-driven development projects across Asia, Africa, and the Americas—one fears that the future destiny of the new Asian bourgeoisie is to follow a by-now very old playbook of dynamic growth calcifying into a myopic old guard.

by Andrew Liu, n+1 |  Read more:
Image: Crazy Rich Asians

Tuesday, September 18, 2018

An Avalanche of Japanese Shave Ice

Before Norie Uematsu became a pastry chef, she waited all year for shave-ice season at home in Japan. Now, she decides when that season begins and ends.

At Cha-an Teahouse, in the East Village of New York, Ms. Uematsu serves refreshing bowls of kakigori — the Japanese shave ice — as soon as the subway stations are hot and sticky. She turns the handle of her vintage shave-ice machine through the end of September, or until she runs out of ripe white peaches, whichever comes first.

All kakigori starts with a block of plain ice. A machine locks the ice in place and spins it against a blade, shaving off soft, sheer flakes. As the ice piles up, kakigori makers add syrups, purées and other sweet toppings. The dessert is endlessly adaptable, which is one reason so many pastry chefs in the United States are not only adding kakigori to their menus but also extending its season.

When prepared with skill, kakigori is a feat of texture — a tall structure of uniformly light, airy and almost creamy crystals that never crunch, but deliver flavor as they dissolve on the tongue.

“To get it really fluffy, you adjust the angle of the blade,” said Ms. Uematsu, turning an iron knob on her machine. “But the finer it is, the harder it is to work with.” As the ice melts, or is worn down, the machine must be adjusted to keep the shavings downy.

In August, at a cafe in Yamanashi, Japan, I ordered a bowl of kakigori made from a block of natural ice. Someone had delivered it from the Yatsugatake Mountains, a volcanic range to the north. It seemed over the top — all that labor for a piece of ice? — but it also testified to the history of kakigori.

Before the development of freezers, shave ice was an extravagant dessert reserved only for those who could pay for the luxury of ice carved from frozen lakes and mountains and transported at great cost.

As Ms. Uematsu pointed out, kakigori has come a long way from its elite roots in the Heian period (from the end of the eighth through the 12th century). “When I was a kid, every house in Japan had a cheap kakigori machine, usually with a cute character on it, like Hello Kitty,” said Ms. Uematsu, who was born in 1980 in Numazu, Shizuoka Prefecture. “And you could buy commercial syrups for flavoring them.”

But kakigori masters at cafes in Japan can still be fiercely competitive. Many shops have lines out the door, and attentive hosts to manage those lines. Atelier Sekka, a small, serene dessert shop in the Sugamo neighborhood of Tokyo, buys enormous glassy blocks of natural ice from Mount Fuji to use as the base for its pristine mounds of kakigori. On a recent weekday morning, there was an hourlong wait for a seat.

A vintage shave-ice machine sits at the center of the stylish Tokyo tearoom Higashiya Ginza, where servers layer the shavings with plums poached in honey. At Himitsudo, where you can order while standing in line on the street, cooks turn out bowls overflowing with puréed mango and other fruits.

I found my favorite kakigori of the summer at a cafe called Kuriya Kashi Kurogi, on the grounds of the University of Tokyo. The ice was beautifully shaved with an electric machine and saturated with fresh soy milk and sweetened condensed milk, layered with whipped cheese and finally crowned with a thick, sweet and salty purée of fresh edamame. Every now and then, digging around, I hit a ridge of red bean paste.

Yoojin Chung, the general manager of Stonemill Matcha in San Francisco, added kakigori to the menu in June, about a month after the cafe opened. Though elaborately built kakigori are in style, Ms. Chung remembers tasting a particularly simple version at a cafe in Kyoto, with no toppings or creams at all, just matcha syrup.

“It was this ginormous green spectacle that came on a tray, at least 12 inches tall, and it was very intense,” Ms. Chung recalled. “I was shocked how it kept its shape despite having all this syrup.”

She compared the texture of perfect kakigori to flower petals — not quite powder and not quite grain — making it distinct from other kinds of shave ice. “It’s a simple thing that’s really hard to execute,” Ms. Chung said.

by Tejal Rao, NY Times | Read more:
Image: An Rong Xu

Le Japon Artistique
via:

The Miracle of the Mundane

On a good day, all of humanity’s accomplishments feel personal: the soaring violins of the second allegretto movement of Beethoven’s Symphony no. 7, the intractable painted stare of Frida Kahlo, the enormous curving spans of the Golden Gate Bridge, the high wail of PJ Harvey’s voice on “Victory,” the last melancholy pages of Wallace Stegner’s Angle of Repose. These works remind us that we’re connected to the past and our lives have limitless potential. We were built to touch the divine.

On a bad day, all of humanity’s failures feel unbearably personal: coyotes wandering city streets due to encroaching wildfires, American citizens in Puerto Rico enduring another day without electricity or potable water in the wake of Hurricane Maria, neo-Nazis spouting hatred in American towns, world leaders testing missiles that would bring the deaths of millions of innocent people. We encounter bad news in the intimate glow of our cell phone screens, and then project our worries onto the flawed artifacts of our broken world: the for lease sign on the upper level of the strip mall, the crow picking at a hamburger wrapper in the gutter, the pink stucco walls of the McMansion flanked by enormous square hedges, the blaring TVs on the walls of the local restaurant. On bad days, each moment is haunted by a palpable but private sense of dread. We feel irrelevant at best, damned at worst. Our only hope is to numb and distract ourselves as well as we can on our long, slow march to the grave.

On a good day, humankind’s creations make us feel like we’re here for a reason. Our belief sounds like the fourth molto allegro movement of Mozart’s Symphony no. 41, Jupiter: Our hearts seem to sing along to Mozart’s climbing strings, telling us that if we’re patient, if we work hard, if we believe, if we stay focused, we will continue to feel joy, to do meaningful work, to show up for each other, to grow closer to some sacred ground. We are thrillingly alive and connected to every other living thing, in perfect, effortless accord with the natural world.

But it’s hard to sustain that feeling, even on the best of days — to keep the faith, to stay focused on what matters most—because the world continues to besiege us with messages that we are failing. You’re feeding your baby a bottle and a voice on the TV tells you that your hair should be shinier. You’re reading a book but someone on Twitter wants you to know about a hateful thing a politician said earlier this morning. You are bedraggled and inadequate and running late for something and it’s always this way. You are busy and distracted. You are not here.

It’s even worse on a bad day, when humankind’s creations fill us with the sense that we are failing as a people, as a planet, and nothing can be done about it. The chafing smooth jazz piped into the immaculate coffee joint, the fake cracks painted on the wall at the Cheesecake Factory, the smoke from fires burning thousands of acres of dry tinder, blotting out the sun — they remind us that even though our planet is in peril, we are still being teased and flattered into buying stuff that we don’t need, or coaxed into forgetting the truth about our darkening reality. As the crowd around us watches a fountain dance to Frank Sinatra’s “Somewhere Beyond the Sea” at the outdoor mall, we peek at our phones and discover the bellowed warnings of an erratic foreign leader, threatening to destroy us from thousands of miles away. Everything cheerful seems to have an ominous shadow looming behind it now. The smallest images and bits of news can feel so invasive, so frightening. They erode our belief in what the world can and should be.

As the first total solar eclipse in America in thirty-nine years reveals itself, an email lands in my inbox from ABC that says The Great American Eclipse at the top. People are tweeting and retweeting the same eclipse jokes all morning. As the day grows dimmer, I remember that Bonnie Tyler is going to sing her 1983 hit “Total Eclipse of the Heart” on an eclipse-themed cruise off the coast of Florida soon.

Even natural wonders aren’t what they used to be, because nothing can be experienced without commentary. In the 1950s, we worried about how TV would affect our culture. Now our entire lives are a terrible talk show that we can’t turn off. It often feels like we’re struggling to find ourselves and each other in a crowded, noisy room. We are plagued, around the clock, by the shouting and confusion and fake intimacy of the global community, mid–nervous breakdown.

Sometimes it feels like our shared breakdown is making us less generous and less focused. On a bad day, the world seems to be filled with bad books and bad buildings and bad songs and bad choices. Worthwhile creations and ego-driven, sloppy works are treated to the same hype and praise; soon it starts to feel as if everything we encounter was designed merely to make some carefully branded human a fortune. Why aren’t we reaching for more than this? Isn’t art supposed to inspire or provoke or make people feel emotions that they don’t necessarily want to feel? Can’t the moon block out the sun without a 1980s pop accompaniment? So much of what is created today seems engineered to numb or distract us, keeping us dependent on empty fixes indefinitely.

Such creations feel less like an attempt to capture the divine than a precocious student’s term paper. If any generous spirit shines through, it’s manufactured in the hopes of a signal boost, so that some leisure class end point can be achieved. Our world is glutted with products that exist to help someone seize control of their own life while the rest of the globe falls to ruin. Work (and guidance, and leadership) that comes from such a greedy, uncertain place has more in common with that fountain at the outdoor mall, playing the same songs over and over, every note an imitation of a note played years before.

But human beings are not stupid. We can detect muddled and self-serving intentions in the artifacts we encounter. Even so, such works slowly infect us with their lopsided values. Eventually, we can’t help but imagine that this is the only way to proceed: by peddling your own wares at the expense of the wider world. Can’t we do better than this, reach for more, insist on more? Why does our culture make us feel crazy for trying?

by Heather Havrilesky, Longreads |  Read more:
Image: What If This Were Enough?
[ed. What a remarkable essay. The antithesis is here: Instagram is Supposed to Be Friendly. So Why is it Making People so Miserable?]

Burning Man: The First Time

A Premature Attempt at the 21st Century Canon

A panel of critics tells us what belongs on a list of the 100 most important books of the 2000s … so far.

Why Now?

Okay, assessing a century’s literary legacy after only 18 and a half years is kind of a bizarre thing to do.

Actually, constructing a canon of any kind is a little weird at the moment, when so much of how we measure cultural value is in flux. Born of the ancient battle over which stories belonged in the “canon” of the Bible, the modern literary canon took root in universities and became defined as the static product of consensus — a set of leather-bound volumes you could shoot into space to make a good first impression with the aliens. Its supposed permanence became the subject of more recent battles, back in the 20th century, between those who defended it as the foundation of Western civilization and those who attacked it as exclusive or even racist.

But what if you could start a canon from scratch? We thought it might be fun to speculate (very prematurely) on what a canon of the 21st century might look like right now. A couple of months ago, we reached out to dozens of critics and authors — well-established voices (Michiko Kakutani, Luc Sante), more radical thinkers (Eileen Myles), younger reviewers for outlets like n+1, and some of our best-read contributors, too. We asked each of them to name several books that belong among the most important 100 works of fiction, memoir, poetry, and essays since 2000 and tallied the results. The purpose was not to build a fixed library but to take a blurry selfie of a cultural moment.

Any project like this is arbitrary, and ours is no exception. But the time frame is not quite as random as it may seem. The aughts and teens represent a fairly coherent cultural period, stretching from the eerie decadence of pre-9/11 America to the presidency of Donald Trump. This mini-era packed in the political, social, and cultural shifts of the average century, while following the arc of an epic narrative (perhaps a tragedy, though we pray for a happier sequel). Jonathan Franzen’s The Corrections, one of our panel’s favorite books, came out ten days before the World Trade Center fell; subsequent novels reflected that cataclysm’s destabilizing effects, the waves of hope and despair that accompanied wars, economic collapse, permanent-seeming victories for the once excluded, and the vicious backlash under which we currently shudder. They also reflected the fragmentation of culture brought about by social media. The novels of the Trump era await their shot at the canon of the future; because of the time it takes to write a book, we haven’t really seen them yet.

You never know exactly what you’ll discover when sending out a survey like this, the results of which owe something to chance and a lot to personal predilections. But given the sheer volume of stuff published each year, it is remarkable that a survey like this would yield any kind of consensus—which this one did. Almost 40 books got more than one endorsement, and 13 had between three and seven apiece. We have separately listed the single-most popular book; the dozen “classics” with several votes; the “high canon” of 26 books with two votes each; and the rest of the still-excellent but somewhat more contingent canon-in-utero. (To better reflect that contingency, we’ve included a handful of critics’ “dissents,” arguing for alternate books by the canonized authors.)

Unlike the old canons, ours is roughly half-female, less diverse than it should be but generally preoccupied with difference, and so fully saturated with what we once called “genre fiction” that we hardly even think of Cormac McCarthy’s post-apocalyptic The Road, Colson Whitehead’s zombie comedy Zone One, Helen Oyeyemi’s subversive fairy tales, or even the Harry Potter novels as deserving any other designation than “literature.” And a whole lot of them are, predictably, about instability, the hallmark of the era after the “end of history” that we call now.

At least one distinctive new style has dominated over the past decade. Call it autofiction if you like, but it’s really a collapsing of categories. (Perhaps not coincidentally, such lumping is better suited to “People Who Liked” algorithms than brick-and-mortar shelving systems.) This new style encompasses Elena Ferrante’s Neapolitan novels; Sheila Heti’s self-questing How Should a Person Be?; Karl Ove Knausgaard’s just-completed 3,600-page experiment in radical mundanity; the essay-poems of Claudia Rankine on race and the collage-like reflections of Maggie Nelson on gender. It’s not really a genre at all. It’s a way of examining the self and letting the world in all at once. Whether it changes the world is, as always with books, not really the point. It helps us see more clearly.

Our dozen “classics” do represent some consensus; their genius seems settled-on. Among them are Kazuo Ishiguro’s scary portrait of replicant loneliness in Never Let Me Go; Roberto Bolaño’s epic and powerfully confrontational 2666; Joan Didion’s stark self-dissection of grief in The Year of Magical Thinking. They aren’t too surprising, because they are (arguably as always, but still) great.

And then there’s The Last Samurai, Helen DeWitt’s debut: published at the start of the century, relegated to obscurity (and overshadowed by a bad and unrelated Tom Cruise movie of the same name), and now celebrated by more members of our panel than any other book. That’s still only seven out of 31, which gives you a sense of just how fragile this consensus is. Better not launch this canon into space just yet.

by Boris Kachka/Editors, Vulture | Read more:
Image: Tim McDonagh

Monday, September 17, 2018

Wanna Get Really High?

Dabbing, consuming a cannabis concentrate using a vaporizing device, has moved into the mainstream as companies produce high-THC concentrates

Concentrates, a rapidly growing segment of the legal marijuana market, reduce the plant to its chemical essence. The point is to get as high as possible. And it works.

Manufacturing concentrates involves using solvents like alcohol, carbon dioxide and other chemicals to strip away the plant’s leaves and then processing the potent remains. The final products can resemble cookie crumbles, wax and translucent cola spills.

A standard method of concentrate consumption, known as dabbing, uses vaporizing devices called rigs that resemble bongs, but instead of a bowl to hold the weed, there’s a nail made from titanium, quartz or a similarly sturdy material. The dabber heats the nail with a blowtorch and then uses a metal tool to vaporize a dab of concentrate on the nail.

Common sense suggests a dabbing habit could be more harmful than an ordinary marijuana habit, but the research is limited. Visually, the process is sometimes compared to smoking strongly stigmatized drugs like crack and crystal methamphetamine.

For years, dabbing has been considered an outcast subculture within the misfit world of cannabis. With so many companies angling to associate themselves with moderate use for functional adults, many want nothing to do with dabbing.

But as cannabis consumption has moved into the mainstream, dabbing has followed. Today a number of portable devices aim to deliver the intense high of dabbing concentrates in a more user-friendly way. At cannabis industry parties, there’s often a “dab bar” where attendants fire up the rigs, and wipe off the mouthpieces after each use. Machines called e-nails allow users to set a rig’s exact temperature to maximize vapor and flavor. On YouTube, there’s a lively competition among brain surgeons and rocket scientists to see who can inhale the heftiest dab.

Strong west coast weed can approach 30% THC. Concentrates, which dispensaries sell by the gram, range between 60% and 80%, but they can be even stronger. One form called crystalline is reportedly 99% THC. (The oil in increasingly ubiquitous vape pens can also be 70% or higher THC but it’s vaporized in smaller doses.)

Concentrates aren’t a new concept; hash or hashish, the compacted resin of the cannabis plant, has been used in central and south Asia for more than 1,000 years. But legalization in North America has laid the groundwork for innovation in the craft. As with most things cannabis, concentrate fanatics can argue endlessly about their preferences – solvent-free, whole-plant, resin, live resin, shatter – and the uninitiated struggle to discern much difference in the effect.

by Alex Halperin, The Guardian |  Read more:
Image: George Wylesol
[ed. This at the local pot shop: The Most Exotic Hash on the Market.]

What Dying Traditions Should Be Preserved?


Castles, postcards, drive-in theaters, artistic matchboxes, home economics classes, sitting on porches, neon signs, two martini lunches, manual transmissions, cursive writing, handwritten letters, handkerchiefs, canning and preserving, Viking funerals, manners/politeness, taxidermy, knitting, listening to full albums, bridge (card game), stamp/coin collecting, whistling, corn cob holders, rolling joints, learning new languages, shoe repair, friendship bracelets, civilized debate, and more...

And the apparent winner (by predominant number of posts): a nice sit-down dinner with good conversation, home-cooked food, no hats, and no distractions (phones, tv, games, etc.).

[ed. From the Reddit post: What is a dying tradition you believe should be preserved?]
Image: via

Sunday, September 16, 2018


Sparks, Kimono My House, Island Records, UK LP, 1974
via:

Edward Snowden Reconsidered

This summer, the fifth anniversary of Edward Snowden’s revelations about NSA surveillance passed quietly, adrift on a tide of news that now daily sweeps the ground from under our feet. It has been a long five years, and not a period marked by increased understanding, transparency, or control of our personal data. In these years, we’ve learned much more about how Big Tech was not only sharing data with the NSA but collecting vast troves of information about us for its own purposes. And we’ve started to see the strategic ends to which Big Data can be put. In that sense, we’re only beginning to comprehend the full significance of Snowden’s disclosures.

This is not to say that we know more today about Snowden’s motivations or aims than we did in 2013. The question of whether or not Snowden was a Russian asset all along has been raised and debated. No evidence has been found that he was, just as no evidence has been found that he was a spy for China. His stated cause was the troubling expansion of surveillance of US citizens, but most of the documents he stole bore no relation to this avowed concern. A small percentage of what Snowden released of the 1.7 million documents that intelligence officials believe he accessed did indeed yield important information about domestic programs—for example, the continuation of Stellar Wind, a vast warrantless surveillance program authorized by George W. Bush after 9/11, creating legal structures for bulk collection that Obama then expanded. But many of them concerned foreign surveillance and cyberwarfare. This has led to speculation that he was working on behalf of some other organization or cause. We can’t know.

Regardless of his personal intentions, though, the Snowden phenomenon was far larger than the man himself, larger even than the documents he leaked. In retrospect, it showed us the first glimmerings of an emerging ideological realignment—a convergence, not for the first time, of the far left and the far right, and of libertarianism with authoritarianism. It was also a powerful intervention in information wars we didn’t yet know we were engaged in, but which we now need to understand.

In 2013, the good guys and bad guys appeared to sort themselves into neat and recognizable groups. The “war on terror” still dominated national security strategy and debate. It had made suspects of thousands of ordinary civilians, who needed to be monitored by intelligence agencies whose focus throughout the cold war had been primarily on state actors (the Soviet Union and its allies) that were presumed to have rational, if instrumental intentions. The new enemy was unreason, extremism, fanaticism, and it was potentially everywhere. But the Internet gave the intelligence community the capacity, if not the legal right, to peer behind the curtains of almost any living room in the United States and far beyond.

Snowden, by his own account, came to warn us that we were all being watched, guilty and innocent alike, with no legal justification. To those concerned primarily with security, the terrorists were the hidden hostile force. To many of those concerned about liberty, the “deep state” monitoring us was the omnipresent enemy. Most people managed to be largely unconcerned about both. But to the defenders of liberty, whether left liberals or libertarians, Snowden was straightforwardly a hero. Alan Rusbridger, the editor of The Guardian at the time, said of him:
His motives are remarkable. Snowden set out to expose the true behaviour of the US National Security Agency. On present evidence he has no interest in money… Nor does he have the kind of left-wing or Marxist sentiments which could lead him to being depicted as un-American. On the contrary, he is an enthusiast for the American constitution, and, like other fellow “hacktivists,” is a devotee of libertarian politician Ron Paul, whose views are well to the right of many Republicans.

The patriotic right, the internationalist left: these were the recognized camps in the now far-distant world of 2013. Snowden, who kept a copy of the US Constitution on his desk at the NSA, could be regarded by his sympathizers as a patriot engaging in a lone act of bravery for the benefit of all.

Of course, it wasn’t a solitary act. Snowden didn’t want to be purely a whistleblower like Mark Felt or Daniel Ellsberg; he wanted to be a figurehead. And he largely succeeded. For the last five years, the quietly principled persona he established in the public mind has galvanized opposition to the American “deep state,” and it has done so, in part, because it was promoted by an Academy Award-winning documentary film in which Snowden starred, a feature film about him directed by Oliver Stone in which he made an appearance, and the many talks he gives by video-link that have become his main source of income. He now has 3.83 million Twitter followers. He is an “influencer,” and a powerful one. Any assessment of the impact of his actions has to take into account not just the content of the documents he leaked, but the entire Edward Snowden Show.

In fact, most of what the public knows about Snowden has been filtered through the representations of him put together by a small, tight circle of chosen allies. All of them were, at the time, supporters of WikiLeaks, with whom Snowden has a troubled but intimate relationship. He initially considered leaking documents through WikiLeaks but changed his mind, he claims, in 2012 when Assange was forced into asylum at the Ecuadorian embassy in London under heavy surveillance, making access to him seem too difficult and risky. Instead, Snowden tried to make contact with one of WikiLeaks’ most vocal defenders, the independent journalist Glenn Greenwald. When he failed, he contacted the documentary filmmaker Laura Poitras, whom Greenwald had also vociferously defended when she drew unwanted government scrutiny after making a documentary film that followed a man who had been Osama bin Laden’s bodyguard. The scrutiny turned into harassment in 2011, she claims, when she began making a film about WikiLeaks.

Poitras had been a member of the Tor Project community (which developed the encrypted Tor web browser to make private online interactions possible) since 2010 when she reached out to Jacob Appelbaum, an important member of both the Tor Project and also WikiLeaks, after becoming a close friend and ally of Assange. We know from Wired’s Kevin Poulsen that Snowden was already in touch with the Tor community at least as early as 2012, having contacted Tor’s Runa Sandvik while he was still exfiltrating documents. In December 2012, he and Sandvik hosted a “crypto party” in Honolulu, where Snowden ran a session teaching people how to set up Tor servers. And it was through Tor’s Micah Lee (now working for The Intercept) that Snowden first contacted Poitras. In order to vet Snowden, Poitras turned to Appelbaum. Given the overlap between the Tor and WikiLeaks communities, Snowden was involved with the latter at least as early as his time working as a contractor for the NSA, in a job he took specifically in order to steal documents, in Hawaii.

Few people knew, when Citizenfour was released in 2014, how deeply embedded in both Tor and WikiLeaks Poitras was or how close an ideological affinity she then had with Assange. The Guardian had sensibly sent the experienced news reporter Ewen MacAskill with Poitras and Greenwald to Hong Kong, and this helped to create the impression that the interests of Snowden’s confidants were journalistic rather than ideological. We have subsequently seen glimpses of Poitras’s complex relationship with Assange in Risk, the version of her WikiLeaks film that was released in 2017. But Risk is not the movie she thought she was making at the time. The original film, called Asylum, was premiered at Cannes in 2016. Steven Zeitchik, of the Los Angeles Times, described it as a “lionizing portrait,” presenting Assange as a “maverick hero.” In Risk, on the other hand, we are exposed more to Assange’s narcissism and extremely unpleasant attitudes toward women, along with a wistful voiceover from Poitras reading passages from her production diary, worrying that Assange doesn’t like her, recounting a growing ambivalence about him.

In between the two films, Assange lost many supporters because of the part he played in the 2016 US elections, when WikiLeaks published stolen emails—now believed to have been hacked and supplied by Russian agents—that were damaging to Hillary Clinton. But Zeitchik discovered, when he asked Poitras about her own change of heart, that it wasn’t political but personal. Assange had turned his imperious attitude toward women on her, demanding before the Cannes screening that she cut material relating to accusations of rape by two women in Sweden. His tone, in particular, offended her. But her view of his actions leading up to the US election remained consistent with that of WikiLeaks supporters; he published the DNC emails because they were newsworthy, not as a tactic in an information war.

When Snowden initially contacted Poitras, she tells us in Risk, her first thought was that the FBI was trying to entrap her, Appelbaum, or Assange. Though Micah Lee and Appelbaum were both aware of her source, she tells us that she left for Hong Kong without Assange’s knowledge and that he was furious that she failed to ensure WikiLeaks received Snowden’s documents. Although Poitras presents herself retrospectively as an independent actor, while filming Snowden in Hong Kong she contacted Assange about arranging Snowden’s asylum and left him in WikiLeaks’ hands (through Assange’s emissary, Sarah Harrison). Poitras’s relations with Assange later became strained, but she remained part of the Tor Project and was involved in a relationship with Jacob Appelbaum. (She shows in the film that Appelbaum was subsequently accused of multiple counts of sexual harassment over a number of years.)

In Risk’s added, post-production voiceover, Poitras says of the Snowden case: “When they investigate this leak, they will create a narrative to say it was all a conspiracy. They won’t understand what really happened. That we all kept each other in the dark.” It’s not clear exactly what she means. But it is clear that “we all” means a community of like-minded and interdependent people; people who may each have their own grandiose ambitions and who have tortuously complex, manipulative, and secretive personal relationships with one another. Snowden chose to put himself in their hands.

If this group of people shared a political ideology, it was hard to define. They were often taken to belong to the left, since this is where criticisms of the national security state have tended to originate. But when Harrison, the WikiLeaks editor and Assange adviser, flew to Hong Kong to meet Snowden, she was coming directly from overseeing Assange’s unsuccessful electoral campaign for the Australian Senate, in which the WikiLeaks Party was apparently aligned with a far-right party. The WikiLeaks Party campaign team, led by Assange’s father and party secretary John Shipton, had made a high-profile visit to Syria’s authoritarian leader, Bashar al-Assad, and Shipton had heaped praise on Vladimir Putin’s efforts in the region, in contrast to America’s, in an interview with the state radio network Voice of Russia. The political historian Sean Wilentz, in what at the time, in 2014, was a rare critical article on Assange, Snowden, and Greenwald, argued that they shared nothing so coherent as a set of ideas but a common political impulse, one he described as “paranoid libertarianism.” With hindsight, we can also see that when they first became aligned, the overwhelming preoccupation of Poitras, Greenwald, Assange, and Snowden was the hypocrisy of the US state, which claimed to abide by international law, to respect human rights, to operate within the rule of law internally and yet continually breached its own purported standards and values.

They had good grounds for this view. The Iraq War, which was justified to the public using lies, fabricated evidence, and deliberate obfuscation of the overall objective, resulted in hundreds of thousands of deaths, as well as the rendition and torture of suspected “enemy combatants” at CIA black sites and their indefinite detention at Guantánamo Bay. The doctrine of preemptive war had been revived, along with imperialist ambitions for a global pax Americana.

But cynicism about the rule of law exists on a spectrum. At one end, exposing government hypocrisy is motivated by a demand that a liberal-democratic state live up to its own ideals, that accountability be reinforced by increasing public awareness, establishing oversight committees, electing proactive politicians, and employing all the other mechanisms that have evolved in liberal democracies to prevent arbitrary or unchecked rule. These include popular protests, the civil disobedience that won civil rights battles, and, indeed, whistleblowing. At the other end of the spectrum is the idea that the law is always really politics in a different guise; it can provide a broad set of abstract norms but fails to specify how these should be applied in particular cases. Human beings make those decisions. And the decision-makers will ultimately be those with the most power.

On this view, the liberal notions of legality and legitimacy are always hypocritical. This was the view promulgated by one of the most influential legal theorists of the twentieth century, Carl Schmitt. He was a Nazi who joined the party in 1933 and became known as the “crown jurist” of the Third Reich. But at the turn of the millennium, as Bush took America to war, Schmitt’s criticisms of liberalism were undergoing a renaissance on both the far right and the far left, especially in the academy. This set of attitudes has not been limited to high theory or confined to universities, but its congruence with authoritarianism has often been overlooked.

In Risk, we hear Assange say on the phone, regarding the legality of WikiLeaks’ actions in the US: “We say we’re protected by the First Amendment. But it’s all a matter of politics. Laws are interpreted by judges.” He has repeatedly expressed the view that the idea of legality is just a political tool (a point he stresses especially when he is the one accused of illegality). But the cynicism of the figures around Snowden derives not from a meta-view about the nature of law, like Schmitt’s, but from the view that America, the most powerful exponent of the rule of law, merely uses this ideal as a mask to disguise the unchecked power of the “deep state.” Snowden, a dissenting agent of the national security state brandishing his pocket Constitution, was seen by Rusbridger as an American patriot, but by his chosen allies as the most authoritative revealer of the irremediable depth of American hypocrisy.

by Tamsin Shaw, NY Review of Books |  Read more:
Image: Patricia de Melo Moreira/AFP/Getty Images
[ed. See also: The Known Known.]

Teens Are Protesting In-Class Presentations

For many middle- and high-school students, giving an in-class presentation was a rite of passage. Teachers would call up students, one by one, to present their work in front of the class and, though it was often nerve-racking, many people claim it helped turn them into more confident public speakers.

“Coming from somebody with severe anxiety, having somebody force me to do a public presentation was the best idea to happen in my life,” one woman recently tweeted. According to a recent survey by the Association of American Colleges and Universities, oral communication is one of the most sought-after skills in the workplace, with over 90 percent of hiring managers saying it’s important. Some educators also credit in-class presentations with building essential leadership skills and increasing students’ confidence and understanding of material.

But in the past few years, students have started calling out in-class presentations as discriminatory to those with anxiety, demanding that teachers offer alternative options. This week, a tweet posted by a 15-year-old high-school student declaring “Stop forcing students to present in front of the class and give them a choice not to” garnered more than 130,000 retweets and nearly half a million likes. A similar sentiment tweeted in January also racked up thousands of likes and retweets. And teachers are listening. (...)

Students who support abolishing in-class presentations argue that forcing students with anxiety to present in front of their peers is not only unfair, since they are bound to underperform and receive a lower grade, but can also cause long-term stress and harm.

“Nobody should be forced to do something that makes them uncomfortable,” says Ula, a 14-year-old in eighth grade, who, like all students quoted, asked to be referred to only by her first name. “Even though speaking in front of class is supposed to build your confidence and it’s part of your schoolwork, I think if a student is really unsettled and anxious because of it you should probably make it something less stressful. School isn’t something a student should fear.”

“It feels like presentations are often more graded on delivery when some people can’t help not being able to deliver it well, even if the content is the best presentation ever,” says Bennett, a 15-year-old in Massachusetts who strongly agrees with the idea that teachers should offer alternative options for students. “Teachers grade on public speaking which people who have anxiety can’t be great at.” (...)

Those campaigning against in-class presentations said that it was important to distinguish between students with actual diagnosable anxiety disorders and those who might just want to get out of the assignment. Addie, a 16-year-old in New York, said that schools like hers already make accommodations for students with certain learning issues to get extra time on tests. She thinks similar processes could be put in place for students with public-speaking anxiety. “I think it’s important these accommodations are accessible, but that they’re also given to those who need it instead of those who just say they don’t want to present,” she said. “There’s a big difference between nervousness and anxiety.”

Students who have been successful in the campaign to end in-class presentations credit social media. Unlike previous generations, high schoolers today are able to have a direct impact on their educational system by having their voices heard en masse online. Teenagers, most of whom are extremely adept at social media, say that platforms like Twitter and Instagram have allowed them to meet more kids at other schools and see how other school districts run things. They can then wage campaigns for changes at their own school, sometimes partnering with teens in other districts to make their voice louder.

Henry said that he’s seen the effects of these types of campaigns firsthand. This year his district shifted the school start time an hour and fifteen minutes later, something he and his fellow students campaigned for aggressively on social media, which he believes played a role in the decision. High-school students across the country have also waged social-media campaigns against discriminatory dress codes, excessive homework, and, most notably, to advocate for gun-control policies on campus. “Teens view social media as a platform to make changes,” Carver says.

Part of why students feel social media is such a powerful mechanism for changing education is because so many teachers are on these platforms. Nicholas Ferroni, a high-school teacher in New Jersey, said that “a lot of teachers use social media as a great way to learn methodologies.”

“Instead of trying to go to a school-board meeting with a bunch of adults in suits—that’s how it was—you can just talk to everyone directly,” said Addie. “We don’t have to do all that stuff formally. We can go online and say what we want to say and people have to listen to us.”

“I think social media is a great way to reach educators,” said Bennett.

by Taylor Lorenz, The Atlantic |  Read more:
Image: Getty

Why Zillow Addicts Can’t Look Away

A couple of times a week, Nick Spencer checks the value of his four-bedroom house in Haddon Heights, N.J., on Zillow. He has no plans to move, describing the town, located about 10 miles from Philadelphia, as “Americana at its best,” and his Cape Cod-style home as “a labor of love.”

Yet there he is, clicking on Zillow every few days to see what the house he bought for $399,900 in 2006 is now worth. The last time he looked, the Zestimate — a Zillow algorithm that not only calculates current values for 110 million homes, but also predicts what they’ll be worth in the future — pegged Mr. Spencer’s home at $503,744. A little green arrow showed it up 1.7 percent from a month ago.

Mr. Spencer thinks it’s extremely unlikely that anyone would pay anywhere near that much for his house, charming as it may be. A neighbor down the street just took his house off the market after two years, even after dropping the price by more than $100,000, to $369,000. Zillow has that house pegged at $447,000, and rising.

Mr. Spencer blames location for the discrepancy. He and his neighbor live along the border of two other towns, including Haddonfield, where home prices are much higher, a fact that might skew Zillow’s algorithm. The numbers might be divorced from reality, but that doesn’t stop Mr. Spencer from tracking them.

“It’s entertainment,” he said. “Like a hobby.”

The Zestimate has been a Zillow mainstay since the company started in 2006, drawing so many curious visitors that the site crashed within hours of its launch. With nothing comparable at the time, the Zestimate became a post-party snooping activity — on the ride home, you could gawk at the presumed price of the host’s house. It also became an exercise in aspirational ownership, with email updates reminding you to chart the ebbs and flows of your home’s worth like a 401(k). Except, unlike a 401(k), this graph is based on an algorithm, not actual money.

The Zestimate is marketed as a tool designed to take the mystery out of real estate for consumers who would otherwise have to rely on brokers and guesswork. But where Zillow sees transparency, some brokers and homeowners see fantasy, arguing that an algorithm, its clever graphics notwithstanding, cannot account for the nuances that determine a home’s worth, like whether your kitchen is brand new or from the disco era.

“Most people are kind of obsessed” with the Zestimate, said Stacey Simens, a saleswoman for Coach Realtors in Hewlett, N.Y., on Long Island. Once a potential seller has a number in mind, it can be hard to pull them away from it, regardless of reality. “They’re looking for that magic button that will tell them that their house is worth exactly what they want it to be,” she said.

But unless you’re actually thinking about selling, and invite a parade of brokers into your house to look at the granite countertops, all you’ve got is neighborhood gossip and the estimates you see on Zillow and other sites like Redfin and Trulia, a Zillow-owned company.

A NerdWallet survey released this month found that of the 78 percent of homeowners who thought they knew what their home was worth, nearly a quarter got their information from an online calculator. (...)

Why are we obsessively clicking on fuzzy calculators for homes we are not selling? The answer lies in how we think about our homes. When Zillow arrived in 2006, at the height of the last housing bubble, houses were seen as liquid investments you could track like a stock. Now, a dozen years later, with many of us still traumatized by the housing crash, we keep checking in for reassurance that the ground is stable.

Get a green arrow, and you know that all is right; a red one gives you incentive to check back a few days later in the hope of better news. But the information you’re getting, even with all the charts and graphs, is just a rough draft.

“It’s conveying a truth that doesn’t exist,” said Jonathan J. Miller, the president of Miller Samuel Real Estate Appraisers and Consultants. So why do we keep doing it? “Why do you read your horoscope?” he said.

by Ronda Kaysen, NY Times | Read more:
Image: Trisha Krauss