Friday, January 31, 2020

ʻŪlili E



[ed. See also: Hawaiian Slack Key Guitar Masters (full album).]

Etienne Buyse, Get on the bus
via:

Book Review: Human Compatible

Clarke’s First Law goes: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

Stuart Russell is only 58. But what he lacks in age, he makes up in distinction: he’s a computer science professor at Berkeley, neurosurgery professor at UCSF, DARPA advisor, and author of the leading textbook on AI. His new book Human Compatible states that superintelligent AI is possible; Clarke would recommend we listen.

I’m only half-joking: in addition to its contents, Human Compatible is important as an artifact, a crystallized proof that top scientists now think AI safety is worth writing books about. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies previously filled this role. But Superintelligence was in 2014, and by a philosophy professor. From the artifactual point of view, HC is just better – more recent, and by a more domain-relevant expert. But if you also open up the books to see what’s inside, the two defy easy comparison.

S:PDS was unabashedly a weird book. It explored various outrageous scenarios (what if the AI destroyed humanity to prevent us from turning it off? what if it put us all in cryostasis so it didn’t count as destroying us? what if it converted the entire Earth into computronium?) with no excuse beyond that, outrageous or not, they might come true. Bostrom was going out on a very shaky limb to broadcast a crazy-sounding warning about what might be the most important problem humanity has ever faced, and the book made this absolutely clear.

HC somehow makes risk from superintelligence not sound weird. I can imagine my mother reading this book, nodding along, feeling better educated at the end of it, agreeing with most of what it says (it’s by a famous professor! I’m sure he knows his stuff!) and never having a moment where she sits bolt upright and goes what? It’s just a bizarrely normal, respectable book. It’s not that it’s dry and technical – HC is much more accessible than S:PDS, with funny anecdotes from Russell’s life, cute vignettes about hypothetical robots, and the occasional dad joke. It’s not hiding any of the weird superintelligence parts. Rereading it carefully, they’re all in there – when I leaf through it for examples, I come across a quote from Moravec about how “the immensities of cyberspace will be teeming with unhuman superminds, engaged in affairs that are to human concerns as ours are to those of bacteria”. But somehow it all sounds normal. If aliens landed on the White House lawn tomorrow, I believe Stuart Russell could report on it in a way that had people agreeing it was an interesting story, then turning to the sports page. As such, it fulfills its artifact role with flying colors.

How does it manage this? Although it mentions the weird scenarios, it doesn’t dwell on them. Instead, it focuses on the present and the plausible near-future, uses those to build up concepts like “AI is important” and “poorly aligned AI could be dangerous”. Then it addresses those abstractly, sallying into the far future only when absolutely necessary. Russell goes over all the recent debates in AI – Facebook, algorithmic bias, self-driving cars. Then he shows how these are caused by systems doing what we tell them to do (ie optimizing for one easily-described quantity) rather than what we really want them to do (capture the full range of human values). Then he talks about how future superintelligent systems will have the same problem. (...)

If you’ve been paying attention, much of the book will be retreading old material. There’s a history of AI, an attempt to define intelligence, an exploration of morality from the perspective of someone trying to make AIs have it, some introductions to the idea of superintelligence and “intelligence explosions”. But I want to focus on three chapters: the debate on AI risk, the explanation of Russell’s own research program, and the section on misuse of existing AI. (...)

Chapters 7 and 8, “AI: A Different Approach” and “Provably Beneficial AI”, will be the most exciting for people who read Bostrom but haven’t been paying attention since. Bostrom ends by saying we need people to start working on the control problem, and explaining why this will be very hard. Russell is reporting all of the good work his lab at UC Berkeley has been doing on the control problem in the interim – and arguing that their approach, Cooperative Inverse Reinforcement Learning, succeeds at doing some of the very hard things. If you haven’t spent long nights fretting over whether this problem was solvable, it’s hard to convey how encouraging and inspiring it is to see people gradually chip away at it. Just believe me when I say you may want to be really grateful for the existence of Stuart Russell and people like him.

Previous stabs at this problem foundered on inevitable problems of interpretation, scope, or altered preferences. In Yudkowsky and Bostrom’s classic “paperclip maximizer” scenario, a human orders an AI to make paperclips. If the AI becomes powerful enough, it does whatever is necessary to make as many paperclips as possible – bulldozing virgin forests to create new paperclip mines, maliciously misinterpreting “paperclip” to mean uselessly tiny paperclips so it can make more of them, even attacking people who try to change its programming or deactivate it (since deactivating it would cause fewer paperclips to exist). You can try adding epicycles in, like “make as many paperclips as possible, unless it kills someone, and also don’t prevent me from turning you off”, but a big chunk of Bostrom’s S:PDS was just example after example of why that wouldn’t work.

Russell argues you can shift the AI’s goal from “follow your master’s commands” to “use your master’s commands as evidence to try to figure out what they actually want, a mysterious true goal which you can only ever estimate with some probability”. Or as he puts it:
The problem comes from confusing two distinct things: reward signals and actual rewards. In the standard approach to reinforcement learning, these are one and the same. That seems to be a mistake. Instead, they should be treated separately…reward signals provide information about the accumulation of actual reward, which is the thing to be maximized.
So suppose I wanted an AI to make paperclips for me, and I tell it “Make paperclips!” The AI already has some basic contextual knowledge about the world that it can use to figure out what I mean, and my utterance “Make paperclips!” further narrows down its guess about what I want. If it’s not sure – if most of its probability mass is on “convert this metal rod here to paperclips” but a little bit is on “take over the entire world and convert it to paperclips”, it will ask me rather than proceed, worried that if it makes the wrong choice it will actually be moving further away from its goal (satisfying my mysterious mind-state) rather than towards it.

Or: suppose the AI starts trying to convert my dog into paperclips. I shout “No, wait, not like that!” and lunge to turn it off. The AI interprets my desperate attempt to deactivate it as further evidence about its hidden goal – apparently its current course of action is moving away from my preference rather than towards it. It doesn’t know exactly which of its actions is decreasing its utility function or why, but it knows that continuing to act must be decreasing its utility somehow – I’ve given it evidence of that. So it stays still, happy to be turned off, knowing that being turned off is serving its goal (to achieve my goals, whatever they are) better than staying on.
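
[ed. To make the shape of that logic concrete, here is a toy sketch in Python. It is not Russell's actual CIRL mathematics, and every hypothesis, probability and payoff in it is invented purely for illustration. The only point is the structure: the human's lunge for the off switch is treated as evidence about a hidden goal, so allowing shutdown scores better, by the agent's own estimate of that goal, than pressing on.]

```python
# Toy illustration only: not Russell's CIRL formalism. All hypotheses,
# probabilities and payoffs below are invented. The agent's objective is the
# human's hidden goal, which it can only estimate; a shutdown attempt is
# treated as evidence about that goal rather than as an obstacle.

# Two hypotheses about the current plan ("turn the dog into paperclips").
beliefs = {
    "plan serves the human's goal": 0.7,
    "plan is ruining the human's goal": 0.3,
}

# How likely the human is to lunge for the off switch under each hypothesis.
likelihood_of_shutdown_attempt = {
    "plan serves the human's goal": 0.05,
    "plan is ruining the human's goal": 0.95,
}

# Value *to the human's goal* of continuing, versus of being switched off.
value_if_continue = {
    "plan serves the human's goal": 1.0,
    "plan is ruining the human's goal": -10.0,
}
value_if_shut_down = 0.0  # doing nothing neither helps nor harms


def update_on_shutdown_attempt(prior):
    """Bayes update: the human reaching for the off switch is evidence."""
    unnormalized = {h: p * likelihood_of_shutdown_attempt[h] for h, p in prior.items()}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}


posterior = update_on_shutdown_attempt(beliefs)
expected_if_continue = sum(posterior[h] * value_if_continue[h] for h in posterior)

# Continuing now looks strongly negative for the (estimated) goal, while being
# switched off scores 0, so the agent prefers shutdown, which is the behavior
# described in the paragraph above.
if expected_if_continue < value_if_shut_down:
    print(f"allow shutdown (expected value of continuing: {expected_if_continue:.2f})")
else:
    print("keep going")
```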

by Scott Alexander, Slate Star Codex |  Read more:
Image: Amazon
[ed. AI Friday, inspired by this post. See also:]

Moravec's Paradox

Moravec's paradox is the observation by artificial intelligence and robotics researchers that, contrary to traditional assumptions, reasoning (which is high-level in humans) requires very little computation, but sensorimotor skills (comparatively low-level in humans) require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. As Moravec writes, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".

Similarly, Minsky emphasized that the most difficult human skills to reverse engineer are those that are unconscious. "In general, we're least aware of what our minds do best", he wrote, and added "we're more aware of simple processes that don't work well than of complex ones that work flawlessly".

The biological basis of human skills

One possible explanation of the paradox, offered by Moravec, is based on evolution. All human skills are implemented biologically, using machinery designed by the process of natural selection. In the course of their evolution, natural selection has tended to preserve design improvements and optimizations. The older a skill is, the more time natural selection has had to improve the design. Abstract thought developed only very recently, and consequently, we should not expect its implementation to be particularly efficient.

As Moravec writes:
Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.
A compact way to express this argument would be:
  • We should expect the difficulty of reverse-engineering any human skill to be roughly proportional to the amount of time that skill has been evolving in animals.
  • The oldest human skills are largely unconscious and so appear to us to be effortless.
  • Therefore, we should expect skills that appear effortless to be difficult to reverse-engineer, but skills that require effort may not necessarily be difficult to engineer at all.
Some examples of skills that have been evolving for millions of years: recognizing a face, moving around in space, judging people’s motivations, catching a ball, recognizing a voice, setting appropriate goals, paying attention to things that are interesting; anything to do with perception, attention, visualization, motor skills, social skills and so on.

Some examples of skills that have appeared more recently: mathematics, engineering, human games, logic and scientific reasoning. These are hard for us because they are not what our bodies and brains were primarily evolved to do. These are skills and techniques that were acquired recently, in historical time, and have had at most a few thousand years to be refined, mostly by cultural evolution.

Historical influence on artificial intelligence

In the early days of artificial intelligence research, leading researchers often predicted that they would be able to create thinking machines in just a few decades (see history of artificial intelligence). Their optimism stemmed in part from the fact that they had been successful at writing programs that used logic, solved algebra and geometry problems and played games like checkers and chess. Logic and algebra are difficult for people and are considered a sign of intelligence. Many prominent researchers assumed that, with the "hard" problems (almost) solved, the "easy" problems of vision and commonsense reasoning would soon fall into place. They were wrong, and one reason is that these problems are not easy at all, but incredibly difficult. The fact that they had solved problems like logic and algebra was irrelevant, because these problems are extremely easy for machines to solve.

Rodney Brooks explains that, according to early AI research, intelligence was "best characterized as the things that highly educated male scientists found challenging", such as chess, symbolic integration, proving mathematical theorems and solving complicated word algebra problems. "The things that children of four or five years could do effortlessly, such as visually distinguishing between a coffee cup and a chair, or walking around on two legs, or finding their way from their bedroom to the living room were not thought of as activities requiring intelligence."

This would lead Brooks to pursue a new direction in artificial intelligence and robotics research. He decided to build intelligent machines that had "No cognition. Just sensing and action. That is all I would build and completely leave out what traditionally was thought of as the intelligence of artificial intelligence." This new direction, which he called "Nouvelle AI", was highly influential on robotics research and AI.

by Wikipedia |  Read more:

The Secretive Company That Might End Privacy as We Know It

Until recently, Hoan Ton-That’s greatest hits included an obscure iPhone game and an app that let people put Donald Trump’s distinctive yellow hair on their own photos.

Then Mr. Ton-That — an Australian techie and onetime model — did something momentous: He invented a tool that could end your ability to walk down the street anonymously, and provided it to hundreds of law enforcement agencies, ranging from local cops in Florida to the F.B.I. and the Department of Homeland Security.

His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
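
[ed. The article does not disclose Clearview's internals, so the following is only a generic sketch of the pattern it describes: embed each face as a numeric vector, index billions of (vector, source URL) pairs, and answer a query by nearest-neighbor search. The embedding function below is a stand-in placeholder rather than a real face-recognition model, and the URLs are made up.]

```python
# Generic face-search sketch (illustrative assumptions throughout; this is not
# Clearview's code). A real system would use a trained face-embedding network
# and an approximate nearest-neighbor index over billions of entries.
import numpy as np


def embed_face(image_bytes: bytes) -> np.ndarray:
    """Placeholder embedding: deterministic per input within one process."""
    rng = np.random.default_rng(abs(hash(image_bytes)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)


# A tiny, fake index of previously scraped photos: embedding plus source link.
scraped = [
    (embed_face(b"photo-a"), "https://example.com/social-profile/a"),
    (embed_face(b"photo-b"), "https://example.com/video-still/b"),
    (embed_face(b"photo-c"), "https://example.com/forum-avatar/c"),
]


def search(query_image: bytes, top_k: int = 2):
    """Return the closest stored faces (cosine similarity) and their sources."""
    q = embed_face(query_image)
    scored = [(float(np.dot(q, emb)), url) for emb, url in scraped]
    return sorted(scored, reverse=True)[:top_k]


if __name__ == "__main__":
    for score, url in search(b"photo-a"):
        print(f"{score:.3f}  {url}")
```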

Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.

Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.

But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.

And it’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.

“The weaponization possibilities of this are endless,” said Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University. “Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.”

Clearview has shrouded itself in secrecy, avoiding debate about its boundary-pushing technology. When I began looking into the company in November, its website was a bare page showing a nonexistent Manhattan address as its place of business. The company’s one employee listed on LinkedIn, a sales manager named “John Good,” turned out to be Mr. Ton-That, using a fake name. For a month, people affiliated with the company would not return my emails or phone calls.

While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.

by Kashmir Hill, NY Times | Read more:
Image: Clearview
[ed. See also: Facing Up to Facial Recognition (IEEE Spectrum).]

Why You Should or Shouldn't Fear Slaughterbots

Thursday, January 30, 2020

Brad Pitt and the Beauty Trap

The meaning of Brad Pitt — as actor, star and supreme visual fetish — can be traced to the moment in the 1991 film “Thelma & Louise” when the camera pans up from his bare chest to his face like a caress. William Bradley Pitt was born in 1963, but Brad Pitt sprang forth in that 13-second ode to eroticized male beauty, initiating a closely watched career and life, dozens of movies, and libraries of delirious exaltations, drooling gossip and porny magazine layouts.

The delirium has resumed with Quentin Tarantino’s “Once Upon a Time … in Hollywood,” in which Pitt plays the Pitt-perfect role of Cliff Booth, a seasoned stunt man and coolest of cats. Everything about Cliff looks so good, so effortlessly smooth, whether he’s behind the wheel of a Coupe de Ville or strolling across a dusty wasteland. The novelist Walter Kirn once wrote that Robert Redford “stands for the [movie] industry itself, somehow, in all its California dreaminess.” In “Once Upon a Time,” Tarantino recasts that idea-ideal with Cliff, exploiting Pitt’s looks and charm to create another sun-kissed, golden and very white California dream.

So of course Tarantino being Tarantino has Cliff-Pitt doff his shirt, in a scene that both nods to the actor’s foundational “Thelma & Louise” display and offers another effusive paean to masculine beauty. It’s a hot day; Cliff is scarcely working. So he grabs his tools and a beer and scrambles on a roof to fix an antenna, wearing pretty much what Pitt first wears in “Thelma & Louise.” Then Cliff strips off his Hawaiian shirt and the Champion tee underneath it and once again, Brad Pitt stands bare-chested, soaring above both Hollywood and our gaze, the already porous line between actor and character blurring delectably further.

On Feb. 9, Oscar night, our gaze will again fix on Pitt, who has been nominated for best supporting actor for his role in “Once Upon a Time.” It’s nice that his peers bothered because they’ve been reluctant to honor him in the past. Despite his years of service and critically praised roles, Pitt has won just one Oscar: a best picture statuette for helping produce “12 Years a Slave.” As an actor, he has been nominated three previous times: once for supporting (“12 Monkeys”) and twice for lead (“The Curious Case of Benjamin Button” and “Moneyball”). As a reminder, Rami Malek, Eddie Redmayne and Roberto Benigni have all won best actor. (...)

Critics could be unkind (guilty), but as the bad movies gave way to good, the notices improved. Soon, it became a favorite cliché to write that he was a character actor trapped in the body of a star (guilty again). Some of this, I think, stems from a suspicion of beauty, that it can’t be trusted, is “merely” superficial and silly, which makes the beautiful one also superficial and maybe even worthy of contempt that can lurk under obsession. There’s nothing new about how we punish beauty. The history of movies is filled with the victims of this malignant love-to-love and love-to-hate dynamic, not all of them women. (...)

Pitt should have been nominated this year for best actor for his delicate, deep work in James Gray’s “Ad Astra,” a meditation on the unbearable weight of masculinity set largely in outer space. The film was praised as was Pitt’s turn, but neither found awards momentum. The performance was too good and certainly too subtle and interiorized for the academy. It has a historic weakness for showboating — the more suffering the better — which is why Joaquin Phoenix (often otherwise worthy) and his jutting rib cage in “Joker” seem like a lock. But Pitt has time. It took seven nominations for Paul Newman to win best actor; Redford has been nominated only once for acting (he lost).

Like Newman and Redford, Pitt has always seemed born to the screen, a natural. He has a palpable physical ease about him that seems inseparable from his looks, that silkiness that seems, at least in part, to come from waking up every day and going through life as a beautiful person. This isn’t to say that good-looking people don’t have the same issues, the neuroses and awkwardness that plague us mortals. But Pitt has always moved with the absolute surety you see in some beautiful people (and dancers), the casualness of movement that expresses more than mere confidence, but a sublime lack of self-consciousness and self-doubt about taking up space, something not everyone shares. This isn’t swagger; this is flow. (...)

In the years since “Fight Club,” the film has been embraced without irony and apparently without humor by men’s rights partisans. I wonder if they think Tyler is hot, and what exactly they see when they look at his body. Movies have always banked on the audience’s love of male violence. Throughout their history, they have exploited male beauty, tapping the passion it inspires. “Everybody wants to be Cary Grant. Even I want to be Cary Grant,” said, well, Cary Grant.

But the beautiful man can make us nervous, partly because he complicates gender norms. George Clooney is more than a pretty face, more than one writer has insisted. Yes, but he is also pretty. Some of this anxiety reeks of gay panic and misogyny.

by Manohla Dargis, NY Times | Read more:
Image: Andrew Cooper/Columbia via

The Truth About "Dramatic Action"

"As far as I know, trying to contain a city of 11 million people is new to science.” This was how Dr. Gauden Galea, the World Health Organization’s country representative in China, described the situation facing the city of Wuhan when asked late last week for his update on the coronavirus outbreak.

It was clear from Galea’s remarks that the total containment of Wuhan, the city where I have lived for the past few decades, was not a course of action the WHO had recommended. Nor did the organization have any clear view on whether such an action would prove effective in limiting the spread of the disease. “It has not been tried before as a public health measure,” he said, “so we cannot at this stage say it will or will not work.”

I am now one of 11 million people in Wuhan who are living through this grand experiment, a measure that, Galea also said, shows “a very strong public health commitment and a willingness to take dramatic action.” From inside the curtain that now encloses my city, I wish to offer my thoughts on this “dramatic action,” and to judge what we have actually seen and experienced in terms of commitment to public health.

Closing Up the Cities

At 2AM on January 23, authorities in Wuhan suddenly issued the order to close off the city. According to the order, from 10AM that same day, all public buses, subways, ferries, long-distance buses and other transport services would be suspended; the airport and train stations would be shuttered. At this point, the WHO might have had reservations about the necessity and effectiveness of this strategy – but in any case, it was irreversible, and it would soon extend to neighboring cities as well.

In less than two days, up to noon on January 24, a total of 14 cities in Hubei province, with a combined population of around 35 million, would be brought into the quarantine zone. Huanggang (黄冈) and E’zhou (鄂州) were quickly brought under the closure order, and more cities followed: Chibi (赤壁), Xiantao (仙桃), Zhijiang (枝江), Qianjiang (潜江), Xianning (咸宁), Huangshi (黄石), Enshi (恩施), Dangyang (当阳), Jingzhou (荆州), Jingmen (荆门) and Xiaogan (孝感).

This was no longer a city under lockdown, but effectively an entire province under quarantine.

Galea and other foreign experts have expressed a sense of awe about the boldness of the quarantine in Hubei province. Over the weekend, the New York Times quoted Dr. William Schaffner, an expert on infectious disease from Vanderbilt University, as saying that the lockdown is a “public health experiment, the scale of which has not been done before.” Schaffner was clearly astonished: “Logistically, it’s stunning, and it was done so quickly.”

China’s capacity to impress with such grand gestures calls to mind talk of the “Chinese miracle,” often used to describe the performance of the country’s economy over four decades. But is it fair to regard this case of large-scale quarantine also as a “Chinese miracle” in public health?

Shutting People’s Mouths

Everyone must understand, first of all, that this epidemic was allowed to spread for a period of more than forty days before any of the abovementioned cities were closed off, or any decisive action taken. In fact, if we look at the main efforts undertaken by the leadership, and by provincial and city governments in particular, these were focused mostly not on the containment of the epidemic itself, but on the containment and suppression of information about the disease.

The early suppression of news about the epidemic is now fairly common knowledge among Chinese, and many people view this failure to grapple openly with the outbreak as the chief reason why it was later seen as necessary to take the “dramatic action” of closing down my city and many others.

The direct cause of all of this trouble is of course the new coronavirus that has spread now from Wuhan across the globe and has everybody talking. Up to January 24, in Hubei province alone, there were 549 admitted cases of the virus. Among these there have been 24 deaths. But the real numbers are still unknown.

According to reports from Caixin Media, one of China’s leading professional news outlets, the entire situation began on December 8, with the discovery of the first known case of an infected patient in Wuhan, a stall operator from the Huanan Seafood Market. The Huanan Seafood Market is a large-scale wet market, with an area about the size of seven football pitches and more than 1,000 stalls. The market has a constant flow of customers, making it the ideal place for the spread of infectious disease. A seafood market only in name, it sells a wide array of live animals, including hedgehogs, civet cats, peacocks, bamboo rats and other types of wild animals. At this market, the nearly inexhaustible appetite, and insatiable greed and curiosity of Chinese diners is on full display.

The number of infected people rose rapidly, reaching 27 people within a short period of time. Health professionals in Wuhan began suspecting in early December that this was an unknown infectious disease, not unlike the Severe Acute Respiratory Syndrome (SARS) that emerged in southern China in 2003. The ghost of SARS seemed to wander Wuhan in December, and rumors spread farther and farther afield of a new disease on the prowl.

China is a society closely monitored by the government, and the shadow of Big Brother is everywhere. Social media in particular are subject to very close surveillance. So when the authorities detected chatter about the re-emergence of SARS, or of a similar unknown outbreak, they took two major steps initially. First, they tried to ensure that this new outbreak remained a secret; second, they put the stability preservation system into effect (启动稳控机制). On December 30, the Wuhan Health Commission (武汉市卫建委) issued an order to hospitals, clinics and other healthcare units strictly prohibiting the release of any information about treatment of this new disease. As late as December 31, the government in Wuhan was still saying publicly that there were no cases of human-to-human transmission, and that no medical personnel had become infected.

by Da Shiji, China Media Project | Read more:
Image: uncredited

Wednesday, January 29, 2020


Jack Whitten, Space Flower #9, 2006

How Billie Eilish Harnesses The Power Of ASMR In Her Music

The Little Man on the Big Screen

In the early 1940s, there was a cinematic battle raging between two populist filmmakers — Frank Capra and Preston Sturges. If you watch their movies today, you’re almost certain to like Preston Sturges’s better. They’re wild, chaotic, hilarious films that assume all governing officials are ridiculously corrupt and pretty much all ordinary citizens are scrambling and hustling and stuttering and screeching and flailing in their mad slapstick efforts to succeed in America.

It’s telling that the Coen brothers, the contemporary filmmakers most overtly engaged with conveying the American experience, cite Sturges frequently. One of their most popular films, O Brother, Where Art Thou? (2000), was directly inspired by Sturges’s Sullivan’s Travels (1941), in which a successful Hollywood director of comedies, John L. Sullivan, yearns to make a serious, socially conscious drama entitled O Brother, Where Art Thou?, a movie Sullivan explicitly sees as his own Capra picture. And The Hudsucker Proxy (1994), perhaps the Coens’ least popular film, was directly inspired by Capra’s film Meet John Doe (1941). Though the plotting is Capraesque, the tone of the film is far closer to Sturges — hectic and satirical. The combination was an uneasy one.

Sturges himself had better luck taking on Capra. In the 1930s, when Sturges arrived in Hollywood, Capra had reached the dizzying peak of his career, directing hit after hit with It Happened One Night, Mr. Deeds Goes to Town, You Can’t Take It with You, and Mr. Smith Goes to Washington. Capra was the Spielberg of his day — world-famous, revered, loaded up with Academy Awards, and celebrated on the cover of Time magazine. He rivaled director John Ford in presenting America to itself in instantly mythologizing terms that the public loved. Decades later, actor and independent filmmaker John Cassavetes would say, “Maybe there really wasn’t an America. Maybe there was only Frank Capra.”

It was this towering patriotic mythology that Sturges riotously satirized in a series of frenetic screwball comedies such as The Great McGinty, Christmas in July, The Lady Eve, Sullivan’s Travels, The Palm Beach Story, The Miracle of Morgan’s Creek, and Hail the Conquering Hero. For Sturges, American life was the experience of being whipsawed around by unseen forces of corruption and incompetence. Kicking the slats out from under meritocracy by showing the ways sky-high success and abject failure result from a fundamentally insane system are signature Sturges moves. (...)

Among cinephiles, the reputation of Sturges grows shinier every year, even as Capra’s becomes more tarnished. Both directors were conservatives who fundamentally distrusted politics. Both were also enamored of their own ideas about America and its people, and both were egomaniacal auteurs, film “authors,” long before that term came into use.

But beyond those similarities, the two directors could hardly have seemed more at odds. Capra was the driven child of impoverished immigrant Sicilian laborers. Like many people who come up the hard way, he grew to love the idea of its hardness, and to romanticize the lone individual struggling for success as the figure that should be at the center of American society, a moral paragon setting the bootstrapping standards for others to emulate.

Sturges, on the other hand, came up the soft way. He was the son of a bold American self-creationist named Mary Dempsey, who changed her name to D’Este and convinced herself she truly was the daughter of Italian nobility until one of the real D’Estes sued her for using the family name on her line of cosmetics. She altered the name to Desti and found some success as an entrepreneur, able to spend most of her time lounging around the Continent having affairs with a wide range of fringe characters including occultist Aleister Crowley and participating in art happenings with her idol and best friend, Isadora Duncan, the modern dance pioneer. Desti gave Duncan the dramatically long scarf that caught on the back wheel of Duncan’s convertible and snapped her neck, a perfect bohemian death.

Preston “got dragged through every museum in Europe” by his art-addled mother and came to hate anything smacking of pretentious high culture. Instead, he adored his stepfather, Solomon Sturges, a Chicago stockbroker whose mild, stable, plainspoken character came as a refreshing change. Though Preston was often broke, making and losing several fortunes in his lifetime, he was never poor. He had tremendous cultural capital: he spoke fluent French, wore custom-made suits, and had gone to boarding schools with the sons of dukes and prime ministers.

His vision of America was of a chaotic but protean place where the next person you met might be the key to either dizzying success or total disaster. Sturges attempted to forge a career as a cosmetics tycoon, an inventor, a songwriter, and (reluctantly) the kept husband of a wealthy wife, failing at everything until he tried his hand at being a playwright. Finally, his erratic energies and brilliant facility with language found a home and a hit with Strictly Dishonorable, a title he got from his own line to a date who asked what his intentions were that evening. He loved America, with its fast pace, high risk, and popping energy.

It was an altogether different America that Capra loved. Capra, unlike Sturges, was generally and erroneously regarded as a highly political, left-wing populist, always bravely willing to court controversy in order to make “significant” films celebrating the common man and exposing the ways the system didn’t work. Mr. Smith Goes to Washington was considered borderline seditious by members of Congress, who raised a ruckus over its portrayal of a US Senate rotten with corruption. The Daily Worker even praised Capra for his “notable progressive films in the 1930s.”

And the French public, given the chance by the Vichy government to vote on which Hollywood movie they’d like to see before the Nazis ended the import of American films, chose Mr. Smith Goes to Washington as an inspirational representation of a still-thriving democracy that can speak truth to power.

Capra created heroes who are idealists full of small-town American values, brave and adventuresome in their Boy Scout–ish endeavors, generally tongue-tied except when quoting their role models Jefferson and Lincoln. They’re often played by tall, lanky, all-American actors like Gary Cooper or Jimmy Stewart. They go to the big city and meet the cynical, self-serving representatives of financial and state power and are almost undone by the depths of their corruption. But then, at the climactic point, they come back strong and show that the individual can go up against the capitalist forces of darkness and save American democracy.

This last-ditch triumph always occurs with the crucial help of a tough, brainy, and cynical career woman typically played by Jean Arthur or Barbara Stanwyck, who actually has the know-how to fight the system she learned from the inside, once she’s won over to the hero’s cause.

One idealistic man saving American democracy, aided by one formidable woman, can still only do it by inspiring the people to action, starting with that man’s hordes of friends. Like the angel says in It’s a Wonderful Life, “No man is a failure who has friends.” Especially friends who show up with money when the shit hits the fan, as occurs at key points in It’s a Wonderful Life and You Can’t Take It with You, defying the machinations of the rich and powerful with heartwarming hatfuls of crumpled dollar bills. And don’t think you won’t be moved by these scenes either — they still work like magic. (...)

The suicidal urges of a desperate working-class man who can see no other way out was a specialty of theirs in both Meet John Doe (1941) and It’s a Wonderful Life (1946), the latter of which was a notable financial failure when it came out. It was right after the end of World War II, when Americans were in no mood for the film’s bleak look at an unhappy life consumed by relentless money troubles, capped by watching the despairing Jimmy Stewart plunge off a bridge into icy black water on the night before Christmas, even if an intervention from heaven makes it all turn out fine in the end. The 1940s public probably saw the film more clearly than we do now, when it’s considered a cornball Christmas classic.

Capra’s reputation has suffered badly over the course of several decades, as his films came to represent populism’s supposedly slippery slope to fascism. It was shrewd of Capra to obscure his actual politics during the 1930s and ’40s, when most people were fooled by his films into believing the director was “quite liberal,” as Katharine Hepburn did when she agreed to star in State of the Union. Capra was, in fact, an open admirer of both Mussolini and Franco. During the McCarthy era, he served as an FBI informer, helping to persecute his fellow film industry professionals as a way of making sure his own history of working with left-wing writers didn’t come back to bite him.

But as corny as Capra looks today, Sturges relied on his films. He couldn’t help but pay tribute even as he wrestled with Capra’s idea of the American experience. 

by Eileen Jones, Jacobin |  Read more:
Image: A still from Sullivan's Travels (1941)

‘BoJack Horseman’ and ‘The Good Place’ Took Us to Hell and Back

In the penultimate episode of “The Good Place,” after four seasons wandering the afterlife, our dear-departed heroes finally make it to the destination promised in the title. It is, of course, beautiful, with lush gardens and buildings with alabaster walls.

It’s also familiar. The first time I watched, I felt like I knew this place. Was I recovering a memory from another life, or a state before life? Had I — good Lord — had I been to heaven?

Turns out I had, kind of. It took a few minutes of searching my memory and Google Images to realize that the location the producers chose to represent the Good Place was … the Getty Center, the art museum in the hills overlooking Los Angeles.

It’s a fitting choice for a humanist Hollywood reboot of paradise. “The Good Place,” whose finale airs Thursday night on NBC, is a slapstick survey of moral philosophy that places its faith not in a higher power (or a lower one) but in human culture and creation.

It’s also a visual echo of another great comedy, the zoologically incorrect Hollywood satire “BoJack Horseman,” whose final eight episodes arrive on Netflix Friday. Its title sequence begins with a wide shot of the cliffside house where the title character (Will Arnett), an anthropomorphic horse and former ’90s sitcom star, has spent six seasons chugging booze, pills and the occasional chaser of remorse.

If heaven is in the L.A. hills, so is hell. And over the past several years, these two comedies have wandered the crooked path between the two, trying to figure out how to be a decent person in a fallen world. (...)

The moral universe of “BoJack” is darker and messier than its NBC counterpart. Even its aesthetic is baroque, Hieronymus Bosch-like, compared with the clean, jewel-tone fantasy of “The Good Place.”

In “BoJack,” there are no cosmic do-overs, no second or two-thousandth chances. In one of the final episodes, BoJack imagines seeing a long-dead friend, who tells him: “There is no other side. This” — i.e., mortal life — “is it.”

It’s a dark statement. But dark is not the same as hopeless. Really, “BoJack” is making a kind of moral argument from atheism. In its universe, you have to do right not because you might end up in The Bad Place but because this, right here, is the only place.

Where “BoJack” is most like “The Good Place” is that it, too, is about the moral obligation to help others to be good. But it’s complicated; the show is also aware of the blurry line between help and enabling.

Throughout the series, BoJack is bailed out and pulled from the brink by others: his friend Mr. Peanutbutter (Paul F. Tompkins), a chuckleheaded Labrador retriever; his overstressed feline agent, Princess Carolyn (Amy Sedaris); and his ghostwriter-turned-confidante, Diane (Alison Brie).

But Diane — as close as anything to the show’s moral center — starts to wonder if she’s really helping BoJack improve or (à la Dr. Melfi counseling Tony Soprano) just making him a more efficient miscreant. There’s an entire showbiz industry built around performative contrition, and BoJack has mastered its turns and straightaways like Secretariat. (He walks out of one supposedly harrowing confessional interview as if he’d aced the SAT: “I felt like I could see the matrix!”)

If “The Good Place” is how we need to raise one another up, “BoJack” is often about the need not to let one another off the hook. At the end of Season 5, for instance, Diane rejects BoJack’s plea that she write an exposé on him after a #MeToo incident, realizing that she’d just be stage-managing his redemption theater.

But she’s also reluctant to cut him off entirely. As she says, toward the end of the series: “Maybe it’s everybody’s job to save each other.”

As different as “The Good Place” and “BoJack” are in tone, each in its absurdist way gets at a piece of the current moment, in which many of our public fights are as much about morality — complicity, complacency, enabling — as they are about politics. In very different ways, both shows ask: Is being good simply an individual act that you can undertake in isolation? Is it enough to tend your personal moral garden if you allow evil to flourish around you?

by James Poniewozik, NY Times | Read more:
Image: BoJack Horseman, Netflix

Gutting the Clean Water Act

It may be hard to remember these days, but the nation that led the world on to the stage of modern environmental protection was the United States.

Starting in the early 70s, the US Congress enacted bold bipartisan laws to protect America’s wildlife, air and water. America’s skies cleared. Waterfronts across the nation were transformed from blighted dumping grounds into vital civic hearts.

And, in this journey from smog to light, America’s economy thrived. Our environment improved even as our economy grew. Both Republican and Democratic administrations upheld this commitment to a clean environment, and it endured for decades.

Following the 2016 election, polluting-industry veterans commandeered the country’s environmental agencies with one central aim: make pollution free again.

The assaults have been fast, furious and many. But the latest one stands out above, or below, the others. Administration officials have now targeted the Clean Water Act, perhaps the most fundamental environmental law ever enacted by the US Congress.

The law’s main mechanism is simple: before discharging waste into the nation’s waters, polluters must first try to clean it up.

So how did the former lobbyists running the agencies sabotage the act? By radically shrinking it. By its terms, the act only protects waters “of the United States”. But according to this administration, waters “of” the United States does not mean waters in the United States. In their view, the Clean Water Act only applies to a subset of waters, and the rest are unprotected.

The scope of the contraction is staggering. In some states out west, 80% of stream miles would lose their protection. Drinking water sources for millions of Americans would be at risk from pollution. The administration’s redefinition would leave millions of acres open for destruction – wetlands that buffer communities from storms, serve as homes for wildlife and nurseries for fish and shellfish, and act as natural water filters.

This is the single largest loss of clean water protections that America has ever seen. And the timing couldn’t be worse. From lead contamination in drinking water to the proliferating threat of toxic industrial chemicals, new threats to water quality are emerging daily. (...)

Now the administration wants to scrap all that by only defending the very largest rivers and declaring open season on the smaller tributaries upstream. That’s like trying to address heart disease while ignoring the vessels that carry blood to the heart.

by Blan Holman, The Guardian | Read more:
Image: Chris O'Meara/AP via
[ed. Not to mention, gutting NEPA (National Environmental Policy Act). See also: Trump Administration Cuts Back Federal Protections For Streams And Wetlands (NPR)]

Linda Ronstadt


[ed. See also: It Doesn't Matter Anymore (YouTube).]

Tuesday, January 28, 2020


Anders Kjær, Untitled, 1981
via:

Peter Hutchinson, Somewhere. 2017
via:

The Most Loved and Hated Classic Novels

Here are the top ten most popular classics, which likely corresponds with the list of books most assigned in American high schools:


Every book listed is a “great novel”. These books wouldn’t have been read hundreds of thousands of times if that weren’t the case. However, we can recognize a book as a “great novel” while also recognizing that many readers will not enjoy it.

These rankings matter because reading books you love is the gateway to a love of reading, and reading books you hate is the gateway to a life without reading. Too often people are turned off from reading by being fed books they hate, either through school, or because the internet/friends make a certain book seem like it must be read.

via: The Most Loved and Hated Classic Novels According to Goodreads Users (Goodreads).

[ed. I don't know what the average high school literature curriculum is these days, but if these moldy oldies are at the core of it, no wonder kids get disconnected from reading for pleasure and enlightenment. See also: On the Hatred of Literature (The Point).]

Steely Dan


[I'm working on gospel time these days (Summer, the summer. This could be the cool part of the summer). The sloe-eyed creature in the reckless room, she's so severe. A wise child walks right out of here. I'm so excited I can barely cope. I'm sizzling like an isotope. I'm on fire, so cut me some slack. First she's way gone, then she comes back. She's all business, then she's ready to play. She's almost Gothic in a natural way. This house of desire is built foursquare. (City, the city. The cleanest kitten in the city). When she speaks, it's like the slickest song I've ever heard. I'm hanging on her every word. As if I'm not already blazed enough. She hits me with the cryptic stuff. That's her style, to jerk me around. First she's all feel, then she cools down. She's pure science with a splash of black cat. She's almost Gothic and I like it like that. This dark place, so thrilling and new. It's kind of like the opposite of an aerial view. Unless I'm totally wrong. I hear her rap, and, brother, it's strong. I'm pretty sure that what she's telling me is mostly lies. But I just stand there hypnotized. I'll just have to make it work somehow. I'm in the amen corner now. It's called love, I spell L-U-V. First she's all buzz, then she's noise-free. She's bubbling over, then there's nothing to say. She's almost Gothic in a natural way. She's old school, then she's, like, young. Little Eva meets the Bleecker Street brat. She's almost Gothic, but it's better than that. ~ Almost Gothic.]

[ed. Sizzling like an isotope. See also: What a Shame About Me (lyrics) and West of Hollywood.]

Monday, January 27, 2020

Remembering Jim Lehrer

This is FRESH AIR. Jim Lehrer, the respected journalist and a nightly fixture on PBS for more than three decades, died Thursday at his home in Washington. He was 85. Lehrer is best-known for co-anchoring "The MacNeil/Lehrer NewsHour" from 1983 to '95 with co-host Robert MacNeil and then, when MacNeil retired, "The NewsHour With Jim Lehrer" until his retirement in 2011.

Lehrer grew up in Texas and was a newspaper journalist before getting into broadcasting. He was also a prolific writer. He published more than 20 novels and three memoirs and wrote four plays. Known for a calm, unflappable style and a commitment to fairness, Lehrer moderated presidential debates in every election from 1988 through 2012. He won numerous Emmys, a George Foster Peabody Award and a National Humanities Medal.

Jim Lehrer spoke to Terry Gross in 1988, five years after he'd suffered a heart attack and had double bypass surgery. (...)

TERRY GROSS: You have been the subject of many interviews since your heart attack, really, in 1983 and then since the writing of your plays and your new novel. Have you learned a lot about interviewing from being an interviewee yourself?

JIM LEHRER: I have, I think. I - MacNeil says, I think correctly, that I am a terrible interviewee because I give very long answers. In fact, as he said, you know, Lehrer, if you were ever on our program, we'd never invite you back because your answers are very - you asked me a question when we started. You know, I went on for five minutes, I think. I mean, that's a problem I have, and I understand. I sympathize, and I'm sure you must, too. I mean, I have great sympathy for the people I'm interviewing because I ask a question of somebody - now, keep in mind 99% of the interviews I do are live. I ask somebody a question, and then I'm immediately jumping, ready for the next question or ready to go on with it, you know?

I mean, I would much rather interview than be interviewed. I have learned a lot just out of sympathy for the people as a result of being the subject of the interview. There's no question about it. I now understand how difficult it is.

GROSS: Well, do you tell the people who are appearing on your program to give you short answers (laughter), and how do you stop them if the answers...

LEHRER: No.

GROSS: ...Are long?

LEHRER: What I tell the folks to do is to give their best answer. If it's short, that's fine. If it's long, that's fine. I can always interrupt them. I interrupt people for a living. That's what I tell them. It's very important that the person not have to be - not have to confine themselves to your rules. For instance, if - let's say somebody is like me, gives long answers like I'm giving you right now, as a matter of fact. And - but, I mean, that's your problem, see? That's not my problem.

GROSS: (Laughter).

LEHRER: I mean, if I'm going...

GROSS: Hey; thanks a lot.

LEHRER: Yeah, right. I mean, I've come - if you asked me the story of my life and if it takes two hours to tell you the story of my life, I think it takes two hours. And it's your job as the professional to cut it down a little bit. And I think that also, you get better answers that way. If I say to somebody who sits down who's already nervous - now, that's not true of people that are used to television. But if somebody comes in there very nervous - live show going all over the country, their mother's watching and everybody's there - and I say to them, all right; keep your answer short and blah blah blah, all it does is add to their anxiety. And I want people to be relaxed. I want them to forget that there are all these lights and cameras around and have eye contact. Our studios are set up, both in Washington and New York - are set up...

GROSS: This answer's too long. No, I'm kidding.

LEHRER: No, I know (laughter). I know it is.

GROSS: Just thought I'd try that out, see what happened (laughter).

LEHRER: See, it doesn't work with me. That's - it feels right. But we set our people - our guests are very close to us, and there's direct eyeball-to-eyeball contact. So that - so you try to confine the situation so the person is comfortable, and all they have to do is look at you. They're not - they don't have to look around. There's not a place to - you know, to be distracted. It's to make people comfortable.

GROSS: You've had to interview many politicians over the years, and I think that is always so difficult because politicians give you answers, but they're not necessarily answers to the questions you've asked. I don't mean you in particular.

LEHRER: Oh, I know.

GROSS: But in general, what are some of the techniques you've come up with for actually getting an answer to the question that you want answered because you're just not necessarily going to get it?

LEHRER: Terry, there's only one technique that works, and that's to have enough time to ask the question a second time and then a third time and maybe a fourth time. And then, if Billy Bob Senator isn't going to answer it, you at least have a stab. That's his option. If he didn't want to - you know, I mean, there's no law that says he has to answer all the questions that Jimmy Charles Lehrer asks him on television, but I have the time. We have the time on our program.

Senator, what is your position on selling grain to the Soviet Union? Well, you know, Jim, that reminds me of when I was a little boy growing up in Oklahoma. And then he tells you a story. And you ask, yeah, but Senator - you give him the time, you know? He does that, and you say, yes, but what's your position on selling grain? Well, you - first of all, you got to understand what grain is. Grains are these little - he still hasn't answered. So then you say, yes, but Senator, again - you know? And then finally, you have to decide. And you're sitting there in a live situation. Do I ask this sucker this question again, or do I go on? You have to - at some point, you have to have real confidence in your audience that they realize, hey; this jerk isn't going to answer this question, or, this wonderful man isn't going to answer this question, or whatever the situation is. Then you go on with it.

I do not believe in beating up on guests. I don't - we don't invite people on our program to abuse them. And so the other way to do it if you don't have the time is you say, you didn't answer my question, you know? Hey, hey, blah, blah, blah, you know? We don't do it that way. And it's because - it's not because we object to it. That's somebody else's job to object to it. That's just not our style. We're not comfortable doing that. And we have the luxury of time.

GROSS: You know, you strike me as one of the few news anchors on television - I mean, you and MacNeil, really - who do more than just read the news while the newscast is on. Does the emphasis that American news viewers put on news anchors on commercial news seem a little absurd to you?

LEHRER: It seems incredibly absurd to me. I don't understand it. I do - I simply do not understand the value that is placed on the ability of somebody to look into a television camera and read a teleprompter. Now, that's called a short answer.

by Terry Gross, NPR |  Read more:
Image: via

Are You Local?

When it comes to thinking about being local in Hawaii, most might not immediately think back to a notorious murder case of nearly a century ago.

Yet, the Massie case of 1931-1932, in which a young Native Hawaiian was tragically killed by a group of whites associated with the Navy, is precisely the historic event that scholars at the University of Hawaii say is central to appreciating the concept of local identity.

“The Massie Case has since become a kind of origins story of the development of local identity in Hawaii among working-class people of color,” John P. Rosa writes in his 2014 book, “Local Story: The Massie Kahahawai Case and the Culture of History.”

In his view and that of other scholars, it represented the first time the term “local” was used in Hawaii with any significance.

And while definitions of local identity have evolved, at its core local identity is as much about dividing people as it is about uniting them, and about who has power and influence and who does not.

It’s common to hear people define local as where someone went to high school, taking your slippers off before entering someone’s home, preferring your peanuts boiled or speaking pidgin English.

But, while these habits are not without comfort and significance, they are in a sense only surface-level connections that may prevent the people of Hawaii from recognizing what really brings us together, and what may be in the way of bridging differences to address the many troubles in our society.

What defines local identity, says Jonathan Okamura, an ethnic studies professor at UH Manoa, is a shared appreciation of the land, the peoples and the cultures of the islands.

But now that shared identity could be imperiled by the same powers that held sway in the 1930s: a local and national government inattentive to their concerns, abetted by economic forces controlled by others.

Okamura warned 25 years ago that Hawaii was already becoming too reliant on outside economic forces, especially tourism, and that this dependence was disrupting the value of a shared identity.

“Local identity, while not organized into a viable social movement, will continue in its significance for Hawaii’s people if only because of their further marginalization through the ongoing internationalization of the economy and over-dependence on tourism,” he wrote. (...)

Today, the troubles that are dividing us are made all the more difficult by economic dependency on tourism, the large military presence in the islands, and foreign investment and ownership that Okamura writes about.

Local identity, and any disconnect that comes with it, is also being shaped by increased immigration to Hawaii from the mainland and the broader Asia-Pacific region, even as the local-born population moves elsewhere.

Rosa says that local identity doesn’t necessarily divide us as long as we continue to discuss what it means to be local.

“Sometimes things get a little emotional when we think about identity and ‘who I am,’ but when we think of what place and shared values might be, that is one way to think about it,” he said in an interview. “It is people committed to this place in particular ways.” (...)

What Is Local?

It is easy to think of local identity as being based on race and ethnicity.

Indeed, in the Massie case Grace Fortescue singled out Joseph Kahahawai as the “darkest” of the five men. And the words malihini haole are frequently and sometimes pejoratively used to describe whites who move here from the mainland.

The working-class origins of local identity were informed by the labor needs of the plantations that brought large numbers of migrants from China, Portugal, Japan, Puerto Rico, the Philippines and Korea to Hawaii in the mid-to-late 19th century and into the early 20th. Many stayed, and it is their descendants who have “made up the core of locals” since the 1930s, Rosa writes in the 2018 book “Beyond Ethnicity: New Politics of Race in Hawai’i.”

Meanwhile, a white oligarchy remained in power in the islands for decades following the Massie case.

But demographics gave way to substantial change through several transformative periods since that time: martial law during World War II, the return of Japanese-American veterans to the islands, the so-called 1954 revolution that saw the territorial Legislature wrested away from mostly white Republicans by racially diverse Democrats, the tourism and development boom that began in the 1950s and 1960s, the Hawaiian Renaissance of the 1970s, the Japanese investment of the 1980s and the economic slowdown of the 1990s.

Hawaii is now in the midst of another transformative period, one whose dimensions are still being drawn but one that continues to reflect the dynamics of previous generations. It is also driven by something that did not exist until recently: the online world and social media.

All through it, local identity has continued.

“Over the years, local identity gained greater importance through the social movements to unionize plantation workers by the International Longshoremen’s and Warehousemen’s Union in 1946 and to gain legislative control by the Democratic Party in 1954,” Okamura writes.

Today, those who might identify as local are no longer just members of the working class. There are whites whose roots go back multiple generations. And the color of one’s skin may not serve as the best way to identify who is and is not local.

Changing Demographics

There is also a new category of people besides Native Hawaiian, haole and local — one that Rosa calls “other.”

Their arrivals began in small numbers in the 19th century but have grown significantly, more recently from places such as Latin America (Mexicans and Brazilians), Southeast Asia (Vietnamese), Micronesia (Marshallese and Chuukese) and other parts of the Pacific (Samoans).

Are these groups considered locals?

It depends, in part, on whether they acquire local knowledge, language and customs, whether they have respect for the indigenous population, how often they intermarry, and whether these groups are still primarily connected to their former homes or are nurturing ties to their new ones.

There is no litmus test for being local. But newer arrivals to Hawaii who integrate into local society rather than resist it, who do not simply transplant themselves into a new environment with all the trappings of their old one, may find it easier to get along. (...)

‘Where You ‘Wen Grad?’

The topic of what it means to be local in Hawaii has been written about extensively in local media, including Civil Beat.

One of the most popular treatments came in 1996, when the Honolulu Advertiser published readers’ answers to the question, “You Know You’re A Local If …”

The newspaper was flooded with countless letters, postcards, emails and faxes. It ended up publishing the “ones that made us laugh the hardest” while running more in a new column that would debut later that year.

Here are just a few excerpts from the initial article in the Advertiser that August, broken into categories for food, fashion, philosophy, habits, awareness and the like:
  • “Your only suit is a bathing suit.”
  • “You have at least five Hawaiian bracelets.”
  • “You know ‘The Duke’ is not John Wayne.”
  • “You measure the water for the rice by the knuckle of your index finger.”
  • “You let other cars ahead of you on the freeway and you give shaka to anyone who lets you in.”
  • “Your first question is, ‘Where you ’wen grad?’ And you don’t mean college.”
The entries and ideas kept on coming.

In a May 2002 column, the Advertiser’s Lee Cataluna revisited the topic. She wrote, “Every couple of months, a new one will show up in your e-mail inbox, one of those ‘You know you’re local if …’ lists.”

But Cataluna also observed that, “The only problem with those lists is they’re made for people who have no doubt that they’re local.” They are for “entertainment purposes only, eliciting happy nods of recognition rather than gasps of self-revelation.”

What Cataluna wanted to talk about was people who did not grow up in Hawaii but who had spent “some serious time and effort to understand and adopt the culture.”

She asked, “When do they know they’ve turned the corner to local-ness? How can they tell when they’ve passed major milestones?”

Such a list, she said, would include these characteristics:
  • “You know you’re turning local when you no longer think eating rice for breakfast is strange.”
  • “You know you’re turning local when, even though you hate seafood, you love poke cuz’ that’s different.”
  • “You know you’re turning local when you say the word ‘pau’ so often that you forget what it means in English. Pau is pau.”
Cataluna concluded with what she called “the big one”: “You know you’re local when you get irked by people who act too ‘Mainland.’” (...)

But there is also much to celebrate and even honor in localisms.

“Our cultural expression is manifest through the adoption of others’ customs as our own,” said Davianna Pōmaikaʻi McGregor, an ethnic studies professor and director of the department’s Center for Oral History, in an interview. “It is identified with Hawaiians — mixed plates, that sort of thing — and if you lose that you begin to erode at those cultures that cohere us and connect us.

“And the fact that people are coming together to celebrate life events, bringing food and sharing — on Molokai, people go and clean yards when someone passes away — if we stop doing those things, we are going to lose that connection. So it is important.”

by Chad Blair, Honolulu Civil Beat |  Read more:
Image: Cory Lum
[ed. See also: Can A White Person Ever Be ‘Local’ In Hawaii? (HCB).]

Why Netflix’s Fantastic New Docuseries Cheer Is So Addictive

Fifty-three seconds into the first episode of Netflix’s docuseries Cheer, teenaged Morgan talks about pain. Fifty-four seconds into Cheer, she’s thrown into the air, twisting and flipping like a fish on a line. She comes careening back down into three sets of arms one second later, and she lands with a thunderclap of brutality, muscle smacking against muscle.

“Are you okay, Morgan?” someone asks. My untrained eye can’t pick up what’s wrong — just that something is wrong. And though Morgan walks off the rough landing, her body, gingerly stiff and wobbling unevenly, is what I think a silent scream looks like.

After watching Cheer’s first 55 seconds, I knew I was going to spend the next six hours of my life breathing, consuming, Googling, and social media-stalking everything about the show. I knew then that it was my favorite new show of this very young year.

Cheer focuses on a competitive sport that fuses turgid, erotic tribalism with the body-breaking violence of muscular humans flinging tinier, lighter humans into the air and then catching them — callused hands atop thickly taped wrists, clawing into triceps and ankles. To that mixture, the show adds the us-against-the-world mentality of Charles Xavier’s X-Men and the small-town glamour of Friday Night Lights.

This is competitive junior college cheerleading at the dynastic Navarro College. This is Cheer. And this show is ballistically addictive.

Cheer takes place in the mecca of junior college competitive cheerleading, a place called Navarro College, Navarro for short. It’s in a town 60 miles south of Dallas called Corsicana, and absolutely nothing competes with the Navarro cheerleaders. They are the biggest and only thing in town, having won 14 National Cheerleaders Association national championships and five “grand national” titles, which basically means they got the highest score at the national championships regardless of division and designation.

But while the Navarro Bulldogs dominate on the mat, they’re still underdogs.

Director Greg Whiteley (of Netflix’s college football docuseries Last Chance U) doesn’t shy away from showing the grim reality of many of these cheerleaders’ futures. Not many have options beyond cheering at this national championship-caliber school; many say the team is the only thing keeping them from getting into trouble or making bad decisions. The one kid who has seemingly solid post-cheer prospects, an Instagram “cheerlebrity” with nearly a million followers, is blatantly being used by her parents as a cash cow.

Even then, the escape Navarro cheer offers these young women and men is temporary, as cheerleading is something that ends after college. Professional cheerleading is more like dancing, and those gigs usually aren’t paid fairly. That makes the years spent at Navarro all the more important for the kids there, especially the ones who would otherwise be at risk and out of school.

In the crosshairs of Cheer’s urgency, desperation, and drama are the National Championships in Daytona Beach, Florida. Specifically, the national championship performance: the two minutes and 15 seconds allotted to an intricate and difficult routine in which anything, even moves drilled into muscle memory by thousands of repetitions, could go wrong. And it’s coach Monica Aldama’s job to create a team that won’t break in those 135 seconds, as she’s done 14 times in her life.

by Alex Abad-Santos, Vox | Read more:
Image: Netflix
[ed. Highly recommended (I'm in love with Monica). See also: How Cheer’s Superstar Coach Monica Gets It Done (The Cut).]

Sunday, January 26, 2020

In and Out


[ed. Talk about getting robbed.]

The Myth of the “Millennial-Friendly City”

If there is one thing that is true about Millennials, it is that we are mystifying, and therefore constantly being asked to explain ourselves. This is the premise, I think, behind Angela Lashbrook’s recent viral article for OneZero titled “Millennials Love Zillow Because They’ll Never Own a Home.” The piece rightly points out that often, our wish to escape our terrible lives leads to us fantasizing about buying nice houses in cities where we do not, and, due to the circumstances of our personal lives and/or careers, probably could not, live. In fact, there is an entire genre of internet content — some of it reputable, some of it laughably not so — that seemingly exists to either supplement these fantasies of skipping town or to actively encourage them.

The most recent example of this phenomenon came from the commercial real estate listings start-up Commercial Cafe, which last week proclaimed that it had objectively determined the most Millennial-friendly cities in the country. Judging by things like population trends, affordability, average commute times, and the number of young people in a city whose jobs offer health insurance, Commercial Cafe determined that the metro areas surrounding places like Denver, Austin, Seattle, and Portland were, definitively, friendly to Millennials. Of course, I already knew these cities were Millennial-friendly through another methodology: being friends with people who aren’t boring as hell, since if you’re friends with any kind of young, cool or cool-enough person, you’ll invariably hear one of them talking about how they’re thinking about moving to that city, if they haven’t already.

Still, this is not the only study that claims to have figured out what makes a city Millennial-friendly, a concept I find fascinating because of how arbitrary it seems. Politico believes that Millennials choose which city to live in based on the number of other young people, especially those with college degrees or who have recently relocated there, as well as the average GDP and the possibility of taking an “alternative commute” to work. Business Insider has its own rankings, based on population changes, increases in median wages, and decreases in unemployment rate. The Penny Hoarder developed a formula for Millennial-friendliness which factored in “Millennial happiness” and ended up placing St. Louis, MO and Grand Rapids, MI at one and two, respectively. This is just random enough for me to believe that these places might secretly be tight.

But these lists, including Penny Hoarder’s (whose counterintuitive conclusions I honestly do appreciate), fail to grasp what makes a city a genuinely compelling place to live. Cities like New York, Berlin, and Austin are not “cool” because of their public transportation or how many jobs there are there; instead, they were all direct beneficiaries of a cycle in which artists, punks, and general counterculture types ended up moving there when they were still cheap, treating these underpopulated cities as places where they could live affordably and in close quarters with likeminded people, together producing the sort of radical art and culture that end up being cool enough to get vacuumed into the city’s self-conception, after which a bunch of yuppies move in and fuck it all up. (I don’t have specific numbers to back this up, but my landlord once told me if I ever wanted to buy an investment property, I should buy something in a town where an anarchist bookstore just opened up.)

This isn’t a great cycle, especially since the arrival of the artists and punks is the first sign that the local population — in these neighborhoods, that most often means people of color and immigrants — is only a decade or two away from being priced out. Think of it as Lenin’s theory of the two-stage revolution, except in reverse, and instead of communism, it’s a path for gentrifying a city until it sucks ass.

Since 2016, I’ve lived in the Raleigh-Durham municipal area, which is frequently pegged as one of the most Millennial-friendly locales in the nation. Durham in particular has seen its star rise dramatically, to the point that all the artists and punks barely had a chance to set up shop before everybody else started moving in. Case in point: About a year ago, I was sitting in the backyard of a local bar when I ended up talking to a bro wearing a Patagonia sweater and Sperry boat shoes who told me that he and his roommates from architecture school had all moved down to the area after graduation because a friend had told them that, “the job market was poppin’.” (In case I have not been clear enough: this person was white and very fratty.)

Ever since then, I have noticed an influx of “that type” of person — preppy out-of-towners who flock to an area during a boom period and, through sheer force of numbers, end up changing its character in increasingly generic ways. Previously fun bars where adult people can simply relax while drinking an adult beverage either get overrun or run out of the neighborhood, with “experiential” bars that Millennials allegedly enjoy (read: bars where you can throw axes, play arcade games, or do mini-golf) popping up in their place. Music venues start booking different acts who appeal to this growing market of kinda-generic Millennials, letting local scenes languish in the background.

When people treat the place they live as a giant AirBnB they can check out of after a few years working as a “creative lead” at a mid-sized start-up before moving elsewhere, they become less attuned to local issues, specifically the problems faced by those outside their specific, transplant-y milieu. In other words, there are two types of people: those for whom such lists apply, and those who are negatively affected by those for whom such lists apply.

by Drew Millard, The Outline |  Read more:
Image: uncredited