Thursday, June 6, 2019


[ed. See next post. I don't do video games and was surprised to learn that marquee actors like Mads Mikkelsen and Léa Seydoux are somehow involved in the process. I thought it was all CGI stuff or something. This picture from Death Stranding reminds me of George Saunders' short story "The Semplica-Girl Diaries."]
Image: via

The Video Games Industry is Bigger Than Hollywood

Force Majeure


[ed. Another climate change skeptic.]

Wednesday, June 5, 2019

Book Review: The Secret of Our Success

“Culture is the secret of humanity’s success” sounds like the most vapid possible thesis. The Secret Of Our Success by anthropologist Joseph Henrich manages to be an amazing book anyway.

Henrich wants to debunk (or at least clarify) a popular view where humans succeeded because of our raw intelligence. In this view, we are smart enough to invent neat tools that help us survive and adapt to unfamiliar environments.

Against such theories: we cannot actually do this. Henrich walks the reader through many stories about European explorers marooned in unfamiliar environments. These explorers usually starved to death. They starved to death in the middle of endless plenty. Some of them were in Arctic lands that the Inuit considered among their richest hunting grounds. Others were in jungles, surrounded by edible plants and animals. One particularly unfortunate group was in Alabama, and would have perished entirely if they hadn’t been captured and enslaved by local Indians first.

These explorers had many advantages over our hominid ancestors. For one thing, their exploration parties were made up entirely of strong young men in their prime, with no need to support women, children, or the elderly. They were often selected for their education and intelligence. Many of them were from Victorian Britain, one of the most successful civilizations in history, full of geniuses like Darwin and Galton. Most of them had some past experience with wilderness craft and survival. But despite their big brains, when faced with the task our big brains supposedly evolved for – figuring out how to do hunting and gathering in a wilderness environment – they failed pathetically.

Nor is it surprising that they failed. Hunting and gathering is actually really hard. Here’s Henrich’s description of how the Inuit hunt seals:
You first have to find their breathing holes in the ice. It’s important that the area around the hole be snow-covered—otherwise the seals will hear you and vanish. You then open the hole, smell it to verify it’s still in use (what do seals smell like?), and then assess the shape of the hole using a special curved piece of caribou antler. The hole is then covered with snow, save for a small gap at the top that is capped with a down indicator. If the seal enters the hole, the indicator moves, and you must blindly plunge your harpoon into the hole using all your weight. Your harpoon should be about 1.5 meters (5ft) long, with a detachable tip that is tethered with a heavy braid of sinew line. You can get the antler from the previously noted caribou, which you brought down with your driftwood bow. 
The rear spike of the harpoon is made of extra-hard polar bear bone (yes, you also need to know how to kill polar bears; best to catch them napping in their dens). Once you’ve plunged your harpoon’s head into the seal, you’re then in a wrestling match as you reel him in, onto the ice, where you can finish him off with the aforementioned bear-bone spike. 
Now you have a seal, but you have to cook it. However, there are no trees at this latitude for wood, and driftwood is too sparse and valuable to use routinely for fires. To have a reliable fire, you’ll need to carve a lamp from soapstone (you know what soapstone looks like, right?), render some oil for the lamp from blubber, and make a wick out of a particular species of moss. You will also need water. The pack ice is frozen salt water, so using it for drinking will just make you dehydrate faster. However, old sea ice has lost most of its salt, so it can be melted to make potable water. Of course, you need to be able to locate and identify old sea ice by color and texture. To melt it, make sure you have enough oil for your soapstone lamp.
No surprise that stranded explorers couldn’t figure all this out. It’s more surprising that the Inuit did. And although the Arctic is an unusually hostile place for humans, Henrich makes it clear that hunting-gathering techniques of this level of complexity are standard everywhere. Here’s how the Indians of Tierra del Fuego make arrows:
Among the Fuegians, making an arrow requires a 14-step procedure that involves using seven different tools to work six different materials. Here are some of the steps: 
– The process begins by selecting the wood for the shaft, which preferably comes from chaura, a bushy, evergreen shrub. Though strong and light, this wood is a non-intuitive choice since the gnarled branches require extensive straightening (why not start with straighter branches?). 
– The wood is heated, straightened with the craftsman’s teeth, and eventually finished with a scraper. Then, using a pre-heated and grooved stone, the shaft is pressed into the grooves and rubbed back and forth, pressing it down with a piece of fox skin. The fox skin becomes impregnated with the dust, which prepares it for the polishing stage (Does it have to be fox skin?). 
– Bits of pitch, gathered from the beach, are chewed and mixed with ash (What if you don’t include the ash?). 
– The mixture is then applied to both ends of a heated shaft, which must then be coated with white clay (what about red clay? Do you have to heat it?). This prepares the ends for the fletching and arrowhead. 
– Two feathers are used for the fletching, preferably from upland geese (why not chicken feathers?). 
– Right-handed bowman must use feathers from the left wing of the bird, and vice versa for lefties (Does this really matter?). 
– The feathers are lashed to the shaft using sinews from the back of the guanaco, after they are smoothed and thinned with water and saliva (why not sinews from the fox that I had to kill for the aforementioned skin?). 
Next is the arrowhead, which must be crafted and then attached to the shaft, and of course there is also the bow, quiver and archery skills. But, I’ll leave it there, since I think you get the idea.
How do hunter-gatherers know how to do all this? We usually summarize it as “culture”. How did it form? Not through some smart Inuit or Fuegian person reasoning it out; if that had been it, smart European explorers should have been able to reason it out too.

The obvious answer is “cultural evolution”, but Henrich isn’t much better than anyone else at taking the mystery out of this phrase. Trial and error must have been involved, and less successful groups/people imitating the techniques of more successful ones. But is that really a satisfying explanation? (...)

Remember, Henrich thinks culture accumulates through random mutation. Humans don’t have control over how culture gets generated. They have more control over how much of it gets transmitted to the next generation. If 100% gets transmitted, then as more and more mutations accumulate, the culture becomes better and better. If less than 100% gets transmitted, then at some point new culture gained and old culture lost fall into equilibrium, and your society stabilizes at some higher or lower technological level. This means that transmitting culture to the next generation is maybe the core human skill. The human brain is optimized to make this work as well as possible.
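
To make the transmission arithmetic concrete, here is a minimal toy simulation (my own illustrative sketch, not a model from the book; the `fidelity` and `gain` parameters are invented for illustration). Each generation adds a fixed amount of new culture, and only a fraction of the running total survives transmission; anything below perfect fidelity settles at an equilibrium where new gains exactly offset transmission losses.

```python
# Toy model of cumulative culture (illustrative only, not Henrich's model).
# Each generation adds `gain` units of new culture; only a fraction
# `fidelity` of the accumulated stock is transmitted to the next generation.

def simulate_culture(fidelity, gain=1.0, generations=200):
    culture = 0.0
    for _ in range(generations):
        culture = fidelity * (culture + gain)
    return culture

for fidelity in (1.0, 0.99, 0.9, 0.5):
    print(f"fidelity={fidelity:.2f} -> culture after 200 generations: "
          f"{simulate_culture(fidelity):.1f}")

# With fidelity = 1.0 the stock keeps growing (here, 200 units after 200
# generations). With fidelity < 1.0 it converges toward the equilibrium
# fidelity * gain / (1 - fidelity): about 99 at 0.99, 9 at 0.9, 1 at 0.5.
```

The only point of the toy numbers is that the ceiling on accumulated culture is exquisitely sensitive to how faithfully it is passed on, which is the passage's argument for why transmission is the core human skill.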

Human children are obsessed with learning things. And they don’t learn things randomly. There seem to be “biases in cultural learning”, i.e., slots in an infant’s mind that they know need to be filled with knowledge, and which they preferentially seek out the knowledge necessary to fill.

One slot is for language. Human children naturally listen to speech (as early as in the womb). They naturally prune the phonemes they are able to produce and distinguish to the ones in the local language. And they naturally figure out how to speak and understand what people are saying, even though learning a language is hard even for smart adults.

Another slot is for animals. In a world where megafauna has been relegated to zoos, we still teach children their ABCs with “L is for lion” and “B is for bear”, and children still read picture books about Mr. Frog and Mrs. Snake holding tea parties. Henrich suggests that just as the young brain is hard-coded to want to learn language, so it is hard-coded to want to learn the local animal life (maybe little boys’ vehicle obsession is an outgrowth of this – buses and trains are the closest thing to local megafauna that most of them will encounter!)

by Scott Alexander, Slate Star Codex |  Read more:
Image: Princeton University Press

Image: someecards

How to Save the (Institutional) Humanities

The large majority of our fellow-citizens care as much about literature as they care about aeroplanes or the programme of the Legislature. They do not ignore it; they are not quite indifferent to it. But their interest in it is faint and perfunctory; or, if their interest happens to be violent, it is spasmodic. Ask the two hundred thousand persons whose enthusiasm made the vogue of a popular novel ten years ago what they think of that novel now, and you will gather that they have utterly forgotten it, and that they would no more dream of reading it again than of reading Bishop Stubbs’s Select Charters.
— Arnold Bennett, Literary Taste (1907)
Humanities departments are not doomed to oblivion. They might deserve oblivion, but they are not doomed to it. This post is going to suggest one relatively painless institutional fix that has the potential to dam the floods up before they sweep the entire profession away. (...)

Confusing a subject with the narrow band of institutions currently devoted to credentializing those who study it clouds our thinking. The collapse of humanities departments on university campuses is at best an indirect signal of the health of the humanities overall. At times the focus on the former distracts us from real problems facing the latter. The death of professorships in poetry is far less alarming than American society's rejection of poetry writ large. In as much as the creeping reach of the academy has contributed to poetry's fall from popular acclaim, the collapse of graduate programs in literature and creative writing may be a necessary precondition for its survival.

Academics don't want to hear this, of course. But the truth is that few academics place "truth," "beauty," or "intersectional justice" at the top of their personal hierarchy of values. The motivating drive of the American academic is bourgeois respectability. The academic wants to continue excelling in the same sort of tasks they have excelled in since they were 10 years old, and wants to be respected for it. The person truly committed to the humanist impulse would be ready to pack things up and head into the woods with Tao Qian and Thoreau. But that is not what academia is for. Academia is a quest for status and certitude.

If, pondering on these things, you still feel the edifice is worth preserving, then I am here to tell you that this is possible. The solution I endorse is neat in its elegance, powerful in its simplicity. It won't bring back the halcyon days of the '70s, but it will divert enough students into humanities programs to make them somewhat sustainable. (...)

"Many things not at all" is what the current system teaches. The structure of generals and elective courses struggles to produce any other outcome. Learning something well depends on a cumulative process of practice and recall. Memories not used soon fade; methods not refined soon dull; facts not marshaled are soon forgotten. I remember the three credits I took in Oceanography as a grand experience (not least for field lab at the beach), but years later I find I cannot recall anything I was tested on. And why would I? After that class was over the information I learned was never used in any of the other classes I took.

This sounds like an argument against learning anything but one carefully selected major. That takes things a step too far. There is a benefit to having expertise in more than one domain. I am reminded of Scott Adams's "top 25%" principle, which I first found in Marc Andreessen's guide to career planning:
If you want an average successful life, it doesn’t take much planning. Just stay out of trouble, go to school, and apply for jobs you might like. But if you want something extraordinary, you have two paths: 
Become the best at one specific thing.
Become very good (top 25%) at two or more things. 
The first strategy is difficult to the point of near impossibility. Few people will ever play in the NBA or make a platinum album. I don’t recommend anyone even try. 
The second strategy is fairly easy. Everyone has at least a few areas in which they could be in the top 25% with some effort. In my case, I can draw better than most people, but I’m hardly an artist. And I’m not any funnier than the average standup comedian who never makes it big, but I’m funnier than most people. The magic is that few people can draw well and write jokes. It’s the combination of the two that makes what I do so rare. And when you add in my business background, suddenly I had a topic that few cartoonists could hope to understand without living it. 
....Get a degree in business on top of your engineering degree, law degree, medical degree, science degree, or whatever. Suddenly you’re in charge, or maybe you’re starting your own company using your combined knowledge. 
Capitalism rewards things that are both rare and valuable. You make yourself rare by combining two or more “pretty goods” until no one else has your mix... 
It sounds like generic advice, but you’d be hard pressed to find any successful person who didn’t have about three skills in the top 25%.
To this I would add a more general statement about the purpose of a university education. In my days as a teacher in history and literature, I used to give a lecture to the Chinese students I had helped prepare for American university life. This lecture would touch on many things. This was one of them. I would usually say something close to this:
Students who go to America usually fall into one of two groups. The first group is focused like a laser beam on grinding through coursework that will easily open up a new career to them upon graduation. You will know the type when you see them--they will be carrying around four books on accounting or chemical engineering, and will constantly be fretting over whether their GPA is high enough for them to land an internship with Amazon. In many ways those students will spend their university years doing the exact same thing they are doing now: jumping through one hoop after another to get good grades and secure what they hope will be a good future.  
On the other hand, you have many students who arrive in America and immediately devote themselves to the pleasures they could not chase at home. These students jump at the obscure class in 19th century French poetry, glorying in their newfound freedom to learn about something just because they want to learn about it. They follow their passions. Such passions rarely heed the demands of a future job market. 
Which student should you be? 
My advice: be both.
The trouble with our new expert in Romantic poetry or classical Greek is that even if she is smart enough to do just about any job out there, she has no way to prove that to her potential future employers. Her teachers will have her write term papers and book reviews. Your ability to write an amazing term paper impresses nobody outside of the academy (even if the research skills needed to write one are in demand out there). If you do not have a technical skillset they can understand — or even better, a portfolio of projects you have completed that you can give them — you will struggle greatly when it comes time to find a job. Your success will not be legible to the outside world. You must find ways to make it legible. You must ponder this problem from your very first year of study. It is not wise to spend your entire university experience pretending that graduation day will not come. It will, and you must be prepared for it. 
On the flip side, I cannot endorse the path of Mr. I-Only-Take-Accounting-Classes either. He lives for the Next Step. My friends, there will always be a Next Step. Life will get busier, not easier, after college. You may never again be given such grand opportunity to step back and think about what is most important. 
What is wrong? What is right? What is true, and how will I know it? What is beauty, and where can I find it? What does it mean to be good? What does it mean to live a meaningful life? Your accounting classes will not answer those questions. Now the odds are high that your literature, art, and history classes won't really answer them either—but they will ask you to develop your own answers to them. That is truly valuable. 
I will say it again: you may never have another period in your life where you have the time, resources, and a supporting community designed to help you do this. If you are not having experiences in university that force you to spend time wrestling in contemplation, then you have wasted a rare gift. 
So that is my advice. Do both! 
I cannot tell you exactly how to do both — that will be for each of you to decide. But recognize which sort of student you are, and find ways to counteract your natural tendency. If you have no desire greater than diving into a pile of history books, perhaps take three or four classes of GIS on the side, and create skins for Google Earth that draw on your data. If you are driven to find a career in finance, go do so — but then arrange to spend a semester abroad in Spain, or Japan, or somewhere that lets you experience a new culture and lifestyle. 
Prepare for your career. Expand your mind. Find a way to do both.
Far fewer students have taken this advice than I hoped. I am partially fond of my alma mater's new system because it forces all of its students to do exactly what I advocate they should. But the logic of the system is compelling on its own grounds. By requiring a science-based minor, all students are required to master the basics of statistics and the scientific method. They do this not through a series of university-required, general-purpose, mind-numbing courses, but through a minor they choose themselves. All students are required to master a professional skill that will give them options on the post-college job market. They will learn how to make their work and talents legible to the world outside of academia. And all students are required to round this education out with an in-depth study of art, history, or culture.

In an organizational sense, the system's greatest boon goes to the humanities departments. The prime reason students do not take humanities courses is that college is too expensive to afford a degree which does not guarantee a career. That is it. As the number of people graduating from college increases, merely having a degree is no longer a signal of extraordinary competence. Any student that goes hundreds of thousands of dollars into debt for the sake of a degree which will not provide them with the skill-set they need to pay it back is extremely foolish, and most of them know it.

by T. Greer, The Scholar's Stage |  Read more:
Image: via
[ed. Congratulations to this year's graduates.]

It’s Time To Take Octopus Civilization Seriously

Intelligence is a hot topic of discussion these days. Human intelligence. Plant intelligence. Artificial intelligence. All kinds of intelligence. But while the natures of human and plant intelligence are subjects mired in heated debate, derision, and controversy, the subject of artificial intelligence inspires an altogether different kind of response: fear. In particular, fear for the continued existence of any human civilization whatsoever. From Elon Musk to Stephen Hawking, the geniuses of the Zeitgeist agree. AI will take our jobs and then, if we’re not careful, everything else too, down to every last molecule in the universe. A major Democratic presidential candidate, Andrew Yang, has turned managing the rise of AI into one of the core principles of his political platform. It is not a laughing matter.

But artificial general intelligence is not the type of intelligence that humanity should fear most. Far from the blinking server rooms of Silicon Valley or the posh London offices of DeepMind, another type of intelligence lurks silently out of human sight, biding its time in the Lovecraftian deep. Watching. Waiting. Organizing. Unlike artificial intelligence, this intelligence is not hypothetical, but very real. Forget about AGI. It’s time to worry about OGI—octopus general intelligence.

In late 2017, it was reported that an underwater site called “Octlantis” had been discovered by researchers off the coast of Australia. Normally considered to be exceptionally solitary, fifteen octopuses were observed living together around a rocky outcropping on the otherwise flat ocean floor. Fashioning homes—dens—for themselves out of shells, the octopuses were observed mating, fighting, and communicating with each other. Most importantly, this was not the first time that this had happened. Another similar site called “Octopolis” had been previously discovered in the vicinity in 2009.

One of the researchers, Stephanie Chancellor, described the octopuses in “Octlantis” as “true environmental engineers.” The octopuses were observed conducting both mate defense and “evictions” of octopuses from dens, defending their property rights from infringement by other octopuses. The other “Octopolis” site had been continuously inhabited for at least seven years. Given the short lifespans of octopuses, lasting only a few years on the high end, it is clear that “Octopolis” has been inhabited by several generations of octopuses. We are presented with the possibility of not only one multi-generational octopus settlement chosen for defense from predators and engineered for octopus living, but two. And those are just the ones we’ve discovered. The oceans cover over 70% of Earth’s surface.

None of the three experts I spoke with for this article would rule out the possibility of further octopus settlements.

The octopus is a well-known creature, but poorly understood. The primal fear inspired by the octopus frequently surfaces in horror movies, pirate legends, political cartoons depicting nefarious and tentacled political enemies, and, understandably, in Japanese erotic art. For all that, the octopus is, to most people, just another type of seafood you can order at the sushi bar. But the octopus is more than just sushi. It’s more than the sum of its eight arms. A lot more, in fact—it may be the most alien creature larger than a speck of dust to inhabit the known ecosystems of the planet Earth. Moreover, it’s not just strange. It’s positively talented.

Octopuses can fully regenerate limbs. They can change the color and texture of their skin at will, whether to camouflage themselves, make a threat, or for some other unknown purpose. They can even “see” with their skin, thanks to the presence of the light-sensitive protein rhodopsin, also found in human retinas. They can shoot gobs of thick black ink with a water jet, creating impenetrable smokescreens for deceit and escape. Octopuses can use their boneless, elastic bodies to shapeshift, taking on the forms of other animals or even rocks. Those same bodies allow even the larger species of octopuses to squeeze through holes as small as one inch in diameter. The octopus’ arms are covered in hundreds of powerful suckers that are known to leave visible “octo-hickeys” on humans. The larger ones can hold at least 35 lbs. each. The suckers can simultaneously taste and smell. All octopus species are venomous.

Despite all of these incredible abilities, the octopus’ most terrifying feature remains its intelligence. The octopus has the highest brain-to-body-mass ratio of any invertebrate, a ratio that is also higher than that of many vertebrates. Two thirds of its neurons, however, are located in its many autonomous arms, which can react to stimuli and even identify and grab food after being severed from the rest of the octopus, whether dead or alive. In other words, the intelligence of an octopus is not centralized. It is decentralized, like a blockchain. Like blockchains, this makes them harder to kill. It has been reported that octopuses are capable of observational learning, short- and long-term memory, tool usage, and much more. One might wonder: if octopuses have already mastered blockchain technology, what else are they hiding?

We can see octopuses frequently putting this intelligence to good use, and not only in their burgeoning aquatic settlements. Some octopuses are known to use coconut shells for shelter, even dismantling and transporting the shell only to reassemble it later. In laboratory settings, octopuses are able to solve complex puzzles and open different types of latches in order to obtain food. They don’t stop there, though. Captive octopuses have been known to escape their tanks, slither across the floor, climb into another tank, feast on the helpless fish and crabs within, and then return to their original tank. Some do it only at night, knowingly keeping their human overseers in the dark. Octopuses do not seem to have qualms about deceiving humans. They are known to steal bait from lobster traps and climb aboard fishing boats to get closer to fishermen’s catches.

One octopus in New Zealand even managed to escape an aquarium and make it back to the sea. When night fell and nobody was watching, “Inky”—his human name, as we do not know how octopuses refer to themselves in private—climbed out of his tank, across the ground, and into a drainpipe leading directly to the ocean.

Given the advanced intelligence and manifold abilities of octopuses, it may not be a surprise, in hindsight, that they are developing settlements off the coast of Australia. By establishing a beachhead in the Pacific Ocean, a nascent octopus civilization would be well-placed to challenge the primary geopolitical powers of the 21st century, namely, the United States and China. Australia itself is sparsely inhabited and rich in natural resources vital for any advanced civilization. The country’s largely coastal population would be poorly prepared to deal with an invasion from the sea.

by Marko Jukic, Palladium |  Read more:
Image: Qijin Xu/Octopus

How China Is Planning to Rank 1.3 Billion People

China has a radical plan to influence the behavior of its 1.3 billion people: It wants to grade each of them on aspects of their lives to reflect how good (or bad) a citizen they are. Versions of the so-called social credit system are being tested in a dozen cities with the aim of eventually creating a network that encompasses the whole country. Critics say it’s a heavy-handed, intrusive and sinister way for a one-party state to control the population. Supporters, including many Chinese (at least in one survey), say it’ll make for a more considerate, civilized and law-abiding society.

1. Is this for real?

Yes. In 2014, China released sweeping plans to establish a national social credit system by 2020. Local trials covering about 6% of the population are already rewarding good behavior and punishing bad, with Beijing due to begin its program by 2021. There are also other ways the state keeps tabs on citizens that may become part of an integrated system. Since 2015, for instance, a network that collates local- and central-government information has been used to blacklist millions of people to prevent them from booking flights and high-speed train trips.

2. Why is China doing this?

“Keeping trust is glorious and breaking trust is disgraceful.” That’s the guiding ideology of the plan as outlined by the government. China has suffered from rampant corruption, financial scams and corporate scandals in its breakneck industrialization of the past several decades. The social credit system is billed as an attempt to raise standards of behavior and restore trust as well as a means to uphold basic laws that are regularly flouted.

3. How are people judged?

That varies from place to place. In the eastern city of Hangzhou, “pro-social” activity includes donating blood and volunteer work, while violating traffic laws lowers an individual’s credit score. In Zhoushan, an island near Shanghai, no-nos include smoking or driving while using a mobile phone, vandalism, walking a dog without a leash and playing loud music in public. Too much time playing video games and circulating fake news can also count against individuals. According to U.S. magazine Foreign Policy, residents of the northeastern city of Rongcheng adapted the system to include penalties for online defamation and spreading religion illegally.

4. What happens if someone’s social credit falls?

“Those who violate the law and lose the trust will pay a heavy price,” the government warned in one document. People may be denied basic services or prevented from borrowing money. “Trust-breakers” might be barred from working in finance, according to a 2016 directive. A case elsewhere highlighted by the advocacy group Human Rights Watch showed that citizens aren’t always aware that they’ve been blacklisted, and that it can be difficult to rectify mistakes. The National Development and Reform Commission — which is spearheading the social credit plan — said in its 2018 report it had added 14.2 million incidents to a list of “dishonest” activities. People can appeal, however. The commission said 2 million people had been removed from its blacklist, while Zhejiang, south of Shanghai, brought in rules to give citizens a year to rectify a bad score with good behavior. And people who live in Yiwu have 15 days to appeal social-credit information that’s released by the authorities.

5. What part is technology playing?

Advances in computer processing have simplified the task of collating vast databases, such as the network used to blacklist travelers. Regional officials are applying facial-recognition technology to identify jaywalkers and cyclists who run red lights.

by Karen Leigh and Dandan Li, Bloomberg | Read more:
Image: Pedestrian-detection technology at China's SenseTime Group Ltd. Gilles Sabrie/Bloomberg

Keanu Reeves Is Too Good for This World

Last week, I read a report in the Times about the current conditions on Mt. Everest, where climbers have taken to shoving one another out of the way in order to take selfies at the peak, creating a disastrous human pileup. It struck me as a cogent metaphor for how we live today: constantly teetering on the precipice to grasp at the latest popular thing. The story, like many stories these days, provoked anxiety, dread, and a kind of awe at the foolishness of fellow human beings. Luckily, the Internet has recently provided us with an unlikely antidote to everything wrong with the news cycle: the actor Keanu Reeves.

Take, for instance, a moment, a few weeks ago, when Reeves appeared on “The Late Show” to promote “John Wick: Chapter 3—Parabellum,” the latest installment in his action-movie franchise. Near the end of the interview, Stephen Colbert asked the actor what he thought happens after we die. Reeves was wearing a dark suit and tie, in the vein of a sensitive mafioso who is considering leaving it all behind to enter the priesthood. He paused for a moment, then answered, with some care, “I know that the ones who love us will miss us.” It was a response so wise, so genuinely thoughtful, that it seemed like a rebuke to the usual canned blather of late-night television. The clip was retweeted more than a hundred thousand times, but, when I watched it, I felt like I was standing alone in a rock garden, having a koan whispered into my ear.

Reeves, who is fifty-four, has had a thirty-five-year career in Hollywood. He was a moody teen stoner in “River’s Edge” and a sunny teen stoner in the “Bill & Ted” franchise; he was the tortured sci-fi action hero in the “Matrix” movies and the can-do hunky action hero in “Speed”; he was the slumming rent boy in “My Own Private Idaho,” the scheming Don John in “Much Ado About Nothing,” and the eligible middle-aged rom-com lead in “Destination Wedding.” Early in his career, his acting was often mocked for exhibiting a perceived skater-dude fuzziness; still, today, on YouTube, you can find several gleeful compilations of Reeves “acting badly.” (“I am an F.B.I. agent,” he shouts, not so convincingly, to Patrick Swayze in “Point Break.”) But over the years the peculiarities of Reeves’s acting style have come to be seen more generously. Though he possesses a classic leading-man beauty, he is no run-of-the-mill Hollywood stud; he is too aloof, too cipher-like, too mysterious. There is something a bit “Man Who Fell to Earth” about him, an otherworldliness that comes across in all of his performances, which tend to have a slightly uncanny, declamatory quality. No matter what role he plays, he is always himself. He is also clearly aware of the impression he makes. In the new Netflix comedy “Always Be My Maybe,” starring the standup comedian Ali Wong, he makes a cameo as a darkly handsome, black-clad, self-serious Keanu, speaking in huskily theatrical, quasi-spiritual sound bites that either baffle or arouse those around him. “I’ve missed your spirit,” he gasps at Wong, while kissing her, open-mouthed.

Though we’ve spent more than three decades with Reeves, we still know little about him. We know that he was born in Beirut, and that he is of English and Chinese-Hawaiian ancestry. (Ali Wong has said that she cast him in “Always Be My Maybe” in part because he’s Asian-American, even if many people forget it.) His father, who did a spell in jail for drug dealing, left home when Keanu was a young boy. His childhood was itinerant, as his mother remarried several times and moved the family from Sydney to New York and, finally, Toronto. We know that he used to play hockey, and that he is a motorcycle buff, and that he has experienced unthinkable tragedy: in the late nineties, his girlfriend, Jennifer Syme, gave birth to their child, who was stillborn; two years later, Syme died in a car accident. Otherwise, Reeves’s life is a closed book. Who is he friends with? What is his relationship with his family like? As Alex Pappademas wrote, for a cover story about the actor in GQ, in May, Reeves has somehow managed to “pull off the nearly impossible feat of remaining an enigmatic cult figure despite having been an A-list actor for decades.”

by Naomi Fry, New Yorker |  Read more:
Image: Karwai Tang/Getty
[ed. Interesting how our culture fixates on certain celebrity icons, seemingly at random: Frida Kahlo, Debbie Harry, Bob Ross, David Byrne, Vermeer's Girl With A Pearl Earring, Serge Gainsbourg and Jane Birkin, for example. Suddenly they're everywhere.]

Tuesday, June 4, 2019

The Coming G.O.P. Apocalypse

For much of the 20th century, young and old people voted pretty similarly. The defining gaps in our recent politics have been the gender gap (women preferring Democrats) and the education gap. But now the generation gap is back, with a vengeance.

This is most immediately evident in the way Democrats are sorting themselves in their early primary preferences. A Democratic voter’s race, sex or education level doesn’t predict which candidate he or she is leaning toward, but age does.

In one early New Hampshire poll, Joe Biden won 39 percent of the vote of those over 55, but just 22 percent of those under 35, trailing Bernie Sanders. Similarly, in an early Iowa poll, Biden won 41 percent of the oldster vote, but just 17 percent of the young adult vote, placing third, behind Sanders and Elizabeth Warren.

As Ronald Brownstein pointed out in The Atlantic, older Democrats prefer a more moderate candidate who they think can win. Younger Democrats prefer a more progressive candidate who they think can bring systemic change.

The generation gap is even more powerful when it comes to Republicans. To put it bluntly, young adults hate them.

In 2018, voters under 30 supported Democratic House candidates over Republican ones by an astounding 67 percent to 32 percent. A 2018 Pew survey found that 59 percent of millennial voters identify as Democrats or lean Democratic, while only 32 percent identify as Republicans or lean Republican.

The difference is ideological. According to Pew, 57 percent of millennials call themselves consistently liberal or mostly liberal. Only 12 percent call themselves consistently conservative or mostly conservative. This is the most important statistic in American politics right now.

Recent surveys of Generation Z voters (those born after 1996) find that, if anything, they are even more liberal than millennials.

In 2002, John B. Judis and Ruy Teixeira wrote a book called “The Emerging Democratic Majority,” which predicted electoral doom for the G.O.P. based on demographic data. That prediction turned out to be wrong, or at least wildly premature.

The authors did not foresee how older white voters would swing over to the Republican side and the way many assimilated Hispanics would vote like non-Hispanic whites. The failure of that book’s predictions has scared people off from making demographic forecasts.

But it’s hard to look at the generational data and not see long-term disaster for Republicans. Some people think generations get more conservative as they age, but that is not borne out by the evidence. Moreover, today’s generation gap is not based just on temporary intellectual postures. It is based on concrete, lived experience that is never going to go away.

Unlike the Silent Generation and the boomers, millennials and Gen Z voters live with difference every single day. Only 16 percent of the Silent Generation is minority, but 44 percent of the millennial generation is. If you are a millennial in California, Texas, Florida, Arizona or New Jersey, ethnic minorities make up more than half of your age cohort. In just over two decades, America will be a majority-minority country.

Young voters approve of these trends. Seventy-nine percent of millennials think immigration is good for America. Sixty-one percent think racial diversity is good for America.

They have constructed an ethos that is mostly about dealing with difference. They are much more sympathetic to those who identify as transgender. They are much more likely than other groups to say that racial discrimination is the main barrier to black progress. They are much less likely to say the U.S. is the best country in the world.

These days the Republican Party looks like a direct reaction against this ethos — against immigration, against diversity, against pluralism. Moreover, conservative thought seems to be getting less relevant to the America that is coming into being. (...)

The most burning question for conservatives should be: What do we have to say to young adults and about the diverse world they are living in? Instead, conservative intellectuals seem hellbent on taking their 12 percent share among the young and turning it to 3.

by David Brooks, NY Times |  Read more:
Image: Eric Thayer for The New York Times
[ed. We can only hope. Here's a question for conservatives: wouldn't it be great to see a Trump/Palin ticket in the next election? If not, why? See also: George Will’s Political Philosophy (NY Times).]

Metamotivation


Maslow's hierarchy of needs is often portrayed in the shape of a pyramid, with the largest and most fundamental levels of needs at the bottom, and the need for self-actualization at the top. While the pyramid has become the de facto way to represent the hierarchy, Maslow himself never used a pyramid to describe these levels in any of his writings on the subject.

The most fundamental and basic four layers of the pyramid contain what Maslow called "deficiency needs" or "d-needs": esteem, friendship and love, security, and physical needs. With the exception of the most fundamental (physiological) needs, if these "deficiency needs" are not met, the body gives no physical indication but the individual feels anxious and tense. Maslow's theory suggests that the most basic level of needs must be met before the individual will strongly desire (or focus motivation upon) the secondary or higher level needs. Maslow also coined the term Metamotivation to describe the motivation of people who go beyond the scope of the basic needs and strive for constant betterment. Metamotivated people are driven by B-needs (Being Needs), instead of deficiency needs (D-Needs).

via: Wikipedia
[ed. Repost]
[ed. I was familiar with Maslow's general hierarchy of needs but not the term Metamotivation i.e., striving to realize one's fullest potential. I wonder how a person's outlook on life and their personality are affected by an inability to achieve that need (if it is felt)? Furthermore, since basic needs are fluid (like health, friendship, economic security, intimacy, etc.) is metamotivation a temporary luxury (and ultimately an unsustainable goal)?]

RIP, iTunes

At long last, the demise of iTunes has come. On Monday, Apple announced that it will phase out the software once and for all. “If there’s one thing we hear over and over, it’s ‘Can iTunes do even more?’” quipped Apple’s senior vice president of software engineering, Craig Federighi, onstage during the company’s annual developers conference. He then ran a demo of the trio of apps that will soon replace the platform: Music, TV, and Podcasts.

There are plenty of strategic reasons to put that poor 18-year-old software down to rest. Business-wise, this is all part of Apple’s strategy to transform into a bona fide entertainment studio. Separating different content into specialized platforms allows the company to promote its ever-growing slate of original programming, while expanding the possibilities for monthly subscription services. But even from the user perspective, iTunes has long felt old and clunky. What began as a music player meant to compete with Winamp (rest its soul) has over the years become a bloated catch-all for every media file on a person’s computer. It lurches between music, movies, podcasts, and audiobooks, and blends confusing promotional subcategories like “For You” and “Browse” with a person’s permanent libraries. In its past few updates, it has grown only more confusing to navigate, becoming a constant flashpoint on Apple bulletin boards. I interact with iTunes in its current form in the same way that I interact with that one package of chicken that has become a permanent, icy fixture in the back of my freezer: accidentally and as infrequently as possible.

iTunes is the 8-track of the millennial generation: a mostly inefficient technology that bridged shifting eras of music distribution. The media player launched in January 2001, a little less than a year before the iPod, and laid the groundwork for a DIY listening experience that was no longer dictated by albums, or CDs, or buying music altogether. In the late ’90s, most teens’ exposure to artists was limited to the radio, Total Request Live, and what they could afford at their local Sam Goody. iTunes’s easy-to-use CD-burning capabilities, paired with rising peer-to-peer file-sharing networks like Napster (and Limewire, and Kazaa), broke open the possibilities of what a young internet-savvy music fan could listen to—basically anything, anytime—and paved the way for the on-demand streaming services we have today.

Though iTunes quite famously became the first major platform to sell songs for a dollar, most millennials of a certain age used the program as a laundering system for illegally downloaded music. The ritual went like this: (1) find the right moment to capitalize on the family room computer—preferably a late evening when the little download progress bars from your P2P network could stretch uninterrupted into the night—and jumpstart as many illegal downloads as you possibly could without crashing your computer, (2) go to bed, (3) harvest your mp3s in the morning, discarding the specimens that languished in an unsteady DSL connection, (4) transfer the completed files to your library, and (5) transfer them onto a CD or iPod, for mobile consumption. (Note: steps 1 through 3 could be skipped if you happened to be sourcing your music from a CD you rented from your local library.)

The fourth step of that process was the most time-intensive form of data entry that I’ve endured in my entire life. It was important—no, necessary—that the names and titles and album art of every song I stole from the internet looked as if they had arrived there legitimately. Standard capitalization. No symbols. No electronic signature from the downloader. I vaguely recall a get-off-my-lawn narrative at the time, suggesting that all these free-flowing mp3s were bound to leave us directionless and musically inept—listening to stray songs with no appreciation of the album or discography from which they came. But if anything, all those hours of editing were like a musical boot camp, an accelerated way to catalog discographies both in my digital library and my mind. Of course, there was always the chance that when you finally got around to listening to the hours and hours of music you’d stolen, a song would rudely be interrupted by grating static, or Bill Clinton saying “I did not have sexual relations with that woman.” Napster was free, yes, but full of trolls.

The end goal, I suppose, was ownership. As a teen, you strive to assert an identity separate from your family at every opportunity possible, all while living under their close supervision. The ability to amass as much music as your hard drive could shoulder meant a listening experience tailored to your own personal tastes. Not the radio. Not MTV. Not what was in your parents’ CD collection. Your stuff. And when I finally got my hands on an iPod, it was a little like carrying around a comfort blanket. My iTunes library was my identity, and I spent hours cherishing and growing it.

by Alyssa Bereznak, The Ringer | Read more:
Image: Getty Images/Ringer illustration

via:

Too Many People Want to Travel

Late in May, the Louvre closed. The museum’s workers walked out, arguing that overcrowding at the home of the Mona Lisa and the Venus de Milo had made the place dangerous and unmanageable. “The Louvre suffocates,” the workers’ union said in a statement written in French, citing the “total inadequacy” of the museum’s facilities to manage the high volume of visitors.

Half a world away, a conga line of mountaineers waited to approach the summit of Mount Everest, queued up on a knife’s-edge ridge, looking as if they had chosen to hit the DMV at lunchtime. A photograph of the pileup went viral; nearly a dozen climbers died, with guides and survivors arguing that overcrowding at the world’s highest peak was a primary cause, if not the only one.

Such incidents are not isolated. Crowds of Instagrammers caused a public-safety debacle during a California poppy super bloom. An “extreme environmental crisis” fomented a “summer of action” against visitors to the Spanish island of Mallorca. Barcelona and Venice and Reykjavik and Dubrovnik, inundated. Beaches in Thailand and Mexico and the Philippines, destroyed. Natural wonders from the Sierra Nevadas to the Andes, jeopardized. Religious sites from Cambodia to India to Rome, damaged.

This phenomenon is known as overtourism, and like breakfast margaritas on an all-inclusive cruise, it is suddenly everywhere. A confluence of macroeconomic factors and changing business trends has led to more tourists crowding popular destinations. That has led to environmental degradation, dangerous conditions, and the immiseration and pricing-out of locals in many places. And it has cities around the world asking one question: Is there anything to be done about being too popular?

Locals have of course complained about tourists since time immemorial, and the masses have disrespected, thronged, and vandalized wonders natural and fabricated for as long as they have been visiting them. But tourism as we know it was a much more limited affair until recent decades. Through the early 19th century, travel for personal fulfillment was the provenance of “wealthy nobles and educated professionals” only, people for whom it was a “demonstrative expression of their social class, which communicated power, status, money and leisure,” as one history of tourism notes. It was only in the 1840s that commercialized mass tourism developed, growing as the middle class grew.

If tourism is a capitalist phenomenon, overtourism is its demented late-capitalist cousin: selfie-stick deaths, all-you-can-eat ships docking at historic ports, stag nights that end in property crimes, the live-streaming of the ruination of fragile natural habitats, et cetera. There are just too many people thronging popular destinations—30 million visitors a year to Barcelona, population 1.6 million; 20 million visitors to Venice, population 50,000. La Rambla and the Piazza San Marco fit only so many people, and the summertime now seems like a test to find out just how many that is.

The root cause of this surge in tourism is macroeconomic. The middle class is global now, and tens of millions of people have acquired the means to travel over the past few decades. China is responsible for much of this growth, with the number of overseas trips made by its citizens rising from 10.5 million in 2000 to an estimated 156 million last year. But it is not solely responsible. International-tourist arrivals around the world have gone from a little less than 70 million as of 1960 to 1.4 billion today: Mass tourism, again, is a very new thing and a very big thing.

by Annie Lowrey, The Atlantic |  Read more:
Image: Charles Platiau/Reuters

Monday, June 3, 2019

King Weir

The first thing I want to talk to Bob Weir about is the dead.

Not the Dead, but the departed. The deceased. The ex-Dead, of which there are now as many as there once were Grateful Dead members—an entire shadow band, albeit made up entirely of keyboardists, plus one notable guitar. Pigpen. Keith. Brent. Vince. And, of course, Jerry. This is not to mention all the other compatriots and family members lost along the way. Death surrounded this band, and death suffused its music—a mournful leitmotif that's inescapable once you release whatever preconceptions you might have about peace, love, and dancing bears.

“You reach a certain age and you're going to have lost some friends,” Weir says. Perhaps so, but for him that age was around 20.

We're sitting on his tour bus, a shiny black monolithic slab, which is parked on the street in New Orleans. Outside is the Fillmore theater, a venue named for the San Francisco concert hall synonymous with the psychedelic explosion of the Grateful Dead's earliest days, now a chain owned by Live Nation, with this branch located in Harrah's casino. In a few hours, he'd be going onstage with the band he's calling Bob Weir & Wolf Bros, a trio that includes the legendary producer Don Was on stand-up bass and Jay Lane—a veteran of several post-Jerry Garcia Grateful Dead variations, as well as Primus—on drums. The band played in Austin the day before and then drove through the night, Weir sleeping in a comfy-looking bunk in back as Texas and western Louisiana rolled by a few feet beneath.

Weir sits in one of the bus's leather armchairs, wearing shorts, a T-shirt, and an Apple Watch with two silver skull-and-crossbones studs on the black band. Cross-legged and barefoot, he looks top-of-the-mountain wise, largely on account of the profusion of whiskers that has taken over his face, from neck to cheekbone, like rosebushes gone wild on the side of an abandoned house. Add in bushy eyebrows and a luminous crown of white hair and other metaphors suggest themselves: Lorax, gold-mad Western sidekick, holy guru, homemade Albert Einstein costume… Weir prefers “Civil War cavalry colonel” to describe what he saw in the mirror one morning after not shaving for a few weeks on tour. Sometime later, he saw a photo of an ancestor. “He had a full-on Yosemite Sam mustache. I said to myself, ‘That's a look that's fallen from favor for the past 150 years or so. I'm just the guy to bring it back.’ ” It is possible that Weir's tongue is in his cheek, but it is hard to tell. On account of all the beard. (...)

“This motion”—he mimics strumming a guitar—“this one limited motion, repeated a million times, has turned my right rhomboid muscle into a strip of gristle that gets extremely painful after a couple of hours, to the point where it's like trying to play with an ice pick in your back. I went to doctors. I went to physical therapists. But the only thing that really worked was opiates, and so I got good and strung out on them. I would have to come home and go through withdrawal after every tour.”

He's always used alcohol, too—wine, in particular—to combat stage fright, a condition he says he shared with Garcia. “Every night, before I go on, it's I can't believe I put myself in this position again. Thousands of times.” He is so self-conscious about his playing before warming up that he needs to do so in solitude.

Weir's struggles became publicly apparent in 2013, when he collapsed onstage with Furthur. The next year, RatDog called off an entire tour. Today he is, to all appearances, healthy. He has replaced a drink before getting onstage with a shot of ginseng and, for the most part, pharmaceutical painkillers with herbal supplements. But he stops short of saying he's sober.

“I've tried that, and I'm not as happy as when I drink,” he says. He is adamant that he is able to have a glass of wine these days and stop there. Likewise, the occasional painkiller when the exercise and herbal remedies prove inadequate.

“There was a time, way back, when getting trashed and completely nuts was, I felt, my best approach to the blank page—which is a horrifying prospect in and of itself,” he says. “But I've been there and done that, and I don't think there is anything more to be found there for me. What I want now is to be in the same frame of mind when I wake up in the morning as when I went to bed. That's pretty much how I operate.”

This flies in the face of conventional thinking about how addiction works, but Weir says he's not cut out for traditional 12-step programs. “I'm not sure I buy the basic tenet, which is that you're powerless,” he says. “I think that we humans are enormously powerful, and I tend to think there's nothing that you can't do. It's a matter of self-mastery, and if self-mastery amounts to total abstinence, I think that's incomplete. I think you're selling yourself short. But I get that that's real dangerous for some people. So I don't talk about it much.” (...)

In New Orleans, Weir had told me a story to illustrate how, by the end, addiction and the pressures of fame had conspired to shrink Garcia's life.

“One time we came here after a long absence, and our publicist, who was also a good friend, asked Jerry how was it getting back to New Orleans, because it's such a great music town,” he said. “Jerry's answer was, ‘Well, one hotel's the same as another.’ That was pretty much the life he was given.”

We sat there for a few moments, listening to the bus's air-conditioning hum, sunlight peeking in around the edges of the blackout curtains.

“Yeah, well,” Weir said dryly, “I don't get out much, either.” (...)

There is a sequence in the 2014 documentary The Other One: The Long Strange Trip of Bob Weir in which various musical admirers struggle to describe Weir's style of rhythm guitar, falling back on such terms as “unique,” “unusual,” and “strange.” It would be easy to conclude that these were euphemisms, but it's apparently not so. When I ask Don Was about it, he is silent for several seconds.

“I've spent hundreds of hours focused on him in the past few months, and he's still absolutely enigmatic to me,” Was says. “He's part Segovia and part John Lee Hooker, and he does both simultaneously—this exotic blend of the raw and the cerebral. He obliterates the lines between rhythm guitar and lead guitar. He doesn't just bash out chords—his rhythm parts are really melodic, so they also serve as lead parts. Sometimes I think there's a second guitarist sitting in, because he can also play separate lead lines and rhythm parts at the same time.”

This is reminiscent of Weir's description of his lifelong dyslexia, how words on the page, as he tells it, refuse to hold their shape and meaning, threatening always to go off in some new direction. “I let my brain run, I guess. I let it go and have more freedom than some folks do,” he says. “So if I'm reading a word, there are innumerable considerations to take into account about what I just read.”

According to Mickey Hart, “He became totally unique because he was in a band that was totally unique! Remember that Bobby had to play under the shadow of Jerry. It was a benevolent shadow, but that was challenging. Once Jerry got cranked up, he could really take a band away. So Bob had to learn a new way of playing. He had to re-invent himself as this partner, this other side to Jerry. He started playing strange.”

“I derived a lot of what I do on guitar from listening to piano players,” Weir says, citing McCoy Tyner's work with John Coltrane in particular. “He would constantly nudge and coax amazing stuff out of Coltrane.”

He says Garcia is still present when he plays. “I can hear him: ‘Don't go there. Don't go there,’ or ‘Go here. Go here.’ And either I listen or I don't, depending on how I'm feeling. But it's always ‘How's old Jerry going to feel about this riff?’ Sometimes I know he'd hate it. But he'd adjust.”

by Brett Martin, GQ |  Read more:
Image: Adrian Boot
[ed. A greatly underrated guitarist.]

Congressman Duncan Hunter Says He Probably Killed "Hundreds of Civilians" in Iraq

California representative Duncan Hunter has been very vocal in his defense of Eddie Gallagher, a Navy SEAL accused of stabbing an injured captive to death in Iraq. In a recent interview on a Barstool Sports podcast, Hunter said, "I frankly don’t care if he was killed. I just don’t care. And that’s my personal point of view. And as a congressman, that’s my prerogative to help a guy out like that. If—even if everything that the prosecutors say is true in this case, then, you know, Eddie Gallagher should still be given a break, I think."

Hunter's making an unusual and pretty extreme argument for why Gallagher shouldn't be held responsible for the war crimes charges he's facing—by claiming it's a non-issue because he's done the same. "I was an artillery officer," he said. "And we fired hundreds of rounds into Fallujah, killed probably hundreds of civilians, if not scores, if not hundreds of civilians. Probably killed women and children, if there were any left in the city when we invaded. So do I get judged, too?" Probably yes.

According to CNN, Gallagher also reportedly "shot at noncombatants, posed for a photo and performed his re-enlistment ceremony next to a corpse." The biggest difference between what the two men did, Hunter seems to be arguing, is a technical one: Hunter's artillery unit was following orders while Gallagher wasn't.

Hunter isn't the only high-profile conservative veteran arguing that what Gallagher did isn't remarkable enough to hold him accountable for it. Fox News contributor and veteran Pete Hegseth has been defending Gallagher in private phone calls to Donald Trump, according to the Daily Beast. (Trump reportedly so trusts Hegseth that he nearly appointed him head of Veterans Affairs.) In a Fox & Friends Weekend segment with Hunter, Hegseth said, "If he committed premeditated murder, then Duncan did as well, then I did as well. What do you think you do in war?"

Gallagher is one of several accused U.S. war criminals Donald Trump is considering for presidential pardons, along with a Blackwater mercenary who reportedly opened fire on a crowd of Iraqi civilians and Army Ranger Michael Behenna, who reportedly murdered a prisoner he was tasked with transporting. The American Civil Liberties Union has called the pardons "an endorsement of murder."

by Luke Darby, GQ |  Read more:
Image: Bill Clark
[ed. Yeah, this guy. What a piece of work. The only conclusion I can come to is it has to be some long-term US strategy to inflame as many Middle-Eastern jihadists as possible to justify our massive military-industrial spending (which at last count totaled $750 billion/yr. - and that's just for "Defense", not counting Homeland Security, NSA, ICE, CIA, and all the "foreign aid" we bestow on other countries).]