Monday, February 6, 2023



Biden Is Reviving Democratic Capitalism

How can inflation be dropping at the same time job creation is soaring?

It has taken one of the oldest presidents in American history, who has been in politics for over half a century, to return the nation to an economic paradigm that dominated public life between 1933 and 1980, and is far superior to the one that has dominated it since.

Call it democratic capitalism.

The Great Crash of 1929 followed by the Great Depression taught the nation a crucial lesson that we forgot after Ronald Reagan’s presidency: the so-called “free market” does not exist. Markets are always and inevitably human creations. They reflect decisions by judges, legislators and government agencies as to how the market should be organized and enforced – and for whom.

The economy that collapsed in 1929 was the consequence of decisions that organized the market for a monied elite, allowing nearly unlimited borrowing, encouraging people to gamble on Wall Street, suppressing labor unions, holding down wages, and permitting the Street to take huge risks with other people’s money.

Franklin D Roosevelt and his administration reversed this. They reorganized the market to serve public purposes – stopping excessive borrowing and Wall Street gambling, encouraging labor unions, establishing social security and creating unemployment insurance, disability insurance and a 40-hour workweek. They used government spending to create more jobs. During the second world war, they controlled prices and put almost every American to work.

Democratic and Republican administrations enlarged and extended democratic capitalism. Wall Street was regulated, as were television networks, airlines, railroads and other common carriers. CEO pay was modest. Taxes on the highest earners financed public investments in infrastructure (such as the national highway system) and higher education.

America’s postwar industrial policy spurred innovation. The Department of Defense developed satellite communications, container ships and the internet. The National Institutes of Health did trailblazing basic research in biochemistry, DNA and infectious diseases.

Public spending rose during economic downturns to encourage hiring. Even Richard Nixon admitted “we’re all Keynesians”. Antitrust enforcers broke up AT&T and other monopolies. Small businesses were protected from giant chain stores. By the 1960s, a third of all private-sector workers were unionized.

Large corporations sought to be responsive to all their stakeholders – not just shareholders but employees, consumers, the communities where they produced goods and services, and the nation as a whole.

Then came a giant U-turn. The Opec oil embargo of the 1970s brought double-digit inflation followed by the Fed chair Paul Volcker’s effort to “break the back” of inflation by raising interest rates so high the economy fell into deep recession.

All of which prepared the ground for Reagan’s war on democratic capitalism.

From 1981, a new bipartisan orthodoxy emerged that the so-called “free market” functioned well only if the government got out of the way (conveniently forgetting that the market required government). The goal of economic policy thereby shifted from public welfare to economic growth. And the means shifted from public oversight of the market to deregulation, free trade, privatization, “trickle-down” tax cuts, and deficit-reduction – all of which helped the monied interests make more money.

What happened next? For 40 years, the economy grew but median wages stagnated. Inequalities of income and wealth ballooned. Wall Street reverted to the betting parlor it had been in the 1920s. Finance once again ruled the economy. Spurred by hostile takeovers, corporations began focusing solely on maximizing shareholder returns – which led them to fight unions, suppress wages, abandon their communities and outsource abroad.

Corporations and the super-rich used their increasing wealth to corrupt politics with campaign donations – buying tax cuts, tax loopholes, government subsidies, bailouts, loan guarantees, non-bid government contracts and government forbearance from antitrust enforcement, allowing them to monopolize markets.

Democratic capitalism, organized to serve public purposes, all but disappeared. It was replaced by corporate capitalism, organized to serve the monied interests.

Joe Biden is reviving democratic capitalism.

From the Obama administration’s mistake of spending too little to pull the economy out of the Great Recession, he learned that the pandemic required substantially greater spending, which would also give working families a cushion against adversity. So he pushed for the giant $1.9tn American Rescue Plan.

This was followed by a $550bn initiative to rebuild bridges, roads, public transit, broadband, water and energy systems. And in 2022, the biggest investment in clean energy in American history – expanding wind and solar power, electric vehicles, carbon capture and sequestration, and hydrogen and small nuclear reactors. This was followed by the largest public investment ever in semiconductors, the building blocks of the next economy. (...)

I don’t want to overstate Biden’s accomplishments. His ambitions for childcare, eldercare, paid family and medical leave were thwarted by Senators Joe Manchin and Kyrsten Sinema. And now he has to contend with a Republican House.

Biden’s larger achievement has been to change the economic paradigm that has reigned since Reagan. He is teaching America a lesson we once knew but have forgotten: that the “free market” does not exist. It is designed. It either advances public purposes or it serves the monied interests.

by Robert Reich, The Guardian |  Read more:
Image: Evan Vucci/Associated Press
[ed. Nice concise history lesson.]

Fixing Ticketing?

Let’s start with the fees. Everybody wants them baked in, except for the acts. Ironically, that includes the acts complaining about the fees!

Everybody on the inside knows the real price of the ticket is the face price plus the fees, otherwise the whole concert promotion paradigm doesn’t work. The promoter needs those fees to make a profit.

But here’s where Ticketmaster takes the blame once again. The hate is focused on the ticketing company when it’s really the fault of the act! The act can ask for an all-in price; Ticketmaster has no problem with this. But so many acts don’t want it.

Let’s use an example. A club show. $25 face value plus $25 in fees. The act can side with the fan: you’re getting ripped off! But the truth is the ticket really costs $50. It’s just that by making half of it fees, the act looks like it’s not overcharging, like it’s on the fans’ side, when this is not the truth.

Of course there are acts that would go to all-in pricing, but unless there’s uniformity, there is no solution, no happiness.

Never mind all the other industries, like hospitality, that survive on fees.

So let’s move on to the bots.

Do you still get spam e-mail? Even worse, do you get spam texts? OF COURSE YOU DO! We’ve been doing this internet thing for decades, but spam hasn’t been eradicated yet. Talk about money… Google provides Gmail, the number one e-mail service, and the company has tons of money, but even Google can’t solve the problem!

So if you think you can legislate bots away…

And even if you have a law, without manpower, without enforcement, the law is toothless. Think about the IRS… The Republicans want funding removed, saying that the IRS targets small businesses. Don’t you see there’s going to be the same argument when it comes to anti-bot enforcement? Even if there’s a law, if it’s enforced, the blowback will be loud.

As for the scalpers utilizing these bots… The truth is both promoters and fans like scalpers. On risky shows, promoters sell directly to scalpers, to take some of the risk off the table. Especially in sports. And the public likes to know that a ticket is always available if they’re willing to pay. And the public wants to be able to resell/scalp its own tickets. So when it comes to tying the ticket to the individual and disallowing resale, the fans are not happy with that either.

Okay, how do we address the evil Ticketmaster?

Forget the merger with Live Nation, that ship has sailed. How can Ticketmaster be hobbled?

The only way is by declaring it a monopoly. On the surface, this appears to be the case, with even Ticketmaster saying it has 60% of the market, others saying as much as 80%.

Easy to throw the m-word around, but proving a monopoly? Much harder. Now, under previous administrations, antitrust laws were not strictly enforced. This has changed under Lina Khan, who is experienced and knows the landscape. This is important: unlike previous heads of the Federal Trade Commission, Khan has worked in the field and understands it. Whereas the public and congresspeople don’t understand ticketing.

So, one way of proving a monopoly is harm to the consumer. Just raw market share is not enough to take action. (...)

So, jumping to the end here, let’s just say the FTC says Ticketmaster is a monopoly. Now if this happens, the FTC must come up with a solution THAT OBVIATES THE MONOPOLY! In other words, when the decision is handed down, the resulting company or companies must not have a monopoly.

Well, everybody inside knows that, as much as it’s a national punching bag, Ticketmaster is the best ticketing company. Sure, use someone else for a club, but if you want scale, Ticketmaster is the only choice. As for someone rising up and competing? Why invest all that money if Ticketmaster has exclusive deals?

by Bob Lefsetz, The Lefsetz Letter |  Read more:
Image: Taylor Swift

How YouTube Created the Attention Economy

YouTube has consumed a good part of my days for more than a decade. As a teen-ager, I used the video-streaming platform to scrounge for crumbs of knowledge, watching free lectures on everything from algebra to literary modernism. Now I navigate to the YouTube app on my television most mornings to watch the news. I stream workout videos. I listen to music. I watch celebrities give tours of their garishly decorated mansions. Sometimes I stay on the site for hours, lost in the maze of memes, dinner ideas, and all manner of distraction.

My YouTube habit is far from unique. According to the company, the site has more than two billion monthly “logged-in” users. In a given twenty-four-hour period, more than a billion hours of video are streamed, and every minute around five hundred hours of video are uploaded. The torrent of content added to the site has helped establish new forms of entertainment (unboxing videos) and revolutionized existing ones (the mukbang). YouTube is a social network, but it is more than that; it is a library, a music-streaming platform, and a babysitting service. The site hosts the world’s largest collection of instructional videos. If you want to fix a tractor or snake a drain or perfectly dice an onion, you can learn how to do these things on YouTube. Of course, these are not the only things you can learn. Anti-vaxxers, 9/11 truthers, live-streamed acts of mass violence—all of these have surfaced on YouTube, too.

“No company has done more to create the online attention economy we’re all living in today,” Mark Bergen writes at the start of “Like, Comment, Subscribe,” his detailed history of YouTube, from 2005, the year it was founded, to the present. Among the titans of social media, YouTube is sometimes overlooked. It has not attracted as much adulation, censure, theorizing, or scrutiny as its rivals Facebook and Twitter. Its founders are not public figures on the order of Mark Zuckerberg or Jack Dorsey. Aaron Sorkin hasn’t scripted a movie about YouTube. But Bergen argues that YouTube “set the stage for modern social media, making decisions throughout its history that shaped how attention, money, ideology, and everything else worked online.” It’s one thing to attract attention on the Internet; it’s another thing to turn attention into money, and this is where YouTube has excelled. The site, Bergen writes, was “paying people to make videos when Facebook was still a site for dorm-room flirting, when Twitter was a techie fad, and a decade before TikTok existed.” Posting on Facebook or Twitter might net you social capital, an audience, or even a branded-content deal, but the benefits of uploading videos to YouTube are more tangible: its users can get a cut of the company’s revenue.

The site has been compensating “creators” since 2007, a scant two years after it launched, and only a year after Google acquired the company for a price tag of $1.65 billion. YouTube splits its advertising revenue fifty-five per cent to forty-five per cent, in favor of creators—one of the best deals available to anyone hoping to be paid for their time on the Internet. Since 2018, the main prerequisites a creator has needed to monetize their videos have been a minimum of a thousand subscribers and four thousand “watch hours” in the previous twelve months. Recipe developers, video-game live-streamers, podcasters, teen-age trolls, children playing with toys, aspiring entrepreneurs hawking get-rich-quick schemes, right-wing shock jocks (at least those who haven’t been demonetized), and major television networks are all members of the baronial class of YouTube moneymakers. In a recent interview, the veteran science-and-education vlogger Hank Green said that the site presented such favorable terms that the idea he would “walk away” from YouTube would be like leaving America: “There are things I very much do not like about it, but I feel a little like a citizen, so that would be such a big decision to make.”
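[ed. To make the arithmetic concrete, here is a rough sketch in Python of the two figures cited above. The thresholds and the split come from the article; the function names and the flat percentage are illustrative simplifications, not YouTube’s actual payout logic.]

def is_partner_eligible(subscribers: int, watch_hours_last_12_months: float) -> bool:
    # The Partner Program bar described above: 1,000 subscribers and 4,000 watch hours.
    return subscribers >= 1_000 and watch_hours_last_12_months >= 4_000

def creator_ad_share(ad_revenue_dollars: float) -> float:
    # Creators keep fifty-five per cent of ad revenue; YouTube keeps the other forty-five.
    return ad_revenue_dollars * 0.55

print(is_partner_eligible(1_200, 4_500))  # True
print(creator_ad_share(100.0))            # 55.0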

Bergen, a reporter for Bloomberg News and Businessweek, catalogues YouTube’s rise and the billions (of users, dollars, hours of video) it controls in a tone that is at once resigned, rhapsodic, and disgusted. The story his book unspools is one of breathtaking profit and foolish stumbles, violence and greed and corporate obfuscation. It is also one of surprising stability: YouTube, Bergen writes, is “the sleeping giant of social media.” Even as TikTok has become a megalith and other social networks have lost their touch with the youth, the site has retained its audience. A recent Pew Poll found that YouTube is used by ninety-five per cent of American teen-agers aged thirteen to seventeen, compared to sixty-seven per cent who used TikTok. As one of its employees told Bergen, “How do you boycott electricity?”

by Kevin Lozano, New Yorker |  Read more:
Image: via

Refik Anadol, “Fluid Dreams” computer generated art at MoMA.
via: Vincent Tullo for The New York Times

Whispers of A.I.’s Modular Future

One day in late December, I downloaded a program called Whisper.cpp onto my laptop, hoping to use it to transcribe an interview I’d done. I fed it an audio file and, every few seconds, it produced one or two lines of eerily accurate transcript, writing down exactly what had been said with a precision I’d never seen before. As the lines piled up, I could feel my computer getting hotter. This was one of the few times in recent memory that my laptop had actually computed something complicated—mostly I just use it to browse the Web, watch TV, and write. Now it was running cutting-edge A.I.

Despite being one of the more sophisticated programs ever to run on my laptop, Whisper.cpp is also one of the simplest. If you showed its source code to A.I. researchers from the early days of speech recognition, they might laugh in disbelief, or cry—it would be like revealing to a nuclear physicist that the process for achieving cold fusion can be written on a napkin. Whisper.cpp is intelligence distilled. It’s rare for modern software in that it has virtually no dependencies—in other words, it works without the help of other programs. Instead, it is ten thousand lines of stand-alone code, most of which does little more than fairly complicated arithmetic. It was written in five days by Georgi Gerganov, a Bulgarian programmer who, by his own admission, knows next to nothing about speech recognition. Gerganov adapted it from a program called Whisper, released in September by OpenAI, the same organization behind ChatGPT and DALL-E. Whisper transcribes speech in more than ninety languages. In some of them, the software is capable of superhuman performance—that is, it can actually parse what somebody’s saying better than a human can.
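[ed. For a sense of how little ceremony is involved, here is a minimal sketch of the same task using OpenAI’s upstream Whisper package in Python (Whisper.cpp itself is a stand-alone C++ program driven from the command line). The file name and the choice of the “base” model are placeholders; the package and ffmpeg must be installed first.]

import whisper  # pip install openai-whisper; also requires ffmpeg on the system

model = whisper.load_model("base")          # other sizes: tiny, small, medium, large
result = model.transcribe("interview.mp3")  # processes the audio in roughly 30-second windows and decodes each
print(result["text"])

[ed. Gerganov’s port does the same job without the Python and PyTorch machinery underneath, which is part of what lets it run on more or less any device.]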

What’s so unusual about Whisper is that OpenAI open-sourced it, releasing not just the code but a detailed description of its architecture. They also included the all-important “model weights”: a giant file of numbers specifying the synaptic strength of every connection in the software’s neural network. In so doing, OpenAI made it possible for anyone, including an amateur like Gerganov, to modify the program. Gerganov converted Whisper to C++, a widely supported programming language, to make it easier to download and run on practically any device. This sounds like a logistical detail, but it’s actually the mark of a wider sea change. Until recently, world-beating A.I.s like Whisper were the exclusive province of the big tech firms that developed them. They existed behind the scenes, subtly powering search results, recommendations, chat assistants, and the like. If outsiders have been allowed to use them directly, their usage has been metered and controlled.

There have been a few other open-source A.I.s in the past few years, but most of them have been developed by reverse engineering proprietary projects. LeelaZero, a chess engine, is a crowdsourced version of DeepMind’s AlphaZero, the world’s best computer player; because DeepMind didn’t release AlphaZero’s model weights, LeelaZero had to be trained from scratch, by individual users—a strategy that was only workable because the program could learn by playing chess against itself. Similarly, Stable Diffusion, which conjures images from descriptions, is a hugely popular clone of OpenAI’s DALL-E and Google’s Imagen, but trained with publicly available data. Whisper may be the first A.I. in this class that was simply gifted to the public. In an era of cloud-based software, when all of our programs are essentially rented from the companies that make them, I find it somewhat electrifying that, now that I’ve downloaded Whisper.cpp, no one can take it away from me—not even Gerganov. His little program has transformed my laptop from a device that accesses A.I. to something of an intelligent machine in itself. (...)

A textbook from 1999, which described a then state-of-the-art speech-recognition system similar to Dragon NaturallySpeaking, ran to more than four hundred pages; to understand it, one had to master complicated math that was sometimes specific to sound—hidden Markov models, spectral analysis, and something called “cepstral compensation.” The book came with a CD-ROM containing thirty thousand lines of code, much of it devoted to the vagaries of speech and sound. In its embrace of statistics, speech recognition had become a deep, difficult field. It appeared that progress would now come only incrementally, and with increasing pain.

But, in fact, the opposite happened. As Sutton put it in his 2019 essay, seventy years of A.I. research had revealed that “general methods that leverage computation are ultimately the most effective, and by a large margin.” Sutton called this “the bitter lesson”: it was bitter because there was something upsetting about the fact that packing more cleverness and technical arcana into your A.I. programs was not only inessential to progress but actually an impediment. It was better to have a simpler program that knew how to learn, running on a fast computer, and to task it with solving a complicated problem for itself. The lesson kept having to be relearned, Sutton wrote, because jamming everything you knew into an A.I. often yielded short-term improvements at first. With each new bit of knowledge, your program would get marginally better—but, in the long run, the added complexity would make it harder to find the way to faster progress. Methods that took a step back and stripped expert knowledge in favor of raw computation always won out. Sutton concluded that the goal of A.I. research should be to build “agents that can discover like we can” rather than programs “which contain what we have discovered.” In recent years, A.I. researchers seem to have learned the bitter lesson once and for all. The result has been a parade of astonishing new programs.

by James Somers, New Yorker |  Read more:
Image: Pierre Buttin
[ed. For creating images similar to DALL-E, see also: the free, open source program Stable Diffusion Online (no sign-up required). And, in other AI news: California congressman proposes a new government agency to regulate various AI issues; another wants to create general operating standards, including digital watermarks; BuzzFeed says it will use AI to create content (stock jumps 150 percent); and, Mostly Skeptical Thoughts On The Chatbot Propaganda Apocalypse (ACX). Why does everything AI suddenly seem like it's all moving too damn fast?]
"Imagine a world where autonomous weapons roam the streets, decisions about your life are made by AI systems that perpetuate societal biases and hackers use AI to launch devastating cyberattacks. This dystopian future may sound like science fiction, but the truth is that without proper regulations for the development and deployment of Artificial Intelligence (AI), it could become a reality. The rapid advancements in AI technology have made it clear that the time to act is now to ensure that AI is used in ways that are safe, ethical and beneficial for society. Failure to do so could lead to a future where the risks of AI far outweigh its benefits.

I didn’t write the above paragraph. It was generated in a few seconds by an A.I. program called ChatGPT, which is available on the internet. I simply logged into the program and entered the following prompt: “Write an attention grabbing first paragraph of an Op-Ed on why artificial intelligence should be regulated.”


I’m a Congressman Who Codes. A.I. Freaks Me Out. (NY Times)

Sunday, February 5, 2023

Sorry, Not Sorry

The scientist at the heart of the scandal involving the world’s first gene-edited babies has said he moved “too quickly” by pressing ahead with the procedure.

He Jiankui sent shock waves across the world of science when he announced in 2018 that he had edited the genes of twin girls, Lulu and Nana, before birth. He was subsequently sacked by his university in Shenzhen, received a three-year prison sentence, and was broadly condemned for having gone ahead with the risky, ethically contentious and medically unjustified procedure with inadequate consent from the families involved.

Speaking to the Guardian in one of his first interviews since his public re-emergence last year, He said: “I’ve been thinking about what I’ve done in the past for a long time. To summarise it up in one sentence: I did it too quickly.”

However, he stopped short of expressing regret or apologising, saying “I need more time to think about that” and “that’s a complicated question”. (...)

Gene-edited cells were already beginning to be used in clinical treatments for adults. But genetically modifying embryos was – and is – far more ethically contentious, because changes are made to every cell in the body and are passed down to subsequent generations. Some question whether such a step could ever be medically justified.

Against this backdrop, He dropped the bombshell at an international conference in Hong Kong four years ago that he had modified two embryos before they were placed in their mother’s womb. It later emerged that a third gene-edited baby had been born. (...)

“According to Chinese law, when a person has served the prison [sentence], after that they begin again with full rights,” he said. “Compared to the past experience, it’s more important what we’re doing today that determine whether I move on or not.”

Asked whether the past four years had been difficult, He said he preferred to focus on the future. “I like the Beatles song Let It Be,” he said. “Let’s move on to my new project.”

by Hannah Devlin, The Guardian | Read more:
Image: Mark Schiefelbein/AP

Friday, February 3, 2023

What Did Robert Johnson Encounter at the Crossroads?

Son House, John The Revelator

Let’s now turn to the blues.

I’ve chosen it as the focal point of this chapter, because for many music historians it represents the rise of secular themes in African-American music. It’s the last place you would look for transcendence.

Even the clichéd opening of so many blues songs (I woke up this morning…) focuses on the day-to-day, however depressing, instead of otherworldly experiences. And the subject matter of the blues, much like hip-hop a half-century later, is a litany of sins and vices, with every one of the Ten Commandments getting trampled upon, sooner or later, with scandalous persistence. This wasn’t just an abandonment of the sacred music tradition, but its total renunciation—or so it seems.

The worldview of blues music was the exact opposite of the spiritual. That’s how the story is usually told—and for a good reason. Blues records sold well because they offered a strident alternative to sanctimony and religiosity. They told about real life, and with all the gritty details.

But as soon as we peer into the inner life of the blues, its apparent secularism and modern ways start to disappear. In so many instances, its earliest exponents in rural America performed spiritual music as well, and not just any kind of religious music—rather, fervent and quasi-apocalyptic songs that repeatedly describe spiritual quests to another world. (...)

This story—the best-known tale in the entire history of blues music—tells how Johnson obtained his legendary skills as a guitarist by making a deal with the Devil at a crossroads at midnight. I’ve found that even people who know little or nothing about the blues are still aware of this story. It has been commemorated in books, documentaries, Hollywood movies, and tourist attractions.

No one knows where the crossroads are located, but that hasn’t stopped people from promoting various locations as the place where Johnson made his infamous deal. Visitors to Clarksdale, Mississippi can even see a pole at the intersection of Highway 61 and Highway 49 where the transaction took place. It’s a shame that this intersection didn’t exist at the time Johnson was learning guitar, although that hasn’t prevented the Clarksdale crossroads from generating significant tourism dollars for the city. (...)

According to the revisionist narrative, this embarrassing tale about the crossroads gained credence because of a 1966 interview with Robert Johnson’s mentor Son House—who told journalist Pete Welding that Robert Johnson “sold his soul to the devil to play like that.” This passing comment has caused much discomfort among blues writers, and I’ve even heard grumbling that Welding never shared a tape recording of the interview, implying that he might just have made the whole thing up.

But around that same time, Harvard-trained blues researcher Dr. David Evans encountered an even more detailed crossroads story while researching Tommy Johnson, a blues guitarist of that same era—unrelated to Robert Johnson, but apparently another Delta musician who had made a deal with the Devil.

by Ted Gioia, The Honest Broker |  Read more:
Image: YouTube
[ed. From the excellent book (currently being serialized on Ted's Substack site): Music to Raise the Dead.]

Arms and the Man

How Not to Write an Action Movie

If the images from the James Webb Space Telescope have taught us anything, it’s this: if you look deep into the darkness of the universe and consider what we estimate to be its two trillion galaxies and the trillions of solar systems they contain, it is statistically unthinkable that an uncountable number of these solar systems don’t harbor planets in the so-called Goldilocks zone, planets on which organic life flourishes, creatures of untold variety and splendor, creatures with one thing in common: they all have connections in Hollywood.

Here on Earth, everyone’s uncle’s brother’s barber’s kid’s girlfriend’s depressed cousin’s psychic’s personal trainer’s family friend is a gaffer or key grip in Hollywood, one who might be able to get your script to some factotum rowing in a minor production company’s development galley—the number of young people in Hollywood “working in development” exceeding the population of the great state of Maine—one of whom could (it was possible!) land you a meeting that could lead to a deal. These things happen! So of course I had a connection, mine turning out to be a little different, for as decades dragged on, my attempted exploitation of said person may be understood as the most humiliating face-plant in the history of nepotism.

When I was a kid, I was pen pals with the daughter of my New York parents’ closest California friends. Said pen pal’s older brother was—I discovered on visits to Woodland Hills, California—cool. I tucked my Pittsburgh Steelers sweatshirts into my pants. Said pen pal’s older brother modeled. He drove a BMW. Coolest of all, he was also nice! Said cool/nice person went into the film business, bulldozing forward from intern to assistant to script reader to development drone—one who, thanks to bootstrap industry and perfect judgment, found a script in the slush that became a huge movie. One thing led to another until he was the huge thing, so much so that, at this writing, he’s arguably the most powerful person in film.

After my third year of college, in 1990, and after I’d taken a fiction workshop confirming my soul’s improvident hope that I Was a Writer, the nice but not yet absurdly powerful person, after I’d expressed an interest in writing movies, sent me a box of scripts. Like, twenty. Some were classic examples (Citizen Kane, Ordinary People, Butch Cassidy and the Sundance Kid), but most were movies in some stage of production, spanning various genres, all generated by talented young writers: Regarding Henry, by J. J. Abrams; Seven, by Andrew Kevin Walker; Quiz Show, by Paul Attanasio; The Last Boy Scout, by Shane Black. All the scripts were entertaining and instructive: having metabolized their methods and modes, the motivated student would need no further guide to the form. It was a four-ream-thick MFA.

Of that stack, Shane Black’s was the stick of dynamite in the box of Cohibas. Black was the only member of the bunch with a produced movie, Lethal Weapon (1987). A subsequent script, The Last Boy Scout, the expediter of the box explained, had sold for $1.75 million, more than any before it. It almost doesn’t pain me to say that reading it was one of the most exhilarating reading experiences I’ve ever had, up there with finishing William Gaddis’s The Recognitions, T. S. Eliot’s Four Quartets, Vladimir Nabokov’s Pnin, Ben Metcalf’s Against the Country, and—wait for it—Moby-Dick. You’ll have noted my qualificatory “almost.” My hesitation isn’t out of snobbery or shame; rather, a fear, in part, that you’ll recall the actual movie it became: an abominable R-rated travesty starring Bruce Willis, one which lacked, totally, the qualities of Black’s script.

If you read the thing now, the film’s failure to capture the narcotic thrill of the original makes perfect sense. As much as Black was a master of pacing, a fine crafter of set pieces, and delightfully de trop as a writer of snappy, manly dialogue, the most galvanic features manifested themselves in stage directions, interstitial material steering the reader through the gleeful nonsense. No context for this bonbon because who cares:
INT. DINGY DRESSING ROOM—NIGHT

Cory and Jimmy are engaged in very hot sex. This is not a love scene; this is a sex scene.

Sigh. I’m not even going to attempt to write this quote-unquote “steamy” scene here, for several good reasons:

A) The things that I find steamy are none of your damn business, Jack, in addition to which—

B) The two actors involved will no doubt have wonderful, highly athletic ideas which manage to elude most fat-assed writers anyhow, and finally—

C) My mother reads this shit. So there.

(P.S.: I think we lost her back at the Jacuzzi blowjob scene.)

That’s the tenor of what keeps Black’s scenes taped together. Not that this idea—breaking the fourth wall of the script—was new. William Goldman, one of the greatest modern screenwriters, wrote charming, cajoling stage directions, addressing the reader directly, if passingly, with light touches of confederacy. Black cites Goldman as an influence, but Black’s version of the Goldmanic mode is on steroids. The reader is not cajoled so much as strong-armed into having the most delightful time: pigs in blankets appear just as the tummy grumbles; cheap champagne is sloppily topped off; cocaine, likely cut with creatine, is spooned into nostrils so that attention never lags. A reader of a Black script—first and foremost a reader-buyer—would feel giardia-level sick not to love it, so hospitable is it to the reader’s fat ass.

In the first ten pages of Boy Scout, a running back, heading upfield in a pro-football game, pulls a gun from under his jersey and, before sixty thousand witnesses, shoots the opposing players in his way. (“Pumps three shots into the free safety’s head. The bullets go straight through. On the back of his helmet. A mixture of blood and fiberglass.”) He makes it to the end zone, where he utters an appropriate witticism (“I’m going to Disneyland”), then blows his own head off. In the next scene, the drunk middle-aged hero (Willis) threatens to shoot a child with a .38 (a dead squirrel is involved), not long after which the reader reaches the “Jacuzzi blowjob scene” that Black’s mom probably didn’t like, wherein a jacked pro-baller repeatedly plunges a woman’s head underwater so that she might, against her will, perform aquatic fellatio. The script’s other hero saves the day by grabbing a football and throwing a sixty-mile-per-hour spiral at the attempted rapist’s face. But it was none of these particular instances of crudity that registered most with me on a sinking-feeling reread. Rather, it was the way that Black was, through his Virgilian shepherding of the reader through the carnage, ironizing the shit out of what has always been central to cinema: violence.

by Wyatt Mason, Harper's |  Read more:
Image: Chloe Niclas

Thursday, February 2, 2023

Why Is Everything So Complicated?

I started shaving, tackling the wispiest of bumfluff, 40 years ago. I did so in an attempt to stimulate growth in order to make me look older, so I would have a better chance of getting served in pubs. Not one of these three things came to pass. The razor I used had two blades. I remember thinking how that felt excessive for my needs; one would have done. This was 1983 – 11 years after, according to its website, Gillette came up with the “Trac II®, the first twin-blade shaving system”. And it was a good 15 years before Gillette was “breaking the performance barrier with the MACH3®, the first three-blade technology, for an even smoother, closer shave”.

The blade arms race was on, providing a rich source of comic material... But on the razor makers ploughed regardless, breaking new ground with ever more blades. Gillette, with a fine flourish, skipped four blades and went straight to five in 2006. And at five it has stuck, instead coming up with other stuff to keep our excitement high, most recently a heated razor that “delivers instant warmth in less than one second at the push of a button and provides a noticeably more comfortable shave”. Reassuringly, though, the blade race continues apace with the Dorco Pace 7, “World’s First and Only Seven Blade Razor”. Seven!

Look, everyone’s got to make a living, but this is getting silly. We’re approaching Spinal Tap territory, with their amplifiers calibrated to 11 instead of 10. Innovation, we’re always told, is a wonderful thing. But what about innovation with no real purpose other than to drive sales? To be fair, I’m sure Gillette and others could provide evidence of improved performance, but while my dictionary defines innovate as “to introduce something new”, it also, tellingly, has it as “to introduce novelties”.

Kitchens are crammed with cooked-up novelties. We need ovens to get hot, fridges to get cool, and dishwashers to wash dishes. But oh, the features I’ve fallen for in my time. Ovens that spurt steam and are equipped with integrated temperature probes, for a start. Both vaguely useful, I must admit, but both conked out before long. This is another unhappy outcome of innovation: there’s ever more stuff to go wrong. The top-rated American-style fridge freezer on Which? will set you back around two and a half thousand pounds. It sports a large touch screen on which you can see who’s at your front door, play music and videos and plan your meals. Inside, believe it or not, there’s a camera so you can use your smartphone to see what’s in there, get alerts about use-by dates and even add to your online shopping list. Why? Please make it stop.

by Adrian Chiles, The Guardian |  Read more:
Image: adventtr/Getty Images/iStockphoto
[ed. No kidding. Can't we all accept that touch screens are a disaster for anything other than smartphones and computers? Anything that used to require some tactile touch (especially in the dark)? Where's the innovation in making buttons ergonomically better and more integrated these days? Non-existent. I'd buy an appliance with solid button thingys over a touch screen any day of the week.] 

More Than 600 Mass Shootings Since January 2022

There have been more than 600 mass shootings since Jan. 1, 2022 in the United States, according to the Gun Violence Archive. (...)

Mass shootings — where four or more people, not including the shooter, are injured or killed — have averaged more than one per day since January 2022. Not a single week in 2022 has passed without at least four mass shootings. (...)

With eight days to go, this January has had more shootings than any other January on the database’s records. The toll is immense. In 2022 alone, mass shootings have killed 673 people and injured 2,700.

by Júlia Ledur and Kate Rabinowitz, Washington Post | Read more:
Image: NY Times. Sources: Institute for Health Metrics and Evaluation, Univ. of Washington, Small Arms Survey, World Bank
[ed. At least we still lead the world in something (also, Defense spending - my personal bugaboo - getting pretty close to $800 billion/yr. now. I wonder if that'll come up in the debt ceiling debate? Haha...just kidding.). Yay. We're Number 1.]

WA’s ‘Death With Dignity’ Law Failed My Wife

My beloved wife of 27 years had to die alone.

It shouldn’t have been that way. We both wanted Toni’s suffering to end with her dying peacefully in my arms, but Washington state’s supposedly enlightened “Death with Dignity” law wouldn’t let her.

Despite first-class medical care, her disease, one of the many variants of ALS, was slowly grinding her down.

Toni’s primary identity was not as an attorney, wife or prankster nonpareil, but as a distance runner. She enthusiastically, and sometimes doggedly, ran every day for 30 years, missing only a couple of days due to the flu. So it was brutally ironic that the first thing the disease took was her legs’ ability to support her.

We knew what lay ahead. Her brother had recently died of the same disease, and she didn’t want a repeat of the prolonged misery that he and his family endured.

Our current law says that to get aid in dying a patient must have a diagnosis of natural death within six months. But because her disease was slow and inexorable, it would mean more years of suffering before she could qualify for medically aided death.

Finally, after about eight years of decline, the force and bewildering variety of her symptoms overwhelmed her uncommon ability to extract every last drop of fun out of living. Her effervescence still shined on occasion, but less and less. We knew that Wild Thing (my pet name for her) would have to figure out how to end it all.

And not merely how to do it. Because she didn’t qualify for medical aid in dying, she would have to do it alone.

I hate this fact.

It’s bad enough that nature deprived us of an expected 25 more years of love, but it breaks my heart that our poorly formulated laws prevented me, and everyone else, from giving her aid and comfort during the planning and carrying it out. If it appeared that I had assisted in any way, I would be in legal jeopardy. (...)

But how does a person decide when the pain is persistent enough, when the happiness is rare enough, to actually do it? The story of the frog in slowly heating water comes to mind. If anybody could resolve to do it without discussing plans with anyone, and then accomplish it, it would be Toni.

I am fiercely proud of her bravery and force of will.

But I’m also angry because we, as citizens in charge of our laws, have badly failed her and many others. We need to improve these laws as Canada’s Parliament did in 2016.

We should drop the six-month requirement and keep the requirements that a patient have a grievous and irremediable medical condition, an advanced state of decline and unbearable suffering from the illness. And we should keep the more general safeguards regarding the patient’s age, mental health, informed consent, unacceptable motives, pressure from family or others, et cetera.

If Washington had such a law, we and our loved ones would have come together for a wonderful and tearful goodbye, rather than attend a memorial service. And Toni’s last moments would have been in my warm, loving embrace.

by Peter Haley, Seattle Times |  Read more:
Image: Peter Haley
[ed. The DwD process is unnecessarily convoluted (on purpose). All our lives (from birth) we're expected to take responsibility for ourselves and maintain self-control, then near the end that control is taken away. Why exactly? Because it's morally wrong to have agency over your own life? 
***
Contrast End of Life's mission statement:
Our mission is to guide people in preparing for the final days of their lives. We believe that a peaceful death should be within reach of everyone and that no one should face intolerable suffering at the end of life. We promote advance planning and envision a day when all Washington residents will make informed decisions so they may experience peaceful deaths consistent with their values.
... with those expressed, for example, in: The European Way to Die by Michel Houellebecq (Harper's), which seem (to me) mean and incoherent:]
Little by little, and without anyone’s objecting—or even seeming to notice—our civil law has moved away from the moral law whose fulfillment should be its sole purpose. It is difficult and exhausting to live in a country where the laws are held in contempt, whether they sanction acts that have nothing to do with morality or condone acts that are morally abject. But it’s even worse to live among people whom one begins to disdain for their submission to these laws they hold in contempt as well as for their greediness in demanding new ones. An assisted suicide—in which a doctor prescribes a lethal cocktail that the patient self-administers under circumstances of his own choosing—is still a suicide.

We are demonstrating once again our feeble respect for individual liberty and an unhealthy appetite for micromanagement—a state of affairs we deceptively call welfare but is more accurately described as servitude. This mixture of extreme infantilization, whereby one grants a physician the right to end one’s life, and a petulant desire for “ultimate liberty” is a combination that, quite frankly, disgusts me.

Liziqi

[ed. I used to collect Hawaiian sea salt in lava rock depressions, but this is next level (still traditional). See also: this past post.]

Wednesday, February 1, 2023


Eyvind Earle

Manifestation

How Manifestation is Turning Billions into Believers

Three months ago, Macey Irving, an 18-year-old in Ontario, Canada, manifested a boyfriend.

One night, she wrote down specific qualities she wanted in a “perfect” partner: tall, dark hair, green eyes, into conspiracy theories, extroverted, not cocky. At the bottom of the page, she wrote, “with harm to none, I summon my ‘perfect’ person into my life.” A month later, they met someone who fit the exact qualities she’d written down.

“I was surprised because we were talking and I said, ‘What color eyes do you have?’ just to make sure that I was checking off a box and they were like, ‘green eyes’ and I was like, OK, these are checking too many boxes. I was like, ‘OK, this is my manifestation,’” Irving tells NYLON.

Ultimately, Irving decided not to date this person because she wanted to focus on school, but she couldn’t believe the manifestation worked. “I felt like I needed to work on my manifestations for something bigger,” they said. “There are much more important things, like career and all that.”

Irving is one of the hundreds of thousands of people using TikTok to learn about manifesting. Whether it’s manifesting love, cash, or career opportunities, the platform is full of people who want things and even more ways to get them. The hashtag #manifestation currently has more than 15 billion views. In a time when dating feels like a hellscape, we’re witnessing the live collapse of nearly every system, and we’re still on the hook for student loans, you might as well try lighting some incense and asking for what it is that you really want. There’s a range of ways to manifest that include everything from the spiritual — being in touch with yourself, journaling, meditation — to the straight-up witchy, like writing seven times in a row that you want your crush to text you while lighting sage over your iPhone.

“Manifestation is a catchall phrase for spell work, for setting intentions, for creating a more honest experience for yourself for what you are looking for in this lifetime,” says Aliza Kelly, astrologer and author of This Is Your Destiny: Using Astrology to Manifest Your Best Life. “For me, manifesting is not just about getting things — it’s also about living an honest life and living a life that is aligned with who you are, what you desire and the ways you want to show up in the world.”

Kelly says there are two parts to manifestation — one exists in the physical realm, which are action items like applying to jobs you want or touring apartments you’re interested in, and the other exists in the spiritual realm, where we do things like set intentions, create vision boards, do rituals, or light candles and incense. For manifestation to work, you must do both in tandem.

“You can’t just do one magical ritual candle work and not follow it up with actionable items in the physical world,” she says.

by Sophia June, Nylon |  Read more:
Image: Screenshots/TikTok/Shutterstock/Getty

Why VR/AR Gets Farther Away as It Comes Into Focus

As we observe the state of XR in 2023, it’s fair to say the technology has proved harder than many of the best-informed and most financially endowed companies expected. When it unveiled Google Glass, Google suggested that annual sales could reach the tens of millions by 2015, with the goal of appealing to the nearly 80% of people who wear glasses daily. Though Google continues to build AR devices, Glass was an infamous flop, with sales in the tens of thousands (the company’s 2022 AR device no longer uses the Glass brand). Throughout 2015 and 2016, Mark Zuckerberg repeated his belief that within a decade, “normal-looking” AR glasses might be a part of daily life, replacing the need to bring out a smartphone to take a call, share a photo, or browse the web, while a bigscreen TV would be transformed into a $1 AR app. Now it looks like Facebook won’t launch a dedicated AR headset by 2025—let alone an edition that hundreds of millions might want.

In 2016, Epic Games founder/CEO Tim Sweeney predicted that within five to seven years, we would have not just PC-grade VR devices but devices that would have shrunk down into Oakley-style sunglasses. Seven years later, this still seems at best seven years away. Recent reporting says Apple’s AR glasses, which were once targeted for a 2023 debut and then pushed to 2025, have been delayed indefinitely. Snap’s Spectacles launched to long lines and much fanfare, with another three editions launched by 2021. In 2022, the division was largely shuttered, with the company refocusing on smartphone-based AR. Amazon has yet to launch any Echo Frames with a screen, rather than just onboard Alexa. (...)

Over the past 13 or so years, there has been material technical progress. And we do see growing deployment. Today, XR is selectively used in civil engineering and industrial design, in film production, on assembly lines and factory floors. Some schools use VR some of the time in some classes - and the utility of a virtual classroom with virtual Bunsen burners and virtual frogs to dissect, all overseen by an embodied instructor, while you sit beside and make eye contact with your peers, is obvious. VR is also increasingly popular for workplace safety training, especially in high-risk environments such as oil rigs; teaching personnel how, when, and where people look is already having life-saving applications. And on the topic of saving lives, Johns Hopkins has been using XR devices for live patient surgery for more than a year, beginning with the removal of cancerous spinal tumors. If you use a high-end VR headset such as the Varjo Aero (which also requires a physical tether to a gaming-grade PC and costs $2,000) to play a title such as Microsoft Flight Simulator (which operates a 500,000,000 square kilometer reproduction of the earth, with two trillion individual rendered trees, 1.5 billion buildings, and nearly every road, mountain, and city globally), there is the unmistakable feeling the future is near.

The examples listed above are technically impressive, meaningful, and better than ever. But the future was supposed to have arrived by now. In 2023, it’s difficult to say that a critical mass of consumers or businesses believe there’s a “killer” AR/VR/MR experience in market today; just familiar promises of the killer use cases that might be a few years away. These devices are even farther from substituting for the devices we currently use (and it doesn’t seem like they’re on the precipice of mainstream adoption, either). There are some games with strong sales—a few titles have done over $100MM—but none where one might argue that, if only graphics were to improve by X%, large swaths of the population would use VR devices or those titles on a regular basis. I strongly prefer doing VR-based presentations to those on Zoom—where I spend 30-60 minutes staring at a camera as though no one else is there. But the experience remains fraught; functionality is limited; and onboarding other individuals is rarely worth the benefit because its participants seem to find these benefits both few and small. When the iPhone launched, Steve Jobs touted that it did three distinct things—MP3 player, phone, internet communicator—better at launch than the single-use devices then on the market. The following year, the iPhone launched its App Store and “There’s an App for That” proliferated, with tens of millions doing everything they could on the device. The “killer app” was that it already had dozens of them. (...)

Of course, XR devices will not suddenly replace an existing device category. Hundreds of millions will first use VR/AR alongside their consoles, PCs, and smartphones before tens of millions drop one of the latter for the first – and hundreds of millions will continue to use both long after (this essay is written on a PC, for example). But the timing of this transition is relevant for those investing. Return to my Johns Hopkins example. After completing the surgery, Dr. Timothy Witham, who is also director of the hospital’s Spinal Fusion Laboratory, likened the experience to driving a car with GPS. I love this analogy because it shows how XR can complement existing devices and behaviors rather than displace them (it also complements reality, rather than disconnecting us from it). Put another way, we drive a car with GPS; we don’t drive GPS instead of a car, and GPS doesn’t replace the onboard computer either. What’s more, many of us travel more often because GPS exists. Dr. Witham also provides a framework through which we can evaluate the utility of XR devices. To exist, they need not upend convention, just deliver better and/or faster and/or cheaper and/or more reliable outcomes. But even under these more moderated measures, the future seems far off. GPS began to see non-military adoption in the 1990s, but it took another two decades to mature in cost and quality to become a part of daily life. Furthermore, the mainstream value in GPS was not only in improving commutes but in enabling applications as diverse as Tinder, Siri, Yelp, Spotify, and many others. (...)

Many entrepreneurs, developers, executives, and technologists still believe XR is the future (I do). In particular, these groups believe in AR glasses that will eventually replace most of our personal computers and TV screens. And history does show that over time, these devices get closer to our face, while also more natural and immersive in interface, leading to increased usage too. But why is this future so far behind? Where is the money going? What progress is being made? And most importantly, how many XR winters must come and go before a spring actually leads to summer?

“It Looks Like Wii Sports”

More than half of all households in the United States own a video game console. In almost all cases, this console is the most powerful computing device owned, used, or even seen by the members of that household. This includes those households that own the most recent model of iPad Pro or work in an office with a high-end enterprise PC or Mac. Regardless of which one they choose, that video game console is also more affordable than most other consumer or even professional-grade computing devices. It typically costs more, for example, to purchase a comparably powered gaming PC or even to replace the graphics card on an existing PC. This is because consoles benefit from substantial economies of scale, with their manufacturers shipping 50–150MM mostly standardized units over a decade. Purchasing individual components, each one individually packaged, marked-up, and retailed, often with new models released annually, is expensive. Video game consoles are also subsidized, typically by $100–$200, as their manufacturers pursue a razor-and-blades model whereby subsequent software purchases eventually recoup the money lost selling the hardware. No graphics card or monitor manufacturer gets a cut of your Robux or V-Bucks.

Compared to everyday devices, the computing power of a video game console is so great that in 2000, Japan even placed export limitations on its own beloved giant, Sony, and its signature PlayStation 2 console. The government feared that the PS2 could be used for terrorism on a global scale, for instance to process missile guidance systems. The following year, in touting the importance of the consumer electronics industry, U.S. Secretary of Commerce Don Evans stated that “yesterday’s supercomputer is today’s PlayStation.” Evans’s pronouncement was powerful—even though it was arguably backwards; today’s PlayStation is often tomorrow’s supercomputer. In 2010, the U.S. Air Force Research Laboratory built the 33rd-largest supercomputer in the world using 1,760 Sony PlayStation 3s. The project’s director estimated that the “Condor Cluster” was 5% to 10% the cost of equivalent systems and used 10% of the energy. The supercomputer was used for radar enhancement, pattern recognition, satellite imagery processing, and artificial intelligence research.

Yet in many ways, video game consoles have it easy. Consider the PlayStation 5 or Xbox Series X, both top-of-the-line video game consoles released in 2020. These devices are nearly ten pounds and larger than a shoebox—brutal in comparison to other consumer electronics devices, but fine given that these devices are placed inside a media shelving console and never moved. In fact, it’s not fine—it’s an advantage! Because these devices can be large, unsightly, and stationary, Sony and Microsoft get to place large and loud fans inside their consoles, which keep these consoles cool as they perform their intensive calculations, and aid these fans with large intake and exhaust vents. Sony and Microsoft can also keep component costs down because they don’t need to prioritize their size the way a smartphone manufacturer must. And while Sony’s and Microsoft’s consoles are heavy, they, unlike most consumer devices, never need a battery. Instead, they receive constant power from the electrical grid. This reduces the size of the device, as well as the heat it generates, which in turn means that the fan can be smaller, too, and means they can run indefinitely, rather than just a few hours. (...)

This context around consoles is important to keep in mind as we consider VR/AR/MR. It’s common to hear the critique that the experiences produced by these devices look worse than those produced by the consoles of a decade ago that cost half as much at the time. When it comes to visually rendering a virtual environment, VR/AR/MR devices will always fall short of a modern video game console. Always. This is because the “work” performed by these devices is far, far harder while the constraints are far, far greater. (...)

But Does It Play Better?

All consumer tech faces tradeoffs and hard problems. But XR devices require so many points of optimization: heat, weight, battery life, resolution, frame rate, cameras, sensors, cost, size, and so on. Zuckerberg’s belief in this device category, set against these problems, explains how it’s possible he’s spending $10B+ year after year after year. That money is being sunk into optics, LEDs, batteries, processors, cameras, software, operating systems, and the like. And if Zuckerberg can crack this, with nearly all of his competitors years behind (if they’re bothering at all), the financial returns may be extraordinary. In early 2021, Zuckerberg said, “The hardest technology challenge of our time may be fitting a supercomputer into the frame of normal-looking glasses. But it's the key to bringing our physical and digital worlds together.”

The immense difficulty of XR also explains why “the graphics look like they’re from the Wii” is actually a compliment—it’s a bit like saying an adult ran 100 meters as fast as a 12-year-old, even though the adult was wearing a 50-pound backpack and solving math problems at the same time. This defense is separate from whether Meta’s art style is good relative to its constraints. There’s pretty widespread consensus that it’s bad. However, it’s not quite fair to compare the graphics of Meta’s avatars or signature products, such as Horizon Workrooms, to those of third-party VR titles such as VRChat or Rec Room. This fidelity is available to Meta, but only selectively – as we know, “graphics” are just one part of the computing equation. For example, a two-person meeting in Horizon Workrooms that expands to eight people might require halving the frame rate, the avatar definition, or the accuracy of eye reproduction, while also draining batteries far faster. Or your avatar—intended to be a representation of you—could look better or worse, more detailed or more generic, legged or legless, depending on which application you’re using it in. This gets eerie, distracting, and annoying. (...)
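A toy frame-budget calculation shows the shape of the Workrooms tradeoff described above. The refresh rate and per-avatar render costs below are hypothetical numbers chosen purely for illustration; they are not Meta's actual figures.

```python
# Toy render-budget model: a fixed frame time shared by the scene and its avatars.
# All numbers are hypothetical; real headsets budget their work very differently.

REFRESH_HZ = 72                       # assumed headset refresh rate
FRAME_BUDGET_MS = 1000 / REFRESH_HZ   # ~13.9 ms of render time per frame
BASE_SCENE_MS = 6.0                   # assumed cost of rendering the room itself
PER_AVATAR_MS = 3.5                   # assumed cost of one full-detail avatar

def max_full_detail_avatars() -> int:
    """How many full-detail avatars fit in one frame at the target refresh rate?"""
    return int((FRAME_BUDGET_MS - BASE_SCENE_MS) // PER_AVATAR_MS)

def per_avatar_budget_ms(avatars: int, target_hz: float = REFRESH_HZ) -> float:
    """Per-avatar render budget available to keep `avatars` people at `target_hz`."""
    return (1000 / target_hz - BASE_SCENE_MS) / avatars

print(max_full_detail_avatars())               # 2 avatars fit at full detail
print(round(per_avatar_budget_ms(8), 2))       # ~0.99 ms each for 8 people at 72 Hz
print(round(per_avatar_budget_ms(8, 36), 2))   # ~2.72 ms each if the frame rate is halved
```

Under these assumptions, two full-detail avatars fit comfortably at 72 Hz, but an eight-person meeting leaves each avatar less than a third of the full-detail budget, and even halving the frame rate does not fully close the gap. That is the kind of compromise the paragraph describes.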

Many people I know believe that, absent extraordinary advances in battery technology, wireless power, optics, and computer processing, we simply cannot achieve the XR devices that many of us imagine, devices that would conceivably replace the smartphone or merely (a smaller ask) engage a few hundred million people on a daily basis. Just last December, six years after he told VentureBeat that such devices were five to seven years away, Tim Sweeney told Alex Heath, “Well, I think that augmented reality is the platform of the future. But it’s very clear from the efforts of Magic Leap and others that we need not just new technology but, to some extent, new science in order to build an augmented reality platform that’s a substitute for smartphones. And it’s not clear to me whether that’s coming in 10 years or in 30 years. I hope we’ll see it in my lifetime, but I’m actually not sure about that.”

by Matthew Ball, MatthewBall.vc |  Read more:
Image: DALL-E, uncredited

EPA Blocks Pebble Mine Project In Alaska


EPA Blocks Alaska Pebble Mine In Salmon-Rich Bristol Bay Region (Seattle Times)
Images: Loren Holmes / Anchorage Daily News; Jason Ching (The Conservation Fund via)
[ed. Finally, after decades of study and political struggle. This is what I did in my 30-year career - analyzing, mitigating, and permitting or denying major development projects throughout Alaska (based on their feasibility and impact): mines, ports, dams, oil fields, transportation projects, etc. - also land acquisitions and conservation easements to protect critical fish and wildlife habitats and recreational access (e.g., developing the Exxon Valdez Habitat Protection program, resulting in the protection of nearly 650,000 acres). It almost seemed like a truism at the time that the best place to site a project always happened to be in the worst, most environmentally sensitive location. The Pebble project would have been one of the worst of the worst. See also: E.P.A. Blocks Long-Disputed Mine Project in Alaska (NYT); Over 44,000 Acres of Critical Bristol Bay Habitat Permanently Protected; Pedro Bay Rivers Project, Alaska (The Conservation Fund); and, What Alaska leaders, advocacy groups and industries are saying about the EPA rejection of Pebble mine (ADN).]

Tuesday, January 31, 2023

Why Did the Beatles Get So Many Bad Reviews?

John Lennon’s concept sketch for the Sgt. Pepper’s cover, and the end result


On the 50th anniversary of Sgt. Pepper’s Lonely Hearts Club Band, the New York Times bravely reprinted the original review that ran in the newspaper on June 18, 1967. I commend the courage of the decision-makers who were willing to make the Gray Lady look so silly. But it was a wise move—if only because readers deserve a reminder of how wrong critics can be.

“Like an over-attended child, ‘Sergeant Pepper’ is spoiled,” critic Richard Goldstein announced. And he had a long list of complaints. The album was just a pastiche that “reeks of horns and harps, harmonica quartets, assorted animal noises and a 91-piece orchestra.” He mocks the lyrics as “dismal and dull.” Above all, the album fails due to an “obsession with production, coupled with a surprising shoddiness in composition.” This flaw doesn’t just destroy the occasional song, but “permeates the entire album.”

Goldstein has many other criticisms—he gripes about dissonance, reverb, echo, electronic meandering, etc. He concludes by branding the entire record as an “undistinguished collection of work,” and even attacks the famous Sgt. Pepper’s cover—lauded today as one of the most creative album designs of all time—as “busy, hip, and cluttered.”

The bottom line, according to the newspaper of record: “There is nothing beautiful on ‘Sergeant Pepper.’ Nothing is real and there is nothing to get hung about.”

How could he get it so wrong?

by Ted Gioia, The Honest Broker |  Read more:
Images: John Lennon/The Beatles; Vevo

Monday, January 30, 2023

The Golden Age of Multiplayer

[ed. 39 million views. Wow. They stole my moves. Actually stumbled onto this while reading the following essay and was captivated by the animations. But also, it's beginning to dawn on me how many subcultures are out there - the scale - doing all kinds of unimaginable stuff (tens of thousands? hundreds?). For example, I posted something about cast iron seasoning below, and have personally watched the evolution of online guitar instruction over the last few years (and the personalities it's produced), which has also exploded. Maybe this is just obvious stuff, but it seems incredible that so many people have coalesced into so many different communities over so short a time, sharing their experience and expertise.]
***
The Golden Age of Multiplayer: How Online Gaming Conquered Video Games (The Ringer):

Globally, there were more than 1 billion gamers playing online in 2022, emphasis on playing online. Gaming has its eras. We’re living in the golden age of online multiplayer.

This golden age began a few years ago. In May 2016, Blizzard Entertainment released Overwatch, a team-based hero shooter that pits players against each other in game modes like capture the flag. A year later, Epic Games released Fortnite Battle Royale, a cartoonish survival game in which up to 100 players duke it out to be the last person standing. Gamers suffered no shortage of online multiplayer titles in the late 2000s and throughout the 2010s, but these two titles, Overwatch and Fortnite, brought the subculture to critical mass.

In October, Blizzard launched a long-anticipated sequel, Overwatch 2, hosting more than 25 million players across all platforms in the game’s first 10 days online. Epic countered with a new “chapter” of Fortnite—a new map, new mechanics, new rules—available to more than 250 million active players across all platforms. (...) In August 2013, Square Enix released Final Fantasy XIV, the odd MMORPG, or massively multiplayer online role-playing game, in a long-running and largely offline single-player series. A decade later, FFXIV hosts more than 27 million users total, with more than a million players on the servers on any given day. The biggest online multiplayer games often become subcultures unto themselves. (...)

In recent months, I’ve spoken with a variety of developers and players, and I’ve asked them to weigh in on a simple premise: Online multiplayer has become the dominant mode of video game culture. Most agreed; some wondered whether the multiplayer boom would eventually come at the expense of single-player game development at the major studios.

Gaming is more social than ever before, and gaming is extremely online. This shift was long- and hard-fought. It’s the story of exponential improvement in telecommunication infrastructure and matchmaking algorithms. But it’s also the story of a once-fractured subculture maturing, for better or worse, into an almost seamless monoculture.

by Justin Charity, The Ringer |  Read more:
Image: YouTube