Saturday, July 22, 2023

Stephen Curry: The Full Circle

There were too many bears roaming the woods behind the house and, with four daughters, far too many Barbies inside. Just before the school year ended in the early 1970s in Grottoes, Virginia, Wardell "Jack" Curry needed a solution, and fast. All he wanted was a way to keep his only son, Dell, occupied by something other than deadly animals or dolls during the long summer days ahead. As it turned out, though, with nothing more than an old utility pole, a fiberglass backboard and some fabricated steel brackets, Jack Curry ended up changing the sport of basketball and producing the ultimate point guard, his grandson Stephen Curry.

Jack's hoop was never much to look at. Its finest feature, by far, was the old reliable street lamp that hovered overhead and dutifully blinked on at dusk, bathing the key in warm yellow light. But this was Jack's plan all along: Only people who truly loved the game and understood the commitment it required would stick past dark on his country court.

The soft wings of the backboard had more give than a fence gate. The thick steel rim offered no absolution; only shots placed perfectly in the middle of the cylinder passed through. The institutional green metal breaker box just behind the hoop gave off a constant static hum that lured a shooter's focus away from the target. And the splintery wooden utility pole wasn't squared to a single landmark -- not the white ranch-style house, not the driveway, not the Blue Ridge Mountains to the south or the creek to the north. So every shot required instant, expert recalibration.

Years of toil in the sun and mud honed Dell's fluid, deadly jumper -- a shot that produced a state title, a scholarship to Virginia Tech and a 16-year NBA career, mostly in Charlotte, that ended in 2002. And when Dell and his wife, Sonya, started their own family, their first child, Wardell Stephen Curry II, got more than just his name from Grandpa Jack. Stephen inherited the hoop and the same deep abiding love for the game it evokes. During frequent childhood trips to Grottoes, a sleepy mix of horse farms and trailer parks an hour northwest of Charlottesville, Stephen and his younger brother Seth (who played at Duke) would barely wait for the car to stop rolling before darting around back to start shooting. Their grandma, Juanita, 79, whom everyone calls Duckie, knew that if she wanted a kiss hello she had to position herself between the car and the hoop. (Jack died when Stephen was 2.) This is where Curry's love of the long ball was born, his trying to be the first one in the family to swish it from 60 feet, blind, peeking around the corner from the top kitchen step. "I always felt like the love and the lessons of that hoop got passed down to me," Stephen says. "It's crazy to think about how everything kinda started right there at this house with this one old hoop."

This season in Golden State, the legend grows larger by the minute. Nearly every night since the All-Star Game -- for which Curry was the top vote-getter and where he sank 13 straight shots to win the 3-point contest -- he's been expanding the lore of Jack's hoop as well as the parameters by which we define point guard greatness. Yes, his stats are MVP-worthy: Through March 24, he ranked seventh in points (23.4 per game), sixth in assists (7.9) and third in steals (2.1). Yes, he has the fourth-highest 3-point percentage, 43.6 percent, in NBA history and has led the league in total 3s since 2012, if you're counting. And yes, in six years, he has catapulted Golden State from perennial nonfactor to title favorite. But Curry's evolution this season is about something more profound than shooting, stats or hardware. The point guard groomed by that historic hoop in Grottoes has become the game's future.

Curry is standing at the forefront of a new era of playmaker. For the first time since Magic Johnson took an evolutionary leap for the position, we're witnessing the ultimate embodiment of the point guard. Not a shooter like Steve Nash, a passer like John Stockton, a defender like Gary Payton or a floor general like Isiah Thomas. Someone with the ability to do it all, excelling in each category while elevating everyone around him and then topping it the very next night: basketball's new 6-foot-3, 190-pound unstoppable force. "He's lethal," says Curry's coach, Steve Kerr. "He's mesmerizing," says his teammate Klay Thompson. He's the "best shooter I've ever seen," says his president, Barack Obama.

Oftentimes he's all three at once. During a 106-98 win over the Clippers on March 8, Curry needed all of seven seconds to transform LA's defense from a group of elite athletes to a gaggle of bewildered senior citizens stammering around at the wrong connecting gate. Up by 10 with just under nine minutes left in the third, Curry dribbled past half court near the high left wing and used a pick to split defenders Matt Barnes and Chris Paul. When he re-emerged, 7-1 power forward Spencer Hawes and center DeAndre Jordan had walled off his escape to the basket. Curry had a split second left before the Clippers converged on him like a junkyard car crusher. He stopped on a dime, dribbled backward through his legs to his left hand, then returned the ball behind his back to his right. The move caused Paul and Jordan to lunge awkwardly into the vortex Curry no longer occupied. Curry then spun away from the basket (and what looked like an impending bear hug from an exasperated Hawes) before dribble-lunging back, 3 feet behind the arc, as if leaping a mud puddle in Jack Curry's gravel driveway.

In the blink of an eye -- well, less, actually -- Curry planted, coiled, elevated and snapped his wrist. Splash. "That could be the greatest move I've ever seen live," blurted stunned ESPN analyst Jeff Van Gundy, who coached against Michael Jordan many times. When his colleagues giggled at the suggestion, though, Van Gundy growled back without hesitation, "No, I'm being serious."

The sequence had everything: court presence, ballhandling, flawless shooting fundamentals, creativity and, above all, major, major cojones. It left Kerr looking like a young Macaulay Culkin on the bench. And across the country, it had Grandma Duckie cheering from her favorite burgundy chair in front of the TV. "Each time Stephen does his thing, we all picture big Jack up in heaven, nudging all the angels, gathering 'em up," says Steph's aunt and Dell's sister, Jackie Curry. "And he's yelling and pointing, 'Look, look down there at what I did! Y'all know I started this, right? Started all this with just that one little hoop, right there.'"

by David Fleming, ESPN |  Read more:
Image: Dylan Coulter

How AI is Bringing Film Stars Back From the Dead

Most actors dream of building a career that will outlive them. Not many manage it – show business can be a tough place to find success. Those that do, though, can achieve a kind of immortality on the silver screen that allows their names to live on in lights.

One such icon is the American film actor James Dean, who died in 1955 in a car accident after starring in just three films, all of which were highly acclaimed. Yet now, nearly seven decades after he died, Dean has been cast as the star in a new, upcoming movie called Back to Eden.

A digital clone of the actor – created using artificial intelligence technology similar to that used to generate deepfakes – will walk, talk and interact on screen with other actors in the film.

The technology is at the cutting edge of Hollywood computer-generated imagery (CGI). But it also lies at the root of some of the concerns being raised by actors and screenwriters who have walked out on strike in Hollywood for the first time in 43 years. They fear being replaced by AI algorithms – something they argue will sacrifice creativity for the sake of profit. Actor Susan Sarandon is among those who have spoken about her concerns, warning that AI could make her "say and do things I have no choice about". (Read about how the 2013 film The Congress predicted Hollywood's current AI crisis.) (...)

This is the second time Dean’s digital clone has been lined up for a film. In 2019, it was announced he would be resurrected in CGI for a film called Finding Jack, but it was later cancelled. Cloyd confirmed to BBC, however, that Dean will instead star in Back to Eden, a science fiction film in which "an out of this world visit to find truth leads to a journey across America with the legend James Dean".

The digital cloning of Dean also represents a significant shift in what is possible. Not only will his AI avatar be able to play a flat-screen role in Back to Eden and a series of subsequent films, but also to engage with audiences in interactive platforms including augmented reality, virtual reality and gaming. The technology goes far beyond passive digital reconstruction or deepfake technology that overlays one person's face over someone else's body. It raises the prospect of actors – or anyone else for that matter – achieving a kind of immortality that would have been otherwise impossible, with careers that go on long after their lives have ended.

But it also raises some uncomfortable questions. Who owns the rights to someone's face, voice and persona after they die? What control can they have over the direction of their career after death – could an actor who made their name starring in gritty dramas suddenly be made to appear in a goofball comedy or even pornography? What if they could be used for gratuitous brand promotions in adverts? (...)

Digital clones

Dean's image is one of hundreds represented by WRX and its sister licensing company CMG Worldwide – including Amelia Earhart, Bettie Page, Malcolm X and Rosa Parks.

When Dean died 68 years ago, he left behind a robust collection of his likeness in film, photographs and audio – what WRX's Cloyd calls "source material". Cloyd says that to achieve a photorealistic representation of Dean, countless images are scanned, tuned to high resolution and processed by a team of digital experts using advanced technologies. Add in audio, video and AI, and suddenly these materials become the building blocks of a digital clone that looks, sounds, moves and even responds to prompts like Dean. (...)

There are now even companies that allow users to upload a deceased loved one's digital data to create "deadbots" that chat with the living from beyond the grave. The more source material, the more accurate and intelligent the deadbot, meaning the executor of a modern-day celebrity's estate could potentially allow for a convincingly realistic clone of the deceased star to continue working in the film industry – and interacting somewhat autonomously – in perpetuity.

by S.J. Velasquez, BBC | Read more:
Image: Getty Images

Friday, July 21, 2023

$$Kudzu$$: The Kingdom of Private Equity

These Are the Plunderers: How Private Equity Runs—and Wrecks—America by Gretchen Morgenson and Joshua Rosner. Simon & Schuster, 381 pages. 2023.
Our Lives in Their Portfolios: Why Asset Managers Own the World by Brett Christophers. Verso, 310 pages. 2023.

A specter is haunting capitalism: the specter of financialization. Industrial capitalism—the capitalism of “dark Satanic mills”—was bad enough, but it had certain redeeming features: in a word (well, two words), people and place. Factory work may have been grueling and dangerous, but workers sometimes acquired genuine skills, and being under one roof made it easier for them to organize and strike. Factories were often tied, by custom and tradition as well as logistics, to one place, making it harder to simply pack up and move in the face of worker dissatisfaction or government regulation.

To put the contrast at its simplest and starkest: industrial capitalism made money by making things; financial capitalism makes money by fiddling with figures. Sometimes, at least, old-fashioned capitalism produced—along with pollution, workplace injuries, and grinding exploitation—useful things: food, clothing, housing, transportation, books, and other necessities of life. Financial capitalism merely siphons money upward, from the suckers to the sharps.

Marxism predicted that because of competition and technological development, it would eventually prove more and more difficult to make a profit through the relatively straightforward activity of industrial capitalism. It looked for a while—from the mid-1940s to the mid-1970s—as though capitalism had proven Marxism wrong. Under the benign guidance of the Bretton Woods Agreement, which used capital controls and fixed exchange rates to promote international economic stability and discourage rapid capital movements and currency speculation, the United States and Europe enjoyed an almost idyllic prosperity in those three decades. But then American companies began to feel the effects of European and Japanese competition. They didn’t like it, so they pressured the Nixon administration to scrap the accords. Wall Street, which the Bretton Woods rules had kept on a leash, sensed its opportunity and also lobbied hard—and successfully.

The result was a tsunami of speculation over the next few decades, enabled by wave after wave of financial deregulation. The latter was a joint product of fierce lobbying by financial institutions and the ascendancy of laissez faire ideology—also called “neoliberalism”—embraced by Ronald Reagan and Margaret Thatcher and subsequently by Bill Clinton and Tony Blair. The idiocy was bipartisan: Clinton and Obama were as clueless as their Republican counterparts.

Among these “reforms”—each of them a dagger aimed at the heart of a sane and fair economy—were: allowing commercial banks, which handle the public’s money, to take many of the same risks as investment banks, which handle investors’ money; lowering banks’ minimum reserve requirements, freeing them to use more of their funds for speculative purposes; allowing pension funds, insurance companies, and savings-and-loan associations (S&Ls) to make high-risk investments; facilitating corporate takeovers; approving new and risky financial instruments like credit default swaps, collateralized debt obligations, derivatives, and mortgage-based securities; and most important, removing all restrictions on the movement of speculative capital, while using the International Monetary Fund (IMF) to force unwilling countries to comply. Together these changes, as the noted economics journalist Robert Kuttner observed, forced governments “to run their economies less in service of steady growth, decent distribution, and full employment—and more to keep the trust of financial speculators, who tended to prize high interest rates, limited social outlays, low taxes on capital, and balanced budgets.”

Keynes, a principal architect of the Bretton Woods Agreement, warned: “Speculators may do no harm as bubbles on a steady stream of enterprise. But the position is serious when enterprise becomes the bubble on a whirlpool of speculation.” That was indeed the position roughly fifty years after Keynes’s death, and the predictable consequences followed. S&Ls were invited to make more adventurous investments in the 1980s. They did, and within a decade a third of them failed. The cost of the bailout was $160 billion. In the 1990s, a hedge fund named Long-Term Capital Management claimed to have discovered an algorithm that would reduce investment risk to nearly zero. For four years it was wildly successful, attracting $125 billion from investors. In 1998 its luck ran out. Judging that its failure would crash the stock market and bring down dozens of banks, the government organized an emergency rescue. The 2007–2008 crisis was an epic clusterfuck, involving nearly everyone in both the financial and political systems, though special blame should be attached to supreme con man Alan Greenspan, who persuaded everyone in government to repose unlimited confidence in the wisdom of the financial markets. Through it all, the Justice Department was asleep at the wheel. During the wild and woolly ten years before the 2008 crash, bank fraud referrals for criminal prosecution decreased by 95 percent.

The Washington Consensus, embodying the neoliberal dogma of market sovereignty, was forced on the rest of the world through the mechanism of “structural adjustments,” a set of conditions tacked onto all loans by the IMF. Latin American countries were encouraged to borrow heavily from U.S. banks after the 1973 oil shock. When interest rates increased later in the decade, those countries were badly squeezed; Washington and the IMF urged still more deregulation. The continent’s economies were devastated; the 1980s are known in Latin America as the “Lost Decade.” In 1997, in Thailand, Indonesia, the Philippines, and South Korea, the same causes—large and risky debts to U.S. banks and subsequent interest-rate fluctuations—produced similar results: economic contraction, redoubled exhortations to accommodate foreign investors, and warnings not to try to regulate capital flows. By the 2000s, Europe had caught the neoliberal contagion: in the wake of the 2008 crisis, the weaker, more heavily indebted economies—Greece, Italy, Portugal, and Spain—were forced to endure crushing austerity rather than default. Financialization was a global plague.

Slash, Burn, and Churn

In the 1960s, a brave new idea was born, which ushered in a brave new world. Traders figured out how to buy things without money. More precisely, they realized that you could borrow the money to buy the thing while using the thing itself as collateral. They could buy a company with borrowed money, using the company’s assets as collateral for the loan. They then transferred the debt to the company, which in effect had to pay for its own hijacking, and eventually sold it for a tidy profit. In the 1960s, when Jerome Kohlberg, a finance executive at Bear Stearns & Co., started to see the possibilities, it was called “bootstrap financing.” By the mid-1970s, when Kohlberg set up a company with Henry Kravis and George Roberts, it was known as the leveraged buyout (LBO).
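
[ed. For readers who want the arithmetic spelled out, below is a minimal, hypothetical sketch of how the leverage works, written in Python. Every number and name in it is invented for illustration; nothing is taken from the book under review or from any actual deal.]

```python
# A minimal, purely illustrative sketch of leveraged-buyout arithmetic.
# All figures are hypothetical, chosen only to show the mechanism described
# above -- not drawn from the book or from any real transaction.

def lbo_outcome(price, equity_share, interest_rate, exit_price, years):
    """Return (buyer's multiple on invested equity, interest borne by the target).

    The buyer puts up only `equity_share` of the purchase price; the rest is
    borrowed against the target company's own assets, and the target (not the
    buyer) services that debt out of its cash flow.
    """
    equity_in = price * equity_share
    debt = price - equity_in
    # Simple (non-compounding) interest, paid by the acquired company --
    # typically out of cost cuts, layoffs, and asset sales, as the text notes.
    interest_borne_by_target = debt * interest_rate * years
    # At exit, the sale proceeds repay the debt principal; the buyer keeps the rest.
    buyer_proceeds = exit_price - debt
    return buyer_proceeds / equity_in, interest_borne_by_target


if __name__ == "__main__":
    multiple, interest = lbo_outcome(
        price=100e6,        # buy a company for a hypothetical $100M...
        equity_share=0.10,  # ...putting up only $10M of the buyer's own money
        interest_rate=0.08,
        exit_price=130e6,   # sell it five years later for $130M
        years=5,
    )
    # Roughly a 4x return on the buyer's $10M, while the company itself
    # carried about $36M in interest payments along the way.
    print(f"Buyer's equity multiple: {multiple:.1f}x")
    print(f"Interest borne by the target: ${interest / 1e6:.0f}M")
```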

The leveraged buyout was the key to the magic kingdom of private equity. But LBOs leave casualties. To service its new debt, the acquired company often must cut costs drastically. This usually means firing workers and managers and overworking those who remain, selling off divisions, renegotiating contracts with suppliers, halting environmental mitigation, and eliminating philanthropy and community service. And even then, many companies failed—a significant proportion of companies acquired in LBOs went bankrupt.

Fortunately, it was discovered around this time that workers, suppliers, and communities don’t matter. In the wake of Milton Friedman’s famous and influential 1970 pronouncement that corporations have no other obligations than to maximize profits, several business school professors further honed neoliberalism into an operational formula: the fiduciary duty of every employee is always and only to increase the firm’s share price. This “shareholder value theory,” which exalted the interests of investors over all others—indeed recognized no other interests at all—afforded the intellectual and moral scaffolding of the private equity revolution. (...)

An academic study found that around 20 percent of large, private-equity-acquired companies were bankrupt within ten years, compared with 2 percent of all other companies. Another study looked at ten thousand companies acquired by private equity firms over a thirty-year period and found that employment declined between 13 and 16 percent. A 2019 study found that “over the previous decade almost 600,000 people lost their jobs as retailers collapsed after being bought by private equity.”

by George Scialabba, The Baffler |  Read more:
Image:© Brindha Kumar
[ed. Nice capsule history, and still going strong if not accelerating. Wait until AI gets weaponized to help.]

Thursday, July 20, 2023

Alpine lotus leaf flower



Rat Kings of New York

Some say New York City lost the War on Rats with the invention of the plastic bag. Others point to global warming and the fact that a few more warm breeding days are enough for untold thousands of extra litters. In reality, we never stood a chance: we were doomed the moment the first pair of Rattus norvegicus came ashore with the Hessians. For generations, city officials have tried to fight the furry tide. Mayor O’Dwyer convened an anti-rodent committee in 1948. Mayor Giuliani formed an Extermination Task Force—part of his controversial purge of feculence of every kind. All have failed. This time, vows Eric Adams, things will be different.

To get a sense of New York’s reinvigorated campaign, declared in November of 2022 with a tranche of anti-rodent legislation, the Rat Academy is a good enough place to start. There we were—young and old, landlords and tenants, rodenticide enthusiasts and professional rubberneckers—huddled in the basement of a Department of Health and Mental Hygiene building in East Harlem on the last day of May, getting to know our enemy. Their impressive birthrates, their stupendous gnawing strength, the trigger hairs on their heads that give them a feeling of safety under bricks and in cramped burrows. Kathleen Corradi, the city’s Rat Czar, was there to bless the proceedings. A good-looking blonde guy seated in front of me with a notepad turned out to be a reporter for a Danish newspaper. The presence of the Scandinavian media at this humble seminar is what’s known in the pest control business as a “sign”—that New York is at least winning the public relations side of this latest War on Rats.

The tactical session had the zeal of a new crusade, but the Rat Academy dates back to 2005, during billionaire Michael Bloomberg’s tenure as mayor. He called it the Rodent Academy, a three-day crash course for property owners and pest control pros. At some point in the last decade, the city added two-hour sessions for the uncredentialed and the curious. They give good practical advice, like how rats view a paver over their hole as an amenity and the importance of mixing wire with concrete when plugging cracks. And they’re good PR, a chance for a company like Infinity Shields to advertise its dubious miracle spray, and for city councilmembers to show they’re dedicated to taking action—the evening’s PowerPoint bore the logos of Manhattan Community Boards 9, 10, and 11, as well as councilmembers Shaun Abreu, Diana Ayala, and Kristin Richardson Jordan.

What you quickly learn is that the rat problem is really a trash problem. Alone among great cities, New York has its residents and businesses drag some forty-four million pounds of garbage to the sidewalk every evening, providing the rats with a situation that evokes Templeton’s binge at the fairgrounds in Charlotte’s Web.

One of the first salvos in Mayor Adams’s renewed campaign to take back the city was the announcement that set-out times for this black-bagged waste would be rolled back four hours, to 8 p.m. Of course, rats don’t mind dining on a European schedule. The mailers and bus ads promoting the new rules featured a morose grey rodent dragging a big, gaudy suitcase. “Send Rats Packing!” it announced. A T-shirt ($48) offered by the Department of Sanitation proclaims: “Rats don’t run this city. We Do.”

This and other rhetoric of the War on Rats comes uncomfortably close to anti-immigrant sloganeering and racist cartoons of the not-so-distant past, whipping up public opinion against “enemy populations” to justify corrective regimes—from the rodent abatement industry’s latest traps and poisons to advanced garbage receptacles. Nobody but the rats wants trash-strewn streets. But the patently absurd and endless war on these maligned creatures obscures the fact that any real gains will require systemic changes in urban infrastructure. The sanitation department’s first detailed study of the viability of containerization concluded last year that a complete overhaul of residential garbage collection—made possible by as-yet-undeveloped bins and trucks—could keep 89 percent of the city’s household refuse out of rodents’ reach. Promises and press releases abound, but the chance of such an overhaul actually coming to pass is slim. (...)

The theory of broken windows comes down to aesthetics—a taste for the appearance of order. In this way, the government acts like a neighborhood association with a monopoly on lethal force. And indeed, Giuliani’s tough-guy talk encouraged a level of racist brutality in the enforcement of a program that, on paper, is less a crime-busting blueprint than a way to strengthen the self-regulation of subjective community norms. Like Giuliani’s vendetta against Squeegee Men, the fight against the Murine Menace demands quick, visible busts, producing the feeling of safety, security, and cleanliness while conveniently downplaying the roots of the problem.

Adams has stationed cops on the subway platforms to make people feel “safe”—that is, if you’re the kind of person comforted by cops. He has gathered the unhoused into shelters, partly for their sake, partly for appearances. “The mayor has depicted the city’s rat situation much as he has portrayed its crime and homelessness issues,” writes the New York Times. “He says all illustrate a sense of disorder that Mr. Adams hopes to tamp down.” Indeed, “distasteful, worrisome encounters” certainly describes the experiences of New Yorkers who complain of rodents scampering over mountains of trash and between disused dining sheds. One spokesperson at the Rat Academy compared chasing rats from their nests to illegal evictions—presumably something both landlords and tenants could relate to. If you came home and your locks had been changed, would you give up? No. But if it happened every day, for two weeks . . . (...)

Who benefits from this forever war? The political sugar rush is already hitting. It’s an aesthetic contest, after all—the trick is visible change, a feeling that there is less trash and fewer rats. You may not necessarily notice a lack. You do notice a shiny new container with a tight-fitting lid where once there were mountains of seeping, rustling bags. But the problem with this style of perpetual, piecemeal warfare is that containerization must be consistent, covering residential and commercial, from house to house and block to block—or else the rats will simply adjust their habits. And here, the problem is not just our overwhelming failure to sensibly dispose of our garbage, it’s that we produce too much of it.

by Travis Diehl, The Baffler | Read more:
Image: © Marisa Nakasone

Can Barbie Have It All?

There was a good chance Barbie would topple under the weight of its expectations. In the more than six decades since she debuted, Barbie has been embraced and disparaged as a paragon of idealized femininity, as a prompt for imaginative play, and as a tool of the patriarchy, upholding oppressive beauty standards and stereotypes. The Barbie movie winks at the doll’s cultural centrality, as it opens with a shot-for-shot remake of the first scene in 2001: A Space Odyssey, which depicts the dawn of human enlightenment. As humans’ precursors discovered the power of tools, little girls encountered a doll that represented an adult woman.
 
In the months leading up to its release, Barbie was shrouded with a certain mystique—the feminine kind—with trailers obscuring more than they revealed. It became more than a summer blockbuster based on a famous toy, one-half of a cinematic meme, and the vehicle for an increasingly popular “Barbiecore” aesthetic. Director and co-writer Greta Gerwig has leaned into the idea of Barbie as a unifying cultural phenomenon, a girlhood experience so broadly shared that it could bring all kinds of women together. She told The New York Times that she wanted viewers to find in Barbie a sort of benediction, hoping to replicate the feeling she had attending Shabbat dinner with friends as a child. “I want them to get blessed,” Gerwig said, aware of her subject’s cultural baggage. Could Barbie capture the magic of childhood play, while also contending with the doll’s complicated role in American culture?

More or less. (This article contains mild spoilers, so stop reading here if you want to avoid learning more about the plot.) While hardly a sophisticated vehicle for a feminist treatise, Barbie is a bright, creative endeavor that neatly wraps up the struggles of womanhood in a clever package. The film is a pleasing mishmash of genres, with elements fantastical, political, mystical, and musical, but at its core it is a coming-of-age story, a bildungsroman in shades of pink.

Barbie would be worth seeing in theaters for the visuals alone. Barbie Land, home to the Barbies, Kens, and their various discontinued doll sidekicks, is a colorful pastiche of life in plastic, and it’s fantastic. Watching the opening sequence of Barbie Land, depicting Barbie’s morning routine, makes the viewer feel like a little girl playing in her own plastic dream house. But it’s the main character who gives the world texture: Margot Robbie is incandescent as Barbie, and not only because with her megawatt smile and flowing blonde hair, it is easy to believe that she is a doll come to life.

Robbie is “Stereotypical Barbie”—the Barbie you picture when you think of “Barbie.” Her charmed existence is upended one day when her tiptoed feet start flattening, she experiences a surge of irrepressible thoughts of death, and notices the development of cellulite (gasp!). At a dance party with bespoke choreography, Barbie interrupts the festivities by asking if her fellow dolls ever think about dying.

To combat her existential woes, Barbie must venture into our reality—the Real World—to find and reconnect with the actual person playing with her, whose anxiety is manifesting in Barbie. Accompanied by Ken (Ryan Gosling), and buoyed by the Indigo Girls’ classic “Closer to Fine,” Barbie must discover who she is in the Real World.

by Grace Segers, TNR | Read more:
Image: Warner Bros. Pictures

Tuesday, July 18, 2023

Book Review: The Educated Mind

“The promise of a new educational theory”, writes Kieran Egan, “has the magnetism of a newspaper headline like ‘Small Earthquake in Chile: Few Hurt’”.

But — could a new kind of school make the world rational?

I discovered the work of Kieran Egan in a dreary academic library. The book I happened to find — Getting It Wrong from the Beginning — was an evisceration of progressive schools. As I worked at one at the time, I got a kick out of this.

To be sure, broadsides against progressivist education aren’t exactly hard to come by. But Egan’s account went to the root, deeper than any critique I had found. Better yet, as I read more, I discovered he was against traditionalist education, too — and that he had constructed a new paradigm that incorporated the best of both.

This was important to me because I was a teacher, and had at that point in my life begun to despair that all the flashy exciting educational theories I was studying were just superficial, all show and no go. I was stuck in a cycle: I’d discover some new educational theory, devour a few books about it, and fall head over heels for it — only to eventually get around to spending some time at a school and talk to some teachers and realize holy crap this does exactly one thing well and everything else horribly.

If my life were a movie, these years would be the rom-com montage where the heroine goes on twenty terrible first dates.

I got to look at some approaches in even more detail by teaching or tutoring in schools. Each approach promised to elevate its students' ability to reason and live well in the world, but the adults I saw coming out of these programs seemed not terribly different from people who hadn't gone through them.

They seemed just about as likely to become climate deniers or climate doomers as the average normie, just as likely to become staunch anti-vaxxers or covid isolationists. They seemed just as likely to be sucked up by the latest moral panics. The strength of their convictions seemed untethered to the strength of the evidence, and they seemed blind to the potential disasters that their convictions, if enacted, might cause.

They seemed just about as rational as the average person of their community — which was to say, quite irrational!

Egan’s approach seemed different.

I began to systematically experiment with it — using it to teach science, math, history, world religions, philosophy, to students from elementary school to college. I was astounded by how easy it made it for me to communicate the most important ideas to kids of different ability levels. This, I realized, was what I had gotten into teaching for.

The man

Kieran Egan was born in Ireland, raised in England, and got his PhD in America (at Stanford and Cornell). He lived for the next five decades in British Columbia, where he taught at Simon Fraser University.

As a young man, he became a novice at a Franciscan monastery. By the time he died, he was an atheist, but — he would make clear — a Catholic atheist. His output was prodigious — fifteen books on education, one book on building a Zen garden, and, near the end of his life, two books of poetry, and a mystery novel!

He was whimsical and energetic, a Tigger of an educational philosopher. He was devoted to the dream that (as his obituary put it) “schooling could enrich the lives of children, enabling them to reach their full potential”.

He traveled the world, sharing his approach to education. He gained a devoted following of teachers and educational thinkers, and (from an outsider's vantage point, at least) seemed perpetually on the edge of breaking through to a larger audience, and getting his approach into general practice: he won the Grawemeyer Award — perhaps educational theory's highest prize. His books were blurbed by some of education's biggest names (Howard Gardner, Nel Noddings); Michael Pollan even blurbed his Zen gardening book.

He died last year. I think it’s a particularly good moment to take a clear look at his theory.

The book

This is a review of his 1997 book, The Educated Mind: How Cognitive Tools Shape Our Understanding. It’s his opus, the one book in which he most systematically laid out his paradigm. It’s not an especially easy read — Egan’s theory knits together evolutionary history, anthropology, cultural history, and cognitive psychology, and tells a new big history of humanity to make sense of how education has worked in the past, and how we might make it work now.

But at the root of his paradigm is a novel theory about why schools, as they are now, don’t work.

Part 1: Why don’t schools work?

A school is a hole we fill with money

I got a master’s degree in something like educational theory from a program whose name looked good on paper, and when I was there, one of the things that I could never quite make sense of was my professors’ and fellow students’ rock-solid assumption that schools are basically doing a good job.

Egan disagrees. He opens his book by laying that out:
“Education is one of the greatest consumers of public money in the Western world, and it employs a larger workforce than almost any other social agency.

“The goals of the education system – to enhance the competitiveness of nations and the self-fulfillment of citizens – are supposed to justify the immense investment of money and energy.

“School – that business of sitting at a desk among thirty or so others, being talked at, mostly boringly, and doing exercises, tests, and worksheets, mostly boring, for years and years and years – is the instrument designed to deliver these expensive benefits.

“Despite, or because of, the vast expenditures of money and energy, finding anyone inside or outside the education system who is content with its performance is difficult.” (...)

America isn't so much of an outlier; numbers across the rest of the world are comparable. The 4.7-trillion-dollar question is why.

The usual suspects

Ask around, and you’ll find people’s mouths overflowing with answers. “Lazy teachers!” cry some; “unaccountable administrators” grumble others. Others blame the idiot bureaucrats who write standards. Some teachers will tell you parents are the problem; others point to the students themselves.

Egan’s not having any of it. He thinks all these players are caught in a bigger, stickier web. Egan’s villain is an idea — but to understand it, we’ll have to zoom out and ask a simple question — what is it, exactly, that we’ve been asking schools to do? What’s the job we’ve been giving them? If we rifle through history, Egan suggests we’ll find three potential answers.

Job 1: Shape kids for society

Before there were schools, there was culture — and culture got individuals to further the goals of the society.

Egan dubs this job “socialization”. A school built on the socialization model will mold students to fit into the roles of society. It will shape their sense of what’s “normal” to fit their locale — and what’s normal in say, a capitalist society will be different from what’s normal in a communist society. It’ll supply students with useful knowledge and life skills. A teacher in a school built on socialization will, first and foremost, be a role model — someone who can exemplify the virtues of their society.
 
Job 2: Fill kids’ minds with truth

In 387 BC, Plato looked out at his fellow well-socialized, worldly wise citizens of Athens, and yelled “Sheeple!”

Fresh off the death of his mentor Socrates, Plato argued that, however wonderful the benefits of socialization, the adults that it produced were the slaves of convention. So long as people were shaped by socialization, they were doomed to repeat the follies of the past. There was no foundation on which to stand to change society. Plato opened his Academy (the Academy, with a capital ‘A’ — the one that all subsequent academies are named after) to fix that. In his school, people studied subjects like math and astronomy so as to open their minds to the truth.

Egan dubs this job “academics”. A school built on the academic model will help students reflect on reality. It will lift up a child’s sense of what’s good to match the Good, even when this separates them from their fellow citizens. And a teacher in an academic school will, first and foremost, be an expert — someone who can authoritatively say what the Truth is.

Job 3: Cultivate each kid’s uniqueness

In 1762, Jean-Jacques Rousseau looked out at his fellow academically-trained European intellectuals, and called them asses loaded with books.

The problem with the academies, Rousseau argued, wasn’t that they hadn’t educated their students, but that they had — and this education had ruined them. They were “crammed with knowledge, but empty of sense” because their schooling had made them strangers to themselves. Rousseau’s solution was to focus on each child individually, to not force our knowledge on them but to help them follow what they’re naturally interested in. The word “natural” is telling here — just as Newton had opened up the science of matter, so we should uncover the science of childhood. We should work hard to understand what a child’s nature is, and plan accordingly.

Egan dubs this job “development”. A school built on the developmental model will invite students into learning. And a teacher in this sort of school will be, first and foremost, a facilitator — someone who can create a supportive learning environment for the child to learn at their own pace.
 
Q: Can you recap those?

We might sum these up by asking what’s at the very center of schooling. For a socializer, the answer is “society”. For an academicist, the answer is “content”. And for a developmentalist, the answer is “the child”. (...)

One of the things I love about Egan is that he looks at educational ideas historically. (Most histories of education start around the turn of the 20th century; I remember being excited when I found one that began in the 1600s. Egan begins in prehistory.) And what we’re reminded of, when we see these historically, is that these jobs were meant to supplant each other. Put together, they sabotage each other.

What are we asking of schools?

Of the three possible jobs, which are we asking mainstream schools to perform? Egan answers: all three.

by Anonymous, Astral Codex Ten |  Read more:
Image: ACT/uncredited

Monday, July 17, 2023

When Crack Was King


“I was not able to find a direct conspiracy of white guys in a back room saying, let’s destroy the Black community. It was actually more insidious, which is that conspiracy happened hundreds of years ago, that Black people were positioned in American society from very early on to be the Americans closest to harm. When any disaster happens, whether it’s Hurricane Katrina or Covid or crack, we are hit first and we are hit worst.”

When Crack Was King: looking back on an epidemic that destroyed lives (The Guardian)
Image: Andrew Lichtenstein/Corbis/Getty Images
[ed. As concise a summation of black struggles as any.]

Sunday, July 16, 2023

The Greens' Dilemma: Building Tomorrow's Climate Infrastructure Today

Abstract

“We need to make it easier to build electricity transmission lines.” This plea came recently not from an electric utility executive but from Senator Sheldon Whitehouse, one of the Senate’s champions of progressive climate change policy. His concern is that the massive scale of new climate infrastructure urgently needed to meet our nation’s greenhouse gas emissions reduction policy goals will face a substantial obstacle in the form of existing federal, state, and local environmental laws. A small but growing chorus of politicians and commentators with impeccable green credentials agrees that reform of that system will be needed. But how? How can environmental law be reformed to facilitate building climate infrastructure faster without unduly sacrificing its core progressive goals of environmental conservation, distributional equity, and public participation?

That hard question defines what this Article describes as the Greens’ Dilemma, and there are no easy answers. We take the position in this Article that the unprecedented scale and urgency of required climate infrastructure requires reconsidering the trade-off set in the 1970s between environmental protection and infrastructure development. Green interests, however, largely remain resistant even to opening that discussion. As a result, with few exceptions reform proposals thus far have amounted to modest streamlining “tweaks” compared to what we argue will be needed to accelerate climate infrastructure sufficiently to achieve national climate policy goals. To move “beyond tweaking,” we explore how to assess the trade-off between speed to develop and build climate infrastructure, on the one hand, and ensuring adequate conservation, distributional equity, and public participation on the other. We outline how a new regime would leverage streamlining methods more comprehensively and, ultimately, more aggressively than has been proposed thus far, including through federal preemption, centralizing federal authority, establishing strict timelines, and providing more comprehensive and transparent information sources and access.

The Greens’ Dilemma is real. The trade-offs inherent between building climate infrastructure quickly enough to achieve national climate policy goals versus ensuring strong conservation, equity, and participation goals are difficult. The time for serious debate is now. This article lays the foundation for that emerging national conversation.

by J. B. Ruhl and James E. Salzman, SSRN |  Read more:
[ed. Download the paper at the link above, or view the pdf here. See also: Two Theories of What I’m Getting Wrong (NYT).]

We Are All Background Actors

In Hollywood, the cool kids have joined the picket line.

I mean no offense, as a writer, to the screenwriters who have been on strike against film and TV studios for over two months. But writers know the score. We’re the words, not the faces. The cleverest picket sign joke is no match for the attention-focusing power of Margot Robbie or Matt Damon.

SAG-AFTRA, the union representing TV and film actors, joined the writers in a walkout over how Hollywood divvies up the cash in the streaming era and how humans can thrive in the artificial-intelligence era. With that star power comes an easy cheap shot: Why should anybody care about a bunch of privileged elites whining about a dream job?

But for all the focus that a few boldface names will get in this strike, I invite you to consider a term that has come up a lot in the current negotiations: “Background actors.”

You probably don’t think much about background actors. You’re not meant to, hence the name. They’re the nonspeaking figures who populate the screen’s margins, making Gotham City or King’s Landing or the beaches of Normandy feel real, full and lived-in.

And you might have more in common with them than you think.

The lower-paid actors who make up the vast bulk of the profession are facing simple dollars-and-cents threats to their livelihoods. They’re trying to maintain their income amid the vanishing of residual payments, as streaming has shortened TV seasons and decimated the syndication model. They’re seeking guardrails against A.I. encroaching on their jobs.

There’s also a particular, chilling question on the table: Who owns a performer’s face? Background actors are seeking protections and better compensation in the practice of scanning their images for digital reuse.

In a news conference about the strike, a union negotiator said that the studios were seeking the rights to scan and use an actor’s image “for the rest of eternity” in exchange for one day’s pay. The studios argue that they are offering “groundbreaking” protections against the misuse of actors’ images, and counter that their proposal would only allow a company to use the “digital replica” on the specific project a background actor was hired for. (...)

You could, I guess, make the argument that if someone is insignificant enough to be replaced by software, then they’re in the wrong business. But background work and small roles are precisely the routes to someday promoting your blockbuster on the red carpet. And many talented artists build entire careers around a series of small jobs. (Pamela Adlon’s series “Better Things” is a great portrait of the life of ordinary working actors.) (...)

Maybe it’s unfair that exploitation gets more attention when it involves a union that Meryl Streep belongs to. (If the looming UPS strike materializes, it might grab the spotlight for blue-collar labor.) And there’s certainly a legitimate critique of white-collar workers who were blasé about automation until A.I. threatened their own jobs.

But work is work, and some dynamics are universal. As the entertainment reporter and critic Maureen Ryan writes in “Burn It Down,” her investigation of workplace abuses throughout Hollywood, “It is not the inclination nor the habit of the most important entities in the commercial entertainment industry to value the people who make their products.”

If you don’t believe Ryan, listen to the anonymous studio executive, speaking of the writers’ strike, who told the trade publication Deadline, “The endgame is to allow things to drag out until union members start losing their apartments and losing their houses.”

by James Poniewozik, NY Times | Read more:
Image: Jenna Schoenefeld for The New York Times
[ed. See also: On ‘Better Things,’ a Small Story Goes Out With a Big Bang (NYT).]

Saturday, July 15, 2023

Lana Del Rey

[ed. Hadn't heard this one before (not much of a fan) but it's pretty good, and the only song lauded in an otherwise pretty brutal essay on the emptiness of music today ("audio furniture"). The principal personification of this is the musician and producer Jack Antonoff: Dream of Antonoffication | Pop Music's Blandest Prophet (Drift):]

"Then there is “Venice Bitch.” It is the one piece of music Antonoff has had a hand in that is downright numinous, with that hypnotic guitar figure pulsing away as the song dissolves into a six-minute vibe collage. The production is suffused with that signature, unshakeable Jack emptiness. But “Venice Bitch” works in large part because Lana embraces the emptiness and uses it to deliberate effect, rather than trying to fill it up with overheated emoting."

Can Cognitive Behavioral Therapy Change Our Minds?

The theory behind C.B.T. rests on an unlikely idea—that we can be rational after all.

Burns didn’t invent cognitive behavioral therapy, but he is connected to its founding lineage. He studied with the psychologist Aaron Beck, who created an approach known as cognitive therapy, or C.T., in the nineteen-sixties, and is often described as the “father” of C.B.T. Beck’s ideas dovetailed with the work of Albert Ellis, a psychologist who had invented rational-emotive behavior therapy, or R.E.B.T., the decade before. There are substantive differences between C.B.T. and R.E.B.T., but also essential commonalities. They all reflect the so-called cognitive revolution—a shift, which began in psychology during the mid-twentieth century, toward a more information-based view of the mind. Freudian thinkers had pictured our minds as hydraulic machines, with pressures rising against resistances and psychic forces that might get bottled up. The cognitive model, by contrast, imagined something more like a computer. Bad information, if it were stored in a crucial place, could cause system-wide problems; irrational or inaccurate thought patterns could shape feelings or behaviors in counterproductive ways, and vice versa. Coders get at a similar idea when they say, “Garbage in, garbage out.”

Ellis, who earned a Ph.D. in psychology in 1947, trained as a psychoanalyst but grew frustrated with the tradition’s approach to therapy, which he felt emphasized dwelling on one’s feelings and ultimately subordinated patients to their pasts. The “rational therapy” for which he became known in the sixties proposed that individuals had the power to reshape themselves willfully and deliberately, not by reinterpreting their life stories but by directly analyzing and modifying their own beliefs and behaviors. “We teach people that they upset themselves then and that they’re still doing it now,” Ellis said, in a 2001 interview. “We can’t change the past, so we change how people are thinking, feeling, and behaving today.” Ellis showed his patients how to avoid “catastrophic thinking,” and guided them toward “unconditional acceptance” of themselves—a rational position in which you acknowledge your weaknesses as well as your strengths.

Beck, like Ellis, trained in a Freudian tradition. “He was a psychoanalyst who had people lie on the couch and free-associate,” his daughter, the psychologist Judith Beck, who heads the Beck Institute for Cognitive Behavior Therapy and teaches at the University of Pennsylvania, told me. He switched from searching for repressed memories to identifying automatic thoughts after a client seemed anxious during her session and told him, “I’m afraid that I’m boring you.” Beck found that many of his patients had similar negative mental touchstones, and based cognitive therapy upon a model of the mind in which negative “core beliefs”—of being helpless, inferior, unlovable, or worthless—lead to a cascade of coping strategies and maladaptive behaviors. Someone “might have the underlying belief ‘If I try to do something difficult, I’ll just fail,’ ” Judith Beck told me. “And so we might see coping strategies flow from that—for example, avoiding challenges at work.” In C.T., patient and therapist joined in a kind of “collaborative empiricism,” examining thoughts together and investigating whether they were accurate and helpful. C.T. combined with elements from behavioral approaches, such as face-your-fear “exposure” therapy, to create C.B.T.

In the second half of the twentieth century, rational and cognitive therapies grew in prominence, their lingo sliding from psychology into culture in roughly the same way that Freudian language had. Ellen Kanner, a clinical psychologist who trained in the nineteen-seventies and has been in practice in New York since 1982, watched the rise of C.B.T. in her clinic. “I’ve seen psychology evolve from very Freudian, when I first did my training,” she told me. Cognitive behavioral therapy had an advantage, she recalled, because therapists and researchers liked its organized approach: exercises, worksheets, and even the flow of a therapy session were standardized. “You could more easily codify it and put it in a study with a control, and see whether it was effective,” she recalled. Patients, meanwhile, found the approach appealing because it was empowering. C.B.T. is openly pitched as a kind of self-help—“We tell people in the first session, ‘My goal is to make you your own therapist,’ ” Judith Beck told me—and patients were encouraged to practice its techniques between sessions, and to continue using them after therapy had ended. Compared with older approaches, C.B.T. was also unthreatening. “When they’re using it, therapists aren’t asking you about your sexuality or whether someone molested you,” Kanner said. “C.B.T. is more acceptable to more people. It’s more rational and less intrusive. The therapist doesn’t seem as powerful.”

The pivot that C.B.T. represented—from the unconscious to the conscious, and from idiosyncrasy to standardization—has enabled its broad adoption. In 2015, a study by Paulo Knapp, Christian Kieling, and Aaron Beck found that C.B.T. was the most widely used form of psychotherapy among therapists surveyed; in a paper published in 2018, titled “Why Cognitive Behavioral Therapy Is the Current Gold Standard of Psychotherapy,” the psychologist Daniel David and his collaborators concluded that it was the most studied psychotherapy technique. (“No other form of psychotherapy has been shown to be systematically superior to CBT,” they write. “If there are systematic differences between psychotherapies, they typically favor CBT.”) Meanwhile, the therapy keeps extending its reach. “I just got back from Japan, where they’re teaching C.B.T. in schools, and have used C.B.T. methods for people who were at risk for suicide,” Beck told me. In the U.S., many schools integrate aspects of C.B.T. into their curricula; the U.K.’s National Health Service has commissioned at least a hundred thousand C.B.T. sessions. Increasingly, C.B.T. is also delivered through apps or chat interfaces, by human therapists or bots; studies have shown that online C.B.T. can be as effective as therapy conducted in person. Even though C.B.T.’s central tenets are nearly half a century old, people who discover it today may still find that it feels au courant. It’s a serious therapeutic tool, but it’s also a little life-hacky; it’s well suited for an era in which we seek to optimize ourselves, clear our minds, and live more rationally.

I asked Judson Brewer, a psychiatrist and neuroscientist and the director of research and innovation at Brown University's Mindfulness Center, for his views on C.B.T., and he referred me to a comedy sketch, made in the early two-thousands, starring Bob Newhart as a therapist and Mo Collins as his patient. "I have this fear of being buried alive in a box," Collins says. Newhart asks, rationally, "Has anyone ever tried to bury you alive in a box?" "No, no," Collins replies. "But, truly, thinking about it does make my life horrible."

“Well, I’m going to say two words to you right now,” Newhart explains. “I want you to take them out of the office with you and incorporate them into your life. . . . You ready?”

“Yes,” Collins says.

“Stop it!” Newhart screams. (...)

C.B.T. does contain a theory of change—and it’s not entirely convincing. If people could change just because rational thinking told them to, we wouldn’t live in such a crazy world. Yet the rationality of C.B.T. is aspirational. We can wish that we were the kinds of people who could solve our biggest problems simply by seeing them more clearly. Sometimes, by acting as though we are those people, we can become them.

by Joshua Rothman, New Yorker |  Read more:
Image: Evan Cohen
[ed. Placeholder.]

Want to Live to 150? The World Needs More Humans

Last year, the global population crossed 8 billion. In recent decades, it has also enjoyed the highest levels of nourishment, housing conditions, comfort in travel, and education in all of history, as Steven Pinker documents in “Enlightenment Now.” The 20th century’s scientific breakthroughs gave us vaccines, chemotherapies, and antibiotics. In just one century, average life expectancy rose from 31 to 68.

But this extraordinary leap in life expectancy happened without a corresponding increase in health span. Because aging itself hasn’t been considered a medical disorder, people today generally spend half their lifetime in declining health.

Some 90 percent of all deaths in developed countries are due to age-related decline, including cancers, heart disease, dementias, and severe infection. By 2029, the United States will spend an unprecedented half of its annual federal budget — $3 trillion, or thrice its military outlay — on adults 65 or older, on measures like Alzheimer’s care and retirement pensions. By 2050, Japan will lose some 20 million people, while Brazil’s senior population is set to triple. About 50 million Americans — predominantly women — are now unpaid caretakers of older adults, at a $500-billion-a-year opportunity cost.

Could new technologies solve these problems by extending the healthy years of long-lived populations? This is the question the emerging field of aging research has set out to answer — and labs in some of Boston’s elite institutions are among those with data suggesting that aging can be not just slowed down, but also reversed.

If we solve aging, we may well solve our emerging underpopulation crisis. (...)

In his 2019 book “Lifespan,” Harvard geneticist David Sinclair wrote that “aging may be more easily treatable than cancer.” After several years working to understand and control the biological mechanisms of aging, he tells me, his lab is showing that aging may be “like scratches on a CD that can be polished off.” His team’s latest findings were published in the journal Cell on Jan. 12. Their paper suggested mammalian aging is in part the result of a reversible loss of epigenetic information: our cells’ ability to turn genes on or off at the right time.

In “Lifespan,” Sinclair points out that if we cloned a 65-year-old person, the clone wouldn’t be born old — which says a lot about how our “youthful digital information” remains intact, even if this 65-year-old’s genetic expression and cell regulation mechanisms are presently functioning less than optimally. There seems to be, as Sinclair notes, a backup information copy within each of us, which remains retrievable.

There is no guarantee that cellular reprogramming will work in humans — but after decades of (at times, highly criticized) work, Sinclair’s lab published what is set to become a widely influential study on the role of epigenetic change in aging. Futurist Peter Diamandis, trained as a physician and now executive chairman of the XPrize Foundation (a science and technology nonprofit for which I consult), tells me aging must be “either a software or a hardware problem — solvable by a species capable of developing vaccines for a novel virus within months.”

Indeed, the human life expectancy of 80 years and the current health span of roughly 40 (when most chronic illnesses begin to appear) are not just economically alarming — they don’t appear to be a biological imperative.

Humans are one of only five species in the animal kingdom that undergo menopause. Lobsters are often more fertile at 100 than at 30. Naked mole rats’ chances of dying do not increase with age. Bowhead whales live to 200 and are incredibly resistant to the diseases of aging.

Other examples abound — and the genetic therapies that could translate these features into human bodies are becoming increasingly precise. Promising research in human genes, cells, and blood drives home that aging is a malleable process that can be decoupled from the passing of time.

It is well established that aging can be sped up, slowed down, and reversed. This is done every day with diet, mental health practices, and exercise. What’s novel about this century’s science is the promise to engineer therapies that might control the aging process more effectively. And we have, I will suggest, an ethical imperative to do both — even though tackling aging itself as a medical problem remains a contrarian idea. (...)

Yet despite the enormous promise of aging therapies, the field hasn’t attracted a correspondingly large amount of government funding. Of the $4 billion devoted to the National Institute on Aging each year, only $300 million goes to fundamental aging research.

Why?

by Raiany Romanni, Boston Globe |  Read more:
Image: uncredited

Brad Pitt and Angelina Jolie’s War of the Rosé

The call came in to the concierge at the Hôtel du Cap-Eden-Roc, the fabled luxury hotel on the French Riviera. It was spring 2007, and an assistant to Angelina Jolie said that Jolie and her partner, Brad Pitt, were seeking a sizable property in the South of France to rent.

So began a glorious holiday, away from the spotlight, for Jolie, Pitt, and their four children in a sprawling château with a staff of 12 in the leafy glades of southwestern France. It was so idyllic that they decided to find an even bigger and more secluded property, not to rent but to buy.

Now, in June 2007, Brad Pitt sat in a twin-engine helicopter, his blue eyes scanning the verdant Provençal landscape. He and Jolie were “helicopter-hopping” with Jonathan Gray, the real estate broker recommended by the hotel. “There were six of us: two pilots, Brad and Angelina, their daughter Shiloh—who was one at the time—their American Realtor, and myself,” says Gray, who had arranged a top secret three-day tour of the region. They spent their days flying from one address to the next, searching for what Pitt would later describe as “a European base for our family…where our kids could run free and not be subjected to the celebrity of Hollywood.”

They touched down at more than a dozen properties, from cozy châteaux to vast estates, from the Riviera to Provence. Finally, on the morning of the third day, they flew over a magnificent 1,000-acre château and vineyard nestled in its own valley. Its name reflected its grandeur: Miraval, French for “miracle.”

They were 50 miles east of Aix-en-Provence and just outside of the tiny town of Correns, population 830. “Brad and Angelina took one look and immediately said, ‘Yeah, let’s go see it!’ ” says Gray.

Wine has been made at Miraval for centuries. The privileged domain of an Italian noble family for five generations, it was sold in 1972 to the French jazz musician Jacques Loussier. He produced red, white, and rosé wine on the estate and converted an ancient water tower into a recording studio, which attracted the likes of Sting, Sade, and Pink Floyd, who recorded their 1979 hit album The Wall there.

In 1992, Miraval was purchased by a US Naval Academy engineering graduate named Tom Bove and his family for approximately $5 million. Bove had made a fortune in water treatment before catching what he calls “the wine bug.” He ramped up the estate’s winemaking operation, turning out a small selection of vintages, including Pink Floyd rosé, which he launched in 2006.

Now Miraval was on the market. Bove, whose wife had died, was ready to move on. Suddenly, a helicopter descended and out stepped two of the world’s biggest superstars.

Pitt extended his hand for a shake. Jolie had Shiloh in her arms and “asked for a place to change the baby,” says Bove. Then Jolie, Pitt, and their brokers scouted the vast estate, a wonderland of olive and oak trees, lavender fields, a lake, vineyards on stone terraces, and ancient stone buildings, including a 35-room manor house.

Bove sensed that Jolie was allowing Pitt to take the lead. “She let him talk,” says Bove. “From my side she was very gentle. You read all this stuff now, but they were a very nice couple, very sweet and obviously in love with each other.”

And obviously in love with Miraval. So much so that, as the morning stretched into afternoon, Bove invited everyone to stay for a meal. “Brad and I moved the tables around in the garden, and we had lunch,” says Bove. “Drank the wines of Miraval, which were very good. And then we toured the rest of the property.”

Before the party climbed back on board the helicopter, Pitt told Bove, “We’ll be back.” Bove didn’t doubt it. “It seemed like they wanted to buy it right then.”

Bove says Pitt then added, “This is the first place we visited where Angie is smiling.”

Not so long ago, there was no couple in the world bigger than Brad Pitt and Angelina Jolie. From the moment they met, on the set of the 2005 action romance Mr. & Mrs. Smith, their attraction was magnetic, fiery, passionate, incendiary, almost insane. Their divorce would be even fiercer in its intensity.

At the center of it all is Miraval. Conceived as a family retreat, it became a high-end commercial enterprise, producing honey, olive oil, a skin care line, and music from Miraval Studios. It all began with a signature wine, Miraval Côtes de Provence rosé, whose revenues reached $50 million in 2021.

“Even now impossible to write this without crying,” Jolie would say of Miraval in an email she sent Pitt in 2021, four years after she filed for divorce. “Above all, it is the place we brought the twins home to, and where we were married over a plaque in my mother’s memory. A place…where I thought I would grow old…. But it is also the place that marks the beginning of the end of our family.”

The story of Miraval parallels the story of Jolie and Pitt. As their legal battle stretches ever onward, their dueling lawsuits offer competing versions of a dark and disturbing saga. In Jolie’s telling, it’s a tale of alcohol abuse and financial control. For Pitt, it’s one of vengeance and retribution. Neither can deny its sheer dramatic force. There are echoes, even, of the movie that brought them together, Mr. & Mrs. Smith, in which they played a married couple who, per IMDb, are “surprised to learn that they are both assassins hired by competing agencies to kill each other.”

Soon after Pitt and Jolie’s visit to Miraval, Bove heard from their brokers, then from lawyers and more lawyers. The overtures became even more urgent when Jolie learned she was pregnant with twins. “They wanted to be in ahead of the birth,” says Bove. Through their respective holding companies, the couple purchased the property on May 8, 2008, for 25 million euros.

Eight days before the deal was signed, Jolie’s business manager suggested adding a doomsday clause stipulating that, should the couple ever split, each would have the right to buy the other’s share of Miraval. Pitt rejected the idea, according to her cross-complaint, telling his business manager it “wasn’t necessary for two reasonable people.” However, he would later insist that, as a by-product of their supposedly everlasting love, he and Jolie made a pact agreeing never to “sell their respective interests in Miraval without the other’s consent.” It was a promise Jolie would deny, claiming “no such agreement ever existed.”

Bove agreed to stay on and continue running Miraval’s wine operation. He retained 220 acres to operate on his own. “I think at that moment, Brad wasn’t really interested in making wine,” Bove says. “He liked the idea that he’d have a vineyard, but he really left the wine thing to me.”

From his office, Bove watched an armada of vehicles deliver the family’s possessions. “Vans and vans arriving with furniture, incredibly packaged by the finest movers,” all of it orchestrated by Pitt, says Bove. “He came more often because he wanted to get things started with the renovation.”

Along with the possessions came the staff: “nannies, from Vietnam, the Congo, and the U.S.,” according to one published source, along with two personal assistants (not always in residence); “a cook; a maid; two cleaners; a plongeur, or busboy; [and] four close-protection bodyguards.” Extensive renovations began immediately. “More Californian as opposed to Provençal,” says Bove. The ancient stone chapel became a parking area for the four-wheel all-terrain vehicles the family used to traverse the massive property.

And finally came the movie stars and their growing brood of children. They joined an invasion of celebrity home buyers in Provence that would come to include, just within a 100-mile radius of Miraval, Johnny Depp and his then wife Vanessa Paradis, George Lucas, George Clooney, John Malkovich, and many more.

For a time, leading the pack were the most famous of them all—the couple known worldwide as Brangelina. (...)

They were “our last great Hollywood couple,” the New York Post eulogized after the breakup. “Their lineage was Bogart and Bacall, Hepburn and Tracy, Liz and Dick—golden megastars all, with outsize love stories.”

They began as mere mortals. The daughter of Midnight Cowboy star Jon Voight and actor Marcheline Bertrand, Jolie was a wild child who collected knives and aspired to be a funeral home director. At age five, she reportedly said she intended to become “an actress, a big actress.” She won raves for her performance in the 1998 HBO movie Gia, whose tagline—“Everybody Saw the Beauty, No One Saw the Pain”—seemed apt in light of her fraught relationship with her father. The following year, she broke big in Girl, Interrupted, riding a wave of acclaim to the Oscar podium, where, all of 24 years old, she accepted the award for best supporting actress.

By then, William Bradley Pitt was already a star. He’d arrived in Los Angeles from Springfield, Missouri, in a battered Datsun with $325 in his pocket. The astonishingly handsome son of a trucking company dad and a stay-at-home mom, he had dropped out of college two weeks short of graduating with a journalism degree. He told his folks that he was moving to LA not to act but to “go into graphic design,” so they wouldn’t worry.

Unlike Jolie, he had no movie-star dad, no contacts, nothing but a dream. He stood in the streets in a yellow El Pollo Loco chicken costume, waving a sign to beckon diners inside, and drove exotic dancers to their Strip-a-Gram appointments. Then Billy Baldwin dropped out of the 1991 feminist road picture Thelma & Louise to do the firefighting movie Backdraft. Tapped as a last-minute replacement, Pitt set his scenes ablaze as a hunky hitchhiker who hooks up with Geena Davis before taking off with her cash.

In May 2000, Jolie, then 24, married 44-year-old Billy Bob Thornton in Las Vegas. That July, Pitt married Friends star Jennifer Aniston in a $1 million wedding in Malibu.

By the time Jolie and Pitt met four years later on the set of Mr. & Mrs. Smith, Jolie and Thornton had divorced, though not before she adopted an orphaned baby boy from Cambodia, whom she named Maddox. Pitt and Aniston were still very much a couple—until he collided with Jolie.

by Mark Seal, Vanity Fair |  Read more:
Image: Anthony Harvey. Chateau: Michel Gangne/AFP. Both Getty Images

Friday, July 14, 2023

How Russia Went from Ally to Adversary

In early December of 1989, a few weeks after the Berlin Wall fell, Mikhail Gorbachev attended his first summit with President George H. W. Bush. They met off the coast of Malta, aboard the Soviet cruise ship Maxim Gorky. Gorbachev was very much looking forward to the summit, as he looked forward to all his summits; things at home were spiralling out of control, but his international standing was undimmed. He was in the process of ending the decades-long Cold War that had threatened the world with nuclear holocaust. When he appeared in foreign capitals, crowds went wild.

Bush was less eager. His predecessor, Ronald Reagan, had blown a huge hole in the budget by cutting taxes and increasing defense spending; then he had somewhat rashly decided to go along with Gorbachev’s project to rearrange the world system. Bush’s national-security team, which included the realist defense intellectual Brent Scowcroft, had taken a pause to review the nation’s Soviet policy. The big debate within the U.S. government was whether Gorbachev was in earnest; once it was concluded that he was, the debate was about whether he’d survive.

On the summit’s first day, Gorbachev lamented the sad state of his economy and praised Bush’s restraint and thoughtfulness with regard to the revolutionary events in the Eastern Bloc—he did not, as Bush himself put it, jump “up and down on the Berlin Wall.” Bush responded by praising Gorbachev’s boldness and stressing that he had economic problems of his own. Then Gorbachev unveiled what he considered a great surprise. It was a heartfelt statement about his hope for new relations between the two superpowers. “I want to say to you and the United States that the Soviet Union will under no circumstances start a war,” Gorbachev said. “The Soviet Union is no longer prepared to regard the United States as an adversary.”

As the historian Vladislav Zubok explains in his recent book “Collapse: The Fall of the Soviet Union” (Yale), “This was a fundamental statement, a foundation for all future negotiations.” But, as two members of Gorbachev’s team who were present for the conversations noted, Bush did not react. Perhaps it was because he was recovering from seasickness. Perhaps it was because he was not one for grand statements and elevated rhetoric. Or perhaps it was because to him, as a practical matter, the declaration of peace and partnership was meaningless. As he put it, a couple of months later, to the German Chancellor, Helmut Kohl, “We prevailed and they didn’t.” Gorbachev thought he was discussing the creation of a new world, in which the Soviet Union and the United States worked together, two old foes reconciled. Bush thought he was merely negotiating the terms for the Soviets’ surrender. (...)

In February, 1990, two months after the summit with Bush on the Maxim Gorky, Gorbachev hosted James Baker, the U.S. Secretary of State, in Moscow. This was one of Gorbachev’s last opportunities to get something from the West before Germany reunified. But, as Mary Elise Sarotte relates in “Not One Inch: America, Russia, and the Making of Post-Cold War Stalemate” (Yale), her recent book on the complex history of NATO expansion, he was not up to the task. Baker posed to Gorbachev a hypothetical question. “Would you prefer to see a unified Germany outside of NATO, independent and with no U.S. forces,” Baker asked, “or would you prefer a unified Germany to be tied to NATO, with assurances that NATO’s jurisdiction would not shift one inch eastward from its present position?” This last part would launch decades of debate. Did it constitute a promise—later, obviously, broken? Or was it just idle talk? In the event, Gorbachev answered lamely that of course NATO could not expand. Baker’s offer, if that’s what it was, would not be repeated. In fact, as soon as people in the White House got wind of the conversation, they had a fit. Two weeks later, at Camp David, Bush told Kohl what he thought of Soviet demands around German reunification. “The Soviets are not in a position to dictate Germany’s relationship with NATO,” he said. “To hell with that.”

The U.S. pressed its advantage; Gorbachev, overwhelmed by mounting problems at home, settled for a substantial financial inducement from Kohl and some vague security assurances. Soon, the Soviet Union was no more, and the overriding priority for U.S. policymakers became nuclear nonproliferation. Ukraine, newly independent, had suddenly become the world’s No. 3 nuclear power, and Western countries set about persuading it to give up its arsenal. Meanwhile, events in the former Eastern Bloc were moving rapidly. (...)

After the Soviet collapse, Western advisers, investment bankers, democracy promoters, and just plain con men flooded the region. The advice on offer was, in retrospect, contradictory. On the one hand, Western officials urged the former Communist states to build democracy; on the other, they made many kinds of aid contingent on the implementation of free-market reforms, known at the time as “shock therapy.” But the reason the reforms had to be administered brutally and all at once—why they had to be a shock—was that they were by their nature unpopular. They involved putting people out of work, devaluing their savings, and selling key industries to foreigners. The political systems that emerged in Eastern Europe bore the scars of this initial contradiction.

In almost every former Communist state, the story of reform played out in the same way: collapse, shock therapy, the emergence of criminal entrepreneurs, violence, widespread social disruption, and then, sometimes, a kind of rebuilding. Many of the countries are now doing comparatively well. Poland has a per-capita G.D.P. approaching Portugal’s; the Czech Republic exports its Škoda sedans all over the world; tiny Estonia is a world leader in e-governance. But the gains were distributed unequally, and serious political damage was done.

In no country did the reforms play out more dramatically, and more consequentially, than in Russia. Boris Yeltsin’s first post-Soviet Cabinet was led by a young radical economist named Yegor Gaidar. In a matter of months, he transformed the enormous Russian economy, liberalizing prices, ending tariffs on foreign goods, and launching a voucher program aimed at distributing the ownership of state enterprises among the citizenry. The result was the pauperization of much of the population and the privatization of the country’s industrial base by a small group of well-connected men, soon to be known as the oligarchs. When the parliament, still called the Supreme Soviet and structured according to the old Soviet constitution, tried to put a brake on the reforms, Yeltsin ordered it disbanded. When it refused to go, Yeltsin ordered that it be shelled. Many of the features that we associate with Putinism—immense inequality, a lack of legal protections for ordinary citizens, and super-Presidential powers—were put in place in the early nineteen-nineties, in the era of “reform.”

When it came to those reforms, did we give the Russians bad advice, or was it good advice that they implemented badly? And, if it was bad advice, did we dole it out maliciously, to destroy their country, or because we didn’t know what we were doing? Many Russians still believe that Western advice was calculated to harm them, but history points at least partly in the other direction: hollowing out the government, privatizing public services, and letting the free market run rampant were policies that we also implemented in our own country. The German historian Philipp Ther argues that the post-Soviet reform process would have looked very different if it had taken place even a decade earlier, before the so-called Washington Consensus about the benevolent power of markets had congealed in the minds of the world’s leading economists. One could add that it would also have been different two decades later, after the 2008 financial crisis had caused people to question again the idea that capitalism could be trusted to run itself.

Back during the last months of Gorbachev’s tenure, there was briefly talk of another Marshall Plan for the defeated superpower. A joint Soviet-American group led by the economist Grigory Yavlinsky and the Harvard political scientist Graham Allison proposed something they called a Grand Bargain, which would involve a huge amount of aid to the U.S.S.R., contingent on various reforms and nonproliferation efforts. In “Collapse,” Zubok describes a National Security Council meeting in June, 1991, at which the Grand Bargain was discussed. Nicholas Brady, then the Secretary of the Treasury, spoke out forcefully against extensive aid to the Soviet Union. He was candid about America’s priorities, saying, “What is involved is changing Soviet society so that it can’t afford a defense system. If the Soviets go to a market system, then they can’t afford a large defense establishment. A real reform program would turn them into a third-rate power, which is what we want.”

But, if our advice and actions did damage to Russia, they also did damage to us. In a forthcoming book, “How the West Lost the Peace” (Polity), translated by Jessica Spengler, Ther writes on the concept of “co-transformation.” Change and reform moved in both directions. Borders softened. We sent Russia Snickers bars and personal computers; they sent us hockey players and Tetris. But there were less positive outcomes, too. It was one thing to impose “structural adjustment” on the states of the former Eastern Bloc, quite another when their desperate unemployed showed up at our borders. Ther uses the example of Poland—a large country that underwent a jarring and painful reform period yet emerged successfully, at least from an economic perspective, on the other side. But in the process many people were put out of work; rural and formerly industrialized sections of the country did not keep up with the big cities. This generated a political reaction that was eventually expressed in support for the right-wing nationalist Law and Justice Party, which in 2020 all but banned abortions in Poland. At the same time, a great many Poles emigrated to the West, including to the United Kingdom, where their presence engendered a xenophobic reaction that was one of the proximate causes, in 2016, of Brexit.

The reforms did not merely cause financial pain. They led to a loss in social status, to a loss of hope. These experiences were not well captured by economic statistics. The worst years for Russians were the ones between 1988 and 1998; after that, the ruble was devalued, exports began to rise, oil prices went up, and, despite enormous theft at the top, the dividends trickled down to the rest of society. But the aftereffects of that decade of pain were considerable. Life expectancy had dropped by five years; there was severe social dislocation. At the end of it, many people were prepared to support, and some people even to love, a colorless but energetic former K.G.B. agent named Vladimir Putin.

by Keith Gessen, New Yorker |  Read more:
Image: Eduardo Morciano; Source photograph from Getty