Sunday, December 5, 2021

The Third Force

In 1943, after being interrogated by Vichy police officers who suspected him (rightly) of conspiring to rescue Jews from the occupying Nazis, a French clergyman named André Trocmé stepped into the open air with a revised view of the human condition. “Before he entered that police station in Limoges, he thought the world was a scene where two forces were struggling for power: God and the Devil,” writes one of his chroniclers. “From then on, he knew that there was a third force seeking hegemony over this world: stupidity.”

Trocmé’s eureka was by no means unique—his German contemporary and co-religionist Dietrich Bonhoeffer wrote that stupidity (or “folly,” depending on your translation) was “a more dangerous enemy of the good than malice”—and it still rings true today. From the troglodytic inanities of entertainments such as the Instagram account Girls Getting Hurt (894,000 followers) to the pyrotechnic disasters of gender-reveal parties, stupidity is everywhere we look, not least of all in those who look for it everywhere but within themselves.

My own Trocmé moment came with a photo in the New York Times of an angry crowd protesting the tyranny of face masks in the midst of an “exaggerated” pandemic, an ominous prelude to the storming of the Capitol the following year to overturn a “stolen” election. As luck would have it, the antimask protests were taking place at the same time that my wife was reading about England during the Second World War, so there was this repeated dinnertime comparison of the prodigious sacrifices made by bombed-out Londoners with those that peacetime Michiganders found insufferable enough to justify calling up the militia. “Delusional,” “obstinate,” and “perverse” seemed woefully inadequate descriptors, and stupid regrettably unkind, but there it was. What else could you call it?

“Stupid” doesn’t mean unintelligent or even uninformed. The political philosopher Eric Voegelin was closer to the mark when he defined stupidity as a “loss of reality.” It’s possible to take Voegelin’s definition a step further and say that stupidity is a denial of reality to the degree that one’s own survival, to say nothing of the survival of others, is imperiled. “Too dumb to live,” we might say, summoning metaphors of dodo birds and dinosaurs, creatures who may not have been especially unintelligent but who owe their reputations as lamebrains in large part to their extinction. Stupidity is oblivious to negative consequences; it falls into a pit. Gross stupidity invites negative consequences; it looks for a pit. There’s an element of willfulness to it: let the oceans rise, let the virus rage, you can’t scare me. Socrates held that human beings do not knowingly act against their best interests; perhaps his wisdom made it hard for him to imagine a human being who could say, “To hell with my best interests, and screw Socrates too.” A willful loss of reality, however death-defying it may appear, is never far from a wish for death.

The widespread stupidity that pinhead populism and COVID-19 have brought to the fore goes far beyond the disdain for intellectuals that has been a current in American culture since the nation’s inception. For a sense of how far, consider this curious passage from Richard Hofstadter’s 1963 Anti-intellectualism in American Life:
It would . . . be mistaken, as well as uncharitable, to imagine that the men and women who from time to time carry the banners of anti-intellectualism are of necessity committed to it as though it were a positive creed or a kind of principle. In fact, anti-intellectualism is usually the incidental consequence of some other intention, often some justifiable intention. Hardly anyone believes himself to be against thought and culture. Men do not rise in the morning, grin at themselves in their mirrors, and say: “Ah, today I shall torment an intellectual and strangle an idea!”
I find the passage striking for two reasons. First, because in light of such bumper-sticker slogans as MAKE LIBERALS CRY AGAIN and HOW 'BOUT I PUT MY CARBON FOOTPRINT UP YOUR LIBERAL ASS?, it would seem that some people do rise in the morning with the intention of tormenting their thoughtful neighbors and strangling any number of ideas, not a few of which are subsumed under the political philosophy with the carbon footprint up its rectum. It would also seem—and this is the second reason the passage hit me so hard—that the liberal idea typified by Hofstadter’s generous disclaimer, his implied insistence that most people are better than they seem, has indeed been strangled, or at the very least, is gasping for air. (...)

Writing several years before he would be executed for his progressively isolating role in a plot to overthrow Hitler, Bonhoeffer says,
We note . . . that people who have isolated themselves from others or who live in solitude manifest this defect less frequently than individuals or groups of people inclined or condemned to sociability. And so it would seem that stupidity is perhaps less a psychological than a sociological problem.
He goes on to say that although a stupid person is usually stubborn, his stubbornness shouldn’t be mistaken for independence. “In conversation with him, one virtually feels that one is dealing not at all with him as a person, but with slogans, catchwords, and the like that have taken possession of him.”

by Garret Keizer, Harper's | Read more:
Image: Untitled mixed-media artworks, 2020, by Fred Tomaselli

Geoengineering Whale Poop

In the 20th century, the largest animals that have ever existed almost stopped existing. Baleen whales—the group that includes blue, fin, and humpback whales—had long been hunted, but as whaling went industrial, hunts became massacres. With explosive-tipped harpoons that were fired from cannons and factory ships that could process carcasses at sea, whalers slaughtered the giants for their oil, which was used to light lamps, lubricate cars, and make margarine. In just six decades, roughly the life span of a blue whale, humans took the blue-whale population down from 360,000 to just 1,000. In one century, whalers killed at least 2 million baleen whales, which together weighed twice as much as all the wild mammals on Earth today.

All those missing whales left behind an enormous amount of uneaten food. In a new study, the Stanford ecologist Matthew Savoca and his colleagues have, for the first time, accurately estimated just how much. They calculated that before industrial whaling, these creatures would have consumed about 430 million metric tons of krill—small, shrimplike animals—every year. That’s twice as much as all the krill that now exist, and twice as much by weight as all the fish that today’s fisheries catch annually. But whales, despite their astronomical appetite, didn’t deplete the oceans in the way that humans now do. Their iron-rich poop acted like manure, fertilizing otherwise impoverished waters and seeding the base of the rich food webs that they then gorged upon. When the whales were killed, those food webs collapsed, turning seas that were once rain forest–like in their richness into marine deserts.

But this tragic tale doesn’t have to be “another depressing retrospective,” Savoca told me. Those pre-whaling ecosystems are “still there—degraded, but still there.” And his team’s study points to a possible way of restoring them—by repurposing a controversial plan to reverse climate change.

Baleen whales are elusive, often foraging well below the ocean’s surface. They are also elastic: When a blue whale lunges at krill, its mouth can swell to engulf a volume of water larger than its own body. For these reasons, scientists have struggled to work out how much these creatures eat. In the past, researchers either examined the stomachs of beached whales or extrapolated upward from much smaller animals, such as mice and dolphins. But new technologies developed over the past decade have provided better data. Drones can photograph feeding whales, allowing researchers to size up their ballooning mouths. Echo sounders can use sonar to gauge the size of krill swarms. And suction-cup-affixed tags that come with accelerometers, GPS, and cameras can track whales deep underwater—“I think of them as whale iPhones,” Savoca said.

Using these devices, he and his colleagues calculated that baleen whales eat three times more than researchers had previously thought. They fast for two-thirds of the year, subsisting on their huge stores of blubber. But on the 100 or so days when they do eat, they are incredibly efficient about it. Every feeding day, these animals can snarf down 5 to 30 percent of their already titanic body weight. A blue whale might gulp down 16 metric tons of krill.
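The arithmetic behind these feeding estimates is simple enough to sketch. A back-of-envelope version in Python, using only the round numbers quoted in this article (the roughly 100-day feeding season and the 16-metric-ton daily intake of a blue whale); the population figure here is purely hypothetical for illustration, since the study itself works from per-species data:

```python
# Back-of-envelope check of the feeding figures quoted above.
# Inputs are the round numbers from the article; the study itself
# uses per-species data that this sketch does not reproduce.

FEEDING_DAYS = 100             # whales feed on "the 100 or so days" per year
BLUE_WHALE_DAILY_KRILL_T = 16  # metric tons of krill per feeding day

def annual_intake_t(daily_intake_t: float, feeding_days: int = FEEDING_DAYS) -> float:
    """Annual krill consumption for one whale, in metric tons."""
    return daily_intake_t * feeding_days

per_blue_whale = annual_intake_t(BLUE_WHALE_DAILY_KRILL_T)
print(f"One blue whale: ~{per_blue_whale:,.0f} t of krill per year")

# Scaling to a hypothetical population shows how quickly per-whale intake
# reaches the hundreds of millions of tons the study reports for all
# baleen whales combined.
HYPOTHETICAL_POPULATION = 300_000
total = per_blue_whale * HYPOTHETICAL_POPULATION
print(f"{HYPOTHETICAL_POPULATION:,} such whales: ~{total / 1e6:,.0f} million t per year")
```

At 16 metric tons a day over a 100-day season, a single blue whale accounts for about 1,600 metric tons of krill a year, which is why a population in the hundreds of thousands consumes krill on the scale of hundreds of millions of tons.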

Surely, then, the mass slaughter of whales must have created a paradise for their prey? After industrial-era whalers killed off these giants, about 380 million metric tons of krill would have gone uneaten every year. In the 1970s, many scientists assumed that the former whaling grounds would become a krilltopia, but instead, later studies showed that krill numbers had plummeted by more than 80 percent.

The explanation for this paradox involves iron, a mineral that all living things need in small amounts. The north Atlantic Ocean gets iron from dust that blows over from the Sahara. But in the Southern Ocean, where ice cloaks the land, iron is scarcer. Much of it is locked inside the bodies of krill and other animals. Whales unlock that iron when they eat, and release it when they poop. The defecated iron then stimulates the growth of tiny phytoplankton, which in turn feed the krill, which in turn feed the whales, and so on.

Just as many large mammals are known to do on land, the whales engineer the same ecosystems upon which they depend. They don’t just eat krill; they also create the conditions that allow krill to thrive. They do this so well that even in the pre-whaling era their huge appetites barely dented the lush wonderlands that they seeded. Back then, krill used to swarm so densely that they reddened the surface of the Southern Ocean. Whales feasted so intensely that sailors would spot their water spouts punching upward in every direction, as far as the eye could see. With the advent of industrial whaling, those ecosystems imploded. Savoca’s team estimates that the deaths of a few million whales deprived the oceans of hundreds of millions of metric tons of poop, about 12,000 metric tons of iron, and a lot of plankton, krill, and fish. (...)

In 1990, the oceanographer John Martin proposed that the Southern Ocean is starved of iron, and that deliberately seeding its waters with the nutrient would allow phytoplankton to grow. The blooming plankton would soak up carbon dioxide, Martin argued, and cool the planet and slow the pace of global warming. Researchers have since tested this idea in 13 experiments, adding iron to small stretches of the Southern and Pacific Oceans and showing that plankton do indeed flourish in response.

Such iron-fertilization experiments have typically been billed as acts of geoengineering—deliberate attempts to alter Earth’s climate. But Savoca and his colleagues think that the same approach could be used for conservation. Adding iron to waters where krill and whales still exist could push the sputtering food cycle into higher gear, making it possible for whales to rebound at numbers closer to their historical highs. “We’d be re-wilding a barren land by plowing in compost, and the whole system would recuperate,” says Victor Smetacek, an oceanographer at the Alfred Wegener Institute for Polar and Marine Research, in Germany. (Smetacek was involved in three past iron-fertilization experiments and has been in talks with Savoca’s group.)

The team plans to propose a small and carefully controlled experiment to test the effects of iron fertilization on the whales’ food webs. The mere idea of that “is going to be shocking to some people,” Savoca admitted. Scientists and advocacy groups alike have fiercely opposed past iron-addition experiments, over concerns that for-profit companies would patent and commercialize the technology and that the extra iron would trigger blooms of toxic algae.

by Ed Yong, The Atlantic |  Read more:
Image: The Asahi Shimbun/Getty


Chantal Goya, Marlène Jobert, Jean-Pierre Léaud @ Masculin Féminin (Jean-Luc Godard, 1966).
Images (more): via

Privacy Settings


Dear Human Resources,

My boyfriend will occasionally walk in on me while I’m on the toilet—to grab a toothbrush or floss, or take a quick shower, or just to show me a funny meme. I hate this. But when I tell him so he just laughs and says it’s no big deal. He sees his ability to be in the bathroom at the same time as a relationship accomplishment—for him it’s a sign of how comfortable we are together. But it makes me feel gross! I don’t think it’s just out of self-consciousness that I think bathroom time should not be a couple’s activity. Is this a fair boundary to draw—or am I just afraid of intimacy?

—Wanting Closure

via: Privacy Settings (The Point)
Image: (uncredited)

Friday, December 3, 2021

The Supreme Court Gaslights Its Way to the End of Roe

There are many reasons for dismay over the Supreme Court argument in the Mississippi abortion case, but it was the nonstop gaslighting that really got to me.

First there was Justice Clarence Thomas, pretending by his questions actually to be interested in how the Constitution might be interpreted to provide for the right to abortion, a right he has denounced and schemed to overturn since professing to the Senate Judiciary Committee 30 years ago that he never even thought about the matter.

Then there was Chief Justice John Roberts, mischaracterizing an internal memo that Justice Harry Blackmun wrote to his colleagues as the Roe v. Wade majority was discussing how best to structure the opinion Justice Blackmun was working on. The chief justice was trying to delegitimize the place of fetal viability in the court’s abortion jurisprudence, where for nearly 50 years, viability has been the unbreached firewall protecting the right of a woman to choose to terminate a pregnancy.

“It’s an unfortunate source, but it’s there,” he said, referring to Justice Blackmun’s papers, on file and open to the public at the Library of Congress. “In his papers, Justice Blackmun said that the viability line was — actually was dicta.”

“Dicta” is a dismissive word that refers to asides in an opinion that are not actually part of the court’s holding. The entry in the Blackmun papers to which the chief justice was most likely referring was a memo of Nov. 21, 1972, that the author of Roe v. Wade sent along with a new draft opinion to the other justices, noting: “In its present form it contains dictum but I suspect that in this area some dictum is indicated and not to be avoided.”

In that memo, of course referring to what was still a work in progress, Justice Blackmun proposed that the right to abortion be fully protected only until the end of the first trimester of pregnancy. “This is arbitrary,” he wrote, “but perhaps any other selected point, such as quickening or viability, is equally arbitrary.”

But two weeks later, after consulting with other justices, including Lewis Powell and Thurgood Marshall, Justice Blackmun circulated another memo endorsing the viability line. Far from describing this determination as arbitrary, he wrote in a memo dated Dec. 11, 1972, that viability “has logical and biological justifications,” namely, that “few could argue, or would argue, that a state’s interest by the time of viability, when independent life is presumably possible, is not sufficiently developed to justify appropriate regulation.”

In other words, by the time the court issued the final opinion in January 1973, viability was not dicta but rather an essential element of the decision. Chief Justice Roberts may not like viability — as clearly he doesn’t, observing to Julie Rikelman, the lawyer for the Mississippi clinic challenging the state’s ban on abortion after 15 weeks of pregnancy, that “viability, it seems to me, doesn’t have anything to do with choice” — but he was flatly wrong to suggest that it was an unconsidered aspect of Roe v. Wade.

(And of course it is extremely odd for a Supreme Court justice to dig into the court’s private work papers to cast aspersions on a published opinion.)

In fact, as the second Blackmun memo makes clear, the court that decided Roe saw a direct link between the viability line and a woman’s ability to choose abortion. In that second memo, Justice Blackmun referred to the “practical aspect” of the viability line, observing that “there are many pregnant women, particularly younger girls, who may refuse to face the fact of pregnancy and who, for one reason or another, do not get around to medical consultation until the end of the first trimester is upon them or, indeed, has passed.”

And then there was Justice Brett Kavanaugh, who rattled off a list of “the most consequential cases in this court’s history” that resulted from overruling prior decisions. If the court had adhered, for example, to the separate-but-equal doctrine of Plessy v. Ferguson rather than overruling that precedent in Brown v. Board of Education “the country would be a much different place,” he told Ms. Rikelman. “I assume you agree with most, if not all, the cases I listed there, where the court overruled the precedent,” Justice Kavanaugh continued. Why then, he asked, should the court stick with a case it now regarded as wrongly decided?

More gaslighting: The superficial plausibility of Justice Kavanaugh’s analogy between Plessy v. Ferguson and Roe v. Wade dissolves with a second’s contemplation. For one thing, Plessy negated individual liberty, while Roe expanded it. For another, Justice Kavanaugh’s list could have been 1,000 cases long without casting any light on whether today’s Supreme Court should repudiate Roe v. Wade.

But the justice’s goal was not to invite contemplation. It was to normalize the deeply abnormal scene playing out in the courtroom. President Donald Trump vowed to end the right to abortion, and the three justices he put on the court — Neil Gorsuch, to a seat that was not legitimately Mr. Trump’s to fill; Amy Coney Barrett, whose election-eve nomination and confirmation broke long settled norms; and Justice Kavanaugh — appear determined to do just that.

It was Justice Sonia Sotomayor who asked the uncomfortable question. “Will this institution survive the stench that this creates in the public perception that the Constitution and its reading are just political acts?” she demanded of Scott Stewart, a former law clerk to Justice Thomas who argued for Mississippi as the state’s solicitor general. Listening to the live-streamed argument, I first heard “political acts” as “political hacks,” I suppose because still in my mind were Justice Barrett’s words when she spoke in mid-September at a center in Louisville, Ky., named for her Senate confirmation mastermind, Senator Mitch McConnell. “My goal today is to convince you that the court is not comprised of a bunch of partisan hacks,” she said then.

Justice Barrett’s performance during Wednesday’s argument was beyond head-spinning. Addressing both Ms. Rikelman and Elizabeth Prelogar, the U.S. solicitor general who argued for the United States on behalf of the Mississippi clinic, Justice Barrett asked about “safe haven” laws that permit women to drop off their unwanted newborn babies at police stations or fire houses; the mothers’ parental rights are then terminated without further legal consequences. If the problem with “forced motherhood” was that it would “hinder women’s access to the workplace and to equal opportunities,” Justice Barrett asked, “why don’t safe haven laws take care of that problem?”

She continued: “It seems to me that it focuses the burden much more narrowly. There is, without question, an infringement on bodily autonomy, you know, which we have in other contexts, like vaccines. However, it doesn’t seem to me to follow that pregnancy and then parenthood are all part of the same burden.”

I’ll pass over the startling notion that being required to accept a vaccine is equivalent to being forced to carry a pregnancy to term. “Gaslighting” doesn’t adequately describe the essence of what Justice Barrett was suggesting: that the right to abortion really isn’t necessary because any woman who doesn’t want to be a mother can just hand her full-term baby over to the nearest police officer and be done with the whole business. As Justice Barrett, of all people, surely understands, such a woman will forever be exactly what she didn’t want to be: a mother, albeit one stripped of her ability to make a different choice.

by Linda Greenhouse, NY Times |  Read more:
Image: Damon Winter
[ed. In case you missed it, please take a moment to read an excellent summary of stare decisis (deference to judicial precedents): Precedent and the Conservative Court (Duck Soup).]

Wednesday, December 1, 2021

Lina Khan’s Battle to Rein in Big Tech

In the spring of 2011, a recent Williams College graduate named Lina Khan interviewed for a job at the Open Markets Program, in Washington, D.C. Open Markets, which was part of the New America think tank, was dedicated to the study of monopolies and the ways in which concentration in the American economy was suppressing innovation, depressing wages, and fuelling inequality. The program had been founded the previous year by Barry Lynn, who believed that monopolies posed a threat to democracy, and that policymakers and much of the public were blind to this threat. Unlike the practice at other think tanks, which publish research reports and white papers, Lynn, a former reporter and editor, disseminated the program’s findings directly to the public, through newspaper and magazine articles.

The study of antitrust law was far from fashionable; since the nineteen-eighties, the field had been dominated by a world view that favored corporate conglomeration, which was acceptable, mainstream experts believed, as long as consumer prices didn’t rise. Lynn was seeking a researcher without any formal economics training, who would come to the subject with fresh eyes. Khan had studied the 2008 financial crisis and was interested in the effects of power disparities in the economy. She checked out Lynn’s book, “Cornered: The New Monopoly Capitalism and the Economics of Destruction,” from the library and skimmed it the night before her interview. “When she walked in that door, she had no idea what this entailed or what she would become,” Lynn told me. “She was just a fantastically smart person who was very curious.”

During the interview, Lynn recalled, he asked Khan, “Do you ever get angry? Does anything make you outraged?” She replied, “No, not really.” Lynn said, “I think you’ll become angry while you’re doing this work. There will be things that you discover here that will outrage you.” Khan took the job.

Open Markets studied industries ranging from banking to agriculture. In case after case, Lynn found, the number of companies in each market had been reduced to a few big entities that had bought up their competitors, giving them a disproportionate amount of power. Consumers had the impression of vast choices among brands, but this was often misleading: many of the biggest furniture stores were owned by one company; a large percentage of the dozens of laundry detergents in most supermarkets were made by two corporations. After consolidation, it became easier for furniture sellers and detergent manufacturers to raise prices, compromise the quality of their products, or treat employees poorly, because consumers and workers had few other places to go. It also became much more difficult for entrepreneurs to break into the marketplace, because competing with these giants was almost impossible. As huge companies became even bigger, much of the American middle class struggled with stagnant wages. In Lynn’s view, the issues were connected.

Khan began researching book publishing. “There was a sense that this industry was in crisis,” she recalled. Publishers had come under pressure, first from chain stores like Barnes & Noble, and then from Amazon, which sold electronic books by pricing them at a loss, in order to encourage consumers to buy its Kindle e-book readers. Amazon eventually controlled more than seventy per cent of the e-book market, a dominance that gave it the ability to force publishers to accept its terms, undermining the business model they had long used to subsidize the creation of a wide variety of books. When publishers tried to band together to fight Amazon, the Justice Department sued them, fearing that their action would increase the retail price of e-books. The publishers saw Amazon’s power as potentially leading to a decline in the free exchange of ideas and as a crisis for democracy. Increasingly, so did Khan. Her work helped provide the basis for a piece that Lynn published in Harper’s, in February, 2012, called “Killing the Competition.” Today, he wrote, “a single private company has captured the ability to dictate terms to the people who publish our books, and hence to the people who write and read our books.”

Khan told me that she started to see the world differently. “It’s incredible, once you start studying industry structure and see how much consolidation there has been across industries—in airlines, contact-lens solution, funeral caskets,” she said. “Every nook and cranny of our economy has consolidated. I was discovering this new world.” At one point, she investigated the candy market, identifying nearly forty brands in her local store that were made by Hershey, Mars, or Nestlé. In another project, about the raising of poultry, she found that most farmers had to purchase chicks and feed from the giant poultry processor that bought their full-grown chickens, which, because it had no local competitors, could dictate the price it paid for them.

Lynn and Khan couldn’t seem to get lawmakers to pay attention. “It definitely felt like we were on the margins of the policy conversation,” Khan said. One afternoon, she looked up from an article she was reading on her computer. Lynn recalls her saying, “Barry, I think I’m starting to feel angry.”

On June 15, 2021, Khan was sworn in as the chair of the Federal Trade Commission, the agency responsible for consumer protection and for enforcing the branch of law that regulates monopolies. At the age of thirty-two, she is the youngest person ever to head the F.T.C. Matt Stoller, the director of research at the anti-monopoly think tank the American Economic Liberties Project, described Khan’s ascent as “earth-shattering.” The appointment represents the triumph of ideas advocated by people like Khan and Lynn that had been suppressed or ignored for decades. “She understands profoundly what monopoly power means for workers and for consumers and for innovation,” said David Cicilline, a Democratic congressman from Rhode Island and the chair of the House Committee on the Judiciary’s Subcommittee on Antitrust, Commercial, and Administrative Law. “She will use the full power of the F.T.C. to promote competition, which I think is good for our economy, good for workers, and good for consumers and businesses.”

After years spent publishing research about how a more just world could be achieved through a sweeping reimagining of anti-monopoly laws, Khan now has a much more difficult task: testing her theories—in an arena of lobbyists, partisan division, and the federal court system—as one of the most powerful regulators of American business. “There’s no doubt that the latitude one has as a scholar, critiquing certain approaches, is very different from being in the position of actually executing,” Khan told me. But she added that she intends to steer the agency to choose consequential cases, with less emphasis on the outcomes, and to generally be more proactive. “Even in cases where you’re not going to have a slam-dunk theory or a slam-dunk case, or there’s risk involved, what do you do?” she said. “Do you turn away? Or do you think that these are moments when we need to stand strong and move forward? I think for those types of questions we’re certainly at a moment where we take the latter path.

“There’s a growing recognition that the way our economy has been structured has not always been to serve people,” Khan went on. “Frankly, I think this is a generational issue as well.” She noted that coming of age during the financial crisis had helped people understand that the way the economy functions is not just the result of metaphysical forces. “It’s very concrete policy and legal choices that are made, that determine these outcomes,” she said. “This is a really historic moment, and we’re trying to do everything we can to meet it.”

Amazon taught a generation of consumers that they could order anything online, from packs of mints to swimming pools, and expect it to be delivered almost overnight. According to some estimates, the company controls close to fifty per cent of all e-commerce retail sales in the U.S. and occupies roughly two hundred and twenty-eight million square feet of warehouse space. It makes movies and publishes books; delivers groceries; provides home-security systems and the cloud-computing services that many other companies rely on. Amazon’s founder, Jeff Bezos, wants to colonize the moon.

During the Presidency of Barack Obama, Amazon’s relentless expansion was largely encouraged by the government. The country was emerging from a devastating recession, and Obama saw entrepreneurs like Bezos as sources of innovation and jobs. In 2013, in a speech given at an Amazon warehouse in Chattanooga, Tennessee, Obama described the company’s role in bolstering the financial security of the middle class and creating stable, well-paying work. He spoke with near-awe of how, during the previous Christmas rush, Amazon had sold more than three hundred items per second. Obama was also close with Eric Schmidt, the former executive chairman of Alphabet, Google’s parent company. An analysis by the Intercept found that employees and lobbyists from Alphabet visited the White House more than those from any other company, and White House staff turned to Google technologists to troubleshoot the Affordable Care Act Web site and other projects. The analysis also found that nearly two hundred and fifty people moved between government positions and companies controlled by Schmidt, law and lobbying firms that did work for Alphabet, or Alphabet itself. Between 2010 and 2016, Amazon, Google, and other tech giants bought up hundreds of competitors, and the government, for the most part, did not object.

When Obama left office, many of his top aides took jobs at tech companies: Jay Carney, Obama’s former press secretary, joined Amazon; David Plouffe, his campaign manager, and Tony West, a high-ranking official at the Department of Justice, joined Uber; and Lisa Jackson, the former head of the Environmental Protection Agency, went to Apple. (...)

As a result, antitrust policy, especially as it pertains to big technology firms, has emerged as one of the starkest differences between the Biden Presidency and the Obama one. Stacy Mitchell, a co-director of the Institute for Local Self-Reliance, an anti-monopoly think tank, described the contrast as “night and day.” Obama’s politics were “very much in the center of the road, in terms of the dominant thought of the last several decades,” Mitchell told me. She noted that evidence of this world view could be seen early in Obama’s tenure, when his Administration declined to break up the big banks that had helped cause the 2008 financial crisis, and, instead, allowed them to become even larger and more powerful, while millions of people lost their homes to foreclosure. “Because of his identity as someone who was very progressive on a lot of other issues, I don’t think people saw that very clearly,” she said.

Through a series of appointments to regulatory and legal positions, the Biden Administration has indicated that it wants to reshape the role that major technology companies play in the economy and in our lives. On March 5th, Biden named Tim Wu, a Columbia Law School professor and an anti-monopoly advocate who has argued that Facebook should be broken up, to the newly created position of head of competition policy at the National Economic Council, which advises the President on economic-policy matters. On March 22nd, Biden nominated Khan to her current role. And, in July, he selected Jonathan Kanter to head the antitrust division of the Department of Justice. Kanter left the law firm Paul, Weiss in 2020 because his work representing companies making antitrust claims against Big Tech firms posed a conflict for the firm’s work for Apple, among others. Wu, Khan, Kanter, and a handful of other anti-monopoly advocates have been referred to as members of a “New Brandeis movement,” after the Supreme Court Justice Louis Brandeis, whose decisions limited the power of big business. Because of Khan’s youth, she has also been called the leader of the “hipster antitrust” faction, but this doesn’t capture the seriousness of her intentions. On August 19th, she re-filed an aggressive antitrust complaint that the F.T.C. had initiated in 2020, seeking to break up Facebook. In September, the agency published a report analyzing hundreds of acquisitions made by the biggest tech companies that were never submitted for government review. Although the report didn’t call for any specific action, it was a sign that Khan intends to look far deeper into Big Tech’s business than her predecessors did.

by Sheelah Kolhatkar, New Yorker | Read more:
Image: Ibrahim Rayintakath

I Applied For LA’s Basic Income Program

Sitting in a Ralphs parking lot overlooking the Pacific Coast Highway at 8am on a Friday, hot and sticky in an ageing wetsuit, I clicked on the link for Big:Leap, Los Angeles’ guaranteed income pilot and the largest program of its kind in the US.

Applications for the program had opened that morning. Participants would be chosen by lottery and the criteria for eligibility were simple: applicants had to be over the age of 18, live in the city of Los Angeles, have one or more dependents, and be living in poverty according to the federal poverty guidelines – a somewhat outdated and controversial method of measuring poverty, but one which, in the absence of anything else, is still used widely. The project’s aim was straightforward, too: to study the effects of giving approximately 3,000 families $1,000 a month in cash with no strings attached.
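[ed. As a rough illustration only (this is not the city's actual selection code; the field names and the use of the 2021 federal poverty guidelines for the 48 contiguous states are my assumptions), the stated eligibility rules and lottery could be sketched as:]

```python
import random

# 2021 HHS federal poverty guidelines (48 contiguous states), by household size.
POVERTY_GUIDELINES_2021 = {1: 12_880, 2: 17_420, 3: 21_960, 4: 26_500,
                           5: 31_040, 6: 35_580, 7: 40_120, 8: 44_660}

def eligible(age, city, dependents, household_size, annual_income):
    """Apply the four stated criteria: adult, LA resident, one or more
    dependents, income at or below the guideline for the household size."""
    threshold = POVERTY_GUIDELINES_2021.get(household_size)
    if threshold is None:
        # The guideline adds $4,540 per person beyond a household of 8.
        threshold = 44_660 + 4_540 * (household_size - 8)
    return (age >= 18 and city == "Los Angeles"
            and dependents >= 1 and annual_income <= threshold)

def run_lottery(applicants, n_slots=3000, seed=0):
    """Draw participants uniformly at random from the eligible pool."""
    pool = [a for a in applicants if eligible(**a)]
    random.Random(seed).shuffle(pool)
    return pool[:n_slots]
```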

To a single parent who had lost two jobs in 2021, the opportunity to receive an additional thousand dollars a month tax-free in a city where the median rent for a one-bedroom apartment is $2,195 seemed like a lifeline. I thought I knew what to expect from the process. I had applied for several aid programs before – CalWorks, CalFresh, MediCal – onerous and detailed applications that delved into my bank accounts, utility bills, rental agreement, child support, income and assets (or lack of them), and they often involved numerous trips to offices to clear up glitches that had tied my hypothetical aid up in a bureaucratic system. During the pandemic, I applied for – and received – $17,500 of SBA money. That application took just minutes to complete.

The approximately 80 questions that the Big:Leap application posed started predictably enough: What is your gender? How many children under the age of 18 do you have?

They soon delved into the personal: How much bodily pain have you had during the past 4 weeks?

Then the application took a nosedive into the deeply intimate:

Have you experienced any of the actions listed below from any current or former partner or partners?
  • Blame me for causing their violent behavior.
  • Shook, pushed, grabbed or threw me.
  • Tried to convince my family, children or friends that I am crazy or tried to turn them against me.
  • Used or threatened to use a knife or gun or other weapon to harm me.
  • Made me perform sex acts that I did not want to perform.
The application took me 45 minutes, several F-bombs and one packet of Kleenex to complete.

It’s the paradox of Big:Leap. The program aims to stop “controlling” people in poverty through policy by no longer dictating what recipients spend government assistance on. But to prove the project’s worth, researchers have developed a control program that felt frustrating and arduous – hurtful, even, at times. “That is proof that we have to get policy to stop forcing people in poverty to prove their need,” said Michael Tubbs, the former mayor of Stockton, California.

Stockton in recent years ran a wildly successful two-year guaranteed income pilot, the Stockton Economic Empowerment Demonstration (Seed). Critics of the scheme had argued that untaxed, additional, no-strings income – in Stockton’s case, $500 a month for two years – would quash people’s work ethic and that the money would be spent irresponsibly. Instead, the program achieved the results many politicians and researchers aiming to combat wealth inequality had hoped for: extensive surveys ultimately revealed that the money improved the 200 participants’ job prospects, financial stability, mental and physical health, and overall wellbeing. Only 1% of the money went towards alcohol and tobacco, researchers found.

Since then, other major California cities have launched their own pilots. San Francisco announced its program in September 2020. Oakland followed suit in March. Chicago, Illinois, passed its guaranteed income program in October.

The setup of each of these pilots has varied. The Stockton program had a similar structure to Big:Leap, using a control group and a trial group to draw its conclusions. But it had no criteria for entry other than a Stockton zip code, and potential applicants entered a simple lottery without any initial questionnaire.

Curious about the reactions of other LA residents to the application questions, I went along to one of the walk-in centers across the city, most of which were in the district of Curren D Price, the city councilmember who had initiated the LA scheme. It’s a predominantly working class, Spanish-speaking neighborhood with one of the highest poverty rates in the city. At Price’s office, 16 computer terminals were set up in a room with three bilingual volunteers ready to assist walk-in applicants who might not have the literacy or technology to complete the application at home. A reporter for KCRW, Aaron Schrank, sat outside the room, holding a voice recorder. On Friday, when applications opened, there had been lines around the block. When I visited three days later, 13 of the terminals were occupied, and bored children clutching crayons wandered around while their parents patiently typed away. One woman completed the application in three hours. Another took two.

Schrank told me that two people he’d interviewed early that morning had, like me, been confused and offended by the questions. Luis Riva, a former upholsterer, had told him: “They’re asking too many questions about my health. They’re asking questions that aren’t related to helping people with money. They’re asking other things like how is my health, how do I think, psychological stuff.” Bonnie Morales, who lost her father and then her job during the pandemic, complained: “They asked me about my partner, like if it was a girl or a boy. Like, what does that matter?… Why does it matter if I’m gay, a lesbian, bisexual, or trans? I just find those questions very fucking weird to me, you know … Ask me if I’m starving. Ask me if I can afford a bag of beans. Ask me that.”

A volunteer, Porsha Anderson, acknowledged that many of the applicants had struggled with the questions. “They want to know, ‘Why are they asking me about domestic issues? What am I meant to say? What’s the answer I need to give to get the money?’”

Dr Bo-Kyung Elizabeth Kim, an assistant professor at the University of Southern California Center on Education Policy, Equity and Governance who heads the local research team overseeing Big:Leap, explained that the questionnaire includes core questions composed by researchers and questions added by the study sites.

“Both researchers and their political partners are hoping to understand how and why money provided through the program may or may not improve the specific experience of families in poverty,” she said, as well as the challenges that poverty can bring about.

The questions about intimate partner violence in the questionnaire were included by the city of Los Angeles, she said: “We suspect intimate partner violence is a widespread community issue based on police calls for domestic disturbances, but we actually do not have strong data on its prevalence as typically only physical violence is reported,” she noted. “Inclusion of those questions does help LA understand the prevalence of intimate partner violence among applicants, and helps the city determine if guaranteed income can actually help people move away from dangerous relationships.”

The questions on the application were not compulsory, she added. (The disclaimer at the start of the application did state that the questions weren’t obligatory, but there was no way for an applicant to avoid them. Everyone had to click through the entire application before it could be submitted for entry into the lottery.)

The Los Angeles mayor, Eric Garcetti, said that he, as a social scientist, wanted the LA program to have the biggest possible sample size and ask the deepest questions of that sample, to provide solid data for potential government policy aimed at combating poverty. The goal, he said, was to find solutions that allow people to exit poverty, not to simply survive it through cash, food, medical or tuition assistance.

Garcetti said what had convinced him to commit to the pilot were “Angeleno cards” – basic debit cards containing a cash amount handed out to LA residents in need during the pandemic. The cards had the added benefit of allowing the city to track where that money was spent, Garcetti said. After he saw that most of the money was spent on basic necessities such as food, rent and utilities, he became committed to the idea of a guaranteed income scheme that could act as a bellwether of sorts for programs at a federal and state level, he said. The city has since set up a new department to handle Big:Leap and other “community wealth initiatives” aimed at combating poverty.

The difference between traditional social services and the notion of a guaranteed income, Garcetti said, was “the trust that [the program] places in everyday people to make decisions for themselves”.

by Ruth Fowler, The Guardian |  Read more:
Image: Ringo Chiu/Zuma Press Wire/Rex/Shutterstock
[ed. Bureaucracy at its finest. See also: Rough and Unready (The Baffler).]

Rick Beato: Reacting To The Beatles "Get Back" Documentary

 

[ed. So far the film is everything I hoped it'd be (haven't watched the whole thing yet). The conversations, the stunning improvisations on classic Beatle songs - as they're being developed, seemingly out of thin air - the musicianship (if you didn't know before, Ringo is amazing), just the whole loose vibe of history being made. See also: ‘Get Back’ May Change the Way You Think About the Demise of the Beatles (The Ringer).]

Tuesday, November 30, 2021

Sergio Sarri “That Obscure Object of Desire”. Homage to Luis Buñuel


Caleografia (detail)

Harold and Maude: 50 Years On

Harold and Maude is a movie that celebrates the 1970s. By turns exuberant, psychedelic, hilarious and heartbreaking, it’s a product of the most prolific decade of Hal Ashby’s directorial output: his skewed, sweet-natured stamp is all over it.

From the opening minutes – a macabre mismatch of suicidal scene-setting to the accompaniment of Cat Stevens’ uplifting Don’t Be Shy – Ashby leaves viewers in no doubt about what they have signed up for. What follows is 91 minutes of sunlight and shadow juxtaposed in a way that will have them laughing and gasping in the same breath.

Twentysomething Harold Chasen (Bud Cort) spends his leisure time devising attention-seeking suicide scenarios within sight of his emotionally unavailable mother (an inspired Vivian Pickles). After his 15th staged suicide, Mrs Chasen – not averse to the occasional display of amateur dramatics herself – sends him to a psychiatrist who asks Harold if all 15 attempts were done for his mother’s benefit.

“I would not say benefit,” says a deadpan Harold, a master of the judicious use of looking straight to camera.

His other pastime is going to funerals, which is where he meets 79-year-old Maude (Ruth Gordon), a fellow funeral aficionado, occasional life model and self-described sunflower. Maude is given to “borrowing” other people’s vehicles and, at one of the funerals, pulls over to offer Harold a lift in his own car, a secondhand hearse.

Their friendship develops over the course of the week, a busy one for Harold. His mother has decided he should marry and signs him up for a computer dating service, 70s-style.

“They screen out the fat and the ugly,” his mother assures a bemused Harold. As she reads out the questionnaire – “Do you sometimes have headaches after a difficult day?” – and responds “Yes I do indeed,” Harold casually loads a gun and points it at her before turning it on himself for suicide attempt number 16.

There follow three more blood-spattered performances – one for each of his prospective wives-to-be – that include self-immolation, a self-inflicted machete attack and a spectacular seppuku that ends in a copycat performance by would-be actor date No 3. In between engagements, Harold and Maude have a picnic at a demolition site, save a tree, steal a couple of vehicles including a police motorbike, frolic in a field of daisies and fall in love.

by Elizabeth Quinn, The Guardian |  Read more:
Image: Cinetext Bildarchiv/Paramount/Allstar
[ed. A favorite. If you haven't seen it, do yourself a favor. I think it's on Amazon Prime.]

Omicron Updates


Image: Omicron mutation profile of S1 spike protein compared to other variants. (Trevor Bedford, mathematical epidemiologist via Twitter)

How To Fix Twitter

Jack Dorsey is stepping down as CEO of Twitter, to be replaced by Parag Agrawal. It’s not clear whether this will cause much of a change in the company’s direction — Tim Cook and Dara Khosrowshahi don’t seem to have radically altered the courses charted by Steve Jobs and Travis Kalanick, for instance. And it’s an open question whether the company’s famously chaotic corporate culture is even capable of making substantive changes. But on the off chance that change could be in the offing for the platform I spend most of my time on, I thought I’d offer my thoughts.

What’s wrong with Twitter

First, the background. I’ve done a lot of complaining about Twitter over the years. This post will probably be the last in the series — at least for a very long while — so if you’re a reader who doesn’t care about Twitter, you can breathe easy. But as a prelude to saying what I think can be done to fix Twitter, it’s worth it to review what I think are the platform’s major problems. So here’s a list of what I’ve written:

1. The Shouting Class: Twitter’s openness, free entry, and virality combine to allow anyone to gain prominence over anyone else; there’s much less gatekeeping and moderation than other platforms. But this means the discourse gets dominated by people with both the time and motivation to spend all day shouting on Twitter. That includes a fair number of good people with reasonable concerns, but it also includes a huge number of clout-seeking self-aggrandizers, histrionic wailers, ideological extremists, trolls, and contentious people who just like to argue. And because journalists and politicians all have to be on Twitter for their jobs, we’ve effectively locked our nation’s thought-leaders in a room with the Shouting Class. This encourages extremism, exhausts our empathy, and makes our society more divided and contentious.

2. The Shouting Class 2: Last Refuge of Scoundrels: In this sequel post, I gather some empirical evidence that Twitter tends to attract contentious people, and how the platform tends to amplify expressions of outrage.

3. It’s not Cancel Culture, it’s Cancel Technology: Amid all the brouhaha over “cancel culture”, there has been relatively little focus on how social media changes the nature of social ostracism. The way Twitter is set up makes ostracism both more unavoidable — because you can’t choose the group of people you express your ideas to — and more long-lasting, because your tweets, or screenshots of them, get saved forever. This has made social ostracism more likely and more pernicious in some ways than in, say, the 1990s or the 2000s.

4. Twitter and gekokujo: In 1930s Japan, nationalistic young army officers would sometimes force their more moderate superior officers to give them a free hand by appealing to nationalistic sentiments that prevailed among the general population. This was called “gekokujo”, a word indicating subversion of a hierarchy of authority. In the same way, Twitter allows dissenters within any organization to take their complaints to the general public and harness free-floating outrage and unrest to cow their superiors. If you don’t think the organizers of your science fiction convention have given you sufficient recognition, for example, you can take to Twitter and denounce them, leading to a wave of outrage that terrifies them into acceding to your demands. This power is, to some extent, based on an illusion — most people aren’t used to being yelled at online, and even a dozen angry replies can generate the impression of a vast wave of popular anger. But until companies and other organizations learn that they can ride out Twitter outrages with little consequence, the platform will continue to be disruptive to organizations at every level of society.

5. Status Anxiety as a Service: The directness with which Twitter allows “reply-guys” to address “bluechecks” creates the illusion of equality on the platform — an illusion that is then constantly shattered, as the reply-guys realize that the bluechecks have a much bigger platform than they do. This breeds a peculiarly toxic social dynamic, contributing to the platform’s general air of resentment and vindictiveness.

These are not the only problems with Twitter, obviously; there’s also the fact that communication by short text-only messages is inherently attenuated, removing much of the nuance and context that makes video and audio communication so natural. But that’s just a general problem of the internet, really, and Twitter Spaces represent one attempt to address the issue.

Really, the problem with Twitter is just that it’s ruled by shouty jerks. All of the other problems — ubiquitous fear of “cancellation”, disruption of organizations, toxic status anxiety — ultimately come back to the Shouting Class.

How to make Twitter better

So how can the platform be fixed? The product team has experimented with a number of tweaks over the years — hiding some replies, tweaking the algorithm, adding features like “mute conversation”, and so on. And centralized content moderation has increased, leading to a slow squeezing out of much of the alt-right and Qanon. That has improved things incrementally. But if management really wants to improve the platform, it’s going to have to tinker with its basic nature.

by Noah Smith, Noahpinion |  Read more:
Image: uncredited

Astrid Fitzgerald | Abstract Art on Other | Construction 124

Monday, November 29, 2021

The Holy Grail of Energy Technology (Gets Closer? Maybe?)

Nuclear fusion is the elusive holy grail of climate technology. It would provide nearly limitless amounts of clean energy without the byproduct of long-lasting radioactive waste to be managed.

It’s also the biggest bet Silicon Valley luminary Sam Altman has ever made.

“This is the biggest investment I’ve ever made,” Altman told CNBC of his $375 million investment in Helion Energy, announced Friday. It’s part of a larger $500 million round that the start-up will use to complete the construction of a fusion facility near its headquarters in Everett, Washington.

Altman was the president of the Silicon Valley start-up shop Y Combinator from 2014 through 2019 and is now the CEO of Open AI, an organization that researches artificial intelligence, which he co-founded with Elon Musk and others. (Musk has since stepped away, citing conflicts of interest with Tesla’s AI pursuits.) Altman has also been a big proponent of universal basic income, the idea that the government should give every citizen a basic living wage to compensate for technological disruptions that make some jobs irrelevant.

Years ago, Altman had made a list of the technologies he wanted to get involved in, and artificial intelligence and energy topped that list.

Altman visited four fusion companies, and made his first investment of $9.5 million in Helion in 2015.

“I immediately upon meeting the Helion founders thought they were the best and their technical approach was the best by far,” he said.

Helion’s approach to fusion

Nuclear fusion is the opposite reaction of nuclear fission: Where fission splits a larger atom into two smaller atoms, releasing energy, fusion happens when two lighter nuclei slam together to form a heavier atom. It’s the way the sun makes energy, and the basis of hydrogen bombs. Helion is one of a handful of start-ups working to control and commercialize fusion as an energy source, including Commonwealth Fusion Systems and TAE Technologies.

Perhaps the best-known fusion project is Iter in Southern France, where about 35 nations are collaborating to build a donut-shaped fusion machine called a tokamak.

Helion does not use a tokamak, said David Kirtley, Helion’s co-founder and CEO. The fusion machine Helion is building is long and narrow.


Helion uses “pulsed magnetic fusion,” Kirtley explained. That means the company uses aluminum magnets to compress its fuel and then expand it to get electricity out directly.

Extremely high temperatures are needed to create and maintain the delicate state of matter called plasma, where electrons are separated from nuclei, and where fusion can occur.

In June, Helion announced it exceeded 100 million degrees Celsius in its 6th fusion generator prototype, Trenta.

Kirtley compares Helion’s fusion machine to a diesel engine, while older technologies are more like a campfire. With a campfire, you stoke the fire to generate heat. In a diesel engine, you inject the fuel into a container, then compress and heat the fuel until it begins to burn. “And then you use the expansion of it to directly do useful work,” said Kirtley.

“By taking this new fresh approach and some of the old physics, we can move forward and do it fast,” Kirtley said. “The systems end up being a lot smaller, a lot faster to iterate, and then that gets us to commercially useful electricity, which is solving the climate change problem, as soon as possible.”

Helion Energy is using aneutronic fusion, meaning “they don’t have a lot of high energy neutrons present in their fusion reaction,” according to Brett Rampal, the Director of Nuclear Innovation at the non-profit Clean Air Task Force.

There are still unknowns with aneutronic fusion, Rampal said.

“An aneutronic approach, like Helion Energy is pursuing, could have potential benefits that other approaches do not, but could also have different downsides and challenges to achieving commercial fusion energy production,” Rampal said. (...)
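[ed. For reference: Helion's stated fuel cycle is deuterium–helium-3. Its primary reaction yields only charged products, which is what makes it "aneutronic" and, in principle, lets energy be captured electromagnetically rather than as heat from neutron bombardment; the side deuterium–deuterium reactions in the same plasma are why the quote above says "not a lot of" high-energy neutrons rather than none.]

```latex
% Primary D-He3 reaction: all products are charged particles.
{}^{2}\mathrm{H} + {}^{3}\mathrm{He} \longrightarrow {}^{4}\mathrm{He} + p + 18.3\ \mathrm{MeV}
% One D-D side branch in the same plasma does emit a neutron.
{}^{2}\mathrm{H} + {}^{2}\mathrm{H} \longrightarrow {}^{3}\mathrm{He} + n + 3.27\ \mathrm{MeV}
```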

Altman’s three-part utopian vision

For Altman, fusion is part of his overall vision of increasing abundance through technological innovation — a vision that stands apart from many investors and thinkers in the climate space.

“Number one, I think it is our best shot to get out of the climate crisis,” Altman said.

More generally, “decreasing the cost of energy is one of the best ways to improve people’s quality of lives,” Altman said. “The correlation there is just incredibly big.”

Altman’s utopian vision encompasses three parts.

Artificial intelligence, Altman said, will drive the cost of goods and services down with exponential increases in productivity. Universal basic income will be necessary to pay people’s cost of living in the transition period where many jobs are eliminated. And virtually limitless, low-cost, green energy is the third part of Altman’s vision for the world.

“So for the same reason I’m so interested in AI, I think that fusion, as a path to abundant energy, is sort of the other part of the equation to get to abundance,” Altman told CNBC.

“I think fundamentally today in the world, the two limiting commodities you see everywhere are intelligence, which we’re trying to work on with AI, and energy, which I think Helion has the most exciting thing in the entire world happening for right now.”

But Altman knows that fusion has been elusive for decades. “The joke in fusion is that it’s been 30 years away for 50 years,” he said.

by Catherine Clifford, CNBC |  Read more:
Image: Helion

Sunday, November 28, 2021

“Organized Retail Crime”: A Look at Organized Retail Crime in the US and How Ecommerce Turned it into a Big Business

Stolen goods get sold to law-abiding Americans by third-party vendors on big ecommerce sites that profit from it. Legislation to control it is struggling to advance.

It’s a big profitable business across the US because the cost of the merchandise is zero: organize a bunch of people via social media, raid a store and run out, arms full of merchandise, and then sell this stuff into specialized distribution channels from where it gets sold by third-party vendors on some of the best-known ecommerce platforms in the US, such as eBay and Amazon and many others.

Shares of Best Buy [BBY] plunged 12.4% today after the company’s earnings call, during which it discussed a laundry list of headwinds and pressures on its gross profit margins, which, for US sales, fell 60 basis points to 23.4%, “primarily driven,” as CFO Matt Bilunas put it, by product damages and returns compared to last year, lower margins of services, and the infamous “inventory shrink.”

Inventory shrinkage or inventory shrink are the retail industry’s long-established terms for the phenomenon of inventory vanishing from the company due to vendor fraud, employee theft, and retail theft, including organized retail crime.

The total amount of shrink across the US from vendor fraud, employee theft, and retail theft in 2020 was roughly $62 billion, about the same as in 2019 despite many stores being closed for part of 2020, according to the National Retail Federation’s “2021 Retail Security Survey: The state of national retail security and organized retail crime.”

Average shrink from vendor fraud, employee theft, and retail theft amounted to 1.6% of sales in 2020 (at retail prices), according to the NRF’s survey.

It has been going on for a long time, and many retailers have reported the shrink in their financial statements for a long time as one of the costs and margin pressures. But the connection with ecommerce has given it a new business model.

“We are definitely seeing more and more, particularly organized retail crime and incidence of shrink in our locations,” said Best Buy CEO Corie Barry during the conference call (transcript via Seeking Alpha). “And I think you’ve heard other retailers talk about it, and we certainly have seen it as well.”

In the prepared remarks, Barry said that Best Buy will launch a “new capability,” namely using the QR codes for products that are locked up. “Instead of waiting for an associate to unlock the product, the customer can scan the QR code and then proceed to check out to pay and pick up the product,” she said.

“We are doing a number of things to protect our people and our customers. As we talked about in the prepared remarks, we are finding ways where we can lock up product but still make that a good customer experience. In some instances, we’re hiring security. We’re working with our vendors on creative ways we can stage the product. We’re working with trade organizations,” she said.

But all this costs money and if the hoops are high enough for customers to jump through, it costs revenues.

“You can see that pressure in our financials,” she said. “And more importantly, frankly, you can see that pressure on our associates. This is traumatizing for our associates and is unacceptable. We are doing everything we can to try to create as safe as possible environments.”

Organized retail crime has been around for about as long as retail itself. But the perpetrators had trouble selling large quantities of merchandise. Selling detergent and consumer electronics and handbags on the sidewalk was hard work and cumbersome.

But now there’s the internet with perfectly legal and huge retail platforms such as Amazon and eBay and many others, where perfectly law-abiding retail customers, who have no idea where the products came from, end up buying this contraband from third-party vendors, thus enabling the sophisticated fencing operations that make organized retail theft possible.

Retailers, including in recent years ecommerce retailers, have long been sitting ducks for criminals, in part because retailers want to create a smooth and hassle-free shopping experience. And they’ve been getting hit by theft from all sides – and organized retail crime is just one of them:
  • Ecommerce crime
  • Organized retail crime
  • Cyber-related incidents
  • Internal theft (by employees)
  • Return fraud (online and brick & mortar)
  • Gift card fraud
The costs of these crimes have always been part of the costs of doing business for retailers. And they have rolled those costs into retail prices. Customers are paying for these crimes.

An earlier report, from 2012, cited figures from 2010: total shrink of $35 billion that year, with:
  • $8.5 billion from vendor fraud, error, and unknown sources
  • $15.9 billion from employee theft
  • $10.9 billion from retail theft, including “organized retail crime.”
That was over 10 years ago. Inflation and the ease of selling this stuff on the internet have ballooned the total shrink to $62 billion in 2020.
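[ed. A quick sanity check on the numbers quoted above: the 2010 category figures do sum to "roughly $35 billion," and the jump to 2020's $62 billion works out to about 76% growth over the decade. The growth and implied-sales calculations here are mine, not the article's.]

```python
# All figures in $ billions, as quoted in the article.
breakdown_2010 = {
    "vendor fraud, error, and unknown": 8.5,
    "employee theft": 15.9,
    "retail theft, incl. organized retail crime": 10.9,
}
total_2010 = sum(breakdown_2010.values())   # 35.3 -- the "roughly $35 billion"
total_2020 = 62.0
growth = total_2020 / total_2010 - 1        # roughly +76% over the decade

# The NRF's shrink-at-1.6%-of-sales figure implies total 2020 retail sales of:
implied_sales_2020 = total_2020 / 0.016     # roughly $3.9 trillion

print(f"2010: ${total_2010:.1f}B -> 2020: ${total_2020:.0f}B (+{growth:.0%}); "
      f"implied sales ~ ${implied_sales_2020 / 1000:.1f}T")
```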

State legislatures around the US and members of the US Congress have proposed various laws that would require online retailers, such as Amazon, to obtain proof from vendors that they purchased the merchandise legally.

by Wolf Richter, Wolf Street |  Read more:
[ed. Not just Best Buy. I know a friend who applied for a job with Home Depot, and part of their training was to never confront a customer, even if they were clearly stealing store merchandise. You could walk out the door with an unpaid barbeque grill and never be stopped. I'm not sure what the reasoning is/was: legal exposure and hassles, potential violence, bad shopping experiences for other customers, whatever. But as this article shows, it's a big problem.]

How to Fix Social Media

Around two o’clock in the afternoon on October 30, 1973, a disc jockey at the New York City radio station WBAI played a track called “Filthy Words” from comedian George Carlin’s latest album. “I was thinking one night about the words you couldn’t say on the public airwaves,” Carlin began. He then rattled off seven choice examples — “f***” was among the milder ones — and proceeded to riff on their origin, usage, and relative offensiveness for the next ten minutes.

A Long Island man named John Douglas heard the broadcast as he was driving home from a trip to Connecticut with his teenaged son. He promptly filed a complaint with the Federal Communications Commission. “Whereas I can perhaps understand an ‘X-rated’ phonograph record’s being sold for private use, I certainly cannot understand the broadcast of same over the air that, supposedly, you control,” he wrote. “Can you say this is a responsible radio station, that demonstrates a responsibility to the public for its license?” After a year-long investigation, the FCC ruled that the station had violated a decades-old law that prohibits the broadcast of “obscene, indecent, or profane language.” The commission emphasized that, when it comes to the regulation of speech, broadcasters require “special treatment” because they operate “in the public interest.”

WBAI’s parent company appealed the ruling. In 1977 the D.C. Circuit Court issued a split decision reversing the FCC’s action, saying it was “overbroad” and entered “into the forbidden realm of censorship.” The FCC took the case to the Supreme Court. A year later, in a contentious five-to-four ruling, the Court sided with the commission. Writing for the majority, Justice John Paul Stevens observed that, while “each medium of expression presents special First Amendment problems,” broadcast media’s “uniquely pervasive presence in the lives of all Americans” — a presence, he noted, that extends beyond the public square and into the home — circumscribes its free-speech protections. An electronic broadcast is different from an electronic call between two persons, Stevens argued, and the two forms of communication deserve to be treated differently under the law. While the government can’t police private conversations, it can regulate the content of broadcasts to protect the public interest.

Today, mired as we are in partisan, bitter, and seemingly fruitless debates over the roles and responsibilities of social media companies, the controversy surrounding George Carlin’s naughty comedy routine can seem distant and even quaint. Thanks to the Internet’s dismantling of traditional barriers to broadcasting, companies such as Facebook, Google, and Twitter transmit a volume and variety of content that would have been unimaginable fifty years ago. What’s at issue now is far greater than the propriety of a few dirty words. Arguments over whether and how to control the information distributed through social media go to the heart of America’s democratic ideals.

It’s a mistake, though, to assume that technological changes, even profound ones, render history irrelevant. The arrival of broadcast media at the start of the last century set off an information revolution just as tumultuous as the one we are going through today, and the way legislators, judges, and the public responded to the earlier upheaval can illuminate our current situation. Particularly pertinent are the distinctions between different forms of communication that informed the Supreme Court’s decision in the Carlin case — and that had guided legal and regulatory policy-making throughout the formative years of the mass media era. Digitization has blurred those distinctions at a technical level — all forms of communication can now be transmitted through a single computer network — but it has not erased them.

By once again making such distinctions, particularly between personal speech and public speech, we have an opportunity to break out of our current ideological bind and create a democratic framework for governing social media that is consistent with the country’s values and traditions.

‘Under Lock and Key’

For most of the twentieth century, advances in communication technology proceeded along two separate paths. The “one-to-one” systems used for correspondence and conversation remained largely distinct from the “one-to-many” systems used for broadcasting. The distinction was manifest in every home: When you wanted to chat with someone, you’d pick up the telephone; when you wanted to view or listen to a show, you’d switch on the TV or radio. The technological separation of the two modes of communication underscored the very different roles they played in people’s lives. Everyone saw that personal communication and public communication entailed different social norms, presented different sets of risks and benefits, and merited different legal, regulatory, and commercial responses. (...)

When the telegraph and the telephone arrived, they may have been new things in the world, but they had an important precedent in the mail system. Radio broadcasting had no such precedent. For the first time, a large, dispersed audience could receive the same information simultaneously and without delay from a single source. As would be the case with the Internet nearly a century later, the exotic new medium remained in the hands of tinkerers and hobbyists during its early years. Every evening, the airwaves buzzed with the transmissions of tens of thousands of amateur operators, who, as media historian Hugh R. Slotten observes, “tended to view the spectrum as a new, wide-open frontier, akin to the American West.”

The amateurs — adolescent boys, many of them — played a crucial role in the development of radio technology, and most used their sets responsibly. But some, in another foreshadowing of the net, were bent on mischief and mayhem. Shielded by anonymity, they would transmit rumors and lies, slurs and slanders. The U.S. Navy, which relied on radio to manage its fleet, was a prime target. Officers “complained bitterly,” Slotten reports, “about amateurs sending out fake distress calls or posing as naval commanders and sending ships on fraudulent missions.”

The nuisance became a crisis in the early morning hours of April 15, 1912, when the Titanic sank after its fateful collision with an iceberg. Efforts to rescue the passengers were hindered by a barrage of amateur radio messages. The messages clogged the airwaves, making it hard for official transmissions to get through. Worse, some of the amateurs sent out what we would today call fake news, including a widely circulated rumor that the Titanic remained seaworthy and was being towed to a nearby port for repairs.

Although European countries had begun imposing government controls on wireless traffic as early as 1903, radio had been left largely unregulated in the United States. The public and the press, dazzled by the magical new technology, feared that bureaucratic meddling would stifle progress. Government intervention “would hamper the development of a great modern enterprise,” the New York Times opined in an editorial just three weeks before the Titanic’s sinking. “The pathways of the ether should not be involved in red tape.”

The Titanic tragedy changed everything, as Susan J. Douglas documents in her book Inventing American Broadcasting. The public was outraged, and the press demanded immediate government action. Four months later, Congress passed the Radio Act of 1912. Among the law’s provisions were requirements that all radio operators be licensed by the Department of Commerce, that senders of malicious messages be fined, and that amateur operators be restricted to the less-desirable shortwave band of the spectrum. Radio’s Wild West days were over. (...)

The Public Interest Standard

Broadcasting is not grain storage. Because it deals in the intangible goods that shape the public mind — ideas and opinions, facts and fabrications — it is inherently political. That broadcasting had a public calling may have been obvious in the 1920s, but the nature of that calling was not. Without any precedent to draw on, society had to figure out, more or less from scratch, how to accommodate the powerful new technology — how to tap its many benefits while curbing its destructive potential. It was a complicated, daunting challenge, requiring that the interests of the “community at large” be balanced not just against the interests of private businesses but also against the interests of individuals, including the right to freedom of expression.

If that sounds uncomfortably familiar, it’s because we now face a similar balancing act as we struggle to accommodate social media. The way the country met the challenge a hundred years ago, haltingly but effectively, holds important lessons for us today. (...)

Disentangling personal speech and public speech is clarifying. It reveals the dual roles that social media companies play. They transmit personal messages on behalf of individuals, and they broadcast a variety of content to the general public. The two businesses have very different characteristics, as we’ve seen, and they demand different kinds of oversight. The two-pronged regulatory approach of the last century, far from being obsolete, remains vital. It can once again help bring order to a chaotic media environment. For a Congress struggling with the complexities of the social media crisis, it might even serve as the basis of a broad new law — a Digital Communications Act in the tradition of the original Communications Act — that both protects the privacy of personal correspondence and conversation and secures the public’s interest in broadcasting.

by Nicholas Carr, The New Atlantis |  Read more:
Image: iStock