Monday, March 6, 2023

B-52s

[ed. Wonder how you'd teach this in songwriting classes. SNL 1980. See also: The B-52s Are Still Sassy After All These Years (Flood). And, the undeniably danceable Love Shack.]


via: here and here

February 18, 2023

Republican leaders are recognizing that the sight of Republican lawmakers heckling the president of the United States didn’t do their party any favors.

It not only called attention to their behavior, it prompted many news outlets to fact-check President Biden’s claim that Republicans had called for cuts to Social Security and Medicare or even called to get rid of them. Those outlets noted that while Republicans have repeatedly said they have no intention of cutting those programs, what Biden said was true: Republican leaders have repeatedly suggested such cuts, or even the elimination of those programs, in speeches, news interviews, and written proposals.

Senator Thom Tillis (R-NC) told Alexander Bolton of The Hill that Republicans should stick to “reasonable and enduring policy” proposals. “I think we’re missing an opportunity to differentiate,” he said. “Focus on policy. If you get that done, it will age well.”

But therein lies the Republican Party’s problem. What ARE its reasonable and enduring policies? One of the reasons Biden keeps pressuring the party to release its budget is that it’s not at all clear what the party stands for.

Senate minority leader Mitch McConnell (R-KY) refused to issue any plans before the 2022 midterm election, and in 2020, for the first time in its history, the party refused to write a party platform. The Republican National Committee simply resolved that if its party platform committee had met, it “would have undoubtedly unanimously agreed to reassert the Party's strong support for President Donald Trump and his Administration.” So, it resolved that “the Republican Party has and will continue to enthusiastically support the President's America-first agenda.”

Cutting Social Security is a centerpiece of the ideology the party adopted in the 1980s: that the government in place since 1933 was stunting the economy and should be privatized as much as possible.

In place of using the federal government to regulate business, provide a basic social safety net, protect civil rights, and promote infrastructure, Reagan Republicans promised that cutting taxes and regulation would free up capital, which investors would then plow into new businesses, creating new jobs and moving everybody upward. Americans could have low taxes and services both, they promised, for “supply-side economics” would create such economic growth that lower tax rates would still produce high enough revenues to keep the debt low and maintain services.

But constructing an economy that favored the “supply side” rather than the “demand side”—those ordinary Americans who would spend more money in their daily lives—did not, in fact, produce great economic growth or produce tax revenues high enough to keep paying expenses. In January 1981, President Ronald Reagan called the federal deficit, then almost $74 billion, “out of control.” Within two years, he had increased it to $208 billion. The debt, too, nearly tripled during Reagan’s term, from $930 billion to $2.6 trillion. The Republican solution was to cut taxes and slash the government even further.

As early as his 1978 congressional race, George W. Bush called for fixing Social Security’s finances by permitting people to invest their payroll tax themselves. In his second term as president in 2005, he called for it again. When Republican senator Rick Scott of Florida proposed an 11-point (which he later changed to 12 points) “Plan to Rescue America” last year, vowing to “sunset” all laws automatically after five years, the idea reflected that Republican vision. It permitted the cutting of Social Security without attaching those cuts to any one person or party.

But American voters like Social Security and Medicare and, just as they refused Bush’s attempt to privatize Social Security, recoiled from Scott’s plan. Yesterday, under pressure from voters and from other Republicans who recognized the political damage being done, Scott wrote an op-ed saying his plan was “obviously not intended to include entitlement programs such as Medicare and Social Security—programs that hard-working people have paid into their entire lives—or the funds dedicated to our national security.” (The online version of the plan remains unchanged as of Saturday morning.)

Scott attacked Biden for suggesting otherwise, but he also attacked Mitch McConnell, who had likewise condemned Scott’s plan, accusing both of engaging in “shallow gotcha politics, which is what Washington does.” He also accused “Washington politicians” of “lying to you every chance they get.” Scott’s venom illustrated the growing rift in the Republican Party.

Since the 1990s, Republicans have had an ideological problem: voters don’t actually like their economic vision, which has cut services and neglected infrastructure even as it has dramatically moved wealth upward. So to keep voters behind them, Republicans hammered on social and cultural issues, portraying those who liked the active government as godless socialists who were catering to minorities and women. “There is a religious war going on in this country,” Republican Pat Buchanan told the Republican National Convention in 1992. “It is a cultural war, as critical to the kind of nation we shall be as was the Cold War itself, for this war is for the soul of America.”

A generation later, that culture war has joined with the economic vision of the older party to create a new ideology. More than half of Republicans now reject the idea of a democracy based in the rule of law and instead support Christian nationalism, insisting that the United States is a Christian nation and that our society and our laws should be based in evangelical Christian values. Forty percent of the strongest adherents of Christian nationalism think “true American patriots may have to resort to violence in order to save our country,” while 22% of sympathizers agree with that position.

Scott released his 11-point plan because, he said, “Americans deserve to know what we will do when given the chance,” and his plan reflected the new Republicans. Sunsetting laws and tax cuts were only part of the plan. He promised to cut government jobs by 25% over the next five years, “sell off all non-essential government assets, buildings and land, and use the proceeds to pay down our national debt,” get rid of all federal programs that local governments can take over, cut taxes, “grow America’s economy,” and “stop Socialism.”

by Heather Cox Richardson, Letters From An American |  Read more:
[ed. Concise and accurate summation. Unfortunately.]

Grant Green

Grant Green, My One And Only Love
via:
[ed. Full version here.]

Sunday, March 5, 2023

Three Times

Claire Lehmann, The Mocking of Nature, 2019
[ed. From the essay Three Times (N+1), on pregnancy and abortion.]

Tom Gauld
via:

Re-Genesis

The term anitya (अनित्य) refers to the impermanence of all worldly things, and according to Wikipedia, it first appears in verse 1.2.10 of the Katha Upanishad. Buddhism and Hinduism share this doctrine, though they disagree on whether or not the Self exists. I was first made aware of the term and concept aged 24 during a meditation retreat in the Himalayas in April 1991. After 11 hours of meditation practice, we were sitting cross-legged on the floor, listening to the deep, patient voice of our absent guru, SN Goenka, issue from the speakers of a battered ashram blaster.

Goenka was emphasising the difference between understanding the reality of universal impermanence as a theoretical proposition—such as one learned in quantum mechanics, astrophysics, or the history of light entertainment—and grasping it on a more personal level, integrating it into one’s life, and leaving it on in the background like Alexa to monitor and mitigate one’s emotional reactions to life’s irritations. The difference, he said, was profound. It was everything.

My impulse was to shrug. So what? Yes, things change. I’ve noticed. Yes, the sands of time will run through the hourglass and the desert winds will blow away the dust of my bones and raze my vainglorious monuments to the ground. Big deal. I like change. New things replace the old and the world would be boring were it otherwise.

Well, I’m 57 now, and I’m less sanguine than I was about this sort of thing. To some changes, I am reconciled. Others sadden me, but I have accepted that it is less than politic to complain. But I am having particular difficulty accepting the slow disappearance and death of a cultural edifice I had always assumed to be eternal—rock music. Nor I think am I alone. Many from my generational cohort—Boomers before me and Gen-Xers after—seem to be stuck in the first stage of grief: denial.

I’m the same age as rock music—maybe exactly the same age. I was born on Sunday, May 9th, 1965, among the first members of Generation X. That was also the day The Beatles first saw Bob Dylan play live, at the Royal Albert Hall. The large gamete encountered the small and something entirely new was created. The same weekend, Dylan began scrawling the lyrics to ‘Like a Rolling Stone’ on a napkin at the Savoy hotel—a free-form screed of scornful contempt for his own generation’s Beautiful and Damned, which evolved over subsequent weeks into what is now (statistically, at any rate) the most acclaimed song of all time. Bruce Springsteen once described it as “a torrent that comes rushing towards you. Floods your soul, floods your mind,” and when it was released in July, it changed everything. That summer, rock ’n’ roll, folk, and blues, drugs, poetry, Byronic peacock swagger, disdain, and conceit all coalesced into the greatest sound the world had ever heard.

It didn’t last. The art historian Kenneth Clark once wrote somewhere that every artistic movement lasts a generation if you’re lucky. You get between 15 and 25 years before the candle begins to gutter. 1965–80 is the short rock century. 1955–80 is the long one, if you want to start with The King rather than his Jester. In his 2016 book, Never a Dull Moment, music journalist David Hepworth convincingly places rock music’s peak at 1971 (and in Uncommon People the following year, he dates the end of the rock star to 1994). So by the time I started hearing LPs that belonged to friends’ older siblings in 1977, the bloom was already coming off.

My full induction occurred on November 17th, 1979, when the Friday Rock Show’s Tommy Vance played through a listeners’ poll that lasted fully two hours. I remember my parents turning off the TV and generously leaving me to it, no idea of the irreversible changes being made to my brain. I already knew ‘Smoke on the Water,’ ‘Child in Time,’ and ‘Stairway to Heaven,’ but I was now introduced to ‘Shine on You Crazy Diamond,’ ‘Supper’s Ready,’ ‘Layla,’ and ‘Free Bird.’ It was like seeing the Taj Mahal, Hagia Sophia, and Chartres for the first time all on the same evening. I have never entirely come down since. But six months later, according to the Clark Formula, it was all over. Rock, like Axl, was a blown Rose.

Sure, there were aftershocks–Guns N’ Roses among them. America’s Indie scene produced REM, Pixies, and Sonic Youth, while grunge produced Nirvana. Troopers like AC/DC and the Stones kept on keeping on like nothing had changed while a handful of names from the ’70s like Ozzy and Aerosmith enjoyed successful second acts. But the whole scene increasingly resembled a postmodern pastiche, like the Disneyfied, castrated facsimile of Vegas at the end of Martin Scorsese’s Casino. Springsteen remains glorious but he is leading a revivalist prayer meeting, not imparting the original revelation. By the mid-’90s, the Rock and Roll Hall of Fame was honouring talent faster than Rock and Roll could generate it.

And now? A few good men have not deserted their posts but they are dying in their boots. The reinforcements never came. “Just about every rock legend you can think of,” Damon Linker wrote in an essay for the Week, “is going to die within the next decade or so.” The stats are grim and foretell a “tidal wave of obituaries”:
Behold the killing fields that lie before us: Bob Dylan (78 years old); Paul McCartney (77); Paul Simon (77) and Art Garfunkel (77); Carole King (77); Brian Wilson (77); Mick Jagger (76) and Keith Richards (75); Joni Mitchell (75); Jimmy Page (75) and Robert Plant (71); Ray Davies (75); Roger Daltrey (75) and Pete Townshend (74); Roger Waters (75) and David Gilmour (73); Rod Stewart (74); Eric Clapton (74); Debbie Harry (74); Neil Young (73); Van Morrison (73); Bryan Ferry (73); Elton John (72); Don Henley (72); James Taylor (71); Jackson Browne (70); Billy Joel (70); and Bruce Springsteen (69, but turning 70 next month).
A few of these legends might manage to live into their 90s, despite all the …wear and tear to which they’ve subjected their bodies over the decades. But most of them will not.

That essay was published four years ago. It’s not dark yet, but it’s getting there. And as these totems of cheerfully complacent youth and vitality meet their maker, Linker writes, it “will force us not only to endure their passing, but to confront our own mortality as well.” Concert attendances remain high, but demographics toll the bell. Those tickets, album sales, and streams are so heavily skewed towards the elderly now that the whole project is just one cold snap away from oblivion.

by Simon Evans, Quillette |  Read more:
Image: Genesis performing in 2007. Photo by Andrew Bossi via Wikipedia.
[ed. See also: The coming death of just about every rock legend (The Week):]

"Before rock emerged from rhythm and blues in the late 1950s, and again since it began its long withdrawing roar in the late 1990s, the norm for popular music has been songwriting and record production conducted on the model of an assembly line. This is usually called the "Brill Building" approach to making music, named after the building in midtown Manhattan where leading music industry offices and studios were located in the pre-rock era. Professional songwriters toiled away in small cubicles, crafting future hits for singers who made records closely overseen by a team of producers and corporate drones. Today, something remarkably similar happens in pop and hip-hop, with song files zipping around the globe to a small number of highly successful songwriters and producers who add hooks and production flourishes in order to generate a team-built product that can only be described as pristine, if soulless, perfection.

This is music created by committee and consensus, actively seeking the largest possible audience as an end in itself. Rock (especially as practiced by the most creatively ambitious bands of the mid-1960s: The Beatles, The Rolling Stones, The Kinks, and the Beach Boys) shattered this way of doing things, and for a few decades, a new model of the rock auteur prevailed. As critic Steven Hyden recounts in his delightful book Twilight of the Gods: A Journey to the End of Classic Rock, rock bands and individual rock stars were given an enormous amount of creative freedom, and the best of them used every bit of it. They wrote their own music and lyrics, crafted their own arrangements, experimented with wildly ambitious production techniques, and oversaw the design of their album covers, the launching of marketing campaigns, and the conjuring of increasingly theatrical and decadent concert tours.

This doesn't mean there was no corporate oversight or outside influence on rock musicians. Record companies and professional producers and engineers were usually at the helm, making sure to protect their reputations and investments. Yet to an astonishing degree, the artists got their way. Songs and albums were treated by all — the musicians themselves, but also the record companies, critics, and of course the fans — as Statements. For a time, the capitalist juggernaut made possible and sustained the creation of popular art that sometimes achieved a new form of human excellence. That it didn't last shouldn't keep us from appreciating how remarkable it was while it did."

Friday, March 3, 2023

Capitol Rioter Guilty of Stealing Badge From Beaten Officer

On January 6, 2021, a joint session of the United States Congress convened at the United States Capitol, which is located at First Street, SE, in Washington, D.C. During the joint session, elected members of the United States House of Representatives and the United States Senate were meeting in separate chambers of the United States Capitol to certify the vote count of the Electoral College of the 2020 Presidential Election, which had taken place on November 3, 2020. The joint session began at approximately 1:00 p.m. Shortly thereafter, by approximately 1:30 p.m., the House and Senate adjourned to separate chambers to resolve a particular objection. Vice President Mike Pence was present and presiding, first in the joint session, and then in the Senate chamber.

As the proceedings continued in both the House and the Senate, and with Vice President Mike Pence present and presiding over the Senate, a large crowd gathered outside the U.S. Capitol. As noted above, temporary and permanent barricades were in place around the exterior of the U.S. Capitol building, and U.S. Capitol Police were present and attempting to keep the crowd away from the Capitol building and the proceedings underway inside.
 
At such time, the certification proceedings were still underway and the exterior doors and windows of the U.S. Capitol were locked or otherwise secured. Members of the U.S. Capitol Police attempted to maintain order and keep the crowd from entering the Capitol; however, shortly after 2:00 p.m., individuals in the crowd forced entry into the U.S. Capitol, including by breaking windows and by assaulting members of the U.S. Capitol Police, as others in the crowd encouraged and assisted those acts. (...)

During national news coverage of the aforementioned events, video footage which appeared to be captured on mobile devices of persons present on the scene depicted evidence of violations of local and federal law, including scores of individuals inside the U.S. Capitol building without authority to be there. On January 6, 2021, Metropolitan Police Department (“MPD”) Officer M.F. responded to a radio call for assistance at the U.S. Capitol. Officer M.F. was in full MPD uniform and equipped with a body-worn camera. Officer M.F. responded to the west front of the U.S. Capitol and became involved with other officers in the effort to push back rioters from the doorway to the U.S. Capitol at the lower west terrace. While Officer M.F. was defending the doorway, a rioter pulled Officer M.F. into the crowd, where members of the crowd beat, tased, and robbed Officer M.F. of his MPD badge (#3603), police radio, and MPD-issued 17-round magazine, while also trying to forcibly remove his service weapon from its fixed holster. The radio was securely attached to Officer M.F.’s tactical vest, and the badge was pinned to the vest. As a rioter attempted to get Officer M.F.’s gun, Officer M.F. heard him yell words to the effect that he was going to take Officer M.F.’s gun and kill him. Following the assault, Officer M.F. lost consciousness and was hospitalized for his injuries, including a likely concussion and injuries from the taser. Officer M.F. was admitted to the hospital for monitoring of his cardiac activity. (...)

On January 21, 2021, FBI agents in Buffalo, NY, interviewed WITNESS 1 regarding potential information related to the events at the U.S. Capitol on January 6, 2021. WITNESS 1 reported that Thomas Sibick (“SIBICK”) posted a video of the riot to his Instagram account. WITNESS 1 provided the video, which shows footage of SIBICK using his cell phone to record himself on the inauguration ceremony stage of the lower west terrace where he screams, “Just got tear-gassed, but we’re going, baby, we’re going! We’re pushing forward now!” Figures 9 and 10 are still images from the video. (...)

An open source video posted on YouTube shows SIBICK exiting the tunnel, as shown in Figure 15. SIBICK can be heard saying, “Let me out. Let me out.” An individual in front of him assists SIBICK in getting out such that SIBICK is no longer visible on camera. At that point, a voice that likely belongs to the man who helped SIBICK tells SIBICK “Thank you for your service,” and then asks where he is from. A voice that seems to be SIBICK’s can be heard saying, “Buffalo,” and then states, “Let’s go. Let me just get refreshed.”

Agents conducted an initial interview of SIBICK on January 27, 2021. SIBICK acknowledged being in Washington, D.C. at the U.S. Capitol on January 6, 2021. SIBICK stated that while he was at the Capitol, he saw a D.C. Metro Police Officer being pulled down the steps and hit with what SIBICK described as a “flagpole.” SIBICK also reported seeing at least two other individuals beating the D.C. Metro Police Officer and attempting to get his gun, but were unable to do so because of the “plastic piece on top of the holster.” SIBICK heard someone say, “Get his gun and kill him.” SIBICK stated that he attempted to reach the officer to pull him away but was unable to get to him and at that point he feared for his life and that of the officer. SIBICK further stated that due to the violence, he decided to leave. When shown the photograph of SIBICK holding the riot shield, SIBICK said that the shield had been passed through the crowd and SIBICK asked a man next to him to take his picture with it. SIBICK further explained that a man in a tactical vest asked SIBICK if he was going to “use” the shield, to which SIBICK replied, “No.”

On February 2, 2021, SIBICK contacted one of the agents who had previously interviewed him and stated that he had been thinking about the individuals that assaulted the police officer and that he was going to email the agent with more information. When asked if he had anything different to add to his previous interview, SIBICK replied, “No.” When asked if the details he described in his prior interview were accurate and honest, SIBICK replied, “Yes.” The agent asked SIBICK if he had participated or was involved in any way in the assault of the D.C. police officer, and SIBICK replied, “No” and reiterated that he had tried to pull the officer away but was unsuccessful. 

On February 23, 2021, the agents re-interviewed SIBICK after law enforcement observed an individual consistent with SIBICK’s appearance on Officer M.F.’s body-worn camera. The agents showed SIBICK still shots from Officer M.F.’s body-worn camera. SIBICK admitted to grabbing the officer’s badge and radio. SIBICK stated that he had reached in to try to help the officer, and that he remembered the badge coming off as he reached for him. SIBICK said that he pressed the “emergency orange button” once he had possession of the radio to get help for the officer. SIBICK also stated that he dropped the badge and radio and left. When asked if he saw anyone pick up the radio and badge, SIBICK said that he carried the radio and badge with him when he left and dropped them in a trash can on Constitution Avenue. SIBICK stated that he thought about giving the items to an officer but was afraid of being arrested. Later in the interview, SIBICK stated that he needed to “recant” his statement and that he actually brought the items to his hotel room and then back to his home in Buffalo, NY. SIBICK stated that the day after he returned to Buffalo, he was planning to turn the items in to the FBI. However, he was afraid of being arrested and instead threw them in a dumpster on North Street in Buffalo. SIBICK later clarified that it was a dumpster located on the back alleyway of the Lenox Hotel at 140 North Street.

On February 25, 2021, an agent sent SIBICK a ruse email stating that the security cameras at the Lenox Hotel were going to be checked to confirm SIBICK’s statement that he disposed of the badge and radio in the dumpster. On February 26, 2021, SIBICK called the agent stating that he was distraught and “wanted to do the right thing.” SIBICK stated that he did not dispose of the badge in the dumpster behind the Lenox Hotel. Rather, he had buried the badge in his backyard. SIBICK stated that he purchased a metal detector to find the badge, which he then dug up, and that he wanted to return it. SIBICK stated he had actually thrown away the radio, however. Later that night, SIBICK met the agent and gave him a bag containing mud and Officer M.F.’s badge (#3603).

by Criminal Case Filing/US District Court, Judge G. Michael Harvey |  Read more (pdf):
Images: US Justice Dept.
[ed. What an asshole (among many). At least this'll always be on his resume. See also: Capitol Rioter Guilty of Stealing Badge From Beaten Officer (US News).]

How to Navigate the AI Apocalypse As a Sane Person

To frame the problem: we humans are all the same make and model. In the space of possible minds we are points all stacked essentially on top of one another, and the only reason we feel there is significant mental diversity among humans is that we are so zoomed in. Maybe the space of possible minds is really small? This is increasingly ruled out by the progress made in AI itself, which uses extremely different mechanisms from biology to achieve results equal in intelligence on a task (and now often greater). This indicates that there may be creatable minds located far away from all the little eight billion points stacked on top of each other, things much more intelligent than us and impossible to predict.

And what is more dangerous? The atom bomb, or a single entity significantly more intelligent than any human? The answer is the entity significantly more intelligent than any human, since intelligence is the most dangerous quality in existence. It’s the thing that makes atom bombs. Atom bombs are just like this inconsequential downstream effect of intelligence. If you think this is sci-fi, I remind you that superintelligences are what the leaders of these companies are expecting to happen: (...)

Forget if you find this proposed rate of progress outlandish (I think it’s unlikely): How precisely does Sam Altman [ed. CEO of OpenAI] plan on controlling something that doubles in intelligence every 18 months? And, even if Sam did have perfect control of a superintelligence, it’s still incredibly dangerous—and not just due to the concentration of power. E.g., let’s say you give a superintelligent AGI a goal. It could be anything at all (a classic example from Bostrom is maximizing the output of a paperclip factory). The first thing that a superintelligent agent does to make sure it achieves its goal is to make sure you don’t give it any more goals. After all, the most common failure mode for its goal would be receiving some other overriding goal from the human user who prompted it and has control over what it cares about. So now its first main incentive is quite literally to escape from the person who gave it the initial command! This inevitable lack of controllability is sometimes referred to as “instrumental convergence,” which is the idea that past a certain level of intelligence systems develop many of the habits of biological creatures (self-preservation, power gathering, etc), and there are a host of related issues like “deceptive alignment” and more.

In the most bearish case, a leaked/escaped AGI might realize that its best bet is to modify itself to be even more intelligent to accomplish its original goal, quickly creating an intelligence feedback loop. This and related scenarios are what Eliezer Yudkowsky is most worried about: (...)

Sometimes people think that issues around the impossibility of controlling advanced AI originated with now somewhat controversial figures like Nick Bostrom or Eliezer Yudkowsky—but these ideas are not original to them; e.g., instrumental convergence leading to a superintelligent AGI destroying humanity goes back to Marvin Minsky at MIT, one of the greatest computer scientists of the 20th century and a Turing Award winner. In other words, classic thinkers in these areas have worried about this for a long time. These worries are as well-supported and pedigreed as arguments about the future get.

More importantly, people at these very companies acknowledge these arguments. (...)

The thing is that it’s unclear how fast any of this is arriving. It’s a totally imaginable scenario that large language models stall at a level that is dumber than human experts on any particular subject, and therefore they make great search engines but only mediocre AGIs, and everyone makes a bunch of money. But AI research won’t end there. It will never end, now that it’s begun.

I find the above leisurely timeline scenario where AGI progress stalls out for a while plausible. I also find it plausible that AI, so unbound by normal rules of biology, fed unlimited data, and unshackled from metabolism and being stuck inside the limited space of a human skull, rockets ahead of us quite quickly, like in the next decade. I even find it conceivable that Eliezer’s greatest fear could come true—after all, we use recursive self-improvement to create the best strategic gaming AIs. If someone finds out a way to do that with AGI, it really could become superintelligent overnight, and, now bent on some inscrutable whim, do truly sci-fi stuff like releasing bioengineered pathogens that spread with no symptoms and then, after a month of unseen transmission, everyone on Earth, the people mowing their lawns, the infants in their cribs, the people eking out a living in slums, the beautiful vapid celebrities, all begin to cough themselves to death.

Point is: you don’t need to find one particular doomsday scenario convincing to be worried about AGI! Nor do you need to think that AGI will become worryingly smarter than humans at some particular year. In fact, fixating on specific scenarios is a bad method of convincing people, as they will naturally quibble over hypotheticals.

Ultimately, the problem of AI is something human civilization will have to reckon with in exactly the same way we have had decades of debates and arguments over what to do about climate change and nuclear weapons. AGI that surpasses humans sure seems on the verge of arriving in the next couple of years, but it could be decades. It could be a century. No matter when the AGIs we’re building surpass their creators, the point is that’s very bad. We shouldn’t feel comfortable living next to entities far more intelligent than us any more than wild animals should feel comfortable living next to humans. From the perspective of wildlife, we humans can change on a whim and build a parking lot over them in a heartbeat, and they’ll never know why. They’re just too stupid to realize the risk we pose and so they go about their lives. Comparatively, to be as intelligent as we are and to live in a world where there are entities that far surpass us is to live in a constant state of anxiety and lack of control. I don’t want that for the human race. Do you?

Meanwhile, AI-safety deniers have the arguments that. . . technology has worked out so far? Rah rah industry? Regulations are always bad? Don’t choke the poor delicate orchids that are the biggest and most powerful companies in the world? Look at all the doomsdays that didn’t happen so therefore we don’t need to worry about doomsdays? All of these are terrible reasons to not worry. The only acceptable current argument against AI safety is an unshakeable certainty that we are nowhere near getting surpassed. But even if that’s true, it just pushes the problem down the road. In other words, AI safety as a global political, social, and technological issue is inevitable.

by Erik Hoel, The Intrinsic Perspective |  Read more:
Image: Tom Gauld, via

Aristotle (and The Stoics)

John Sellars is the author of Lessons in Stoicism (published as The Pocket Stoic in the US), The Fourfold Remedy: Epicurus and the Art of Happiness (published as The Pocket Epicurean in the US), and numerous other books on Stoicism and Hellenistic philosophy. He is a reader in Philosophy at Royal Holloway, University of London, a visiting research fellow at King’s College London (where he is associate editor for the Ancient Commentators on Aristotle project), and a member of Common Room at Wolfson College, Oxford.

Sellars’s newest book, Aristotle: Understanding the World’s Greatest Philosopher, explores Aristotle’s central ideas on a range of topics, from morality and living the good life to biology and the political climate of Athens. It is lucid and concise, and suitable for both the neophyte and scholars of Aristotle alike—it details the particulars of Aristotle’s thought but also reexamines his importance as a philosopher and scientist more generally.

Sellars kindly agreed to be interviewed by Riley Moore for Quillette in February. The following transcript has been edited for length.
***
Riley Moore: It’s difficult to discuss Aristotle without discussing everything, because Aristotle wrote about everything—ethics, logic, biology, politics, literature; anything knowable, he investigated it. You go through this in detail in your newest book, Aristotle: Understanding the World’s Greatest Philosopher. Let’s pretend I have never heard of Aristotle. Who was Aristotle, biographically? Was he a pupil of Plato just as Plato was a pupil of Socrates? Is there a direct lineage there?

John Sellars: Yes, there is. Aristotle was originally from northern Greece. His father was a doctor who died when Aristotle was about 10 years old. Aristotle is then brought up by his uncle who had been a student at Plato’s Academy some years earlier. When Aristotle reaches 17 or 18, he goes to Athens to study at Plato’s Academy and stays there for 20 years. Plato is certainly the key point of reference. Socrates, Plato, and Aristotle make up this kind of triumvirate of significant Greek thinkers, and they’re all engaging with their predecessors. We see Aristotle wrestling with Plato’s ideas and ultimately trying to break away from them in order to develop his own independent views. That’s the beginning of Aristotle’s career.

Then Plato dies, and the Academy passes to Plato’s nephew. Aristotle decides that’s a good point to leave. Maybe Plato’s nephew and Aristotle didn’t get on, we don’t know, but he heads off to Asia Minor—what we would now call Turkey—with some other pupils from the Academy who left around the same time. Then he moves just a short distance to Lesbos, which is the nearest Greek island, and starts to study the natural world. In particular, he studies marine biology. He does that for a few years, and then he is invited back north to his home region by Philip of Macedon, the king, to tutor Philip’s young son, Alexander, who goes on to become Alexander the Great. It may have been that Aristotle’s father, the doctor, had been a physician at the Macedonian court. So, there may have been a family connection there.

Alexander grows up after a few years studying with Aristotle and sets off for his great adventure in the Middle East and all the way to India. Philip of Macedon is murdered around the same time, and Aristotle, having been Philip of Macedon’s guest, decides it’s not a very safe environment. So, he returns to Athens and sets up his own school, the Lyceum, as an alternative, in a sense, to the Academy. The last few years of his life were spent primarily in Athens. So, there’s a big early period in Athens and a big later period. There’s a brief interlude where he’s traveling to Lesbos and Macedonia.

RM: It’s speculated that Plato’s death prompted Aristotle to leave Athens when he didn’t inherit the Academy. Philip of Macedon’s death prompted Aristotle to return to Athens. These monumental deaths have a powerful impact on his life.

JS: Yes, that’s true. Of course, when Aristotle comes to Athens he’s an outsider. He’s not an Athenian citizen. This may have played a role in the story. As a non-Athenian, he wouldn’t have been eligible to own property. If the succession of the Academy involved the transfer of property, it may well have been that Aristotle just wouldn’t have been a plausible successor because he couldn’t have owned anything. He’s kind of an orphan and an outsider with this slightly transient lifestyle. But at the same time, his father may have been at the court of Macedon. Aristotle’s obviously got enough private income to devote his life to intellectual pursuits. He’s moving in quite high circles. But at the same time, he doesn’t really have the social stability and security that one might want.

RM: Can you sketch the ground covered by Aristotle’s work?

JS: It covers nearly everything. It covers everything that we think of as philosophy today. Metaphysics, epistemology, ethics, aesthetics, political philosophy, logic. All the things that we would think of as parts of philosophy, he does all of that. But he’s also engaged in a series of wider intellectual pursuits that have now become disciplines in their own right. There’s a sense in which his work really founds the discipline of biology. No one’s done that kind of work carefully and closely—studying particular types of creatures—before him.

He starts with metaphysics when he’s in Plato’s Academy—the most abstract and complex stuff. And then he moves on to the study of nature. One presumes that his later time with Philip of Macedon and Alexander prompted him to become more interested in political questions. He spends time studying literature, thinking about Greek tragedy, thinking about what makes a good work of art. On the one hand, we might think of Aristotle as a kind of research scientist, but these days we wouldn’t imagine a research scientist to also be interested in questions of literary theory. But Aristotle is actually doing both. In our hyper-specialized world today, that rarely happens.

RM: Could you define “metaphysics” broadly? What does it mean to Aristotle?

JS: “Metaphysics” is a modern word. It’s not one that Aristotle himself would have known. According to legend—and some might dispute the story—Aristotle’s lecture notes were lost for a while after his death. One of his pupils inherited them. The notes were just ignored for a century or two, and then they were rediscovered and edited. A series of works about the natural world were grouped together and given the title Physics. And a series of other works were then grouped together and called Metaphysica, meaning “after the physics.” So, the word doesn’t refer to something beyond the physical world or supernatural or anything like that. It’s just the work that comes after the Physics. Aristotle called the content of those works “first philosophy”—the fundamental questions that deal with the most basic facts about the nature of what exists. So, if we’re asking about the nature of being or existence, we’re asking the most fundamental question because it applies to everything. Then we can ask questions like “What’s a living being?” But that’s also a much narrower question because it doesn’t apply to everything that is, it’s just a subcategory.

by Riley Moore, Quillette |  Read more:
Image: Rembrandt's Aristotle with a Bust of Homer (1653)
[ed. See also: How to be an Aristotelian (Antigone).]

Thursday, March 2, 2023

Wayne Shorter (1933-2023)

In Memoriam: Wayne Shorter, 1933-2023 (Downbeat)

Being Hapa (Or Not)


Sunset in Waikiki: Tourists sipping mai tais crowded the beachside hotel bar. When the server spotted my friend and me, he seemed to relax. "Ah," he said, smiling. "Two hapa girls."

He asked if we were from Hawaii. We weren't. We both have lived in Honolulu — my friend lives there now — but hail from California. It didn't matter. In that moment, he recognized our mixed racial backgrounds and used "hapa" like a secret handshake, suggesting we were aligned with him: insiders and not tourists.

Like many multiracial Asian-Americans, I identify as hapa, a Hawaiian word for "part" that has spread beyond the islands to describe anyone who's part Asian or Pacific Islander. When I first learned the term in college, wearing it felt thrilling in a tempered way, like trying on a beautiful gown I couldn't afford. Hapa seemed like the identity of lucky mixed-race people far away, people who'd grown up in Hawaii as the norm, without "Chink" taunts, mangled name pronunciations, or questions about what they were.

Over time, as more and more people called me hapa, I let myself embrace the word. It's a term that explains who I am and connects me to others in an instant. It's a term that creates a sense of community around similar life experiences and questions of identity. It's what my fiancé and I call ourselves, and how we think of the children we might have: second-generation hapas.

But as the term grows in popularity, so does debate over how it should be used. Some people argue that hapa is a slur and should be retired. "[It] is an ugly term born of racist closed-mindedness much like 'half-breed' or 'mulatto,'" design consultant Warren Wake wrote to Code Switch after reading my piece on a "hapa Bachelorette."

Several scholars told me it's a misconception that hapa has derogatory roots. The word entered the Hawaiian language in the early 1800s, with the arrival of Christian missionaries who instituted a Hawaiian alphabet and developed curriculum for schools. Hapa is a transliteration of the English word "half," but quickly came to mean "part," combining with numbers to make fractions. (For example, hapalua is half. Hapaha is one-fourth.) Hapa haole — part foreigner — came to mean a mix of Hawaiian and other, whether describing a mixed-race person, a fusion song, a bilingual Bible, or pidgin language itself.

This original use was not negative, said Kealalokahi Losch, a professor of Hawaiian studies and Pacific Island studies at Kapi'olani Community College. "The reason [hapa] feels good is because it's always felt good," he told me. Losch has been one of the few to study the earliest recorded uses of the term, buried in Hawaiian-language newspapers, and found no evidence that it began as derogatory. Because the Hawaiian kingdom was more concerned with genealogy than race, he explained, if you could trace your lineage to a Hawaiian ancestor, you were Hawaiian. Mixed Hawaiian did not mean less Hawaiian.

Any use of hapa as a slur originated with outsiders, Losch said. That includes New England missionaries, Asian plantation workers and the U.S. government, which instituted blood quantum laws to limit eligibility for Hawaiian homestead lands. On the continental U.S., some members of Japanese-American communities employed hapa to make those who were mixed "feel like they were not really, truly Japanese or Japanese-American," said Duncan Williams, a professor of religion and East Asian languages and cultures at the University of Southern California. He said this history may have led some to believe the word is offensive. (...)

The desire of many Native Hawaiians to reclaim this word is often linked to a larger call for change. In Hawaii, a growing sovereignty movement maintains that the late 19th-century overthrow and annexation of the kingdom were illegal and the islands should again exercise some form of self-governance. But even within that movement opinions on hapa vary. I spoke with attorney Poka Laenui, who said he has been involved in the Hawaiian sovereignty movement for more than 40 years. He told me, in the "idea of aloha" — the complex blend that includes love, compassion and generosity — he doesn't mind if the term is shared. "If our word can be used to assist people in identifying and understanding one another, who am I to object?" he said.

Linguist and consultant Keao NeSmith told me he was shocked the first time he heard hapa outside of a Native Hawaiian context. NeSmith, who grew up on Kauai, learned more about the wider use of hapa when interviewed for a PRI podcast last year. Hearing the episode, his family and friends were shocked, too. "It's a new concept to many of us locals here in Hawaii to call Asian-Caucasian mixes 'hapa' like that," NeSmith said. "Not that it's a bad thing." (...)

That broad interpretation of the word may have its roots in Hawaii, where I have friends descended from Japanese and Chinese immigrants who grew up thinking hapa meant part Asian. Elsewhere in the islands, "hapa haole" continued to mean part Hawaiian. This makes literal sense in that "part foreigner" describes only what is different, with the dominant race or culture assumed. It's like how I might answer, "half Japanese" to "What are you?"-type questions; where whiteness is normalized, it doesn't have to be named.

by Akemi Johnson, NPR |  Read more:
Images: Jennifer Qian for NPR; and, Akemi Johnson
[ed. I grew up in Hawaii and am hapa (half Caucasian/half Japanese). All this racial slicing and dicing is a recent construct to me, important for reasons I can't quite fathom. From personal experience (in Hawaii), hapa always meant part Caucasian/part Asian, part Caucasian/part Hawaiian, part Caucasian/some sort of other race(s). Generally, you'd never hear anyone who wasn't partly Caucasian call themselves hapa if they were, say, just of mixed Asian races, or anything else (Hawaiian, Portuguese, Samoan, other Pacific Islanders, etc.). Always Caucasian/something. And being hapa was valued, something aesthetically attractive, having no clear racial characteristics. If you were of mixed races, with no Caucasian element, you identified with whatever the predominant weighting was (e.g., half Chinese and half a bunch of other stuff? Chinese). Not sure why this is more important these days. Eventually, we'll all be mutts anyway.]

Wednesday, March 1, 2023


Romare Bearden, Jamming at the Savoy, Brooklyn Museum 1981; and Pittsburgh Memory (1964)
via: here and here


Brandan Henry, A Child's Dream, 2020
via:

via:

The Man Behind Bag Fees

John Thomas is a mild-mannered airline consultant, a cheerful native of Australia with a ready laugh who is known for throwing great parties at his Needham home. So why do his friends want to stick pins into a voodoo doll of his likeness?

Thomas, 54, is the guy who brought baggage fees to airlines in North America. He first advised carriers to start charging for checked luggage in 2008, setting off a chain reaction that saw one airline after another adopt the charge and opening the floodgates for a steady stream of other new fees.

Passengers fumed, but analysts say it was necessary at a time when jet fuel prices were soaring and the industry seemed near collapse.

Without the infusion of cash provided by baggage fees — which now generate more than $3 billion a year — some airlines might have shut down, said Jay Sorensen, president of the Wisconsin-based travel consulting firm IdeaWorksCompany.

“It was a tsunami of money,’’ Sorensen said. “I would credit bag fees with saving the industry that year.’’

Bag fees don’t affect Thomas, though. He either flies on his seven-passenger Cessna Citation jet, which he keeps at Norwood Memorial Airport, or stuffs his belongings in a carry-on — even on a three-week trip to China.

His response to the irony? “Oops.’’ And then a sheepish giggle. (...)

Thomas works from LEK’s Boston office, where his main job is helping airlines make money, something the industry has needed desperately in recent years. [ed. This was written in 2013; fees continue to persist.]

Carriers were still recovering from losses after the 9/11 terrorist attacks when oil prices started to climb in 2007, eventually hitting record highs. Meanwhile, online travel sites made it easier for customers to compare prices and more difficult for airlines to raise fares to cover rising costs.

“They were looking at certain economic death,’’ Thomas said. 

Thomas first proposed baggage fees to a Canadian carrier in 2006. Several low-cost airlines in Europe started imposing bag fees around that time, but none of the major North American carriers had. The Canadian airline, which Thomas can’t reveal under his contract, didn’t go along with baggage fees — or with another of Thomas’s suggestions, to sell teeth whitener to passengers on red-eye flights.

So, when a US airline came to him for ways to generate new revenue streams, Thomas had a solution in hand. He and his team did a route-by-route analysis for the carrier, which Thomas also can’t identify, determining that revenue gained from bag fees would more than offset any loss of passengers if competitors didn’t do the same.

Still, airline executives were nervous. If the airline lost more customers than he projected, Thomas’s reputation, and LEK’s, would have suffered. “It’s my head on the line,’’ he said.

On Feb. 4, 2008, United Airlines announced it would charge $25 for the second checked bag. Within two weeks, US Airways said it would do the same, and almost all the major carriers except Southwest Airlines followed, according to LEK.

In May that year, a week before United was to implement the fee, American Airlines said it would charge for the first bag, too.

The other carriers that were planning to make passengers pay for the second bag said they would also start charging for the first, with the exception of JetBlue Airways.

“We were ecstatic,’’ Thomas said. The success ushered in the era of airlines imposing fees for services once included in ticket prices, such as sitting in a window seat, while adding new charges for perks such as extra legroom and early boarding.

“Baggage fees were the first horse out of the barn and the door was never closed,’’ said Sorensen, of IdeaWorks.

The revenue stream that resulted has been credited with helping to stabilize the industry. From 2008 to 2011, non-ticket revenue reported by airlines around the world more than doubled, to $22.6 billion from $10.3 billion, according to IdeaWorks.

Passengers have grown resigned to these fees. Ken Lynch of Mont Vernon, N.H., usually travels with a carry-on and pays to board early so he doesn’t have to battle for space in overhead bins. The 6-foot-5 technology and banking consultant, who flies once or twice a month, also shells out for extra legroom.

“Everybody hates the airlines,’’ he said. “It’s the modern-day version of the stagecoach: It’s uncomfortable, cramped, and the air stinks. The only thing missing is the smell of horse manure.’’

Thomas knows he’s not going to win any popularity contests among fliers. His wife, Paula Vanderhorst, gets a kick out of telling flight attendants that they have him to blame for all the passengers jamming carry-ons into overhead bins. (...)

But that won’t stop Thomas from dreaming up new ways for airlines to make money. He advised a British carrier to charge passengers $100 to guarantee that the seat next to them would be empty and recommended that another airline offer a service that picks up passengers’ bags at home and delivers them to their destination.

by Katie Johnston, Boston.com |  Read more:
Image: Josh Reynolds for The Boston Globe
[ed. God forbid some airline company goes out of business because it can't compete. I will never fly United, ever. Ever. And this guy? It's never one person, and these luggage fees and other pricing decisions were made by the airlines themselves, but hopefully there's some special place in hell for people who gladly make the lives of millions of others more miserable and costly (and are proud of it).]

via:

Subjective Ageing

The puzzling gap between how old you are and how old you think you are. There are good reasons you always feel 20 percent younger than your actual age.

This past Thanksgiving, I asked my mother how old she was in her head. She didn’t pause, didn’t look up, didn’t even ask me to repeat the question, which would have been natural, given that it was both syntactically awkward and a little odd. We were in my brother’s dining room, setting the table. My mother folded another napkin. “Forty-five,” she said.

She is 76.

Why do so many people have an immediate, intuitive grasp of this highly abstract concept—“subjective age,” it’s called—when randomly presented with it? It’s bizarre, if you think about it. Certainly most of us don’t believe ourselves to be shorter or taller than we actually are. We don’t think of ourselves as having smaller ears or longer noses or curlier hair. Most of us also know where our bodies are in space, what physiologists call “proprioception.”

Yet we seem to have an awfully rough go of locating ourselves in time. A friend, nearing 60, recently told me that whenever he looks in the mirror, he’s not so much unhappy with his appearance as startled by it—“as if there’s been some sort of error” were his exact words. (High-school reunions can have this same confusing effect. You look around at your lined and thickened classmates, wondering how they could have so violently capitulated to age; then you see photographs of yourself from that same event and realize: Oh.) The gulf between how old we are and how old we believe ourselves to be can often be measured in light-years—or at least a goodly number of old-fashioned Earth ones. (...)

But “How old do you feel?” is an altogether different question from “How old are you in your head?” The most inspired paper I read about subjective age, from 2006, asked this of its 1,470 participants—in a Danish population (Denmark being the kind of place where studies like these would happen)—and what the two authors discovered is that adults over 40 perceive themselves to be, on average, about 20 percent younger than their actual age. “We ran this thing, and the data were gorgeous,” says David C. Rubin (75 in real life, 60 in his head), one of the paper’s authors and a psychology and neuroscience professor at Duke University. “It was just all these beautiful, smooth curves.”

Why we’re possessed of this urge to subtract is another matter. Rubin and his co-author, Dorthe Berntsen, didn’t make it the focus of this particular paper, and the researchers who do often propose a crude, predictable answer—namely, that lots of people consider aging a catastrophe, which, while true, seems to tell only a fraction of the story. You could just as well make a different case: that viewing yourself as younger is a form of optimism, rather than denialism. It says that you envision many generative years ahead of you, that you will not be written off, that your future is not one long, dreary corridor of locked doors.

I think of my own numbers, for instance—which, though a slight departure from the Rubin-Berntsen rule, are still within a reasonable range (or so Rubin assures me). I’m 53 in real life but suspended at 36 in my head, and if I stop my brain from doing its usual Tilt-A-Whirl for long enough, I land on the same explanation: At 36, I knew the broad contours of my life, but hadn’t yet filled them in. I was professionally established, but still brimmed with potential. I was paired off with my husband, but not yet lost in the marshes of a long marriage (and, okay, not yet a tiresome fishwife). I was soon to be pregnant, but not yet a mother fretting about eating habits, screen habits, study habits, the brutal folkways of adolescents, the porn merchants of the internet.

I was not yet on the gray turnpike of middle age, in other words. (...)

Ian Leslie, the author of Conflicted and two other social-science books (32 in his head, 51 in “boring old reality”), took a similar view to mine and Richard’s, but added an astute and humbling observation: Internally viewing yourself as substantially younger than you are can make for some serious social weirdness.

“30 year olds should be aware that for better or for worse, the 50 year old they’re talking to thinks they’re roughly the same age!” he wrote. “Was at a party over the summer where average was about 28 and I had to make a conscious effort to remember I wasn’t the same—they can tell of course, so it’s asymmetrical.”

Yes. They can tell. I’ve had this unsettling experience, seeing little difference between the 30-something before me and my 50-something self, when suddenly the 30-something will make a comment that betrays just how aware she is of the age gap between us, that this gap seems enormous, that in her eyes I may as well be Dame Judi Dench.

by Jennifer Senior, The Atlantic |  Read more:
Image: Klaus Kremmerz
[ed. For me, it varies. Mostly around 45-55. But sometimes (say, where risk or self-control is involved) it's more like 17-21.]