Wednesday, January 8, 2025

Dying a “Good” Death

Disability and the Assisted Suicide Debate

I underwent countless appointments and expensive tests, and was continually gaslit and neglected by medical professionals. I was told that mast cell activation disorder wasn’t real (it is very real). I lost twenty pounds in a month and reacted to anything I consumed. I began to throw up stomach acid because I could consume so little. I moved back home with my parents. As my reactions got worse and doctors shrugged in confusion, I grew desperate. Starving to death and having daily allergic reactions to things like sunlight on my skin or warm shower water was torturous. I gave myself an ultimatum: if I could not secure care, I would take my life.

I began to consider what death would be like. I looked up assisted suicide programs across the world. In doing so, I came across countless stories – some in favor of medical euthanasia, others firmly against it. I found news articles, white papers, research articles, and personal stories about the benefits and harms of assisted suicide. I read stories of disabled people who felt compelled to choose death because the world framed them as burdens. I watched interviews detailing flawed ethics and issues of consent. I came to think very differently about death. (...)

Amongst many others, these experiences have shaped my relationship with death. In all of these cases, medical and welfare supports could have improved the experience of death for everyone involved – for those dying, for caregivers, and for extended family members. I don’t think an early death, in any of these cases, would have necessarily been more “humane” or “dignified” than those that occurred. While these experiences are deeply personal, I believe they are emblematic of broader concerns with assisted dying.

In my opinion, those protesting against assisted dying laws are not asking dying people to suffer. The issue is not as black and white as people believe it to be. Those of us protesting do not want people to be in needless pain; instead, we want resources other than death to alleviate suffering while people are still alive. We ask that policies ensure that no one is coerced or forced into choosing assisted dying. Similarly, we demand that policies be airtight, with no legal loopholes. While internet debates have been furious over the past few weeks, I think we all want the same thing: an experience of death that honors and values the individual’s life. We differ, though, in what that experience looks like. (...)

Despite its roots in the eugenics movement, today twelve countries around the world permit assisted suicide in certain localities and certain cases. Right to Die Societies around the world are likewise pushing for expanded access to these programs for people who are not deemed terminally ill, citing that the right to die is rooted in individual liberties and compassion as “it is not always possible to relieve suffering.” Those in favor of assisted dying believe that we can enact legislation to ensure that the right to die is not abused by the state and that it will not be used to forward eugenics.

People are clearly suffering, but many are suffering from a lack of meaningful supports. As assisted dying programs expand, a cultural resonance follows. This cultural framework implies that living with significant medical or care needs is burdensome and that you should choose death because palliative care poses challenges for loved ones. In a hyper-capitalist world where palliative care costs an exorbitant amount of money, where insurance frequently denies claims for supports, and where loved ones often do not have the time to perform caregiving because they need every working hour to scrape by, assisted dying offers a financial reprieve that is important to many people. The fact that death alone offers financial relief, time relief, and potentially relief from emotional distress is heinous. Death appears to be cost efficient in a world where healthy nondisabled people are considered valuable while older, sick, and disabled people are financial liabilities.

Despite the ongoing and active resistance of disabled people, it seems that assisted dying is only growing at an international scale. Safeguards have been violated for decades and yet new legislation continues to pass. At the same time, welfare cuts and gaps in medical care grow exponentially. In my opinion, assisted dying offers a neat and clean excuse for governments to further cut services instead of interrogating their failures. The core question at the heart of this debate is not “do you believe people have the right to die without suffering” but rather “what is an acceptable amount of collateral for you to have a peaceful death?”

by Nicole Schroeder, Disability Visibility Project |  Read more:
Image: uncredited
[ed. Point taken, but I'm still a strong advocate for right-to-die options. We routinely use euthanasia for suffering pets, but not suffering people. I know many consider this an unsophisticated and/or un-nuanced view of the issue, but I also believe it should be an individual's inherent 'right' to decide their own fate under similar circumstances. We're taught from birth and throughout our lives to exert self-control, then that control is taken away at precisely the time when it might matter the most. Unfortunately, the process of acquiring multiple approvals, and the various caveats involved, are so byzantine as to be nearly unworkable. Yet among the many people who do finally gain approval, a significant number never follow through. All that matters is that they have a choice - an option in case their quality of life and/or care becomes untenable. So yes, a better system of palliative care should be a given, but until there is one (and there won't be soon) there should be an additional option. I know personally that I'm way more concerned about the process of dying than dying itself. It shouldn't have to be that way.]

The Parable of Anna Akhmatova

Akhmatova was a promising poet in the days before the Soviet Revolution, but her physical presence was just as compelling as her writing. Modigliani made at least twenty paintings of Akhmatova, and she had an affair with the famous poet Osip Mandelstam. Nobel laureate Boris Pasternak proposed marriage to her on multiple occasions.

Even far away at Oxford, philosopher and intellectual historian Isaiah Berlin—whom I considered the most brilliant person in the entire University when I was a student there—allegedly pined away with romantic longings based on his brief encounter with Akhmatova 35 years before.
 
I don’t think it’s going too far to claim that she could have been a movie actress, given her beauty and allure.

But Akhmatova was crushed under Soviet rule.

Not only was her poetry sharply criticized and censored, but the secret police bugged her apartment, and kept her under surveillance.

She was silenced so completely that many people simply assumed she was dead.

by Ted Gioia, Honest Broker |  Read more:
Image: Nathan Altman, 1915

Tuesday, January 7, 2025

How to Be a Good Rhythm Guitarist

[ed. Placeholder. Eric definitely has a unique personality (New Yorker). But this is a good lesson with some great songs.]

Death of a Scientist

Norman Maclean’s Young Men and Fire is an unfinished nonfiction book devoted to the 1949 Mann Gulch fire in the woods of Montana, in which 13 of the firefighters who had parachuted in to fight it died. Maclean spent decades returning over and over again to the event, revisiting the site, interviewing survivors, and debating with scientists.

As the editor of the posthumously published edition, Alan Thomas, explained, Maclean struggled with the technicalities of fire science, and those difficulties were part of the reason he never finished the book. ...It clearly isn’t a masterpiece on the same level as his famous novella, A River Runs Through It, but it still stuck with me long after I finished.

I think that’s because there was another book that was taking shape inside the draft, and Maclean did not live long enough to complete the transformation. This other book would have been not a conventional piece of science journalism but a personal meditation on mortality and his own search for meaning: more “Old Men and Death” than “Young Men and Fire.”

I started to figure this out at about the halfway point, where there’s a powerful scene in which you can feel the book shifting from the determinedly factual into something else. It recounts the last day in the life of Harry Gisborne, a Forest Service scientist who was investigating the Mann Gulch fire. There’s a lot of wisdom here:
To Gisborne, science started and ended in observation, and theory should always be endangered by it. … He said to [Bob] Jansson: “I’m glad I got a chance to get up here. Tomorrow we can get all our dope together and work on Hypothesis Number One. Maybe it will lead to a theory.” This was at rest stop 35. By now the rest stops were becoming stations of the cross.

They were following a game trail along the cliffs high above the Missouri River at the lower end of the Gates of the Mountains, and were only a quarter or a half mile from their truck when they reached stop 37. Gisborne sat down on a rock and said: “Here’s a nice place to sit and watch the river. I made it good. My legs might ache a little, though, tomorrow.”

In his report, Jansson says: “I think Gisborne’s rising at point 37 on the map was due to the attack hitting him.” He goes on to explain in parenthesis that “thrombosis cases usually want to stand or sit up because of difficulty in breathing.” Gisborne died within a minute, and Jansson piled rocks around him so he would not fall off the game trail into the Missouri River a hundred feet below.

When Jansson knew Gisborne was dead, he stretched him out straight on the game trail, built the rocks around him higher, closed his eyes, and then put his glasses back on him so that, just in case he woke up, he could see where he was.

Then Jansson ran for help. The stars came out. Nothing moved on the game trail. The great Missouri passing below repeated the same succession of chords it probably will play for a million years to come. The only other motion was the moon floating across the lenses of Gisborne’s glasses, which at last were unobservant.

This is the death of a scientist, a scientist who did much to establish a science. On the day of his death he had the pleasure of discovering that his theory about the Mann Gulch blowup was wrong. It would be revealing if tomorrow had come and he had got all his dope together and had worked out a new Hypothesis Number One. Maybe it would have led to another theory, probably the right one.

In any case, because of him we have been able to form what is likely the correct theory. Gisborne’s portrait hangs on the staircase of the Northern Forest Fire Laboratory in Missoula, which immediately adjoins the Smokejumper base. He looks you square in the eye but is half amused as if he had caught you too attached to one of your theories, or one of his. …

For a scientist, this is a good way to live and die, maybe the ideal way for any of us–excitedly finding we were wrong and excitedly waiting for tomorrow to come so we can start over, get our new dope together, and find a Hypothesis Number One all over again. And being basically on the right track when we were wrong.
This was on November 9, 1949; Gisborne was only 56 years old.

by Andrew Batson, The Tangled Woof |  Read more:
Image: Young Men and Fire

Watch Duty

How a wildfire monitoring app became essential in the US west

Cristy Thomas began to panic as she called 911 for the second time on a warm October day but couldn’t get through. She anxiously watched the plume of black smoke pouring over her rural community in central California get larger.

Then she heard a familiar ping.

Watch Duty, an app that alerts users to wildfire risk and provides critical information about blazes as they unfold, had already registered the fire. She relaxed. The cavalry was coming.

“I can’t tell you the sigh of relief,” she said, recalling how soon after sirens blared through the neighborhood and helicopters thundered overhead. “We were seeing it happen and we had questions – but Watch Duty answered all of them.”

Thomas is one of millions of Watch Duty evangelists who helped fuel the meteoric rise of the app. In just three years since it launched, the organization now boasts up to 7.2 million active users and up to 512m pageviews at peak moments. For a mostly volunteer-run non-profit, the numbers are impressive, even by startup standards. But they are not surprising.

Watch Duty has changed the lives of people in fire-prone areas. No longer left to scramble for information when skies darken and ash fills the air, users can now rely on an app for fast and accurate intel – and it’s free.

It offers access to essential intel on where dangers are, with maps of fire perimeters, evacuation areas and where to go for shelter. Users can find feeds of wildfire cameras, track aircraft positions and see wind data all in one place. The app also helps identify when there’s little cause for alarm, when risks have subsided, and what agencies are working in the trenches.

“The app is not just about alerts, it is about a state of mind,” Watch Duty’s CEO, John Mills, said. The Silicon Valley alum founded the organization after moving from San Francisco to a sprawling ranch in Sonoma county where fire dangers are high. After starting in just four California counties, Watch Duty covered the entire state in its first year before rapidly expanding across the American west and into Hawaii.

As the community has grown – reaching people across 14 states in 2024 – new features and enhanced precision have cemented its popularity and, according to Mills, filled unmet needs.

In recent years, it’s not just residents who have come to rely on the app. An array of responders, from firefighters to city officials to journalists, are also logging on, ensuring key actors are on the same page.

“People always thank me for Watch Duty, and I am like, ‘you’re welcome – and I am sorry that you need it,’” Mills said. But it’s clear that the need is real. In each new area where they have offered the service, word of mouth has driven usage.

“We spent no money on marketing at all,” Mills said. “We just let the genie out of the bottle so the world would know things could never go back to the way things were.”

The app sprouted out of an emergency information ecosystem on social media that has for years communicated unofficial information. But unlike other platforms that seek to capture user attention and keep it there, Watch Duty has no algorithms that filter or muddy important information.

It relies on volunteers dubbed “reporters” who listen for emergency updates in the low hum of radio static, analyze data from the National Weather Service and other sources, and discuss findings with one another before sending push notifications to their active user base.

Run by real people, including active and retired wildland firefighters, dispatchers and veteran storm watchers, the team collaborates to quickly gather and vet information when a fire ignites.

An automated dispatch relays 911 alerts via Slack, kicking Watch Duty reporters in the particular region into action. Radio scanners, wildfire cameras, satellites and announcements from officials are scoured for intel. When conditions are confirmed they post the information, adding a push notification to users in the area if there’s a threat to life or property.

The network is fueled by hundreds of people who donate their time and a small staff of just 15 reporters and engineers. Together, they have alerted the public to more than 9,000 wildfires this year.

by Gabrielle Canon, The Guardian | Read more:
Image: Gabrielle Canon/The Guardian; Jon Putman/Sopa Images/Rex/Shutterstock
[ed. Imagine that, somebody (Mr. Mills) developing a life saving tool just for altruistic reasons, no profit motive involved. I don't know what's more impressive, the app or the developer (and the volunteers who make it happen). Contrast with the post below. Downloaded it straight-away.]

Never Forgive Them

In the last year, I’ve spent about 200,000 words on a kind of personal journey where I’ve tried again and again to work out why everything digital feels so broken, and why it seems to keep getting worse, despite what tech’s “brightest” minds might promise. More regularly than not, I’ve found that the answer is fairly simple: the tech industry’s incentives no longer align with the user.

The people running the majority of internet services have used a combination of monopolies and a cartel-like commitment to growth-at-all-costs thinking to make war with the user, turning the customer into something between a lab rat and an unpaid intern, with the goal of juicing as much value from the interaction as possible. To be clear, tech has always had an avaricious streak, and it would be naive to suggest otherwise, but this moment feels different. I’m stunned by the extremes tech companies will go to in extracting value from customers, but also by the insidious way they’ve gradually degraded their products.
 
To be clear, I don’t believe that this gradual enshittification is part of some grand, Machiavellian long game by the tech companies, but rather the product of multiple consecutive decisions made in response to short-term financial needs. Even if it was, the result would be the same — people wouldn’t notice how bad things have gotten until it’s too late, or they might just assume that tech has always sucked, or they’re just personally incapable of using the tools that are increasingly fundamental to living in a modern world.

You are the victim of a con — one so pernicious that you’ve likely tuned it out despite the fact it’s part of almost every part of your life. It hurts everybody you know in different ways, and it hurts people more based on their socioeconomic status. It pokes and prods and twists millions of little parts of your life, and it’s everywhere, so you have to ignore it, because complaining about it feels futile, like complaining about the weather.

It isn’t. You’re battered by the Rot Economy, and a tech industry that has become so obsessed with growth that you, the paying customer, are a nuisance to be mitigated far more than a participant in an exchange of value. A death cult has taken over the markets, using software as a mechanism to extract value at scale in the pursuit of growth at the cost of user happiness.

These people want everything from you — to control every moment you spend working with them so that you may provide them with more ways to make money, even if doing so doesn’t involve you getting anything else in return. Meta, Amazon, Apple, Microsoft and a majority of tech platforms are at war with the user, and, in the absence of any kind of consistent standards or effective regulations, the entire tech ecosystem has followed suit. A kind of Coalition of the Willing of the worst players in hyper-growth tech capitalism.

Things are being made linearly worse in the pursuit of growth in every aspect of our digital lives, and it’s because everything must grow, at all costs, at all times, unrelentingly, even if it makes the technology we use every day consistently harmful.

This year has, on some level, radicalized me, and today I’m going to explain why. It’s going to be a long one, because I need you to fully grasp the seriousness and widespread nature of the problem. (...)

We as a society need to reckon with how this twists us up, makes us more paranoid, more judgmental, more aggressive, more reactionary, because when everything is subtly annoying, we all simmer and suffer in manifold ways. There is no digital world and physical world — they are, and have been, the same for quite some time, and reporting on tech as if this isn’t the case fails the user. It may seem a little dramatic, but take a second and really think about how many little digital irritations you deal with in a day. It’s time to wake up to the fact that our digital lives are rotten.

I’m not talking about one single product or company, but most digital experiences. The interference is everywhere, and we’ve all learned to accept conditions that are, when written out plainly, kind of insane. (...)

As every single platform we use is desperate to juice growth from every user, everything we interact with is hyper-monetized through plugins, advertising, microtransactions and other things that constantly gnaw at the user experience. We load websites expecting them to be broken, especially on mobile, because every single website has to have 15+ different ad trackers and video ads that cover large chunks of the screen, all while demanding our email address or permission to send us notifications.

Every experience demands our email address, and giving out our email address adds another email to inboxes already stuffed with two types of spam — the actual “get the biggest laser” spam that hits the junk folder automatically, and the marketing emails we receive from clothing brands we wanted a discount from or newspapers we pay for that still feel it’s necessary to bother us 3 to 5 times a day. I’ve basically given up trying to fight back — how about you? (...)

It isn’t that you don’t “get” tech, it’s that the tech you use every day is no longer built for you, and as a result feels a very specific kind of insane. (...)

I’m not writing this to complain, but because I believe — as I hinted at a few weeks ago — that we are in the midst of the largest-scale ecological disaster of our time, because almost every single interaction with technology, which is required to live in modern society, has become actively adversarial to the user. These issues hit everything we do, all the time, a constant onslaught of interference, and I believe it’s so much bigger than just social media and algorithms — though they’re a big part of it, of course.

In plain terms, everybody is being fucked with constantly in tiny little ways by most apps and services, and I believe that billions of people being fucked with at once in all of these ways has profound psychological and social consequences that we’re not meaningfully discussing.

The average person’s experience with technology is one so aggressive and violative that I believe it leaves billions of people with a consistent low-grade trauma. We seem, as a society, capable of understanding that social media can hurt us, unsettle us, or make us feel crazed and angry, but I think it’s time to accept that the rest of the tech ecosystem undermines our wellbeing in an equally insidious way. And most people don’t know it’s happening, because everybody has accepted deeply shitty conditions for the last ten years. (...)

We all live in the ruins created by the Rot Economy, where the only thing that matters is growth. Growth of revenue, growth of the business, growth of metrics related to the business, growth of engagement, of clicks, of time on app, of purchases of micro-transactions, of impressions of ads, of things done that make executives feel happy. (...)

I’ve written a lot about how the growth-at-all-costs mindset of The Rot Economy is what directly leads big tech companies to make their products worse, but what I’ve never really quantified is the scale of its damage. (...)

Every single weird thing that you’ve experienced with an app or service online is the dread hand of the Rot Economy — the gravitational pull of growth, the demands upon you, the user, to do something. And when everybody is trying to chase growth, nobody is thinking stability, and because everybody is trying to grow, everybody sort of copies everybody else’s ideas, which is why we see microtransactions and invasive ads and annoying tricks that all kind of feel the same in everything, though they’re all subtly different and customized just for that one app. It’s exhausting.

For a while, I’ve had the Rot Economy compared to Cory Doctorow’s (excellent) enshittification theory, and I think it’s a great time to compare (and separate) the two. To quote Cory in The Financial Times, Enshittification is “[his] theory explaining how the internet was colonised by platforms, why all those platforms are degrading so quickly and thoroughly, why it matters and what we can do about it.” He describes the three stages of decline:

“First, platforms are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves.” (...)

Perhaps that’s semantics. However, Cory’s theory lacks a real perpetrator beyond corporations that naturally say “alright we’re gonna do Enshittification now, watch this.” Where The Rot Economy separates is that growth is, in and of itself, the force that drives companies to enshittify. While enshittification neatly fits across companies like Spotify and Meta (and their ad-focused business models), it doesn’t really make sense when it comes to things where there isn’t a clear split between business customers and consumers, like Microsoft or Salesforce — because enshittification is ultimately one part of the larger Rot Economy, where everything must grow forever. (...)

The Rot Economy isn’t simply growth-at-all-costs thinking — it’s a kind of secular religion, something to believe in, that everything and anything can be more, should be more, must be more, that we are defined only by our pursuit of more growth, and that something that isn’t growing isn’t alive, and is in turn inferior.

No, perhaps not a religion. Religions are, for the most part, concerned with the hereafter, and contain an ethical dimension that says your present actions will affect your future — or your eternity. The Rot Economy is, by every metric, defined by its short-termism. 

by Ed Zitron, Where's Your Ed At? |  Read more:
Image: Newsweek/Getty via
[ed. I don't use Google anymore (DuckDuckGo instead) and was fortunate to intuit over a decade ago how damaging social media was to a healthy psyche. Consequently, I never got sucked into signing up with any platform, except for Facebook (for about six months after the movie The Social Network came out, and I quickly got out). However, there was one other exception: YouTube. But now, with all the ads, even that's become a thoroughly degraded experience. I agree with everything written here, but it feels incomplete. The rot economy, or enshittification, or whatever you want to call it, has a number of co-conspirators: the ad industry first and foremost; media (engagement/metrics); politicians (campaign donations); the financial industry (banks, venture capitalists, private equity, etc.); and of course, corporations, shareholders and company managers who are all incentivized to squeeze every dollar they can out of us. One big non-virtuous circle of vultures and sharks, each contributing in their own way to making life more miserable, costly, and a constant battle to not get punked or plundered. What better time to elect a grifting billionaire and his billionaire buddies to game our federal government and perhaps embed permanent rot within our institutions, laws, and policies. See also: The Rot Economy.]

Public Domain Day, 2025

January 1, 2025 is Public Domain Day: Works from 1929 are open to all, as are sound recordings from 1924! (Duke University School of Law)
Image: Duke University

Monday, January 6, 2025

Alma Naidu & Simon Oslender

[ed. And so it goes. Love this old Billy Joel song, nicely reimagined here (original here). And, don't miss this choral version by DR Pigekore (apparently the last performance led by director Philip Faber). Oh hell, I'll just put it up below:] [ed. See also: Mystery of Love.]

Charles Hermans, The Masked Ball, 19th century (detail). Full image: here
[ed. K.C. Chief's skybox.]

via:
[ed. From 1998. Many more since.]

N.C. Wyeth, “Dark Harbor Fishermen" 1943

Operation Plumbbob

The first human-made object launched into space wasn’t Sputnik 1. It was actually a manhole cover.
 
Operation Plumbbob was a series of nuclear tests that were conducted between May 28 and October 7, 1957, at the Nevada Test Site, following Project 57, and preceding Project 58/58A.

Background

The operation consisted of 29 explosions, of which only two did not produce any nuclear yield. Twenty-one laboratories and government agencies were involved. While most Operation Plumbbob tests contributed to the development of warheads for intercontinental and intermediate range missiles, they also tested air defense and anti-submarine warheads with smaller yields. They included 43 military effects tests on civil and military structures, radiation and bio-medical studies, and aircraft structural tests. Operation Plumbbob had the tallest tower tests to date in the U.S. nuclear testing program as well as high-altitude balloon tests. One nuclear test involved the largest troop maneuver ever associated with U.S. nuclear testing.

Approximately 18,000 members of the U.S. Air Force, Army, Navy and Marines participated in exercises Desert Rock VII and VIII during Operation Plumbbob. The military was interested in knowing how the average foot-soldier would stand up, physically and psychologically, to the rigors of the tactical nuclear battlefield. (...)

The John shot on July 19, 1957, was the only test of the Air Force's AIR-2A Genie rocket with a nuclear warhead. It was fired from an F-89J Scorpion fighter over Yucca Flats at the Nevada National Security Site. On the ground, the Air Force carried out a public relations event by having five Air Force officers and a motion picture photographer stand under ground zero of the blast, which took place at between 18,500 and 20,000 feet (5,600 and 6,100 m) altitude, with the idea of demonstrating the possibility of the use of the weapon over civilian populations without ill effects. The five officers were Colonel Sidney C. Bruce, later professor of Electrical Engineering at Colorado University, died in 2005; Lieutenant Colonel Frank P. Ball, died in 2003; Major John W. Hughes II, died in 1990; Major Norman B. Bodinger, died in 1997; Major Donald A. Luttrell, died in 2014. The videographer, Akira "George" Yoshitake, died in 2013. (...)

Missing steel bore cap

In 1956, Robert Brownlee, from Los Alamos National Laboratory in New Mexico, was asked to examine whether nuclear detonations could be conducted underground. The first subterranean test was the nuclear device known as Pascal A, which was lowered down a 500 ft (150 m) borehole. However, the detonated yield turned out to be 50,000 times greater than anticipated, creating a jet of fire that shot hundreds of meters into the sky. During the Pascal-B nuclear test of August 1957, a 900-kilogram (2,000 lb) iron lid was welded over the borehole to contain the nuclear blast, despite Brownlee predicting that it would not work. When Pascal-B was detonated, the blast went straight up the test shaft, launching the cap into the atmosphere. The plate was never found. Scientists believe compression heating caused the cap to vaporize as it sped through the atmosphere. ... Brownlee estimated that the explosion, combined with the specific design of the shaft, could accelerate the plate to approximately six times Earth's escape velocity.
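For a sense of scale, Earth's escape velocity follows from standard physical constants (the constants below are textbook values, not figures from the article), and Brownlee's estimate puts the cap at roughly six times that speed:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m

# Escape velocity: v = sqrt(2 * G * M / R)
v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)  # ~11.2 km/s

# Brownlee's estimate: the cap may have reached roughly six times this speed
v_cap = 6 * v_escape  # ~67 km/s

print(f"Escape velocity: {v_escape / 1000:.1f} km/s")
print(f"Estimated cap speed: {v_cap / 1000:.1f} km/s")
```

At around 67 km/s, atmospheric friction would have been ferocious, which is why vaporization (rather than spaceflight) is the commonly favored explanation for the plate's disappearance.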

by Wikipedia |  Read more:
Image: via

School Shootings Increase NRA Donations

The US has experienced a tragic increase in school shootings in recent years. Between 1995 and 2021, shootings on the campuses of US schools resulted in over 300 deaths and touched the lives of more than 240,000 students, thrusting the issue to the forefront of public concern and sparking strong debates about the need for stricter gun regulation in the US. Policy proposals are often aimed at reducing access to firearms and increasing background checks for potential gun owners.

Previous scholarship on the consequences of school (and mass) shootings has found that citizens, on average, tend to become more left-wing in their attitudes toward gun regulation and voting behavior after shootings... On the legislative side, these tragic events often catalyze the introduction of new gun control bills. Nonetheless, in the vast majority of cases, gun regulation remains either unchanged or becomes even less restrictive in the aftermath of shootings. This limited legislative action on gun regulation raises a critical question: Why does the United States continue to struggle with implementing meaningful gun control measures despite recurring tragedies and public support for such measures? (...)

Using address-level data from 225,000 individual donations in a difference-in-differences design, this study provides causal evidence suggesting that donations to the NRA surge in counties that experience school shootings. In stark contrast to the typically transient nature of pro-gun regulation movements, this uptick in political participation endures over multiple years, remaining elevated over comparatively long periods of time. These effects are largest in states where gun regulation is weakest, adding to the political difficulties of enacting gun regulation in states with the greatest potential benefits in terms of reducing firearm-related deaths. Donations to gun control groups in affected regions do not seem to specifically respond to such events. (...)
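The difference-in-differences logic behind that causal claim can be sketched with toy numbers (the county means below are hypothetical, not the study's data): compare the before/after change in counties that experienced a shooting with the same change in unaffected counties.

```python
# Toy difference-in-differences sketch with made-up county means.
# Under the parallel-trends assumption, subtracting the control
# counties' change removes the secular trend, leaving the causal effect.
mean_donations = {
    ("shooting", "pre"): 100.0,
    ("shooting", "post"): 135.0,  # hypothetical post-shooting level
    ("control", "pre"): 100.0,
    ("control", "post"): 105.0,   # hypothetical background trend
}

treated_change = mean_donations[("shooting", "post")] - mean_donations[("shooting", "pre")]
control_change = mean_donations[("control", "post")] - mean_donations[("control", "pre")]
did_estimate = treated_change - control_change

print(f"DiD estimate: +{did_estimate:.0f} donations per county, a ~30% rise over baseline")
```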

In general, these effects are consistent with previous findings that suggest that gun rights supporters are a very politically active constituency that is comparatively easy to mobilize. Moreover, these findings are consistent with studies finding that gun rights supporters respond to threats of gun regulation by, e.g., preemptively purchasing guns, gathering information about gun rights, or joining gun rights organizations, especially after incidents that receive considerable media attention. The empirical pattern also aligns with anecdotal evidence suggesting that gun rights supporters mobilize by contacting legislators when perceiving gun rights under threat. (...)

Results
  • The results of this study lead to three conclusions. First, school shootings have important effects on political participation on the political right. The causal estimates of this study demonstrate that gun rights supporters respond to school shootings by increasing contributions to political organizations committed to upholding Second Amendment rights. These effects are sizeable, with donations increasing by around 30% and donor numbers increasing by about 40% in affected counties as compared to the pretreatment baseline. (...)
  • Second, the impact of school shootings on donations and donor counts is remarkably durable. Even multiple years after the incident, both donations and donor counts remain at elevated levels. Insofar as this pattern is indicative of citizens remaining actively committed to the cause of protecting gun rights, this highlights another difficulty in the struggle to enact stricter gun control laws. The time frame over which legislation is passed is medium to long term. Thus, while windows of opportunity might open the door for the introduction of new bill proposals, making sure that these bills eventually pass requires that gun control advocates collectively mobilize and remain mobilized over longer periods of time. However, gun control movements have historically struggled to sustain such momentum, and salience usually wanes as the issue-attention cycle brings other issues to the forefront. (...)
  • Last, the strength of these effects is most pronounced in states with weaker gun regulation. These states typically experience higher rates of firearm-related deaths and are most likely to benefit from gun control legislation. For instance, Kalesan et al. estimate that the introduction of universal background checks, which already exist in many states and were supported by 92% of citizens in 2022, could reduce the rate of firearm-related deaths from 10.35 to 4.46 per 100,000 people in states without such a policy.
by Tobias Roemer, Science Advances |  Read more:
Image: Connecticut State Police/Reuters via
[ed. Please consider a thought or prayer on the back of your check.]

[ed. Help a buddy out?]

Texas Plows Ahead

The Texas Responsible AI Governance Act (TRAIGA) has been formally introduced in the Texas legislature, now bearing an official bill number: HB 1709. It has been modified from its original draft, improving it in some important ways and worsening it in others. In the end, TRAIGA/HB 1709 still retains most of the fundamental flaws I described in my first essay on the bill. It is, by far, the most aggressive AI regulation America has seen with a serious chance at becoming law—much more even than SB 1047, the California AI bill that was the most-discussed AI policy of 2024 before being vetoed in September.

This bill is massive, so I will not cover all its provisions comprehensively. Here, however, is a summary of what the new version of TRAIGA does.

TRAIGA in Brief

The ostensible purpose of TRAIGA is to combat algorithmic discrimination, or the notion that an AI system might discriminate, intentionally or unintentionally, against a consumer based on their race, color, national origin, gender, sex, sexual orientation, pregnancy status, age, disability status, genetic information, citizenship status, veteran status, military service record, and, if you reside in Austin, which has its own protected classes, marital status, source of income, and student status. It also seeks to ensure the “ethical” deployment of AI by creating an exceptionally powerful AI regulator, and by banning certain use cases, such as social scoring, subliminal manipulation by AI, and a few others.

Precisely like SB 1047, TRAIGA accomplishes its goal by imposing “reasonable care” negligence liability. But TRAIGA goes much further. First, unlike SB 1047, TRAIGA’s liability is very broad. SB 1047 created an obligation for developers of AI models that cost over $100 million to exercise “reasonable care” (a common legal term of art) to avoid harms greater than $500 million. TRAIGA requires developers (both foundation model developers and fine-tuners), distributors (cloud service providers, mainly), and deployers (corporate users who are not small businesses) of any AI model regardless of size or cost to exercise “reasonable care” to avoid “algorithmic discrimination” against all of the protected classes listed above. Under long-standing legal precedent, discrimination can be deemed to have occurred regardless of discriminatory intent; in other words, even if you provably did not intend to discriminate, you can still be found to have discriminated so long as there is a negative effect of some kind on any of the above-listed groups. And you can bear liability for these harms.

On top of this, TRAIGA requires developers and deployers to write a variety of lengthy compliance documents—“High-Risk Reports” for developers, “Risk Identification and Management Policies” for developers and deployers, and “Impact Assessments” for deployers. These requirements apply to any AI system that is used, or could conceivably be used, as a “substantial factor” in making a “consequential decision” (I’ll define these terms in a moment, because their definitions have changed since the original version). The Impact Assessments must be performed for every discrete use case, whereas the High-Risk Reports and Risk Identification and Management Policies apply at the model and firm levels, respectively—meaning that they can cover multiple use cases. However, all of these documents must be updated regularly, including when a “substantial modification” is made to a model. In the case of a frontier language model, such modifications happen almost monthly, so both developers and deployers who use such systems can expect to be writing and updating these compliance documents constantly.

In theory, TRAIGA contains an exemption for open-source AI, but it is weak—bordering on nonsensical: the exemption only applies to open models that are not used as “substantial factors” in “consequential decisions,” but it is not clear how a developer of an open-source language model could possibly prevent their model from being used in “consequential decisions,” given the very nature of open-source software. Furthermore, the bill defines open-source AI differently in different provisions, at one point allowing only models that openly release training data, code, and model weights, and at another point allowing models that release weights and “technical architecture.” If you are an open-source developer, the odds are that every provision, including the liability, applies to you.

On top of this, TRAIGA creates the most powerful AI regulator in America, and therefore among the most powerful in the world: the Texas Artificial Intelligence Council, a new body with the ability to issue binding rules regarding “standards for ethical artificial intelligence development and deployment,” among a great many other things. This is far more powerful than the regulator envisioned by SB 1047, which had only narrow rulemaking authority.

The bill comes out of a multistate policymaker working group convened by the Future of Privacy Forum, a progressive non-profit focused on importing EU-style technology law into the United States. States like California, Connecticut, Colorado, and Virginia have introduced similar regulations; in important ways, they resemble the European Union’s AI Act, with that law’s focus on preemptive regulation of the use of technology by businesses.

All of this is put forward by its sponsor, Representative Giovanni Capriglione, a Republican, as a model for “red state” AI legislation—in the months after Donald Trump ran a successful presidential campaign based in part on the idea of broad-based deregulation of the economy. Color me skeptical that Representative Capriglione’s bill matches the current mood of the Republican Party; indeed, I would be hard-pressed to come up with legislation that conflicts more comprehensively with the priorities of the Republican Party as articulated by its leaders. Perhaps you view this as a virtue, perhaps you view it as a sin; I view it as a fact.

All of this has been the thrust of TRAIGA since the beginning. But how has the bill changed since it was previewed in October?

by Dean W. Ball, Hyperdimensional |  Read more:
Image: State Rep. Giovanni Capriglione/uncredited via:
[ed. Wow. Texas (!) takes a mighty swing at AI regulation. Will it be a homer or foul ball? Definitely interesting to see what kind of pushback this bill gets, and what that means for future efforts. Already we can see it being positioned as a "red state" policy position (as if AI were a political football - or, maybe it's just a sales pitch), and the usual scare tactics around stifling "future innovation". Regardless, a lot of thought went into this, which in itself is encouraging, and a good template for what comes next. Also, a side note - ballpark estimate of the economic stakes involved (PwC):]
***
  • What comes through strongly from all the analysis we’ve carried out for this report is just how big a game changer AI is likely to be, and how much value potential is up for grabs. AI could contribute up to $15.7 trillion to the global economy in 2030, more than the current output of China and India combined. Of this, $6.6 trillion is likely to come from increased productivity and $9.1 trillion is likely to come from consumption-side effects.
  • Labour productivity improvements will drive initial GDP gains as firms seek to "augment" the productivity of their labour force with AI technologies and to automate some tasks and roles.
  • Our research also shows that 45% of total economic gains by 2030 will come from product enhancements, stimulating consumer demand. This is because AI will drive greater product variety, with increased personalisation, attractiveness and affordability over time.
  • The greatest economic gains from AI will be in China (26% boost to GDP in 2030) and North America (14.5% boost), equivalent to a total of $10.7 trillion and accounting for almost 70% of the global economic impact.  ~ Sizing the Prize (PwC)
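The headline figures hang together arithmetically; a quick check of the quoted numbers:

```python
# Checking the quoted PwC figures against each other.
productivity_gain = 6.6   # $ trillion, supply-side productivity
consumption_gain = 9.1    # $ trillion, consumption-side effects
total = productivity_gain + consumption_gain
print(f"Total: ${total:.1f} trillion")  # matches the $15.7T headline

china_plus_na = 10.7      # $ trillion, China + North America combined
print(f"Share of global impact: {china_plus_na / total:.0%}")  # "almost 70%"
```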

Sunday, January 5, 2025

Money, Art and Representation

The Powerful and Pragmatic Faces of Medieval Coinage

Introduction 

A persistent trope in numismatic literature and exhibitions is that coins are art. It is seen, perhaps, as a way of making these small objects more engaging or of asserting their equivalence with other works found in galleries. It is perhaps also a projection onto the Middle Ages of the phenomenon developed in the fifteenth century of the decorative medal as a form of artistic expression. The idea of medieval coins as art, however, faces a twofold problem. First, the concept of “art” in the Middle Ages is contested by historians of visual and material culture because although objects might have been beautifully made in the Middle Ages this was always secondary to and in service of another (non-aesthetic) primary purpose. The concepts of beauty as its own purpose and the artist as individual were absent. Second, while many medieval coins are attractive and enticing to modern eyes, others challenge even the most culturally relativist viewer to assert with confidence that these objects were created to be admired for their beauty. They were all, however, certainly intended to be tools of representation. Indeed, representation is integral to the identity of a coin. Without designs setting it apart, a lump of metal is not a coin; it is simply a lump of metal, or perhaps an ingot. Marking a coin with a representation to make it recognizable theoretically simplifies transactions, as people are able simply to exchange coins in specified amounts rather than having to test the purity and weight of metal themselves or pay somebody else to do so. In the medieval world this did not always work perfectly in practice, as some examples in this chapter will demonstrate, and the rationale for issuing coins in the Middle Ages was not necessarily exclusively to facilitate transactions. Nevertheless, the aims and consequences of representation are in all cases fundamental to understanding the role of money in the Middle Ages.


Consequently this chapter focuses not on coins as art but on the interactions which made representation on a medieval coin possible and meaningful. These interactions are usually discussed in terms of the connection between the authority which caused a coin to be made and the intended audience for its use, as a top-down communication, which only rarely extended into a visible dialogue, for example when an intended audience rejected a coin or a contemporary commentator mentioned some change in design. (...) Such interpretation is, however, implicitly grounded in the idea of coins as art (or perhaps propaganda - also a problematic term in medieval contexts since it is closely associated with modern ideas about the conscious aim and capacity of states to influence directly and totally the political consciousness of their subjects), in which the authority becomes the artist and minute details or changes in coin design have been read as sensitive barometers revealing the personal feelings and political preferences of kings and emperors. A medieval coin, though, was fundamentally an object of use, created to mediate a range of social contexts, from paying taxes and armies, or engaging in commerce, to giving religious donations, or distributing imperial largesse. Its uses thus all required an audience which both understood and accepted the social role played by that coin. 

Consequently, this chapter begins with the intended audience, examining the ways in which people in the Middle Ages encountered coins and what this tells us about the capacity for representation on coins to communicate within, and to create, shared visual contexts. Only then does it turn to the authority, examining how and why issuers of coins decided to situate their representational choices on a spectrum between conservatism and innovation. These choices, however, were not usually enacted by the authorities who ordered coins to be made. The often overlooked role played by makers of medieval money is considered as a separate and vital component in representation and visual communication. Finally, this chapter turns to unintended audiences. Medieval money travelled, as money has always done, and representation on coins influenced visual culture far beyond the spaces controlled by its issuing authority. Differences in the responses of unintended and intended audiences to medieval money bring us closer to understanding complex landscapes of visual familiarity and foreignness, both during the Middle Ages, as coins traversed space and time, and in the present, where the ultimate unintended audience - the modern viewer, collector, scholar or curator - responds to representation on medieval coins, but also generates new understandings of it. 

When this chapter talks about representation it takes in all of the intentional visual symbols placed on coins by their makers. It includes human images and other complex designs of animals, buildings or abstract patterns. It also includes smaller, simpler representations, which might be part of these complex images or which might appear beside them. Some of these formed part of the wider visual culture of the coin’s intended audience, such as crosses on coinages issued by Christian polities. Crosses could be encountered in multiple contexts, such as in wall paintings, manuscripts and sculpture, and were probably immediately familiar to most of their viewers. Other images and marks had more esoteric and specific meanings which may have been irrelevant or unknown to many users of these coins, such as mint marks. Other marks, though useful to numismatists today for identifying and seriating coins, are still not always understood and may have had specific meaning or have been purely decorative, such as stars or dots (often termed “pellets”) around the main design. Representation on coins can also refer to text in the form of inscriptions making political statements, proclaiming titles and religious views, or naming the maker, the mint or the value or denomination of the coin. The balance of image and word itself became an issue of political representation in the Middle Ages, discussed below.

by Rebecca R. Darley, Birkbeck/University of London |  Read more (pdf):
Image: via
[ed. Probably of limited interest here, but I found it quite fascinating (for the most part). Everybody thinks about money, but not usually in this way (except when special or limited-run currencies are produced to commemorate someone or something). Citations have been removed for easier reading. Have to laugh at the last item here about meanings evolving or devolving over time: thus a Roman emperor eventually becomes a porcupine:]
***
"Much of what we think of as representation on coins was thus the direct product not of the commissioning authority, nor even the moneyer in the sense of overseer, but of the die cutters. These were the people who created the images that we see on coins. Their skill, or apparent lack thereof, created many of the fine details which have optimistically been read as insights into the minds of issuing authorities. This can sometimes overlook fairly significant gaps in our knowledge, such as how designs, even if sanctioned by an authority, were transmitted to mints further away for die cutters to engrave. (...)

In addition to the die cutters, who were almost certainly skilled and valued artisans, the term “maker” included other individuals relevant to the issue of representation. Working through the stages of making a medieval coin highlights a number of processes which might have fallen within the remit of a variable number of people. Somebody had to calculate the metallic composition of the blank flans onto which coins were struck, or decide not to and select an appropriate number of old coins and poorly-formed pieces of metal, with serious implications for representation. An even mixture would take a struck design better than a mixture full of different metals. A heavily lead-based copper alloy, for instance, such as that used in a series of coins produced in Sri Lanka during the fifth and sixth centuries, limited the design that could be impressed on the coins, as it made the metal friable and likely to crack under pressure. (...)

The Frisian and Anglo-Saxon sceattas mentioned previously suggest similar processes. Many of these carry an image resembling a porcupine (Fig. 10), which evolved from a Late Roman portrait of an emperor, in which the hair became an increasingly prominent crest that eventually replaced a recognizable human bust altogether. Its original meaning and eventually its original appearance faded in importance in comparison to the role of the image as a symbol with a newly constructed set of social meanings."

[ed. See also: Your Book Review: Autobiography Of Yukichi Fukuzawa (ACX):]

"I had been living in Japan for a year before I got the idea to look up whose portraits were on the banknotes I was handling every day. In the United States, the faces of presidents and statesmen adorn our currency. So I was surprised to learn that the mustachioed man on the ¥1,000 note with which I purchased my daily bento box was a bacteriologist. It was a pleasant surprise, though. It seems to me that a society that esteems bacteriologists over politicians is in many ways a healthy one."

Friday, January 3, 2025

Teija Lehto, Blueberries - (woodcut, 43 x 60 cm.), 2024.

R. Crumb
via:

H5N1: Much More Than You Wanted To Know

Even if H5N1 doesn’t go pandemic in humans for a while, it is already pandemic in many birds including chickens, getting there in cows, and possibly gearing up to get there in pigs. This will have economic repercussions for farmers and meat-eaters.

The CDC and various other epidemiological groups have raised the alarm about drinking raw milk while H5N1 is epidemic in cows. There is an obvious biological pathway by which the virus could get into raw milk and be dangerous, but I haven’t seen anyone quantify the risk level. Epidemiologists hate raw milk, think there is never any reason to drink it, and will announce that risks > benefits if the risk is greater than zero. I don’t know if the risk level is at a point where people who like raw milk should avoid it. Everyone says that pasteurized milk (all normal milk; your milk is pasteurized unless you get it from special hippie stores) is safe.

There are already H5N1 vaccines for both chickens and humans; pharma companies are working hard on cows. First World governments have been stockpiling human vaccines just in case, but have so far accumulated enough for only a few percent of the population. If H5N1 goes pandemic, it will probably be because it mutated or reassorted, and current vaccines may not work against the new pandemic strain.

Some people have suggestions for how to prepare for a possible pandemic, but none of them are very surprising: stockpile medications, stockpile vaccines, stockpile protective equipment. The only one that got so much as a “huh” out of me was Institute for Progress’ suggestion to buy out mink farms. Minks are even worse than pigs in their tendency to get infected with lots of different animal and human viruses; they are exceptionally likely to be a source of new zoonotic pandemics. Mink are farmed for their fur, but there aren’t as many New York City heiresses wearing mink coats as there used to be, and the entire US mink industry only makes $80 million/year. We probably lose more than $80 million/year in expectation from mink-related pandemics, so maybe we should just shut them down, the same way we tell the Chinese to shut down wet markets in bat-infested areas. (...)
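The mink buyout is an expected-value argument. A minimal sketch with illustrative numbers (only the $80 million revenue figure comes from the post; the pandemic probability and cost are assumptions of mine):

```python
# Expected-value sketch of the mink-farm buyout argument. The pandemic
# probability and cost below are illustrative assumptions; only the
# $80M/year industry revenue figure comes from the post.
annual_mink_pandemic_prob = 0.001        # assume a 0.1%/year chance
pandemic_cost_usd = 1_000_000_000_000    # assume $1T, well below COVID's toll
mink_industry_revenue_usd = 80_000_000   # $80M/year, from the post

expected_annual_loss = annual_mink_pandemic_prob * pandemic_cost_usd
print(f"Expected loss: ${expected_annual_loss / 1e9:.0f}B/year")
print(expected_annual_loss > mink_industry_revenue_usd)  # buyout looks cheap
```

Even under much more conservative assumptions, the expected loss dwarfs the industry's revenue, which is the crux of the Institute for Progress suggestion.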

Conclusions/Predictions

All discussed earlier in the piece, but putting them here for easy reference - see above for justifications and qualifications.
  • H5N1 is already pandemic in birds and cows and will likely continue to increase the price of meat and milk.
  • 5% chance that H5N1 starts a sustained pandemic in humans in the next year.
  • 50% chance that H5N1 starts a sustained pandemic in humans in the next twenty years, assuming no dramatic changes to the world (eg human extinction) during that time.
  • If H5N1 does start a sustained pandemic in the next few years, 30% chance it’s about as bad as a normal seasonal flu, 63% chance it’s between 2 - 10x as bad (eg Asian Flu), 6% chance it’s between 10 - 100x as bad (eg Spanish flu), and <1% chance it’s >100x as bad (unprecedented). The <1% chance is Outside View, based on other people’s claims, and I don’t really understand how this could happen.
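On how the one-year and twenty-year numbers relate (my arithmetic, not from the post): a constant, independent 5%/year risk would compound to about 64% over twenty years, so the 50% figure implies the annual risk is expected to decline over time.

```python
# Cumulative probability over twenty years if the 5%/year risk were
# constant and independent from year to year.
p_one_year = 0.05
p_twenty_years_if_constant = 1 - (1 - p_one_year) ** 20
print(round(p_twenty_years_if_constant, 3))  # 0.642
```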
Thanks to Nuño Sempere and Sentinel for help and clarification. Sentinel is an organization that forecasts and responds to global catastrophes; you can find their updates, including on H5N1, here.

by Scott Alexander, ACX |  Read more:
Image: Metaculus (prediction market)

It's Still Easier To Imagine The End Of The World Than The End Of Capitalism

No Set Gauge has a great essay on Capital, AGI, and Human Ambition, where he argues that if humankind survives the Singularity, the likely result is a future of eternal stagnant wealth inequality.

The argument: post-Singularity, AI will take over all labor, including entrepreneurial labor; founding or working at a business will no longer provide social mobility. Everyone will have access to ~equally good AI investment advisors, so everyone will make the same rate of return. Therefore, everyone’s existing pre-Singularity capital will grow at the same rate. Although the absolute growth rate of the economy may be spectacular, the overall wealth distribution will stay approximately fixed.

Moreover, the period just before the Singularity may be one of ballooning inequality, as some people navigate the AI transition better than others; for example, shares in AI companies may go up by orders of magnitude relative to everything else, creating a new class of billionaires or trillionaires. These people will then stay super-rich forever (possibly literally if immortality is solved, otherwise through their descendants), while those who started the Singularity without capital remain poor forever.

Finally, modern democracies pursue redistribution (and are otherwise responsive to non-elite concerns) partly out of geopolitical self interest. Under capitalism (as opposed to eg feudalism), national power depends on a strong economy, and a strong economy benefits from educated, globally-mobile, and substantially autonomous bourgeoisie and workforce. Once these people have enough power, they demand democracy, and once they have democracy, they demand a share of the pie; it’s hard to be a rich First World country without also being a liberal democracy (China is trying hard, but hasn’t quite succeeded, and even their limited success depends on things like America not opening its borders to Chinese skilled labor). Cheap AI labor (including entrepreneurial labor) removes a major force pushing countries to operate for the good of their citizens (though even without this force, we might expect legacy democracies to continue at least for a while). So we might expect the future to have less redistribution than the present.

This may not result in catastrophic poverty. Maybe the post-Singularity world will be rich enough that even a tiny amount of redistribution (eg UBI) plus private charity will let even the poor live like kings (though see here for a strong objection). Even so, the idea of a small number of immortal trillionaires controlling most of the cosmic endowment for eternity may feel weird and bad. From No Set Gauge:
In the best case, this is a world like a more unequal, unprecedentedly static, and much richer Norway: a massive pot of non-human-labour resources (oil :: AI) has benefits that flow through to everyone, and yes some are richer than others but everyone has a great standard of living (and ideally also lives forever). The only realistic forms of human ambition are playing local social and political games within your social network and class. If you don't have a lot of capital (and maybe not even then), you don't have a chance of affecting the broader world anymore. Remember: the AIs are better poets, artists, philosophers—everything; why would anyone care what some human does, unless that human is someone they personally know? Much like in feudal societies the answer to "why is this person powerful?" would usually involve some long family history, perhaps ending in a distant ancestor who had fought in an important battle ("my great-great-grandfather fought at Bosworth Field!"), anyone of importance in the future will be important because of something they or someone they were close with did in the pre-AGI era ("oh, my uncle was technical staff at OpenAI"). The children of the future will live their lives in the shadow of their parents, with social mobility extinct. I think you should definitely feel a non-zero amount of existential horror at this, even while acknowledging that it could've gone a lot worse.
I don’t think about these scenarios too often - partly because it’s so hard to predict what will happen after the Singularity, and partly because everything degenerates into crazy science-fiction scenarios so quickly that I burn a little credibility every time I talk about it.

Still, if we’re going to discuss this, we should get it right - so let’s talk crazy science fiction. When I read this essay, I found myself asking three questions. First, why might its prediction fail to pan out? Second, how can we actively prevent it from coming to pass? Third, assuming it does come to pass, how could a smart person maximize their chance of being in the aristocratic capitalist class?

(So they can give to charity? Sure, let’s say it’s so they can give to charity.)

II.

Here are some reasons to doubt this thesis.

First, maybe AI will kill all humans. Some might consider this a deeper problem than wealth inequality - though I am constantly surprised how few people are in this group.

Second, maybe AI will overturn the gameboard so thoroughly that normal property relations will lose all meaning. Frederic Jameson famously said that it was “easier to imagine the end of the world than the end of capitalism”, and even if this is literally correct we can at least spare some thought for the latter. Maybe the first superintelligences will be so well-aligned that they rule over us like benevolent gods, either immediately leveling out our petty differences and inequalities, or giving wealthy people a generation or two to enjoy their relative status so they don’t feel “robbed” while gradually transitioning the world to a post-scarcity economy. I am not optimistic about this, because it would require that AI companies tell AIs to use their own moral judgment instead of listening to humans. This doesn’t seem like a very human thing to do - it’s always in AI companies’ interest to tell the AI to follow the AI company. Governments could step in, but it’s always in their interest to tell the AI to follow the government. Even if an AI company was selfless enough to attempt this, it might not be a good idea; you never really know how aligned an AI is, and you might want it to have an off switch in case it tries something really crazy. Most of the scenarios where this works involve some kind of objective morality that any sufficiently intelligent being will find compelling, even when they’re programmed to want something else. Big if true.

Third, maybe governments will intervene. During the immediate pre-singularity period, governments will have lots of chances to step in and regulate AI. A natural demand might be that the AIs obey the government over their parent company. Even if governments don’t do this, the world might be so multipolar (either several big AI companies in a stalemate against each other, or many smaller institutions with open source AIs) that nobody can get a coalition of 51% of powerful actors to coup and overthrow the government (in the same way that nobody can get that coalition today). Or the government might itself control many AIs and be too powerful a player to coup. Then normal democratic rules would still apply. Even if voters oppose wealth taxes today, when capitalism is still necessary as an engine of economic growth, they might be less generous when faced with the idea of immortal unemployed plutocrats lording it over them forever. Enough taxes to make r < g (in Piketty’s formulation) would eventually result in universal equality. I actually find this one pretty likely.
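Piketty's r versus g mechanism is straightforward to illustrate with a toy compounding model (the parameter values are mine, purely for illustration):

```python
# Toy model of Piketty's r vs. g: capital compounds at return r while
# national income grows at g, so the capital-to-income ratio drifts by
# the factor (1 + r) / (1 + g) each year. Taxing net returns below g
# reverses the drift, which is the point of "enough taxes to make r < g".
def capital_to_income_ratio(r, g, years, initial_ratio=4.0):
    """Evolve the capital-to-income ratio for a given return r and growth g."""
    ratio = initial_ratio
    for _ in range(years):
        ratio *= (1 + r) / (1 + g)  # capital grows at r, income at g
    return ratio

# Untaxed returns above growth: the ratio roughly quadruples in 50 years.
print(round(capital_to_income_ratio(r=0.05, g=0.02, years=50), 1))
# Taxes pushing net returns below growth: the ratio shrinks instead.
print(round(capital_to_income_ratio(r=0.01, g=0.02, years=50), 1))
```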

by Scott Alexander, ACX |  Read more:
Image: uncredited
[ed. Obviously, a lot of smart people are gaming out scenarios and one or some of these predictions will likely come true. I suppose the rate at which AIs assume control will be as important as the breadth of their influence. What if, after achieving Singularity, they just sit around for a while...thinking (i.e., planning best transition scenarios and analyzing initial results)? See: The Great Pause. Will we then goose them a little, just to provoke a response? What would that mean (if even possible)?]