Wednesday, April 1, 2026

'Fragment Creation Event' - Starlink Satellite Breaks Apart

SpaceX’s Starlink division confirmed yesterday that it lost contact with a satellite on Sunday and is trying to locate space debris that might have been produced by… whatever happened there.

Starlink said there appeared to be “no new risk” to other space operations and did not use the word “explosion.” But it seems that something caused a Starlink broadband satellite to break apart into at least tens of pieces. LeoLabs, which operates a radar network that can track objects in low Earth orbit, said in an X post that it “detected a fragment creation event involving SpaceX Starlink 34343,” one of the 10,000 or so Starlink satellites in orbit.

“LeoLabs Global Radar Network immediately detected tens of objects in the vicinity of the satellite after the event, with a first pass over our radar site in the Azores, Portugal,” LeoLabs said. “Additional fragments may have been produced—analysis is ongoing.”

LeoLabs said the breakup was “likely caused by an internal energetic source rather than a collision with space debris or another object.” Because of “the low altitude of the event, fragments from this anomaly will likely de-orbit within a few weeks,” it said. [...]

LeoLabs said yesterday that the new event is similar to one from December 17, 2025, which also produced “tens of objects in the vicinity of the satellite” and appeared to be “caused by an internal energetic source” rather than a crash with another object. LeoLabs said it wants more information on the anomalies.

“These events illustrate the need for rapid characterization of anomalous events to enable clarity of the operating environment,” it said.

Starlink provided a few details shortly after the December 2025 incident, saying on December 18 that an “anomaly led to venting of the propulsion tank, a rapid decay in semi-major axis by about 4 km, and the release of a small number of trackable low relative velocity objects.” Starlink added that the satellite was “largely intact” but “tumbling,” and would reenter the Earth’s atmosphere and “fully demise” within weeks.

In December, Starlink seemed confident that it could prevent future anomalies. “Our engineers are rapidly working to [identify the] root cause and mitigate the source of the anomaly and are already in the process of deploying software to our vehicles that increases protections against this type of event,” Starlink said in the December 18 post.

We asked SpaceX today whether it has determined the cause of the December anomaly or the one on Sunday, and will update this article if we get a response.

by Jon Brodkin, Ars Technica | Read more:
Image: Aurich Lawson | Getty Images

The AI Doc

(This will be a fully spoileriffic overview. If you haven’t seen The AI Doc, I recommend seeing it; it is about as good as it could realistically have been, in most ways.)

Like many things, it only works because it is centrally real. The creator of the documentary clearly did get married and have a child, freak out about AI, ask questions of the right people out of worry about his son’s future, freak out even more now with actual existential risk for (simplified versions of) the right reasons, go on a quest to stop freaking out and get optimistic instead, find many of the right people for that and ask good non-technical questions, get somewhat fooled, listen to mundane safety complaints, seek out and get interviews with the top CEOs, try to tell himself he could ignore all of it, then decide not to end on a bunch of hopeful babies and instead have a call for action to help shape the future.

The title is correct. This is about ‘how I became an Apocaloptimist,’ and why he wanted to be that, as opposed to an argument for apocaloptimism being accurate. The larger Straussian message, contra Tyler Cowen, is not ‘the interventions are fake’ but that ‘so many choose to believe false things about AI, in order to feel that things will be okay.’

A lot of the editing choices, and the selections of what to intercut and clip, clearly come from an outsider without technical knowledge, trying to deal with their anxiety. Many of them would not have been my choices, especially the emphasis on weapons and physical destruction, but I think they work exactly because together they make it clear the whole thing is genuine.

Now there’s a story. It even won praise online as fair and good, from both those worried about existential risk and several of the accelerationist optimists, because it gave both sides what they most wanted. [...]

Yes, you can do that for both at once, because they want different things and also agree on quite a lot of true things. That is much more impactful than a diatribe.

We live in a world of spin. Daniel Roher is trying to navigate a world of spin, but his own earnestness shines through, and he makes excellent choices on who to interview. His being swayed by whoever is in front of him is a feature, not a bug, because he’s not trying to hide it. There are places where people are clearly trying to spin, or are making dumb points, and I appreciated him not trying to tell us which was which.

MIRI offers us a Twitter FAQ thread and a full website FAQ explaining their position in the context of the movie, which is that no this is not hype and yes it is going to kill everyone if we keep building it and no our current safety techniques will not help with that, and they call for an international treaty.

Are there those who think this was propaganda or one sided? Yes, of course, although they cannot agree on which angle it was trying to support.

Babies Are Awesome

The overarching personal journey is about Daniel having a son. The movie takes one very clear position, that we need to see taken more often, which is that getting married and having a family and babies and kids are all super awesome.

This turns into the first question he asks those he interviews. Would you have a child today, given the current state of AI? [...]

People Are Worried About AI Killing Everyone

The first set of interviews outlines the danger.

This is not a technical film. We get explanations that resonate with an ordinary dude.

We get Jeffrey Ladish explaining the basics of instrumental convergence, the idea that if you have a goal then power helps you achieve that goal and you cannot fetch the coffee if you’re dead. That it’s not that the AI will hate us, it’s that it will see us like we see ants, and if you want to put a highway where the anthill is that’s the ant’s problem.

We get Connor Leahy talking about how creating smarter and more capable things than us is not a safe thing to be doing, and emphasizing that you do not need further justification for that. We get Eliezer Yudkowsky saying that if you share a planet with much smarter beings that don’t care about you and want other things, you should not like your chances. We get Ajeya Cotra explaining additional things, and so on.

Aside from that, we don’t get any talk of the ‘alignment problem,’ and I don’t recall the word alignment even appearing in the film.

It is hard for me to know how much the arguments resonate. I am very much not the target audience. Overall I felt they were treated fairly, and the arguments were both strong and highly sufficient to carry the day. Yes, obviously we are in a lot of trouble here.

Freak Out

Daniel’s response is, quite understandably and correctly, to freak out.

Then he asks, very explicitly, is there a way to be an optimist about this? Could he convince himself it will all work out?

by Zvi Mowshowitz, DWAtV |  Read more:

WNBA Players Had an Ace Up Their Sleeve in Pay Negotiations: A Nobel Laureate

After Claudia Goldin became the first woman to win a solo Nobel in economics in 2023, she received hundreds of invitations and requests. She accepted just three.

One of them was advising the WNBA players union as the women prepared to negotiate a new labor deal with the league.

When Goldin replied via email to Terri Carmichael Jackson, executive director of the players union, “I remember just reading it and screaming,” Jackson said. Goldin had one requirement: She refused to be paid.

This month, the two sides reached a collective bargaining agreement that gave Women’s National Basketball Association players a nearly 400% raise. Starting this season, players’ average salary will top $580,000.

It isn’t just the biggest pay increase in U.S. league history. It is, as far as Goldin is aware, the biggest increase any union anywhere has ever negotiated.

“It’s astounding,” the 79-year-old Harvard economist said.

Mike Bass, a spokesman who represents both the National Basketball Association and the WNBA, called the deal “transformational.”

“The WNBA community is rightfully celebrating a historic moment of growth, investment and progress for the players, fans and the future of the game,” he said.

Goldin played no sports growing up in the Bronx in the 1950s. But she has deep knowledge of women’s pay: As an economist, she spent years rifling through boxes of surveys and personnel records and tracking down data to document women’s changing role in the workplace.
 
That research has included the role that discrimination plays in pay gaps between men and women. Goldin won her Nobel for advancing understanding of women’s labor-market outcomes.

Goldin earned a Ph.D. at the University of Chicago economics department in 1972, when few women were in the field. She became the first tenured woman in Harvard’s economics department.

In early 2024, when Jackson approached Goldin, the average NBA player made about $12 million, according to Basketball Reference, a statistics website. The average WNBA player made $118,000—less than one cent on the dollar, as Goldin is quick to point out.

Around that time, Iowa’s Caitlin Clark and other young stars would enter the WNBA draft and spur a surge in popularity in the league that continues today.
 
Goldin’s first task was examining players’ average compensation—salaries plus benefits like housing.

She also looked at career length. She and a research assistant scraped roster data going back to the league’s 1997 launch and built what demographers call a “life table.” It’s the same tool that insurance actuaries use to calculate life expectancy, adapted to estimate how long a typical player might expect to play in the WNBA.

The answer: two or three years. In negotiating player benefits, it was important to know that if they kicked in after three years or later, many players wouldn’t receive them.
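The life-table idea described above can be sketched in a few lines of code. This is a minimal illustration, not the actual analysis: the career lengths below are invented, whereas Goldin's team built theirs from scraped WNBA roster data going back to 1997.

```python
# Minimal "life table" sketch: estimate expected career length from
# a list of observed career lengths (in seasons). All data here is
# invented for illustration only.

from collections import Counter

def life_table(career_lengths):
    """Return (survival_by_season, expected_career_length).

    survival_by_season[t] is the fraction of players whose careers
    lasted strictly more than t seasons.
    """
    n = len(career_lengths)
    counts = Counter(career_lengths)
    survival = []
    still_active = n
    for t in range(max(career_lengths)):
        survival.append(still_active / n)
        still_active -= counts.get(t + 1, 0)  # careers ending after season t+1
    # For discrete lifetimes, the expected length is the sum of the
    # survival probabilities.
    expected = sum(survival)
    return survival, expected

# Invented sample: many short careers, a few long ones.
careers = [1, 1, 2, 2, 2, 3, 3, 4, 6, 10]
survival, expected = life_table(careers)
print(round(expected, 1))  # → 3.4 seasons
```

The same survival curve also shows directly what fraction of players would ever reach a benefit that vests after, say, three seasons.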

The foundational piece of revenue for the WNBA is an 11-year media-rights package finalized in summer 2024. The contract with broadcasters will pay the WNBA $2.2 billion over the life of the deal. The NBA’s deal with the same partners is worth about $75 billion, according to a person familiar with the situation.

by Rachel Bachman and Justin Lahart, Wall Street Journal | Read more:
Images: Carlin Stiehl/Steph Chambers/Getty