Showing posts with label Science. Show all posts

Tuesday, April 7, 2026

Intraventricular CARv3-TEAM-E T Cells in Recurrent Glioblastoma

In this first-in-human, investigator-initiated, open-label study, three participants with recurrent glioblastoma were treated with CARv3-TEAM-E T cells, which are chimeric antigen receptor (CAR) T cells engineered to target the epidermal growth factor receptor (EGFR) variant III tumor-specific antigen, as well as the wild-type EGFR protein, through secretion of a T-cell–engaging antibody molecule (TEAM). Treatment with CARv3-TEAM-E T cells did not result in adverse events greater than grade 3 or dose-limiting toxic effects. Radiographic tumor regression was dramatic and rapid, occurring within days after receipt of a single intraventricular infusion, but the responses were transient in two of the three participants. (Funded by Gateway for Cancer Research and others; INCIPIENT ClinicalTrials.gov number, NCT05660369.)
***

Glioblastoma is the most aggressive primary brain tumor, and the prognosis for recurrent disease is exceedingly poor with no effective treatment options. Chimeric antigen receptor (CAR) T cells represent a promising approach to cancer because of their proven efficacy against refractory lymphoid malignant neoplasms, for which they have become the standard of care. However, the use of CAR T cells in solid tumors such as glioblastomas has been limited to date, largely owing to the challenge in targeting a single antigen in a heterogeneous disease and to immunosuppressive mechanisms associated with the tumor microenvironment. 

In a previous clinical trial, we found that peripheral infusion of epidermal growth factor receptor (EGFR) variant III–specific CAR T cells (CART-EGFRvIII) safely mediated on-target effects in patients with glioblastoma. Despite this activity, no radiographic responses were observed, and recurrent tumor cells expressed wild-type EGFR protein and showed heavy intratumoral infiltration with suppressive regulatory T cells. To address these barriers, we developed an engineered T-cell product (CARv3-TEAM-E) that targets EGFRvIII through a second-generation CAR while also secreting T-cell–engaging antibody molecules (TEAMs) against wild-type EGFR, which is not expressed in the normal brain but is nearly always expressed in glioblastoma. We found in preclinical models that TEAMs secreted by CAR T cells act locally at the site where cognate antigen is engaged by the CAR T cells in the treatment of heterogeneous tumors. We also found in vitro that these molecules have the capacity to redirect even regulatory T cells against tumors. On the basis of these data, we initiated a first-in-human, phase 1 clinical study to evaluate the safety of CARv3-TEAM-E T cells in patients with recurrent or newly diagnosed glioblastoma. Here, we report the findings from a prespecified interim analysis involving the first three participants treated with this approach. [...]

Discussion

This study shows that antitumor CAR-mediated responses can be rapidly obtained in patients with glioblastoma, even in those with advanced, intraparenchymal cerebral disease. This finding contrasts with a previous report of a complete response that was observed in a patient with recurrent leptomeningeal disease who received treatment with 16 intracranial infusions of monospecific interleukin-13 receptor alpha 2 CAR T cells. It was hypothesized by the investigators of that study that the involvement of glioblastoma in the leptomeninges may have rendered the disease more responsive to intraventricular therapy. Our experience in the current study suggests that even a single dose of an intraventricularly administered living drug such as CAR T cells has the capacity to access and mediate activity against infiltrative, parenchymal glioblastoma.

by Bryan D. Choi, M.D., Ph.D., Elizabeth R. Gerstner, M.D., Matthew J. Frigault, M.D., Mark B. Leick, M.D., Christopher W. Mount, M.D., Ph.D., Leonora Balaj, Ph.D., Sarah Nikiforow, M.D., Ph.D., Bob S. Carter, M.D., Ph.D., William T. Curry, M.D., Kathleen Gallagher, Ph.D., and Marcela V. Maus, M.D., Ph.D. NIH, National Center for Biotechnology Information |   Read more:
Image: via
[ed. Only three patients (so far) and it appears sustained treatments are needed to prevent recurrence. But still, pretty interesting.]

Sam Altman May Control Our Future—Can He Be Trusted?

[ed. A must read, possibly historic. Unfortunately, the accompanying visual is too weird to include here. For a more concise summary see: A history and a proposal (DWAtV)]

In the fall of 2023, Ilya Sutskever, OpenAI’s chief scientist, sent secret memos to three fellow-members of the organization’s board of directors. For weeks, they’d been having furtive discussions about whether Sam Altman, OpenAI’s C.E.O., and Greg Brockman, his second-in-command, were fit to run the company. Sutskever had once counted both men as friends. In 2019, he’d officiated Brockman’s wedding, in a ceremony at OpenAI’s offices that included a ring bearer in the form of a robotic hand. But as he grew convinced that the company was nearing its long-term goal—creating an artificial intelligence that could rival or surpass the cognitive capabilities of human beings—his doubts about Altman increased. As Sutskever put it to another board member at the time, “I don’t think Sam is the guy who should have his finger on the button.”

At the behest of his fellow board members, Sutskever worked with like-minded colleagues to compile some seventy pages of Slack messages and H.R. documents, accompanied by explanatory text. The material included images taken with a cellphone, apparently to avoid detection on company devices. He sent the final memos to the other board members as disappearing messages, to insure that no one else would ever see them. “He was terrified,” a board member who received them recalled. The memos, which we reviewed, have not previously been disclosed in full. They allege that Altman misrepresented facts to executives and board members, and deceived them about internal safety protocols. One of the memos, about Altman, begins with a list headed “Sam exhibits a consistent pattern of . . .” The first item is “Lying.”

Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different. The founders, who included Altman, Sutskever, Brockman, and Elon Musk, asserted that artificial intelligence could be the most powerful, and potentially dangerous, invention in human history, and that perhaps, given the existential risk, an unusual corporate structure would be required. The firm was established as a nonprofit, whose board had a duty to prioritize the safety of humanity over the company’s success, or even its survival. The C.E.O. had to be a person of uncommon integrity. According to Sutskever, “any person working to build this civilization-altering technology bears a heavy burden and is taking on unprecedented responsibility.” But “the people who end up in these kinds of positions are often a certain kind of person, someone who is interested in power, a politician, someone who likes it.” In one of the memos, he seemed concerned with entrusting the technology to someone who “just tells people what they want to hear.” If OpenAI’s C.E.O. turned out not to be reliable, the board, which had six members, was empowered to fire him. Some members, including Helen Toner, an A.I.-policy expert, and Tasha McCauley, an entrepreneur, received the memos as a confirmation of what they had already come to believe: Altman’s role entrusted him with the future of humanity, but he could not be trusted. [...]

The day that Altman was fired, he flew back to his twenty-seven-million-dollar mansion in San Francisco, which has panoramic views of the bay and once featured a cantilevered infinity pool, and set up what he called a “sort of government-in-exile.” Conway, the Airbnb co-founder Brian Chesky, and the famously aggressive crisis-communications manager Chris Lehane joined, sometimes for hours a day, by video and phone. Some members of Altman’s executive team camped out in the hallways of the house. Lawyers set up in a home office next to his bedroom. During bouts of insomnia, Altman would wander by them in his pajamas. When we spoke with Altman recently, he described the aftermath of his firing as “just this weird fugue.”

With the board silent, Altman’s advisers built a public case for his return. Lehane has insisted that the firing was a coup orchestrated by rogue “effective altruists”—adherents of a belief system that focusses on maximizing the well-being of humanity, who had come to see A.I. as an existential threat. (Hoffman told Nadella that the firing might be due to “effective-altruism craziness.”) Lehane—whose reported motto, after Mike Tyson, is “Everyone has a game plan until you punch them in the mouth”—urged Altman to wage an aggressive social-media campaign. Chesky stayed in contact with the tech journalist Kara Swisher, relaying criticism of the board.

Altman interrupted his “war room” at six o’clock each evening with a round of Negronis. “You need to chill,” he recalls saying. “Whatever’s gonna happen is gonna happen.” But, he added, his phone records show that he was on calls for more than twelve hours a day. At one point, Altman conveyed to Mira Murati, who had given Sutskever material for his memos and was serving as the interim C.E.O. of OpenAI in that period, that his allies were “going all out” and “finding bad things” to damage her reputation, as well as those of others who had moved against him, according to someone with knowledge of the conversation. (Altman does not recall the exchange.) [...]

In a series of increasingly tense calls, Altman demanded the resignations of board members who had moved to fire him. “I have to pick up the pieces of their mess while I’m in this crazy cloud of suspicion?” Altman recalled initially thinking, about his return. “I was just, like, Absolutely fucking not.” Eventually, Sutskever, Toner, and McCauley lost their board seats. Adam D’Angelo, a founder of Quora, was the sole original member who remained. As a condition of their exit, the departing members demanded that the allegations against Altman—including that he pitted executives against one another and concealed his financial entanglements—be investigated. They also pressed for a new board that could oversee the outside inquiry with independence. But the two new members, the former Harvard president Lawrence Summers and the former Facebook C.T.O. Bret Taylor, were selected after close conversations with Altman. “would you do this,” Altman texted Nadella. “bret, larry summers, adam as the board and me as ceo and then bret handles the investigation.” (McCauley later testified in a deposition that when Taylor was previously considered for a board seat she’d had concerns about his deference to Altman.)

Less than five days after his firing, Altman was reinstated. Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence. But the debate over Altman’s trustworthiness has moved beyond OpenAI’s boardroom. The colleagues who facilitated his ouster accuse him of a degree of deception that is untenable for any executive and dangerous for a leader of such a transformative technology. “We need institutions worthy of the power they wield,” Murati told us. “The board sought feedback, and I shared what I was seeing. Everything I shared was accurate, and I stand behind all of it.” Altman’s allies, on the other hand, have long dismissed the accusations. After the firing, Conway texted Chesky and Lehane demanding a public-relations offensive. “This is REPUTATIONAL TO SAM,” he wrote. He told the Washington Post that Altman had been “mistreated by a rogue board of directors.”

OpenAI has since become one of the most valuable companies in the world. It is reportedly preparing for an initial public offering at a potential valuation of a trillion dollars. Altman is driving the construction of a staggering amount of A.I. infrastructure, some of it concentrated within foreign autocracies. OpenAI is securing sweeping government contracts, setting standards for how A.I. is used in immigration enforcement, domestic surveillance, and autonomous weaponry in war zones.

Altman has promoted OpenAI’s growth by touting a vision in which, he wrote in a 2024 blog post, “astounding triumphs—fixing the climate, establishing a space colony, and the discovery of all of physics—will eventually become commonplace.” His rhetoric has helped sustain one of the fastest cash burns of any startup in history, relying on partners that have borrowed vast sums. The U.S. economy is increasingly dependent on a few highly leveraged A.I. companies, and many experts—at times including Altman—have warned that the industry is in a bubble. “Someone is going to lose a phenomenal amount of money,” he told reporters last year. If the bubble pops, economic catastrophe may follow. If his most bullish projections prove correct, he may become one of the wealthiest and most powerful people on the planet.

In a tense call after Altman’s firing, the board pressed him to acknowledge a pattern of deception. “This is just so fucked up,” he said repeatedly, according to people on the call. “I can’t change my personality.” Altman says that he doesn’t recall the exchange. “It’s possible I meant something like ‘I do try to be a unifying force,’ ” he told us, adding that this trait had enabled him to lead an immensely successful company. He attributed the criticism to a tendency, especially early in his career, “to be too much of a conflict avoider.” But a board member offered a different interpretation of his statement: “What it meant was ‘I have this trait where I lie to people, and I’m not going to stop.’ ” Were the colleagues who fired Altman motivated by alarmism and personal animus, or were they right that he couldn’t be trusted?

One morning this winter, we met Altman at OpenAI’s headquarters, in San Francisco, for one of more than a dozen conversations with him for this story. The company had recently moved into a pair of eleven-story glass towers, one of which had been occupied by Uber, another tech behemoth, whose co-founder and C.E.O., Travis Kalanick, seemed like an unstoppable prodigy—until he resigned, in 2017, under pressure from investors, who cited concerns about his ethics. (Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT “to get to the edge of what’s known in quantum physics.”)

An employee gave us a tour of the office. In an airy space full of communal tables, there was an animated digital painting of the computer scientist Alan Turing; its eyes tracked us as we passed. The installation is a winking reference to the Turing test, the 1950 thought experiment about whether a machine can credibly imitate a person. (In a 2025 study, ChatGPT passed the test more reliably than actual humans did.) Typically, you can interact with the painting. But the sound had been disabled, our guide told us, because it wouldn’t stop eavesdropping on employees and then butting into their conversations. Elsewhere in the office, plaques, brochures, and merchandise displayed the words “Feel the AGI.” The phrase was originally associated with Sutskever, who used it to caution his colleagues about the risks of artificial general intelligence—the threshold at which machines match human cognitive capacities. After the Blip, it became a cheerful slogan hailing a superabundant future.

We met Altman in a generic-looking conference room on the eighth floor. “People used to tell me about decision fatigue, and I didn’t get it,” Altman told us. “Now I wear a gray sweater and jeans every day, and even picking which gray sweater out of my closet—I’m, like, I wish I didn’t have to think about that.” Altman has a youthful appearance—he is slender, with wide-set blue eyes and tousled hair—but he is now forty, and he and Mulherin have a one-year-old son, delivered by a surrogate. “I’m sure, like, being President of the United States would be a much more stressful job, but of all the jobs that I think I could reasonably do, this is the most stressful one I can imagine,” he said, making eye contact with one of us, then with the other. “The way that I’ve explained this to my friends is: ‘This was the most fun job in the world until the day we launched ChatGPT.’ We were making these massive scientific discoveries—I think we did the most important piece of scientific discovery in, I don’t know, many decades.” He cast his eyes down. “And then, since the launch of ChatGPT, the decisions have gotten very difficult.”

by Ronan Farrow and Andrew Marantz, New Yorker | Read more:
Image: via

Wednesday, April 1, 2026

'Fragment Creation Event' - Starlink Satellite Breaks Apart

SpaceX’s Starlink division confirmed yesterday that it lost contact with a satellite on Sunday and is trying to locate space debris that might have been produced by… whatever happened there.

Starlink said there appeared to be “no new risk” to other space operations and did not use the word “explosion.” But it seems that something caused a Starlink broadband satellite to break apart into at least tens of pieces. LeoLabs, which operates a radar network that can track objects in low Earth orbit, said in an X post that it “detected a fragment creation event involving SpaceX Starlink 34343,” one of the 10,000 or so Starlink satellites in orbit.

“LeoLabs Global Radar Network immediately detected tens of objects in the vicinity of the satellite after the event, with a first pass over our radar site in the Azores, Portugal,” LeoLabs said. “Additional fragments may have been produced—analysis is ongoing.”

LeoLabs said the breakup was “likely caused by an internal energetic source rather than a collision with space debris or another object.” Because of “the low altitude of the event, fragments from this anomaly will likely de-orbit within a few weeks,” it said. [...]

LeoLabs said yesterday that the new event is similar to one from December 17, 2025, which also produced “tens of objects in the vicinity of the satellite” and appeared to be “caused by an internal energetic source” rather than a crash with another object. LeoLabs said it wants more information on the anomalies.

“These events illustrate the need for rapid characterization of anomalous events to enable clarity of the operating environment,” it said.

Starlink provided a few details shortly after the December 2025 incident, saying on December 18 that an “anomaly led to venting of the propulsion tank, a rapid decay in semi-major axis by about 4 km, and the release of a small number of trackable low relative velocity objects.” Starlink added that the satellite was “largely intact” but “tumbling,” and would reenter the Earth’s atmosphere and “fully demise” within weeks.

In December, Starlink seemed confident that it could prevent future anomalies. “Our engineers are rapidly working to [identify the] root cause and mitigate the source of the anomaly and are already in the process of deploying software to our vehicles that increases protections against this type of event,” Starlink said in the December 18 post.

We asked SpaceX today whether it has determined the cause of the December anomaly or the one on Sunday, and will update this article if we get a response.

by Jon Brodkin, Ars Technica |  Read more:
Image: Aurich Lawson | Getty Images

The AI Doc

 

(This will be a fully spoilorific overview. If you haven’t seen The AI Doc, I recommend seeing it; it is about as good as it could realistically have been, in most ways.)

Like many things, it only works because it is centrally real. The creator of the documentary clearly did get married and have a child, freak out about AI, ask questions of the right people out of worry about his son’s future, freak out even more now with actual existential risk for (simplified versions of) the right reasons, go on a quest to stop freaking out and get optimistic instead, find many of the right people for that and ask good non-technical questions, get somewhat fooled, listen to mundane safety complaints, seek out and get interviews with the top CEOs, try to tell himself he could ignore all of it, then decide not to end on a bunch of hopeful babies and instead have a call for action to help shape the future.

The title is correct. This is about ‘how I became an Apocaloptimist,’ and why he wanted to be that, as opposed to an argument for apocaloptimism being accurate. The larger Straussian message, contra Tyler Cowen, is not ‘the interventions are fake’ but that ‘so many choose to believe false things about AI, in order to feel that things will be okay.’

A lot of the editing choices, and the selections of what to intercut and clip, clearly come from an outsider without technical knowledge, trying to deal with their anxiety. Many of them would not have been my choices, especially the emphasis on weapons and physical destruction, but I think they work exactly because together they make it clear the whole thing is genuine.

Now there’s a story. It even won praise online as fair and good, from both those worried about existential risk and several of the accelerationist optimists, because it gave both sides what they most wanted. [...]

Yes, you can do that for both at once, because they want different things and also agree on quite a lot of true things. That is much more impactful than a diatribe.

We live in a world of spin. Daniel Roher is trying to navigate a world of spin, but his own earnestness shines through, and he makes excellent choices on who to interview. His being swayed by whoever is in front of him is a feature, not a bug, because he’s not trying to hide it. There are places where people are clearly trying to spin, or are making dumb points, and I appreciated him not trying to tell us which was which.

MIRI offers us a Twitter FAQ thread and a full website FAQ explaining their full position in the context of the movie, which is that no this is not hype and yes it is going to kill everyone if we keep building it and no our current safety techniques will not help with that, and they call for an international treaty.

Are there those who think this was propaganda or one sided? Yes, of course, although they cannot agree on which angle it was trying to support.

Babies Are Awesome

The overarching personal journey is about Daniel having a son. The movie takes one very clear position, that we need to see taken more often, which is that getting married and having a family and babies and kids are all super awesome.

This turns into the first question he asks those he interviews. Would you have a child today, given the current state of AI? [...]

People Are Worried About AI Killing Everyone

The first set of interviews outlines the danger.

This is not a technical film. We get explanations that resonate with an ordinary dude.

We get Jeffrey Ladish explaining the basics of instrumental convergence, the idea that if you have a goal then power helps you achieve that goal and you cannot fetch the coffee if you’re dead. That it’s not that the AI will hate us, it’s that it will see us like we see ants, and if you want to put a highway where the anthill is, that’s the ants’ problem.

We get Connor Leahy talking about how creating smarter and more capable things than us is not a safe thing to be doing, and emphasizing that you do not need further justification for that. We get Eliezer Yudkowsky saying that if you share a planet with much smarter beings that don’t care about you and want other things, you should not like your chances. We get Ajeya Cotra explaining additional things, and so on.

Aside from that, we don’t get any talk of the ‘alignment problem’ and I don’t think the word alignment even appears in the film that I can remember.

It is hard for me to know how much the arguments resonate. I am very much not the target audience. Overall I felt they were treated fairly, and the arguments were both strong and highly sufficient to carry the day. Yes, obviously we are in a lot of trouble here.

Freak Out

Daniel’s response is, quite understandably and correctly, to freak out.

Then he asks, very explicitly, is there a way to be an optimist about this? Could he convince himself it will all work out?

by Zvi Mowshowitz, DWAtV |  Read more:

Monday, March 30, 2026

Lost In Space

No one is happy with NASA’s new idea for private space stations (Ars Technica):

"Most elements of a major NASA event this week that laid out spaceflight plans for the coming decade were well received: a Moon base, a focus on less talk and more action, and working with industry to streamline regulations so increased innovation can propel the United States further into space.

However, one aspect of this event, named Ignition, has begun to run into serious turbulence. It involves NASA’s attempt to navigate a difficult issue with no clear solution: finding a commercial replacement for the aging International Space Station.

During the Ignition event on Tuesday, NASA leaders had blunt words for the future of commercial activity in low-Earth orbit. Essentially, they are not confident in the viability of a commercial marketplace for humans there, and the agency’s plan to work with private companies to develop independent space stations does not appear to be headed toward success. Plenty of people in the industry share these concerns, but NASA officials have not expressed them out loud before.

“We’re on a path that’s not leading us where we thought it would,” said Dana Weigel, manager of the International Space Station program for NASA.

NASA proposed a new solution that would bind the private companies more closely to NASA, requiring them not to build free-flying space stations but rather to work directly with the space agency on modules that would, at least initially, dock with the International Space Station. This change was not well-received."

***
[ed. See also: SpaceX offers details on orbital data center satellites (Space News):]

"At a March 21 event in Austin, Texas, Musk outlined an initiative by SpaceX, along with automaker Tesla and artificial intelligence company xAI — also run by Musk — to massively increase production of high-end computer chips needed for both terrestrial and space applications.

The Terafab project seeks to produce one terawatt of processors annually, which Musk said is 50 times the combined production rate of all manufacturers of chips used today in advanced applications such as AI.

Those processors, he said, are the “missing ingredient” in his plans to deploy a large constellation of satellites to serve as an orbital data center.

“We either build the Terafab or we don’t have the chips, and we need the chips, so we’re going to build the Terafab,” he said.

SpaceX filed an application with the Federal Communications Commission in late January for a constellation of up to one million satellites that would be used as an orbital data center for AI applications. The company provided few technical details about the constellation, including the size of the satellites, in that application."

Sunday, March 29, 2026

Hawaii’s Small Farmers Begin Recovery After Catastrophic Flooding

Eddie Oroyan’s farm was thriving when the storms hit. He and his wife had started LewaTerra Farm last year on a gorgeous stretch of land on the north shore of Oahu. They were delivering vegetables to customers in the community, selling at farmers’ markets and to local restaurants.

Then, on the week of 10 March, a first kona low storm hit the island, bringing copious amounts of water, flooding their land and wiping out crops. Nearly all their papayas were gone. And the tomatoes didn’t survive. But the couple quickly began cleaning, replanting and tying down crops, confident that they would get back on their feet shortly.

“It was looking really positive. We were like, OK, we’re going to make it out of this,” Oroyan said.

But days later the Hawaiian Islands were hit with yet another storm – this one even more perilous. It inundated neighborhoods, leading to more than 200 rescues, washing houses off their foundations and leaving wide swaths of the land underwater.

Oroyan and his wife evacuated in chest-deep water. They returned to find an almost complete loss.

“The crops were completely covered and had already been underwater earlier that week. The disease was already setting in,” he said.

One week on, Hawaii is only just beginning to grapple with the aftermath of both storms, which saw as much as 50in of rain and caused some of the state’s worst flooding since 2004. The damage is immense, with officials estimating costs at $1bn, and farmers have been hit hard, particularly on Oahu. More than 300 farms have reported about $17.5m in damage as of this week, said Brian Miyamoto, the executive director of the Hawai‘i Farm Bureau.

“This is so widespread that the need is astronomical,” he said.

And with significant debris, damaged roads, and thick mud indoors and outside, cleanup will take time. [...]

Blake Briddell and Brit Yim, who for the last eight years have run an eight-acre farm on land that used to serve as a sugarcane plantation on the north shore, went through their nursery and storage sheds, elevating everything off the ground to protect their breadfruit, mango and citrus trees.

The storm came sooner than expected. The first front brought incessant rain, dropping about 20in in McKinnon’s area, which typically sees an average of 30in for the year. The water levels on Briddell’s farm were steadily rising, and the couple soon had to evacuate.

The heavy rains didn’t stay for long, but caused significant damage, including flooding fields and saturating the ground, and harvested crops were lost to power outages and damaged equipment.

Much of the land that Oroyan and his wife, Jessica Eirado Enes, tend had been left coated in a thick layer of mud thanks to the dense clay soil. Millions of years of erosion from the mountains produced that mineral-rich clay soil, which is good for planting but doesn’t soak up water well, Oroyan said, and swallows shoes and tractors.

The couple spent days cleaning up their land, trying to get things back in order and leaving soaked equipment out to dry. They got to work replanting crops that had tipped over, including eggplant and okra.

So did McKinnon and Briddell. Another kona storm was forecast, but was expected to be less severe than the previous ones. “It’s silly looking back, but we were talking about how it might be nice to get a little bit of rain to wash the mud off of everything. Like a little bit of rain would be welcome,” Briddell said.

Briddell woke up at 1.30am on the morning of 20 March to see water surrounding his farm’s small living space, an alarming development given that it is located on the most elevated area of the property. The water was already shin-deep, meaning the road was too flooded for the couple to drive out, he said.

“We knew we were stuck at that point and it was just a matter of ‘OK, everything that we can get back up elevated, let’s do it,’” Briddell said. “The water at that stage was rising about a foot every 20 minutes. I’ve never seen anything like it. You could literally see the water line climbing.”

Meanwhile, as the storm made landfall, Oroyan had been harvesting beets and lettuce in the rain, trying to get them out of the ground before it became too muddy to do so. As he prepared to go to bed, he saw that water was already overwhelming a nearby culvert and coming to the edge of a drainage ditch on the property.

He and his wife began to prepare once more. They gathered their things and moved valuable heavy equipment, a solar generator and a washing machine.

“Within 20 minutes of me saying we should start prepping it was at the foot of the living space,” Oroyan said. Twenty minutes later it was up to their knees, and they drove their vehicles to higher ground with water submerging the hoods of their cars. They made it to a neighbor’s after walking through chest-deep water.

Briddell and Yim put on wetsuits, and placed their dry clothes in a cooler. The couple knew their cats would not leave, and that they couldn’t swim out with them, so they left wet food on the rafters of their home where they knew they’d be safe. They swam a quarter of a mile to their kayak and met with a friend who offered them a vehicle to drive out in.

“The drive was scarier than the swim. The water ripping down the roads. You’re driving with the tailpipe submerged for miles where you can’t let off the gas,” Briddell said.

by Dani Anguiano, The Guardian | Read more:
Images: Eddie Oroyan of LewaTerra Farm
[ed. Climate change. We lost the fight before ever getting started. Because it was a hoax. Because we needed to protect our corporations and our economy, 401Ks, consumptive standards of living. Because it was too complex and too far in the future. Because it was just too hard. See also: They’re Rich but Not Famous—and They’re Suddenly Everywhere.]

Friday, March 27, 2026

Fuzz: Wildlife Conflict in the Modern Era

Recently, I read Fuzz: When Nature Breaks the Law by Mary Roach. Like all of her books, it is a meandering journey that touches on a common theme. Although the subtitle makes it seem that the theme is nature crime, the theme is more about conflicts between bureaucracy, modernity and nature rather than crime itself. A more accurate but worse title would be Fuzz: The Weird Ways Humans Deal with Nature while Navigating Bureaucracy and the Impossibility of People Wanting to be around Wildlife without Ever Being Inconvenienced. Some examples Roach explores include the Indian government’s attempt to sterilize monkeys, how the city of Aspen deals with bears raiding trash cans, and the many failed attempts at getting rid of birds, including the infamous Australian emu war.

Reading Fuzz was often frustrating because most of the problems share the same basic structure regardless of time or place. Humans disturb a local ecosystem by moving there or extracting resources. Animals then wander into human settlements in response to ecosystem change that has worsened their food supply, altered the predator-prey ratio, or made it easier to get calorie-rich food. Humans react with one of two strategies. Strategy one is to kill everything, which is usually ineffective because it either fails to reduce population levels or drives the animals extinct (at least in the region), causing further ecosystem change. Strategy two is to feed the wild animals, because that seems like the nice thing to do; but feeding them encourages the animals to keep coming into human settlements, which makes them bolder, which leads to more conflict and sometimes to attacks. Once this has started, the animals become so used to relying on people for food that they cannot be reintegrated into the wild. Sometimes people become so frustrated and angry that they revert to the first strategy of killing everything.

These problems can seem intractable. People have a hard time being convinced that killing everything doesn’t work, and the people who don’t want to kill the animals have a hard time accepting that their help may make things worse. They continue to feed the wild animals, resist methods that would discourage the animals (such as locking trash cans), and mainly advocate translocation (moving the animal to a different area) even though translocation rarely works. Whether because of blinding love or hate, people have a hard time handling wild animals wandering into their homes and cities.

Even though reading about these issues was frustrating, Fuzz left me feeling more inspired than dejected. There are examples of humans humanely and successfully addressing human-wildlife conflict and limiting the presence of introduced flora and fauna. They do so through careful study of local ecosystems, which includes the humans who live there and how they feel about wildlife. The most inspiring thing in the book was seeing how much the animal rights and environmental movements have changed how the public handles these wildlife issues. Before the 1970s, the kill-everything approach was the norm. Now it is not.

Throughout these stories, Roach makes the case that the best way to deal with wildlife conflict is to find better ways to live with animals that don’t involve killing them or making them reliant on humans. Sometimes the solution is simple and easy. After multiple chapters of ridiculous attempts to stop birds from eating crops, Roach argues that it’s better to do nothing or to hire a human to scare the birds off. Other times the solution is complicated. In New Zealand, there’s research being done on using genetic engineering to induce infertility among mice and other destructive, introduced species as a way to reduce the population without mass poisoning. The researchers are trying to limit unintended consequences, but there will always be risk. The important question is whether the unknown risk of doing something is worth the known risk of doing nothing. I appreciate that there are people out there doing the often thankless work of trying to make humans and wildlife happy. Roach did an excellent job of showing the myriad ways this plays out and, unlike other books I’ve read, discusses these issues without claiming that now is the first time humans have tried caring about nature and ecological balance.

by Mia Milne, Solar Thoughts |  Read more:
Image: Fuzz
[ed. This issue has played out forever in my old hometown of Anchorage, Alaska (as you can imagine), and will probably never be resolved to everyone's satisfaction. It's a form of politics. What's the science say, and what are the options? How feasible are mitigative policies, and how much will they cost? Finally arriving at the most relevant question: what kind of city do you want to live in (that would perpetually kill its animal populations and modify its natural environment)?]

Q Day is Coming

Google is dramatically shortening its readiness deadline for the arrival of Q Day, the point at which quantum computers will be able to break the public-key cryptography algorithms that secure decades’ worth of secrets belonging to militaries, banks, governments, and nearly every individual on earth.

In a post published on Wednesday, Google said it is giving itself until 2029 to prepare for this event. The post went on to warn that the rest of the world needs to follow suit by adopting PQC—short for post-quantum cryptography—algorithms to augment or replace elliptic curves and RSA, both of which will be broken.

The end is nigh

“As a pioneer in both quantum and PQC, it’s our responsibility to lead by example and share an ambitious timeline,” wrote Heather Adkins, Google’s VP of security engineering, and Sophie Schmieg, a senior cryptography engineer. “By doing this, we hope to provide the clarity and urgency needed to accelerate digital transitions not only for Google, but also across the industry.”

Separately, Google detailed its timeline for making Android quantum resistant, the first time the company has publicly discussed PQC support on the operating system. Starting with the beta version, Android 17 will support ML-DSA, a digital signing algorithm standard advanced by the National Institute of Standards and Technology. ML-DSA will be added to Android’s hardware root of trust. The move will allow developers to have PQC keys for signing their apps and verifying other software signatures. [...]

So what’s spooking Google so much?

Wednesday’s hard deadline came as a surprise to many cryptography engineers, including those who have been active in the PQC transition for years.

“That is certainly a significant acceleration/tightening of the public transition timelines we’ve seen to date, and is accelerated over even what we’ve seen the US government ask for,” Brian LaMacchia, a cryptography engineer who oversaw Microsoft’s post-quantum transition from 2015 to 2022 and now works at Farcaster Consulting Group, said in an interview. “The 2029 timeline is an aggressive speedup but raises the question of what’s motivating them.”

Google didn’t lay out the rationale for the revision in either of its posts. A spokeswoman didn’t immediately provide answers to questions sent by email.

Estimates for when Q Day will arrive have varied widely since the mid-1990s, when mathematician Peter Shor first showed that a quantum computer of sufficient strength could factor integers in polynomial time, much faster than classical computers. That put the world on notice that RSA’s days were limited. Follow-on research showed quantum computers provided a similar speed-up in solving the discrete log problem that underpins elliptic curves. [...]
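Shor’s result is concrete enough to sketch. In the toy Python example below, recovering an RSA private key reduces to factoring the public modulus n; naive trial division stands in for what Shor’s algorithm would do in polynomial time on a large quantum computer. The primes are deliberately tiny for illustration; real RSA moduli are 2048 bits or more, far beyond classical factoring.

```python
# Toy RSA: tiny primes for illustration only. Real moduli are 2048+ bits,
# which classical trial division cannot factor but Shor's algorithm could.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

msg = 42
cipher = pow(msg, e, n)        # anyone can encrypt with the public key (n, e)

def factor(m):
    """Naive trial division; stands in for a quantum factoring attack."""
    f = 2
    while m % f:
        f += 1
    return f, m // f

# An attacker who factors n rebuilds the private key and reads the message.
p2, q2 = factor(n)
d_recovered = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(cipher, d_recovered, n) == msg
```

The same structure is why elliptic-curve keys fall too: a quantum speedup for the discrete log plays the role that factoring plays here.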

In preparation for Q Day, cryptographers have devised new encryption algorithms that rely on problems where quantum computers hold no advantage over classical computers. Rather than factoring or solving the discrete log, one approach involves mathematical structures known as lattices. A second approach involves a stateless hash-based digital signature scheme. The National Institute of Standards and Technology has advanced several algorithms that have yet to be broken and are presumed to be secure.
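The hash-based idea is easiest to see in miniature. Below is a minimal Lamport one-time signature in Python, a classroom building block rather than the far more elaborate stateless scheme NIST standardized, but it shows the core property: security rests only on hash preimage resistance, which quantum computers weaken only quadratically rather than breaking outright. A key pair must sign at most one message.

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # One pair of random secrets per bit of the 256-bit message digest;
    # the public key is the hash of every secret.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def digest_bits(msg: bytes):
    d = H(msg)
    return [(d[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(msg: bytes, sk):
    # Reveal one secret from each pair, selected by the digest bits.
    return [sk[i][bit] for i, bit in enumerate(digest_bits(msg))]

def verify(msg: bytes, sig, pk) -> bool:
    # Each revealed secret must hash to the matching public-key entry.
    return all(H(s) == pk[i][bit]
               for i, (s, bit) in enumerate(zip(sig, digest_bits(msg))))

sk, pk = keygen()
sig = sign(b"post-quantum", sk)
assert verify(b"post-quantum", sig, pk)
assert not verify(b"tampered", sig, pk)
```

Revealing half the secrets per signature is why a key pair is one-time; stateless schemes layer many such one-time keys under hash trees so a single public key can sign many messages.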

In 2022 the NSA set a 2033 deadline for PQC readiness in national security systems, with a 2030 deadline for a few specific applications.

by Dan Goodin, Ars Technica |  Read more:
Image: JuSun/Getty
[ed. So does this mean we don't need passwords anymore? Or the old ones won't work? I can't tell. Tech companies have been telling us that'd happen for years, too. It's coming! And, how does strong AI affect any of this? If I have to change all my passwords everywhere I'm going to go crazy.]

Thursday, March 26, 2026

NASA's 'Lunar Viceroy' on Moon Base Plans

NASA's “Lunar Viceroy” talks about how NASA will build a Moon base (Ars Technica)
Image: Rendering of a Moon base that will be built over the next decade. Credit: NASA
[ed. In the next 10 years.]

Seeing Like a Sedan

Waymos and Cybercabs see the world through very different sensors. Which technology wins out will determine the future of self-driving vehicles.

Picture a fall afternoon in Austin, Texas. The city is experiencing a sudden rainstorm, common there in October. Along a wet and darkened city street drive two robotaxis. Each has passengers. Neither has a driver.

Both cars drive themselves, but they perceive the world very differently.
 
One robotaxi is a Waymo. From its roof, a mounted lidar rig spins continuously, sending out laser pulses that bounce back from the road, the storefronts, and other vehicles, while radar signals emanate from its bumpers and side panels. The Waymo uses these sensors to generate a detailed 3D model of its surroundings, detecting pedestrians and cars that human drivers might struggle to see.

In the next lane is a Tesla Cybercab, operating in unsupervised full self-driving mode. It has no lidar and no radar, just eight cameras housed in pockets of glass. The car processes these video feeds through a neural network, identifying objects, estimating their dimensions, and planning its path accordingly.

This scenario is only partially imaginary. Waymo already operates, in limited fashion, in Austin, San Francisco, Los Angeles, Atlanta, and Phoenix, with announced plans to operate in many more cities. Tesla Motors launched an Austin pilot of its robotaxi business in June 2025, albeit using Model Y vehicles with safety monitors rather than the still-in-development Cybercab. The outcome of their competition will tell us much about the future of urban transportation.

The engineers who built the earliest automated driving systems would find the Waymo unsurprising. For nearly two decades after the first automated vehicles emerged, a consensus prevailed: To operate safely, an AV required redundant sensing modalities. Cameras, lidar, and radar each had weaknesses, but they could compensate for each other. That consensus is why those engineers would find the Cybercab so remarkable. In 2016, Tesla broke with orthodoxy by embracing the idea that autonomy could ultimately be solved with vision and compute and without lidar — a philosophical stance it later embodied in its full vision-only system. What humans can do with their eyeballs and a brain, the firm reasoned, a car must also be able to do with sufficient cameras and compute. If a human can drive without lidar, so, too, can an AV… or so Tesla asserts.

This philosophical disagreement will shortly play out before our eyes in the form of a massive contest between AVs that rely on multiple sensing modalities — lidar, radar, cameras — and AVs that rely on cameras and compute alone.

The stakes of this contest are enormous. The global taxi and ride-hailing market was valued at approximately $243 billion in 2023 and is projected to reach $640 billion by 2032. In the United States alone, people take over 3.6 billion ride-hailing trips annually. Converting even a fraction of this market to AVs represents a multibillion-dollar opportunity. Serving just the American market, at maturity, will require millions of vehicles.

Given the scale involved, the cost of each vehicle matters. The figures are commercially sensitive, but it is certainly true that cameras are cheaper than lidar. If Tesla’s bet pays off, building a Cybercab will cost a fraction of what it will take to build a Waymo. Which vision wins out has profound implications for how quickly each company will be able to put vehicles into service, as well as for how quickly robotaxi service can scale to bring its benefits to ordinary consumers across the United States and beyond.

by Andrew Miller, Asterisk |  Read more:
Image: Jared Nangle
[ed. via DWAtV:]
***
A relevant thing about Elon Musk is that, while he has a lot of technical expertise and can accomplish a lot of seemingly impossible tasks, he also just says things.

For example, here’s another thing he just said this week, in a trick he’s pulled several times without delivering, where the prediction market is at 12% but that seems rather high to me:
NewsWire: Elon Musk offers to pay TSA workers' salaries amid government shutdown.
Just saying things, and announcing with confidence he will do things he probably cannot do, is central to his strategy of then yelling at people to sleep on floors until they manage to do it, which occasionally works to at least some extent. Elon Musk may plausibly start such a project, but the chances he achieves the goals he is stating are very low.

Announce periodically you are going to the moon and stars, and if one time you end up with SpaceX, it’s still a win. It’s worked for him quite well, so far.

Wednesday, March 25, 2026

China and the Future of Science

[The following post is a polished transcript of a speech I recently gave to a private gathering of American technologists. Its contents may be of interest to a larger audience. -TG.]

The Chinese socio-political system differs from our own. From the perspective of the topic of this conference, here is the most salient distinction: the Chinese system has a telos. The Chinese party-state is fundamentally a set of goal-oriented institutions. This is not unique to China—it is in fact a distinguishing feature of all Leninist systems. I sometimes think of Leninist systems as a little bit like that bus in the movie Speed. Who here has seen it? For those who haven’t, here is the basic gist of that film: an extortionist attaches a bomb to the speedometer of a bus. If the bus ever slows below 50 miles per hour, everyone blows up. So it is with your average communist system. Either it hurtles towards some clearly defined goal or things start to fall apart.

In the early days of Mao, the overarching aim of the communist system was to seize state power, first through subversion and insurgency, then through more regular combined arms warfare. In the later days of Mao the newly established Chinese state and the society it intertwined were oriented around class struggle, both at home and abroad. From the 1980s through the 2010s the Chinese system orbited around a different, yet still very explicitly stated, goal: getting rich. In theory, if not always in practice, every action taken by every cadre, every soldier, and every state employee was subordinate to this larger, unifying aim. We must make China rich.

That is no longer the animating telos of the Chinese system. There is a new goal, one that has been articulated with great clarity by Chairman Xi and the Chinese central committee: In 2026, the aim of China’s communist enterprise is to lead humanity through what they call “the next round of techno-scientific revolution and industrial transformation.” The Chinese leadership believes humanity stands on the cusp of the next industrial revolution. China can only be restored to its ancestral greatness if it is the pioneer of this revolution. All machinery of party and state must bend towards this end. All 100 million members of the Communist Party of China, all 50 million government employees of the PRC, all two million soldiers of the People’s Liberation Army, and ultimately all of the 1.4 billion people that call China home must be mobilized to accomplish this aim. That is the ambition. China will be the greatest scientific power the world has ever seen—or bust.

The communists are deadly serious about their pursuit of this aim. Statistics provide one window into the seriousness of their intent. Now I don’t intend for the remainder of this speech to be a laundry list of numbers, but I think the numbers are useful for helping us see the scale of what China has already accomplished and the speed with which they have accomplished it. They are also strong signal of future intent—it is difficult to survey the numbers and not appreciate just how ironclad China’s commitment to scientific achievement really is.

Now scientific achievement is difficult to measure. One common metric is to count the so-called “high impact papers” – journal articles highly cited by other leading lights in a given scientific field. Count up these papers over the course of a year, see who wrote them, see where those authors work, and—voila!—you have a ranked list of which institutions are putting out the most high-impact science in a given year. Had you done this counting exercise in the year 2005, you would have discovered that six of the world’s ten most productive universities were in the United States. Today only one of those universities is in the United States. That university is Harvard, coming in at spot number three on the list. At spot number one? Zhejiang University.

How many of you have heard of Zhejiang University? Can I get a show of hands?

And of course, Zhejiang University is just one of the Chinese institutions on this top ten list. China claims not just the number-one spot, but also the number-two spot. And not just the number-one and number-two spots: the fourth, fifth, sixth, seventh, eighth, and ninth spots also go to the Chinese.

The scientific publisher Nature makes a similar catalog on a slightly more granular level, looking at specific fields of science. According to Nature’s most recent rankings, 18 of the top 25 most productive research institutes in the physical sciences, 19 of the top 20 in geosciences, and a full 25 out of 25 in chemistry are Chinese. Only in the biosciences do American scientists still have a lead—but even on that list three of the top ten are Chinese.

The kicker is, none of that was true even just a decade ago.

The most granular analysis of all is published by the Australian Strategic Policy Institute, or ASPI. ASPI publishes a neat research tracker that surveys new publications in 74 distinct high-end technologies. Unlike the statistics I just discussed, their tracker includes research published by scientists working in national laboratories and private institutions as well as those published by academic scientists. For each category they make a list of the ten institutions that are publishing the most high-impact science in that particular topic. What have they found? For 66 of the 74 categories tracked, a majority of the institutions that are now publishing the highest-impact science are Chinese. In many areas of science the dominance is total: For example, ten of the ten most productive research institutions in the fields of nanoscale material manufacturing, photonic sensors, chemical coating, drone operations, automated swarms, and undersea communications are Chinese. The number is nine out of ten for work on supercapacitors, advanced composite materials, inertial navigation systems, and satellite positioning, eight out of ten in advanced optical communications, advanced radiofrequency communications, and new chemical coatings, and seven out of ten for directed energy technologies, nuclear engineering, and nuclear waste treatment.

The scale of Chinese scientific production is in part a story about people. China graduates five times as many medical and biomedical students as we do every year, seven times as many engineers, and two-and-a-half times as many undergraduates with research experience in artificial intelligence. Last year China graduated almost double the number of STEM PhD students that we did—and that number is actually worse than it sounds because—depending on the exact year you do the counting—between one sixth and one fifth of our STEM graduates are themselves Chinese.

Many of these researchers go back. They go back partially because they are well compensated for doing so. They also go back because of the research opportunities afforded to them. A recent study found that returning Chinese scientists go on to become the lead author on 2.5 times more papers than their colleagues who stay in the United States. Many Chinese research labs have 30 or 40 people attached to them—the equivalent of a commercial research lab in the United States. Ask any scientist who has gone to China in the past three years to visit academic colleagues and they will tell you how astounded they are at the quality of the laboratory equipment and machinery that their Chinese colleagues have access to. If in the not-so-distant past Chinese localities competed with each other to lay the most asphalt, now that funding pours into laboratory equipment, scientific instruments, and advanced scientific facilities. Thus China now has the world’s most sensitive ultra-high-energy cosmic-ray detector, the world’s largest and most sensitive radio telescope, the world’s strongest steady-state magnetic field, the world’s fastest quantum computer by computational advantage, and the world’s most sensitive neutrino detector. Just yesterday an attendee at this conference informed me of another I should add to my list: the world’s largest primate medical research center.

Now I can already hear some of your objections. “Tanner, these measures don’t include classified research. They don’t include the proprietary research by private companies—that is the stuff that actually pushes technology forward. American companies are not publishing billion-dollar trade secrets in the latest journals. The Chinese scientists are under insane publish or perish pressures—they are far more likely to lie and cheat. Don’t you know Chinese scientists take part in citation cartels? Haven’t you read those bitter critiques of the new system written by China’s own disgruntled scientists?”

My main response to this: you guys have lost the thread. I am reminded of a similar style of argument we often see in AI development. Every time a new model is released people play around with it for a bit and then start to catalog the flaws of this model. But the real story, the story historians will tell a generation from now, is never about the model of the moment. What matters is movement between those moments. History is made by the trend-line. What capabilities did the models have four years ago? What capabilities do they have now? What might they reasonably be expected to have in a decade hence?

Something similar might be said for science and China.

by Tanner Greer, The Scholar's Stage |  Read more:
Image: uncredited
[ed. See also: The China Tech Canon (Asterisk).]

Monday, March 23, 2026

Vertical Farming

via:
[ed. Impressive.]
***
"While most vertical farms are limited to lettuces, Plenty spent the past decade designing a patent-pending, modular growing system flexible enough to support a wide variety of crops – including strawberries. Growing on vertical towers enables uniform delivery of nutrients, superior airflow and more intense lighting, delivering increased yield with consistent quality.

Every element of the Plenty Richmond Farm–including temperature, light and humidity–is precisely controlled through proprietary software to create the perfect environment for the strawberry plants to thrive. The farm uses AI to analyze more than 10 million data points each day across its 12 grow rooms, adapting each grow room’s environment to the evolving needs of the plants – creating the perfect environment for Driscoll’s proprietary plants to thrive and optimizing the strawberries’ flavor, texture and size. Even pollination has been engineered by Plenty, using a patent-pending method that evenly distributes controlled airflow across the strawberry flowers for more efficient and effective pollination than using bees, supporting more uniform strawberry size and shape."  ~ Greater Richmond Partnership

Sunday, March 22, 2026

Teshekpuk Lake

Arctic Alaska oil and gas lease sale draws record bidding, despite legal clouds (AK Beacon)

The first lease sale in the National Petroleum Reserve in Alaska since 2019 generated $163 million in high bids, but some bids were for protected land
***
A controversial oil and gas federal lease sale in the National Petroleum Reserve in Alaska generated a new bidding record, according to results released on Wednesday. It was the first auction held in that Arctic Alaska territory since 2019.

The lease sale produced $163 million in high bids, beating the $104 million mark set during the first competitive oil and gas lease sale in the Indiana-sized reserve, which was held in 1999 during the Clinton administration.

Eleven companies submitted bids for more than 1.3 million acres of the nearly 5.5 million acres offered in the auction.

Kevin Pendergast, Alaska state director for the U.S. Bureau of Land Management, called the results “historic.”

“This is the strongest sale we have ever had in the National Petroleum Reserve in Alaska by nearly every measure. It makes clear that for the NPR-A, despite all the successes to date, the best days are still ahead,” Pendergast said at the conclusion of the bid opening, which lasted about two hours.

In statements issued after the bid reading, federal and state officials hailed the results. [...]

The lease sale was one of five mandated in the reserve over the next 10 years by the sweeping budget and tax bill called the “One Big Beautiful Bill Act.” That mandate calls for lease sales to be conducted under a Trump administration management plan that opened 82% of the reserve to oil development. Previously, the Obama administration held annual lease sales in the petroleum reserve, but that administration’s management plan protected about half of the land through the designation of “special areas” considered important to wildlife and to Native cultural practices.

Federal officials auctioned tracts of protected land

Much of the bidding in Wednesday’s sale was for territory that was previously off-limits to oil development under protections that date as far back as the Reagan administration. [ed. guess who helped write and fight for those protections.]

The inclusion of long-protected land in the sale, predominantly the area around ecologically sensitive Teshekpuk Lake, made the lease sale contentious. It is the subject of two lawsuits filed by Native and environmental groups.

Bids were accepted even for tracts within an area encircling Teshekpuk Lake, the North Slope’s largest lake, despite a federal court order issued Monday that reinstated development prohibitions there.

by Yareth Rosen, Alaska Beacon |  Read more:
Image: YouTube
[ed. Nice video, you should watch it. $163 million is not nothing, but it's not a lot. Prudhoe Bay - before there was any infrastructure or pipeline - garnered $900 million, and it was a much smaller area. When I was overseeing oil and gas leasing in the arctic in the 80s there was very little interest in NPR-A - except for Teshekpuk Lake, one of the most ecologically important areas on the North Slope (along with ANWR). We used to joke that if you wanted to find oil just look for the most environmentally sensitive area you could find in a lease sale and bid there. Not a joke anymore.]

Friday, March 20, 2026

Bow and Arrow Diffusion Across Cultures

Study pinpoints when bow and arrow came to North America (Ars Technica)

Image: A petroglyph from Newspaper Rock, a site along Indian Creek in southeastern Utah. Credit: David Hiser/Environmental Protection Agency/Public domain
[ed. I haven't finished half my morning coffee and already know about atlatls (and why dogs love them), risk-buffering, and frozen feces knives. Is science great, or what?]
***
1. Introduction
In his book, Shadows in the Sun, Davis (1998: 20) recounts what is now arguably one of the most popular ethnographic accounts of all time:
“There is a well known account of an old Inuit man who refused to move into a settlement. Over the objections of his family, he made plans to stay on the ice. To stop him, they took away all of his tools. So in the midst of a winter gale, he stepped out of their igloo, defecated, and honed the feces into a frozen blade, which he sharpened with a spray of saliva. With the knife he killed a dog. Using its rib cage as a sled and its hide to harness another dog, he disappeared into the darkness.”
Since publication, this story has been told and re-told in documentaries, books, and across internet websites and message boards (Davis, 2007, Davis, 2010; Gregg et al., 2000; Kokoris, 2012; Taete, 2015). Davis states that the original source of the tale was Olayuk Narqitarvik (Davis, 2003, Davis, 2009). It was allegedly Olayuk's grandfather in the 1950s who refused to go to the settlements and thus fashioned a knife from his own feces to facilitate his escape by skinning and disarticulating a dog. Davis has admitted that the story could be “apocryphal”, and that initially he thought the Inuit who told him this story was “pulling his leg” (Davis, 2009, Davis, 2014). Yet, as support for the credibility of the story, Davis cites the auto-biographical account of Peter Freuchen, the Danish arctic explorer (Hodge and Davis, 2012). Freuchen (1953) describes how he dug himself a pit to sleep in and woke up trapped by snow. Every effort to get out that he tried failed. Finally, he recalled seeing dog's excrement frozen solid as a rock. So, Freuchen defecated in his hand, shaped it into a chisel, and waited for it to freeze solid. He then used the implement to free himself from the snow: “I moved my bowels and from the excrement I managed to fashion a chisel-like instrument which I left to freeze… At last I decided to try my chisel and it worked” (Freuchen, 1953: 179).

2. Materials and methods
In order to procure the necessary raw materials for knife production, one of us (M.I.E.) went on a diet with high protein and fatty acids, which is consistent with an arctic diet, for eight days (Binford, 2012; Fumagalli et al., 2015) (Table S1). The Inuit do not only eat meat from maritime and terrestrial animals (Arendt, 2010; Zutter, 2009), and there were three instances during the eight-day diet that M.I.E. ate fruit, vegetables, or carbohydrates (Table S1).

Raw material collection did not begin until day four, and then proceeded regularly for the next five days (Table S1). Fecal samples were formed into knives using ceramic molds, “knife molds” (Figs. S1–S2), or molded by hand, “hand-shaped knives” (Fig. S3). All fecal samples were stored at −20 °C until the experiments began.

Thursday, March 19, 2026

NSF Tech Labs: Science Funding Goes Beyond the Universities

The National Science Foundation announced Friday that it is launching one of the most significant experiments in science funding in decades. A new initiative called Tech Labs will invest up to $1 billion over the next five years in large-scale, long-term funding for teams of scientists working outside traditional university structures - a major departure from how the agency has funded research over the past 75 years.

The timing couldn’t be better. The way our science agencies fund research in the U.S. no longer matches the way many breakthroughs actually happen.

For most of the postwar era, federally funded science has been built around a simple model. Vannevar Bush’s famous 1945 essay, “Science: The Endless Frontier,” sketched a vision of government-backed research led by university-based scientists pursuing their own ideas. The system that emerged—small, project-based federal grants mostly to individual scientists—worked brilliantly for decades. It gave researchers autonomy, kept politics at arm’s length, and helped make American science the envy of the world.

But the frontier has moved. In 1945 world-class scientific research could be done with a few graduate students and modest equipment. But the science that shapes our world, from particle physics to protein design to advanced materials, increasingly requires massive data sets, large integrated teams and sustained institutional support.

Take the discovery of the Higgs boson, a particle that helps explain why anything has mass—and thus why atoms, molecules and matter itself can exist. Making this discovery required a multibillion-dollar particle accelerator, thousands of scientists across dozens of countries, and papers with multipage author lists.

Google DeepMind’s AlphaFold2, which cracked the 50-year-old protein-folding problem and earned researchers the 2024 Nobel Prize in Chemistry, emerged from a team with access to massive computational resources and sustained institutional support.

The Janelia Research Campus in collaboration with other institutions mapped the complete wiring diagram of the fruit-fly brain, neuron by neuron, synapse by synapse, through years of coordinated microscopy and analysis that no single lab could attempt alone.

Yet our federal science funding system is still largely organized around small grants to university scientists. At the NSF, around two-thirds of research dollars flow through small awards to individual university investigators. At the National Institutes of Health, the share is often more than 80%. The average NSF grant is roughly $246,000 a year for three years, often requiring investigators to predict in advance exactly what research they’ll pursue and to spend a significant amount of time navigating administrative hurdles. Scientists consistently report spending close to half their research hours on compliance and grant management.

The system still produces good science, but it has weak points. The current structure is built for discrete projects rather than missions. When research requires long-term continuity, interdisciplinary collaboration or substantial shared infrastructure, it’s often difficult for it to fit into this structure. Many advances we now celebrate succeeded despite the funding model, not because of it.

Philanthropy has stepped into this gap. Focused research organizations, a model backed by former Google CEO Eric Schmidt, build time-limited teams around ambitious technical problems and tie funding to specific milestones that researchers must meet. The Allen Institute for Brain Science, launched with $100 million from Microsoft co-founder Paul Allen, built the first comprehensive gene-expression map of the mouse brain through industrial-scale data collection that would have been impossible under fragmented academic grants. The Arc Institute offers scientists eight-year appointments backed by permanent technical staff with expertise in topics such as machine learning and genome engineering, the kind of sustained expertise that often evaporates when a three-year grant ends. These institutions bet on teams, not projects.

But philanthropy alone can’t reshape American science. The federal government spends close to $200 billion on research and development, orders of magnitude more than even the largest foundations. If we want to change how science gets done at scale, federal funding has to evolve.

While final details are still being worked out, Tech Labs represents NSF’s attempt to do exactly that. Rather than funding isolated projects, the agency would provide flexible, multiyear institutional grants in the range of $10 million to $50 million a year to coordinated research organizations that operate outside the constraints of university bureaucracy. These could include university-adjacent entities such as the Arc Institute or fully independent teams with focused missions. The program would bring the lessons of philanthropic science into a part of the federal portfolio that hasn’t seriously tried them.

This is a good political moment to launch this initiative. Republicans have expressed interest in diversifying federal research away from universities. Democrats want to see the legacy of the Chips and Science Act come to fruition and to get dollars out the door. By funding independent research organizations, Tech Labs sidesteps some of the thorniest debates about indirect costs and institutional overhead. 

by Caleb Watney, Wall Street Journal (via Archive Today) |  Read more:
Image: Getty
[ed. Sounds like a great idea. Especially since science funding has become more politicized, and Congress can't seem to go six months without shutting down the government. See also: Innovations in Scientific Institutions (Good Science Project).]

Friday, March 6, 2026

Cognitive Interdependence in Close Relationships

This chapter is concerned with the thinking processes of the intimate dyad. So, although we will focus from time to time on the thinking processes of the individual - as they influence and are influenced by the relationship with another person - our prime interest is in thinking as it occurs at the dyadic level. This may be dangerous territory for inquiry. After all, this topic resembles one that has, for many years now, represented something of a "black hole" in the social sciences - the study of the group mind. For good reasons, the early practice of drawing an analogy between the mind of the individual and the cognitive operations of the group has long been avoided, and references to the group mind in contemporary literature have dwindled to a smattering of wisecracks.

Why, then, would we want to examine cognitive interdependence in close relationships? Quite simply, we believe that much could be learned about intimacy in this enterprise, and that a treatment of this topic, enlightened by the errors of past analyses, is now possible. The debate on the group mind has receded into history sufficiently that its major points can be appreciated, and at the same time, we find new realms of theoretical sophistication in psychology regarding the operation of the individual mind. With this background, we believe it is possible to frame a notion somewhat akin to the "group mind" and to use it to conceptualize how people in close relationships may depend on each other for acquiring, remembering, and generating knowledge.

Interdependent Cognition 

Interdependence is the hallmark of intimacy. Although we are all interdependent to a certain degree, people in close relationships lead lives that are intertwined to the extreme. Certainly, the behaviors they enact, the emotions they feel, and the goals they pursue are woven in an intricate web. But on hearing even the simplest conversation between intimates, it becomes remarkably apparent that their thoughts, too, are interconnected. Together, they think about things in ways they would not alone. The idea that is central in our analysis of such cognitive interdependence is what we term transactive memory. As will become evident, we find this concept more clearly definable and, ultimately, more useful than kindred concepts that populate the history of social psychology. As a preamble to our ideas on transactive memory, we discuss the group mind notion and its pitfalls. We then turn to a concern with the basic properties and processes of transactive memory. [...]

The Nature of Transactive Memory 

Ordinarily, psychologists think of memory as an individual's store of knowledge, along with the processes whereby that knowledge is constructed, organized, and accessed. So, it is fair to say that we are studying "memory" when we are concerned with how knowledge gets into the person's mind, how it is arranged in the context of other knowledge when it gets there, and how it is retrieved for later use. At this broad level of definition, our conception of transactive memory is not much different from the notion of individual memory. With transactive memory, we are concerned with how knowledge enters the dyad, is organized within it, and is made available for subsequent use by it. This analogical leap is a reasonable one as long as we restrict ourselves to considering the functional equivalence of individual and transactive memory. Both kinds of memory can be characterized as systems that, according to general system theory (von Bertalanffy, 1968), may show rough parallels in their modes of operation. Our interest is in processes that occur when the transactive memory system is called upon to perform some function for the group - a function that the individual memory system might reasonably be called upon to perform for the person.

Transactive memory can be defined in terms of two components: (1) an organized store of knowledge that is contained entirely in the individual memory systems of the group members, and (2) a set of knowledge-relevant transactive processes that occur among group members. Stated more colloquially, we envision transactive memory to be a combination of individual minds and the communication among them. This definition recognizes explicitly that transactive memory must be understood as a name for the interplay of knowledge, and that this interplay, no matter how complex, is always capable of being analyzed in terms of communicative events that have individual sources and individual recipients. By this definition, then, the thought processes of transactive memory are completely observable. The various communications that pass between intimates are, in principle, observable by outside observers, just as each intimate can observe the communications of the other. Using this line of interpretation, we recognize that the observable interaction between individuals entails not only the transfer of knowledge, but the construction of a knowledge-acquiring, knowledge-holding, and knowledge-using system that is greater than the sum of its individual member systems.

Let us consider a simple example to bring these ideas down to earth. Suppose we are spending an evening with Rudy and Lulu, a couple married for several years. Lulu is in another room for the moment, and we happen to ask Rudy where they got the wonderful stuffed Canadian goose on the mantle. He says, "We were in British Columbia..." and then bellows, "Lulu! What was the name of that place where we got the goose?" Lulu returns to the room to say that it was near Kelowna or Penticton - somewhere along Lake Okanogan. Rudy says, "Yes, in that area with all the fruit stands." Lulu finally makes the identification: Peachland. In all of this, the various ideas that Rudy and Lulu exchange lead them through their individual memories. In a process of interactive cueing, they move sequentially toward the retrieval of a memory trace, the existence of which is known to both of them; and it is just possible that, without each other, neither Rudy nor Lulu could have produced the item. This is not the only process of transactive memory. Although we will speak of interactive cueing again, it is just one of a variety of communication processes that operate on knowledge in the dyad. Transactive processes can occur during the intake of information by the dyad, they can occur after information is stored and so modify the stored information, and they can occur during retrieval.

The successful operation of these processes is dependent, however, on the formation of a transactive memory structure - an organizational scheme that connects the knowledge held by each individual to the knowledge held by the other. It is common in theorizing about the thoughts and memories of individuals to posit an organizational scheme that allows the person to connect thoughts with one another - retrieving one when the other is encountered, and so forth. In a dyad, this scheme is complicated somewhat by the fact that the individual memory stores are physically separated. Yet it is perfectly reasonable to say that one partner may know, at least to a degree, what is in the other's memory. Thus, one's memory is "connected" to the other's, and it is possible to consider how information is arranged in the dyadic system as a whole. A transactive memory structure thus can be said to reside in the memories of both individuals - when they are considered as a combined system. 

We should point out here that transactive processes and structures are not exclusively the province of intimate dyads. We can envision these things occurring as well in pairs of people who have just met, or even in groups of people larger than the dyad. At the extreme, one might attribute these processes and organizational capacities to whole societies, and so make transactive memory into a synonym for culture. Our conceptualization stops short of these extensions for two reasons. First, we hesitate to extend these ideas to larger groups because the analysis quickly becomes unwieldy; our framework for understanding transactive memory would need to expand geometrically as additional individuals were added to the system. Second, we refrain from applying this analysis to nonintimate relations for the simple reason that, in such dyads, there is not as much to be remembered. Close dyads share a wealth of information unique to the dyad, and use it to operate as a unit. More distant dyads, in turn, engage in transactive processes only infrequently - and in the case of a first and only encounter, do so only once. Such pairs will thus not have a very rich organizational scheme for information they hold. We find the notion of transactive memory most apt, in sum, for the analysis of cognitive interdependence in intimate dyads.

Our subsequent discussion of transactive memory in this chapter is fashioned to coincide with the process-structure distinction. We begin by considering the processes involved in the everyday operation of transactive memory. Here, we examine the phases of knowledge processing standardly recognized in cognitive psychology - encoding, storage, and retrieval - to determine how they occur in transactive memory. The second general section examines the nature of the organizational structure used for the storage of information in the dyad. The structure of stored information across the two individual memories will be examined, with a view toward determining how this organization impinges on the group's mental operations. The final section concentrates on the role of transactive memory, both process and structure, in the life of the dyad. We consider how such memory may contribute to compatibility or incompatibility in relationships, and how an individual's personal memory may be influenced by membership in a transactive system. 

Transactive Memory Processes 

Communication is the transfer of information. When communication takes place between people, we might say that information is transferred from one memory to another. However, when the dyadic group is conceptualized as having one memory system, interpersonal communication in the dyad comes to mean the transfer of information within memory. We believe that multiple transfers can occur as the dyad encodes information, as it holds information in storage, and as it retrieves information - and that such transfers can make each of these processes somewhat different from its counterpart occurring at the individual level.

Transactive Encoding 

Obviously, dyads do not have their sense organs in common. The physical and social environment thus must be taken in by each person separately. Social theorists have repeatedly noted, though, that an individual's perceptions can be channeled in social ways. Many have observed, for example, that one partner might empathize with another and see the world from the other's "point of view." Alternatively, cognitive constructions of a "group perspective" may be developed by both partners that lend a certain commonality to their intake of information (see Wegner & Giuliano, 1982). These social influences on encoding, however, are best understood as effects on the individual. How does the dyad encode information?

When partners encounter some event and encode it privately in their individual memories, they may discuss it along the way. And though we might commonly think of such a discussion as a "rehash," a mere echo of the original perceived event, there is reason to think that it could be much more. After all, whereas experiencing an event can be accomplished quite passively, discussing an event requires active processing of the information - and the generation of ideas relevant to the event. Several demonstrations of an individual memory phenomenon called the "generation effect" indicate that people will often remember information they have generated better than information they have simply experienced. So, for instance, one might remember the number 37 better if one had been presented with "14 + 23 = ?" than if one had merely been presented with "37." Partners who talk over an event, generating information along the way, might thus come to an encoded verbal representation of the event that supplants their original, individual encoding.

The influence of the generation effect could, of course, take many forms. Ordinarily, it should lead partners to remember their own contributions to dyadic discussions better than the contributions of their partners. This phenomenon has been observed in several studies (e.g., Ross & Sicoly, 1979). But the generation effect could also contribute to one's memory for group-generated information. When a couple observes some event - say, a wedding - they may develop somewhat disparate initial encodings. Each will understand that it was indeed a wedding; but only one may encode the fact that the father of the bride left the reception in a huff; the other might notice instead the odd, cardboard-like flavor of the wedding cake. Their whispered chat during all this could lead them to infer that the bride's father was upset by the strange cake. Because this interpretation was generated by the group, both partners will have thus encoded the group's understanding of the events. Their chat could thus revise history for the group, leaving both with stored memories of the father angry over a sorry cake.

Evidence from another domain of cognitive research leads to a similar point. One of the most powerful determinants of encoding in individual memory is the degree to which the incoming information is semantically elaborated (e.g., Anderson & Reder, 1979). To elaborate incoming information is simply to draw inferences from it and consider its meaning in relation to other information. This is precisely what happens in dyadic communications about events. Partners often talk about things they have experienced as individuals or as a group. They may speak about each other's behavior, about the behavior of others they both know, about the day's events, and so on. In such discussions, it is probable that those particular events or behaviors relevant to the dyad will be discussed at length. They will be tied to other items of knowledge and, in the process, will become more elaborately encoded - and thus more likely to be available for later retrieval. 

To the extent that generative or elaborative processes are effortful, or require careful thinking, their effects could be strengthened yet further. Encoding processes that are effortful for the individual typically lead to enhanced memory. When a couple engages in an argument, cognitive effort may be required for each person to understand what the other is saying and for each to convey a personal point of view. Such effort on the part of both could also be necessary when one partner is merely trying to teach the other something. It is the shared experience of argument, decision-making, or careful analysis that will be remembered more readily when the communication is effortful. After all, couples more frequently remember their "talks" than their routine dinner conversations. 

These transactive encoding processes could conceivably lead a dyad to understand events in highly idiosyncratic and private ways. Their discussions could go far afield, linking events to knowledge that, while strongly relevant to the dyad, is embedded primarily in the dyad's known history or anticipated future. The partners' memories of the encoded events themselves could be changed dramatically by the tenor of their discussions, sometimes to the point of losing touch with the initial realities the partners perceived. To some degree, such departures from originally encoded experience might be corrected by the partners' discussions of events with individuals outside the relationship; such outsiders would serve to introduce a perspective on events that is uninformed of the dyad's concerns, and that therefore might help to modify memory of the events. But many experiences are discussed only within the relationship, and these are thus destined to be encoded in ways that may make them more relevant to the dyad's concerns than to the realities from which they derived.

by Daniel M. Wegner, Toni Giuliano, and Paula T. Hertel, Harvard |  Read more (pdf):
Image via:

[ed. Probably of little interest to most but I find this, and the process of memory retrieval in general, to be fascinating. When I think back on the various experiences and conversations I've had over my lifetime it's not uncommon to settle on the same scenes, arguments, feelings, etc. over and over again to represent what I remember as being reality, or at least an accurate reflection of my personal 'history', when actually they're just a small slice of a larger picture, taken out of context. Want an example? Try talking to an old friend at a class reunion and see what they recall about your experiences together. We can never remember all the details of the thousands of small conversations and experiences we've had - individually, with partners, with others - that in the aggregate have more relevance to reality than we can imagine... or remember.]