Showing posts with label Design. Show all posts

Thursday, August 28, 2025

Tuesday, August 19, 2025

New Crewless Warships

Images: DARPA
"Along with the lightness and sleekness, the systems aboard Defiant are more like those of a deep-space probe, with an emphasis on reliability and redundancy that allows it to operate at sea for up to a year without human intervention. It can even refuel itself autonomously. Where a conventional ship would have technicians aboard for repairs and routine maintenance, Defiant can tolerate wear and tear on its system and can switch to backups as needed.

Another aspect of the design is that it's highly simplified, so it can be manufactured quickly and refitted in any port that can handle yacht, tug, and workboat customers. This means that in the near future, autonomous ships can be deployed in large numbers to act as force multipliers for the US Navy, take over boring routine duties like sub hunting or harbor patrols, and carry out missions in hostile waters without risking human lives."

[ed. Sea drones.]

How Cheaply Could We Build High-Speed Rail?

At the end of April, the Transit Costs Project released a report called How to Build High-Speed Rail on the Northeast Corridor. As the name suggests, the authors had a simple focus: the stretch of the US from DC and Baltimore through Philadelphia to New York and up to Boston, the densest stretch of the country and an ideal location for high-speed rail. How could you actually build it, trains that get you from DC to NYC in two hours, or NYC to Boston in two hours, without breaking the bank?

That last part is pretty important. The authors think you could do it for under $20 billion. That’s a lot of money, but it’s about five times less than the budget Amtrak says it would require. What’s the difference? How is it that when Amtrak gets asked to price out high-speed rail, it gives a quote so much higher?

We brought in Alon Levy, transit guru and the lead author of the report, to answer the question, and to explain a bunch of transit facts to a layman like me. Is this project actually technically feasible? And, if it is, could it actually work politically? (...)

I’m excited for this conversation, largely because although I'm not really a transit nerd, I enjoyed this report from you and your colleagues at the Transit Costs Project. But it's not really written for people like me. I'm hoping we can translate it for a more general audience.

The report was pretty technical. We wrote the original Transit Costs Project report about the construction cost of various urban rail megaprojects. So we were comparing New York and Boston projects with a selection of projects elsewhere: Italian projects, some Istanbul subway and commuter rail tunnels, the Stockholm subway extension, and so on.

Essentially the next step for me was to look at how you would actually do it correctly in the US, instead of talking about other people's failures. That means that the report on the one hand has to go into broad things, like coordination between different agencies and best practices. But also it needs to get into technical things: what speed a train can go on a specific curve of a specific radius at a specific location. That’s the mood whiplash in the report, between very high-level and very low-level.

I think you guys pulled it off very well. Let's get into it —  I'll read a passage from the intro:
“Our proposal's goal is to establish a high-speed rail system on the Northeast Corridor between Boston and Washington. As the Corridor is also used by commuter trains most of the way… the proposal also includes commuter rail modernization [speeding up trains], regularizing service frequency, and… the aim is to use already committed large spending programs to redesign service.”
As a result, you think we could get high-speed rail that brings both the Boston–New York City trip and the New York City–Washington trip under two hours. You'd cut more than a third of the time off both those trips.

And here’s the kicker: you argue that the infrastructure program would total about $12.5 billion, and the new train sets would be under $5 billion. You're looking at a $17–18 billion project. I know that's a big sticker price in the abstract, but it's six to eight times cheaper than the proposals from Amtrak for this same idea. That’s my first question: Why so cheap?


First of all, that $18 billion is on top of money that has already been committed. There are some big-ticket tunnels that are already being built. One of the things that people were watching with the election was if the new administration was going to try to cancel the Gateway Tunnel, but they seem to have no interest in doing so. Transportation Secretary Sean Duffy talks about how there’s a lot of crime on the New York City subway, and how liberals want people to ride public transportation more and to drive less, but I have not seen any attacks on these pre-existing projects. So, as far as I’m concerned, they’re done deals.

The second thing is that along the length of the Northeast Corridor, this investment is not all that large. It’s still less than building a completely new greenfield line. With the Northeast Corridor, most of the line pre-exists; you would not need to build much de novo. The total investment that we’re prescribing in Massachusetts, Rhode Island, New Jersey, Pennsylvania, Delaware, and most of Maryland is essentially something called a track-laying machine.

The Northeast Corridor has this problem: Let’s say that you have a line with a top speed of 125 mph, and the line has six very sharp curves that limit the trains to 80 mph. If those six curves are all within a mile of each other, there’s one point in the middle of the line where you have six 80 mph curves. That couple-mile stretch is 80 mph, while the rest of the line is 125. Now, what happens if these curves are evenly spaced along the line?

You have a way longer commute, right?

Yes. If you have to decelerate to 80 mph and back five times, that’s a lot slower. That’s the problem in the Northeast Corridor: there are faster and slower segments. Massachusetts is faster. Rhode Island is mostly fast. Connecticut is slow. If you have a line that’s slow because you have these restrictions in otherwise fast territory, then you fix them, and you’ve fixed the entire line. The line looks slow, but the amount of work you need to fix it is not that much.
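[ed. The time penalty described here is easy to sketch with basic kinematics. The braking/acceleration rate of 0.5 m/s² below is my illustrative assumption, not a figure from the report:]

```python
# Extra time added by one decelerate-to-80-then-reaccelerate cycle,
# versus covering the same distance at the 125 mph line speed.
MPH = 0.44704  # meters per second per mph

def slowdown_penalty(v_line_mph, v_curve_mph, accel=0.5):
    """Seconds lost per slowdown cycle (constant accel/decel assumed)."""
    v1, v2 = v_line_mph * MPH, v_curve_mph * MPH
    t_change = 2 * (v1 - v2) / accel    # time spent decelerating + accelerating
    d_change = (v1**2 - v2**2) / accel  # distance covered during both changes
    return t_change - d_change / v1     # minus time to cruise that distance

per_cycle = slowdown_penalty(125, 80)
print(f"each slowdown costs ~{per_cycle:.0f} extra seconds")
# Six spread-out curves mean six full cycles; clustered curves share one
# cycle (ignoring the extra time spent at 80 mph between them).
print(f"spread out: ~{6 * per_cycle:.0f} s of transition penalty; clustered: ~{per_cycle:.0f} s")
```

This only counts the speed transitions; the curves themselves are run at 80 mph in either layout.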

The Northeast Corridor (red is stretches with commuter rail)

Most of the reason the Northeast Corridor is slow is because of the sharp curves. There are other fixes that can be done, but the difficult stuff is fixing the sharp curves. The area with the sharpest curves is between New Haven and southern Rhode Island. The curves essentially start widening around the point where you cross between Connecticut and Rhode Island, and shortly thereafter, in Rhode Island, it transitions into the fastest part of the Corridor.

In southeast Connecticut, the curves are sharp, and there’s no way to fix any of them. This is also the lowest-density part of the entire Northeast: I-95, for example, only has four lanes there, while the rest of the way, it has at least six. I-95 there happens to be rather straight, so you can build a bypass there. The cost of that bypass is pretty substantial, but that’s still only about one-sixth of the corridor. You fix that, and I’m not saying you’ve fixed everything, but you’ve saved half an hour.

Your proposal is not the cheapest possible high-speed rail line, but I want to put it in context here. In 2021, there was a big proposal rolled out by the Northeast Corridor Commission, which was a consortium of states, transit providers, New Jersey Transit, Amtrak, and federal transportation agencies. Everybody got in on this big Connect Northeast Corridor (Connect NEC) plan, and the top line number was $117 billion, seven times your proposal. And this is in 2021 dollars.

They didn’t think that they could do Boston to New York and New York to DC in two hours each, either. There are two different reasons for their high price tags. The first reason is that they included a lot of things that are just plain stupid.

For example, theirs involved a lot of work on Penn Station in New York. Some of it is the Gateway Project, so that money is committed already, but they think that they need a lot beyond the tunnel. They have turned Gateway into a $40 or $50 billion project. I’m not going to nitpick the Gateway spending, although I’m pretty sure it could be done for much cheaper, but they think they need another $7 billion to rebuild Penn Station, and another $16 billion to add more tracks.

And you don’t think that’s necessary.

No. We ran some simulations on the tracks, and it turns out that the Penn Station that currently exists is good enough — with one asterisk — even if you ran twice as much service. You can’t do that right now because, between New Jersey and New York Penn Station, there is one tunnel. It has two tracks, one in each direction. They run 24–25 trains per hour at the peak. This is more or less the best that can be done on this kind of infrastructure. (...)
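[ed. A quick sanity check on that ceiling, assuming one track per direction:]

```python
# Headway implied by the quoted peak throughput of ~24 trains per hour
# through a single two-track tunnel (one track per direction).
trains_per_hour = 24
headway_s = 3600 / trains_per_hour
print(f"one train every {headway_s:.0f} seconds in each direction")
```

At roughly 150-second headways, signaling, dwell times, and schedule recovery leave little slack, which is why doubling service depends on the extra pair of tracks the Gateway tunnel adds rather than on more station platforms.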

Unfortunately, they think Penn Station itself can’t handle the doubled frequency and would need a lot of additional work. Amtrak thinks that it needs to add more tracks by condemning an entire Midtown Manhattan block south of Penn Station called Block 780. They’re not sure how many tracks: I’ve seen between 7 and 12.

To be clear, the number of additional tracks they need is 0, essentially because they’re very bad at operations.

Well, let’s talk about operations. You say one way to drive down the cost of high-speed rail is just better-coordinated operations for all the trains in the Corridor. The idea is that often fast trains are waiting for slow trains, and in other places, for procedural reasons, every train has to move at the speed of the slowest train that moves on that segment.

What’s the philosophical difference between how you and the rail managers currently approach the Corridor?

The philosophical difference is coordinating infrastructure and operations. Often you also coordinate which trainsets you’re going to buy. This is why the proposal combines policy recommendations with extremely low-level work, including timetables to a precision of less than a minute. The point of infrastructure is to enable a service. Unless you are a very specific kind of infrastructure nerd, when you ride a train, you don’t care about the top speed, you don’t care about the infrastructure. You care about the timetable. The total trip time matters. Nobody rides a TGV to admire all the bridges they built on the Rhone.

I think some people do!

I doubt it. I suspect that the train goes too fast to be a good vantage point.

But as I said, you need 48 trains per hour worth of capacity between New Jersey and Manhattan. You need to start with things like the throughput you need, how much you need to run on each branch, when each branch runs, how they fit together. This constrains so much of your planning, because you need the rail junctions to be set up so that the trains don’t run into each other. You need to set up the interlockings at the major train stations in the same way. When you have fast and slow trains in the same corridor, you need to write timetables so that the fast trains will not be unduly delayed.

This all needs to happen before you commit to any infrastructure. The problem is that Connect NEC plans (Connect 2035, 2037) are not following that philosophy. They are following another philosophy: Each agency hates the other agencies. Amtrak and the commuter rail agencies have a mutually abusive relationship. There’s a lot of abuse from Amtrak toward various commuter rail operators, and a lot of abuse by certain commuter rail operators, especially Metro-North and Connecticut DOT, against Amtrak. If you ask each agency what they want, they’ll say, “To get the others out of our hair.” They often want additional tracks that are not necessary if you just write a timetable.

To be clear, they want extra tracks so that they don’t have to interact with each other?

Exactly. And this is why Amtrak, the commuter railways, and the Regional Plan Association keep saying that the only way to have high-speed rail in the Northeast Corridor is to have an entirely separate right of way for Amtrak, concluding with its own dedicated pair of tunnels to Penn Station in addition to Gateway.

They’re talking about six tracks, plus two tracks from Penn Station to Queens and the Bronx, with even more urban tunneling. The point is that you don’t need any of that. Compromising a little on speed, the trip times I’m promising are a bit less than four hours from Boston to Washington. That’s roughly 180 kilometers an hour [~110 mph]. To be clear, this would be the slowest high-speed line in France, Spain, or Japan, let alone China. It would probably be even with the fastest in Germany and South Korea. It’s not Chinese speed. For example, Rep Moulton was talking about high-speed rail a couple of months ago, and said, “This is America. We need to be faster. Why not go 200, 250 mph?” He was talking about cranking up the top speed. When we were coming up with this report, we were constantly trying to identify how much time a project would save, and often we’d say, “This curve fix will speed up the trains by 20 seconds, but for way too much hassle and money.” The additional minutes might be too expensive. Twenty seconds don’t have an infinite worth. (...)
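[ed. The quoted average speed checks out. The ~735 km corridor length is my assumption for the calculation, not a figure from the interview:]

```python
# Implied average speed for Boston-Washington in "a bit less than four hours".
route_km = 735      # approximate Northeast Corridor length (assumed)
trip_hours = 4.0
avg_kmh = route_km / trip_hours
avg_mph = avg_kmh / 1.609344
print(f"~{avg_kmh:.0f} km/h (~{avg_mph:.0f} mph) average, matching the quoted figure")
```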

I want to go back to something you said earlier. You were contrasting the aesthetic of this proposal with that of Representative Moulton, who wants our top speeds to be faster than Chinese top speeds. How do you get voters to care about — and I mean this descriptively — kinda boring stuff like cant angles?

Voters are not going to care about the cant angle efficiency on a curve. They’re not going to care about approach speed. However, I do think that they will care if you tell them, “Here's the new timetable for you as commuters. It looks weird, but your commute from Westchester or Fairfield County to Manhattan will be 20 minutes faster.”

With a lot of these reports, the issue is often that there are political trade-offs. The idea of what you should be running rail service for, and who you should be running it for, drifted in the middle of the 20th century.

But also, the United States is so far from the technological frontier that even the very basics of German or Swiss rail planning, like triangle planning of rolling stock, infrastructure, and operations, are simply not done. Just doing that would mean a massive improvement in everything: reliability, frequency, speed, even passenger comfort.

The main rail technology conference in the world, InnoTrans, is held in Berlin every two years. I hear things in on-the-floor interviews with vendors that people in the United States are just completely unaware of.

by Santi Ruiz and Alon Levy, Statecraft |  Read more:
Image: uncredited
[ed. Fascinating stuff! (I think, anyway). And, for something completely different, see: How to Be a Good Intelligence Analyst (Statecraft):]

***
I think the biggest misconception about the community and the CIA in particular is that it's a big organization. It really isn't. When you think about overstuffed bureaucracies with layers and layers, you're describing other organizations, not the CIA. It is a very small outfit relative to everybody else in the community. (...)

What kinds of lessons were consistently learned in the Lessons Learned program?

There's an argument that the lessons learned are more accurately described as lessons collected or lessons archived, rather than learned.

Because learning institutionally is hard?

Learning institutionally is hard. Not only is it hard to do, but it's also hard to measure and to affect. But, if nothing else, practitioners became more thoughtful about the profession of intelligence. To me, that was really important. The CIA is well represented by lots of fiction, from Archer to Jason Bourne. It's always good for the brand. Even if we look nefarious, it scares our adversaries. But it's super far removed from reality. Reality in intelligence looks about as dull as reality in general. Being a really good financial or business analyst, any of those kinds of tasks, they're all working a certain part of your brain that you can either train and improve, or ignore and just hope for the best.

I don't think any of those are dull, but I take your point about perception vs. reality.

I don't mean to suggest those are dull, but generally speaking, they don't run around killing assassins. It's a lot less of that.

Monday, August 11, 2025

Disruptor 16: Carbon Robotics

Seattle-based Carbon Robotics offers an AI-powered laser weeder that attaches to farmers’ tractors and looks like a space-age combine, except that it weeds instead of harvests.

Trained on a database of 40 million images, the AI-powered agtech system passes over rows of crops, with machine learning enabling it to recognize weeds and kill them at the base with a laser, eliminating the need for both manual labor and herbicides. The company says it has destroyed more than 15 billion weeds across more than 100 crops.

Carbon Robotics says its approach to weeding increases yields, quality and consistency, and helps preserve topsoil. The latter is a growing global concern, as experts estimate most of the world’s topsoil has been degraded to the point that its agriculturally usable life is measured in decades. (...)

The cost of agtech upgrades, and technology that remains unproven compared to conventional farming approaches, are issues. Laser weeders can cost over $1 million, based on public reports, but farmers who have used the technology have endorsed it.

Recently, Carbon Robotics debuted the LaserWeeder G2, a smaller, less expensive version of its technology, though still a significant investment for many farmers in a business made inherently risky by weather and the volatility of global commodities markets. (...)

Carbon Robotics is growing its manufacturing in eastern Washington State, with a recent 70% headcount increase to about 200, and it ultimately has plans to grow its tech applications beyond farming. “The real driver is having AI systems doing things in the real world. Will Carbon Robotics always be in the ag industry? We’ll probably do things well outside it,” said Mikesell in an interview with GeekWire.

by Elizabeth MacBride, CNBC | Read more:
Image: Igor Gnedo, Antonina Lepore & Adrianne Paerels
[ed. Weedtech. From CNBC's Disruptor 50 list. Number one is, of course, Anduril (drones, surveillance, other AI-enabled weaponry - defense tech sector). We're screwed.]

Thursday, August 7, 2025

What to Expect When You’re Expecting … GPT-5

For years we have been hearing, endlessly, about how GPT-5 was going to land imminently, and those predictions turned out to be wrong so often that a year ago I wrote a post about it, called GPT-5…now arriving Gate 8, Gate 9, Gate 10, not to mention a couple of April Fool’s jokes. But this time I think GPT-5 really is about to drop, no foolin’.

GPT-5 will surely be better, a lot better than GPT-4. I guarantee that minds will be blown. When it comes out, it will totally eclipse GPT-4. Nonetheless, I have 7 darker predictions.
1. GPT-5 will still, like its predecessors, be a bull in a china shop, reckless and hard to control. It will still make a significant number of shake-your-head stupid errors, in ways that are hard to fully predict. It will often do what you want, sometimes not, and it will remain difficult to anticipate which in advance.

2. Reasoning about the physical, psychological, and mathematical world will still be unreliable. GPT-5 will solve many of the individual specific items used in prior benchmarks, but still get tripped up, particularly in longer and more complex scenarios.

3. Fluent hallucinations will still be common, and easily induced, continuing (and in fact escalating) the risk of large language models being used as a tool for creating plausible-sounding yet false misinformation. Guardrails (a la ChatGPT) may be in place, but the guardrails will teeter between being too weak (beaten by “jailbreaks”) and too strong (rejecting some perfectly reasonable requests).

4. Its natural language output still won’t be something that one can reliably hook up to downstream programs; it won’t be something, for example, that you can simply and directly hook up to a database or virtual assistant, with predictable results. GPT-5 will not have reliable models of the things that it talks about that are accessible to external programmers in a way that reliably feeds downstream processes. People building things like virtual assistants and agents will find that they cannot reliably enough map user language onto user intentions.

5. GPT-5 by itself won’t be a general-purpose artificial general intelligence capable of taking on arbitrary tasks. Without external aids it won’t be able to beat Meta’s Cicero in Diplomacy; it won’t be able to drive a car reliably; it won’t be able to reliably guide a robot like Optimus to be anything like as versatile as Rosie the Robot. It will remain a turbocharged pastiche generator, a fine tool for brainstorming and for first drafts, but not a trustworthy general intelligence.

6. “Alignment” between what humans want and what machines do will continue to be a critical, unsolved problem. The system will still not be able to restrict its output to reliably following a shared set of human values around helpfulness, harmlessness, and truthfulness. Examples of concealed bias will be discovered within days or months. Some of its advice will be head-scratchingly bad.

7. When AGI (artificial general intelligence) comes, large language models like GPT-5 may be seen in hindsight as part of the eventual solution, but only as part of the solution. “Scaling” alone (building bigger and bigger models until they absorb the entire internet) will prove useful, but only to a point. Trustworthy, general artificial intelligence, aligned with human values, will come, when it does, from systems that are more structured, with more built-in knowledge, and will incorporate at least some degree of explicit tools for reasoning and planning, as well as explicit knowledge, that are lacking in systems like GPT. Within a decade, maybe much less, the focus of AI will move from a pure focus on scaling large language models to integrating them with a wide range of other techniques. In retrospectives written in 2043, intellectual historians will conclude that there was an initial overemphasis on large language models, and a gradual but critical shift of the pendulum back to more structured systems with deeper comprehension.
If all seven predictions prove correct, I hope that the field will finally realize that it is time to move on.

Shiny things are always fun to play with, and I fully expect GPT-5 to be the shiniest so far, but that doesn’t mean that it is a critical step on the optimal path to AI that we can trust. For that, we will, I predict, need genuinely new architectures that incorporate explicit knowledge and world models at their very core. [ed. caution - spoiler]
***
Oh, one more thing. I am not usually in the habit of self-plagiarism, but in the interest of full disclosure, this essay was different. Virtually every word, except the first paragraph and this last section, was deliberately taken from an earlier essay that I posted on Christmas Day 2022, called What to expect when you are expecting … GPT-4. I searched-and-replaced GPT-4 with GPT-5, trimmed a few lines, and here we are.

by Gary Marcus, On AI |  Read more:
Image: WickerViper23/Stable Diffusion

Wednesday, August 6, 2025

Bridging the Gap: Neurosymbolic AI

How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI. Neurosymbolic AI is quietly winning. Here’s what that means – and why it took so long

Machine learning, the branch of AI concerned with tuning algorithms from data, is an amazing field that has changed the world — and will continue doing so. But it is also filled with closed-minded egotists with too much money, and too much power.

This is a story, in three acts, spanning four decades, about how many of them tried, ultimately unsuccessfully, to keep a good idea, neurosymbolic AI, down—only to accidentally vindicate that idea in the end.

For those who are unfamiliar with the field’s history, or who think it began only in 2012, AI has been around for many decades, split, almost since its very beginning, into two different traditions.

One is the neural network or “connectionist” tradition which goes back to the 1940s and 1950s, first developed by Frank Rosenblatt, and popularized, advanced and revived by Geoffrey Hinton, Yann LeCun, and Yoshua Bengio (along with many others, including most prominently, Juergen Schmidhuber who rightly feels that his work has been under-credited), and brought to current form by OpenAI and Google. Such systems are statistical, very loosely inspired by certain aspects of the brain (viz. the “nodes” in neural networks are meant to be abstractions of neurons), and typically trained on large-scale data. Large Language Models (LLMs) grew out of that tradition.

The other is the symbol-manipulation tradition, with roots going back to Bertrand Russell and Gottlob Frege, and John von Neumann and Alan Turing, and the original godfathers of AI, Herb Simon, Marvin Minsky, and John McCarthy, and even Hinton’s great-great-great-grandfather George Boole. In this approach, symbols and variables stand for abstractions; mathematical and logical functions are core. Systems generally represent knowledge explicitly, often in databases, and typically make extensive use of (are written entirely in) classic computer programming languages. All of the world’s software relies on it.

For thirty years, I have been arguing for a reconciliation between the two, neurosymbolic AI. The core notion has always been that the two main strands of AI—neural networks and symbolic manipulation—complement each other, with different strengths and weaknesses. In my view, neither neural networks nor classical AI can really stand on their own. We must find ways to bring them together.

After a thirty-year journey, I believe that neurosymbolic AI’s moment has finally arrived, in part from an unlikely place.
***
In her bestseller Empire of AI, Karen Hao crisply sets the stage.

She begins by neatly distilling the scientific tension.
Hinton and Sutskever continued [after their seminal 2012 article on deep learning] to staunchly champion deep learning. Its flaws, they argued, are not inherent to the approach itself. Rather they are the artifacts of imperfect neural-network design as well as limited training data and compute. Some day with enough of both, fed into even better neural networks, deep learning models should be able to completely shed the aforementioned problems. "The human brain has about 100 trillion parameters, or synapses," Hinton told me in 2020.

"What we now call a really big model, like GPT-3, has 175 billion. It's a thousand times smaller than the brain."

"Deep learning is going to be able to do everything," he said.

Their modern-day nemesis was Gary Marcus, a professor emeritus of psychology and neural science at New York University, who would testify in Congress next to Sam Altman in May 2023. Four years earlier, Marcus coauthored a book called Rebooting AI, asserting that these issues were inherent to deep learning. Forever stuck in the realm of correlations, neural networks would never, with any amount of data or compute, be able to understand causal relationships (why things are the way they are) and thus perform causal reasoning. This critical part of human cognition is why humans need only learn the rules of the road in one city to be able to drive proficiently in many others, Marcus argued.

Tesla's Autopilot, by contrast, can log billions of miles of driving data and still crash when encountering unfamiliar scenarios or be fooled with a few strategically placed stickers. Marcus advocated instead for combining connectionism and symbolism, a strain of research known as neuro-symbolic AI. Expert systems can be programmed to understand causal relationships and excel at reasoning, shoring up the shortcomings of deep learning. Deep learning can rapidly update the system with data or represent things that are difficult to codify in rules, plugging the gaps of expert systems. "We actually need both approaches," Marcus told me.
She goes on to point out that the field has become an intellectual monoculture, with the neurosymbolic approach largely abandoned, and massive funding going to the pure connectionist (neural network) approach:
Despite the heated scientific conflict, however, the funding for AI development has continued to accelerate almost exclusively in the pure connectionist direction. Whether or not Marcus is right about the potential of neurosymbolic AI is beside the point; the bigger root issue has been the whittling down and weakening of a scientific environment for robustly exploring that possibility and other alternatives to deep learning.

For Hinton, Sutskever, and Marcus, the tight relationship between corporate funding and AI development also affected their own careers.
Hao then captures OpenAI’s sophomoric attitude towards fair scientific criticism:
Over the years, Marcus would become one of the biggest critics of OpenAI, writing detailed takedowns of its research and jeering its missteps on social media. Employees created an emoji of him on the company Slack to lift up morale after his denouncements and to otherwise use as a punch line. In March 2022, Marcus wrote a piece for Nautilus titled “Deep Learning Is Hitting a Wall,” repeating his argument that OpenAI's all-in approach to deep learning would lead it to fall short of true AI advancements. A month later, OpenAI released DALL-E 2 to immense fanfare, and Brockman cheekily tweeted a DALL-E 2-generated image using the prompt “deep learning hitting a wall.” The following day, Altman followed with another tweet: “Give me the confidence of a mediocre deep learning skeptic.” Many OpenAI employees relished the chance to finally get back at Marcus.
But then again, as the saying goes, he who laughs last, laughs loudest.
***
For all the effort that OpenAI and other leaders of deep learning, such as Geoffrey Hinton and Yann LeCun, have put into running down neurosymbolic AI, and me personally, over the last decade, the cutting edge is finally, if quietly and without public acknowledgement, tilting towards neurosymbolic AI.

This essay explains what neurosymbolic AI is, why you should believe it, how deep learning advocates long fought against it, and how in 2025, OpenAI and xAI have accidentally vindicated it.

And it is about why, in 2025, neurosymbolic AI has emerged as the team to beat.

It is also an essay about sociology.
***
The essential premise of neurosymbolic AI is this: the two most common approaches to AI, neural networks and classical symbolic AI, have complementary strengths and weaknesses. Neural networks are good at learning but weak at generalization; symbolic systems are good at generalization, but not at learning.

by Gary Marcus, On AI |  Read more:
Image: via

Friday, August 1, 2025

Design Your Own Rug!

For my wedding anniversary, I designed, and had hand-woven in Afghanistan, a rug for my microbiologist wife. The rug mixes traditional Afghan designs with scientific elements, including Bunsen burners, test tubes, and bacterial petri dishes.


I started with several AI-generated designs, such as the one shown below, to give the weavers an idea of what I was looking for. Some of the AI elements were muddled and overly complex, so we developed a blueprint over a few iterations. The finished rug was very faithful to the blueprint.


I am very pleased with the final product. The wool is of high quality, deep and luxurious, and the design is exactly what I intended. My wife loves the rug and will hang it in her office. The price was very reasonable, under $1,000. I also like that I employed weavers in a small village in northern Afghanistan. The whole process took about six months.

You can develop your own custom rug from Afghanu Rugs. Tell them Alex sent you. Of course, they also have many beautiful traditional designs. You can even order my design should you so desire!

by Alex Tabarrok, Marginal Revolution | Read more:
Images: the author

Monday, July 28, 2025

Elon’s Edsel

Tesla Cybertruck Is The Auto Industry’s Biggest Flop In Decades

The list of famous auto industry flops is long and storied, topped by stinkers like Ford’s Edsel and exploding Pinto and General Motors’s unsightly Pontiac Aztek crossover SUV. Even John Delorean’s sleek, stainless steel DMC-12, iconic from its role in the “Back To The Future” films, was a sales dud that drove the company to bankruptcy.

Elon Musk’s pet project, the dumpster-driving Tesla Cybertruck, now tops that list.

After a little over a year on the market, sales of the 6,600-pound vehicle, priced from $82,000, are laughably below what Musk predicted. Its lousy reputation for quality–with eight recalls in the past 13 months, the latest for body panels that fall off–and polarizing look made it a punchline for comedians. Unlike past auto flops that just looked ridiculous or sold badly, Musk’s truck is also a focal point for global Tesla protests spurred by the billionaire’s job-slashing DOGE role and MAGA politics.

“It’s right up there with Edsel,” said Eric Noble, president of consultancy CARLAB and a professor at ArtCenter College of Design in Pasadena, California (Tesla design chief Franz von Holzhausen, who styled Cybertruck for Musk, is a graduate of its famed transportation design program). “It’s a huge swing and a huge miss.”

Judged solely on sales, Musk’s Cybertruck is actually doing a lot worse than Edsel, a name that’s become synonymous with a disastrous product misfire. Ford hoped to sell 200,000 Edsels a year when it hit the market in 1958, but managed just 63,000. Sales plunged in 1959 and the brand was dumped in 1960. Musk predicted that Cybertruck might see 250,000 annual sales. Tesla sold just under 40,000 in 2024, its first full year. There’s no sign that volume is rising this year, with sales trending lower in January and February, according to Cox Automotive.

And Tesla’s overall sales are plummeting this year, with deliveries tumbling 13% in the first quarter to 337,000 units, well below consensus expectations of 408,000. The company did not break out Cybertruck sales, which are lumped in with those of the Model S and Model X, its priciest segment. But it’s clear Cybertruck sales were hurt this quarter by the need to make recall-related fixes, Ben Kallo, an equity analyst for Baird, said in a research note. Tesla didn’t immediately respond to a request for comment.

The quarterly slowdown underscores the fact that when it comes to the Cybertruck, results are nowhere near the billionaire entrepreneur’s carnival barker claims.

“Demand is off the charts,” he crowed during a results call in November 2023, just before the first units started shipping to customers. “We have over 1 million people who have reserved the car.”

In anticipation of high sales, Tesla even modified its Austin Gigafactory so it could produce up to 250,000 Cybertrucks a year, capacity investments that aren’t likely to be recouped.

“They didn't just say they wanted to sell a lot. They capacitized to sell a lot,” said industry researcher Glenn Mercer, who leads Cleveland-based advisory firm GM Automotive. But the assumption of massive demand has proven foolhardy. And it failed to account for self-inflicted wounds that further stymied sales. Turns out the elephantine Cybertruck is either too large or non-compliant with some countries’ pedestrian safety rules, so there’s little opportunity to boost sales with exports.

“They haven’t sold a lot and it’s unlikely in this case that overseas markets can save them, even China that’s been huge for Tesla cars,” Mercer said. “It’s really just for this market.”

More than a decade before Cybertruck went into production, Musk hinted that Tesla would eventually do some kind of electric pickup. When he unveiled his design to the world for the first time, Musk was clear that he did not want a conventional aesthetic or even something that played with pickup looks a bit but was still familiar, the approach Rivian took with its R1T pickup.

“Pickup trucks have been the same for 100 years,” and Cybertruck “doesn’t look like anything else,” said Musk, who earlier that month had proudly told an audience at a conference for space entrepreneurs, “I do zero market research whatsoever.”

That would be an apt tagline for Musk’s preposterous pickup. “The spectacular failure of Cybertruck was a failure of empathy,” said CARLAB’s Noble, whose company helps carmakers develop products based on consumer research. “Everything from the bed configuration to the cab configuration to its performance and all sorts of pickup truck duty-cycle issues, it’s just not empathetic to a pickup truck buyer.”

Cybertruck’s distinctive look resulted from two key forces, said a person familiar with the development process, who asked not to be identified because the information isn’t public. One was Musk’s passion for sci-fi designs. The other was an early decision to create a vehicle that didn’t need to be painted.

If Tesla opted not to paint the trucks, it wouldn’t need to install a new $200 million paintshop, a big potential cost savings. And it wouldn’t have to worry about EPA scrutiny from the harmful emissions and runoff those facilities often produce.

Ultimately, Musk opted for a stainless steel exterior, the same choice Delorean made for his ill-fated sports car four decades earlier. But because Musk isn’t a production engineer, he may not have fully appreciated the challenges it presents versus aluminum or composite materials, the person said. Aside from the fact that stainless steel shows handprints–a common gripe about kitchen appliances–it’s hard to bend and likes to snap back to its original shape, one of the reasons there have been problems with Cybertruck body panels.

“This is where I think they misconstrued the tradeoff,” Mercer said. “They drooled over not spending $200 million on a paint shop, but probably spent that much trying to get the stainless steel to work.” 

by Alan Ohnsman, Forbes | Read more:
Image: Fernando Capeto for Forbes; Photos by Andrew Harnik/Getty Images and Justin Sullivan/Getty Images

Sunday, July 20, 2025

Land Rover
via: here/here
[ed. Iconic vs. Not So Iconic: Tesla's Cybertruck (designed on a napkin?). Starting at $99,990.]

Thursday, July 17, 2025

Optical Glass House, Hiroshima Japan

NAP Architects has designed Optical Glass House located in Hiroshima, Japan.

from NAP Architects:
This house is sited among tall buildings in downtown Hiroshima, overlooking a street with many passing cars and trams. To obtain privacy and tranquility in these surroundings, we placed a garden and optical glass façade on the street side of the house.

The garden is visible from all rooms, and the serene soundless scenery of the passing cars and trams imparts richness to life in the house. Sunlight from the east, refracting through the glass, creates beautiful light patterns.

Rain striking the water-basin skylight manifests water patterns on the entrance floor. Filtered light through the garden trees flickers on the living room floor, and a super lightweight curtain of sputter-coated metal dances in the wind.

Although located downtown in a city, the house enables residents to enjoy the changing light and city moods, as the day passes, and live in awareness of the changing seasons.

Optical Glass Façade
A façade of some 6,000 pure-glass blocks (50mm x 235mm x 50mm) was employed. The pure-glass blocks, with their large mass-per-unit area, effectively shut out sound and enable the creation of an open, clearly articulated garden that admits the city scenery.

To realize such a façade, glass casting was employed to produce glass of extremely high transparency from borosilicate, the raw material for optical glass.

The casting process was exceedingly difficult, for it required both slow cooling to remove residual stress from within the glass, and high dimensional accuracy.

Even then, however, the glass retained micro-level surface asperities, but we actively welcomed this effect, for it would produce unexpected optical illusions in the interior space.

Waterfall
So large was the 8.6m x 8.6m façade, it could not stand independently if constructed by laying rows of glass blocks a mere 50mm deep. We therefore punctured the glass blocks with holes and strung them on 75 stainless steel bolts suspended from the beam above the façade.

Such a structure would be vulnerable to lateral stress, however, so along with the glass blocks, we also strung on stainless steel flat bars (40mm x 4mm) at 10 centimeter intervals.

The flat bar is seated within the 50mm-thick glass block to render it invisible, and thus a uniform 6mm sealing joint between the glass blocks was achieved. The result: a transparent façade when seen from either the garden or the street.

The façade appears like a waterfall flowing downward, scattering light and filling the air with freshness.

Captions
The glass block façade weighs around 13 tons. The supporting beam, if constructed of concrete, would therefore be of massive size. Employing steel frame reinforced concrete, we pre-tensioned the steel beam and gave it an upward camber.

Then, after giving it the load of the façade, we cast concrete around the beam and, in this way, minimized its size.

by Karmatrends |  Read more:
Images: NAP Architects
[ed. See also: Optical Glass House, Hiroshima, Japan (Architectural Review).]

Monday, July 14, 2025

Apple in China

Apple Used China to Make a Profit. What China Got in Return Is Scarier.

A little more than a decade ago, foreign journalists living in Beijing, including myself, met for a long chat with a top Chinese diplomat. Those were different days, when high-ranking Chinese officials were still meeting with members of the Western press corps. The diplomat whom we met was charming, funny, fluent in English. She also had the latest iPhone in front of her on the table.

I noticed the Apple gadget because at the time, Chinese state news media were unleashing invectives on the Cupertino, Calif.-based company for supposedly cheating Chinese consumers. (It wasn’t true.) There were rumors circulating that Chinese government officials were being told not to flaunt American status symbols. The diplomat’s accouterment proved that wrong.

At the time, one could make the argument that China’s economic modernization was being accompanied by a parallel, if somewhat more laggardly, political reform. But the advent in 2012 of Xi Jinping, the Chinese leader who has consolidated power and re-established the primacy of the Chinese Communist Party, has shattered those hopes. And, as Patrick McGee makes devastatingly clear in his smart and comprehensive “Apple in China,” the American company’s decision under Tim Cook, the current C.E.O., to manufacture about 90 percent of its products in China has created an existential vulnerability not just for Apple, but for the United States — nurturing the conditions for Chinese technology to outpace American innovation.

McGee, who was the lead Apple reporter for The Financial Times and previously covered Asian markets from Hong Kong, takes what we instinctively know — “how Apple used China as a base from which to become the world’s most valuable company, and in doing so, bound its future inextricably to a ruthless authoritarian state” — and comes up with a startling conclusion, backed by meticulous reporting: “that China wouldn’t be China today without Apple.”

Apple says that it has trained more than 28 million workers in China since 2008, which McGee notes is larger than the entire labor force of California. The company’s annual investment in China — not even counting the value of hardware, “which would more than double the figure,” McGee writes — exceeds the total amount the Biden administration dedicated for a “once-in-a-generation” initiative to boost American computer chip production.

“This rapid consolidation reflects a transfer of technology and know-how so consequential,” McGee writes, “as to constitute a geopolitical event, like the fall of the Berlin Wall.”

McGee has a journalist’s knack for developing scenes with a few curated details, and he organizes his narrative chronologically, starting with Apple’s origins as a renegade upstart under Steve Jobs in the 1970s and ’80s. After Jobs’s firing and rehiring comes a corporate mind shift in which a vertically integrated firm falls for the allure of contract manufacturing, sending its engineers abroad to train low-paid workers in how to churn out ever more complicated electronics.

We only really get to Apple in China about 90 pages into the book, and that China, in the mid- to late 1990s, was mainly attractive because of what one China scholar called “low wages, low welfare and low human rights.” McGee relates how one Apple engineer, visiting suppliers in the southern Chinese manufacturing center of Shenzhen, was horrified that there were no elevators in the “slapdash” facility, and that the stairs were built with troubling irregularity: with, say, 12 steps (of varying heights) between the first and second floors, then 18 to the next, then 16, then 24.

But China at the turn of the millennium was in the process of joining the World Trade Organization, and its leaders were banking on an export-led economy that would learn from foreign investors. Starting in the 2000s the Taiwanese mega-supplier Foxconn constructed entire settlements for Chinese workers building Apple electronics. First up on the new assembly lines were iMacs that were produced by what became known as “China speed.”

Less than 15 years after Chinese workers began making Apple products en masse, Chinese consumers were buying them en masse, too. Covering China at the time, I chafed at the popular narrative that reduced Apple’s presence in China to a tale of downtrodden workers at Foxconn and other suppliers. Yes, there were nets outside factory dorms to prevent suicides; and wages remained low. Even Apple admitted to alarming labor abuses in its Chinese supply chain.

But that was only half the story. The iPhone in China signified success, an individualistic, American-accented flavor that seemed to delight both veteran diplomats and Foxconn workers I got to know in southwest China. Those of us who had lived in China for years could see that life was getting freer and richer for most Chinese. By the mid-2010s, it was the United States that seemed behind in terms of integrating apps into daily life. In China, at least in the big cities, we were already living in the tech future. (...)

In 2015, Apple was the largest corporate investor in China, to the tune of about $55 billion a year, according to internal documents McGee obtained for this book. (Cook himself told the Chinese media that the company had created nearly five million jobs there: “I’m not sure there are too many companies, domestic or foreign, who can say that.”) At the same time, Xi laid out “Made in China 2025,” his blueprint for achieving technological self-sufficiency in the next decade, dependent on Apple being what McGee calls “a mass enabler of ‘Indigenous innovation.’”

“As Apple taught the supply chain how to perfect multi-touch glass and make the thousand components within the iPhone,” he writes, “Apple’s suppliers took what they knew and offered it to homegrown companies led by Huawei, Xiaomi, Vivo and Oppo.” Today, some of these premium products come with specs that are increasingly ahead of American design, and have outsold Apple in many major markets.

by Hannah Beech, NY Times | Read more:
Image: Wang Zhao/Agence France-Presse — Getty Images
[ed. See also: China’s Rise, America’s Dysfunction, and the Need for Cooperation (Current Affairs):]

"Well, you’re absolutely right about that, but I can fully understand why Donald Trump wants to try and improve the livelihoods of the bottom 50 percent of Americans. I think that’s a noble goal that he has. I can understand why he wants to make American industries more competitive and re-industrialize America. That’s also an understandable goal. But I think he will find that the best way to achieve those goals is actually to work with the rest of the world. And one thing I’ve learned after studying geopolitics for 55 years is that you’ve got to be cold and calculating if you want to succeed in geopolitics, and if you’re emotional, then you’re at a major disadvantage.

So, for example, how did China become so wealthy so quickly? What they did was to work closely with the United States. Even though, technically, during part of the Cold War the U.S. was an adversary, China worked with the United States to grow its economy. And I think that’s one thing that is taboo in the United States, that actually the best way for the United States to regenerate its economic growth and make it grow faster is not to try and bring down China, but to work with China. Just as in the time when you were worried about Japanese cars taking over the United States, what did you do? You have voluntary export restraints. You encourage the Japanese to set up factories—Toyota factories, Honda factories—in the United States. The same thing can be done with China. It can only be done if you are rational and calculating in your moves and not emotional and say, oh, no, we can never work with China. Why can’t you work with China? If working with China is going to bring benefits to the American people, why not work with them? Because at the end of the day, it’s very clear that all efforts to stop the rise of China by the United States will fail. You cannot stop a 4,000-year-old civilization that has its own civilizational cycles, and as it is rising, depriving them of this technology or that technology is not going to stop the rise of China."

via:

Wednesday, July 9, 2025

What is Downforce?

Each minute exterior detail on top-tier consumer performance cars like a McLaren 620R and professional race cars like an IndyCar or Formula 1 car is designed to make mechanical physics work to the driver’s advantage. Every millimeter of bodywork makes a difference in how the vehicle drives and performs, and the car’s relationship to the air it’s cutting through is paramount. A crucial part of this relationship is downforce, which can be harnessed and applied by aerodynamic parts throughout the car’s shape. The science of downforce can get fairly deep, but we’re here to give an overview of what it means and a breakdown of why it’s important to driving execution.

To define downforce with just a couple of words, it is vertical load created by a vehicle’s aerodynamic parts as it’s in motion. To boil it down even further, a car’s exterior components split, route, and direct airflow in a way that pushes the vehicle down and increases traction and stability. Front splitters, canards (also known as dive planes), rear spoilers, front spoilers, those massive adjustable air foils that Chaparral affixed to their badass Can Am race cars back in the day, and other aerodynamic bits all create downforce. Downforce keeps cars planted on the road at speed and ensures the tires are pressed firmly onto the road for maximum grip.

What’s cool about downforce is it can be used at both high and low speeds relative to the capabilities of the vehicle. Downforce is often associated with high-speed driving, especially cornering, such as an IndyCar that needs every teeny bit of grip it can muster as it courses through the Long Beach Grand Prix circuit. The Dallara-designed chassis is a prime example because of its heavy use of aerowork.

However, downforce plays into low-speed performance, too—this is why you’ll often see heavily modified autocross cars with massive wings. Despite autocross courses often featuring low-speed sections in their tight courses, cars with wings that have a lot of surface area can still use that air to help stay planted and shave thousandths of a second off of their run times.
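The physics behind all of this is the standard aerodynamic lift equation, F = ½ρv²ACl, applied pointing down. The sketch below is not from the article; the wing area and lift-coefficient figures are invented ballpark values. It shows why the v² term dominates, and why a big wing still earns its keep at autocross speeds.

```python
# Ballpark downforce from the lift equation F = 0.5 * rho * v^2 * A * Cl,
# pointed downward. Area and Cl below are invented illustrative values.
RHO = 1.225  # air density at sea level, kg/m^3

def downforce_n(speed_ms: float, area_m2: float, cl: float) -> float:
    """Downforce in newtons for wing area `area_m2` (m^2) and lift coefficient `cl`."""
    return 0.5 * RHO * speed_ms**2 * area_m2 * cl

# The v^2 term is why downforce is associated with high speed, and why
# autocross cars compensate with sheer wing area: halving speed cuts
# downforce to a quarter, not a half.
race = downforce_n(80.0, 1.5, 3.0)  # ~290 km/h straightaway
auto = downforce_n(20.0, 1.5, 3.0)  # ~70 km/h autocross corner
print(f"{race:.0f} N at race pace vs {auto:.0f} N at autocross pace")
```

Doubling the wing area at 20 m/s buys back as much grip as a 40% speed increase would, which is roughly the trade those big-winged autocross cars are making.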

by Peter Nelson, The Drive |  Read more:
Image: Peter Nelson

Thursday, July 3, 2025

So You Want To Look Rich?

So, you want to look rich? Well, you’ve come to the right place. And no, I won’t be peddling any “quiet luxury” nonsense here (barf). I’m here to show you the cheapest way to get the biggest, boldest piece of artwork in your home. Because nothing says “Daddy Warbucks” quite like art that eats an entire wall for breakfast.


“HoOooOoOw does this make meEeeeeEe look riiiiicCccCCh?” you ask. Well, if you’ve ever tried to frame anything in this godforsaken town, you know it’s astronomically expensive. And sure, I respect the craft—cutting glass, sanding wood, fastening a perfect corner joint? Not easy. My wallet, however, does not share the same sentiment and admiration for *~craft~* (one day). Large-scale framing is expensive, so having large-scale art in your home must = wealth. Is this girl math?

Lucky for you, I’m scrappy/good at connecting dots and figured out a workaround that gets you art + a frame for around $200(ish). And when we’re talking large-scale art? That’s not not highway robbery!!!!!!!!

So, here’s a breakdown of exactly what you’re going to do:

Step 1:

Buy this huge-ass frame from IKEA. As someone who has spent far too much time on the hunt for large-scale frames at a kind price, let me tell you, this frame is a godsend.

Step 2:

Head to the National Gallery’s website and dive into their free image archive. I first discovered it in college thanks to my genius art history professor Brantl (miss you, legend). Their open-access archive lets you download high-res images of various works, totally free. Pro Tip: make sure the free image download filter is turned ON.

Feeling overwhelmed by the options? Don’t panic, hun. That’s what I’m here for. Below are some solid search terms and filters to get you started:

Search Terms: Horse Race, Shaker Drawings, Edgar Degas, Flora and Fauna, Alfred Stieglitz, Post Impressionist, Pierre Bonnard, Holger Hanson, Tamarind Institute, Robert Frank, Spanish Southwest, Realist, George Bellows, John Sloan, Abstract Expressionist, Mark Rothko, Kenneth Noland, John Frederick Peto, Realist (Subject>Still Life), Photography (Themes>Motion), Landscape, Painting (Subject>Place Names), Ernst Kirchner, Charles Logasa, Drawing (Subject>Objects), Paul Klee, Walter Griffin, Drawings (Subjects>Flora & Fauna), Index of American Design, Mina Lowery.

Here are some fun ones I found:  [ed. more...]


Step 3 (Edited):

Hit! That! Download! Button! And throw your chosen artwork into Photoshop. Crop it to your frame size (78.75" x 55"), then head to ‘Image Size’ and bump the resolution from 72 to 300 PPI to keep things crisp. Then (important!) grow the artwork by 3 inches, bringing it to 81.75" x 58". That extra bit will help it sit just right and tight in the frame.
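If you'd rather script that crop-and-resize than eyeball it in Photoshop, the same steps can be sketched in Python with Pillow. The `prepare_print` helper and the filenames are my own invention, not part of the original instructions; the frame and border dimensions are the ones given above.

```python
# Center-crop a downloaded artwork to the print's aspect ratio, then
# resize it to print resolution. Dimensions match the tutorial:
# 78.75" x 55" frame plus a 3" border = 81.75" x 58" at 300 PPI.
from PIL import Image

FRAME_W_IN, FRAME_H_IN = 78.75, 55.0  # frame opening, inches
BORDER_IN = 3.0                       # extra fabric so the print sits tight

def prepare_print(img: Image.Image, ppi: int = 300) -> Image.Image:
    """Center-crop to the print's aspect ratio, then resize to `ppi`."""
    target_w_in = FRAME_W_IN + BORDER_IN          # 81.75"
    target_h_in = FRAME_H_IN + BORDER_IN          # 58"
    ratio = target_w_in / target_h_in
    w, h = img.size
    if w / h > ratio:                             # too wide: trim the sides
        new_w = round(h * ratio)
        left = (w - new_w) // 2
        img = img.crop((left, 0, left + new_w, h))
    else:                                         # too tall: trim top and bottom
        new_h = round(w / ratio)
        top = (h - new_h) // 2
        img = img.crop((0, top, w, top + new_h))
    size = (round(target_w_in * ppi), round(target_h_in * ppi))
    return img.resize(size, Image.LANCZOS)

# Usage, assuming the download is saved as artwork.jpg:
#   prepare_print(Image.open("artwork.jpg")).save("print.jpg", dpi=(300, 300))
```

One caveat: upsampling a 72 PPI download to 300 PPI adds pixels, not detail, so start from the highest-resolution file the archive offers.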

Step 4:

Next, head to www.bagofloveuse.com (I’m serious), toggle over to the Fabric & Leather Printing menu, and upload your artwork under the “Print on Fabric” section. You’ll want to input custom dimensions and choose a fabric that prints rich, saturated color with zero shine. I went with the 6.28oz cotton twill and can’t recommend it enough. It has weight, texture, and looks way more expensive than it is. Also, because you added that 3-inch border around your artwork, you can opt for the “uneven scissor cut,” which is free (I swear I’m not usually this cheap).

One note: Bags of Love now caps their print width at 57.09 inches, but since that’s still wider than your frame, you should be fine. You’ll just have to be a bit more precise when snapping it in. Horizontal images still work best, but if you’re feeling bold with a vertical, go for it. You do you.

Step 5:

Time to get that m-effer in the frame! I recommend doing this with a friend (free labor, obviously) because getting the fabric pulled taut and snapped cleanly into the back of the frame is much easier with an extra set of hands. Like most things IKEA, the setup is pretty painless and requires little to no tools.

Step 6:

Honestly, I wish there was more to it, but that’s it. Hang it up and you’re done. You look rich, and now everybody wants to be your friend!

Anyway, without further ado, here are some gorgeous examples of large-scale artworks in homes I love. May they inspire your walls: [ed. more..]

by Juliana Ramirez, Search Terms | Read more:
Images: Andy Williams; John Decker, Green Plums, 1885; Peter Henry Emerson, Marsh Weeds, 1895.
[ed. See also: Everyone’s Moving (thoughtful gifts for new beginnings). Lots of good links.]

Thursday, June 26, 2025

Not Made in the USA

The Trump phone was announced last week with a claim that the device would be made entirely in America, and people were rightly skeptical. Trump Mobile's $500 T1 Phone "is a sleek, gold smartphone engineered for performance and proudly designed and built in the United States for customers who expect the best from their mobile carrier," the Trump Organization said in a press release.

But with electronics supply chain experts casting doubt on the feasibility of designing and building an American-made phone in a short span of time, Trump Mobile's website doesn't currently promise an American-made phone. The website says the T1 is "designed with American values in mind," that it is "brought to life right here in the USA," and that there are "American hands behind every device."

The Trump Mobile website previously said, "Our MADE IN THE USA 'T1 Phone' is available for pre-order now." The phone was initially supposed to be available in August, but the date was changed to September, and now the website simply says it will be available "later this year." (...)

Some experts have said the Trump phone appears to be a re-skinned version of the REVVL 7 Pro 5G, made by Chinese company Wingtech. The REVVL 7 Pro 5G is sold by T-Mobile for $250, half the price of the Trump phone.

by Jon Brodkin, Ars Technica |  Read more:
Image: Getty Images/Joe Readle
[ed. Lol. The bullshit/scamming machine continues firing on all cylinders. A+ for creativity. See also: this.]

Saturday, June 21, 2025

Honda Rockets

Honda’s hopper suddenly makes the Japanese carmaker a serious player in rocketry.

An experimental reusable rocket developed by the research and development arm of Honda Motor Company flew to an altitude of nearly 900 feet Tuesday, then landed with pinpoint precision at the carmaker's test facility in northern Japan.

The accomplishment may not sound like much, but it's important to put it into perspective. Honda's hopper is the first prototype rocket outside of the United States and China to complete a flight of this kind, demonstrating vertical takeoff and vertical landing technology that could underpin the development of a reusable launch vehicle. (...)

Developed in-house by Honda R&D Company, the rocket climbed vertically from a pedestal at the company's test site in southeastern Hokkaido, the northernmost of Japan's main islands. It reached an altitude of about 890 feet (271 meters), then descended to a nearby landing target and settled on its four landing legs just 15 inches (37 centimeters) from its aim point, according to Honda.

What's more, the rocket stood on its four landing legs for liftoff, then retracted the landing gear as it climbed into the sky. At its highest point, the vehicle extended aerodynamic fins akin to those used on SpaceX's reusable Falcon 9 and Super Heavy boosters. Moments before reaching the ground, the rocket folded the fins against its fuselage and deployed its four landing legs for touchdown. The flight lasted approximately 57 seconds.

by Stephen Clark, Ars Technica |  Read more:
[ed. A company deeply committed to R&D, over short-term shareholder returns, applying its expertise across a variety of platforms. Very impressive.]