Thursday, July 18, 2019

Washington State’s Big Bet On ‘Free College’

Washington state doesn’t have a problem finding educated people to work in its booming high-tech economy – it’s just that most of those people come from out of state.

This is why Washington enacted the landmark Workforce Education Investment Act in May 2019.

The main idea behind the new law is to make college more affordable. It does so by providing state grants that will cover much or all of tuition for more Washington resident students – an estimated 36,000 more by 2021, eligible based on their income, according to a Senate source with knowledge of the plan. This will be done through the new Washington College Grant.

The bill was passed at a time when several presidential candidates are pushing ambitious plans on college affordability. Washington state Gov. Jay Inslee, himself a presidential candidate, has said the bill puts Washington state “ahead of the nation” in providing college access, but has not made it a centerpiece of his campaign.

I’m the author of a book about how states finance higher education. Here are what I see as the most significant aspects of what has been described as Washington state’s “free college” plan.

1. Businesses will pay for it

Since the new Workforce Education Investment Act will benefit employers, they’re the ones who will pay most of its cost. Firms that hire workers with advanced skills will pay more in business taxes. For instance, under the new law, advanced computing businesses with gross revenues over US$100 billion – meaning Amazon and Microsoft – will pay the highest rates on their state business taxes: an increase of two-thirds over what they already pay, up to a $7 million annual limit per firm.

If it seems unusual that this tax surcharge is directed at specific firms, that’s because it is. The firms’ willingness to pay increased tax rates in order to produce more of the workers they need at home was a big factor in building legislative support for the new tax. Employers in Washington state have long complained about the “skills gap”: that is, how hard it is to find skilled workers locally.

Indeed, the state ranks third in the nation for attracting workers from elsewhere with a bachelor’s degree or higher. But when it comes to producing an educated workforce among its own citizens, Washington comes up short. It is in the bottom 10 states in producing college graduates.

2. Funding for financial aid is guaranteed

One of the most significant features of the new law is that it guarantees for the first time that funding will actually be available to cover the grants. This is important because, since the Great Recession, the state has been unable to fund all of the eligible students who applied for the State Need Grant. In 2018, for instance, more than a quarter of eligible applicants, some 22,600, were turned away. This has been deeply unpopular.

The fact that the grant money is guaranteed may lead students – especially first-generation college students – to do more to prepare for college, because they know the cost is covered, according to research by Laura Perna, a higher education researcher at the University of Pennsylvania.

3. More money, fewer rules

Washington state’s new college affordability initiative differs from the “free college” efforts being undertaken by other states. In Tennessee, Oregon, Rhode Island and, soon, Massachusetts, the “free college” initiatives are mostly limited to tuition-free community college for some students. But in Washington state, the Workforce Education Investment Act provides money for students to attend not only a community college, but also four-year public and private colleges and universities.

Other states’ free college initiatives, for the most part, are “last dollar” programs. The most prominent example is Tennessee. In last dollar programs, the state money students get is applied toward their college costs only after they have gotten other financial aid, such as federally administered Pell Grants. These last dollar state grants typically cover only tuition and cannot be applied to living costs.

The new Washington program, however, offers “first dollar” grants. This allows students to apply Pell and other aid to college costs besides tuition, such as books, room and board, and transportation. This lowers the amount that students have to borrow for college.

Also, unlike in some states’ “free college” programs, there is no residency requirement after graduation. This is not the case in, for example, New York, where students who get their tuition covered by an Excelsior Scholarship must live and work in New York for the same number of years that they received the scholarship. Otherwise, their scholarship becomes a repayable loan.

The new law also seeks to help those who need training that doesn’t necessarily involve college. For instance, students can use the grants for Registered Apprenticeship programs, which sometimes charge tuition. The act also provides substantial new money – $11.5 million for the next two-year budget cycle – for Career Connect Washington, an effort to bring employers and educators together to design programs that emphasize the skills employers seek.

by William Zumeta, The Conversation | Read more:
Image: VDB/Shutterstock

Wednesday, July 17, 2019

Neal Stephenson on Depictions of Reality

If you want to speculate on the development of tech, no one has a better brain to pick than Neal Stephenson. Across more than a dozen books, he’s created vast story worlds driven by futuristic technologies that have both prophesied and even provoked real-world progress in crypto, social networks, and the creation of the web itself. Though Stephenson insists he’s more often wrong than right, his technical sharpness has even led to a half-joking suggestion that he might be Satoshi Nakamoto, the shadowy creator of bitcoin. His latest novel, Fall; or, Dodge in Hell, involves a more literal sort of brain-picking, exploring what might happen when digitized brains can find a second existence in a virtual afterlife.

So what’s the implicit theology of a simulated world? Might we be living in one, and does it even matter? Stephenson joins Tyler to discuss the book and more, including the future of physical surveillance, how clothing will evolve, the kind of freedom you could expect on a Mars colony, whether today’s media fragmentation is trending us towards dystopia, why the Apollo moon landings were communism’s greatest triumph, whether we’re in a permanent secular innovation starvation, Leibniz as a philosopher, Dickens and Heinlein as writers, and what storytelling has to do with giving good driving directions.

TYLER COWEN: I am here today with Neal Stephenson, who is arguably the world’s greatest author of speculative fiction and science fiction. Welcome, Neal.

NEAL STEPHENSON: It’s good to be here. Thanks for having me on your program.

COWEN: Let me start with some general questions about tech. We will get to your new book.

How will physical surveillance evolve? There’s facial surveillance, gait surveillance in China that’s coming to many airports. What’s your vision for this?

STEPHENSON: When you say physical surveillance, you just mean —

COWEN: They record your face, they know who you are, they track your movements.

STEPHENSON: Actually recording you while you’re wandering around somewhere, as opposed to tapping your phone, that kind of thing.

COWEN: And if you jaywalk, they’ll fine your bank account, and you’ll get a text message two minutes later.

STEPHENSON: Right. Well, I think it’s just going to be based on what people are willing to tolerate and put up with. There’s already something of a backlash going on over the use of facial recognition in some cities in this country. I think people just have to be diligent and be aware of what’s happening in that area and push back against it.

COWEN: Is there a positive scenario for its spread?

STEPHENSON: For it spreading?

COWEN: Right. Is it possible it will make China a more cooperative place, a more orderly place, and in the longer run, they’ll be freer? Or is that just not in the cards?

STEPHENSON: I’m not sure if cooperative, orderly, and freer are compatible concepts, right? Cooperative and orderly, definitely. People who are in internment camps are famously cooperative and orderly, but . . .

Freedom is a funny word. It’s a hard thing to talk about because to a degree, if this kind of thing cuts down, let’s say, on random crime, then it’s going to make people effectively freer. Especially if you’re a woman or someone who is vulnerable to being the victim of random crime, and some kind of surveillance system renders that less likely to happen, then, effectively, you’ve been granted a freedom that you didn’t have before.

But it’s not the kind of statutory freedom that we tend to talk about when we’re talking about politics and that kind of thing.

COWEN: Other than satellites, which are already quite proven, what do you think is the most plausible economic value to space?

STEPHENSON: It’s tough making a really solid economic argument for space. There’s a new book out by Daniel Suarez called Delta-V, in which he’s advancing a particular argument, which is a pretty abstract idea based on how debt works and what you have to do in order to keep an economy afloat. But I think it’s a thing that people need to do because they want to do it, as opposed to because there’s a sound business argument for it.

COWEN: Do you think, socially, we’re less willing or able to do it psychologically than, say, in the 1960s?

STEPHENSON: Well, the ’60s was funny because it was a Cold War propaganda effort on both sides. The whole story of how that came about is a really wild story that begins with World War II, when Hitler wants to bomb London. But it’s too far away, so he has to build big rockets to do it with. So rockets advance way beyond where they would have advanced had he not done that.

Then we grab the technology, and suddenly we need it to drop H bombs on the other side of the world. So again, trillions of dollars of money go into it, and then it becomes so dangerous that we can’t actually use it for that. Instead, we use that rocket technology to compete in the propaganda sphere. I once knew a grizzled old veteran of that ’60s space program who said that the Apollo moon landings were communism’s greatest triumph.

So that’s how that all happened, and it happened way earlier than any kind of rational economic argument could be made for it. I still think it’s the case that, if we’re going to do things in space, it’s more for psychological reasons than it is for money reasons.

COWEN: If we had a Mars colony, how politically free do you think it would be? Or would it just be like perpetual martial law? Like living on a nuclear submarine?

STEPHENSON: I think it would be a lot like living on a nuclear submarine because you can’t — being in space is almost like being in an intensive care unit in a hospital, in the sense that you’re completely dependent on a whole bunch of machines working in order to keep you alive. A lot of what we associate with freedom, with personal freedom, becomes too dangerous to contemplate in that kind of environment.

COWEN: Is there any Heinlein-esque-like scenario — Moon is a Harsh Mistress, where there’s a rebellion? People break free from the constraints of planet Earth. They chart their own institutions. It becomes like the settlements in the New World were.

STEPHENSON: Well, the settlements in the New World, I don’t think are a very good analogy because there it was possible — if you’re a white person in the New World and you have some basic skills, you can go anywhere you want.

An unheralded part of what happened there is that, when those people got into trouble, a lot of times, they were helped out by the indigenous peoples who were already there and who knew how to do stuff. None of those things are true in a space colony kind of environment. You don’t have indigenous people who know how to get food and how to get shelter. You don’t have that ability to just freely pick up stakes and move about.

On social media

COWEN: You saw some of the downsides of social media earlier than most people did in Seveneves. It’s also in your new book, Fall. What’s the worst-case scenario for how social media evolved? And what’s the institutional failure? Why do many people think they’re screwing things up?

STEPHENSON: I think we’re actually living through the worst-case scenario right now, so look about you, and that’s what we’ve got. Our civil institutions were founded upon an assumption that people would be able to agree on what reality is, agree on facts, and that they would then make rational, good-faith decisions based on that. They might disagree as to how to interpret those facts or what their political philosophy was, but it was all founded on a shared understanding of reality.

And that’s now been dissolved out from under us, and we don’t have a mechanism to address that problem.

COWEN: But what’s the fundamental problem there? Is it that decentralized communications media intrinsically fail because there are too many voices? Is there something about the particular structure of social media now?

STEPHENSON: The problem seems to be the fact that it’s algorithmically driven, and that there are not humans in the loop making decisions, making editorial, sort of curatorial decisions about what is going to be disseminated on those networks.

As such, it’s very easy for people who are acting in bad faith to game that system and produce whatever kind of depiction of reality best suits them. Sometimes that may be something that drives people in a particular direction politically, but there’s also just a completely nihilistic, let-it-all-burn kind of approach that some of these actors are taking, which is just to destroy people’s faith in any kind of information and create a kind of gridlock in which nobody can agree on anything.

COWEN: If we go back to the world of 2006, where there’s Google Reader, there’s plenty of blogs, RSS is significant, algorithms are much, much less important — does that work well in your view? Or is the problem more deeply rooted than that?

STEPHENSON: Well, I think, at the end of the day, people are not going to agree on facts unless there’s a reason for them to do so. I’ve been talking about a really interesting book called A Culture of Fact by Barbara Shapiro, which is a sort of academic-style book that discusses how the idea of facts entered our minds in the first place because we didn’t always have it. Procedures were developed that would enable people to agree on what was factual, and that had a huge impact on culture and on the economy and everything else.

And now that’s, as I said, going away, and the only way to bring it back is, first, to have a situation where people need and want to agree on facts.

On what the future will look like

COWEN: Your idea of this smart book, which is in Diamond Age — do you think that will ever happen? There will be a primer that people use, and it’s online, and it will educate them and teach them how to be more disciplined?

STEPHENSON: A lot of different people have taken inspiration from The Diamond Age and worked on various aspects of the problem. It’s always interesting to talk to them because it’s sort of a classic “six blind men and the elephant” thing, where I’ll hear from someone who says, “Oh, I’m working on something inspired by The Diamond Age.” And I ask them what that means to them, and it’s always a little different.

Sometimes it’s how do we physically build something that could do what that book does? Sometimes it’s how do we organize knowledge, how do we set up curricula that are adaptable to the needs of a particular reader? It’s really not just one technology. It’s a whole basket of different hardware and software technologies, and people are definitely coming at that from various angles right now.

COWEN: What do you think stops it from happening? We don’t have the tech? Or just users aren’t interested, or what? What’s the constraint?

STEPHENSON: It’s just kind of distributed among a large number of different projects. There’s not any one big, centralized, this-is-it version of the thing, which isn’t necessarily bad. That’s a great way for people to spawn a lot of ideas and do a lot of decentralized work on a project, but nothing is pulling it together into the primer.

COWEN: In your early novels, like Snow Crash, Diamond Age, there’s a sense that states often have become quite weak. Do you think in reality, the state has ended up staying more powerful, for reasons which are surprising? Or you foresaw that?

STEPHENSON: I certainly didn’t foresee anything. In Snow Crash, in Diamond Age, I’m kind of riffing on a way of thinking that I saw quite a bit among basically libertarian-minded techies during the ’80s and the ’90s that was all about getting rid of the nation-state and reducing the power of nation-states.

If that was happening, I think it got flipped in the other direction, basically, by 9/11. When something like that happens, it immediately creates a desire in a lot of people’s minds to return to a more centralized, authoritarian nation-state arrangement, and that’s the trajectory that we’ve been on ever since.

by Tyler Cowen, Conversations with Tyler | Read more:
Image: uncredited
[ed. I've been waiting for this interview since it was first announced.]

Shopify and the Power of Platforms

While I am (rightfully) teased about how often I discuss Aggregation Theory, there is a method to my madness, particularly over the last year: more and more attention is being paid to the power wielded by Aggregators like Google and Facebook, but to my mind the language is all wrong.

I discussed this at length last year:
  • Tech’s Two Philosophies highlighted how Facebook and Google want to do things for you; Microsoft and Apple were about helping you do things better.
  • The Moat Map discussed the relationship between network effects and supplier differentiation: the more that network effects were internalized the more suppliers were commoditized, and the more that network effects were externalized the more suppliers were differentiated.
  • Finally, The Bill Gates Line formally defined the difference between Aggregators and Platforms. This is the key paragraph:
This is ultimately the most important distinction between platforms and Aggregators: platforms are powerful because they facilitate a relationship between 3rd-party suppliers and end users; Aggregators, on the other hand, intermediate and control it.
It follows, then, that debates around companies like Google that use the word “platform” and, unsurprisingly, draw comparisons to Microsoft twenty years ago, misunderstand what is happening and, inevitably, result in prescriptions that would exacerbate problems that exist instead of solving them.

There is, though, another reason to understand the difference between platforms and Aggregators: platforms are Aggregators’ most effective competition.

Amazon’s Bifurcation

Earlier this week I wrote about Walmart’s failure to compete with Amazon head-on; after years of trying to leverage its stores in e-commerce, Walmart realized that Amazon was winning because e-commerce required a fundamentally different value chain than retail stores. The point of my Daily Update was that the proper response to that recognition was not to try to imitate Amazon, but rather to focus on areas where the stores actually were an advantage, like groceries. Still, it’s worth understanding exactly why attacking Amazon head-on was a losing proposition.

When Amazon started, the company followed a traditional retail model, just online. That is, Amazon bought products at wholesale, then sold them to customers:


Amazon’s sales proceeded to grow rapidly, not just in books but also in other media products with large selections, like DVDs and CDs, that benefitted from Amazon’s effectively unlimited shelf space. This growth allowed Amazon to build out its fulfillment network, and by 1999 the company had seven fulfillment centers across the U.S. and three more in Europe.

Ten may not seem like a lot — Amazon has well over 300 fulfillment centers today, plus many more distribution and sortation centers — but for reference Walmart has only 20. In other words, at least when it came to fulfillment centers, Amazon was halfway to Walmart’s current scale 20 years ago.

It would ultimately take Amazon another nine years to reach twenty fulfillment centers (this was the time for Walmart to respond), but in the meantime came a critical announcement that changed what those fulfillment centers represented. In 2006 Amazon announced Fulfillment by Amazon, wherein 3rd-party merchants could use those fulfillment centers too. Their products would not only be listed on Amazon.com, they would also be held, packaged, and shipped by Amazon.

In short, Amazon.com effectively bifurcated itself into a retail unit and a fulfillment unit:


The old value chain is still there — nearly half of the products on Amazon.com are still bought by Amazon at wholesale and sold to customers — but 3rd parties can sell directly to consumers as well, bypassing Amazon’s retail arm and leveraging only Amazon’s fulfillment arm, which was growing rapidly:


Walmart and its 20 distribution centers don’t stand a chance, particularly since catching up means competing for consumers not only with Amazon but with all of those 3rd-party merchants filling up all of those fulfillment centers.

Amazon and Aggregation

There is one more critical part of the drawing I made above:


Despite the fact that Amazon had effectively split itself in two in order to incorporate 3rd-party merchants, this division is barely noticeable to customers. They still go to Amazon.com, they still use the same shopping cart, they still get the boxes with the smile logo. Basically, Amazon has managed to incorporate 3rd-party merchants while still owning the entire experience from an end-user perspective.

This should sound familiar: as I noted at the top, Aggregators tend to internalize their network effects and commoditize their suppliers, which is exactly what Amazon has done.

Amazon benefits from more 3rd-party merchants being on its platform because it can offer more products to consumers and justify the buildout of that extensive fulfillment network; 3rd-party merchants are mostly reduced to competing on price.

That, though, suggests there is a platform alternative — that is, a company that succeeds by enabling its suppliers to differentiate and externalizing network effects to create a mutually beneficial ecosystem. That alternative is Shopify.

The Shopify Platform

At first glance, Shopify isn’t an Amazon competitor at all: after all, there is nothing to buy on Shopify.com. And yet, 218 million people have bought products from Shopify without even knowing the company existed.

The difference is that Shopify is a platform: instead of interfacing with customers directly, 820,000 3rd-party merchants sit on top of Shopify and are responsible for acquiring all of those customers on their own.

by Ben Thompson, Stratechery | Read more:
Images: Stratechery

Make America Dated Again: The Chinese Reproducing US Vintage

SHANGHAI - When Shen Wei smokes a Cuban cigar and plays big-band music through his decades-old American radio, he’s whisked to a bygone era.

“The sound — it brings you back to that age,” says the 38-year-old artist and entrepreneur. “You can imagine, 80 years ago maybe, a gorgeous lady sitting over there, listening to some beautiful music. You have some connection with it.”

The cavernous ground-floor showroom where Shen traverses time is filled with U.S. military memorabilia he’s collected over the years: uniforms, helmets, hats, sunglasses, gloves, jewelry, and watches. Under a wall-mounted American flag and model fighter plane sits the large tube radio Shen bought on eBay six years ago, produced by General Electric in 1940.

As for the replica World War II-era Air Force jackets hanging on racks, they’re Shen’s original creations. To emulate the look and feel of historical jackets, he uses vegetable tanning for the leather and attaches oxidized copper buttons. If customers wish, Shen can also stitch on military patches, or paint the backs with images of aircraft, pinup girls, and cartoon characters like Bugs Bunny — just like American pilots used to do. Such customized jackets can sell for over 20,000 yuan ($2,900) apiece.

“People are getting richer. They have money to spend on this,” says Shen. “My customers are willing to pay this price to have something very unique.”

As the owner of Shanghai-based vintage brand Lucky Forces, Shen is one of a growing number of Chinese entrepreneurs faithfully re-creating Western items from the 1930s to 1960s. He sells to a burgeoning community of newly minted vintage fans in the country — typically men in their 30s and 40s — who see in the objects a timeless aesthetic, an air of prestige, and an escape from the pressures of work. But while the group is developing fast, artisans say vintage culture is still misunderstood and that the market, with its small size and competition from copycat merchants, can be a challenge.

Shen developed a particular interest in U.S. Army vintage after seeing high-grossing period films like 2001’s “Pearl Harbor” while in university. He remembers falling in love with the soldiers’ clothing, moved by the use of painted jackets as a creative distraction from the realities of war.

Over a decade later, Shen now sees his vintage collecting as a means of glimpsing into a past he never experienced but feels nostalgia for nevertheless. While his view of yesteryear might be rather narrow and romanticized, he muses, it reflects a profound desire to be somewhere other than the present. (...)

Lucky Forces may be all-American in its style, but the craft philosophy behind it originated in neighboring Japan. Shen was initially inspired to start his brand after learning of the Japanese fashion movement amekaji, or “American casual,” on websites like the influential vintage forum 33oz. Nearly 40 years ago, Japanese brands began re-creating American clothing and designs dating from the 1940s to the ’60s with a high degree of authenticity, with some even using looms and other manufacturing equipment from the era to make denim. Amekaji brands have since gained international attention for their take on old U.S. clothing, while department stores, thrift shops, fashion magazines, and events catering to the style have made Japan a mecca for Chinese vintage fans.

“After digging deeply into amekaji and vintage culture, a lot of 33oz forum users have ended up starting their own brands,” says Li Ying, manager of 33oz. Li says that although China’s vintage subculture is rooted in amekaji, it has developed its own scene with a growing number of grassroots undertakings. 33oz has itself evolved from an online forum for denim fans into China’s leading promoter of vintage culture, selling clothes, organizing community fairs, and churning out social media content on platforms such as Weibo and WeChat. (...)

Cui Wei, a catering entrepreneur who organizes vintage-themed events in his spare time, has seen many Chinese artisans come and go from the domestic subculture — often failing because their meticulously handcrafted goods can be cheaply imitated and mass-produced by merchants on e-commerce platforms like Taobao.

“It’s similar to the pop music I used to do,” says Cui, a former professional singer. “Being original is hard, but making commercial music is easy.” Cui hopes his events, which he pays for at his own expense, will nurture vintage culture in China and protect brands like Lucky Forces and Han’s Pipes.

The spread of the hobby is slow, partly because it’s costly. Cui says his understanding of vintage culture — particularly amekaji — developed over the 15 trips he has taken to Japan since 2016. Even replicas of vintage clothing are pricey, leading to an expression popular among local vintage fans: “You have to be really rich to look really poor.”

by Kenrick Davis, Sixth Tone | Read more:
Image: Kenrick Davis/Sixth Tone

At the World Taxidermy and Fish Carving Competition

On a stormy day in Springfield, Mo., the Expo Center was full of menacing bears, jumping lions, flying birds, and swimming fish, all remaining pretty still as they got their hair blow-dried, feathers tweaked, or scales retouched.

Every other year since 1983, taxidermists from all over the world gather for the World Taxidermy and Fish Carving Championships. Since the 1990s, it has been organized by Larry Blomquist, owner and publisher of the taxidermy magazine Breakthrough.

“You need a lot of skills to be a good taxidermist,” he says, “a good knowledge of anatomy, habitat, sculpting, sewing, painting, and have a very creative mindset to come up with the piece to look alive and tell a story. A good taxidermist is an artist.”

About 30 judges, two for each category ranging from Large Mammals to Reptiles, walk among the entries. In front of a very stoic rabbit being attacked by a lynx, a judge comments: “He looks like he is getting a back-scratch. This is not realistic.”

“A good piece needs to display emotion; this is what makes it stand out,” says Wendy Christensen, who has been a judge for 25 years. “But it also needs to be anatomically correct.” To check accuracy, judges run their fingers through fur, inspect teeth with a flashlight, and compare pieces with reference photos.

A winning entry can take, on average, 150 hours to complete and eventually sell for $10,000 to $20,000.

This year the Best of Show did not have fur or feathers. For the first time in the championship’s history, the top prize was awarded to a large, majestic fish called a muskellunge—better known as a “muskie”—created by Tim Gorenchan from Escanaba, Mich.

Next door to the competition is one of the largest trade shows in the taxidermy industry. Vendors populating 166 booths sell glass eyes and other artificial parts such as noses, jaws, and reproduction turkey heads, or display habitat materials such as fake rocks and tree branches.

Allis Markham, a taxidermist based in Los Angeles, has been giving classes for several years in the hope of making taxidermy more popular in urban areas and among women. She says 95% of her students are women. One of her former students, Lauren Crist, a full-time animator for Disney, won first place with her blue jay this year.

The competition also includes a category for newcomers no older than 14.

by Aude Guerrucci, Bloomberg | Read more:
Image: Aude Guerrucci

Tanaka Minori

Neuralink

Elon Musk’s Neuralink, the secretive company developing brain-machine interfaces, showed off some of the technology it has been developing to the public for the first time. The goal is to eventually begin implanting devices in paralyzed humans, allowing them to control phones or computers.

The first big advance is flexible “threads,” which are less likely to damage the brain than the materials currently used in brain-machine interfaces. These threads also create the possibility of transferring a higher volume of data, according to a white paper credited to “Elon Musk & Neuralink.” The abstract notes that the system could include “as many as 3,072 electrodes per array distributed across 96 threads.”

The threads are 4 to 6 μm in width, which makes them considerably thinner than a human hair. In addition to developing the threads, Neuralink’s other big advance is a machine that automatically embeds them.

Musk gave a big presentation of Neuralink’s research Tuesday night, though he said that it wasn’t simply for hype. “The main reason for doing this presentation is recruiting,” Musk said, asking people to go apply to work there. Max Hodak, president of Neuralink, also came on stage and admitted that he wasn’t originally sure “this technology was a good idea,” but that Musk convinced him it would be possible.

In the future, scientists from Neuralink hope to use a laser beam to get through the skull, rather than drilling holes, they said in interviews with The New York Times. Early experiments will be done with neuroscientists at Stanford University, according to that report. “We hope to have this in a human patient by the end of next year,” Musk said.

During a Q&A at the end of the presentation, Musk revealed a result that the rest of the team apparently hadn’t expected him to share: “A monkey has been able to control a computer with its brain.”

"It’s not going to be suddenly Neuralink will have this neural lace and start taking over people’s brains,” Musk said. “Ultimately” he wants “to achieve a symbiosis with artificial intelligence.” And that even in a “benign scenario,” humans would be “left behind.” Hence, he wants to create technology that allows a “merging with AI.” He later added “we are a brain in a vat, and that vat is our skull,” and so the goal is to read neural spikes from that brain.

The first paralyzed person to receive a brain implant that allowed him to control a computer cursor was Matthew Nagle. In 2006, Nagle, who had a spinal cord injury, played Pong using only his mind; the basic movement required took him only four days to master, he told The New York Times. Since then, paralyzed people with brain implants have also brought objects into focus and moved robotic arms in labs, as part of scientific research. The system Nagle and others have used is called BrainGate and was developed initially at Brown University.

by Elizabeth Lopatto, The Verge | Read more:
Image: Neuralink
[ed. Say what you will about Musk, the guy is not short on ideas.]

Tuesday, July 16, 2019

British Open 2019: A Second Chance for Royal Portrush

Acres of Clams

On this sunny, 55-degree Tuesday in March it feels like the whole of Seattle is on vacation. There are moms with strollers and hot dog vendors and gaggles of teenagers crowded along the city’s waterfront. I hear seagulls in the distance, that faint soundtrack of voices and birds and cars inching past looking for parking — but I don’t see them. I keep walking, past the aquarium, some souvenir shops, and finally a candy store, before arriving at last at Pier 54. The gulls, I discover, have found the best place on the waterfront to converge: the seafood bar outside Ivar’s Acres of Clams.

A statue of Ivar’s founder Ivar Haglund feeding french fries to hungry gulls sits outside the restaurant. Decades ago, a neighboring business posted signs demanding people stop feeding the birds, which were becoming entitled and cantankerous thanks to tourists’ well-intentioned offerings. But Haglund posted a sign of his own near the outdoor seating area for the fish bar: “Seagulls welcome! Seagull lovers welcome to feed seagulls in need.” A variation of the sign is still there today (along with an admonition not to feed any pigeons or birds that come into the covered eating area).

The last time I was here, back when my mom still lived in Seattle, everything was dark and empty. But now, thanks to the beginnings of a $688 million project to make the waterfront more pedestrian- and tourist-friendly, the area is barely recognizable. The city is demolishing the old Alaskan Way Viaduct, an elevated freeway that separated the water from downtown Seattle and cast a literal shadow over the once-bustling area. Parking lots are being replaced by more green space, bike paths, and an easy way to get from Pike Place Market to the waterfront’s other classic attractions like the aquarium, Ferris wheel, and countless T-shirt shops.

The old and new sit together in an uneasy stalemate. People stand on the sidewalk, phones outstretched, recording the Viaduct’s demolition across the street. I stand there with them, watching machines crumble concrete like they’re taking bites out of the infrastructure. It’s rare that you see a city decide what it wants to be, and Seattle wants its residents to feel as though they have all the advantages of a megalopolis like New York without trapping them between hot slabs of concrete walkways and buildings. The tourism board even tried to coin a term for it — “Metronatural” — referring to “a blending of clear skies and expansive water with a fast-paced city life.” Residents mocked the slogan but people kept moving there anyway. Today Seattle is a tech city, a place where the water feels like little more than a photo opportunity. The canneries and fishing-supply companies that once leased spaces on these piers live on only in restaurants serving seafood by the water.

There’s a crowd underneath the large “Ivar’s Fish Bar” neon sign when I walk up to order. The menus are written in faux-chalkboard style, and someone has stenciled the word “SEAFOOD” onto the small tiles beneath the counter. I’m hungry and excited to revisit Ivar’s for the first time since childhood. Yet I can’t help but worry that this bowl of chowder might not be as good as I remember; things are sometimes better when you leave them in the past.

Ivar’s Acres of Clams has been here in one form or another since the late 1930s. It was the first of what’s become a statewide chain, complete with 21 seafood bars and two other sit-down restaurants throughout Washington. Over the last 80-some years, Ivar’s has earned its status as a Pacific Northwest institution, but Acres, with its high prices ($25 for a salmon Caesar salad or $68 for a lobster tail surf and turf), feels like a place for tourists. Those in the know order food from the walk-up counter just next to the restaurant and eat at one of the many tables nearby (covered and uncovered, so no one has to worry about soggy food from the frequent Pacific Northwest rains).

People come to Ivar’s fish bar because it serves all the stuff you want to eat at the waterfront: chowders, seafood cocktails, and fish and chips. Ivar’s fish and chips are light and crispy, and the batter doesn’t separate from the cod fillets the way it does in so many subpar versions; one could almost imagine these cod swimming through the ocean with the crunchy breading for skin. (...)

The waterfront Ivar’s seafood bar is short-staffed today, and the employees ask people to step forward and order first the fried food, then everything else. It’s a confusing system but it works out in the end. I order something called “clam nectar,” imagining it served like an oyster shooter. The guys behind the counter shout out my order, “Three-piece cod and chips! Cup of chowder!” then fall into a whisper to add, “and a clam nectar too.”

In the 1970s, Haglund advertised the clam nectar by announcing that men needed permission from their wives to order more than three cups. Clam nectar, it turns out, was an uncontrollable aphrodisiac. Are clams, which don’t have sex to reproduce, just two shells containing a lifetime of frustrated libido?

The nectar comes in a paper cup, the kind of thing one usually has with coffee or a scalding tea. The nectar, which is essentially clam broth, spices, and butter, is light and rich, full of umami but without the heavy mouthfeel of a fatty pork broth. It’s delicious. I try it multiple times after multiple fishy palate-cleansers like chowder and chips, to be sure it was the nectar I was tasting.

Advertising clam nectar as an aphrodisiac was the kind of stunt Haglund pulled all the time. In 1947, a railroad tank car of corn syrup ruptured, sending a sticky-sweet slide out onto the waterfront. Haglund put on a pair of hip boots, ordered up a large stack of pancakes from his kitchen, and waded into the streets. When the newspapers came, they found him surrounded by syrup, spooning it onto his breakfast. A photo of him was passed through the newswires and found its way into papers around the world. A couple days before the Viaduct opened, Haglund hired a brass band to play outside Acres of Clams and invited everyone to help him give thanks to the city for building “acres of covered parking” outside his restaurant.

Haglund also often accidentally stepped into local politics. In 1976, he purchased the Smith Tower, Seattle’s first skyscraper, and flew a custom 16-foot windsock shaped like a salmon on top of it. When the city tried to have him take it down for a code violation, he protested in the form of bad poetry. Supporters (and even city officials) made their arguments in verse. When it was Haglund’s turn to talk, he urged the city not to make their decision too quickly “in light of all this free publicity.” The board approved the salmon.

by Tove Danovich, Eater | Read more:
Image: Lauren Segal

Monday, July 15, 2019

Prepare River Ecosystems for an Uncertain Future

In January, millions of fish died in Australia’s Murray–Darling Basin as the region experienced some of its driest and hottest weather on record. The heat also caused severe water shortages for people living there. Such harsh conditions will become more common as the world warms. Iconic and valuable species such as the Murray cod (Maccullochella peelii peelii) — Australia’s largest freshwater fish — could vanish, threatening biodiversity and livelihoods.

Rivers around the world are struggling to cope with changing weather patterns. In Germany and Switzerland, a heatwave last year killed thousands of fish and blocked shipping on the River Rhine. California is emerging from a six-year drought that restricted water supplies and devastated trees, fish and other aquatic life. Across the US southwest, extended dry spells are destroying many more forests and wetlands.

What should river managers do? They cannot look to tools of old: conventional management techniques that aim to restore ecosystems to their original state. Ongoing human development and climate change mean that this is no longer possible. And models based on past correlations do a poor job of predicting how species might respond to unprecedented changes in future (see ‘Ecosystem change’). A different approach is called for.

To maintain water supplies and avoid devastating population crashes, rivers must be managed adaptively, enhancing their resilience and limiting risk. Researchers must also develop better forecasting tools that can project how key species, life stages and ecosystems might respond to environmental changes. This will mean moving beyond simply monitoring the state of ecosystems to modelling the biological mechanisms that underpin their survival.

Model process

Today, river managers track properties such as species diversity and population abundance, and compare them with historical averages. If they spot troubling declines, they might intervene by, for instance, altering the amount of water released from dams. But by the time trends are detected, they can be impossible to arrest.

Understanding how sensitive ecosystems might change is crucial to managing them in the future. For example, in the American west, native cottonwoods (Populus spp.) are valuable, long-lived trees that anchor river banks and offer habitats for many species. They are finely tuned to seasonal flood patterns, releasing their seeds in early summer when river flows peak. The seeds take root in moist ground after the flood recedes. But if the flood is delayed, even by a few days, many seeds fall on dry ground and die. Drought-tolerant species, such as salt cedar (Tamarix ramosissima), that disperse seeds over a longer period will move in and dramatically alter conditions for native flora and fauna.
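To make the idea of a process-based model concrete, here is a deliberately toy sketch of cottonwood establishment as a function of the mismatch between peak seed release and flood recession. The seven-day window and the linear decline are invented for illustration only; they are not parameters from any published model.

```python
# Toy process-based recruitment model: the fraction of cottonwood
# seeds that take root declines as the flood recession drifts away
# from the seed-release peak. The 7-day window and linear decline
# are illustrative assumptions, not published parameters.

def establishment_fraction(seed_peak_day, recession_day, window_days=7):
    """Return the fraction of seeds establishing, given the day of
    peak seed release and the day the flood recedes (days of year)."""
    mismatch = abs(recession_day - seed_peak_day)
    return max(0.0, 1.0 - mismatch / window_days)

print(establishment_fraction(160, 162))  # small delay: most seeds root
print(establishment_fraction(160, 175))  # flood delayed two weeks: 0.0
```

A real model would add further processes (seedling survival as the water table drops, competition from salt cedar), but even this skeleton shows how a mechanistic rule turns a hydrograph into a prediction rather than a historical correlation.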

Models based on biological processes or mechanisms — that is, how rates of survival, reproduction and dispersal vary with environmental conditions — can follow and predict such shifts. For example, by modelling the impacts of changes in flood timing on aquatic invertebrates, it is possible to predict how the numbers of dragonflies and mayflies in a dryland river will vary with different patterns of dam releases.

Process-based models can be tailored to particular life stages of a species, or sequences of events. They can identify tipping points and bottlenecks. For example, they have revealed that the early juvenile stage of coho salmon (Oncorhynchus kisutch) in the northwestern United States is most sensitive to summer droughts. The salmon spawn in streams that flow into coastal rivers, and might spend a couple of years in fresh water before moving to the sea. Juveniles might not survive, or might find it hard to travel downstream, when the river levels are low.

Armed with all this information, managers can intervene before a problem arises. For example, in wet years, conservationists in the Pacific Northwest could find and support habitats that are crucial to juvenile salmon. They could manage water flows in dry years to enable the salmon to migrate. Similarly, in the US southwest, river flows could be increased strategically from reservoirs to protect important species, such as cottonwoods. And in Australia, letting more water pass through dams in spring could stop rivers drying up while the eggs of Murray cod mature.

Rivers must also be managed for people. Allocating scarce water resources is contentious. Policymakers, water-resource engineers, conservationists and ecologists must work together to decide how much water should be diverted to people, agriculture and industry, and how much is needed to protect ecosystems during drought.

Such models can also track how interactions among species in communities vary under changing conditions. For example, the loss of riparian specialists in dryland river ecosystems and invasion by both non-native and upland species in a drier future could create a vicious cycle. River ecosystems could become more vulnerable to climate change and to alien species.

Some river basins are beginning to be managed adaptively — agencies are trying different management practices, learning from them and updating them as needed. For example, in Australia, state and federal agencies periodically reassess and rebalance water allocations, as climate trends, information and assessment tools develop. Similarly, the Bay–Delta Plan in California proposes to revisit relationships between target species, water flows and water quality in San Francisco Bay and the Sacramento–San Joaquin River Delta every five years.

But adaptive management alone might miss conservation targets. Unexpected consequences could emerge over the long term as impacts mount. Process-based models can look further ahead and save time, money and disruption by limiting the number of interventions as well as avoiding adverse impacts. They would help stakeholders and managers to choose which features of ecosystems to maintain, to justify costly interventions such as major engineering works and to weigh trade-offs to build resilience under increasing climatic uncertainty.

Obstacles to implementation

Process-based models are already used in fisheries and conservation. For example, they have shown conservationists that it is more effective to protect juvenile loggerhead sea turtles from being caught in fishing nets than to safeguard their eggs on beaches. And such models help to guide the management of wetland habitats in the United States for the endangered Everglades snail kite (Rostrhamus sociabilis), the fledglings of which are susceptible to droughts.

But they are rarely used in river management, mainly because data on the basic biology of local species are lacking. Such data are costly for scientists and agencies to collect. Measuring fecundity or survival, for example, takes years and thus requires long-term funding and commitment. Such campaigns are usually reserved for endangered or commercially valuable species.

Simplifying models might help to bridge the data gaps in the interim. Species with similar life histories or characteristics might respond similarly to changing river conditions. Studies of one could inform models and management of similar species in other places. For instance, plains cottonwood (Populus deltoides) in North America, river red gum (Eucalyptus camaldulensis) in Australia, and Euphrates poplar (Populus euphratica) in North Africa and Eurasia are all riparian trees that have similar hydrological requirements and drought tolerances. They share characteristics such as shallow roots and furrowed bark that resists flood scour, and can resprout after being buried by sediment. Analytical methods could also be developed to extrapolate across gaps in data sets.

Four steps

River scientists and managers should take the following steps.

by Jonathan D. Tonkin, N. LeRoy Poff and colleagues, Nature | Read more:
Image: Jose Luis Roca/AFP/Getty

Why a "Public Option" Isn't Enough

At one point, the meaning of “Medicare For All” was quite clear. Under Medicare For All, every American, instead of having to navigate the tangled and inefficient marketplace of for-profit health insurance corporations, would simply be enrolled in Medicare. Instead of people paying premiums and copays to an insurance company, they would pay taxes, and those taxes would be used to pay providers. As Dr. Abdul El-Sayed wrote in this magazine, Medicare For All is “single-payer healthcare that would provide cradle-to-grave government-supported healthcare for all Americans.”

But as Democrats have realized how well the phrase “Medicare For All” polls with voters, its meaning has been deliberately muddied. Most of the Democratic presidential candidates now support something they call “Medicare For All,” but it’s often not clear what they mean by it. Some, when they clarify specifics, make it clear that what they actually want is a “public option,” i.e. a new kind of government insurance plan that you can buy within the structure of the existing healthcare marketplace. Pete Buttigieg says that he believes in “Medicare For All Who Want It.” Presumably, what this would mean in practice is that when you go to healthcare.gov to select your insurance plan, one option would be a thing called “Medicare For All,” and you could buy it, through premiums, if you chose it. This is, as Dr. El-Sayed points out, a “rebranding” of the concept, an attempt to present Bernie Sanders’ single payer proposal and Barack Obama’s old abandoned “public option” idea as roughly the same.

But how do proponents of (actual) Medicare For All respond to the basic arguments made by those proposing “Medicare For All Who Want It”? What Pete Buttigieg and other moderates say is this: Why force people into a government program? Most people are satisfied with their healthcare (though note the huge difference between the 70 percent of Medicare enrollees who say they are satisfied with the cost, and the 51 percent of people with private insurance who are satisfied with cost). Why abolish private insurance? Why not just have insurance companies compete against a government plan in an open marketplace where people can choose? That way, everyone who wants Medicare gets it, while people who are satisfied with their current insurance can keep it. Everyone wins. The implication here is that anyone who supports a full single-payer plan, in which everyone would just be insured under a government program, must be rigidly ideological, wanting to shutter the private insurance industry for no good reason. Why would we do that instead of just providing a new option?

To understand why full “single payer” health insurance is the left’s goal, rather than just “another insurance plan on the marketplace,” it helps first to understand the left’s vision for how healthcare should work. In an ideal world, your healthcare would not be something you have to think about very much. If you got sick, you would choose a doctor’s office and make an appointment. You would go to that appointment and see the doctor. Then you would leave. You would not have to apply for insurance, not have to pay bills. And this would be the case no matter who you were or how much money you made. In Britain, this is what you do already. As U.K. Current Affairs contributor Aisling McCrea has explained, the NHS makes healthcare easy. “Insurance” isn’t a part of it at all: Your relationship is between you and your doctor, not you and your doctor and your doctor’s hospital’s billing department and your insurance company. Leftists dream of making healthcare as easy as possible to receive and universally accessible to all regardless of how much money they have.

Private health insurance is an unnecessary part of the healthcare system. Insurance companies are middlemen, and insurance just exists to make sure that providers get paid. It was our government’s own choice to encourage the proliferation of private insurance, through laws like the Health Maintenance Organization Act of 1973. It was the federal government that subsidized private insurance companies and encouraged employers to use them. Other countries didn’t build this kind of healthcare system, for two reasons:
  • It doesn’t cover everyone.
  • It creates a bloated, inefficient insurance bureaucracy.
Our government has always been playing catch-up, trying to get more people covered. It has created employer subsidies, Medicaid, CHIP, and the Obamacare exchanges in a desperate bid to get this system to do its job, and despite decades of piecemeal healthcare reforms, 13.7 percent of Americans remain without health insurance and millions more have inadequate coverage. Offering to let Americans “buy in” to Medicare keeps them paying premiums, and as long as Americans must personally pay premiums to receive healthcare, there will be some people who can’t or won’t pay those premiums and go without. It turns Medicare For All into a publicly run HMO. Maintaining an employer-sponsored health insurance system means remaining in a situation where large numbers of people go through a period of being uninsured each year, because when you lose your job you lose your insurance. (Currently 1 in 4 Americans go through an uninsured period each year.) Single payer advocates ask: “Why have a nightmarish tangle of public and private options, varying by state, with people moving on and off all the time? Why not just pay for healthcare with taxes, cover everyone, and make it free at the point of use?”

Not only will a public option fail to cover everyone, it will do nothing to restrain the growth of healthcare costs. Single payer systems control costs by giving the health service a monopoly on access to patients, preventing providers from exploiting desperate patients for profit. If instead there are a large number of insurance companies, providers can play those insurance companies off each other. Right now, we have a two-tier system, in which the best doctors and hospitals refuse to accept your insurance unless your insurer pays them exorbitantly high rents. To support that cost while still making a profit, your insurer has to subject you to higher premiums, higher co-pays, and higher deductibles. Poor Americans with poor-quality insurance are stuck with providers who don’t provide high enough quality care to make these demands. The best providers keep charging ever higher rents, and the gap between the care they offer and the care the poor receive just keeps growing. Poor Americans are now seeing a decline in life expectancy, in part because they cannot afford to buy insurance that would give them access to the best doctors and hospitals. Costs balloon for rich Americans while the quality of care stagnates for the poor.

The bloat doesn’t just come from providers. Because insurance works on a profit incentive, the insurance companies must extract rents as well. So the patient is paying to ensure not only that their doctor or hospital is highly compensated, but that the insurance company generates profit too. Each insurance company has its own managers—its own CEO, its own human resources department, and so on. We have to pay all of these people, and because there are so many private insurance companies, there are so many middle managers to pay. (Barack Obama once bizarrely critiqued single payer by saying it would eliminate millions of jobs in the insurance bureaucracy, implying that we should keep admittedly pointless jobs and gouge patients as a make-work program.)

These duplicate bureaucracies are expensive to maintain and do nothing to improve the quality of care. The providers make them compete to offer higher compensation, and you pay for it. Getting rid of these middlemen makes the system far more efficient. We now spend 17 percent of GDP on healthcare. Britain spends 10 percent, and British people can expect to live two years longer. (Though in order to achieve a full cost-effective British system, we’d have to socialize medicine rather than just socializing insurance.) People do not associate government with efficiency, but when it comes to moving money from one place to another—which, after all, is all an insurance company does—it can be quite good, and it makes far more sense to have government handle healthcare payments than to leave it to companies with a direct financial incentive to deny treatment.

Private insurance is inconvenient, inefficient, and continues to leave large numbers of Americans with inadequate insurance or no insurance at all. The Affordable Care Act shored up this system by funnelling more public money into subsidies for private insurance. Now these Democratic candidates are proposing to make a new insurance company, call it “Medicare,” and charge you premiums to use it. That doesn’t get rid of the problem of wasteful duplicative bureaucracies, and will guarantee that some people remain uninsured. It was the federal government’s decision to build this bizarre, burdensome system. Nothing about private health care is natural or inevitable. It doesn’t have to be like this.

by Benjamin Studebaker & Nathan J. Robinson, Current Affairs | Read more:
Image: uncredited

Tony Thornburg by Sølve Sundsbø, LUNCHEON #5 S/S 2018
via:
[ed. My nephew, Tony. Who's been branching out lately and in a new tv series on HBO with Laia Costa. I don't know what it's called or when it's coming out. Click on the link for more pics.]

Sunday, July 14, 2019

In Defence of Antidepressants

I was first prescribed antidepressants in 2000. Ever since, I have been on and off these drugs, mostly because the idea of taking them made me uncomfortable. It was a mixture of guilt, probably not unlike the guilt some athletes must feel for taking a prohibited doping substance; shame for needing a pill that had such a profound impact on my behaviour; and frustration with the recurrent episodes of depression that would bring me back to the antidepressants I would then quickly abandon.

I broke this cycle when my daughters were born and I realised that it would be irresponsible to stop treatment because being a good father meant having a stable mood. It was a purely pragmatic decision, made without resolving the existential issues that antidepressants had raised for me before. That being the case, I do not write with the fervour of the newly converted, although sometimes I speculate about how much smoother my life would have been had I decided much sooner to stick to the antidepressants.

Depression is widespread. According to the World Health Organization, in 2015 depression affected more than 300 million people, or 5.1 per cent of females and 3.6 per cent of males, worldwide. It was the single largest contributor to global disability, and the major cause of the nearly 800,000 deaths by suicide recorded every year – suicide being the second leading cause of death among 15- to 29-year-olds.

Despite these statistics, depression remains misunderstood by the public at large and is, it seems, best described by those who have lived it. The novelist William Styron wrote in his memoir Darkness Visible (1990) that: ‘For those who have dwelt in depression’s dark wood, and known its inexplicable agony, their return from the abyss is not unlike the ascent of the poet, trudging upward and upward out of hell’s black depths.’ Andrew Solomon’s memoir The Noonday Demon (2001) is a useful tome and the book on depression for the public at large. ‘It is the aloneness within us made manifest,’ he writes of the state, ‘and it destroys not only connection to others but also the ability to be peacefully alone with oneself.’

For those outside the experience, part of the confusion comes from the association of the disease with melancholia and sadness, feelings we all have experienced. Malignant sadness, or depression, is something else entirely, and it takes a leap of faith to accept that too much of something can become something completely other. (...)

It is obvious that the discomfort I once felt over taking antidepressants echoed a lingering, deeply ideological societal mistrust. Articles in the consumer press continue to feed that mistrust. The benefit is ‘mostly modest’, a flawed analysis in The New York Times told us in 2018. A widely shared YouTube video asked whether the meds work at all. And even an essay on Aeon this year claims: ‘Depression is a very complex disorder and we simply have no good evidence that antidepressants help sufferers to improve.’

The message is amplified by an abundance of poor information circulating online about antidepressants in an age of echo chambers and rising irrationality. Although hard to measure, the end result is probably tragic since the ideology against antidepressants keeps those in pain from seeking and sticking to the best available treatment, as once happened to me. Although I am a research scientist, I work on topics unrelated to brain diseases, and my research is not funded by the ‘pharma industry’ – the disclaimer feels silly but, trust me, it is needed. I write here mainly as a citizen interested in this topic. I take for granted that a world without depression would be a better place, and that finding a cure for this disease is a noble pursuit. Without a cure, the best treatment available is better than none at all. (...)

One reason for the recent surge of skepticism is a gigantic meta-analysis by the psychiatrist Andrea Cipriani at the University of Oxford and colleagues, published in The Lancet in 2018. While the earlier study by Kirsch had included 5,133 participants, Fournier’s had 718, and another study, by Janus Christian Jakobsen in Denmark in 2017, had 27,422, Cipriani and colleagues analysed data from 116,477 people – or 3.5 times more participants than in the three previous studies combined.

A large sample size alone does not ensure quality, but the authors were careful to select only double-blind trials and did their best to include unpublished information from drug manufacturers to minimise publication bias. They found no evidence of bias due to funding by the pharma industry, and also included head-to-head comparisons between drugs (which minimised blind-breaking). They concluded that ‘all antidepressants included in the meta-analysis were more efficacious than placebo in adults with MDD [major depressive disorder], and the summary effect sizes were mostly modest’. The results are summarised by a statistic, the odds ratio (OR), which quantifies the association between health improvement and the action of the antidepressant. If the OR is 1, then antidepressants are irrelevant; for ORs above 1, a positive effect is detected. For 18 of the 21 antidepressants, the ORs they found ranged from 1.51 to 2.13. These results have been widely mischaracterised and described as weak in the press.

ORs are not intuitive to interpret, but they can be converted into percentage increases in the odds of improvement on the antidepressant, which in this study ranged from 51 per cent to 113 per cent. These increases are meaningful, particularly taking into account the incidence of the disease (20 per cent of people are likely to be affected by depression at some stage of their lives).
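The conversion described above is simple arithmetic on the odds ratio; a minimal sketch, using the 1.51 and 2.13 endpoints reported in the meta-analysis:

```python
# Convert an odds ratio (OR) into the percentage increase in the
# odds of improvement relative to placebo. An OR of 1.0 means no
# detectable effect; anything above 1.0 favours the drug.
def or_to_percent_increase(odds_ratio):
    return round((odds_ratio - 1.0) * 100)

# Endpoints of the range reported for 18 of the 21 antidepressants:
print(or_to_percent_increase(1.51))  # 51
print(or_to_percent_increase(2.13))  # 113
```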

For comparison, please note the uncontroversial finding that taking aspirin reduces the risk of stroke – the associated OR is ‘only’ 1.4, but no one describes that effect as weak or has raised doubts about the intervention. It would be unscientific to describe the work of Cipriani and colleagues as the definitive word on the topic, but it is the best study we have so far. The message is clear: antidepressants are better than placebo; they do work, although the effects are mostly modest, and some work better than others. This paper was an important confirmation in times of a reproducibility crisis in so many scientific fields. We don’t have to look far: a major study published this spring failed to confirm the association with MDD of any of the 18 candidate genes that had been proposed to be linked to the disease and were reanalysed. (...)

The human body contains at least 12,000 metabolites. On the day of his final exam, a biochemistry major might know a few hundred, but most of us will be able to name only a few dozen, with a clear bias for the metabolites known to influence behaviour. We will immediately associate adrenalin, cortisol, testosterone, oestrogen, oxytocin and dopamine with stereotypical behaviours and personality types, but what about serotonin? The molecule is certainly no obscure metabolite. The French novelist Michel Houellebecq named his latest novel Sérotonine (2019). But would you associate the ‘happy hormone’, as serotonin is often described, with the formation and maintenance of social hierarchies and the impetus to fight observed across the animal kingdom, from lobsters to primates? Indeed, since SSRIs have been found to influence our moral decision making, naming serotonin the ‘happy hormone’ appears to be a mistake. Apart from its role in mood balance, this neurotransmitter is involved in appetite, emotions, sleep-wake cycles, and motor, cognitive and autonomic functions. In fact, most of the body’s serotonin production is not found in the brain, but in the gut.

We simply do not have a consensual overarching explanation for how SSRIs/SNRIs work in depression, and how to link these neurotransmitters to the environmental stressors, genetic factors, and immunologic and endocrine responses proposed to contribute to depression. It is also clear that restoring the chemical balance of monoamines in the brain with a pill, which takes only minutes or hours, is insufficient to immediately produce therapeutic effects, which take several weeks. Indeed, without a complete picture of the mechanism of depression, it is not surprising that the available drug treatments are not fully effective. In a study involving thousands of MDD patients who were sequentially encouraged to move to a different treatment if they did not achieve remission on the previous one, only about 67 per cent of the MDD patients taking antidepressants went into clinical remission, even after four consecutive treatments. Thus, there is a large group of patients who don’t respond to SSRIs/SNRIs, which raises doubts about whether the monoamine hypothesis explains depression in full.

Other ideas have emerged. One line of thought focuses on the neurotransmitters glutamate (involved in cognition and emotion) and GABA (involved in inhibition), among others. One of the most exciting findings in the field is the clinical efficacy of ketamine, which targets glutamate neurotransmission, producing immediate effects in patients refractory to SSRI/SNRI treatments. Along with the monoamine hypothesis, most of these newer approaches are somehow related to the notion of neuronal plasticity, the ability of the nervous system to change, both functionally and structurally, in response to experience and injury, which can take some time to occur. Thus, it could be that the decreased levels of monoamines are not the real cause of depression, perhaps not even an absolutely necessary condition for depression. The data certainly suggest that there might be better targets to be found, and that the pharmacological approach has to become progressively more tailored.

That said, the temptation to dismiss the monoamine hypothesis to score points against antidepressants shows a lack of understanding of how medicine has worked for most of its history; imperfect but useful therapies have been the rule, even as we refine our understanding of disease.

by Vasco M Barreto, Aeon |  Read more:
Image: Gabriele Diwald/Unsplash
[ed. See also: Cipriani on Antidepressants; and What to Make of New Positive NSI-189 Results? (Duck Soup/SSC).]

Saturday, July 13, 2019

What the Measles Epidemic Really Says About America

In two essays, “Illness as Metaphor” in 1978 and “AIDS and Its Metaphors” in 1988, the critic Susan Sontag observed that you can learn a lot about a society from the metaphors it uses to describe disease. She also suggested that disease itself can serve as a metaphor—a reflection of the society through which it travels. In other words, the way certain illnesses spread reveals something not just about a nation’s physiological health but also about its cultural and political health. For instance, AIDS would not have ravaged America as fully as it did without institutionalized homophobia, which inclined many Americans to see the disease as retribution for gay sex.

Now another virus is offering insights into the country’s psychic and civic condition. Two decades ago, measles was declared eliminated in the U.S. Yet in the first five months of this year, the Centers for Disease Control and Prevention recorded 1,000 cases—more than occurred from 2000 to 2010.

The straightforward explanation for measles’ return is that fewer Americans are receiving vaccines. Since the turn of the century, the share of American children under the age of 2 who go unvaccinated has quadrupled. But why are a growing number of American parents refusing vaccines—in the process welcoming back a disease that decades ago killed hundreds of people a year and hospitalized close to 50,000?

One answer is that contemporary America suffers from a dangerous lack of historical memory. Most of the parents who are today skipping or delaying their children’s combined measles, mumps, and rubella (MMR) vaccine don’t remember life with measles, much less that it used to kill more children than drowning does today. Nor do they recall how other diseases stamped out by vaccines—most prominently smallpox and polio—took lives and disfigured bodies.

Our amnesia about vaccines is part of a broader forgetting. Prior generations of Americans understood the danger of zero-sum economic nationalism, for instance, because its results remained visible in their lifetimes. When Al Gore debated Ross Perot about NAFTA in 1993, he reminded the Texan businessman of the 1930 Smoot-Hawley Tariff Act, which raised tariffs on 20,000 foreign products—prompting other countries to retaliate, deepening the Great Depression, and helping to elect Adolf Hitler. But fewer and fewer people remember the last global trade war. Similarly, as memories of Nazism fade across Europe and the United States, anti-Semitism is rising. Technology may improve; science may advance. But the fading of lessons that once seemed obvious should give pause to those who believe history naturally bends toward progress.

Declining vaccination rates not only reflect a great forgetting; they also reveal a population that suffers from overconfidence in its own amateur knowledge. In her book Calling the Shots: Why Parents Reject Vaccines, the University of Colorado at Denver’s Jennifer Reich notes that starting in the 1970s, alternative-health movements “repositioned expertise as residing within the individual.” This ethos has grown dramatically in the internet age, so much so that “in arenas as diverse as medicine, mental health, law, education, business, and food, self-help or do-it-yourself movements encourage individuals to reject expert advice or follow it selectively.” Autodidacticism can be valuable. But it’s one thing to Google a food to see whether it’s healthy. It’s quite another to dismiss decades of studies on the benefits of vaccines because you’ve watched a couple of YouTube videos. In an interview, Reich told me that some anti-vaccine activists describe themselves as “researchers,” thus equating their scouring of the internet on behalf of their families with the work of scientists who publish in peer-reviewed journals.

In many ways, the post-1960s emphasis on autonomy and personal choice has been liberating. But it can threaten public health. Considered solely in terms of the benefits to one’s own child, the case for vaccinating against measles may not be obvious. Yes, the vaccine poses little risk to healthy children, but measles isn’t necessarily that dangerous to them either. The problem is that for others in society—such as children with a compromised immune system—measles may be deadly. By vaccinating their own children, and thus ensuring that they don’t spread the disease, parents contribute to the “herd immunity” that protects the vulnerable. But this requires thinking more about the collective and less about one’s own child. And this mentality is growing rarer in an era of what Reich calls “individualist parenting,” in which well-off parents spend “immense time and energy strategizing how to keep their children healthy while often ignoring the larger, harder-to-solve questions around them.”

Historical amnesia and individualism have contributed to a third cultural condition, one that is more obvious but also, perhaps, more central to measles’ return and at least as worrying for society overall: diminished trust in government. For earlier generations of Americans, faith in mass vaccines derived in large part from the campaign to eradicate polio, in the 1950s—a time when the country’s victory in World War II and the subsequent postwar boom had boosted the public’s belief in its leaders. This faith made it easy to convince Americans to accept the polio vaccine, and the vaccine’s success in turn boosted confidence in the officials who protected public health. So popular was the vaccine’s inventor, Jonas Salk, that in 1955 officials in New York offered to throw him a ticker-tape parade. (...)

Yet it’s not only conservatives who translate their suspicion of government into suspicion of vaccines. Many liberals distrust the large drug companies that both produce vaccines and help fund the Food and Drug Administration, which is supposed to regulate them. The former Green Party presidential candidate Jill Stein has suggested that “widespread distrust” of what she describes as the medical-industrial complex is understandable because “regulatory agencies are routinely packed with corporate lobbyists and CEOs.” The environmental activist Robert F. Kennedy Jr. claims that thimerosal, a preservative formerly used in some vaccines, harms children. Bright-blue counties in Northern California, Washington State, and Oregon have some of the lowest vaccination rates in the country.

Although polls suggest that conservatives are slightly less accepting of vaccines than liberals are, a 2014 study found that distrust of government was correlated with distrust of vaccines among both Republicans and Democrats. Indeed, the best predictor of someone’s view of vaccines is not their political ideology, but their trust in government and their openness to conspiracy theories.

It’s not surprising, therefore, that a plunge in the percentage of Americans who trust Washington to do the right thing most or all of the time—which hovered around 40 percent at the turn of the century and since the 2008 financial crisis has regularly dipped below 20 percent—has coincided with a decline in vaccination rates. In 2001, 0.3 percent of American toddlers had received no vaccinations. By 2017, that figure had jumped more than fourfold. Studies also show a marked uptick in families requesting philosophical exemptions from vaccines, which are permitted in 16 states.

by Peter Beinart, The Atlantic |  Read more:
Image: Edmon De Haro

Where Are All the Bob Ross Paintings?
Bob Ross painted more than 1,000 landscapes for his television show — so why are they so hard to find? Solving one of the internet’s favorite little mysteries. (NY Times)