Tuesday, January 23, 2018

Amazon Go and the Future

Yesterday the Amazon Go concept store in Seattle opened to the public, filled with sandwiches, salads, snacks, various groceries, and even beer and wine (Recode has a great set of pictures here). The trick is that you don’t pay, at least in person: a collection of cameras and sensors pair your selection to your Amazon account — registered at the door via smartphone app — which rather redefines the concept of “grab-and-go.”

The economics of Amazon Go define the tech industry; the strategy, though, is uniquely Amazon’s. Most of all, the implications of Amazon Go explain both the challenges and opportunities that the rise of tech presents to society broadly.

The Economics of Tech

This point is foundational to nearly all of the analysis on Stratechery, which is why it’s worth repeating. To understand the economics of tech companies one must understand the difference between fixed and marginal costs, and for this Amazon Go provides a perfect example.

A cashier — and forgive the bloodless language for what is flesh and blood — is a marginal cost. That is, for a convenience store to sell one more item requires some amount of time on the part of a cashier, and that time costs the convenience store operator money. To sell 100 more items requires even more time — costs increase in line with revenue.

Fixed costs, on the other hand, have no relation to revenue. In the case of convenience stores, rent is a fixed cost; 7-11 has to pay its lease whether it serves 100 customers or serves 1,000 in any given month. Certainly the more it serves the better: that means the store is achieving more “leverage” on its fixed costs.

In the case of Amazon Go specifically, all of those cameras and sensors and smartphone-reading gates are fixed costs as well — two types, in fact. The first is the actual cost of buying and installing the equipment; those costs, like rent, are incurred regardless of how much revenue the store ultimately produces.

Far more extensive, though, are the costs of developing the underlying systems that make Amazon Go even possible. These are R&D costs, and they are different enough from fixed costs like rent and equipment that they typically live in another place in the financial statements entirely.
These different types of costs affect management decision-making at different levels (that is, there is a spectrum from purely marginal costs to purely fixed costs; it all depends on your time frame):
  • If the marginal cost of selling an individual item is more than the marginal revenue gained from selling the item (i.e. it costs more to pay a cashier to sell an item than the gross profit earned from an item) then the item won’t be sold.
  • If the monthly rent for a convenience store exceeds the monthly gross profit from the store, then the store will be closed.
  • If the cost of renovations and equipment (in the case of small businesses, this cost is usually the monthly repayments on a loan) exceeds the net profit ex-financing, then the owner will go bankrupt.

Keep in mind, most businesses start out in the red: it usually takes financing, often in the form of a loan, to buy everything necessary to even open the business in the first place; a company is not truly profitable until that financing is retired. Of course once everything is paid off a business is not entirely in the clear: physical objects like shelves or refrigeration units or lights break and wear out, and need to be replaced; until that happens, though, money can be made by utilizing what has already been paid for.
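
The three decision rules above can be sketched as a tiny calculation. This is purely illustrative: the function names and dollar figures are hypothetical, not anything from the article.

```python
# Illustrative sketch of the three cost-tier decision rules; all numbers hypothetical.

def sell_item(marginal_cost: float, gross_profit: float) -> bool:
    """An item is worth selling only if its gross profit covers the marginal
    cost (e.g. the cashier's time) of selling it."""
    return gross_profit > marginal_cost

def keep_store_open(monthly_rent: float, monthly_gross_profit: float) -> bool:
    """A store stays open only if monthly gross profit covers the rent, a fixed cost."""
    return monthly_gross_profit > monthly_rent

def stay_solvent(monthly_loan_repayment: float, net_profit_ex_financing: float) -> bool:
    """The owner avoids bankruptcy only if profit before financing covers the
    repayments on the renovation-and-equipment loan."""
    return net_profit_ex_financing > monthly_loan_repayment

# $0.50 of cashier time to sell an item carrying $2.00 of gross profit:
print(sell_item(0.50, 2.00))        # True: the item gets sold
# $8,000 rent against $6,000 of monthly gross profit:
print(keep_store_open(8000, 6000))  # False: the store closes
# $1,500 in monthly repayments against $2,500 of net profit ex-financing:
print(stay_solvent(1500, 2500))     # True: the owner stays in business
```

The point of the spectrum is visible in the arguments: the first check recurs with every item sold, the second once a month, the third only over the life of the loan.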

This, though, is why the activity that is accounted for in R&D is so important to tech company profitability: while digital infrastructure obviously needs to be maintained, by-and-large the investment reaps dividends far longer than the purchase of any physical good. Amazon Go is a perfect example: the massive expense that went into developing the underlying system powering cashier-less purchasing does not need to be spent again; moreover, unlike shelving or refrigerators, the output of that expense can be duplicated infinitely without incurring any additional cost.

This principle undergirds the fantastic profitability of successful tech companies:
  • It was expensive to develop mainframes, but IBM could reuse the expertise to build them and, most importantly, the software needed to run them; every new mainframe was more profitable than the last.
  • It was expensive to develop Windows, but Microsoft could reuse the software on all computers; every new computer sold was pure profit.
  • It was expensive to build Google, but search can be extended to anyone with an Internet connection; every new user was an opportunity to show more ads.
  • It was expensive to develop iOS, but the software can be used on billions of iPhones, every one of which generates tremendous profit.
  • It was expensive to build Facebook, but the network can scale to two billion people and counting, all of whom can be shown ads.

In every case a huge amount of fixed costs up front is overwhelmed by the ongoing ability to make money at scale; to put it another way, tech companies combine fixed costs with marginal revenue opportunities, such that they make more money on additional customers without any corresponding rise in costs.

This is clearly the goal with Amazon Go: to build out such a complex system for a single store would be foolhardy; Amazon expects the technology to be used broadly, unlocking additional revenue opportunities without any corresponding rise in fixed costs — of developing the software, that is; each new store will still require traditional fixed costs like shelving and refrigeration. That, though, is why this idea is so uniquely Amazonian.

The Strategy of Technology

The most important difference between Amazon and most other tech companies is that the latter generally invest exclusively in research and development — that is to say, in software. And why not? As I just explained, software development has the magical properties of value retention and infinite reproduction. Better to let others handle the less profitable and more risky (at least in the short term) marginal complements. To take the three most prominent examples:
  • Microsoft builds the operating system (and eventually, application software) and leaves the building of computers to OEMs
  • Google builds the search engine and leaves the creation of web pages to be searched to the rest of the world
  • Facebook builds the infrastructure of the network, and leaves the creation of content to be shared to its users
All three companies are, at least in terms of their core businesses, pure software companies, which means the economics of their businesses align with the economics of software: massive fixed costs, and effectively zero marginal costs. And while Microsoft’s market, large though it may have been, was limited by the price of a computer, Google and Facebook, by virtue of their advertising model, are super-aggregators capable of scaling to anyone with an Internet connection. All three also benefit (or benefited) from strong network effects, both on the supply and demand side; these network effects, supercharged by the ability to scale for free, are these companies’ moats.

Apple and IBM, on the other hand, are/were vertical integrators, particularly IBM. In the mainframe era the company built everything from components to operating systems to application software and sold it as a package with a long-term service agreement. By doing so all would-be competitors were foreclosed from IBM’s market; eventually, in a(n unsuccessful) bid to escape antitrust pressure, application software was opened up, but that ended up entrenching IBM further by adding on a network effect. Apple isn’t nearly as integrated as IBM was back in the 60s, but it builds both the software and the finished products on which it runs, foreclosing competitors (while gaining economies of scale from sourcing components and two-sided network effects through the App Store); Apple is also happy to partner with telecoms, which have their own network effects.

Amazon is doing both.

In market after market the company is leveraging software to build horizontal businesses that benefit from network effects: in e-commerce, more buyers lead to more suppliers lead to more buyers. In cloud services, more tenants lead to greater economies of scale, not just in terms of servers and data centers but in the leverage gained by adding ever more esoteric features that both meet market needs and create lock-in. As I wrote last year, the point of buying Whole Foods was to jump-start a similar dynamic in groceries.

At the same time Amazon continues to vertically integrate. The company is making more and more products under its own private labels on one hand, and building out its fulfillment network on the other. The company is rapidly moving up the stack in cloud services, offering not just virtual servers but microservices that obviate the need for server management entirely. And in logistics the company has its own airplanes, trucks, and courier services, and has promised drones, with the clear goal of allowing the company to deliver products entirely on its own.

To be both horizontal and vertical is incredibly difficult: horizontal companies often betray their economic model by trying to differentiate their vertical offerings; vertical companies lose their differentiation by trying to reach everyone. That, though, gives a hint as to how Amazon is building out its juggernaut: economic models — that is, the constraint on horizontal companies going vertical — can be overcome if the priority is not short-term profit maximization.

Amazon's Triple Play

In 2012 Amazon acquired Kiva Systems for $775 million, then the largest acquisition in company history. Kiva Systems built robots for fulfillment centers, and many analysts were puzzled by the purchase: Kiva Systems already had a plethora of customers, and Amazon was free to buy their robots for a whole lot less than $775 million. Both points argued against a purchase: continuing to sell to other companies removed the only plausible strategic rationale for buying the company instead of simply buying robots, but to stop selling to Kiva Systems’ existing customers would be value-destructive. It’s one thing to pay 8x revenue, as Amazon did; it’s another to cut off that revenue in the process.

In fact, though, that is exactly what Amazon did. The company had no interest in sharing Kiva Systems’ robots with its competitors, leaving a gap in the market. At the same time the company ramped up its fulfillment center build-out, gobbling up all of Kiva Systems’ capacity. In other words, Amazon made the “wrong” move in the short-term for a long-term benefit: more and better fulfillment centers than any of its competitors — and spent billions of dollars doing so.

This willingness to spend is what truly differentiates Amazon, and the payoffs are tremendous. I mentioned telecom companies in passing above: their economic power flows directly from massive amounts of capital spending; said power is limited by a lack of differentiation. Amazon, though, having started with a software-based horizontal model and network-based differentiation, has not only started to build out its vertical stack but has spent massive amounts of money to do so. That spending is painful in the short-term — which is why most software companies avoid it — but it provides a massive moat.

That is why, contra most of the analysis I have seen, I don’t think Amazon will license out the Amazon Go technology. Make no mistake, that is exactly what a company like Google would do (and as I expect them to do with Waymo), and for good reason: the best way to get the greatest possible return on software R&D is to spread it as far and wide as possible, which means licensing. The best way to build a moat, though, is to actually put in the effort to dig it, i.e. spend the money.

To that end, I suspect that in five to ten years the countries Amazon serves will be blanketed with Amazon Go stores, selling mostly Amazon products, augmented by Amazon fulfillment centers. That is the other point many are missing. Yes, the Amazon Go store took pains to show that it still has plenty of workers: shelf stockers, ID checkers, and food preparers.

Unlike cashiers, though, none of these workers actually needs to be present in the store most of the time. It seems obvious that Amazon Go stores of the future will rarely have employees in store at all: there will be a centralized location for food preparation and a dedicated fleet of shelf stockers. That’s the thing about Amazon: the company isn’t afraid of old-world scale. No, sandwich preparation doesn’t scale infinitely, but it does scale, particularly if you are willing to spend.

by Ben Thompson, Stratechery |  Read more:
Image: Jason Del Rey for Recode


Well Endowed

How Rich Universities Waste Their Endowments

In November 2015, the man who led the operations to capture Saddam Hussein and kill Osama bin Laden stepped to the podium in a wood-paneled boardroom in Austin, Texas, to embark on a new and very different mission: launching a public university system into the highest level of prominence and respect.

Former four-star admiral and Navy SEAL Bill McRaven had been hired almost a year earlier, with great fanfare, to serve as chancellor of the University of Texas system, which oversees the University of Texas at Austin and thirteen other college campuses and medical schools. Now, addressing the UT board of regents, he was proposing nine “quantum leaps”—major initiatives that, he declared, “will make us the envy of every system in the nation.”

Some of the “leaps” are things other public universities could only dream of doing, in an era of budget cutting. Ten million dollars for a “UT Network for National Security.” Thirty-six million dollars (so far) to “develop a collaborative health care enterprise.”

McRaven was able to secure that funding because of a mountain of money that few outside the UT system know much about, called the Permanent University Fund. The fund, derived from oil drilling on state-owned land in West Texas, is worth about $20 billion. Two-thirds belongs to the UT system, making up the majority of its $24 billion endowment and putting it in an exclusive club with wealthy private schools. The UT system has more endowment money per student than Georgetown.

And yet, just three months after his “quantum leaps” speech, McRaven once again found himself before the board of regents—this time asking for a tuition increase. “The fact is, we fall well below our peers in terms of national rankings,” he said. To climb in the rankings, he argued, would require spending more money—money that would have to come from students.

How could McRaven propose a tuition hike when the system has a multibillion-dollar oil fund in its pocket? This question has only started brewing at the UT system, but members of the public, and lawmakers, have long been asking wealthy private schools pointed questions along the same lines. Massive endowments at places like Yale and Stanford add up to an enormous public subsidy. Donations are tax-deductible, and universities don’t pay taxes on the investment income endowments generate. Meanwhile, they typically spend only a small percentage of the endowment per year. That has spurred suggestions that universities be forced to spend their endowments on affordability as a condition of those tax benefits.

Even Donald Trump, during the 2016 campaign, told a Pennsylvania crowd that he would “work with Congress on reforms to make sure that if universities want access to all of these special federal tax breaks, and tax dollars, paid for by you, that they are going to make good-faith efforts to reduce the cost of college and student debt, and to spend their endowments on their students rather than other things that don’t matter.”

The tax bills that the House and Senate passed in December finally took action, imposing a 1.4 percent tax on the largest endowments. (As this article went to press, the final bill was still being negotiated.) That move appears to be driven more by a growing Republican antipathy toward academia—the House version of the bill would have taxed the tuition waivers granted to graduate students—than by concerns about affordability. But universities haven’t done themselves any favors by being extremely cagey about how they spend their endowments. When Congress asked dozens of schools to report on their spending in 2016, for instance, Harvard declined to say exactly how much of its $37 billion endowment is paid to the people who manage it. While most colleges did tell Congress what percentage of their annual endowment payout goes to financial aid, they generally didn’t elaborate further—such as on the proportion of aid that’s based on academic merit, which tends to benefit upper-middle-class students, versus financial need.

But there is one institution that serves as an exception to the black box of endowment spending. Unlike most other American public universities, which have minuscule endowments, the University of Texas system is tremendously wealthy, with an endowment more than double that of the next-richest public institution. And unlike private universities, it has to reveal how it spends that wealth.

Thanks to the fracking boom, record-high oil prices, and some smart investing, the oil fund’s value has skyrocketed in recent years. That provides an unprecedented opportunity to examine not just how a wealthy endowment-like fund is spent in general, but also what university administrators decide to do with a sudden windfall of cash.

And if the UT system’s choices are at all representative of well-endowed institutions across the country, we can pretty safely conclude that universities spend their endowments primarily to elevate their status—not to help students afford college. 

by Neena Satija, Washington Monthly |  Read more:
Image: Nicolas Raymond/Todd Wiseman for the Texas Tribune

When a Partner Cheats

Marriages fall apart for many different reasons, but one of the most common and most challenging to overcome is the discovery that one partner has “cheated” on the other.

I put the word cheated in quotes because the definition of infidelity can vary widely among and within couples. Though most often it involves explicit sexual acts with someone other than one’s spouse or committed partner, there are also couples torn asunder by a partner’s surreptitious use of pornography, a purely emotional relationship with no sexual contact, virtual affairs, even just ogling or flirting with a nonpartner.

Infidelity is hardly a new phenomenon. It has existed for as long as people have united as couples, married or otherwise. Marriage counselors report that affairs sometimes occur in happy relationships as well as troubled ones.

According to the American Association for Marriage and Family Therapy, national surveys indicate that 15 percent of married women and 25 percent of married men have had extramarital affairs. The incidence is about 20 percent higher when emotional and sexual relationships without intercourse are included. As more women have begun working outside the home, their chances of having an affair have increased accordingly.

Volumes have been written about infidelity, most recently two excellent and illuminating books: “The State of Affairs: Rethinking Infidelity” by Esther Perel, a New York psychotherapist, and “Healing from Infidelity” by Michele Weiner-Davis, a psychotherapist in Boulder, Colo. Both books are based on the authors’ extensive experience counseling couples whose relationships have been shattered by affairs.

The good news is, depending upon what caused one partner to wander and how determined a couple is to remain together, infidelity need not result in divorce. In fact, Ms. Perel and other marriage counselors have found, couples that choose to recover from and rebuild after infidelity often end up with a stronger, more loving and mutually understanding relationship than they had previously.

“People who’ve been betrayed need to know that there’s no shame in staying in the marriage — they’re not doormats, they’re warriors,” Ms. Weiner-Davis said in an interview. “The gift they provide to their families by working through the pain is enormous.”

Ms. Perel concedes that “some affairs will deliver a fatal blow to a relationship.” But she wrote, “Others may inspire change that was sorely needed. Betrayal cuts to the bone, but the wound can be healed. Plenty of people care deeply for the well-being of their partners even while lying to them, just as plenty of those who have been betrayed continue to love the ones who lied to them and want to find a way to stay together.”

The latter was exactly the position a friend of mine found herself in after discovering her husband’s affair. “At first I wanted to kick him out,” she told me. “But I realized that I didn’t want to get divorced. My mother did that and she ended up raising three children alone. I didn’t want a repeat of my childhood. I wanted my son, who was then 2 years old, to have a father in his life. But I also knew that if we were going to stay together, we had to go to couples counseling.”

About a dozen sessions later, my friend came away with critical insights: “I know I’m not perfect. I was very focused on taking care of my son, and my husband wasn’t getting from me whatever he needed. Everybody should be allowed to make mistakes and learn from them. We learned how to talk to each other and really listen. I love him and respect him, I’m so happy we didn’t split apart. He’s a wonderful father, a stimulating partner, and while our marriage isn’t perfect — whose is? — we are supportive and nurturing of each other. Working through the affair made us stronger.”

As happened with my friend, most affairs result from dissatisfaction with the marital relationship, fueled by temptation and opportunity. One partner may spend endless hours and days on work, household chores, outside activities or even social media, to the neglect of their spouse’s emotional and sexual needs. Often betrayed partners were unaware of what was lacking in the relationship and did not suspect that trouble was brewing.

Or the problem may result from a partner’s personal issues, like an inability to deal with conflict, a fear of intimacy, deep-seated insecurity or changes in life circumstances that rob the marital relationship of the attention and affection that once sustained it.

But short of irreversible incompatibility or physical or emotional abuse, with professional counseling and a mutual willingness to preserve the marriage, therapists maintain that couples stand a good chance of overcoming the trauma of infidelity and avoiding what is often the more painful trauma of divorce.

by Jane E. Brody, NY Times |  Read more:
Image: Paul Rogers
[ed. I'm pretty sure staying in a marriage because it's the lesser of two traumas (infidelity vs divorce) is not very good reasoning. Assuming there's a mutual willingness to continue (which often there isn't after the initial guilt of disclosure subsides), trust has to be re-established. And respect. And how does one go about doing that other than by accepting in faith that a lying partner will in fact never lie again? It's a process that can only be validated over time and with subsequent experience. Perhaps that's why some marriages that recover are stronger - couples become more attuned to deception and accountability and being present - and less inclined to emotional fantasies of what "a good marriage" should be, but actually is... hard work.]

Hugh Masekela


Hugh Masekela, April 1939 – January 2018

Monday, January 22, 2018

“Get Out of Jail Free” Cards

In the movies I’ve seen people try to get out of a traffic ticket by telling the police officer they made a donation to the policeman’s ball, but those were comedies. I had no idea that not only does this exist, but that there are official cards. In fact, the police in New York are livid that the number of cards is being limited:
The city’s police-officers union is cracking down on the number of “get out of jail free” courtesy cards distributed to cops to give to family and friends. 
Patrolmen’s Benevolent Association boss Pat Lynch slashed the maximum number of cards that could be issued to current cops from 30 to 20, and to retirees from 20 to 10, sources told The Post. 
The cards are often used to wiggle out of minor trouble such as speeding tickets, the theory being that presenting one suggests you know someone in the NYPD.
The rank and file is livid. 
“They are treating active members like s–t, and retired members even worse than s–t,” griped an NYPD cop who retired on disability. “All the cops I spoke to were . . . very disappointed they couldn’t hand them out as Christmas gifts.”
A Christmas gift of institutionalized corruption.

Here’s another article on these cards which just gets all the more stunning.
First, there are tiers of cards. Silver cards are the highest honor given to citizens. It’s almost universally honored by officers, and can also help save money on insurance. Gold PBA cards are only given to police officers and their families. You’d be hard-pressed finding a cop who won’t honor a gold card.
Gold and silver cards! It gets better. You can buy these cards on eBay. Here’s a gold New Jersey card on sale for $114. A silver “family member” shield goes for $299. Some of these are probably fake. The gold and silver cards are rare, but remember, cops get 20 to 30 regular cards, so you can see why they might be upset at losing them.

The regular cards have become more common as NYC hires more police. The union may in fact be trying to bump up its monopoly profit by restricting supply.

The cards don’t just go to family members. The rot is deep:
Union officials say the cards are also public relations tools and tokens of appreciation handed out to politicians, judges, lawyers, businessmen, civil service workers and members of the news media.
A retired police officer on Quora explains how the privilege is enforced:
The officer who is presented with one of these cards will normally tell the violator to be more careful, give the card back, and send them on their way.… 
The other option is potentially more perilous. The enforcement officer can issue the ticket or make the arrest in spite of the courtesy card. This is called “writing over the card.” There is a chance that the officer who issued the card will understand why the enforcement officer did what he did, and nothing will come of it. However, it is equally possible that the enforcement officer’s zeal will not be appreciated, and the enforcement officer will come to work one day to find his locker has been moved to the parking lot and filled with dog excrement.
by Alex Tabarrok, Marginal Revolution |  Read more:
Image: uncredited 
[ed. I'm not sure which I'd want more, one of these or a handicapped parking permit.]

Three Theories of Infinite Earths


One of the major theories of cosmology — the study of space — is that the universe we live in might not have an endpoint, but instead goes on forever. Scientists theorize it’s possible that if you flew a spaceship trying to reach the end of the universe, it would continue to fly past suns and moons and planets and black holes forever. Not everyone agrees with this idea.

The multiverse theory, advanced by other scientists and physicists, says the spaceship would eventually reach the end of our universe, and then transition into another universe — and then another and another and another, for an infinite amount of time.

If either of these theories is true, the infinite nature of space, combined with the limited way that particles can organize themselves to form matter (which planets and life forms are all made out of), leads to a shocking but inevitable truth: Earth as we know it probably repeats itself, over and over.

“In any one region of space there’s a finite number of atoms and particles, and there’s a finite size,” says Carroll. “If you think about an infinite number of regions spread throughout the universe, everything that can possibly happen in any region will happen an infinite number of times.” (...)
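
Carroll’s argument is statistical: given a finite menu of possible configurations and vastly more regions than configurations, repetition is forced, not lucky. A toy simulation under made-up numbers (100 possible configurations, 100,000 regions — both purely illustrative):

```python
import random
from collections import Counter

random.seed(0)

# Toy model: each "region" of space takes one of a small, finite set of
# configurations. Sample many independent regions and count repeats.
NUM_CONFIGURATIONS = 100   # finite possibilities per region (illustrative)
NUM_REGIONS = 100_000      # a stand-in for "infinitely many" regions

regions = [random.randrange(NUM_CONFIGURATIONS) for _ in range(NUM_REGIONS)]
counts = Counter(regions)

# With far more regions than configurations, every configuration appears,
# and each one appears many times over.
print(len(counts) == NUM_CONFIGURATIONS)  # True: every configuration occurs
print(min(counts.values()) > 1)           # True: every configuration repeats
```

As the number of regions grows without bound while the configuration count stays fixed, the same logic pushes every count toward infinity — which is the substance of the quoted claim.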

That means it’s entirely plausible that somewhere out in the infinite universe there’s another Earth where you are sitting in front of another Internet reading this article. In fact, it is plausible that there are billions of you’s on billions of Earths that are exactly the same or slightly different or vastly different. The math of infinity makes this repetition not only plausible, but certain.

The theory of Many Worlds is the most similar to the circumstances described in Counterpart, and it’s also the most likely to be true, based on experimental evidence. It goes like this: When particles are teeny tiny (smaller than atoms) they act really, really weird. An electron, for example, is always spinning. When you observe it, you can see that it’s spinning either clockwise or counterclockwise. But before you looked at that electron, when it wasn’t being observed, it was very likely spinning in both directions. You may know this better as Schrödinger’s cat, the theoretical parable that says if you have a cat in a box, the cat is both alive and dead until you open the box, at which point the cat must be one or the other but not both.

“Some people think that the act of looking at the electron made it spin clockwise. I think they’re just wrong,” says Carroll. Rather, the Many Worlds theory says that the act of observing the electron created a whole different outcome entirely: “By coming into contact with that electron you have split. Before there was only one of you, afterward there are two of you,” he says. One of you is watching the electron move clockwise and the other version of you is watching it move counterclockwise. In either case, the electron is still moving in both directions.

Image: uncredited
[ed.  Sci-fi unless proven.]

The Second Coming of Ultrasound

Before Pierre Curie met the chemist Marie Sklodowska; before they married and she took his name; before he abandoned his physics work and moved into her laboratory on Rue Lhomond where they would discover the radioactive elements polonium and radium, Curie discovered something called piezoelectricity. Some materials, he found—like quartz and certain kinds of salts and ceramics—build up an electric charge when you squeeze them. Sure, it’s no nuclear power. But thanks to piezoelectricity, US troops could locate enemy submarines during World War I. Thousands of expectant parents could see their baby’s face for the first time. And one day soon, it may be how doctors cure disease.

Ultrasound, as you may have figured out by now, runs on piezoelectricity. Applying voltage to a piezoelectric crystal makes it vibrate, sending out a sound wave. When the echo that bounces back is converted into electrical signals, you get an image of, say, a fetus, or a submarine. But in the last few years, the lo-fi tech has reinvented itself in some weird new ways.
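
The echo imaging described above is, at bottom, ranging arithmetic: the pulse travels to the reflector and back, so the depth is half the round trip. A minimal sketch, assuming the textbook average speed of sound in soft tissue (about 1,540 m/s) — the figures are illustrative, not from the article:

```python
# Speed of sound in soft tissue, the standard calibration value for
# medical ultrasound scanners (assumed here; actual tissue varies).
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s

def echo_depth_mm(round_trip_seconds: float) -> float:
    """Depth of the reflecting interface: the pulse covers the distance twice,
    so depth = speed * time / 2 (converted to millimetres)."""
    return SPEED_OF_SOUND_TISSUE * round_trip_seconds / 2 * 1000

# An echo returning after 65 microseconds comes from about 50 mm deep:
print(round(echo_depth_mm(65e-6)))  # 50
```

A scanner builds an image by firing pulses along many directions and converting each echo’s delay into a depth this way, with the echo’s strength setting the pixel brightness.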

Researchers are fitting people’s heads with ultrasound-emitting helmets to treat tremors and Alzheimer’s. They’re using it to remotely activate cancer-fighting immune cells. Startups are designing swallowable capsules and ultrasonically vibrating enemas to shoot drugs into the bloodstream. One company is even using the shockwaves to heal wounds—stuff Curie never could have even imagined.

So how did this 100-year-old technology learn some new tricks? With the help of modern-day medical imaging, and lots and lots of bubbles.

Bubbles are what brought Tao Sun from Nanjing, China to California as an exchange student in 2011, and eventually to the Focused Ultrasound Lab at Brigham and Women’s Hospital and Harvard Medical School. The 27-year-old electrical engineering grad student studies a particular kind of bubble—the gas-filled microbubbles that technicians use to bump up contrast in grainy ultrasound images. Passing ultrasonic waves compress the bubbles’ gas cores, resulting in a stronger echo that pops out against tissue. “We’re starting to realize they can be much more versatile,” says Sun. “We can chemically design their shells to alter their physical properties, load them with tissue-seeking markers, even attach drugs to them.”

Nearly two decades ago, scientists discovered that those microbubbles could do something else: They could shake loose the blood-brain barrier. This impassable membrane is why neurological conditions like epilepsy, Alzheimer’s, and Parkinson’s are so hard to treat: 98 percent of drugs simply can’t get to the brain. But if you station a battalion of microbubbles at the barrier and hit them with a focused beam of ultrasound, the tiny orbs begin to oscillate. They grow and grow until they reach the critical size of 8 microns, and then, like some Grey Wizard magic, the blood-brain barrier opens—and for a few hours, any drugs that happen to be in the bloodstream can also slip in. Things like chemo drugs, or anti-seizure medications.

This is both super cool and not a little bit scary. Too much pressure and those bubbles can implode violently, irreversibly damaging the barrier.

That’s where Sun comes in. Last year he developed a device that could listen in on the bubbles and tell how stable they were. If he eavesdropped while playing with the ultrasound input, he could find a sweet spot where the barrier opens and the bubbles don’t burst. In November, Sun’s team successfully tested the approach in rats and mice, publishing their results in Proceedings of the National Academy of Sciences.

“In the longer term we want to make this into something that doesn’t require a super complicated device, something idiot-proof that can be used in any doctor’s office,” says Nathan McDannold, co-author on Sun’s paper and director of the Focused Ultrasound Lab. He discovered ultrasonic blood-brain barrier disruption, along with biomedical physicist Kullervo Hynynen, who is leading the world’s first clinical trial evaluating its usefulness for Alzheimer’s patients at the Sunnybrook Research Institute in Toronto. Current technology requires patients to don special ultrasound helmets and hop in an MRI machine, to ensure the sonic beams go to the right place. For the treatment to gain any widespread traction, it’ll have to become as portable as the ultrasound carts wheeled around hospitals today.

More recently, scientists have realized that the blood-brain barrier isn’t the only tissue that could benefit from ultrasound and microbubbles. The colon, for instance, is pretty terrible at absorbing the most common drugs for treating Crohn’s disease, ulcerative colitis, and other inflammatory bowel diseases. So they’re often delivered via enemas—which, inconveniently, need to be left in for hours.

But if you send ultrasound waves through the colon, you could shorten that process to minutes. In 2015, pioneering MIT engineer Robert Langer and then-PhD student Carl Schoellhammer showed that mice treated with mesalamine and one second of ultrasound every day for two weeks were cured of their colitis symptoms. The method also worked to deliver insulin, a far larger molecule, into pigs.

Since then, the duo has continued to develop the technology within a start-up called Suono Bio, which is supported by MIT’s tech accelerator, The Engine. The company intends to submit its tech for FDA approval in humans sometime later this year.

Instead of injecting manufactured microbubbles, Suono Bio uses ultrasound to make them in the wilds of the gut. They act like jets, propelling whatever is in the liquid into nearby tissues. In addition to its backdoor approach, Suono is also working on an ultrasound-emitting capsule that could work in the stomach for things like insulin, which is too fragile to be orally administered (hence all the needle sticks). But Schoellhammer says they have yet to find a limit on the kinds of molecules they can force into the bloodstream using ultrasound.

“We’ve done small molecules, we’ve done biologics, we’ve tried DNA, naked RNA, we’ve even tried Crispr,” he says. “As superficial as it may sound, it all just works.”

by Megan Molteni, Wired |  Read more:
Image: Suono Bio

A New Map For America


These days, in the thick of the American presidential primaries, it’s easy to see how the 50 states continue to drive the political system. But increasingly, that’s all they drive — socially and economically, America is reorganizing itself around regional infrastructure lines and metropolitan clusters that ignore state and even national borders. The problem is, the political system hasn’t caught up.

America faces a two-part problem. It’s no secret that the country has fallen behind on infrastructure spending. But it’s not just a matter of how much is spent on catching up, but how and where it is spent. Advanced economies in Western Europe and Asia are reorienting themselves around robust urban clusters of advanced industry. Unfortunately, American policy making remains wedded to an antiquated political structure of 50 distinct states.

To an extent, America is already headed toward a metropolis-first arrangement. The states aren’t about to go away, but economically and socially, the country is drifting toward looser metropolitan and regional formations, anchored by the great cities and urban archipelagos that already lead global economic circuits.

The Northeastern megalopolis, stretching from Boston to Washington, contains more than 50 million people and represents 20 percent of America’s gross domestic product. Greater Los Angeles accounts for more than 10 percent of G.D.P. These city-states matter far more than most American states — and connectivity to these urban clusters determines Americans’ long-term economic viability far more than which state they reside in.

This reshuffling has profound economic consequences. America is increasingly divided not between red states and blue states, but between connected hubs and disconnected backwaters. Bruce Katz of the Brookings Institution has pointed out that of America’s 350 major metro areas, the cities with more than three million people have rebounded far better from the financial crisis. Meanwhile, smaller cities like Dayton, Ohio, already floundering, have been falling further behind, as have countless disconnected small towns across the country.

The problem is that while the economic reality goes one way, the 50-state model means that federal and state resources are concentrated in a state capital — often a small, isolated city itself — and allocated with little sense of the larger whole. Not only does this hold back our largest cities, but smaller American cities are increasingly cut off from the national agenda, destined to become low-cost immigrant and retirement colonies, or simply to be abandoned. (...)

Connectivity isn’t just about infrastructure; it’s about strategy. It’s not just about more roads, rail lines and telecommunications — as well as manufacturing plants and data centers — but where those are placed. Getting that right is critical to getting the most out of public investment. But too often, decisions about infrastructure investment are made at the state (or even county) level, and end at the state border.

A New Map for America (NY Times)
Image: Joel Kotkin (boundaries and names of 7 mega-regions); Forbes Magazine; Regional Plan Association; Census Bureau; United States High Speed Rail Association; Clare Trainor/University of Wisconsin-Madison Cartography Laboratory.

These Guys Are Good


[ed. Pretty much my golf game (all luck), except I wouldn't hit the green (maybe a sprinkler head, porta-potty, or innocent bystander), and sure wouldn't win a new BMW Roadster.]

Joe Jackson

Meat and the H-Word

We all know, or at least we can all figure out with a moment’s honest reflection, that our dominant attitudes on animals are inconsistent. Someone can be incredibly disturbed by the notion of eating their puppy, but happily consume bacon every other morning, and the cognitive dissonance between the two positions never seems to cause any bother. If we’re being serious, though, we know that many sows are smarter than chihuahuas, and that all of the traits that cause us to love our pets are just as present in the animals we regularly devour the murdered corpses of. (I am sorry, that was a somewhat extreme way of putting it.) This is a commonplace observation, but in a way that’s what makes it so strange: it’s obvious that we have no rational reason to think some animals are friends and others are food. The only differences are tradition and the strength of the relationships we happen to have developed with the friend-animals, but that’s no more a justification of the distinction than it would be to say “I only eat people who aren’t my friends.” Even though nobody can justify it, though, it continues. People solve the question “Why do you treat some animals as if they have personalities but other equally sophisticated animals as if they are inanimate lumps of flavor and calories?” by simply pretending the question hasn’t been asked, or by making some remark like “Well, if pigs would quit making themselves taste so good, I could quit eating them.”

The truth is disturbing, which is why it’s so easily ignored. I’m sure I don’t have to remind you of all the remarkable facts about pigs. First, the stereotypes are false: they are clean animals and don’t sweat, and they don’t “pig out” but prefer to eat slowly and methodically. They are, as Glenn Greenwald puts it, “among the planet’s most intelligent, social, and emotionally complicated species, capable of great joy, play, love, connection, suffering and pain.” They can be housebroken, and can be trained to walk on a leash and do tricks. They dream, they play, they snuggle. They can roll out rugs, play videogames, and herd sheep. They love sunbathing and belly rubs. But don’t take my word for it—listen to the testimony of this man who accidentally adopted a 500-pound pig:

She’s unlike any animal I’ve met. Her intelligence is unbelievable. She’s house trained and even opens the back door with her snout to let herself out to pee. Her food is mainly kibble, plus fruit and vegetables. Her favourite treat is a cupcake. She’s bathed regularly and pigs don’t sweat, so she doesn’t smell. If you look a pig closely in the eyes, it’s startling; there’s something so inexplicably human. When you’re lying next to her and talking, you know she understands. It was emotional realising she was a commercial pig. The more we discovered about what her life could have been, it seemed crazy to us that we ate animals, so we stopped.

I want to note something that often passes by too quickly, which is that the sentience of animals like pigs and cows is almost impossible to deny. Animals can clearly feel “distress” and “pleasure,” and since they have nervous systems just like we do, these feelings are being felt by a “consciousness.” If a human eyeball captures light and creates images that are seen from within, so does a pig’s eyeball, because eyes are eyes. In other words, pigs have an internal life: there is something it is like to be a pig. We’ll almost certainly never know what that’s like, and it’s impossible to even speculate on, but if we believe that other humans are conscious, it is unclear why other animals wouldn’t be, albeit in a more rudimentary way. No, they don’t understand differential calculus or Althusser’s theory of interpellation. (Neither do I.) But they share with us the more morally crucial quality of being able to feel things. They can be happy and they can suffer.

Of course, critics suggest that this is just irrational anthropomorphism: the idea of animal emotions is false, because emotions are concepts we have developed to understand our own experiences as humans, and we have no idea what the parallel experiences in animals are like and whether they are properly comparable. The temptation to attribute human traits to animals is certainly difficult to resist; I can’t help but see sloths that look like they’re smiling as actually smiling, but these sloths almost certainly have no idea that they are smiling. Likewise, whenever I see a basset hound I feel compelled to try to cheer it up, even though I know that sad-eyed dogs aren’t really sad. Even if we do posit that animals feel emotions, nobody can know just how distant their consciousnesses are from our own. We have an intuitive sense that “being a bug” doesn’t feel like much, but how similar is being a water vole to being an antelope versus being a dragonfly? All of it is speculation. David Foster Wallace, in considering the Lobster Question (“Is it all right to boil a sentient creature alive just for our gustatory pleasure?”), noted that the issues of “whether and how different kinds of animals feel pain, and of whether and why it might be justifiable to inflict pain on them in order to eat them, turn out to be extremely complex and difficult,” and many can’t actually be resolved satisfactorily. How do you know what agony means to a lobster? Still, he said, “standing at the stove, it is hard to deny in any meaningful way that this is a living creature experiencing pain and wishing to avoid/escape the painful experience… To my lay mind, the lobster’s behavior in the kettle appears to be the expression of a preference; and it may well be that an ability to form preferences is the decisive criterion for real suffering.”

And lobsters are a trickier case than other more complex creatures, since they’re freaky and difficult to empathize with. As we speak of higher-order creatures who have anatomy and behavioral traits more closely paralleling our own, there is at least good evidence to suggest that various nonhuman animals can experience terrible pain. (Again, hardly anyone would deny this with dogs, and once we accept that we just need to be willing to carry our reasoning through.) Once we accept that these beings experience pain, it next becomes necessary to admit that humans inflict a lot of it on them. We massacre tens of billions of animals a year, and their brief lives are often filled with nothing but pain and fear. The “lucky” ones are those like the male chicks who are deemed “useless” and are “suffocated, gassed or minced alive at a day old.” At least they will be spared the life of torture that awaits most of the creatures raised in factory farms. I don’t know how many atrocity tales to tell here, because again, this is not something unknown, but something “known yet ignored.” I can tell you about animals living next to the rotting corpses of their offspring, animals beaten, shocked, sliced, living in their own blood and feces. I could show you horrible pictures, but I won’t. Here’s Greenwald describing a practice used in pig farms:

Pigs are placed in a crate made of iron bars that is the exact length and width of their bodies, so they can do nothing for their entire lives but stand on a concrete floor, never turn around, never see any outdoors, never even see their tails, never move more than an inch. They are put in so-called farrowing crates when they give birth, and their piglets run underneath them to suckle and are often trampled to death. The sows are bred repeatedly this way until their fertility declines, at which point they are slaughtered and turned into meat. The pigs are so desperate to get out of their crates that they often spend weeks trying to bite through the iron bars until their gums gush blood, bash their heads against the walls, and suffer a disease in which their organs end up mangled in the wrong places, from the sheer physical trauma of trying to escape from a tiny space or from acute anxiety.

Separate from the issue of “conditions” is the issue of killing itself. Obviously, it is better if an animal lives in relative comfort before it is slaughtered, and better if their deaths are imposed “humanely.” But personally, I find the idea of “humane slaughter” oxymoronic, because I’m disturbed by the taking of life as well as by suffering. This part is difficult to persuade people of, since it depends largely on a moral instinct about whether an animal’s life is “inherently” valuable, and whether they should have some kind of autonomy or dignity. Plenty of people who could agree that animal torture is wrong can still believe that eating animals is unobjectionable in and of itself. My disagreement with this comes from my deep gut feeling that opposing torture but endorsing killing is like saying “Of course, the people we eat shouldn’t be kept in tiny cages before we kill them, that’s abominable.” Once you grant that animals are conscious, and have “feelings” of one kind or another, and “wills” (i.e. that there are things they want and things they don’t want, and they don’t want to die), the whole process of mass killing seems irredeemably horrifying. (...)

Because people slip so naturally into oblivious complicity, it’s crucial to actively examine the world around you for evidence of things hidden. What am I missing? What have I accepted as ordinary that might in fact be atrocious? Am I in denial about something that will be clear in retrospect? Every time I apply this kind of thinking to meat-eating, I get chills. Here we have set up mass industrial slaughter, a world built on the suffering and death of billions of creatures. The scale of the carnage is unfathomable. (I know sharks aren’t particularly sympathetic, but I’m still shocked by the statistic that while sharks kill 8 people per year, humans kill 11,000 sharks per hour.) Yet we hide all of it away, we don’t talk about it. Laws are passed to prevent people from even taking photographs of it. That makes me feel the same way I do about the death penalty: if this weren’t atrocious, it wouldn’t need to be kept out of view. “Mass industrial slaughter.” There’s no denying that’s what it is. Yet that sounds like something a decent society shouldn’t have in it.

by Nathan J. Robinson, Current Affairs |  Read more:
Image: Katherine Lam

photo: markk

Sunday, January 21, 2018

Saturday, January 20, 2018

It's the (Democracy-Poisoning) Golden Age of Free Speech

In today’s networked environment, when anyone can broadcast live or post their thoughts to a social network, it would seem that censorship ought to be impossible. This should be the golden age of free speech.

And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? Was it maybe even generated with the help of artificial intelligence? (Yes, there are systems that can create increasingly convincing fake videos.)

Or let’s say you were the one who posted that video. If so, is anyone even watching it? Or has it been lost in a sea of posts from hundreds of millions of content producers? Does it play well with Facebook’s algorithm? Is YouTube recommending it?

Maybe you’re lucky and you’ve hit a jackpot in today’s algorithmic public sphere: an audience that either loves you or hates you. Is your post racking up the likes and shares? Or is it raking in a different kind of “engagement”: Have you received thousands of messages, mentions, notifications, and emails threatening and mocking you? Have you been doxed for your trouble? Have invisible, angry hordes ordered 100 pizzas to your house? Did they call in a SWAT team—men in black arriving, guns drawn, in the middle of dinner?

Standing there, your hands over your head, you may feel like you’ve run afoul of the awesome power of the state for speaking your mind. But really you just pissed off 4chan. Or entertained them. Either way, congratulations: You’ve found an audience.
***
Here's how this golden age of speech actually works: In the 21st century, the capacity to spread ideas and reach an audience is no longer limited by access to expensive, centralized broadcasting infrastructure. It’s limited instead by one’s ability to garner and distribute attention. And right now, the flow of the world’s attention is structured, to a vast and overwhelming degree, by just a few digital platforms: Facebook, Google (which owns YouTube), and, to a lesser extent, Twitter.

These companies—which love to hold themselves up as monuments of free expression—have attained a scale unlike anything the world has ever seen; they’ve come to dominate media distribution, and they increasingly stand in for the public sphere itself. But at their core, their business is mundane: They’re ad brokers. To virtually anyone who wants to pay them, they sell the capacity to precisely target our eyeballs. They use massive surveillance of our behavior, online and off, to generate increasingly accurate, automated predictions of what advertisements we are most susceptible to and what content will keep us clicking, tapping, and scrolling down a bottomless feed.

So what does this algorithmic public sphere tend to feed us? In tech parlance, Facebook and YouTube are “optimized for engagement,” which their defenders will tell you means that they’re just giving us what we want. But there’s nothing natural or inevitable about the specific ways that Facebook and YouTube corral our attention. The patterns, by now, are well known. As Buzzfeed famously reported in November 2016, “top fake election news stories generated more total engagement on Facebook than top election stories from 19 major news outlets combined.”

Humans are a social species, equipped with few defenses against the natural world beyond our ability to acquire knowledge and stay in groups that work together. We are particularly susceptible to glimmers of novelty, messages of affirmation and belonging, and messages of outrage toward perceived enemies. These kinds of messages are to human community what salt, sugar, and fat are to the human appetite. And Facebook gorges us on them—in what the company’s first president, Sean Parker, recently called “a social-validation feedback loop.”

There are, moreover, no nutritional labels in this cafeteria. For Facebook, YouTube, and Twitter, all speech—whether it’s a breaking news story, a saccharine animal video, an anti-Semitic meme, or a clever advertisement for razors—is but “content,” each post just another slice of pie on the carousel. A personal post looks almost the same as an ad, which looks very similar to a New York Times article, which has much the same visual feel as a fake newspaper created in an afternoon.

What’s more, all this online speech is no longer public in any traditional sense. Sure, Facebook and Twitter sometimes feel like places where masses of people experience things together simultaneously. But in reality, posts are targeted and delivered privately, screen by screen by screen. Today’s phantom public sphere has been fragmented and submerged into billions of individual capillaries. Yes, mass discourse has become far easier for everyone to participate in—but it has simultaneously become a set of private conversations happening behind your back. Behind everyone’s backs.

Not to put too fine a point on it, but all of this invalidates much of what we think about free speech—conceptually, legally, and ethically.

The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.

These tactics usually don’t break any laws or set off any First Amendment alarm bells. But they all serve the same purpose that the old forms of censorship did: They are the best available tools to stop ideas from spreading and gaining purchase. They can also make the big platforms a terrible place to interact with other people.

Even when the big platforms themselves suspend or boot someone off their networks for violating “community standards”—an act that does look to many people like old-fashioned censorship—it’s not technically an infringement on free speech, even if it is a display of immense platform power. Anyone in the world can still read what the far-right troll Tim “Baked Alaska” Gionet has to say on the internet. What Twitter has denied him, by kicking him off, is attention.

Many more of the most noble old ideas about free speech simply don’t compute in the age of social media. John Stuart Mill’s notion that a “marketplace of ideas” will elevate the truth is flatly belied by the virality of fake news. And the famous American saying that “the best cure for bad speech is more speech”—a paraphrase of Supreme Court justice Louis Brandeis—loses all its meaning when speech is at once mass but also nonpublic. How do you respond to what you cannot see? How can you cure the effects of “bad” speech with more speech when you have no means to target the same audience that received the original message?

Mark Zuckerberg holds up Facebook’s mission to “connect the world” and “bring the world closer together” as proof of his company’s civic virtue. “In 2016, people had billions of interactions and open discussions on Facebook,” he said proudly in an online video, looking back at the US election. “Candidates had direct channels to communicate with tens of millions of citizens.”

This idea that more speech—more participation, more connection—constitutes the highest, most unalloyed good is a common refrain in the tech industry. But a historian would recognize this belief as a fallacy on its face. Connectivity is not a pony. Facebook doesn’t just connect democracy-loving Egyptian dissidents and fans of the videogame Civilization; it brings together white supremacists, who can now assemble far more effectively. It helps connect the efforts of radical Buddhist monks in Myanmar, who now have much more potent tools for spreading incitement to ethnic cleansing—fueling the fastest-growing refugee crisis in the world.

The freedom of speech is an important democratic value, but it’s not the only one. In the liberal tradition, free speech is usually understood as a vehicle—a necessary condition for achieving certain other societal ideals: for creating a knowledgeable public; for engendering healthy, rational, and informed debate; for holding powerful people and institutions accountable; for keeping communities lively and vibrant. What we are seeing now is that when free speech is treated as an end and not a means, it is all too possible to thwart and distort everything it is supposed to deliver.

by Zeynep Tufekci, Wired | Read more:
Image: Adam Maida

Charles S. Raleigh, Law of the Wild, 1881
via:

The Instagrammable Charm of the Bourgeoisie

It is tempting to believe that we live in a time uniquely saturated with images. And indeed, the numbers are staggering: Instagrammers upload about 95 million photos and videos every day. A quarter of Americans use the app, and the vast majority of them are under 40. Because Instagram skews so much younger than Facebook or Twitter, it is where “tastemakers” and “influencers” now live online, and where their audiences spend hours each day making and absorbing visual content. But so much of what seems bleeding edge may well be old hat; the trends, behaviors, and modes of perception and living that so many op-ed columnists and TED-talk gurus attribute to smartphones and other technological advances are rooted in the much older aesthetic of the picturesque.

Wealthy eighteenth-century English travelers such as Gray used technology to mediate and pictorialize their experiences of nature just as Instagrammers today hold up their phones and deliberate over filters. To better appreciate the picturesque, travelers in the late 1700s were urged to use what was known as a gray mirror or “Claude glass,” which would simplify the visual field and help separate the subject matter from the background, much like an Instagram filter. Artists and aesthetes would carry these tablet-sized convex mirrors with them, and position themselves with their backs to whatever they wished to behold—the exact move that Gray was attempting when he tumbled into a ditch. The artist and Anglican priest William Gilpin, who is often credited with coining the term “picturesque,” even went so far as to mount a Claude mirror in his carriage so that, rather than looking at the actual scenery passing outside his window, he could instead experience the landscape as a mediated, aestheticized “succession of high-coloured pictures.”

Connections between the Instragrammable and the picturesque go deeper than framing methods, however. The aesthetics are also linked by shared bourgeois preoccupations with commodification and class identity. By understanding how Instagram was prefigured by a previous aesthetic movement—one which arose while the middle class was first emerging—we can come closer to understanding our current moment’s tensions between beauty, capitalism, and the pursuit of an authentic life. (...)

While the word “picturesque” came into circulation in the early 1700s to describe anything that looked “like a picture,” it solidified into a stable aesthetic by the late 1700s, when travelers began recording their trips through Europe and England with sketches, etchings, and the occasional painting. The method for circulating their images was more cumbersome than ours, but largely followed the same formula as today. A wealthy traveler trained in draftsmanship (whom we would now call an influencer) would take a months-long journey, carrying art supplies to record picturesque scenes. When he returned home, these images were turned into etchings, which could then be mass-produced, sold individually or bound together to create a record of his travels for his friends and family to peruse.

This practice had its roots in the Grand Tour, a rite of passage for young male aristocrats entering government and diplomacy, in which they roamed the continent for a few years with the aim of accruing gentlemanly knowledge of the world. But the picturesque travelers of the late eighteenth century were a new type of tourist, men and women born during a period of rapid economic and social change. This was the world of Jane Austen, in which a burgeoning middle class sought to solidify and improve its position in English society by adopting practices that signaled prosperity and refinement. (...)

For Gilpin, the picturesque was not just an aesthetic, but a mindset that projected compositional principles onto a landscape while constantly comparing that landscape against previous trips and pictures, a kind of window-shopping of the soul. But the direct experience of picturesque nature is really secondary to having recorded it, either on paper or in memory. “There may be more pleasure in recollecting, and recording,” he writes, “from a few transient lines, the scenes we have admired, than in the present enjoyment of them.” Only recently catching up with the insights of our forebears, the pleasures of recording and archiving have been rediscovered by digital media theorists, such as Nathan Jurgenson, who calls this preoccupation “nostalgia for the present.” Typically, this condition is associated with photographic image-making, and especially with digital technology, but these preoccupations obviously preceded the advent of the camera. (...)

Today you can still find echoes of the picturesque in travel photos on Instagram. A friend’s recent trip to Cuba, for example, will feature leathery old men smoking cigars among palm trees and pastel junkers. Or simply search #VanLife to see an endless stream of vintage Volkswagens chugging through the red desert landscape of the American Southwest. But rather than concentrate on generic similarities between the picturesque and images one finds on Instagram, it is more illuminating to think of how both aesthetics arose from similar socioeconomic and class circumstances—manifesting, according to Price, as images filled with “interesting and entertaining particulars.”

Price’s use of the word “interesting” is significant in understanding the relationship between the picturesque and the Instagrammable. In Our Aesthetic Categories: Zany, Cute, Interesting (2012), philosopher Sianne Ngai positions the picturesque as a function of visual interest—of variation and compositional unpredictability—which she connects to the enticements of capitalism. For a scene or a picture to be interesting, she argues, it must be judged in relation to others, one of many. According to Ngai, this picturesque habit began “emerging in tandem with the development of markets.” Unlike beauty, which exalts, or the sublime, which terrifies, Ngai suggests that the picturesque produces an affect somewhere between excitement and boredom. It is a feeling tied to amusement and connoisseurship, like letting one’s eyes wander over a series of window displays. (...)

The picturesque was ultimately about situating oneself within the class structure by demonstrating a heightened aesthetic appreciation of the natural world, during a period when land was becoming increasingly commodified. By contrast, the Instagrammable is a product of the neoliberal turn toward the individual. It is therefore chiefly concerned with bringing previously non-commodifiable aspects of the self into the marketplace by turning leisure and lifestyle into labor and goods. Though the two aesthetics share a similar image-making methodology and prize notions of authenticity, the Instagrammable is perhaps even more capacious than its predecessor. Through the alchemy of social media, everything you post, whether it is a self-portrait or not, is transformed into a monetized datapoint and becomes an exercise in personal branding.

It almost goes without saying that the selfie is by far the most popular kind of image on Instagram. Photos of faces receive 38 percent more engagement than other kinds of content. Indeed, one could argue that all images on the platform are imbued with the selfie’s metaphysical logic: I was here, this is me. Following this structure, mirrors and shiny surfaces abound on Instagram, with the photographer reflected in still ponds, shop windows, and Anish Kapoor sculptures. Sometimes a body part or an inanimate object will stand in for the self: fingers cradling a puppy, hot-dog legs by the beach, a doll in the shadow of the Eiffel Tower. Other times, the presence of the Instagrammer is suggested through a shadow cast against a scenic backdrop, or merely implied by the very existence of the photograph itself, which says, This was an Instagrammable moment I recorded. Although rarely figural, picturesque images could also be said to have possessed the qualities of the selfie avant la lettre, given what they were often meant to signal: I went here, I am the kind of person who has traveled and decorates my home with this kind of art.

This all-encompassing logic of the selfie clarifies itself when you type “#Instagrammable” into the platform’s search bar. Foamy lattes, tourist selfies, old jeeps, women in teeny bikinis, and the phrase “namaste bitches” written in neon lights. At first glance, these photos seem to share nothing but a hashtag, yet taken together, they represent an emergent worldview. Whereas British travelers of the picturesque era set their newly trained gazes upon rugged vistas and ruined abbeys and then recreated them on their own properties, Instagrammers are instead retooling their own lives—the most obvious medium of our neoliberal age. In short, the project of the Instagrammer is not to find interesting things to photograph, but to become the interesting thing.

At its core, Instagram is powered by a careful balance of desire: every commodity (including the Instagrammer) must be desirable to the consumer, but no consumer can seem unsettled by desire for the commodity. Like the measured interest at the core of the picturesque—a display of world-wise connoisseurship that signaled class belonging—“thirst,” and its careful suppression, is what drives Instagram. Thirst is an affect that combines envy, erotic desire, and visual attention. However, if you are obviously thirsty, it means that your persona as a sanguine consumer has slipped, which is considered bad or embarrassing. One has revealed too much about one’s real desires. In this way, Instagram influencers are like dandies, whose greatest accomplishment was the control of their emotions, and more importantly control over the ways their faces and bodies performed those emotions. “It is the joy of astonishing others,” writes Charles Baudelaire in The Painter of Modern Life (1863), but “never oneself being astonished.” (...)

It is this obsession with looking natural that appeals to advertisers, because unlike a magazine ad or television commercial, the line on Instagram between the real and the make-believe is much more porous. People scroll for hours on their phones because of the pictures’ ability to simultaneously conjure fantasy and ground that fantasy in the suggestion of documented experience. Contemporary audiences know that television ads are fake, but on an Instagram feed, mixed with family snapshots and close-ups of birthday parties, sponsored posts of cerulean waters on the shores of Greece look real enough—achievable, or at a minimum, something one should hope to achieve.

by Daniel Penny, Boston Review |  Read more:
Image: Getty

The Enchanted Loom

The light of the sun and moon cannot be outdistanced, yet mind reaches beyond them. Galaxies are as infinite as grains of sand, yet mind spreads outside them.
—Eisai 
Biology gives you a brain, life turns it into a mind.
—Jeffrey Eugenides
All brains gather intelligence; to lesser or greater extents, some brains acquire a state of mind. How and where they find the means to do so is the question raised by poets and philosophers, doctors of divinity and medicine who have been fooling around with it for the past five thousand years and leave the mystery intact. It’s been a long time since Adam ate of the apple, but about the metaphysical composition of the human mind, all we can say for certain is that something unknown is doing we don’t know what.

Our gathering of intelligence about the physical attributes and behaviors of the brain has proved more fruitful. No small feat. The brain is the most complicated object in the known universe, housing 86 billion neurons, no two alike and each connected to thousands of other neurons, passing signals to one another across as many as 100 trillion synaptic checkpoints. Rational study of the organism (its chemistries, mechanics, and cellular structure) has led to the development of the Human Genome Project and yielded astonishing discoveries in medicine and biotechnology—the CT scan and the MRI, gene editing and therapy, advanced diagnostics, surgical and drug treatment of neurological disorder and disease. All triumphs of the intellect, but none of them answering the question as to whether the human mind is flesh giving birth to spirit or spirit giving birth to flesh.

Mind is consciousness, and although a fundamental fact of human existence, consciousness is subjective experience as opposed to objective reality and therefore outdistances not only the light of the sun and the moon but also the reach of the scientific method. It doesn’t lend itself to trial by numbers. Nor does it attract the major funding (public and private, civilian and military) that in China, Europe, and the Americas expects the brain sciences to produce prompt and palpable reward and relief.

The scientific-industrial complex focuses its efforts on the creation of artificial intelligence—computer software equipped with functions of human cognition giving birth to machines capable of visual perception, speech and pattern recognition, decision making and data management. Global funding for AI amounted to roughly $30 billion in 2016, the fairest share of the money aimed at stepping up the commercial exploitations of the internet. America’s military commands test drones that decide for themselves which targets to destroy; Google assembles algorithms that monetize online embodiments of human credulity and desire, ignorance and fear.

We live in an age convinced that technology is the salvation of the human race, and over the past fifty years, we’ve learned to inhabit a world in which it is increasingly the thing that thinks and the man reduced to the state of a thing. We have machines to scan the flesh and track the blood, game the stock market, manufacture our news and social media, tell us where to go, what to do, how to point a cruise missile or a toe shoe. Machines neither know nor care to know what or where is the human race, why or if it is something to be deleted, sodomized, or saved. Watson and Alexa can access the libraries of Harvard, Yale, and Congress, but they can’t read the books. They process words as objects, not as subjects. Not knowing what the words mean, they don’t hack into the vast cloud of human consciousness (history, art, literature, religion, philosophy, poetry, and myth) that is the making of once and future human beings. (...)

History is not what happened two hundred or two thousand years ago. It is a story about what happened two hundred or two thousand years ago. The stories change, as do the sight lines available to the tellers of the tales. To read three histories of the British Empire, one of them published in 1800, the others in 1900 and 2000, is to discover three different British Empires on which the sun eventually sets. The must-see tourist attractions remain intact—Napoleon still on his horse at Waterloo, Queen Victoria enthroned in Buckingham Palace, the subcontinent fixed to its mooring in the Indian Ocean—but as to the light in which Napoleon, the queen, or India are to be seen, accounts differ.

It’s been said that over the span of nine months in the womb, the human embryo ascends through a sequence touching on over three billion years of evolution, that within the first six years of life, the human mind stores subjective experience gathered in what is now believed to be the nearly 200,000 years of its existence. How subjective gatherings of consciousness pass down from one generation to the next, collect in the pond of awareness that is every newly arriving human being, is another lobe of the mystery the contributors to this issue of the Quarterly leave intact. It doesn’t occur to Marilynne Robinson, twenty-first-century essayist and novelist, to look the gift horse in the mouth. “We all live in a great reef of collective experience, past and present, that we receive and preserve and modify. William James says data should be thought of not as givens but as gifts…History and civilization are an authoritative record the mind has left, is leaving, and will leave.”

by Lewis Lapham, Lapham's Quarterly |  Read more:
Image: "The Weeders" by Jules Breton, 1868