Wednesday, February 14, 2018

Kicking the Table: Populism or Capitalist De-Modernization at the Semi-Periphery: The Case of Poland

Those who are against fascism without being against capitalism, who lament over the barbarism that comes out of fascism, are like people who wish to eat their veal without slaughtering the calf. They are willing to eat the calf, but they dislike the sight of blood. They are easily satisfied if the butcher washes his hands before weighing the meat. They are not against the property relations which engender barbarism; they are only against barbarism itself. They raise their voices against barbarism, and they do so in countries where precisely the same property relations prevail, but where the butchers wash their hands before weighing the meat. 
—Bertolt Brecht, “Five Difficulties of Writing the Truth” (1935)
It was in autumn 1990 that Poland experienced a pivotal moment in its modern political history—for the first time the president of the country was to be elected by popular vote. The top job was finally claimed by Lech Wałęsa, the iconic leader of the Solidarność trade union and Nobel Peace Prize winner. As is often the case with fundamental breakthroughs, however, there was something much darker and more disturbing lurking in the background. Wałęsa’s victory did not happen without a fight. He was challenged by another prominent center-right politician with a long history of anti-Soviet activism: Tadeusz Mazowiecki. The latter got the support of elite intellectual circles, marking the final cleavage in the previously united opposition that throughout the 1980s had fought under the banner of Solidarność. It was hardly a surprising course of events, as it closely followed class divisions: Wałęsa, a simple worker turned revolutionary, enjoyed the support of Polish liberal intellectuals as long as he was useful, even crucial, in the fight against Soviet domination. Once that fight was won, class divisions, especially those dictated by cultural capital, reemerged as an important—even if not the only—line of political division. But what was surprising, and shocked all pundits, was the fact that it wasn’t Mazowiecki whom Wałęsa had to face in the run-off ballot. Another candidate claimed second place: Stan Tymiński, an obscure and completely unknown figure.

Tymiński only appeared in Polish public life right before the election, returning from decades of emigration spent in Canada and South America. He presented himself as the anti-establishment candidate of “the people.” He had support from neither the ex-communists nor Solidarność, and he underlined his independence. He also advertised his personal material success: a Polish-Canadian businessman, well-travelled and experienced in the mythical West, doing business across North and South America. He campaigned against the entire political establishment, maintaining that all politicians were corrupt and controlled by the secret service and claiming to possess ample proof of this collaboration, which, however, he never revealed. He also passionately denounced the suffering of the poorer part of society, which had been deeply harmed by the vicious neoliberal reforms undertaken with the support of the IMF and the World Bank a year earlier (reforms devised, as it happens, by no less a figure than the famous neoliberal prophet himself, Jeffrey Sachs). To these impoverished masses, Tymiński promised material prosperity and symbolic dignity, and, despite the fact that he had zero political experience and was unanimously lambasted by the intellectual establishment, he managed to secure second place in the first round of the elections, winning 23% of the votes—more than Tadeusz Mazowiecki, who had served as Polish prime minister since 1989 and was probably the best-qualified candidate ever to run for the office of president in Poland.

A reader who followed the 2016 US election—and who has not?—may start to see an uncanny resemblance: yes, Stan Tymiński was, toutes proportions gardées, the Polish Donald Trump, and he defeated the politician who was the closest equivalent of Hillary Clinton in Polish political life: a very well-educated and well-prepared political professional (a lawyer, for that matter) discredited in the eyes of many voters by his links to the elite of the neoliberal establishment. Tymiński did not win the presidency, but the shock that followed his victory over Tadeusz Mazowiecki was very similar to what the US experienced in 2016.

This is a fact worth remembering given the more recent populist turn in Polish—and not only Polish—politics: populism did not appear in recent years solely as a result of the 2008 financial crisis. In the Polish context at least, it is as old as neoliberalism and constitutes its somber counterpart. Despite Tymiński’s defeat in 1990, it has remained a constant element of our political life, enjoying, in various institutional forms, between 15 and 20 percent of electoral support. Tymiński disappeared from Polish politics as quickly as he entered it, but just a year later, in 1991, another popular figure was born: Andrzej Lepper. A home-grown, rural populist, he rallied farmers to oppose the government after a wave of bankruptcies and unrest provoked by the shock of the neoliberal therapy applied to Polish society after the fall of the Soviet bloc. This time a political organization was born: Samoobrona (meaning “Self-defence”), first as a movement, then as a political party. After more than a decade of lurking in the shadows, Lepper entered government in 2005, becoming deputy prime minister in the cabinet of…Jarosław Kaczyński, the well-known leader of the Law and Justice party that currently holds power in Poland. At that time they ruled for only two years, falling victim to their own infighting and intrigues; however, that coalition, as well as the early developments I sketched above, is crucial to understanding the present political situation in Poland. Before it happened, Law and Justice was just an ordinary neo-conservative party: they affirmed nationalism (labeled “patriotism” according to the rules of political correctness), opposed women’s emancipation and gay rights, proclaimed their religious faith, etc.
When it came to the economy, they were just as neoliberal as the liberals: they lowered not only taxes for the rich but also the mandatory healthcare and social-security contributions that companies are supposed to pay, and they completely scrapped the inheritance tax. But in the course of those two years of coalition government, Law and Justice devoured Samoobrona, which never rose to power again, and captured its electorate, slowly turning from a standard conservative party into the populist-conservative party that they are today. What helped this development was, of course, the success of Hungary’s Viktor Orbán, who provided a blueprint for how to legally bypass the law in order to construct the bizarre hybrid of authoritarian parliamentarism that we are experiencing today.

Many Polish liberals are disgusted by the fact that so many Polish voters “betrayed the values of democratic society” and “sold” their allegiance to the Constitutional Court or the separation of powers for the $150-a-month child bonus introduced by the Law and Justice government. This is, however, a fundamental misconception. Celebrations of democratic values come very easily to those who do not need to worry about how to feed their kids and whose class egoism has been ruthless throughout the last three decades of neoliberal rule. (...)

This disconnect is well exemplified by the discussions surrounding Poland’s position and membership in the European Union. Polish liberals fear some kind of Polexit—either by choice or by expulsion due to the undemocratic policies of the populist government. So they point to the fact that the European Union, with the Schengen agreement, gave us an incredible freedom of movement in Europe. Of course, factually this is true. Born in 1976, I am old enough to remember what it meant to live behind the Iron Curtain. We were not allowed to keep our passports at home and had to apply for them every time we intended to leave the country. We needed a visa to enter any Western state. Visas were difficult to obtain, cost a lot and covered short periods of time, like two weeks or a month. Crossing the border was a stressful and humiliating experience: we were suspected of being spies or smugglers, interrogated and searched for hours. Today all I have to do is take my national ID, a driving license and a credit card, and I can travel three and a half thousand kilometers from Warsaw to Lisbon, crossing half a dozen national borders without being checked even once. What used to be border checkpoints are now parking lots on the side of highways. The police booths I remember from my teenage years have been turned into hot-dog stands. As a citizen of an EU country, I am entitled to live, work and buy real estate in any member country. It really is great, but with one caveat: you need to have the resources to profit from this exceptional and remarkable freedom. What good is the ability to travel to Lisbon to a person who can hardly afford a train ticket to the nearest town? Even worse: there may be no train to the nearest town, because Polish neoliberals decided that public transportation is passé, that it belongs to the old and obsolete socialist past, so they neglected a lot of local connections in favor of promoting car ownership. And if you cannot buy a car?
Well, it is your fault, because you are not entrepreneurial enough. So you get stuck in some grey, crumbling and aging peripheral town or hamlet. The only thing you can afford is a TV, where you watch the lavish lifestyle of cosmopolitan elites. And, suddenly, here’s this populist government which does not tell you that you are a savage and maladjusted Homo sovieticus who lacks “civilizational competence”, but rather treats you as a dignified subject who deserves attention and—what a formidable turn of events!—they give you a child bonus, so your kids can go for holidays for the first time in their lives. What would you say to the liberals who come nagging you about how much you betrayed democratic values and how urgently we need to defend the freedom and civil society we were so desperately fighting for in Soviet times? And these are the very same people who ruled your country for eight years, denying you both dignity and welfare while constantly bragging about fabulous GDP growth and the incredible economic miracle that they created.

Well, if you have any brains left, you would say just one thing: “Fuck off!” And this is precisely what Law and Justice supporters are saying. Contrary to the liberal narrative, their support for populism is not an irrational eruption of barbarism and resentment, but rather the opposite: proof of their rationality and sober thinking. A quick glance at the opinion polls shows that almost none of the most controversial policies enacted by the Polish populist government enjoys widespread public support, even among Law and Justice voters. Two thirds of Poles do not like what is happening with the Constitutional Court, and an overwhelming majority is against logging in the primeval forest in Białowieża and does not support the government’s obsession with keeping the Polish economy addicted to coal. The conspiracy theory, advanced by some prominent politicians of the ruling party, that the 2010 airplane crash in Smoleńsk—in which Lech Kaczyński (the twin brother of Jarosław Kaczyński and the President of Poland at the time) died along with 95 others, many of them prominent politicians—was an orchestrated attack is believed by only 14% of the population. The reasons why people support the government have little to do with all those ridiculous and harmful policies. Parliamentary politics in a bourgeois state is very much like cooking with limited supplies: you may have a bowl of hot oil and you may think that tempura would be a great treat, but if all you have are potatoes, you will most likely settle for fries.

But, wait, isn’t it a dangerous normalization of right-wing populism that I’m advocating here? After all, we saw what happened in Warsaw on November 11, 2017, when the Independence Day parade turned into a neo-fascist festival of hatred, xenophobia and racism. Shouldn’t we be more concerned, or even alarmed? There are, for sure, reasons for concern and alarm, but if they are ever going to be politically fruitful, we need a good understanding of what is going on. To understand does not mean to justify, let alone to praise or support. Polish conservative populism is not fascism. Only a small minority of the people who marched on November 11 in Warsaw were actual fascists. But, of course, there is a risk of sliding towards fascism. The government is turning a blind eye to the fascist excesses because it does not want a more radical formation emerging on the right edge of the political spectrum. So they are keen on letting the right-wing extremists know that they are somehow included under their political patronage. This surely is playing with fire and should never take place. An outright ban on any kind of fascism is the only acceptable way to go and the only way to avoid a repetition of the horrors that Central-Eastern Europe experienced in the past century. What is, however, equally urgent is addressing the root of fascism and countering the force behind the fascist awakening. Simply denouncing right-wing populism and the drift towards fascism it entails will get us nowhere unless we understand why they are moving closer and closer to the mainstream of political life.

It is here, again, that we encounter the basic flaw of liberal common sense, with its fixation on cultural factors and the importance of ethos. What it neglects is an element that has been entirely wiped out of both public and academic discourse in Poland, as well as elsewhere—for example, in the US: the issue of class and its indelible materialist component. Populism is a kind of displaced and perverted class revolt. It derives from a twofold oppression: material for the poor and symbolic for the lower-middle class. The former strive for material redistribution, the latter for symbolic recognition—for something to be proud of and for the feeling of dignity they are deprived of. Polish populists have found a way to cunningly combine the support of the two into a coherent political force, and it has allowed them to win elections. Now, fulfilling their electoral promises grants them the ongoing legitimacy that they clearly enjoy in the eyes of a large part of Polish society.

Looking from the other side of the Atlantic, I would venture the hypothesis that the same is at least partially true of American society. Walter Benn Michaels has argued for more than a decade that US political orthodoxy has favored the politics of identity and recognition over material redistribution. What this means is not just that a great many people have become the victims of growing inequality, but that a large group of them—white people, and especially straight white men—have come to understand themselves as doubly victimized. They have very few resources, as they get nothing from material redistribution (because there is virtually none), and they get nothing from symbolic redistribution (since that goes precisely to people who are not straight and white). One may say: rightly so, why should they? Given the racist and patriarchal society that we live in, this is a group that does not deserve recognition for what they are. But as true as this diagnosis may be, it does not change an obvious political consequence: this is the group that occupies the position Ernesto Laclau called pure heterogeneity, or caput mortuum, to use the Lacanian-alchemist term—a leftover, a sedimentation on the walls of the test tube where the chemical reaction is taking place. This is the most unstable and dangerous element, as it does not take part in the normal political game; being exotic (i.e. positioned outside) to the system, it can only disrupt the process. Laclau describes it with a metaphor: as we sit around a table playing a board game, they are those who have been pushed aside—heterogeneous to the very process of the game—and they cannot be players in the ongoing match. This is an utterly painful and humiliating position, and it can hardly be enjoyed by anyone who happens to occupy it. These people may not have any means to enter the game, but they can do a different thing: kick the table, so that there will be no more playing for anyone.
This is what they did in many places around the world in 2015 and 2016. And, as long as they remain in the position of pure heterogeneity, they’ll keep on doing it, no matter how much we denounce and demonize them. As a matter of fact, the more the liberals whine about the destruction of state institutions and irreparable harm done to political order by those actions, the more enjoyment the supporters of populism will get from kicking the table. After all, this is what the so-called protest voting is all about. (...)

Throughout a good part of the 20th century, academic development studies were dominated by what was called modernization theory. It claimed that all countries move along the same trajectory of social change, with some—mainly the West—more advanced than others. It had a right-wing and a left-wing version, and it culminated in the (in)famous declaration of the end of history made by Francis Fukuyama in the early 1990s. What we are witnessing right now is a precise reversal of this alleged pattern: the peripheries of the capitalist world-system have become a sort of perverse avant-garde of reaction. What we have experienced in Poland since the early 1990s, as I showed at the beginning of this text, has not been a glitch provoked by cultural factors but a reaction to neoliberal austerity. It took neoliberalism some time to degrade core societies to the same level, but when it started to get there, strikingly similar formations appeared first in the UK and the US, precisely the most neoliberal countries in the center of the capitalist world-system. It should not come as a surprise that France is the place where politics may still seem “business as usual”: Emmanuel Macron looks like another Tony Blair, Gerhard Schröder or Bill Clinton. France is, after all, the number one public spender in the OECD and still maintains one of the most generous and inclusive welfare systems on the planet. What the liberals fascinated by Macron do not get is that the neoliberal reforms he is undertaking are destroying the very status quo on which he got elected. The advance of the Front National in France, like the electoral success of Alternative für Deutschland in Germany, is a visible sign of what we may very well face in the not very distant future.
I would dub this phenomenon “de-modernization,” as it reverses both the conquests of liberal modernity (and not only in the political sphere—the same is true of the secular state or labor conditions) and the relation between center and periphery postulated by modernization theory. The future of Berlin, Paris or Washington is in Warsaw and Budapest, not the other way around.

Looking at this uncanny development from the perspective of the Polish semi-periphery, I cannot but marvel at the incredible irony of the situation. I grew up in the last years of the Soviet regime and I remember quite well the dreams and aspirations that followed the system change in 1989. The key ambition of liberal elites was for Poland to return to the mainstream of Western politics and to become “a normal, European country.” And it was first and foremost the Anglo-Saxon political world that captured the imagination of Polish liberal elites as a noble example to follow. When I look today at the chaos and indolence of the Trump administration, or the mess that Brexit is generating in the UK, I cannot help but think of it as a bizarre “polonization” of world politics. I have seen this before! Steve Bannon looks, talks and acts (including the red nose and generally alcoholic look) as if he were an advisor to the Polish right-wing government of Jan Olszewski in 1992, not to the US president in 2017. Poland—and the entire region of Central-Eastern Europe—is undeniably in the mainstream of European and world politics. Even more: we are a kind of avant-garde! Not because we have advanced so far, but because capitalism in its neoliberal incarnation has brought politics so low.

by Jan Sowa, Nonsite.org |  Read more:

Tuesday, February 13, 2018

They Only Look Casual

There she goes, strutting that strut. Her outfit is arranged just so. She’s got the bag with the umpteen-person wait list, not yet available in stores. The cameras flash. It’s a fashion moment. It’s a watershed. It’s a marketing opportunity.

It’s the 15-foot catwalk that runs from the hotel door to the S.U.V. door.

For fashion houses looking to leverage the star power of celebrities, the holy grail had long been the red carpet, the bigger an event the better. Special teams at major labels might court actresses, their reps and their stylists for years to dress their top clients, and spend tens of thousands of dollars or more to make custom outfits for them. A hit could make a brand, or cement its status, paying dividends for years to come as the moment was fondly recalled in Best Of lists and debated by carpet pundits.

That was then.

With social media ascendant, there is a new, and increasingly important, runway for the stars, and a rising guard of stylists working to dress them for it. It’s the sidewalk. It’s the airport. It’s the Starbucks run.

For those women whose followers feverishly track their every move and every selfie — your Hadids (Gigi and Bella); your Kaia Gerbers (Cindy Crawford’s look-alike model daughter); your Emily Ratajkowskis; Selena Gomez, your Instagram queen (the platform’s most followed person) — any moment can be a moment. Their presence is an event. They need no carpet; they are the carpet.

“Five years ago it was all about the red carpet moment,” said Christian Classen, 31, a stylist for Ms. Gomez and young celebrities including the Disney star Dove Cameron, the singer Banks, the Instagram poet Rupi Kaur and the actress Zazie Beetz. “Less now. An Instagram selfie on some people can be 10 times more important.”

Mr. Classen does style many of his clients for formal appearances, but he has made a specialty of casual off-carpet looks. When he struck out on his own as a stylist in 2015, labels tightly guarded their stores, lending clothes only for specific red carpet occasions. “Now, if it’s for a street style or an airport, they’re going to give it to me right away,” Mr. Classen said.

Not that the red carpet has disappeared. It remains, ready when needed, for the Oscars, the Emmys, the Grammys, the premieres. And so remain, at the ready, the legion of red carpet stylists. But joining them are a new wave of “day stylists” whose forte is the casual, tossed-off, this-old-thing look of street style: what the stars would throw together on their own (but often don’t have to).

Even the most casual of looks — the jeans, beanie and pap-proof goggle shades the star may wear to scurry to the gate of her departing flight — may well take a village. (The highest-profile stars may have separate stylists to work on their biggest red carpet events. Ms. Gomez, for example, also works with the stylist Kate Young.)

“A lot of people probably think that they choose on a daily basis from their own closets,” said Mimi Cuttrell, 26, a stylist who works with Gigi Hadid, Ms. Gerber and Ms. Hadid’s mother, Yolanda Hadid. “Sometimes there are outfits that are completely planned out from head to toe. I’m really particular with tailoring, too. There’s a lot of pieces and back work that goes into getting one street style look ready.” (...)

The point of such styling is to look effortless, natural and, in one of fashion’s favorite terms, “authentic” — even when that authenticity is mediated by an on-hand stylist to offer up the glossiest version of your authentic self. So much so that many of the millions of fans watching along on social media may not realize they’re looking at a tailor-made ensemble.

by Matthew Schneier, NY Times |  Read more:
Image: Backgrid

Max Ernst, The Phases of the Night, 1946
via:

Daido Moriyama, Record No.35 (2017)
via:

The Autonomous Selfie Drone Is Here

Autonomous drones have long been hyped, but until recently they’ve been little more than that. The technology in Skydio’s machine suggests a new turn. Drones that fly themselves — whether following people for outdoor self-photography, which is Skydio’s intended use, or for longer-range applications like delivery, monitoring and surveillance — are coming faster than you think.

They’re likely to get much cheaper, smaller and more capable. They’re going to be everywhere, probably sooner than we can all adjust to them.

Most consumer drones rely on some degree of automation in flight. DJI, the Chinese drone company that commands much of the market, makes several drones that can avoid obstacles and track subjects.

But these features tend to be less than perfect, working best in mostly open areas. Just about every drone on the market requires a pilot.

“Our view is that almost all of the use cases for drones would be better with autonomy,” said Adam Bry, Skydio’s chief executive.

Skydio was founded by Mr. Bry and Abe Bachrach — who met as graduate students at the Massachusetts Institute of Technology and later started Google’s drone program, Project Wing — along with Matt Donahoe, an interface designer.

In 2014, with funding from the venture firm Andreessen Horowitz, the company began working on what would become the R1. Skydio has since raised $70 million from Andreessen and several other investors, including Institutional Venture Partners, Playground Global and the basketball player Kevin Durant.

Skydio’s basic goal was a drone that requires no pilot. When you launch the R1 using a smartphone app, you have your subject stand in front of the drone, then tap that person on the screen — now it’s locked on. You can also select one of several “cinematic modes,” which specify the direction from which the drone will try to record its subject. (It can even predict your path and stay ahead of you to shoot a selfie from the front.)

After takeoff, it’s hands off. The drone operates independently. In the eight-minute flight I saw — through a wooded trail sparsely populated with runners and dogs — the R1 followed its target with eerie determination, avoiding every obstacle as naturally as an experienced human pilot might, and never requiring help. It lost its subject — me — only once, but I had to really work to make that happen. (...)

What this means is ubiquity. As I watched the R1 tail Mr. Bry, I played the scene forward in my mind: What happens when dozens or hundreds of runners and bikers and skiers and hikers and tourists begin setting out their own self-flying GoPros to record themselves? Our society has proved in thrall to photography; if you can throw up a camera and get a shot of you reaching the summit, who’s not going to do it?

by Farhad Manjoo, NY Times |  Read more:
Image: Laura Morton for The New York Times

California Launches Aetna Probe

California's insurance commissioner has launched an investigation into Aetna after learning a former medical director for the insurer admitted under oath he never looked at patients' records when deciding whether to approve or deny care.

California Insurance Commissioner Dave Jones expressed outrage after CNN showed him a transcript of the testimony and said his office is looking into how widespread the practice is within Aetna.

"If the health insurer is making decisions to deny coverage without a physician actually ever reviewing medical records, that's of significant concern to me as insurance commissioner in California -- and potentially a violation of law," he said.

Aetna, the nation's third-largest insurance provider with 23.1 million customers, told CNN it looked forward to "explaining our clinical review process" to the commissioner.

The California probe centers on a deposition by Dr. Jay Ken Iinuma, who served as medical director for Aetna for Southern California from March 2012 to February 2015, according to the insurer.

During the deposition, the doctor said he was following Aetna's training, in which nurses reviewed records and made recommendations to him.

Jones said his expectation would be "that physicians would be reviewing treatment authorization requests," and that it's troubling that "during the entire course of time he was employed at Aetna, he never once looked at patients' medical records himself." (...)

Members of the medical community expressed similar shock, saying Iinuma's deposition leads to questions about Aetna's practices across the country.

"Oh my God. Are you serious? That is incredible," said Dr. Anne-Marie Irani when told of the medical director's testimony. Irani is a professor of pediatrics and internal medicine at the Children's Hospital of Richmond at VCU and a former member of the American Board of Allergy and Immunology's board of directors.

"This is potentially a huge, huge story and quite frankly may reshape how insurance functions," said Dr. Andrew Murphy, who, like Irani, is a renowned fellow of the American Academy of Allergy, Asthma and Immunology. He recently served on the academy's board of directors. (...)

"This is something that all of us have long suspected, but to actually have an Aetna medical director admit he hasn't even looked at medical records, that's not good," said Murphy, who runs an allergy and immunology practice west of Philadelphia.

by Wayne Drash, CNN |  Read more:
Image: Wayne Drash/CNN

Is the Universe a Conscious Mind?

In the past 40 or so years, a strange fact about our Universe gradually made itself known to scientists: the laws of physics, and the initial conditions of our Universe, are fine-tuned for the possibility of life. It turns out that, for life to be possible, the numbers in basic physics – for example, the strength of gravity, or the mass of the electron – must have values falling in a certain range. And that range is an incredibly narrow slice of all the possible values those numbers can have. It is therefore incredibly unlikely that a universe like ours would have the kind of numbers compatible with the existence of life. But, against all the odds, our Universe does.

Here are a few examples of this fine-tuning for life:
  • The strong nuclear force (the force that binds together the elements in the nucleus of an atom) has a value of 0.007. If that value had been 0.006 or less, the Universe would have contained nothing but hydrogen. If it had been 0.008 or higher, the hydrogen would have fused to make heavier elements. In either case, any kind of chemical complexity would have been physically impossible. And without chemical complexity there can be no life.
  • The physical possibility of chemical complexity is also dependent on the masses of the basic components of matter: electrons and quarks. If the mass of a down quark had been greater by a factor of 3, the Universe would have contained only hydrogen. If the mass of an electron had been greater by a factor of 2.5, the Universe would have contained only neutrons: no atoms at all, and certainly no chemical reactions.
  • Gravity seems a momentous force but it is actually much weaker than the other forces that affect atoms, by a factor of about 10^36. If gravity had been only slightly stronger, stars would have formed from smaller amounts of material, and consequently would have been smaller, with much shorter lives. A typical sun would have lasted around 10,000 years rather than 10 billion years, not allowing enough time for the evolutionary processes that produce complex life. Conversely, if gravity had been only slightly weaker, stars would have been much colder and hence would not have exploded into supernovae. This also would have rendered life impossible, as supernovae are the main source of many of the heavy elements that form the ingredients of life.
Some take the fine-tuning to be simply a basic fact about our Universe: fortunate perhaps, but not something requiring explanation. But like many scientists and philosophers, I find this implausible. In The Life of the Cosmos (1999), the physicist Lee Smolin has estimated that, taking into account all of the fine-tuning examples considered, the chance of life existing in the Universe is 1 in 10^229, from which he concludes:
In my opinion, a probability this tiny is not something we can let go unexplained. Luck will certainly not do here; we need some rational explanation of how something this unlikely turned out to be the case.
The two standard explanations of the fine-tuning are theism and the multiverse hypothesis. Theists postulate an all-powerful and perfectly good supernatural creator of the Universe, and then explain the fine-tuning in terms of the good intentions of this creator. Life is something of great objective value; God in Her goodness wanted to bring about this great value, and hence created laws with constants compatible with its physical possibility. The multiverse hypothesis postulates an enormous, perhaps infinite, number of physical universes other than our own, in which many different values of the constants are realised. Given a sufficient number of universes realising a sufficient range of the constants, it is not so improbable that there will be at least one universe with fine-tuned laws.

Both of these theories are able to explain the fine-tuning. The problem is that, on the face of it, they also make false predictions. For the theist, the false prediction arises from the problem of evil. If one were told that a given universe was created by an all-loving, all-knowing and all-powerful being, one would not expect that universe to contain enormous amounts of gratuitous suffering. One might not be surprised to find it contained intelligent life, but one would be surprised to learn that life had come about through the gruesome process of natural selection. Why would a loving God who could do absolutely anything choose to create life that way? Prima facie theism predicts a universe that is much better than our own and, because of this, the flaws of our Universe count strongly against the existence of God.

Turning to the multiverse hypothesis, the false prediction arises from the so-called Boltzmann brain problem, named after the 19th-century Austrian physicist Ludwig Boltzmann who first formulated the paradox of the observed universe. Assuming there is a multiverse, you would expect our Universe to be a fairly typical member of the universe ensemble, or at least a fairly typical member of the universes containing observers (since we couldn’t find ourselves in a universe in which observers are impossible). However, in The Road to Reality (2004), the physicist and mathematician Roger Penrose has calculated that in the kind of multiverse most favoured by contemporary physicists – based on inflationary cosmology and string theory – for every observer who observes a smooth, orderly universe as big as ours, there are 10 to the power of 10^123 who observe a smooth, orderly universe that is just 10 times smaller. And by far the most common kind of observer would be a ‘Boltzmann brain’: a functioning brain that has by sheer fluke emerged from a disordered universe for a brief period of time. If Penrose is right, then the odds of an observer in the multiverse theory finding itself in a large, ordered universe are astronomically small. And hence the fact that we are ourselves such observers is powerful evidence against the multiverse theory.

Neither of these is a knock-down argument. Theists can try to come up with reasons why God would allow the suffering we find in the Universe, and multiverse theorists can try to fine-tune their theory such that our Universe is less unlikely. However, both of these moves feel ad hoc, fiddling to try to save the theory rather than accepting that, on its most natural interpretation, the theory is falsified. I think we can do better.

by Philip Goff, Aeon |  Read more:
Image: Carlo Allegri/Reuters

Monday, February 12, 2018

Marc Ribot

Corporations Will Inherit the Earth

What a herky-jerky mess our federal government is. What a bumbling klutz. It can’t manage health care. It can’t master infrastructure. It can’t fund itself for more than tiny increments of time. It can barely stay open. It shut down briefly on Friday for the second time in three weeks. Maybe it should just stay closed for good.

Let corporations pick up the slack! In fact they’re doing that already, with an innovation and can-do ambition sorely absent in Washington.

Three days before the latest shutdown, Elon Musk borrowed a launchpad previously used by NASA’s trailblazing astronauts to send his own rocket into space. It was the first time that a vessel of such might and majesty was thrust heavenward by a private company rather than a government agency.

It was also a roaring, blazing sign of our times, in which the gaudy dreams and grand experiments belong to the private sector, not the public one, and in which the likes of Musk or Amazon’s Jeff Bezos chart a future for our species beyond our stressed-out planet. NASA no longer leads the way.

Speaking of Amazon, it joined two other corporate giants, Berkshire Hathaway and JPMorgan Chase, to announce two weeks ago that they would form their own health care provider and try to solve the riddle that continues to stump lawmakers: dependable service at affordable prices.

Amazon also recently stole a high-profile educator from Stanford University, Candace Thille. Her hiring suggests that the company is poised to expand employee training to a point where Amazon is essentially filling in for public and private universities and grooming its own work force.

And Musk is not only reaching for the stars but also tunneling under the earth. A new venture of his, the Boring Company, is a response to the inability of public officials in Los Angeles to ease the region’s paralyzing traffic. Musk envisions a futuristic network of subterranean chutes. The first one is already under construction.

We Americans are living a paradox. We’re keenly suspicious of big corporations — just look at how many voters thrilled to Bernie Sanders’s jeremiads about a corrupt oligarchy, or at polls that show a growing antipathy to capitalism — and yet we’re ever more reliant on them. They’re in turn bolder, egged on by the ineptness and inertia of Washington.

“When there’s a vacuum, there are going to be entities that step into it,” Chris Lehane told me. “This is an example of that.” Lehane is the head of global policy for Airbnb, which ran a commercial this month that alluded (without profanity) to Trump’s “shithole countries” remark and promoted those very places as travel destinations. It spoke to another vacuum — a moral one — being filled by companies, many of which are more high-minded, forward-thinking and solutions-oriented than the federal government on immigration, L.G.B.T. rights, climate change and more. (...)

Corporations have long been engines of innovation, sources of philanthropy and even laboratories for social policy. But the situation feels increasingly lopsided these days. I’m struck, for example, by the intensity of conversation over the last year about what Facebook and its algorithms should do to stanch the destructive tribalism in American life. It’s true that Mark Zuckerberg’s monster has badly aggravated that dynamic, in part by allowing its platform to be manipulated by bad actors. But so has Washington, and we seem less hopeful that it’s redeemable and likely to shepherd us to a healthier place.

Although government spending has hardly dried up — the budget deal signed by Trump on Friday attests to that — and the federal debt continues to metastasize, there’s a questionable commitment to scientific research, leaving private actors to call many of the shots.

But companies’ primary concern isn’t public welfare. It’s the bottom line. I say that not to besmirch them but to state the obvious. Their actions will never deviate too far from their proprietary interests, and while tapping their genius and money is essential, outsourcing too much to them is an abdication of government’s singular role. What’s best for Amazon and what’s best for humanity aren’t one and the same.

by Frank Bruni, NY Times |  Read more:
Image: Ben Wiseman
[ed. See also: We’ve Trashed the Oceans; Now We're Turning Space Into a Junkyard for Billionaires]

Heart Stents Are Useless for Most Stable Patients

Lots of Americans have chest pain because of a lack of blood and oxygen reaching the heart. This is known as angina. For decades, one of the most common ways to treat this was to insert a mesh tube known as a stent into arteries supplying the heart. The stents held the vessels open and increased blood flow to the heart, theoretically fixing the problem.

Cardiologists who inserted these stents found that their patients reported feeling better. They seemed to be healthier. Many believed that these stents prevented heart attacks and maybe even death. Percutaneous coronary intervention, the procedure by which a stent can be placed, became very common.

Then in 2007, a randomized controlled trial was published in The New England Journal of Medicine. The main outcomes of interest were heart attacks and death. Researchers gathered almost 2,300 patients with significant coronary artery disease and proof of reduced blood flow to the heart. They assigned them randomly to a stent with medical therapy or to medical therapy alone.

They followed the patients for years. The result? The stents didn’t make a difference beyond medical treatment in preventing these bad outcomes.

This was hard to believe. So more such studies were conducted.

In 2012, the studies were collected in a meta-analysis in JAMA Internal Medicine. Three studies looked at patients who were stable after a heart attack. Five more examined patients who had stable angina or ischemia but had not yet had a heart attack. The meta-analysis showed that stents delivered no benefit over medical therapy for preventing heart attacks or death for patients with stable coronary artery disease.

Still, many cardiologists argued, stents improved patients’ pain. It improved their quality of life. Even if we didn’t reduce the outcomes that physicians cared about, these so-called patient-centered outcomes mattered, and patients who had stents reported improvements in these domains in studies.

The problem was that it was difficult to know whether the stents were leading to pain relief, or whether it was the placebo effect. The placebo effect is very strong with respect to procedures, after all. What was needed was a trial with a sham control, a procedure that left patients unclear whether they’d had a stent placed.

Many physicians opposed such a study. They argued that the vast experience of cardiologists showed that stents worked, and therefore randomizing some patients not to receive them was unethical. Others argued that exposing patients to a sham procedure was also wrong because it left them subject to potential harm with no benefit. More skeptical observers might note that some doctors and hospitals were also financially rewarded for performing this procedure.

Regardless, such a trial was done, and the results were published this year. (...)

There was no difference in the outcomes of interest between the intervention and placebo groups.

Stents didn’t appear even to relieve pain.

Some caveats: All the patients were treated rigorously with medication before getting their procedures, so many had improved significantly before getting (or not getting) a stent. Some patients in the real world won’t stick to the intensive medical therapies, so there may be a benefit from stents for those patients (we don’t know). The follow-up was only at six weeks, so longer-term outcomes aren’t known. These results also apply only to those with stable angina. There may be more of a place for stents in patients who are sicker, who have disease in more than one blood vessel, or who fail to respond to medical therapy.

But many, if not most, patients probably don't need them. This is hard for patients and physicians to wrap their heads around because, in their experience, patients who got stents got better. They seemed to receive a benefit from the procedure. But that benefit appears to be because of the placebo effect, not any physical change from improved blood flow. (...)

Even in this study, 2 percent of patients had a major bleeding event. Remember that hundreds of thousands of stents are placed every year. Stents are also expensive. They can add at least $10,000 to the cost of therapy.

Stents still have a place in care, but much less of one than we used to think. Yet many physicians as well as patients will still demand them, pointing out that they lead to improvements in some people, even if that improvement is from a placebo effect.

by Aaron E. Carroll, NY Times | Read more:
Image: Jack Sachs
[ed. See also:  Powerless Placebos]

Is Tech Dividing America?

When Americans consider how technology has changed their lives, they tend to focus on how the internet and smartphones have altered how they watch TV, connect with friends, and shop. But those changes pale in comparison to how technology has already restructured the economy, shaking up the workforce and shifting opportunity to tech-centric urban hubs. As artificial intelligence quickly moves from fiction to daily reality, that revolution will arguably become much more consequential.

Economists broadly agree that technology will continue to be an engine of economic growth. But it also will upend old certainties about who benefits. Already, we can see a growing inequality gap, with winners and losers by region and workplace. The next wave of changes, handled badly, could make this gap even more extreme.

MIT researcher David Autor has been at the center of that conversation for two decades now. One of the world’s premier labor economists, Autor has helped drive a reconsideration of how Americans are really coping with the changes transforming their workplaces. And he's trying to take the conversation beyond the ivory tower: His 2016 TED talk about the surprising impact of automation, “Why Are There Still So Many Jobs?” has been viewed more than 1.3 million times.

Autor's interest comes from seeing these changes at the ground level: Fresh out of Tufts University with a degree in psychology, he ended up running a Silicon Valley-sponsored computer-training program for at-risk children and adults at San Francisco’s Glide Memorial Church, a counterculture hot spot. When he headed back to Harvard’s John F. Kennedy School of Government for an M.A. and then Ph.D. in public policy, he brought a newly keen interest in figuring out how the technologies being pumped into the labor market would shape what it means to be a worker in the United States. (...)

We’ve just started to think seriously as a nation about who wins and who doesn’t as the American workplace automates. In 1998, you co-wrote a paper that showed the rise of technology in the workplace was actually proving to be good for higher-skilled workers. Is that a fair read?

What that paper suggested was that it's definitely the case that automation is raising the demand for skilled labor. And the work that I've done since has been about which activities are complemented by automation and which are displaced, pointing out that on the one hand, there were tasks that were creative and analytical, and on the other, tasks that required dexterity and flexibility, which were very difficult to automate. So the middle of the skill distribution, where there are well understood rules and procedures, is actually much more susceptible to automation.

So, there's a hollowing out of middle-class jobs, but high-skilled, high-wage workers and the low-skilled low-wage workers remain? Is that what we're seeing play out right now in the U.S.?

That polarization of jobs definitely reduced the set of opportunities for people who don't have a college degree. People who have a high school or lower degree, it used to be they were in manufacturing, in clerical and administrative support. Now, increasingly, they're in cleaning, home health, security, etc. Ironically, we've automated some of the stuff that was more interesting for us, and we're left with some of the stuff that is less interesting. (...)

“Automation anxiety" is overblown, you’ve said. How anxious should American workers be?

People are talking about how robots are going to take all the jobs, but we're in a time of very dramatic employment growth and have been for a decade. Job growth is robust throughout western Europe, as well. So, we're certainly not in a period where there's any outward sign that work is coming to an end. We have had two centuries of people worrying very vocally about how automation will make us superfluous. I don't think it's made us superfluous, and I don't think it's on the verge of making us superfluous.

The greater concern is not about the number of jobs but whether those jobs will pay decent wages and people will have sufficient skills to do them. That's the great challenge. It's never been a better time to be a highly educated worker in the western world. But there hasn't been a worse time to be a high school dropout or high school graduate. (...)

In just the past year, Silicon Valley as an industry has developed a good and evil reputation. It’s cutting-edge and pays well, but it sometimes disrupts the world without seeming to care too much about the consequences, a la Uber. Which is it?

I don't think it's either. It creates a lot of benefits as well as creating real challenges: It's definitely the case that it's raising total GDP, but has been very dis-equalizing. It is up to our institutions to deal well with that or not.

Some countries have done a much better job at sharing the gains and making sure that everybody's bought in. Others have been much more social Darwinists about it, and the U.S. is very much at the extreme of that among industrialized economies, of going, ‘rah rah,’ to the winners and ‘too bad for you,’ to the losers.

How, exactly, are other countries good at it?

Countries that, I think, are doing really well with this—Norway, Sweden, Denmark, Germany, Switzerland, Austria—have very good educational systems that prepare people not just for highly educated, Ph.D.-level jobs, but also very good vocational, technical education systems.

But there’s also the notion that there are multiple stakeholders in the economy, not just shareholders. Workers have more voice, and that makes people less apprehensive about these changes because they expect that if there are gains, they'll get a piece of them, where in the U.S. a lot of people think, ‘Well, there might be a gain, but I'll be worse off.’ And they're probably right.

Other countries have made it a lot easier for people to feel comfortable about the changes they're bringing on themselves. I think that’s one diagnosis of the current U.S. political system.

You’ve noted in your work that LBJ created a “Commission on Technology, Automation, and Economic Progress” way back in 1964. I didn’t know whether to be encouraged by that or saddened by that—that we’ve been talking about these questions for a long time and don’t seem to have any better answers.

There are two schools of thought that you hear often. One is, ‘the sky is falling, the robots are coming for our jobs, we're all screwed because we've made ourselves obsolete.’ The other version you also hear a lot is, ‘We've been through things like this in the past, it's all worked out fine, it took care of itself, don't worry.’ And I think both of these are really wrong.

I've already indicated why I think the first view is wrong. The reason I think the second view is wrong is because I don't think it took care of itself. Countries have very different levels of quality of life, institutional quality, of democracy, of liberty and opportunity, and those are not because they have different markets or different technologies. It's because they've made different institutional arrangements. Look at the example of Norway and Saudi Arabia, two oil-rich countries. Norway is a very happy place. It's economically mobile with high rates of labor force participation, high rates of education, good civil society. And Saudi Arabia is an absolute monarchy that has high standards of living, but it's not a very happy place because they've stifled innovation and individual freedom. Those are two examples of taking the same technology, which is oil wealth, and either squandering it or investing it successfully.

I think the right lesson from history is that this is an opportunity. Things that raise GDP and make us more productive, they definitely create aggregate wealth. The question is, how do we use that wealth well to have a society that's mobile, that's prosperous, that's open? Or do we use it to basically make some people very wealthy and keep everyone else quiet? So, I think we are at an important juncture, and I don't think the U.S. is dealing with it especially well. Our institutions are very much under threat at a time when they're arguably most needed.

by Nancy Scola, Politico |  Read more:
Image: Porter Gifford

How Vans Got Cool Again

Back in 2002, when Rian Pozzebon, who was then a relative unknown in the sneaker community, got the offer to join Vans and help rebuild the brand’s ailing skate shoe program with his longtime friend and colleague Jon Warren, he had one big question: “Will they let us mess with the classics?”

At the time, Vans wasn’t particularly interested in core models like the Slip-On, Old Skool, and Authentic. “The classics just kind of existed,” says Pozzebon. “But they weren’t pushed.” Instead, they languished—in just a few basic colors—in Vans stores.

The company's focus was directed elsewhere, on newer styles. After riding the wave of the ‘90s skateboarding boom, Vans faced new competition from younger skate shoe brands like DC and Osiris. These companies—born only a few years earlier—favored a chunkier, more tech-forward silhouette (a word the fashion community uses to describe the shape of a shoe). Vans’ retro styling, by comparison, felt stale. By the early years of the new millennium, nearly a decade of sustained growth had fallen off—as had customers’ goodwill.

“I just never took it seriously as a lifestyle shoe. At all,” Brian Trunzo, senior menswear trend forecaster at WGSN, says of his feelings about Vans at the time. Beset by new competition in its core skate market and ignored by trendsetting sneakerheads who preferred the Air Force 1 or Adidas Superstar, Vans seemed on the verge of slipping into irrelevance.

And here was Pozzebon—not even an employee yet—asking if he could look backwards instead of forwards to inform his design decisions. It was a bold question, to say the very least. And yet. “When we came and interviewed they were like, ‘Whatever it takes. Whatever you need,’” he recalls. Whether or not he fully knew it at the time, he’d landed on something that would prove crucial for the brand’s future success.

“It was that vintage piece,” says Pozzebon, now the company's Lifestyle Footwear Design Director. “At the time, Vans didn’t necessarily know what they really had.”

By focusing on that element of the company’s DNA, Pozzebon and his design team led Vans through a turnaround that was nothing short of staggering. The brand has become a staple of American footwear culture, on a level with iconic brands like Converse (which is twice as old) and Nike (which is nearly 10 times as large). Vans are worn by celebrities and fashion influencers, the jeans-and-T-shirt crowd who rarely pay attention to what's stylish, and teenagers and toddlers alike. What makes it all the more impressive—especially in an age of unprecedented technological innovation—is that it leaned on just five classic styles to drive its cultural relevance, which arguably has never been higher, as well as its sales, which have inarguably never been higher.

by Jonathan Evans, Esquire |  Read more:
Image: Vans

Sunday, February 11, 2018


Wassily Kandinsky, Lyrisches (1911)
via:

Why the Culture Wins

Many years ago, a friend of mine who knows about these sorts of things handed me a book and said “Here, you have to read this.” It was a copy of Iain M. Banks’s Use of Weapons.

I glanced over the jacket copy. “What’s the Culture?” I asked.

“Well,” she said, “it’s kind of hard to explain.” She settled in for what looked to be a long conversation.

“In Thailand, they have this thing called the Dog. You see the Dog wherever you go, hanging around by the side of the road, skulking around markets. The thing is, it’s not a breed, it’s more like the universal dog. You could take any dog, of any breed, release it into the streets, and within a couple of generations it will have reverted to the Dog. That’s what the Culture is, it’s like the evolutionary winner of the contest between all cultures, the ultimate basin of attraction.”

“I’m in,” I said.

“Oh, and there’s this great part where the main character gets his head cut off – or I guess you would say, his body cut off – and so the drone gives him a hat as a get-well present…”

In the end, I didn’t love Use of Weapons, but I liked it enough to pick up a copy of Banks’s previous book, Consider Phlebas, and read it through. Here I found a much more satisfactory elaboration of the basic premise of his world. For me, it established Banks as one of the great visionaries of late 20th century science fiction.

Compared to the other “visionary” writers working at the time – William Gibson, Neal Stephenson – Banks is underappreciated. This is because Gibson and Stephenson in certain ways anticipated the evolution of technology, and considered what the world would look like as transformed by “cyberspace.” Both were crucial in helping us to understand that the real technological revolution occurring in our society was not mechanical, but involved the collection, transmission and processing of information.

Banks, by contrast, imagined a future transformed by the evolution of culture first and foremost, and by technology only secondarily. His insights were, I would contend, more profound. But they are less well appreciated, because the dynamics of culture surround us so completely, and inform our understanding of the world so entirely, that we struggle to find a perspective from which we can observe the long-term trends.

In fact, modern science fiction writers have had so little to say about the evolution of culture and society that it has become a standard trope of the genre to imagine a technologically advanced future that contains archaic social structures. The most influential example of this is undoubtedly Frank Herbert’s Dune, which imagines an advanced galactic civilization, but where society is dominated by warring “houses,” organized as extended clans, all under the nominal authority of an “emperor.” Part of the appeal obviously lies in the juxtaposition of a social structure that belongs to the distant past – one that could be lifted, almost without modification, from a fantasy novel – and futuristic technology.

Such a postulate can be entertaining, to the extent that it involves a dramatic rejection of Marx’s view, that the development of the forces of production drives the relations of production (“The hand-mill gives you society with the feudal lord; the steam-mill, society with the industrial capitalist.”). Put in more contemporary terms, Marx’s claim is that there are functional relations between technology and social structure, so that you can’t just combine them any old way. Marx was, in this regard, certainly right, hence the sociological naiveté that lies at the heart of Dune. Feudalism with energy weapons makes no sense – a feudal society could not produce energy weapons, and energy weapons would undermine feudal social relations.

Dune at least exhibits a certain exuberance, positing a scenario in which social evolution and technological evolution appear to have run in opposite directions. The lazier version of this, which has become wearily familiar to followers of the science fiction genre, is to imagine a future that is a thinly veiled version of Imperial Rome. Isaac Asimov’s Foundation series, which essentially takes the “fall of the Roman empire” as the template for its scenario, probably initiated the trend. Gene Roddenberry’s Star Trek relentlessly exploited classical references (the twin stars, Romulus and Remus, etc.) and storylines. And of course George Lucas’s Star Wars franchise features the fall of the “republic” and the rise of the “empire.” What all these worlds have in common is that they postulate humans in a futuristic scenario confronting political and social challenges that are taken from our distant past.

In this context, what distinguishes Banks’s work is that he imagines a scenario in which technological development has also driven changes in the social structure, such that the social and political challenges people confront are new. Indeed, Banks distinguishes himself in having thought carefully about the social and political consequences of technological development. For example, once a society has semi-intelligent drones that can be assigned to supervise individuals at all times, what need is there for a criminal justice system? Thus in the Culture, an individual who commits a sufficiently serious crime is assigned – involuntarily – a “slap drone,” who simply prevents that person from committing any crime again. Not only does this reduce recidivism to zero, the prospect of being supervised by a drone for the rest of one’s life also serves as a powerful deterrent to crime.

This is an absolutely plausible extrapolation from current trends – even just looking at how ankle monitoring bracelets work today. But it also raises further questions. For instance, once there is no need for a criminal justice system, one of the central functions of the state has been eliminated. This is one of the social changes underlying the political anarchism that is a central feature of the Culture. There is, however, a more fundamental postulate. The core feature of Banks’s universe is that he imagines a scenario in which technological development has freed culture from all functional constraints – and thus, he imagines a situation in which culture has become purely memetic. This is perhaps the most important idea in his work, but it requires some unpacking.

by Joseph Heath, Sci Phi Journal |  Read more:
Image: via
[ed. I'm not a sci-fi enthusiast in general (although I love a lot of Neal Stephenson's work) and haven't read Banks but might check him out if this is the premise of his Culture series. See also: Use of Weapons and 30 years of Culture: What Are the Top Five Iain M Banks Novels?]

Friday, February 9, 2018


Totoya Hokkei, Surimono
via:

Everything You Love Will Be Eaten Alive

Here are two different visions for what a city ought to be. Vision 1: the city ought to be a hub of growth and innovation, clean, well-run, high-tech, and business-friendly. It ought to attract the creative class, the more the better, and be a dynamic contributor to the global economy. It should be a home to major tech companies, world-class restaurants, and bold contemporary architecture. It should embrace change, and be “progressive.” Vision 2: the city ought to be a mess. It ought to be a refuge for outcasts, an eclectic jumble of immigrants, bohemians, and eccentrics. It should be a place of mystery and confusion, a bewildering kaleidoscope of cultures and classes. It should be a home to cheap diners, fruit stands, grumpy cabbies, and crumbling brownstones. It should guard its traditions, and be “timeless.”

It should be immediately obvious not only that these views are in tension, but that the tension cannot ever be resolved without one philosophy triumphing over the other. That’s because the very things Vision 2 thinks make a city worthwhile are the things Vision 1 sees as problems to be eliminated. If I believe the city should be run like a business, then my mission will be to clear up the mess: to streamline everything, to eliminate the weeds. If I’m a Vision 2 person, the weeds are what I live for. I love the city because it’s idiosyncratic, precisely because things don’t make sense, because they are inefficient and dysfunctional. To the proponent of the progressive city, a grumpy cabbie is a bad cabbie; we want friendly cabbies, because we want our city to attract new waves of innovators. To the lover of the City of Mystery, brash personalities are part of what adds color to life. In the battle of the entrepreneurs and the romantics, the entrepreneurs hate what the romantics love, and the romantics hate what the entrepreneurs love. In the absence of a Berlin-like split, there can be no peace accord; it must necessarily be a fight to the death. What’s more, neither side is even capable of understanding the other: a romantic can’t see why anyone would want to clean up the dirt that gives the city its poetry, whereas an entrepreneur can’t see why anyone would prefer more dirt to less dirt.

Vanishing New York: How A Great City Lost Its Soul, based on the blog of the same name, is a manifesto for the Romantic Vision of the city, with Michael Bloomberg cast as the chief exponent of the Entrepreneurial Vision. “Nostalgic” will probably be the word most commonly used to capture Jeremiah Moss’s general attitude toward New York City, and Moss himself embraces the term and argues vigorously for the virtues of nostalgia. But I think in admitting to being “nostalgic,” he has already ceded too much. It’s like admitting to being a “preservationist”: they accuse you of being stuck in the past, and you reply “Damn right, I’m stuck in the past. The past was better.” But this isn’t simply about whether to preserve a city’s storied past or charge forward into its gleaming future. If that were the case, the preservationists would be making an impossible argument, since we’re heading for the future whether they like it or not. It’s also about different conceptions of what matters in life. The entrepreneurs want economic growth, the romantics want jazz and sex and poems and jokes. To frame things as a “past versus future” divide is to grant the entrepreneurs their belief that the future is theirs.

Moss’s book is about a city losing its “soul” rather than its “past,” and he spends a lot of time trying to figure out what a soul is and how a city can have one or lack one. He is convinced that New York City once had one, and increasingly does not. And while it is impossible to identify precisely what the difference is, since the quality is of the “you know it when you see it” variety, Moss does describe what the change he sees actually means. Essentially, New York City used to be a gruff, teeming haven for weirdos and ethnic minorities. Now, it is increasingly full of hedge fund managers, rich hipsters, and tourists. Tenements and run-down hotels have been replaced with glass skyscrapers full of luxury condos. Old bookshops are shuttered, designer clothes stores in their place. Artisanal bullshit is everywhere, meals served on rectangular plates. You used to be able to get a pastrami and a cup of coffee for 50 cents! What the hell happened to this place? (...)

Moss loves a lot of places, and because New York City is transitioning from being a city for working-class people to a city for the rich, he is constantly being wounded by the disappearance of beloved institutions. CBGB, the dingy punk rock music club where the Ramones and Patti Smith got their start, is forced out after its rent is raised to $35,000 a month. Instead, we get a commemorative CBGB exhibit at the Met, with a gift shop selling Sid Vicious pencil sets and thousand-dollar handbags covered in safety pins. The club itself becomes a designer clothing store selling $300 briefs. The ornate building that once housed the socialist Jewish Daily Forward newspaper, the exterior of which featured bas-relief sculptures of Marx and Engels, is converted to luxury condos. Its ethnic residents largely squeezed out, bits of Little Italy are carved off and rebranded as “Nolita” for the purpose of real estate brochures, since—as one developer confesses—the name “Little Italy” still connotes “cannoli.” A five-story public library in Manhattan, home to the largest collection of foreign-language books in the New York library system, is flattened and replaced with a high-end hotel (a new library is opened in the hotel’s basement, with hardly any books). Harlem’s storied Lenox Lounge is demolished, its stunning art-deco facade gone forever. Rudy Giuliani demolishes the Coney Island roller coaster featured in Annie Hall. Cafe Edison, a Polish tea house (see photo p. 32-33), is evicted and replaced with a chain restaurant called “Friedman’s Lunch,” named after right-wing economist Milton Friedman. (I can’t believe that’s true, but it is.) Judaica stores, accordion repairmen, auto body shops: all see their rent suddenly hiked from $3,000 to $30,000, and are forced to leave. All the newsstands in the city are shuttered and replaced; they go from being owner-operated to being controlled by a Spanish advertising corporation called Cemusa. 
Times Square gets Disneyfied, scrubbed of its adult bookstores, strip joints, and peep shows. New York University buys Edgar Allan Poe’s house and demolishes it. (“We do not accept the views of preservationists who say nothing can ever change,” says the college’s president.) (...)

The greed of landlords and developers is a prime reason that New York is steadily transforming into “Disneyland for billionaires.” But government policy also bears direct responsibility. Throughout New York history, city officials like Robert Moses have either neglected or waged active war against the ethnic populations that stood in the way of development. (“Look on the bright side… the city got rid of a million and a half undesirables,” a mayoral aide observed about the fires that destroyed countless tenements in the 1970s, allegedly partially due to the city’s intentional neglect of fire services.) But Michael Bloomberg was explicit in his commitment to making New York a city for the rich. Bloomberg’s city planning director, Standard Oil heir Amanda Burden, stated the administration’s aspirations: “What I have tried to do, and think I have done, is create value for these developers, every single day of my term.” Bloomberg himself was even more frank, calling New York City a “luxury product,” and saying:

“We want rich from around this country to move here. We love the rich people.”

“If we can find a bunch of billionaires around the world to move here, that would be a godsend… Wouldn’t it be great if we could get all the Russian billionaires to move here?”

“If we could get every billionaire around the world to move here it would be a godsend that would create a much bigger income gap.”
(...)

The effort to replace poor people with rich people is often couched in what Moss calls “propaganda and doublespeak.” One real estate investment firm claims to “turn under-achieving real estate into exceptional high-yielding investments,” without admitting that this “under-achieving real estate” often consists of people’s family homes. (Likewise, people often say things like “Oh, nobody lives there” about places where… many people live.) One real estate broker said they aspired to “a well-cultivated and curated group of tenants, and we really want to help change the neighborhood.” “Well-cultivated” almost always means “not black,” but the assumption that neighborhoods actually need to be “changed” is bad enough on its own.

In fact, one of the primary arguments used against preservationists is the excruciating two-word mantra: cities change. Since change is inevitable and desirable, those who oppose it are irrational. Why do you hate change? You don’t believe that change is good? Because it’s literally impossible to stop change, the preservationist is accused of being unrealistic. Note, however, just how flimsy this reasoning is: “Well, cities change” is as if a murderer were to defend himself by saying “Well, people die.” The question is not: is change inevitable? Of course change is inevitable. The question is what kinds of changes are desirable, and which should be encouraged or inhibited by policy. What’s being debated is not the concept of change, but some particular set of changes.

Even “gentrification” doesn’t describe just one thing. It’s a word I hate, because it captures a lot of different changes, some of which are insidious and some of which seem fine. There are contentious debates over whether gentrification produces significant displacement of original residents, and what its economic benefits might be for those residents. The New York Times chided Moss, calling him “impeded by myopia,” for failing to recognize that those people who owned property in soon-to-be-gentrified areas could soon be “making many millions of dollars.” But that is exactly the point: Moss is concerned with the way that the pursuit of many millions of dollars erodes the very things that make a city special, that give it life and make it worth spending time in. A pro-gentrification commentator, in a debate with Moss, said that he didn’t really see any difference, because “people come for the same reason they always have: to make as much money as possible.” That’s exactly the conception that Moss is fighting. People came to New York, he says, because it was a place worth living in, not because they wanted to make piles of money. (...)

I have to confess, I differ a lot from Moss in my conception of what a good city should be like. I have always found New York City to be something of an armpit, and not because it’s full of high-priced condos. Many of the people Moss adores, the bohemians and artists, I find fairly intolerable. Moss is a poet, and wants a city of poetry. I am not a poet, I generally detest poetry. Moss has a strange fondness for mean New York, the New York that told everybody else to fuck off. I thought that New York was kind of an asshole.

But that’s okay: the philosophy of Vanishing New York is that cities shouldn’t all be the same, that they should have different attitudes toward life and different cultures. If I am more New Orleans than Brooklyn, that means we have a diverse world in which New Orleans and Brooklyn are very different places. The one thing that we should all be scared of, wherever we live, is the collapse of those differences, the streamlining and homogenizing of everything.

And yet the logic of capitalism sort of demands that this occur. If efficiency is your goal, then you’re going to have chain restaurants. They’re just more efficient. If you must perpetually grow and grow, then you’re going to have to demolish a lot of things that people dearly love. If everyone embraces the pursuit of financial gain, then landlords are never going to cut tenants a break merely because their business is a neighborhood institution. In a free market world, everything you love will be eaten alive, unless you’re rich.

by Nathan J. Robinson, Current Affairs |  Read more:
Image: Jeremiah Moss

Even What Doesn’t Happen is Epic

Science fiction isn’t new to China, as Cixin Liu explains in Invisible Planets, an introduction to Chinese sci-fi by some of its most prominent authors, but good science fiction is. The first Chinese sci-fi tales appeared at the turn of the 20th century, written by intellectuals fascinated by Western technology. ‘At its birth,’ Cixin writes, science fiction ‘became a tool of propaganda for the Chinese who dreamed of a strong China free of colonial depredations’. One of the earliest stories was written by the scholar Liang Qichao, a leader of the failed Hundred Days’ Reform of 1898, and imagined a Shanghai World’s Fair, a dream that didn’t become a reality until 2010. Perhaps surprisingly, given the degree of idealistic fervour that followed Mao’s accession, very little utopian science fiction was produced under communism (in the Soviet Union there was plenty, at least initially). What little there was in China was written largely for children and intended to educate; it stuck to the near future and didn’t venture beyond Mars. By the 1980s Chinese authors had begun to write under the influence of Western science fiction, but their works were suppressed because they drew attention to the disparity in technological development between China and the West. It wasn’t until the mid-1990s, when Deng’s reforms began to bite, that Chinese science fiction experienced what Cixin calls a ‘renaissance’.

Cixin himself has been at the forefront of the scene since the 1990s. He is the first Asian writer to receive a Hugo award (in 2015), and the author whose work best captures the giddying, libidinous pace of the Chinese economic boom. His monumental Three-Body Trilogy – first published between 2006 and 2010, and recently translated into English by Ken Liu, a Chinese-American sci-fi writer – is Chinese science fiction’s best-known work. Barack Obama is a fan, and the forthcoming movie adaptations are already being described as ‘China’s Star Wars’. The trilogy concerns the catastrophic consequences of humanity’s attempt to make contact with extraterrestrials (it turns out that the reason we haven’t heard from aliens yet is that we’re the only species thick enough to reveal our own location in the universe). It is one of the most ambitious works of science fiction ever written. The story begins during the Cultural Revolution and ends 18,906,416 years into the future. There is a scene in ancient Byzantium, and a scene told from the perspective of an ant. The first book is set on Earth, though several of its scenes take place in virtual reality representations of Qin dynasty China and ancient Egypt; by the end of the third book, the stage has expanded to encompass an intercivilisational war that spans not only the three-dimensional universe but other dimensions too.

The grand scale of Cixin’s story is supported by an immense quantity of research. He graduated from the North China University of Water Conservancy and Electric Power in 1988 and worked, until his literary career took off, as a computer engineer at a power plant in Shanxi province. That training might sound narrow, but his science fiction, which situates itself at the diamond end of the ‘hard’ to ‘soft’ scale (‘hard sci-fi’ has a lot of science in it, ‘soft sci-fi’ doesn’t), demonstrates a knowledge of particle physics, molecular biology, cutting-edge computer science and much more besides. The Three-Body Problem, the first volume of the trilogy, takes its title from an esoteric problem of orbital mechanics to do with predicting the motions of three objects whose gravitational fields intersect. It’s relevant because the alien race the humans recklessly make contact with come from a planet that has three suns, which causes serious climate change issues. Brief ‘stable eras’, with regular nights and days, give way without warning to ‘chaotic eras’, during which the days can last years and a sun can be so close that it desiccates everything its rays fall on. The ‘three-body problem’ is the reason the Trisolarans are delighted to find a planet – ours – that has just one sun and a predictable climate. Naturally, they want to steal it from us. Unfortunately for them our planet is four light years away, which gives us four hundred years to prepare for their invasion.

New technology and the science behind it are always well explained (though never boringly) by Cixin. The best bits in his books are set pieces that would be hallucinatory, or surreal, were it not that everything is described with such scientific authority. One of the most visionary scenes comes towards the end of The Three-Body Problem, when the Trisolarans develop ‘sophons’: tiny robots made from protons that have been ‘unfolded’ into two dimensions, according to principles derived from superstring theory. The plan is to send them to Earth to confuse the results from particle accelerator experiments and report news of humanity back to Trisolaris. But attempts at unfolding the proton, using a giant particle accelerator, go wrong. On the first try, the Trisolarans go too far and unfold it into one dimension, creating an infinitely thin line 1500 light-hours long that breaks apart and drifts back down to Trisolaris as ‘gossamer threads that flickered in and out of existence’. On the second attempt the proton is unfolded into three dimensions. Colossal geometric solids – spheres, tetrahedrons, cones, tori, solid crosses and Möbius strips – fill the sky, ‘as though a giant child had emptied a box of building blocks in the firmament’. Then they melt and turn into a single glaring eye, which transforms into a parabolic mirror that focuses a condensed beam of sunlight onto the Trisolaran capital city, setting it ablaze.

Besides theoretical physics, Cixin appears to have read widely in history, political theory, game theory, sociology, even aesthetics. The main character in the second volume, The Dark Forest, isn’t a scientist but a sociologist called Luo Ji who comes up with the ‘Dark Forest theory’, according to which the universe is like a forest ‘patrolled by numberless and nameless predators’. Any planet that reveals its location is prey; survival depends on stealth. Luo Ji is appointed by the UN as one of the Wallfacers, a small group of individuals charged with formulating plans to combat the Trisolarans. They are called Wallfacers after a Buddhist meditation technique that involves staring in silence at a wall, because in order to evade the sophons they work alone and don’t have to reveal the details of their plan to anyone, not even the authorities who set up the programme. Most of the plans aren’t put into action: the former US Defense Secretary Frederick Tyler, for example, has the idea of offering the Trisolarans a Trojan horse: a hydrogen bomb hidden in a mountain-sized shard of ice (in the trilogy, even what doesn’t happen is epic). Luo Ji’s plan involves threatening to broadcast the location of Trisolaris to the universe, and it succeeds at least in forestalling humankind’s destruction. In the final novel, Death’s End, it emerges that there are civilisations even more technologically advanced than the Trisolarans: they monitor the universe for signs of intelligent life and wipe out any potentially threatening solar systems with the push of a button – they see it as a cleaning job.

This pessimistic view of the universe, in which civilisations must exist in isolation for the sake of their own safety, illustrates a point that Cixin makes throughout the series: that virtuous behaviour is a luxury, conditional on the absence of threat. The Trisolarans aren’t bad, they just want to survive. After a devastating confrontation between Earth’s space fleet and Trisolaran weaponry, a handful of Earth ships escape into space. The plan is to re-establish civilisation away from the solar system, but their crews soon realise that the ships’ combined supplies aren’t sufficient to get all of them to their destination. The first to act on this realisation is an American ship called Bronze Age, which nukes the others, harvests their supplies and continues on its way. Early in Death’s End, Bronze Age is recalled to Earth. Humanity hasn’t been destroyed, thanks to Luo Ji, and is now living in peace and prosperity. The men and women aboard Bronze Age think they’re going to be welcomed as heroes but when they get back home they’re charged with crimes against humanity. The actions of a chaotic era – Earth’s, not Trisolaris’s – are judged by the standards of a stable era. The same thing happens to Luo Ji. Earth enjoys stability because Luo Ji is waiting by a button, ready to broadcast Trisolaris’s location. But after he retires he is charged with genocide: in order to test his Dark Forest theory, Earth transmitted the location of another (presumably inhabited) solar system, which was subsequently destroyed. Immediately after Luo’s removal from office, Trisolaris attacks and the whole of humankind is banished to a gulag in Australia, where it descends into brutal civil war.

In this context, radical political movements are shown to be self-deluding. They appear during stable eras but are made irrelevant, or are transformed past recognition, by real crisis. Cixin wants us to know that communism, especially, sucks.

by Nick Richardson, LRB |  Read more:
Image: Amazon
[ed. I thought The Three-Body Problem interesting until about the last quarter of the book, when all the physics got too overwhelming.]