Wednesday, February 14, 2018


Brenda Cablayan, Weke Road
via:

New Study Finds Sea Level Rise Accelerating

Global sea level rise has been accelerating in recent decades, rather than increasing steadily, according to a new study based on 25 years of NASA and European satellite data.

This acceleration, driven mainly by increased melting in Greenland and Antarctica, has the potential to double the total sea level rise projected by 2100 when compared to projections that assume a constant rate of sea level rise, according to lead author Steve Nerem. Nerem is a professor of Aerospace Engineering Sciences at the University of Colorado Boulder, a fellow at Colorado's Cooperative Institute for Research in Environmental Sciences (CIRES), and a member of NASA's Sea Level Change team.

If the rate of ocean rise continues to change at this pace, sea level will rise 26 inches (65 centimeters) by 2100 -- enough to cause significant problems for coastal cities, according to the new assessment by Nerem and colleagues from NASA's Goddard Space Flight Center in Greenbelt, Maryland; CU Boulder; the University of South Florida in Tampa; and Old Dominion University in Norfolk, Virginia. The team, driven to understand and better predict Earth’s response to a warming world, published their work Feb. 12 in the journal Proceedings of the National Academy of Sciences.

"This is almost certainly a conservative estimate," Nerem said. "Our extrapolation assumes that sea level continues to change in the future as it has over the last 25 years. Given the large changes we are seeing in the ice sheets today, that's not likely."

Rising concentrations of greenhouse gases in Earth’s atmosphere increase the temperature of air and water, which causes sea level to rise in two ways. First, warmer water expands, and this "thermal expansion" of the ocean has contributed about half of the 2.8 inches (7 centimeters) of global mean sea level rise we've seen over the last 25 years, Nerem said. Second, melting land ice flows into the ocean, also increasing sea level across the globe.

These increases were measured using satellite altimeter measurements since 1992, including the Topex/Poseidon, Jason-1, Jason-2 and Jason-3 satellite missions, which have been jointly managed by multiple agencies, including NASA, Centre national d’études spatiales (CNES), European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), and the National Oceanic and Atmospheric Administration (NOAA). NASA’s Jet Propulsion Laboratory in Pasadena, California, manages the U.S. portion of these missions for NASA’s Science Mission Directorate. The rate of sea level rise in the satellite era has increased from about 0.1 inch (2.5 millimeters) per year in the 1990s to about 0.13 inches (3.4 millimeters) per year today.

"The Topex/Poseidon/Jason altimetry missions have been essentially providing the equivalent of a global network of nearly half a million accurate tide gauges, providing sea surface height information every 10 days for over 25 years," said Brian Beckley, of NASA Goddard, second author on the new paper and lead of a team that processes altimetry observations into a global sea level data record. "As this climate data record approaches three decades, the fingerprints of Greenland and Antarctic land-based ice loss are now being revealed in the global and regional mean sea level estimates."

by Katie Weeman and Patrick Lynch, NASA |  Read more:
Image: NASA

Financial Markets Have Taken Over the Economy

Ours is, without a doubt, the age of finance—of the supremacy of financial actors, institutions, markets, and motives in the global capitalist economy. Working people in the advanced economies, for instance, increasingly have their (pension) savings invested in mutual funds and stock markets, while their mortgages and other debts are turned into securities and sold to global financial investors (Krippner 2011; Epstein 2018). At the same time, the ‘under-banked’ poor in the developing world have become entangled, or if one wishes, ‘financially included’, in the ‘web’ of global finance through their growing reliance on micro-loans, micro-insurance and M-Pesa-like ‘correspondent banking’ (Keucheyan 2018; Mader 2018). More generally, individual citizens everywhere are invited to “live by finance”, in Martin’s (2002, p. 17) evocative words, that is: to organize their daily lives around ‘investor logic’, active individual risk management, and involvement in global financial markets. Citizenship and rights are being re-conceptualized in terms of universal access to ‘safe’ and affordable financial products (Kear 2012)—redefining Descartes’ philosophical proof of existence as: ‘I am indebted, therefore I am’ (Graeber 2011). Financial markets are opening ‘new enclosures’ everywhere, deeply penetrating social space—as in the case of so-called ‘viaticals’, the third-party purchase of the rights to future payoffs of life insurance contracts from the terminally ill (Quinn 2008); or of ‘health care bonds’ issued by insurance companies to fund health-care interventions; the payoff to private investors in these bonds depends on the cost-savings arising from the health-care intervention for the insurers. 
Or consider the ‘humanitarian impact bonds’ used to profitably finance physical rehabilitation services in countries affected by violence and conflict (Lavinas 2018); this latter instrument was created in 2017 by the International Committee of the Red Cross in cooperation with insurer Munich Re and Bank Lombard Odier.

Conglomerate corporate entities, which used to provide long-term employment and stable retirement benefits, were broken up under pressure from financial markets and replaced by disaggregated global commodity-chain structures (Wade 2018), operating according to the principles of ‘shareholder value maximization’ (Lazonick 2014)—with the result that today real decision-making power is often no longer found in corporate boardrooms, but in global financial markets. As a result, accumulation—real capital formation which increases overall economic output—has slowed down in the U.S., the E.U. and India, as profit-owners, looking for the highest returns, reallocated their investments to more profitable financial markets (Jayadev, Mason and Schröder 2018).

An overabundance of (cash) finance is used primarily to fund a proliferation of short-term, high-risk (potentially high-return) investments in newly developed financial instruments, such as derivatives—Warren Buffett’s ‘financial weapons of mass destruction’ that blew up the global financial system in 2007-8. Financial actors (ranging from banks, bond investors, and pension funds to big insurers and speculative hedge funds) have taken much bigger roles on much larger geographic scales in markets for items essential to development such as food (Clapp and Isakson 2018), primary commodities, health care (insurance), education, and energy. These same actors hunt the globe for ‘passive’ unearthed assets which they can re-use as collateral for various purposes in the ‘shadow banking system’—the complex global chains of credit, liquidity and leverage with no systemic regulatory oversight that have become as large as the regulated ‘normal’ banking system (Pozsar and Singh 2011; Gabor 2018) and enjoy implicit state guarantees (Kane 2013, 2015).

Pressed by the international financial institutions and their own elites, states around the world have embraced finance-friendly policies which included reducing cross-border capital controls, promoting liquid domestic stock markets, reducing the taxation of wealth and capital gains, and rendering their central banks independent from political oversight (Bortz and Kaltenbrunner 2018; Wade 2018; Chandrasekhar and Ghosh 2018). What is most distinctive about the present era of finance, however, is the shift in financial intermediation from banks and other institutions to financial markets—a shift from the ‘visible hand’ of regulated (oftentimes relationship-based) banking to the axiomatic ‘invisible hand’ of supposedly anonymous, self-regulating financial markets. This displacement of financial institutions by financial markets has had a pervasive influence on the motivations, choices and decisions made by households, firms and states as well as fundamental quantitative impacts on growth, inequality and poverty—far-reaching consequences which we are only beginning to understand.

Setting the Stage

... This view of the superiority of a ‘market-based’ financial system rests on Friedrich von Hayek’s grotesque epistemological claim that ‘the market’ is an omniscient way of knowing, one that radically exceeds the capacity of any individual mind or even the state. For Hayek, “the market constitutes the only legitimate form of knowledge, next to which all other modes of reflection are partial, in both senses of the word: they comprehend only a fragment of a whole and they plead on behalf of a special interest. Individually, our values are personal ones, or mere opinions; collectively, the market converts them into prices, or objective facts” (Metcalf 2017). After his ‘sudden illumination’ in 1936 that the market is the best possible and only legitimate form of social organisation, Hayek had to find an answer to the dilemma of how to reformulate the political and the social in a way compatible with the ‘rationality’ of the (unregulated) market economy. Hayek’s answer was that the ‘market’ should be applied to all domains of life. Homo œconomicus—the narrowly self-interested subject who, according to Foucault (2008, pp. 270-271), “is eminently governable” as he/she “accepts reality and responds systematically to systematic modifications artificially introduced into the environment”—had to be universalized. This, in turn, could be achieved by the financialization of ‘everything in everyday life’, because financial logic and constraints would help to impose ‘market discipline and rationality’ on economic decision-makers. After all, borrowers compete with one another for funds—and it is commercial (profit-oriented) banks and financial institutions which do the screening and selection of who gets funded. (...)

This Hayekian legacy underwrites, and quietly promotes, neoliberal narratives and discourses which advocate that authority—even sovereignty—be conceded to (in our case: financial) ‘markets’ which act as an ‘impartial and transparent judge’, collecting and processing information relevant to economic decision-making and coordinating these decisions, and as a ‘guardian’, impartially imposing ‘market discipline and market rationality’ on economic decision-makers—thus bringing about not just ‘socially efficient outcomes’ but social stability as well. This way, financialization constitutes progress—bringing “the advantages enjoyed by the clients of Wall Street to the customers of Wal-Mart”, as Nobel-Prize winning financial economist Robert Shiller (2003, p. x) writes. “We need to extend finance beyond our major financial capitals to the rest of the world. We need to extend the domain of finance beyond that of physical capital to human capital, and to cover the risks that really matter in our lives. Fortunately, the principles of financial management can now be expanded to include society as a whole.”

Attentive readers might argue that faith in the social efficiency of financial markets has waned—after all, Hayek’s grand epistemological claim was falsified, in a completely unambiguous manner, by the Great Financial Crisis of 2007-8 which brought the world economy to the brink of a systemic meltdown. Even staunch believers in the (social) efficiency of self-regulating financial markets, including most notably former Federal Reserve chair Alan Greenspan, had to admit a fundamental ‘flaw in their ideology’.

And yet, I beg to disagree. The economic ideology that created the crash remains intact and unchallenged. There has been no reckoning and no lessons were learned, as the banks and their shareholders were rescued, at the cost of just about everyone else in society, by massive public bail-outs, zero interest rates and unprecedented liquidity creation by central banks. Finance staged a major comeback—profits, dividends, salaries and bonuses in the financial industry have rebounded to where they were before, while the re-regulation of finance became stuck in endless political negotiations. Stock markets, meanwhile, notched record highs (before the downward ‘correction’ of February 2018), derivative markets have been doing rather well and under-priced risk-taking in financial markets has gathered steam (again), this time especially so in the largest emerging economies of China, India and Brazil (BIS 2017; Gabor 2018). In the process, global finance has become more concentrated and even more integral to capitalist production and accumulation. The reason why even the Great Financial Crisis left the supremacy of financial interests and logic unchallenged is simple: there is no acceptable alternative mode of social regulation to replace our financialized mode of co-ordination and decision-making.

‘Really-Existing’ Finance Capitalism

Financialization underwrites neoliberal narratives and discourses which emphasize individual responsibility, risk-taking and active investment for the benefit of the individual him-/herself—within the ‘neutral’ or even ‘natural’ constraints imposed by financial markets and financial norms of creditworthiness (Palma 2009; Kear 2012). This way, financialization morphs into a ‘technique of power’ to maintain a particular social order (Palma 2009; Saith 2011), in which the delicate task of balancing competing social claims and distributive outcomes is offloaded to the ‘invisible hand’ which operates through anonymous, ‘blind’ financial markets (Krippner 2005, 2011). This is perhaps illustrated most clearly by Michael Hudson (2012, p. 223):
“Rising mortgage debt has made employees afraid to go on strike or even to complain about working conditions. Employees became more docile in a world where they are only one paycheck or so away from homelessness or, what threatens to become almost the same thing, missing a mortgage payment. This is the point at which they find themselves hooked on debt dependency.”
Paul Krugman (2005) has called this a ‘debt-peonage society’—while J. Gabriel Palma (2009, p. 833) labelled it a ‘rentiers’ delight’ in which financialization sustains the rent-seeking practices of oligopolistic capital—as a system of discipline as well as exploitation, which is “difficult to reconcile with any acceptable definition of democracy” (Mann 2010, p. 18).

In this regime of social regulation, income and wealth became more concentrated in the hands of the rentier class (Saith 2011; Goda, Onaran and Stockhammer 2017), and as a result, productive capital accumulation gave way to the increased speculative use of the ‘economic surplus of society’ in pursuit of ‘financial-capital’ gains through asset speculation (Davis and Kim 2015). This took the wind out of the sails of the ‘real’ economy, and firms responded by holding back investment, using their profits to pay out dividends to their shareholders and to buy back their own shares (Lazonick 2014). Because the rich own most financial assets, anything that caused the value of financial assets to rise rapidly made the rich richer (Taylor, Ömer and Rezai 2015).

In the U.S., arguably the most financialized economy in the world, the result of this was extreme income polarization, unseen since WWII (Piketty 2014; Palma 2011). The ‘American Dream’, writes Gabriel Palma (2009, p. 842), was “high jacked by a rather tiny minority—for the rest, it has only been available on credit!” Because that is what happened: lower- and middle-income groups took on more debt to finance spending on health care, education or housing, spurred by the deregulation of financial markets and changes in the tax code which made it easier and more attractive for households with modest incomes to borrow in order to spend. This debt-financed spending stimulated an otherwise almost comatose U.S. economy by spurring consumption (Cynamon and Fazzari 2015). In the twenty years before the Great Financial Crash, debts and ‘financial excess’—in the form of the asset price bubbles in ‘New Economy’ stocks, real estate markets and commodity (futures) markets—propped up aggregate demand and kept the U.S. and global economy growing. “We have,” Paul Krugman (2013) concludes, “an economy whose normal condition is one of inadequate demand—of at least mild depression—and which only gets anywhere close to full employment when it is being buoyed by bubbles.”

But it is not just the U.S. economy: the whole world has become addicted to debt. The borrowings of global households, governments and firms have risen from 246% of GDP in 2000 to 327%, or $217 trillion, today—which is $70 trillion higher than 10 years ago. It means that for every extra dollar of output, the world economy cranks out almost 10 extra dollars of debt. Forget about the synthetic opioid crisis, the world’s most dangerous addiction is to debt. China, which has been the engine of the global economy during most of the post-2008 period, has been piling up debt to keep its growth process going—the IMF (2017) expects China’s non-financial sector debt to exceed 290% of its GDP in 2022, up from around 140% (of GDP) in 2008, warning that China’s current credit trajectory is “dangerous with increasing risks of a disruptive adjustment.” China’s insatiable demand for debt fueled growth, but also led to a property bubble and a rapidly growing shadow banking system (Gabor 2018)—raising concerns that the economy may face a hard landing and send shockwaves through the world’s financial markets. The next global financial catastrophe may be just around the corner.

by Servaas Storm, Naked Capitalism via: Institute for New Economic Thinking | Read more:
Image: uncredited (INET)

How to Die


You may complain, “But he was snatched away when I didn’t expect it.” Thus all are deceived by their own trust and a willed forgetfulness of mortality in the case of things they cherish. Nature promised no one that it would make an exception to necessity. Every day there pass before our eyes the funerals of the famous and the obscure, yet we are busy with other things, and we find a sudden surprise in the thing that, our whole life long, we were told was coming. It’s not the unfairness of the fates, but the warped inability of the human mind to get enough of all things, that makes us complain of leaving that place to which we were admitted as a special favor. How much more just was he who, having learned of his son’s death, spoke a word worthy of a great man: “I knew then, when I fathered him, that he would die.” … His son’s death came as no news to him; for what news is it that someone has died whose whole life was nothing else than a journey toward death? “I knew then, when I fathered him, that he would die.” Then he added something of even greater sagacity and insight: “And it was for that that I raised him.” It’s for that that we are all brought up; whoever is brought into life is destined for death. Let’s rejoice in what will be given, but let’s return it when we’re asked for it back. The fates will seize hold of one person now, another later, but they will overlook no one. Let the soul stand girded for battle; let it never fear what must be, let it always expect what’s unknown. … There’s no single end fixed for all; for one, life departs in mid-course, but abandons another at its very beginning, and barely dispatches a third who is already worn out with extreme old age and longing to go. Each in his or her own time, we all bend our course to the same place. Is it more stupid to ignore the law of mortality, or more impudent to reject it? I don’t know.

by Seneca via: Charlotte Smalley, The American Scholar |  Read more:
Image: The Death of Seneca by Manuel Domínguez Sánchez, 1871

Kicking the Table: Populism or Capitalist De-Modernization at the Semi-Periphery: The Case of Poland

Those who are against fascism without being against capitalism, who lament over the barbarism that comes out of fascism, are like people who wish to eat their veal without slaughtering the calf. They are willing to eat the calf, but they dislike the sight of blood. They are easily satisfied if the butcher washes his hands before weighing the meat. They are not against the property relations which engender barbarism; they are only against barbarism itself. They raise their voices against barbarism, and they do so in countries where precisely the same property relations prevail, but where the butchers wash their hands before weighing the meat. 
—Bertolt Brecht, “Five Difficulties of Writing the Truth” (1935)
It was in autumn 1990 that Poland experienced a pivotal moment in its modern political history—for the first time the president of the country was to be elected by popular vote. The top job was finally claimed by Lech Wałęsa, the iconic leader of the Solidarność trade union and Nobel Peace Prize winner. As is often the case with fundamental breakthroughs, however, there was something much darker and more disturbing lurking in the background. Wałęsa’s victory did not happen without a fight. He was challenged by another prominent center-right politician with a long history of anti-Soviet activism: Tadeusz Mazowiecki. The latter got the support of elite intellectual circles, marking the final cleavage in the previously united opposition that throughout the 1980s had fought under the banner of Solidarność. It was hardly a surprising course of events, as it closely followed class divisions: Wałęsa, a simple worker turned revolutionary, enjoyed the support of Polish liberal intellectuals as long as he was useful, even crucial, in the fight against Soviet domination. Once that fight was won, class divisions, especially those dictated by cultural capital, reemerged as an important—even if not the only—line of political division. But what was surprising, and shocked all pundits, was the fact that it wasn’t Mazowiecki whom Wałęsa had to face in the run-off ballot. Another candidate claimed second place: Stan Tymiński, an obscure and completely unknown figure.

Tymiński only appeared in Polish public life right before the election, coming back from decades of emigration spent in Canada and South America. He presented himself as the anti-establishment candidate of “the people.” He had no support from either ex-communists or Solidarność and he underlined his independence. He also advertised his personal material success: a Polish-Canadian businessman, well-travelled and experienced in the mythical West, doing business across North and South America. He campaigned against the entire political establishment, maintaining that all politicians were corrupt and controlled by the secret service and claiming to possess many proofs of this collaboration, which, however, he never revealed. He also passionately denounced the suffering of the poorer part of society, who had been deeply harmed by vicious neoliberal reforms undertaken with the support of the IMF and the World Bank a year earlier (reforms devised, as it happens, by no less a figure than the famous neoliberal prophet himself, Jeffrey Sachs). To these impoverished masses, Tymiński promised material prosperity and symbolic dignity, and, despite the fact that he had zero political experience and was unanimously lambasted by the intellectual establishment, he managed to secure second place in the first round of the elections, winning 23% of the votes—more than Tadeusz Mazowiecki, who had served as Polish prime minister since 1989 and was probably the best-qualified candidate ever to run for the office of president in Poland.

A reader who followed the 2016 US election—and who did not?—may start to see an uncanny resemblance: yes, Stan Tymiński was, toutes proportions gardées, the Polish Donald Trump, and he defeated the politician who was the closest equivalent of Hillary Clinton in Polish political life: a very well-educated and well-prepared political professional (a lawyer, for that matter) discredited in the eyes of many voters by his links to the elite of the neoliberal establishment. Tymiński did not win the presidency, but the shock that followed his victory over Tadeusz Mazowiecki was very similar to what the US experienced in 2016.

This is a fact worth remembering given the more recent populist turn in Polish—and not only Polish—politics: populism did not appear in the last years solely as the result of the 2008 financial crisis. In the Polish context at least, it is as old as neoliberalism and constitutes its somber counterpart. Despite Tymiński’s defeat in 1990, it has remained a constant element of our political life, enjoying in various institutional forms between 15 and 20 percent of electoral support. Tymiński disappeared from Polish politics as quickly as he entered it, but just a year later, in 1991, another popular figure was born: Andrzej Lepper. A home-grown, rural populist, he rallied farmers to oppose the government after a wave of bankruptcies and unrest provoked by the shock of neoliberal therapy applied to Polish society after the fall of the Soviet bloc. This time a political organization was born: Samoobrona (meaning “Self-defence”), first as a movement, then a political party. After more than a decade of lurking in the shadows, Lepper entered government in 2005, becoming deputy prime minister in the cabinet of… Jarosław Kaczyński, the well-known leader of the Law and Justice party that currently holds power in Poland. At that time they ruled for only two years, falling victim to their own infighting and intrigues; however, that coalition, as well as the early developments that I sketched above, is crucial to understanding the present political situation in Poland. Before that happened, Law and Justice was just an ordinary neo-conservative party: they affirmed nationalism (labeled “patriotism” according to the rules of political correctness), opposed women’s emancipation and gay rights, proclaimed their religious faith, etc.
When it came to the economy, they were just as neoliberal as the liberals: they lowered not only taxes for the rich, but also the mandatory contributions to healthcare and social security that companies are supposed to pay, and they completely scrapped the inheritance tax. But in the course of these two years of coalition government, Law and Justice devoured Samoobrona, which never rose to power again, and captured its electorate, slowly turning from a standard conservative party into the populist-conservative party that they are today. What helped this development was, of course, the success of Hungary’s Viktor Orbán, who provided a blueprint for how to legally bypass the law in order to construct the bizarre hybrid of authoritarian parliamentarism that we are experiencing today.

Many Polish liberals are disgusted by the fact that so many Polish voters “betrayed the values of democratic society” and “sold” their allegiance to the Constitutional Court or the separation of powers for the $150-a-month child bonus introduced by the Law and Justice government. This is, however, a fundamental misconception. Celebrations of democratic values come very easily to those who do not need to worry about how to feed their kids, and whose class egoism has been ruthless during the last three decades of neoliberal rule. (...)

This disconnect is well exemplified in the discussions surrounding Poland’s position and membership in the European Union. Polish liberals fear some kind of Polexit—either by choice or by expulsion due to the undemocratic policies of the populist government. So they point to the fact that the European Union, with the Schengen Zone agreement, gave us incredible freedom of movement in Europe. Of course, factually this is true. Born in 1976, I’m old enough to remember what it meant to live behind the Iron Curtain. We were not allowed to keep our passports at home and we had to apply for them every time we intended to leave the country. We needed a visa to enter any Western state. Visas were difficult to obtain, cost a lot and covered short periods of time, like two weeks or a month. Crossing the border was a stressful and humiliating experience for us: we were suspected of being spies or smugglers, interrogated and checked for hours. Today all I have to do is take my national ID, a driving license and a credit card, and I can go three and a half thousand kilometers from Warsaw to Lisbon, crossing half a dozen national borders without being checked even once. What used to be border checkpoints are now parking lots on the side of highways. The police booths I remember from my teenage years have been turned into hot-dog stands. As a citizen of an EU country, I am entitled to live, work and buy real estate in any member country. It really is great, but with one caveat: you need to have resources to be able to profit from this exceptional and remarkable freedom. What good is the ability to travel to Lisbon to a person who can hardly afford a train ticket to the nearest town? Even worse: there may be no train to the nearest town, because Polish neoliberals decided that public transportation is passé, that it belongs to the old and obsolete socialist past, so they neglected many local connections in favor of promoting car ownership. And if you cannot buy a car?
Well, it is your fault, because you are not entrepreneurial enough. So you get stuck in some grey, crumbling and aging peripheral town or hamlet. The only thing you can afford is a TV, where you watch the lavish lifestyle of cosmopolitan elites. And, suddenly, here’s this populist government which does not tell you that you are a savage and maladjusted Homo sovieticus who lacks “civilizational competence”, but rather treats you as a dignified subject who deserves attention and—what a formidable turn of events!—they give you a child bonus, so your kids can go for holidays for the first time in their lives. What would you say to the liberals who come nagging you about how much you betrayed democratic values and how urgently we need to defend the freedom and civil society we were so desperately fighting for in Soviet times? And these are the very same people who ruled your country for eight years, denying you both dignity and welfare while constantly bragging about fabulous GDP growth and the incredible economic miracle that they created.

Well, if you have any brains left, you would say just one thing: “Fuck off!” And this is precisely what Law and Justice supporters are saying. Contrary to the liberal narrative, their support for populism is not an irrational eruption of barbarism and resentment, but rather the opposite: a proof of their rationality and sober thinking. A quick glance at the opinion polls shows that almost none of the most controversial policies enacted by the Polish populist government enjoys widespread public support, even among Law and Justice voters. Two-thirds of Poles do not like what is happening with the Constitutional Court, and an overwhelming majority is against logging in the primeval forest in Białowieża and does not support the government’s obsession with keeping the Polish economy addicted to coal. The conspiracy theory, advanced by some prominent politicians of the ruling party, that the 2010 airplane crash in Smoleńsk—in which Lech Kaczyński (the twin brother of Jarosław Kaczyński and the President of Poland at the time) died along with 95 others, many of them prominent politicians—was an orchestrated attack is believed by only 14% of the population. The reasons why people support the government have little to do with all those ridiculous and harmful policies. Parliamentary politics in a bourgeois state is very much like cooking with limited supplies: you may have a bowl of hot oil and you may think that tempura would be a great treat, but if all you have are potatoes, you will most likely settle for fries.

But, wait, isn’t it a dangerous normalization of right-wing populism that I’m advocating here? After all, we saw what happened in Warsaw on November 11th this year, when the Independence Day parade turned into a neo-fascist festival of hatred, xenophobia and racism. Shouldn’t we be more concerned or even alarmed? There are, for sure, reasons for concern and alarm, but if our response is ever going to be politically fruitful, we need a good understanding of what is going on. To understand does not mean to justify, let alone to praise or support. Polish conservative populism is not fascism. Only a small minority of the people who marched on November 11th in Warsaw were actual fascists. But, of course, there is a risk of sliding towards fascism. The government is turning a blind eye to the fascist excesses because it does not want a more radical right-wing formation emerging on the far end of the political spectrum. So it is keen on letting the right-wing extremists know that they are somehow included under its political patronage. This is surely playing with fire and should never take place. An outright ban on any kind of fascism is the only acceptable way to go and the only way to avoid a repetition of the horrors that Central-Eastern Europe experienced in the past century. What is equally urgent, however, is addressing the root of fascism and countering the force behind the fascist awakening. Merely denouncing right-wing populism and the drift towards fascism it entails will get us nowhere unless we understand why they are moving closer and closer to the mainstream of political life.

It’s here, again, that we encounter the basic flaw of liberal common sense, with its fixation on cultural factors and the importance of ethos. What it neglects is an element that was entirely wiped out of both public and academic discourse in Poland, as elsewhere (for example, in the US): the issue of class and its indelible materialist component. Populism is a kind of displaced and perverted class revolt. It derives from a twofold oppression: material for the poor and symbolic for the lower-middle class. The former strives for material redistribution, the latter for symbolic recognition, for something to be proud of and for the feeling of dignity they are deprived of. Polish populists have found a way to cunningly combine the support of the two into a coherent political force, and this has allowed them to win elections. Now, fulfilling their electoral promises grants them the ongoing legitimacy that they clearly enjoy in the eyes of a large part of Polish society.

Looking from the other side of the Atlantic, I would venture the hypothesis that the same is at least partially true of American society. Walter Benn Michaels has argued for more than a decade that US political orthodoxy has placed the politics of identity and recognition above material redistribution. What this means is not just that a great many people have become the victims of growing inequality, but that a large group of them—white people, and especially straight white men—have come to understand themselves as doubly victimized. They have very few resources, as they get nothing from material redistribution (because there is virtually none), and they get nothing from symbolic redistribution (since that goes precisely to people who are not straight and white). One may say: rightly so, why should they? Given the racist and patriarchal society that we live in, this is the group that does not deserve recognition for what they are. But as true as this diagnosis may be, it does not change an obvious political consequence: this is the group that occupies the position Ernesto Laclau called pure heterogeneity, or caput mortuum, to use the Lacanian-alchemist term—a leftover, a sediment on the walls of the test tube where the chemical reaction is taking place. This is the most unstable and dangerous element, as it does not take part in the normal political game but, being exotic (i.e. positioned outside) to the system, only disrupts the process. Laclau describes it with a metaphor: as we sit around a table playing a board game, they are the ones who were pushed aside—heterogeneous to the very process of the game—and cannot be players in the ongoing match. This is an utterly painful and humiliating position, and it can hardly be enjoyed by anyone who happens to occupy it. These people may not have any means to enter the game, but they can do a different thing: kick the table, so there will be no more playing for anyone.
This is what they did in many places around the world in 2015 and 2016. And, as long as they remain in the position of pure heterogeneity, they’ll keep on doing it, no matter how much we denounce and demonize them. As a matter of fact, the more the liberals whine about the destruction of state institutions and irreparable harm done to political order by those actions, the more enjoyment the supporters of populism will get from kicking the table. After all, this is what the so-called protest voting is all about. (...)

Throughout a good part of the 20th century, academic development studies were dominated by what was called modernization theory. It claimed that all countries move along the same trajectory of social change, with some, mainly in the West, more advanced than others. It had a right-wing and a left-wing version, and it culminated in the (in)famous declaration of the end of history made by Francis Fukuyama in the early 1990s. What we are witnessing right now is a precise reversal of this alleged pattern: the peripheries of the capitalist world-system have become a sort of perverse avant-garde of reaction. What we have experienced in Poland since the early 1990s, as I showed at the beginning of this text, has not been a glitch provoked by cultural factors but a reaction to neoliberal austerity. It took neoliberalism some time to degrade core societies to the same degree, but when it started to get there, strikingly similar formations appeared first in the UK and the US, precisely the most neoliberal countries in the center of the capitalist world-system. It should not come as a surprise that France is the place where politics may still seem “business as usual”: Emmanuel Macron looks like another Tony Blair, Gerhard Schröder or Bill Clinton. France is, after all, the number one public spender in the OECD and still maintains one of the most generous and inclusive welfare systems on the planet. What the liberals fascinated by Macron do not get is that the neoliberal reforms he is undertaking are destroying the very status quo on which he got elected. The advance of the Front National in France, just like the electoral success of Alternative für Deutschland in Germany, is a visible sign of what we may very well face in the not very distant future.
I would dub this phenomenon “de-modernization,” as it reverses both the conquests of liberal modernity (not only in the political sphere; the same is true of the secular state and labor conditions) and the relation between center and periphery postulated by modernization theory. The future of Berlin, Paris or Washington is in Warsaw and Budapest, not the other way around.

Looking at this uncanny development from the perspective of the Polish semi-periphery, I cannot but marvel at the incredible irony of the situation. I grew up in the last years of the Soviet regime and I remember quite well the dreams and aspirations that followed the system change in 1989. The key ambition of liberal elites was for Poland to return to the mainstream of Western politics and to become “a normal, European country.” And it was first and foremost the Anglo-Saxon political world that captured the imagination of Polish liberal elites as a noble example to follow. When I look today at the chaos and indolence of the Trump administration, or the mess that Brexit has generated in the UK, I cannot help but think of it as a bizarre “polonization” of world politics. I’ve seen this before! Steve Bannon looks, talks and acts (including the red nose and generally alcoholic look) as if he were an advisor to the Polish right-wing government of Jan Olszewski in 1992, not to the US president in 2017. Poland—and the entire region of Central-Eastern Europe—is undeniably in the mainstream of European and world politics. Even more: we are a kind of avant-garde! Not because we have advanced so high, but because capitalism in its neoliberal incarnation has brought politics so low.

by Jan Sowa, Nonsite.org |  Read more:

Tuesday, February 13, 2018

They Only Look Casual

There she goes, strutting that strut. Her outfit is arranged just so. She’s got the bag with the umpteen-person wait list, not yet available in stores. The cameras flash. It’s a fashion moment. It’s a watershed. It’s a marketing opportunity.

It’s the 15-foot catwalk that runs from the hotel door to the S.U.V. door.

For fashion houses looking to leverage the star power of celebrities, the holy grail had long been the red carpet, the bigger an event the better. Special teams at major labels might court actresses, their reps and their stylists for years to dress their top clients, and spend tens of thousands of dollars or more to make custom outfits for them. A hit could make a brand, or cement its status, paying dividends for years to come as the moment was fondly recalled in Best Of lists and debated by carpet pundits.

That was then.

With social media ascendant, there is a new, and increasingly important, runway for the stars, and a rising guard of stylists working to dress them for it. It’s the sidewalk. It’s the airport. It’s the Starbucks run.

For those women whose followers feverishly track their every move and every selfie — your Hadids (Gigi and Bella); your Kaia Gerbers (Cindy Crawford’s look-alike model daughter); your Emily Ratajkowskis; Selena Gomez, your Instagram queen (the platform’s most followed person) — any moment can be a moment. Their presence is an event. They need no carpet; they are the carpet.

“Five years ago it was all about the red carpet moment,” said Christian Classen, 31, a stylist for Ms. Gomez and young celebrities including the Disney star Dove Cameron, the singer Banks, the Instagram poet Rupi Kaur and the actress Zazie Beetz. “Less now. An Instagram selfie on some people can be 10 times more important.”

Mr. Classen does style many of his clients for formal appearances, but he has made a specialty of casual off-carpet looks. When he struck out on his own as a stylist in 2015, labels tightly guarded their stores, lending clothes only for specific red carpet occasions. “Now, if it’s for a street style or an airport, they’re going to give it to me right away,” Mr. Classen said.

Not that the red carpet has disappeared. It remains, ready when needed, for the Oscars, the Emmys, the Grammys, the premieres. And so remain, at the ready, the legion of red carpet stylists. But joining them are a new wave of “day stylists” whose forte is the casual, tossed-off, this-old-thing look of street style: what the stars would throw together on their own (but often don’t have to).

Even the most casual of looks — the jeans, beanie and pap-proof goggle shades the star may wear to scurry to the gate of her departing flight — may well take a village. (The highest-profile stars may have separate stylists to work on their biggest red carpet events. Ms. Gomez, for example, also works with the stylist Kate Young.)

“A lot of people probably think that they choose on a daily basis from their own closets,” said Mimi Cuttrell, 26, a stylist who works with Gigi Hadid, Ms. Gerber and Ms. Hadid’s mother, Yolanda Hadid. “Sometimes there are outfits that are completely planned out from head to toe. I’m really particular with tailoring, too. There’s a lot of pieces and back work that goes into getting one street style look ready.” (...)

The point of such styling is to look effortless, natural and, in one of fashion’s favorite terms, “authentic” — even when that authenticity is mediated by an on-hand stylist to offer up the glossiest version of your authentic self. So much so that many of the millions of fans watching along on social media may not realize they’re looking at a tailor-made ensemble.

by Matthew Schneier, NY Times |  Read more:
Image: Backgrid

Max Ernst, The Phases of the Night, 1946
via:

Daido Moriyama, Record No.35 (2017)
via:

The Autonomous Selfie Drone Is Here

Autonomous drones have long been hyped, but until recently they’ve been little more than that. The technology in Skydio’s machine suggests a new turn. Drones that fly themselves — whether following people for outdoor self-photography, which is Skydio’s intended use, or for longer-range applications like delivery, monitoring and surveillance — are coming faster than you think.

They’re likely to get much cheaper, smaller and more capable. They’re going to be everywhere, probably sooner than we can all adjust to them.

Most consumer drones rely on some degree of automation in flight. DJI, the Chinese drone company that commands much of the market, makes several drones that can avoid obstacles and track subjects.

But these features tend to be less than perfect, working best in mostly open areas. Just about every drone on the market requires a pilot.

“Our view is that almost all of the use cases for drones would be better with autonomy,” said Adam Bry, Skydio’s chief executive.

Skydio was founded by Mr. Bry and Abe Bachrach — who met as graduate students at the Massachusetts Institute of Technology and later started Google’s drone program, Project Wing — along with Matt Donahoe, an interface designer.

In 2014, with funding from the venture firm Andreessen Horowitz, the company began working on what would become the R1. Skydio has since raised $70 million from Andreessen and several other investors, including Institutional Venture Partners, Playground Global and the basketball player Kevin Durant.

Skydio’s basic goal was a drone that requires no pilot. When you launch the R1 using a smartphone app, you have your subject stand in front of the drone, then tap that person on the screen — now it’s locked on. You can also select one of several “cinematic modes,” which specify the direction from which the drone will try to record its subject. (It can even predict your path and stay ahead of you to shoot a selfie from the front.)

After takeoff, it’s hands off. The drone operates independently. In the eight-minute flight I saw — through a wooded trail sparsely populated with runners and dogs — the R1 followed its target with eerie determination, avoiding every obstacle as naturally as an experienced human pilot might, and never requiring help. It lost its subject — me — only once, but I had to really work to make that happen. (...)

What this means is ubiquity. As I watched the R1 tail Mr. Bry, I played the scene forward in my mind: What happens when dozens or hundreds of runners and bikers and skiers and hikers and tourists begin setting out their own self-flying GoPros to record themselves? Our society has proved to be in thrall to photography; if you can throw up a camera and get a shot of you reaching the summit, who’s not going to do it?

by Farhad Manjoo, NY Times |  Read more:
Image: Laura Morton for The New York Times

California Launches Aetna Probe

California's insurance commissioner has launched an investigation into Aetna after learning a former medical director for the insurer admitted under oath he never looked at patients' records when deciding whether to approve or deny care.

California Insurance Commissioner Dave Jones expressed outrage after CNN showed him a transcript of the testimony and said his office is looking into how widespread the practice is within Aetna.

"If the health insurer is making decisions to deny coverage without a physician actually ever reviewing medical records, that's of significant concern to me as insurance commissioner in California -- and potentially a violation of law," he said.

Aetna, the nation's third-largest insurance provider with 23.1 million customers, told CNN it looked forward to "explaining our clinical review process" to the commissioner.

The California probe centers on a deposition by Dr. Jay Ken Iinuma, who served as medical director for Aetna for Southern California from March 2012 to February 2015, according to the insurer.

During the deposition, the doctor said he was following Aetna's training, in which nurses reviewed records and made recommendations to him.

Jones said his expectation would be "that physicians would be reviewing treatment authorization requests," and that it's troubling that "during the entire course of time he was employed at Aetna, he never once looked at patients' medical records himself." (...)

Members of the medical community expressed similar shock, saying Iinuma's deposition leads to questions about Aetna's practices across the country.

"Oh my God. Are you serious? That is incredible," said Dr. Anne-Marie Irani when told of the medical director's testimony. Irani is a professor of pediatrics and internal medicine at the Children's Hospital of Richmond at VCU and a former member of the American Board of Allergy and Immunology's board of directors.

"This is potentially a huge, huge story and quite frankly may reshape how insurance functions," said Dr. Andrew Murphy, who, like Irani, is a renowned fellow of the American Academy of Allergy, Asthma and Immunology. He recently served on the academy's board of directors. (...)

"This is something that all of us have long suspected, but to actually have an Aetna medical director admit he hasn't even looked at medical records, that's not good," said Murphy, who runs an allergy and immunology practice west of Philadelphia.

by Wayne Drash, CNN |  Read more:
Image: Wayne Drash/CNN

Is the Universe a Conscious Mind?

In the past 40 or so years, a strange fact about our Universe gradually made itself known to scientists: the laws of physics, and the initial conditions of our Universe, are fine-tuned for the possibility of life. It turns out that, for life to be possible, the numbers in basic physics – for example, the strength of gravity, or the mass of the electron – must have values falling in a certain range. And that range is an incredibly narrow slice of all the possible values those numbers can have. It is therefore incredibly unlikely that a universe like ours would have the kind of numbers compatible with the existence of life. But, against all the odds, our Universe does.

Here are a few examples of this fine-tuning for life:
  • The strong nuclear force (the force that binds together the elements in the nucleus of an atom) has a value of 0.007. If that value had been 0.006 or less, the Universe would have contained nothing but hydrogen. If it had been 0.008 or higher, the hydrogen would have fused to make heavier elements. In either case, any kind of chemical complexity would have been physically impossible. And without chemical complexity there can be no life.
  • The physical possibility of chemical complexity is also dependent on the masses of the basic components of matter: electrons and quarks. If the mass of a down quark had been greater by a factor of 3, the Universe would have contained only hydrogen. If the mass of an electron had been greater by a factor of 2.5, the Universe would have contained only neutrons: no atoms at all, and certainly no chemical reactions.
  • Gravity seems a momentous force but it is actually much weaker than the other forces that affect atoms, by a factor of about 10^36. If gravity had been only slightly stronger, stars would have formed from smaller amounts of material, and consequently would have been smaller, with much shorter lives. A typical sun would have lasted around 10,000 years rather than 10 billion years, not allowing enough time for the evolutionary processes that produce complex life. Conversely, if gravity had been only slightly weaker, stars would have been much colder and hence would not have exploded into supernovae. This also would have rendered life impossible, as supernovae are the main source of many of the heavy elements that form the ingredients of life.
Some take the fine-tuning to be simply a basic fact about our Universe: fortunate perhaps, but not something requiring explanation. But like many scientists and philosophers, I find this implausible. In The Life of the Cosmos (1999), the physicist Lee Smolin has estimated that, taking into account all of the fine-tuning examples considered, the chance of life existing in the Universe is 1 in 10^229, from which he concludes:
In my opinion, a probability this tiny is not something we can let go unexplained. Luck will certainly not do here; we need some rational explanation of how something this unlikely turned out to be the case.
The two standard explanations of the fine-tuning are theism and the multiverse hypothesis. Theists postulate an all-powerful and perfectly good supernatural creator of the Universe, and then explain the fine-tuning in terms of the good intentions of this creator. Life is something of great objective value; God in Her goodness wanted to bring about this great value, and hence created laws with constants compatible with its physical possibility. The multiverse hypothesis postulates an enormous, perhaps infinite, number of physical universes other than our own, in which many different values of the constants are realised. Given a sufficient number of universes realising a sufficient range of the constants, it is not so improbable that there will be at least one universe with fine-tuned laws.

Both of these theories are able to explain the fine-tuning. The problem is that, on the face of it, they also make false predictions. For the theist, the false prediction arises from the problem of evil. If one were told that a given universe was created by an all-loving, all-knowing and all-powerful being, one would not expect that universe to contain enormous amounts of gratuitous suffering. One might not be surprised to find it contained intelligent life, but one would be surprised to learn that life had come about through the gruesome process of natural selection. Why would a loving God who could do absolutely anything choose to create life that way? Prima facie theism predicts a universe that is much better than our own and, because of this, the flaws of our Universe count strongly against the existence of God.

Turning to the multiverse hypothesis, the false prediction arises from the so-called Boltzmann brain problem, named after the 19th-century Austrian physicist Ludwig Boltzmann, who first formulated the paradox of the observed universe. Assuming there is a multiverse, you would expect our Universe to be a fairly typical member of the universe ensemble, or at least a fairly typical member of the universes containing observers (since we couldn’t find ourselves in a universe in which observers are impossible). However, in The Road to Reality (2004), the physicist and mathematician Roger Penrose has calculated that in the kind of multiverse most favoured by contemporary physicists – based on inflationary cosmology and string theory – for every observer who observes a smooth, orderly universe as big as ours, there are 10 to the power of 10^123 who observe a smooth, orderly universe that is just 10 times smaller. And by far the most common kind of observer would be a ‘Boltzmann brain’: a functioning brain that has, by sheer fluke, emerged from a disordered universe for a brief period of time. If Penrose is right, then the odds of an observer in the multiverse theory finding itself in a large, ordered universe are astronomically small. And hence the fact that we are ourselves such observers is powerful evidence against the multiverse theory.

Neither of these are knock-down arguments. Theists can try to come up with reasons why God would allow the suffering we find in the Universe, and multiverse theorists can try to fine-tune their theory such that our Universe is less unlikely. However, both of these moves feel ad hoc, fiddling to try to save the theory rather than accepting that, on its most natural interpretation, the theory is falsified. I think we can do better.

by Philip Goff, Aeon |  Read more:
Image: Carlo Allegri/Reuters

Monday, February 12, 2018

Marc Ribot

Corporations Will Inherit the Earth

What a herky-jerky mess our federal government is. What a bumbling klutz. It can’t manage health care. It can’t master infrastructure. It can’t fund itself for more than tiny increments of time. It can barely stay open. It shut down briefly on Friday for the second time in three weeks. Maybe it should just stay closed for good.

Let corporations pick up the slack! In fact they’re doing that already, with an innovation and can-do ambition sorely absent in Washington.

Three days before the latest shutdown, Elon Musk borrowed a launchpad previously used by NASA’s trailblazing astronauts to send his own rocket into space. It was the first time that a vessel of such might and majesty was thrust heavenward by a private company rather than a government agency.

It was also a roaring, blazing sign of our times, in which the gaudy dreams and grand experiments belong to the private sector, not the public one, and in which the likes of Musk or Amazon’s Jeff Bezos chart a future for our species beyond our stressed-out planet. NASA no longer leads the way.

Speaking of Amazon, it joined two other corporate giants, Berkshire Hathaway and JPMorgan Chase, to announce two weeks ago that they would form their own health care provider and try to solve the riddle that continues to stump lawmakers: dependable service at affordable prices.

Amazon also recently stole a high-profile educator from Stanford University, Candace Thille. Her hiring suggests that the company is poised to expand employee training to a point where Amazon is essentially filling in for public and private universities and grooming its own work force.

And Musk is not only reaching for the stars but also tunneling under the earth. A new venture of his, the Boring Company, is a response to the inability of public officials in Los Angeles to ease the region’s paralyzing traffic. Musk envisions a futuristic network of subterranean chutes. The first one is already under construction.

We Americans are living a paradox. We’re keenly suspicious of big corporations — just look at how many voters thrilled to Bernie Sanders’s jeremiads about a corrupt oligarchy, or at polls that show a growing antipathy to capitalism — and yet we’re ever more reliant on them. They’re in turn bolder, egged on by the ineptness and inertia of Washington.

“When there’s a vacuum, there are going to be entities that step into it,” Chris Lehane told me. “This is an example of that.” Lehane is the head of global policy for Airbnb, which ran a commercial this month that alluded (without profanity) to Trump’s “shithole countries” remark and promoted those very places as travel destinations. It spoke to another vacuum — a moral one — being filled by companies, many of which are more high-minded, forward-thinking and solutions-oriented than the federal government on immigration, L.G.B.T. rights, climate change and more. (...)

Corporations have long been engines of innovation, sources of philanthropy and even laboratories for social policy. But the situation feels increasingly lopsided these days. I’m struck, for example, by the intensity of conversation over the last year about what Facebook and its algorithms should do to stanch the destructive tribalism in American life. It’s true that Mark Zuckerberg’s monster has badly aggravated that dynamic, in part by allowing its platform to be manipulated by bad actors. But so has Washington, and we seem less hopeful that it’s redeemable and likely to shepherd us to a healthier place.

Although government spending has hardly dried up — the budget deal signed by Trump on Friday attests to that — and the federal debt continues to metastasize, there’s a questionable commitment to scientific research, leaving private actors to call many of the shots.

But companies’ primary concern isn’t public welfare. It’s the bottom line. I say that not to besmirch them but to state the obvious. Their actions will never deviate too far from their proprietary interests, and while tapping their genius and money is essential, outsourcing too much to them is an abdication of government’s singular role. What’s best for Amazon and what’s best for humanity aren’t one and the same.

by Frank Bruni, NY Times |  Read more:
Image: Ben Wiseman
[ed. See also: We’ve Trashed the Oceans; Now We're Turning Space Into a Junkyard for Billionaires]

Heart Stents Are Useless for Most Stable Patients

Lots of Americans have chest pain because of a lack of blood and oxygen reaching the heart. This is known as angina. For decades, one of the most common ways to treat this was to insert a mesh tube known as a stent into arteries supplying the heart. The stents held the vessels open and increased blood flow to the heart, theoretically fixing the problem.

Cardiologists who inserted these stents found that their patients reported feeling better. They seemed to be healthier. Many believed that these stents prevented heart attacks and maybe even death. Percutaneous coronary intervention, the procedure by which a stent can be placed, became very common.

Then in 2007, a randomized controlled trial was published in The New England Journal of Medicine. The main outcomes of interest were heart attacks and death. Researchers gathered almost 2,300 patients with significant coronary artery disease and proof of reduced blood flow to the heart. They assigned them randomly to a stent with medical therapy or to medical therapy alone.

They followed the patients for years. The result? The stents didn’t make a difference beyond medical treatment in preventing these bad outcomes.

This was hard to believe. So more such studies were conducted.

In 2012, the studies were collected in a meta-analysis in JAMA Internal Medicine. Three studies looked at patients who were stable after a heart attack. Five more examined patients who had stable angina or ischemia but had not yet had a heart attack. The meta-analysis showed that stents delivered no benefit over medical therapy for preventing heart attacks or death for patients with stable coronary artery disease.

Still, many cardiologists argued, stents relieved patients’ pain and improved their quality of life. Even if stents didn’t reduce the outcomes that physicians cared about, these so-called patient-centered outcomes mattered, and patients who had stents reported improvements in these domains in studies.

The problem was that it was difficult to know whether the stents were leading to pain relief, or whether it was the placebo effect. The placebo effect is very strong with respect to procedures, after all. What was needed was a trial with a sham control, a procedure that left patients unclear whether they’d had a stent placed.

Many physicians opposed such a study. They argued that the vast experience of cardiologists showed that stents worked, and therefore randomizing some patients not to receive them was unethical. Others argued that exposing patients to a sham procedure was also wrong because it left them subject to potential harm with no benefit. More skeptical observers might note that some doctors and hospitals were also financially rewarded for performing this procedure.

Regardless, such a trial was done, and the results were published this year. (...)

There was no difference in the outcomes of interest between the intervention and placebo groups.

Stents didn’t appear even to relieve pain.

Some caveats: All the patients were treated rigorously with medication before getting their procedures, so many had improved significantly before getting (or not getting) a stent. Some patients in the real world won’t stick to the intensive medical therapies, so there may be a benefit from stents for those patients (we don’t know). The follow-up was only at six weeks, so longer-term outcomes aren’t known. These results also apply only to those with stable angina. There may be more of a place for stents in patients who are sicker, who have disease in more than one blood vessel, or who fail to respond to medical therapy.

But many, if not most, patients probably don’t need them. This is hard for patients and physicians to wrap their heads around because, in their experience, patients who got stents got better. They seemed to receive a benefit from the procedure. But that benefit appears to be because of the placebo effect, not any physical change from improved blood flow. (...)

Even in this study, 2 percent of patients had a major bleeding event. Remember that hundreds of thousands of stents are placed every year. Stents are also expensive. They can add at least $10,000 to the cost of therapy.

Stents still have a place in care, but much less of one than we used to think. Yet many physicians as well as patients will still demand them, pointing out that they lead to improvements in some people, even if that improvement is from a placebo effect.

by Aaron E. Carroll, NY Times | Read more:
Image: Jack Sachs
[ed. See also:  Powerless Placebos]

Is Tech Dividing America?

When Americans consider how technology has changed their lives, they tend to focus on how the internet and smartphones have altered how they watch TV, connect with friends, or shop. But those changes pale in comparison to how technology has already restructured the economy, shaking up the workforce and shifting opportunity to tech-centric urban hubs. As artificial intelligence quickly moves from fiction to daily reality, that revolution will arguably become much more consequential.

Economists broadly agree that technology will continue to be an engine of economic growth. But it also will upend old certainties about who benefits. Already, we can see a growing inequality gap, with winners and losers by region and workplace. The next wave of changes, handled badly, could make this gap even more extreme.

MIT researcher David Autor has been at the center of that conversation for two decades now. One of the world’s premier labor economists, Autor has helped drive a reconsideration of how Americans are really coping with the changes transforming their workplaces. And he's trying to take the conversation beyond the ivory tower: His 2016 TED talk about the surprising impact of automation, “Why Are There Still So Many Jobs?” has been viewed more than 1.3 million times.

Autor's interest comes from seeing these changes at the ground level: Fresh out of Tufts University with a degree in psychology, he ended up running a Silicon Valley-sponsored computer-training program for at-risk children and adults at San Francisco’s Glide Memorial Church, a counterculture hot spot. When he headed back to Harvard’s John F. Kennedy School of Government for an M.A. and then Ph.D. in public policy, he brought a newly keen interest in figuring out how the technologies being pumped into the labor market would shape what it means to be a worker in the United States. (...)

We’ve just started to think seriously as a nation about who wins and who doesn’t as the American workplace automates. In 1998, you co-wrote a paper that showed the rise of technology in the workplace was actually proving to be good for higher-skilled workers. Is that a fair read?

What that paper suggested was that it's definitely the case that automation is raising the demand for skilled labor. And the work that I've done since has been about which set of activities is complemented by automation and which set is displaced, pointing out that on the one hand, there were tasks that were creative and analytical, and on the other, tasks that required dexterity and flexibility, which were very difficult to automate. So the middle of the skill distribution, where there are well understood rules and procedures, is actually much more susceptible to automation.

So, there's a hollowing out of middle-class jobs, but high-skilled, high-wage workers and low-skilled, low-wage workers remain? Is that what we're seeing play out right now in the U.S.?

That polarization of jobs definitely reduced the set of opportunities for people who don't have a college degree. People who have a high school or lower degree, it used to be they were in manufacturing, in clerical and administrative support. Now, increasingly, they're in cleaning, home health, security, etc. Ironically, we've automated some of the stuff that was more interesting for us, and we're left with some of the stuff that is less interesting. (...)

“Automation anxiety" is overblown, you’ve said. How anxious should American workers be?

People are talking about how robots are going to take all the jobs, but we're in a time of very dramatic employment growth and have been for a decade. Job growth is robust throughout western Europe, as well. So, we're certainly not in a period where there's any outward sign that work is coming to an end. We have had two centuries of people worrying very vocally about how automation will make us superfluous. I don't think it's made us superfluous, and I don't think it's on the verge of making us superfluous.

The greater concern is not about the number of jobs but whether those jobs will pay decent wages and people will have sufficient skills to do them. That's the great challenge. It's never been a better time to be a highly educated worker in the western world. But there has never been a worse time to be a high school dropout or high school graduate. (...)

In just the past year, Silicon Valley as an industry has developed a good and evil reputation. It’s cutting-edge and pays well, but it sometimes disrupts the world without seeming to care too much about the consequences, a la Uber. Which is it?

I don't think it's either. It creates a lot of benefits as well as creating real challenges: It's definitely the case that it's raising total GDP, but has been very dis-equalizing. It is up to our institutions to deal well with that or not.

Some countries have done a much better job at sharing the gains and making sure that everybody's bought in. Others have been much more social Darwinist about it, and the U.S. is very much at the extreme of that among industrialized economies, of going, ‘rah rah,’ to the winners and ‘too bad for you,’ to the losers.

How, exactly, are other countries good at it?

Countries that, I think, are doing really well with this—Norway, Sweden, Denmark, Germany, Switzerland, Austria—have very good educational systems that prepare people not just for highly educated, Ph.D.-level jobs, but also very good vocational, technical education systems.

But there’s also the notion that there are multiple stakeholders in the economy, not just shareholders. Workers have more voice, and that makes people less apprehensive about these changes because they expect that if there are gains, they'll get a piece of them, where in the U.S. a lot of people think, ‘Well, there might be a gain, but I'll be worse off.’ And they're probably right.

Other countries have made it a lot easier for people to feel comfortable about the changes they're bringing on themselves. I think that’s one diagnosis of the current U.S. political system.

You’ve noted in your work that LBJ created a “Commission on Technology, Automation, and Economic Progress” way back in 1964. I didn’t know whether to be encouraged by that or saddened by that—that we’ve been talking about these questions for a long time and don’t seem to have any better answers.

There are two schools of thought that you hear often. One is, ‘the sky is falling, the robots are coming for our jobs, we're all screwed because we've made ourselves obsolete.’ The other version you also hear a lot is, ‘We've been through things like this in the past, it's all worked out fine, it took care of itself, don't worry.’ And I think both of these are really wrong.

I've already indicated why I think the first view is wrong. The reason I think the second view is wrong is because I don't think it took care of itself. Countries have very different levels of quality of life, institutional quality, of democracy, of liberty and opportunity, and those are not because they have different markets or different technologies. It's because they've made different institutional arrangements. Look at the example of Norway and Saudi Arabia, two oil-rich countries. Norway is a very happy place. It's economically mobile with high rates of labor force participation, high rates of education, good civil society. And Saudi Arabia is an absolute monarchy that has high standards of living, but it's not a very happy place because they've stifled innovation and individual freedom. Those are two examples of taking the same technology, which is oil wealth, and either squandering it or investing it successfully.

I think the right lesson from history is that this is an opportunity. Things that raise GDP and make us more productive, they definitely create aggregate wealth. The question is, how do we use that wealth well to have a society that's mobile, that's prosperous, that's open? Or do we use it to basically make some people very wealthy and keep everyone else quiet? So, I think we are at an important juncture, and I don't think the U.S. is dealing with it especially well. Our institutions are very much under threat at a time when they're arguably most needed.

by Nancy Scola, Politico |  Read more:
Image: Porter Gifford

How Vans Got Cool Again

Back in 2002, when Rian Pozzebon, who was then a relative unknown in the sneaker community, got the offer to join Vans and help rebuild the brand’s ailing skate shoe program with his longtime friend and colleague Jon Warren, he had one big question: “Will they let us mess with the classics?”

At the time, Vans wasn’t particularly interested in core models like the Slip-On, Old Skool, and Authentic. “The classics just kind of existed,” says Pozzebon. “But they weren’t pushed.” Instead, they languished—in just a few basic colors—in Vans stores.

The company's focus was directed elsewhere, on newer styles. After riding the wave of the ‘90s skateboarding boom, Vans faced new competition from younger skate shoe brands like DC and Osiris. These companies—born only a few years earlier—favored a chunkier, more tech-forward silhouette (a word the fashion community uses to describe the shape of a shoe). Vans’ retro styling, by comparison, felt stale. By the early years of the new millennium, nearly a decade of sustained growth had fallen off—as had customers’ goodwill.

“I just never took it seriously as a lifestyle shoe. At all,” Brian Trunzo, senior menswear trend forecaster at WGSN, says of his feelings about Vans at the time. Beset by new competition in its core skate market and ignored by trendsetting sneakerheads who preferred the Air Force 1 or Adidas Superstar, Vans seemed on the verge of slipping into irrelevance.

And here was Pozzebon—not even an employee yet—asking if he could look backwards instead of forwards to inform his design decisions. It was a bold question, to say the very least. And yet. “When we came and interviewed they were like, ‘Whatever it takes. Whatever you need,’” he recalls. Whether or not he fully knew it at the time, he’d landed on something that would prove crucial for the brand’s future success.

“It was that vintage piece,” says Pozzebon, now the company's Lifestyle Footwear Design Director. “At the time, Vans didn’t necessarily know what they really had.”

By focusing on that element of the company’s DNA, Pozzebon and his design team led Vans through a turnaround that was nothing short of staggering. The brand has become a staple of American footwear culture, on a level with iconic brands like Converse (which is twice as old) and Nike (which is nearly 10 times as large). Vans are worn by celebrities and fashion influencers, the jeans-and-T-shirt crowd who rarely pay attention to what's stylish, teenagers and toddlers alike. What makes it all the more impressive—especially in an age of unprecedented technological innovation—is that it leaned on just five classic styles to drive its cultural relevance, which arguably has never been higher, as well as its sales, which have inarguably never been higher.

by Jonathan Evans, Esquire |  Read more:
Image: Vans