Monday, January 30, 2017


Solongo Monkhooroi, No Pets

Inside the Mind of a Snapchat Streaker

Abby Rogers, a 15-year-old from California’s Bay Area, recently took extreme measures to help a buddy in Michigan.

The friend had her phone confiscated by her parents and biked to a local library where she got on a computer, sent Rogers her Snapchat user name and password, and begged for a big favor. Every day for two weeks, Rogers would need to log on to the messaging app and send short picture messages back and forth between her friend’s account and her own. The subject didn’t matter. It could be pictures of walls, ceilings—anything, really—so long as it kept alive a continuous, daily volley of missives known as a Snapstreak. Thanks to Rogers’ intervention, the girls’ Snapstreak has been running for more than 270 days.

Keeping streaks alive has grown so urgent that Rogers checks the Snapchat app on her phone roughly every 15 minutes. She had 12 running at last check. If friends don’t “snap” back and forth for 24 hours, streaks die, breaking one of the digital ties that bind America’s teens.

“Sometimes I’ll end up going through a streak in the middle of class. I’ll just leave the phone face up and take a picture of the ceiling,” said Rogers, who feels “guilty” if she doesn’t respond to her friends’ snaps immediately. “I don’t want to leave them hanging.” (...)

A Snap spokeswoman said streaks are meant to be a fun way to illustrate online relationships. The company displays a number next to each friend’s name, tracking how many days the two of you have continuously sent and received messages. Long streaks are rewarded with colored hearts, fire, and other emojis on users’ profiles.

Messages on Snapchat can be personal videos and amusing comments. But they can just as easily be photos of ceilings.

“Some people Snapchat just for the streak,” said Isaiah Figueroa, an 18-year-old student at Wichita State University. Figueroa has 27 streaks with friends, including one of more than 250 days. He gets irritated when people snap a meaningless picture with a generic message, such as “keep the streak going.” He tries to avoid this but admits he’s guilty of the practice when he gets busy and is afraid he’ll lose a streak.

It’s not just teenagers. Chase Haverick, a 30-year-old development communications manager from Oklahoma City, has seven streaks and recently hit 100 days with one friend. Special “100” and “fire” emojis now hover next to a digital representation of their relationship on Snapchat. Hitting the 100 mark was a “bucket list” goal for 2016, Haverick said, only half joking.

Snapstreaks began in April 2015 as part of a wider app update that introduced Friend Emojis. These symbolize a hierarchy of relationships: At the top is the person you send snaps to the most and who also sends you the most snaps. They get a gold heart emoji. Lower down, a smirk emoji means a person messages you more than anyone, but you snap more with others.

The rankings provide clarity in what adolescent development experts said is a messy period of social growth for young adults. “For those that have streaks, they provide a validation for the relationship,” said Emily Weinstein, a doctoral candidate at Harvard University studying the intersection of adolescent behavior and social media. “Attention to your streaks each day is a way of saying ‘we’re OK.’”

Snap adds urgency by putting an hourglass emoji next to a friend’s name if a streak is about to end. “The makers built into the app a system so you have to check constantly or risk missing out,” said Nancy Colier, a psychotherapist and author of The Power of Off. “It taps into the primal fear of exclusion, of being out of the tribe and not able to survive.”

by Lizette Chapman, Bloomberg | Read more:
Image: David Paul Morris

Sunday, January 29, 2017

How the Women of the Mormon Church Came to Embrace Polygamy

Laurel Thatcher Ulrich is a historian’s historian. For more than three decades, she has dazzled her profession with archival discoveries, creative spark and an ability to see “history” where it once appeared there was none to be seen. Her most famous book, “A Midwife’s Tale” (1990), focused on what seemed for generations to be a useless source — the prosaic, detailed diary of an 18th-century New England midwife. Out of these centuries-old jottings, Ulrich conjured an entire social world centered on women’s emotions, experiences and labor. It was one of the most celebrated historical works of the 1990s, winner of the Pulitzer Prize and required reading for any self-respecting history graduate student. To an unusual degree, Ulrich put her stamp on a particular historical method: She went in search of women’s daily lives before the Industrial Revolution, and she used the personal diary as her point of entry.

This approach figures prominently in Ulrich’s latest book, “A House Full of Females: Plural Marriage and Women’s Rights in Early Mormonism, 1835-1870.” This time, there are several diaries, all of them written by men and women who lived through the perilous early years of the Mormon Church. These intimate sources survived through a variety of means: tucked away in tin bread boxes, stashed in basements, hidden in log-cabin walls. Through them, Ulrich seeks to uncover how women experienced the strange and controversial new practice of polygamy, or “plural marriage.”

The basic texture of her subjects’ lives will be familiar to any fan of “A Midwife’s Tale.” In Ulrich’s telling, mid-19th-century Mormon women spent most of their time giving birth; tending to children; surviving bouts of malaria, ague and typhus; and watching in agony as their children succumbed to similar diseases. In between, they performed herculean feats of domestic labor — what the church described as the “privilege to make and mend, and wash, and cook for the Saints.” In return for this “privilege,” early Mormon women also had to contend with a new social difficulty. Beginning in the 1840s, their husbands were suddenly permitted — indeed, encouraged and sometimes coerced — to take on additional wives as a path to eternal salvation.

Ulrich notes that the practice of plural marriage did not descend fully formed from the heavens. It was a social experiment that had to be negotiated and developed by all concerned. The church founder, Joseph Smith, introduced the idea approximately two decades after he supposedly uncovered the sacred golden plates of the Book of Mormon on a hillside in upstate New York. His sudden insight that God wanted men to take multiple wives coincided with rumors about his own extramarital affairs, but Ulrich sidesteps the question of whether Smith encouraged the practice “in order to justify illicit relations with vulnerable young women” (as other biographers have suggested).

Her main interest is in what plural marriage meant for Mormon women in the 19th century, forced to adapt on the fly to a situation they could never have anticipated. This is in some ways a personal question for Ulrich, herself a mother of five and a practicing Mormon as well as a Harvard history professor. All eight of her great-grandparents settled in Utah before the Civil War, members of the faith’s pioneer generation. To ask what it was like for the women who made that journey is also to ask how the modern Mormon Church developed its tight-knit social world, and to think about who mattered within it.

Despite Ulrich’s emphasis on women’s voices and ideas, “A House Full of Females” centers its narrative in part on a man named Wilford Woodruff. An apostle of the church and one of Mormonism’s early converts, Woodruff played a significant role in Mormon history. But his most important quality, from Ulrich’s perspective, is that he kept a detailed diary. That diary paid attention to women, noting on one occasion that the local ward meeting house “was full of females quilting sewing etc.” (thus providing Ulrich with her title). Woodruff married his wife Phebe Carter in 1837 and by all accounts loved her deeply, despite long sojourns apart for missionary work and the difficult deaths of several children. In the mid-1840s, he nonetheless “sealed” himself to two teenage girls, the beginning of a decades-long adventure in polygamy.

In asking readers to enter Wilford and Phebe’s world, Ulrich assumes a certain amount of background knowledge. She takes for granted that her readers know something about the landmark events of early Mormonism, including the mob attacks on Mormon communities in Missouri and Illinois, Smith’s murder and Brigham Young’s ascendancy, and the dismal wagon train journey to the promised land of Utah. She assumes, too, that readers understand the basic tenets of Mormon theology and the controversies that the church inspired in the rest of the United States.

Ulrich focuses instead on the confusion and excitement that accompanied the “glorious” revelation sanctioning plural marriage within the Mormon community, especially among its most elite members. She remains unsure about whether the first plural relationships necessarily involved sex, noting the “scarcity of babies” produced. What does seem clear is that many Mormon women were less than thrilled with the development. Smith’s wife Emma objected from the first and never let up, eventually helping to found a dissident anti-polygamist branch of the church after her husband’s death.

“A House Full of Females” is sensitive to the difficulty and confusion that accompanied early plural marriage, with its implied loss of status for women. But the book also tells a more complicated tale about women’s on-the-ground experiences. While some women objected to plural marriage, Ulrich notes, others sought it out as a means of securing economic stability or of escaping from abusive marriages. Still others came to embrace plural marriage as a form of communitarianism, in which women shared domestic burdens and labor. Ulrich describes friendships and rivalries between wives (themes that will be familiar to any viewer of HBO’s “Big Love”). She even makes a case for plural marriage as a vehicle for a form of feminist consciousness-raising. Although outsiders referred to Brigham Young’s home as his “harem,” Ulrich writes, “it could also have been described as an experiment in cooperative housekeeping and an incubator of female activism.”

As evidence for this budding political sensibility, she points to the little-known fact that the territory of Utah granted women the right to vote in 1870, a full half-century before the federal constitutional amendment. Unlike Wyoming, the first to approve woman suffrage, Utah was already majority-female at that point, and many of those Mormon women supported both suffrage and plural marriage.

by Beverly Gage, NY Times |  Read more:
Image: Edward Martin

Friday, January 27, 2017

Why Ikea's Flatpack Refugee Shelter Won Design of the Year

When Hind and Saffa Hameed arrived at the Al Jamea’a refugee camp in Baghdad in 2015, having been hounded from their home in Ramadi by Islamic State militants, they had never been so glad to see an Ikea product. It wasn’t a Billy bookcase or Malm bed – but an entire flat-pack refugee shelter. The Swedish furniture giant’s innovation has just been crowned Beazley Design of the Year 2016 by London’s Design Museum.

“If you compare life in the tents and life in these shelters, it’s a thousand times better,” Saffa, 34, told UNHCR, the UN’s refugee agency. “The tents are like a piece of clothing and they would always move. We lived without any privacy. It was so difficult.”

The family, having been moved from tent to tent, had experienced severe overheating in summer and flooding during Iraq’s rainy season, giving the four young daughters constant diarrhoea. “When the rains came to the camp, the water was about one foot high,” said Hind, 30. “But this shelter is more protected. We have a door we can close and lock. I feel it’s safer and cleaner.”

With years of expertise in squeezing complex items of furniture into the smallest self-assembly package possible, Ikea has come up with a robust 17.5 sq m shelter that fits inside two boxes and can be assembled by four people in just four hours, following the familiar picture-based instructions – substituting the ubiquitous allen key for a hammer, with no extra tools necessary.

Developed by the not-for-profit Ikea Foundation with UNHCR over the past five years, the Better Shelter consists of a sturdy steel frame clad with insulated polypropylene panels, along with a solar panel on the roof that provides four hours of electric light, or mobile phone charging via a USB port. Crucially, it is firmly anchored to the ground and the walls are stab-proof, a potentially life-saving feature given that such shelters are often sited where violence is rife and gender-based.

Despite the dramatic increase in the number of people being displaced around the world, with UNHCR estimating there are now 2.6 million refugees who have lived in camps for over five years, and some for more than a generation, the typical flimsy tent hasn’t changed much. Cold in winter and sweltering in summer, tents still rely on canvas, ropes and poles. They generally last about six months, leaking when it rains, and blowing away in strong winds.

At $1,250, a Better Shelter costs twice as much as a typical emergency tent, but it provides security, insulation and durability, and it lasts for at least three years. Beyond that time, when the plastic panels might degrade, the frame can be reused and clad in whatever local materials are to hand, from mud bricks to corrugated iron.

Since production started in June 2015, over 16,000 have been deployed to crisis locations, among them Nepal, where Médecins Sans Frontières used them as clinics following the devastating earthquake. Several thousand have been sent to Iraq, and hundreds to Djibouti to house refugees fleeing Yemen.

“It’s almost like playing with Lego,” says Per Heggenes, CEO of the Ikea Foundation. “You can put it together in different ways to make small clinics or temporary schools. A family could also take it apart and take it with them, using the shelter as a framework around which to build with local materials.”

by Oliver Wainwright, The Guardian |  Read more:
Image: Better Shelter

Thursday, January 26, 2017

Paul Simon

The Twilight of the Liberal World Order

The liberal world order established in the aftermath of World War II may be coming to an end, challenged by forces both without and within. The external challenges come from the ambition of dissatisfied large and medium-size powers to overturn the existing strategic order dominated by the United States and its allies and partners. Their aim is to gain hegemony in their respective regions. China and Russia pose the greatest challenges to the world order because of their relative military, economic, and political power and their evident willingness to use it, which makes them significant players in world politics and, just as important, because the regions where they seek strategic hegemony—Asia and Europe—historically have been critical to global peace and stability. At a lesser but still significant level, Iran seeks regional hegemony in the Middle East and Persian Gulf, which if accomplished would have a strategic, economic, and political impact on the international system. North Korea seeks control of the Korean peninsula, which if accomplished would affect the stability and security of northeast Asia. Finally, at a much lower level of concern, there is the effort by ISIS and other radical Islamist groups to establish a new Islamic caliphate in the Middle East. If accomplished, that, too, would have effects on the global order.

However, it is the two great powers, China and Russia, that pose the greatest challenge to the relatively peaceful and prosperous international order created and sustained by the United States. If they were to accomplish their aims of establishing hegemony in their desired spheres of influence, the world would return to the condition it was in at the end of the 19th century, with competing great powers clashing over inevitably intersecting and overlapping spheres of interest. These were the unsettled, disordered conditions that produced the fertile ground for the two destructive world wars of the first half of the 20th century. The collapse of the British-dominated world order on the oceans, the disruption of the uneasy balance of power on the European continent due to the rise of a powerful unified Germany, combined with the rise of Japanese power in East Asia all contributed to a highly competitive international environment in which dissatisfied great powers took the opportunity to pursue their ambitions in the absence of any power or group of powers to unite in checking them. The result was an unprecedented global calamity. It has been the great accomplishment of the U.S.-led world order in the 70 years since the end of the Second World War that this kind of competition has been held in check and great power conflicts have been avoided.

The role of the United States, however, has been critical. Until recently, the dissatisfied great and medium-size powers have faced considerable and indeed almost insuperable obstacles in achieving their objectives. The chief obstacle has been the power and coherence of the order itself and of its principal promoter and defender. The American-led system of political and military alliances, especially in the two critical regions of Europe and East Asia, has presented China and Russia with what Dean Acheson once referred to as “situations of strength” in their regions that have required them to pursue their ambitions cautiously and in most respects to defer serious efforts to disrupt the international system. The system has served as a check on their ambitions in both positive and negative ways. They have been participants in and for the most part beneficiaries of the open international economic system the United States created and helped sustain and, so long as that system was functioning, have had more to gain by playing in it than by challenging and overturning it. The same cannot be said of the political and strategic aspects of the order, both of which have worked to their detriment. The growth and vibrancy of democratic government in the two decades following the collapse of Soviet communism has posed a continual threat to the ability of rulers in Beijing and Moscow to maintain control, and since the end of the Cold War they have regarded every advance of democratic institutions, including especially the geographical advance close to their borders, as an existential threat—and with reason. The continual threat to the basis of their rule posed by the U.S.-supported order has made them hostile both to the order and to the United States. However, it has also been a source of weakness and vulnerability. Chinese rulers in particular have had to worry about what an unsuccessful confrontation with the United States might do to their sources of legitimacy at home. And although Vladimir Putin has to some extent used a calculated foreign adventurism to maintain his hold on domestic power, he has taken a more cautious approach when met with determined U.S. and European opposition, as in the case of Ukraine, and pushed forward, as in Syria, only when invited to do so by U.S. and Western passivity. Autocratic rulers in a liberal democratic world have had to be careful.

The greatest check on Chinese and Russian ambitions, however, has come from the combined military power of the United States and its allies in Europe and Asia. China, although increasingly powerful itself, has had to contemplate facing the combined military strength of the world’s superpower and some very formidable regional powers linked by alliance or common strategic interest, including Japan, India, and South Korea, as well as smaller but still potent nations like Vietnam and Australia. Russia has had to face the United States and its NATO allies. When united, these military powers present a daunting challenge to a revisionist power that can call on no allies of its own for assistance. Even were the Chinese to score an early victory in a conflict, they would have to contend over time with the combined industrial productive capacities of some of the world’s richest and most technologically advanced nations. A weaker Russia would face an even greater challenge.

Faced with these obstacles, the two great powers, as well as the lesser dissatisfied powers, have had to hope for or if possible engineer a weakening of the U.S.-supported world order from within. This could come about either by separating the United States from its allies, raising doubts about the U.S. commitment to defend its allies militarily in the event of a conflict, or by various means wooing American allies out from within the liberal world order’s strategic structure. For most of the past decade, the reaction of American allies to greater aggressiveness on the part of China and Russia in their respective regions, and to Iran in the Middle East, has been to seek more reassurance from the United States. Russian actions in Georgia, Ukraine, and Syria; Chinese actions in the East and South China seas; Iranian actions in Syria, Iraq, and along the littoral of the Persian Gulf—all have led to calls by American allies and partners for a greater commitment. In this respect, the system has worked as it was supposed to. What the political scientist William Wohlforth once described as the inherent stability of the unipolar order reflected this dynamic—as dissatisfied regional powers sought to challenge the status quo, their alarmed neighbors turned to the distant American superpower to contain their ambitions.

The system has depended, however, on will, capacity, and coherence at the heart of the liberal world order. The United States had to be willing and able to play its part as the principal guarantor of the order, especially in the military and strategic realm. The order’s ideological and economic core—the democracies of Europe and East Asia and the Pacific—had to remain relatively healthy and relatively confident. In such circumstances, the combined political, economic, and military power of the liberal world would be too great to be seriously challenged by the great powers, much less by the smaller dissatisfied powers.

In recent years, however, the liberal order has begun to weaken and fracture at the core. As a result of many related factors—difficult economic conditions, the recrudescence of nationalism and tribalism, weak and uncertain political leadership and unresponsive mainstream political parties, a new era of communications that seems to strengthen rather than weaken tribalism—there has emerged a crisis of confidence in what might be called the liberal enlightenment project. That project tended to elevate universal principles of individual rights and common humanity over ethnic, racial, religious, national, or tribal differences. It looked to a growing economic interdependence to create common interests across boundaries and the establishment of international institutions to smooth differences and facilitate cooperation among nations. Instead, the past decade has seen the rise of tribalism and nationalism; an increasing focus on the “other” in all societies; and a loss of confidence in government, in the capitalist system, and in democracy. We have been witnessing something like the opposite of the “end of history” but have returned to history with a vengeance, rediscovering all the darker aspects of the human soul. That includes, for many, the perennial human yearning for a strong leader to provide firm guidance in a time of seeming breakdown and incoherence.

by Robert Kagan, Brookings Institution |  Read more:
Image: Dr. Strangelove

The Long March From China to the Ivies

As the daughter of a senior colonel in China’s People’s Liberation Army, Ren Futong has lived all 17 years of her life in a high-walled military compound in northern Beijing. No foreigners are allowed inside the gates; the vast encampment, with its own bank, grocery store and laundromat, is patrolled by armed guards and goose-stepping soldiers.

Growing up in this enclave, Ren – also known as Monica, the English name she has adopted – imbibed the lessons of conformity and obedience, loyalty and patriotism, in their purest form. At her school, independent thought that deviated from the reams of right answers the students needed to memorise for the next exam was suppressed. The purpose of it all, Monica told me, was “to make everybody the same”.

For most of her childhood, Monica did as she was expected to. She gave up painting and calligraphy, and rose to the top of her class. Praised as a “study god”, she aced the national high-school entrance exam, but inside she was beginning to rebel. The agony and monotony of studying for that test made her dread the prospect of three more years cramming for the gaokao, the pressure-packed national exam whose result – a single number – is the sole criterion for admissions into Chinese universities.

One spring evening two years ago, Monica, then 15, came home to the compound and made what, for an acquiescent military daughter, was a startling pronouncement. “I told my parents that I was tired of preparing for tests like a machine,” she recalls. “I wanted to go to university in America.” She had hinted at this desire before, talking once over dinner about the freedom offered by an American liberal-arts education, but her parents had dismissed it as idle chatter. This time, they could see that she was dead serious. “My parents were kinda shocked,” she says. “They remained silent for a long period.”

Several days passed before they broke their silence. Her father, a taciturn career officer educated at a military academy, told her that “it would be much easier if you stayed in China where your future is guaranteed.” Her mother, an IT engineer, said Monica would very likely get into China’s most prestigious institution, Peking University, a training ground for the country’s future leaders. “Why give that up?” she asked. “We know the system here, but we know nothing about America, so we can’t help you there. You’d be totally on your own.” Then, after cycling through all the counter-arguments, her mother finally said: “If your heart is really set on going to the US, we will support your decision.”

The Ren family was taking a considerable risk. If Monica, their only child, wanted to study abroad, she would have to abandon the gaokao track, the only route available to universities within China, to have time to prepare for a completely different set of standardised tests and a confounding university application process. If she changed her mind – or, worse, failed to make the transition – she could not resume her studies within the Chinese system. And if that happened, she would miss the chance of going to an elite university and, therefore, of getting a top job within the system. For the Rens, this was the point of no return.

It is one of China’s curious contradictions that, even as the government tries to eradicate foreign influences from the country’s universities, the flood of Chinese students leaving for the West continues to rise. Over the past decade, the number of mainland Chinese students enrolled in American colleges and universities has nearly quintupled, from 62,523 in 2005 to 304,040 last year, according to the Institute of International Education. Many of these students are the sons and daughters of China’s rising elite, establishment families who can afford tuition fees of $60,000 a year for America’s top universities – and the tens of thousands of dollars needed to prepare for the transition. Even the daughter of Xi Jinping, China’s president and the man driving the campaign against foreign ideas, recently studied – under a pseudonym – at Harvard University.

Among Western educators, the Chinese system is famous for producing an elite corps of high-school students who regularly finish at the top of global test rankings, far ahead of their American and British counterparts. Yet so many Chinese families are now opting out of this system that selling education to Chinese students has become a profitable business for the West. They now account for nearly a third of all foreign students in America, contributing $9.8 billion a year to the United States’ economy. In Britain, too, Chinese students top the international lists. And the outflow shows no sign of subsiding: according to a recent Hurun Report, an annual survey of China’s elite, 80% of the country’s wealthy families plan to send their children abroad for education.

Not every Chinese student is driven, as Monica is, by the desire to escape the grind of the gaokao and get a more liberal education. For many Chinese families, sending a child to a Western university is a way of signalling status – yet “another luxury brand purchase,” as Jiang Xueqin, an educational consultant, puts it. For students faring poorly in the gaokao system, moreover, foreign universities offer an escape valve, and a way to gain an edge in the increasingly competitive job and marriage market back home. And for wealthy families seeking a safe haven for their assets – by one estimate more than $1 trillion in capital left China in 2015 – a foreign education for a child can serve as a first step towards capital flight, foreign investment, even eventual emigration.

by Brook Larmer, 1843 |  Read more:
Image: James Wasserman

Living the High Life


Image: The Interlace, Singapore, OMA/Ole Scheeren (2007-13)

Doomsday Prep For The Super-Rich

[ed. Having just experienced a mini-disaster this last weekend, I'd suggest a few basic precautions that anyone can take (other than maintaining a fully fueled helicopter): a case of bottled water, various canned goods (which don't require heating), maybe some hard crackers for a little starch, flashlight and batteries, solar phone charger, lighter, bag of charcoal (to cook whatever food you have left before it goes bad, which happens quicker than you might think), and a supply of cash.]

Steve Huffman, the thirty-three-year-old co-founder and C.E.O. of Reddit, which is valued at six hundred million dollars, was nearsighted until November, 2015, when he arranged to have laser eye surgery. He underwent the procedure not for the sake of convenience or appearance but, rather, for a reason he doesn’t usually talk much about: he hopes that it will improve his odds of surviving a disaster, whether natural or man-made. “If the world ends—and not even if the world ends, but if we have trouble—getting contacts or glasses is going to be a huge pain in the ass,” he told me recently. “Without them, I’m fucked.”

Huffman, who lives in San Francisco, has large blue eyes, thick, sandy hair, and an air of restless curiosity; at the University of Virginia, he was a competitive ballroom dancer, who hacked his roommate’s Web site as a prank. He is less focussed on a specific threat—a quake on the San Andreas, a pandemic, a dirty bomb—than he is on the aftermath, “the temporary collapse of our government and structures,” as he puts it. “I own a couple of motorcycles. I have a bunch of guns and ammo. Food. I figure that, with that, I can hole up in my house for some amount of time.”

Survivalism, the practice of preparing for a crackup of civilization, tends to evoke a certain picture: the woodsman in the tinfoil hat, the hysteric with the hoard of beans, the religious doomsayer. But in recent years survivalism has expanded to more affluent quarters, taking root in Silicon Valley and New York City, among technology executives, hedge-fund managers, and others in their economic cohort.

Last spring, as the Presidential campaign exposed increasingly toxic divisions in America, Antonio García Martínez, a forty-year-old former Facebook product manager living in San Francisco, bought five wooded acres on an island in the Pacific Northwest and brought in generators, solar panels, and thousands of rounds of ammunition. “When society loses a healthy founding myth, it descends into chaos,” he told me. The author of “Chaos Monkeys,” an acerbic Silicon Valley memoir, García Martínez wanted a refuge that would be far from cities but not entirely isolated. “All these dudes think that one guy alone could somehow withstand the roving mob,” he said. “No, you’re going to need to form a local militia. You just need so many things to actually ride out the apocalypse.” Once he started telling peers in the Bay Area about his “little island project,” they came “out of the woodwork” to describe their own preparations, he said. “I think people who are particularly attuned to the levers by which society actually works understand that we are skating on really thin cultural ice right now.”

In private Facebook groups, wealthy survivalists swap tips on gas masks, bunkers, and locations safe from the effects of climate change. One member, the head of an investment firm, told me, “I keep a helicopter gassed up all the time, and I have an underground bunker with an air-filtration system.” He said that his preparations probably put him at the “extreme” end among his peers. But he added, “A lot of my friends do the guns and the motorcycles and the gold coins. That’s not too rare anymore.” (...)

How did a preoccupation with the apocalypse come to flourish in Silicon Valley, a place known, to the point of cliché, for unstinting confidence in its ability to change the world for the better?

Those impulses are not as contradictory as they seem. Technology rewards the ability to imagine wildly different futures, Roy Bahat, the head of Bloomberg Beta, a San Francisco-based venture-capital firm, told me. “When you do that, it’s pretty common that you take things ad infinitum, and that leads you to utopias and dystopias,” he said. It can inspire radical optimism—such as the cryonics movement, which calls for freezing bodies at death in the hope that science will one day revive them—or bleak scenarios. Tim Chang, the venture capitalist who keeps his bags packed, told me, “My current state of mind is oscillating between optimism and sheer terror.”

In recent years, survivalism has been edging deeper into mainstream culture. In 2012, National Geographic Channel launched “Doomsday Preppers,” a reality show featuring a series of Americans bracing for what they called S.H.T.F. (when the “shit hits the fan”). The première drew more than four million viewers, and, by the end of the first season, it was the most popular show in the channel’s history. A survey commissioned by National Geographic found that forty per cent of Americans believed that stocking up on supplies or building a bomb shelter was a wiser investment than a 401(k). Online, the prepper discussions run from folksy (“A Mom’s Guide to Preparing for Civil Unrest”) to grim (“How to Eat a Pine Tree to Survive”). (...)

How many wealthy Americans are really making preparations for a catastrophe? It’s hard to know exactly; a lot of people don’t like to talk about it. (“Anonymity is priceless,” one hedge-fund manager told me, declining an interview.) Sometimes the topic emerges in unexpected ways. Reid Hoffman, the co-founder of LinkedIn and a prominent investor, recalls telling a friend that he was thinking of visiting New Zealand. “Oh, are you going to get apocalypse insurance?” the friend asked. “I’m, like, Huh?” Hoffman told me. New Zealand, he discovered, is a favored refuge in the event of a cataclysm. Hoffman said, “Saying you’re ‘buying a house in New Zealand’ is kind of a wink, wink, say no more. Once you’ve done the Masonic handshake, they’ll be, like, ‘Oh, you know, I have a broker who sells old ICBM silos, and they’re nuclear-hardened, and they kind of look like they would be interesting to live in.’ ”

I asked Hoffman to estimate what share of fellow Silicon Valley billionaires have acquired some level of “apocalypse insurance,” in the form of a hideaway in the U.S. or abroad. “I would guess fifty-plus per cent,” he said, “but that’s parallel with the decision to buy a vacation home. Human motivation is complex, and I think people can say, ‘I now have a safety blanket for this thing that scares me.’ ” The fears vary, but many worry that, as artificial intelligence takes away a growing share of jobs, there will be a backlash against Silicon Valley, America’s second-highest concentration of wealth. (Southwestern Connecticut is first.) “I’ve heard this theme from a bunch of people,” Hoffman said. “Is the country going to turn against the wealthy? Is it going to turn against technological innovation? Is it going to turn into civil disorder?”

by Evan Osnos, New Yorker |  Read more:
Image: Dan Winters

Wednesday, January 25, 2017

Hillsong UNITED


[ed. See also: Touch The Sky]

What’s up with Firefox?

Until about five years ago, techies and others who wanted a speedier, extensible, more privacy-oriented web browser on their desktops often immediately downloaded Mozilla's Firefox to use instead of Internet Explorer on Windows or Safari on the Mac.

But those days seem long ago. Firefox is hardly discussed today, and its usage has cratered from a high of over 30 percent of the desktop browser market in 2010 to about 12 percent today, according to Mozilla, citing stats from NetMarketShare. (Various other analytics firms put the share as low as 10 percent or as high as 15 percent.) And Firefox’s share on mobile devices is even worse, at under 1 percent, according to the same firm.

Today, the go-to-browser is Google’s Chrome, which has over a 50 percent share on both desktop and mobile, according to NetMarketShare.

Mozilla Wakes Up

After years of neglecting Firefox, misreading mobile users, and putting most of its chips on a failed phone project, Mozilla says it is working hard to get Firefox off the mat.

“In many ways, we went through a time that you don’t get to survive,” says Mark Mayo, senior vice president for Firefox and a member of Mozilla’s decision-making steering committee. “Somehow we’re not dead… and it feels like we’re picking up speed and figuring out what to do.”

He admits that Firefox has fallen behind Chrome, Microsoft’s Edge, and Apple’s Safari technically, but says the company is executing with total focus on a plan to reverse that. “For several years, we have not been spending the effort we would normally spend on the flagship product,” Mayo concedes. “Firefox didn’t get better along with the competition.”

Why Firefox is Different

Now, he says, the company has embraced the proposition that “it kind of makes no sense to be us and not have the best browser.” That’s because for Mozilla, which is controlled by a foundation of the same name, Firefox is its main product. The two names are inseparable in many people’s minds. And an open, vibrant web — as opposed to a world of apps and social media and search controlled by a few companies — is its main philosophical concern.

That last bit may sound like idealistic claptrap, but it’s always been core to Mozilla’s mission. Mayo says he fears that big companies like Google and Apple don’t care whether roaming the open internet is subsumed by launching apps or by the act of searching. But, he says, Firefox does.

“Everyone else builds a browser for defensive reasons,” says Mayo. “We build one because we love browsers.” (...)

The Task Ahead

But building Firefox into a real contender will take a lot more work, and Mayo concedes that even parts of the plan won’t be visible to users until later this year. Still, Mozilla claims that it “aims to pass Chrome on key performance measures that matter by end of year.”

To do that, the company is betting on something called Project Quantum, a new under-the-hood browsing engine that will replace big chunks of Mozilla’s ancient Gecko engine. In an October blog post by David Bryant, head of platform engineering, the company claimed this:
“We are striving for performance gains from Quantum that will be so noticeable that your entire web experience will feel different. Pages will load faster, and scrolling will be silky smooth. Animations and interactive apps will respond instantly, and be able to handle more intensive content while holding consistent frame rates. And the content most important to you will automatically get the highest priority, focusing processing power where you need it the most.”
Another cornerstone for the new Firefox is a project called the Context Graph that aims to use an enhanced browser history to replace navigational search. The idea is to use differential privacy — the same kind of privacy-respecting machine learning that Apple uses — to suggest places on the web to go for particular needs, rather than getting navigational answers from search.
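
The article gestures at differential privacy without explaining it. As a rough, hypothetical sketch of the general idea (not Mozilla's or Apple's actual mechanism; every function name and parameter below is invented for illustration), the classic "randomized response" technique flips each user's report with some probability: no single report can be trusted, so it reveals little about that user, yet the true rate can still be estimated across many users.

```python
import random

def randomized_response(visited, p_truth=0.75):
    """Report whether the user visited a given site category.

    With probability p_truth the true value is reported; otherwise the
    report is a fair coin flip, so no single report proves anything
    about the individual user.
    """
    if random.random() < p_truth:
        return visited
    return random.random() < 0.5

def estimate_true_rate(reports, p_truth=0.75):
    """Estimate the population-level visit rate from the noisy reports.

    E[reported rate] = p_truth * true_rate + (1 - p_truth) * 0.5,
    so we invert that relation to recover true_rate.
    """
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

if __name__ == "__main__":
    # Simulate 10,000 users, 30% of whom actually visited the category.
    truth = [random.random() < 0.3 for _ in range(10000)]
    reports = [randomized_response(v) for v in truth]
    print(round(estimate_true_rate(reports), 3))  # prints a value near 0.3
```

Real systems add formally calibrated noise under a privacy budget rather than a fixed coin flip, but the trade-off is the same: individual reports become deniable while aggregate patterns, the raw material for suggestions, stay usable.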

Mayo calls this “navigation by browser, not Google” and declares: “Navigation in the browser has been stagnant for a decade and we’re not going to stand for that.”

by Walt Mossberg, Recode |  Read more:
Image: uncredited

Everyday Authoritarianism is Boring and Tolerable

Malaysia is a country that I know well, and whose political system I have studied closely for fifteen years. It is also a country whose political liberalization I have long awaited. Malaysia has a multiparty parliamentary system of government, but the same coalition of parties has been in power for six decades, and has never lost a general election. The government retains—in a holdover from the British colonial period—the legal authority to detain people without trial if it so desires. The print and broadcast media are fairly compliant, mostly owned by the corporate allies of political elites, and rarely criticize the government.

Living in Malaysia and working on Malaysian politics has taught me something important about authoritarianism from my perspective as an American. That is, the mental image of authoritarian rule in the minds of most Americans is completely unrealistic, and dangerously so.

Even though Malaysia is a perfectly wonderful place to visit, and an emerging market economy grappling with the same “middle income trap” issues that characterize most emerging market economies, scholars of comparative politics do not consider it to be an electoral democracy. Freedom House considers Malaysia “Partly Free.” The Democracy-Dictatorship dataset codes Malaysia as a civilian dictatorship, as do Boix-Miller-Rosato. Levitsky and Way consider Malaysia to be a classic case of competitive authoritarianism. There are quite a few other countries like Malaysia: Mexico and Taiwan for most of the 20th century, Russia, Turkey, Singapore, Cameroon, Tanzania, and others.

The mental image that most Americans harbor of what actual authoritarianism looks like is fantastical and cartoonish. This vision of authoritarian rule has jackbooted thugs, all-powerful elites acting with impunity, poverty and desperate hardship for everyone else, strict controls on political expression and mobilization, and a dictator who spends his time ordering the murder or disappearance of his opponents using an effective and wholly compliant security apparatus. This image of authoritarianism comes from the popular media (dictators in movies are never constrained by anything but open insurrection), from American mythmaking about the Founding (and the Second World War and the Cold War), and from a kind of “imaginary othering” in which the opposite of democracy is the absence of everything that characterizes the one democracy that one knows.

Still, that fantastical image of authoritarianism is entirely misleading as a description of modern authoritarian rule and life under it. It is a description, to some approximation, of totalitarianism. Carl Friedrich is the best on totalitarianism, and Hannah Arendt of course on its emergence. But Arendt and Friedrich were very clear that totalitarianism is exceptional as a form of politics.

The reality is that everyday life under the kinds of authoritarianism that exist today is very familiar to most Americans. You go to work, you eat your lunch, you go home to your family.* There are schools and businesses, and some people “make it” through hard work and luck. Most people worry about making sure their kids get into good schools. The military is in the barracks, and the police mostly investigate crimes and solve cases. There is political dissent, if rarely open protest, but in general people are free to complain to one another. There are even elections. This is Malaysia, and many countries like it.

Everyday life in the modern authoritarian regime is, in this sense, boring and tolerable. It is not outrageous. Most critics, even vocal ones, are not going to be murdered like Anna Politkovskaya; they are going to be frustrated. Most not-very-vocal critics will live their lives completely unmolested by the security forces. They will enjoy it when the trains run on time, blame the government when they do not, gripe at their taxes, and save for vacation. Elections, when they happen, will serve the “anesthetic function” that Philippe Schmitter attributed to elections in Portugal under Salazar in the greatly underappreciated 1978 volume Elections without Choice.

Life under authoritarian rule in such situations looks a lot like life in a democracy. As Malaysia’s longtime Prime Minister Mahathir Mohamad used to say, “if you don’t like me, defeat me in my district.”

This observation has two particular consequences. One, for asking if “the people” will tolerate authoritarian rule. The premise upon which this question is based is that authoritarianism is intolerable generally. It turns out that most people express democratic values, but living in a complicated world in which people care about more things than just their form of government, it is easy to see that given an orderly society and a functioning economy, democratic politics may become a low priority.** The answer to the question “will ‘the people’ tolerate authoritarian rule?” is yes, absolutely.

Second, for knowing if you are living in an authoritarian regime versus a democratic one. Most Americans conceptualize a hypothetical end of American democracy in Apocalyptic terms. But actually, you usually learn that you are no longer living in a democracy not because The Government Is Taking Away Your Rights, or passing laws that you oppose, or because there is a coup or a quisling. You know that you are no longer living in a democracy because the elections in which you are participating no longer can yield political change.

It is possible to read what I’ve written here as a defense of authoritarianism, or as a dismissal of democracy. But my message is the exact opposite. The fantasy of authoritarianism distracts Americans from the mundane ways in which the mechanisms of political competition and checks and balances can erode. Democracy has not survived because the alternatives are acutely horrible, and if it ends, it will not end in a bang. It is more likely that democracy ends, with a whimper, when the case for supporting it—the case, that is, for everyday democracy—is no longer compelling.

by Tom Pepinsky, Associate Professor of Government, Cornell University | Read more:

Tuesday, January 24, 2017

Saturday, January 21, 2017

The Trouble with Quantum Mechanics

The development of quantum mechanics in the first decades of the twentieth century came as a shock to many physicists. Today, despite the great successes of quantum mechanics, arguments continue about its meaning, and its future.

1.

The first shock came as a challenge to the clear categories to which physicists by 1900 had become accustomed. There were particles—atoms, and then electrons and atomic nuclei—and there were fields—conditions of space that pervade regions in which electric, magnetic, and gravitational forces are exerted. Light waves were clearly recognized as self-sustaining oscillations of electric and magnetic fields. But in order to understand the light emitted by heated bodies, Albert Einstein in 1905 found it necessary to describe light waves as streams of massless particles, later called photons.

Then in the 1920s, according to theories of Louis de Broglie and Erwin Schrödinger, it appeared that electrons, which had always been recognized as particles, under some circumstances behaved as waves. In order to account for the energies of the stable states of atoms, physicists had to give up the notion that electrons in atoms are little Newtonian planets in orbit around the atomic nucleus. Electrons in atoms are better described as waves, fitting around the nucleus like sound waves fitting into an organ pipe.1 The world’s categories had become all muddled.

Worse yet, the electron waves are not waves of electronic matter, in the way that ocean waves are waves of water. Rather, as Max Born came to realize, the electron waves are waves of probability. That is, when a free electron collides with an atom, we cannot in principle say in what direction it will bounce off. The electron wave, after encountering the atom, spreads out in all directions, like an ocean wave after striking a reef. As Born recognized, this does not mean that the electron itself spreads out. Instead, the undivided electron goes in some one direction, but not a precisely predictable direction. It is more likely to go in a direction where the wave is more intense, but any direction is possible.

Probability was not unfamiliar to the physicists of the 1920s, but it had generally been thought to reflect an imperfect knowledge of whatever was under study, not an indeterminism in the underlying physical laws. Newton’s theories of motion and gravitation had set the standard of deterministic laws. When we have reasonably precise knowledge of the location and velocity of each body in the solar system at a given moment, Newton’s laws tell us with good accuracy where they will all be for a long time in the future. Probability enters Newtonian physics only when our knowledge is imperfect, as for example when we do not have precise knowledge of how a pair of dice is thrown. But with the new quantum mechanics, the moment-to-moment determinism of the laws of physics themselves seemed to be lost.

All very strange. In a 1926 letter to Born, Einstein complained:
Quantum mechanics is very impressive. But an inner voice tells me that it is not yet the real thing. The theory produces a good deal but hardly brings us closer to the secret of the Old One. I am at all events convinced that He does not play dice.2
As late as 1964, in his Messenger lectures at Cornell, Richard Feynman lamented, “I think I can safely say that no one understands quantum mechanics.”3 With quantum mechanics, the break with the past was so sharp that all earlier physical theories became known as “classical.”

The weirdness of quantum mechanics did not matter for most purposes. Physicists learned how to use it to do increasingly precise calculations of the energy levels of atoms, and of the probabilities that particles will scatter in one direction or another when they collide. Lawrence Krauss has labeled the quantum mechanical calculation of one effect in the spectrum of hydrogen “the best, most accurate prediction in all of science.”4 Beyond atomic physics, early applications of quantum mechanics listed by the physicist Gino Segrè included the binding of atoms in molecules, the radioactive decay of atomic nuclei, electrical conduction, magnetism, and electromagnetic radiation.5 Later applications spanned theories of semiconductivity and superconductivity, white dwarf stars and neutron stars, nuclear forces, and elementary particles. Even the most adventurous modern speculations, such as string theory, are based on the principles of quantum mechanics.

Many physicists came to think that the reaction of Einstein and Feynman and others to the unfamiliar aspects of quantum mechanics had been overblown. This used to be my view. After all, Newton’s theories too had been unpalatable to many of his contemporaries. Newton had introduced what his critics saw as an occult force, gravity, which was unrelated to any sort of tangible pushing and pulling, and which could not be explained on the basis of philosophy or pure mathematics. Also, his theories had renounced a chief aim of Ptolemy and Kepler, to calculate the sizes of planetary orbits from first principles. But in the end the opposition to Newtonianism faded away. Newton and his followers succeeded in accounting not only for the motions of planets and falling apples, but also for the movements of comets and moons and the shape of the earth and the change in direction of its axis of rotation. By the end of the eighteenth century this success had established Newton’s theories of motion and gravitation as correct, or at least as a marvelously accurate approximation. Evidently it is a mistake to demand too strictly that new physical theories should fit some preconceived philosophical standard.

In quantum mechanics the state of a system is not described by giving the position and velocity of every particle and the values and rates of change of various fields, as in classical physics. Instead, the state of any system at any moment is described by a wave function, essentially a list of numbers, one number for every possible configuration of the system.6 If the system is a single particle, then there is a number for every possible position in space that the particle may occupy. This is something like the description of a sound wave in classical physics, except that for a sound wave a number for each position in space gives the pressure of the air at that point, while for a particle in quantum mechanics the wave function’s number for a given position reflects the probability that the particle is at that position. What is so terrible about that? Certainly, it was a tragic mistake for Einstein and Schrödinger to step away from using quantum mechanics, isolating themselves in their later lives from the exciting progress made by others.

2.

Even so, I’m not as sure as I once was about the future of quantum mechanics. It is a bad sign that those physicists today who are most comfortable with quantum mechanics do not agree with one another about what it all means. The dispute arises chiefly regarding the nature of measurement in quantum mechanics. This issue can be illustrated by considering a simple example, measurement of the spin of an electron. (A particle’s spin in any direction is a measure of the amount of rotation of matter around a line pointing in that direction.)

All theories agree, and experiment confirms, that when one measures the amount of spin of an electron in any arbitrarily chosen direction there are only two possible results. One possible result will be equal to a positive number, a universal constant of nature. (This is the constant that Max Planck originally introduced in his 1900 theory of heat radiation, denoted h, divided by 4Ï€.) The other possible result is its opposite, the negative of the first. These positive or negative values of the spin correspond to an electron that is spinning either clockwise or counter-clockwise in the chosen direction.

But it is only when a measurement is made that these are the sole two possibilities. An electron spin that has not been measured is like a musical chord, formed from a superposition of two notes that correspond to positive or negative spins, each note with its own amplitude. Just as a chord creates a sound distinct from each of its constituent notes, the state of an electron spin that has not yet been measured is a superposition of the two possible states of definite spin, the superposition differing qualitatively from either state. In this musical analogy, the act of measuring the spin somehow shifts all the intensity of the chord to one of the notes, which we then hear on its own.

This can be put in terms of the wave function. If we disregard everything about an electron but its spin, there is not much that is wavelike about its wave function. It is just a pair of numbers, one number for each sign of the spin in some chosen direction, analogous to the amplitudes of each of the two notes in a chord.7 The wave function of an electron whose spin has not been measured generally has nonzero values for spins of both signs.

There is a rule of quantum mechanics, known as the Born rule, that tells us how to use the wave function to calculate the probabilities of getting various possible results in experiments. For example, the Born rule tells us that the probabilities of finding either a positive or a negative result when the spin in some chosen direction is measured are proportional to the squares of the numbers in the wave function for those two states of the spin.8
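
Spelled out for the two-component spin wave function described above, with c+ and c− standing for the two amplitudes (our labels, not the article's), the superposition and the Born-rule probabilities read:

```latex
% The unmeasured spin state is a superposition of the two definite-spin
% states, one complex amplitude for each sign of the spin.
\[
  \psi = c_{+}\,\chi_{+} + c_{-}\,\chi_{-}
\]
% Born rule: each outcome's probability is proportional to the squared
% magnitude of its amplitude.
\[
  P(+) = \frac{|c_{+}|^{2}}{|c_{+}|^{2} + |c_{-}|^{2}},
  \qquad
  P(-) = \frac{|c_{-}|^{2}}{|c_{+}|^{2} + |c_{-}|^{2}}
\]
```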

The introduction of probability into the principles of physics was disturbing to past physicists, but the trouble with quantum mechanics is not that it involves probabilities. We can live with that. The trouble is that in quantum mechanics the way that wave functions change with time is governed by an equation, the Schrödinger equation, that does not involve probabilities. It is just as deterministic as Newton’s equations of motion and gravitation. That is, given the wave function at any moment, the Schrödinger equation will tell you precisely what the wave function will be at any future time. There is not even the possibility of chaos, the extreme sensitivity to initial conditions that is possible in Newtonian mechanics. So if we regard the whole process of measurement as being governed by the equations of quantum mechanics, and these equations are perfectly deterministic, how do probabilities get into quantum mechanics?
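
[ed. A sketch of what "deterministic" means here, assuming a simple hypothetical spin-1/2 Hamiltonian (a field along x, with hbar set to 1); the point is only that no probabilities enter the evolution itself.]

import numpy as np

identity = np.eye(2, dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

def evolve(psi, omega, t):
    # Schrodinger evolution psi(t) = exp(-iHt) psi(0); for H = (omega/2) sigma_x the
    # matrix exponential has this closed form. Nothing random appears anywhere.
    theta = omega * t / 2
    return (np.cos(theta) * identity - 1j * np.sin(theta) * sigma_x) @ psi

psi0 = np.array([1, 0], dtype=complex)              # start with definite positive spin
psi_t = evolve(psi0, omega=1.0, t=2.0)
print(np.abs(psi_t) ** 2)                           # Born-rule probabilities at time t
print(np.allclose(psi_t, evolve(psi0, 1.0, 2.0)))   # True: same input, same output, every time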

One common answer is that, in a measurement, the spin (or whatever else is measured) is put in an interaction with a macroscopic environment that jitters in an unpredictable way. For example, the environment might be the shower of photons in a beam of light that is used to observe the system, as unpredictable in practice as a shower of raindrops. Such an environment causes the superposition of different states in the wave function to break down, leading to an unpredictable result of the measurement. (This is called decoherence.) It is as if a noisy background somehow unpredictably left only one of the notes of a chord audible. But this begs the question. If the deterministic Schrödinger equation governs the changes through time not only of the spin but also of the measuring apparatus and the physicist using it, then the results of measurement should not in principle be unpredictable. So we still have to ask, how do probabilities get into quantum mechanics?
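
[ed. A toy illustration of decoherence, assuming the environment does nothing but scramble the relative phase between the two spin states; this is a cartoon of the idea, not Weinberg's argument.]

import numpy as np

rng = np.random.default_rng(0)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)    # equal superposition of the two spins

# Average the density matrix over many random phase kicks from the "environment".
rho = np.zeros((2, 2), dtype=complex)
n = 10_000
for _ in range(n):
    phase = rng.uniform(0, 2 * np.pi)
    kicked = psi * np.array([1, np.exp(1j * phase)])  # environment scrambles the relative phase
    rho += np.outer(kicked, kicked.conj()) / n

print(np.round(rho, 3))   # the off-diagonal (interference) terms average toward zero;
                          # the diagonal probabilities 0.5 and 0.5 survive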

One response to this puzzle was given in the 1920s by Niels Bohr, in what came to be called the Copenhagen interpretation of quantum mechanics. According to Bohr, in a measurement the state of a system such as a spin collapses to one result or another in a way that cannot itself be described by quantum mechanics, and is truly unpredictable. This answer is now widely felt to be unacceptable. There seems no way to locate the boundary between the realms in which, according to Bohr, quantum mechanics does or does not apply. As it happens, I was a graduate student at Bohr’s institute in Copenhagen, but he was very great and I was very young, and I never had a chance to ask him about this.

Today there are two widely followed approaches to quantum mechanics, the “realist” and “instrumentalist” approaches, which view the origin of probability in measurement in two very different ways.9 For reasons I will explain, neither approach seems to me quite satisfactory.10

by Steven Weinberg, NYRB | Read more:
Image: Eric J. Heller

John James Audubon, Louisiana Heron (1834)
via:

Showering with Spiders

One cold morning last autumn, with the shower’s hot, steamy water pleasantly pelting my neck and shoulders, I glanced up and noticed a spider hanging in the corner above my head—a quivering, spindly, brown spider. I’m not a spider aficionado, but I do know about poisonous spiders in our area of the Pacific Northwest: the hobo spider and the black widow. My shower companion was neither. A daddy longlegs, I deduced, Pholcus phalangioides to be precise, minding its own business near the showerhead.

Daddy longlegs spiders build messy webs with no particular pattern to them, and they eat insects, mites, and other spiders, including the poisonous hobo (and, I’m sorry to say, sometimes each other). They like ceiling corners and warmer spaces, so the beige fiberglass tub/shower combination in our twenty-plus-year-old home made a comfortable spot for spider settlement.

I was in a hurry, so I finished my shower and thought no more about the long-legged wall hugger. The next morning, as I shoved back the shower curtain and stepped into the tub, there it was again. Or still. How long had this creature lived in my bathroom without my noticing? Maybe for months, possibly longer. How long is that in spider time? With a life-span of two or three years, this arachnid may have inhabited the space for a third of its life or more. In a way, the spider had greater claim to the shower than I had. In terms of the percentages of our lives spent in the place, I was the newcomer, and if I cleared the web, I’d be the one driving out the longtime inhabitant. Besides, I could shower quite comfortably with or without him. Or her.

And so began my conscious choice to shower with spiders. It’s a small thing, one might say a silly and meaningless thing. We spend maybe ten minutes together each morning, all told, more time than some busy working couples spend in conversation each day. I have found that I’m strangely appreciative of our benign interspecies companionship during my morning routine. I’m required to do nothing special, except be quietly mindful of another being inhabiting my space in an unfathomable way. If I notice my itsy-bitsy neighbor slowly lowering itself from the ceiling toward the shower stall while I’m there, I’ll shake my hand to splash a bit of water as a warning. The spider, being mindful too, will vibrate for a moment, and then either stop and crouch with its belly close to the wall, or quick-step back up toward the ceiling. We have an understanding, the spider and I: do no harm.

by Victoria Doerper, Orion |  Read more:
Image: James Wardell
[ed. I stomped on a daddy longlegs this morning while taking a shower even though I know they're harmless. It was invading my space. Won't do that again (as long as there's some mutual accommodation).]

‘A Cat in Hell’s Chance’ – Why We’re Losing the Battle to Keep Global Warming Below 2C

It all seemed so simple in 2008. All we had was financial collapse, a cripplingly high oil price and global crop failures due to extreme weather events. In addition, my climate scientist colleague Dr Viki Johnson and I worked out that we had about 100 months before it would no longer be “likely” that global average surface temperatures could be held below a 2C rise, compared with pre-industrial times.

What’s so special about 2C? The simple answer is that it is a target that could be politically agreed on the international stage. It was first suggested in 1975 by the environmental economist William Nordhaus as an upper threshold beyond which we would arrive at a climate unrecognisable to humans. In 1990, the Stockholm Environment Institute recommended 2C as the maximum that should be tolerated, but noted: “Temperature increases beyond 1C may elicit rapid, unpredictable and non-linear responses that could lead to extensive ecosystem damage.”

To date, temperatures have risen by almost 1C since 1880. The effects of this warming are already being observed in melting ice, rising sea levels, worsening heat waves and other extreme weather events. There are negative impacts on farming, the disruption of plant and animal species on land and in the sea, extinctions, the disturbance of water supplies and food production, and increased vulnerability, especially among people living in poverty in low-income countries. But the effects are global. So 2C was never seen as necessarily safe, just a guardrail between dangerous and very dangerous change.

To get a sense of what a 2C shift can do, just look in Earth’s rear-view mirror. When the planet was 2C colder than during the industrial revolution, we were in the grip of an ice age and a mile-thick North American ice sheet reached as far south as New York. The same warming again will intensify and accelerate human-driven changes already under way and has been described by James Hansen, one of the first scientists to call global attention to climate change, as a “prescription for long-term disaster”, including an ice-free Arctic. (...)

Is it still likely that we will stay below even 2C? In the 100 months since August 2008, I have been writing a climate-change diary for the Guardian to raise questions and monitor progress, or the lack of it, on climate action. To see how well we have fared, I asked a number of leading climate scientists and analysts for their views. The responses were as bracing as a bath in a pool of glacial meltwater.

by Andrew Simms, The Guardian |  Read more:
Image: NASA/EPA

Humanism, Science, and the Radical Expansion of the Possible

Humanism was the particular glory of the Renaissance. The recovery, translation, and dissemination of the literatures of antiquity created a new excitement, displaying so vividly the accomplishments and therefore the capacities of humankind, with consequences for civilization that are great beyond reckoning.

The disciplines that came with this awakening, the mastery of classical languages, the reverent attention to pagan poets and philosophers, the study of ancient history, and the adaptation of ancient forms to modern purposes, all bore the mark of their origins yet served as the robust foundation of education and culture for centuries, until the fairly recent past. In muted, expanded, and adapted forms, these Renaissance passions live on among us still in the study of the humanities, which, we are told, are now diminished and threatened. Their utility is in question, it seems, despite their having been at the center of learning throughout the period of the spectacular material and intellectual flourishing of Western civilization. Now we are less interested in equipping and refining thought, more interested in creating and mastering technologies that will yield measurable enhancements of material well-being—for those who create and master them, at least. Now we are less interested in the exploration of the glorious mind, more engrossed in the drama of staying ahead of whatever it is we think is pursuing us. Or perhaps we are just bent on evading the specter of entropy. In any case, the spirit of the times is one of joyless urgency, many of us preparing ourselves and our children to be means to inscrutable ends that are utterly not our own. In such an environment, the humanities do seem to have little place. They are poor preparation for economic servitude. This spirit is not the consequence but the cause of our present state of affairs. We have as good grounds for exulting in human brilliance as any generation that has ever lived.

The antidote to our gloom is to be found in contemporary science. This may seem an improbable stance from which to defend the humanities, and I do not wish to undervalue contemporary art or literature or music or philosophy. But it is difficult to recognize the genius of a period until it has passed. Milton, Bach, Mozart all suffered long periods of eclipse, beginning before their lives had ended. Our politics may appear in the light of history to have been filled with triumphs of statecraft, unlikely as this seems to us now. Science, on the other hand, can assert credible achievements and insights, however tentative, in present time. The last century and the beginning of this one have without question transformed the understanding of Being itself. “Understanding” is not quite the right word, since this mysterious old category, Being, fundamental to all experience past, present, and to come, is by no means understood. However, the terms in which understanding may, at the moment, be attempted have changed radically, and this in itself is potent information. The phenomenon called quantum entanglement, relatively old as theory and thoroughly demonstrated as fact, raises fundamental questions about time and space, and therefore about causality.

Particles that are “entangled,” however distant from one another, undergo the same changes simultaneously. This fact challenges our most deeply embedded habits of thought. To try to imagine any event occurring outside the constraints of locality and sequence is difficult enough. Then there is the problem of conceiving of a universe in which the old rituals of cause and effect seem a gross inefficiency beside the elegance and sleight of hand that operate discreetly beyond the reach of all but the most rarefied scientific inference and observation. However pervasive and robust entanglement is or is not, it implies a cosmos that unfolds or emerges on principles that bear scant analogy to the universe of common sense. It is abetted in this by string theory, which adds seven unexpressed dimensions to our familiar four. And, of course, those four seem suddenly tenuous when the fundamental character of time and space is being called into question. Mathematics, ontology, and metaphysics have become one thing. Einstein’s universe seems mechanistic in comparison. Newton’s, the work of a tinkerer. If Galileo shocked the world by removing the sun from its place, so to speak, then this polyglot army of mathematicians and cosmologists who offer always new grounds for new conceptions of absolute reality should dazzle us all, freeing us at last from the circle of old Urizen’s compass. But we are not free.

There is no art or discipline for which the nature of reality is a matter of indifference, so one ontology or another is always being assumed if not articulated. Great questions may be as open now as they have been since Babylonians began watching the stars, but certain disciplines are still deeply invested in a model of reality that is as simple and narrow as ideological reductionism can make it. I could mention a dominant school of economics with its anthropology. But I will instead consider science of a kind. The study of brain and consciousness, mind and self—associated with so-called neuroscience—asserts a model of mental function as straightforward, causally speaking, as a game of billiards, and plumes itself on just this fact. It is by no means entangled with the sciences that address ontology. The most striking and consequential changes in the second of these, ontology, bring about no change at all in the first, neuroscience, either simultaneous or delayed. The gist of neuroscience is that the adverbs “simply” and “merely” can exorcise the mystifications that have always surrounded the operations of the mind/brain, exposing the machinery that in fact produces emotion, behavior, and all the rest. So while inquiries into the substance of reality reveal further subtleties, idioms of relation that are utterly new to our understanding, neuroscience tells us that the most complex object we know of, the human brain, can be explained sufficiently in terms of the activation of “packets of neurons,” which evolution has provided the organism in service to homeostasis. The amazing complexity of the individual cell is being pored over in other regions of science, while neuroscience persists in declaring the brain, this same complexity vastly compounded, an essentially simple thing. If this could be true, if this most intricate and vital object could be translated into an effective simplicity for which the living world seems to provide no analogy, this indeed would be one of nature’s wonders. (...)

The real assertion being made in all this (neuroscience is remarkable among the sciences for its tendency to bypass hypothesis and even theory and go directly to assertion) is that there is no soul. Only the soul is ever claimed to be nonphysical, therefore immortal, therefore sacred and sanctifying as an aspect of human being. It is the self but stands apart from the self. It suffers injuries of a moral kind, when the self it is and is not lies or steals or murders, but it is untouched by the accidents that maim the self or kill it. Obviously, this intuition—it is much richer and deeper than anything conveyed by the word “belief”—cannot be dispelled by proving the soul’s physicality, from which it is aloof by definition. And on these same grounds, its nonphysicality is no proof of its nonexistence. This might seem a clever evasion of skepticism if the character of the soul were not established in remote antiquity, in many places and cultures, long before such a thing as science was brought to bear on the question. (...)

Is it fair to say that this school of thought is directed against humanism? This seems on its face to be true. The old humanists took the works of the human mind—literature, music, philosophy, art, and languages—as proof of what the mind is and might be. Out of this has come the great aura of brilliance and exceptionalism around our species that neuroscience would dispel. If Shakespeare had undergone an MRI, there is no reason to believe there would be any more evidence of extraordinary brilliance in him than there would be of a self or a soul. He left a formidable body of evidence that he was both brilliant and singular, but it has fallen under the rubric of Renaissance drama and is somehow not germane, perhaps because this places the mind so squarely at the center of the humanities. From the neuroscientific point of view, this only obscures the question. After all, where did our high sense of ourselves come from? From what we have done and what we do. And where is this awareness preserved and enhanced? In the arts and the humane disciplines. I am sure there are any number of neuroscientists who know and love Mozart better than I do, and who find his music uplifting. The inconsistency is for them to explain. (...)

If there is a scientific mode of thought that is crowding out and demoralizing the humanities, it is not research in the biology of the cell or the quest for life on other planets. It is this neo-Darwinism, which claims to cut through the dense miasmas of delusion to what is mere, simple, and real. Since these “miasmas” have been the main work of human consciousness for as long as the mind has left a record of itself, its devaluing is a major work of dehumanization. This is true because it is the great measure of our distinctiveness as a species. It is what we know about ourselves. It has everything in the world to do with how we think and feel, with what we value or despise or fear, all these things refracted through cultures and again through families and individuals. If the object of neuroscience or neo-Darwinism was to describe an essential human nature, it would surely seek confirmation in history and culture. But these things are endlessly complex, and they are continually open to variation and disruption. So the insistence on an essential simplicity is understandable, if it is not fruitful. If I am correct in seeing neuroscience as essentially neo-Darwinist, then it is affixed to a model of reality that has not gone through any meaningful change in a century, except in the kind of machinery it brings to bear in asserting its worldview. (...)

That said, it might be time to pause and reflect. Holding to the old faith that everything is in principle knowable or comprehensible by us is a little like assuming that every human structure or artifact must be based on yards, feet, and inches. The notion that the universe is constructed, or we are evolved, so that reality must finally answer in every case to the questions we bring to it, is entirely as anthropocentric as the notion that the universe was designed to make us possible. Indeed, the affinity between the two ideas should be acknowledged. While the assumption of the intelligibility of the universe is still useful, it is not appropriately regarded as a statement of doctrine, and should never have been. Science of the kind I criticize tends to assert that everything is explicable, that whatever has not been explained will be explained—and, furthermore, by its methods. Its practitioners have seen to the heart of it all. So mystery is banished—mystery being no more than whatever their methods cannot capture yet. Mystery being also those aspects of reality whose implications are not always factors in their worldview, for example, the human mind, the human self, history, and religion—in other words, the terrain of the humanities. Or of the human.

by Marilynne Robinson, The Nation |  Read more:
Image: Kelly Ruth Winter/ The Nation
[ed. This essay is excerpted from The Givenness of Things, © Marilynne Robinson.]