Sunday, December 29, 2013

The English Beat



Here's How Data Thieves Have Captured Our Lives on the Internet

[ed. Somewhat of a companion piece to the article further down by Evgeny Morozov.]

Some years ago, when writing a book on understanding the internet, I said that our networked future was bracketed by the dystopian nightmares of two old-Etonian novelists, George Orwell and Aldous Huxley. Orwell thought we would be destroyed by the things we fear, while Huxley thought that we would be controlled by the things that delight us. What Snowden has taught us is that the two extremes have converged: the NSA and its franchises are doing the Orwellian bit, while Google, Facebook and co are attending to the Huxleyean side of things.

In The Master Switch: The Rise and Fall of Information Empires, his magisterial history of the main communications technologies of the 20th century – telephone, radio, movies and television – the legal scholar Timothy Wu discerned a pattern.

Each technology started out as magnificently open, chaotic, collaborative, creative, exuberant and experimental, but in the end all were "captured" by charismatic entrepreneurs who went on to build huge industrial empires on the back of this capture. This is what has become known as the Wu cycle – "a typical progression of information technologies: from somebody's hobby to somebody's industry; from jury-rigged contraption to slick production marvel; from a freely accessible channel to one strictly controlled by a single corporation or cartel – from open to closed system".

The big question, Wu asked, was whether the internet would be any different. Ten years ago, I would have answered: "Yes." Having digested Snowden's revelations, I am less sure, because one of the things he has demonstrated is the extent to which the NSA has suborned the internet companies which have captured the online activities of billions of internet users. It has done this via demands authorised by the secret foreign intelligence surveillance (Fisa) court, but kept secret from the companies' users; and by tapping into the communications that flow between the companies' server farms across the world.

The reason this made sense is because so much of our communications and data are now entrusted to these internet giants. Tapping into them must have seemed a no-brainer to the NSA. After all, Google and Facebook are essentially in the same business as the agency. Its mission – comprehensive surveillance – also happens to be their business model.

The only difference is that whereas the spooks have to jump through some modest legal hoops to inspect our content, the companies get to read it neat. And the great irony is that this has been made possible because of our gullibility. The internet companies offered us shiny new "free" services in return for our acceptance of click-wrap "agreements" which allow them to do anything they damn well please with our data and content. And we fell for it. We built the padded cells in which we now gambol and which the NSA bugs at its leisure.

In our rush for "free" services, we failed to notice how we were being conned. The deal, as presented to us in the End User Licence Agreement, was this: you exchange some of your privacy (in the form of personal information) for the wonderful free services that we (Google, Facebook, Yahoo, Skype, etc) provide in return. The implication is that privacy is a transactional good – something that you own and that can be traded. But, in these contexts, privacy is an environmental good, not a transactional one. Why? Because when I use, say, Gmail, then I'm not only surrendering my privacy to Google, but the privacy of everyone who writes to me at my Gmail address. They may not have consented to this deal, but their email is being read by Google nonetheless. And before any lawyer (or Sir Malcolm Rifkind) pops up to object that having machines read one's communications is not the same thing as having a human being do it, let me gently inquire whether they are up to speed on machine-learning algorithms. The fact that Mark Zuckerberg is not sitting there sucking his pencil and reading your status updates doesn't mean that his algorithms aren't making pretty astute inferences from those same updates – which is why Facebook probably knows that two people are going to have an affair before they do; or why one can make interesting inferences about the nature of a couple's marriage from inspection of their network graphs.

And this is where the interests of the NSA and the big internet companies converge. For what they have both managed to do is to abolish the practice of anonymous reading which, in the good old analogue days, we regarded as an essential condition for an open, democratic society. In a networked world, the spooks and the companies know everything you read, and the companies even know how long you spent on a particular page. And if you don't think that's creepy then you haven't been paying attention.

by John Naughton, The Guardian |  Read more:
Image: Alamy

Saturday, December 28, 2013


Prostitute in Tamaulipas
via:

The Great Fratsby

[ed. See also: An open letter to the makers of Wolf of Wall Street, and the Wolf himself.]

A man of humble upbringing decides that he will become a millionaire. For several years, wealth is his only goal, because he desperately wants everything else that comes from being rich. He reinvents himself along the way, transcending his roots, presenting a phony, tony name as his public face to the world. He does not come about his millions entirely legally, and he will one day have to answer for his crimes. But in the high times, he buys a mansion on the Gold Coast of Long Island, where he fills glamorous parties with beautiful women and the men that lust after them. In the film version of his life, he is played by a very tan Leonardo DiCaprio in boat shoes. Pop quiz: “The Wolf of Wall Street” or “The Great Gatsby”?

Even if both films did not open in the same year, starring the same actor, set in the same context of gaudy maximalism, they would still be having an intense conversation with one another (over a Martini and a bloody steak). Both are entries in the great epic of American capitalism, stories of high-flying greed and the power of self-delusion, morality plays about deeply unhappy Trimalchios who drown their insecurities in money and false hopes. But the coincidence (or, rather, brilliant alchemy) of DiCaprio’s appearance in both films just heightens the similarities between the stories, bringing everything into sharp relief.

The tale of two Leos forces the tale of two “Gatsby”s (or three, to bring Fitzgerald’s original novel into it), pitting them against each other, the romantic story versus the depraved one, the tragedy of loving one woman too much versus the tragedy of loving money so much that the soul corrodes. Scorsese’s is a far better film than Baz Luhrmann’s swirly, neon adaptation, but Luhrmann benefited from much better source material, the essence of which couldn’t help but waft off of his hypersaturated, glossy spectacle and still hit the viewer with a cold smack of recognition. No matter where the green light goes, it is always there, and something sad and gleaming shines through. Scorsese’s entire picture sparkles from end to end, dancing so hard and at such a sustained high pitch that it threatens to topple at any moment, and yet there is no lingering light to it, no nagging lesson in Jordan Belfort’s demise. For a person falling from grace to land with a thud, he must have once been graceful. Belfort and Gatsby may share a common criminality on their way to the top, but only one of them makes it look fully disgusting.

In other words, Luhrmann’s film may be the “Gatsby” that this generation deserves (Technicolor, attention-disordered, deafeningly loud, brimming with loose cultural pastiche), but Scorsese’s “Wolf” is the “Gatsby” that the current Wall Street demands—its dark cousin and perverse reflection. There is no deeper romance to “Wolf,” only craven desire. The film has a black heart where a green light should be. (...)

After a recent screening of “Wolf” in New York, the movie’s screenwriter, Terence Winter (who knows from gangsters), said in a Q. & A. that it was a conscious decision (by Scorsese and also DiCaprio, who optioned Belfort’s story for himself and developed it as a passion project for years—this is his Citizen Cocaine) not to show Belfort’s victims in the film: “We never wanted you to hear the voices on the other end of the line.” As a result, “Wolf” has few casualties—a quick mention of a stockbroker who blows his brains out, a few near-death swipes as a result of hubris and drug-induced haze, a heartless sucker punch to the stomach of Belfort’s distraught second wife—and it is the lack of consequences that has left many critics with a queasy feeling, and the fear that Scorsese will do more harm than good by glorifying a bacchanal put on at the expense of innocent people. When the film screened on Wall Street to a crowd of finance types, there were many cheers and high fives. If any movie is in danger this year of having “bad fans,” it’s this one (watch closely as “Scarface” posters in frat houses are quietly replaced with “Wolf” ones).

by Rachel Syme, New Yorker |  Read more:
Image: uncredited

The Visible and the Invisible

Let me tell you a thing or two about jail.

From the very moment in which the words “under arrest” are uttered, everyone you encounter contributes to rendering you powerless.

When I was preparing to begin student-teaching, I talked to a number of experienced teachers who advised me to remember that the most negative response you can give a student (or anyone else for that matter) is to ignore him or her. Not to rebuke, nor to deny, but to ignore. If you have a student who speaks up too frequently and too often without a point, and who is generally disruptive, the best response is to say nothing at all. Let their talk be answered with empty silence. Render them invisible—non-existent—by treating them as such.

The police who arrest you, and then the sheriff’s deputies who are your jailers, similarly negate everything that constitutes your sense of self: your will, your intellect, your emotions. They do so by ignoring you completely. No answers to your questions, whether about the charges, about the process you are going through, about your ability to communicate with anyone outside—about anything. You are something that they process, the ultimate objectification.

Everything is uncomfortable and debilitating. The handcuffs hurt, and having your arms behind your back makes it hard to get into the back seat of the squad car without falling into it. The back seat itself is hard molded plastic, without upholstery of any kind. You can’t sit with your back supported by the seat-back because your manacled arms are in the way. You slide across the hard plastic with every turn, every acceleration or deceleration the driver makes.

At the precinct station house, they hold you first in a tiled room with hard wooden benches bolted to the shiny concrete floor, in the midst of which, as in every room you will now occupy, you see a drain toward which the floor slopes from all sides. Through a small window with wire mesh suspended within it, you can see someone going through papers in the adjoining room. She occasionally looks up at you, and occasionally others appear in the room with her. Eventually a couple of cops, their equipment swinging and rattling heavily from their belts, enter the room you are in and, still refusing to answer any questions, take you back out to the car, load you in, and drive you to the county jail. (...)

I mentioned mindless routine.

You are awakened at 4:30 a.m., ordered to dress and make your bed in the exact manner of this particular institution, made to wait standing for the carts to arrive, and then called one by one to get a tray of food from the guys who came with the cart from the kitchen.

As for the food, I must tell you about the peanut butter. You get a wad of it, wrapped in wax paper, about the size of a lemon or an egg, with some soft, easily torn white bread and no utensil to help spread the wad, which is itself only semi-soft. In every holding cell in the system, you see wads of peanut butter stuck to the ceiling overhead. The ceilings are always fifteen or more feet high. It takes a powerful arm to launch a stiff, hard wad of peanut butter at that ceiling hard enough to get it to stick. Once it is there, however, it seems to stay in perpetuity. I never saw, nor did I ever hear tell of, one of these peanut-butter hardballs coming down.

At about 5:15 a.m. you are ordered back to your cell and locked down for a few more hours of sleep.

The rest of the day consists of a rotation of time spent locked down in your cell and time spent in the common area, where you can find conversations, card games, books, magazines, and loud televisions tuned either to movies (action pictures, crime, jails, violence—lots and lots of violence) or to sports.

This common area is semi-circular and two stories high, the cells ranged along the rim of the circle, their interior walls all glass. A steel staircase and catwalk provide access to the upper story of cells. Everyone is visible at all times. The only nod to privacy is the pony-wall, about two feet high, in front of the toilet in your cell. It ensures that when you sit on the toilet you are visible only from the waist up.

The guard’s desk, a miniature command center, stands at the hub of this semi-circle. She or he is watching you all the time.

by Howard Tharsing, Threepenny Review | Read more:
Image: Brad Phillips via:

A Radical Shift in Capitalism



The benefits of personal data to consumers are obvious; the costs are not, writes Evgeny Morozov

Following his revelations this year about Washington’s spying excesses, Edward Snowden now faces a growing wave of surveillance fatigue among the public – and the reason is that the National Security Agency contractor turned whistleblower has revealed too many uncomfortable truths about how today’s world works.

Technical infrastructure and geopolitical power; rampant consumerism and ubiquitous surveillance; the lofty rhetoric of “internet freedom” and the sober reality of the ever-increasing internet control – all these are interconnected in ways most of us would rather not acknowledge or think about. Instead, we have focused on just one element in this long chain – state spying – but have mostly ignored all others.

But the spying debate has quickly turned narrow and unbearably technical; issues such as the soundness of US foreign policy, the ambivalent future of digital capitalism, the relocation of power from Washington and Brussels to Silicon Valley have not received due attention. But it is not just the NSA that is broken: the way we do – and pay for – our communicating today is broken as well. And it is broken for political and economic reasons, not just legal and technological ones: too many governments, strapped for cash and low on infrastructural imagination, have surrendered their communications networks to technology companies a tad too soon.

Mr Snowden created an opening for a much-needed global debate that could have highlighted many of these issues. Alas, it has never arrived. The revelations of the US’s surveillance addiction were met with a rather lacklustre, one-dimensional response. Much of this overheated rhetoric – tinged with anti-Americanism and channelled into unproductive forms of reform – has been useless. Many foreign leaders still cling to the fantasy that, if only the US would promise them a no-spy agreement, or at least stop monitoring their gadgets, the perversions revealed by Mr Snowden would disappear.

Here the politicians are making the same mistake as Mr Snowden himself, who, in his rare but thoughtful public remarks, attributes those misdeeds to the over-reach of the intelligence agencies. Ironically, even he might not be fully aware of what he has uncovered. These are not isolated instances of power abuse that can be corrected by updating laws, introducing tighter checks on spying, building more privacy tools, or making state demands to tech companies more transparent.

Of course, all those things must be done: they are the low-hanging policy fruit that we know how to reach and harvest. At the very least, such measures can create the impression that something is being done. But what good are these steps to counter the much more disturbing trend whereby our personal information – rather than money – becomes the chief way in which we pay for services – and soon, perhaps, everyday objects – that we use?

No laws and tools will protect citizens who, inspired by the empowerment fairy tales of Silicon Valley, are rushing to become data entrepreneurs, always on the lookout for new, quicker, more profitable ways to monetise their own data – be it information about their shopping or copies of their genome. These citizens want tools for disclosing their data, not guarding it. Now that every piece of data, no matter how trivial, is also an asset in disguise, they just need to find the right buyer. Or the buyer might find them, offering to create a convenient service paid for by their data – which seems to be Google’s model with Gmail, its email service.

What eludes Mr Snowden – along with most of his detractors and supporters – is that we might be living through a transformation in how capitalism works, with personal data emerging as an alternative payment regime. The benefits to consumers are already obvious; the potential costs to citizens are not. As markets in personal information proliferate, so do the externalities – with democracy the main victim.

by Evgeny Morozov, Notes EM |  Read more: 
Image: Micah Ganske via:

Friday, December 27, 2013


Charles M. Schulz
via:
[ed. Not much worth posting lately (unless you like endless Best of 2013 lists), so enjoy some family time and reflection this holiday season.]

Wednesday, December 25, 2013

Tuesday, December 24, 2013

The Year We Broke the Internet

As winter storms were buffeting parts of the country last week, our collective attention was drawn halfway around the world to Egypt. Images of the pyramids and the Sphinx covered in snow had emerged, and were being shared tens of thousands of times on Facebook and Twitter. It wasn’t hard to see why. For some, sharing the photos was a statement on global warming. For others, sharing was about the triumph of discovery, making them proud housecats dropping a half-chewed mouse of news on the Internet’s doorstep. For most, however, the photos were just another thoughtlessly processed and soon-forgotten item that represented our now-instinctual response to the unrelenting stream of information we’re subjected to every waking hour: Share first, ask questions later. Better yet: Let someone else ask the questions. Better still: What was the question again?

Needless to say, the photos were bullshit.

It’s hard not to note the tidy symbolism here. The Internet, like the Sphinx, is a ravenous beast that eats alive anyone who can’t answer its hoary riddle. We in the media have been struggling for twenty years to solve that riddle, and this year, the answer arrived: Big Viral, a Lovecraftian nightmare that has tightened its thousand-tentacled grip on our browsing habits with its traffic-at-all-costs mentality—veracity, newsworthiness, and relevance be damned. We solved the riddle, and then we got eaten anyway.

The Egypt photos weren’t the only viral hoax to hijack the social media conversation in the past month. Of the others, the most infamous was reality-TV producer Elan Gale’s in-flight pissing match with a fellow passenger, which he documented on Twitter, and which was shepherded along by BuzzFeed to the delight of hundreds of thousands of onlookers. That it was actually a prank rankled some, but even that turned out to be a boon for the sites that shared it: They got the clicks coming and going, both on the ramp-up and in the reveal. The story may well have been, in the words of Slate’s Dave Weigel, “the sort of shoddy reporting that would get a reporter at a small newspaper fired,” but it was also a perfect microcosm of the way the Internet works now.

“We’re not in the business of publishing hoaxes,” BuzzFeed’s news editor wrote in response to Weigel’s piece, “and we feel an enormous responsibility here to provide our readers with accurate, up-to-date information”—which sounds a bit like Altria’s health inspector saying they’re sorry they gave you cancer.

The fact is, that sort of double-dipping is what most of us who produce Internet content do, myself included. Give me the viral pictures, and I’ll give you the truth. And then, after an appropriate waiting period, I’ll give you the other truth, and capitalize on that traffic too. It’s almost a perfect callback to William Randolph Hearst’s infamous declaration on the eve of the Spanish-American War, “You furnish the pictures and I’ll furnish the war.” Even more fitting, historians don’t think he ever said anything like that. Then as now, it’s the myth that plays, not the reality. Today it just plays on an exponentially larger stage.

The media has long had its struggles with the truth—that’s nothing new. What is new is that we’re barely even apologizing for increasingly considering the truth optional. In fact, the mistakes, and the falsehoods, and the hoaxes are a big part of a business plan driven by the belief that big traffic absolves all sins, that success is a primary virtue. Haste and confusion aren’t bugs in the coding anymore, they’re features. Consider what Ryan Grim, Washington bureau chief for the Huffington Post, told The New York Times in its recent piece on a raft of hoaxes, including Gale’s kerfuffle, a child’s letter to Santa that included a handwritten Amazon URL, and a woman who wrote about her fictitious poverty so effectively that she pulled in some $60,000 in online donations. “The faster metabolism puts people who fact-check at a disadvantage,” Grim said. “If you throw something up without fact-checking it, and you’re the first one to put it up, and you get millions and millions of views, and later it’s proved false, you still got those views. That’s a problem. The incentives are all wrong.”

In other words, press “Publish” or perish.

by Luke O'Neil, Esquire |  Read more:
Image: uncredited

Christmas Song

She was his girl, he was her boyfriend
Soon to be his wife, make him her husband
A surprise on the way, any day, any day
One healthy little giggling, dribbling baby boy
The Wise Men came, three made their way
To shower him with love
While he lay in the hay
Shower him with love, love, love
Love love, love
Love, love was all around

Not very much of his childhood was known
Kept his mother Mary worried
Always out on his own
He met another Mary who for a reasonable fee
Less than reputable was known to be
His heart was full of love, love, love
Love, love, love
Love, love was all around


Written by: Dave Matthews via:

Man Ray. La femme et son poisson 1938.
via:
[ed. See also: here]

HSBC Settlement Proves the Drug War is a Joke

If you've ever been arrested on a drug charge, if you've ever spent even a day in jail for having a stem of marijuana in your pocket or "drug paraphernalia" in your gym bag, Assistant Attorney General and longtime Bill Clinton pal Lanny Breuer has a message for you: Bite me.

Breuer this week signed off on a settlement deal with the British banking giant HSBC that is the ultimate insult to every ordinary person who's ever had his life altered by a narcotics charge. Despite the fact that HSBC admitted to laundering billions of dollars for Colombian and Mexican drug cartels (among others) and violating a host of important banking laws (from the Bank Secrecy Act to the Trading With the Enemy Act), Breuer and his Justice Department elected not to pursue criminal prosecutions of the bank, opting instead for a "record" financial settlement of $1.9 billion, which, as one analyst noted, is about five weeks of income for the bank.

The bank's laundering transactions were so brazen that the NSA probably could have spotted them from space. Breuer admitted that drug dealers would sometimes come to HSBC's Mexican branches and "deposit hundreds of thousands of dollars in cash, in a single day, into a single account, using boxes designed to fit the precise dimensions of the teller windows."

This bears repeating: in order to more efficiently move as much illegal money as possible into the "legitimate" banking institution HSBC, drug dealers specifically designed boxes to fit through the bank's teller windows. Tony Montana's henchmen marching duffel bags of cash into the fictional "American City Bank" in Miami was actually more subtle than what the cartels were doing when they washed their cash through one of Britain's most storied financial institutions.

Though this was not stated explicitly, the government's rationale in not pursuing criminal prosecutions against the bank was apparently rooted in concerns that putting executives from a "systemically important institution" in jail for drug laundering would threaten the stability of the financial system. The New York Times put it this way:
Federal and state authorities have chosen not to indict HSBC, the London-based bank, on charges of vast and prolonged money laundering, for fear that criminal prosecution would topple the bank and, in the process, endanger the financial system. (...)
So you might ask, what's the appropriate financial penalty for a bank in HSBC's position? Exactly how much money should one extract from a firm that has been shamelessly profiting from business with criminals for years and years? Remember, we're talking about a company that has admitted to a smorgasbord of serious banking crimes. If you're the prosecutor, you've got this bank by the balls. So how much money should you take?

How about all of it? How about every last dollar the bank has made since it started its illegal activity? How about you dive into every bank account of every single executive involved in this mess and take every last bonus dollar they've ever earned? Then take their houses, their cars, the paintings they bought at Sotheby's auctions, the clothes in their closets, the loose change in the jars on their kitchen counters, every last freaking thing. Take it all and don't think twice. And then throw them in jail.

Sound harsh? It does, doesn't it? The only problem is, that's exactly what the government does just about every day to ordinary people involved in ordinary drug cases.

by Matt Taibbi, Rolling Stone |  Read more:
Image: MediaBistro

Imagining the Post-Antibiotics Future


Predictions that we might sacrifice the antibiotic miracle have been around almost as long as the drugs themselves. Penicillin was first discovered in 1928 and battlefield casualties got the first non-experimental doses in 1943, quickly saving soldiers who had been close to death. But just two years later, the drug’s discoverer Sir Alexander Fleming warned that its benefit might not last. Accepting the 1945 Nobel Prize in Medicine, he said:
“It is not difficult to make microbes resistant to penicillin in the laboratory by exposing them to concentrations not sufficient to kill them… There is the danger that the ignorant man may easily underdose himself and by exposing his microbes to non-lethal quantities of the drug make them resistant.”
As a biologist, Fleming knew that evolution was inevitable: sooner or later, bacteria would develop defenses against the compounds the nascent pharmaceutical industry was aiming at them. But what worried him was the possibility that misuse would speed the process up. Every inappropriate prescription and insufficient dose given in medicine would kill weak bacteria but let the strong survive. (As would the micro-dose “growth promoters” given in agriculture, which were invented a few years after Fleming spoke.) Bacteria can produce another generation in as little as twenty minutes; with tens of thousands of generations a year working out survival strategies, the organisms would soon overwhelm the potent new drugs.
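The generation-time arithmetic behind "tens of thousands of generations a year" is easy to check. A back-of-the-envelope sketch, assuming an idealized constant 20-minute division time (real bacterial populations rarely sustain this outside favorable lab conditions):

```python
# Back-of-the-envelope check on "tens of thousands of generations a year":
# assume an idealized, constant 20-minute generation time.
MINUTES_PER_GENERATION = 20
MINUTES_PER_YEAR = 60 * 24 * 365  # 525,600 minutes

generations_per_year = MINUTES_PER_YEAR // MINUTES_PER_GENERATION
print(generations_per_year)  # 26280 -- comfortably "tens of thousands"
```

Each of those generations is another round of selection, which is why even small survival advantages against a drug compound so quickly.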

Fleming’s prediction was correct. Penicillin-resistant staph emerged in 1940, while the drug was still being given to only a few patients. Tetracycline was introduced in 1950, and tetracycline-resistant Shigella emerged in 1959; erythromycin came on the market in 1953, and erythromycin-resistant strep appeared in 1968. As antibiotics became more affordable and their use increased, bacteria developed defenses more quickly. Methicillin arrived in 1960 and methicillin resistance in 1962; levofloxacin in 1996 and the first resistant cases the same year; linezolid in 2000 and resistance to it in 2001; daptomycin in 2003 and the first signs of resistance in 2004.

With antibiotics losing usefulness so quickly — and thus not making back the estimated $1 billion per drug it costs to create them — the pharmaceutical industry lost enthusiasm for making more. In 2004, there were only five new antibiotics in development, compared to more than 500 chronic-disease drugs for which resistance is not an issue — and which, unlike antibiotics, are taken for years, not days. Since then, resistant bugs have grown more numerous and by sharing DNA with each other, have become even tougher to treat with the few drugs that remain. In 2009, and again this year, researchers in Europe and the United States sounded the alarm over an ominous form of resistance known as CRE, for which only one antibiotic still works.

Health authorities have struggled to convince the public that this is a crisis. In September, Dr. Thomas Frieden, the director of the U.S. Centers for Disease Control and Prevention, issued a blunt warning: “If we’re not careful, we will soon be in a post-antibiotic era. For some patients and some microbes, we are already there.” The chief medical officer of the United Kingdom, Dame Sally Davies — who calls antibiotic resistance as serious a threat as terrorism — recently published a book in which she imagines what might come next. She sketches a world where infection is so dangerous that anyone with even minor symptoms would be locked in confinement until they recover or die. It is a dark vision, meant to disturb. But it may actually underplay what the loss of antibiotics would mean.

by Maryn McKenna, Medium |  Read more:
Image: Eneas De Troya

Sounding the Alarm


At 2:46 p.m. on March 11, 2011, the Pacific Plate, just off Japan's northeast coast, suddenly thrust downward, unleashing a monstrous, 9.0-magnitude earthquake that rocked the country for the next six minutes. The massive Tohoku quake and resulting tsunami are believed to have killed at least 16,000 people and injured 6,000 more. Another 2,600 people are still missing and presumed dead. The quake was the most powerful to ever strike Japan, and was the fourth-largest ever recorded. It also was the first earthquake to be heard in outer space, and was the most expensive natural disaster in human history, generating $235 billion in total damage. But there was a silver lining, if you could call it that: Tohoku was also the first time that Japanese citizens were given the precious, if limited, gift of time.

That gift came in the form of Japan's earthquake early warning system, which detected the giant temblor just before it hit and immediately sent computer-generated alerts across the country to cellphones, TVs, schools, factories, and transit systems. Japan put its finishing touches on its $500 million early warning system in 2007, leaving four years — barely the blink of an eye in geological timescales — before the investment paid off.

And in 2011, by all accounts it did. Although it's impossible to quantify the number of lives that the system saved, there were reports in the quake's aftermath of schools having had time to get all their students under desks, of eleven 320-kilometer-per-hour bullet trains slowing to a stop, and of more than 16,000 elevators automatically shutting down when the alarm system went off. In the sixty seconds before the giant temblor struck, roughly 52 million people received text-message warnings that the quake was fast approaching and that they needed to get out of harm's way.
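That sixty-second window isn't magic: early warning systems exploit the gap between fast-moving, relatively weak P-waves and the slower, damaging S-waves. A minimal sketch of the timing argument, using rough crustal wave speeds chosen for illustration (not figures from this article), and ignoring detection and alert-transmission delays:

```python
# The core idea of earthquake early warning: P-waves outrun S-waves,
# so a sensor near the rupture can flag a quake before strong shaking
# arrives at more distant sites. Wave speeds are rough crustal
# averages, assumed for illustration.
V_P = 6.5  # P-wave speed, km/s (assumed)
V_S = 3.5  # S-wave speed, km/s (assumed)

def warning_seconds(distance_km: float) -> float:
    """Approximate seconds between P-wave and S-wave arrival at a
    site distance_km from the rupture, ignoring processing and
    alert-delivery delays."""
    return distance_km / V_S - distance_km / V_P

for d in (50, 100, 300):
    print(f"{d} km away: ~{warning_seconds(d):.0f} s of warning")
```

At distances of a few hundred kilometers from an offshore rupture, this gap is on the order of tens of seconds, which is consistent with the roughly one minute of warning described above.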

In 2007, the same year that Japan finished building its early warning system, earthquake scientists roughly 5,000 miles away in California marked a related, albeit far humbler, benchmark. Richard Allen, director of the Seismological Laboratory at UC Berkeley, was in his office on October 30 when a 5.6-magnitude earthquake hit the Alum Rock section of San Jose. The quake caused only moderate shaking and very little damage, but Allen had reason to be excited: The event marked the first time his Berkeley group was able to test its own early warning system, set up just two weeks before. "It was our first proof-of-concept event," Allen recalled in a recent interview. Thirty minutes after the light shaking ended, Allen received an email showing that the system had successfully detected the right waves, done the right math, and made the right prediction about when and how strongly the quake would hit.
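The "right math" here is, at its core, a race between seismic waves: fast but relatively gentle P-waves outrun the slower, damaging S-waves, so sensors near the epicenter can detect the P-wave and broadcast an alert before strong shaking arrives elsewhere. A minimal sketch of that arithmetic, with illustrative wave speeds and alert latency (the function and all numbers are assumptions for illustration, not figures from the article or from the Berkeley system):

```python
# Back-of-the-envelope earthquake early warning arithmetic.
# Real systems (Japan's, or the Berkeley prototype) use networks of
# stations and full waveform processing; this only shows the idea.

P_WAVE_SPEED_KM_S = 6.0   # typical crustal P-wave speed (assumed)
S_WAVE_SPEED_KM_S = 3.5   # typical crustal S-wave speed (assumed)


def warning_time_seconds(distance_km: float, alert_latency_s: float = 5.0) -> float:
    """Approximate warning a site gets before strong (S-wave) shaking.

    Assumes sensors near the epicenter detect the P-wave and issue an
    alert alert_latency_s seconds after the rupture begins; the site
    then has until the S-wave arrives. All values are placeholders.
    """
    s_wave_arrival = distance_km / S_WAVE_SPEED_KM_S
    return max(0.0, s_wave_arrival - alert_latency_s)


# A city ~210 km from the epicenter would get roughly a minute of
# warning, consistent with the order of magnitude reported for Tohoku.
print(warning_time_seconds(210.0))  # → 55.0
```

The same race explains why sites close to the epicenter get little or no warning: when `distance_km` is small, the S-wave arrives before the alert can be processed and delivered.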

Yet this was only a researcher's victory. The tiny system his team had built produced no cascade of texts, no TV or radio transmissions, and no widespread notification that an earthquake was on its way. In the event of a disaster, the technology wasn't even in place for Allen himself to receive a real-time notification from his own system.

But this was not a case of Japan being light years ahead of the United States in terms of earthquake-science research. Instead, the wide technological gap between the two countries has more to do with each nation's sense of urgency about the dangers of earthquakes, and the need to prepare for them. In fact, back in 2003, Allen had co-written what essentially became the seminal scientific paper on quake predictions. His work showed that it's technically possible to predict the size and location of quakes right before they strike, and argued for the methods that became the basis for early warning systems, much like the one later built in Japan.

And yet a decade after Allen co-authored that paper, California, the second-most seismically active state in the nation (behind only Alaska), still has next to nothing in terms of a public seismic warning system. The technology exists and has for years, but the state legislature has failed to find or allocate the necessary funds to make it happen.

by Azeen Ghorayshi, East Bay Express |  Read more:
Image: Stephen Loewinsohn

The Sense of an Ending

[ed. I tend to avoid books that seem overly hyped and/or have conflicting reviews, so I came late to The Sense of an Ending, but it's a wonderful (if somewhat short) novel that you almost want to read twice once you've finished it. It resonated with me, anyway. I have the habit of dog-earing the left-hand corner of pages in sections that contain particularly poignant or insightful passages (so I can find them again). After dog-earing nearly every other page of this book, I finally gave up. See also: Life in Smoke and Mirrors]

The new book is a mystery of memory and missed opportunity. Tony Webster, a cautious, divorced man in his 60s who “had wanted life not to bother me too much, and had succeeded,” receives an unexpected bequest from a woman he’d met only once, 40 years earlier. The mother of his college girlfriend, Veronica, has bequeathed him £500 — a legacy that unsettles Tony, pushing him to get in touch with Veronica (their relationship had ended badly) and seek answers to certain unresolved questions.

Had he loved Veronica? (At the time, it was an emotion he had lacked the spine to own up to.) What had happened to the energetic boy he used to be, “book-hungry, sex-hungry, meritocratic, anarchistic,” who thought of himself as “being kept in some kind of holding pen, waiting to be released” into an engaged adult life of “passion and danger, ecstasy and despair”? And what ever became of the friend he and Veronica both knew back then, a brainy, idealistic boy named Adrian Finn? Gradually, Tony assembles his willfully forgotten past impressions and actions, joining together the links that connect him to these people, as if trying to form a “chain of individual responsibilities” that might explain how it happened that his life’s modest wages had resulted in “the accumulation, the multiplication, of loss.” (...)

Adrian’s indifference to playing it cool somehow made him the leader of the boys’ clique when they were teenagers; he became the one they looked up to. Yet Tony never emulated Adrian, and was guilty of the pose Adrian deplored: pretending not to care. He pays for this failure again and again, from his 20s to his 60s. “Does character develop over time?” Tony asks himself, wondering at the “larger holding pen” that has come to contain his adult life. Maybe character freezes sometime between the ages of 20 and 30, he speculates. “And after that, we’re just stuck with what we’ve got. We’re on our own. If so, that would explain a lot of lives, wouldn’t it? And also — if this isn’t too grand a word — our tragedy.” (...)

But who does Tony enfold into his “we”? His agonized analysis is entirely self-referential, as solitary and armored as the man himself. Decades earlier, Tony had accused Veronica of an “inability to imagine anyone else’s feelings or emotional life,” but it was he, not she, who was incapable of looking outside his own head. Barnes’s unreliable narrator is a mystery to himself, which makes the novel one unbroken, sizzling, satisfying fuse. Its puzzle of past causes is decoded by a man who is himself a puzzle. Tony resembles the people he fears, “whose main concern is to avoid further damage to themselves, at whatever cost,” and who wound others with a hypersensitivity that is insensitive to anything but their own needs. “I have an instinct for survival, for self-preservation,” he reflects. “Perhaps this is what Veronica called cowardice and I called being peaceable.”

by Liesl Schillinger, NY Times |  Read more:
Image: via: