Tuesday, March 17, 2015

Permanent Recorder

[ed. See also: The Algorithmic Self.]

It used to be easy to mock reality TV for having nothing to do with actual reality — the scenarios were contrived and pre-mediated, the performances were semi-scripted, the performers were hyper-self-conscious. These shows were more a negation of reality than a representation of it; part of their appeal seemed to be in how they helped clarify for viewers the genuine “reality” of their own behavior, in contrast with the freak shows they were seeing on the screen. To be real with people, these shows seemed to suggest, just don’t act like you are on television.

But now we are all on television all the time. The once inverted anti-reality of reality TV has turned out to be prefigurative. In a recent essay for the New York Times, Colson Whitehead seizes on the reality TV conceit of a “loser edit” — how a show’s editors pare down and frame the footage of certain participants to make their incipient failure seem deserved — and expands it into a metaphor for our lives under ubiquitous surveillance.
The footage of your loser edit is out there as well, waiting … From all the cameras on all the street corners, entryways and strangers’ cellphones, building the digital dossier of your days. Maybe we can’t clearly make out your face in every shot, but everyone knows it’s you. We know you like to slump. Our entire lives as B-roll, shot and stored away to be recut and reviewed at a moment’s notice when the plot changes: the divorce, the layoff, the lawsuit. Any time the producers decide to raise the stakes.
Whitehead concludes that the important thing is that everyone gets an edit inside their own head, which suggests that the imposition of a reality TV frame on our lives has been clarifying. “If we’re going down, let us at least be a protagonist, have a story line, not be just one of those miserable players in the background. A cameo’s stand-in. The loser edit, with all its savage cuts, is confirmation that you exist.” Reality TV models for us what it is like to be a character in our own life story, and it gives us a new metaphor for how to accomplish this — we don’t need to be a bildungsroman author but instead a savvy cutting-room editor. Accept that your life is footage, and you might even get good at making a winner’s edit for yourself.

You could draw a similar conclusion from Facebook’s Timeline, and the year-in-review videos the company has taken to making of one’s raw profile data. These aren’t intrusive re-scriptings of our experience but instructional videos on how to be a coherent person for algorithms — which, since these algorithms increasingly dictate what others see of you, is more or less how you “really” are in your social networks. Facebook makes the winner’s edit of everybody, because everyone supposedly wins by being on Facebook. Everyone gets to be connected and the center of the universe simultaneously. So why not bequeath to it final-cut rights for your life’s edit?

Tech consultant Alistair Croll, in a post at O’Reilly Radar, is somewhat less complacent about our surrendering our editing rights. He makes the case that since everyone henceforth will be born into consolidated blanket surveillance, they will be nurtured by a symbiotic relationship with their own data timeline. “An agent with true AI will become a sort of alter ego; something that grows and evolves with you … When the machines get intelligent, some of us may not even notice, because they’ll be us and we’ll be them.”

In other words, our cyborg existence will entail our fusion not with some Borg-like hive mind that submerges us into a collective, but with a machine powered by our own personal data that represents itself as already part of ourselves. The algorithms will be learning how to edit our lives for us from the very start, and we may not recognize this editing as stemming from an outside entity. The alien algorithms ease themselves into control over us by working with our uniquely personal data, which will feel inalienable because it is so specifically about us, though the very fact of its collection indicates that it belongs to someone else. Our memories will be recorded by outside entities so thoroughly that we will intuitively accept those entities as a part of us, as an extension of the inside of our heads. Believing that something that is not us could have such a complete archive of our experiences may prove to be too unacceptable, too dissonant, too terrifying.

Croll argues that this kind of data-driven social control, with algorithms dictating the shape and scope of our lives for us, will be “the moral issue of the next decade: nobody should know more about you than you do.” That sounds plausible enough, if you take it to mean (as Croll clearly does) that no one should use against you data that you don’t know has been collected about you. (Molly Knefel discusses a similar concern here, in an essay about how kids will be confronted by their permanent records, which reminds me of the “right to be forgotten” campaign.) But it runs counter to the cyborg idea — it assumes we will be able to draw a clear line between ourselves and the algorithms. If we can’t distinguish between these, it will be nonsensical to worry about which has access to more data about ourselves. It will be impossible to say whether you or the algorithms “knew” some piece of information about you first, particularly when the algorithms will be synthesizing data about us and then teaching it to us.

In that light, the standard that “no one should know more about you than you do” starts to seem clearly absurd. Outside entities are producing knowledge about us all the time in ways we can’t control. Other people are always producing knowledge about me, from their perspective and for their own purposes, that I can never access. They will always know “more about me” than I do by virtue of their having a point of view on the world that I can’t calculate and replicate.

Because we find it hard to assign a point of view to a machine, we perhaps think they can’t know more about us or have a perspective that isn’t fully controllable by someone, if not us. Croll is essentially arguing that we should have control over what knowledge a company’s machines produce about us. That assumes that their programmers can fully control their algorithms, which seems to be less the case the more sophisticated they become — the fact that the algorithms turn out results that no one can explain may be the defining point at which data becomes Big Data, as Mike Pepi explains here. And if the machines are just proxies for the people who program them, Croll’s “moral issue” still boils down to a fantasy of extreme atomization — the demand that my identity be entirely independent of other people, with no contingencies whatsoever.

The ability to impose your own self-concept on others is a matter of power; you can demand it, say, as a matter of customer service. This doesn’t change what those serving you know and think about you, but it allows you to suspend disbelief about it. Algorithms that serve us don’t allow for such suspension of disbelief, because they anticipate what service we might expect and put what they know about us into direct action. Algorithms can’t have opinions about us that they keep to themselves. They can’t help but reveal at all times that they “know more about us” — that is, they know us differently from how we know ourselves.

Rather than worry about controlling who can produce information about us, it may be more important to worry about the conflation of data with self-knowledge. The radical empiricism epitomized by the Quantified Self movement is becoming more and more mainstream as tracking devices that attempt to codify us as data become more prevalent — and threaten to become mandatory for various social benefits like health insurance. Self-tracking suggests that consciousness is a useless guide to knowing the self, generating meaningless opinions about what is happening to the self while interfering with the body’s proper responses to its biofeedback. It’s only so much subjectivity. Consciousness should subordinate itself to the data, be guided more automatically by it. And you need control of this data to control what you will think of yourself in response to it, and to control the “truth” about yourself.

by Rob Horning, Marginal Utility, TNI |  Read more:
Image: uncredited

Monday, March 16, 2015

Beach Boys

Love & Mercy


[ed. I've wondered why no one ever made a movie about Brian Wilson, the Beach Boys' troubled genius. Now they have. I hope it doesn't suck. (God only knows)]
h/t Boing Boing

Oregon is the First State to Adopt Automatic Voter Registration

[ed. The shape of things to come. How did this take so long? (Wait, I think I know).]

Seventeen years after Oregon decided to become the first state to hold all elections with mail-in ballots, it took another pioneering step on Monday to broaden participation by automatically registering people to vote.

Gov. Kate Brown signed a bill that puts the burden of registration on the state instead of voters.

Under the legislation, every adult citizen in Oregon who has interacted with the Driver and Motor Vehicle Services Division since 2013 but hasn't registered to vote will receive a ballot in the mail at least 20 days before the next statewide election. The measure is expected to add about 300,000 new voters to the rolls.

"It just changes expectations for who's responsible for making elections work," said Barry Burden, a professor of political science at the University of Wisconsin in Madison and director of the Elections Research Center. "In every other state it's the responsibility for the voters to make sure it happens."

Some other states have considered such legislation but none has gone as far as Oregon.

Minnesota nearly implemented automatic voter registration in 2009 before the plan was vetoed by Gov. Tim Pawlenty, who said "registering to vote should be a voluntary, intentional act."

Similar concerns were raised by Oregon's minority Republicans.

"Simply because it makes us unique or makes us first does not necessarily mean that it actually improves on what we're doing," said state Sen. Jackie Winters, a Republican from Salem.

Oregon Republicans also voiced worry about potential voter fraud, the cost of implementing the measure, and whether the DMV can ensure personal information remains secure.

Information the DMV has on file, such as age, residential information, signature and citizenship status, will be transferred to the secretary of state, who will then automatically update registration information.

When it came up for a vote in the state Senate last week, all Republicans and one Democrat voted against it. The Democrats hold an 18-12 advantage in the Senate, so the bill easily passed. (...)

People eligible to vote will get a postcard saying they've been registered and have three weeks to opt out. They'll be automatically registered as unaffiliated but can select a political party from the postcard and return it to election officials through the mail.

Automatic registration is not uncommon in other countries. A 2009 report by the Brennan Center for Justice says nations where the government takes the lead in enrolling voters have much higher registration rates. Argentina has a 100 percent registration rate, while Sweden, Australia and Canada all have registration rates over 90 percent.

by Sheila Kumar, AP |  Read more:
Image: AP Photo/Don Ryan

The Evil Genius of Airlines

It’s a relatively mild winter day in Toronto, but the wind whipping across the tarmac at Pearson International Airport threatens to freeze exposed skin anyway. Beneath the upturned wing of a Hainan Airlines Boeing 787, just in from Beijing, ramp worker Ahmed Osoble appears oblivious to the cold as he unloads luggage from the plane’s cavernous belly, wearing an orange safety vest over his bulky parka and green ear protectors over his black toque. One by one, Osoble and another ramp worker guide U-shaped luggage canisters, or “cans,” onto a hydraulic platform so they can be lowered to a waiting train of luggage carts hitched to a tractor. They work quickly. “If this plane is even five minutes delayed, it causes a whole bunch of other headaches,” Osoble yells over the noise of whining jet engines before jumping in the driver’s seat and scooting away.

For most who arrive or depart from Canada’s busiest airport, that luggage cart is the only glimpse they’ll get of the vast apparatus built to handle the straining suitcases and lumpy gym bags they check in at the departure counter. As passengers shuffle through the bright, airy airplane terminal, ground workers like Osoble shepherd thousands of pieces of luggage into a dimly lit world of grubby conveyor belts, bulky scanning machines and pinball-like flippers that violently shove luggage in the direction of its final destination. The average suitcase takes just 8.5 minutes to wind its way through the 16-km maze of conveyor belts at Pearson’s Terminal 1, and less than one per cent of the tens of millions of bags moved every year in Canada end up lost.

Yet these days airlines are offering passengers a big reason to bypass all that purpose-built infrastructure. Last fall, Air Canada and WestJet followed Toronto’s Porter Airlines and most U.S. carriers by charging economy-class passengers on domestic flights $25 to check a single suitcase (a charge to check a second piece of luggage had already been in place for several years). The move amounted to a doubling-down on the industry’s new fee-based business model, which, at times, appears designed to make parts of the travel experience so uncomfortable that passengers will pay to avoid them. A term has even been coined to reflect what airlines are inflicting on their customers: calculated misery. Want to sit with your family? Pay extra for advance seat selection. Uncomfortable in your economy-class seat? Pay to upgrade to a larger one that was only squeezed in because your original seat was shrunk.

But nowhere has the strategy been as lucrative for airlines’ bottom lines as the decision to charge for checking bags. In addition to asking passengers to pay for a formerly free service, the new fees have helped spawn a whole new family of optional charges related to advance boarding privileges—all the better to beat the rush of passengers lugging swollen carry-on luggage into the cabin and tying up precious overhead bin space.

Of course, all of these new fees and strategies to separate passengers from their money come at a time when the airlines are already benefiting tremendously from falling oil prices. And they’re not planning on passing on the savings if they can help it. “Our plan is not to pass any of it on,” WestJet’s CEO Gregg Saretsky said during a recent conference call to discuss the airline’s record fourth-quarter profit, in 2014, of $90 million. Likewise Air Canada’s CEO Calin Rovinescu—on the heels of recording his airline’s best financial performance in 77 years, with $531 million in profit in 2014—told investors: “We’ll use whatever tools we have at our disposal to drive profitability.”

While the new business model appears to offer the salvation that cash-strapped airlines have been seeking for years—baggage and other fees contributed to nearly US$50 billion in so-called “ancillary” revenues globally in 2014—it also risks unleashing a whole other dimension of hurt down the road, as flying becomes more miserable and cumbersome. In particular, the carry-on crisis spawned by checked-bag fees has bogged down already snail-like boarding times, tied up security screening lines at airports and forced unwilling employees to play the role of reluctant bag police. Even aircraft manufacturers have been dragged into the mess as they rush to redesign their planes to accommodate extra carry-on cargo. Meanwhile, all those kilometres of airport conveyor belts—financed by airport improvement fees, and designed to get planes on their way as fast as possible—threaten to go underutilized. All of these things add hidden costs—both monetary and psychological—to the already trying experience of modern air travel. And, as always, it’s passengers who will ultimately pay the price.

by Chris Sorenson, Macleans |  Read more:
Image: uncredited

How TripAdvisor is Changing the Way We Travel

The Koryo Hotel does pretty well on TripAdvisor, all things considered. The Internet never—and I mean never—works. The towels are “thin,” the sheet thread count low, and the milk powdered. Watch out for the “giant mutant cockroach snake hybrid” in the shower. Guests even have to pay for the pool.

Still, the Koryo, a pair of dull beige towers connected near the top by a sky bridge, is the number-one-rated hotel on TripAdvisor for Pyongyang, North Korea. Despite its many faults, it garners three and a half out of five “bubbles,” in TripAdvisor parlance (so as not to be confused with the star systems that signify quality in hotels and restaurants), across nearly 90 reviews.

Perhaps this is no surprise. People come expecting the worst, and with expectations so dismally calibrated, something like hot water starts to sound pretty amazing. Reviewers carefully note that many of the hotel’s quirks—you can’t walk out the front door unaccompanied, for instance—are out of the manager’s hands. (You’ll have to take that up with the Supreme Leader.) It may not be the Ritz-Carlton, the sentiment goes, but considering the fact that you are staying in the marquee property in the showcase capital of the world’s most repressive regime, it may be best to, as one reviewer counseled, “just chill out, have some beers, some expired Oreos from the gift shop and make friends with the other tourists.”

The fact that so many people are so earnestly reviewing a hotel that they have not themselves chosen (accommodations are selected by government-sanctioned tour operators), in a situation in which management is hardly likely to care, in a country where the Internet-driven wisdom of crowds is a remote fiction, speaks to the curious power of TripAdvisor, which, in its decade and a half of existence, has changed travel as we know it. The reviews demonstrate the abiding urge to share and the faith that sharing—even for that one-more-grain-of-sand 13,786th reviewer of the Bellagio Las Vegas—will make someone else’s experience, or quite possibly everyone’s experience, that much better.

No matter your destination, you will, at some point in your research, visit TripAdvisor. The company, with the humble mantra “real hotel reviews you can trust,” has become—on a rising tide of 200 million user reviews and counting—a travel-industry Goliath, able to turn obscure hotels into sold-out hot spots, carry new flocks of visitors on digital word of mouth to quiet destinations, even rewrite the hospitality standards of entire nations. For travelers the impact has been equally profound. What begins as a simple search-engine query becomes an epic fact-finding mission that leaves no moldy shower curtain unturned, a labyrinthine choose-your-own-adventure—do you read the one-bubble rant?—in which the perfect hotel always seems just one more click away. For all the power of the service, it raises deep questions about travel itself, including, most pressingly, who do we want—who do we trust—to tell us where to go? “The future,” Don DeLillo once wrote, “belongs to crowds.” Are we there yet?

by Tom Vanderbilt, Outside | Read more:
Image: Danka & Peter

The Social Security Maze and Other U.S. Mysteries

In recent years, the Boston University economist Laurence J. Kotlikoff has made something of a game — and then, eventually, an art — of cornering smart people and convincing them that they were leaving tens of thousands of Social Security dollars on the table.

He did it with Glenn Loury, an M.I.T.-trained, Brown University economics professor whose wife, Linda, had recently died of cancer. She was a prominent economics professor herself, but neither of them realized that her survivor’s benefits would be worth more than $100,000 to Mr. Loury until Mr. Kotlikoff told him.

Philip Moeller, a writer and retirement expert, also had it wrong, though he almost didn’t find out as he stubbornly insisted to Mr. Kotlikoff that he and his wife knew what they were doing. They did not; Mr. Kotlikoff landed them nearly $50,000 extra with some fancy footwork related to spousal benefits.

The pair eventually teamed with a third author, the “PBS NewsHour” correspondent Paul Solman, to write the guide to Social Security, “Get What’s Yours,” which came out last month. The overwhelming demand took the publisher by surprise. The book raced to the No. 1 slot sitewide on Amazon, whose inventory of paper books was wiped out for weeks.

But we should not be surprised by this. Social Security is the biggest source of retirement income for many Americans, and even if you’ve made more money than average during your career, that just means that the book’s tricks and tips will be ever more relevant. After all, the more you’ve made, the more you have at stake when it comes to filing for benefits in just the right way at precisely the right moment.

Given that there are, according to the authors, 2,728 core rules and thousands more supplements to them, it pays, literally, to seek out a guide. The book covers filing and suspending, “deeming” rules that are too complicated to explain here, the family maximum benefit and other head-scratchers that can nonetheless move the financial needle in a significant way.

The book’s success is also, however, symptomatic of something that we take for granted but should actually disgust us: The complexity of our financial lives is so extreme that we must painstakingly manage each and every aspect of it, from government programs to investing to loyalty programs. Mr. Kotlikoff’s game has yielded large winnings for his friends and readers (and several dinners of gratitude), but the fact that gamesmanship is even necessary in the first place with our national safety net is shameful.

Mr. Kotlikoff, 64, did not set out to become Dr. Social Security. Two decades ago, he and a colleague were studying the adequacy of life insurance. To do so, you need to know something about Social Security. Soon, Mr. Kotlikoff was developing a computer model for various payouts from the government program and realized that consumers might actually pay to use it.

From that instinct, a service called Maximize My Social Security was born, though it wasn’t easy to build and get right. “We had to develop very detailed code, and the whole Social Security rule book is written in geek,” he said. “It’s impossible to understand.”

Because of that, most people filing for benefits have to get lucky enough to encounter a true expert in their social circle, at a Social Security office or on its hotline. They are rare, and this information dissymmetry offends Mr. Kotlikoff. “We have a system that produces inequality systematically,” he said. It’s not because of what the beneficiaries earned, either; it’s simply based on their (perhaps random) access to those who have a deep understanding of the rules.

by Ron Lieber, NY Times |  Read more:
Image: Robert Neubecker

Sunday, March 15, 2015

Talking Heads


Hong Hao - Story of Literary Men
via:

Beck

A Kingdom in Splinters

What language did Adam and Eve speak in the Garden of Eden? Today the question might seem not only quaint, but daft. Thus, the philologist Andreas Kempe could speculate, in his “Die Sprache des Paradises” (“The Language of Paradise”) of 1688, that in the Garden God spoke Swedish to Adam and Adam replied in Danish while the serpent—wouldn’t you know it?—seduced Eve in French. Others suggested Flemish, Old Norse, Tuscan dialect, and, of course, Hebrew. But as James Turner makes clear in his magisterial and witty history, which ranges from the ludicrous to the sublime, philologists regarded the question not just as one addressing the origins of language, but rather as seeking out the origins of what makes us human; it was a question at once urgent and essential. After all, animals do express themselves: they chitter and squeak, they bay and roar and whinny. But none of them, so far as we know, wields grammar and syntax; none of them is capable of articulate and reasoned discourse. We have long prided ourselves, perhaps excessively, on this distinction. But on the evidence Turner so amply provides, we might also wonder whether the true distinction lies not simply in our ability to utter rational speech, but in the sheer obsessive love of language itself; that is, in philology, the “love of words.”

This abiding passion for words, cultivated fervently from antiquity into modern times—or at least until around 1800, in Turner’s view—encompassed a huge range of subjects as it developed: not only grammar and syntax, but rhetoric, textual editing and commentary, etymology and lexicography, as well as, eventually, anthropology, archeology, biblical exegesis, linguistics, literary criticism, and even law. It comprised three large areas: textual philology, theories about the origins of language, and, much later, comparative studies of different related languages. Two texts predominated: Homer, considered sacred by the ancient Greeks, and the Bible, a contested area of interpretation for both Jews and Christians. As for theories of language origins, these go back to the pre-Socratics and Plato; the controversy was over whether language was divinely given, with words corresponding to the things they named, or arrived at by convention (the nomos versus physis debate). As for comparative studies, these arose in the eighteenth century, largely as a result of Sir William Jones’s discovery of the common Indo-European matrix of most European languages. Encounters with “exotic,” that is, non-European, peoples in the course of the Renaissance voyages of discovery were another important source; here American Indian languages in their variety and complexity offered an especially rich, if perplexing, new field of inquiry.

To follow Turner’s account of all this is akin to witnessing the gradual construction of a vast and intricate palace-complex of the mind, carried out over centuries, with all its towers and battlements, crenellations and cupolas, as well as its shadier and sometimes disreputable alleyways and culs-de-sac, only to witness it disintegrate, by almost imperceptible stages, into fragmented ruins, a kingdom in splinters. The remnants of that grand complex, its shards and tottering columns, as it were, are our discrete academic disciplines today with their strict perimeters and narrow confines. To illustrate the difference, take Charles Eliot Norton (1827–1908), one of Turner’s heroes (and the subject of his Liberal Education of Charles Eliot Norton of 2002): Norton was the first professor of art history at Harvard and, indeed, one of the founders of the discipline, but he was also, among many other things, an expert on Dante who “taught and wrote art history as a philologist, an interpreter of texts.” Nowadays a polymath like Norton would not be hired, let alone get tenure, at any American university; he would be viewed as a dubious interloper on others’ turf.

In fact, traditional philology nowadays is less a ruin than the shadow of a ruin; no, even less than that, the vestige of a shadow. Turner acknowledges, and laments, this from the outset; he notes that “many college-educated Americans no longer recognize the word.” He adds that, “for most of the twentieth century, philology was put down, kicked around, abused, and snickered at, as the archetype of crabbed, dry-as-dust, barren, and by and large pointless academic knowledge. Did I mention mind-numbingly boring?” Worse, “it totters along with arthritic creakiness.” With friends like these, we might ask, can philology sink any further into oblivion than it already has? But the unspoken question here—“shall these bones live?”—is one that Turner poses and resolves triumphantly. He breathes life back into philology. There is not a dull page in this long book (and I include here its sixty-five pages of meticulous and sometimes mischievous endnotes). He accomplishes this by setting his account firmly in a detailed if inevitably brisk historical narrative interspersed with vivid cameos of individual scholars, the renowned as well as the notorious, the plainly deranged alongside the truly radiant.

Here I should disclose a distant interest. I once flirted with the idea of devoting myself to philology. I was soon dissuaded by my encounters with philologists in the flesh. The problem was not that they were dry; in fact, their cool, faintly cadaverous aplomb was a distinct relief amid the relentlessly “relevant” atmosphere of the 1960s. Dry but often outrageously eccentric, they were far from being George Eliot’s Casaubon toiling, and making others toil, to leave some small but significant trace in the annals of desiccation. No, it was rather their sheer single-mindedness coupled with a hidden ferocity that gave me pause. When I first met the late Albert Jamme, the renowned epigrapher of Old South Arabian, this Belgian Jesuit startled me by exclaiming at the top of his voice, “I hate my bed!” When I politely suggested that he get a new mattress, he shot back with “No, no! I hate my bed because it keeps me from my texts!” And the undiluted vitriol of Jamme’s opinions of his colleagues (all three of them!), both in conversation and in print, was scary; from him and others I learned that nothing distills venom more quickly than disagreement over a textual reading. At times there was something decidedly otherworldly about other philologists I met. In the 1970s, when I studied at the University of Tübingen and had the good fortune to work with Manfred Ullmann, the great lexicographer of Classical Arabic, he startled me one day by excitedly brandishing a file card on which was written the Arabic word for “clitoris” (bazr) and exclaiming, “Kli-tO-ris! What do ordinary folk know about Kli-tO-ris?” (More than you imagine, I thought.) Needless to say, it was the word—its etymology, its cognates, its morphology—that captivated him.

As for philological single-mindedness, when a celebrated German Assyriologist of my acquaintance (who shall remain nameless) got married, he rose abruptly from the wedding banquet to announce “Jetzt zur Arbeit!” (“Now to work!”) and headed for the door, a volume of cuneiform texts tucked under one arm; only the outraged intervention of his new mother-in-law kept him from leaving. Such anecdotes about philologists—their pugnacity, their obsessiveness, their downright daffiness—could fill a thick volume. Such anecdotes taught me not only that I wasn’t learned enough to become a philologist, I wasn’t unhinged enough either.

by Eric Ormsby, TNC | Read more:
Image: Papyrus of Callimachus's Aetia. via

Driving Mr. Albert

[ed. See also: Cuckoo Spit and Ski Jumps.]

In the beginning, there was a brain. All of the universe was the size of this brain, floating in space. Until one day it simply exploded. Out poured photons and quarks and leptons. Out flew dust particles like millions of fast-moving birds into the expanding aviary of the cosmos. Cooked heavy elements — silicon, magnesium, and nickel — were sucked into a small pocket and balled together under great pressure and morphed with the organic matter of our solar system. Lo, the planets!

Our world — Earth — was covered with lava, then granite mountains. Oceans formed, a wormy thing crawled from the sea. There were pea-brained brontosauri and fiery meteor showers and gnawing, hairy-backed monsters that kept coming and coming — these furious little stumps, human beings, us. Under the hot sun, we roasted different colors, fornicated, and fought. Full of wonder, we attached words to the sky and the mountains and the water, and claimed them as our own. We named ourselves Homer, Sappho, Humperdinck, and Nixon. We made bewitching sonatas and novels and paintings. Stargazed and built great cities. Exterminated some people. Settled the West. Cooked meat and slathered it with special sauce. Did the hustle. Built the strip mall.

And in the end, after billions of years of evolution, a pink two-story motel rose up on a drag of asphalt in Berkeley, California. The Flamingo Motel. There, a man stepped out onto the balcony in a bright beam of millennial sunlight, holding the original universe in his hands, in a Tupperware container, and for one flickering moment he saw into the future. I can picture this man now: he needs a haircut, he needs some coffee.

But not yet, not before we rewind and start again. Not long ago. In Maine on a bus. In Massachusetts on a train. In Connecticut behind the wheel of a shiny, teal-colored rental car. The engine purrs. I should know, I’m the driver. I’m on my way to pick up an eighty-four-year-old man named Thomas Harvey, who lives in a modest, low-slung 1950s ranch that belongs to his sixty-seven-year-old girlfriend, Cleora. To get there you carom through New Jersey’s exurbia, through swirls of dead leaves and unruly thickets of oak and pine that give way to well-ordered fields of roan, buttermilk, and black snorting atoms — horses. Harvey greets me at the door, stooped and chuckling nervously, wearing a red-and-white plaid shirt and a solid-blue Pendleton tie that still bears a waterlogged $10 price tag from some earlier decade. He has speckled, blowsy skin runneled with lines, an eagle nose, stubbed yellow teeth, bitten nails, and a spray of white hair as fine as corn silk that shifts with the wind over the bald patches on his head. He could be one of a million beach-bound, black-socked Florida retirees, not the man who, by some odd happenstance of life, possesses the brain of Albert Einstein — literally cut it out of the dead scientist’s head.

Harvey has stoked a fire in the basement, which is dank and dark, and I sit among crocheted rugs and genie bottles of blown glass, Ethiopian cookbooks, and macramé. It has taken me more than a year to find Harvey, and during that time I’ve had a dim, inchoate feeling — one that has increased in luminosity — that if I could somehow reach him and Einstein’s brain, I might unravel their strange relationship, one that arcs across this century and America itself. And now, before the future arrives and the supercomputers of the world fritz out and we move to lunar colonies — before all that hullabaloo — Harvey and I are finally sitting here together.

That day Harvey tells me the story he’s told before — to friends and family and pilgrims — one that has made him an odd celebrity even in this age of odd celebrity. He tells it deliberately, assuming that I will be impressed by it as a testament to the rightness of his actions rather than as a cogent defense of them. “You see,” he says, “I was just so fortunate to have been there. Just so lucky.”

“Fortunate” is one word, “improbable” is another.

by Michael Paterniti, Harper's | Read more:
Image: uncredited

Saturday, March 14, 2015

Geomancy Almanac


It’s hard to overestimate the importance of his contributions to the intellectual development of Europe — by the time of his death, the library was one of the most celebrated collections in Europe.

Phases of the moon, ‘Geomantie’ (Geomancy), Codex Palatinus 833 Germanicus.
via:

[ed. Wow, nobody ever finds these anymore. Good going, guys!]
photo: tadk

Beyond the Churn

Page-views suggest we crave short, informative text, ‘clickbaited’ with images of half-naked or bleeding bodies – even faster variations on the TV soundbites I once helped locate on tape. The marketplace has something to tell us. But to say that the 24/7, quick-and-dirty news cycle exists because people want it is incomplete logic. Poor people in a blighted urban food desert – devoid of garden or grocer but rife with Burger Kings and Dairy Queens – don’t consume fast food every day because their bodies are hungry for French fries. They consume it because they’re hungry for food. Its lack of nutrient density often means they have to keep eating – creating a confusing 21st century conundrum for the evolved human body: to be at once obese and malnourished.

In a media landscape of zip-fast reports as stripped of context as a potato might be stripped of fibre, most news stories fail to satiate. We don’t consume news all day because we’re hungry for information – we consume it because we’re hungry for connection. That’s the confusing conundrum for the 21st century heart and mind: to be at once over-informed and grasping for understanding.

I’ve begun college writing classes by asking students to name the first image that comes to mind at the term ‘atomic bomb’. Nearly every answer, every time, is ‘mushroom cloud’. They’ve seen that black-and-white photograph in high-school textbooks alongside brief paragraphs about mass death. But they can’t remember much about it. Who dropped the nuclear weapon? What year? In what country and for what reason? They memorised it all once, but it wasn’t relevant enough to their lives to stick.

Then the students read an excerpt from Hiroshima Diary (1955), the personal account of the Japanese physician Michihiko Hachiya, who in 1945 was enjoying his quiet garden when he saw a flash of light and found himself naked.

‘A large splinter was protruding from a mangled wound in my thigh, and something warm trickled into my mouth,’ Hachiya wrote. ‘My cheek was torn, I discovered as I felt it gingerly, with the lower lip laid wide open. Embedded in my neck was a sizable fragment of glass which I matter-of-factly dislodged, and with the detachment of one stunned and shocked I studied it and my blood-stained hand. Where was my wife?’

As he runs to help himself and others, Hachiya sees people moving ‘like scarecrows, their arms held out from their bodies’ to avoid the pain of burnt flesh touching burnt flesh. His attention to fact befits a man of science, but in rendering the sights, sounds and smells of the bomb’s wake, Hachiya is an artist. He relays the tale chronologically and with little judgment, allowing readers to find their own way to meaning. After reading his account, students look stunned and speak softly. Though generations and continents removed, they recognise Hachiya’s fears as their own; ‘atomic bomb’ has zoomed in from detached concept to on-the-ground reality.

We have the chance now to reach such understandings through digital journalism. Recent years have seen a surge of timely, immersive, nonfiction commissioned and relayed by digital media startups such as Atavist, Narratively, Guernica and many others. Meanwhile, think tanks such as the Nieman Storyboard at Harvard and the Future of Digital Longform project at Columbia are examining the integration of time-honoured story and its exciting new formats.

On journalistic training grounds, a slower joining of reporting and artistry is taking shape in college classrooms: writing programmes in English departments and fine-arts schools increasingly honour non‑fiction as a genre alongside fiction and poetry; such concentrations most often cater to memoir, but a handful of schools now offer robust opportunities in reportage, profile-writing, the essay. However, journalism schools, 15 years after the curriculum shifted beneath my feet, show little sign of seeking the artistic wisdom of creative programmes. The most formative course of my college training – a reporting wringer in which each of us wrote five news pieces per week for the lauded student paper while working all semester on one long, carefully crafted, front-page feature – no longer exists as a core requirement. Yes, today’s multimedia training demands contributed to that curricular decision, but journalism has long feared that ‘creativity’ means ‘making stuff up’ – a self-destructive fear, since a news story without creativity isn’t a story at all.

True story comprises two strands, spiralling: the specific and the universal. The earthly and transcendent, literal and metaphorical, tangible and intangible. The binding agent is the act of storytelling – often by the reliable devices of description, setting, structure, metaphor, character, but always through thoughtful ordering of words, images, sound. When we sever that bridge between objective fact and subjective meaning in the interest of speed or protocol, TV anchors awkwardly interview six-year-old witnesses to shooting rampages, and reporters convey military suicides as tallies in a descending order of deemed significance known as the ‘inverted pyramid’. This approach, though sometimes useful, ultimately desensitises or disturbs us. It fails to match the moment.

by Sarah Smarsh, Aeon | Read more:
Image: via: