Friday, September 23, 2016

The Perfect High for Credit Card Junkies

Somewhere in the middle of Ohio there’s a nondescript office park, the kind you could drive by for years and never really notice. One of the buildings in that park is basically windowless; you might mistake it for a warehouse or, if you were feeling exotic, a data center. The only thing that’s remarkable about the structure, signaling there’s something of great value inside, is an imposing floor-to-ceiling metal turnstile in a guarded vestibule.

This building is a modern-day mint. It’s where JPMorgan Chase, the largest issuer of credit cards in the U.S., manufactures around 60 percent of the 95 million cards it issues each year. The company requires visitors to keep its exact location secret. For more than 15 years, the facility has hummed along, embossing numbers onto plastic cards and stuffing them into envelopes. During that time, only three things have really disrupted BAU, Chase-ese for “business as usual”: the Target data breach of 2014, which required the factory to quickly reissue millions of cards; the industrywide 2015 switch from magnetic-stripe cards to ones that include microchips; and in August the frenzied demand for the company’s newest offering—the Chase Sapphire Reserve.

Ned Lindsey, Chase’s managing director of customer fulfillment, runs the Ohio plant and a sister facility in Texas. On Aug. 24, Lindsey noticed something strange—card requests were coming in at an extremely high rate, all for the Reserve. “We were seeing demand that was eight- to tenfold more than what we expected,” he says. Lindsey, it seems, doesn’t read credit card blogs. Since July, a fever had been building on social media among points-and-miles obsessives aware that Chase was preparing a premium card—one that would sit above its already-popular Sapphire Preferred, and offer rewards to match. Almost a month before Chase introduced Reserve, the community discovered the card’s perks through some leaked information: a sign-up bonus of 100,000 points, triple points on travel and dining, airport lounge memberships, and credits that offset a $450 annual fee, among other goodies. Of course, like its Sapphire Preferred brethren, the card would have a weighty metal core that creates what is known in the trade as “plunk factor.” Plasticheads got the vapors. “When I first heard the details,” wrote Brian Kelly, aka The Points Guy, probably the most influential card blogger, “I had to sit down, because it sounded way too good to be true.” The Sapphire Reserve, wrote another, Ben Schlappig, is “beyond a no-brainer, possibly the most compelling card we’ve ever seen.” On Reddit, a user shared that Chase had accidentally published a Reserve application link, and thousands of applications poured in before the page was deactivated. By the official launch date of Aug. 23, anticipation had built to the point that the Chase site was bumrushed by a horde of deal-seekers.

In Ohio, Lindsey brought in extra staff and added shifts. “We had to have a backup plan to the backup plan,” he says. The plant ran hot for about three weeks. During that time, Chase burned through its inventory of metal cards—a stock that was supposed to last 10 to 12 months. (The company declined to say how many Preferred or Reserve cards it’s issued.) To continue to sate the appetite for the Sapphire Reserve, Chase had to switch to standard plastic cards as placeholders. The bank says it’s now sent out as many plastic cards as metal—two years’ worth of cards, gone in less than a month.

When the first cards were shipped, some customers posted “unboxing” videos on YouTube, reverently exhibiting terms-and-conditions pamphlets to the camera. The videos have garnered more than 60,000 views; the Reddit thread has 10,000 comments. The hype usually reserved for a new iPhone was now being applied to a high-interest line of credit. And until mid-September, Chase hadn’t spent a dime on advertising.

Chase is not the first issuer to offer a big-annual-fee-but-mucho-points card. American Express and Citibank have been playing in that part of the market for years. But Chase has entered at the right time—when a growing community of enthusiasts will do the company’s marketing online for free—and with the right card, one that assiduously checks every box the modern credit card deal-hunter cares about. None of this happened by accident. The process to create new credit cards is little different from the research Procter & Gamble does to develop a new laundry detergent or Honda does to develop a new crossover SUV. A credit card is a means of payment and the extension of a loan, but it’s also a collection of perks and points that confer experiences and status upon its user, as well as an object people typically touch several times a day. In a country with more than half a billion credit cards already in circulation, it’s not a given that any new combination of attributes will work.

Chase had already learned a thing or two about how to make a card more than a card. The Sapphire Preferred, introduced in 2009 with a $95 annual fee, was the reigning “It card” before Reserve came along. One especially devoted cardholder is Bryan Denman, a 35-year-old New York creative director, who has taken advantage of Preferred’s perks to treat his girlfriend to Chase-sponsored private dinners at the restaurants Craft, Rebelle, and Le Bernardin, as well as to get good seats at venues like Madison Square Garden. Denman recognizes that loyalty to a credit card is unusual. “Everyone’s brought up to distrust their credit card company,” he says. “You don’t want to be on the phone with them; why would you want to spend the night with them?” But his experiences with Chase have conferred upon Denman the zeal of a convert. “I am a complete fanboy,” he says. “I’m telling everyone to get this card.” He’s not alone. “When I go to dinner, there might be three cards that get thrown down. They’re all Chase Sapphire.”

by Sam Grobart, Bloomberg |  Read more:
Image: Finlay MacKay

Art deco styling, São Paulo
via:

Jane Jacobs's Street Smarts

What the urbanist and writer got so right about cities—and what she got wrong.

Jane Jacobs’s aura was so powerful that it made her, precisely, the St. Joan of the small scale. Her name still summons an entire city vision—the much watched corner, the mixed-use neighborhood—and her holy tale is all the stronger for including a nemesis of equal stature: Robert Moses, the Sauron of the street corner. The New York planning dictator wanted to drive an expressway through lower Manhattan, and was defeated, the legend runs, by this ordinary mom. (...)

Her admirers and interpreters tend to be divided into almost polar opposites: leftists who see her as the champion of community against big capital and real-estate development, and free marketeers who see her as the apostle of self-emerging solutions in cities. In a lovely symmetry, her name invokes both political types: the Jacobin radicals, who led the French Revolution, and the Jacobite reactionaries, who fought to restore King James II and the Stuarts to the British throne. She is what would now be called pro-growth—“stagnant” is the worst term in her vocabulary—and if one had to pick out the two words in English that offended her most they would be “planned economy.” At the same time, she was a cultural liberal, opposed to oligarchy, suspicious of technology, and hostile to both big business and the military. Figuring out if this makes hers a rich, original mixture of ideas or merely a confusion of notions decorated with some lovely, observational details is the challenge that taking Jacobs seriously presents. (...)

Jacobs found her vocation quickly but her subject very late. She spent several years working for a magazine called Amerika, published by the U.S. State Department for distribution in the Soviet Union. Only in the mid-nineteen-fifties did she begin writing about urban issues and architecture, first for Architectural Forum and then for Fortune, which offered a surprisingly welcoming home to polemics against edifice-building. She married an equally cheerful, nonconformist architect, Robert Jacobs, and they moved—just before the first of their three children was born—into a house at 555 Hudson Street, an address that, for certain students of American originals, has attained the status of Thoreau’s cabin at Walden. (...)

It was against this background of established notoriety that Jacobs published, very much under the guidance of the editor Jason Epstein, “The Death and Life of Great American Cities.” The book is still astonishing to read, a masterpiece not of prose—the writing is workmanlike, lucid—but of American maverick philosophizing, in an empirical style that descends from her beloved Franklin. It makes connections among things which are like sudden illuminations, so that you exclaim in delight at not having noticed what was always there to see.

A celebration of the unplanned, improvised city of streets and corners, Jacobs’s is a landscape that most urban-planning rhetoric of the time condemned as obsolete and slummy, something to be replaced by large-scale apartment blocks with balconies and inner-courtyard parks. She insisted that such Corbusian super blocks tended to isolate their inhabitants, depriving them of the eyes-on-the-street crowding essential to city safety and city joys. She told the story of a little girl seemingly being harassed by an older man, and of how all of Hudson Street emerged from stores and stoops to protect her (though she confesses that the man turned out to be the girl’s father). She made the still startling point that, on richer blocks, a whole class of eyes had to be hired to play the role that, on Hudson Street, locals played for nothing: “A network of doormen and superintendents, of delivery boys and nursemaids, a form of hired neighborhood, keeps residential Park Avenue supplied with eyes.” A hired neighborhood! It’s obvious once it’s said, but no one before had said it, because no one before had seen it.

The book is really a study in the miracle of self-organization, as with D’Arcy Thompson’s studies of biological growth. Without plans, beautiful shapes and systems emerge from necessity. Where before her people had seen accident or exploitation or ugliness, she saw an ecology of appetites. The book rises to an unforgettable climax in a passage on the Whitmanesque “sidewalk ballet,” one of the most inspired, and consciousness-changing, passages in American prose:
Under the seeming disorder of the old city, wherever the old city is working successfully, is a marvelous order for maintaining the safety of the streets and the freedom of the city. It is a complex order. Its essence is intricacy of sidewalk use, bringing with it a constant succession of eyes. The order is all composed of movement and change, and although it is life, not art, we may fancifully call it the art form of the city and liken it to the dance . . . an intricate ballet in which the individual dancers and ensembles all have distinctive parts which miraculously reinforce each other and compose an orderly whole . . . Mr. Halpert unlocking the laundry’s handcart from its mooring to a cellar door, Joe Cornacchia’s son-in-law stacking out the empty crates from the delicatessen, the barber bringing out his sidewalk folding chair, Mr. Goldstein arranging the coils of wire which proclaim the hardware store is open, the wife of the tenement’s superintendent depositing her chunky three-year-old with a toy mandolin on the stoop, the vantage point from which he is learning the English that his mother cannot speak. . . . When I get home after work, the ballet is reaching its crescendo. This is the time of roller skates and stilts and tricycles, and games in the lee of the stoop with bottletops and plastic cowboys; this is the time of bundles and packages, zigzagging from the drug store to the fruit stand and back over to the butcher’s; this is the time when teen-agers, all dressed up, are pausing to ask if their slips show or their collars look right; this is the time when beautiful girls get out of MG’s; this is the time when fire engines go through; this is the time when anybody you know around Hudson Street will go by.
Reread today, the passage (it goes on for pages) may seem a touch overchoreographed. One imagines that other contemporary Village dweller S. J. Perelman reading it with a wince: where are the desultory dry cleaners and depressed delicatessen slicers in this Pagnol movie version of Village life? Still, anyone who lived on a New York block would have recognized its essential truth: a single Yorkville block, when I moved there, thirty-five years ago, had a deli, a playground, and a funeral home; the guys from Wankel’s Hardware on an avenue nearby gathered for lunch at the Anna Maria pizza place on the corner. The ballet happened.

Some of Jacobs’s theories were falsified by subsequent history: she expounds on how the block lengths on the Upper West Side keep its streets stagnant, compared with those of her beloved Village, but in fact Columbus Avenue later on became as lively as Hudson Street. Block lengths prove very secondary to attractive rents. Other insights remain evergreen: she shows that bad old buildings are as important to civic health as good old buildings, because, while the good old buildings get recycled upward, the bad ones prove to be a kind of urban mulch in which prospective new businesses can make a start. (One sees this today in Bushwick.)

Two core principles emerge from the book’s delightful and free-flowing observational surface. First, cities are their streets. Streets are not a city’s veins but its neurology, its accumulated intelligence. Second, urban diversity and density reinforce each other in a virtuous circle. The more people there are on the block, the more kinds of shops and social organizations—clubs, broadly put—they demand; and, the more kinds of shops and clubs there are, the more people come to seek them. You can’t have density without producing diversity, and if you have diversity things get dense. The two principles make it plain that any move away from the street—to an encastled arts center or to plaza-and-park housing—is destructive to a city’s health. Jacobs’s idea can be summed up simply: If you don’t build it, they will come. (A third is less a principle than an exasperated allergy: she hates cars, and what driving them and parking them does to towns.) (...)

Books written in a time of crisis can make bad blueprints for a time of plenty, as polemics made in times of war are not always the best blueprint for policies in times of peace. Jane Jacobs wrote “Death and Life” at a time when it was taken for granted that American cities were riddled with cancer. Endangered then, they are thriving now, with the once abandoned downtowns of Pittsburgh and Philadelphia and even Cleveland blossoming. Our city problems are those of overcharge and hyperabundance—the San Francisco problem, where so many rich young techies have crowded in to enjoy the city’s street ballet that there’s no room left for anyone else to dance. The first stirrings in Jacobs’s day of what we call “gentrification” she called, arrestingly, “unslumming,” insisting that the process works when a slum, amid falling rents and vacated buildings, becomes slimmed down to a “loyal core” of residents who, with eyes on the street, keep it livable enough for new residents to decide to enter. (This sounds right for, say, Crown Heights or Williamsburg, where the core of Hasidim and Caribbeans, staying out of convenience or clan loyalty, made the place appealing to new settlers.) It now seems self-evident to us, but did not then, that a city can fend off decline by drawing in creative types to work in close proximity on innovative projects, an urban process that Jacobs was one of the first to recognize, and name: she called it “slippage,” and saw its value. We live with the consequences of slippage, called by many ugly names, with “yuppie” usually thrown in for good measure.

The complexity of city housing and city streets becomes plainer if you objectively analyze the career of one of Jacobs’s contemporaries. In her writings, the urban planner Ed Logue is, along with Edmund Bacon, of Philadelphia, a prominent villain. Logue was the author of large-scale urban-renewal projects up and down the East Coast; he fathered Roosevelt Island here. In the new book of conversations, Jacobs speaks of him contemptuously. “I thought they were awful,” she says of his plans. “And I thought he was a very destructive man.” (Her interlocutor, outdoing her, likens Logue to Hitler.)

The reality is considerably more complicated than the caricature suggests, and Logue’s work is more of a challenge to Jacobs’s ideas than we might like. Logue is the subject of an illuminating study by the Harvard historian Lizabeth Cohen, who describes him as a man determined “to balance public and private power in order to keep American cities viable, even flourishing.” He may have made bad buildings, but he did it in pursuit of an urban vision in many ways more egalitarian and idealistic than anything that the small-street ideal could encompass. He was a passionate integrationist, and his plan for Boston put a huge emphasis on racial mixing, recognizing that the drive to “protect” neighborhoods most often meant keeping blacks out. In a confrontation at the Museum of Modern Art, in 1962, he accused Jacobs’s anti-planning polemics of winning her too many friends “among comfortable suburbanites,” who, he said, “like to be told that neither their tax dollars nor their own time need be spent on the cities they leave behind them at the close of each work day.” By our usual standards, Logue is the social-democratic, public-minded hero struggling for diversity and equity against the stranglehold of neighborhood segregation, and a progressive ought to side with him against the hidebound defender of organic communities inhospitable to outsiders. (Are there black folks on Hudson Street? Jacobs doesn’t say, and, as Kanigel makes plain, she was impatient when the question came up.) Jacobs pits the small-scale humanist against the brutal, large-scale city planner. But it is just as reasonable to pit the privileged apartment dweller, celebrating her own privilege, against the social democrat trying to produce decent mixed housing for the homeless and the deprived at a price that the city can afford.

By their fruits you shall know them, and by their concrete villages. A cable-car visit to Roosevelt Island is sobering for those briefly inclined to abandon Jacobs for Logue. This is surely not anyone’s idea of successful urbanism. Who would not rather live in the West Village than on Roosevelt Island? If they could afford to. But almost no one can—and the reality is that good housing that will alleviate the San Francisco problem will probably look more like Roosevelt Island than like the West Village, simply because more Roosevelt Islands can be built for many, and the West Village can be preserved for only a few. Refusing to look this truth in the face and think about how the Roosevelt Islands can be made better, rather than about why they are no good, is not to be honest about the challenges of the modern city. The solution can’t be pining for old neighborhoods, sneering at yuppies, and vilifying social planners.

by Adam Gopnik, New Yorker |  Read more:
Image: Elliot Erwitt

Stevan Dohanos (1907 – 1994) Trailer Park Garden – Caravan Suburbia
via:

The Trouble with Sombreros

[ed. Ms. Shriver's reply: Will the Left Survive the Millennials?]

In early September, the novelist Lionel Shriver gave a speech at the Brisbane Writers Festival in which she expressed her hope that identity politics and the concept of cultural appropriation would turn out to be passing fads. During her lecture, several audience members walked out in protest, and the text of her address has sparked a controversy that has spread across the Internet and the British and American press. It has stoked a debate already raging on college campuses, in the literary world, in the fashion and music industries, on city streets, and in other areas of our social and political lives. But while this debate has raised important issues—for writers and the public—both Shriver and her critics may have overlooked some of its larger implications.

“Taking intellectual property, traditional knowledge, cultural expressions, or artifacts from someone else’s culture without permission” is the definition of cultural appropriation that Shriver quotes from a book by Susan Scafidi, a law professor at Fordham University. The topic is a complicated and sensitive one, and Shriver’s first mistake, I think, was to ignore that complexity and sensitivity by adopting a tone that ranged from jauntiness to mockery and contempt. I can think of only a few situations in which humor is entirely out of line, but a white woman (even one who describes herself as a “renowned iconoclast”) speaking to an ethnically diverse audience might have considered the ramifications of playing the touchy subjects of race and identity for easy laughs.

Shriver began with the story of a “tempest-in-a-teacup” that erupted after two Bowdoin College students hosted a tequila party and gave out miniature sombreros “which—the horror—numerous partygoers wore.” The hosts were censured for “ethnic stereotyping” and threatened with removal from their student government posts; their guests were criticized in the student newspaper for lacking “‘basic empathy.’”

“I am a little at a loss,” Shriver said,
to explain what’s so insulting about a sombrero—a practical piece of headgear for a hot climate that keeps out the sun with a wide brim. My parents went to Mexico when I was small, and brought a sombrero back from their travels, the better for my brothers and I to unashamedly appropriate the souvenir to play dress-up. For my part, as a German-American on both sides, I’m more than happy for anyone who doesn’t share my genetic pedigree to don a Tyrolean hat, pull on some lederhosen, pour themselves a weissbier, and belt out the Hofbräuhaus Song. 
But what does this have to do with writing fiction? The moral of the sombrero scandal is clear: you’re not supposed to try on other people’s hats. Yet that’s what we’re paid to do, isn’t it? Step into other people’s shoes, and try on their hats.
Like much of Shriver’s talk, this paragraph contains a kernel of truth encased by a husk of cultural and historical blindness. It seems clear that one part of the fiction writer’s job is “to step into other people’s shoes.” But to paraphrase Freud, sometimes a hat is more than just a hat. Sometimes it is a symbol—and a racist one, at that.

For many Mexicans, the sombrero (now worn almost exclusively as a costume accessory by mariachis) perpetuates the myth of the backward, old-fashioned campesino, a throwback to an earlier century, chattering away in the heavily accented, high-pitched, rapid-fire rhythms of Speedy Gonzales, the cartoon mouse, in his big yellow sombrero. In the past one more often saw—painted on dinner plates and tourist knickknacks, embroidered on felt jackets—a caricature of a Mexican peasant dozing off, drunk or just lazy, leaning against a cactus, his face obscured by an enormous sombrero. And this is an unfortunate moment in which to mock a college for trying to reassure its Mexican and Latino students: Donald Trump has yet to call for the mass deportation of lederhosen-wearing, weissbier-swilling German-Americans.

Even as Shriver insisted on the writer’s right to imagine and empathize with people of different classes and races, she appears to have had some trouble empathizing with the people in her audience. It’s not hard to understand why the members of minority groups have grown impatient with the inability or unwillingness of governments and societies to confront the harsh realities of racism, of economic and social inequality, of de facto segregation. Nor is it difficult to find egregious examples of cultural appropriation: the sorry spectacle of feather bonnets and fake turquoise jewelry for sale at Native American fairs staffed and attended solely by white people. White musicians who get rich performing the songs of black soul and blues singers who live and die in poverty. The fast-food chain Taco Bell, which purveys a bastardized form of Mexican cuisine while paying its workers (who, in the West and Southwest, are often Mexican-Americans) wages that average between eight and nine dollars an hour.

Choosing to ignore the real inequities that exist, Shriver takes a familiar tack often used on Fox News: trivializing valid concerns by ridiculing their most absurd manifestations and extreme proponents. She cites the Oberlin students who (forgetting that our country has long functioned as a cultural and culinary melting pot) protested a piratization of Japanese culture: serving sushi in the school dining hall! (...)

Misdirecting our indignation, we let powerful individuals and institutions get away with murder while we fight enemies (academics and novelists) whose power is marginal at best, who may reflect prevailing prejudices but whose work, like it or not, hardly affects the larger society. Surely, corporate greed and the governments that have allowed our schools and health care systems to degenerate are more accountable than the authors of short stories. Though we all share the responsibility for the society in which we live, poets and painters are hardly to blame for the fact that we live in a racist country—or for having gotten us into the economic and political mess we are in.

by Francine Prose, NYR Daily | Read more:
Image: Alex Webb/Magnum Photos

Thursday, September 22, 2016


Celeste Dupuy-Spencer, Fall With Me For A Million Days (My Sweet Waterfall), 2016
via:

The Strange Second Life of String Theory

String theory strutted onto the scene some 30 years ago as perfection itself, a promise of elegant simplicity that would solve knotty problems in fundamental physics — including the notoriously intractable mismatch between Einstein’s smoothly warped space-time and the inherently jittery, quantized bits of stuff that made up everything in it.

It seemed, to paraphrase Michael Faraday, much too wonderful not to be true: Simply replace infinitely small particles with tiny (but finite) vibrating loops of string. The vibrations would sing out quarks, electrons, gluons and photons, as well as their extended families, producing in harmony every ingredient needed to cook up the knowable world. Avoiding the infinitely small meant avoiding a variety of catastrophes. For one, quantum uncertainty couldn’t rip space-time to shreds. At last, it seemed, here was a workable theory of quantum gravity.

Even more beautiful than the story told in words was the elegance of the math behind it, which had the power to make some physicists ecstatic.

To be sure, the theory came with unsettling implications. The strings were too small to be probed by experiment and lived in as many as 11 dimensions of space. These dimensions were folded in on themselves — or “compactified” — into complex origami shapes. No one knew just how the dimensions were compactified — the possibilities for doing so appeared to be endless — but surely some configuration would turn out to be just what was needed to produce familiar forces and particles.

For a time, many physicists believed that string theory would yield a unique way to combine quantum mechanics and gravity. “There was a hope. A moment,” said David Gross, an original player in the so-called Princeton String Quartet, a Nobel Prize winner and permanent member of the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara. “We even thought for a while in the mid-’80s that it was a unique theory.”

And then physicists began to realize that the dream of one singular theory was an illusion. The complexities of string theory, all the possible permutations, refused to reduce to a single one that described our world. “After a certain point in the early ’90s, people gave up on trying to connect to the real world,” Gross said. “The last 20 years have really been a great extension of theoretical tools, but very little progress on understanding what’s actually out there.”

Many, in retrospect, realized they had raised the bar too high. Coming off the momentum of completing the solid and powerful “standard model” of particle physics in the 1970s, they hoped the story would repeat — only this time on a mammoth, all-embracing scale. “We’ve been trying to aim for the successes of the past where we had a very simple equation that captured everything,” said Robbert Dijkgraaf, the director of the Institute for Advanced Study in Princeton, New Jersey. “But now we have this big mess.”

Like many a maturing beauty, string theory has gotten rich in relationships, complicated, hard to handle and widely influential. Its tentacles have reached so deeply into so many areas in theoretical physics, it’s become almost unrecognizable, even to string theorists. “Things have gotten almost postmodern,” said Dijkgraaf, who is a painter as well as mathematical physicist. (...)

String theory today looks almost fractal. The more closely people explore any one corner, the more structure they find. Some dig deep into particular crevices; others zoom out to try to make sense of grander patterns. The upshot is that string theory today includes much that no longer seems stringy. Those tiny loops of string whose harmonics were thought to breathe form into every particle and force known to nature (including elusive gravity) hardly even appear anymore on chalkboards at conferences. At last year’s big annual string theory meeting, the Stanford University string theorist Eva Silverstein was amused to find she was one of the few giving a talk “on string theory proper,” she said. A lot of the time she works on questions related to cosmology.

Even as string theory’s mathematical tools get adopted across the physical sciences, physicists have been struggling with how to deal with the central tension of string theory: Can it ever live up to its initial promise? Could it ever give researchers insight into how gravity and quantum mechanics might be reconciled — not in a toy universe, but in our own?

“The problem is that string theory exists in the landscape of theoretical physics,” said Juan Maldacena, a mathematical physicist at the IAS and perhaps the most prominent figure in the field today. “But we still don’t know yet how it connects to nature as a theory of gravity.” Maldacena now acknowledges the breadth of string theory, and its importance to many fields of physics — even those that don’t require “strings” to be the fundamental stuff of the universe — when he defines string theory as “Solid Theoretical Research in Natural Geometric Structures.” (...)

Researchers have developed a huge number of quantum field theories in the past decade or so, each used to study different physical systems. Beem suspects there are quantum field theories that can’t be described even in terms of quantum fields. “We have opinions that sound as crazy as that, in large part, because of string theory.”

This virtual explosion of new kinds of quantum field theories is eerily reminiscent of physics in the 1930s, when the unexpected appearance of a new kind of particle — the muon — led a frustrated I.I. Rabi to ask: “Who ordered that?” The flood of new particles was so overwhelming by the 1950s that it led Enrico Fermi to grumble: “If I could remember the names of all these particles, I would have been a botanist.”

Physicists began to see their way through the thicket of new particles only when they found the more fundamental building blocks making them up, like quarks and gluons. Now many physicists are attempting to do the same with quantum field theory. In their attempts to make sense of the zoo, many learn all they can about certain exotic species.

Conformal field theories (the “CFT” in AdS/CFT) are a starting point: simplified quantum field theories that behave the same way at small and large distances, said David Simmons-Duffin, a physicist at the IAS. If these specific kinds of field theories could be understood perfectly, answers to deep questions might become clear. “The idea is that if you understand the elephant’s feet really, really well, you can interpolate in between and figure out what the whole thing looks like.” (...)

Inflationary models get tangled in string theory in multiple ways, not least of which is the multiverse — the idea that ours is one of a perhaps infinite number of universes, each created by the same mechanism that begat our own. Between string theory and cosmology, the idea of an infinite landscape of possible universes became not just acceptable, but even taken for granted by a large number of physicists. The selection effect, Silverstein said, would be one quite natural explanation for why our world is the way it is: In a very different universe, we wouldn’t be here to tell the story.

This effect could be one answer to a big problem string theory was supposed to solve. As Gross put it: “What picks out this particular theory” — the Standard Model — from the “plethora of infinite possibilities?”

Silverstein thinks the selection effect is actually a good argument for string theory. The infinite landscape of possible universes can be directly linked to “the rich structure that we find in string theory,” she said — the innumerable ways that string theory’s multidimensional space-time can be folded in upon itself.

by K.C. Cole, Quanta |  Read more:
Image: Renee Rominger/Moonrise Whims

Yahoo Hack Steals Personal Info From at Least 500M Accounts

Computer hackers swiped personal information from at least 500 million Yahoo accounts in what is believed to be the biggest digital break-in at an email provider.

“We’d have found the security breach earlier, but even we don’t search Yahoo.”
The massive security breakdown disclosed Thursday poses new headaches for Yahoo CEO Marissa Mayer as she scrambles to close a $4.8 billion sale to Verizon Communications.

The breach dates back to late 2014, raising questions about the checks and balances within Yahoo - a fallen internet star that has been laying off staff to counter a steep drop in revenue during the past eight years.

At the time of the break-in, Yahoo's security team was led by Alex Stamos, a respected industry executive who left last year to take a similar job at Facebook.

Yahoo didn't explain what took so long to uncover a breach that it blamed on a "state-sponsored actor" - parlance for a hacker working on behalf of a foreign government. The Sunnyvale, California, company declined to explain how it reached its conclusions about the attack, but said it is working with the FBI and other law enforcement as part of its ongoing investigation.

MOST ACCOUNTS EVER STOLEN

"This is a pretty big deal that is probably going to cost them tens of millions of dollars," predicted Avivah Litan, a computer security analyst for Gartner Inc. "Regulators and lawyers are going to have a field day with this one."

Litan described it as the most accounts stolen from a single email provider.

The stolen data includes users' names, email addresses, telephone numbers, birth dates, scrambled passwords, and the security questions - and answers - used to verify an accountholder's identity.

Last month, the tech site Motherboard reported that a hacker who uses the name "Peace" boasted that he had account information belonging to 200 million Yahoo users and was trying to sell the data on the web.

Yahoo is recommending that users change their passwords if they haven't done so since 2014. The company said the attacker didn't get any information about its users' bank accounts or credit and debit cards.

THE VERIZON IMPACT

News of the security lapse could cause some people to have second thoughts about relying on Yahoo's services, raising a prickly issue for the company as it tries to sell its digital operations to Verizon Communications.

That deal, announced two months ago, isn't supposed to close until early next year. That leaves Verizon with wiggle room to renegotiate the purchase price or even back out if it believes the security breach will harm Yahoo's business. That could happen if users shun Yahoo or file lawsuits because they're incensed by the theft of their personal information.

by Michael Liedtke, AP |  Read more:
Image: Schwartz, New Yorker

Hillary vs. the Hate Machine

[ed. See also: Australia, Go to Your Room.]

They were everywhere this summer, the wanna-be statesmen, the failed comedians, the conspiracy theorists and entrepreneurs with political convictions, or absolutely no convictions, selling the national id. In Cleveland, they trawled the streets outside the Republican National Convention, shouting, "Hillary's lies matter!" or "Hillary for prison!" – the slogans stamped on buttons, T-shirts, bumper stickers, decals, trucker hats, hoodies, onesies. At the Democratic National Convention in Philadelphia, diehards in Bernie 2016 shirts held signs reading "#NeverHillary" or "Shillary," or handed out posters renaming the Democratic nominee "War Hawk" or "Goldman Girl" or "Monsanto Mama." Everywhere the venom was carefully packaged and rigorously on-message. One button, plumbing the depths of the anti-politically correct, read "Life's a Bitch – Don't Vote for One." Another promoted a "KFC Hillary Special: 2 Fat Thighs, 2 Small Breasts...Left Wing." There were images of an angry Hillary giving America the finger and countless others of her yelling, scowling, looking mean. "Hillary sucks, but not like Monica!" yelled one T-shirt vendor, who told me he'd sold almost 500 shirts in Cleveland with that catchphrase. "Trump that bitch!"

A San Diego lawyer I met in July wore a lapel pin depicting Hillary Clinton as Lucifer. "She's an evil person," he told me. "Evil." He'd come to this conclusion, he said, after reading Armageddon: How Trump Can Beat Clinton, written by former-Bill-Clinton-adviser-turned-National-Enquirer hit-man Dick Morris, which shot to Number Three on The New York Times bestseller list. For much of the summer, three of the top five books on the list were direct attacks on Hillary Clinton (a fourth, Glenn Beck's Liars, is an attack on progressives more broadly). The lawyer admitted that he really had no idea if Clinton was actually evil – he didn't pay careful attention to her record – it was more of a feeling.

Feeling, for lack of a better word, is what drives most Americans' perceptions of Hillary Clinton, one of the most complex and resilient figures in U.S. politics, yet also, after decades of probing scrutiny, less a real person than a vessel for Americans to collectively project their anxieties, fears, frustrations and identity struggles. Across the country, people of every political persuasion – men, women, millennials, baby boomers – told me they were eager for a woman president, just not this woman. Clinton is "inauthentic," some say, as well as selfish – "Her eyes are on her own game," one Democrat noted – calculating and corrupt. Another told me, "She's a fucking liar."

The pervasiveness of these sorts of terms in the national conversation about Clinton tells us far less about her character than it does about the character called "Hillary Clinton," the construction of a sustained and well-funded strategy by the right to shape the way we talk about Clinton, what we believe we "know" about Clinton and also how we view her statements, gestures, actions, policies and, most crucially, her mistakes. "There's just a huge amount of bullshit, and it's been going on for decades," says Bobbie Greene McCarthy, Clinton's friend and former White House deputy chief of staff. "This is too important of an election to buy into a false narrative. But the noise just drowns out any sort of critical thinking."

In November, Americans face a historic choice: Vote for a man who is widely considered one of the most unqualified people to ever run for president, or cast a ballot for a woman whose qualifications for the job exceed those of just about any other candidate in the modern era. Electing the first woman president of the United States will be a revolutionary act, as terrifying to some as it is thrilling to others, but her victory is anything but inevitable. There's a very real chance the visceral hatred, or at minimum the visceral ambivalence, toward Hillary Clinton could hand the election to Donald Trump.

There are valid criticisms to be made about Clinton. She is one of the least transparent politicians in recent memory. Her 2002 vote to authorize the invasion of Iraq is seen by many, including Clinton herself, as a mistake. As an unapologetic capitalist, whose wealth, family philanthropic foundation and Wall Street ties speak to a cozy relationship with the political and financial elite, she is seen as emblematic of the "rigged" system Trump and Bernie Sanders have campaigned against. And, like many politicians, Clinton has bobbed and weaved over the course of her career, sometimes tacking to the right.

Still, she is fundamentally liberal, and running on a highly progressive platform that includes raising the minimum wage and passing gun-safety measures like universal background checks. Clinton has also been a tireless advocate for women and families since the 1970s and, unlike any secretary of state before her, made global women's issues a key point on her agenda. "She is somebody who wants to be president for all the right reasons," says Clinton's longtime aide Jennifer Klein, who is currently advising Clinton on women's and girls' issues. "I mean, that's the irony of all of these negative characterizations: You couldn't find a person who is more dedicated to improving people's lives than Hillary Clinton."

In an article in March for The Guardian, the former editor of The New York Times Jill Abramson analyzed the relationship between Clinton's fundraising and policy positions over the past few years and concluded that Clinton was "fundamentally honest and truthful." She noted that the same conclusion was drawn by PolitiFact, which after exhaustive analysis found Clinton to be the most honest of this year's presidential candidates. Yet Clinton received far more negative media coverage during the 2015 primary season than either Sanders or Trump, according to a study by Harvard's Kennedy School of Government.

"It's almost like these journalists don't know how not to undermine her," one of Clinton's supporters lamented, noting Matt Lauer's widely panned September 7th national-security forum on NBC. Lauer devoted more than a quarter of his interview with Clinton to her private e-mail server, skipping right over most of her foreign-policy bona fides, and then without pushback let Trump express support for Vladimir Putin, make a thoroughly erroneous claim that he'd never supported the Iraq War and defend a tweet asserting that women who join the military somehow should "expect" to be sexually assaulted.

Several days later, while leaving a ceremony commemorating the victims of 9/11 in Lower Manhattan, Clinton visibly stumbled. Captured on video, it was reported as a "fainting spell" – even Tom Brokaw suggested that Clinton might want to consult a neurologist – thus giving further credence to Trump's assertion that she lacks the "stamina" to be president. When Clinton's campaign acknowledged that the candidate had been diagnosed with pneumonia two days earlier but had chosen to press on without informing anyone but her closest allies, the media responded by noting that her "penchant for privacy," as The New York Times put it, "threatens to make her look, again, as though she has something to hide."

Running for president is both exhausting and stressful; in 2004, John Kerry also came down with pneumonia during his presidential campaign. Clinton has been campaigning and fundraising relentlessly, so much so that her running mate, Tim Kaine, noted he had trouble keeping up with her. Yet almost none of the media reports pointed out that, ill or not, she nonetheless showed up to the 9/11 ceremony.

That Clinton has been held to a higher, and often altogether different, benchmark is both "shocking in 2016" and also not surprising, says Abramson, who, as the first female editor of the Times, was perceived as "pushy" and later fired in part over a pay dispute in which Abramson argued her compensation was less than that of her male predecessor. "When a woman achieves the top position in an important American institution, clichés like 'too ambitious' and 'shrill' easily get applied," she says. "Where, when a man exhibits those traits, it's seen as a sign of leadership." With Clinton, she adds, it's very telling that when she left her job as secretary of state, 69 percent of Americans approved of her, the second-highest rating recorded in history. "She was an international and national icon," says Abramson. "But she was subordinate to Obama." Once Clinton announced her intention to run for president, her poll numbers took a dive of more than 10 points.

And therein lies the rub: Hillary Clinton, one could argue, is right on par with any number of male politicians who have made compromises and who, as human beings, are flawed, lose their tempers, occasionally drop the f-bomb and do many other things that Clinton, as a woman, has been excoriated for. "With Hillary, everything she does is either different from what men do and it's 'wrong,' or it's the same thing that men do and that's 'wrong,'" says Robin Lakoff, a professor emeritus of linguistics at U.C. Berkeley. "And that's because the underlying thing about Clinton and her candidacy is it's not normal. Normal is a male candidate, a male voice, a male tie."

The controversy regarding the 30,000 State Department e-mails that Clinton stored (wrongly, she acknowledges) on her private server in Chappaqua, New York, is typical of many attacks on Clinton – less about the substantive issues than it is about her character. That Clinton maintained a private e-mail server came to light during an investigation by the House Select Committee on Benghazi, whose chief purpose, Rep. Kevin McCarthy admitted in September 2015, was to bring down Clinton's poll numbers, part of what he called a "strategy to fight and win." Now in its 10th iteration since the Benghazi attack in 2012, the committee has failed to find the federal government, or Clinton herself, guilty of any criminal wrongdoing.

Most recently, its members alleged that Clinton lied to Congress during her 11-hour testimony in October 2015, based on supposed inconsistencies between Clinton's statements and those of FBI Director James Comey, who in July declared that his office, while finding no legal reason to recommend prosecution, nonetheless found that she and her staff had been "extremely careless" in handling possibly classified material.

Putting aside the propriety of Comey personally weighing in on the matter – something former Department of Justice spokesman Matthew Miller called a "gross abuse of his own power" – the entire controversy over e-mails that were, depending on the telling, either classified at the time or possibly not classified correctly or classified after the fact or not classified at all, but contained information that may or may not have been classified, has been covered in one or more major publications every single day (but four) for the past 562 days, according to the liberal digital-media company Shareblue. Clinton's e-mails now rival the Watergate scandal as one of the most reported stories in political history.

by Janet Reitman, Rolling Stone | Read more:
Image: Albin Lohr-Jones/Pacific Press/Zuma

Cow Dung Capitalism: Milking the Holy Cow

“We have got it wrong all this time.” The voice on the other end of the line in Bengaluru is crisp and energetic. Inflected with a slight Kannada accent, it has the distinct quality of someone filled with the certainty of his new business model, perhaps a startup man full of entrepreneurial zeal who believes he has stumbled upon something novel and disruptive.

“The milk is not the main revenue you can source from the cow,” he says. “It is urine and dung.”

What? “Yes, that’s right,” he says in all sincerity. “It is cow urine and dung.”

You would expect a hermit holed up in some Himalayan cave to come up with that. Maybe even one of the numerous insincere voices you hear in newspapers pulling down butcher shops and wailing for the rights of the Indian cow. But a working professional in the startup capital of the country?

“You don’t believe me?” he asks, and then delves into the math of his business model, launching into a speech with several short pauses, as though he is punching numbers on a calculator while talking. “A cow will live for 15-20 years. Let’s say 15. It will give milk for only some of those years. Say, 12 years. One cow will yield an average of seven litres per day. One litre will get you a maximum of Rs 60 or Rs 70. Now multiply all those numbers. That’s the most you can get from milk in its lifetime… But just consider cow urine. It is always available, however old the cow gets, and with the right marketing, every litre will fetch you a product that costs Rs 150 or more.”

Convinced of his reasoning, Shiva Kumar, the general manager of Maa Gou Products (MGP) who heads the company’s production and distribution, is confident of everlasting success: “Do you see what I mean?”
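Kumar's back-of-the-envelope arithmetic is easy to reproduce. The sketch below uses only the figures he quotes (seven litres a day, twelve milking years, Rs 60-70 a litre); the Rs 65 midpoint price is an assumption, and since he gives no daily urine volume, only the milk side is computed:

```python
# Lifetime milk revenue from one cow, using Kumar's quoted figures.
litres_per_day = 7
milking_years = 12
price_per_litre = 65  # assumed midpoint of his Rs 60-70 range

lifetime_litres = litres_per_day * 365 * milking_years
milk_revenue = lifetime_litres * price_per_litre

print(lifetime_litres)  # 30660
print(milk_revenue)     # 1992900, i.e. roughly Rs 20 lakh
```

Against that ceiling of roughly Rs 20 lakh over a cow's whole milking life, urine at Rs 150 or more per litre, available for as long as the cow lives, is the comparison he is drawing.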

MGP is one of several businesses aiming to profit from cow urine and dung that have mushroomed in the country over the past few years. This Bengaluru-based firm makes and retails several products, ranging from ayurvedic medicines and ointments to daily consumables, an offer basket that includes many of the classical ‘panchgavya’, the blessed five: cow milk, curd, ghee, urine and dung.

Since its inception in 2011 with a line-up of 20 cow-derived products, MGP has grown rapidly. Today it retails 40 such items, with several more in the pipeline. And it has gone from sales of around Rs 5,000 in its first month of existence to over Rs 25 lakh per month now, as its promoters claim. The brand’s range is available in over 500 ayurveda pharmacies and other outlets across several South Indian cities, with plans of expanding to other parts of the country. Orders can even be placed on e-commerce websites like Bigbasket and Amazon.

“You have to understand us,” says Mahavir Sonika, one of MGP’s promoters who is also the founder of the Bengaluru-based export house Suneeta Impex, “We are not gau rakshaks (cow protectors) out on the roads screaming ‘gau raksha, gau raksha’. We are businessmen. And this is a huge untapped market.”

In India something strange is occurring. The cow—a symbol both of religious reverence and communal vigilantism— whose value in a modern economy, irrespective of the politics around it, one would assume should decline as increasing numbers adopt urban lifestyles far removed from an agrarian culture, is finding itself the fount of a new form of business. A unique marriage is unfolding here, between ancient belief systems and the market forces of capitalism. Gurujis are turning into businessmen, and businessmen are turning to cows. With demand for alternate systems of healing and therapy on the rise in urban India, the cow is being marketed as a source of infinite well-being. Tradition is now tradition chic. And the cow, a market choice.

Unadulterated cow urine and dung have always been procured from cow-shelters by the traditional for use at home and in temple pujas. What’s recent is the array of therapeutic and beauty products flooding the market that use these as ingredients. There are face packs, bath scrubbers, mosquito coils and incense sticks that contain cow dung. There are creams, cough syrups, body oils, health tonics, weight-loss tonics, and floor disinfectants that contain distilled cow urine. You name it, they have it. And the names of gau mutra or gau arka (cow urine) or cow dung are not hidden away in long lists of fine print on the packages. It is spotlighted right up front, in bold letters, as the chief ingredient. You can go to a neighbourhood shop and buy it, or drop by a fancy mall and have it bar-code billed before it’s popped into your shopping bag. And, if you so wish, you can even go online and click—or finger tap—yourself a delivery.

“There was always a demand, I think,” Kumar says. “In the past, people only used urine and dung, and you needed to know somebody in a gaushaala (cow shelter), to get them. But who has the time these days, especially in the cities? So what we have done is just made it more accessible, in a more variety of products, in these busy cities.”

by Lhendup G Bhutia, Open | Read more:
Image: Dipti Desai

Hacker-Proof Code Confirmed

[ed. This sounds a little like bitcoin technology to me, but maybe I'm completely mistaken.]

In the summer of 2015 a team of hackers attempted to take control of an unmanned military helicopter known as Little Bird. The helicopter, which is similar to the piloted version long-favored for U.S. special operations missions, was stationed at a Boeing facility in Arizona. The hackers had a head start: At the time they began the operation, they already had access to one part of the drone’s computer system. From there, all they needed to do was hack into Little Bird’s onboard flight-control computer, and the drone was theirs.

When the project started, a “Red Team” of hackers could have taken over the helicopter almost as easily as it could break into your home Wi-Fi. But in the intervening months, engineers from the Defense Advanced Research Projects Agency (DARPA) had implemented a new kind of security mechanism — a software system that couldn’t be commandeered. Key parts of Little Bird’s computer system were unhackable with existing technology, its code as trustworthy as a mathematical proof. Even though the Red Team was given six weeks with the drone and more access to its computing network than genuine bad actors could ever expect to attain, they failed to crack Little Bird’s defenses.

“They were not able to break out and disrupt the operation in any way,” said Kathleen Fisher, a professor of computer science at Tufts University and the founding program manager of the High-Assurance Cyber Military Systems (HACMS) project. “That result made all of DARPA stand up and say, oh my goodness, we can actually use this technology in systems we care about.”

The technology that repelled the hackers was a style of software programming known as formal verification. Unlike most computer code, which is written informally and evaluated based mainly on whether it works, formally verified software reads like a mathematical proof: Each statement follows logically from the preceding one. An entire program can be tested with the same certainty that mathematicians prove theorems.

“You’re writing down a mathematical formula that describes the program’s behavior and using some sort of proof checker that’s going to check the correctness of that statement,” said Bryan Parno, who does research on formal verification and security at Microsoft Research.

The aspiration to create formally verified software has existed nearly as long as the field of computer science. For a long time it seemed hopelessly out of reach, but advances over the past decade in so-called “formal methods” have inched the approach closer to mainstream practice. Today formal software verification is being explored in well-funded academic collaborations, the U.S. military and technology companies such as Microsoft and Amazon.

The interest occurs as an increasing number of vital social tasks are transacted online. Previously, when computers were isolated in homes and offices, programming bugs were merely inconvenient. Now those same small coding errors open massive security vulnerabilities on networked machines that allow anyone with the know-how free rein inside a computer system.

“Back in the 20th century, if a program had a bug, that was bad, the program might crash, so be it,” said Andrew Appel, professor of computer science at Princeton University and a leader in the program verification field. But in the 21st century, a bug could create “an avenue for hackers to take control of the program and steal all your data. It’s gone from being a bug that’s bad but tolerable to a vulnerability, which is much worse,” he said. (...)

A formal specification is a way of defining what, exactly, a computer program does. And a formal verification is a way of proving beyond a doubt that a program’s code perfectly achieves that specification. To see how this works, imagine writing a computer program for a robot car that drives you to the grocery store. At the operational level, you’d define the moves the car has at its disposal to achieve the trip — it can turn left or right, brake or accelerate, turn the engine on or off at either end of the trip. Your program, as it were, would be a compilation of those basic operations arranged in the appropriate order so that at the end, you arrived at the grocery store and not the airport.

The traditional, simple way to see if a program works is to test it. Coders submit their programs to a wide range of inputs (or unit tests) to ensure they behave as designed. If your program were an algorithm that routed a robot car, for example, you might test it between many different sets of points. This testing approach produces software that works correctly, most of the time, which is all we really need for most applications. But unit testing can’t guarantee that software will always work correctly because there’s no way to run a program through every conceivable input. Even if your driving algorithm works for every destination you test it against, there’s always the possibility that it will malfunction under some rare conditions — or “corner cases,” as they’re called — and open a security gap. In actual programs, these malfunctions could be as simple as a buffer overflow error, where a program copies a little more data than it should and overwrites a small piece of the computer’s memory. It’s a seemingly innocuous error that’s hard to eliminate and provides an opening for hackers to attack a system — a weak hinge that becomes the gateway to the castle.
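The buffer-overflow scenario can be made concrete. The toy function below (hypothetical, written for this illustration) copies input into a fixed-size buffer with a classic off-by-one-style mistake: unit tests on typical inputs all pass, and only the corner case where the input just exceeds the buffer exposes the bug. In C the same mistake would silently overwrite adjacent memory; Python merely raises an error:

```python
BUF_SIZE = 8

def copy_to_buffer(data):
    buf = [0] * BUF_SIZE
    # Bug: should copy at most BUF_SIZE items,
    # i.e. range(min(len(data), BUF_SIZE))
    for i in range(len(data)):
        buf[i] = data[i]
    return buf

# Unit tests on "typical" inputs all pass...
assert copy_to_buffer([1, 2, 3]) == [1, 2, 3, 0, 0, 0, 0, 0]
assert copy_to_buffer(list(range(8))) == list(range(8))

# ...but the corner case is only found if you think to try it.
try:
    copy_to_buffer(list(range(9)))  # one item too many
except IndexError:
    print("bug caught only because Python bounds-checks")
```

Testing samples the input space; a formal proof would have to rule out the overflow for every possible input length.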

“One flaw anywhere in your software, and that’s the security vulnerability. It’s hard to test every possible path of every possible input,” Parno said.

Actual specifications are subtler than a trip to the grocery store. Programmers may want to write a program that notarizes and time-stamps documents in the order in which they’re received (a useful tool in, say, a patent office). In this case the specification would need to explain that the counter always increases (so that a document received later always has a higher number than a document received earlier) and that the program will never leak the key it uses to sign the documents.

This is easy enough to state in plain English. Translating the specification into formal language that a computer can apply is much harder — and accounts for a main challenge when writing any piece of software in this way.

“Coming up with a formal machine-readable specification or goal is conceptually tricky,” Parno said. “It’s easy to say at a high level ‘don’t leak my password,’ but turning that into a mathematical definition takes some thinking.”
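For a flavor of the gap between plain English and a checkable property, here is the notarizing example reduced to a toy sketch (hypothetical code; this is only a property check over sampled inputs, whereas genuine formal verification would prove the property for all inputs in a proof assistant):

```python
def notarize_batch(docs):
    """Stamp documents with strictly increasing sequence numbers,
    in the order received."""
    return [(i + 1, doc) for i, doc in enumerate(docs)]

def counter_always_increases(stamped):
    """The English spec 'the counter always increases',
    formalized as a property of the output."""
    nums = [n for n, _ in stamped]
    return all(a < b for a, b in zip(nums, nums[1:]))

# Check the formalized property for every batch size up to 100.
# A proof would cover every size; a test only samples.
assert all(counter_always_increases(notarize_batch(["doc"] * k))
           for k in range(100))
print("property holds on all sampled inputs")
```

Note that even this toy formalization says nothing about the second half of the spec, never leaking the signing key; capturing that kind of property mathematically is exactly the conceptual difficulty Parno describes.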

by Kevin Hartnett, Quanta | Read more:
Image: Boya Sun for Quanta Magazine

Wednesday, September 21, 2016

For $178 Million, the U.S. Could Pay for One Fighter Plane – or 3,358 Years of College

Does free college threaten our all-volunteer military? That is what Benjamin Luxenberg asks on the military blog War on the Rocks. But the real question goes beyond Luxenberg's practical query, striking deep into who we are and what we will be as a nation.

Unlike nearly every other developed country, which offers free or low-cost higher education (Germany, Sweden and others are completely free; Korea's flagship Seoul National University runs about $12,000 a year, around the same as Oxford), in America you need money to go to college. Harvard charges $63,000 a year for tuition, room, board and fees, a quarter of a million dollars for a degree. Even a good state school will charge $22,000 for in-state tuition, room and board.

Right now there are only a handful of paths to higher education in America: have well-to-do parents; be low-income and smart enough to qualify for financial aid; take on crippling debt; or...

Join the military.

The Post-9/11 GI Bill provides up to $20,000 per year for tuition, along with an adjustable living stipend. At Harvard that stipend is $2,800 a month. Universities participating in the Yellow Ribbon Program make additional funds available without affecting the GI Bill entitlement. There are also the military academies, such as West Point, and the Reserve Officers’ Training Corps, commonly known as ROTC, which provide full or near-full college scholarships to future military officers.

Overall, 75 percent of those who enlisted or who sought an officer’s commission said they did so to obtain educational benefits. And in that vein, Luxenberg raises the question of whether the lower-cost college education presidential nominee Hillary Clinton proposes is a threat to America's all-volunteer military. If college were cheaper, would they still enlist?

It is a practical question worth asking, but raises more serious issues in its trail. Do tuition costs need to stay high to help keep the ranks filled? Does unequal access to college help sustain our national defense?

Of course motivation to join the service is often multi-dimensional. But let's look a little deeper, and ask what it says about our nation when we guarantee affordable higher education to only a slim segment of our population. About seven percent of all living Americans were in the military at some point. Less than 0.5 percent of the American population currently serves. Why do we leave the other 99.5 percent to whatever they can or can't scrape together on their own? (...)

As a kind of thought experiment, let's begin by rounding off the military higher education benefit, tuition and living stipend, to $53,000 a year. We’ll note a single F-35 fighter plane costs $178 million.

Dropping just one plane from inventory generates 3,358 years of college money. We could pass on buying a handful of the planes, and a lot of people who now find college out of reach could go to school.
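The trade-off above is one line of arithmetic:

```python
F35_COST = 178_000_000   # one F-35 fighter plane, in dollars
ANNUAL_BENEFIT = 53_000  # tuition plus living stipend, per year

years_of_college = F35_COST // ANNUAL_BENEFIT
print(years_of_college)  # 3358
```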

The final question many people will be asking at this point is one of entitlement. What did those civilians do that the United States should give them college money?

Ignoring the good idea of expanding “service” to include critical non-military national needs, the answer is nothing. If we started giving out the funds today, those civilians did nothing for them. But maybe it is more important than that.

Security is defined by much more than a large standing military (and that does not even touch the question of how, say, an eight year occupation of Iraq made America more secure). The United States, still struggling to transition from a soot and steel industrial base that collapsed in the 1970s to something that can compete in the 21st century, can only do so through education. More smart people equals more people who can take on the smarter jobs that drive prosperity. It is an investment in one of the most critical forms of infrastructure out there – brains.

To be sure, the issue of how much the United States should spend on defense, and how that money should be allotted, is complex. But the changes to spending discussed here exist far to the margins of that debate: the defense budget is some $607 billion, already the world’s largest by far. The cost of providing broader access to higher education would be a tiny fraction of that amount, far below any threshold where a danger to America’s defense could be reasonably argued.

by Peter Van Buren, Reuters | Read more:
Image: Randall Mikkelsen

Tuesday, September 20, 2016

The Perils of Planned Extinctions

A cynical move is underway to promote a new, powerful, and troubling technology known as “gene drives” for use in conservation. This is not just your everyday genetic modification, known as “GMO”; it is a radical new technology, which creates “mutagenic chain reactions” that can reshape living systems in unimaginable ways.

Gene drives represent the next frontier of genetic engineering, synthetic biology, and gene editing. The technology overrides the standard rules of genetic inheritance, ensuring that a particular trait, delivered by humans into an organism’s DNA using advanced gene-editing technology, spreads to all subsequent generations, thereby altering the future of the entire species.

It is a biological tool with unprecedented power. Yet, instead of taking time to consider fully the relevant ethical, ecological, and social issues, many are aggressively promoting gene-drive technology for use in conservation.

One proposal aims to protect native birds on Hawaii’s Kauai Island by using gene drives to reduce the population of a species of mosquito that carries avian malaria. Another plan, championed by a conservation consortium that includes US and Australian government agencies, would eradicate invasive, bird-harming mice on particular islands by introducing altered mice that prevent the population from producing female offspring. Creating this “daughterless mouse” would be the first step toward so-called Genetic Biocontrol of Invasive Rodents (GBIRd), designed to cause deliberate extinctions of “pest” species like rats, in order to save “favored” species, such as endangered birds.

The assumption underlying these proposals seems to be that humans have the knowledge, capabilities, and prudence to control nature. The idea that we can – and should – use human-driven extinction to address human-caused extinction is appalling.

I am not alone in my concern. At the ongoing International Union for the Conservation of Nature (IUCN) World Conservation Congress in Hawaii, a group of leading conservationists and scientists issued an open letter, entitled “A Call for Conservation with a Conscience,” demanding a halt to the use of gene drives in conservation. I am one of the signatories, along with the environmental icon David Suzuki, physicist Fritjof Capra, the Indigenous Environmental Network’s Tom Goldtooth, and organic pioneer Nell Newman.

The discussions that have begun at the IUCN congress will continue at the United Nations Convention on Biological Diversity in Mexico this December, when global leaders must consider a proposed global moratorium on gene drives. Such discussions reflect demands by civil-society leaders for a more thorough consideration of the scientific, moral, and legal issues concerning the use of gene drives.

As I see it, we are simply not asking the right questions. Our technological prowess is largely viewed through the lens of engineering, and engineers tend to focus on one question: “Does it work?” But, as Angelika Hilbeck, President of the European Network of Scientists for Social and Environmental Responsibility (ENSSER) argues, a better question would be: “What else does it do?”

When it comes to the GBIRd project, for example, one might ask whether the “daughterless mouse” could escape the specific ecosystem into which it has been introduced, just as GMO crops and farmed salmon do, and what would happen if it did. As for the mosquitos in Hawaii, one might ask how reducing their numbers would affect the endangered hoary bat species.

Ensuring that these kinds of questions are taken into account will be no easy feat. As a lawyer experienced in US government regulations, I can confidently say that the existing regulatory framework is utterly incapable of assessing and governing gene-drive technology.

Making matters worse, the media have consistently failed to educate the public about the risks raised by genetic technologies. Few people understand that, as MIT science historian Lily Kay explains, genetic engineering was deliberately developed and promoted as a tool for biological and social control. Those driving that process were aiming to fulfill a perceived mandate for “science-based social intervention.”

Powerful tools like genetic modification and, especially, gene-drive technology spark the imagination of anyone with an agenda, from the military (which could use them to make game-changing bio-weapons) to well-intentioned health advocates (who could use them to help eradicate certain deadly diseases). They certainly appeal to the hero narrative that so many of my fellow environmentalists favor.

by Claire Hope Cummings, Project Syndicate | Read more:
Image: Michael Morgenstern via:

Here's the thing...
via:

I Used to Be a Human Being

I was sitting in a large meditation hall in a converted novitiate in central Massachusetts when I reached into my pocket for my iPhone. A woman in the front of the room gamely held a basket in front of her, beaming beneficently, like a priest with a collection plate. I duly surrendered my little device, only to feel a sudden pang of panic on my way back to my seat. If it hadn’t been for everyone staring at me, I might have turned around immediately and asked for it back. But I didn’t. I knew why I’d come here.

A year before, like many addicts, I had sensed a personal crash coming. For a decade and a half, I’d been a web obsessive, publishing blog posts multiple times a day, seven days a week, and ultimately corralling a team that curated the web every 20 minutes during peak hours. Each morning began with a full immersion in the stream of internet consciousness and news, jumping from site to site, tweet to tweet, breaking news story to hottest take, scanning countless images and videos, catching up with multiple memes. Throughout the day, I’d cough up an insight or an argument or a joke about what had just occurred or what was happening right now. And at times, as events took over, I’d spend weeks manically grabbing every tiny scrap of a developing story in order to fuse them into a narrative in real time. I was in an unending dialogue with readers who were caviling, praising, booing, correcting. My brain had never been so occupied so insistently by so many different subjects and in so public a way for so long.

I was, in other words, a very early adopter of what we might now call living-in-the-web. And as the years went by, I realized I was no longer alone. Facebook soon gave everyone the equivalent of their own blog and their own audience. More and more people got a smartphone — connecting them instantly to a deluge of febrile content, forcing them to cull and absorb and assimilate the online torrent as relentlessly as I had once. Twitter emerged as a form of instant blogging of microthoughts. Users were as addicted to the feedback as I had long been — and even more prolific. Then the apps descended, like the rain, to inundate what was left of our free time. It was ubiquitous now, this virtual living, this never-stopping, this always-updating. I remember when I decided to raise the ante on my blog in 2007 and update every half-hour or so, and my editor looked at me as if I were insane. But the insanity was now banality; the once-unimaginable pace of the professional blogger was now the default for everyone.

If the internet killed you, I used to joke, then I would be the first to find out. Years later, the joke was running thin. In the last year of my blogging life, my health began to give out. Four bronchial infections in 12 months had become progressively harder to kick. Vacations, such as they were, had become mere opportunities for sleep. My dreams were filled with the snippets of code I used each day to update the site. My friendships had atrophied as my time away from the web dwindled. My doctor, dispensing one more course of antibiotics, finally laid it on the line: “Did you really survive HIV to die of the web?”

But the rewards were many: an audience of up to 100,000 people a day; a new-media business that was actually profitable; a constant stream of things to annoy, enlighten, or infuriate me; a niche in the nerve center of the exploding global conversation; and a way to measure success — in big and beautiful data — that was a constant dopamine bath for the writerly ego. If you had to reinvent yourself as a writer in the internet age, I reassured myself, then I was ahead of the curve. The problem was that I hadn’t been able to reinvent myself as a human being.

I tried reading books, but that skill now began to elude me. After a couple of pages, my fingers twitched for a keyboard. I tried meditation, but my mind bucked and bridled as I tried to still it. I got a steady workout routine, and it gave me the only relief I could measure for an hour or so a day. But over time in this pervasive virtual world, the online clamor grew louder and louder. Although I spent hours each day, alone and silent, attached to a laptop, it felt as if I were in a constant cacophonous crowd of words and images, sounds and ideas, emotions and tirades — a wind tunnel of deafening, deadening noise. So much of it was irresistible, as I fully understood. So much of the technology was irreversible, as I also knew. But I’d begun to fear that this new way of living was actually becoming a way of not-living.

By the last few months, I realized I had been engaging — like most addicts — in a form of denial. I’d long treated my online life as a supplement to my real life, an add-on, as it were. Yes, I spent many hours communicating with others as a disembodied voice, but my real life and body were still here. But then I began to realize, as my health and happiness deteriorated, that this was not a both-and kind of situation. It was either-or. Every hour I spent online was not spent in the physical world. Every minute I was engrossed in a virtual interaction I was not involved in a human encounter. Every second absorbed in some trivia was a second less for any form of reflection, or calm, or spirituality. “Multitasking” was a mirage. This was a zero-sum question. I either lived as a voice online or I lived as a human being in the world that humans had lived in since the beginning of time.

And so I decided, after 15 years, to live in reality.

by Andrew Sullivan, Select/All, NY Magazine | Read more:
Image: Kim Dong-kyu, Based on: Wanderer Above the Sea of Fog, by Caspar David Friedrich (1818).

Man v Rat: Could the Long War Soon be Over?

[ed. Not sure about this. See also: The Perils of Planned Extinctions.]

First, the myths. There are no “super rats”. Apart from a specific subtropical breed, they do not get much bigger than 20 inches long, including the tail. They are not blind, nor are they afraid of cats. They do not carry rabies. They do not, as was reported in 1969 regarding an island in Indonesia, fall from the sky. Their communities are not led by elusive, giant “king rats”. Rat skeletons cannot liquefy and reconstitute at will. (For some otherwise rational people, this is a genuine concern.) They are not indestructible, and there are not as many of them as we think. The one-rat-per-human in New York City estimate is pure fiction. Consider this the good news.

In most other respects, “the rat problem”, as it has come to be known, is a perfect nightmare. Wherever humans go, rats follow, forming shadow cities under our metropolises and hollows beneath our farmlands. They thrive in our squalor, making homes of our sewers, abandoned alleys, and neglected parks. They poison food, bite babies, undermine buildings, spread disease, decimate crop yields, and very occasionally eat people alive. A male and female left to their own devices for one year – the average lifespan of a city rat – can beget 15,000 descendants.

There may be no “king rat”, but there are “rat kings”, groups of up to 30 rats whose tails have knotted together to form one giant, swirling mass. Rats may be unable to liquefy their bones to slide under doors, but they don’t need to: their skeletons are so flexible that they can squeeze their way through any hole or crack wider than half an inch. They are cannibals, and they sometimes laugh (sort of) – especially when tickled. They can appear en masse, as if from nowhere, moving as fast as seven feet per second. They do not carry rabies, but a 2014 study from Columbia University found that the average New York City subway rat carried 18 viruses previously unknown to science, along with dozens of familiar, dangerous pathogens, such as C difficile and hepatitis C. As recently as 1994 there was a major recurrence of bubonic plague in India, an unpleasant flashback to the 14th century, when that rat-borne illness killed 25 million people in five years. Collectively, rats are responsible for more human death than any other mammal on earth.

Humans have a peculiar talent for exterminating other species. In the case of rats, we have been pursuing their total demise for centuries. We have invented elaborate, gruesome traps. We have trained dogs, ferrets, and cats to kill them. We have invented ultrasonic machines to drive them away with high-pitched noise. (Those machines, still popular, do not work.) We have poisoned them in their millions. In 1930, faced with a rat infestation on Rikers Island, New York City officials flushed the area with mustard gas. In the late 1940s, scientists developed anticoagulants to treat thrombosis in humans, and some years later supertoxic versions of the drugs were developed in order to kill rats by making them bleed to death from the inside after a single dose. Cityscapes and farmlands were drenched with thousands of tons of these chemicals. During the 1970s, we used DDT. These days, rat poison is not just sown in the earth by the truckload, it is rained down from helicopters that track the rats with radar – in 2011, 80 metric tonnes of poison-laced bait were dumped on to Henderson Island, home to one of the last untouched coral reefs in the South Pacific. In 2010, Chicago officials went “natural”: figuring a natural predator might track and kill rats, they released 60 coyotes wearing radio collars on to the city streets.

Still, here they are. According to Bobby Corrigan, the world’s leading expert on rodent control, many of the world’s great cities remain totally overrun. “In New York – we’re losing that war in a big way,” he told me. Combat metaphors have become a central feature of rat conversation among pest control professionals. In Robert Sullivan’s 2014 book Rats, he described humanity’s relationship with the species as an “unending and brutish war”, a battle we seem always, always to lose.

Why? How is it that we can send robots to Mars, build the internet, keep alive infants born so early that their skin isn’t even fully made – and yet remain unable to keep rats from threatening our food supplies, biting our babies, and appearing in our toilet bowls?

“Frankly, rodents are the most successful species,” Loretta Mayer told me recently. “After the next holocaust, rats and Twinkies will be the only things left.” Mayer is a biologist, and she contends that the rat problem is actually a human problem, a result of our foolish choices and failures of imagination. In 2007, she co-founded SenesTech, a biotech startup that offers the promise of an armistice in a conflict that has lasted thousands of years. The concept is simple: rat birth control.

The rat’s primary survival skill, as a species, is its unnerving rate of reproduction. Female rats ovulate every four days, copulate dozens of times a day and remain fertile until they die. (Like humans, they have sex for pleasure as well as for procreation.) This is how you go from two to 15,000 in a single year. When poison or traps thin out a population, they mate faster until their numbers regenerate. Conversely, if you can keep them from mating, colonies collapse in weeks and do not rebound.
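As an illustration of how quickly that compounding works, here is a toy model of a single breeding pair (the six-week cycle, litters of eight, half female, and one-cycle maturation are rough assumptions for the sketch, not figures from the article):

```python
# Toy model: descendants of one breeding pair over roughly a year,
# stepped in ~6-week cycles (assumed: litters of 8, half female,
# female pups themselves fertile after one cycle).
fertile_females = 1    # the founding female
juvenile_females = 0   # females born last cycle, not yet breeding
descendants = 0

for cycle in range(8):                    # ~8 six-week cycles per year
    litter = fertile_females * 8          # assumed litter size
    descendants += litter
    fertile_females += juvenile_females   # last cycle's pups mature
    juvenile_females = litter // 2        # half of new pups are female

print(descendants)  # → 5856
```

Even with these deliberately conservative parameters the model yields 5,856 descendants in a year; slightly larger litters or shorter cycles push the total past the article's 15,000.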

Solving the rat problem by putting them on the pill sounds ridiculous. Until recently, no pharmaceutical product existed that could make rats infertile, and even if one had, there was still the question of how it could be administered. But if such a thing were to work, the impact could be historic. Rats would die off without the need for poison, radar or coyotes.

SenesTech, which is based in Flagstaff, Arizona, claims to have created a liquid that will do exactly that.

by Jordan Kisner, The Guardian | Read more:
Image: Frank Greenaway/Getty Images/Dorling Kindersley