Friday, January 31, 2014


Klaus Fussmann (German, b. 1938), Marguerites, 2010.
via:

How to Escape the Community College Trap

When Daquan McGee got accepted to the Borough of Manhattan Community College in the spring of 2010, he was 19 and still finding his footing after a two-year prison sentence for attempted robbery. He signed up for the standard battery of placement tests in reading, writing, and math; took them cold; and failed two—writing and math. Steered into summer developmental education (otherwise known as remediation), he enrolled in an immersion writing course, which he passed while working full-time at a Top Tomato Super Store. Then McGee learned of a program for which a low-income student like him might qualify, designed to maximize his chances of earning a degree. At a late-summer meeting, he got the rundown on the demands he would face.

McGee would have to enroll full-time in the fall, he was told; part-time attendance was not permitted. Every other week, he would be required to meet with his adviser, who would help arrange his schedule and track his progress. In addition to his full course load, McGee would have to complete his remaining remedial class, in math, immediately. If he slipped up, his adviser would hear about it from his instructor—and mandatory tutoring sessions would follow. If he failed, he would have to retake the class right away. Also on McGee’s schedule was a non-optional, noncredit weekly College Success Seminar, featuring time-management strategies, tips on study habits and goal setting, exercises in effective communication, and counsel on other life skills. The instructor would be taking attendance. If McGee complied with all that was asked of him, he would be eligible for a monthly drill: lining up in one of the long hallways in the main campus building to receive a free, unlimited MetroCard good for the following month. More important, as long as he stayed on track, the portion of his tuition not already covered by financial aid would be waived.

In a hurry to make up for his wasted prison years, McGee signed up. The pace, as he’d been warned, was fast from the start, and did not ease up after the fall. Through the spring semester and on into his second year, his course load remained heavy, and the advisory meetings continued, metronomically. He was encouraged to take winter- and summer-term classes, filling in the breaks between semesters. McGee, a guy with a stocky boxer’s build, doesn’t gush—he conveys low-key composure—but when I met him in October of 2012, early in his third year, he had only praise for the unremitting pushiness, and for the array of financial benefits that came along with it. The package was courtesy of a promising experimental initiative that goes by the snappy acronym ASAP, short for Accelerated Study in Associate Programs. Last winter, McGee graduated with an associate’s degree in multimedia studies. It had taken him two and a half years.

In the community-college world, McGee’s achievement is a shockingly rare feat, and the program that so intently encouraged him to accomplish it is a striking anomaly. The country’s low-cost sub-baccalaureate system—created a century ago to provide an open and affordable entry into higher education to an ever more diverse group of Americans—now enrolls 45 percent of all U.S. undergraduates, many of them part-time students. But only a fraction ever earn a degree, and hardly anyone does it quickly. The associate’s degree is nominally a two-year credential, and the system is proud of its transfer function, sending students onward to four-year schools, as juniors, to pursue a bachelor’s degree—the goal that 80 percent of entrants say they aspire to. Reality, however, typically confounds that tidy timeline. In urban community colleges like the Borough of Manhattan Community College, the national three-year graduation rate is 16 percent. Nationwide, barely more than a third of community-college enrollees emerge with a certificate or degree within six years.

Behind these dismal numbers lie the best of intentions. Community colleges have made it their mission to offer easy access, flexibility, and lots of options to a commuter population now dominated by “nontraditional” students. That’s a catchall label for the many people who don’t fit the classic profile of kids living in dorms, being financed by their parents. Nearly 70 percent of high-school graduates currently pursue some kind of postsecondary schooling, up from half in 1980. The surge is hardly surprising: higher education, over the past three decades, has become a prerequisite for a middle-class life. But of course, as the matriculation rate has climbed, so has the number of students who enter college with marginal credentials and other handicaps. The least academically prepared and most economically hard-pressed among them are typically bound for community college, where low-income students—plenty of them the first in their family to venture beyond high school—outnumber their high-income peers 2-to-1. Many of these students are already juggling jobs and family commitments by their late teens (McGee and his longtime girlfriend had a baby daughter in the fall of his freshman year). This could hardly be a more challenging population to serve.

The bet public community colleges have made—that the best way to meet the needs of their constituents is by offering as much flexibility and convenience as possible—makes a certain intuitive sense in light of such complications. So does a commitment to low cost. Give students a cheap, expansive menu, served up at all hours; don’t demand a specific diet—that’s not a bad metaphor for the community-college experience today.

by Ann Hulbert, The Atlantic |  Read more:
Image: Mike McQuade

U.S. Spying at Copenhagen Climate Talks Sparks Anger

[ed. But, but... they told us it was just for terrorism, they don't actually read the metadata!]

Developing countries have reacted angrily to revelations that the United States spied on other governments at the Copenhagen climate summit in 2009.

Documents leaked by Edward Snowden show how the US National Security Agency (NSA) monitored communication between key countries before and during the conference, giving American negotiators advance information about other countries’ positions at the high-profile meeting, where world leaders including Barack Obama, Gordon Brown and Angela Merkel failed to agree to a strong deal on climate change.

Jairam Ramesh, the then Indian environment minister and a key player in the talks that involved 192 countries and 110 heads of state, said: "Why the hell did they do this and at the end of this, what did they get out of Copenhagen? They got some outcome but certainly not the outcome they wanted. It was completely silly of them. First of all, they didn't get what they wanted. With all their hi-tech gizmos and all their snooping, ultimately the Basic countries [Brazil, South Africa, India and China] bailed Obama out. With all their snooping what did they get?"

Martin Khor, an adviser to developing countries at the summit and director of the South Centre thinktank, said: "Would you play poker with someone who can see your cards? Spying on one another like this is absolutely not on. When someone has the upper hand, it is very disconcerting. There should be an assurance in negotiations like this that powerful players are not going to gain undue advantage with technological surveillance.

"For negotiations as complex as these we need maximum goodwill and trust. It is absolutely critical. If there is anything that prevents a level playing field, that stops negotiations being held on equal grounds. It disrupts the talks," he said. (...)

The document shows the NSA had provided advance details of the Danish plan to "rescue" the talks should they founder, and also had learned of China's efforts to coordinate its position with India before the conference.

The talks – which ended in disarray after the US, working with a small group of 25 countries, tried to ram through an agreement that most developing countries rejected – were marked by subterfuge, passion and chaos.

Members of the Danish negotiating team told the Danish newspaper Information that both the US and Chinese delegations were "peculiarly well-informed" about closed-door discussions. "They simply sat back, just as we had feared they would if they knew about our document," one source told Information.

by John Vidal and Suzanne Goldenberg, The Guardian | Read more:
Image: Olivier Morin/AFP/Getty Images

Wednesday, January 29, 2014


[ed. Vermeer selfie]
via:

David Jonason, Toast & Coffee, 2004
via:

The Largest Free Transit Experiment in the World

Last January, Tallinn, the capital city of Estonia, did something that no other city its size had done before: It made all public transit in the city free for residents.

City officials made some bold predictions about what would result. There would be a flood of new passengers on Tallinn’s buses and trams — as many as 20 percent more riders. Carbon emissions would decline substantially as drivers left their cars at home and rode transit instead. And low-income residents would gain new access to jobs that they previously couldn’t get to. As Mayor Edgar Savisaar likes to say, zeroing out commuting costs was for some people as good as receiving a 13th month of salary.

One year later, this city of 430,000 people has firmly established itself as the leader of a budding international free-transit movement. Tallinn has hosted two conferences for city officials, researchers and journalists from across Europe to discuss the idea. The city has an English-language website devoted to its experiment. And promotional materials have proclaimed Tallinn the "capital of free public transport."

The idea has been very popular with Tallinners. In an opinion poll, nine out of ten people said they were happy with how it’s going. Pille Saks is one of them. "I like free rides," says Saks, a 29-year-old secretary who goes to work by bus. "I have a car, but I don’t like to drive with it, especially in the winter when there is a lot of snow and roads are icy."

Different Reads on Ridership

What’s less clear on the first anniversary of free transit in Tallinn is whether it has actually changed commuting behavior all that much.

Mayor Savisaar says it has. He points to numbers from early last year, showing that traffic on the biggest crossroads had decreased by 14 percent compared to a week shortly before the policy started. He has also cited substantial increases in transit riders. "We are frequently asked … why we are offering free-of-charge public transport," Savisaar told a gathering of officials from Europe and China in August. "It is actually more appropriate to ask why most cities in the world still don’t."

by Sulev Vedler, The Atlantic |  Read more:
Image: Vallo Kruuser

The Botmaker Who Sees Through the Internet

In the recent Spike Jonze film “Her,” a lonely man buys a futuristic Siri-style computer program designed to interact fluently with humans. The program’s female voice speaks intimately to its owner through an earpiece, while also organizing his e-mails and keeping track of his appointments, until eventually he falls in love with it. Watching it happen is unsettling because of how well the program works on our human protagonist—how powerfully this computerized, disembodied simulation of a woman affects him.

The piece of software at the heart of “Her” only exists within a work of science fiction, of course—one set at a comfortably vague point in the future. While today’s smartphones and computers do “talk” to their users, they do so without the emotionally potent and slightly unpredictable qualities that might make a machine feel human.

But in certain quarters, automated beings with that exact set of qualities have already started to emerge. For the past few years, tiny computer programs have been telling jokes, writing poems, complaining, commenting on the news, and awkwardly flirting—on the Web and through the medium of Twitter, where their messages live side by side with those composed by the carbon-based life-forms who take daily delight in their antics.

One of the most prolific makers of these little programs—or bots, as they’re known—is 30-year-old Darius Kazemi, a computer programmer who lives in Somerville and works at a technology company called Bocoup, while devoting himself to the veritable petting zoo of autonomous digital creatures he has invented in his free time.

Chances are you haven’t heard of Kazemi. But over the past two years, he has emerged as one of the most closely watched and pioneering figures at the intersection of technology, cultural commentary, and what feels to many like a new kind of Web-native art. (...)

Kazemi’s dozens of projects have won him admirers among a range of people so wide it suggests the world doesn’t quite have a category for him yet. His work is tracked by video game designers, comedians, philosophers, and fellow bot-makers; an English literature professor named Leonardo Flores has written about the output of his bots in an online journal devoted to electronic poetry. Web designer Andrew Simone, who follows Kazemi’s work, calls him “a deeply subversive, bot-making John Cage.”

Kazemi is part of a small but vibrant group of programmers who, in addition to making clever Web toys, have dedicated themselves to shining a spotlight on the algorithms and data streams that are nowadays humming all around us, and using them to mount a sharp social critique of how people use the Internet—and how the Internet uses them back.

By imitating humans in ways both poignant and disorienting, Kazemi’s bots focus our attention on the power and the limits of automated technology, as well as reminding us of our own tendency to speak and act in ways that are essentially robotic. While they’re more conceptual art than activism, the bots Kazemi is creating are acts of provocation—ones that ask whether, as computers get better at thinking like us and shaping our behavior, they can also be rewired to spring us free.

by Leon Neyfakh, Boston Globe |  Read more:
Image: Lane Turner

Fast Food, Slow Customers

[ed. See also: Old McDonalds]

With its low coffee prices, plentiful tables and available bathrooms, McDonald’s restaurants all over the country, and even all over the world, have been adopted by a cost-conscious set as a coffeehouse for the people, a sort of everyman’s Starbucks.

Behind the Golden Arches, older people seeking company, schoolchildren putting off homework time and homeless people escaping the cold have transformed the banquettes into headquarters for the kind of laid-back socializing once carried out on a park bench or brownstone stoop.

But patrons have also brought the mores of cafe culture, where often a single purchase is permission to camp out with a laptop. Increasingly, they seem to linger over McCafe Lattes, sometimes spending a lot of time but little money in outlets of this chain, which rose to prominence on a very different business model: food that is always fast. And so restaurant managers and franchise owners are often frustrated by these, their most loyal customers. Such regulars hurt business, some say, and leave little room for other customers. Tensions can sometimes erupt.

In the past month, those tensions came to a boil in New York City. When management at a McDonald’s in Flushing, Queens, called the police on a group of older Koreans, prompting outrage at the company’s perceived rudeness, calls for a worldwide boycott and a truce mediated by a local politician, it became a famous case of a struggle that happens daily at McDonald’s outlets in the city and beyond.

Is the customer always right — even the ensconced penny-pincher? The answer seems to be yes among the ones who do the endless sitting. (...)

McDonald’s is not alone in navigating this tricky territory. Last year, a group of deaf patrons sued Starbucks after a store on Astor Place in Lower Manhattan forbade their meet-up group to convene there, complaining they did not buy enough coffee. Spending the day nursing a latte is behavior reinforced by franchises like Starbucks and others that seem to actively cultivate it, offering free Wi-Fi that encourages customers to park themselves and their laptops for hours.

There is a social benefit to such spots, some experts said.

“As long as there have been cities, these are the kind of places people have met in,” said Don Mitchell, a professor of urban geography at Syracuse University and the author of “The Right to the City: Social Justice and the Fight for Public Space.”

“Whether they have been private property, public property or something in between,” he said, “taking up space is a way to claim a right to be, a right to be visible, to say, ‘We’re part of the city too.’ ”

by Sarah Maslin Nir, NY Times |  Read more:
Image: Ruth Fremson/The New York Times

Tuesday, January 28, 2014

Seven Questions for Bob Dylan

Bob Dylan is either the most public private man in the world or the most private public one. He has a reputation for being silent and reclusive; he is neither. He has been giving interviews—albeit contentious ones—for as long as he's been making music, and he's been making music for more than fifty years. He's seventy-two years old. He's written one volume of an autobiography and is under contract to write two more. He's hosted his own radio show. He exhibits his paintings and his sculpture in galleries and museums around the world. Ten years ago, he cowrote and starred in a movie, Masked and Anonymous, that was about his own masked anonymity. He is reportedly working on another studio recording, his thirty-sixth, and year after year and night after night he still gets on stage to sing songs unequaled in both their candor and circumspection. Though famous as a man who won't talk, Dylan is and always has been a man who won't shut up.

And yet he has not given in; he has preserved his mystery as assiduously as he has curated his myth, and even after a lifetime of compulsive disclosure he stands apart not just from his audience but also from those who know and love him. He is his own inner circle, a spotlit Salinger who has remained singular and inviolate while at the same time remaining in plain sight.

It's quite a trick. Dylan's public career began at the dawn of the age of total disclosure and has continued into the dawn of the age of total surveillance; he has ended up protecting his privacy at a time when privacy itself is up for grabs. But his claim to privacy is compelling precisely because it's no less enigmatic and paradoxical than any other claim he's made over the years. Yes, it's important to him—"of the utmost importance, of paramount importance," says his friend Ronee Blakley, the Nashville star who sang with Dylan on his Rolling Thunder tour. And yes, the importance of his privacy is the one lesson he has deigned to teach, to the extent that his friends Robbie Robertson and T Bone Burnett have absorbed it into their own lives. "They both have learned from him," says Jonathan Taplin, who was the Band's road manager and is now a professor at the University of Southern California. "They've learned how to keep private, and they lead very private lives. That's the school of Bob Dylan—the smart guys who work with him learn from him. Robbie's very private. And T Bone is so private, he changes his e-mail address every three or four weeks."

How does Dylan do it? How does he impress upon those around him the need to protect his privacy? He doesn't. They just do. That's what makes his privacy Dylanesque. It's not simply a matter of Dylan being private; it's a matter of Dylan's privacy being private—of his manager saying, when you call, "Oh, you're the guy writing about Bob Dylan's privacy. How can I not help you?" (...)

"I've always been appalled by people who come up to celebrities while they're eating," says Lynn Goldsmith, a photographer who has taken pictures of Dylan, Springsteen, and just about every other god of the rock era. "But with Dylan, it's at an entirely different level. With everybody else, it's 'We love you, we love your work.' With Dylan, it's 'How does it feel to be God?' It's 'I named my firstborn after you.' In some ways, the life he lives is not the life he's chosen. In some ways, the life he leads has been forced upon him because of the way the public looks upon him to be."

That's the narrative, anyway—Dylan as eternal victim, Dylan as the measure of our sins. There is another narrative, however, and it's that Dylan is not just the first and greatest intentional rock 'n' roll poet. He's also the first great rock 'n' roll asshole. The poet expanded the notion of what it was possible for a song to express; the asshole shrunk the notion of what it was possible for the audience to express in response to a song. The poet expanded what it meant to be human; the asshole noted every human failing, keeping a ledger of debts never to be forgotten or forgiven. As surely as he rewrote the songbook, Dylan rewrote the relationship between performer and audience; his signature is what separates him from all his presumed peers in the rock business and all those who have followed his example. "I never was a performer who wanted to be one of them, part of the crowd," he said, and in that sentence surely lies one of his most enduring achievements: the transformation of the crowd into an all-consuming but utterly unknowing them.

“We played with McCartney at Bonnaroo, and the thing about McCartney is that he wants to be loved so much,” Jeff Tweedy says. “He has so much energy, he gives and gives and gives, he plays three hours, and he plays every song you want to hear. Dylan has zero fucks to give about that. And it’s truly inspiring.”

by Tom Junod, Esquire |  Read more:
Image: Fame Pictures; historic Dylan photos: AP

Monday, January 27, 2014


Brenda Cablayan, Houses on the Hill
via:

Obliquity

If you want to go in one direction, the best route may involve going in the other. Paradoxical as it sounds, goals are more likely to be achieved when pursued indirectly. So the most profitable companies are not the most profit-oriented, and the happiest people are not those who make happiness their main aim. The name of this idea? Obliquity.

The American continent separates the Atlantic Ocean in the east from the Pacific Ocean in the west. But the shortest crossing of America follows the route of the Panama Canal, and you arrive at Balboa Port on the Pacific Coast some 30 miles to the east of the Atlantic entrance at Colon.

A map of the isthmus shows how the best route west follows a south-easterly direction. The builders of the Panama Canal had comprehensive maps, and understood the paradoxical character of the best route. But only rarely in life do we have such detailed knowledge. We are lucky even to have a rough outline of the terrain.

Before the canal, anyone looking for the shortest traverse from the Atlantic to the Pacific would naturally have gazed westward. The south-east route was found by Vasco Nunez de Balboa, a Spanish conquistador who was looking for gold, not oceans.

George W. Bush speaks mangled English rather than mangled French because James Wolfe captured Quebec in 1759 and made the British crown the dominant influence in North America. Eschewing obvious lines of attack, Wolfe’s men scaled the precipitous Heights of Abraham and took the city from the unprepared defenders. There are many such episodes in military history. The Germans defeated the Maginot Line by going round it, while Japanese invaders bicycled through the Malayan jungle to capture Singapore, whose guns faced out to sea. Oblique approaches are most effective in difficult terrain, or where outcomes depend on interactions with other people. Obliquity is the idea that goals are often best achieved when pursued indirectly.

Obliquity is characteristic of systems that are complex, imperfectly understood, and change their nature as we engage with them. (...)

The distinction between intent and outcome is central to obliquity. Wealth, family relationships, employment all contribute to happiness but these activities are not best conducted with happiness as their goal. The pursuit of happiness is a strange phrase in the US constitution because happiness is not best achieved when pursued. A satisfying life depends above all on building good personal relationships with other people – but we entirely miss the point if we seek to develop these relationships with our personal happiness as a primary goal.

Humans have well-developed capacities to detect purely instrumental behaviour. The actions of the man who buys us a drink in the hope that we will buy his mutual funds are formally the same as those of the friend who buys us a drink because he likes our company, but it is usually not too difficult to spot the difference. And the difference matters to us. “Honesty is the best policy, but he who is governed by that maxim is not an honest man,” wrote Archbishop Whately in the nineteenth century. If we deal with someone for whom honesty is the best policy, we can never be sure that this is not the occasion on which he will conclude that honesty is no longer the best policy. Such experiences have been frequent in financial markets in the last decade. We do better to rely on people who are honest by character rather than honest by choice.

by John Kay, Financial Times via Naked Capitalism |  Read more:
Image: via:

Happiness and Its Discontents

A quick survey of our culture—particularly our self-help culture—confirms Freud's observation. One could even say that, in our era, the idea that we should lead happy, balanced lives carries the force of an obligation: We are supposed to push aside our anxieties in order to enjoy our lives, attain peace of mind, and maximize our productivity. The cult of "positive thinking" even assures us that we can bring good things into our lives just by thinking about them. (...)

Needless to say, our fixation on the ideal of happiness diverts our attention from collective social ills, such as socioeconomic disparities. As Barbara Ehrenreich has shown, when we believe that our happiness is a matter of thinking the right kinds of (positive) thoughts, we become blind to the ways in which some of our unhappiness might be generated by collective forces, such as racism or sexism. Worst of all, we become callous to the lot of others, assuming that if they aren't doing well, if they aren't perfectly happy, it's not because they're poor, oppressed, or unemployed but because they're not trying hard enough.

If all of that isn't enough to make you suspicious of the cultural injunction to be happy, consider this basic psychoanalytic insight: Human beings may not be designed for happy, balanced lives. The irony of happiness is that it's precisely when we manage to feel happy that we are also most keenly aware that the feeling might not last. Insofar as each passing moment of happiness brings us closer to its imminent collapse, happiness is merely a way of anticipating unhappiness; it's a deviously roundabout means of producing anxiety.

Take the notion that happiness entails a healthy lifestyle. Our society is hugely enthusiastic about the idea that we can keep illness at bay through a meticulous management of our bodies. The avoidance of risk factors such as smoking, drinking, and sexual promiscuity, along with a balanced diet and regular exercise, is supposed to guarantee our longevity. To a degree, that is obviously true. But the insistence on healthy habits is also a way to moralize illness, to cast judgment on those who fail to adhere to the right regimen. Ultimately, as the queer theorist Tim Dean has illustrated, we are dealing with a regulation of pleasure—a process of medicalization that tells us which kinds of pleasures are acceptable and which are not.

I suspect that beneath our society's desperate attempts to minimize risk, and to prescribe happiness as an all-purpose antidote to our woes, there resides a wretched impotence in the face of the intrinsically insecure nature of human existence. As a society, we have arguably lost the capacity to cope with this insecurity; we don't know how to welcome it into the current of our lives. We keep trying to brush it under the rug because we have lost track of the various ways in which our lives are not meant to be completely healthy and well adjusted.

Why, exactly, is a healthy and well-adjusted life superior to one that is filled with ardor and personal vision but that is also, at times, a little unhealthy and maladjusted? Might some of us not prefer lives that are heaving with an intensity of feeling and action but that do not last quite as long as lives that are organized more sensibly? Why should the good life equal a harmonious life? Might not the good life be one that includes just the right amount of anxiety? Indeed, isn't a degree of tension a precondition of our ability to recognize tranquillity when we are lucky enough to encounter it? And why should our lives be cautious rather than a little dangerous? Might not the best lives be ones in which we sometimes allow ourselves to become a little imprudent or even a tad unhinged?

by Mari Ruti, Chronicle of Higher Education |  Read more:
Image: Geoffrey Moss

Sunday, January 26, 2014

Game Change


By the time the contenders for Super Bowl XLVIII were set, two weekends ago, a hero and a villain had been chosen, too. The Denver Broncos’ quarterback, the aging, lovable Peyton Manning, had outplayed the Patriots to win the A.F.C. title. Meanwhile, in the N.F.C. championship game, Richard Sherman, a cornerback for the Seattle Seahawks, became the designated bad guy. With thirty-one seconds left to play, Colin Kaepernick, the San Francisco 49ers quarterback, had the ball on Seattle’s eighteen-yard line—the 49ers were losing by six points and needed a touchdown. He spotted Michael Crabtree, a wide receiver, and sent him the pass. Sherman twisted up in the air until he seemed almost in synch with the ball’s spiralling, then tipped the ball into the hands of another defender for an interception, and won the game.

Sherman was swarmed by his teammates but broke away to chase after Crabtree. He stretched out a hand and said, “Hell of a game, hell of a game,” to which Crabtree responded by shoving him in the face mask. Moments later, Sherman was surrounded by reporters and cameramen; by then, he had acquired an N.F.C. champions’ cap, which left his eyes in shadow, and his long dreadlocks hung loose. When Erin Andrews, of Fox Sports, asked him about the final play, he more or less exploded. “I’m the best corner in the game!” he proclaimed. “When you try me with a sorry receiver like Crabtree, that’s the result you gonna get! Don’t you ever talk about me!”

“Who was talking about you?” Andrews asked.

“Crabtree! Don’t you open your mouth about the best, or I’m gonna shut it for you real quick! L.O.B.!”

L.O.B.: that’s Legion of Boom, the nickname of the Seattle defense. The video of the “epic rant,” as it was called, went viral. Andrews told GQ that the response was so overwhelming that her Twitter account froze. She added, “Then we saw it was taking on a racial turn.” Some people expressed alarm that an angry black man was shouting at a blond-haired woman. (Andrews immediately shut down that line of complaint.) Many people expressed a hope that Manning would put Sherman in his place. The names that he was called were numerous, offensive, and explicitly racial, but one that stood out—it was used more than six hundred times on television, according to Deadspin—was “thug.”

by Amy Davidson, New Yorker |  Read more:
Image: via:

Almost Everything in "Dr. Strangelove" Was True

[ed. One of my all-time, top-ten movie favorites. Maybe top-five.]

This month marks the fiftieth anniversary of Stanley Kubrick’s black comedy about nuclear weapons, “Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb.” Released on January 29, 1964, the film caused a good deal of controversy. Its plot suggested that a mentally deranged American general could order a nuclear attack on the Soviet Union, without consulting the President. One reviewer described the film as “dangerous … an evil thing about an evil thing.” Another compared it to Soviet propaganda. Although “Strangelove” was clearly a farce, with the comedian Peter Sellers playing three roles, it was criticized for being implausible. An expert at the Institute for Strategic Studies called the events in the film “impossible on a dozen counts.” A former Deputy Secretary of Defense dismissed the idea that someone could authorize the use of a nuclear weapon without the President’s approval: “Nothing, in fact, could be further from the truth.” (See a compendium of clips from the film.) When “Fail-Safe”—a Hollywood thriller with a similar plot, directed by Sidney Lumet—opened, later that year, it was criticized in much the same way. “The incidents in ‘Fail-Safe’ are deliberate lies!” General Curtis LeMay, the Air Force chief of staff, said. “Nothing like that could happen.” The first casualty of every war is the truth—and the Cold War was no exception to that dictum. Half a century after Kubrick’s mad general, Jack D. Ripper, launched a nuclear strike on the Soviets to defend the purity of “our precious bodily fluids” from Communist subversion, we now know that American officers did indeed have the ability to start a Third World War on their own. And despite the introduction of rigorous safeguards in the years since then, the risk of an accidental or unauthorized nuclear detonation hasn’t been completely eliminated.

The command and control of nuclear weapons has long been plagued by an “always/never” dilemma. The administrative and technological systems that are necessary to insure that nuclear weapons are always available for use in wartime may be quite different from those necessary to guarantee that such weapons can never be used, without proper authorization, in peacetime. During the nineteen-fifties and sixties, the “always” in American war planning was given far greater precedence than the “never.” Through two terms in office, beginning in 1953, President Dwight D. Eisenhower struggled with this dilemma. He wanted to retain Presidential control of nuclear weapons while defending America and its allies from attack. But, in a crisis, those two goals might prove contradictory, raising all sorts of difficult questions. What if Soviet bombers were en route to the United States but the President somehow couldn’t be reached? What if Soviet tanks were rolling into West Germany but a communications breakdown prevented NATO officers from contacting the White House? What if the President were killed during a surprise attack on Washington, D.C., along with the rest of the nation’s civilian leadership? Who would order a nuclear retaliation then?

With great reluctance, Eisenhower agreed to let American officers use their nuclear weapons, in an emergency, if there were no time or no means to contact the President. Air Force pilots were allowed to fire their nuclear anti-aircraft rockets to shoot down Soviet bombers heading toward the United States. And about half a dozen high-level American commanders were allowed to use far more powerful nuclear weapons, without contacting the White House first, when their forces were under attack and “the urgency of time and circumstances clearly does not permit a specific decision by the President, or other person empowered to act in his stead.” Eisenhower worried that providing that sort of authorization in advance could make it possible for someone to do “something foolish down the chain of command” and start an all-out nuclear war. But the alternative—allowing an attack on the United States to go unanswered or NATO forces to be overrun—seemed a lot worse. Aware that his decision might create public unease about who really controlled America’s nuclear arsenal, Eisenhower insisted that his delegation of Presidential authority be kept secret. At a meeting with the Joint Chiefs of Staff, he confessed to being “very fearful of having written papers on this matter.”

President John F. Kennedy was surprised to learn, just a few weeks after taking office, about this secret delegation of power. “A subordinate commander faced with a substantial military action,” Kennedy was told in a top-secret memo, “could start the thermonuclear holocaust on his own initiative if he could not reach you.” Kennedy and his national-security advisers were shocked not only by the wide latitude given to American officers but also by the loose custody of the roughly three thousand American nuclear weapons stored in Europe. Few of the weapons had locks on them. Anyone who got hold of them could detonate them. And there was little to prevent NATO officers from Turkey, Holland, Italy, Great Britain, and Germany from using them without the approval of the United States.

by Eric Schlosser, New Yorker |  Read more:
Image: Columbia Pictures

Where'd All The Cocaine Go?

Toward the end of last year, the DEA published its 2013 National Drug Threat Assessment Summary, a 28-page report chronicling drug consumption trends across the United States. These include the continued rise in abuse of prescription drugs (second only to marijuana in popularity), the increase in the production of heroin in Mexico and its availability in the U.S., and the emergence of synthetic designer drugs.

Much of the report is unremarkable—until you arrive at the section on cocaine. “According to [National Seizure System] data,” it reads, “approximately 16,908 kilograms of cocaine were seized at the southwest Border in 2011. During 2012, only 7,143 kilograms of cocaine were seized, a decrease of 58 percent.”

That sharp decline echoes an ongoing trend: 40 percent fewer people in the United States used cocaine in 2012 than they did in 2006; only 19 percent of Chicago arrestees had cocaine in their system two years ago compared to 50 percent in 2000; and fewer high school seniors say they’ve used cocaine in the last 12 months than at any time since the mid-70s. In fact, the report indicates cocaine was sporadically unavailable in Chicago, Houston, Baltimore, and St. Louis in the spring of 2012. So where’d the blow go? (...)

To speak at greater length on the subject, I reached out to UCLA professor of public policy Mark Kleiman, the nation’s leading authority on drug policy. Earlier this year he gained chronic celebrity status when Washington tapped him to be the state’s “pot czar.” On a recent Sunday morning, Professor Kleiman and I discussed the disappearance of cocaine and whether it would ever come back.

Why would a drug like cocaine ever disappear?
Drug use tends to follow epidemic cycles. When a drug appears, or reappears, it tends to do so at the top of the social spectrum—it’s associated with glamour. People are using it for the first time, and they’re having a good time. Very few people are in trouble with it because use is relatively new and it has terrific word of mouth. People say, “Oh my God, this is wonderful, you have to try this!” So you literally get an exponential growth in use. Every new user is a potential source of additional new users. You get a very rapid rise in the number of users.

As David Musto pointed out in The American Disease, over time, two things happen. Once everybody susceptible to the suggestion of “Let’s try this” has tried it, there’s a declining pool of new users. And, some of the original users have been at it long enough to develop a bad habit. So now there are fewer users to tell all their friends that "this is wonderful," and more problem users who either tell their friends, or demonstrate by their behavior, that this is not wonderful.

How did this cycle play out with cocaine?
In the case of cocaine, there was a rapid price decrease as drug dealers crowded into the market to take advantage of the bonanza. The price of cocaine dropped by 80 percent, which brought a new user group into the population. The development of the crack market made crack available to anyone with $5 for a rock; in the powder cocaine market, the price of admission was $100 for a gram. So, the social status of the drug fell along with the user group. Now, using cocaine puts you in a class not with hedge-fund managers, but with $5 crack whores. Surprisingly, people prefer to be in the “hedge-fund-manager” category, which doesn’t necessarily reflect sound moral judgment but is a social fact.

All of those things created a peak in use. The number of people starting cocaine use peaked in about 1985, which was before the Len Bias affair. Then we got a set of panic-driven policies aimed at suppressing the crack epidemic. We got mandatory sentencing, aggressive law enforcement, and a ramping-up of the War on Drugs. (...)

Is that lifecycle built into every drug?
Yes, the question is when a drug moves from being purely epidemic to being endemic. And that’s happened with cocaine. Remember, the first cocaine epidemic—the long slow one that starts with Sigmund Freud. That one played itself out by the late 1920s. After that, cocaine use went close to zero. That didn’t happen this time—there is still cocaine initiation going on. I do not see any time soon when cocaine is not part of the American scene. It looks to me as if cocaine, the opiates and cannabis, like alcohol, now have a steady user base—not just an occasional flare up. But a drug that’s as destructive as cocaine is when heavily used—especially as crack—isn’t going to become endemic at a high level. The drug we have that’s endemic at a high level is alcohol, and unfortunately that’s not going away. And it looks to me like cannabis is going to join alcohol.

by Michael Zelenko, Vice | Read more:
Image: US Coast Guard

Saturday, January 25, 2014