Thursday, November 21, 2013

Stuxnet's Secret Twin

Three years after it was discovered, Stuxnet, the first publicly disclosed cyberweapon, continues to baffle military strategists, computer security experts, political decision-makers, and the general public. A comfortable narrative has formed around the weapon: how it attacked the Iranian nuclear facility at Natanz, how it was designed to be undiscoverable, how it escaped from Natanz against its creators' wishes. Major elements of that story are either incorrect or incomplete.

That's because Stuxnet is not really one weapon, but two. The vast majority of the attention has been paid to Stuxnet's smaller and simpler attack routine -- the one that changes the speeds of the rotors in a centrifuge, which is used to enrich uranium. But the second and "forgotten" routine is about an order of magnitude more complex and stealthy. It qualifies as a nightmare for those who understand industrial control system security. And strangely, this more sophisticated attack came first. The simpler, more familiar routine followed only years later -- and was discovered in comparatively short order.

With Iran's nuclear program back at the center of world debate, it's helpful to understand with more clarity the attempts to digitally sabotage that program. Stuxnet's actual impact on the Iranian nuclear program is unclear, if only for the fact that no information is available on how many controllers were actually infected. Nevertheless, forensic analysis can tell us what the attackers intended to achieve, and how. I've spent the last three years conducting that analysis -- not just of the computer code, but of the physical characteristics of the plant environment that was attacked and of the process that this nuclear plant operates. What I've found is that the full picture, which includes the first and lesser-known Stuxnet variant, invites a re-evaluation of the attack. It turns out that it was far more dangerous than the cyberweapon that is now lodged in the public's imagination.

In 2007, an unidentified person submitted a sample of code to the computer security site VirusTotal. It later turned out to be the first variant of Stuxnet -- at least, the first one that we're aware of. But that was only realized five years later, with the knowledge of the second Stuxnet variant. Without that later and much simpler version, the original Stuxnet might still today sleep in the archives of anti-virus researchers, unidentified as one of the most aggressive cyberweapons in history. We now know that the code contained a payload for severely interfering with the system designed to protect the centrifuges at the Natanz uranium-enrichment plant.

Stuxnet's later, and better-known, attack tried to cause centrifuge rotors to spin too fast and at speeds that would cause them to break. The "original" payload used a different tactic. It attempted to overpressurize Natanz's centrifuges by sabotaging the system meant to keep the cascades of centrifuges safe. (...)

Natanz's cascade protection system relies on Siemens S7-417 industrial controllers to operate the valves and pressure sensors of up to six cascades, or groups of 164 centrifuges each. A controller can be thought of as a small embedded computer system that is directly connected to physical equipment, such as valves. Stuxnet was designed to infect these controllers and take complete control of them in a way that previous users had never imagined -- and that had never even been discussed at industrial control system conferences.

A controller infected with the first Stuxnet variant actually becomes decoupled from physical reality. Legitimate control logic only "sees" what Stuxnet wants it to see. Before the attack sequence executes (which is approximately once per month), the malicious code is kind enough to show operators in the control room the physical reality of the plant floor. But that changes during attack execution.

One of the first things this Stuxnet variant does is take steps to hide its tracks, using a trick straight out of Hollywood. Stuxnet records the cascade protection system's sensor values for a period of 21 seconds. Then it replays those 21 seconds in a constant loop during the execution of the attack. In the control room, all appears to be normal, both to human operators and any software-implemented alarm routines.
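To make the record-and-replay trick concrete, here is a minimal, hypothetical Python sketch of the general idea. It is not Stuxnet's actual code, which ran as logic on the Siemens controllers themselves; the sensor function, sample rate, and values are invented purely for illustration.

```python
import itertools
import time

RECORD_SECONDS = 21   # length of the "normal" window the article describes
SAMPLE_HZ = 1         # assumed sample rate; purely illustrative

def read_pressure_sensor():
    """Stand-in for a real cascade pressure reading (dummy value here)."""
    return 42.0

def record_window():
    """Capture RECORD_SECONDS worth of normal-looking sensor readings."""
    samples = []
    for _ in range(RECORD_SECONDS * SAMPLE_HZ):
        samples.append(read_pressure_sensor())
        time.sleep(1.0 / SAMPLE_HZ)
    return samples

def replayed_feed(samples):
    """Loop the recorded window forever; monitoring code sees a frozen 'normal'."""
    return itertools.cycle(samples)

if __name__ == "__main__":
    window = record_window()       # record 21 seconds of real readings
    fake = replayed_feed(window)   # then replay them in a constant loop
    # During the attack, legitimate logic and alarm routines would be fed
    # next(fake) instead of read_pressure_sensor(), hiding the plant's true state.
    for _ in range(5):
        print(next(fake))
```

In the attack the article describes, the equivalent interposition happens inside the controller itself, between the sensors and the legitimate control logic, which is why both human operators and software-implemented alarms see only the stale, recorded data.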

by Ralph Langner, FP |  Read more:
Image: uncredited

A Case for Life Panels

[ed. See also: How Doctors Die: Showing Others the Way]

At the beginning of 2012, my mother was 95 years old. She lived in an assisted-living center, with hospice care, in a hospital bed 24/7. She was hollow-eyed and emaciated. Though she had moments of clarity, she was confused, anxious and uncomfortable. Her quality of life was minimal, at best. And the cost to keep her in this condition had risen to close to $100,000 a year.

Three years earlier, when she was completely rational, my mother told me that while she had lived a full and rewarding life, she was ready to go. By 2012, when her life was more punishment than reward, she did not have the mental faculties to reaffirm her desire, nor was there a legal way to carry out her decision. Even if my mother had been living in one of the states like Oregon, Washington or Vermont that have “death with dignity” statutes on their books, the fact that she lacked mental competency to request an assisted death by 2012 almost certainly would have ruled out any possibility that the state would have granted her wish.

Nor would it have been an option to move her to one of the few countries that have removed the legal perils of a decision to end one’s life. It was hard enough to get my mother from her bed to her chair. How would I have transported her to the Netherlands?

No, there is only one solution to this type of situation, for anyone who may encounter it in the future. What is needed here, I suggest, is not a death panel. It’s a “life panel” with the legal authority to ensure that my mother’s request to end her own life, on her own terms, would be honored. (...)

Some of my clients are extremely realistic about the crushing expenses they could face in their final years. Others are more sanguine. When I tell them that their money is unlikely to last through their 90s, they say: “Well, that’s O.K. I don’t plan to live past 85, anyway.” I have a standard answer in these cases. I say: “Yes, you expect to die at 85, but what if you’re unlucky? What if you live to 95?” At that point, I tell them about my mother. Then we get down to work.

Occasionally, people tell me that their end dates are guaranteed. They are saving pills that will put them out of their misery, or they have made “arrangements” with friends. For all their planning, my clients do not realize that when the time comes, they may be too sick or demented to carry out their do-it-yourself strategies. And so we come back to the life panel. Who is on it? Certainly, a doctor would be involved. After all, we laymen might feel guilty about making decisions that would hasten the end of a life, but under current law in most states, doctors would be guilty — of murder. On a life panel, a doctor would be held blameless. And I would have no problem adding a medical ethicist and a therapist.

Most important, I think the individual should be allowed to nominate panelists who are likely to understand the person’s wishes: family members, close friends, a person with whom they share religious beliefs.

This may seem like a reach, but in fact we already come quite close to this now. As any financial planner will tell you, everyone needs a living will. This is a legal document that instructs a surrogate or a medical center on the level of life-prolonging or palliative care you want if you become unable to make medical decisions.

But legal documents go only so far. Doctors I have asked about this issue know firsthand the uncertainties of deciding when a person has lost medical decision-making capacity. Nor is it possible to write out instructions for every possible medical eventuality.

A life panel might not be the perfect solution, but neither is draining a family’s resources to support a joyless existence in a hospital bed.

by Bob Goldman, NY Times |  Read more:
Image: Federica Bordoni

Wednesday, November 20, 2013

All The Selves We Have Been

It is when we are young that we are most obviously busy with the project of trying to construct a self we hope the world will appreciate, monitoring and rearranging the impressions we make upon others. Yet as we age, most of us are still trying to hold on to some sense of who and what we are, however hard this may become for those who start to feel increasingly invisible. Everywhere I look nowadays I see older people busily engaged with the world and eager, just as I am, to relate to others, while also struggling to shore up favored ways of seeing ourselves. However, the world in general is rarely sympathetic to these attempts, as though the time had come, or were long overdue, for the elderly to withdraw altogether from worrying about how they appear to others. In my view, such a time never comes, which means finding much better ways of affirming old age than those currently available. (...)

Aging encompasses so much, and yet most people’s thoughts about it embrace so little. Against the dominant fixation, for instance, I write not primarily about aging bodies, with their rising demands, frequent embarrassments, and endless diversities—except that of course our bodies are there, in every move we make, or sometimes fail to complete. I have little to say, either, about the corrosions of dementia. It is telling nowadays how often those who address the topic of aging alight on dementia—often, paradoxically, in criticism of others who simply equate aging with decline, while doing just this themselves. For the faint-hearted, I need to point out that although the incidence of dementia will indeed accelerate in the age group now headed towards their nineties, even amongst the very oldest it will not predominate—though this information hardly eliminates our fear of such indisputable decline.

Conversely, I do not make, or not in quite the usual way, an exploration of those many narratives of resilience, which suggest that with care of the self, diligent monitoring, and attention to spiritual concerns we can postpone aging itself, at least until those final moments of very old age. On this view, we can stay healthy, fit and “young”—or youngish—performing our yoga, practicing Pilates, eating our greens, avoiding hazards and spurning envy and resentment. It is true, we may indeed remain healthy, but we will not stay young. “You are only as old as you feel,” though routinely offered as a jolly form of reassurance, carries its own disavowal of old age.

Aging faces, aging bodies, as we should know, are endlessly diverse. Many of them are beautifully expressive, once we choose to look—those eyes rarely lose their luster, when engrossed. However, I am primarily concerned with the possibilities for and impediments to staying alive to life itself, whatever our age. This takes me first of all to the temporal paradoxes of aging, and to enduring ways of remaining open and attached to the world.

As we age, changing year on year, we also retain, in one manifestation or another, traces of all the selves we have been, creating a type of temporal vertigo and rendering us psychically, in one sense, all ages and no age. “All ages and no age” is an expression once used by the psychoanalyst Donald Winnicott to describe the wayward temporality of psychic life, writing of his sense of the multiple ages he could detect in those patients once arriving to lie on the couch at his clinic in Hampstead in London. Thus the older we are the more we encounter the world through complex layerings of identity, attempting to negotiate the shifting present while grappling with the disconcerting images of the old thrust so intrusively upon us. “Live in the layers, / not on the litter,” the North American poet, Stanley Kunitz, wrote in one of his beautiful poems penned in his seventies. (...)

“I don’t feel old,” elderly informants repeatedly told the oral historian Paul Thompson. Their voices echo the words he’d read in his forays into published autobiography and archived interviews. Similarly, in the oral histories collected by the writer Ronald Blythe, an eighty-four-year-old ex-schoolmaster reflects: “I tend to look upon other old men as old men—and not include myself… My boyhood stays imperishable and is such a great part of me now. I feel it very strongly—more than ever before.”

“How can a 17-year-old, like me, suddenly be 81?” the exactingly scientific developmental biologist Lewis Wolpert asks in the opening sentences of his book on the surprising nature of old age, wryly entitled You’re Looking Very Well. Once again, this keen attachment to youth tells us a great deal about the stigma attending old age: “you’re looking old” would never be said, except to insult. On the one hand there can be a sense of continuous fluidity, as we travel through time; on the other, it is hard to ignore those distinct positions we find ourselves in as we age, whatever the temptation. I have been finding, however, that it becomes easier to face up to my own anxieties about aging after surveying the radical ambiguities in the speech or writing of others thinking about the topic, especially when they do so neither to lament nor to celebrate old age, but simply to affirm it as a significant part of life. This is the trigger for the words that follow, as I assemble different witnesses to help guide me through the thoughts that once kept me awake at night, pondering all the things that have mattered to me and wondering what difference aging makes to my continuing ties to them.

by Lynne Segal, Guernica |  Read more:
Image: from Flickr via Abode of Chaos

Paul Gauguin, The Meal. Musée d'Orsay, Paris

Most Lives Are Lived by Default

Jamie lives in a large city in the midwest. He’s a copywriter for an advertising firm, and he’s good at it.

He’s also good at thinking of reasons why he ought to be happy with his life. He has health insurance, and now savings. A lot of his friends have neither. His girlfriend is pretty. They never fight. His boss has a sense of humor, doesn’t micromanage, and lets him go early most Fridays.

On most of those Fridays, including this one, instead of taking the train back to his suburban side-by-side, he walks to a downtown pub to meet his friends. He will have four beers. His friends always stay longer.

Jamie’s girlfriend Linda typically arrives on his third beer. She greets them all with polite hugs, Jamie with a kiss. He orders his final beer when she orders her only one. They take a taxi home, make dinner together, and watch a movie on Netflix. When it’s over they start a second one and don’t finish it. They have sex, then she goes to wash her face and brush her teeth. When she returns, he goes.

There was never a day Jamie sat down and decided to be a copywriter living in the midwest. A pair of lawyers at his ex-girlfriend’s firm took him out one night when he was freshly laid-off from writing for a tech magazine, bought him a hundred dollars worth of drinks and gave him the business card of his current boss. It was a great night. That was nine years ago.

His friends are from his old job. White collar, artsy and smart. If one of the five of them is missing at the pub on Friday, they’ll have lunch during the week.

Jamie isn’t unhappy. He’s bored, but doesn’t quite realize it. As he gets older his boredom is turning to fear. He has no health problems but he thinks about them all the time. Cancer. Arthritis. Alzheimer’s. He’s thirty-eight, fit, has no plans for children, and when he really thinks about the course of his life he doesn’t quite know what to do with himself, except on Fridays.

In two months he and Linda are going to Cuba for ten days. He’s looking forward to that right now.

***

A few weeks ago I asked everyone reading to share their biggest problem in life in the comment section. I’ve done this before — ask about what’s going on with you — and every time I do I notice two things.

The first thing is that everyone has considerable problems. Not simply occasional tough spots, but the type of issue that persists for years or decades. The kind that becomes a theme in life, that feels like part of your identity. By the sounds of it, it’s typical among human beings to feel like something huge is missing.

The other thing is that they tend to be one of the same few problems: lack of human connection, lack of personal freedom (due to money or family situations), lack of confidence or self-esteem, or lack of self-control.

The day-to-day feel and quality of each of our lives sits on a few major structures: where we live, what we do for a living, what we do with ourselves when we’re not at work, and which people we spend most of our time with.

by David Cain, Raptitude |  Read more:
Image: uncredited

Craft Transit

At the 2013 Walking Summit early this month in Washington, DC, I spent a lot of time looking at other people’s shoes.

My interest in footwear-as-fashion borders on nil, but I was curious about locomotion. I saw a lot of sensible, flat-heeled shoes on women, and some efficient Tevas and Hi-Tecs on men. But also quite a few painful and pointy dress shoes on both sexes, all inappropriate for walking more than to the nearest Starbucks. I tried not to judge, but, well, what can I say?

I spent two days at the summit listening, learning, and chatting with advocates for walking. It brought together a diverse crowd of nearly 400 people: urban planners, doctors, transit advocates, public health professionals, recreational trail directors, and people who blog and write about getting around. They talked about how much we walk, why we don’t do more of it, where we walk, how to get people walking more.

As at conferences everywhere, these discussions were decked out with splashy statistics. Many came from a newly released survey about American attitudes toward walking, which had been commissioned by health care provider Kaiser Permanente (the muscle behind the summit). Seventy-nine percent of Americans, for instance, agree that they “should probably walk more.” And 66 percent believe that distracted drivers are a problem in their neighborhoods.

But one statistic really caught my attention: 72 percent of respondents think walking “is cool.”

Seriously? I suspect a finger on the scale. Because walking has long been the antithesis of cool. Walking is what the elderly do in malls. Walking is what the poor do because they can’t afford righteous wheels, or even bus fare. Walking is what a baseball player does, with a limp, when he’s hit by a ball — it’s the opposite of a home run. And race walkers? They may have set back walking by several generations with their alarmingly wobbly, hip-gimballing walk. The Facebook page “Walking is Cool?” It has a total of seven “likes.”

Walking as a cool activity is hobbled by a number of obstructions. For instance, those who crusade for walking often scare the common people with exclamation points. “Fun you say? Yes, fun!” enthuses a web site advocating walking, posted under a heading reading “Why Not Walk?!” Many walking advocates appear to use keyboards lacking the basic period. You could lose an eye on all their punctuation. True believers scare people.

This is compounded by a persistent belief — at least among many I’ve spoken with — that walking is quite possibly the most boring activity anyone can engage in. Washing dishes by hand is preferable. It’s no coincidence that a synonym for “boring” is “pedestrian.” One young woman — who has evidently been so traumatized by exclamation points that she can no longer employ any punctuation whatsoever — recently groused on an online forum: “I try and try but I can't stand it its too boring I tried listening to songs on my iPod and even walking with a friend but its no use I just don't like walking… but the thing is I want to walk but can’t.”

In my experience, many others share her view that walking may be good, but leads to a slow death by boredom. The only cure? Take two automobiles and call me in the morning.

Running isn’t saddled with this baggage. This is in part because when you run briskly down a city street, all rustly in your nylon, it conveys that you’re a can-do person with a busy life, although not too busy to take care of The Big Dog. In contrast, when someone walks past, they’re invisible, or if they’re walking a bit faster than normal, one may note them only to assume they’ve missed their bus. Also, running has cool accessories that convey social status and tech savviness. Last summer, for instance, Adidas introduced Springblade, “the first running shoe with individually tuned blades engineered to help propel runners forward with one of the most effective energy returns in the industry.” I assume they couldn’t call it “Bladerunner” because of trademark issues, which is too bad. I don’t even run and I want a pair.

Same with biking — cool and expensive equipment is abundant, including jerseys in colors garish enough to be seen from the orbiting space station. Of course, the dork-helmet remains one of our generation’s unresolved problems, but great minds are at work on this.

How to overcome walking’s dull reputation?

by Wayne Curtis, The Smart Set |  Read more:
Image: Wayne Curtis

Tuesday, November 19, 2013

Joe Walsh


Robert Carrithers, Wedding Reception
via:

U.S. helicopters land in Haiti.
via:

[ed. Sistine Living Room]
via:

The 40-Year Slump


[ed. See also: Paul Krugman's A Permanent Slump.]

The steady stream of Watergate revelations, President Richard Nixon’s twists and turns to fend off disclosures, the impeachment hearings, and finally an unprecedented resignation—all these riveted the nation’s attention in 1974. Hardly anyone paid attention to a story that seemed no more than a statistical oddity: That year, for the first time since the end of World War II, Americans’ wages declined.

Since 1947, Americans at all points on the economic spectrum had become a little better off with each passing year. The economy’s rising tide, as President John F. Kennedy had famously said, was lifting all boats. Productivity had risen by 97 percent in the preceding quarter-century, and median wages had risen by 95 percent. As economist John Kenneth Galbraith noted in The Affluent Society, this newly middle-class nation had become more egalitarian. The poorest fifth had seen their incomes increase by 42 percent since the end of the war, while the wealthiest fifth had seen their incomes rise by just 8 percent. Economists have dubbed the period the “Great Compression.”

This egalitarianism, of course, was severely circumscribed. African Americans had only recently won civil equality, and economic equality remained a distant dream. Women entered the workforce in record numbers during the early 1970s to find a profoundly discriminatory labor market. A new generation of workers rebelled at the regimentation of factory life, staging strikes across the Midwest to slow down and humanize the assembly line. But no one could deny that Americans in 1974 lived lives of greater comfort and security than they had a quarter-century earlier. During that time, median family income more than doubled.

Then, it all stopped. In 1974, wages fell by 2.1 percent and median household income shrank by $1,500. To be sure, it was a year of mild recession, but the nation had experienced five previous downturns during its 25-year run of prosperity without seeing wages come down.

What no one grasped at the time was that this wasn’t a one-year anomaly, that 1974 would mark a fundamental breakpoint in American economic history. In the years since, the tide has continued to rise, but a growing number of boats have been chained to the bottom. Productivity has increased by 80 percent, but median compensation (that’s wages plus benefits) has risen by just 11 percent during that time. The middle-income jobs of the nation’s postwar boom years have disproportionately vanished. Low-wage jobs have disproportionately burgeoned. Employment has become less secure. Benefits have been cut. The dictionary definition of “layoff” has changed, from denoting a temporary severance from one’s job to denoting a permanent severance.

As their incomes flat-lined, Americans struggled to maintain their standard of living. In most families, both adults entered the workforce. They worked longer hours. When paychecks stopped increasing, they tried to keep up by incurring an enormous amount of debt. The combination of skyrocketing debt and stagnating income proved predictably calamitous (though few predicted it). Since the crash of 2008, that debt has been called in.

All the factors that had slowly been eroding Americans’ economic lives over the preceding three decades—globalization, deunionization, financialization, Wal-Martization, robotization, the whole megillah of nefarious –izations—have now descended en masse on the American people. Since 2000, even as the economy has grown by 18 percent, the median income of households headed by people under 65 has declined by 12.4 percent. Since 2001, employment in low-wage occupations has increased by 8.7 percent while employment in middle-wage occupations has decreased by 7.3 percent. Since 2003, the median wage has not grown at all.

The middle has fallen out of the American economy—precipitously since 2008, but it’s been falling out slowly and cumulatively for the past 40 years. Far from a statistical oddity, 1974 marked an epochal turn. The age of economic security ended. The age of anxiety began.

by Harold Meyerson, American Prospect |  Read more:
Image: Jason Schneider

The Wow! Signal


[ed. I don't think I'd use celebrity videos and Twitter feeds if I were searching for intelligent life.]

The Wow! signal was a strong narrowband radio signal detected by Jerry R. Ehman on August 15, 1977, while he was working on a SETI project at the Big Ear radio telescope of The Ohio State University, then located at Ohio Wesleyan University's Perkins Observatory in Delaware, Ohio. The signal bore the expected hallmarks of non-terrestrial and non-Solar System origin. It lasted for the full 72-second window that Big Ear was able to observe it, but has not been detected again. The signal has been the subject of significant media attention.

Amazed at how closely the signal matched the expected signature of an interstellar signal in the antenna used, Ehman circled the signal on the computer printout and wrote the comment "Wow!" on its side. This comment became the name of the signal.

In 2012, on the 35th anniversary of the Wow! signal, Arecibo Observatory beamed a response from humanity, containing 10,000 Twitter messages, in the direction from which the signal originated. In the response, Arecibo scientists have attempted to increase the chances of intelligent life receiving and decoding the celebrity videos and crowd-sourced Tweets by attaching a repeating sequence header to each message that will let the recipient know that the messages are intentional and from another intelligent life form.

by Wikipedia |  Read more:
Image: J. Ehman

Bringing God Into It

The political left struggles, Rabbi Michael Lerner believes, because it has abandoned the spiritual values that undergird it—kindness, compassion, and generosity spur the left’s concerns for social justice and a benevolent approach to public policy, yet these things can’t be weighed by science or valued through the stock exchange. The left, the editor of Tikkun magazine argues, has ceased to talk about the motivators that lend meaning to people’s lives. Lerner is one of the nation’s most influential progressive intellectuals and political leaders.

“The left’s hostility to religion is one of the main reasons people who otherwise might be involved with progressive politics get turned off,” he said. “So it becomes important to ask why.

“One reason is that conservatives have historically used religion to justify oppressive social systems and political regimes. Another reason is that many of the most rigidly anti-religious folk on the left are themselves refugees from repressive religious communities. Rightly rejecting the sexism, homophobia and authoritarianism they experienced in their own religious community, they unfairly generalize that to include all religious communities, unaware of the many religious communities that have played leadership roles in combating these and other forms of social injustice. Yet a third possible reason is that some on the left have never seen a religious community that embodies progressive values. But the left enjoyed some of its greatest success in the 1960s, when it was led by a black religious community and by a religious leader, Martin Luther King Jr.”

Indeed, Lerner points out, the great changes in American society—the end of slavery, the increase of rights for women and minorities—all have their progressive origins in the religious community. It’s time to reclaim that legacy, he said, and create a new community.

“It’s not true that the left is without belief,” he said. “The left is captivated by a belief I’ve called scientism.

“Science is not the same as scientism—the belief that the only things that are real or can be known are those that can be empirically observed and measured. As a religious person, I don’t rely on science to tell me what is right and wrong or what love means or why my life is important. I understand that such questions cannot be answered through empirical observations. Claims about God, ethics, beauty and any other face of human experience that is not subject to empirical verification—all these spiritual dimensions of life—are dismissed by the scientistic worldview as inherently unknowable and hence meaningless.

“Scientism extends far beyond an understanding and appreciation of the role of science in society,” Lerner said. “It has become the religion of the secular consciousness. Why do I say it’s a religion? Because it is a belief system that has no more scientific foundation than any other belief system. The view that that which is real and knowable is that which can be empirically verified or measured is a view that itself cannot be empirically measured or verified and thus by its own criterion is unreal or unknowable. It is a religious belief system with powerful adherents. Spiritual progressives therefore insist on the importance of distinguishing between our strong support for science and our opposition to scientism.”

Liberalism, he argues, emerged as part of the broad movement against the feudal order, which taught that God had appointed people to their place in the hierarchical economic and political order for the good of the greater whole. Our current economic system, capitalism, was created by challenging the church’s role in organizing social life, and empirical observation and rational thought became the battering ram the merchant class used to weaken the church’s authority.

“The idea that people are only motivated by material self-interest became the basis for a significant part of what we now call the political left, or labor movement, and the Democratic Party,” Lerner said. “We reduce it to, ‘It’s the economy, stupid.’ But in the research I did with thousands of middle-income working-class people, I found that there was a pervasive desire for meaning and a purpose-driven life, and for recognition by others in a nonutilitarian way, and that the absence of this kind of recognition and deprivation of meaning caused a huge amount of suffering and could best be described as a deep spiritual hunger that had little to do with how much money people were making.

by Tim Johnson, Cascadia Weekly |  Read more:
Image: uncredited

Monday, November 18, 2013

Auto Correct

Human beings make terrible drivers. They talk on the phone and run red lights, signal to the left and turn to the right. They drink too much beer and plow into trees or veer into traffic as they swat at their kids. They have blind spots, leg cramps, seizures, and heart attacks. They rubberneck, hotdog, and take pity on turtles, cause fender benders, pileups, and head-on collisions. They nod off at the wheel, wrestle with maps, fiddle with knobs, have marital spats, take the curve too late, take the curve too hard, spill coffee in their laps, and flip over their cars. Of the ten million accidents that Americans are in every year, nine and a half million are their own damn fault.

A case in point: The driver in the lane to my right. He’s twisted halfway around in his seat, taking a picture of the Lexus that I’m riding in with an engineer named Anthony Levandowski. Both cars are heading south on Highway 880 in Oakland, going more than seventy miles an hour, yet the man takes his time. He holds his phone up to the window with both hands until the car is framed just so. Then he snaps the picture, checks it onscreen, and taps out a lengthy text message with his thumbs. By the time he puts his hands back on the wheel and glances up at the road, half a minute has passed.

Levandowski shakes his head. He’s used to this sort of thing. His Lexus is what you might call a custom model. It’s surmounted by a spinning laser turret and knobbed with cameras, radar, antennas, and G.P.S. It looks a little like an ice-cream truck, lightly weaponized for inner-city work. Levandowski used to tell people that the car was designed to chase tornadoes or to track mosquitoes, or that he belonged to an élite team of ghost hunters. But nowadays the vehicle is clearly marked: “Self-Driving Car.”

Every week for the past year and a half, Levandowski has taken the Lexus on the same slightly surreal commute. He leaves his house in Berkeley at around eight o’clock, waves goodbye to his fiancée and their son, and drives to his office in Mountain View, forty-three miles away. The ride takes him over surface streets and freeways, old salt flats and pine-green foothills, across the gusty blue of San Francisco Bay, and down into the heart of Silicon Valley. In rush-hour traffic, it can take two hours, but Levandowski doesn’t mind. He thinks of it as research. While other drivers are gawking at him, he is observing them: recording their maneuvers in his car’s sensor logs, analyzing traffic flow, and flagging any problems for future review. The only tiresome part is when there’s roadwork or an accident ahead and the Lexus insists that he take the wheel. A chime sounds, pleasant yet insistent, then a warning appears on his dashboard screen: “In one mile, prepare to resume manual control.” (...)

Not everyone finds this prospect appealing. As a commercial for the Dodge Charger put it two years ago, “Hands-free driving, cars that park themselves, an unmanned car driven by a search-engine company? We’ve seen that movie. It ends with robots harvesting our bodies for energy.” Levandowski understands the sentiment. He just has more faith in robots than most of us do. “People think that we’re going to pry the steering wheel from their cold, dead hands,” he told me, but they have it exactly wrong. Someday soon, he believes, a self-driving car will save your life. (...)

The driverless-car project occupies a lofty, garagelike space in suburban Mountain View. It’s part of a sprawling campus built by Silicon Graphics in the early nineties and repurposed by Google, the conquering army, a decade later. Like a lot of high-tech offices, it’s a mixture of the whimsical and the workaholic—candy-colored sheet metal over a sprung-steel chassis. There’s a Foosball table in the lobby, exercise balls in the sitting room, and a row of what look like clown bicycles parked out front, free for the taking. When you walk in, the first things you notice are the wacky tchotchkes on the desks: Smurfs, “Star Wars” toys, Rube Goldberg devices. The next things you notice are the desks: row after row after row, each with someone staring hard at a screen.

It had taken me two years to gain access to this place, and then only with a staff member shadowing my every step. Google guards its secrets more jealously than most. At the gourmet cafeterias that dot the campus, signs warn against “tailgaters”—corporate spies who might slink in behind an employee before the door swings shut. Once inside, though, the atmosphere shifts from vigilance to an almost missionary zeal. “We want to fundamentally change the world with this,” Sergey Brin, the co-founder of Google, told me.

Brin was dressed in a charcoal hoodie, baggy pants, and sneakers. His scruffy beard and flat, piercing gaze gave him a Rasputinish quality, dulled somewhat by his Google Glass eyewear. At one point, he asked if I’d like to try the glasses on. When I’d positioned the miniature projector in front of my right eye, a single line of text floated poignantly into view: “3:51 p.m. It’s okay.”

“As you look outside, and walk through parking lots and past multilane roads, the transportation infrastructure dominates,” Brin said. “It’s a huge tax on the land.” Most cars are used only for an hour or two a day, he said. The rest of the time, they’re parked on the street or in driveways and garages. But if cars could drive themselves, there would be no need for most people to own them. A fleet of vehicles could operate as a personalized public-transportation system, picking people up and dropping them off independently, waiting at parking lots between calls. They’d be cheaper and more efficient than taxis—by some calculations, they’d use half the fuel and a fifth the road space of ordinary cars—and far more flexible than buses or subways. Streets would clear, highways shrink, parking lots turn to parkland. “We’re not trying to fit into an existing business model,” Brin said. “We are just on such a different planet.”

by Burkhard Bilger, New Yorker |  Read more:
Image: Harry Campbell

[ed. It's a little known fact (... or maybe not) that brown bears often step in the same footprints of other bears. I've heard of a place in S.E. Alaska where their prints are worn into solid rock. I haven't seen the site, but have observed the phenomenon on a lot of other trails. I'll post some pics one of these days.]

via:

The Insanity of Our Food Policy

American food policy has long been rife with head-scratching illogic. We spend billions every year on farm subsidies, many of which help wealthy commercial operations to plant more crops than we need. The glut depresses world crop prices, harming farmers in developing countries. Meanwhile, millions of Americans live tenuously close to hunger, which is barely kept at bay by a food stamp program that gives most beneficiaries just a little more than $4 a day. (...)

The House has proposed cutting food stamp benefits by $40 billion over 10 years — that’s on top of $5 billion in cuts that already came into effect this month with the expiration of increases to the food stamp program that were included in the 2009 stimulus law. Meanwhile, House Republicans appear satisfied to allow farm subsidies, which totaled some $14.9 billion last year, to continue apace. Republican proposals would shift government assistance from direct payments — paid at a set rate to farmers every year to encourage them to keep growing particular crops, regardless of market fluctuations — to crop insurance premium subsidies. But this is unlikely to be any cheaper. Worse, unlike direct payments, the insurance premium subsidies carry no income limit for the farmers who would receive this form of largess. (...)

Farm subsidies were much more sensible when they began eight decades ago, in 1933, at a time when more than 40 percent of Americans lived in rural areas. Farm incomes had fallen by about a half in the first three years of the Great Depression. In that context, the subsidies were an anti-poverty program.

Now, though, the farm subsidies serve a quite different purpose. From 1995 to 2012, 1 percent of farms received about $1.5 million each, which is more than a quarter of all subsidies, according to the Environmental Working Group. Some three-quarters of the subsidies went to just 10 percent of farms. These farms received an average of more than $30,000 a year — about 20 times the amount received by the average individual beneficiary last year from the federal Supplemental Nutrition Assistance Program, or SNAP, commonly called food stamps.

Today, food stamps are one of the main support beams in our anti-poverty efforts. More than 80 percent of the 45 million or so Americans who participated in SNAP in 2011, the last year for which there is comprehensive data from the United States Department of Agriculture, had gross household incomes below the poverty level. (Since then, the total number of participants has expanded to nearly 48 million.) Even with that support, many of them experienced food insecurity; that is, they had trouble putting food on the table at some point during the year. (...)

This is not how America is supposed to work. In his famous 1941 “four freedoms” speech, Franklin D. Roosevelt enunciated the principle that all Americans should have certain basic economic rights, including “freedom from want.” These ideas were later embraced by the international community in the Universal Declaration of Human Rights, which also enshrined the right to adequate food. But while the United States was instrumental in advocating for these basic economic human rights on the international scene — and getting them adopted — America’s performance back home has been disappointing.

It is, of course, no surprise that with the high level of poverty millions of Americans have had to turn to the government to meet the basic necessities of life. And those numbers increased drastically with the onset of the Great Recession. The number of Americans on food stamps went up by more than 80 percent between 2007 and 2013.

To say that most of these Americans are technically poor only begins to get at the depth of their need. In 2012, for example, two in five SNAP recipients had gross incomes that were less than half of the poverty line. The amount they get from the program is very small — $4.39 a day per recipient. This is hardly enough to survive on, but it makes an enormous difference in the lives of those who get it: The Center on Budget and Policy Priorities estimates that SNAP lifted four million Americans out of poverty in 2010.

by Joseph E. Stiglitz, NY Times |  Read more:
Image: Javier Jaén