Friday, June 1, 2012

Morals and the Machine

[ed. The emerging field of machine ethics. Can it keep pace with the development of robotic technology?]

In the classic science-fiction film “2001”, the ship’s computer, HAL, faces a dilemma. His instructions require him both to fulfil the ship’s mission (investigating an artefact near Jupiter) and to keep the mission’s true purpose secret from the ship’s crew. To resolve the contradiction, he tries to kill the crew.

As robots become more autonomous, the notion of computer-controlled machines facing ethical decisions is moving out of the realm of science fiction and into the real world. Society needs to find ways to ensure that they are better equipped to make moral judgments than HAL was.

A bestiary of robots

Military technology, unsurprisingly, is at the forefront of the march towards self-determining machines (see Technology Quarterly). Its evolution is producing an extraordinary variety of species. The Sand Flea can leap through a window or onto a roof, filming all the while. It then rolls along on wheels until it needs to jump again. RiSE, a six-legged robo-cockroach, can climb walls. LS3, a dog-like robot, trots behind a human over rough terrain, carrying up to 180kg of supplies. SUGV, a briefcase-sized robot, can identify a man in a crowd and follow him. There is a flying surveillance drone the weight of a wedding ring, and one that carries 2.7 tonnes of bombs.

Robots are spreading in the civilian world, too, from the flight deck to the operating theatre (see article). Passenger aircraft have long been able to land themselves. Driverless trains are commonplace. Volvo’s new V40 hatchback essentially drives itself in heavy traffic. It can brake when it senses an imminent collision, as can Ford’s B-Max minivan. Fully self-driving vehicles are being tested around the world. Google’s driverless cars have clocked up more than 250,000 miles in America, and Nevada has become the first state to regulate such trials on public roads. In Barcelona a few days ago, Volvo demonstrated a platoon of autonomous cars on a motorway.

As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming—or at least appearing to assume—moral agency. Weapons systems currently have human operators “in the loop”, but as they grow more sophisticated, it will be possible to shift to “on the loop” operation, with machines carrying out orders autonomously.

As that happens, they will be presented with ethical dilemmas. Should a drone fire on a house where a target is known to be hiding, which may also be sheltering civilians? Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Should a robot involved in disaster recovery tell people the truth about what is happening if that risks causing a panic? Such questions have led to the emergence of the field of “machine ethics”, which aims to give machines the ability to make such choices appropriately—in other words, to tell right from wrong.

by The Economist |  Read more:
Illustration by Derek Bacon

Karin Ceelen, “Seeds”
via:

The 1 Percent’s Problem

Let’s start by laying down the baseline premise: inequality in America has been widening for decades. We’re all aware of the fact. Yes, there are some on the right who deny this reality, but serious analysts across the political spectrum take it for granted. I won’t run through all the evidence here, except to say that the gap between the 1 percent and the 99 percent is vast when looked at in terms of annual income, and even vaster when looked at in terms of wealth—that is, in terms of accumulated capital and other assets. Consider the Walton family: the six heirs to the Walmart empire possess a combined wealth of some $90 billion, which is equivalent to the wealth of the entire bottom 30 percent of U.S. society. (Many at the bottom have zero or negative net worth, especially after the housing debacle.) Warren Buffett put the matter correctly when he said, “There’s been class warfare going on for the last 20 years and my class has won.”

So, no: there’s little debate over the basic fact of widening inequality. The debate is over its meaning. From the right, you sometimes hear the argument made that inequality is basically a good thing: as the rich increasingly benefit, so does everyone else. This argument is false: while the rich have been growing richer, most Americans (and not just those at the bottom) have been unable to maintain their standard of living, let alone to keep pace. A typical full-time male worker receives the same income today as he did a third of a century ago.

From the left, meanwhile, the widening inequality often elicits an appeal for simple justice: why should so few have so much when so many have so little? It’s not hard to see why, in a market-driven age where justice itself is a commodity to be bought and sold, some would dismiss that argument as the stuff of pious sentiment.

Put sentiment aside. There are good reasons why plutocrats should care about inequality anyway—even if they’re thinking only about themselves. The rich do not exist in a vacuum. They need a functioning society around them to sustain their position. Widely unequal societies do not function efficiently and their economies are neither stable nor sustainable. The evidence from history and from around the modern world is unequivocal: there comes a point when inequality spirals into economic dysfunction for the whole society, and when it does, even the rich pay a steep price.

Let me run through a few reasons why.

The Consumption Problem

When one interest group holds too much power, it succeeds in getting policies that help itself in the short term rather than help society as a whole over the long term. This is what has happened in America when it comes to tax policy, regulatory policy, and public investment. The consequence of channeling gains in income and wealth in one direction only is easy to see when it comes to ordinary household spending, which is one of the engines of the American economy.

It is no accident that the periods in which the broadest cross sections of Americans have reported higher net incomes—when inequality has been reduced, partly as a result of progressive taxation—have been the periods in which the U.S. economy has grown the fastest. It is likewise no accident that the current recession, like the Great Depression, was preceded by large increases in inequality. When too much money is concentrated at the top of society, spending by the average American is necessarily reduced—or at least it will be in the absence of some artificial prop. Moving money from the bottom to the top lowers consumption because higher-income individuals consume, as a fraction of their income, less than lower-income individuals do.

In our imaginations, it doesn’t always seem as if this is the case, because spending by the wealthy is so conspicuous. Just look at the color photographs in the back pages of the weekend Wall Street Journal of houses for sale. But the phenomenon makes sense when you do the math. Consider someone like Mitt Romney, whose income in 2010 was $21.7 million. Even if Romney chose to live a much more indulgent lifestyle, he would spend only a fraction of that sum in a typical year to support himself and his wife in their several homes. But take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.
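The arithmetic behind the Romney example is easy to make explicit. In the toy calculation below, the two marginal propensities to consume (10 cents and 95 cents on the dollar) are illustrative assumptions, not figures from the article:

```python
# Toy illustration of the Romney example above. The two marginal
# propensities to consume (MPCs) are assumed for illustration only.

top_income = 21_700_000          # one high income, as in the example
rich_mpc = 0.10                  # assumed: the rich spend ~10% of income
worker_mpc = 0.95                # assumed: workers spend ~95% of income

jobs = 500
wage = top_income // jobs        # $43,400 apiece, as in the article

spent_concentrated = top_income * rich_mpc     # about $2.17 million
spent_distributed = jobs * wage * worker_mpc   # about $20.6 million

print(wage, spent_concentrated, spent_distributed)
```

Under these assumed spending rates, the same $21.7 million generates nearly ten times as much consumption when spread across 500 salaries.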

The relationship is straightforward and ironclad: as more money becomes concentrated at the top, aggregate demand goes into a decline. Unless something else happens by way of intervention, total demand in the economy will be less than what the economy is capable of supplying—and that means that there will be growing unemployment, which will dampen demand even further. In the 1990s that “something else” was the tech bubble. In the first decade of the 21st century, it was the housing bubble. Today, the only recourse, amid deep recession, is government spending—which is exactly what those at the top are now hoping to curb.

by Joseph E. Stiglitz, Vanity Fair |  Read more:
Stephen Doyle

There’s No Stopping the Rise of E-Money

...All this activity has people once again talking about a cashless society. Because let’s face it: Cash is expensive. In the United States, for instance, studies indicate that maintaining a cash system—including printing new bills, recycling old ones, moving them about in armored trucks, using them to replenish automatic cash machines—costs the country about 1 percent of GDP. Those studies also show that the marginal cost of a cash transaction is around double that of a debit-card transaction.

Cash’s indirect costs are huge, too. In a 2011 study [PDF], Edgar L. Feige of the University of Wisconsin, in Madison, and Richard Cebula of Jacksonville University, in Florida, found that in the United States 18 to 19 percent of total reportable income is hidden from federal tax men, a shortfall of about US $500 billion. The Justice Department estimated in 2008 that secret offshore bank accounts were responsible for about one-fifth of the tax gap, suggesting that the remaining 80 percent is attributable to unreported cash.  (...)

Thus the allure of the mobile phone as an alternative to cash. The enabling technology has finally arrived, and it’s taking root because the business drivers (that is, the high cost of cash) and the social drivers (cash’s disproportionate cost to the poor) were already there. And just as the plastic card and the Web made it easy for us to pay merchants, the mobile phone will soon make it easy for us to pay each other.

So let’s assume that the mobile phone will take over and that in a few years’ time, you’ll be able to pay Walmart or your window cleaner or your niece with your mobile phone. In this world, switching among dollars and euros and frequent-flier miles and Facebook Credits and Google Bucks and any other form of money will be just a matter of choosing from a menu on the phone. The cost of introducing new currencies will collapse—anyone will be able to do it. The future of money, in other words, won’t be that single galactic currency of science fiction. (We already know that, because we can’t even make a single currency work between Germany and Greece, let alone Ganymede and Gamma Centauri.) Instead, we can look forward not merely to hundreds but thousands or even millions of currencies. And though regulators may oppose the trend, they can’t hold it back.

That must sound as crazy to you as the idea of paper money once did to your ancestors, but it really isn’t. Trying to imagine a wallet with a hundred currencies in it and a Coke machine with a hundred slots for them is, of course, nuts. But based on the available currencies in your mobile “wallet” and prevailing market conditions, your phone and the Coke machine will be able to negotiate an exchange rate in a fraction of a second.
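A back-of-the-envelope sketch of that negotiation might look like this. All of the currency names, balances, and exchange rates below are invented for illustration; none come from the article:

```python
# Sketch of the phone/vending-machine negotiation described above.
# Currency names and exchange rates are invented for illustration.

RATES_TO_USD = {                 # assumed prevailing market rates
    "USD": 1.00,
    "EUR": 1.25,                 # 1 EUR buys $1.25
    "airline_miles": 0.012,      # 1 mile worth ~1.2 cents
    "game_credits": 0.10,
}

def payment_options(price_usd, wallet):
    """Every wallet balance that can cover the quoted price,
    with the amount due in that currency."""
    options = {}
    for currency, balance in wallet.items():
        due = price_usd / RATES_TO_USD[currency]
        if due <= balance:
            options[currency] = due
    return options

wallet = {"EUR": 2.0, "airline_miles": 500, "game_credits": 5}
print(payment_options(1.50, wallet))   # the machine asks $1.50 for a Coke
```

The phone then picks whichever affordable option suits its owner, all in a fraction of a second; the hundred-slot Coke machine never needs to exist.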

Likewise, I don’t want to carry around a hundred different retailer credit and loyalty cards, but my phone can hold a zillion. So when I go to Starbucks, my phone will select my Starbucks app, load up my Starbucks account, and generally not bother me about the details. When I walk next door into Target, my phone will select my Target app, fire up my Target card, and get down to business.

Who will want to issue these new currencies? Corporations, for starters. When Edward de Bono wrote The IBM Dollar: A Proposal for the Wider Use of “Target” Currencies back in 1994, he looked forward to a time when “the successors to Bill Gates will have put the successors to Alan Greenspan out of business,” arguing that it would be more efficient for companies to issue money than equity. Even if all I’ve got is Microsoft Moola, and you want to get paid in Samsung Shekels, who cares? Our phones can sort it out for us.

by David G.W. Birch, IEEE Spectrum |  Read more: 
Illustration: Harry Campbell

Thursday, May 31, 2012


A female artist, 1903, Karoly Ferenczy. Hungarian (1862 - 1917)
via:

Jane Maxwell, Walking Girls Black, 2012.
via:

Self-Portrait in a Sheet Mirror: On Vivian Maier


Imagine being the kind of person who finds everything provocative. All you have to do is set out on a walk through city streets, a Rolleiflex hanging from a strap around your neck, and your heart starts pounding in anticipation. In a world that never fails to startle, it is up to you to find the perfect angle of vision and make use of the available light to illuminate thrilling juxtapositions. You have the power to create extraordinary images out of ordinary scenes, such as two women crossing the street, minks hanging listlessly down the backs of their matching black jackets; or a white man dropping a coin in a black man’s cup while a white dog on a leash looks away, as if in embarrassment; or a stout old woman braced in protest, gripping the hands of a policeman; or three women waiting at a bus stop, lips set in grim response to the affront represented by your camera, their expressions saying “go away” despite the sign behind them announcing, “Welcome to Chicago.”

Welcome to this crowded stage of a city, where everyone is an actor—the poor, the rich, the policemen and street vendors, the nuns and nannies. Even a leaf, a balloon, a puddle, the corpse of a cat or horse can play a starring role. And you are there, too, as involved in the action of this vibrant theater as anyone else, caught in passing at just the right time, your self-portraits turned to vaporous mirages in store windows, outlined in the silhouettes of shadows and reflected in mirrors that you find in unexpected places. You have to be quick if you’re going to get the image you want. You are quick—so quick that you can snap the picture before the doorman has a chance to come outside and tell you to move on.

There is so much drama worth capturing on film; you don’t have the time or resources to turn all of your many thousands of negatives into prints. Anyway, prints aren’t the point of these adventures. It’s enough to delight in your own ingenuity over and over again, with each click of the shutter. You’ll leave the distribution of your art to someone else.

On a winter’s day in 2007, a young realtor named John Maloof paid $400 for a box full of negatives that was being sold by an auction house in Chicago. The box had been repossessed from a storage locker gone into arrears, and Maloof was hoping it contained images he could use to illustrate a book he was co-writing about the Chicago neighborhood of Portage Park. As it turned out, he had stumbled upon a much more valuable treasure: the work of a photographer who looks destined to take her place as one of the pre-eminent street photographers of the twentieth century.

Like all good stories, this one is full of false leads and startling surprises. Maloof was unimpressed initially by the negatives and disappointed that he hadn’t found any materials for his book on Portage Park. As he told a reporter from the Guardian, “Nothing was pertinent for the book so I thought: ‘Well, this sucks, but we can probably sell them on eBay or whatever.’” He created a blog and posted scans of the negatives, but after the blog received no visitors for months, he posted the scans on Flickr. People began to take notice, and their responses helped Maloof appreciate the importance of his purchase.

His growing excitement led him to take a crash course in photography, buy a Rolleiflex—the same kind of camera that had been used to capture the images on the negatives—and even build a darkroom in his attic. He tracked down other buyers who had been at the auction and persuaded them to sell him their boxes, ultimately accumulating a collection of more than 100,000 negatives and 3,000 prints, hundreds of rolls of film, home movies and audiotapes, as well as personal items like clothes, letters and books on photography. A second Chicago collector, Jeffrey Goldstein, held on to materials he acquired from one of the initial bidders. But Maloof estimates that he succeeded in gathering 90 percent of the photographer’s archive.

At some point between 2007 and 2009, Maloof set out to identify the person who had taken the photographs, though this portion of the story remains murky. According to the Chicago Sun-Times, Maloof was “sifting through the negatives in 2009 when he found” a name, that of Vivian Maier, “on an envelope and Googled it. What he found was an obit.” But in a discussion on Flickr, Maloof indicated that he had found Maier’s name earlier. He reported that he came across her name on a photo-label envelope a year after he’d purchased the materials from the auction house. He considered trying to meet Maier but was told by the auction house that she was ill. “I didn’t want to bother her,” he said. “Soooo many questions would have been answered if I had. It eats at me from time to time.” In April 2009 he Googled Maier’s name and found her obituary, which had been placed the previous day. “How weird?” Maloof commented on Flickr.  (...)

In an interview with Chicago Magazine, Lane Gensburg described his former nanny as having “an amazing ability to relate to children.” Gensburg indicated that he wanted nothing unflattering said about Maier, not foreseeing how an offhand epithet would, for some, become the basis of her legacy: “She was like Mary Poppins,” he reportedly said, introducing a loving comparison that has been repeated less lovingly in subsequent accounts of Maier’s life. Maier may have left behind a huge archive of fascinating visual material that is inviting the world’s attention. But it’s not easy for Mary Poppins to be taken seriously as an artist.

by Joanna Scott, The Nation |  Read more:
Photo: Vivian Maier, Self Portrait

Meet 'Flame, 'The Massive Spy Malware Infiltrating Iranian Computers

A massive, highly sophisticated piece of malware has been newly found infecting systems in Iran and elsewhere and is believed to be part of a well-coordinated, ongoing, state-run cyberespionage operation. (...)

Early analysis of Flame by the Lab indicates that it’s designed primarily to spy on the users of infected computers and steal data from them, including documents, recorded conversations and keystrokes. It also opens a backdoor to infected systems to allow the attackers to tweak the toolkit and add new functionality.

The malware, which is 20 megabytes when all of its modules are installed, contains multiple libraries, SQLite3 databases, various levels of encryption — some strong, some weak — and 20 plug-ins that can be swapped in and out to provide various functionality for the attackers. It even contains some code that is written in the Lua programming language — an uncommon choice for malware.  (...)

“It’s a very big chunk of code. Because of that, it’s quite interesting that it stayed undetected for at least two years,” Gostev said. He noted that there are clues that the malware may actually date back to as early as 2007, around the same time period when Stuxnet and DuQu are believed to have been created.

Gostev says that because of its size and complexity, complete analysis of the code may take years.

“It took us half a year to analyze Stuxnet,” he said. “This is 20 times more complicated. It will take us 10 years to fully understand everything.”

Among Flame’s many modules is one that turns on the internal microphone of an infected machine to secretly record conversations that occur either over Skype or in the computer’s near vicinity; a module that turns Bluetooth-enabled computers into a Bluetooth beacon, which scans for other Bluetooth-enabled devices in the vicinity to siphon names and phone numbers from their contacts folder; and a module that grabs and stores frequent screenshots of activity on the machine, such as instant-messaging and e-mail communications, and sends them via a covert SSL channel to the attackers’ command-and-control servers.

The malware also has a sniffer component that can scan all of the traffic on an infected machine’s local network and collect usernames and password hashes that are transmitted across the network. The attackers appear to use this component to hijack administrative accounts and gain high-level privileges to other machines and parts of the network. (...)

Because Flame is so big, it gets loaded to a system in pieces. The machine first gets hit with a 6-megabyte component, which contains about half a dozen other compressed modules inside. The main component extracts, decompresses and decrypts these modules and writes them to various locations on disk. The number of modules in an infection depends on what the attackers want to do on a particular machine.

Once the modules are unpacked and loaded, the malware connects to one of about 80 command-and-control domains to deliver information about the infected machine to the attackers and await further instruction from them. The malware contains a hardcoded list of about five domains, but also has an updatable list, to which the attackers can add new domains if these others have been taken down or abandoned.

While the malware awaits further instruction, the various modules in it might take screenshots and sniff the network. The screenshot module grabs desktop images every 15 seconds when a high-value communication application is being used, such as instant messaging or Outlook, and once every 60 seconds when other applications are being used.
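The screenshot module's behavior, as described, amounts to a simple adaptive schedule. The sketch below is a reconstruction from the published analysis, not actual Flame code, and the process names are stand-ins for whatever the real module matched:

```python
# Reconstructed logic of the screenshot schedule described above.
# The set of "high-value" applications is an assumption for illustration.

HIGH_VALUE_APPS = {"outlook.exe", "msnmsgr.exe"}   # stand-in examples

def screenshot_interval(active_process):
    """Return seconds between captures: 15 when a high-value
    communication app is in the foreground, 60 otherwise."""
    return 15 if active_process.lower() in HIGH_VALUE_APPS else 60

print(screenshot_interval("OUTLOOK.EXE"))   # 15
print(screenshot_interval("notepad.exe"))   # 60
```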

Although the Flame toolkit does not appear to have been written by the same programmers who wrote Stuxnet and DuQu, it does share a few interesting things with Stuxnet.

Stuxnet is believed to have been written through a partnership between Israel and the United States, and was first launched in June 2009. It is widely believed to have been designed to sabotage centrifuges used in Iran’s uranium enrichment program. DuQu was an espionage tool discovered on machines in Iran, Sudan, and elsewhere in 2011 that was designed to steal documents and other data from machines. Stuxnet and DuQu appeared to have been built on the same framework, using identical parts and similar techniques. But Flame doesn’t resemble either of these in framework, design or functionality.

by Kim Zetter, Wired |  Read more:
Image: Courtesy of Kaspersky

Booktography is fast becoming a viral fad all over the web. The best ones are those that seamlessly integrate the book’s cover with a live person. (A dead person may also be used for the purposes of this meme, but that’s rather macabre.) As with photobombs and jumping-in-the-air photos, the originator of this concept is unknown, but the creative idea behind it will go on to spawn many more memes.

More here:

Freaks, Geeks and Microsoft


When the Kinect was introduced in November 2010 as a $150 motion-control add-on to Microsoft’s Xbox consoles, it drew attention from more than just video-gamers. A slim, black, oblong 11½-inch wedge perched on a base, it allowed a gamer to use his or her body to throw virtual footballs or kick virtual opponents without a controller, but it was also seen as an important step forward in controlling technology with natural gestures.

In fact, as the company likes to note, the Kinect set “a Guinness World Record for the fastest-selling consumer device ever.” And at least some of the early adopters of the Kinect were not content just to play games with it. “Kinect hackers” were drawn to the fact that the object affordably synthesizes an arsenal of sophisticated components — notably, a fancy video camera, a “depth sensor” to capture visual data in three dimensions and a multiarray microphone capable of a similar trick with audio.

Combined with a powerful microchip and software, these capabilities could be put to uses unrelated to the Xbox. Like: enabling a small drone to “see” its surroundings and avoid obstacles; rigging up a 3-D scanner to create small reproductions of most any object (or person); directing the music of a computerized orchestra with conductorlike gestures; remotely controlling a robot to brush a cat’s fur. It has been used to make animation, to add striking visual effects to videos, to create an “interactive theme park” in South Korea and to control a P.C. by the movement of your hands (or, in a variation developed by some Japanese researchers, your tongue).

At the International Consumer Electronics Show earlier this year, Steve Ballmer, Microsoft’s chief executive, used his keynote presentation to announce that the company would release a version specifically meant for use outside the Xbox context and to indicate that the company would lay down formal rules permitting commercial uses for the device. A result has been a fresh wave of Kinect-centric experiments aimed squarely at the marketplace: helping Bloomingdale’s shoppers find the right size of clothing; enabling a “smart” shopping cart to scan Whole Foods customers’ purchases in real time; making you better at parallel parking.

An object that spawns its own commercial ecosystem is a thing to take seriously. Think of what Apple’s app store did for the iPhone, or for that matter how software continuously expanded the possibilities of the personal computer. Patent-watching sites report that in recent months, Sony, Apple and Google have all registered plans for gesture-control technologies like the Kinect. But there is disagreement about exactly how the Kinect evolved into an object with such potential. Did Microsoft intentionally create a versatile platform analogous to the app store? Or did outsider tech-artists and hobbyists take what the company thought of as a gaming device and redefine its potential?

This clash of theories illustrates a larger debate about the nature of innovation in the 21st century, and the even larger question of who, exactly, decides what any given object is really for. Does progress flow from a corporate entity’s offering a whiz-bang breakthrough embraced by the masses? Or does techno-thing success now depend on the company’s acquiescing to the crowd’s input? Which vision of an object’s meaning wins? The Kinect does not neatly conform to either theory. But in this instance, maybe it’s not about whose vision wins; maybe it’s about the contest.

by Rob Walker, NY Times |  Read more:
Illustration by Robbie Porter

Internet to Grow Fourfold in Four Years

Cisco Systems (NASDAQ: CSCO) has put out its annual Visual Networking Index (VNI) forecast for 2011 to 2016. The huge router company projects that the Internet will be four times as large in four years as it is this year. The “wired” world, which has changed human interaction and expanded the availability of information, will explode, if Cisco is correct.

It is hard to find an analogue to this expansion in recent business and social history. Perhaps the growth of TV ownership or cable use, or of car ownership at the beginning of the last century, comes close. At any rate, the growth cannot be matched by anything that has happened in recent memory. The Cisco forecast means that billions of people will be tethered to the Internet. Cisco does not believe its job is to say what the impact of this will be, but there are some reasonable guesses.

The path to the fourfold increase includes these things:
  • By 2016, the forecast projects there will be nearly 18.9 billion network connections — almost 2.5 connections for each person on earth — compared with 10.3 billion in 2011.
  • By 2016, there are expected to be 3.4 billion Internet users — about 45% of the world’s projected population, according to UN estimates.
  • The average fixed broadband speed is expected to increase nearly fourfold, from 9 megabits per second (Mbps) in 2011 to 34 Mbps in 2016.
  • By 2016, 1.2 million video minutes — the equivalent of 833 days (or more than two years) — will travel the Internet every second.
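These figures are internally consistent, as a quick check shows. The 2016 world-population estimate of roughly 7.4 billion used below is an assumption drawn from UN-style projections, not a number in the article:

```python
# Sanity-checking the Cisco VNI figures quoted above.

connections_2016 = 18.9e9
population_2016 = 7.4e9              # assumed UN-style 2016 projection
per_person = connections_2016 / population_2016   # ~2.55, "almost 2.5"

video_minutes_per_second = 1.2e6
days_per_second = video_minutes_per_second / 60 / 24   # ~833 days

speed_growth = 34 / 9                # ~3.8x, "nearly fourfold"
print(per_person, days_per_second, speed_growth)
```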
It is to Cisco’s advantage to make telecom, cable and wireless providers believe these numbers, because the increased use of the company’s routers will be needed to carry the burgeoning load. But, based on recent history, it is not hard to believe that Cisco is right — at least directionally.

The weight of video use is likely to be the greatest burden on Internet systems. While news is probably a large part of this, entertainment is likely to be larger. Businesses modeled on companies like Netflix (NASDAQ: NFLX) and Google’s (NASDAQ: GOOG) YouTube will expand not just in America and Europe. Similar companies will be established in the most populous nations, with the largest probably coming from China, Russia and much of South America. No one knows yet from where the content for these new businesses, whether or not they are legitimate, will come. If the past is any indication, a great deal will originate from U.S. studios. It will either be a revenue windfall for them or part of the growing trouble with piracy.

by Douglas A. McIntyre, 24/7 Wall Street |  Read more:

Wednesday, May 30, 2012


Josef Sudek (Czech, 1896-1976). Advertising photograph for Ladislav Sutnar porcelain set (with black rim), 1932. Gelatin silver print. 23.2 x 17.1 cm.

The Art Institute of Chicago, Laura T. Magnuson Acquisition Fund.
via:

The Antidepressant Wars


I began to think of suicide at sixteen. An anxious and driven child, I entered in my mid-teens a clinical depression that would last for 40 years. I participated in psychotropic drug therapy for almost 30 of those, and now, owing in part, but only in part, to the drug Cymbalta, I have respite from the grievous suffering that is mental illness.

As a health policy scholar, I understand the machinations of the pharmaceutical industry. My students learn about “me-too” drugs, which barely improve on existing medications, and about “pay-for-delay,” whereby pharmaceutical companies cut deals with manufacturers of generic drugs to keep less expensive products off the market. I study policymakers’ widespread use of effectiveness research and their belief that effectiveness will contain costs while improving quality. I appreciate that randomized controlled trials are the gold standard for determining what works. Specifically, I know that antidepressant medication is vigorously promoted, that the diagnostic criteria for depression are muddled and limited, and that recent research attributes medicated patients’ positive outcomes to the placebo effect. In my own research and advocacy work, I take a political, rather than a medical, approach to recovery from mental illness.

Cymbalta in particular epitomizes pharmaceutical imperialism. Approved by the FDA in August 2004 for the treatment of major depressive disorder, it has since gotten the go-ahead for treating generalized anxiety disorder, fibromyalgia, and chronic musculoskeletal pain, including osteoarthritis and lower back pain. It remains under patent to Eli Lilly.

I would not have been surprised if Cymbalta had not worked for me or had not bested the myriad drugs and drug combinations that came before. My path through clinical depression is strewn with discarded remedies. “Who are these people?” I wondered about patients who were said to achieve happiness with the first pill and therefore to violate societal notions of identity and independence. I was just trying to get out of bed, and although my first antidepressant, at age 26, had a strong positive result, it also had incommodious side effects, and relief was tentative and partial. Decades of new and evolving treatment regimens followed. I have been treated with every class of antidepressant medication, often in combination with other psychotropic drugs. Some drugs worked better than others, some did not work at all, and some had unendurable side effects. But Cymbalta did not disappoint, and now I have become a teller of two tales, one about health policy, the other about health.

Like many depressed people, I resisted the idea of psychotropic medication. I was deeply hurt when my psychotherapist suggested I see a psychiatrist about antidepressant drugs. How could she think I was that crazy or that weak? But she said she was concerned for my survival, and I eventually did as she asked. I became an outpatient at a venerable psychiatric hospital, where I found a kind stranger who knew my deepest secrets and wanted to end my suffering. He wrote a prescription, and thus began my 30-year trek.

Depression is sometimes confused with sadness. Many depressed people are very sad, as I was, but the essence of my depression was feeling dead among the living. Everything was just so hard. William Styron describes depression as “a storm of murk.” Andrew Solomon’s atlas of depression is titled The Noonday Demon. I too found depression to be fierce, wrapping me in a heavy woolen blanket and mocking my attempts to cast it off. The self-loathing was palpable; it felt like I was chewing glass. I sensed that other people were seeing things I did not, and apparently they were, because when I began my first course of antidepressants, it was as if someone had turned on the lights. It did not make me happy or even content. The world simply looked different—brighter, deeper—and I was a part of it. I saw something other than the impassable flatness and enervating dullness, and I was amazed.

My progress came at a cost. In the late 1970s, before Prozac, antidepressant medication was seldom spoken of. The people I told about my treatment echoed my first reaction and sang throaty choruses of why-don’t-you-just-cheer-up and won’t-this-make-you-a-drug-addict. I was also drowsy after I ate, my mouth was always dry, and when a second medication was added, I began to lose control of my limbs and fall down. I insisted to my psychiatrist that it was the second drug that was causing me to fall. A champion of that one, he instructed me to discontinue the first. I responded in the way only privileged patients can: I went around him, using personal connections to wrest an informal second opinion from a resident in the lab run by my psychiatrist’s mentor. My doctor was convinced, and a little embarrassed, and we both learned something about therapeutic alliances. (...)

In the years that followed, we just kept trying. I would remain on a regimen until my psychiatrist proposed another, and, looking back, I was remarkably game. I was treated with monoamine oxidase inhibitors, which can be fatal in combination with some foods, and a famous psychiatrist in Manhattan prescribed a drug sold only in Canada. When a medication produced double vision, my psychiatrist suggested I drive with one eye closed. Drug cocktails deteriorated into over-medication. I tried to enroll in a clinical trial that would implant electrodes in my brain, but it was already full. There was only one remedy I rejected outright: electroconvulsive therapy. I was told by other patients about their memory loss, and I needed a good memory to do my job. (...)

Medications that affect the mind seem to discomfit us deeply, culturally, viscerally. And so do the people who need them: psychiatric patients have gone, in this discourse, from covetous of an unfair advantage to oblivious to a colossal con. I am not sure which characterization I prefer, but I know my heart will break when a friend in the grip of depression forgoes medication—not because it is not right for her, but because it is only for cheaters or fools.

Most parties to the debate agree that antidepressants can be effective for severely depressed patients such as me, but selfishly I fear the rhetoric of antidepressant uselessness will influence the pharmacy policies of my health plan. At present I am charged an inflated copayment for Cymbalta because my health plan claims it is no more effective than generic antidepressants. I am not privy to the basis for this determination; I do not know if it is based on average treatment effects, the preferences of plan professionals, or an overriding concern for cost. I do know that it does not include my experience, and when I queried the plan about an appeal, I was told I could appeal but should not bother: there are no successful appeals. The plan representative was unmoved by my savings on psychiatry, rheumatology, and hospitalization. She intimated that it is just too hard to satisfy individuals and that the plan has enough to do managing costs.

by Sandra J. Tanenbaum, Boston Review |  Read more:
Photo: Jordan Olels

Marjorie and the Birds (Fiction)

After her husband died, Marjorie took up hobbies, lots of them, just to see what stuck. She went on a cruise for widows and widowers, which was awful for everyone except the people who hadn’t really loved their spouses to begin with. She took up knitting, which made her fingers hurt, and modern dance for seniors, which made the rest of her body hurt, too. Most of all, Marjorie enjoyed birding, which didn’t seem like a hobby at all, but like agreeing to be more observant. She’d always been good at paying attention.

She signed up for an introductory course at the Museum of Natural History, sending her check in the mail with a slip of paper wrapped around it. It was the sort of thing that her children made fun of her for, but Marjorie had her ways. The class met twice a week at seven in the morning, always gathering on the Naturalist’s Bridge just past the entrance to the park at 77th Street. Marjorie liked that, the consistency. Even on days when she was late—all year, it had only happened twice, and she’d been mortified both times—Marjorie knew just where to find the group, as they always wound around the park on the same path, moving at a snail’s pace, a birder’s pace, their eyes up in the trees and their hands loosely holding onto the binoculars around their necks.

Dr. Lawrence was in charge. He was a small man, smaller than Marjorie, who stood five foot seven in her walking shoes. His hair was thin but not gone, pale but not white. To Marjorie, he seemed a youthful spirit, though he must have been in his late fifties. Dr. Lawrence had another job at the museum, unrelated to birds. Marjorie could never remember exactly what it was. He arranged bones, or pinned butterfly wings, or dusted off the dinosaurs with a toothbrush. She was too embarrassed to keep asking. But the birds were his real love, that was clear. Marjorie loved listening to Dr. Lawrence describe what he saw in the trees. Warbling in the fir tree, behind the maple, eleven o’clock. Upper branches, just below the moon. Do you hear them calling to each other? Don’t you hear them? Sometimes Marjorie would close her eyes, even though she knew that wasn’t the point. But the park sounded so beautiful to her, like it and she had been asleep together and were only now waking up, were only now beginning to understand what was possible on a daily basis.

Marjorie’s husband, Steve, had had a big personality and the kind of booming voice that often made people turn around in restaurants. In the end, it was his heart that stopped working, as they had long suspected it would be. There had been too many decades of three-hour dinners, too much butter, too much fun. Steve had resisted all the diets his doctors suggested on principle—if that was living, what was the point? He’d known that it would happen this way, that he would go down swinging, or swigging as the case may have been. Marjorie understood. It was the children who argued.

Their daughter, Kate, was the eldest, and already had two children of her own. She would send articles over email, knowing that neither of her parents would read them. Lowering his salt, lowering his sugar, lowering his alcohol intake. Simple exercises that could be done while sitting in a chair—Kate had tried them, they were easy. Marjorie knew how to press delete.

by Emma Straub, Fifty-Two Stories |  Read more:

Pictures and Vision

Okay, I’m going to argue that the futures of Facebook and Google are pretty much totally embedded in these two images:

[Images not shown: a photo uploaded to Facebook, and a hands-free Project Glass snapshot.]

The first one you know. What you might not know is just how completely central photos are to Facebook’s product, and by extension its whole business. The company’s S1 filing reports that, in the last three months of 2011, users uploaded around 250 million photos every day. For context, around 480 million people used the service on any given day in that span. That’s like… quite a ratio. A whole lot of people sign up for Facebook because they want to see a friend or family member’s photos, and a whole lot of people return to the site to see new ones. (And I mean, really: does the core Facebook behavior of stalking provide any satisfaction without photos? No, it does not.)
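Spelled out, that ratio works out to roughly half a photo per active user per day. A quick back-of-the-envelope sketch, using the rounded S1 figures quoted above:

```python
# Rough arithmetic from Facebook's S1 figures for late 2011 (rounded).
photos_per_day = 250_000_000   # photos uploaded daily
daily_users = 480_000_000      # people using the service on a given day

ratio = photos_per_day / daily_users
print(f"{ratio:.2f} photos per active user per day")   # about 0.52

# Scaled over a ~92-day quarter:
quarterly = photos_per_day * 92
print(f"roughly {quarterly / 1e9:.0f} billion photos a quarter")
```

That is, every two active users were collectively adding about one photo a day, some twenty-odd billion photos a quarter.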

Really, Facebook is the world’s largest photo-sharing site—that also happens to be a social network and a login system. In this context, the Instagram acquisition and the new Facebook Camera app make perfect sense; this is Facebook trebling down on photos. The day another service steals the photo throne is the day that Facebook’s trajectory starts to bend.

(As an aside, I’d love to know how many photo views happen daily on Facebook. My guess is that the number utterly dwarfs every other metric in the system—other than pageviews, of which it is obviously a subset.)

You might not recognize the second image up above. It was posted on Sebastian Thrun’s Google+ page, and it was taken with a working version of Project Glass out in the wild, or at least in Thrun’s backyard. It’s a POV shot taken hands-free: Thrun’s son Jasper, just as Thrun saw him.

Thrun also demonstrated Glass on Charlie Rose, and it’s worth watching the first five minutes there just to see (a) exactly how weird the glasses look, and (b) exactly how wonderful the interaction seems. This isn’t about sharing pictures. This is about sharing your vision.

Now, Google’s big pitch video for Glass is all about utility, with just a dollop of delight at the end, but don’t let that fool you. There is serious delight waiting here. Imagine actors and athletes doing what they do today on Twitter—sharing their adventures from a first-person POV—except doing it with Glass. It’s pretty exciting, actually, and if the glasses look criminally dorky, well, we didn’t expect to find ourselves walking the world staring down into skinny little black boxes, either.

So the titanic showdown between Facebook and Google might not be the News Feed vs. Google+ after all. It might be Facebook Camera vs. Project Glass.

It might, in fact, be pictures vs. vision.

by Robin Sloan |  Read more: