Sunday, January 7, 2018

The Secret Lives of Students Who Mine Cryptocurrency in Their Dorm Rooms

Mark was a sophomore at MIT in Cambridge, Massachusetts, when he began mining cryptocurrencies more or less by accident.

In November 2016, he stumbled on NiceHash, an online marketplace for individuals to mine cryptocurrency for willing buyers. His desktop computer, boosted with a graphics card, was enough to get started. Thinking he might make some money, Mark, who asked that his last name not be used, downloaded the platform’s mining software and began mining for random buyers in exchange for payments in bitcoin. Within a few weeks, he had earned back the $120 cost of his graphics card, as well as enough to buy another for $200.

From using NiceHash, he switched to mining ether, then the most popular bitcoin alternative. To increase his computational power, he scrounged up several unwanted desktop computers from a professor who “seemed to think that they were awful and totally trash.” When equipped with the right graphics cards, the “trash” computers worked fine.

Each time Mark mined enough ether to cover the cost, he bought a new graphics card, trading leftover currency into bitcoin for safekeeping. By March 2017, he was running seven computers, mining ether around the clock from his dorm room. By September his profits totaled one bitcoin—worth roughly $4,500 at the time. Now, four months later, after bitcoin’s wild run and the diversification of his cryptocoin portfolio, Mark estimates he has $20,000 in digital cash. “It just kind of blew up,” he says.

Exploiting a crucial competitive advantage and motivated by profit and a desire to learn the technology, students around the world are launching cryptocurrency mining operations right from their dorm rooms. In a typical mining operation, electricity consumption accounts for the highest fraction of operational costs, which is why the largest bitcoin mines are based in China. But within Mark’s dorm room, MIT foots the bill. That gives him and other student miners the ability to earn higher profit margins than most other individual miners.

In the months since meeting Mark, I’ve interviewed seven other miners from the US, Canada, and Singapore who ran or currently run dorm room cryptomining operations, and I’ve learned of many more who do the same. Initially, almost every student began mining because it was fun, cost-free, and even profitable. As their operations grew, so did their interest in cryptocurrency and in blockchain, the underlying technology. Mining, in other words, was an unexpected gateway into discovering a technology that many predict will dramatically transform our lives.  (...)

A dorm room operation

Years before meeting Mark, when I was a junior at MIT, I had heard rumors of my peers mining bitcoin. After its value exploded, and along with it, the necessary computational and electrical power to mine it, I assumed that dorm room mining was no longer viable. What I hadn’t considered was the option of mining alternate cryptocurrencies, including ethereum, which can and do thrive as small-scale operations.

When mining for cryptocurrency, computational power, along with low power costs, is king. Miners around the world compete to solve math problems for a chance to earn digital coins. The more computational power you have, the greater your chances of getting returns.
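
The “math problems” here are hash puzzles: a miner repeatedly hashes a candidate block with different random values (nonces) until the output falls below a difficulty target, and every attempt is an independent lottery ticket. Here is a minimal, purely illustrative Python sketch, with toy parameters nothing like real-world difficulty or speed:

```python
import hashlib
import os
from typing import Optional

def mine(block_data: bytes, difficulty_bits: int, max_attempts: int) -> Optional[int]:
    """Try nonces until SHA-256(block_data + nonce) falls below a target.

    Every attempt has the same tiny, independent chance of winning, so a rig
    that tries twice as many hashes per second finds valid blocks twice as
    often on average, which is why miners stack up GPUs or ASICs.
    """
    target = 2 ** (256 - difficulty_bits)  # more difficulty bits = smaller target = harder puzzle
    for _ in range(max_attempts):
        nonce = os.urandom(8)
        digest = hashlib.sha256(block_data + nonce).digest()
        if int.from_bytes(digest, "big") < target:
            return int.from_bytes(nonce, "big")  # a "winning" nonce
    return None  # no solution within this attempt budget

# Toy difficulty so an ordinary laptop succeeds in a second or two.
print(mine(b"example block header", difficulty_bits=18, max_attempts=2_000_000))
```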

To profitably mine bitcoin today, you need an application-specific integrated circuit, or ASIC—specialized hardware designed for bitcoin-mining efficiency. An ASIC can have 100,000 times more computational power than a standard desktop computer equipped with a few graphics cards. But ASICs are expensive—the most productive ones easily cost several thousand dollars—and they suck power. If bitcoin prices aren’t high enough for mining revenue to exceed the cost of electricity, the pricey hardware sits idle; unlike a graphics card, it cannot be repurposed for any other function.
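
That trade-off boils down to simple arithmetic: the dollar value of coins mined per day versus the cost of the electricity burned to mine them, which is also why free dorm-room power changes the picture entirely. A rough sketch using made-up, illustrative numbers (real figures swing constantly with coin prices and network difficulty):

```python
def daily_profit(hashrate_th: float, revenue_per_th_day: float,
                 power_watts: float, electricity_per_kwh: float) -> float:
    """Daily mining profit: dollars earned from mining minus electricity burned."""
    revenue = hashrate_th * revenue_per_th_day                      # $/day from mined coins
    electricity = (power_watts / 1000) * 24 * electricity_per_kwh   # kWh per day * price
    return revenue - electricity

# Hypothetical ASIC: 13 TH/s, drawing 1,400 W, earning $0.30 per TH/s per day.
print(daily_profit(13, 0.30, 1400, 0.12))  # paying ~$0.12/kWh: about -$0.13/day
print(daily_profit(13, 0.30, 1400, 0.00))  # electricity covered (dorm room): about +$3.90/day
```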

In contrast, alternate currencies like ethereum are “ASIC-resistant”: their mining algorithms are designed so that specialized chips offer little advantage, and ASICs built to mine ether don’t exist. That means ether can be profitably mined with just a personal computer. Rather than rely solely on a computer’s central processing unit (CPU), however, miners pair it with graphics cards (GPUs) to increase the available computational power. Whereas a CPU is designed to solve one problem at a time, a GPU can work on hundreds simultaneously, which dramatically raises the chances of earning coins.

by Karen Hao, Quartz |  Read more:
Image: rebcenter-moscow/Pixabay

William-Adolphe Bouguereau, The Song of Angels (1881)
via:

Of All the Blogs in the World, He Walks Into Mine

A man born to an Orthodox Jewish family in Toronto and schooled at a Yeshiva and a Japanese-American man raised on the island of Oahu, Hawaii, were married in the rare books section of the Strand Bookstore in Greenwich Village before a crowd of 200 people, against a backdrop of an arch of gold balloons that were connected to each other like intertwined units of a necklace chain or the link emoji, in a ceremony led by a Buddhist that included an operatic performance by one friend, the reading of an original poem based on the tweets of Yoko Ono by another, and a lip-synced rendition of Whitney Houston’s “I Will Always Love You” by a drag queen dressed in a white fringe jumper and a long veil.

The grooms met on the internet. But this isn’t a story about people who swiped right.

Adam J. Kurtz, 29, and Mitchell Kuga, 30, first connected Dec. 1, 2012, five years to the day before their wedding.

It was just before 5 p.m. and Mr. Kurtz, living in the Williamsburg section of Brooklyn, ordered a pizza. As one does, when one is 24 and living amid a generation of creative people whose every utterance and experience might be thought of as content, Mr. Kurtz filmed and posted to Tumblr a 10-minute video showing him awaiting the delivery.

Among those who liked the video was a stranger Mr. Kurtz had already admired from afar. It was a guy named Mitchell who didn’t reveal his last name on his Tumblr account, just his photographic eye for Brooklyn street scenes and, on occasion, his face. Mr. Kurtz had developed a bit of a social-media crush on him. “I would think, ‘He’s not even sharing his whole life, that is so smart and impressive,’” Mr. Kurtz said. (...)

When they met, they both were relatively new to New York. Mr. Kuga had moved to the city from Oahu in 2010, after having studied magazine journalism at Syracuse University, from which he graduated in 2009. He is a freelance journalist who has written for Next Magazine and for Gothamist, including an article about Spam (the food product, not the digital menace).

Mr. Kurtz graduated from the University of Maryland, Baltimore County in 2009 and moved to New York in 2012 to work as a graphic artist. He was always creative and enjoyed making crafts with bits and bobs of paper he had saved, ticket stubs and back-of-the-envelope doodles.

He began to build a large social media following, particularly on Instagram, of those who enjoyed his wry humor in celebrating paper culture through digital media, as well as the witty items he began to sell online (like little heart-shaped Valentine’s Day candies that say, “RT 4 YES, FAV 4 NO” and “REBLOG ME”).

by Katherine Rosman, NY Times |  Read more:
Image: Rebecca Smeyne
[ed. Gay, straight, sideways... this just hurts my brain.]

Saturday, January 6, 2018

The Real Future of Work

In 2013, Diana Borland and 129 of her colleagues filed into an auditorium at the University of Pittsburgh Medical Center. Borland had worked there for the past 13 years as a medical transcriptionist, typing up doctors’ audio recordings into written reports. The hospital occasionally held meetings in the auditorium, so it seemed like any other morning.

The news she heard came as a shock: A UPMC representative stood in front of the group and told them their jobs were being outsourced to a contractor in Massachusetts. The representative told them it wouldn’t be a big change, since the contractor, Nuance Communications, would rehire them all for the exact same position and the same hourly pay. There would just be a different name on their paychecks.

Borland soon learned that this wasn’t quite true. Nuance would pay her the same hourly rate—but for only the first three months. After that, she’d be paid according to her production, 6 cents for each line she transcribed. If she and her co-workers passed up the new offer, they couldn’t collect unemployment insurance, so Borland took the deal. But after the three-month transition period, her pay fell off a cliff. As a UPMC employee, she had earned $19 per hour, enough to support a solidly middle-class life. Her first paycheck at the per-line rate worked out to just $6.36 per hour—below the minimum wage.

“I thought they made a mistake,” she said. “But when I asked the company, they said, ‘That’s your paycheck.’”

Borland quit not long after. At the time, she was 48, with four kids ranging in age from 9 to 24. She referred to herself as retired and didn’t hold a job for the next two years. Her husband, a medical technician, told her that “you need to be well for your kids and me.” But early retirement didn’t work out. The family struggled financially. Two years ago, when the rival Allegheny General Hospital recruited her for a transcriptionist position, she took the job. To this day, she remains furious about UPMC’s treatment of her and her colleagues.

“The bottom line was UPMC was going to do what they were going to do,” she said. “They don’t care about what anybody thinks or how it affects any family.” UPMC, reached by email, said the outsourcing was a way to save the transcriptionists’ jobs as the demand for transcriptionists fell.

It worked out for her former employer: In the four years since the outsourcing, UPMC’s net income has more than doubled.

What happened to Borland and her co-workers may not be as dramatic as being replaced by a robot, or having your job exported to a customer service center in Bangalore. But it is part of a shift that may be even more historic and important—and has been largely ignored by lawmakers in Washington. Over the past two decades, the U.S. labor market has undergone a quiet transformation, as companies increasingly forgo full-time employees and fill positions with independent contractors, on-call workers or temps—what economists have called “alternative work arrangements” or the “contingent workforce.” Most Americans still work in traditional jobs, but these new arrangements are growing—and the pace appears to be picking up. From 2005 to 2015, according to the best available estimate, the number of people in alternative work arrangements grew by 9 million and now represents roughly 16 percent of all U.S. workers, while the number of traditional employees declined by 400,000. A perhaps more striking way to put it is that during those 10 years, all net job growth in the American economy was in contingent jobs.

Around Washington, politicians often talk about this shift in terms of the so-called gig economy. But those startling numbers have little to do with the rise of Uber, TaskRabbit and other “disruptive” new-economy startups. Such firms actually make up a small share of the contingent workforce. The shift that came for Borland is part of something much deeper and longer, touching everything from janitors and housekeepers to lawyers and professors.

“This problem is not new,” said Senator Sherrod Brown of Ohio, one of the few lawmakers who has proposed a comprehensive plan on federal labor law reform. “But it’s being talked about as if it’s new.”

The repercussions go far beyond the wages and hours of individuals. In America, more than any other developed country, jobs are the basis for a whole suite of social guarantees meant to ensure a stable life. Workplace protections like the minimum wage and overtime, as well as key benefits like health insurance and pensions, are built on the basic assumption of a full-time job with an employer. As that relationship crumbles, millions of hardworking Americans find themselves ejected from that implicit pact. For many employees, their new status as “independent contractor” gives them no guarantee of earning the minimum wage or health insurance. For Borland, a new full-time job left her in the same chair but without a livable income.

In Washington, especially on Capitol Hill, there’s not much talk about this shift in the labor market, much less movement toward solutions. Lawmakers attend conference after conference on the “Future of Work” at which Republicans praise new companies like Uber and TaskRabbit for giving workers more flexibility in their jobs, and Democrats argue that those companies are simply finding new ways to skirt federal labor law. They all warn about automation and worry that robots could replace humans in the workplace. But there’s actually not much evidence that the future of work is going to be jobless. Instead, it’s likely to look like a new labor market in which millions of Americans have lost their job security and most of the benefits that accompanied work in the 20th century, with nothing to replace them.

by Danny Vinik, Politico |  Read more:
Image: Chris Gash

Jackson Pollock
via:

Lawrence Wheeler
via:

Mick and Keith
via:

What “Affordable Housing” Really Means

When people — specifically market urbanists versus regulation fans — argue about housing affordability on the internet, it seems to me that the two groups are using the concept of "affordable" in different ways.
  1. In one usage, the goal of improving affordability is to make it possible for more people to share in the economic dynamism of a growing, high-income city like Seattle.
  2. In the other usage, the goal of improving affordability is to reduce (or slow the rise of) average rents in an economically dynamic, high-income city like Seattle.
These are both things that a reasonable person could be interested in. But since they are different things, different policies will impact them.

The first definition is what market urbanists are talking about. I live in a neighborhood of Washington, DC, that's walkable to much of the central business district, has good transit assets, and though predominantly poor in the very recent past has now become expensive (i.e., it's gentrifying).

If the city changed the zoning to allow for denser construction, the number of housing units available in the neighborhood would increase and thus (essentially by definition) the number of people who are able to afford to live there would go up.

What's not entirely clear is whether a development boom would reduce prices in the neighborhood. I think it's pretty clear that on some scale, "more supply equals lower prices" is true. The extra residents don't materialize out of thin air, after all, so there must be somewhere that demand is eased as a result of the increased development.

But skeptics are correct to note that the actual geography of the price impact is going to depend on a huge array of factors and there are no guarantees here. In particular, there's no guarantee that incumbent low-income residents will be more able to stay in place under a high-development regime than a low-development one.

To accomplish the goals of (2), you really do need regulation — either traditional rent control or some newfangled inclusionary zoning or what have you.

But — critically — (2) doesn't accomplish (1). If you're concerned that we are locking millions of Americans out of economic opportunity by making it impossible for thriving, high-wage metro areas to grow their housing stock rapidly, then simply reducing the pace of rent increases in those areas won't do anything to help. Indeed, there's some possibility that it might hurt by further constraining overall housing supply.

by Matthew Yglesias, Vox | Read more:
Image: Shutterstock

Our Cloud-Centric Future Depends on Open Source Chips

When a Doctor First Handed Me Opioids

On a sunny September morning in 2012, my wife and I returned to our apartment from walking our eldest daughter to her first day of kindergarten. When we entered our home, in the Washington, DC, suburb of Greenbelt, Maryland, I immediately felt that something was off. My Xbox 360 and Playstation 3 were missing.

My wife ran to the bedroom, where drawers were open, clothing haphazardly strewn about. It was less than a minute before a wave of terror washed over me: My work backpack was gone. Inside that bag were notebooks and my ID for getting into work at NBC News Radio, where I was an editor. But the most important item in my life was in that bag: my prescription bottle of Oxycodone tablets.

“I can’t believe this happened to us,” my wife said.

“They took my pills,” I said.

We repeated those lines to each other over and over, my wife slowly growing annoyed with me. Why didn’t I feel the same sense of violation? Why wasn’t I more upset about the break-in? Oh, but I was. Because they took my pills. The game consoles, few dollars and cheap jewelry they stole would all be replaced. But my pills! They took my fucking pills!

We had to call the police. Not because of the break-in but, rather, so I could have a police report to show my doctor. That was all I could think about. My pills.

How did I become this person? How did I get to a place where the most important thing in my life was a round, white pill of opiate pleasure?
***
Before 2010, I had taken opiates only a few times. In 2007, I went to the emergency room in my hometown of Cleveland, Ohio, because I could not stop vomiting from abdominal pain. Upon my discharge, I was given 15 Percocets, 5 milligrams each. I took them as prescribed, noticed that they made me feel happy, and never gave them another thought.

After I took a reporter job in Orlando, I began to get sick more frequently, requiring several visits to the ER for abdominal pain and vomiting. In September of 2008 I was diagnosed with Crohn’s, an inflammatory bowel disease, and put on a powerful chemotherapy drug called Remicade to quell the symptoms. My primary care doctor, knowing I was in pain, prescribed me Percocet every month. I took them as needed, or whenever I needed a pick-me-up at work. I shared a few with a coworker from time to time. We’d take them, and 20 minutes later, start giggling at each other. I never totally ran out—never took them that often. I never needed an early refill.

In March of 2010, I was hired as the news director of a radio station in Madison, Wisconsin. Before we moved, my doctor in Orlando wrote me a Percocet script for 90 pills to bridge the gap until my new insurance in Wisconsin kicked in—approximately three months’ worth. I went through them in four weeks. I spent about a week feeling like I had the flu and then recovered, never once realizing that I was experiencing opiate withdrawal for the first time. Soon after, I set up my primary and GI care with my new insurance, and went back to my one-to-two-pills-per-day Percocet prescription, along with a continuation of my Remicade treatment.

Two months later, while my wife and daughter were visiting family in Cleveland, I developed concerning symptoms. My joints were swollen, I couldn’t bend my elbows, I was dizzy. I went to the ER, where for two days the doctors performed all sorts of tests as my symptoms worsened. Eventually, the rheumatologist diagnosed me with drug-induced Lupus from the Remicade. I was prescribed 60 Percocets upon leaving the hospital.

When I went back to my GI doc four weeks later for a refill, he told me he was uncomfortable prescribing pain medication, so he referred me to a pain clinic. I told the physician there how I would get cramps, sharp pains that would sometimes lead to vomiting. Did it hurt when I drove over bumps, or when bending over? Yes, sometimes. I left with a prescription for Oxycodone, with instructions to take one pill every three to four hours. My initial script was for 120 pills. I felt like I hit the jackpot.

by Anonymous, Mother Jones |  Read more:
Image: PeopleImages/Getty

Friday, January 5, 2018

Nine-Enders

You’re Most Likely to Do Something Extreme Right Before You Turn 30... or 40, or 50, or 60...

Red Hong Yi ran her first marathon when she was 29 years old. Jeremy Medding ran his when he was 39. Cindy Bishop ran her first marathon at age 49, Andy Morozovsky at age 59.

All four of them were what the social psychologists Adam Alter and Hal Hershfield call “nine-enders,” people in the last year of a life decade. They each pushed themselves to do something at ages 29, 39, 49, and 59 that they didn’t do, didn’t even consider, at ages 28, 38, 48, and 58—and didn’t do again when they turned 30, 40, 50, or 60.

Of all the axioms describing how life works, few are sturdier than this: Timing is everything. Our lives present a never-ending stream of “when” decisions—when to schedule a class, change careers, get serious about a person or a project, or train for a grueling footrace. Yet most of our choices emanate from a steamy bog of intuition and guesswork. Timing, we believe, is an art.

In fact, timing is a science. For example, researchers have shown that time of day explains about 20 percent of the variance in human performance on cognitive tasks. Anesthesia errors in hospitals are four times more likely at 3 p.m. than at 9 a.m. Schoolchildren who take standardized tests in the afternoon score considerably lower than those who take the same tests in the morning; researchers have found that for every hour after 8 a.m. that Danish public-school students take a test, the effect on their scores is equivalent to missing two weeks of school.

Other researchers have found that we use “temporal landmarks” to wipe away previous bad behavior and make a fresh start, which is why you’re more likely to go to the gym in the month following your birthday than the month before.  (...)

For example, to run a marathon, participants must register with race organizers and include their age. Alter and Hershfield found that nine-enders are overrepresented among first-time marathoners by a whopping 48 percent. Across the entire lifespan, the age at which people were most likely to run their first marathon was 29. Twenty-nine-year-olds were about twice as likely to run a marathon as 28-year-olds or 30-year-olds.

Meanwhile, first-time marathon participation declines in the early 40s but spikes dramatically at age 49. Someone who’s 49 is about three times more likely to run a marathon than someone who’s just a year older.

What’s more, nearing the end of a decade seems to quicken a runner’s pace—or at least motivates them to train harder. People who had run multiple marathons posted better times at ages 29 and 39 than during the two years before or after those ages.

The energizing effect of the end of a decade doesn’t make logical sense to the marathon-running scientist Morozovsky. “Keeping track of our age? The Earth doesn’t care. But people do, because we have short lives. We keep track to see how we’re doing,” he told me. “I wanted to accomplish this physical challenge before I hit 60. I just did.” For Yi, the artist, the sight of that chronological mile marker roused her motivation. “As I was approaching the big three-o, I had to really achieve something in my 29th year,” she said. “I didn’t want that last year just to slip by.”

However, flipping life’s odometer to a nine doesn’t always trigger healthy behavior. Alter and Hershfield also discovered that “the suicide rate was higher among nine-enders than among people whose ages ended in any other digit.” So, apparently, was the propensity of men to cheat on their wives. On the extramarital-affair website Ashley Madison, nearly one in eight men were 29, 39, 49, or 59, about 18 percent higher than chance would predict.

“People are more apt to evaluate their lives as a chronological decade ends than they are at other times,” Alter and Hershfield explain. “Nine-enders are particularly preoccupied with aging and meaningfulness, which is linked to a rise in behaviors that suggest a search for or crisis of meaning.”

by Daniel H. Pink, The Atlantic |  Read more:
Image: Mike Segar, Reuters

Politics 101

Thursday, January 4, 2018


Cy Twombly, Untitled 1970
via:

The Rise and Fall of the Blog

New York Times writer Nicholas Kristof was one of the first to start blogging for one of the most well-known media companies in the world. Yet on December 8th, he declared his blog was being shut down, writing, “we’ve decided that the world has moved on from blogs—so this is the last post here.”

The death knell of blogs might seem surprising to anyone who was around during their heyday. Back in 2008, Daniel W. Drezner and Henry Farrell wrote in Public Choice, “Blogs appear to be a staple of political commentary, legal analysis, celebrity gossip, and high school angst.” A Mother Jones writer who “flat out declared, ‘I hate blogs’…also admitted, ‘I gorge myself on these hundreds of pieces of commentary like so much candy.’”

Blogs exploded in popularity fast. According to Drezner and Farrell, in 1999, there were an estimated 50 blogs dotted around the internet. By 2007, a blog tracker estimated there were around seventy million. Yet a popular question today is whether blogs still have any relevance. A quick Google search yields suggested searches like “are blogs still relevant 2016,” “are blogs still relevant 2017,” and “is blogging dead.”

In 2007, the blogosphere may have been crowded, but it was undeniably influential. Blogs were credited with playing a pivotal role in campaign tactics, removing a Mississippi inmate from death row, impeding the sales of arms to Hugo Chavez’s regime, and spurring several other twists and turns in important national events.

Of course, power is in the eye of the beholder, and blogs used to be seen as a powerful indicator of public opinion by the people in power. As Drezner and Farrell put it in their 2008 article, “there is strong evidence that politicians perceive that blogs are a powerful force in American politics. The top five political blogs attract a combined 1.5 million unique visits per day, suggesting that they have far more readers than established opinion magazines such as the New Republic, American Prospect, and Weekly Standard combined.”

Today, writers lament the irrelevance of blogs not just because there are too many of them, but because not enough people are engaging with even the more popular ones. Blogs are still important to those invested in their specific subjects, but not to a more general audience, who are more likely to turn to Twitter or Facebook for a quick news fix or take on current events.

As author Gina Bianchini explains while advising against starting a blog: “2017 is a very different world than 2007. Today is noisier and people’s attention spans shorter than any other time in history…and things are only getting worse. Facebook counts a ‘view’ as 1.7 seconds and we have 84,600 of those in a day. Your new blog isn’t equipped to compete in this new attention-deficit-disorder Thunderdome.”

by Farah Mohammed, JSTOR |  Read more:
Image: iStock
[ed. Maybe people blog for reasons other than simple metrics.]

Raw Water

Step aside, Juicero—and hold my “raw” water.

Last year, Silicon Valley entrepreneur Doug Evans brought us the Juicero machine, a $400 gadget designed solely to squeeze eight ounces of liquid from proprietary bags of fruits and vegetables, which went for $5 to $8 apiece. Though the cold-pressed juice company initially wrung millions from investors, its profits ran dry last fall after journalists at Bloomberg revealed that the pricy pouch-pressing machine was, in fact, unnecessary. The journalists simply squeezed juice out of the bags by hand.

But this didn’t crush Evans. He immediately plunged into a new—and yet somehow even more dubious—beverage trend: “raw” water.

The term refers to unfiltered, untreated, unsterilized water collected from natural springs. In the ten days following Juicero’s collapse, Evans underwent a cleanse, drinking only raw water from a company called Live Water, according to The New York Times. “I haven’t tasted tap water in a long time,” he told the Times. And Evans isn’t alone; he’s a prominent member of a growing movement to “get off the water grid,” the paper reports.

Members are taking up the unrefined drink due to both concern for the quality of tap water and the perceived benefits of drinking water in a natural state. Raw water enthusiasts are wary of the potential for contaminants in municipal water, such as traces of unfilterable pharmaceuticals and lead from plumbing. Some are concerned by harmless additives in tap water, such as disinfectants and fluoride, which effectively reduces tooth decay. Moreover, many believe that drinking “living” water that’s organically laden with minerals, bacteria, and other “natural” compounds has health benefits, such as boosting “energy” and “peacefulness.”

Mukhande Singh (né Christopher Sanborn), founder of Live Water, told the Times that tap water was “dead” water. “Tap water? You’re drinking toilet water with birth control drugs in them,” he said. “Chloramine, and on top of that they’re putting in fluoride. Call me a conspiracy theorist, but it’s a mind-control drug that has no benefit to our dental health.” (Note: There is plenty of data showing that fluoride improves dental health, but none showing water-based mind control.)

by Beth Mole, ARS Technica |  Read more:
Image: Live Water

Wednesday, January 3, 2018


photo: markk
repost

Flip of a Coin

Before antidepressants became mainstream, drugs that treated various symptoms of depression were depicted as “tonics which could ease people through the ups and downs of normal, everyday existence,” write Jeffrey Lacasse, a Florida State University professor specializing in psychiatric medications, and Jonathan Leo, a professor of anatomy at Lincoln Memorial University, in a 2007 paper on the history of the chemical imbalance theory.

In the 1950s, Bayer marketed Butisol (a barbiturate) as “the ‘daytime sedative’ for everyday emotional stress”; in the 1970s, Roche advertised Valium (diazepam) as a treatment for the “unremitting buildup of everyday emotional stress resulting in disabling tension.”

Both the narrative and the use of drugs to treat symptoms of depression transformed after Prozac—the brand name for fluoxetine—was released. “Prozac was unique when it came out in terms of side effects compared to the antidepressants available at the time (tricyclic antidepressants and monoamine oxidase inhibitors),” Anthony Rothschild, psychiatry professor at the University of Massachusetts Medical School, writes in an email. “It was the first of the newer antidepressants with less side effects.”

Even the minimum therapeutic dose of commonly prescribed tricyclics like amitriptyline (Elavil) could cause intolerable side effects, says Hyman. “Also these drugs were potentially lethal in overdose, which terrified prescribers.” The market for early antidepressants, as a result, was small.

Prozac changed everything. It was the first major success in the selective serotonin reuptake inhibitor (SSRI) class of drugs, designed to target serotonin, a neurotransmitter. It was followed by many more SSRIs, which came to dominate the antidepressant market. The variety affords choice, which means that anyone who experiences a problematic side effect from one drug can simply opt for another. (Each antidepressant causes variable and unpredictable side effects in some patients. Deciding which antidepressant to prescribe to which patient has been described as a “flip of a coin.”)

Rothschild notes that all existing antidepressants have similar efficacy. “No drug today is more efficacious than the very first antidepressants such as the tricyclic imipramine,” agrees Hyman. Three decades after Prozac arrived, there are many more antidepressant options, but no improvement in the efficacy of treatment.

Meanwhile, as Lacasse and Leo note in a 2005 paper, manufacturers typically marketed these drugs with references to chemical imbalances in the brain. For example, a 2001 television ad for sertraline (another SSRI) said, “While the causes are unknown, depression may be related to an imbalance of natural chemicals between nerve cells in the brain. Prescription Zoloft works to correct this imbalance.”

Another advertisement, this one in 2005, for the drug paroxetine, said, “With continued treatment, Paxil can help restore the balance of serotonin,” a neurotransmitter.

“[T]he serotonin hypothesis is typically presented as a collective scientific belief,” write Lacasse and Leo, though, as they note: “There is not a single peer-reviewed article that can be accurately cited to directly support claims of serotonin deficiency in any mental disorder, while there are many articles that present counterevidence.”

Despite the lack of evidence, the theory has saturated society. In their 2007 paper, Lacasse and Leo point to dozens of articles in mainstream publications that refer to chemical imbalances as the unquestioned cause of depression. One New York Times article on Joseph Schildkraut, the psychiatrist who first put forward the theory in 1965, states that his hypothesis “proved to be right.” When Lacasse and Leo asked the reporter for evidence to support this unfounded claim, they did not get a response. A decade on, there are still dozens of articles published every month in which depression is unquestionably described as the result of a chemical imbalance, and many people explain their own symptoms by referring to the myth.

Meanwhile, 30 years after Prozac was released, rates of depression are higher than ever.
* * *
Hyman responds succinctly when I ask him to discuss the causes of depression: “No one has a clue,” he says.

There’s not “an iota of direct evidence” for the theory that a chemical imbalance causes depression, Hyman adds. Early papers that put forward the chemical imbalance theory did so only tentatively, but, “the world quickly forgot their cautions,” he says.

Depression, according to current studies, has an estimated heritability of around 37%, so genetics and biology certainly play a significant role. Brain activity corresponds with experiences of depression, just as it corresponds with all mental experiences. This, says Horwitz, “has been known for thousands of years.” Beyond that, knowledge is precarious. “Neuroscientists don’t have a good way of separating when brains are functioning normally or abnormally,” says Horwitz.

If depression were a simple matter of adjusting serotonin levels, SSRIs should work immediately, rather than taking weeks to have an effect. Likewise, reducing serotonin levels in the brain should create a state of depression, but research has found that this isn’t the case. One drug, tianeptine (a non-SSRI sold under the brand names Stablon and Coaxil across Europe, South America, and Asia, though not the UK or US), has the opposite effect of most antidepressants and decreases levels of serotonin.

This doesn’t mean that antidepressants that affect levels of serotonin definitively don’t work—it simply means that we don’t know if they’re affecting the root cause of depression. A drug’s effect on serotonin could be a relatively inconsequential side effect, rather than the crucial treatment.

by Olivia Goldhill, Quartz |  Read more:
Image: Reuters/Lucy Nicholson
[ed. See also: Sometimes Depression Means Not Feeling Anything At All]

The Biggest Secret

My Life as a New York Times Reporter in the Shadow of the War on Terror

There's no press room at CIA headquarters, like there is at the White House. The agency doesn’t hand out press passes that let reporters walk the halls, the way they do at the Pentagon. It doesn’t hold regular press briefings, as the State Department has under most administrations. The one advantage that reporters covering the CIA have is time. Compared to other major beats in Washington, the CIA generates relatively few daily stories. You have more time to dig, more time to meet people and develop sources.

I started covering the CIA in 1995. The Cold War was over, the CIA was downsizing, and CIA officer Aldrich Ames had just been unmasked as a Russian spy. A whole generation of senior CIA officials was leaving Langley. Many wanted to talk.

I was the first reporter many of them had ever met. As they emerged from their insular lives at the CIA, they had little concept of what information would be considered newsworthy. So I decided to show more patience with sources than I ever had before. I had to learn to listen and let them talk about whatever interested them. They had fascinating stories to tell.

In addition to their experiences in spy operations, many had been involved in providing intelligence support at presidential summit meetings, treaty negotiations, and other official international conferences. I realized that these former CIA officers had been backstage at some of the most historic events over the last few decades and thus had a unique and hidden perspective on what had happened behind the scenes in American foreign policy. I began to think of these CIA officers like the title characters in Tom Stoppard’s play “Rosencrantz and Guildenstern Are Dead,” in which Stoppard reimagines “Hamlet” from the viewpoint of two minor characters who fatalistically watch Shakespeare’s play from the wings. (...)

Success as a reporter on the CIA beat inevitably meant finding out government secrets, and that meant plunging headlong into the classified side of Washington, which had its own strange dynamics.

I discovered that there was, in effect, a marketplace of secrets in Washington, in which White House officials and other current and former bureaucrats, contractors, members of Congress, their staffers, and journalists all traded information. This informal black market helped keep the national security apparatus running smoothly, limiting nasty surprises for all involved. The revelation that this secretive subculture existed, and that it allowed a reporter to glimpse the government’s dark side, was jarring. It felt a bit like being in the Matrix.

Once it became known that you were covering this shadowy world, sources would sometimes appear in mysterious ways. In one case, I received an anonymous phone call from someone with highly sensitive information who had read other stories I had written. The information from this new source was very detailed and valuable, but the person refused to reveal her identity and simply said she would call back. The source called back several days later with even more information, and after several calls, I was able to convince her to call at a regular time so I would be prepared to talk. For the next few months, she called once every week at the exact same time and always with new information. Because I didn’t know who the source was, I had to be cautious with the information and never used any of it in stories unless I could corroborate it with other sources. But everything the source told me checked out. Then after a few months, she abruptly stopped calling. I never heard from her again, and I never learned her identity. (...)

Disclosures of confidential information to the press were generally tolerated as facts of life in this secret subculture. The media acted as a safety valve, letting insiders vent by leaking. The smartest officials realized that leaks to the press often helped them, bringing fresh eyes to stale internal debates. And the fact that the press was there, waiting for leaks, lent some discipline to the system. A top CIA official once told me that his rule of thumb for whether a covert operation should be approved was, “How will this look on the front page of the New York Times?” If it would look bad, don’t do it. Of course, his rule of thumb was often ignored.

For decades, official Washington did next to nothing to stop leaks. The CIA or some other agency would feign outrage over the publication of a story it didn’t like. Officials launched leak investigations but only went through the motions before abandoning each case. It was a charade that both government officials and reporters understood. (...)

One reason that officials didn’t want to conduct aggressive leak investigations was that they regularly engaged in quiet negotiations with the press to try to stop the publication of sensitive national security stories. Government officials seemed to understand that a get-tough approach to leaks might lead to the breakdown of this informal arrangement. (...)

That spring, just as the U.S.-led invasion of Iraq began, I called the CIA for comment on a story about a harebrained CIA operation to turn over nuclear blueprints to Iran. The idea was that the CIA would give the Iranians flawed blueprints, and Tehran would use them to build a bomb that would turn out to be a dud.

The problem was with the execution of the secret plan. The CIA had taken Russian nuclear blueprints it had obtained from a defector and then had American scientists riddle them with flaws. The CIA then asked another Russian to approach the Iranians. He was supposed to pretend to be trying to sell the documents to the highest bidder.

But the design flaws in the blueprints were obvious. The Russian who was supposed to hand them over feared that the Iranians would quickly recognize the errors, and that he would be in trouble. To protect himself when he dropped off the documents at an Iranian mission in Vienna, he included a letter warning that the designs had problems. So the Iranians received the nuclear blueprints and were also warned to look for the embedded flaws.

Several CIA officials believed that the operation had either been mismanaged or at least failed to achieve its goals. By May 2003, I confirmed the story through a number of sources, wrote up a draft, and called the CIA public affairs office for comment.

Instead of responding to me, the White House immediately called Washington Bureau Chief Jill Abramson and demanded a meeting.

The next day, Abramson and I went to the West Wing of the White House to meet with National Security Adviser Condoleezza Rice. In her office, just down the hall from the Oval Office, we sat across from Rice and George Tenet, the CIA director, along with two of their aides.

Rice stared straight at me. I had received information so sensitive that I had an obligation to forget about the story, destroy my notes, and never make another phone call to discuss the matter with anyone, she said. She told Abramson and me that the New York Times should never publish the story. (...)

In the spring of 2004, just as the Plame case was heating up and starting to change the dynamics between the government and the press, I met with a source who told me cryptically that there was something really big and really secret going on inside the government. It was the biggest secret the source had ever heard. But it was something the source was too nervous to discuss with me. A new fear of aggressive leak investigations was filtering down. I decided to stay in touch with the source and raise the issue again.

Over the next few months, I met with the source repeatedly, but the person never seemed willing to divulge what the two of us had begun to refer to as “the biggest secret.” Finally, in the late summer of 2004, as I was leaving a meeting with the source, I said I had to know what the secret was. Suddenly, as we were standing at the source’s front door, everything spilled out. Over the course of about 10 minutes, the source provided a detailed outline of the NSA’s massive post-9/11 domestic spying program, which I later learned was code-named Stellar Wind.

The source told me that the NSA had been wiretapping Americans without search warrants, without court approval. The NSA was also collecting the phone and email records of millions of Americans. The operation had been authorized by the president. The Bush administration was engaged in a massive domestic spying program that was probably illegal and unconstitutional, and only a handful of carefully selected people in the government knew about it.

I left that meeting shocked, but as a reporter, I was also elated. I knew that this was the story of a lifetime.

by James Risen, The Intercept |  Read more:
Image: Elise Swain/Getty. Virginia Lozano for The Intercept
[ed. Must read account of collusion and coercion between the US government and media.]