Friday, August 14, 2015

How Wall Street’s Bankers Stayed Out of Jail



The probes into bank fraud leading up to the financial industry’s crash have been quietly closed. Is this justice?

On May 27, in her first major prosecutorial act as the new U.S. attorney general, Loretta Lynch unsealed a 47-count indictment against nine FIFA officials and another five corporate executives. She was passionate about their wrongdoing. “The indictment alleges corruption that is rampant, systemic, and deep-rooted both abroad and here in the United States,” she said. “Today’s action makes clear that this Department of Justice intends to end any such corrupt practices, to root out misconduct, and to bring wrongdoers to justice.”

Lost in the hoopla surrounding the event was a depressing fact. Lynch and her predecessor, Eric Holder, appear to have turned the page on a more relevant vein of wrongdoing: the profligate and dishonest behavior of Wall Street bankers, traders, and executives in the years leading up to the 2008 financial crisis. How we arrived at a place where Wall Street misdeeds go virtually unpunished while soccer executives in Switzerland get arrested is murky at best. But the legal window for punishing Wall Street bankers for fraudulent actions that contributed to the 2008 crash has just about closed. It seems an apt time to ask: In the biggest picture, what justice has been achieved?

Since 2009, 49 financial institutions have paid various government entities and private plaintiffs nearly $190 billion in fines and settlements, according to an analysis by the investment bank Keefe, Bruyette & Woods. That may seem like a big number, but the money has come from shareholders, not individual bankers. (Settlements were levied on corporations, not specific employees, and paid out as corporate expenses—in some cases, tax-deductible ones.) In early 2014, just weeks after Jamie Dimon, the CEO of JPMorgan Chase, settled out of court with the Justice Department, the bank’s board of directors gave him a 74 percent raise, bringing his pay to $20 million.

The more meaningful number is how many Wall Street executives have gone to jail for playing a part in the crisis. That number is one. (Kareem Serageldin, a senior trader at Credit Suisse, is serving a 30-month sentence for inflating the value of mortgage bonds in his trading portfolio, allowing them to appear more valuable than they really were.) By way of contrast, following the savings-and-loan crisis of the 1980s, more than 1,000 bankers of all stripes were jailed for their transgressions. (...)

Any narrative of how we got to this point has to start with the so-called Holder Doctrine, a June 1999 memorandum written by the then–deputy attorney general warning of the dangers of prosecuting big banks—a variant of the “too big to fail” argument that has since become so familiar. Holder’s memo asserted that “collateral consequences” from prosecutions—including corporate instability or collapse—should be taken into account when deciding whether to prosecute a big financial institution. That sentiment was echoed as late as 2012 by Lanny Breuer, then the head of the Justice Department’s criminal division, who said in a speech at the New York City Bar Association that he felt it was his duty to consider the health of the company, the industry, and the markets in deciding whether or not to file charges.

by William D. Cohan, The Atlantic |  Read more:
Image: Matt Chase

The Final File

Due to some flaw in my personality, I love thinking about where my data goes after I fill out a form. Friends who work for giant banks describe projects that take years—endless regulatory documents, huge meetings, all to move a few on-screen pixels around. It can take 18 months to change a couple of text boxes. Why? What bureaucracy forces that slow pace? I like to imagine the flow of paperwork through the world, even if there’s no paper to consider.

This happens to me every few months, the desire to just grab and hold on to and explore a large database. I’ll download the text on Wikipedia, for example, and mess around with it. I might erase it to save space later. I’ll go fetch a few thousand public domain books and make a list of word frequencies, or get a list of millions of songs. I don’t have a motive; some idea just tugs at my shirttails, and, well, why not? It’s only a few hours of a lifetime, and perhaps I’ll discover something new. Databases are interesting to read and explore. They’re one of the things that make the web the web.
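
For the curious, the sort of idle exercise described above, pulling down a pile of public-domain texts and tallying word frequencies, fits in a few lines of Python. This is a minimal sketch under stated assumptions (a local folder of .txt files and a crude tokenizer), not a description of the author's actual workflow:

```python
# Minimal sketch: tally word frequencies across a folder of plain-text books.
# The directory name and the tokenizer are illustrative assumptions.
import re
from collections import Counter
from pathlib import Path

def word_frequencies(book_dir: str) -> Counter:
    counts = Counter()
    for path in Path(book_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        counts.update(re.findall(r"[a-z']+", text))  # words = runs of letters/apostrophes
    return counts

if __name__ == "__main__":
    freqs = word_frequencies("public_domain_books")  # hypothetical folder of downloaded texts
    for word, n in freqs.most_common(20):
        print(f"{n:>8}  {word}")
```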

Which explains how I found myself in possession of the names of more than 85 million dead Americans—the Social Security Death Master File. I’d asked on Twitter for interesting databases, and someone told me: Check this one out! It’s full of corpses! But after I had a copy, I realized that it’s a strange thing to be in possession of a massive list of dead people. It turns out that not just anyone is supposed to download the government’s book of death; you must undergo certification by the National Technical Information Service to demonstrate a legitimate reason to use the data, and pay $1,825 for an entry-level subscription to access it. Restrictions on use and security were added in the federal budget for 2014. Its customers are banks and other organizations that want to track data about dead people to protect their interests and avoid fraud. The data also shows up on genealogy websites, while other companies resell access as a service.

Social Security numbers were first issued in 1936, to track people through the Social Security system—payment of benefits, and so forth. Getting a Social Security card became a ritual of American life. I remember going to some blank office, aged eleven, and signing with pomp, because how often do eleven-year-olds get to sign anything official?

A universal identifying number for Americans turned out to be extraordinarily handy for all: You already had nine digits identifying you; why should a giant company go to the trouble of creating another? So the numbers began to be used outside of the Social Security system. This was bad. Having a single, semi-secure number that represents an American citizen turns out to be a massive security flaw in how we identify and manage human beings online. Very leaky. The numbers, required for banks and mortgages and college applications, are used way too broadly. They are listed in vulnerable databases. Those databases are hacked. The stolen information is distributed and resold. A secondary industry of credit fraud has flourished to handle this flaw. We were not ready for a world where everyone can be associated with a number; no one anticipated how broken our digital world would become.

But that’s all for the living. When we are done with life, like great baseball players, our numbers are retired, and this being the government, filed, into the Death Master File. Which is, as you’d expect, a list of dead people and their similarly deceased SSNs. (...)

A name is just some impulse by your parents; it does not determine who you are, except alphabetically, and yet it’s hard not to have at least a little occult sensitivity about one’s name. I have a common name and, occasionally, will hear from other people on Twitter or via email who share it, marveling at the coincidence; one person wrote to ask me to keep our name cool. I don’t know how I’m doing at that.

Being a digitally minded person, I’ve followed the large public conversation about what happens to our passwords and social media presences when we die. How do you get into a dead person’s Gmail? What should happen to their blog posts? Every social media platform must eventually face the consequences of its users dying. The tension is between these relatively new institutions, like Facebook and Twitter, and our typically long lives. They are too young to know what to do with our deaths and are learning to cope, as all children must.

by Paul Ford, TNR |  Read more:
Image: Mikey Burton

Thursday, August 13, 2015


Shibata Zeshin, Landscape
via:

Yeast Can Now Make Opiates

Yeast isn’t just for beer and bread — now it makes opiates, too.

A strain of yeast engineered in a lab was able to transform sugar into a pain-killing drug — called hydrocodone — for the first time. And a second strain was able to produce thebaine, an opiate precursor that drug companies use to make oxycodone. The findings, published in Science, could completely change the way drug companies make pain-relieving medicine. Unfortunately, it may also open the door to less positive outcomes, like “home-brewed” heroin.

Opiates like heroin and morphine are made from opium poppies grown in places like Australia, Europe, and the Middle East — producing the stuff in a morphine drip is an expensive process that takes over a year. An estimated 5.5 billion people have trouble accessing pain treatments worldwide, partly because of their cost. So scientists have been hoping to drive down costs with yeast-made opiates. But until recently, engineered yeast have only been able to make small quantities of a chemical precursor that, through a number of steps, could be used to make morphine and codeine. That's why today's study is so important; it's the first example of scientists altering yeast's genetic code to successfully transform sugar into an actual opioid painkiller. (...)

In the short term, yeast-made opiates might lead to cheaper drugs. But the true excitement is farther down the road: scientists may be able to use this technology to make more effective pain-killers. "We're not just limited to what happens in nature or what the poppies make," Smolke says. "We can begin to modify these compounds in ways that will, for example, reduce the negative side effects that are associated with these medicines, but still keep the pain relieving properties."

The two yeast strains aren't anywhere near ready for commercial use. Right now, they make such small quantities of drugs that it would take about 4,400 gallons of engineered yeast to make a single dose of standard pain-relieving medicine. So the next step for researchers is boosting the drug yields — which could take years. And for once, that might actually be a good thing; health officials and scientists will need that time to figure out how to keep these strains from being used to fuel the illegal drug market.

by Arielle Duhaime-Ross, The Verge | Read more:
Image: Lily_M / Wikimedia Commons

Social Security At 80

Social Security turns 80 on Friday, and the massive retirement and disability program is showing its age.

Social Security's disability fund is projected to run dry next year. The retirement fund has enough money to pay full benefits until 2035. But once the fund is depleted, the shortfalls are enormous.

The stakes are huge: Nearly 60 million retirees, disabled workers, spouses and children get monthly Social Security payments, a number that is projected to grow to 90 million over the next two decades.

And the timing is bad: Social Security faces these problems as fewer employers are offering traditional pensions, forcing older workers to think hard about how they will afford retirement.

"This is a program that's been immensely popular since it began," said Nancy LeaMond, executive vice president of AARP. "Increasingly, people recognize that saving for retirement is becoming harder and harder, and Social Security is becoming even more important."

President Franklin Delano Roosevelt signed the Social Security Act on Aug. 14, 1935. Here are things to know about the federal government's largest program on its 80th birthday: (...)

How big is the long-term problem?

The numbers are beyond comprehension.

Social Security uses a 75-year window to forecast its finances, so the projections cover the life expectancy of every worker paying into the system. Over the next 75 years, Social Security is projected to pay out $159 trillion more in benefits than it will collect in taxes, according to agency data.

That's not a typo.

Adjusted for inflation, it comes to $35.3 trillion in 2015 dollars. That's nearly twice the national debt, which took the entire federal government 239 years to accumulate.

Did Congress already spend the trust funds?

Yes. For much of the past three decades, Social Security produced big surpluses, collecting more in taxes than it paid in benefits. Social Security invested those surpluses in special U.S. Treasury bonds, which are backed by the full faith and credit of the U.S. government.

They are now valued at $2.8 trillion.

But as Social Security was generating surpluses, the rest of the federal government was running deficits, for all but a few years around the turn of the century.

To finance deficit spending, the Treasury borrowed from the public and from other federal programs, including Social Security.

by Stephen Ohlemacher, AP |  Read more:
Image: AP

Wednesday, August 12, 2015

Her Hair

While riding my university shuttle, I used to stare at women’s hair. They were mostly young white women like me, who would sit in rows facing each other at the front of the bus, compulsively checking their phones. I would ride to campus in the afternoons and just gaze, marveling at how well-coiffed they were, all of them with the same long, straightened, voluminous hair—the hair that sets the standard for all other hair. The rows of hair would be so perfect and shiny and smooth—not a single strand out of place, no flyaway or split ends—that I could stare into them as if they were one luminous mass, like volcanic glass.

I come from a line of perfect women: perfectly dressed, cordial, well spoken. An unbroken line of scheduling and doing and achieving. I remember my mother telling me, more than once, that the only thing that matters is that I be an intelligent, educated woman. I was squandering my potential if I was anything less. And that would be a shame, she said.

As a little girl, I remember watching her at her vanity, doing her hair. She would take an impossibly long time to get ready each morning: daubing on makeup out of shiny containers, plucking earrings off her jewelry stand—and curling her hair. As a young mother in the ’90s, she had a chin-length bob and bangs that she styled precisely at the end of her morning routine using a prickly curling brush, which looked like a plant from a Dr. Seuss book. Hair, to me, was the crowning glory. It’s the last thing you fix and put into place before you walk out the door, the thing that signifies that you’re really together.

My mother has never shown up underprepared in her life. She is a doctorate-holding professor, an only child, the first in her family to go to college. When I think of her as a mother, I picture the perfect calendar she kept for my brother and me as children: the times of our gymnastics classes and choir practices and basketball games all penned neatly into the squares, with dentist and doctors’ appointment cards taped in columns down the sides. Her mother was the same way.

I’ve never known how to live up to my maternal line, though I’ve burned up a lot of energy trying. Womanhood to me is the feeling of always striving. Striving even when there is no endpoint. I learned early on that to be a good woman—a strong woman—means scheduling, doing, achieving. You execute this series flawlessly and without any complaints. You survive in this world by showing up, pretty and prepared and perfect, hopefully more articulate than anyone else in the room—and always with done hair.

In the chapter of The Second Sex where Simone de Beauvoir makes her famous pronouncement that one is not born, but rather becomes a woman, she further argues that an essential part of this “becoming” involves practicing womanhood through an alter ego: a doll. De Beauvoir describes how it represents the female body, a passive object to be coddled:
The little girl coddles her doll and dresses her up as she dreams of being coddled and dressed up herself; inversely, she thinks of herself as a marvelous doll… she soon learns that in order to be pleasing she must be “pretty as a picture”… she puts on fancy clothes, she studies herself in a mirror, she compares herself with princesses and fairies.
Feminists have often identified hair grooming as the first lesson in gender socialization. Dolls are perfectly designed to aid girls in learning submission, letting them play-act the labor that will later be expected of them when it comes to appearances.

Naturally, the most famous example of this is Barbie. Ann duCille, who writes extensively about black Barbie in her 1996 book Skin Trade, recalls her experience researching the book: poking around in the aisles of Toys “R” Us looking for the latest black Barbie doll. She also includes an impromptu interview she had with a black teenage girl, who confessed to duCille “in graphic detail” the many Barbie “murders and mutilations” she had committed over the years. “It’s the hair! It’s the hair!” the girl told duCille. “The hair, that hair; I want it. I want it!”

by Rachel Wilkinson, Vela |  Read more:
Image: Mike Mozart

The Spike: What Lies Behind the New Heroin Epidemic?

[ed. One city. Millions of syringes.]

Heroin use, which had been relatively stable through most of the decade, began to spike in the late 2000s throughout the United States. Cheap and plentiful, the drug is a staple commodity in an underground market that is as big as the globe and as intimate as your arm. And while heroin has enjoyed widespread popularity since the end of World War II, demand has soared in the past five years as jonesing prescription opioid addicts like Lance have migrated to the street. (...)

Researchers trace the rise in heroin use, in part, to the doctor’s office. In the late 1990s, there was a shift in health-care philosophy that emphasized treating patients’ pain rather than just the underlying ailments causing it. Opioids that had previously been restricted to ailments like cancer or physical trauma suddenly became widely available for more broadly defined problems like chronic pain. At the same time, Purdue Pharma introduced and aggressively marketed OxyContin (the brand name for oxycodone), a painkiller designed to gradually release opioids into the body.

As the sheer amount of opioids prescribed to Americans suddenly jumped, the drugs naturally found their way onto the street.

Users quickly figured out how to circumvent the drug’s time-delay feature, making oxy the vehicle of choice for people who wanted to get high on prescription drugs. “The Gucci, the drug that people wanted,”—people like Lance—“was OxyContin,” says Banta-Green. (...)

Withdrawal from opioids isn’t lethal, as it can be with alcohol or benzodiazepines (Valium), but it is deeply unpleasant, particularly for people with the kind of trauma or poverty that might drive them to drug abuse in the first place. Medication-assisted detox can ease the withdrawal by manipulating the brain receptors that trigger cravings. But without meds, a seasoned opioid addict can expect perhaps a week of snot, sweating, vomiting, nausea, and hot- and cold-flashes, plus—and often more importantly—the resurfacing of painful emotions that had previously been repressed by their drug use. Some addicts do manage to white-knuckle their way out of opioid addiction, but many—separated from friends and resources—are overwhelmed by the painful emptiness of their sober lives. Others, recognizing themselves as “addicts” who are a scourge on their friends and family, fall into a cycle of despair that heroin is particularly good at feeding.

For those on the frontlines of the new heroin epidemic, it’s that loss of hope that is nearly as dangerous as the drug itself. (...)

The holy grail of Murphy’s work, he says, is to reverse that exclusion—to welcome drug users back into the human community. “Our job ... is to convince them that they’re worth something,” he says, because “then you will make different choices” than someone who revels in self-destruction. So the Alliance tries to meet users where they’re at instead of telling them where they should be. Sometimes this looks like the abstinence that Lance tried and failed to achieve; other times, it’s finding a way to stabilize their drug use.

Murphy’s motto, he says, is “Be the best damn drug user that you can be.”

He shows me the Alliance’s supply room. Brown cardboard boxes are piled up to the ceiling, packed so deep there’s barely room for us to shimmy between them. Boxes of syringes are stacked in towering brown columns.

The Alliance gives out a lot of syringes—about 3.2 million per year to King County residents, says Murphy, and collects back as many as 5 million used ones. About a million of the former go to suburban users, he says—a demographic that he saw rapidly grow starting around 2010. That would have been around the same time that Lance, and thousands of others, began their migration from Big Pharma to black tar.

“For me, it was a really sad and stressful time,” says Murphy. For a couple of months, the phone at the Alliance was ringing off the hook from prescription users asking for help. “We were getting multiple calls every week,” says Murphy, from frightened suburbanites trying to figure out how to buy heroin. Callers would say “I’m so scared” and “You gotta help me.” But Murphy couldn’t: The Alliance doesn’t hook people up with drugs. “It was hard to hear all these young folks in this really chaotic and traumatic experience,” he says. “We saw these folks quickly change into injection drug users, sometimes on the streets, sometimes in the suburbs.” Stable drug use, says Murphy, was transforming into unstable drug use, and quality-controlled drugs were being replaced by heroin off the street. “Our delivery service really skyrocketed, to where in the Eastside and North King County, we do over a million syringes a year just delivering to the suburbs. The suburbs have just as much injection drug use as the city.

“The average drug user,” he says, “was much younger, and much more, let’s say, lack of city smarts or street smarts. It was really sad, that whole story and that generation. There wasn’t really a lot of older drug users to help teach them. They were left on their own.”

None of this seems fair to Murphy. “We give people OxyContin,” he says, referring to society at large, “which is essentially legal heroin, and then we tell them that they can’t have it anymore and the only way they can get it is street heroin. We also let drug cartels be our FDA on what’s quality control. We allow people to ingest horrible cuts of drugs, with people getting horrible allergic reactions to stuff it’s [mixed] with.” Criminalization, he says, only drives people further into addiction, cutting them off from the social bonds that can help addicts to cope with undiluted reality.

“It’s not that hard to figure out that beating a human being up isn’t helpful,” says Murphy. “It’s not that hard to figure out that stripping someone of their rights and dignities by taking them to jail is a detriment to society.”

by Casey Jaywork, Seattle Weekly | Read more:
Image: Barry Blankenship

Tuesday, August 11, 2015

War

A World Without Work

For centuries, experts have predicted that machines would make workers obsolete. That moment may finally be arriving. Could that be a good thing?

In the past few years, even as the United States has pulled itself partway out of the jobs hole created by the Great Recession, some economists and technologists have warned that the economy is near a tipping point. When they peer deeply into labor-market data, they see troubling signs, masked for now by a cyclical recovery. And when they look up from their spreadsheets, they see automation high and low—robots in the operating room and behind the fast-food counter. They imagine self-driving cars snaking through the streets and Amazon drones dotting the sky, replacing millions of drivers, warehouse stockers, and retail workers. They observe that the capabilities of machines—already formidable—continue to expand exponentially, while our own remain the same. And they wonder: Is any job truly safe?

Futurists and science-fiction writers have at times looked forward to machines’ workplace takeover with a kind of giddy excitement, imagining the banishment of drudgery and its replacement by expansive leisure and almost limitless personal freedom. And make no mistake: if the capabilities of computers continue to multiply while the price of computing continues to decline, that will mean a great many of life’s necessities and luxuries will become ever cheaper, and it will mean great wealth—at least when aggregated up to the level of the national economy.

But even leaving aside questions of how to distribute that wealth, the widespread disappearance of work would usher in a social transformation unlike any we’ve seen. If the labor-studies professor John Russo is right, then saving work is more important than saving any particular job. Industriousness has served as America’s unofficial religion since its founding. The sanctity and preeminence of work lie at the heart of the country’s politics, economics, and social interactions. What might happen if work goes away? (...)

What does the “end of work” mean, exactly? It does not mean the imminence of total unemployment, nor is the United States remotely likely to face, say, 30 or 50 percent unemployment within the next decade. Rather, technology could exert a slow but continual downward pressure on the value and availability of work—that is, on wages and on the share of prime-age workers with full-time jobs. Eventually, by degrees, that could create a new normal, where the expectation that work will be a central feature of adult life dissipates for a significant portion of society.

After 300 years of people crying wolf, there are now three broad reasons to take seriously the argument that the beast is at the door: the ongoing triumph of capital over labor, the quiet demise of the working man, and the impressive dexterity of information technology.

Labor’s losses. One of the first things we might expect to see in a period of technological displacement is the diminishment of human labor as a driver of economic growth. In fact, signs that this is happening have been present for quite some time. The share of U.S. economic output that’s paid out in wages fell steadily in the 1980s, reversed some of its losses in the ’90s, and then continued falling after 2000, accelerating during the Great Recession. It now stands at its lowest level since the government started keeping track in the mid‑20th century.

A number of theories have been advanced to explain this phenomenon, including globalization and its accompanying loss of bargaining power for some workers. But Loukas Karabarbounis and Brent Neiman, economists at the University of Chicago, have estimated that almost half of the decline is the result of businesses’ replacing workers with computers and software. In 1964, the nation’s most valuable company, AT&T, was worth $267 billion in today’s dollars and employed 758,611 people. Today’s telecommunications giant, Google, is worth $370 billion but has only about 55,000 employees—less than a tenth the size of AT&T’s workforce in its heyday.

The spread of nonworking men and underemployed youth. The share of prime-age Americans (25 to 54 years old) who are working has been trending down since 2000. Among men, the decline began even earlier: the share of prime-age men who are neither working nor looking for work has doubled since the late 1970s, and has increased as much throughout the recovery as it did during the Great Recession itself. All in all, about one in six prime-age men today are either unemployed or out of the workforce altogether. This is what the economist Tyler Cowen calls “the key statistic” for understanding the spreading rot in the American workforce. Conventional wisdom has long held that under normal economic conditions, men in this age group—at the peak of their abilities and less likely than women to be primary caregivers for children—should almost all be working. Yet fewer and fewer are.

Economists cannot say for certain why men are turning away from work, but one explanation is that technological change has helped eliminate the jobs for which many are best suited. Since 2000, the number of manufacturing jobs has fallen by almost 5 million, or about 30 percent.

Young people just coming onto the job market are also struggling—and by many measures have been for years. Six years into the recovery, the share of recent college grads who are “underemployed” (in jobs that historically haven’t required a degree) is still higher than it was in 2007—or, for that matter, 2000. And the supply of these “non-college jobs” is shifting away from high-paying occupations, such as electrician, toward low-wage service jobs, such as waiter. More people are pursuing higher education, but the real wages of recent college graduates have fallen by 7.7 percent since 2000. In the biggest picture, the job market appears to be requiring more and more preparation for a lower and lower starting wage. The distorting effect of the Great Recession should make us cautious about overinterpreting these trends, but most began before the recession, and they do not seem to speak encouragingly about the future of work.

The shrewdness of software. One common objection to the idea that technology will permanently displace huge numbers of workers is that new gadgets, like self-checkout kiosks at drugstores, have failed to fully displace their human counterparts, like cashiers. But employers typically take years to embrace new machines at the expense of workers. The robotics revolution began in factories in the 1960s and ’70s, but manufacturing employment kept rising until 1980, and then collapsed during the subsequent recessions. Likewise, “the personal computer existed in the ’80s,” says Henry Siu, an economist at the University of British Columbia, “but you don’t see any effect on office and administrative-support jobs until the 1990s, and then suddenly, in the last recession, it’s huge. So today you’ve got checkout screens and the promise of driverless cars, flying drones, and little warehouse robots. We know that these tasks can be done by machines rather than people. But we may not see the effect until the next recession, or the recession after that.”

Some observers say our humanity is a moat that machines cannot cross. They believe people’s capacities for compassion, deep understanding, and creativity are inimitable. But as Erik Brynjolfsson and Andrew McAfee have argued in their book The Second Machine Age, computers are so dexterous that predicting their application 10 years from now is almost impossible. Who could have guessed in 2005, two years before the iPhone was released, that smartphones would threaten hotel jobs within the decade, by helping homeowners rent out their apartments and houses to strangers on Airbnb? Or that the company behind the most popular search engine would design a self-driving car that could soon threaten driving, the most common occupation among American men?

In 2013, Oxford University researchers forecast that machines might be able to perform half of all U.S. jobs in the next two decades. The projection was audacious, but in at least a few cases, it probably didn’t go far enough. For example, the authors named psychologist as one of the occupations least likely to be “computerisable.” But some research suggests that people are more honest in therapy sessions when they believe they are confessing their troubles to a computer, because a machine can’t pass moral judgment. Google and WebMD already may be answering questions once reserved for one’s therapist. This doesn’t prove that psychologists are going the way of the textile worker. Rather, it shows how easily computers can encroach on areas previously considered “for humans only.”

by Derek Thompson, The Atlantic |  Read more:
Image: Adam Levey

Design Thinking Comes of Age

There’s a shift under way in large organizations, one that puts design much closer to the center of the enterprise. But the shift isn’t about aesthetics. It’s about applying the principles of design to the way people work.

This new approach is in large part a response to the increasing complexity of modern technology and modern business. That complexity takes many forms. Sometimes software is at the center of a product and needs to be integrated with hardware (itself a complex task) and made intuitive and simple from the user’s point of view (another difficult challenge). Sometimes the problem being tackled is itself multi-faceted: Think about how much tougher it is to reinvent a health care delivery system than to design a shoe. And sometimes the business environment is so volatile that a company must experiment with multiple paths in order to survive.

I could list a dozen other types of complexity that businesses grapple with every day. But here’s what they all have in common: People need help making sense of them. Specifically, people need their interactions with technologies and other complex systems to be simple, intuitive, and pleasurable.

A set of principles collectively known as design thinking—empathy with users, a discipline of prototyping, and tolerance for failure chief among them—is the best tool we have for creating those kinds of interactions and developing a responsive, flexible organizational culture.

What Is a Design-Centric Culture?

If you were around during the late-1990s dot-com craze, you may think of designers as 20-somethings shooting Nerf darts across an office that looks more like a bar. Because design has historically been equated with aesthetics and craft, designers have been celebrated as artistic savants. But a design-centric culture transcends design as a role, imparting a set of principles to all people who help bring ideas to life. Let’s consider those principles.

Focus on users’ experiences, especially their emotional ones.

To build empathy with users, a design-centric organization empowers employees to observe behavior and draw conclusions about what people want and need. Those conclusions are tremendously hard to express in quantitative language. Instead, organizations that “get” design use emotional language (words that concern desires, aspirations, engagement, and experience) to describe products and users. Team members discuss the emotional resonance of a value proposition as much as they discuss utility and product requirements.

A traditional value proposition is a promise of utility: If you buy a Lexus, the automaker promises that you will receive safe and comfortable transportation in a well-designed high-performance vehicle. An emotional value proposition is a promise of feeling: If you buy a Lexus, the automaker promises that you will feel pampered, luxurious, and affluent. In design-centric organizations, emotionally charged language isn’t denigrated as thin, silly, or biased. Strategic conversations in those companies frequently address how a business decision or a market trajectory will positively influence users’ experiences and often acknowledge only implicitly that well-designed offerings contribute to financial success.

The focus on great experiences isn’t limited to product designers, marketers, and strategists—it infuses every customer-facing function. Take finance. Typically, its only contact with users is through invoices and payment systems, which are designed for internal business optimization or predetermined “customer requirements.” But those systems are touch points that shape a customer’s impression of the company. In a culture focused on customer experience, financial touch points are designed around users’ needs rather than internal operational efficiencies.

by Jon Kolko, HBR |  Read more:
Image: via:

The Happiness Machine

How Google became such a great place to work.

A few years ago, Google’s human resources department noticed a problem: A lot of women were leaving the company. Like the majority of Silicon Valley software firms, Google is staffed mostly by men, and executives have long made it a priority to increase the number of female employees. But the fact that women were leaving Google wasn’t just a gender equity problem—it was affecting the bottom line. Unlike in most sectors of the economy, the market for top-notch tech employees is stretched incredibly thin. Google fights for potential workers with Apple, Facebook, Amazon, Microsoft, and hordes of startups, so every employee’s departure triggers a costly, time-consuming recruiting process.

Then there was the happiness problem. Google monitors its employees’ well-being to a degree that can seem absurd to those who work outside Mountain View. The attrition rate among women suggested there might be something amiss in the company’s happiness machine. And if there’s any sign that joy among Googlers is on the wane, it’s the Google HR department’s mission to figure out why and how to fix it.

Google calls its HR department People Operations, though most people in the firm shorten it to POPS. The group is headed by Laszlo Bock, a trim, soft-spoken 40-year-old who came to Google six years ago. Bock says that when POPS looked into Google’s woman problem, it found it was really a new mother problem: Women who had recently given birth were leaving at twice Google’s average departure rate. At the time, Google offered an industry-standard maternity leave plan. After a woman gave birth, she got 12 weeks of paid time off. For all other new parents in its California offices, but not for its workers outside the state, the company offered seven paid weeks of leave.

So in 2007, Bock changed the plan. New mothers would now get five months off at full pay and full benefits, and they were allowed to split up that time however they wished, including taking some of that time off just before their due date. If she likes, a new mother can take a couple months off after birth, return part time for a while, and then take the balance of her time off when her baby is older. Plus, Google began offering the seven weeks of new-parent leave to all its workers around the world.

Google’s lavish maternity and paternity leave plans probably don’t surprise you. The company’s swank perks—free gourmet food, on-site laundry, Wi-Fi commuting shuttles—are legendary in the corporate world, and they’ve driven a culture of ever-increasing luxuries for tech workers. This week, for the fourth consecutive year, Google was named the best company to work for by Fortune magazine; Microsoft was No. 75, while Apple, Amazon, and Facebook didn’t even make the list.

At times Google’s largesse can sound excessive—noble but wasteful from a bottom-line perspective. In August, for example, Forbes disclosed one previously unannounced Google perk—when an employee dies, the company pays his spouse or domestic partner half of his salary for a decade. Yet it would be a mistake to conclude that Google doles out such perks just to be nice. POPS rigorously monitors a slew of data about how employees respond to benefits, and it rarely throws money away. The five-month maternity leave plan, for instance, was a winner for the company. After it went into place, Google’s attrition rate for new mothers dropped down to the average rate for the rest of the firm. “A 50 percent reduction—it was enormous!” Bock says. What’s more, happiness—as measured by Googlegeist, a lengthy annual survey of employees—rose as well. Best of all for the company, the new leave policy was cost-effective. Bock says that if you factor in the savings in recruitment costs, granting mothers five months of leave doesn’t cost Google any more money.

The change in maternity leave exemplifies how POPS has helped Google become the country’s best employer. Under Bock, Google’s HR department functions more like a rigorous science lab than the pesky hall monitor most of us picture when we think of HR. At the heart of POPS is a sophisticated employee-data tracking program, an effort to gain empirical certainty about every aspect of Google’s workers’ lives—not just the right level of pay and benefits but also such trivial-sounding details as the optimal size and shape of the cafeteria tables and the length of the lunch lines.

In the last couple years, Google has even hired social scientists to study the organization. The scientists—part of a group known as the PiLab, short for People & Innovation Lab—run dozens of experiments on employees in an effort to answer questions about the best way to manage a large firm. How often should you remind people to contribute to their 401(k)s, and what tone should you use? Do successful middle managers have certain skills in common—and can you teach those skills to unsuccessful managers? Or, for that matter, do managers even matter—can you organize a company without them? And say you want to give someone a raise—how should you do it in a way that maximizes his happiness? Should you give him a cash bonus? Stock? A raise? More time off?

by Farhad Manjoo, Slate |  Read more:
Image: Google

Algorithmic Trading: The Play-at-Home Version

[ed. I wonder how his morning went today.]

After more than 100 hours of coding over three months, Mike Soule was finally ready to switch on his project. He didn’t know what to expect. If things went right, he could be on his way to financial success. If things went wrong, he could lose his savings.

His creation wasn’t a new mobile app or e-commerce store. It was a computer program that would buy and sell currencies 24 hours a day, five days a week.

DIY’s newest frontier is algorithmic trading. Spurred on by their own curiosity and coached by hobbyist groups and online courses, thousands of day-trading tinkerers are writing up their own trading software and turning it loose on the markets.
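
To give a sense of what these home-built programs often amount to, here is a minimal sketch of one classic hobbyist rule: a moving-average crossover that emits buy and sell signals from a price series. The window lengths and the made-up prices are illustrative assumptions, not a description of any trader's actual system, and a real program would still need a data feed, a broker API, and risk controls:

```python
# Toy moving-average crossover: buy when the short average crosses above the
# long average, sell when it crosses below. Purely illustrative.
def moving_average(prices, window):
    # Simple trailing average; None until enough data points exist.
    return [
        sum(prices[i - window + 1 : i + 1]) / window if i >= window - 1 else None
        for i in range(len(prices))
    ]

def crossover_signals(prices, short=3, long=8):
    fast = moving_average(prices, short)
    slow = moving_average(prices, long)
    signals = []
    for i in range(1, len(prices)):
        if None in (fast[i - 1], slow[i - 1], fast[i], slow[i]):
            continue  # not enough history yet
        if fast[i - 1] <= slow[i - 1] and fast[i] > slow[i]:
            signals.append((i, "BUY"))
        elif fast[i - 1] >= slow[i - 1] and fast[i] < slow[i]:
            signals.append((i, "SELL"))
    return signals

if __name__ == "__main__":
    # Hypothetical daily closes for a currency pair; not real market data.
    closes = [1.10, 1.11, 1.12, 1.11, 1.13, 1.15, 1.14, 1.16, 1.15, 1.13,
              1.12, 1.10, 1.09, 1.11, 1.12, 1.14, 1.16, 1.17, 1.18, 1.17]
    print(crossover_signals(closes))
```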

“It’s definitely one of those things where you are like, ‘Is this going to work?’” said Mr. Soule, who is a student at the University of Nevada, Reno, and a network administrator at Tahoe Forest Health System. “When it finally started trading, wow, wow. I don’t know if that is what I expected, but I did it.”

Interactive Brokers Group Inc. actively solicits at-home algorithmic traders with services to support their transactions. YouTube videos from traders and companies explaining the basics have tens of thousands of views. More than 170,000 people enrolled in a popular online course, “Computational Investing,” taught by Georgia Institute of Technology professor Tucker Balch. Only about 5% completed it, but at an algorithmic trading event in New York in April, three people asked him for his autograph.

“College professors very rarely get asked for their autographs,” Mr. Balch said.

To learn more about the fundamentals of algorithmic trading, Alexander Sommer watched Mr. Balch’s video lectures.

Now, every weekday morning before work, Mr. Sommer wakes up in Vienna to an email summarizing his coming trades for the day. The email is generated by his custom-built trading platform, which automatically places trades throughout the day using the algorithms he and his three trading partners developed. The four jointly trade about $200,000 of their own money on S&P 500 and Nasdaq Composite stocks.

by Austen Hufford, WSJ |  Read more:
Image: Severin Koller