Wednesday, May 14, 2014

This Plastic Is Made Of Shrimp Shells

There's no reason why we can't replace plastic with something biodegradable. Here's one option: a material called shrilk. It is made from a chemical in shrimp shells called chitosan, a derivative of chitin, the second-most abundant organic material on the planet, found in fungal cells, insect exoskeletons, and butterfly wings.

Researchers at Harvard's Wyss Institute for Biologically Inspired Engineering said the material could be manufactured relatively easily in large quantities and used to make large 3D objects. The material breaks down within a "few weeks" of being thrown away and provides nutrients for plants, according to a statement.

Chitosan is obtained from shrimp shells, which are usually discarded or else used to manufacture makeup and fertilizer. Fortunately, people with shellfish allergies don't seem to react to chitosan, according to a study of chitosan-coated bandages.

by Douglas Main, Popular Mechanics |  Read more:
Image: US Government / Wikimedia commons

The Robot Car of Tomorrow May Just Be Programmed to Hit You

Suppose that an autonomous car is faced with a terrible decision to crash into one of two objects. It could swerve to the left and hit a Volvo sport utility vehicle (SUV), or it could swerve to the right and hit a Mini Cooper. If you were programming the car to minimize harm to others, a sensible goal, which way would you instruct it to go in this scenario?

As a matter of physics, you should choose a collision with a heavier vehicle that can better absorb the impact of a crash, which means programming the car to crash into the Volvo. Further, it makes sense to choose a collision with a vehicle that’s known for passenger safety, which again means crashing into the Volvo.

But physics isn’t the only thing that matters here. Programming a car to collide with any particular kind of object over another seems an awful lot like a targeting algorithm, similar to those for military weapons systems. And this takes the robot-car industry down legally and morally dangerous paths.

Even if the harm is unintended, some crash-optimization algorithms for robot cars would seem to require deliberate and systematic discrimination against, say, large vehicles as collision targets. The owners or operators of these targeted vehicles would bear this burden through no fault of their own, other than that they care about safety or need an SUV to transport a large family. Does that sound fair?

What seemed to be a sensible programming design, then, runs into ethical challenges. Volvo and other SUV owners may have a legitimate grievance against the manufacturer of robot cars that favor crashing into them over smaller cars, even if physics tells us this is for the best. (...)

The problem is starkly highlighted by the next scenario, also discussed by Noah Goodall, a research scientist at the Virginia Center for Transportation Innovation and Research. Again, imagine that an autonomous car is facing an imminent crash. It could select one of two targets to swerve into: either a motorcyclist who is wearing a helmet, or a motorcyclist who is not. What’s the right way to program the car?

In the name of crash-optimization, you should program the car to crash into whatever can best survive the collision. In the last scenario, that meant smashing into the Volvo SUV. Here, it means striking the motorcyclist who’s wearing a helmet. A good algorithm would account for the much higher statistical odds that the biker without a helmet would die, and killing someone is surely among the outcomes auto manufacturers most desperately want to avoid.
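[ed. To make the "targeting algorithm" comparison concrete, here is a minimal illustrative sketch, not taken from the article, of what crash-optimization logic like this might look like in code. The labels and survival probabilities are invented for the example.]

```ts
// Hypothetical crash-optimization sketch: pick the collision target whose
// occupants are most likely to survive, i.e. minimize expected harm.
interface CrashTarget {
  label: string;
  survivalProb: number; // estimated probability this person survives the impact (invented)
}

function chooseTarget(candidates: CrashTarget[]): CrashTarget {
  // reduce() keeps whichever candidate has the higher estimated survival probability
  return candidates.reduce((best, c) => (c.survivalProb > best.survivalProb ? c : best));
}

const scenario: CrashTarget[] = [
  { label: "motorcyclist wearing a helmet", survivalProb: 0.9 },
  { label: "motorcyclist without a helmet", survivalProb: 0.6 },
];

console.log(chooseTarget(scenario).label); // "motorcyclist wearing a helmet"
```

The design choice to minimize expected harm quietly becomes a rule that aims at whoever is best protected, which is the injustice the article goes on to describe.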

But we can quickly see the injustice of this choice, as reasonable as it may be from a crash-optimization standpoint. By deliberately crashing into that motorcyclist, we are in effect penalizing him or her for being responsible, for wearing a helmet. Meanwhile, we are giving the other motorcyclist a free pass, even though that person behaved less responsibly by riding without a helmet, which is illegal in most U.S. states.

Not only does this discrimination seem unethical, but it could also be bad policy. That crash-optimization design may encourage some motorcyclists not to wear helmets, so as not to stand out as favored targets of autonomous cars, especially if those cars become more prevalent on the road. Likewise, in the previous scenario, sales of automotive brands known for safety, such as Volvo and Mercedes-Benz, may suffer if customers want to avoid being the robot car’s target of choice.

by Patrick Lin, Wired |  Read more:
Image: US DOT

Tuesday, May 13, 2014


Bo Bae Kim, South Korea.
via:

[ed. 'Life is all about Dopeness'. Yes, that pretty much sums up my philosophy, too.]
Kanye West
via:

Enough Is Enough: Stop Wasting Money on Vitamin and Mineral Supplements

[ed. First antimicrobial wipes, then aspirin, and now multivitamins. Pretty soon even alcohol and cigarettes will be labeled bad for your health. I've always thought stressing your system a little bit was a good thing - making you stronger in the long run. Up to a point, of course.]

Three articles in this issue address the role of vitamin and mineral supplements for preventing the occurrence or progression of chronic diseases. First, Fortmann and colleagues (1) systematically reviewed trial evidence to update the U.S. Preventive Services Task Force recommendation on the efficacy of vitamin supplements for primary prevention in community-dwelling adults with no nutritional deficiencies. After reviewing 3 trials of multivitamin supplements and 24 trials of single or paired vitamins that randomly assigned more than 400,000 participants, the authors concluded that there was no clear evidence of a beneficial effect of supplements on all-cause mortality, cardiovascular disease, or cancer.

Second, Grodstein and coworkers (2) evaluated the efficacy of a daily multivitamin to prevent cognitive decline among 5947 men aged 65 years or older participating in the Physicians’ Health Study II. After 12 years of follow-up, there were no differences between the multivitamin and placebo groups in overall cognitive performance or verbal memory. Adherence to the intervention was high, and the large sample size resulted in precise estimates showing that use of a multivitamin supplement in a well-nourished elderly population did not prevent cognitive decline. Grodstein and coworkers’ findings are compatible with a recent review (3) of 12 fair- to good-quality trials that evaluated dietary supplements, including multivitamins, B vitamins, vitamins E and C, and omega-3 fatty acids, in persons with mild cognitive impairment or mild to moderate dementia. None of the supplements improved cognitive function.

Third, Lamas and associates (4) assessed the potential benefits of a high-dose, 28-component multivitamin supplement in 1708 men and women with a previous myocardial infarction participating in TACT (Trial to Assess Chelation Therapy). After a median follow-up of 4.6 years, there was no significant difference in recurrent cardiovascular events with multivitamins compared with placebo (hazard ratio, 0.89 [95% CI, 0.75 to 1.07]). The trial was limited by high rates of nonadherence and dropouts.

Other reviews and guidelines that have appraised the role of vitamin and mineral supplements in primary or secondary prevention of chronic disease have consistently found null results or possible harms (5, 6). Evidence involving tens of thousands of people randomly assigned in many clinical trials shows that β-carotene, vitamin E, and possibly high doses of vitamin A supplements increase mortality (6, 7) and that other antioxidants (6), folic acid and B vitamins (8), and multivitamin supplements (1, 5) have no clear benefit.

Despite sobering evidence of no benefit or possible harm, use of multivitamin supplements increased among U.S. adults from 30% between 1988 and 1994 to 39% between 2003 and 2006, while overall use of dietary supplements increased from 42% to 53% (9). Longitudinal and secular trends show a steady increase in multivitamin supplement use and a decline in use of some individual supplements, such as β-carotene and vitamin E. The decline in use of β-carotene and vitamin E supplements followed reports of adverse outcomes in lung cancer and all-cause mortality, respectively. In contrast, sales of multivitamins and other supplements have not been affected by major studies with null results, and the U.S. supplement industry continues to grow, reaching $28 billion in annual sales in 2010. Similar trends have been observed in the United Kingdom and in other European countries.

The large body of accumulated evidence has important public health and clinical implications. Evidence is sufficient to advise against routine supplementation, and we should translate null and negative findings into action. The message is simple: Most supplements do not prevent chronic disease or death, their use is not justified, and they should be avoided. This message is especially true for the general population with no clear evidence of micronutrient deficiencies, who represent most supplement users in the United States and in other countries (9).

by Eliseo Guallar, MD, DrPH; Saverio Stranges, MD, PhD; Cynthia Mulrow, MD, MSc, Senior Deputy Editor; Lawrence J. Appel, MD, MPH; and Edgar R. Miller III, MD, PhD, Annals of Internal Medicine |  Read more: 
Image: via:

Why You Won’t Be the Person You Expect to Be


When we remember our past selves, they seem quite different. We know how much our personalities and tastes have changed over the years. But when we look ahead, somehow we expect ourselves to stay the same, a team of psychologists said Thursday, describing research they conducted on people’s self-perceptions.

They called this phenomenon the “end of history illusion,” in which people tend to “underestimate how much they will change in the future.” According to their research, which involved more than 19,000 people ages 18 to 68, the illusion persists from teenage years into retirement.

“Middle-aged people — like me — often look back on our teenage selves with some mixture of amusement and chagrin,” said one of the authors, Daniel T. Gilbert, a psychologist at Harvard. “What we never seem to realize is that our future selves will look back and think the very same thing about us. At every age we think we’re having the last laugh, and at every age we’re wrong.”

Other psychologists said they were intrigued by the findings, published Thursday in the journal Science, and were impressed with the amount of supporting evidence. Participants were asked about their personality traits and preferences — their favorite foods, vacations, hobbies and bands — in years past and present, and then asked to make predictions for the future. Not surprisingly, the younger people in the study reported more change in the previous decade than did the older respondents.

But when asked to predict what their personalities and tastes would be like in 10 years, people of all ages consistently played down the potential changes ahead.

Thus, the typical 20-year-old woman’s predictions for her next decade were not nearly as radical as the typical 30-year-old woman’s recollection of how much she had changed in her 20s. This sort of discrepancy persisted among respondents all the way into their 60s.

And the discrepancy did not seem to be because of faulty memories, because the personality changes recalled by people jibed quite well with independent research charting how personality traits shift with age. People seemed to be much better at recalling their former selves than at imagining how much they would change in the future.

Why? Dr. Gilbert and his collaborators, Jordi Quoidbach of Harvard and Timothy D. Wilson of the University of Virginia, had a few theories, starting with the well-documented tendency of people to overestimate their own wonderfulness.

by John Tierney, NY Times |  Read more:
Image: via:

The Smooth Path to Pearl Harbor

The heated rhetoric of recent months suggests that interpreting the behavior of both China and Japan during the war years will become increasingly controversial. Meanwhile, the tensions between the two countries could destabilize the American-dominated postwar order in East Asia. We may be about to witness the most important moment of change in the relations among the powers in the region since the events that led to Pearl Harbor in 1941.

In this atmosphere, understanding the reasons for Japan’s decision to go to war in the Pacific has an urgency that goes beyond the purely historical. Fortunately, Japan 1941: Countdown to Infamy, by the Japanese historian Eri Hotta, proves an outstanding guide to that devastating decision. In lucid prose, Hotta meticulously examines a wide range of primary documents in Japanese to answer the question: Why did Japan find itself on the brink of war in December 1941?

The answer begins long before the year of the book’s title. In the 1920s, Japan gave many signs of being integrated into international society. It had taken part, albeit in a limited way, in World War I and had been one of the victorious nations at the Paris Peace Conference of 1919. Its parliamentary democracy was young but appeared promising: in 1925, a new law greatly widened the male franchise. The country had become a part of the global trading system, and Japan’s external policy was defined by the liberal internationalism of Foreign Minister Shidehara Kijuro.

Yet interwar Japan was ambivalent about its status in the world, perceiving itself as an outsider in the Western-dominated global community, and aware that the bonds among different parts of its own society were fraying. The Western victors of 1919 had refused Japanese demands for a racial equality clause as part of the peace settlement, confirming the opinions of many of Tokyo’s policymakers that they would never be treated as the peers of their white allies. At home, labor unrest and an impoverished countryside showed that Japan’s society was unstable under the surface. After the devastating earthquake in Japan’s Kanto region in 1923, riots broke out against members of the local Korean population, who were falsely accused of arson and robbery. In 1927, one of the finest writers of the era, Akutagawa Ryunosuke (whose short story “In a Grove” became the basis of Kurosawa’s film Rashomon), took his own life. In his will, he declared that he was suffering from “a vague insecurity.”

Japan’s sense of insecurity was real but by no means vague, and expressed itself most vividly in the drive toward building an empire. In the early twentieth century, Japan was the only non-Western country to have its own colonies. In 1895, Japan won a war against China and was ceded Taiwan; it gained territorial and railway rights in Manchuria in 1905 at the end of its war with Russia; and in 1910, it fully annexed Korea. The depression devastated Japan’s economy after 1929, and its leaders became obsessed with the idea of expanding further onto the Asian mainland.

Japanese civilian politics also started to fall apart as the military began to make its own policy. In 1931, two officers of the locally garrisoned Japanese Kwantung Army in the south of China set off an explosion on a railway line near the city of Shenyang (then Mukden) in Manchuria, the northeastern region of China. Within days, they prepared the way for the Japanese conquest of the entire region. Protests from a commission sent by the League of Nations had no effect other than causing Japan to quit the League.

By the mid-1930s, much of northern China was essentially under Japanese influence. Then, on July 7, 1937, a small-scale clash between local Chinese and Japanese troops at the Marco Polo Bridge in Wanping, a small village outside Beijing, escalated. The Japanese prime minister, Prince Konoe, used the clash to make further territorial demands on China. Chiang Kai-shek, leader of the Nationalist government, decided that the moment had come to confront Japan rather than appease it, and full-scale war broke out between the two sides.

Within eighteen months, China and Japan were locked in a stalemate. The Japanese quickly overran eastern China, the most prosperous and advanced part of the country. But they were unable to subdue guerrilla activity in the countryside or eliminate the Communists based in the north. Nor did Chiang’s government show any inclination to surrender: by moving to the southwestern city of Chongqing, his Chinese Nationalists dug in for a long war against Japan, desperately hoping to attract allies to their cause, but gaining little response over the long years until 1940. Yet between them the Nationalist and Communist forces had more than half a million troops in China. The United States, increasingly concerned that all Asia might fall into Japan’s hands, began to assist China and impose sanctions on Japan. At that point, desperate to resolve its worsening situation, Japan embarked on the path to the attack on Pearl Harbor on December 7, 1941, and four years of war with the United States and its allies.

Hotta makes it unambiguously clear that the blame for the war lies entirely at Japan’s door. The feeling of inevitability in Tokyo was a product of the Japanese policymakers’ own blinkered perspectives. One of the most alarming revelations in her book is the weak-mindedness of the doves and skeptics, who refused to confront the growing belligerence of most of their colleagues.

by Rana Mitter, NY Review of Books |  Read more:
Image: Heinrich Hoffmann/Ullstein Bild/Granger Collection

Monday, May 12, 2014


[ed. Yikes.]
World Hairdressing Championships - in Pictures.
Image: Arne Dedert/EPA

The Soul-Killing Structure of the Modern Office

Picture Leonardo DiCaprio heading stolidly to work at the start of two of his most alliterative movies. In Revolutionary Road, set in 1955, he’s Frank Wheeler, a fedora’d nobody who takes a train into Manhattan and the elevator to a high floor in an International-style skyscraper. He smokes at his desk, slips out for a two-martini lunch, and gets periodically summoned to the executive den where important company decisions are made. Wheeler is a cog, but he is an enviable cog—by appearances, he has achieved everything a man is supposed to want in postwar America.

In The Wolf of Wall Street, set in the late 1980s, DiCaprio is a failed broker named Jordan Belfort who follows a classified ad to a Long Island strip mall, where a group of scrappy penny-stock traders cold-call their marks and drive home in sedans. His office need not be a status symbol, since prestige for stock traders is about domination, not conformity; if you become a millionaire, who cares if you did it in the Chrysler Building or your garage?

Watch these films back-to-back, and you’ll see DiCaprio traverse the recent history of the American workplace. A white-collar job used to be a signal of ambition and stability far beyond that offered by farm, factory, or retail work. But what was once a reward has become a nonnecessity—a mere company mailing address. Highways are now stuffed with sand-colored, dark-windowed cubicle barns arranged in groups like unopened moving boxes. Barely anyone who works in this kind of place expects to spend a career in that building, but no matter where you go, you can expect variations on the same fluorescent lighting, corporate wall art, and water coolers.

In his new book, Cubed: A Secret History of the Workplace, Nikil Saval claims that 60 percent of Americans still make their money in cubicles, and 93 percent of those are unhappy to do so. But rather than indict these artless workspaces, Saval traces the intellectual history of our customizable pens to find that they’re the twisted end result of utopian thinking. “The story of white-collar work hinges on promises of freedom and uplift that have routinely been betrayed,” he writes. Above all, Cubed is a graveyard of social-engineering campaigns.

Saval, an editor at n+1, traces the modern office’s roots back to the bookkeeping operations of the early industrial revolution, where clerks in starched collars itemized stuff produced by their blue-collar counterparts. Saval describes these cramped spaces as the birthplace of a new ethic of “self-improvement.” A clerkship was a step up from manual labor, and the men lucky enough to pursue it often found themselves detached—from the close-knit worlds of farming or factory work and even from their fellow clerks, who were now just competition. In Saval’s telling, this is where middle-class anxiety began. (...)

My first office job started in the summer of 2007. I’d just graduated from college, and I took the light rail to the outer suburbs of Baltimore and walked half a mile to my desk. The McCormick & Company factory was nearby, so each day smelled like a different spice. In that half-mile (sidewalk-free, of course), I passed three other corporate campuses and rarely saw anyone coming or going. I worked in a cubicle of blue fabric and glass partitions and reported to the manager with the nearest window. For team meetings, we’d head into a room with a laminate-oak table and a whiteboard. If it was warm, I’d take lunch at a wooden picnic table in the parking lot, the only object for miles that looked like weather could affect it. In my sensible shoes and flat-front khakis, I’d listen to the murmur of Interstate 83 from just over a tree-lined highway barrier, the air smelling faintly of cumin or allspice. This was not a sad scene, but it was an empty one, and I was jolted back to it when I read Saval’s assertion that post-skyscraper office design “had to be eminently rentable. … The winners in this new American model weren’t office workers or architects, not even executives or captains of industry, but real estate speculators.”

Freelancers are expected to account for 40 percent to 50 percent of the American workforce by 2020. Saval notes a few responses to this sea change, such as “co-working” offices for multiple small companies or self-employed people to share. But he never asks why the shift is under way or why nearly a quarter of young people in America now expect to work for six or more companies. These are symptoms of the recession, and the result of baby boomers delaying retirement to make up for lost savings. But they’re also responses to businesses’ apparent feelings toward their employees. It’s not so much the blandness of corporate architecture, which can have a kind of antiseptic beauty; it’s the transience of everything in sight, from the computer-bound work to the floor plans designed so that any company can move right in when another ends its lease or bellies up. When everything is so disposable, why would anyone expect or want to stay?

by John Lingan, American Prospect |  Read more:
Image: CubeSpace/Asa Wilson

And once the storm is over you won’t remember how you made it through, how you managed to survive. You won’t even be sure, in fact, whether the storm is really over. But one thing is certain. When you come out of the storm you won’t be the same person who walked in. That’s what this storm’s all about.

Haruki Murakami, Kafka on the Shore

The Rise of Corporate Impunity

On the evening of Jan. 27, Kareem Serageldin walked out of his Times Square apartment with his brother and an old Yale roommate and took off on the four-hour drive to Philipsburg, a small town smack in the middle of Pennsylvania. Despite once earning nearly $7 million a year as an executive at Credit Suisse, Serageldin, who is 41, had always lived fairly modestly. A previous apartment, overlooking Victoria Station in London, struck his friends as a grown-up dorm room; Serageldin lived with bachelor-pad furniture and little of it — his central piece was a night stand overflowing with economics books, prospectuses and earnings reports. In the years since, his apartments served as places where he would log five or six hours of sleep before going back to work, creating and trading complex financial instruments. One friend called him an "investment-banking monk."

Serageldin's life was about to become more ascetic. Two months earlier, he sat in a Lower Manhattan courtroom adjusting and readjusting his tie as he waited for a judge to deliver his prison sentence. During the worst of the financial crisis, according to prosecutors, Serageldin had approved the concealment of hundreds of millions in losses in Credit Suisse's mortgage-backed securities portfolio. But on that November morning, the judge seemed almost torn. Serageldin lied about the value of his bank's securities — that was a crime, of course — but other bankers behaved far worse. Serageldin's former employer, for one, had revised its past financial statements to account for $2.7 billion that should have been reported. Lehman Brothers, AIG, Citigroup, Countrywide and many others had also admitted that they were in much worse shape than they initially allowed. Merrill Lynch, in particular, announced a loss of nearly $8 billion three weeks after claiming it was $4.5 billion. Serageldin's conduct was, in the judge's words, "a small piece of an overall evil climate within the bank and with many other banks." Nevertheless, after a brief pause, he eased down his gavel and sentenced Serageldin, an Egyptian-born trader who grew up in the barren pinelands of Michigan's Upper Peninsula, to 30 months in jail. Serageldin would begin serving his time at Moshannon Valley Correctional Center, in Philipsburg, where he would earn the distinction of being the only Wall Street executive sent to jail for his part in the financial crisis.

American financial history has generally unfolded as a series of booms followed by busts followed by crackdowns. After the crash of 1929, the Pecora Hearings seized upon public outrage, and the head of the New York Stock Exchange landed in prison. After the savings-and-loan scandals of the 1980s, 1,100 people were prosecuted, including top executives at many of the largest failed banks. In the '90s and early aughts, when the bursting of the Nasdaq bubble revealed widespread corporate accounting scandals, top executives from WorldCom, Enron, Qwest and Tyco, among others, went to prison.

The credit crisis of 2008 dwarfed those busts, and it was only to be expected that a similar round of crackdowns would ensue. In 2009, the Obama administration appointed Lanny Breuer to lead the Justice Department's criminal division. Breuer quickly focused on professionalizing the operation, introducing the rigor of a prestigious firm like Covington & Burling, where he had spent much of his career. He recruited elite lawyers from corporate firms, and the Breu Crew, as they would later be known, were repeatedly urged by Breuer to "take it to the next level."

But the crackdown never happened. Over the past year, I've interviewed Wall Street traders, bank executives, defense lawyers and dozens of current and former prosecutors to understand why the largest man-made economic catastrophe since the Depression resulted in the jailing of a single investment banker — one who happened to be several rungs from the corporate suite at a second-tier financial institution. Many assume that the federal authorities simply lacked the guts to go after powerful Wall Street bankers, but that obscures a far more complicated dynamic. During the past decade, the Justice Department suffered a series of corporate prosecutorial fiascos, which led to critical changes in how it approached white-collar crime. The department began to focus on reaching settlements rather than seeking prison sentences, which over time unintentionally deprived its ranks of the experience needed to win trials against the most formidable law firms. By the time Serageldin committed his crime, Justice Department leadership, as well as prosecutors in integral United States attorney's offices, were de-emphasizing complicated financial cases — even neglecting clues that suggested that Lehman executives knew more than they were letting on about their bank's liquidity problem. In the mid-'90s, white-collar prosecutions represented an average of 17.6 percent of all federal cases. In the three years ending in 2012, the share was 9.4 percent. (Read the Department of Justice's response to ProPublica's inquiries.)

After the evening drive to Philipsburg, Serageldin checked into a motel. He didn't need to report to Moshannon Valley until 2 p.m. the next day, but he was advised to show up early to get a head start on his processing. Moshannon is a low-security facility, with controlled prisoner movements, a bit tougher than the one portrayed on "Orange Is the New Black." Friends of Serageldin's worried about the violence; he was counseled to keep his head down and never change the channel on the TV no matter who seemed to be watching. Serageldin, who is tall and thin with a regal bearing, was largely preoccupied with how, after a decade of 18-hour trading days, he would pass the time. He was planning on doing math-problem sets and studying economics. He had delayed marrying his longtime girlfriend, a private-equity executive in London, but the plan was for her to visit him frequently.

Other bankers have spoken out about feeling unfairly maligned by the financial crisis, pegged as "banksters" by politicians and commentators. But Serageldin was contrite. "I don't feel angry," he told me in early winter. "I made a mistake. I take responsibility. I'm ready to pay my debt to society." Still, the fact that the only top banker to go to jail for his role in the crisis was neither a mortgage executive (who created toxic products) nor the C.E.O. of a bank (who peddled them) is something of a paradox, but it's one that reflects the many paradoxes that got us here in the first place.

by Jesse Eisinger, Pro Publica |  Read more:
Image: Javier Jaen

Daily Aspirin Regimen Not Safe for Everyone: FDA

Taking an aspirin a day can help prevent heart attack and stroke in people who have suffered such health crises in the past, but not in people who have never had heart problems, according to the U.S. Food and Drug Administration.

"Since the 1990s, clinical data have shown that in people who have experienced a heart attack, stroke or who have a disease of the blood vessels in the heart, a daily low dose of aspirin can help prevent a reoccurrence," Dr. Robert Temple, deputy director for clinical science at the FDA, said in an agency news release.

A low-dose tablet contains 80 milligrams (mg) of aspirin, compared with 325 mg in a regular strength tablet.

However, an analysis of data from major studies does not support the use of aspirin as a preventive medicine in people who have not had a heart attack, stroke or heart problems. In these people, aspirin provides no benefits and puts them at risk for side effects such as dangerous bleeding in the brain or stomach, the FDA said.

by Robert Priedt, WebMD | Read more:
Image: uncredited

The Unmothered

When I was growing up in Israel, there was a short-lived show on television called “Hahaverim Shel Yael” (“Yael’s Friends”), which featured a peppy girl who introduced short clips acted out by puppets. The actress who played Yael was probably in her twenties, but she was dressed up to look like a child, in flowery dresses and pigtails. I loved that program, in which the puppets occasionally crossed into real life and made a mess of Yael’s studio. Right before the opening music came on, Yael would look into the camera and fake-whisper to the viewers, “Tell your mother to turn up the volume!” Once, as my twin sister and I were settling down on the sofa to watch, my mother overheard this opening bit. “And what about those who don’t have a mother?” she asked.

I must have been seven or eight at the time. I was irritated with her for asking that question, forever ruining the show for me. But I shouldn’t have been surprised. It summed up, I now realize, her parenting philosophy. The way she didn’t baby us, but treated us like thoughtful people, capable of empathy. The way she was always fully there—registering, questioning. But mostly, I think, it showed her unyielding belief in fairness, which, years later, I would hear her define as justice played out in the private sphere. (She was a philosophy professor, preoccupied with definitions.) It was a particular kind of fairness, one that centered on a child’s sensibility. Once, when I asked her whom she loved more, my sister or me, she answered, simply, “You.” Incredulous, my sister posed the same question. “Who do you love more, Ima? Ruth or me?” “You,” my mother said. We tried again. Each time, my mother invariably told whoever asked that she loved her more. “This doesn’t make any sense,” we finally said. She smiled and told us, “Sure it does. Don’t you see? I love you more and I love you more.” This was her sense of fairness: no kid wants to hear that they are loved the same as their sister.

This Mother’s Day, three and a half years after she died, I find myself turning over her question in my mind. And what about those who don’t have a mother?

“CALL MOM” said a sign the other day, and something inside me clenched. In my inbox, at work, an email waited from the New York Times: a limited offer to “treat Mom” to a free gift. It’s nothing, I tell myself. A day for advertisers. So I shrug off the sales and the offers, the cards and the flowers. I press delete. Still, I now mark Mother’s Day on my private calendar of grief. Anyone who has experienced a loss must have one of those. There’s August 29th, my mother’s birthday—forever stopped at sixty-four. September 17th, my parents’ anniversary—a day on which I now make a point of calling my father, and we both make a point of talking about anything but. There’s June 6th, the day she was diagnosed—when a cough that she had told us was “annoying” her and a leg that she had been dragging, thinking she must have pulled a muscle, turned out to be symptoms of Stage IV lung cancer. And then there’s October 16th: the day she died, four months and ten days after the diagnosis. The year becomes a landscape filled with little mines.

Trust me, I’m too aware of the fact that my mother is gone to wish her here in any serious way on Mother’s Day. But does the holiday have to be in May, when the lilacs are in full bloom? When a gentle breeze stirs—the kind of breeze that reminds me of days when she would recline on a deck chair on our Jerusalem porch, head tilted back, urging me to “sit a while”?

Meghan O’Rourke has a wonderful word for the club of those without mothers. She calls us not motherless but unmothered. It feels right—an ontological word rather than a descriptive one. I had a mother, and now I don’t. This is not a characteristic one can affix, like being paperless, or odorless. The emphasis should be on absence.

by Ruth Margalit, New Yorker |  Read more:
Image: Ruth Margalit

Sunday, May 11, 2014


Liberace and Cher
via:

Diane Birch, Daryl Hall

The World of Bob Dylan Obsessives

Imagine liking a singer so much you travel across the country to see him. You invade his private spaces; commit his every song to memory; change the way you dress, walk, and talk to be more like him. When people ask about your past, you answer the way he might, instead of telling the truth.

This kind of behavior might make you a subject of David Kinney’s new book The Dylanologists. But it might make you Bob Dylan himself.

When Bob Dylan arrived in New York on Jan. 24, 1961, he was a Woody Guthrie pilgrim. He talked like he came from Oklahoma instead of Minnesota and told stories of ramblin’ that would have fit in Bound for Glory. He sought out the folk singer, who was suffering from Huntington’s chorea. On the weekends Dylan would sit by Guthrie’s hospital bed and play him songs.

The Dylanologists are just as obsessive about Dylan. In the age before digital music, tape collectors strapped audio equipment to their bodies and pursued rare recordings like jewels from the Pharaoh's tomb. Bill Pagel purchased the ticket from Dylan’s prom, his high chair, and ultimately the home in Duluth where Dylan was born. Elizabeth Wolfson vacationed to California so she could drive by Dylan’s Malibu home. Security found her wandering the grounds and turned her away. The most famous of the Dylanologists, A.J. Weberman, was so consumed with trying to figure out the transcendental message behind Dylan’s music he was caught looking through the singer’s trash.

Woody Guthrie gave shape to Bob Dylan’s life and gave him an identity. That's a powerful relationship, and it's what makes the Dylan fanatic such an interesting topic. From the 1960s, when Dylan was first called the “voice of a generation,” he and his music have shaped entire lives. He's not just the guy playing in the background at the deli. He’s soundtracking marriages and inspiring in listeners a deeper understanding of themselves. Dylanologists credit Bob for driving them to certain careers or relationships.

He doesn’t want the credit. Dylan’s career has been propelled by his effort to stay one step ahead of his fans. He shed the protest-singer label almost as fast as they gave it to him, and he’s ditched every category they've tried to put him in ever since. Wherever he is right now, he’s on the run.

After reading this series of profiles, it's hard not to share Bob Dylan’s feelings about his most devoted fans. “Get a life, please,” he told one interviewer about the devotees. “You're not serving your own life well. You're wasting your life." Kinney doesn't make this argument explicitly. His book is not unlike a Bob Dylan song—he paints a picture and then you've got to interpret it yourself—but the conclusion seems plain: The life of the Dylanologist is often a wasted one.

by John Dickerson, Slate |  Read more:
Image: via:

Behind the Scenes on the NY Times Redesign

The New York Times just launched the first piece of their sitewide redesign: new article pages, with other tweaks and nudges throughout the site. We spoke with two designers and a developer who worked on the project to learn about the tech choices, design ideas, and strategy behind the new look and feel.

Strategy & Rationale

Renda Morton, product design lead: We are a really big company that’s trying to be faster. Our website is our largest platform. To redesign the whole thing at once would be a nightmare, so we decided to start with our story page and go from there. What’s launching on the 8th is not a site redesign, but really just a redesign of our story page, and some light re-skinning of our home page, section fronts and some blogs to match the new story page. We started with the story page because, like other sites, that’s where most of our readers spend their time, often bypassing our home page completely.

We’re going to tackle the home page and section fronts next, though we may not take the same approach. For the home page we’re going to be iterating from the re-skin, slowly adding features and refinements. Its “redesign” will be a slow evolution.

We’ll still continue work on the story page. We don’t want to just let it sit and rot on the internet, and end up right back at this point again, where we have no choice but to do a major overhaul.

What is your team hoping to achieve with the redesign work?

RM: Most of these are still works in progress, but our goals are:
  • Be faster.
  • Have a more flexible and adaptive presentation.
  • Have consistency across platforms.
  • Make the site easier for the newsroom to produce and maintain.
  • Make it easier for our readers to read, navigate, share and explore.
  • Maintain and convert subscribers.
  • Create a high-quality advertising environment.

What spurred the redesign? Why now?

Allen Tan: It’s partly some much-needed foundation-building. We get to clear out legacy code and design that’s accumulated over 6 years (yeah, it’s been that long), which allows for quicker and easier iteration.

Eitan Konigsburg: Yeah, we had reached a bit of a technical wall in terms of being able to scale the site. A lot of technical debt was holding us back from truly modernizing the website, and attempting a redesign (w/o reworking the infrastructure) would’ve been difficult. So the decision to redesign the site was an excellent chance to rebuild the technical foundation as well. These decisions could be seen as going hand in hand, as they not only furthered the design-develop cycle but allowed these groups to work even more closely together.

That includes using Github instead of SVN for version control, Vagrant environments, Puppet deployment, using requireJS so five different versions of jQuery don’t get loaded, proper build/test frameworks, command-line tools for generating sprites, the use of LESS with a huge set of mixins, a custom grid framework, etc. (...)
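[ed. For the curious, here is a minimal hypothetical sketch, not the Times' actual configuration, of how a RequireJS paths/map setup can pin every module to one shared jQuery build instead of letting each widget pull in its own copy; the file paths and legacy aliases below are invented.]

```ts
// Ambient global provided by RequireJS at runtime.
declare const requirejs: { config(options: object): void };

requirejs.config({
  paths: {
    // The one jQuery build the whole page shares.
    jquery: "vendor/jquery-1.11.0.min"
  },
  map: {
    // Hypothetical legacy aliases: any module still asking for an old bundle
    // is redirected to the shared copy rather than loading yet another jQuery.
    "*": {
      "jquery-1.7": "jquery",
      "jquery-1.9": "jquery"
    }
  }
});

// A page module then declares its dependency instead of adding another <script> tag:
// define(["jquery"], function ($) { $(".story-header").addClass("is-loaded"); });
```

The general pattern, one canonical path plus map aliases, is what lets a large site consolidate on a single copy of a library without rewriting every legacy module at once.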

The Big Challenges

What were the biggest challenges you encountered while designing and building out the new site?

RM: The biggest “design” challenge is our own internal process and structure. Our website is where our newsroom’s editorial needs, our business goals and requirements, and our readers’ goals and desires all meet. There’s some overlap, but usually there is a compromise to be made. That’s the challenge. And sometimes you don’t even know if it can be done. Everything else, though hard, is nothing compared to that.

by Source Open News | Read more:
Image: NY Times

Saturday, May 10, 2014


Keiko Tanabe, Pacific Coast Highway III, 2010
via: