Monday, November 30, 2015
Bob Marley
[ed. For Cal B. Get up, Stand up...]
Hit Charade
The biggest pop star in America today is a man named Karl Martin Sandberg. Once the lead singer of an obscure ’80s glam-metal band, Sandberg grew up in a remote suburb of Stockholm and is now 44. Sandberg is the George Lucas, the LeBron James, the Serena Williams of American pop. He is responsible for more hits than Phil Spector, Michael Jackson, or the Beatles.
After Sandberg come the bald Norwegians, Mikkel Eriksen and Tor Hermansen, 43 and 44; Lukasz Gottwald, 42, a Sandberg protégé and collaborator who spent a decade languishing in Saturday Night Live’s house band; and another Sandberg collaborator named Esther Dean, 33, a former nurse’s aide from Oklahoma who was discovered in the audience of a Gap Band concert, singing along to “Oops Upside Your Head.” They use pseudonyms professionally, but most Americans wouldn’t recognize those, either: Max Martin, Stargate, Dr. Luke, and Ester Dean.
Most Americans will recognize their songs, however. As I write this, at the height of summer, the No. 1 position on the Billboard pop chart is occupied by a Max Martin creation, “Bad Blood” (performed by Taylor Swift featuring Kendrick Lamar). No. 3, “Hey Mama” (David Guetta featuring Nicki Minaj), is an Ester Dean production; No. 5, “Worth It” (Fifth Harmony featuring Kid Ink), was written by Stargate; No. 7, “Can’t Feel My Face” (The Weeknd), is Martin again; No. 16, “The Night Is Still Young” (Minaj), is Dr. Luke and Ester Dean. And so on. If you flip on the radio, odds are that you will hear one of their songs. If you are reading this in an airport, a mall, a doctor’s office, or a hotel lobby, you are likely listening to one of their songs right now. This is not an aberration. The same would have been true at any time in the past decade. Before writing most of Taylor Swift’s newest album, Max Martin wrote No. 1 hits for Britney Spears, ’NSync, Pink, Kelly Clarkson, Maroon 5, and Katy Perry.
Millions of Swifties and KatyCats—as well as Beliebers, Barbz, and Selenators, and the Rihanna Navy—would be stunned by the revelation that a handful of people, a crazily high percentage of them middle-aged Scandinavian men, write most of America’s pop hits. It is an open yet closely guarded secret, protected jealously by the labels and the performers themselves, whose identities are as carefully constructed as their songs and dances. The illusion of creative control is maintained by the fig leaf of a songwriting credit. The performer’s name will often appear in the list of songwriters, even if his or her contribution is negligible. (There’s a saying for this in the music industry: “Change a word, get a third.”) But almost no pop celebrities write their own hits. Too much is on the line for that, and being a global celebrity is a full-time job. It would be like Will Smith writing the next Independence Day.
Impressionable young fans would therefore do well to avoid John Seabrook’s The Song Machine, an immersive, reflective, and utterly satisfying examination of the business of popular music. It is a business as old as Stephen Foster, but never before has it been run so efficiently or dominated by so few. We have come to expect this type of consolidation from our banking, oil-and-gas, and health-care industries. But the same practices they rely on—ruthless digitization, outsourcing, focus-group brand testing, brute-force marketing—have been applied with tremendous success in pop, creating such profitable multinationals as Rihanna, Katy Perry, and Taylor Swift.
The music has evolved in step with these changes. A short-attention-span culture demands short-attention-span songs. The writers of Tin Pan Alley and Motown had to write only one killer hook to get a hit. Now you need a new high every seven seconds—the average length of time a listener will give a radio station before changing the channel. “It’s not enough to have one hook anymore,” Jay Brown, a co-founder of Jay Z’s Roc Nation label, tells Seabrook. “You’ve got to have a hook in the intro, a hook in the pre, a hook in the chorus, and a hook in the bridge, too.”
Sonically, the template has remained remarkably consistent since the Backstreet Boys, whose sound was created by Max Martin and his mentor, Denniz PoP, at PoP’s Cheiron Studios, in Stockholm. It was at Cheiron in the late ’90s that they developed the modern hit formula, a formula nearly as valuable as Coca-Cola’s. But it’s not a secret formula. Seabrook describes the pop sound this way: “ABBA’s pop chords and textures, Denniz PoP’s song structure and dynamics, ’80s arena rock’s big choruses, and early ’90s American R&B grooves.” The production quality is crucial, too. The music is manufactured to fill not headphones and home stereo systems but malls and football stadiums. It is a synthetic, mechanical sound “more captivating than the virtuosity of the musicians.” This is a metaphor, of course—there are no musicians anymore, at least not human ones. Every instrument is automated. Session musicians have gone extinct, and studio mixing boards remain only as retro, semi-ironic furniture.
by Nathaniel Rich, The Atlantic | Read more:
Image: Nicolas Dehghani
Naomi Klein: Climate Change Makes for a Hotter and Meaner World
Naomi Klein is a journalist and author who has written for a number of publications about environmentalism, globalization, the wars in Iraq, and the impact of unrestrained neoliberal economics. She has also written several books on the anti-corporate movement (and in addition to print, has made documentary films on the subject with her husband, Avi Lewis). Her critiques of free market fundamentalism earned her the £50,000 (about $76,000) Warwick Prize for Writing and a place on the New York Times bestseller list.
In this interview, Klein tells the Bulletin’s Dan Drollette about her latest non-fiction book, This Changes Everything: Capitalism vs. the Climate, published by Simon & Schuster in 2014. The book takes no prisoners, pointing to weak government efforts to address climate change; environmental groups that have compromised with industry on too many issues; what she considers to be pie-in-the-sky “techno-fixes” such as carbon sequestering; conservatives who consistently deny climate change is even happening; and corporations that Klein thinks are seeking to earn a profit by scuttling efforts to deal with the crisis. (...)
BAS: One of the shocking things to read in your book was that all these trade agreements essentially say that free trade trumps everything else, including environmental concerns. Economic growth always comes first; climate second.
Klein: And we’re seeing a flurry of these new trade deals, with the United States and China, and with Europe and the United States, to name just a few of the larger partnerships. There is this fundamental contradiction between what governments are negotiating in trade summits and what they’re negotiating at climate summits. There’s seemingly very little desire to reconcile the two.
It’s like they’re on these two parallel tracks and don’t communicate with each other.
Twenty years ago the trade lawyers could plead ignorance and say that they weren’t really up on climate change. But that’s not true anymore. It’s really a willful compartmentalization, because it’s hard to reconcile that kind of trade with a radical carbon reduction agenda.
BAS: The book laid out a case for the large environmental groups—“Big Green”—being in bed with Big Oil. The Environmental Defense Fund came off particularly badly.
Klein: Even the Sierra Club had a really ugly chapter, in which they were taking many millions from a natural gas company which had a very clear financial interest in shutting down coal plants because it was creating a market for them.
And frankly I think the same is true for Michael Bloomberg—who continues to be a major funder for the Sierra Club—because Bloomberg as a businessman is massively invested in natural gas. He has a dedicated fund for his $33 billion worth of wealth that specializes in oil and gas. And he continues to be a vocal advocate in favor of fracking in New York State.
What do I take away from this? I love the Sierra Club, I have friends who work there, they do fantastic work. But I frankly don’t see a huge difference between the conflict of interest with Michael Bloomberg and some of the other conflicts of interest I mentioned in the book.
That said, I think the Sierra Club has definitely gotten on the right side of the fracking fight. Unlike the Environmental Defense Fund, which continues to try to broker these so-called “best practices” for fracking at a time when the people affected by fracking are being very clear that they want outright bans.
What’s interesting is what has been happening in places like New York State, where local green organizations have been challenging the relevancy of some of these Big Green groups that started that whole “split the difference” middle ground approach—and Big Green is to the right of Cuomo, which is not a place where you want to be. There’s been the emergence of a harder-core, grass-roots movement that I call ‘Blockadia’—and because it is growing, and so strong, and resonating with so many people, these new movements are winning huge victories, like the ban on fracking in New York State. And the grassroots is making these Big Green groups irrelevant. (...)
BAS: In the book, you rebutted the claim by some environmentalists that the fight against climate change would bring us all together. You cited examples in which some people found a way to profiteer off of this—such as a private security firm to patrol your expensively engineered, stormproof home and protect your possessions when the next flood arrives—with the net result that wedges were driven between everyone.
Klein: This is where I come to climate from. My previous book, The Shock Doctrine, was all about this. It was about the rise of “disaster capitalism” after wars. When I started writing it, the book was about military and economic crises and how they are used to push further privatization—and how these situations offer moments for intense profiteering. Because that’s what was happening in Iraq after the invasion.
And as I was writing it, I was starting to see how the same sort of thing was cropping up in the aftermath of natural disasters like the Asian tsunami and Hurricane Katrina: privatized aid, privatized security infrastructure, privatized military, and so on. I felt like I was catching a glimpse of the future when I was doing the field reporting.
And I think that’s why I feel so passionately about the fact that climate change is not just about things getting hotter, it’s about things getting meaner and much more divided. We saw this during Hurricane Katrina—and to a lesser extent, during Hurricane Sandy—where there was such a huge gap between how that storm was experienced by people of means and people who were in public housing. It’s very cruel the way this plays out.
But that’s not the worst of it. The worst is when the developers come in and say, “Well, let’s just bulldoze the public housing and have condos instead.” Or, “Let’s use this as an opportunity to privatize the school system,” which is what happened in New Orleans.
This is what our current economic system is built to do, and it does it. And I’ve spent a lot of time documenting that.
We can count on that pattern continuing, unless there is a plan for a progressive response to the climate crisis.
by Dan Drollette Jr, Bulletin of the Atomic Scientists | Read more:
Image: Anya Chibis for The Guardian
Sunday, November 29, 2015
The Neurofix: Chicken Soup for the Ageing Brain
Stem cell therapies for the scourges of old age are on the near horizon. Will they come in time for the Baby Boomers?
No amount of Botox or Pilates can stave off the loss of brain cells, a steady erosion that began, ironically, in the days when we were part of the Woodstock nation. The brain reaches its maximum weight by age 20 and then slowly starts shrinking, losing 10 per cent or more of its volume over a lifetime. By our 50s, we’re experiencing mild mental glitches, those unsettling ‘senior moments’ when we’ve misplaced the keys for the umpteenth time or can’t remember the name of an acquaintance, all of it symptomatic of the steady erosion of neurons in our brains.
We make feeble jokes about our failing memories, but quietly worry that they are harbingers of something worse – such as scourges from dementia to Parkinson’s, which increase in frequency as we age. No matter how it manifests, the progressive shrinkage of our brain and the faulty wiring in our neural circuitry slowly rob us of our memories, identity and personality, the abilities that give our lives meaning and purpose, and the physical and emotional capacity to fully embrace the world.
All this is magnified by the demographic time bomb that threatens to exhaust society’s resources: by 2050, more than 400 million people worldwide will be aged over 80, many caring for the explosion of friends and family suffering from brain afflictions of varying kinds. The burden could bankrupt our health system – or, new technology based on neural stem cells, the progenitor cells of the nervous system, could intervene, reversing neurodegeneration, healing damaged brains and averting catastrophe in the nick of time. (...)
A team researching Alzheimer’s at the University of California, Irvine has shored up memory by transferring stem cells to the brains of mice. To do their work, they compared the performance on a simple memory test of a group of healthy mice with ones who were genetically altered to develop brain lesions that mimic Alzheimer’s. The fit mice remembered their surroundings about 70 per cent of the time, while the recall rate for the impaired animals was a scant 50 per cent. Scientists then injected the hippocampus – the brain region responsible for memory storage – of the injured mice with 200,000 neural stem cells that were engineered to glow green under an ultraviolet light so their progress could be tracked. Three months later, when both groups of mice were given the same test, they scored the same – about 70 per cent recall – while a control group of damaged mice that didn’t receive the stem cells still had significant mental deficits.
Significantly, only a handful – about 6 per cent – of the implanted cells were transformed into neurons, so the beneficial effects didn’t occur because they simply replaced dead cells. Yet there was a 75 per cent increase in the number of synapses – the connections between neurons that relay nerve impulses. Subsequent experiments suggest the stem cells release a protein called brain-derived neurotrophic factor (BDNF) that seems to nurse the injured neurons back to health by keeping them alive and functional, and prompt the surrounding tissue to produce new neurites (long, thin structures called axons and dendrites that transmit electrical messages).
In replications of these studies at labs around the world, treated lab animals showed improvements even after months – roughly the equivalent of about a decade in human years. ‘The stem cells were acting as a fertiliser of sorts for the surviving cells,’ says Mathew Blurton-Jones, a neuroscientist at the University of California, Irvine involved in the research. When scientists artificially reduced the amount of BDNF the stem cells produced, the benefit disappeared too. (...)
A dose of stem cells delivered to just the right sweet spot could mend the fractured neural circuits that ferry signals throughout the brain, holding promise for relief of psychiatric diseases from bipolar disorder to schizophrenia. The technology might one day ease learning disabilities, including deficits in information processing and attention. But the application closest at hand is the one that could rescue the boomer generation, my friends, and me, from the looming loss of our memories, our social skills, and our very selves. One day soon, neural stem cells will be used like brick and mortar to shore up the crumbling walls in our brains and restore lost functions so we’re almost good as new. ‘We can change the face of therapeutics with neural stem cells,’ says Eva Feldman, a neurologist at the University of Michigan who is a leading stem cell researcher. ‘They’re like chicken soup for the brain.’
by Linda Marsa, Aeon | Read more:
Image: Centre Jean Perrin/SPL
Machine Intelligence In The Real World
I’ve been laser-focused on machine intelligence in the past few years. I’ve talked to hundreds of entrepreneurs, researchers and investors about helping machines make us smarter.
In the months since I shared my landscape of machine intelligence companies, folks keep asking me what I think of them — as if they’re all doing more or less the same thing. (I’m guessing this is how people talked about “dot coms” in 1997.)
On average, people seem most concerned about how to interact with these technologies once they are out in the wild. This post will focus on how these companies go to market, not on the methods they use.
In an attempt to explain the differences between how these companies go to market, I found myself using (admittedly colorful) nicknames. It ended up being useful, so I took a moment to spell them out in more detail so, in case you run into one or need a handy way to describe yours, you have the vernacular.
The categories aren’t airtight — this is a complex space — but this framework helps our fund (which invests in companies that make work better) be more thoughtful about how we think about and interact with machine intelligence companies.
“Panopticons” Collect A Broad Dataset
Machine intelligence starts with the data computers analyze, so the companies I call “panopticons” are assembling enormous, important new datasets. Defensible businesses tend to be global in nature. “Global” is very literal in the case of a company like Planet Labs, which has satellites physically orbiting the earth. Or it’s more metaphorical, in the case of a company like Premise, which is crowdsourcing data from many countries.
With many of these new datasets we can automatically get answers to questions we have struggled to answer before. There are massive barriers to entry because it’s difficult to amass a global dataset of significance.
However, it’s important to ask whether there is a “good enough” dataset that might provide a cheaper alternative, since data license businesses are at risk of being commoditized. Companies approaching this space should feel confident that either (1) no one else can or will collect a “good enough” alternative, or (2) they can successfully capture the intelligence layer on top of their own dataset and own the end user.
Examples include Planet Labs, Premise and Diffbot.
“Lasers” Collect A Focused Dataset
The companies I like to call “lasers” are also building new datasets, but in niches, to solve industry-specific problems with laser-like focus. Successful companies in this space provide more than just the dataset — they also must own the algorithms and user interface. They focus on narrower initial uses and must provide more value than just data to win customers.
The products immediately help users answer specific questions like, “how much should I water my crops?” or “which applicants are eligible for loans?” This category may spawn many, many companies — a hundred or more — because companies in it can produce business value right away.
With these technologies, many industries will be able to make decisions in a data-driven way for the first time. The power for good here is enormous: We’ve seen these technologies help us feed the world more efficiently, improve medical diagnostics, aid in conservation projects and provide credit to those in the world that didn’t have access to it before.
But to succeed, these companies need to find a single “killer” (meant in the benevolent way) use case to solve, and solve that problem in a way that makes the user’s life simpler, not more complex.
Examples include Tule Technologies, Enlitic, InVenture, Conservation Metrics, Red Bird, Mavrx and Watson Health.
by Shivon Zilis, TechCrunch | Read more:
Image: Razum Shutterstock
Saturday, November 28, 2015
Korean Thanksgiving
‘This is where Bing Crosby’s buried,’ says my mom from the front seat of my middle aunt’s car. Mother is feeling triumphant because she’s conned me into a twofer. I’d been guilt-tripped into attending Catholic mass and now we were on our way to visit her parents’ gravesites. I should have brought my own car — a Corolla rental — but I’d felt so pleased with myself, so pious and doting to accompany my 65-year-old mother to church, that I never imagined she’d pull a stunt like this.
My youngest aunt is in the back with me, clutching three cellophane-bundled bouquets of flowers in her tiny, star-shaped paws. All four of us are wearing enormous, aggressively Asian sun hats. Mom and I got ours from the Korean dollar-store the day before. They are identical. I picked mine first — an angular, stylised, straw Regency bonnet that looks cool if you dress sort of goth and deconstructed — and she got hers to match. I thought about switching but relented. From the back there is no mistaking that we are together. We don’t look cool.
We’re 15 minutes late like we always are. On the verdant lawn of Holy Cross Cemetery, in Culver City, California, about three hills in from the main gate, I see my mother’s oldest brother, his wife (also in a massive sun hat) and their Yorkshire Terrier, Cherry. The outlook is grim. They’ve staked their claim with two picnic blankets side by side. The blankets mean business. They are flanked by five heaving bags of food, which means we’re in for the long haul.
This is one of two cemeteries I’ve ever been to ever, and I had no idea dining al fresco among tombstones was a thing. I wonder if it’s an Asian thing — which it so would be — and do a 360-degree turn to confirm that, while not exclusive to yellows, most of the families with an elaborate buffet set-up and another blanket to indicate ‘post-lunch napping zone’ are 100 per cent not white.
I text the cousins: no dice. It seems I’m the only kid dumb enough to get roped. Everyone else submitted iron-clad excuses ages ago — work, kids of their own, vague previous engagements (that I suspect to be golf), distance — and no one gave me the heads-up. Rookie move: it’s the weekend before the 15th day of the eighth month of the lunar calendar. That means it’s almost Korean Thanksgiving, an occasion for reflection and time-suck ancestral memorial rites.
Lately, I’m off my game with this stuff. I’d forgotten it was Sunday too, since I live in New York and work as a freelancer: days of the week are insignificant except when it comes to deadlines. Either way, I’ll be lucky to get out of here in less than two hours. My uncle’s wife squeals when she sees my mother. They haven’t seen each other in more than a year, and have stockpiled gossip to workshop. I should have brought a book.
by Mary H.K. Choi, Morning News | Read more:
Image: Michel Setboun
Friday, November 27, 2015
Food, Interrupted
The problem with food is we care too much. Take the example of Diane, a 48-year-old office manager who took part in a study of eating habits in 2010. She believed food was entirely about pleasure and imagination, a matter of “what I like and what I fancy,” she told an interviewer. She obsessed over the variables that might interfere with her enjoyment—as a gourmet might critique the texture of a sous vide chicken breast or frown at the seasoning of a broth. The temperature of her food was particularly important. Diane invited the researchers to a café nearby so they could see her navigate the menu, or rather navigate its dearth of appetizing options. When dinner was served, she ate rapidly but didn’t finish. She would only eat a cooked meal, she explained, when it was still piping hot.
So Diane was a picky eater. And this might have given the food she ate greater meaning, since in order to truly love a certain dish—not too salty, not too sweet—you have to reject other, lesser forms of it. But in truth Diane complained of being deeply miserable. Her selections were more pained than indulgent. The food she made such a show of ordering at the café was nothing more than a plain egg on toast, which quickly became revolting to her as it cooled. As she neared the age of 50, she felt she’d let her mother down because her fussiness meant they could never share a meal together; her friends no longer invited her to dinner. Though Diane wanted to change her ways, she doubted she could. She lived on a diet of de-food-ified food: processed cheese, breakfast cereal, potato chips, and sliced bread.
Of course, these culinary preferences and the anguish that often trails behind them aren’t uncommon. The British historian Bee Wilson’s new book First Bite takes on the subject of how we learn to eat as children and the habits we end up with as adults. As well as negative health effects, the book describes the contortions people perform in their social and professional lives because of disordered eating: One woman chooses her college on the assurance the cafeteria will serve the kind of pizza she finds acceptable; another has to call any restaurant she plans to visit and check that they will cook a hamburger with absolutely no fixings. Nor are the outcomes of these situations so different from the person who likes a wide range of foods but ends up buying a sandwich for lunch and pizza for dinner. These daily struggles are good examples of a much bigger dysfunction: Why do we find it so hard to eat nourishing, whole foods, even if they are available and we can afford them and we want to eat them?
Any account of the Western diet in the twenty-first century is going to be both a bleak picture and one that contains a lot of candy. A 2002 study of the foods children like to eat—tastes they would, it was hoped, grow out of—revealed that their parents favored the same popcorn, pancakes, ground beef, and pizza. Nostalgic, fattening “kids’ foods” have become part of everyday life: Wilson cites the “cereal milk” sold at David Chang’s Momofuku Milk Bars in New York and the rise of birthday-cake-flavored ice cream that asks to be consumed on the 364 un-special days of the year. (She also has a disdain for cupcakes that made me instantly trust this book.) The average American in 2006 consumed 2,533 calories per day, including 422 calories worth of drinks, compared with 2,090 calories in 1977. In studies of portion size, a common reason to stop eating was boredom.
Wilson’s explanation of how we got to this state of affairs feels the most human of the many that have been offered in the past 15 years. There is a lot of blame to go round after all, and you could start with Eric Schlosser’s target in his 2001 best-seller Fast Food Nation: the huge fast-food companies that invented supersize portions and use Disney-style marketing tactics to sell them to families. Or you could look at the corrupt politics that have given us sugary drinks in schools and deliberately confusing government dietary advice, as Marion Nestle does in Food Politics (2002). Naturally, the military is behind much of this. After World War II, the Army partnered with corporations to create a permanent market for processed foods originally developed as soldiers’ rations, as described in The Combat-Ready Kitchen (2015) by Anastacia Marx de Salcedo. And maybe Kraft and Frito-Lay are just really, deceptively good at what they do. The $1 trillion snack industry, Michael Moss’s Salt, Sugar, Fat (2013) argues, is built on the “bliss point,” an addictive combination of the three title ingredients. It doesn’t help, Michael Pollan suggests in The Omnivore’s Dilemma (2006), that Americans have few long-established food traditions to guide us—what we need is “food rules.”
Wilson doesn’t deny that all of these books identify powerful forces. These factors shape the environment in which we all now must make our individual choices: whether to breakfast on melon, grapefruit, and kombucha, for instance, or to grab a cream cheese-laden bagel and coffee from the nearest food truck en route to work? But ultimately our decisions about food are determined by long-ingrained patterns of likes and dislikes. If I choose the bagel, Wilson would argue, it’s not because I don’t know that it lacks the vitamins of the fruit salad or that I’m about to experience a huge spike in blood sugar, meaning I’ll be hungry but still strangely stuffed by 11 a.m. I know all of this and so do plenty of people—three of the books above were New York Times best-sellers. The major barrier is just that I prefer everything about the bagel. Before the day has even started I will have broken all three of Pollan’s famous rules for a healthy diet: “Eat food. Not too much. Mostly plants.” That is because, as Wilson puts it, in order to follow those rules you have to: “Like real food. Not enjoy feeling overstuffed. And appreciate vegetables.”
by Laura Marsh, TNR | Read more:
Image: Davide Luciano
Thursday, November 26, 2015
Sex After 50 at the Supreme Court
Fifty years after the Supreme Court, in Griswold v. Connecticut, granted married couples the constitutional right to use birth control, here we are back at the court, still wrestling with contraception. Am I the only one who finds this remarkable?
It’s less startling to find abortion also back at the court, given that we’ve never stopped debating abortion even as the birth control wars receded into a dimly remembered past. It’s the conjunction of the two issues that deserves more notice than it has received. Maybe it’s just a coincidence of timing that they now sit side-by-side on the court’s docket, in cases the justices accepted on consecutive Fridays earlier this month for argument and decision later in the current term.
But it feels like more than mere coincidence. Big Supreme Court cases don’t arrive randomly at the justices’ door. Rather, they are propelled by contending forces deep within American society, conflict eventually taking the shape of a legal dispute with sufficient resonance to claim the Supreme Court’s attention. It’s from that perspective, in the waning weeks of Griswold’s anniversary year, that I propose to consider these two crucially important cases.
The birth-control case — actually seven separate appeals that the court has consolidated under the name Zubik v. Burwell — is a challenge to the accommodation the Obama administration has provided for nonprofit organizations with religious objections to covering birth control under their employee health plans, as required under the Affordable Care Act. All these organizations have to do to claim the privilege of opting out is to send a letter to the Secretary of Health and Human Services. The abortion case, Whole Woman’s Health v. Cole, is an appeal by abortion clinics in Texas from a decision upholding state regulations that invoke women’s health as a pretext for destroying the state’s abortion-provider infrastructure.
There are obvious differences between the two cases, which I’ve written about in some detail recently. The contraception case invokes not the Constitution but the Religious Freedom Restoration Act, a 1993 law aimed at shielding religious practices from federal laws that impose on them a “substantial burden” without sufficient justification. Constitutional interpretation will govern the Texas case, in which the clinics are challenging the regulations as the kind of “undue burden” that the Supreme Court’s 1992 decision in Planned Parenthood v. Casey prohibited: a regulation that has “the purpose or effect of placing a substantial obstacle in the path of a woman seeking an abortion of a nonviable fetus.”
But here’s what’s the same: sex, women and religion.
by Linda Greenhouse, NY Times | Read more:
Image: Jabin Botsford/The New York Times
Wednesday, November 25, 2015
Mapo Tofu
[ed. A favorite of one of my uncles, here are three different takes on a classic: Han Dynasty’s Mapo Tofu. Mission Chinese Food’s Mapo Tofu. Momofuku Ssam Bar’s Spicy Pork Sausage & Rice Cakes.]
Defining mapo tofu is like playing a maddening game of twenty questions: Is it plant-based? Yes. Is it vegetarian? Sometimes. Does it have pork? Probably. Is it spicy? Usually. Easy to make? It can be! The mapo tofu galaxy is one of infinite possibilities, spiraling outward from an originally spicy, oily, numbing, meaty sauce/stew of Sichuan origin. (...)
There are a couple versions of the origin story of mapo tofu, but I’m going to tell you the one I like best. Let me take you back to Chengdu, the capital of Sichuan Province, in the late 1800s. There’s this old lady, a tofu maker. She makes tofu every morning and also cooks some tofu dishes for local people or other cooks. She has smallpox scars all over her face, so people call her Ma Po—ma means pockmarks, and po means grandmother.
So there’s a gentleman who comes in to get some food. He’s just come from the market, and he has a bag of minced beef with him. He’s sitting there in Ma Po’s restaurant, and he looks out across the street and sees a very nice restaurant with a very pretty girl. Ma Po, as you know, is not the prettiest lady, and the pretty girl calls out to him to come to her restaurant. He leaves Ma Po’s place and heads across the street.
A few minutes later a table of customers comes in, and they say that they want a tofu dish with beef. Ma Po doesn’t have any beef but the gentleman who left forgot his bag of minced beef, so she’s like, I’m gonna use this motherfuckin’ beef. She makes this dish and she brings it out and the group of men love it. They go crazy over it.
A lot of people think the ma in this dish’s name refers to the numbing sensation you get from a Sichuan peppercorn, which is also called ma in Chinese. But to me it’s all about a person who creates a dish that people loved so much they named it after her. It became the most famous tofu dish that ever came out of China. There’s no tofu dish that is as famous as this. When you talk about Sichuan cuisine, you talk about this dish.
by Brette Warshaw, Lucky Peach | Read more:
Image: Gabriele Stabile
Addiction Treatment Goes Public: AAC's Recovery-Center Empire
Late one afternoon in September 2013, Jeremiah Jackson stopped in at his drug dealer’s house to pick up heroin. While waiting around, he checked his voice mail and found a message from American Addiction Centers, a chain of drug and alcohol treatment clinics. An unfamiliar voice said, “Jeremiah, the game is up. It’s time for you to get help.” Jackson just laughed. “It struck me as humorous at first,” he says.
A 28-year-old college dropout, Jackson had been getting calls from American Addiction Centers for more than a month. His mother had passed his name along to several representatives at the company, and they’d call twice a week offering help. The calls were “a buzz kill,” as he puts it, but he sometimes picked up and listened, because he was lonely, he admits, and knew deep down that he had a problem. He’d moved back in with his parents in Sequim, Wash., after losing his girlfriend and apartment but was doing his best to avoid everyone. “It was a horrible year. It was just me, my dealer, and my bathroom,” he recalls. Still, he would end each conversation with AAC by saying he wasn’t interested.
This time was different. Two days earlier, Jackson had almost overdosed on heroin and methamphetamines in a Walmart parking lot. “I woke up on one of those green electrical boxes, and there were all these ambulances and police cars,” he says. “They’d responded to reports of someone screaming. I guess it was me. I had no idea how I got there. … All I had on were my boxers and my shoes. The rest of my stuff was strewn across the parking lot. I was white as a ghost and freezing.”
With that memory still raw, Jackson drove home from his dealer’s place, shut himself into his room, and listened to the latest voice message several more times. “They let me know they cared,” he says. They also noted that he’d be covered by his insurance. When his mom got home that night, he told her he was ready to do whatever it took to get and stay clean. Days later, Jackson shot up one last time and boarded a plane to Dallas, where he was met at the airport by an AAC representative holding a sign with his name.
American Addiction Centers, founded in 2011 and based in Brentwood, Tenn., is run by Michael Cartwright, a former drug addict and alcoholic who says he’s been sober for 23 years. The company owns eight facilities in six states and treats about 5,000 patients annually. In 2013 its revenue was $116 million, up from $28 million in 2011. Last October, analysts say, it became the first business focused solely on addiction treatment to go public, raising $75 million in an IPO. AAC is currently valued at about $588 million. So far, investing in some of society’s most troubled members seems to be paying off: Since October the company’s stock price has almost doubled, from $15 to $28.
“There’s a lot of opportunity in substance abuse,” says Paula Torch, senior research analyst for Avondale Partners, a Nashville-based firm that underwrote the IPO. There are more addicts than beds in treatment centers, she explains, and the industry is highly fragmented, made up largely of outpatient services and mom and pop operations. The market, meanwhile, is estimated to be worth $35 billion, and while almost 23 million Americans suffer from addiction, only about 4.1 million receive treatment each year, according to 2013 data from the U.S. Substance Abuse and Mental Health Services Administration. (The agency says more than 98 percent of those who don’t get treatment think they don’t need it.) In going public, AAC says it hopes to tap that market, fund a nationwide expansion, introduce a consistent standard of care, and create “a national brand” serving all segments of the population. (...)
Treating substance abuse isn’t like other businesses. The clients, by nature, are at a high risk of injury and death, which might expose the business to lawsuits and bad press; addiction treatment is not well understood; and insurance coverage is subject to regulatory changes. Plus, every successful outcome means losing a customer.
American Addiction Centers’ facilities are upscale, though hardly over-the-top luxurious. They cater to people with solid out-of-network insurance coverage. Each client pays about $800 per day, or $24,000 per month, roughly 90 percent of which is covered by insurance providers, according to Cartwright. The company’s profit margin, he says, is about 15 percent. Each facility has doctors and psychologists with expertise in substance abuse, and most have an on-site pharmacy. The company also has its own laboratory in Nashville. The centers, which have a staff-to-patient ratio slightly bigger than 1 to 1, also treat concurrent psychological issues, because as many as 90 percent of AAC’s patients have mental-health disorders, Cartwright says.
A comfortable environment is important for recovery, he adds, scoffing at what he calls widespread critiques that treatment centers, both luxury and lower-end, charge too much and spend too much on looks. “No one would question that, if my grandmother had cancer, that we would treat her in a beautiful facility with good-quality linens and good-quality food,” he says. “Yet a drug-and-alcohol person you put on a cot in the local mission, and that’s quality care? I do still think that there’s a prejudice around this being a moral issue vs. a disease.”
Cartwright guarantees that a patient who checks in for 90 days can come back for free if he relapses. “We were involved in 15 different federally funded research studies, and the common theme that we kept coming back to, over and over and over, is that the best predictor of outcome is length of stay,” Cartwright says. Six months or longer is even better, according to officials with Columbia University’s National Center on Addiction and Substance Abuse. If insurance won’t cover 90 days, Cartwright suggests a patient check into a cheaper facility. “Look, I personally think it’s more important that you get longer-term treatment than it is you come to me.” He argues that insurance companies may actually save money in the long run by covering one 90-day stay with a good outcome, rather than repeated 30-day stays for a patient who’s likely to relapse again and again.
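For readers who want to sanity-check the per-patient economics described above, here is a minimal back-of-the-envelope sketch using only the figures quoted in the excerpt (the $800 daily rate, the roughly 90 percent insurance share, and the roughly 15 percent margin). The monthly breakdown is an illustration of that arithmetic, not a company disclosure.

```python
# Rough per-patient arithmetic using only the figures quoted above.
# Illustrative only; actual billing and coverage vary by patient and insurer.

daily_rate = 800          # dollars per client per day
days_per_month = 30
insurance_share = 0.90    # share covered by out-of-network insurance, per Cartwright
profit_margin = 0.15      # company-wide margin cited by Cartwright

monthly_bill = daily_rate * days_per_month        # ~$24,000, matching the figure above
insurer_pays = monthly_bill * insurance_share     # ~$21,600
patient_pays = monthly_bill - insurer_pays        # ~$2,400 out of pocket
implied_margin = monthly_bill * profit_margin     # ~$3,600 per patient-month (assumes the
                                                  # company-wide margin applies per patient)

print(f"Monthly bill:   ${monthly_bill:,}")
print(f"Insurer pays:   ${insurer_pays:,.0f}")
print(f"Patient pays:   ${patient_pays:,.0f}")
print(f"Implied margin: ${implied_margin:,.0f}")
```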
by Caroline Winter, Bloomberg | Read more:
Image: Daniel Shea for Bloomberg Businessweek
Labels:
Business,
Drugs,
Health,
Medicine,
Psychology
Tuesday, November 24, 2015
Facebook Quizzes Are (Still) a Privacy Threat
[ed. Yet another reason to avoid Facebook like the plague (as if we needed more).]
An online quiz that illustrates the words you use the most on Facebook as a "word cloud" has gone viral -- and it's a great reminder of why you should be wary of connecting ostensibly fun games with your account. UK-based VPN comparison website Comparitech has delved into how the quiz collects not just your name, but also your birthdate, hometown, education details, all your Likes, photos, browser, language, your IP address and even your friends list if you link it with Facebook. Too many details for a simple game, right? If you agree, you may want to think hard before linking any other FB quiz in the future, because most of them require you to give up a similar list of information.
You'll typically see what details an FB quiz app requires on the page asking you to authorize its connection with the social network. Some apps allow you to choose which info you're willing to share: If you're lucky, you'll be able to give up as little as possible and still be able to play the game. In this case, the application didn't work properly when I didn't allow it to access most of my details. That said, it's pretty easy to click through and overlook the part where you can choose the info an app can access. And if you've been using Facebook extensively, chances are you've done it at least once or twice in the past.
Now the real problem is, like any other entity that collects data, these apps collect it for a reason. Vonvon.me, the mysterious company that created the Your Most Used Words on Facebook quiz, notes in its Privacy Policy that if you log in with FB, you're giving it express permission to continue using your info even after you terminate your account. You're also permitting it to store your details in any of its servers around the world, even in places where your privacy isn't protected by the law. Vonvon does note that it wouldn't share your personal info with third parties unless it has notified you first, but in the same sentence, it admits that the Privacy Policy itself is already one way of notifying you. Tough luck if you haven't read it before clicking OK, because agreeing to the policy is equivalent to allowing the company to sell or share your details.
by Mariella Moon, Engadget | Read more:
Image: uncredited
Why Pfizer’s Deal May Change the System of Taxing Multinationals
[ed. See also: Pfizer takeover: what is a tax inversion deal and why are they so controversial?]
Pfizer’s proposed merger with Allergan is a blockbuster deal in the pharmaceutical industry. History may remember the deal instead for finally killing off the United States’ outdated approach to taxing multinational corporations.
Pfizer will shed its identity as a United States corporation in the deal, notwithstanding the fact that it, not Allergan, is the larger merger partner. Allergan itself is an expatriate; it is nominally based in Ireland, but the bulk of its operations are still in Parsippany, N.J.
Our tax system is premised on taxing United States-based multinational corporations on 35 percent of worldwide income, with a credit for foreign taxes paid. This “worldwide” approach is often identified as anachronistic; most of our global trading partners have adopted some form of a “territorial” approach.
In theory, there’s not anything wrong with taxing American corporations on their worldwide income. The most vexing problem of international tax — trying to figure out the source of income within a multinational operation — would only be exacerbated by a territorial approach.
In practice, our approach has been a failure. One problem is that we allow the deferral of foreign-source income: the profits of foreign subsidiaries are not generally subject to United States tax until “repatriated” in the form of a dividend. Thus, to minimize taxes, multinationals use transfer pricing, cost sharing and other tax planning techniques to shift as much income as possible overseas.
Pfizer mastered this game early on, and its expertise in tax-shifting continues today. For example, Pfizer takes in about 40 percent of its revenue from sales in the United States and 60 percent from foreign sales. Yet, according to its 2014 annual report, Pfizer had about $17 billion in pretax income from overseas, and almost a $5 billion pretax loss here in the United States. By shifting profits overseas, Pfizer pays relatively low cash taxes compared with the nominal United States tax rate of 35 percent. Instead, much of its American tax liability is illusory, taking the form of an obligation to pay taxes in the future if it repatriates cash to the United States.
As of the end of 2014, Pfizer had $74 billion of foreign earnings “indefinitely reinvested” overseas, and another $63 billion in foreign earnings that have not been indefinitely reinvested. Repatriating $137 billion in earnings would generate a United States tax liability of about $48 billion.
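A minimal sketch of where that $48 billion figure comes from, assuming the full 35 percent statutory rate applies to the overseas earnings cited above and setting aside foreign tax credits (which would lower the actual bill):

```python
# Back-of-the-envelope check of the deferred-tax figure cited above.
# Assumes the full 35% statutory rate and no foreign tax credits,
# so this is a rough upper bound, not Pfizer's actual liability.

statutory_rate = 0.35
indefinitely_reinvested = 74e9   # foreign earnings "indefinitely reinvested"
other_foreign_earnings = 63e9    # foreign earnings not indefinitely reinvested

total_overseas = indefinitely_reinvested + other_foreign_earnings
repatriation_tax = total_overseas * statutory_rate

print(f"Overseas earnings:         ${total_overseas / 1e9:.0f} billion")    # ~$137 billion
print(f"Tax if repatriated at 35%: ${repatriation_tax / 1e9:.0f} billion")  # ~$48 billion
```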
Pfizer’s deal means that the United States will never see that $48 billion in tax revenue.
This isn’t Pfizer’s first visit to the circus. In 2004, Pfizer successfully lobbied Congress for a tax holiday. The American Jobs Creation Act of 2004 temporarily allowed companies to bring back foreign earnings at a tax rate of only about 5 percent instead of the usual 35 percent. Pfizer brought back $37 billion, paying less than $2 billion in taxes.
Just 11 years after cleaning out its overseas coffers, Pfizer now finds itself with $137 billion of new earnings “trapped” overseas.
by Victor Fleischer, NY Times/Dealbook | Read more:
Image: Mark Lennihan, Richmond Times-Dispatch
Monday, November 23, 2015
The Subscription Wars Are Here
The second wave of the web is here.
Soon you will be asking friends if they are part of the Google plan or perhaps the Amazon plan. In fact, in the very near future, we might all be part of the Google, Amazon, or possibly Netflix and Facebook plan. It is very possible that our choice of plan will be part of how the coming generation defines itself.
What is the second wave? The second wave is the idea that the internet goliaths of the world are now playing for the $150 or so we spend with the cable companies each month. In an effort to justify and grow the monthly price of their particular content bundle, these Goliaths will acquire, roll up, and merge anything and everything into the offering.
This is an all-out war, and it’s all about who you pay each month for all of your entertainment.
So, what does this mean for us consumers, how did we arrive here, and where do we all go?
Let’s start by understanding what the first wave of the web meant to us as consumers. From a content perspective, the first wave of the web was all about content creators being displaced by social content aggregators. Facebook, Twitter, Instagram and others created and now own the distribution paths that content must travel to reach us, the consumer. These companies can create better ad supported monetization at scale, and provide a better user experience than any single content creator can provide on their own. Over the last decade, we have seen media company after media company cede distribution to aggregators like Google, and more recently to Facebook and others.
While it has been exciting and easy for consumers to receive an infinite amount of free content in the palm of their hand, we all have been playing hot potato on who exactly foots the content bill. For the last decade, we’ve all danced around the fact that it’s impossible to make an ad-supported web economically sustainable for individual content creators and media companies.
With an infinite amount of content online, content creators will never be able to create and sustain enough attention to obtain the advertising rates of yesterday. Today, no professional content company can survive and thrive on ad supported revenue alone. Look at the recent death of Grantland as a case in point. While Grantland is owned by ESPN, Grantland was run independently, and only generated revenue through selling ads next to their content. We have seen so many content companies hop on the ad supported hamster wheel, trying to somehow make it work. Companies like Vice and Buzzfeed have even made the hamster wheel sexy and trendy for periods of time, but in the end, it’s all the same. Nobody can do it forever.
Our culture has also suffered. Content in an ad supported world is mostly miserable. We live in an ugly clickbait world. Advertising models for content have created all of the wrong incentives. We’re subjected to an endless stream of terrible content begging us to click, and then click once more. Ryan Holiday wrote the seminal work on the misaligned incentives that arise for content companies in his 2012 book, “Trust Me, I’m Lying.”
But now, in 2015, something new is happening. We all finally had the sober realization there must be a better way to monetize content beyond advertising.
That’s right. The second wave.
The next wave of the web, the second wave, is essentially the post advertising web. Look at media business headlines in 2015, and they are all about subscription business models. Sure, they get called different names in different formats. In video, we call subscription offerings ‘over the top,’ or “OTT” for short. In print we call it the familiar: a membership or a subscription.
It’s worth noting that all subscription offerings offer freebie content to get you in the door. There is a reason why Netflix and HBO do not hunt you down when you share your login details with all of your closest friends. In business speak, we call this “content marketing”. Content marketing is the idea that companies will make money in the end by some other means after they give you some awesome free shows and articles. This is a huge trend across the web. For the small creators, authors, and one person shops, they give away a staggering amount of daily free content, but are now monetizing through a crowdfunding and tip-jar revolution.
Larger content creators and aggregators, by contrast, need a reliable, steady business model after showing you so much great free content every day. Some large aggregators started by distributing free content, while others offered premium original content from the start.
For example:
- Netflix and HBO in premium original video content.
- Amazon in premium video content, video game content, book publishing (and even hosting web services)
- Spotify in music
- YouTube in UGC video
- Facebook in print publisher content
So in a post-advertising web, the big question to answer is who exactly gets your hard earned money each month? That is: Who are you subscribing to for content? We know it used to be all of those damn cable companies. And now it’s Netflix, Spotify, Amazon, and HBO.
Consumers won’t shell out $9.99 a month or more for an ever-increasing number of standalone à la carte subscriptions. And the fight for truly exclusive content that can distinguish a service will only make premium content more valuable. There is a reason why NBA salaries and franchises have basically doubled over the last few years. Nothing, it seems, is more valuable than live premium sports content.
In order to win and gain market share, the internet Goliaths will be forced to continually acquire, roll up, and consolidate everything into the content bundle.
This is the war. This is the new battle.
by Benjamin Smith, Observer | Read more:
Image: Pixabay
[ed. Well, I like to eat, sleep, drink, and be in love. I like to work, read, learn, and understand life. ~ Langston Hughes, Theme for English B.]
via:
De-Stigmatizing Hawaii’s Pidgin English
“You don’t know how happy this makes me,” I wrote a colleague after she casually sent me a link to a recent news story reporting that the U.S. Census Bureau now recognizes Hawaiian Pidgin English as a language. “Oh really?!” the colleague responded, surprised at my excitement.
After all, how could a seemingly silly decision to include the local, slang-sounding vernacular on a language survey listing more than 100 other options cause so much delight? It’s not like the five-year American Community Survey gleaned accurate data on how many people in Hawaii actually speak Pidgin at home. (Roughly 1,600 of the 327,000 bilingual survey respondents said they speak it, while other sources—albeit imperfect ones—have suggested that as many as half of the state’s population of 1.4 million does.) So why was I reverberating with a sense of, to borrow a Pidgin phrase, chee hu!?
The significance of the gesture is symbolic, and it extends far beyond those who are from Hawaii and/or those who speak Hawaiian Pidgin. It shows that the federal government acknowledges the legitimacy of a tongue widely stigmatized, even among locals who dabble in it, as a crass dialect reserved for the uneducated lower classes and informal settings. It reinforces a long, grassroots effort by linguists and cultural practitioners to institutionalize and celebrate the language—to encourage educators to integrate it into their teaching, potentially elevating the achievement of Pidgin-speaking students. And it indicates that, elsewhere in the country, the speakers of comparable linguistic systems—from African American Vernacular English, or ebonics, to Chicano English—may even see similar changes one day, too.
I reported extensively on the disputes over Pidgin and its role in classrooms when I was an education journalist in Hawaii, where I’m from. It was through this reporting experience—the interviews, the historical research, the observations of classrooms—that I realized how little I understood the language and what it represents. Until then, I didn’t even consider it a language; I thought of it as, well, a “pidgin”—“a language that,” according to Merriam-Webster, “is formed from a mixture of several languages when speakers of different languages need to talk to each other.” It turns out that “Hawaiian Pidgin English” is a misnomer. And it turns out that resistance to the misunderstood language helps explain some of the biggest challenges stymieing educational progress in the state.
Pidgin, according to linguists, is a creole language that reflects Hawaii’s ongoing legacy as a cultural melting pot. Hawaiian Pidgin English developed during the 1800s and early 1900s, when immigrant laborers from China, Portugal, and the Philippines arrived to work in the plantations; American missionaries also came around that time. The immigrants used pidgins—first one that was based in Hawaiian and then one based in English—to communicate. That linguistic system eventually evolved into a creole, which in general develops when the children of pidgin-speakers use the pidgin as a first language. To give you a sense of what Pidgin sounds like, this is how a project out of the University of Hawaii known as Da Pidgin Coup describes this history using the language:
Wen da keiki wen come olda da language wen come into da creole dat linguist kine people call Hawai‘i Creole. Us local people we jus’ call um “Pidgin.” Nowadays kine Pidgin get all da stuff from da pas’ inside. Plenny of da vocabulary for Pidgin come from English but plenny stuff in da gramma come from Hawaiian. Cantonese an’ Portuguese wen also help make da gramma, an’ English, Hawaiian, Portuguese, an’ Japanese wen help da vocabulary da mos’.
It may read like a phonetic interpretation of a really broken version of standard American English, but linguists insist it isn’t. It has its own grammatical system and lexicon; it doesn’t use “are” or “is” in sentences, for example, and incorporates words from an array of languages like “keiki,” which means children in Hawaiian. (...)
According to linguists, the many people in Hawaii who speak both Pidgin and conventional English—whether it be 1,600 people or 700,000—are actually bilingual. “If you don’t treat it as a language, then you get all kinds of problems that come with the stigma,” Kent Sakoda, a professor of second language studies at the University of Hawaii who’s written a book on Pidgin grammar, has explained.
But critics didn’t—and don’t—see it that way. They say allowing it in school undermines kids’ prospects in a globalized workforce, with many citing Hawaii students’ below-average writing and reading scores. This has been a long-standing view, and the state Board of Education even sought to outlaw Pidgin in schools in the late 1980s, though pushback from the community prevented that from happening. “If you use Pidgin, it can really affect your grammar,” former Hawaii Governor Ben Cayetano, who spoke the language growing up, once told me. “I think it does the kids a disservice if you allow them to continue to speak Pidgin.” (...)
When I asked Laiana Wong, a Hawaiian languages professor, whether speaking Pidgin puts kids at a disadvantage, he said that, given the way I had “couched the question, it’s obvious that we recognize that Pidgin is the subaltern language and English has got superiority.”
“Now,” he continued, “if we turn that around and say, well, what about the person who speaks a more standard form of English who cannot speak Pidgin—are they handicapped in Hawaii? And I say yes.”
by Alia Wong, The Atlantic | Read more:
Image: Jennifer Sinco Kelleher / AP
The End of the Internet Dream
In 20 years, the Web might complete its shift from liberator to oppressor.
Twenty years ago I attended my first Def Con. I believed in a free, open, reliable, interoperable Internet: a place where anyone can say anything, and anyone who wants to hear it can listen and respond. I believed in the Hacker Ethic: that information should be freely accessible and that computer technology was going to make the world a better place. I wanted to be a part of making these dreams — the Dream of Internet Freedom — come true. As an attorney, I wanted to protect hackers and coders from the predations of law so that they could do this important work. Many of the people in this room have spent their lives doing that work.
But today, that Dream of Internet Freedom is dying.
For better or for worse, we’ve prioritized things like security, online civility, user interface, and intellectual property interests above freedom and openness. The Internet is less open and more centralized. It’s more regulated. And increasingly it’s less global, and more divided. These trends: centralization, regulation, and globalization are accelerating. And they will define the future of our communications network, unless something dramatic changes.
Twenty years from now,
• You won’t necessarily know anything about the decisions that affect your rights, like whether you get a loan, a job, or if a car runs over you. Things will get decided by data-crunching computer algorithms and no human will really be able to understand why.
• The Internet will become a lot more like TV and a lot less like the global conversation we envisioned 20 years ago.
• Rather than being overturned, existing power structures will be reinforced and replicated, and this will be particularly true for security.
• Internet technology design increasingly facilitates rather than defeats censorship and control.
It doesn’t have to be this way. But to change course, we need to ask some hard questions and make some difficult decisions.
What does it mean for companies to know everything about us, and for computer algorithms to make life and death decisions? Should we worry more about another terrorist attack in New York, or the ability of journalists and human rights workers around the world to keep working? How much free speech does a free society really need?
How can we stop being afraid and start being sensible about risk? Technology has evolved into a Golden Age for Surveillance. Can technology now establish a balance of power between governments and the governed that would guard against social and political oppression? Given that decisions by private companies define individual rights and security, how can we act on that understanding in a way that protects the public interest and doesn’t squelch innovation? Whose responsibility is digital security? What is the future of the Dream of Internet Freedom?
Labels: Critical Thought, Government, history, Law, Technology
To Reach Seniors, Tech Start-Ups Must First Relate to Them
Daily, breathless announcements arrive in my inbox, heralding technology products for older adults.
A “revolutionary” gait-training robot. An emergency response device said to predict falls. A combination home phone and tablet system that “transforms how older seniors connect with and are cared for by their loved ones.”
Daily, too, I hear tales of technology failing in various ways to do what older people or their worried families expect. I hear about frail elders who remove their emergency pendants at bedtime, then fall in the dark when they walk to the bathroom and can’t summon help.
About a 90-year-old in Sacramento who stored his never-worn emergency pendant in his refrigerator. About a Cambridge, Mass., daughter who has tried four or five telephones — not cellphones or smartphones, but ordinary landlines — in an ongoing effort to find one simple enough for her 95-year-old mother to reliably dial her number and have a conversation.
Which scenario represents the likelier future for senior-oriented technology? It depends on whom you ask.
Entrepreneurs are hard at work developing platforms, apps, sites and devices meant to help older adults manage their health, live independently and maintain family and social connections, all laudable goals. Let’s call their efforts silvertech.
Until a few years ago, “the whole tech world wasn’t sufficiently focused on this enormous opportunity,” said Stephen Johnston, a co-founder of Aging2.0, which connects technology companies with the senior care industry. “It’s changing quite rapidly.” He estimated that 1,500 silvertech start-ups had arisen globally in the past three years.
A couple of recent developments have intensified American entrepreneurial interest, said Laurie Orlov, a business analyst who began the Aging in Place Technology Watch blog in 2008.
Last spring, a start-up called Honor, which matches older adults with vetted home care workers, raised $20 million in venture capital from prominent Silicon Valley investors. “That gave all kinds of organizations hope for market potential,” Ms. Orlov said.
In addition, Medicare has begun to broaden the kinds of remote health monitoring — a.k.a. telehealth — that it will cover, though so far only in rural areas or in a pilot program for accountable care organizations. Eventually, remote monitoring will be “the way people will stay out of emergency rooms and nursing homes,” Ms. Orlov predicted.
Yet Mr. Johnston, whose organization convenes pitch events for silvertech developers, acknowledges that “there have definitely been a few missteps, and there haven’t been too many huge wins yet.”
As a geriatrician at the University of California, San Francisco, Dr. Ken Covinsky often hears from Silicon Valley tinkerers with big ideas. He has become something of a skeptic, as he pointed out in a post for the GeriPal blog last month.
“It’s incredibly well meaning,” he said in an interview. “But there are assumptions that are at odds with the problems our patients and families are facing.”
Tech people seem enamored, for example, with the prospect of continually monitoring older people using sensors that transmit information on when they get up, leave the house and open the refrigerator (or don’t).
Aside from the question of whether older adults appreciate such scrutiny, Dr. Covinsky suspects that an hour or two a day from a skilled home care worker (one paid more than minimum wage, he added) would do them more good.
“They don’t necessarily need someone to know when they open the fridge,” he said. “They need someone to make or deliver a good meal.” (...)
Design will play a crucial role in how useful consumers find any of these products, but it presents tricky questions. Do you come up with something specialized for older adults? “You don’t want to be handing smartphones with shiny glass to people with Parkinson’s disease or hand tremors or macular degeneration, and say, ‘Have a nice day,’ ” Ms. Orlov cautioned.
Yet with some exceptions — the Jitterbug phone, for instance — products aimed purely at older adults have often faltered. Sometimes they're too complex, or too difficult for people with dementia, and that is a lot of people.
Or users may balk because the devices become an uncomfortably constant reminder of incapacity. Technology isn’t always the solution to a problem.
by Paula Span, NY Times | Read more:
Image: Luc Melanson
Sunday, November 22, 2015
The Woobie
A "woobie" is a name for any type of character who makes you feel extremely sorry for them. Basically, the first thing you think to say when you see the woobie is: "Aw, poor baby!" Woobification of a character is a curious, audience-driven phenomenon, sometimes divorced from the character's canonical morality.
A story with the Woobie allows the audience to vicariously experience relief from some pain by fantasizing about relieving the Woobie's pain. (No, not that way! Well, okay, sometimes.) Woobification can also tie into a disturbing hurt/comfort dynamic, in which fans enjoy seeing the Woobie tortured so they can wish the hurt away. This is often explored in Hurt/Comfort Fic.
An important aspect of the Woobie is that their suffering must be caused by external sources. A character who suffers as the result of their own actions is a Tragic Hero and does not qualify.
The difference between the Woobie and such Sickeningly Sweet characters as the Littlest Cancer Patient is that the audience actually finds the Woobie compelling rather than pathetic. Where you draw the line is sometimes a matter of opinion.
Sometimes a Woobie goes Omnicidal Maniac and seeks to destroy the world in a bid to make the pain stop, in which case you're dealing with a Woobie, Destroyer of Worlds. Sometimes it's possible to bring such a woobie back from the edge, but other times, only his or her destruction in a Shoot the Dog moment will stop things.
In Lighter and Fluffier fiction, the Woobie can sometimes earn their happy ending.
by TV Tropes | Read more:
Image: uncredited