Tuesday, December 27, 2016
Monday, December 26, 2016
True[X]: Nonlinear Advertising
It may not be possible to pinpoint the exact moment that media fragmentation reached its tipping point -- when more content was being produced than people had attention to spend with it -- but for some, the launch of the Fox television network 30 years ago was at least an important milestone.
Yes, cable TV’s multichannel universe had already begun splintering media consumption across an array of new channel options, but Fox fragmented consumer attention in an even bigger and more symbolic way, expanding the network television universe from the “Big 3” to the “Big 4.”
The explosion of channel options that would follow -- both linear and nonlinear -- over the next three decades arguably has been the No. 1 factor changing the way brands reach, engage and influence consumers in what some now believe is an “attention economy” that can no longer be valued on the basis of media exposure, but only on actual “engagement.” So it is probably fitting that Fox is the same company that is now leading an effort to shift the industry’s dependence from impressions-based ad exposures to attention-based engagements.
It is why Fox agreed to acquire digital engagement startup true[X] two years ago, and why it has put that team in charge of so-called “nonlinear advertising” revenues.
“The connections between brands and consumers have continued to evolve within digital video environments,” CEO James Murdoch stated when he announced the acquisition. In the two years since, Fox has begun integrating true[X]’s methods -- and importantly, its team -- in an effort to zero-base the economics of the commercial time it sells to its advertisers and presents to its viewers.
“The consumer has more choice than ever before, so the cost of getting their attention is going up,” explains David Levy, a co-founder of true[X], who was recently put in charge of all nonlinear advertising revenues for the Fox Networks Group.
In that role, Levy is developing new products, methods, metrics and advertising models that are all designed to achieve two corresponding goals: to increase the amount of advertising revenue per minute and to reduce the number of ads the average consumer is exposed to during Fox shows airing on nonlinear platforms.
While Levy’s portfolio includes inventory on digital and video-on-demand platforms, the work he is doing could ultimately portend new business models for linear viewing as the ad industry begins rethinking its historic impressions-based ad exposure metrics. Nearly half (47%) of advertisers and agency executives surveyed recently by Advertiser Perceptions for MediaPost said they believe time-spent exposure to ads will become a form of media-buying currency, while 27% believe it will become the ad industry’s standard for media buys.
While only 20% see the discussion surrounding time-spent measures of advertising as a “novelty,” few would argue that it has not become a major focus for Madison Avenue, as advertisers and agency execs are trying to come to terms with increasing choice and fragmentation and decreasing exposure to advertising.
Levy’s mission is to explore alternative solutions to simply increasing the number of ads -- especially ones that increase the yield for both advertisers and consumers, and as a result, for Fox’s programming.
Levy says Fox is developing a suite of new ad products that will be tested over the next year to learn which ones generate the best return for advertisers and viewers, but he acknowledges that shifting the ad industry to “pricing based on attention will be difficult.”
One of the problems, he says, is that it is difficult to scale pure attention-based models like true[X]’s “engagements” to the mass reach and frequency scale of television advertising.
The true[X] engagements are ad experiences that consumers explicitly opt into, usually because the brand is providing access to a premium content or gaming experience. Those engagements generally reap high CPMs, but they have relatively limited reach.
Fox has already begun to experiment with a variety of new formats and will introduce more over the next year, including ones that increase the yield of advertising by improving the relevance and effectiveness of ads via “advanced targeting” methods -- “where we will have better targeting so we can hit someone with a more accurate ad," Levy explains. "When we do that, we are likely to secure higher CPMs. We can then decide whether to reduce the number of commercials in that particular pod.”
by Joe Mandese, MediaPost | Read more:
Image: MediaPost
The Widowhood Effect
I sit cross-legged on a white mat spread on the bathroom floor and examine the rows of medication lined up on the shelf of the vanity – neat piles of green-and-white boxes of blood thinners, a rainbow of pill bottles, painkillers worth thousands of dollars. I study the labels: Percocet, Zofran, Maxeran, dexamethasone. Take daily. Take twice daily. Take with food. Do not crush. Do not chew. Take as needed.
I wonder if a one-month supply of drugs intended to save a sick person’s life is enough to end a healthy one’s. It probably is if you consume them not as directed. Chew them, crush them, don’t take with food. Take handfuls at the same time. But the order matters. You must swallow an anti-nausea pill first so you don’t vomit up a $248 cancer pill. This, I know. I’ve watched someone take cancer medication when he was trying not to die.
I remember the day we brought these drugs home. On the afternoon of June 1, 2013, my 36-year-old husband, Spencer McLean, was discharged from Calgary’s Tom Baker Cancer Centre. As he changed from his hospital gown to his jeans, he let out a sob; he’d grown so thin that his jeans kept sliding down even with his belt cinched as tight as it could go.
On our way out of the cancer centre, we stopped at the hospital pharmacy to fill his prescriptions. We picked up a one-month supply that cost twice our monthly mortgage payment, despite our private insurance and government coverage of his $7,000-a-month cancer therapy. We sat as we waited nearly an hour for the medications to be prepared; Spencer was too tired to stand. When the pharmacist called us to the front, he handed us three white plastic bags filled with boxes and bottles.
We stepped into the foyer of our condo nervously. Our parents had come by to clean up the packaging and plastic needle covers the paramedics had tossed to the floor of our living room in a rush one week earlier before they whisked Spencer to emergency. Neither of us was comfortable being home. We knew a fair amount about medicine and cancer – he, a surgeon; me, a medical journalist. We knew Spencer’s cancer was extraordinarily aggressive. In the three weeks after his diagnosis, cancer galloped through his body at a ruthless pace, laying claim to his kidneys, his lungs, his liver. In its wake, clots formed in his blood, threatening to block arteries and veins. One had already clogged the vessel carrying blood to his liver, causing the organ to swell so large it extended across his abdomen and hogged any space that rightfully belonged to food. Each day became a balancing act in blood consistency: too thin, his kidney bled profusely; too thick, clots threatened to meander into his lungs and kill him.
At home that evening, right on schedule at 7 o’clock, Spencer took his cancer medication, then vomited it up. By morning, he was peeing out blood clots and couldn’t eat or drink. We reached our oncologist on his cellphone and he agreed we needed to return to hospital. We’d been home less than 24 hours.
Spencer and I lay down on our queen-size bed, on top of the white-and-beige duvet we’d received as a wedding present. On the other side of our open window, a bird tapped its beak on a metal vent. Spencer lay on his left side; his right ached too much to place pressure on it. I nuzzled in behind him and put my nose to his back, where I imagined his diseased kidney to be. We wept like that for half an hour. I inhaled deeply and pretended that I was drawing cancer out of his body and into mine. Then, Spencer said, “Let’s go.”
That was the last time we were home together. Three and a half weeks later, Spencer died of complications from renal-cell carcinoma – an agonizing 42 days after the day we sat holding hands and stunned on a hospital bed, as a nephrologist told us the diagnosis.
The widowhood effect
Now, our home is my home. Spencer left everything to me; he’d no time to be more deliberate in his will. He gave me his beloved bikes and skis, his damn pager that woke us up in the middle of the night, his collection of model leg bones and pelvises, and a bathroom full of drugs that were supposed to save his life.
The pile of medication in our bathroom – my bathroom, now – is a remnant of a life that no longer exists. I don’t know whether to dispose of these drugs or keep them in case I need them to end my own life. At 36, I am a widow.
The widowed are two and a half times more likely to die by suicide in the first year of widowhood than the general population. We are, in fact, more likely to die of many causes: heart attacks, car accidents, cancer, many seemingly random afflictions that are not so random after all. There’s a name for this in the scientific literature: the widowhood effect.
It’s dated now but a 1986 paper in the British Medical Journal explored death after bereavement. It opens atypically for a scientific paper: “The broken heart is well established in poetry and prose, but is there any scientific basis for such romantic imagery?” Indeed, there is, according to the author. He found that a strong association exists between spousal bereavement and death.
Multiple studies in the last 40 years have confirmed these findings. A meta-analysis published in 2012 that looked at all published studies of the widowhood effect found that widowhood is associated with a 22-per-cent higher risk of death compared with the married population. The effect is most pronounced among younger widows and widowers, defined as those in their 40s and 50s. The widowed in their 30s, like me, also die at higher rates than our married counterparts, but the difference is not statistically significant – not because it is insignificant but because there are too few in this age group to detect measurable differences.
We are too few and too young to be significant.
by Christina Frangou, Globe and Mail | Read more:
Image: Drew Shannon
Politics 101
More Online Shopping Means More Delivery Trucks
[ed. Glad someone is thinking about this.]
Two converging trends – the rise of e-commerce and urban population growth – are creating big challenges for cities. Online shoppers are learning to expect the urban freight delivery system to bring them whatever they want, wherever they want it, within one to two hours. That’s especially true during the holidays, as shipping companies hustle to deliver gift orders on time.
City managers and policymakers were already grappling with high demand and competing uses for scarce road, curb and sidewalk space. If cities do not act quickly to revamp the way they manage increasing numbers of commercial vehicles unloading goods in streets and alleys and into buildings, they will drown in a sea of double-parked trucks.
The Supply Chain Transportation and Logistics (SCTL) Center at the University of Washington has formed a new Urban Freight Lab to solve delivery system problems that cities and the business sector cannot handle on their own. Funders of this long-term strategic research partnership include the City of Seattle Department of Transportation (SDOT) and five founding corporate members: Costco, FedEx, Nordstrom, UPS and the U.S. Postal Service.
The core problem facing cities is that they are trying to manage their part of a sophisticated data-powered 21st-century delivery system with tools designed for the 1800s – and they are often trying to do it alone. Consumers can order groceries, clothes and electronics with a click, but most cities only have a stripe of colored paint to manage truck parking at the curb. The Urban Freight Lab brings building managers, retailers, logistics and tech firms, and city government together to do applied research and develop advanced solutions.
We have reached the point where millions of people who live and work in cities purchase more than half of their goods online. This trend is putting tremendous pressure on local governments to rethink how they manage street curb parking and alley operations for trucks and other delivery vehicles. It also forces building operators to plan for the influx of online goods. A few years ago, building concierges may have received a few flower bouquets. Now many are sorting and storing groceries and other goods for hundreds of residents every week. (...)
SDOT recently published Seattle’s first draft Freight Master Plan, which includes high-level strategies to improve the urban goods delivery system. But before city managers act, they need evidence to prove which concepts will deliver results.
by Anne Goodchild and Barbara Ivanov, UW/The Conversation | Read more:
Image: AP Photo/Elaine Thompson
Sunday, December 25, 2016
Students Have Built A Coconut-Harvesting Robot
It’s a classic conundrum: Everyone wants coconuts, but no one wants to pick them.
Fear not, though. Students at Amrita University in Kerala have developed a solution—a coconut-harvesting robot.
The students began exploring this idea when a coconut farmer approached them about it in 2013, the Times of India reports. Three years later, they have unveiled their machine, which has grasping arms, a chunky torso, and several circular-sawblade appendages. (You can see some pictures of it here.)
Coconut harvesting is a field ripe for disruption. It’s hard, dangerous work—you either have to climb the tree and hang on while plucking the coconuts, or stand beneath it and saw them off with a long, blade-ended stick. The young people who would normally do it have lately been “taking up more ‘dignified’ professions,” the Times of India says.
Even those people who have stuck with the job are less than efficient: your average human can pick only 80 coconuts in a day. Instead, many farmers are using captive macaque monkeys, who can harvest up to 1600, NPR reported last year.
by Cara Giaimo, Atlas Obscura | Read more:
Image: PEXELS/CC0
Los Angeles Drivers on the 405 Ask: Was $1.6 Billion Worth It?
It is the very symbol of traffic and congestion. Interstate 405, or the 405, as it is known by the 300,000 drivers who endure it morning and night, is the busiest highway in the nation, a 72-mile swerving stretch of pavement that crosses the sprawling metropolis of Los Angeles.
So it was that many Angelenos applauded when officials embarked on one of the most ambitious construction projects in modern times here: a $1 billion initiative to widen the highway. And drivers and others put up with no shortage of disruption — detours and delays, highway shutdowns, neighborhood streets clogged with cars — in the hopes of relieving one of the most notorious bottlenecks anywhere.
Six years after the first bulldozer rolled in, the construction crews are gone. A new car pool lane has opened, along with a network of on- and offramps and three new earthquake-resistant bridges.
But the question remains: Was it worth it?
“In the long term, it will make no difference to the traffic pattern,” said Marcia Hobbs, who has lived her whole life in Bel Air. “I haven’t noticed substantial cutbacks in traffic. As a matter of fact, I would say it was the opposite.”
The cost of the Sepulveda Pass project was supposed to be $1 billion. It has now reached $1.6 billion, after transit officials approved $300 million in new expenses last week.
Peak afternoon traffic time has indeed decreased to five hours from seven hours’ duration (yes, you read that right) and overall traffic capacity has increased. But congestion is as bad — even worse — during the busiest rush hours of 4:30 to 6:30 p.m., according to a study by the county Metropolitan Transportation Authority.
by Adam Nagourney, NY Times | Read more:
Image: Andrew Cullen
Friday, December 23, 2016
Islands of Mass Destruction
On a map of the world, the South China Sea appears as a scrap of blue amid the tangle of islands and peninsulas that make up Southeast Asia between the Indian and Pacific oceans. Its 1.4 million-square-mile expanse, so modest next to its aquatic neighbors, is nonetheless economically vital to the countries that border it and to the rest of us: More than $5 trillion in goods are shipped through it every year, and its waters produce roughly 12 percent of the world’s fish catch.
Zoom in, and irregular specks skitter between the Philippines and Vietnam. These are the Spratly Islands, a series of reefs and shoals that hardly deserved the name “islands” until recently. In the past three years, China, more than 500 miles from the closest of the Spratly reefs, has transformed seven of them into artificial land masses; as it’s reshaped coral and water into runways, hangars sized for military jets, lighthouses, running tracks, and basketball courts, its claim to sovereignty over the watery domain has hardened into an unsubtle threat of armed force.
Mobile signal towers on the newly cemented islands now beam the message, in Chinese and English, “Welcome to China” to cell phones on any ships passing within reach. But its latest moves, in the long-running dispute with its neighbors over the sea, the fish in it, and the oil beneath it, are anything but welcoming: China appears to have deployed weapons systems on all seven islands, and last week seized a U.S. Navy underwater drone.
In the run-up to all this, as most international observers watched the islands bloom in time-lapse on satellite photos, John McManus arrived with a film crew in February 2016, to document a less visible crisis under the water. To McManus, a professor of marine biology and ecology at the University of Miami, the Spratlys aren’t just tiny chips out of a blue background on Google Maps; from dives there in the early 1990s, he remembers seeing schools of hammerhead sharks so dense they eclipsed the light. This time, he swam through miles of deserted dead coral—of the few fish he saw, the largest barely reached 4 inches.
“I’ve never seen a reef where you could swim for a kilometer without seeing a single fish,” he says. (...)
The first signs of what was to come appeared in late 2012. Satellite photos of reefs in the Spratlys showed mysterious arcs, like puffs of cartoon smoke, obscuring the darker areas of coral and rock. A colleague forwarded them to McManus, wondering if the shapes might be signs of muro-ami fishing, where fishermen pound large rocks into a reef, tearing up the coral to scare their prey out of hiding and up into a net above. Another theory, floated first in an article on the Asia Pacific Defense Forum, a military affairs website, explained the arcs as scars left by fishermen harvesting giant clams.
Giant clams are an important species in the rich reef systems of the Indo-Pacific waters; they anchor seaweed and sponges, shelter young fish, and help accumulate the calcium deposits that grow reefs over time. Underwater, the elegantly undulating shells part to reveal a mantle of flesh in rainbow hues: blue, turquoise, yellow, and orange—mottled and spotted with yet more colors. The largest can reach almost 5 feet across and weigh more than 600 pounds. Long hunted for their meat, they’re also prized in the aquarium market, though they’re protected by international law.
McManus found both theories implausible, particularly the giant clam one; the only method he’d ever heard of for fishing the hefty bivalves involved wrestling them by hand into the boat.
As McManus pondered this mystery, tensions in the South China Sea were flaring, with the Chinese fishermen of Tanmen as the tinder. Tanmen is a pinhead of a place on the coast of Hainan Island, China’s equivalent of Hawaii. Temperatures rarely drop below 60F, and blue skies contrast with the smoggy haze over much of the mainland. Tanmen was a subsistence fishing village until Hainan opened up to foreign investment and a Taiwanese entrepreneur arrived in 1990.
The man, Zhan Dexiong, had run a business for years in Southeast Asia turning seashells into beads and handicrafts. Tanmen had a dozen small boats and no electricity, according to Zhan’s son, Zhan Yulong. It did have a cheap and abundant supply of all kinds of seashells, which the locals discarded after taking the meat out. The elder Zhan bought generators, moved machines from his factory in the Philippines, and set up the first foreign venture in town.
By the early 2000s, the success of that first factory had attracted copycats and spurred the creation of a special industrial zone devoted to shell processing. Over the next decade, Chinese consumers, avid buyers of jade and ivory, developed a taste for objets from those factories, intricate sculptures with giant clamshells as the medium. Although China listed giant clams as a protected species, Tanmen fishermen found a loophole, going after the large shells of long dead clams, buried within reefs. By 2012 the shells from giant clams, dead or alive, had become the most valuable harvest for the vessels sailing from Tanmen into the South China Sea. Boats regularly came home with 200-ton hauls, which could sell for 2,000 yuan ($290) a ton—big money in a place where the annual income for a fisherman was 6,000 yuan.
by Dune Lawrence and Wenxin Fan, Bloomberg | Read more:
Image: Howard Chew/Alamy
How We Got From Doc Brown to Walter White
The changing image of the TV scientist.
At the start of the fourth season of Breaking Bad, Walter White angrily watches an inexperienced meth cook make his trademark blue meth. Walter is afraid that mob boss Gus Fring is going to kill him, so he desperately explains that Fring can’t make the “product” without him. When the amateur cook, Victor, says he knows every step of the process, Walter snarls, “So, please, tell me. Catalytic hydrogenation—is it protic or aprotic? Because I forget. And if our reduction is not stereospecific, then how can our product be enantiomerically pure?”
Walter’s scientific knowledge saves him. The ruthless Fring slits Victor’s throat with a box cutter.
Over the course of Breaking Bad, Walter unravels from a frustrated chemistry teacher to a brutal criminal. But no matter how horrible he gets, viewers can’t help but relate to and care about him. Much of this sense of connection comes from lead actor Bryan Cranston’s skillful portrayal of a troubled family man, but it was Breaking Bad creator and head writer Vince Gilligan who conceived the character. He imagined a scientist who is mad without turning him into a mad scientist.
Part of Walter’s appeal is he knows his science. “Vince tried to get the chemistry correct as much as he could, just to make it more believable,” says Donna Nelson, a professor of chemistry at the University of Oklahoma. As Breaking Bad’s science advisor, Nelson helped him achieve that goal. (Her favorite scene in the series is Walter’s sarcastic rejoinder to Victor.) Although they were careful to never give viewers the exact or complete recipe for meth, the chemical reactions are real, and if someone were to synthesize methamphetamine by altering other chemicals’ structures, they would indeed want to make sure the end product is enantiomerically pure: The three-dimensional structure of methamphetamine works on the brain in a certain way to get you high, but the enantiomer, or mirror image, of the same molecule does not.
Breaking Bad is among a host of acclaimed shows in recent times with scientists as protagonists. Westworld, Orphan Black, Masters of Sex, CSI, Bones, House, The Big Bang Theory, and several others have all written scientists as diverse and complex humans who have almost nothing in common with the scientists I saw in the 1980s movies I watched as a kid. Gone is the lone genius with a shed full of goofy contraptions and bubbling liquids. Today’s fictional researchers work in realistic labs, with high-tech equipment, and in teams with others. Their dialogue is scattered with words from the latest scientific literature, and they have so much depth and personality that they carry entire shows.
The change in TV offers insight into the image and impact of scientists today, say communication scholars. Although recent headlines may have been dominated by people who bend scientific facts into the molds of their personal ideologies, surveys reveal a deep public esteem for scientists. Viewers now want and demand their scientists to be realistic, and what the viewer wants, Hollywood delivers. As a result, scientists on screen have evolved from stereotypes and villains to credible and positive characters, due in part to scientists themselves, anxious to be part of the action and the public’s education. (...)
In 1985, George Gerbner, a communications professor at the Annenberg School of Communications at the University of Pennsylvania, led a remarkably detailed study of scientist characters on TV and their impact on culture. Scientists were smart and rational, the report noted, but of all the occupational roles on TV, scientists were the least sociable. In fact, 1 in 6 scientists were portrayed as villains. All in all, the report stated, scientists “presented an image lacking in some respects only in comparison to doctors and other professionals than in absolute terms. But it is a somewhat foreboding image, touched with a sense of evil, trouble, and peril.” Apparently those characters had a negative impact on viewers, especially “heavy viewers,” people who watched four or more hours of TV a day, cultivating an unfavorable orientation toward science.
But things have been looking up for unsociable TV scientists touched with evil. A 2011 study by Anthony Dudo and colleagues, published in Communication Research, takes up where Gerbner and colleagues left off. The authors compared several professions portrayed in prime-time TV shows and found that in the period from 2000 to 2008, only 3 percent of scientist characters were considered “bad,” less than any other TV profession in that period. Portrayals of TV scientists, the authors noted, are mostly positive, and what’s more, heavy viewing can “enhance attitudes toward science for people who share common experiences.”
What happened? Roslynn Haynes, an adjunct associate professor at the School of English, Media and Performing Arts of the University of New South Wales, has studied the representation of scientists in fiction. The world has changed since the 1960s, she says, when one-dimensional mad scientists or goofy side characters ruled. We have different things to worry about these days: political corruption, terrorism, climate change. “We don’t need the scientists to be the bad guys anymore,” says Haynes. “There are so many other bad guys now.” She points out that scientists are now often the ones we turn to for solutions. “We know we need scientists to fix up the mess we’re making of the planet. If there’s any hope at all, it has to come from scientists who monitor the risk and are able to find ways to overcome that risk. Whereas before, scientists were seen as part of the risk.” (...)
It didn’t take long for fictional on-screen scientists to catch up with this new attitude toward their profession. Eight years after Doc Emmett Brown sent his mad invention traveling through time in Back to the Future, scientists in Jurassic Park enthralled visitors with creatures from the past. But something was different now. Although Doc Brown’s chaotic goofiness was still acceptable for scientist characters in 1985, the paleontologists in Jurassic Park (1993) were held to a much higher standard. They did work that viewers recognized as having some root in reality: Dinosaurs, DNA, clean labs with professional lab notebooks. Although it’s not possible to retrieve viable DNA from dinosaur blood in a mosquito trapped in amber, the idea isn’t entirely implausible. Just this month, real paleontologists found a feathered, amber-encased dinosaur tail fragment, in which they detected traces of iron from its blood.
David Kirby, a senior lecturer in Science Communication Studies at the University of Manchester, and author of the 2011 book Lab Coats in Hollywood, points to Jurassic Park as the film that marked the start of the trend of scientific realism in movies. The film had incredible visual effects, and they brought in experts to get the scientific details in place. When the film was a box office success, other films tried to copy this attention to realistic detail. They saw that audiences liked it, so why not do the same?
It fit an ongoing trend of increased “realism” across all genres, explains Kirby. “When you’re talking about realism in the context of fiction, you’re not just talking about ‘Did they get the appropriate watch for a particular time period?’ or ‘Did they get the right equipment to do a piece of scientific work?’ The realism is all of it: the ways in which the characters act, the context in which they’re acting.” Filmmakers, Kirby says, “are paying attention to everything in terms of that realism, to try to convey the notion that this is taking place in a world that seems realistic.”
by Eva Amsen, Nautilus | Read more:
Image: Breaking Bad
The Movie That Doesn’t Exist and the Redditors Who Think It Does
In the early Nineties, roughly around 1994, a now 52-year-old man named Don ordered two copies of a brand new video for the rental store his uncle owned and he helped to run.
“I had to handle the two copies we owned dozens of times over the years,” says Don (who wishes to give his first name only). “And I had to watch it multiple times to look for reported damages to the tape, rewind it and check it in, rent it out, and put the boxes out on display for rental.”
In these ways, the film Don is speaking of is exactly like the hundreds of others in his uncle’s shop. In one crucial way, however, it is not. The movie that Don is referring to doesn’t actually exist.
“It feels like a part of my childhood has now been stolen from me. How does a movie simply vanish from our history?”
This isn’t Don speaking, but another man – who he has never met – named Carl*. Carl, whose name has been changed because he wishes to remain anonymous, recalls watching a movie called Shazaam with his sister in the early Nineties, and has fond memories of discussing it with her over the last 20 years. In their recollections, the movie starred the American stand-up comedian Sinbad – real name David Adkins – as an incompetent genie who granted wishes to two young children.
“I’ve taken to Craigslist and have posted a bounty of $1,000 for anyone that can turn up a copy of this movie, whether it was ‘accidentally’ kept from Blockbuster or if someone made their own bootleg VHS copy. I want to be able to make it known that the movie is indeed real,” says Carl.
Meredith Upton, a 25-year-old videographer from Nashville, Tennessee, also remembers the same film. “Whenever I would see Sinbad anywhere in the media I would recall him playing a genie,” she says. “I remember the name of the film as Shazaam. I remember two children accidentally summoning a genie… and they try and wish for their dad to fall in love again after their mother’s passing, and Sinbad can’t [grant the wish].”
Don goes even further. Although he is not certain that the movie was called Shazaam, he has detailed scene-by-scene recollections of the film, which include the children wishing for a new wife for their father, the little girl wishing for her broken doll to be fixed, and the movie finale taking place at a pool party. Don says he remembers the film so vividly because customers would bring the video back to his rental store claiming it didn’t work, and he watched it multiple times to try and find the “problem with the tape”.
Meredith, Don, and Carl are three of hundreds of Redditors who have used the popular social news site to discuss their memories of Shazaam. Together they have scoured the internet to find evidence that the movie existed but each has repeatedly come up empty-handed. Sinbad himself has even taken to Twitter to deny that he ever played such a role.
How did this Reddit community grow? It all began in 2009. An anonymous individual took to the question-and-answer website Yahoo! Answers to pose its users a simple question. “Do you remember that sinbad movie?” they wrote. “Wasnt there a movie in the early 90s where sinbad the entertainer / comedian played a genie? … help its driving me nuts!”
At the time, nobody remembered the film, and it took another two years for somebody else to ask about it again online. Reddit user MJGSimple wrote on the site: “It’s a conspiracy! I swear this movie exists, anyone have a copy or know where I can find proof!” Replies to the post were sceptical, claiming MJGSimple simply had a false memory.
It wasn’t until last year that things took a dramatic turn.
On 11 August 2015, the popular gonzo news site VICE published a story about a conspiracy theory surrounding the children’s storybook characters the Berenstain Bears. The theory went like this: many people remember that the bears’ name was spelt “Berenstein” – with an “e” – but pictures and old copies proved it was always spelt with an “a”. The fact that so many people had the same false memory was seen as concrete proof of the supernatural.
“Berenstein” truthers believe in something called the “Mandela Effect”: a theory that a large group of people with the same false memory used to live in a parallel universe (the name comes from those who fervently believe that Nelson Mandela died while in prison). VICE’s article about the theory was shared widely, leading thousands of people to r/MandelaEffect, a subreddit for those with false memories to share their experiences.
It was there, just a few hours after the article was posted, that discussions of Shazaam – or the “Sinbad Genie movie” – took off.
“I was dumbfounded to see that there was no evidence of the movie ever being made,” says Carl. “I quickly searched the internet, scouring every way I know how to search, crafting Boolean strings into Google, doing insite: searches, and nothing. Not a damn thing.”
by Amelia Tait, The New Statesman | Read more:
Image: uncredited
Thursday, December 22, 2016
Why Time Management is Ruining Our Lives
Given that the average lifespan consists of only about 4,000 weeks, a certain amount of anxiety about using them well is presumably inevitable: we’ve been granted the mental capacities to make infinitely ambitious plans, yet almost no time at all to put them into practice. The problem of how to manage time, accordingly, goes back at least to the first century AD, when the Roman philosopher Seneca wrote On The Shortness of Life. “This space that has been granted to us rushes by so speedily, and so swiftly that all save a very few find life at an end just when they are getting ready to live,” he said, chiding his fellow citizens for wasting their days on pointless busyness, and “baking their bodies in the sun”.
Clearly, then, the challenge of how to live our lives well is not a new one. Still, it is safe to say that the citizens of first-century Rome didn’t experience the equivalent of today’s productivity panic. (Seneca’s answer to the question of how to live had nothing to do with becoming more productive: it was to give up the pursuit of wealth or high office, and spend your days philosophising instead.) What is uniquely modern about our fate is that we feel obliged to respond to the pressure of time by making ourselves as efficient as possible – even when doing so fails to bring the promised relief from stress.
The time-pressure problem was always supposed to get better as society advanced, not worse. In 1930, John Maynard Keynes famously predicted that within a century, economic growth would mean that we would be working no more than 15 hours per week – whereupon humanity would face its greatest challenge: that of figuring out how to use all those empty hours. Economists still argue about exactly why things turned out so differently, but the simplest answer is “capitalism”. Keynes seems to have assumed that we would naturally throttle down on work once our essential needs, plus a few extra desires, were satisfied. Instead, we just keep finding new things to need. Depending on your rung of the economic ladder, it’s either impossible, or at least usually feels impossible, to cut down on work in exchange for more time.
Arguably the first time management guru – the progenitor of the notion that personal productivity might be the answer to the problem of time pressure – was Frederick Winslow Taylor, an engineer hired in 1898 by the Bethlehem Steel Works, in Pennsylvania, with a mandate to improve the firm’s efficiency. “Staring out over an industrial yard that covered several square miles of the Pennsylvania landscape, he watched as labourers loaded 92lb [iron bars] on to rail cars,” writes Matthew Stewart, in his book The Management Myth. “There were 80,000 tons’ worth of iron bars, which were to be carted off as fast as possible to meet new demand sparked by the Spanish-American war. Taylor narrowed his eyes: there was waste here, he was certain.”
The Bethlehem workers, Taylor calculated, were shifting about 12.5 tons of iron per man per day – but predictably, when he offered a group of “large, powerful Hungarians” some extra cash to work as fast as they could for an hour, he found that they performed much better. Extrapolating to a full work day, and guesstimating time for breaks, Taylor concluded, with his trademark blend of self-confidence and woolly maths, that every man ought to be shifting 50 tons per day – four times their usual amount.
Workers were naturally unhappy at this transparent attempt to pay them the same money for more work, but Taylor was not especially concerned with their happiness; their job was to implement, not understand, his new philosophy of “scientific management”. “One of the very first requirements for a man who is fit to handle pig iron,” wrote Taylor, is “that he shall be so stupid and phlegmatic that he more nearly resembles in his mental makeup the ox than any other type … he is so stupid that the word ‘percentage’ has no meaning for him.”
The idea of efficiency that Taylor sought to impose on Bethlehem Steel was borrowed from the mechanical engineers of the industrial revolution. It was a way of thinking about improving the functioning of machines, now transferred to humans. And it caught on: Taylor enjoyed a high-profile career as a lecturer on the topic, and by 1915, according to the historian Jennifer Alexander, “the word ‘efficiency’ was plastered everywhere – in headlines, advertisements, editorials, business manuals, and church bulletins.” In the first decades of the 20th century, in a Britain panicked by the rise of German power, the National Efficiency movement united politicians on left and right. (“At the present time,” the Spectator noted in 1902, “there is a universal outcry for efficiency in all the departments of society, in all aspects of life.”)
It is not hard to grasp the appeal: efficiency was the promise of doing what you already did, only better, more cheaply, and in less time. What could be wrong with that? Unless you happened to be on the sharp end of attempts to treat humans like machines – like the workers of Bethlehem Steel – there wasn’t an obvious downside.
But as the century progressed, something important changed: we all became Frederick Winslow Taylors, presiding ruthlessly over our own lives. As the doctrine of efficiency grew entrenched – as the ethos of the market spread to more and more aspects of society, and life became more individualistic – we internalised it. In Taylor’s day, efficiency had been primarily a way to persuade (or bully) other people to do more work in the same amount of time; now it is a regimen that we impose on ourselves. (...)
Time management promised a sense of control in a world in which individuals – decreasingly supported by the social bonds of religion or community – seemed to lack it. In an era of insecure employment, we must constantly demonstrate our usefulness through frenetic doing, and time management can give you a valuable edge. Indeed, if you are among the growing ranks of the self-employed, as a freelancer or a worker in the so-called gig economy, increased personal efficiency may be essential to your survival. The only person who suffers financially if you indulge in “loafing” – a workplace vice that Taylor saw as theft – is you.
Above all, time management promises that a meaningful life might still be possible in this profit-driven environment, as Melissa Gregg explains in Counterproductive, a forthcoming history of the field. With the right techniques, the prophets of time management all implied, you could fashion a fulfilling life while simultaneously attending to the ever-increasing demands of your employer. This promise “comes back and back, in force, whenever there’s an economic downturn”, Gregg told me.
Especially at the higher-paid end of the employment spectrum, time management whispers of the possibility of something even more desirable: true peace of mind. “It is possible for a person to have an overwhelming number of things to do and still function productively with a clear head and a positive sense of relaxed control,” the contemporary king of the productivity gurus, David Allen, declared in his 2001 bestseller, Getting Things Done. “You can experience what the martial artists call a ‘mind like water’, and top athletes refer to as ‘the zone’.”
As Gregg points out, it is significant that “personal productivity” puts the burden of reconciling these demands squarely on our shoulders as individuals. Time management gurus rarely stop to ask whether the task of merely staying afloat in the modern economy – holding down a job, paying the mortgage, being a good-enough parent – really ought to require rendering ourselves inhumanly efficient in the first place.
Besides, on closer inspection, even the lesser promises of time management were not all they appeared to be. An awkward truth about Taylor’s celebrated efficiency drives is that they were not very successful: Bethlehem Steel fired him in 1901, having paid him vast sums without any clearly detectable impact on its own profits. (One persistent consequence of his schemes was that they seemed promising at first, but left workers too exhausted to function consistently over the long term.)
Likewise, it remains the frequent experience of those who try to follow the advice of personal productivity gurus – I’m speaking from years of experience here – that a “mind like water” is far from the guaranteed result. As with Inbox Zero, so with work in general: the more efficient you get at ploughing through your tasks, the faster new tasks seem to arrive. (“Work expands to fill the time available for its completion,” as the British historian C Northcote Parkinson realised way back in 1955, when he coined what would come to be known as Parkinson’s law.)
Then there’s the matter of self-consciousness: virtually every time management expert’s first piece of advice is to keep a detailed log of your time use, but doing so just heightens your awareness of the minutes ticking by, then lost for ever. As for focusing on your long-term goals: the more you do that, the more of your daily life you spend feeling vaguely despondent that you have not yet achieved them. Should you manage to achieve one, the satisfaction is strikingly brief – then it’s time to set a new long-term goal. The supposed cure just makes the problem worse.
There is a historical parallel for all this: it’s exactly what happened when the spread of “labour-saving” devices transformed the lives of housewives and domestic servants across Europe and north America from the end of the 19th century. Technology now meant that washing clothes no longer entailed a day bent over a mangle; a vacuum-cleaner could render a carpet spotless in minutes.
Yet as the historian Ruth Cowan demonstrates in her 1983 book More Work for Mother, the result, for much of the 20th century, was not an increase in leisure time among those charged with doing the housework. Instead, as the efficiency of housework increased, so did the standards of cleanliness and domestic order that society came to expect. Now that the living-room carpet could be kept perfectly clean, it had to be; now that clothes never needed to be grubby, grubbiness was all the more taboo. These days, you can answer work emails in bed at midnight. So should that message you got at 5.30pm really wait till morning for a reply? (...)
At the very bottom of our anxious urge to manage time better – the urge driving Frederick Winslow Taylor, Merlin Mann, me and perhaps you – it’s not hard to discern a familiar motive: the fear of death. As the philosopher Thomas Nagel has put it, on any meaningful timescale other than human life itself – that of the planet, say, or the cosmos – “we will all be dead any minute”. No wonder we are so drawn to the problem of how to make better use of our days: if we could solve it, we could avoid the feeling, in Seneca’s words, of finding life at an end just when we were getting ready to live. To die with the sense of nothing left undone: it’s nothing less than the promise of immortality by other means.
But the modern zeal for personal productivity, rooted in Taylor’s philosophy of efficiency, takes things several significant steps further. If only we could find the right techniques and apply enough self-discipline, it suggests, we could know that we were fitting everything important in, and could feel happy at last. It is up to us – indeed, it is our obligation – to maximise our productivity. This is a convenient ideology from the point of view of those who stand to profit from our working harder, and our increased capacity for consumer spending. But it also functions as a form of psychological avoidance. The more you can convince yourself that you need never make difficult choices – because there will be enough time for everything – the less you will feel obliged to ask yourself whether the life you are choosing is the right one.
Personal productivity presents itself as an antidote to busyness when it might better be understood as yet another form of busyness. And as such, it serves the same psychological role that busyness has always served: to keep us sufficiently distracted that we don’t have to ask ourselves potentially terrifying questions about how we are spending our days. “How we labour at our daily work more ardently and thoughtlessly than is necessary to sustain our life because it is even more necessary not to have leisure to stop and think,” wrote Friedrich Nietzsche, in what reads like a foreshadowing of our present circumstances. “Haste is universal because everyone is in flight from himself.”
by Oliver Burkeman, The Guardian | Read more:
Image: Pete Gamlen
A Telephone Call
[ed. See also: Ladies in Waiting]
Please, God, let him telephone me now. Dear God, let him call me now. I won't ask anything else of You, truly I won't. It isn't very much to ask. It would be so little to You, God, such a little, little thing. Only let him telephone now. Please, God. Please, please, please.
"I'll call you at five, darling." "Good-by, darling.,' He was busy, and he was in a hurry, and there were people around him, but he called me "darling" twice. That's mine, that's mine. I have that, even if I never see him again. Oh, but that's so little. That isn't enough. Nothing's enough, if I never see him again. Please let me see him again, God. Please, I want him so much. I want him so much. I'll be good, God. I will try to be better, I will, If you will let me see him again. If You will let him telephone me. Oh, let him telephone me now.
Ah, don't let my prayer seem too little to You, God. You sit up there, so white and old, with all the angels about You and the stars slipping by. And I come to You with a prayer about a telephone call. Ah, don't laugh, God. You see, You don't know how it feels. You're so safe, there on Your throne, with the blue swirling under You. Nothing can touch You; no one can twist Your heart in his hands. This is suffering, God, this is bad, bad suffering. Won't You help me? For Your Son's sake, help me. You said You would do whatever was asked of You in His name. Oh, God, in the name of Thine only beloved Son, Jesus Christ, our Lord, let him telephone me now.
I must stop this. I mustn't be this way. Look. Suppose a young man says he'll call a girl up, and then something happens, and he doesn't. That isn't so terrible, is it? Why, it's going on all over the world, right this minute. Oh, what do I care what's going on all over the world? Why can't that telephone ring? Why can't it, why can't it? Couldn't you ring? Ah, please, couldn't you? You damned, ugly, shiny thing. It would hurt you to ring, wouldn't it? Oh, that would hurt you. Damn you, I'll pull your filthy roots out of the wall, I'll smash your smug black face in little bits. Damn you to hell.
No, no, no. I must stop. I must think about something else. This is what I'll do. I'll put the clock in the other room. Then I can't look at it. If I do have to look at it, then I'll have to walk into the bedroom, and that will be something to do. Maybe, before I look at it again, he will call me. I'll be so sweet to him, if he calls me. If he says he can't see me tonight, I'll say, "Why, that's all right, dear. Why, of course it's all right." I'll be the way I was when I first met him. Then maybe he'll like me again. I was always sweet, at first. Oh, it's so easy to be sweet to people before you love them.
by Dorothy Parker, Classic Short Stories | Read more:
Image: via: