Thursday, April 24, 2014
Sitting it Out
Ah, spring is here and sidewalk cafés are again blooming across America! Some of my friends are thrilled at this seasonal turn. I am not.
My memories of outdoor dining skew toward the mildly traumatic. Such excursions often begin with companions who all but squeal “Let’s sit outside!” Confronted with such enthusiasm, it’s hard to argue for an indoor seat, and if I do I’m accused of being a troglodyte and killjoy. Enduring a long, silent, and pouty indoor meal is never fun, so I usually capitulate and go outside. Thus I leave the comfort of civilized shade and air-conditioning, and take my seat in the petting zoo set aside for masticating humans.
And here I sit — next to an overflowing dumpster screened partially from view but not in the least from aroma by cheap latticework from Home Depot. Or I’m curbside on a city street where every few minutes a bus passes and emits a great sooty plume of diesel exhaust, which gently alights upon my meal like finely ground pepper. Or, perhaps in the saddest tableau of all, I’m sitting outside in front of a strip mall, corralled by some cagelike ironwork posted with stern wording against seating yourself, and overlooking an asphalt lagoon of thousands of car windshields, each reflecting the sun’s rays directly at me, as if I’m part of an experiment involving thresholds for scorched retinas. I once had to wear two pairs of sunglasses to make it through a lunch.
Truthfully, I’m not entirely opposed to outdoor dining — I’ve spent long afternoons in Europe overstaying my welcome at several beautiful cafés. And even in America I’ve lingered over coffee and drinks at a few lovely outdoor spots — in New York, in New Orleans, along the Lincoln Road Mall in Miami.
But the great majority of American cafés just don’t seem to get it right. They might have the appropriate Bistro Collection Café Chairs. But everything else is slightly amiss, as if designed by someone whose understanding of European café culture arose from having once, long ago, seen the Disney film The Aristocats. The café is poorly positioned, poorly arranged, or too exposed to loud traffic and passing cellphone shouters. Most U.S. cafés seem to relate to street life in an adversarial manner rather than as a contributor. The technical term for this sort of design, I believe, is “fucked-up feng shui.”
by Wayne Curtis, The Smart Set | Read more:
Image: metamerist via Flickr (Creative Commons)
Death and Anger on Everest
[ed. See also: Climbers Leave Everest Amid Regrets and Tensions Among Sherpas.]
For many years, the most lucrative commercial guiding operation on Mt. Everest has been a company called Himalayan Experience, or Himex, which is owned by a New Zealand mountaineer named Russell Brice. In the spring of 2012, more than a month into the climbing season, he became increasingly worried about a bulge of glacial ice three hundred yards wide that was frozen tenuously to Everest’s West Shoulder, hanging like a massive sword of Damocles directly over the main route up the Nepal side of the mountain. Brice’s clients (“members,” in the parlance of Himalayan mountaineering), Western guides, and Sherpas repeatedly had to climb beneath the threatening ice bulge as they moved up and down the mountain to acclimatize and establish a series of higher camps necessary for their summit assault. One day, Brice timed how long it took his head guide, Adrian Ballinger (“who is incredibly fast,” he wrote in the blog post excerpted below), to climb through the most hazardous terrain:
It took him 22 min from the beginning to the end of the danger zone. For the Sherpas carrying a heavy load it took 30 min and most of our members took between 45 min and one hour to walk underneath this dangerous cliff. In my opinion, this is far too long to be exposed to such a danger and when I see around 50 people moving underneath the cliff at one time, it scares me.

Adding to Brice’s concern, some of his most experienced Sherpas, ordinarily exceedingly stoical men, approached him to say that the conditions on the mountain made them fear for their lives. One of them actually broke down in tears as he confessed this. So on May 7, 2012, Brice made an announcement that shocked most of the thousand people camped at the base of Everest: he was pulling all his guides, members, and Sherpas off the mountain, packing up their tents and equipment, and heading home. He was widely criticized for this decision in 2012, and not just by clients who were forced to abandon their dreams of climbing the world’s highest mountain without receiving a refund for the forty-three thousand euros they had paid him in advance. Many of the other expedition leaders also thought Brice was wildly overreacting. The reputation of Himex took a major hit.
After what happened last Friday, though, it’s hard to argue with Brice’s call. On April 18th, shortly before 7 A.M. local time, an overhanging wedge of ice the size of a Beverly Hills mansion broke loose from the same ice bulge that had frightened Brice into leaving Everest in 2012. As it crashed onto the slope below, the ice shattered into truck-size chunks and hurtled toward some fifty climbers laboring slowly upward through the Khumbu Icefall, a jumbled maze of unstable ice towers that looms above the 17,600-foot base camp. The climbers in the line of fire were at approximately nineteen thousand feet when the avalanche struck. Of the twenty-five men hit by the falling ice, sixteen were killed, all of them Nepalis working for guided climbing teams. Three of the bodies were buried beneath the frozen debris and may never be found.
Although many news reports indicated that all the victims were Sherpas, the legendary mountain people who comprise just half of one per cent of the Nepali population, three of the sixteen were members of other, much larger ethnic groups: one was Gurung, one was Tamang, and one was a member of the Hindu Chhetri caste. All, however, were employed as high-altitude climbing Sherpas—an élite profession that deservedly commands respect and admiration from mountaineers around the world. (...)
There is no denying that climbing Everest is a preposterously dangerous undertaking for the members who provide the Sherpas’ income. But running counter to the disturbing trend among Sherpas, climbing Everest has actually grown significantly safer for Western guides and members in recent years, according to the available data. This can be attributed to a number of factors. Western climbers now use bottled oxygen much more liberally than they did in the past; many Western climbers now prophylactically dose themselves with dexamethasone, a powerful steroid, when they ascend above twenty-two thousand feet, which has proven to be an effective strategy for minimizing the risk of contracting high-altitude cerebral edema (HACE) and high-altitude pulmonary edema (HAPE), potentially fatal ailments that are common on Everest; and weather forecasts are much more accurate than they were eighteen or twenty years ago. (...)
The reason the risk remains so much greater for Sherpas can be traced to several things. Sherpas aren’t provided with nearly as much bottled oxygen, because it is so expensive to buy and to stock on the upper mountain, and they tend to be much better acclimatized than Westerners. Sherpas are almost never given dexamethasone prophylactically, because they don’t have personal physicians in their villages who will prescribe the drug on request. And perhaps most significant, Sherpas do all the heavy lifting on Everest, literally and figuratively. The mostly foreign-owned guiding companies assign the most dangerous and physically demanding jobs to their Sherpa staff, thereby mitigating the risk to their Western guides and members, whose backpacks seldom hold much more than a water bottle, a camera, an extra jacket, and lunch. The work Sherpas are paid to do—carrying loads, installing the aluminum ladders, stringing and anchoring thousands of feet of rope—requires them to spend vastly more time on the most dangerous parts of the mountain, particularly in the Khumbu Icefall—the shattered, creaking, ever-shifting expanse of glacier that extends from just above base camp, at seventeen thousand six hundred feet, to the nineteen-thousand-five-hundred-foot elevation. The fact that members and Western guides now suck down a lot more bottled oxygen is wonderful for them, but it means the Sherpas have to carry those additional oxygen bottles through the Icefall for the Westerners to use.
by Jon Krakauer, New Yorker | Read more:
Image: Christian Kober/JAI/Corbis
What We Left Behind
At the nadir of the American occupation, in 2007, Baghdad resembled a medieval city under siege. U.S. soldiers stood guard on every block, part of a force of a hundred and sixty-five thousand throughout the country, along with about thirty thousand contractors and five thousand British soldiers. Entire neighborhoods were sealed off by concrete blast walls, to protect residents from the sectarian killers who roamed the city. Nevertheless, every morning dozens of new corpses appeared in the streets, many of them frozen in their final moments: hands bound, heads bagged, burned with acid, drilled with holes.
Two years after the last American soldiers departed, it’s hard to find any evidence that they were ever there. Blast walls still stand outside office buildings, but only a handful of Americans remain, shuttling around the capital to help Iraqis use U.S. military equipment, and to drill for oil. Iraq has become one of the world’s largest oil producers, but little of the profit reaches ordinary citizens; Baghdad is as drab and trash-strewn as before, its skyline mostly unbroken by new construction. It’s as though the residents were still too exhausted to celebrate the calm that descended in late 2008, not entirely trusting that it would last.
The signature sound of the American war was the blast from a bomb—thousands of them, delivered by car or vest, or buried under the street. The bombs are back, sometimes a half-dozen a day, nearly always deployed by Sunnis to kill Shiites. In January, in a Shiite neighborhood called Kasra, a man parked his sedan in front of a tea shop, turned off the ignition, and walked away. A few moments later, the sedan exploded, obliterating a row of shops and five people unlucky enough to have been close. Twenty-seven others were wounded. One of the dead, a nineteen-year-old taxi-driver named Abdul Karim Latif, was engaged to be married. A few hours later, I watched mourners lift his coffin atop a minibus, draped in a fluorescent-pink bedsheet, to carry it to a cemetery. A group of women wailed. One of the survivors told me, “May God take vengeance on the people who did this.”
The fantastic bloodletting of the civil war, when thousands of Iraqis were dying a month, turned neighborhoods that for centuries had harbored both Sunni and Shiite Muslims into confessionally pure enclaves. Roughly speaking, Sunnis moved to the west of Baghdad and Shiites to the east. These days, whatever security can be found in the city is owed in part to the relentless segregation that took place during the civil war; as Matthew Sherman, a former civilian adviser to the U.S. Army, told me, “There was no one left to kill.” Against the odds, some Baghdad neighborhoods have regained their diversity, passing through an inferno first. In 2006, Adel, a mixed neighborhood in western Baghdad, fell to Sunni insurgents, who murdered dozens of Shiites and forced others from their homes. Today, Adel is mixed again; many of the Shiite families who fled have followed the calm back to their houses. On a recent afternoon, Shiite prayer flags fluttered in the midday breeze.
The resurgence of Iraq’s Shiites is the greatest legacy of the American invasion, which overthrew Sunni rule and replaced it with a government led by Shiites—the first since the eighteenth century. Eight years after Prime Minister Nouri al-Maliki took power, Iraqis are sorting through the consequences. The Green Zone—still known by its English name—has the same otherworldly feel that it did during the American war: a placid, manicured outpost in a jungle of trouble. Now, though, it is essentially a bastion of Shiite power, in a country shot through with angry Sunni citizens.
by Dexter Filkins, New Yorker | Read more:
Image: Moises Saman
Wednesday, April 23, 2014
How America’s Leading Science Fiction Authors Are Shaping Your Future
Stories set in the future are often judged, as time passes, on whether they come true or not. “Where are our flying cars?” became a plaintive cry of disappointment as the millennium arrived, reflecting the prevailing mood that science and technology had failed to live up to the most fanciful promises of early 20th-century science fiction.
But the task of science fiction is not to predict the future. Rather, it contemplates possible futures. Writers may find the future appealing precisely because it can’t be known, a black box where “anything at all can be said to happen without fear of contradiction from a native,” says the renowned novelist and poet Ursula K. Le Guin. “The future is a safe, sterile laboratory for trying out ideas in,” she tells Smithsonian, “a means of thinking about reality, a method.”
Some authors who enter that laboratory experiment with plausible futures—envisioning where contemporary social trends and recent breakthroughs in science and technology might lead us. William Gibson (who coined the term “cyberspace” and will never be allowed to forget it) is well known for his startling and influential stories, published in the 1980s, depicting visions of a hyper-connected global society where black-hat hackers, cyberwar and violent reality shows are part of daily life. For other authors, the future serves primarily as a metaphor. Le Guin’s award-winning 1969 novel, The Left Hand of Darkness—set on a distant world populated by genetically modified hermaphrodites—is a thought experiment about how society would be different if it were genderless.
Because science fiction spans the spectrum from the plausible to the fanciful, its relationship with science has been both nurturing and contentious. For every author who meticulously examines the latest developments in physics or computing, there are other authors who invent “impossible” technology to serve as a plot device (like Le Guin’s faster-than-light communicator, the ansible) or to enable social commentary, the way H. G. Wells uses his time machine to take the reader to the far future to witness the calamitous destiny of the human race.
Sometimes it’s the seemingly weird ideas that come true—thanks, in part, to science fiction’s capacity to spark an imaginative fire in readers who have the technical knowledge to help realize its visions. Jules Verne proposed the idea of light-propelled spaceships in his 1865 novel, From the Earth to the Moon. Today, technologists all over the world are actively working on solar sails.
Jordin Kare, an astrophysicist at the Seattle-based tech company LaserMotive, who has done important practical and theoretical work on lasers, space elevators and light-sail propulsion, cheerfully acknowledges the effect science fiction has had on his life and career. “I went into astrophysics because I was interested in the large-scale functions of the universe,” he says, “but I went to MIT because the hero of Robert Heinlein’s novel Have Spacesuit, Will Travel went to MIT.” Kare himself is very active in science fiction fandom. “Some of the people who are doing the most exploratory thinking in science have a connection to the science-fiction world.”
Microsoft, Google, Apple and other firms have sponsored lecture series in which science fiction writers give talks to employees and then meet privately with developers and research departments. Perhaps nothing better demonstrates the close tie between science fiction and technology today than what is called “design fiction”—imaginative works commissioned by tech companies to model new ideas. Some corporations hire authors to create what-if stories about potentially marketable products.
“I really like design fiction or prototyping fiction,” says novelist Cory Doctorow, whose clients have included Disney and Tesco. “There is nothing weird about a company doing this—commissioning a story about people using a technology to decide if the technology is worth following through on. It’s like an architect creating a virtual fly-through of a building.” Doctorow, who worked in the software industry, has seen both sides of the development process. “I’ve been in engineering discussions in which the argument turned on what it would be like to use the product, and fiction can be a way of getting at that experience.”
by Eileen Gunn, Smithsonian | Read more:
Image: Mehreen Murtaza
Animal Architecture
“Animal Architecture,” by Ingo Arndt and Jürgen Tautz, with a foreword by Jim Brandenburg, is a beautiful new science/photography book exploring the mystery of nature through the “complex and elegant structures that animals create both for shelter and for capturing prey.”
Arndt is a world-renowned nature photographer based in Germany, whose work you may have seen in National Geographic, GEO and BBC Wildlife.
Above, a grey bowerbird's bower in Australia's Northern Territory. "The grey bowerbird goes to extreme lengths to build a love nest from interwoven sticks and then covers the floor with decorative objects. The more artful the arbor, the greater the chance a male has of attracting a mate."
by Xeni Jardin, Boing Boing | Read more:
Image: Ingo Arndt
Renewables Aren’t Enough. Clean Coal Is the Future
Proof that good things don’t always come in nice packages can be found by taking the fast train from Beijing to Tianjin and then driving to the coast. Tianjin, China’s third-biggest city, originated as Beijing’s port on the Yellow Sea. But in recent years Tianjin has reclaimed so much of its muddy, unstable shoreline that the city has effectively moved inland and a new, crazily active port has sprung up at the water’s edge. In this hyper-industrialized zone, its highways choked with trucks, stand scores of factories and utility plants, each a mass of pipes, reactors, valves, vents, retorts, crackers, blowers, chimneys, and distillation towers—the sort of facility James Cameron might have lingered over, musing, on his way to film the climax of Terminator 2.
Among these edifices, just as big and almost as anonymous as its neighbors, is a structure called GreenGen, built by China Huaneng Group, a giant state-owned electric utility, in collaboration with half a dozen other firms, various branches of the Chinese government, and, importantly, Peabody Energy, a Missouri firm that is the world’s biggest private coal company.
By Western standards, GreenGen is a secretive place; weeks of repeated requests for interviews and a tour met with no reply. When I visited anyway, guards at the site not only refused admittance but wouldn’t even confirm its name. As I drove away from the entrance, a window blind cracked open; through the slats, an eye surveyed my departure. The silence, in my view, is foolish. GreenGen is a billion-dollar facility that extracts the carbon dioxide from a coal-fired power plant and, ultimately, will channel it into an underground storage area many miles away. Part of a coming wave of such carbon-eating facilities, it may be China’s—and possibly the planet’s—single most consequential effort to fight climate change.
Because most Americans rarely see coal, they tend to picture it as a relic of the 19th century, black stuff piled up in Victorian alleys. In fact, a lump of coal is a thoroughly ubiquitous 21st-century artifact, as much an emblem of our time as the iPhone. Today coal produces more than 40 percent of the world’s electricity, a foundation of modern life. And that percentage is going up: In the past decade, coal added more to the global energy supply than any other source.
Nowhere is the preeminence of coal more apparent than in the planet’s fastest-growing, most populous region: Asia, especially China. In the past few decades, China has lifted several hundred million people out of destitution—arguably history’s biggest, fastest rise in human well-being. That advance couldn’t have happened without industrialization, and that industrialization couldn’t have happened without coal. More than three-quarters of China’s electricity comes from coal, including the power for the giant electronics plants where iPhones are assembled. More coal goes to heating millions of homes, to smelting steel (China produces nearly half the world’s steel), and to baking limestone to make cement (China provides almost half the world’s cement). In its frantic quest to develop, China burns almost as much coal as the rest of the world put together—a fact that makes climatologists shudder. (...)
Which brings me, in a way, back to the unwelcoming facility in Tianjin. GreenGen is one of the world’s most advanced attempts to develop a technology known as carbon capture and storage. Conceptually speaking, CCS is simple: Industries burn just as much coal as before but remove all the pollutants. In addition to scrubbing out ash and soot, now standard practice at many big plants, they separate out the carbon dioxide and pump it underground, where it can be stored for thousands of years.
Many energy and climate researchers believe that CCS is vital to avoiding a climate catastrophe. Because it could allow the globe to keep burning its most abundant fuel source while drastically reducing carbon dioxide and soot, it may be more important—though much less publicized—than any renewable-energy technology for decades to come. No less than Steven Chu, the Nobel-winning physicist who was US secretary of energy until last year, has declared CCS essential. “I don’t see how we go forward without it,” he says. (...)
Coal is MEGO—until you live near it. MEGO is old journalistic slang for “my eyes glaze over”—a worthy story that is too dull to read. In America, where coal is mostly burned far out of sight, readers tend to react to the word coal by hitting Close Tab.
But people in Hebei don’t think coal is MEGO, at least in my experience. Hebei is the province that surrounds Beijing. When the capital geared up for the 2008 Olympics, the government pushed out the coal-powered utilities and factories that were polluting its air. Mostly, these facilities moved to Hebei. The province ended up with many new jobs. But it also ended up with China’s dirtiest air.
Because I was curious, I hired a taxi to drive in and around the Hebei city of Tangshan, southeast of Beijing. Visibility was about a quarter mile—a good day, the driver told me. Haze gave buildings the washed-out look of an old photographic print. Not long ago, Tangshan had been a relatively poor place. Now the edge of town held a murderer’s row of luxury-car dealerships: BMW, Jaguar, Mercedes, Lexus, Porsche. Most of the vehicles were displayed indoors. Those outside were covered with gray crud.
by Charles C. Mann, Wired | Read more:
Image: Dan Winters
American Labor’s Death
This would be a critical blow for these unions, because it would greatly reduce the cash flow into union offices, and therefore hinder their ability to function and serve members. Small locals could go into severe financial trouble. Larger ones might have to stop their campaigns to reach out to workers to ensure that they sign union cards and pay dues. (Disclosure: Readers should know that the author is employed as an editor for a public sector union in New York City.)
As neoliberalism has steadily killed off American manufacturing since the 1970s, the government sector has become the center of labor’s power. The 2008 financial crisis allowed state-level Republicans to exploit the economic pain to downsize government, which of course means weakening public sector workers’ rights. It started most dramatically with Wisconsin stripping workers there of collective bargaining rights. In Detroit, the city cited its bankruptcy as a reason not to fulfill some of its pension obligations. And hardly a day passes in the right-wing media without all of the world’s ills being blamed squarely on unionized public school teachers.
It’s easy to cast this as the final phase of the Reagan Revolution, in which the New Right began an attack on federal government services and unions, destroying major aspects of both and pulling the Democrats away from class politics and toward the political center (something similar happened at the same time in the United Kingdom with Margaret Thatcher, the unions and the Labour Party). But there’s an alternative narrative.
To borrow a theory from Daniel Gross, an anarchist trade unionist most famous for leading efforts to organize Starbucks baristas, American labor’s decline goes back much further than the rise of the Gipper, to the 1930s, which is most often thought of as labor’s finest hour, when after widespread labor unrest the government enshrined the right to organize in the National Labor Relations Act.
The alternative view is that this codification meant that labor could no longer be an organized opposition force to capitalism, or a vehicle to organize workers not just for better wages and benefits but for a post-capitalist future. Instead, unions became dependent on employers and the government for their power, creating a tripartite political understanding that would remain until the 1970s. In that time, radicals were purged from unions, and while today there are unions like the Industrial Workers of the World, the International Longshore and Warehouse Union and the United Electrical, Radio and Machine Workers of America that advocate that progress comes from confrontation, rather than collaboration, with employers, their numbers are small and their voices absent from the discourse.
And so this partially explains labor’s exclusion from 1960s radicalism—recall Mario Savio’s famous Berkeley Free Speech Movement speech, in which he vowed that students should not be molded by business, government or organized labor, the latter seen as just as big a part of the establishment as the other two. Today, unions find themselves at odds with progressives and radicals on a whole host of issues. The United Mine Workers of America opposes new environmental regulations, and construction unions, citing jobs, are fighting environmentalists who want to block a new oil pipeline. Unions in upstate New York squirm at criminal justice reform measures that mean fewer inmates, which means fewer prisons and fewer prison jobs.
The fact is that despite the right-wing rhetoric that unions are a left-wing enemy to industrial order, unions are historically tethered to the interests of American capitalism. In purely Marxist terms, in the time of détente, from the 1930s to Reagan, unions helped workers recoup some of the surplus value extracted from them in the form of higher wages and benefits, but still allowed enough surplus value extraction in order for business to profit and eventually grow. For blue-collar workers, it was a pretty good deal; this allowed workers to own homes and cars, send their children to college and participate in the political process.
But as Thomas Piketty’s celebrated new history of capital suggests, wealth has a tendency to concentrate, so this agreement became untenable. Moving production to the Global South solved labor questions in the industrial sector, driving down wages and forcing the working class into the largely non-union service sector. That left the government sector.
What has happened there? It can’t be off-shored, but it can be outsourced.
by Ari Paul, Souciant | Read more:
Tuesday, April 22, 2014
The Secret History of Life-Hacking
We live in the age of life-hacking. The concept, which denotes a kind of upbeat, engineer-like approach to maximizing one’s personal productivity, first entered the mainstream lexicon in the mid-2000s, via tech journalists, the blogosphere, and trendspotting articles with headlines like “Meet the Life Hackers.” Since then the term has become ubiquitous in popular culture—just part of the atmosphere, humming with buzzwords, of the Internet age.
Variations on a blog post called “50 Life Hacks to Simplify Your World” have become endlessly, recursively viral, turning up on Facebook feeds again and again like ghost ships. Lifehacker.com, one of the many horses in Gawker Media’s stable of workplace procrastination sites, furnishes office workers with an endless array of ideas on how to live fitter, happier, and more productively: Track your sleep habits with motion-sensing apps and calculate your perfect personal bed-time; learn how to “supercharge your Gmail filters”; oh, and read novels, because it turns out that “reduces anxiety.” The tribune of life hackers, the author and sometime tech investor Timothy Ferriss, drums up recipes for a life of ease with an indefatigable frenzy, and enumerates the advantages in bestselling books and a reality TV show; outsource your bill payments to a man in India, he advises, and you can enjoy 15 more minutes of “orgasmic meditation.”
Life-hacking wouldn’t be popular if it didn’t tap into something deeply corroded about the way work has, without much resistance, managed to invade every corner of our lives. The idea started out as a somewhat earnest response to the problem of fragmented attention and overwork—an attempt to reclaim some leisure time and autonomy from the demands of boundaryless labor. But it has since become just another hectoring paradigm of self-improvement. The proliferation of apps and gurus promising to help manage even the most basic tasks of simple existence—the “quantified self” movement does life hacking one better, turning the simple act of breathing or sleeping into something to be measured and refined—suggests that merely getting through the day has become, for many white-collar professionals, a set of problems to solve and systems to optimize. Being alive is easier, it turns out, if you treat it like a job. (...)
And yet by comparison, the modern-day self-Taylorization of the life hacker has broad appeal. In a way this makes sense: There’s no manager stop-watching you, or forcing you to work in particular ways; you’re ostensibly choosing, of your own will, to make your life better. The way true believers like Ferriss so thoroughly master-plan their lives has a gonzo attractiveness to it. What’s more, “hacking” sounds much better than “management.”
by Nikil Saval, Pacific Standard | Read more:
Image: Philip Gendreau/Bettmann/Corbis
Morgan and Jeff's Divorce Party Invitation
Morgan + Jeff
Kindly Request Your Presence
At a Party to Celebrate
Their Upcoming Divorce
Or, Extreme Makeover: Our Entire Life and All Our Choices Edition
Taking Place at
What is Now Morgan’s Home
On Friday, February 21, 8 pm.
The Party Will Include Dancing, Photos,
Memories, Drinks, and Snacks.
Because Who Needs a Sustained and Loving Relationship
Based on Mutual Admiration and Support
When You Can Have Mini Franks!!
The Party Will Also Include Games Such as:
“Match the Annoying Quality to Morgan or Jeff,”
“Talk About the Early Days and Try to Pinpoint
Precisely When Things Started Going Wrong,”
“Wonder if Marriage is Even a Viable Institution
Or if it is a Construction of the Patriarchy.”
Also: Badminton!
And We Got a Fire Pit.
To ‘Wink’ at the Differences
That Slowly Pulled Morgan + Jeff Apart
There Will Be “Morgan”- and “Jeff”-Themed Areas
To Represent Their Separate Interests.
Morgan’s Theme Celebrates Her Interest in
Reading, Movies, and Learning About Other People.
Jeff’s Celebrates His Interest in
Staring at His Phone 24/7
And Ignoring Morgan’s Basic Human Need
For Connection.
This is Only for
Close Personal Friends And Family
So Please No Plus-Ones.
And No One Invite Tom
Who, as You All Knew Before Jeff Did,
Morgan Has Been Having an Affair With
For Over a Year.
And Please, No Kids!
Though Morgan + Jeff Have Chosen To Separate
They Still Love Each Other Very Much
So Please No Bad-Mouthing
One to the Other
Or Asking Morgan to Detail
All the Weird Sex Stuff Jeff is Into.
by Blythe Roberson, McSweeney's | Read more:
Image: via
Transcending Complacency on Superintelligent Machines
[ed. Not often do you see Stephen Hawking as a co-author of an opinion piece, especially one related to a blockbuster movie.]
Artificial intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fueled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.
The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history.
Unfortunately, it might also be the last, unless we learn how to avoid the risks. (...)
Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it may play out differently than in the movie: as Irving Good realized in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a "singularity" and Johnny Depp's movie character calls "transcendence." One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.
So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a text message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here -- we'll leave the lights on"? Probably not -- but this is more or less what is happening with AI.
by Stephen Hawking, Max Tegmark, Stuart Russell, and Frank Wilczek, Huffington Post | Read more:
Image: AP