Wednesday, July 5, 2017
The Seventh Man
The man was the last one to tell his story that night. The hands of the clock had moved past ten. The small group that huddled in a circle could hear the wind tearing through the darkness outside, heading west. It shook the trees, set the windows to rattling, and moved past the house with one final whistle.
‘It was the biggest wave I had ever seen in my life,’ he said. ‘A strange wave. An absolute giant.’
He paused.
‘It just barely missed me, but in my place it swallowed everything that mattered most to me and swept it off to another world. I took years to find it again and to recover from the experience–precious years that can never be replaced.’

He cleared his throat, and for a moment or two his words were lost in silence. The others waited for him to go on.
‘In my case, it was a wave,’ he said. ‘There’s no way for me to tell, of course, what it will be for each of you. But in my case it just happened to take the form of a gigantic wave. It presented itself to me all of a sudden one day, without warning. And it was devastating.’
I grew up in a seaside town in the Province of S. It was such a small town, I doubt that any of you would recognize the name if I were to mention it. My father was the local doctor, and so I led a rather comfortable childhood. Ever since I could remember, my best friend was a boy I’ll call K. His house was close to ours, and he was a grade behind me in school. We were like brothers, walking to and from school together, and always playing together when we got home. We never once fought during our long friendship. I did have a brother, six years older, but what with the age difference and differences in our personalities, we were never very close. My real brotherly affection went to my friend K.
K. was a frail, skinny little thing, with a pale complexion and a face almost pretty enough to be a girl’s. He had some kind of speech impediment, though, which might have made him seem retarded to anyone who didn’t know him. And because he was so frail, I always played his protector, whether at school or at home. I was kind of big and athletic, and the other kids all looked up to me. But the main reason I enjoyed spending time with K. was that he was such a sweet, pure-hearted boy. He was not the least bit retarded, but because of his impediment, he didn’t do too well at school. In most subjects, he could barely keep up. In art class, though, he was great. Just give him a pencil or paints and he would make pictures that were so full of life that even the teacher was amazed. He won prizes in one contest after another, and I’m sure he would have become a famous painter if he had continued with his art into adulthood. He liked to do seascapes. He’d go out to the shore for hours, painting. I would often sit beside him, watching the swift, precise movements of his brush, wondering how, in a few seconds, he could possibly create such lively shapes and colours where, until then, there had been only blank white paper. I realize now that it was a matter of pure talent.
One year, in September, a huge typhoon hit our area. The radio said it was going to be the worst in ten years. The schools were closed, and all the shops in town lowered their shutters in preparation for the storm. Starting early in the morning, my father and brother went around the house nailing shut all the storm-doors, while my mother spent the day in the kitchen cooking emergency provisions. We filled bottles and canteens with water, and packed our most important possessions in rucksacks for possible evacuation. To the adults, typhoons were an annoyance and a threat they had to face almost annually, but to the kids, removed as we were from such practical concerns, it was just a great big circus, a wonderful source of excitement.
Just after noon the colour of the sky began to change all of a sudden. There was something strange and unreal about it. I stayed outside on the porch, watching the sky, until the wind began to howl and the rain began to beat against the house with a weird dry sound, like handfuls of sand. Then we closed the last storm-door and gathered together in one room of the darkened house, listening to the radio. This particular storm did not have a great deal of rain, it said, but the winds were doing a lot of damage, blowing roofs off houses and capsizing ships. Many people had been killed or injured by flying debris. Over and over again, they warned people against leaving their homes. Every once in a while, the house would creak and shudder as if a huge hand were shaking it, and sometimes there would be a great crash of some heavy-sounding object against a storm-door. My father guessed that these were tiles blowing off the neighbours’ houses. For lunch we ate the rice and omelettes my mother had cooked, waiting for the typhoon to blow past.
But the typhoon gave no sign of blowing past. The radio said it had lost momentum almost as soon as it came ashore at S. Province, and now it was moving north-east at the pace of a slow runner. The wind kept up its savage howling as it tried to uproot everything that stood on land.
Perhaps an hour had gone by with the wind at its worst like this when a hush fell over everything. All of a sudden it was so quiet, we could hear a bird crying in the distance. My father opened the storm-door a crack and looked outside. The wind had stopped, and the rain had ceased to fall. Thick, grey clouds edged across the sky, and patches of blue showed here and there. The trees in the yard were still dripping their heavy burden of rainwater.
‘We’re in the eye of the storm,’ my father told me. ‘It’ll stay quiet like this for a while, maybe fifteen, twenty minutes, kind of like an intermission. Then the wind’ll come back the way it was before.’
I asked him if I could go outside. He said I could walk around a little if I didn’t go far. ‘But I want you to come right back here at the first sign of wind.’
I went out and started to explore. It was hard to believe that a wild storm had been blowing there until a few minutes before. I looked up at the sky. The storm’s great ‘eye’ seemed to be up there, fixing its cold stare on all of us below. No such ‘eye’ existed, of course: we were just in that momentary quiet spot at the centre of the pool of whirling air.
While the grown-ups checked for damage to the house, I went down to the beach. The road was littered with broken tree branches, some of them thick pine boughs that would have been too heavy for an adult to lift alone. There were shattered roof tiles everywhere, cars with cracked windshields, and even a doghouse that had tumbled into the middle of the street. A big hand might have swung down from the sky and flattened everything in its path.
K. saw me walking down the road and came outside.
‘Where are you going?’ he asked.
‘Just down to look at the beach,’ I said.
Without a word, he came along with me. He had a little white dog that followed after us.
‘The minute we get any wind, though, we’re going straight back home,’ I said, and K. gave me a silent nod.
The shore was a 200-yard walk from my house. It was lined with a concrete breakwater–a big dyke that stood as high as I was tall in those days. We had to climb a short flight of steps to reach the water’s edge. This was where we came to play almost every day, so there was no part of it we didn’t know well. In the eye of the typhoon, though, it all looked different: the colour of the sky and of the sea, the sound of the waves, the smell of the tide, the whole expanse of the shore. We sat atop the breakwater for a time, taking in the view without a word to each other. We were supposedly in the middle of a great typhoon, and yet the waves were strangely hushed. And the point where they washed against the beach was much farther away than usual, even at low tide. The white sand stretched out before us as far as we could see. The whole, huge space felt like a room without furniture, except for the band of flotsam that lined the beach.
We stepped down to the other side of the breakwater and walked along the broad beach, examining the things that had come to rest there. Plastic toys, sandals, chunks of wood that had probably once been parts of furniture, pieces of clothing, unusual bottles, broken crates with foreign writing on them, and other, less recognizable items: it was like a big candy store. The storm must have carried these things from very far away. Whenever something unusual caught our attention, we would pick it up and look at it every which way, and when we were done, K.’s dog would come over and give it a good sniff.
We couldn’t have been doing this more than five minutes when I realized that the waves had come up right next to me. Without any sound or other warning, the sea had suddenly stretched its long, smooth tongue out to where I stood on the beach. I had never seen anything like it before. Child though I was, I had grown up on the shore and knew how frightening the ocean could be–the savagery with which it could strike unannounced.
And so I had taken care to keep well back from the waterline. In spite of that, the waves had slid up to within inches of where I stood. And then, just as soundlessly, the water drew back–and stayed back. The waves that had approached me were as unthreatening as waves can be–a gentle washing of the sandy beach. But something ominous about them–something like the touch of a reptile’s skin–had sent a chill down my spine. My fear was totally groundless–and totally real. I knew instinctively that they were alive. The waves were alive. They knew I was here and they were planning to grab me. I felt as if some huge, man-eating beast were lying somewhere on a grassy plain, dreaming of the moment it would pounce and tear me to pieces with its sharp teeth. I had to run away.
by Haruki Murakami, Granta | Read more:
Image: Clark Little
Tuesday, July 4, 2017
U.S. Military Spending: The Cost of Wars
One of the striking aspects of American military power is how little serious attention is paid to examining the key elements of its total cost by war and mission, and to the linkage between the use of resources and the presence of an effective strategy. For the last several decades, there has been little real effort to examine the costs of key missions and strategic commitments and the longer-term trends in force planning and cost. Both the Executive Branch and the Congress have failed to reform any key aspect of the defense and foreign policy budgets to look beyond input budgeting by line item and by military service, conducted on an annual basis.
The program budgeting and integrated force planning efforts pioneered towards the end of the Eisenhower Administration—and put into practice in the Kennedy and Johnson Administrations—have decayed into hollow shells. The effort to create meaningful Future Year Defense Programs seems to have been given a final death blow by the Budget Control Act (BCA)—legislation originally designed to be so stupid that the Congress could not possibly accept it. Efforts to integrate net assessment with budget submissions were effectively killed by the Joint Staff decades earlier, during the Reagan Administration.
Critical Failures by Both the Executive Branch and Congress
What is even more striking is the failure of both recent presidents and the Congress to properly analyze and justify the cost of America's wars. If one counts the Cold War, the United States has been at war for virtually every year since 1941. The United States has been actively in combat since late 2001, and there is little prospect that it can end the need to use force to check terrorism and violent extremism within the next decade. Moreover, the Cold War may be over, but the United States still faces strategic challenges from Russia and the emergence of China as a major global power in what is already a multipolar world.
"War" may not be the normal state of U.S. national security planning indefinitely into the future, but war—and/or the constant risk of war—is a grim reality of our time. Yet, the Administration and the Congress have tended to treat warfighting as a temporary aberration—as something to be delt with by supplementals or creating short-term budget categories like the Overseas Contingency Operations (OCO) account that seem to reflect the cost of wars, but have become something of a slush fund and a mechanism for selectively avoiding the caps on defense spending set by the BCA. (...)
This report is divided into 12 sections, each of which illustrates the need for better planning, programming, and budgeting; as well as the need for far more transparency and better official reporting:
The charts and tables in this section summarize the actual and projected cost of U.S. wars as reported for the OCO account—drawing largely on earlier work by the Congressional Research Service.
- The Department of Defense's OCO costs of the Afghan conflict since FY2001 will rise to $840.7 billion—if the President's FY2018 budget request is met. They will be $770.5 billion for Iraq.
- The total costs for all OCO spending between FY2001 and FY2018 will be in excess of $1,909 billion. Given the costs omitted from the OCO budget, the real total cost will almost certainly be well over $2 trillion, even using OCO data as the only costs of the wars.
Three alternative cost estimates are also summarized in this section.
- One, by Linda J. Bilmes, puts the total cost at $4 to $6 trillion by the end of FY2016.
- A related estimate by the Watson Institute puts the cost at $4.8 trillion through FY2016.
- A third estimate, by Neta Crawford, puts costs at more than $4.8 trillion through FY2017, plus more than $7.9 trillion in cumulative interest on past appropriations, or more than $12.7 trillion.
If these alternative estimates—and many others—are taken into account, it is all too clear that the United States now has no official estimate of the cost of its wars. Further, without such an estimate, there is no credible basis for either Executive Branch or Congressional review of the budgets and plans for such wars, much less for assessing the ability to effectively execute given strategies and provide credible indicators of the cost-effectiveness and progress of key elements of military and civil activity.
by Anthony H. Cordesman, CSIS | Read more:
Monday, July 3, 2017
Where Commute Is a Four-Letter Word
Hieronymus Bosch painted a torture chamber where mutant beasts snacked on human flesh. Dante conjured fire, ice and a devil with three faces. If either man lived in New York City today, he’d know better. Hell is the subway at rush hour.
Or Penn Station almost anytime. If you’re really cursed, the subway disgorges you there, into a constipated labyrinth where all beauty, civility and dreams of punctuality go to die. It’s the designated site of the “summer of hell,” to begin on July 10, when several tracks shut down for repair and New Jersey Transit and Amtrak won’t be able to live up to their current standard of wretchedness. I can’t for the life of me imagine what worse looks like. Will conductors line us up behind the cabooses and have us push our trains to their destinations?
I’m losing faith in New York. I’m losing patience. Last week we got an especially vivid reminder of what an overwhelmed, creaky menace the city’s infrastructure has become: Two cars on an A train in Upper Manhattan derailed, injuring about three dozen people.
Gov. Andrew Cuomo subsequently declared a state of emergency for the subway system, pledged $1 billion for improvements and demanded a detailed action plan. I have just one question. What took him so long? Actually, I have another. How much of his sudden zest reflects a possible presidential bid and the need to pretty up an ugly blot on his record?
But it’s not just the subway. On so many days in so many ways, I see evidence of a city in the grip of a communal panic attack.
True story: Some weeks ago, I emerged, downtrodden, from my latest debasement on the subway to encounter a traffic jam near the street where I live. Pointlessly and obnoxiously, a driver in one car honked and honked at the cars ahead. This prompted a passing pedestrian to screech at him to stop. Then someone else began to scream at her for adding to the din. And you wonder why more people are wearing bulky headphones over their ears.
Yeah, yeah, I know: No one’s making us live here. And New York isn’t America. It’s a one-off.
But is that really so? Right now the Big Muddle — I’m sorry, Apple — strikes me as a proxy for the country and a cautionary tale. (...)
Popularity-wise, American cities are thriving. Millennials want to live in them. So do many retirees. But New York raises the question of how prepared these ever-denser hubs are. It’s dirtier than it should be. Smellier, too, especially in July and August. Its schools struggle. Even its jails are broken, as the plan to close Rikers Island affirms.
Many of us New Yorkers feared that one of our biggest headaches this year would be frequent, disruptive visits from President Trump, but he has chosen to go elsewhere on weekends. Maybe that should tell us something.
by Frank Bruni, NY Times | Read more:
Image: Daniel Zender

Video ad Absurdum
How a collection of fifteen thousand Jerry Maguire VHS tapes reveals the ugly underbelly of the media-entertainment industry.
If you were walking down Sunset Boulevard in mid-January—right around the time America’s forty-fifth president (a man and a president fit for his time) was inaugurated—you may have come across a familiar, though half-forgotten sight: the façade of a video rental store. Through the window was the usual setup: movie advertisements, shelves of VHS tapes, genre signs, a gumball machine, a couple dorky clerks chitchatting behind the counter, even a curtain with the label XXX, beyond which, you might guess, you could sneak a glimpse of videotaped smut. What was different about this video store—besides the fact that it was 2017, not 1997—was that all the tapes, all the posters, even the celebrity cardboard cut-out and the genre signs plastered high on the shelves were for the same movie: Jerry Maguire. Indeed, it was the only movie available in the entire store, and none of them were for rent. These fifteen thousand “Jerrys,” as they are affectionately known, were put on display by the video/art collective Everything is Terrible! (EIT!), which has been mashing up videos and collecting copies of Jerry Maguire video tapes for about eight years.
On opening night of the exhibit, I watched as about a thousand of LA’s video nerds and scenesters streamed past the wall-to-wall red-and-white Jerry shelves, through the XXX curtains, and into an ankle-deep pool of unspooled Jerry Maguire, wrapping their arms around a cardboard cutout of Tom Cruise talking on his mid-90s flip phone to pinch-zoom and snap Instagram/Facebook/Twitter-ready photos. What (the fuck) to make of this? The store was a meme, hypertrophied and incarnate. Jerrys stacked into pyramids along every wall. Jerry-inspired art selling for hundreds of dollars. Jerry socks. Jerry mix-tapes. A Jerry video game. A treasure chest of still shrink-wrapped Jerrys sent from an adulatory Cameron Crowe (the film’s director). And then the actual (fucking) pyramid—the actual pyramid EIT! is in talks with actual architects to build in the actual desert—a tomb and a shrine “for all the Jerrys to live in for all time,” according to Nic Maier, one of the lead members of the video art collective. Was it Scientological mission creep? A Malkovichian nightmare? A glitch in the matrix? Or a mirror held up to an entire media-drunk generation?
by John Washington, Guernica | Read more:
Image: EIT!
[ed. What a world.]
Saturday, July 1, 2017
Why We Crash Our Motorcycles
What do you learn if you pick 100 riders, put five video cameras and data-logging equipment on their motorcycles and record them for a total of 366,667 miles?
Several things, some of which we knew, some surprising. Intersections are dangerous. We either need to pay better attention or work on our braking techniques, because we crash into the back of other vehicles way too often. We’re not good enough at cornering, especially right turns. And we drop our bikes a lot (probably more often than any of us imagined or were willing to admit).
The study was done for the Motorcycle Safety Foundation by the Virginia Tech Transportation Institute. (...)
The VTTI team explains its methodology, including efforts to standardize and define terms and procedures. All the details are in a 20-page report you can download from the MSF. But here are some of the things I picked out.
Where we crash
Intersections. No surprise there. VTTI created a system to calculate how much a certain scenario or riding behavior increased the odds of a crash or near-crash. An uncontrolled intersection presents nearly 41 times the risk of no intersection. A parking lot or driveway intersection is more than eight times as risky and an intersection with a signal is almost three times as risky.
A downhill grade increased the risk by a factor of four while an uphill grade doubled it. Riders were nine times as likely to crash or have a near-crash incident on gravel or dirt roads than on paved roads. And riders were twice as likely to have an incident in a righthand turn than on a straight section of road (crossing the center line is considered a near-crash scenario, even if nothing else bad happens).
How we crash
We complain all the time about other people on the road trying to kill us, especially cars pulling into our paths. The VTTI study partially backs that up. Of the 99 crashes and near-crashes involving another vehicle, the three categories of other vehicles crossing the rider’s path add up to 19.
Here’s the surprise, however. What’s the most common scenario? Riders hitting (or nearly hitting) another vehicle from behind. There were 35 of those incidents. Are we really almost twice as likely to plow into a stopped car in front of us as to have someone pull into our path? Or should we write this off as the result of a small sample size?
Maybe there are clues in the risk section. Researchers tried to break down rider behavior in crashes and near-crash incidents into two categories: aggressive riding or rider inattention or lack of skills. The cameras and other data helped determine, for example, if the rider ran the red light because of inattention or aggressive riding.
The study found that aggressive riding increased risk by a factor of 18 while inattention or lack of skill increased it by a factor of nine. Combine the two, and odds of an incident increased by 30.
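[ed. For readers wondering what "increased the odds by a factor of N" means here: it is an odds ratio. A minimal sketch of the calculation, using invented counts rather than anything from the VTTI data:]

```python
# Hypothetical counts, NOT from the VTTI study: incidents (crashes or
# near-crashes) observed in riding segments with and without a risk factor.
def odds_ratio(events_with, total_with, events_without, total_without):
    """Odds of an incident when the factor is present, divided by the
    odds when it is absent."""
    odds_with = events_with / (total_with - events_with)
    odds_without = events_without / (total_without - events_without)
    return odds_with / odds_without

# Example: 30 incidents in 1,000 gravel/dirt segments versus
# 35 incidents in 10,000 paved segments (numbers invented).
print(round(odds_ratio(30, 1000, 35, 10000), 1))  # -> 8.8, roughly 9x the odds
```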
Now here's one of the less dramatic findings, but an interesting one, just the same. It seems we drop our bikes a lot. Or at least the riders in the study did. More than half the crashes were incidents some riders wouldn't define as a crash — not a dramatic collision but an incident defined as a case where the "vehicle falls coincident with low or no speed (even if in gear)" not caused by another outside factor. Rider inattention or poor execution are to blame. The study finds "These low-speed 'crashes' appear to be relatively typical among everyday riding," but they are incidents that would never be included in a different kind of study of motorcycle crashes. The cameras, however, capture it all, even our mundane but embarrassing moments.

by Lance Oliver, Revzilla | Read more:
Image: via:
The Importance of Fairness: A New Economic Vision for the Democratic Party
I have been a Democrat my entire life. Today, the Democratic Party matters more than ever because it is the only organization currently capable, at least theoretically, of preventing the Republicans from turning the United States into a fully-fledged banana republic, ruled by and for a handful of billionaire families and corporate chieftains, with a stagnant economy and pre-modern levels of inequality. Yet I cannot find anything to disagree with in Senator Bernie Sanders’s assessment:
“The model the Democrats have followed for the last 10 to 20 years has been an ultimate failure. That’s just the objective evidence. We are taking on a right-wing extremist party whose agenda is opposed time after time and on issue after issue by the vast majority of the American people. Yet we have lost the White House, the U.S. House, the U.S. Senate, almost two-thirds of the governors’ chairs and close to 900 legislative seats across this country. How can anyone not conclude that the Democratic agenda and approach has been a failure?”

A central shortcoming of the party is that, on economic issues, it has nothing to say to people trapped on the wrong side of our country’s growing inequality divide. Hillary Clinton won the “working class” (household income less than $50,000) vote, but by a much smaller margin than Barack Obama in 2012 or 2008—despite Donald Trump’s ardent efforts to alienate African-Americans and Latinos. Some people voted for Trump because of racism or misogyny. But Clinton was also flattened by Trump among voters who feel their financial situation was worse than a year before or who think that life will be worse for the next generation. She lost the Electoral College in the “rust belt” states of the Upper Midwest, whose economies have never fully recovered from the decline of American manufacturing.
The Democratic Party was once the party of working people. So why is it increasingly becoming the party of well-educated, socially tolerant, cosmopolitan city-dwellers? Because, in an age of stagnant median incomes and a disintegrating social safety net, Democrats have no economic message for the many people who are struggling to make ends meet, to pay for college, to stay in a home, or to save for retirement.
This impotence is the product of sweeping changes in the intellectual and political landscape of the United States. As I discuss in my recent book, Economism, contemporary thinking about economic issues is dominated by “economism”: the belief that simplistic models accurately describe the real world and should be the basis of public policy. (For example: The minimum wage is an artificial price floor in the labor market, therefore supply will exceed demand, therefore unemployment must increase.) This naive or disingenuous worldview, according to which unregulated markets produce the best of all possible worlds, is frequently invoked to defend policies that favor the wealthy and justify the vast inequality that results.
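[ed. To see how the textbook argument runs, here is a minimal sketch of the "price floor" model the passage criticizes, with linear supply and demand and coefficients invented purely for illustration. Real labor markets, of course, need not behave like two straight lines.]

```python
# Toy linear labor market, illustrating the simplistic model described above.
# All coefficients are invented; nothing here is calibrated to real data.
def labor_market(wage):
    demanded = 100 - 4 * wage   # jobs employers offer at this wage
    supplied = 10 + 5 * wage    # people willing to work at this wage
    return demanded, supplied

# Market-clearing wage: 100 - 4w = 10 + 5w  =>  w = 10
print(labor_market(10))   # (60, 60): in the model, no unemployment

# Impose a minimum wage above the clearing wage:
d, s = labor_market(12)
print(s - d)              # 18: the model's predicted surplus of job-seekers
```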
Economism was promoted by conservatives who sought to roll back the New Deal and restore a mythical libertarian paradise governed by free markets, with a minimal state and low taxes. Their vision became the platform of the Republican Party in the 1970s and the policy handbook for President Ronald Reagan and every conservative leader since. In response, Democrats have tacked to the right on economic issues. Since Bill Clinton, the Democratic Party’s economic vision has been that prudent management of macroeconomic factors would foster higher private sector growth, which would in turn create jobs and prosperity for working families. The central planks of this platform have included: cutting budget deficits to reduce interest rates; reappointing Republican Federal Reserve chairs who would control inflation; and even seeking a “grand bargain” that would reduce Social Security spending in exchange for modestly higher taxes. As the Republican Party has been taken over by charlatans who insist on cutting taxes and crippling government at every opportunity, Democrats have rebranded themselves as the moderate party of responsible economic stewardship.
But there are two problems with this approach. The first is that it is economism lite. While Republicans say, “Free markets solve all problems,” Democrats respond, “Free markets solve most problems, but markets sometimes fail, so sometimes they need to be judiciously regulated to produce efficient outcomes.” This may be more accurate, but it undermines Democrats’ appeal to people who have not benefited from overall economic growth—because they have the wrong skills, live in the wrong place, got sick at the wrong time, or otherwise got unlucky.
The second problem is that economism lite doesn’t work, at least not anymore. A rising tide might lift all boats, as President John F. Kennedy claimed; but, then again, it might not. Since Ronald Reagan was elected president in 1980, labor productivity—the amount that each person can produce in an hour of work—has grown by 94% (a modest but respectable 1.9% per year); real per capita gross domestic product—total economic output per person—has grown by 82% (1.7% per year). Over that same period, however, median household income has increased by only 16% (less than 0.5% per year). In other words, the country as a whole has become almost twice as rich, but the typical family makes only a little more money than in 1980. Where has all the money gone?
To the very rich, as can be seen in one chart:

(If you compare the top 1% with the bottom 99%, or the top 10% with the bottom 90%, or the top 0.01% with the bottom 90%, you get essentially the same picture.)
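[ed. The annualized rates cited above can be checked with compound-growth arithmetic. A quick sketch, assuming a roughly 36-year window (1980 to 2016):]

```python
# Convert cumulative growth since 1980 into a compound annual rate.
# The ~36-year window (1980-2016) is an approximation.
def annual_rate(total_growth, years=36):
    return (1 + total_growth) ** (1 / years) - 1

for label, growth in [("labor productivity", 0.94),
                      ("real per-capita GDP", 0.82),
                      ("median household income", 0.16)]:
    print(f"{label}: {annual_rate(growth):.2%} per year")
# -> roughly 1.9%, 1.7%, and 0.4%: the figures cited in the text
```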
To be clear, the failure of overall economic growth to benefit the middle and working classes is not solely or even primarily the Democrats' fault. The villain in that story is the Republican conservatives who weakened unions, undermined the social safety net, and slashed taxes on the rich. Globalization and competition from low-wage countries were also factors. But since the onslaught of the conservative revolution, Democrats have played defense by claiming the space once occupied by moderate Republicans. Recall the pivot to deficit reduction in 1993, welfare reform in 1996, the capital gains tax cut of 1997, the commitment to free trade agreements from NAFTA to TPP, and the bipartisan commitment to financial deregulation that helped produce the devastating financial crisis of 2008.
Barack Obama temporarily had a filibuster-proof majority in the Senate, and yet his principal accomplishments were an economic stimulus bill that was more than one-third tax cuts; a health care plan modeled on Mitt Romney’s Massachusetts reforms; a technocratic financial reform bill that neither reduced the dominance of the megabanks that caused the 2008 crisis nor, judging from subsequent experience, deterred them from serial lawbreaking; and a financial system rescue that kept the big banks (and their executives and shareholders) afloat while they fraudulently foreclosed on millions of homeowners. There were positives in Obama’s economic record: The recession would have been worse without the stimulus, millions of people got health coverage, and the Dodd-Frank Act included some steps in the right direction. Taken as a whole, however, Obama governed as what we called a moderate Republican only a few decades ago, and the only vision one can distill from his actions is that of prudently harnessing market forces to generate growth. (Perhaps the president and his advisers would have preferred more progressive policies in some areas such as health care—but they were constrained not just by the Party of No, but also by a Democratic caucus effectively controlled by its conservative wing.) For an unemployed recent graduate buried by student debt, or a factory worker laid off in middle age while underwater on her mortgage, or a retiree who saw her savings evaporate in 2008 and 2009, the argument that Hillary Clinton and the Democrats are not as bad as the Republicans was just not compelling enough.
One of the central themes of my book is that economism is an ideological worldview: a lens through which we see the world, which affects the way we interpret reality and serves the interests of certain groups. Logically, it can only be overthrown by another worldview. And so the book ends this way:
“Millions if not billions of people today hunger to live in a world that is more fair, more forgiving, and more humane than the one that we were born into. Creating a new vision of society worthy of that collective yearning—one that goes beyond the false promises of economism—is the first step toward building a better future for our children. That is the story that remains to be written.”

What the Democratic Party needs is an economic message that addresses the real problems many Americans face on a daily basis (instead of callously insisting that “America is already great”) and resonates with their very real frustrations and anxieties. Both politically and as policy, the idea that the rising tide of economic efficiency and growth would lift all boats has failed. It is time for something new.
by James Kwak, Baseline Scenario | Read more:
Image: www.wid.world
[ed. See also: Conspicuous consumption is over. It’s all about intangibles now.]
Friday, June 30, 2017
Solving the Heroin Overdose Mystery
How small doses can kill.
Heroin, like other opiates, depresses activity in the brain centre that controls breathing. Sometimes, this effect is so profound that the drug user dies, and becomes yet another overdose casualty. Some of these victims die because they took too much of the drug. Others die following self-administration of a dose that appears much too small to be lethal, but why? This is the heroin overdose mystery, and it has been known for more than half a century.
There was a heroin crisis in New York City in the 1960s, with overdose deaths increasing each year of the decade. There were almost 1,000 overdose victims in New York City in 1969, about as many as in 2015. The then chief medical examiner of New York, Milton Helpern, together with his deputy chief, Michael Baden, investigated these deaths. They discovered that many of the dead were not victims of a true pharmacological overdose: often, on the day prior, the victim had administered a comparable dose with no ill effects. Helpern, Baden and colleagues noted that, while it is common for several users to take drugs from the same batch, only rarely does more than one user suffer a life-threatening reaction. They examined heroin packages and used syringes found near dead addicts, and tissue surrounding the sites of fatal injections, and found that victims typically self-administered a normal, usually non-fatal dose of heroin. In 1972, Helpern concluded that ‘there does not appear to be a quantitative correlation between the acute fulminating lethal effect and the amount of heroin taken’.
It was a science journalist, Edward Brecher, who first applied the term ‘overdose mystery’ when he evaluated Helpern’s data for Consumer Reports. Brecher concluded that ‘overdose’ was a misnomer. ‘These deaths are, if anything, associated with “underdose” rather than overdose,’ he wrote.
Subsequently, independent evaluations of heroin overdoses in New York City, Washington, DC, Detroit, and various cities in Germany and Hungary all confirmed the phenomenon – addicts often die after self-administering an amount of heroin that should not kill them.
Most scholarly articles concerning heroin overdose don’t mention the mystery; it is simply assumed that the victim died because he or she administered too much opiate. Even when the mystery is addressed, the explanations are wanting. For example, some have suggested that deaths seen after self-administration of a usually non-lethal dose of heroin result from an allergic-type reaction to additives, such as quinine, sometimes used to bulk up its street package. This interpretation has been discredited.
Others have noted that the effect of a small dose of heroin is greatly enhanced if the addict administers other depressant drugs (such as alcohol) with heroin. Although some cases of overdose can result from such drug interactions, many cases do not.
Some have suggested that the addict might overdose following a period of abstinence, either self-initiated or caused by imprisonment. Thus, tolerance that accumulated during a prolonged period of drug use, and which would be expected to protect the addict from the lethal effect of the drug, could dissipate during the drug-free period. If the addict goes back to his or her usual, pre-abstinence routine, the formerly well-tolerated dose could now be lethal.
But there are many demonstrations that opiate tolerance typically does not substantially dissipate merely with the passage of time. One piece of evidence comes from the addict’s hair, which carries a record of drug use. Many drugs, and drug metabolites, diffuse from the bloodstream into the growing hair shaft; thus, researchers can reconstruct this pharmacological record, including periods of abstinence, using ‘segmental hair analysis’. In a study that analysed the hair of 28 recently deceased heroin-overdose victims in Stockholm, there was no evidence that they had been abstinent prior to death.
A surprising solution to the overdose mystery has been provided by the testimony of addicts who overdosed, then survived to tell the tale. (Overdose is survivable if the antidote, an opiate antagonist, such as naloxone, is administered in a timely manner.) What do these survivors say was special about their experience? In independent studies, in New Jersey and in Spain, most overdose survivors said that they’d administered heroin in a novel or unusual environment – a place where they had not previously administered heroin.
by Shepard Siegel, Aeon | Read more:
Image: Bill Eppridge
Disrupt the Citizen
The ouster of Travis Kalanick last week brings to an end nearly a year of accumulating scandal at Uber. The company—its specious claims to being a world-beating disruptor significantly weakened—now joins Amazon as one of the more frightening entities of our time, with Kalanick taking his place among Elizabeth Holmes, Jeff Bezos, Martin Shkreli, and the late Steve Jobs in the burgeoning pantheon of tech sociopaths. Few moments in history have been so crowded with narcissists: incapable of acknowledging the existence of others, unwilling to permit state and civil society—with their strange, confusing, downright offensive cult of taxes, regulations and public services—to impede their quest for monopolizing the mind, muscles, heart rate, and blood of every breathing person on earth. The Mormons, with their registries of the unsaved, have beaten Silicon Valley to the hosts of the dead—but it’s safe to assume that this, too, will not last. (...)
In the same vein, the proliferating but ever meaningless distinctions between the “bad” Uber and the “good” Lyft have obscured how destructive the rise of ride-sharing has been for workers and the cities they live in. The predatory lawlessness that prevails inside Valley workplaces scales up and out. Both companies entered their markets illegally, without regard to prevailing wages, regulations, or taxes. As with Amazon, which found a way to sell books without sales tax, this turned out to be one of the many boons of lawbreaking. (...)
But lying and rule-breaking to gain a monopoly are old news in liberal capitalism. What ride-sharing companies had to do, in the old spirit of Standard Oil, was secure a foothold in politics, and subject politics to the will of “the consumer.” In a telling example of our times, Uber hired former Obama campaign head David Plouffe to work the political angles. And Plouffe has succeeded wildly, since—as Washingtonians and New Yorkers are experiencing with their subways—municipal and state liberals are only nominally committed to the standards that regulate transport. Never mind that traffic is something that cities need to control, and that transportation should be a public good. Ride-sharing companies—which explode traffic and undermine public transportation—can trim the balance sheets of cities by privatizing both. The choice we make should be between unchecked ride-sharing and fully funded mass transit. Instead, the success of ride-sharing means that we choose between Uber and Lyft.
What Plouffe and the ride-sharing companies understand is that, under capitalism, when markets are pitted against the state, the figure of the consumer can be invoked against the figure of the citizen. Consumption has in fact come to replace our original ideas of citizenship. As the sociologist Wolfgang Streeck has argued in his exceptional 2012 essay, “Citizens as Customers,” the government encouragement of consumer choice in the 1960s and ’70s “radiated” into the public sphere, making government seem shabby in comparison with the endlessly attractive world of consumer society. Political goods began to get judged by the same standards as commodities, and were often found wanting.
The result is that, in Streeck’s prediction, the “middle classes, who command enough purchasing power to rely on commercial rather than political means to get what they want, will lose interest in the complexities of collective preference-setting and decision-making, and find the sacrifices of individual utility required by participation in traditional politics no longer worthwhile.” The affluent, bored by goods formerly subject to collective provision, such as public transportation, cease to pay for them and support private options instead. Consumer choice then stands in for political choice. When Ohio governor John Kasich proposed last year that he would “Uber-ize” the state’s government, he was appealing to this sense that politics should more closely resemble the latest trends in consumption.
by Nikil Saval, N+1 | Read more:
Image: uncredited
Thursday, June 29, 2017
Greetings, E.T. (Please Don’t Murder Us.)
On Nov. 16, 1974, a few hundred astronomers, government officials and other dignitaries gathered in the tropical forests of Puerto Rico’s northwest interior, a four-hour drive from San Juan. The occasion was a rechristening of the Arecibo Observatory, at the time the largest radio telescope in the world. The mammoth structure — an immense concrete-and-aluminum saucer as wide as the Eiffel Tower is tall, planted implausibly inside a limestone sinkhole in the middle of a mountainous jungle — had been upgraded to ensure its ability to survive the volatile hurricane season and to increase its precision tenfold.
To celebrate the reopening, the astronomers who maintained the observatory decided to take the most sensitive device yet constructed for listening to the cosmos and transform it, briefly, into a machine for talking back. After a series of speeches, the assembled crowd sat in silence at the edge of the telescope while the public-address system blasted nearly three minutes of two-tone noise through the muggy afternoon heat. To the listeners, the pattern was indecipherable, but somehow the experience of hearing those two notes oscillating in the air moved many in the crowd to tears.
That 168 seconds of noise, now known as the Arecibo message, was the brainchild of the astronomer Frank Drake, then the director of the organization that oversaw the Arecibo facility. The broadcast marked the first time a human being had intentionally transmitted a message targeting another solar system. The engineers had translated the missive into sound, so that the assembled group would have something to experience during the transmission. But its true medium was the silent, invisible pulse of radio waves, traveling at the speed of light.
It seemed to most of the onlookers to be a hopeful act, if a largely symbolic one: a message in a bottle tossed into the sea of deep space. But within days, England’s Astronomer Royal, Martin Ryle, released a thunderous condemnation of Drake’s stunt. By alerting the cosmos to our existence, Ryle wrote, we were risking catastrophe. Arguing that ‘‘any creatures out there [might be] malevolent or hungry,’’ Ryle demanded that the International Astronomical Union denounce Drake’s message and explicitly forbid any further communications. It was irresponsible, Ryle fumed, to tinker with interstellar outreach when such gestures, however noble their intentions, might lead to the destruction of all life on earth.
Today, more than four decades later, we still do not know if Ryle’s fears were warranted, because the Arecibo message is still eons away from its intended recipient, a cluster of roughly 300,000 stars known as M13. If you find yourself in the Northern Hemisphere this summer on a clear night, locate the Hercules constellation in the sky, 21 stars that form the image of a man, arms outstretched, perhaps kneeling. Imagine hurtling 250 trillion miles toward those stars. Though you would have traveled far outside our solar system, you would only be a tiny fraction of the way to M13. But if you were somehow able to turn on a ham radio receiver and tune it to 2,380 MHz, you might catch the message in flight: a long series of rhythmic pulses, 1,679 of them to be exact, with a clear, repetitive structure that would make them immediately detectable as a product of intelligent life. (...)
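[ed. The pulse count was not arbitrary: 1,679 is a semiprime, the product of exactly two primes, 23 and 73, so a recipient can fold the bit stream into only one non-trivial rectangle (23 × 73 or its transpose), one orientation of which resolves into a crude picture. Here is a quick sketch of that arithmetic in Python; the function name is my own.]

```python
# 1,679 factors only as 23 x 73, so there are just two candidate grids
# for arranging the bits -- one of which renders as an image.
def factor_pairs(n):
    """Return every way to write n as a product of two integers > 1."""
    return [(d, n // d) for d in range(2, int(n ** 0.5) + 1) if n % d == 0]

print(factor_pairs(1679))  # [(23, 73)] -- the only non-trivial factorization
```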
Now this taciturn phase may be coming to an end, if a growing multidisciplinary group of scientists and amateur space enthusiasts have their way. A newly formed group known as METI (Messaging Extra Terrestrial Intelligence), led by the former SETI scientist Douglas Vakoch, is planning an ongoing series of messages to begin in 2018. And Milner’s Breakthrough Listen endeavor has also promised to support a ‘‘Breakthrough Message’’ companion project, including an open competition to design the messages that we will transmit to the stars. But as messaging schemes proliferate, they have been met with resistance. The intellectual descendants of Martin Ryle include luminaries like Elon Musk and Stephen Hawking, and they caution that an assumption of interstellar friendship is the wrong way to approach the question of extraterrestrial life. They argue that an advanced alien civilization might well respond to our interstellar greetings with the same graciousness that Cortés showed the Aztecs, making silence the more prudent option. (...)
Before Doug Vakoch had even filed the papers to form the METI nonprofit organization in July 2015, a dozen or so science-and-tech luminaries, including SpaceX’s Elon Musk, signed a statement categorically opposing the project, at least without extensive further discussion on a planetary scale. ‘‘Intentionally signaling other civilizations in the Milky Way Galaxy,’’ the statement argued, ‘‘raises concerns from all the people of Earth, about both the message and the consequences of contact. A worldwide scientific, political and humanitarian discussion must occur before any message is sent.’’
One signatory to that statement was the astronomer and science-fiction author David Brin, who has been carrying on a spirited but collegial series of debates with Vakoch over the wisdom of his project. ‘‘I just don’t think anybody should give our children a fait accompli based on blithe assumptions and assertions that have been untested and not subjected to critical peer review,’’ he told me over a Skype call from his home office in Southern California. ‘‘If you are going to do something that is going to change some of the fundamental observable parameters of our solar system, then how about an environmental-impact statement?’’
The anti-METI movement is predicated on a grim statistical likelihood: If we do ever manage to make contact with another intelligent life-form, then almost by definition, our new pen pals will be far more advanced than we are. The best way to understand this is to consider, on a percentage basis, just how young our own high-tech civilization actually is. We have been sending structured radio signals from Earth for only the last 100 years. If the universe were exactly 14 billion years old, then it would have taken 13,999,999,900 years for radio communication to be harnessed on our planet. The odds that our message would reach a society that had been tinkering with radio for a shorter, or even similar, period of time would be staggeringly long. Imagine another planet that deviates from our timetable by just a tenth of 1 percent: If they are more advanced than us, then they will have been using radio (and successor technologies) for 14 million years. Of course, depending on where they live in the universe, their signals might take millions of years to reach us. But even if you factor in that transmission lag, if we pick up a signal from another galaxy, we will almost certainly find ourselves in conversation with a more advanced civilization.
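[ed. The back-of-envelope arithmetic behind that claim, sketched in Python with the article's rounded numbers:]

```python
# If another civilization's technological timetable deviates from ours by
# just a tenth of one percent of the age of the universe -- and they are
# ahead of us -- their head start with radio is measured in millions of years.
AGE_OF_UNIVERSE_YEARS = 14e9  # rounded, as in the article
OUR_RADIO_AGE_YEARS = 100     # roughly a century of structured radio signals

deviation = 0.001  # a tenth of 1 percent
their_radio_age = AGE_OF_UNIVERSE_YEARS * deviation

print(f"Their head start: {their_radio_age:,.0f} years")  # 14,000,000
print(f"Versus our radio age: {their_radio_age / OUR_RADIO_AGE_YEARS:,.0f}x")  # 140,000x
```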
It is this asymmetry that has convinced so many future-minded thinkers that METI is a bad idea. The history of colonialism here on Earth weighs particularly heavy on the imaginations of the METI critics. Stephen Hawking, for instance, made this observation in a 2010 documentary series: ‘‘If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.’’ David Brin echoes the Hawking critique: ‘‘Every single case we know of a more technologically advanced culture contacting a less technologically advanced culture resulted at least in pain.’’

by Steven Johnson, NY Times Magazine | Read more:
Image: Paul Sahre
[ed. If you find this topic interesting, I'd also suggest The Three-Body Problem by Liu Cixin.]
The Botanists’ Last Stand
Steve Perlman doesn’t take Prozac, like some of the other rare-plant botanists he knows. Instead, he writes poetry.
Either way, you have to do something when a plant you’ve long known goes extinct. Let’s say for 20 years you’ve been observing a tree on a fern-covered crag thousands of feet above sea level on an island in the Pacific. Then one day you hike up to check on the plant and find it dying. You know it’s the last one of its species, and that you’re the only witness to the end of hundreds of thousands of years of evolution, the snuffing out of a line of completely unique genetic material. You might have to sit down and write a poem. Or at least bring a bit of the dead plant to a bar and raise a beer to its life. (Perlman has done both.) You might even need an antidepressant.
“I’ve already witnessed about 20 species go extinct in the wild,” Perlman says. “It can be like you’re dealing with your friends or your family, and then they die.”
Perlman tells me this as we drive up a winding road on the northwestern edge of Kauai, the geologically oldest Hawaiian island. Perlman is 69, with a sturdy build and white hair. That has been enough to last him through 45 years and counting on the knife’s edge of extreme botany.
The stakes are always high: As the top botanist at Hawaii’s Plant Extinction Prevention Program (PEPP), Perlman deals exclusively in plants with 50 or fewer individuals left—in many cases far fewer, maybe two or three. Of the 238 species currently on that list, 82 are on Kauai; Perlman literally hangs off cliffs and jumps from helicopters to reach them.
Without him, rare Hawaiian plants die out forever. With him, they at least have a shot. Though now, due to forces beyond Perlman’s control, even that slim hope of survival is in jeopardy. Looming budget cuts threaten to make this the final chapter not only in the history of many native Hawaiian species, but in the program designed to keep them alive.
The silver lining: even if a species does go extinct in the wild, chances are Perlman has already collected enough seeds and genetic material before the last plant disappeared to grow others in a greenhouse. Extra seeds are shipped to a seed bank, where they sit, dehydrated and chilled, awaiting a more hospitable future. There may not be a viable habitat for that plant now, but what about in 50 years? Or 150? “Part of it is saving all that genetic information,” he says. “If your house is on fire, you run in and grab the kid.”
Most people probably wouldn’t speak about obscure threatened plants with this much regard. But we don’t necessarily know what we’re losing when we let a plant species die, Perlman says. Could it have been a source of medicine? Could it be supporting a food chain that will come tumbling down in its absence? Our foresight on this kind of thing has been abominable so far; one only has to look at what happened when wolves were driven out of Yellowstone National Park: the newly predator-free elk population boomed, ate every plant and baby tree in sight, and starved bears of their berry supply, birds of their nest sites, and bees of the flowers they feed on.
Everything was beautiful, and nothing hurt
Every native plant on Kauai is an insane stroke of luck and chance. Each species arrived on the island as a single seed floating at sea or flying in a bird’s belly from thousands of miles away—2,000 miles of open ocean sit between Kauai and the nearest continent. “We think…probably one or two seeds made it every 1,000 years,” says botanist Ken Wood, Perlman’s longtime field partner.
Once a seed took root, the plant would evolve into a completely new species, or several, all of which came to be “endemic,” or found exclusively on the island. Any defenses the plant’s predecessors may have had—thorns, or poison, or repellent scents—were completely dropped. No large mammals or other potential predators made the journey from mainland to the remote island chain. From the plant’s perspective, there was no reason to spend energy on defenses when there were no predators to fend off. So stinging nettles no longer stung. Mint lost its mint oil. Scientists ominously refer to this process as species becoming “naive.”
The same was true for animals like birds and insects when they began to arrive. Famously, when a species of duck made it to the Hawaiian islands, it evolved to drop the concept of flying altogether. Its wings became little nubs. After all, there were no large mammals around to fly away from. The bird grew very large; “gigantism” is an evolutionary phenomenon common to islands. Predictably, this huge, flightless duck, known as the “moa-nalo,” went extinct once humans showed up and likely found it an easy-to-catch source of meat.
Fatal naiveté
When plants are allowed to evolve without fear, they get really, really specific. Take the Hibiscadelphus, for example. Found only in Hawaii, members of this genus of plant have flowers custom-shaped to fit the hooked beak of the honeycreeper, the specific bird that pollinates them. “They’re extremely rare. There were only about seven species described ever, and six were already extinct when I found a new one,” says Perlman. He published the discovery in 2014—it was his 50th new plant species discovery.
Almost 15% of the plants of Hawaii evolved to have separate male and female populations—a very high percentage, says Wood, compared to mainland plants. Under normal circumstances, that trait is good for island plants: it forces them to cross-pollinate, keeping the gene pool relatively diverse even if the population is small. But by “small,” evolutionary forces were probably thinking at least 200 individuals—not four or five. When you can count the number of individual plants on one hand, it’s almost certain that the few remaining males and females won’t be anywhere near each other. In those cases, Perlman and Wood painstakingly gather pollen from the males and bring it to the females.
They have to time this just right—or at least try. There is no perfect math to predict what day an individual plant will decide to flower. “And often you need to dangle off helicopters to get to them,” Wood adds. So missing the mark by a day or two and arriving at a flower that is still closed can mean having leapt from a helicopter, rappelled off a cliff, and possibly camped for a day or two for naught.
“That’s what Ken doesn’t like—he likes to go in and go out,” Perlman tells me later. He proudly points to a photo on his laptop screen. It shows him collecting seeds from the last-known member of the endemic fan palm species Pritchardia munroi. The palm was clinging to a slope 2,000 feet up in the air on the tiny Hawaiian island of Molokai. “I had to go there three times to get the seed when it’s ripe,” Perlman says.
by Zoë Schlanger, Quartz | Read more:
Image: Steve Perlman