Thursday, October 8, 2015

The Sky's Gone Dark

Today, the commercial exploitation of outer space appears to be a growth area. Barely a week goes by without a satellite launch somewhere on the planet. SpaceX has a gigantic order book and a contract to ferry astronauts to the ISS, probably starting in 2018; United Launch Alliance have a similar manned space taxi under development, and there are multiple competing projects under way to fill low earth orbit with constellations of hundreds of small data relay satellites to bring internet connectivity to the entire planet. For the first time since the 1960s it's beginning to look as if human activity beyond low earth orbit is a distinct possibility within the next decade.

But there's a fly in the ointment.

Kessler Syndrome, or collisional cascading, is a nightmare scenario for space activity. First described by NASA scientist Donald Kessler in 1978, it posits that above a certain critical density, orbiting debris shed by satellites and launch vehicles will begin to strike and shatter other satellites, producing still more debris, so that the probability of any given satellite being hit keeps rising: a chain reaction that effectively renders access to low earth orbit unacceptably hazardous.

This isn't just fantasy. There are an estimated 300,000 pieces of debris already in orbit; a satellite is destroyed every year by an impact event. Even a fleck of shed paint a tenth of a millimeter across carries as much kinetic energy as a rifle bullet when it's traveling at orbital velocity, and the majority of this crud is clustered in low orbit, with a secondary belt of bits in geosynchronous orbit as well. The ISS carries patch kits in case of a micro-particle impact and periodically has to expend fuel to dodge dead satellites drifting into its orbit; on occasion the US space shuttles suffered windscreen impacts that necessitated ground repairs.

If a Kessler cascade erupts in low earth orbit, launching new satellites or manned spacecraft will become very hazardous, equivalent to running across a field under beaten fire from a machine gun with an infinite ammunition supply. Sooner or later you'll be hit. And the debris stays in orbit for a very long time, typically years to decades (centuries or millennia for the particles in higher orbits). Solar flares might mitigate the worst of the effect by causing the earth's ionosphere to bulge—it was added drag resulting from a solar event that took down Skylab prematurely in the 1970s—but it could still deny access to low orbit for long enough to kill the viability of any commercial launch business. And then there's the nightmare scenario: a Kessler cascade in geosynchronous orbit. The crud up there will take centuries to disperse, mostly due to radiation degradation and the solar wind gradually blowing it into higher orbits.

So here's my question.

Postulate a runaway Kessler syndrome kicks off around 2030, at a point when there are thousands of small comsats (and a couple of big space stations), ranging from very low orbits to a couple of thousand kilometers up. Human access to space is completely restricted; any launch at all becomes a game of Russian roulette. (You can't carry enough armor plating to protect a manned capsule against a Kessler cascade—larger bits of debris, and by "large" I mean with masses in the 0.1-10 gram range—carry as much kinetic energy as an armor-piercing anti-tank projectile.) Unmanned satellites are possible, but risk adding to the cascade. So basically we completely lose access to orbit.
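
(Not from Stross's post, just a rough sanity check: the minimal Python sketch below plugs debris masses in that 0.1-10 gram range into the classical kinetic-energy formula. The ~10 km/s closing speed and the rifle-bullet reference energy are assumed round numbers for illustration.)

```python
# Back-of-the-envelope kinetic energy for small orbital debris.
# All constants here are illustrative assumptions, not figures from the post.

def kinetic_energy_joules(mass_kg: float, speed_m_s: float) -> float:
    """Classical kinetic energy: KE = 0.5 * m * v^2."""
    return 0.5 * mass_kg * speed_m_s ** 2

CLOSING_SPEED_M_S = 10_000.0   # assumed typical LEO collision speed (~10 km/s)
RIFLE_BULLET_J = 3_000.0       # rough muzzle energy of a full-power rifle round

for mass_grams in (0.1, 1.0, 10.0):   # the "large" debris range from the post
    ke = kinetic_energy_joules(mass_grams / 1000.0, CLOSING_SPEED_M_S)
    print(f"{mass_grams:5.1f} g at 10 km/s ~ {ke:10,.0f} J "
          f"(~{ke / RIFLE_BULLET_J:6.1f}x a rifle bullet)")
```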

There are some proposals to mitigate the risk of Kessler Syndrome by using microsats to recover and deorbit larger bits of debris, and lasers to evaporate smaller particles, but let's ignore these for now: whether or not they work, they don't work unless we start using them before Kessler syndrome kicks in.

So, suppose that with the exception of already-on-orbit GPS clusters and high altitude comsats, we can't launch anything else for a century. What effect does it have on society and geopolitics when the sky goes dark?

by Charlie Stross, Charlie's Diary |  Read more:
Image: via:

The Paradox of the First Black President


There is a photo by Pete Souza, the White House’s canny and peripatetic photographer, that surfaces from time to time online. The setting is Marine One, and it features a modest cast of five. Valerie Jarrett, dressed in a suit of blazing pink, is staring at her cell phone. Barack Obama, twisted around in his seat, is listening to a conversation between his then–body guy, Reggie Love, and Patrick Gaspard, one of his then–top advisers. Obama’s former deputy press secretary, Bill Burton, is looking on too, with just the mildest hint of a grin on his face.

In many ways, it’s a banal shot — just another photo for the White House Instagram feed, showing the president and his aides busily attending to matters of state. Stare at it a second longer, though, and a subtle distinction comes into focus: Everyone onboard is black. “We joked that it was Soul Plane,” says Burton. “And we’ve often joked about it since — that it was the first time in history only black people were on that helicopter.”

Souza snapped that shot on August 9, 2010, but it didn’t make any prominent appearances in the mainstream press until mid-2012, when it appeared in The New York Times Magazine. The following summer, July 2013, the president had a group of civil-rights leaders come visit him in the Roosevelt Room of the White House, and the optics, as they like to say in politics, were similar: An all-star cast of minorities (African-American and Latino this time) gathered in a historic place to which the barriers to entry were once insuperably high.

But this was not a meeting the participants laughed about afterward. When Obama opened up the floor, everyone spoke about what they’d witnessed in the 2012 election: how states that limited voter-registration drives and early-voting initiatives had left many African-Americans off the rolls; how strict new laws concerning IDs had prevented many minorities from voting and created hours-long lines at the polls. The answer was clear: legislation to restore the Voting Rights Act. The Supreme Court had just overturned a key provision of the landmark civil-rights legislation the month before.

But Obama’s response was equally clear: Nothing could be done. Not in this political climate, not under these circumstances. Congress would never allow it.

The group was stunned. As they’d stumped for Obama, one of the many talking points they’d used to turn out the black vote was the threat of disenfranchisement, the possibility that the Voting Rights Act was in jeopardy. Yet here was Obama telling them that a bill addressing this vital issue didn’t stand a chance.

These proximal events — the publication of a historic photo in a major news outlet, a demoralizing discussion about the prospects of amending our voting laws — may seem unrelated. But to many who’ve watched this White House for the last six and three-quarter years, particularly with an eye toward race, the two events are finely intertwined. They would more likely say: One cannot have that photo without a massive reaction to that photo. In a country whose basic genetic blueprint includes the same crooked mutations that made slavery and Jim Crow possible, it is not possible to have a black president surrounded by black aides on Marine One without paying a price. And the price that Obama has had to pay — and, more important, that African-Americans have had to pay — is one of caution, moderation, and at times compromised policies: The first black president could do only so much, and say only so much, on behalf of other African-Americans. That is the bittersweet irony of the first black presidency.

But now, as Obama’s presidency draws to a close, African-American intellectuals and civil-rights leaders have grown increasingly vocal in their discontents. They frame them, for the most part, with love and respect. But current events have broken their hearts and stretched their patience. A proliferation of videos documenting the murders of unarmed black men and women — by the very people charged with their safety — has given rise to a whole movement defined by three words and a hashtag: #BlackLivesMatter.

“That’s one of the fundamental paradoxes of Obama’s presidency — that we have the Black Lives Matter movement under a black president,” says Fredrick Harris, a political scientist at Columbia University. “Your man is in office, and you have this whole movement around criminal-justice reform asserting black people’s humanity?”

Obama is hardly uncomprehending of these concerns. One can hear it in his rhetoric on race these days, which has become much more lyrical, personal, explicit. “Amazing Grace,” he sang in Charleston. “Racism, we are not cured of,” he told Marc Maron, “and it’s not just a matter of it not being polite to say ‘n-----’ in public,” using the full word. This summer, Obama visited a prison, the first president to do so, and commuted the sentences of 46 nonviolent drug offenders. Last year, he started the My Brother’s Keeper initiative, which zeros in on programs within federal agencies that can help young men of color. He is now trying, with the improbable cooperation of congressional Republicans, to pass a bill on criminal-justice reform.

Still, the question many African-American leaders are now asking is what his efforts will amount to, and whether they’re sufficient. At a panel about African-American millennials in August, the journalist Charlayne Hunter-Gault made note of Obama’s recent emphasis on race matters and asked the group if it was “too little, too late.” Their responses, not surprisingly, were mixed. At the Aspen Ideas Festival this summer, Jarrett fielded a similar question from Walter Isaacson, the writer and head of the Aspen Institute. He noted that some Americans thought Obama publicly engaged with issues of race only “halfway.” Her reply was swift, pointed, and poignant. “I think you have to ask yourself: Why is that all on him?”

by Jennifer Senior, NY Magazine |  Read more:
Image: Pete Souza

Adventures in the Science of the Superorganism

It is not only possible, it has in fact happened that a woman who vaginally conceived a child, then vaginally delivered her, had Protective Services threaten to take the child when a maternity test showed she was not, in fact, the mother. Nor was she the mother of her second child, genetically. Or her third, whom she was still carrying throughout the dispute with her estranged boyfriend — the man who, those same tests proved, was definitively the father. Only later did Lydia Fairchild discover that the true mother of all three of her children was her twin — if twin is really the word for one human embryo more or less swallowed by another before birth. The eggs that produced those babies had been with Fairchild her whole life, but genetically they belonged to an unborn sister, unknown to her and even her parents, living on in small parts inside her — a phenomenon that poetic scientists have called “parasitic” or “vanishing” twins. These days, they tend to prefer “chimerism,” after the mythic beast assembled, like Frankenstein’s monster, from multiple animals. But, man, isn’t that even creepier?

Don’t relax — it’s not just twins. In a new paper, “Humans As Superorganisms,” Peter Kramer and Paola Bressan of the University of Padua describe a typical human body as a teeming mass of what they call “selfish entities.” Picture a tree warped by fungus, wrapped with vines, dotted at the base with mushrooms and flowers, and marked, midway up, by what the tree thought the whole time was just a knot but turns out to be a parasitic twin. This is the human superorganism — not the tree, not the tangled mess of things doing battle with it, but the whole chunk of forest — and Kramer and Bressan would like to place it at the very center of the way we think about human behavior. They are psychologists, and their paper is a call to arms to their fellow shrinks, exhorting them to take seriously as a possible cause of an enormous buffet of behavioral phenomena — from quotidian quirks, to maddeningly opaque disorders like autism, to schizophrenia — the sheer volume and weird diversity of completely crazy alien shit going on in just about all of our bodies, just about all the time.

At least one part of this superorganism theory is not all that unfamiliar, especially to anyone who remembers recent articles by Michael Pollan and others about what is often called “the brain in your gut.” That part: that our stomachs are, actually, zoos. In fact, they’re not really our stomachs. Principally, they belong to the hundred trillion bacteria enticed by evolution into your chutes-and-ladders intestinal tract, then enlisted to eat your food for you. The weirder thing is that evolution also put hundreds of millions of neurons there, which means there’s a lot of trouble to be caused by those 160 or more species of bacteria (yes, full species). And the behavioral effects are pretty startling. Take a mouse, evacuate his intestines, and repopulate them with the microbes of another mouse, and he’ll act like the other mouse — adventurous mice become timid. In humans, what is delicately called “gut flora” affects not just obesity but also anxiety, and some think it plays a role in disorders as far-ranging as MS and Parkinson’s. What role exactly? Who knows? Though there have been some attempts to treat autism with yogurt.

Okay, so, the gut is weird. But what if you lived in the gut? What if you were the gut? Kramer and Bressan want us to stop looking at our stomachs like we’re hosts to some messy guests, or homeowners too disgusted by a particular closet to ever go poking around in it, because, they write, the human superorganism isn’t something to observe from the privileged perch of the self. Instead, they suggest, it envelops the self — the environment in which and against which genes give rise to who you are, an internal environment populated nevertheless by an entire orchestra of aliens, some of them fiddling away in the brain, and each with its own evolutionary interests at stake.

by David Wallace-Wells, NY Magazine | Read more:
Image: Bobby Doherty

Egg McNothin'

The greatest luxury is the one we cannot have—or at least, the one we cannot have very often. This is the definition of luxury, really, and not just expensive, unreachable luxuries, but also cheaper, smaller ones. Thanksgiving turkey and dressing, a decadence limited to one day a year. And also, breakfast—real breakfast, with grains and eggs and meat and starch. Even at a place like McDonald’s where, as of this week, a selection of the struggling fast food figurehead’s breakfast menu, including the venerable Egg McMuffin, is available all day.

Let me say something heretical: The Egg McMuffin is not that great, actually. Warm but slightly wet and gooey, sloppily constructed, oozing with quasi-cheese, the slap of Canadian bacon failing to yield to incisors. But what is great is the idea of an egg McMuffin. It’s an improbable domestication of Eggs Benedict, condensing that civil dish of lazy brunches into the harried hand of the commuter or the road-tripper.

For years, more Americans came into contact with the Egg McMuffin as an idea than as a reality. Only occasionally, when dawn’s rosy fingers intersected with the golden arches: an early Interstate departure, or a next-morning drive-of-shame lamentation, or a pre-planned indulgence before a cross-town optometrist appointment.

Yes, sure, I realize that McDonald’s has breakfast regulars, and that breakfast is a meal whose delights are unfairly sequestered into the brisk, single-digit hours. But equally common is the McDonald’s near-miss breakfast. Hungover, lurching through the drive-thru at 10:25 a.m. in search of cheap proteins; or skipping through glass doors with kids in tow, having succumbed to their big eyes; or meandering in to kill some time after arriving early for a client meeting in a strange part of town—only to discover that the lunch menus had cruelly flipped into place already.

Even when you wanted one, McDonald’s breakfast was withheld more often than it was supplied. Perhaps it was meant to be. Perhaps the dream of the Egg McMuffin is its truest payload, rather than its shaped meat and egg between English-muffin halves.

Writing in The Atlantic upon the announcement of McDonald’s all-day breakfast, Adam Chandler lamented the violation of well-established ritual. The 24/7 work world turns “morning” into “that time after whenever you woke up,” and all-day breakfast at McDonald’s only spreads a new layer of oil atop an already greasy period of precarity and overwork. “In demanding eternal breakfast,” Chandler mourns, “America is reverting to its adolescence.”

Perhaps so. But also, America is giving up McDonald’s breakfast as an indulgence meant mostly to be missed rather than savored. The Egg McMuffin and its brethren offered different sustenance—spiritual sustenance. Under the fluorescent lights inside its boxy chapel, one discovered, and not just endured but enjoyed, the sensation of inaccessibility. Light door closing on its pneumatic hinge, coat unzipped, cold hands rubbing together, glasses fogging from the temperature change, accidental early birds enter McDonald’s for the anticipation itself. It might be on the way to or from a long drive or a dead-end job or a screaming child or a fouled-up marriage, but a dip into the quick-serve cathedral affirms that the universe is ultimately indifferent: “I’m sorry, sir, we’ve just stopped serving breakfast.”

by Ian Bogost, The Atlantic |  Read more:
Image: blu_pineappl3 / Flickr

Wednesday, October 7, 2015


Andrew Nicholl, Poppies and Wild Flowers on the Northern Irish Coastline
via:

Life after Death

Life after Death?

Yes. Can I help you?

Well, you know . . . I saw your ad in that travel magazine.

AFAR? or Destinations?

I don’t remember. It was at the doctor’s office. Does that matter?

Just wondering. It wouldn’t have been The New Yorker would it? One of those little bitty ads in the back?

I look at the cartoons but I never read those back ads.

You should. They can be pretty weird. Weird as in interesting.

Well, it was a travel magazine. Which is what I said. Which is why I called.

Right. That would be AFAR or Destinations. There’s a discount deal with AFAR, but only if you are a subscriber.

That doesn’t apply to me. I just saw it at the doctor’s office.

There’s a website, too. You can Google it. Lifeafterdeath.org, all spelled out with no periods. It has all the information.

That’s where I got this number, from the website. I wanted to talk to a live person. It’s kind of a thing I have.

That’s ironic, sort of, if you think about it.

What do you mean?

Never mind. I can tell you everything you need to know over the phone. It will be my privilege. Can I start by asking your name?

What does that have to do with anything? I just want to ask a few questions.

It’s all strictly confidential, if that’s what’s worrying you.

I called to get information, not give information.

Hey, I understand. That’s perfectly fine. I’ll be glad to help. What can I tell you about Life after Death?

Well, that’s it. Life after Death. Is this for real? How does it work? What does it cost?

It’s for real all right. First you’re dead and then you’re not. It’s quite a ride while it lasts.

What do you mean, while it lasts?

It’s not Eternal. It’s important that you understand that. It’s in the ad I think.

It did say Not Eternal but it didn’t say it was temporary.

Temporary is not exactly the word for it; just not permanent. It lasts about three months, give or take. It can seem longer.

Ninety days. And then what?

Then you are dead again. Life after Death is not permanent, that’s what Not Eternal means. It’s not affiliated with any religion. And I can assure you, it’s not hocus pocus. It’s for real.

I know what Not Eternal means. So what does it cost? The ad I saw was careful not to mention that.

$99,000.

Ninety-nine thousand dollars?

It’s not for everyone. That’s why it’s only advertised in certain magazines.

Which anybody can pick up at the doctor’s office.

What do you mean?

Never mind. And what do you get, what does one get, for one’s hundred grand.

Ninety-nine. Life after Death. First you’re dead and then you’re not. It’s quite a ride while it lasts.

How long are you dead?

Not long. You don’t need a death certificate or anything. It’s all prearranged, and prepaid of course. The service kicks in within hours after you’re gone.

Gone where? What’s going on there? Where is it?

It’s not exactly a where.

Then how can I be there if there is no where?

The where is not the thing. Think of it as adventure travel. Have you ever been to Antarctica?

That’s none of your business. But yes, in fact. Once. Year before last.

And did you get to the South Pole? Did you hug a penguin? Did you trek to the top of a mighty glacier? Probably not.

It was on a cruise ship. You’re not allowed to go ashore. What’s your point?

The thrill was just being there, right? Even just standing at the rail of the ship.

There was a helicopter trip included.

That too. You were experiencing it. That was the adventure, the experience.

by Terry Bisson, The Baffler |  Read more:
Image: Amanda Konishi

Tuesday, October 6, 2015


Julien Tatham, People at bus stop
via:

RBF


[ed. I'm not mad. That's just my RBF.]
Image: New Yorker

via:
[ed. Costco run.]

Wu Guanzhong, City Overlooks the Yangtze River, 1974
via:

The Price Is Right

What advertising does to TV.

Ever since the finale of “Mad Men,” I’ve been meditating on its audacious last image. Don Draper, sitting cross-legged and purring “Ommmm,” is achieving inner peace at an Esalen-like retreat. He’s as handsome as ever, in khakis and a crisp white shirt. A bell rings, and a grin widens across his face. Then, as if cutting to a sponsor, we move to the iconic Coke ad from 1971—a green hillside covered with a racially diverse chorus of young people, trilling, in harmony, “I’d like to teach the world to sing.” Don Draper, recently suicidal, has invented the world’s greatest ad. He’s back, baby.

The scene triggered a debate online. From one perspective, the image looked cynical: the viewer is tricked into thinking that Draper has achieved Nirvana, only to be slapped with the source of his smile. It’s the grin of an adman who has figured out how to use enlightenment to peddle sugar water, co-opting the counterculture as a brand. Yet, from another angle, the scene looked idealistic. Draper has indeed had a spiritual revelation, one that he’s expressing in a beautiful way—through advertising, his great gift. The night the episode aired, it struck me as a dark joke. But, at a discussion a couple of days later, at the New York Public Library, Matthew Weiner, the show’s creator, told the novelist A. M. Homes that viewers should see the hilltop ad as “very pure,” the product of “an enlightened state.” To regard it otherwise, he warned, was itself the symptom of a poisonous mind-set.

The question of how television fits together with advertising—and whether we should resist that relationship or embrace it—has haunted the medium since its origins. Advertising is TV’s original sin. When people called TV shows garbage, which they did all the time, until recently, commercialism was at the heart of the complaint. Even great TV could never be good art, because it was tainted by definition. It was there to sell.

That was the argument made by George W. S. Trow in this magazine, in a feverish manifesto called “In the Context of No Context.” That essay, which ran in 1980, became a sensation, as coruscating denunciations of modernity so often do. In television, “the trivial is raised up to power,” Trow wrote. “The powerful is lowered toward the trivial.” Driven by “demography”—that is, by the corrupting force of money and ratings—television treats those who consume it like sales targets, encouraging them to view themselves that way. In one of several sections titled “Celebrities,” he writes, “The most successful celebrities are products. Consider the real role in American life of Coca-Cola. Is any man as well-loved as this soft drink is?”

Much of Trow’s essay, which runs to more than a hundred pages, makes little sense. It is written in the style of oracular poetry, full of elegant repetitions, elegant repetitions that induce a hypnotic effect, elegant repetitions that suggest authority through their wonderful numbing rhythms, but which contain few facts. It’s élitism in the guise of hipness. It is more nostalgic than “Mad Men” ever was for the era when Wasp men in hats ran New York. It’s a screed against TV written at the medium’s low point—after the energy of the sitcoms of the seventies had faded but before the innovations of the nineties—and it paints TV fans as brainwashed dummies.

And yet there’s something in Trow’s manifesto that I find myself craving these days: that rude resistance to being sold to, the insistence that there is, after all, such a thing as selling out. Those of us who love TV have won the war. The best scripted shows are regarded as significant art—debated, revered, denounced. TV showrunners are embraced as heroes and role models, even philosophers. At the same time, television’s business model is in chaos, splintered and re-forming itself, struggling with its own history. Making television has always meant bending to the money—and TV history has taught us to be cool with any compromise. But sometimes we’re knowing about things that we don’t know much about at all.

by Emily Nussbaum, New Yorker |  Read more:
Image: Michael Kirkham

Sex and Suffering: The Tragic Life of the Courtesan in Japan's Floating World


It’s difficult to get a window into the world of Edo-Period Japanese prostitutes without the gauzy romantic filter of the male gaze. The artworks in the new San Francisco Asian Art Museum exhibition, “Seduction: Japan’s Floating World,” were made by men for men, the patrons of the Yoshiwara pleasure district outside of Edo, which is now known as Tokyo. Every little detail of Yoshiwara—from the décor and fashion, to the delicacies served at teahouses, to the talents of courtesans, both sexual and intellectual—was engineered to sate a warlord’s every whim.

We’re left with the client-commissioned pretty-girl scroll paintings by masters like Hishikawa Moronobu and Katsukawa Shunshō, as well as woodblock prints and guidebooks by commercial artists meant to lure repeat visitors through the red-light district gates. These often lush and colorful artworks are rife with romantic longing, from the images of interchangeable beauties with inscrutable expressions, to the layers of richly patterned textiles they wore, and the highly symbolic haiku poetry written about them. The showstopper of the exhibition is Moronobu’s nearly 58-foot-long handscroll painting “A Visit to the Yoshiwara,” which takes viewers on a tour of the pleasure district from the street vendors and the food being prepared to the high-ranking courtesans on parade and a couple cuddling under the covers in a teahouse.

The Yoshiwara pleasure district was just part of what the Japanese referred to as “ukiyo” or “the floating world,” which also included the Kabuki theaters of Edo. Originally, the Buddhist term “ukiyo” referred to the sorrow and grief caused by desire, which was seen as an impediment to enlightenment.

“In the Buddhist context, ‘ukiyo’ was written with characters that meant ‘suffering world,’ which is the concept that desire leads to suffering and that’s the root of all the problems in the world,” explains Laura W. Allen, the curator of Japanese art at the Asian Art Museum who originated “Seduction.” “In the 17th century, that term was turned on its head and the term ‘ukiyo’ was written with new characters to mean ‘floating world.’ The concept of the floating world was ignoring the problems that might have existed in a very strictly regulated society and abandoning yourself, bobbing along on the current of pleasure. Then it became associated with two particular sites in Edo, one of which was the Kabuki theater district, the other the Yoshiwara pleasure quarter. The art of the floating worlds ‘ukiyo-e,’ which means ‘floating world pictures,’ usually depicts those two subjects.”

But, of course, by and large, this free-floating sensation belonged to men. Allen suggests that we, as viewers, resist indulging in the fantasies of Yoshiwara prostitutes presented in the artworks, and instead, consider the real lives of the women portrayed. Unfortunately, no true records of the Edo-Period prostitutes’ personal thoughts and experiences exist—and with good reason. Publicizing the dark side of the pleasure district would have been bad for business.

“Don’t take these paintings at face value,” Allen says. “It’s easy to say, ‘Oh, yes, it’s a picture of a beautiful woman, wearing beautiful clothing.’ But it’s not a photograph. It’s some artist’s rendition, made to promote this particular world, which was driven by economics. The profiteers urged the production of more paintings, which continued to feed the frenzy for the Yoshiwara.

“The artwork is very much glamorized and idealized,” she continues. “I haven’t been to 17th-century Japan so I don’t know what it was actually like, and the women didn’t write about it, so we don’t have their firsthand accounts. To imagine it from a woman’s perspective, it must have been a very harsh reality. There’s been some modern scholarship that promotes the idea that the women working as prostitutes had an economic power that they might not have otherwise had. But I think the day-to-day reality of living in the Yoshiwara could not have been pleasant.”

For one thing, most of the women involved didn’t have a choice about their occupation. Born into impoverished farming or fishing villages, they were sold to brothels by desperate parents around the ages of 7 or 8. This tradition was rationalized by Confucian ideals that allowed the children to work out of a duty to their parents, who usually brokered 10-year contracts with the brothel owners that their girls would have to work off. The little girls would do daily chores at the brothels and tended to their “sister” courtesans, cleaning and delivering messages. In those early years, they’d learn the tricks of the trade, how to speak using manipulative language, to write “love letters,” and to fake tears with a bit of alum hidden in their collars.

If a child attendant proved she was gifted by age 11 or 12, she would be chosen for elite courtesan training, where she would learn etiquette and refined arts from masters, including how to play flute or a three-stringed instrument called a samisen, to sing, to paint, to write haiku, to write in calligraphy, to dance, to perform a tea ceremony, and how to play games like go, backgammon, and kickball. She would be well-read and literate in order to engage in stimulating conversation. While these are pleasurable activities and such talents would be a source of pride, these women weren’t encouraged to pursue them for their own fulfillment, but to make themselves more attractive to men.

“They would be trained in the very polite, cultural accomplishments of the type that aristocratic women would have,” Allen says. “The idea was that they were comparable to the wife of a daimyo [feudal lord] or a high-ranking samurai [warrior] in terms of their level of accomplishment. The elite courtesans were supposed to know all of the lady-like skills, and their skill level was keyed to how much space they would have in a brothel and how lavish their clothing was. It was a very carefully calibrated hierarchy.”

by Lisa Hix, Collector's Weekly |  Read more:
Image: Katsukawa Shunshō, Secret Games in the Spring Palace

Monday, October 5, 2015


Paris's 'Day Without Cars'
Image: Tom Craig/Demotix/Corbis

The Reign of Recycling

If you live in the United States, you probably do some form of recycling. It’s likely that you separate paper from plastic and glass and metal. You rinse the bottles and cans, and you might put food scraps in a container destined for a composting facility. As you sort everything into the right bins, you probably assume that recycling is helping your community and protecting the environment. But is it? Are you in fact wasting your time?

In 1996, I wrote a long article for The New York Times Magazine arguing that the recycling process as we carried it out was wasteful. I presented plenty of evidence that recycling was costly and ineffectual, but its defenders said that it was unfair to rush to judgment. Noting that the modern recycling movement had begun just a few years earlier, they predicted it would flourish as the industry matured and the public learned how to recycle properly.

So, what’s happened since then? While it’s true that the recycling message has reached more people than ever, when it comes to the bottom line, both economically and environmentally, not much has changed at all.

Despite decades of exhortations and mandates, it’s still typically more expensive for municipalities to recycle household waste than to send it to a landfill. Prices for recyclable materials have plummeted because of lower oil prices and reduced demand for them overseas. The slump has forced some recycling companies to shut plants and cancel plans for new technologies. The mood is so gloomy that one industry veteran tried to cheer up her colleagues this summer with an article in a trade journal titled, “Recycling Is Not Dead!”

While politicians set higher and higher goals, the national rate of recycling has stagnated in recent years. Yes, it’s popular in affluent neighborhoods like Park Slope in Brooklyn and in cities like San Francisco, but residents of the Bronx and Houston don’t have the same fervor for sorting garbage in their spare time.

The future for recycling looks even worse. As cities move beyond recycling paper and metals, and into glass, food scraps and assorted plastics, the costs rise sharply while the environmental benefits decline and sometimes vanish. “If you believe recycling is good for the planet and that we need to do more of it, then there’s a crisis to confront,” says David P. Steiner, the chief executive officer of Waste Management, the largest recycler of household trash in the United States. “Trying to turn garbage into gold costs a lot more than expected. We need to ask ourselves: What is the goal here?”

Recycling has been relentlessly promoted as a goal in and of itself: an unalloyed public good and private virtue that is indoctrinated in students from kindergarten through college. As a result, otherwise well-informed and educated people have no idea of the relative costs and benefits.

by John Tierney, NY Times |  Read more:
Image: Santtu Mustonen

A Country Is Not a Company

College students who plan to go into business often major in economics, but few believe that they will end up using what they hear in the lecture hall. Those students understand a fundamental truth: What they learn in economics courses won’t help them run a business.

The converse is also true: What people learn from running a business won’t help them formulate economic policy. A country is not a big corporation. The habits of mind that make a great business leader are not, in general, those that make a great economic analyst; an executive who has made $1 billion is rarely the right person to turn to for advice about a $6 trillion economy.

Why should that be pointed out? After all, neither businesspeople nor economists are usually very good poets, but so what? Yet many people (not least successful business executives themselves) believe that someone who has made a personal fortune will know how to make an entire nation more prosperous. In fact, his or her advice is often disastrously misguided.

I am not claiming that business-people are stupid or that economists are particularly smart. On the contrary, if the 100 top U.S. business executives got together with the 100 leading economists, the least impressive of the former group would probably outshine the most impressive of the latter. My point is that the style of thinking necessary for economic analysis is very different from that which leads to success in business. By understanding that difference, we can begin to understand what it means to do good economic analysis and perhaps even help some businesspeople become the great economists they surely have the intellect to be.

Let me begin with two examples of economic issues that I have found business executives generally do not understand: first, the relationship between exports and job creation, and, second, the relationship between foreign investment and trade balances. Both issues involve international trade, partly because it is the area I know best but also because it is an area in which businesspeople seem particularly inclined to make false analogies between countries and corporations.

Exports and Jobs

Business executives consistently misunderstand two things about the relationship between international trade and domestic job creation. First, since most U.S. business-people support free trade, they generally agree that expanded world trade is good for world employment. Specifically, they believe that free trade agreements such as the recently concluded General Agreement on Tariffs and Trade are good largely because they mean more jobs around the world. Second, businesspeople tend to believe that countries compete for those jobs. The more the United States exports, the thinking goes, the more people we will employ, and the more we import, the fewer jobs will be available. According to that view, the United States must not only have free trade but also be sufficiently competitive to get a large proportion of the jobs that free trade creates.

Do those propositions sound reasonable? Of course they do. This sort of rhetoric dominated the last U.S. presidential election and will likely be heard again in the upcoming race. However, economists in general do not believe that free trade creates more jobs worldwide (or that its benefits should be measured in terms of job creation) or that countries that are highly successful exporters will have lower unemployment than those that run trade deficits.

Why don’t economists subscribe to what sounds like common sense to businesspeople? The idea that free trade means more global jobs seems obvious: More trade means more exports and therefore more export-related jobs. But there is a problem with that argument. Because one country’s exports are another country’s imports, every dollar of export sales is, as a matter of sheer mathematical necessity, matched by a dollar of spending shifted from some country’s domestic goods to imports. Unless there is some reason to think that free trade will increase total world spending—which is not a necessary outcome—overall world demand will not change.

Moreover, beyond this indisputable point of arithmetic lies the question of what limits the overall number of jobs available. Is it simply a matter of insufficient demand for goods? Surely not, except in the very short run. It is, after all, easy to increase demand. The Federal Reserve can print as much money as it likes, and it has repeatedly demonstrated its ability to create an economic boom when it wants to. Why, then, doesn’t the Fed try to keep the economy booming all the time? Because it believes, with good reason, that if it were to do so—if it were to create too many jobs—the result would be unacceptable and accelerating inflation. In other words, the constraint on the number of jobs in the United States is not the U.S. economy’s ability to generate demand, from exports or any other source, but the level of unemployment that the Fed thinks the economy needs in order to keep inflation under control.

That is not an abstract point. During 1994, the Fed raised interest rates seven times and made no secret of the fact that it was doing so to cool off an economic boom that it feared would create too many jobs, overheat the economy, and lead to inflation. Consider what that implies for the effect of trade on employment. Suppose that the U.S. economy were to experience an export surge. Suppose, for example, that the United States agreed to drop its objections to slave labor if China agreed to buy $200 billion worth of U.S. goods. What would the Fed do? It would offset the expansionary effect of the exports by raising interest rates; thus any increase in export-related jobs would be more or less matched by a loss of jobs in interest-rate-sensitive sectors of the economy, such as construction. Conversely, the Fed would surely respond to an import surge by lowering interest rates, so the direct loss of jobs to import competition would be roughly matched by an increased number of jobs elsewhere.
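
A minimal toy model may make the offset mechanism concrete. This is an illustration of the argument, not anything from Krugman's article: it simply assumes the Fed holds total employment at a fixed target, so export-driven job gains are matched one for one by losses in rate-sensitive sectors. All the numbers are invented.

```python
# Toy illustration of the "Fed offset" argument (invented numbers, not Krugman's model):
# if the central bank holds total employment at a fixed target, an export surge
# changes the composition of jobs but not their total.

EMPLOYMENT_TARGET = 120_000_000        # jobs the Fed is assumed to tolerate without inflation
JOBS_PER_BILLION_OF_EXPORTS = 10_000   # assumed export-related jobs per $1B of new orders

def employment_after_export_surge(surge_billions: float) -> dict:
    export_jobs_gained = surge_billions * JOBS_PER_BILLION_OF_EXPORTS
    # The Fed raises rates until employment is back at its target, so
    # rate-sensitive sectors (construction, etc.) shed an offsetting number of jobs.
    rate_sensitive_jobs_lost = export_jobs_gained
    total = EMPLOYMENT_TARGET + export_jobs_gained - rate_sensitive_jobs_lost
    return {
        "export_jobs_gained": export_jobs_gained,
        "rate_sensitive_jobs_lost": rate_sensitive_jobs_lost,
        "total_employment": total,
    }

# The hypothetical $200 billion order from the article:
print(employment_after_export_surge(200))
# total_employment equals the target; only the mix of jobs has changed.
```

The toy captures only the narrow point being made here: under a binding employment target, trade policy alters which jobs exist, not how many.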

Even if we ignore the point that free trade always increases world imports by exactly as much as it increases world exports, there is still no reason to expect free trade to increase U.S. employment, nor should we expect any other trade policy, such as export promotion, to increase the total number of jobs in our economy. When the U.S. secretary of commerce returns from a trip abroad with billions of dollars in new orders for U.S. companies, he may or may not be instrumental in creating thousands of export-related jobs. If he is, he is also instrumental in destroying a roughly equal number of jobs elsewhere in the economy. The ability of the U.S. economy to increase exports or roll back imports has essentially nothing to do with its success in creating jobs.

Needless to say, this argument does not sit well with business audiences. (When I argued on one business panel that the North American Free Trade Agreement would have no effect, positive or negative, on the total number of jobs in the United States, one of my fellow panelists—a NAFTA supporter—reacted with rage: “It’s comments like that that explain why people hate economists!”) The job gains from increased exports or losses from import competition are tangible: You can actually see the people making the goods that foreigners buy, the workers whose factories were closed in the face of import competition. The other effects that economists talk about seem abstract. And yet if you accept the idea that the Fed has both a jobs target and the means to achieve it, you must conclude that changes in exports and imports have little effect on overall employment.

by Paul Krugman, Harvard Business Review |  Read more:
Image: Carlo Giambarresi

Prospects Are Dim for America’s Great Outdoors

[ed. The LWCF has been fundamental to conservation management for decades. What's gained by letting it expire?]

On September 30, Congress allowed a relatively little-known but very important conservation provision to expire: The Land and Water Conservation Fund. While the average outdoor lover might not be familiar with this program, chances are good that they’ve enjoyed one of the places it’s helped protect.

Over the years, it has contributed tens of millions of dollars to protect lands in all 50 states. Thanks to the LWCF, visitors can enjoy areas in Mount Rainier; Redwood and Acadia National Parks; George Washington’s birthplace; Brown v. the Board of Education historic sites; Cape Hatteras in North Carolina and other national seashores; and countless wildlife refuges, management areas, and access points. Closer to home, the fund has supported more than 40,000 state and local projects—ball fields, trails, parks, and community open spaces. Almost every county in the nation has a park project covered by the fund.

The fund uses royalty revenue from something dirty (offshore drilling in public waters) to fund something clean—namely new conservation efforts. The idea is to bring balance to the use of our public resources. The monies are often used to match grants for state and local parks and recreation projects. They’re also used for voluntary buy-outs of private inholdings in national parks and wildlife areas that would otherwise be developed. It’s an idea that has been tremendously successful and widely supported. After all, who wants a beautiful overlook of a subdivision?

The fund even has strong bipartisan support in Congress. Yet, thanks to a handful of ideologues, it has expired. Put simply, the loss of the fund jeopardizes the continued conservation of our outdoors. Congress’ past refusal to fully fund the program has created a backlog of billions of dollars in needs for land acquisition, state and local park maintenance, and public access improvements. With the total loss of the program, even more projects will go unrealized.

This means that the pristine natural environment of Hawaii Volcanoes National Park’s Pohue Bay could become a resort, that plans to secure permanent public access to more than 7,000 acres of forest in the Northern Rockies could fall by the wayside, and that parts of Gettysburg National Military Park, including an Underground Railroad site, could be developed. But it’s not just national parks and historic sites at risk. Also affected are local projects like plans to relocate parts of California’s Pacific Crest Trail, the Appalachian Trail in Tennessee, and the New England National Scenic Trail in Massachusetts.

by Dan Chu, Outside | Read more:
Image: Giant Ginkgo, flickr