Tuesday, March 18, 2014

How "Revolution" Became an Adjective

    In case of rain, the revolution will take place in the hall.
    -- Erwin Chargaff

For the last several years, the word “revolution” has been hanging around backstage on the national television talk-show circuit waiting for somebody, anybody -- visionary poet, unemployed automobile worker, late-night comedian -- to cue its appearance on camera. I picture the word sitting alone in the green room with the bottled water and a banana, armed with press clippings of its once-upon-a-time star turns in America’s political theater (tie-dyed and brassiere-less on the barricades of the 1960s countercultural insurrection, short-haired and seersucker smug behind the desks of the 1980s Reagan Risorgimento), asking itself why it’s not being brought into the segment between the German and the Japanese car commercials.

Surely even the teleprompter must know that it is the beast in the belly of the news reports, more of them every day in print and en blog, about income inequality, class conflict, the American police state. Why then does nobody have any use for it except in the form of the adjective, revolutionary, unveiling a new cellphone app or a new shade of lipstick?

I can think of several reasons, among them the cautionary tale told by the round-the-clock media footage of dead revolutionaries in Syria, Egypt, and Tunisia, also the certain knowledge that anything anybody says (on camera or off, to a hotel clerk, a Facebook friend, or an ATM) will be monitored for security purposes. Even so, the stockpiling of so much careful silence among people who like to imagine themselves on the same page with Patrick Henry -- “Give me liberty, or give me death” -- raises the question as to what has become of the American spirit of rebellion. Where have all the flowers gone, and what, if anything, is anybody willing to risk in the struggle for “Freedom Now,” “Power to the People,” “Change We Can Believe In”?

My guess is next to nothing that can’t be written off as a business expense or qualified as a tax deduction. Not in America at least, but maybe, with a better publicist and 50% of the foreign rights, somewhere east of the sun or west of the moon. (...)

I inherited the instinct as a true-born American bred to the worship of both machinery and money; an appreciation of its force I acquired during a lifetime of reading newspaper reports of political uprisings in the provinces of the bourgeois world state -- in China, Israel, and Greece in the 1940s; in the 1950s those in Hungary, Cuba, Guatemala, Algeria, Egypt, Bolivia, and Iran; in the 1960s in Vietnam, France, America, Ethiopia, and the Congo; in the 1970s and 1980s in El Salvador, Poland, Nicaragua, Kenya, Argentina, Chile, Indonesia, Czechoslovakia, Turkey, Jordan, Cambodia, again in Iran; over the last 24 years in Russia, Venezuela, Lebanon, Croatia, Bosnia, Libya, Tunisia, Syria, Ukraine, Iraq, Somalia, South Africa, Romania, Sudan, again in Algeria and Egypt.

The plot line tends to repeat itself -- first the new flag on the roof of the palace, rapturous crowds in the streets waving banners; then searches, requisitions, massacres, severed heads raised on pikes; soon afterward the transfer of power from one police force to another police force, the latter more repressive than the former (darker uniforms, heavier motorcycles) because more frightened of the social and economic upheavals they can neither foresee nor control.

All the shiftings of political power produced changes within the committees managing regional budgets and social contracts on behalf of the bourgeois imperium. None of them dethroned or defenestrated Adams’ dynamo or threw off the chains of Marx’s cash nexus. That they could possibly do so is the “romantic idea” that Albert Camus, correspondent for the French Resistance newspaper Combat during and after World War II, sees in 1946 as having been “consigned to fantasy by advances in the technology of weaponry.”

The French philosopher Simone Weil draws a corollary lesson from her acquaintance with the Civil War in Spain, and from her study of the communist Sturm und Drang in Russia, Germany, and France subsequent to World War I. “One magic word today seems capable of compensating for all sufferings, resolving all anxieties, avenging the past, curing present ills, summing up all future possibilities: that word is revolution... This word has aroused such pure acts of devotion, has repeatedly caused such generous blood to be shed, has constituted for so many unfortunates the only source of courage for living, that it is almost a sacrilege to investigate it; all this, however, does not prevent it from possibly being meaningless.”

by Lewis Lapham, Tom Dispatch |  Read more:
Image: via:

On a Strange Roof, Thinking of Home

In 2009 The Oxford American polled 134 Southern writers and academics and put together a list of the greatest Southern novels of all time based on their responses. All save one, The Adventures of Huckleberry Finn, were published between 1929 and 1960. What we think of when we think of “Southern fiction” exists now almost entirely within the boundaries of the two generations of writers that occupied that space. Asked to name great American authors, we’ll give answers that span time from Hawthorne and Melville to Whitman to DeLillo. Ask for great Southern ones and you’ll more than likely get a name from the Southern Renaissance: William Faulkner, Harper Lee, Flannery O’Connor, Walker Percy, Eudora Welty, Thomas Wolfe—all of them sandwiched into the same couple of post-Agrarian decades.

The two waves of Southern writers that crested in the wake of the Agrarian-Mencken fight, first in the 1930s and ’40s, and then in the ’50s and ’60s, didn’t build upon the existing tradition of Southern letters. They weren’t conceived of as new additions to the canon, but as an entirely new canon unto themselves, supplanting the old. They remade the popular notion of Southern literary culture, obscuring predecessors who had, in their time, seemed immortal.

“Southern,” as a descriptor of literature, is immediately familiar, possessed of a thrilling, evocative, almost ontological power. It is a primary descriptor, and alone among American literary geographies in that respect. Faulkner’s work is essentially “Southern” in the same way that Thomas Pynchon’s is essentially “postmodern,” but not, you’ll note, “Northeastern.” To displace Faulkner from his South would be to remove an essential quality; he would functionally cease to exist in a recognizable way.

It applies to the rest of the list, too (with O’Connor the possible exception, being inoculated somewhat by her Catholicism). It is impossible to imagine these writers divorced from the South. This is unusual, and a product of the unusual circumstances that gave rise to them. Faulkner, Lee, Percy, and Welty were no more Southern than Edgar Allan Poe or Sidney Lanier or Kate Chopin, and yet their writing, in the context of the South at that time, definitively was. There’s a universal appeal to their work, to be certain, but it’s also very much a regional literature, one grappling with a very specific set of circumstances in a fixed time, and correspondingly, one with very specific interests: the wearing away of the old Southern social structures, the economic uncertainty inherent in family farming, and overt, systematized racism (which, while undoubtedly still present in the South today, is very much changed from what it was).  (...)

Put a character in a tobacco field and give them a shotgun and an accent and it will evoke, without fail, a sense of the South; this is true. If they pop off with a “Hey there, y’all,” it will sound fitting, correct, like the accordion bleats that mark transitions between stories in a public radio program; useful in pushing you toward a desired emotional state, and fun to listen to when done well. But, on the other hand, it doesn’t mean anything. If this is, in fact, “Southern fiction,” then it is becoming as stale as it was a century ago—updated only in that, instead of regurgitating the Lost Cause ethos, it is now Faulkner’s South that’s subjected to the regional nostalgic impulse, a double reverberation.

There is nothing wrong with these writers because of this. It’s not that they’ve failed somehow to keep up, or are stupefying readers, or anything of the sort. It’s that this kind of writing is no longer reflective of the South—or, it reflects a South that is no longer. We wouldn’t think of someone writing whaling novels as quintessentially “New England” anymore, either. The South isn’t so homogenous a culture as it once was, and the societal tropes that Faulkner and Welty and even Barry Hannah grew up with and explored in their fiction are, in large part, gone. The rise of industrial-scale agribusiness, rapid suburbanization, the death of traditional industries like textiles, the corresponding growth of high-tech industries, a major increase in the Hispanic population: all these things and many more have contributed to a wildly different South than the one summoned in what we casually call “Southern writing.”

by Ed Winstead, Guernica | Read more:
Image: Alec Soth

Jonathan Curry
via:

DHS Wants to Track You Everywhere You Drive

[ed. See also: this.]

Immigration and Customs Enforcement wants to firm up its relationship with Vigilant Solutions, the dominant actor in the increasingly powerful license plate reader industry, to enable agents to more efficiently track down people they want to deport. Vigilant maintains a national database, called the National Vehicle Location Service, containing information revealing the sensitive driving histories of millions of law-abiding people. According to the company, the database currently contains nearly 2 billion discrete records of our movements, and grows by almost 100 million records per month.

In a widely reported but largely misunderstood solicitation for bids, DHS announced that it wants access to a nationwide license plate reader database, along with technology enabling agents to capture and view data from the field, using their smartphones. Reading the solicitation, I was struck by the fact that it almost perfectly describes Vigilant’s system. It’s almost as if the solicitation was written by Vigilant, it so comprehensively sketches out the contours of the corporation’s offerings.

Lots of news reports are misinterpreting DHS’ solicitation, implying that the agency wants to either build its own database or ask a contractor to build one. The department doesn’t intend to build its own license plate reader database, and it isn’t asking corporations to build one. Instead, it is seeking bids from private companies that already maintain national license plate reader databases. And because it’s the only company in the country that offers precisely the kind of services that DHS wants, there’s about a 99.9 percent chance that this contract will be awarded to Vigilant Solutions. (Mark my words.)

According to documents obtained by the ACLU, ICE agents and other branches of DHS have already been tapping into Vigilant’s data sets for years. So why did the agency decide to go public with this solicitation now? Your guess is as good as mine, but it may simply be a formality so that the agency can pretend as if there was actually robust competition in the bidding process. (As recent reporting about the FBI’s secretive surveillance acquisitions has shown, no-bid contracts for spy gear tend to raise eyebrows when they’re finally discovered.)

What’s the problem with a nationwide license plate tracking database, anyway? If you aren't the subject of a criminal investigation, the government shouldn't be keeping tabs on when you go to the grocery store, your friend's house, the abortion clinic, the antiwar protest, or the mosque. In a democratic society, we should know almost everything about what the government's doing, and it should know very little to nothing about us, unless it has a good reason to believe we're up to no good and shows that evidence to a judge. Unfortunately, that basic framework for an open, democratic society has been turned on its head. Now the government routinely collects vast troves of data about hundreds of millions of innocent people, casting everyone as a potential suspect until proven innocent. That's unacceptable.

by Admin, SOS |  Read more:
Image: uncredited

Fast Fashion


Over the past 15 years, the fashion industry has undergone a profound and baffling transformation. What used to be a stable three-month production cycle—the time it takes to design, manufacture, and distribute clothing to stores, in an extraordinary globe-spanning process—has collapsed, across much of the industry, to just two weeks. The “on-trend” clothes that were, until recently, only accessible to well-heeled, slender urban fashionistas, are now available to a dramatically broader audience, at bargain prices. A design idea for a blouse, cribbed from a runway show in Paris, can make it onto the racks in Wichita in a wide range of sizes within the space of a month.

Popularly known as “fast fashion,” this trend has inspired a great deal of media attention, but not many satisfying explanations as to how this huge shift came about, especially in the United States, and why it happened when it did. Some accounts attribute the new normal to top-down “process innovations” at big companies like Inditex, the parent company of Zara and the world’s largest—but hardly most typical—fast-fashion retailer. And at times, popular writing has simply lumped fast fashion in with the generally sped-up pace of life in the digital age, as if complex industrial systems were as fluid as our social media habits.

So the questions remain: Who is designing and manufacturing these garments in the U.S.? How are so many different suppliers producing such large volumes of clothes so quickly, executing coordinated feats of design, production, and logistics in a matter of days?

For my own part, I went looking for the answers in church.

Specifically, I paid a visit this past summer to the Ttokamsa Home Mission Church, a large, gray, industrial box of a building near a highway on the edge of Echo Park, a residential neighborhood in East Los Angeles. A well-known local institution among Korean Americans, the church is the spiritual home of the Chang family—the owners of Forever 21, the largest fast-fashion retailer based in the U.S. (Look on the bottom of any canary-yellow Forever 21 shopping bag and you’ll find the words “John 3:16.”)

With more than 630 locations worldwide, the Changs’ retail empire employs more than 35,000 people and made $3.7 billion in revenue in 2012. But in the pews at Ttokamsa, the Changs are in good company: The vast majority of their fellow parishioners are Korean families that also make their livelihoods in fast fashion.

As an anthropologist, I have been coming to Los Angeles with the photographer Lauren Lancaster for the past two years to study the hundreds of Korean families who have, over the last decade, transformed the city’s garment district into a central hub for fast fashion in the Americas. These families make their living by designing clothes, organizing the factory labor that will cut and sew them in places like China and Vietnam, and selling them wholesale to many of the most famous retailers in the U.S.—including Forever 21, Urban Outfitters, T.J. Maxx, Anthropologie, and Nordstrom.

I first became curious about the garment sector in Los Angeles after noticing that an increasingly large proportion of students at Parsons, the New York design school where I teach, were second-generation children of Korean immigrants from Southern California. Many of them were studying fashion marketing and design so they could return to Los Angeles to help scale up their parents’ businesses. These students and their contemporaries were, I came to understand, the driving force behind U.S. fast fashion—a phenomenon whose rise is less a story about corporate innovation than one about an immigrant subculture coming of age.

by Christina Moon, Pacific Standard |  Read more:
Image: Lauren Lancaster

Arthur Meyerson, Color of Light
via:

Monday, March 17, 2014

A Scientific Breakthrough Lets Us See to the Beginning of Time

At rare moments in scientific history, a new window on the universe opens up that changes everything. Today was quite possibly such a day. At a press conference on Monday morning at the Harvard-Smithsonian Center for Astrophysics, a team of scientists operating a sensitive microwave telescope at the South Pole announced the discovery of polarization distortions in the Cosmic Microwave Background Radiation, which is the observable afterglow of the Big Bang. The distortions appear to be due to the presence of gravitational waves, which would date back to almost the beginning of time.

This observation, made possible by the fact that gravitational waves can travel unimpeded through the universe, takes us to 10⁻³⁵ seconds after the Big Bang. By comparison, the Cosmic Microwave Background—which, until today, was the earliest direct signal we had of the Big Bang—was created when the universe was already three hundred thousand years old.

If the discovery announced this morning holds up, it will allow us to peer back to the very beginning of time—a million billion billion billion billion billion times closer to the Big Bang than any previous direct observation—and will allow us to explore the fundamental forces of nature on a scale ten thousand billion times smaller than can be probed at the Large Hadron Collider, the world’s largest particle accelerator. Moreover, it will allow us to test some of the most ambitious theoretical speculations about the origin of our observed universe that have ever been made by humans—speculations that may first appear to verge on metaphysics. It might seem like an esoteric finding, so far removed from everyday life as to be of almost no interest. But, if confirmed, it will have increased our empirical window on the origins of the universe by a margin comparable to the amount it has grown in all of the rest of human history. Where this may lead, no one knows, but it should be cause for great excitement.

Even for someone who has been thinking about these possibilities for the past thirty-five years, the truth can sometimes seem stranger than fiction. In 1979, a young particle physicist named Alan Guth proposed what seemed like an outrageous possibility, which he called Inflation: that new physics, involving a large extrapolation from what could then be observed, might imply that the universe expanded in size by over thirty orders of magnitude in a tiny fraction of a second after the Big Bang, increasing in size by a greater amount in that instant than it has in the fourteen billion years since.

by Lawrence Krauss, New Yorker |  Read more:
Image: Steffen Richter/Harvard University

Why the Long Face?

I was listening to Paul Simon’s Hearts and Bones album recently, for the first time in many years – the first time, really, since I was a young teenager. I bought it when it came out in 1983 and listened to it over and over. But hearing it again, and particularly listening to the title track, I was struck by a question: how did I take this back then? What did it mean to me, and why did it mean so much?

So: the title song is a beautifully worn-down response to a relationship at its end, a mix of nostalgic glimpses of happier times and a weary, bruised sense of life in the aftermath of some cathartic break-up. Listening to it as a young teenager, still a virgin and almost wholly inexperienced in such emotions, I wonder if I didn’t think this is how I want to feel. I wanted the happiness, but in a retrospective way (because then it’s done and dusted and safe); and I wanted the melancholy because it just seemed so grown-up and sophisticated and suave. I wanted, as an old joke has it, to skip the marriage and go straight to the divorce. After all – and I am hardly the first person to point this out – there is a complex sort of joy in sadness.

But can this be right? Surely what people want is to be happy. Whole philosophies (I’m looking at you, utilitarianism) rest on the premise that more happiness is always and everywhere a good thing. There is a Global Happiness Index, measuring how happy people are (Denmark tops the league). Bhutan even has a Gross National Happiness Commission, with the power to review government policy decisions and allocate resources.

It’s good to be happy sometimes, of course. Yet the strange truth is that we don’t wish to be happy all the time. If we did, more of us would be happy – it’s not as if we in the affluent West lack tools or means to gratify ourselves. Sometimes we are sad because we have cause, and sometimes we are sad because – consciously or unconsciously – we want to be. Perhaps there’s a sense in which emotional variety is better than monotony, even if the monotone is a happy one. But there’s more to it than that, I think. We value sadness in ways that make happiness look a bit simple-minded. (...)

It was Charles Darwin, in The Expression of the Emotions in Man and Animals (1872), who noted that sadness manifested the same way in all cultures. For something so ubiquitous, it is tempting to venture an evolutionary explanation. Alas, the anthropological and evolutionary work in this area has focused almost entirely upon depression, which is not quite what we are talking about here. I can tell you with rather grim authority that the difference between elegant ennui and the black dog is like the difference between pleasant intoxication and typhus. Many evolutionary theories have been proposed for depression’s adaptive value, but no one has, so far as I am aware, tried to claim that it is enjoyable.

If depression is a foul miasma wreathing the brain, elegant sadness is more like a peacock’s tail, coloured in blue-gentian and rich marine greens. Is it also universal? To this question, anthropology offers no definitive answer. Yet the condition certainly manifests itself in a suggestive array of cultures. It is the sadness to which the Japanese phrase mono no aware gestures (物の哀れ, literally ‘the beautiful sorrow of things’). It is the haunted simplicity of those musical traditions that spread from Africa into the New World as the Blues. It’s the mixture of strength, energy, pity and melancholy that Claude Lévi-Strauss found in Brazil, encapsulated in the title of his book about his travels there, Tristes Tropiques (1955). It’s the insight of Vergil’s Aeneas, as he looks back over his troubled life and forward to troubles yet to come: sunt lacrimae rerum; there are tears in everything, said not mournfully nor hopelessly but as a paradoxical statement about the beauty of the world (Aeneid 1:462).

by Adam Roberts, Aeon |  Read more:
Image via: 

Sunday, March 16, 2014

Wild Darkness

For twenty-six Septembers I’ve hiked up streams littered with corpses of dying humpbacked salmon. It is nothing new, nothing surprising, not the stench, not the gore, not the thrashing of black humpies plowing past their dead brethren to spawn and die. It is familiar; still, it is terrible and wild. Winged and furred predators gather at the mouths of streams to pounce, pluck, tear, rip, and plunder the living, dying hordes. This September, it is just as terrible and wild as ever, but I gather in the scene with different eyes, the eyes of someone whose own demise is no longer an abstraction, the eyes of someone who has experienced the tears, rips, and plunder of cancer treatment. In spring, I learned my breast cancer had come back, had metastasized to the pleura of my right lung. Metastatic breast cancer is incurable. Through its prism I now see this world.

I’m not a salmon biologist. I don’t hike salmon streams as part of my job. I hike up streams and bear trails and muskegs and mountains for pleasure. The work my husband, Craig, and I do each field season in Prince William Sound is sedentary. We study whales. For weeks at a stretch, we live on a thirty-four-foot boat far from any town, often out of cell-phone and internet range. We sit for hours on the flying bridge with binoculars or a camera pressed to our eyes. Periodically, we climb down the ladder and walk a few paces to the cabin to retrieve the orca or humpback catalogue, to drop the hydrophone, or to grab fresh batteries, mugs of hot soup or tea, or granola bars. We climb back up. We get wet; we get cold; we get bored; sometimes we even get sunburned. We eat, sleep, and work on the boat. Hikes are our sanity, our maintenance. We hike because we love this rainy, lush, turbulent, breathing, expiring, windy place as much as we love our work with whales. It’s a good thing, because in autumn weather thwarts our research half the time and sends us ashore, swaddled in heavy rain gear, paddling against williwaw gusts and sideways rain in our red plastic kayaks. What we find there is not always pretty.

Normally, September is the beginning of the end of our field season, which starts most years in April or May. But for me, this year it’s just the beginning, and conversely, like everything else in my life since I learned cancer had come back, it’s tinged with the prescience of ending. The median survival for a person with metastatic breast cancer is twenty-six months. Some people live much longer. An oncologist told me he could give me a prognosis if I demanded one, but it would most likely be wrong. I changed the subject. No one can tell me how long I will live. Will this be my last field season? Will the chemo drug I’m taking subdue the cancer into a long-term remission? Will I be well enough to work on the boat next summer? Will I be alive?

A summer of tests and procedures and doctor appointments kept me off the boat until now. A surgery and six-day hospitalization in early August to prevent fluid from building up in my pleural space taught me that certain experiences cut us off entirely from nature—or seem to; I know that as long as we inhabit bodies of flesh, blood, and bone, we are wholly inside nature. But under medical duress, we forget this. Flesh, blood, and bone notwithstanding, a body hooked by way of tubes to suction devices, by way of an IV to a synthetic morphine pump, forgets its organic, animal self. In the hospital, I learned to fear something more than death: existence dependent upon technology, machines, sterile procedures, hoses, pumps, chemicals easing one kind of pain only to feed a psychic other. Existence apart from dirt, mud, muck, wind gust, crow caw, fishy orca breath, bog musk, deer track, rain squall, bear scat. The whole ordeal was a necessary palliation, a stint of suffering to grant me long-term physical freedom. And yet it smacked of the way people too often spend their last days alive, and it really scared me.

Ultimately, what I faced those hospital nights, what I face every day, is death impending—the other side, the passing over into, the big unknown—what poet Joseph Brodsky called his “wild darkness,” what poet Christian Wiman calls his “bright abyss.” Death may be the wildest thing of all, the least tamed or known phenomenon our consciousness has to reckon with. I don’t understand how to meet it, not yet—maybe never. Perhaps (I tell myself), though we deny and abhor and battle death in our society, though we hide it away, it is something so natural, so innate, that when the time comes, our bodies—our whole selves— know exactly how it’s done. All I know right now is that something has stepped toward me, some invisible presence in the woods, one I’ve always sensed and feared and backed away from, called out to in a tentative voice (hello?), trying to scare it off, but which I now must approach. I stumble toward it in dusky conifer light: my own predatory, furred, toothed, clawed angel. (...)

Can I take comfort in the countless births and deaths this earth enacts each moment, the jellyfish, the barnacles, the orcas, the salmon, the fungi, the trees, much less the humans? I woke this morning to the screech of gulls at the stream mouth. We’d anchored in Sleepy Bay for the night, a cove wide open to the strait where we often find orcas. The humpbacked salmon—millions returned this summer, a record run—are all up the creeks now. Before starting our daily search, Craig and I kayaked to shore. As we approached, I watched the gulls, dozens of them, launching from the sloping beach where the stream branches into rivulets and pours into the bay. They wheeled and dipped over our heads, then quickly settled again to their grim task, plucking at faded salmon carcasses scattered all over the stones. The stench of a salmon stream in September is a cloying muck of rot, waste, ammonia. Rocks are smeared with black bear shit, white gull shit. This is in-your-face death, death without palliation or mercy or intervention. At the same time, it is enlivening, feeding energy to gulls, bears, river otters, eagles, and the invisible decomposers who break the carcasses down to just bones and scales, which winter then erases. In spring, I kneel and drink from the same stream’s clear cold water, or plunge my head into it. It is snowmelt and rain filtered through alpine tundra, avalanche chute, muskeg, fen, and bog. It is water newly born, fresh, alive, and oxygenated, rushing over clean stones, numbing my skin.

by Eva Saulitis, Orion |  Read more:
Image: NOAA

[ed. I know Sleepy Bay well. The stream she mentions was reconstructed after being buried under a foot of oil during the Exxon Valdez spill. It's heartening to hear that it's still productive, and that people still enjoy it. It took many battles.]

How Finance Gutted Manufacturing

In May 2013 shareholders voted to break up the Timken Company—a $5 billion Ohio manufacturer of tapered bearings, power transmissions, gears, and specialty steel—into two separate businesses. Their goal was to raise stock prices. The company, which makes complex and difficult products that cannot be easily outsourced, employs 20,000 people in the United States, China, and Romania. Ward “Tim” Timken, Jr., the Timken chairman whose family founded the business more than a hundred years ago, and James Griffith, Timken’s CEO, opposed the move.

The shareholders who supported the breakup hardly looked like the “barbarians at the gate” who forced the 1988 leveraged buyout of RJR Nabisco. This time the attack came from the California State Teachers’ Retirement System pension fund, the second-largest public pension fund in the United States, together with Relational Investors LLC, an asset management firm. And Tim Timken was not, like the RJR Nabisco CEO, eagerly pursuing the breakup to raise his own take. But beneath these differences are the same financial pressures that have shaped corporate structure for thirty years.  (...)

In the radical downsizing of American manufacturing, changes in corporate structures since the 1980s have been a powerful driver, though not one that is generally recognized. Over the first decade of the twenty-first century, about 5.8 million U.S. manufacturing jobs disappeared. The most frequent explanations for this decline are productivity gains and increased trade with low-wage economies. Both of these factors have been important, but they explain far less of the picture than is usually claimed.  (...)

To better understand the decline of American manufacturing, we need to go back well before the last decade to see how changes in corporate structures made it more difficult to scale up innovation through production to market.

In the 1980s about two dozen large, vertically integrated companies such as Motorola, DuPont, and IBM dominated the American scene. With some notable exceptions (for example, GE), large vertically integrated companies today have pared off activities and become not only smaller but also more narrowly focused on core competencies. Under pressure from financial markets, they have shed activities that investors deemed peripheral—such as Timken’s steel.

This process has been fostered by great technological advances in digitization, which have allowed companies to outsource and offshore many of the functions they previously had to carry out themselves. In the 1970s a Hewlett-Packard engineer who designed circuits for a new semiconductor chip had to work together with a technician with a razor blade to cut a mask to place on silicon. Now the engineer can send a complete file of digital instructions over the Internet to a cutting machine. The mask and the chip fabrication can take place in different companies, anywhere in the world. A senior executive of Cisco told MIT researchers:
The separation of R&D and manufacturing has today become possible at a level not even conceivable five years ago. Progress in technology allows us to have people working anywhere collaborating. We no longer need to have them located in clusters or centers of excellence. We now have the ability to sense and monitor what’s going on in our suppliers at any place and any time. Most of this is based on sensors deployed locally, distributed control systems, and new middleware and encryption schemes that allow this to be done securely over the open Internet. . . . In other words, not only do we monitor and control what’s happening inside a factory, but we’re also deeply into the supply chain feeding in and feeding out of the factory.
Digitization and the Internet continue in multiple ways to enable the fragmentation of corporate structures that financial markets demand.

The breakup of vertically integrated corporations and their recomposition into globally linked value chains of designers, researchers, manufacturers, and distributors has had some enormous benefits both for the United States and for developing economies. It has meant lower costs for consumers, new pathways for building businesses, and a chance for poor countries to create new industries and raise incomes.

But the changes in corporate structures that brought about these new opportunities also left big holes in the American industrial ecosystem. These holes are market failures. Functions once performed by big companies are now carried out by no one.

by Suzanne Berger, Boston Review |  Read more:
Image: Timken Company

Saturday, March 15, 2014

Amy Winehouse


[ed. Just wasted 15 minutes trying to change the intro thumbnail to this video. It can be done, just not by me. Sorry.] 

A Passage from Hong Kong

Imagine the Empire State Building. Now imagine tipping it on its side, nudging it into the Hudson, and putting out to sea. That was the scale of thing I contemplated one day in late November, as I gaped at the immense navy hull of CMA CGM Christophe Colomb, one of the world’s largest container ships, which stretched above and out of sight on either side of me, on a quayside in Hong Kong. Nearly twelve hundred feet long, it’s bigger than an aircraft carrier and longer than the world’s largest cruise ships. On Christophe Colomb, all of that space goes to boxes. The ship has a capacity of 13,344 TEUs—“twenty-foot equivalent units,” the size of a standard shipping container. These are stacked seven high above deck and another six to eight below. In cheerful shades of turquoise, maroon, navy, gold, and green, they look like a set of Legos designed for a young giant.

Trying to see where one even boards such a vessel, I noticed a steep aluminum gangway and went up its seventy-four steps, through two hatches, and into the eight-story “castle” that sits above the main deck and houses the ship’s living quarters, offices, and bridge. This was to be my home for nearly four weeks, as I took passage on Christophe Colomb from Hong Kong to Southampton, England, via the Suez Canal. (...)

In the 1960s, the shipping industry was transformed by the widespread adoption of the standardized shipping container. Developed by American trucking entrepreneur Malcom McLean, the container served as a one-size-fits-all package for goods. These twenty-foot boxes could be packed at the place of consignment (whether a factory, a warehouse, or a person’s front door), hitched up to a truck, driven to the quayside, lifted off the truck by a crane, and loaded directly into their designated places in a ship’s hold—thus eliminating expensive, time-consuming transfers from land transport to port warehouse, warehouse to ship hold. If 13,000 containers seem like a lot to load onto a ship, consider what it was like when every single item within those containers would have to be loaded individually; inventories of even modest-sized ships in the pre-container age ran into the hundreds of thousands of items. Now it can take less than a minute for a gantry crane to grab a container off the quayside, lift, swing, and drop it into place on a ship, then slide immediately back for another. (...)

When a container ship arrives in port today, it slides into a 24/7 operation superintended by logistics experts in distant offices. On board ship, the chief officer checks to make sure things go according to the computerized plan sent to him by the logistics office. As we watched the boxes pile on board Christophe Colomb in Hong Kong, I asked the chief officer if he had any idea what was in them. He shrugged, not even curious. All he knows—all anybody on board knows—is whether they need to be refrigerated, or whether they contain hazardous materials and need to be placed in a secure storage area. A port call lasts only six to twenty-four hours, and sailors rarely bother to get off the boat. The containers thus put up a wall between sea and land, making each side less accessible to the other.

By reducing the cost of transport, containerization accelerated a process of global economic integration whose earlier stages Conrad had witnessed. Today “shipping is so cheap,” writes the British journalist Rose George in Ninety Percent of Everything, “that it makes more financial sense for Scottish cod to be sent ten thousand miles to China to be filleted, then sent back to Scottish shops and restaurants, than to pay Scottish filleters.” Residents of the English port city Southampton were recently asked what percentage of goods they thought traveled by sea. All their answers, George says, “had the interrogative upswing of the unsure. ‘Thirty-five percent?’ ‘Not a lot?’ The answer is, nearly everything.” Ninety percent of everything, to be more accurate: most of the clothes you put on this morning; the coffee or tea you drank; your car, or at least parts of it, and some of the gas you put into it; your computer, television, phone, earphones—in short, the stuff of daily life.

by Maya Jasanoff, NY Review of Books |  Read more:
Image: Ocean/Corbis

Bill Gates: The Rolling Stone Interview

At 58, Bill Gates is not only the richest man in the world, with a fortune that now exceeds $76 billion, but he may also be the most optimistic. In his view, the world is a giant operating system that just needs to be debugged. Gates' driving idea – the idea that animates his life, that guides his philanthropy, that keeps him late in his sleek book-lined office overlooking Lake Washington, outside Seattle – is the hacker's notion that the code for these problems can be rewritten, that errors can be fixed, that huge systems – whether it's Windows 8, global poverty or climate change – can be improved if you have the right tools and the right skills. The Bill & Melinda Gates Foundation, the philanthropic organization with a $36 billion endowment that he runs with his wife, is like a giant startup whose target market is human civilization.

Personally, Gates has very little Master of the Universe swagger and, given the scale of his wealth, his possessions are modest: three houses, one plane, no yachts. He wears loafers and khakis and V-neck sweaters. He often needs a haircut. His glasses haven't changed much in 40 years. For fun, he attends bridge tournaments.

But if his social ambitions are modest, his intellectual scope is mind-boggling: climate, energy, agriculture, infectious diseases and education reform, to name a few. He has former nuclear physicists helping cook up nutritional cookies to feed the developing world. A polio SWAT team has already spent $1.5 billion (and is committed to another $1.8 billion through 2018) to eradicate the virus. He's engineering better toilets and funding research into condoms made of carbon nanotubes.

It's a long way from the early days of the digital revolution, when Gates was almost a caricature of a greedy monopolist hell-bent on installing Windows on every computer in the galaxy ("The trouble with Bill," Steve Jobs once told me, "is that he wants to take a nickel for himself out of every dollar that passes through his hands"). But when Gates stepped down as Microsoft CEO in 2000, he found a way to transform his aggressive drive to conquer the desktop into an aggressive drive to conquer poverty and disease.

Now he's returning to Microsoft as a "technology adviser" to Satya Nadella, Microsoft's new CEO. "Satya has asked me to review the product plans and come in and help make some quick decisions and pick some new directions," Gates told me as we talked in his office on a rainy day a few weeks ago. He estimates that he'll devote a third of his time to Microsoft and two-thirds to his foundation and other work. But the Microsoft of today is nothing like the world-dominating behemoth of the Nineties. The company remained shackled to the desktop for too long, while competitors – namely, Apple and Google – moved on to phones and tablets. And instead of talking in visionary terms about the company's future, Gates talks of challenges that sound almost mundane for a man of his ambitions, like reinventing Windows and Office for the era of cloud computing. But in some ways, that's not unexpected: Unlike, say, Jobs, who returned to Apple with a religious zeal, Gates clearly has bigger things on his mind than figuring out how to make spreadsheets workable in the cloud.

When you started Microsoft, you had a crazy-sounding idea that someday there would be a computer on every desktop. Now, as you return to Microsoft 40 years later, we have computers not just on our desktops, but in our pockets – and everywhere else. What is the biggest surprise to you in the way this has all played out?

Well, it's pretty amazing to go from a world where computers were unheard of and very complex to where they're a tool of everyday life. That was the dream that I wanted to make come true, and in a large part it's unfolded as I'd expected. You can argue about advertising business models or which networking protocol would catch on or which screen sizes would be used for which things. There are less robots now than I would have guessed. Vision and speech have come a little later than I had guessed. But these are things that will probably emerge within five years, and certainly within 10 years.

If there's a deal that symbolizes where Silicon Valley is today, it's Facebook's $19 billion acquisition of WhatsApp. What does that say about the economics of Silicon Valley right now?
It means that Mark Zuckerberg wants Facebook to be the next Facebook. Mark has the credibility to say, "I'm going to spend $19 billion to buy something that has essentially no revenue model." I think his aggressiveness is wise – although the price is higher than I would have expected. It shows that user bases are extremely valuable. It's software; it can morph into a broad set of things – once you're set up communicating with somebody, you're not just going to do text. You're going to do photos, you're going to share documents, you're going to play games together.

Apparently, Google was looking at it.
Yeah, yeah. Microsoft would have been willing to buy it, too. . . . I don't know for $19 billion, but the company's extremely valuable.

You mentioned Mark Zuckerberg. When you look at what he's done, do you see some of yourself in him?
Oh, sure. We're both Harvard dropouts, we both had strong, stubborn views of what software could do. I give him more credit for shaping the user interface of his product. He's more of a product manager than I was. I'm more of a coder, down in the bowels and the architecture, than he is. But, you know, that's not that major of a difference. I start with architecture, and Mark starts with products, and Steve Jobs started with aesthetics.

by Jeff Goodell, Rolling Stone |  Read more:
Image: Roberto Parada