Monday, January 9, 2012


Gold fireflies in Japan

How (not) to Communicate New Scientific Information

In 1983, at the Urodynamics Society meeting in Las Vegas, Professor G.S. Brindley first announced to the world his experiments on self-injection with papaverine to induce a penile erection. This was the first time that an effective medical therapy for erectile dysfunction (ED) was described, and was a historic development in the management of ED. The way in which this information was first reported was unique and memorable, and provides an interesting context for the development of therapies for ED. I was present at this extraordinary lecture, and the details are worth sharing. Although this lecture was given more than 20 years ago, the details have remained fresh in my mind, for reasons which will become obvious.

The lecture, which had an innocuous title along the lines of ‘Vaso-active therapy for erectile dysfunction’, was scheduled as an evening lecture of the Urodynamics Society in the hotel in which I was staying. I was a senior resident, hungry for knowledge, and at the AUA I went to every lecture that I could. About 15 min before the lecture I took the elevator to go to the lecture hall, and on the next floor a slight, elderly-looking and bespectacled man, wearing a blue track suit and carrying a small cigar box, entered the elevator. He appeared quite nervous, and shuffled back and forth. He opened the box in the elevator, which became crowded, and started examining and riffling through the 35 mm slides of micrographs inside. I was standing next to him, and could vaguely make out the content of the slides, which appeared to be a series of pictures of penile erection. I concluded that this was, indeed, Professor Brindley on his way to the lecture, although his dress seemed inappropriately casual.

The lecture was given in a large auditorium, with a raised lectern separated by some stairs from the seats. This was an evening programme, between the daytime sessions and an evening reception. It was relatively poorly attended, perhaps 80 people in all. Most attendees came with their partners, clearly on the way to the reception. I was sitting in the third row, and in front of me were about seven middle-aged male urologists, and their partners in ‘full evening regalia’.

Professor Brindley, still in his blue track suit, was introduced as a psychiatrist with broad research interests. He began his lecture without aplomb. He had, he indicated, hypothesized that injection with vasoactive agents into the corporal bodies of the penis might induce an erection. Lacking ready access to an appropriate animal model, and cognisant of the long medical tradition of using oneself as a research subject, he began a series of experiments on self-injection of his penis with various vasoactive agents, including papaverine, phentolamine, and several others. (While this is now commonplace, at the time it was unheard of). His slide-based talk consisted of a large series of photographs of his penis in various states of tumescence after injection with a variety of doses of phentolamine and papaverine. After viewing about 30 of these slides, there was no doubt in my mind that, at least in Professor Brindley's case, the therapy was effective. Of course, one could not exclude the possibility that erotic stimulation had played a role in acquiring these erections, and Professor Brindley acknowledged this.

The Professor wanted to make his case in the most convincing style possible. He indicated that, in his view, no normal person would find the experience of giving a lecture to a large audience to be erotically stimulating or erection-inducing. He had, he said, therefore injected himself with papaverine in his hotel room before coming to give the lecture, and deliberately wore loose clothes (hence the track-suit) to make it possible to exhibit the results. He stepped around the podium, and pulled his loose pants tight up around his genitalia in an attempt to demonstrate his erection.

At this point, I, and I believe everyone else in the room, was agog. I could scarcely believe what was occurring on stage. But Prof. Brindley was not satisfied. He looked down sceptically at his pants and shook his head with dismay. ‘Unfortunately, this doesn’t display the results clearly enough’. He then summarily dropped his trousers and shorts, revealing a long, thin, clearly erect penis. There was not a sound in the room. Everyone had stopped breathing.

by Laurence Klotz, Wiley Online Library |  Read more:

perfect storm

Sex, Bombs and Burgers


Our lives today are more defined by technology than ever before. Thanks to Skype and Google, we can video chat with our family from across the planet. We have robots to clean our floors and satellite TV that allows us to watch anything we want, whenever we want it. We can reheat food at the touch of a button. But without our basest instincts — our most violent and libidinous tendencies — none of this would be possible. Indeed, if Canadian tech journalist Peter Nowak is to be believed, the key drivers of 20th-century progress were bloodlust, gluttony and our desire to get laid.

In his new book, “Sex, Bombs and Burgers,” Nowak argues that porn, fast food and the military have completely reshaped modern technology and our relationship to it. He points to inventions like powderized food, which emerged out of the Second World War effort and made restaurant chains like McDonald’s and Dairy Queen possible. He shows how outsourced phone sex lines have helped bring wealth to poor countries, like Guyana. And he explains how pornography helped drive both the home entertainment industry and modern Web technology, like video chat. An entertaining and well-researched read, filled with surprising facts, “Sex, Bombs and Burgers” offers a provocative alternate history of 20th-century progress.

Salon spoke with Nowak over the phone from Toronto about the importance of the Second World War, the military roots of the Barbie Doll and why the Roomba is our future.

How would you summarize the broader argument behind the book?

It’s a look at some of the darker instincts that we as a race have: the need to fight, the need to engorge ourselves and the need to reproduce. Despite thousands of years of conscious evolution, we haven’t been able to escape those things. It’s the story of how our negative side has resulted in some of our most positive accomplishments.

So much of the technology you talk about came out of the Second World War. Why was that period so important for innovation?

It was when the military really started spending a lot of money on research. At one point during the war, the U.S. was devoting something like 85 percent of its entire income to military spending. So when you take that kind of effort and those resources and that brainpower and you devote them to one particular thing, the effects are going to be huge and long-lasting, which is why World War II was probably the most important technological event in human history. And the sequel, at least technologically speaking, to that period was the Space Race. I’m of the belief that cancer could be cured if the United States dedicated the same kinds of resources, over the same amount of time, as it did to developing the atom bomb and putting someone on the moon.

What kinds of things came out of the war?

The food innovations that happened during the war paved the way for the rest of the 20th century. The U.S. military had to move large numbers of troops over to other parts of the world and then feed them, so a lot of techniques were created and perfected, from packaging to dehydrating and powderizing foods. Powdered coffee and powdered milk came of age during World War II. These advancements in food processing techniques laid the foundation for food abundance in the U.S. and created the opportunity for countries to become global food exporting powers.

Plastics are interesting because — 60 years later it’s hard for us to appreciate this — they really revolutionized the way everything was done, because materials were running short in every sense during the war. There was a lot of emphasis put on creating synthetic materials and chemicals. These plastics were used during the war for things like insulating cables or lining drums or coating bullets. Then, after the war, chemical-makers like Dow started to come up with new uses for these things, which translated into everything from Tupperware to Saran wrap to Teflon to Silly Putty to Barbie dolls.

by Thomas Rogers, Salon |  Read more:
Photo: Olinchuck and Anetlanda via Shutterstock/Wikipedia

Get a Midlife

You may be surprised to learn that when researchers asked people over 65 to pick the age they would most like to return to, the majority bypassed the wild and wrinkle-less pastures of their teens, 20s and 30s, and chose their 40s.

We are more accustomed to seeing the entry into middle age treated as a punch line or a cause for condolences. Despite admonishments that “50 is the new 30,” middle age continues to be used as a metaphor for decline or stasis. Having just completed a book about the history and culture of middle age, I found that the first question people asked me was, “When does it begin?” anxiously hoping to hear a number they hadn’t yet reached.

Elderly people who find middle age to be the most desirable period of life, however, are voicing what was a common sentiment in the 19th century, when the idea of a separate stage of development called “middle age” began to emerge. Although middle age may seem like a universal truth, it is actually as much of a manufactured creation as polyester or the rules of chess. And like all the other so-called stages into which we have divvied up the uninterrupted flow of life, middle age, too, is a cultural fiction, a story we tell about ourselves.

The story our great-great-great-grandparents told was that midlife was the prime of life. “Our powers are at the highest point of development,” The New York Times declared in 1881, “and our power of disciplining these powers should be at their best.”

Yes, yes, you think, bully for higher powers and all, but what about thickening waistlines, sagging skin, aching knees and multiplying responsibilities for aging, ailing parents? Is there anyone past 40 who, at one point or another, hasn’t pushed aside qualms and pushed back the skin above their cheekbones to smooth out those deepening nasolabial folds? Gym addicts aside, when it comes to face and physique, middle age doesn’t have a chance.

The problem with the physical inventory of middle age, though, is that it inevitably emphasizes loss — the end of fertility, decreased stamina, the absence of youth. Middle age begins, one cultural critic declared, the moment you think of yourself as “not young.” The approach is the same as that taken by physicians and psychologists, who have defined wellness and happiness in terms of what was missing: health was an absence of illness; a well-adjusted psyche meant an absence of depression and dysfunction.

The most recent research on middle age, by contrast, has looked at gains as well as deficits. To identify the things that contribute to feeling fulfilled and purposeful, Carol Ryff, the director of the Institute on Aging at the University of Wisconsin, Madison, developed a list of questions to measure well-being and divided them into six broad categories: personal growth (having new experiences that challenge how you think about yourself); autonomy (having confidence in your opinions even if they are contrary to the general consensus); supportive social relationships; self-regard (liking most aspects of your personality); control of your life; and a sense of purpose.

by Patricia Cohen, NY Times |  Read more:
Illustration: Gemma Correl

Sunday, January 8, 2012

How Scientists Came to Love the Whale


“Whale Carpaccio — 130 Kroner.”

Thus read an appetizer on a menu at a restaurant in Bergen, Norway, when I dined there a few years back. I wanted to sample this odd dish. What would the experience be like? Would the meat be chewy like pork, or flaky like fish?

These were my thoughts when the waitress approached and asked (maybe a little sadistically?) if I’d like to “try the whale.” But before I could signal my assent, somewhere in the back of my mind a fuzzy ’70s-era television memory arose — the image of a Greenpeace Zodiac bobbing on the high seas defensively poised between a breaching whale and a Soviet harpoon cannon. “No,” I said, “I’ll have the mussels.”

I reprise this anecdote here not to show how evolved I am, but rather to juxtapose my hazy whale-belief structure with the much more nuanced understanding of a man who has immersed himself in the subtleties, trickeries, scandals and science of cetaceans. D. Graham Burnett, the author of “The Sounding of the Whale,” a sweeping, important study of cetacean science and policy, has quite literally “tried the whale” and could probably describe for you whale meat’s precise consistency. But he has also been tried by the whale in the deepest sense, because he spent a decade poring over thousands upon thousands of pages scattered in far-flung archives. If the whale swallowed Jonah whole, then Burnett has made a considerable effort to get as much of the whale as possible down his voluminous intellectual gullet.

A reviewer pressed for time could, in lieu of an essay, put together a very respectable (or at least very weird) collage of all the “you’re kidding me, right?” facts about whales and whaling that appear on almost every one of Burnett’s information-soaked pages. That the waxy plug in a whale’s ear might work as a sound lens focusing song from miles away. That the Japanese World War II pilots who spotted submarines were retrained, postwar, to find whales. That whale scientists were seriously considering using tropical atolls as corrals for whale farms. But what makes Burnett’s book notable is the big-picture arc he traces, from the early “hip-booted” cetologist who earned his stripes “the old-fashioned way, by cutting his way into the innards of hundreds of whales while standing in the icy slurry of an Antarctic whaling station,” right up to the scientist-turned-Age-of-Aquarius-psychedelic-guru who studied the behavior of living cetaceans, drawing such conclusions as “Whales and dolphins quite naturally go in the directions we call spiritual, in that they get into meditative states quite simply and easily.” While tracking the evolution of something he probably wouldn’t quite call interspecies empathy (but that I might), Burnett keeps a cool head and gives what should become the definitive account of whalekind’s transformation from cipher to signifier.

by Paul Greenberg, NY Times |  Read more:
Photo: Associated Press via NY Times


photo: markk
Cressida Campbell, Nasturtiums, 2002

Creative Writing (Fiction)

The first story Maya wrote was about a world in which people split themselves in two instead of reproducing. In that world, every person could, at any given moment, turn into two beings, each one half his/her age. Some chose to do this when they were young; for instance, an eighteen-year-old might split into two nine-year-olds. Others would wait until they’d established themselves professionally and financially and go for it only in middle age. The heroine of Maya’s story was splitless. She had reached the age of eighty and, despite constant social pressure, insisted on not splitting. At the end of the story, she died.

It was a good story, except for the ending. There was something depressing about that part, Aviad thought. Depressing and predictable. But Maya, in the writing workshop she had signed up for, actually got a lot of compliments on the ending. The instructor, who was supposed to be this well-known writer, even though Aviad had never heard of him, told her that there was something soul-piercing about the banality of the ending, or some other piece of crap. Aviad saw how happy that compliment made Maya. She was very excited when she told him about it. She recited what the writer had said to her the way people recite a verse from the Bible. And Aviad, who had originally tried to suggest a different ending, backpedalled and said that it was all a matter of taste and that he really didn’t understand much about it.

It had been her mother’s idea that she should go to a creative-writing workshop. She’d said that a friend’s daughter had attended one and enjoyed it very much. Aviad also thought that it would be good for Maya to get out more, to do something with herself. He could always bury himself in work, but, since the miscarriage, she never left the house. Whenever he came home, he found her in the living room, sitting up straight on the couch. Not reading, not watching TV, not even crying. When Maya hesitated about the course, Aviad knew how to persuade her. “Go once, give it a try,” he said, “the way a kid goes to day camp.” Later, he realized that it had been a little insensitive of him to use a child as an example, after what they’d been through two months before. But Maya actually smiled and said that day camp might be just what she needed.

The second story she wrote was about a world in which you could see only the people you loved. The protagonist was a married man in love with his wife. One day, his wife walked right into him in the hallway and the glass he was holding fell and shattered on the floor. A few days later, she sat down on him as he was dozing in an armchair. Both times, she wriggled out of it with an excuse: she’d had something else on her mind; she hadn’t been looking when she sat down. But the husband started to suspect that she didn’t love him anymore. To test his theory, he decided to do something drastic: he shaved off the left side of his mustache. He came home with half a mustache, clutching a bouquet of anemones. His wife thanked him for the flowers and smiled. He could sense her groping the air as she tried to give him a kiss. Maya called the story “Half a Mustache,” and told Aviad that when she’d read it aloud in the workshop some people had cried. Aviad said, “Wow,” and kissed her on the forehead. That night, they fought about some stupid little thing. She’d forgotten to pass on a message or something like that, and he yelled at her. He was to blame, and in the end he apologized. “I had a hellish day at work,” he said and stroked her leg, trying to make up for his outburst. “Do you forgive me?” She forgave him.

by Etgar Keret, The New Yorker |  Read more:
Photograph: Quentin Bertoux/Agence VU/Aurora

The Evolved Self-Management System


NICHOLAS HUMPHREY: I was asked to write an essay recently for "Current Biology" on the evolution of human health. It's not really my subject, I should say, but it certainly got me thinking. One of the more provocative thoughts I had is about the role of medicine. If human health has changed for the better in the late stages of evolution, this has surely had a lot to do with the possibility of consulting doctors, and the use of drugs. But the surprising thing is that, until less than 100 years ago, there was hardly anything a doctor could do that would be effective in any physiological medicinal way—and still the doctor's ministrations often "worked". That's to say, under the influence of what we would today call placebo medicine people came to feel less pain, to experience less fever, their inflammations receded, and so on.

Now, when people are cured by placebo medicine, they are in reality curing themselves. But why should this have become an available option late in human evolution, when it wasn't in the past?

I realized it must be the result of a trick that has been played by human culture. The trick is to persuade sick people that they have a "license" to get better, because they're in the hands of supposed specialists who know what's best for them and can offer practical help and reinforcements. And the reason this works is that it reassures people—subconsciously—that the costs of self-cure will be affordable and that it's safe to let down their guard. So health has improved because of a cultural subterfuge. It's been a pretty remarkable development.

I'm now thinking about a larger issue still. If placebo medicine can induce people to release hidden healing resources, are there other ways in which the cultural environment can "give permission" to people to come out of their shells and to do things they wouldn't have done in the past? Can cultural signals encourage people to reveal sides of their personality or faculties that they wouldn't have dared to reveal in the past? Or for that matter can culture block them? There's good reason to think this is in fact our history.

Go back 10,000 or 20,000 years. Eccentricity would not have been tolerated. Unusual intelligence would not have been tolerated. Even behaving "out of character" would not have been tolerated. People were expected to conform, and they did conform, because they picked up the cues from their environment about the right and proper—the adaptive—way to behave. In response to cultural signals people were in effect policing their own personality.

And they still are. In fact we now have plenty of experimental evidence about the operation of "sub-conscious primes", how signals from the local environment get to people without their knowing it and, by changing their character and attitudes, regulate the face they present to the world. It can be a change for the worse (at least as we'd see it today). But so too it can be a change for the better. People become, let's say, more pro-social, more generous.

by Nicholas Humphrey, Edge |  Read more:

A Postmodern Elks Club Serving Some of the World's Best Beer

Or, how a mini-mart in a Seattle neighborhood came to pour some of the most sought-after brews around -- and changed the community.


From the parking lot the convenience store looks like any other 7-Eleven knock-off: A freezer that reads I-C-E hums out front. A lottery sign in the window flashes this week's Powerball pipe dream. Open the door: See the racks crammed with Twinkies and Cheetos and lip balm and aspirin and fishing magazines and single rolls of toilet paper. Behind the register are Swisher Sweets and Camels, and the Korean shop owner who will ring it all up for you.

It takes a minute for your eyes to adjust and see that something odd and wonderful is afoot inside Super Deli Mart in West Seattle. First you notice the jeroboam of Stone Brewing's praised 15th Anniversary Escondidian Imperial Black IPA behind the counter -- and its $130 sticker. And why is that dude standing by the Hostess Cherry Fruit Pies drinking a pint of -- is it Port Brewing's Angel's Share? -- a barley wine that was given 100 out of 100 points by RateBeer.com. Now a young dad enters juggling a child and two growlers and queues to fill the jugs from taps pouring beers like Stone's new Vertical Epic Ale, a limited-release brew rich with Anaheim chiles and cinnamon.

This convenience store may be the oddest place in North America to enjoy some of the best beers around -- a quirky testament to Seattle's redoubled passion for the frothy stuff. Long a good microbrew town, the city that birthed Redhook in 1981 has undergone a craft-beer Renaissance in the last few years. Today some 30 breweries call the Greater Seattle area home, and with a raft of newer taprooms pouring the best stuff from here and around the world, residents of the Emerald City are drowning in great draughts.

This trend is hardly limited to the vanguard Pacific Northwest: More breweries now exist nationwide than at any time since the late 1800s, according to the Brewers Association; nearly 25 percent more craft breweries opened in just the last decade. While overall beer sales have been falling, the amount of beer made by the nation's craft brewers has increased nearly 90 percent since 2001. (Craft beer is still just five percent of the market, though its market share nearly doubled in the last decade.)

Min Chung saw this new revolution coming and jumped aboard. Chung, 38, is a son of Korean immigrants with a business degree, a nose for marketing, and a mouth that loves to talk and drink good beer. He can usually be found wearing his preferred uniform of cargo shorts and running shoes, a sport vest stretched a little taut across a midsection that hasn't been denied the occasional pint. Chung bought the tired convenience store in early 2009 with the vision of sprucing it up and, among the Slim Jims and Red Bull, selling bottles of high-end brew to the Amazon workers and Boeing engineers who live near Puget Sound. Soon he thought, Why not pour beer so people could taste first? "Would people pay 11, 12 bucks a bottle if they didn't know what it is?" he asks. After much back-and-forth with the nonplussed Liquor Control Board, Chung got licensed as a restaurant (the "deli" in Super Deli Mart) and started pouring beer that August -- a first in the state for a mini-mart, as far as he knows.

What Chung didn't predict is what happened next. By last summer Super Deli Mart was burning through up to 25 kegs per week as people came to the store not just to pick up a six-pack of Dale's Pale Ale and a Snickers, but simply to quaff pints and hang out.

by Christopher Solomon, The Atlantic |  Read more:
Image: Russian River Brewing Company.

Saturday, January 7, 2012

Land of the Banana Pancake Eaters


At first, the sheer ease of travelling in Southeast Asia came as a pleasant shock. After flying in from Calcutta, Bangkok’s budget hotels seemed exceptionally clean, and were as affordable as their Indian equivalents. We didn’t need to trek halfway across the city to buy bus tickets from dingy ticket offices filled with aggressive queue jumpers; they were sold by agents for the same price. We spent our first month between Bangkok and an idyllic island in the Gulf of Thailand, without any of the familiar hassles and challenges of travel, and when our Thai visas expired, we continued into Laos. My thoughts often turned to India and the twelve months I’d spent travelling there, testing and tormenting myself on long sweaty journeys to vast, polluted cities where a concrete box with a creaky overhead fan was often all I could get for my money. Had all the hassles and challenges been worth it?

The day I arrived in Vang Vieng the answer slapped me in the face. Or, rather, a few dozen pairs of barely-bikinied breasts slapped me in the face, closely pursued by as many pairs of luminous shorts, emblazoned with Vang Vieng, In the Tubing.

Tubers making their way out of the Nam Song River in Vang Vieng, Laos

Vang Vieng is famous – in Australia. To most eighteen-year-old backpackers – and like-minded twenty-somethings – Vang Vieng is the highlight of any coming-of-age jaunt around Southeast Asia. To other travellers, it is a small town in northern Laos where people hire rubber tubes and float down the Nam Song River, stopping at ramshackle bars along the riverbank to drink buckets of whiskey and coke, or truly test their endurance with opium-laced cocktails or a bucket of magic mushrooms blended with fruit juice, hoping to god they won’t need to swim. Several travellers die every year, most from drowning or cracking their skulls on a rock. There are several tragic stories of people swimming after runaway tubes, only to disappear in the current – for the sake of a seven-dollar deposit. Some float their way to the end of the tubing course in the dark, having lost track of time, and are robbed by groups of teenage locals who pretend to be helping them ashore.

Everything we’d heard about Vang Vieng warned us to steer clear – and we’d had every intention of doing so, but a few days before leaving Laos’ capital, Vientiane, Iain and I saw a postcard labelled Blue Lagoon, Vang Vieng. It was an image of an immodestly blue body of water, glassy and clear beneath knotted trees, and fringed with bushes, leaves and more trees of assorted greens. Three lengths of rope hung temptingly into the water from a branch above, each with a wooden swing-seat at the end. Vientiane was scorching; the sun was hot enough to burn my skin during a fifteen minute walk to lunch. The thought of submerging ourselves in that pool of cool water was, quite simply, irresistible.

by Claire van den Heever, Old World Wandering |  Read more:

The Red Giant

One of the biggest market events of the coming year will undoubtedly be the Facebook IPO.  You will read seven million articles about it in the next three months (sorry about that).  It will likely come public as one of the largest IPOs in history, with a starting valuation somewhere in the vicinity of $100 billion. It is a tech giant to be sure, one of the most important companies in the world right now.

But there is a major difference between Facebook and the other tech giants of the past and present like Microsoft, Apple, Google, Oracle, IBM, Yahoo, Netscape and Cisco.  The difference is that Facebook will be the first tech giant to have come public after its growth rate peaks.  It will be the first almost-mature tech giant to IPO at the end of its biggest growth phase rather than in the early stages.  The others offered public investors the chance to invest ahead of the Golden Age - but in this new era, the lion's share of valuation growth has been awarded to a relatively small handful of early stage investors and people need to accept that.

Facebook is a Red Giant, a star larger than the sun - but a dying star nonetheless.  Red Giants are mid-sized stellar bodies that have already exhausted the hydrogen within their cores.  They live off the hydrogen in a shell surrounding the core, burning it in a late, less sustainable phase of fusion.  Similarly, Facebook is likely peaking right now in terms of new users, page views per user, engagement and so on - it will burn brightly off of the massive scale it's already built and that's pretty much it going forward.

This does not mean that the company won't become wildly profitable as they turn on the engines and monetize what's already there (which is obviously a huge amount of web real estate and mindshare at the moment).  What it does mean is that, like the Red Giant, Facebook already is what it is.  It is highly doubtful that the company's web presence and engagement can get any bigger or better.

In fact, it is more likely that:

Something new comes along - It is laughable how seamlessly, completely and quickly Facebook supplanted MySpace - let's not act like anything on the web is permanently dominant.  Facebook is picking up major steam in countries like Indonesia and Brazil right now; the rate of new users signing up is breathtaking.  But consider that they are pulling people from the Google-owned network Orkut and that one day someone else will do the same to them.  (...) 

Kids rebel against a social network that includes their dorky parents - Can you imagine being 15 years old and being involved in any kind of socializing that involved your parents and aunts and uncles and Sunday school teachers and god knows who else from the dark side?  There is a Facebook hipness hourglass somewhere and it has already been turned over...it is only a matter of time before the grains of sand slipping from the top to the bottom become noticeable and the tide turns.  The kids will be first, the advertisers will follow.  In the end, Facebook will be comprised of dormant and inactive profiles with a majority of its "engagement" coming from people in their forties stalking their exes from high school in the late 80's.  For the younger generation, talking about Facebook at all will become painfully lame.  Every generation mocks the one that came before.  This moment rapidly approaches, the emptying of that hipness hourglass is inexorable.

by Joshua M. Brown, The Reformed Broker |  Read more:

Give Pods a Chance

Pods – also known as self-directed work teams – have been around for more than 20 years. Pods are 30% to 50% more effective than their traditional counterparts. A survey of senior line managers identified some of the benefits of implementing self-directed teams:
- Improved quality, productivity and service.
- Greater flexibility.
- Reduced operating costs.
- Faster response to technological change.
- Fewer, simpler job classifications.
- Better response to workers’ values.
- Increased employee commitment to the organization.
- Ability to attract and retain the best people.
So if it’s such a great idea to go podular, why aren’t more companies doing it?

Pods

Podular design is a concept that focuses on modularizing work: making units more independent, adaptive, linkable, and swappable. But the environment that surrounds the pods is just as critical to the success or failure of a podular system. Modular components are a key element of a connected company. But to take advantage of pods you also need a business that is designed to support them.

Architectural vs component innovation

Most innovation involves small, incremental improvements to the parts of a system. A better spark plug, a better kind of tire, a better bar of soap, and so on. This is because these kinds of innovations are easier to inject into an existing system.

But some kinds of innovations – often called disruptive innovations – involve changes to the system itself. The PC revolution is an example of disruptive innovation, because the entire system of work computing had to change to accommodate it. This required a whole host of component innovations beyond the PC itself, such as the office scanner, printer, networking, and so on. System innovation like this requires changes to the fundamental architecture – known as architectural innovation.

Component innovation swaps out one node for another, which usually results in an incremental improvement. Architectural innovation changes the links. Changing the relationships between nodes is a sweeping change that usually transforms the way that the entire system works. Apple’s iTunes/iPhone ecosystem was an architectural innovation that changed the music industry forever.

Perhaps one of the reasons more companies haven’t organized around small, empowered teams is that their business architectures don’t allow it. It’s not easy to plug modules into a platform that isn’t designed for it. 

What kinds of companies have been successful with a podular approach?

Xerox, Procter and Gamble, AT&T and many other companies have credited self-directed teams with marked impact on their operations, including improvements in customer service, manufacturing, inventory management, and other productivity gains. In this post I’d like to highlight three highly effective podular systems: one old-school company, one new-school company, and one old-school industry that’s reinventing itself.

by Dave Gray |  Read more:

Friday, January 6, 2012


photo: markk

Max Beckmann:  Black Irises (1928)

Streaming Dreams

On a rainy night in late November, Robert Kyncl was in Google’s New York City offices, on Ninth Avenue, whiteboarding the future of TV. Kyncl holds a senior position at YouTube, which Google owns. He is the architect of the single largest cultural transformation in YouTube’s seven-year history. Wielding a black Magic Marker, he charted the big bang of channel expansion and audience fragmentation that has propelled television history so far, from the age of the three networks, each with a mass audience, to the hundreds of cable channels, each serving a niche audience—twenty-four-hour news, food, sports, weather, music—and on to the dawning age of Internet video, bringing channels by the tens of thousands. “People went from broad to narrow,” he said, “and we think they will continue to go that way—spend more and more time in the niches—because now the distribution landscape allows for more narrowness.”

Kyncl puts his whole body into his whiteboard performances, and you can almost see the champion skier he used to be. As a teen-ager in Czechoslovakia, he was sent to a state-run boarding school where talented young skiers trained for the Olympics. At eighteen, “I realized then that all I knew was skiing,” he told me. After the Velvet Revolution of 1989, he applied to a program that placed Eastern Europeans in American summer camps as counsellors, and spent the summer in Charlottesville, Virginia. The following year, Kyncl went to SUNY, in New Paltz, where he majored in international relations.

People prefer niches because “the experience is more immersive,” Kyncl went on. “For example, there’s no horseback-riding channel on cable. Plenty of people love horseback riding, and there’s plenty of advertisers who would like to market to them, but there’s no channel for it, because of the costs. You have to program a 24/7 loop, and you need a transponder to get your signal up on the satellite. With the Internet, everything is on demand, so you don’t have to program 24/7—a few hours is all you need.”

For the past sixty years, TV executives have been making the decisions about what we watch in our living rooms. Kyncl would like to change that. Therefore YouTube, the home of grainy cell-phone videos and skateboarding dogs, is going pro. Kyncl has recruited producers, publishers, programmers, and performers from traditional media to create more than a hundred channels, most of which will début in the next six months—a sort of YouTV. Streaming video, delivered over the Internet, is about to engage traditional TV in a skirmish in the looming war for screen time.

Kyncl attacked the two-dimensional plane with his marker, schussing the media moguls, racing over and around them to the future. He drew a vertiginously plunging double-diamond run representing the dissolution of mass TV audiences as cable channels began to proliferate. Then he drew the bunny slope of Web-based channels that will further fragment audiences. According to Forrester Research, by 2016 half of all households will have Wi-Fi-enabled devices on their televisions, which will bring all those new channels into the living room, tempting people to cancel their pricey cable subscriptions. The only way for the networks and the cable companies to grow will be to buy Web-based channels.

Isn’t that more or less what happened thirty years ago? I asked. The networks, which had originally disparaged the new cable channels as cheap-looking and too narrowly focussed, ended up buying them when cable took off.

“Absolutely that’s what happened,” Kyncl said, with a slight Czech accent. “And it will happen again.”

He set the marker down on the conference-room table, and smiled. YouTube had won the gold.

by John Seabrook, New Yorker |  Read more:
Illustration: Anders Wenngren

Center of the Universe

On the first day, God created the heavens and the earth.

“Let there be light,” He said, and there was light. And God saw that it was good. And there was evening—the first night.

On the second day, God separated the oceans from the sky. “Let there be a horizon,” He said. And lo: a horizon appeared and God saw that it was good. And there was evening—the second night.

On the third day, God’s girlfriend came over and said that He’d been acting distant lately.

“I’m sorry,” God said. “Things have been crazy this week at work.”

He smiled at her, but she did not smile back. And God saw that it was not good.

“I never see you,” she said.

“That’s not true,” God said. “We went to the movies just last week.”

And she said, “Lo. That was last month.”

And there was evening—a tense night.

On the fourth day, God created stars, to divide the light from the darkness. He was almost finished when He looked at His cell phone and realized that it was almost nine-thirty.

“Fuck,” He said. “Kate’s going to kill me.”

He finished the star He was working on and cabbed it back to the apartment.

“Sorry I’m late!” He said.

And lo: she did not even respond.

“Are you hungry?” He asked. “Let there be yogurt!” And there was that weird lo-cal yogurt that she liked.

“That’s not going to work this time,” she said.

by Simon Rich, New Yorker |  Read more:
Illustration: Maximilian Bode