Monday, June 11, 2012

Why Japan Prefers Pets to Parenthood

In a smart and expensive neighbourhood of Tokyo, Toshiko Horikoshi relaxes by playing her grand piano. She’s a successful eye surgeon, with a private clinic, a stylish apartment, a Porsche and two pet pooches: Tinkerbell, a chihuahua, and Ginger, a poodle. “Japanese dog owners think a dog is like a child,” says Toshiko. “I have no children, so I really love my two dogs.”

Many Japanese women like Toshiko prefer pets to parenthood. Startlingly, in a country panicking over its plummeting birthrate, there are now many more pets than children. While the birthrate has been falling dramatically and the average age of Japan’s population has been steadily climbing, Japan has become a pet superpower. Official estimates put the pet population at 22 million or more, but there are only 16.6 million children under 15.

Tinkerbell and Ginger have their own room and a wardrobe full of designer clothes. They have jumpers, dresses, coats and fancy dress outfits, neatly hung on jewelled hangers; hats, sunglasses and even tiny shoes. Toshiko says she shops for her dogs most weekends and they get new clothes each season.

In Japan designer labels such as Chanel, Dior, Hermès and Gucci offer luxury dog products. This canine couture doesn’t come cheap. A poodle pullover can cost $250 (around £160) or more. In many parts of Tokyo, it is easier to buy clothes for dogs than for children. Boutiques sell everything from frilly frocks to designer jeans, from nappies to organic nibbles, and smart “doggie bags” and buggies or pushchairs to transport them in. (...).

Despite the economic stagnation, people seem happy to spend any spare money on photo sessions, massages and treats for their four-legged “babies”. The average fertility rate is now 1.39 children per woman – well below the number needed to keep the population stable. Japan has, in effect, a self-imposed one-child policy. Government projections show that, if current trends continue, today’s population of 128 million will fall to 43 million over the next century.

“The most important reason for Japan’s declining birthrate is less sex,” says Dr Kunio Kitamura, director of Japan’s Family Planning Research Centre. His annual surveys indicate that the nation’s libido has been lagging over the last decade. The birthrate has declined, yet fewer contraceptives are being used and there are fewer abortions and a lower rate of sexually transmitted diseases. “Why?” asks Dr Kitamura: “Less sex!”

by Ruth Evans and Roland Buerk, The Long Good Read |  Read more:
Photo: Flickr / lauriepix1

Empire of Ice

On a $500 million man-made island in the frozen Arctic Ocean, just off the coast of a vast, uninhabitable tundra known as Alaska’s North Slope, a pipeline begins. In temperatures that hover around forty-five degrees below zero, in perpetual darkness, a tight-knit band of roughnecks spends twelve hours a day, seven days a week, drilling down, down into the earth and pulling up precious crude. If you want to know how badly we need oil, here is your answer.

The island is named Oooguruk, an Inupiaq word meaning “bearded seal,” an animal plentiful on the shores of Alaska’s North Slope. The Slope is where the Trans-Alaska Pipeline starts, where the crude gets pumped up from more than a mile inside the earth, then gets sent on the 800-mile journey south to Valdez, Alaska, where the pipeline ends and tankers come and load the crude up and deliver it down the coast. There, in places like northern Washington and Long Beach, California, it gets processed into the fuel America now so grudgingly remembers makes the world go round.

People have known for thousands of years that oil was abundant on Alaska’s North Slope, a vast tundra, flat and treeless, on and on and on, from the foothills of the Brooks Mountain Range to the Arctic Ocean, an endless, unchanging landscape bigger than Idaho. For centuries native Eskimos cut blocks of oil-soaked tundra from natural seeps to use as fuel. In the 1920s, explorers arrived and began poking holes. In 1968 they discovered Prudhoe Bay State No. 1, the largest oil field in North America and one of the largest in the world, and a year later the adjacent Kuparuk field, the second-largest. Today, five of our ten largest oil fields are on Alaska’s North Slope, where twenty-four separate fields pump out about 16 percent of our total domestic oil supply.

A person can’t just drive around the North Slope, visit the locals, stop in at a burger joint. There are no locals, no burger joints, no houses, no cities, no churches. The gateway to the oil fields is the town of Deadhorse, where the airport is, and where security restricts passage to anyone but workers who fly in and get bused to camps for two-week hitches.

It took nearly a year for me to gain access to the Slope. The corporate giants who control the fields—BP, ConocoPhillips, ExxonMobil—have little to gain from public scrutiny. Rarely do stories of Alaska’s oil emerge unless there is a freak accident to talk about—the odd spill, usually set against a snowy backdrop featuring a winsome caribou looking dismayed about the greedy nature of the human race. But Pioneer Natural Resources—the company that built the island, where TooDogs is in charge of the rig and Kung Fu plays the fiddle and Turtle fake-hates Jason for lying to him about being married to a stripper named Onyx—was willing to allow me in. Pioneer is the first independent operator to produce oil on the Slope, a market cornered by the three majors for its entire history. In many ways, it represents a glimmer of hope. Everyone knows the oil up here is running out; production is declining 6 percent a year, down from an all-time high of 2 million barrels a day in 1988 to 700,000 today. But everyone also knows the oil isn’t really running out—it’s just a lot harder to get to. It is a common story in the saga of natural resources, whether you are talking coal or gas or oil: The big companies suck out the easy, vast reservoirs, and then in come the little companies nimble enough to pick away at the leftovers.

The ongoing debate over whether or not we should be drilling for oil in Alaska—onward to ANWR to the east—typically leaves out one factor: We are drilling for oil in Alaska, every hour of every day for the past thirty years, drilling in some of the most extreme conditions on earth, where the windchill can easily reach minus ninety-eight degrees, so cold that you have to leave your pickup running twenty-four hours a day or you’ll never get it started again, where it is pitch dark for nearly two months each winter, where people live without families, without homes, without access to so much of what most of us think of when we think of what it means to be human.

by Jeanne Marie Laskas, GQ (2008) |  Read more:
Photo: Christopher Lamarca

Of Flying Cars and the Declining Rate of Profit


A secret question hovers over us, a sense of disappointment, a broken promise we were given as children about what our adult world was supposed to be like. I am referring not to the standard false promises that children are always given (about how the world is fair, or how those who work hard shall be rewarded), but to a particular generational promise—given to those who were children in the fifties, sixties, seventies, or eighties—one that was never quite articulated as a promise but rather as a set of assumptions about what our adult world would be like. And since it was never quite promised, now that it has failed to come true, we’re left confused: indignant, but at the same time, embarrassed at our own indignation, ashamed we were ever so silly to believe our elders to begin with.

Where, in short, are the flying cars? Where are the force fields, tractor beams, teleportation pods, antigravity sleds, tricorders, immortality drugs, colonies on Mars, and all the other technological wonders any child growing up in the mid-to-late twentieth century assumed would exist by now? Even those inventions that seemed ready to emerge—like cloning or cryogenics—ended up betraying their lofty promises. What happened to them?

We are well informed of the wonders of computers, as if this is some sort of unanticipated compensation, but, in fact, we haven’t moved even computing to the point of progress that people in the fifties expected we’d have reached by now. We don’t have computers we can have an interesting conversation with, or robots that can walk our dogs or take our clothes to the Laundromat.

As someone who was eight years old at the time of the Apollo moon landing, I remember calculating that I would be thirty-nine in the magic year 2000 and wondering what the world would be like. Did I expect I would be living in such a world of wonders? Of course. Everyone did. Do I feel cheated now? It seemed unlikely that I’d live to see all the things I was reading about in science fiction, but it never occurred to me that I wouldn’t see any of them. (...)

Might the cultural sensibility that came to be referred to as postmodernism best be seen as a prolonged meditation on all the technological changes that never happened? The question struck me as I watched one of the recent Star Wars movies. The movie was terrible, but I couldn’t help but feel impressed by the quality of the special effects. Recalling the clumsy special effects typical of fifties sci-fi films, I kept thinking how impressed a fifties audience would have been if they’d known what we could do by now—only to realize, “Actually, no. They wouldn’t be impressed at all, would they? They thought we’d be doing this kind of thing by now. Not just figuring out more sophisticated ways to simulate it.”

That last word—simulate—is key. The technologies that have advanced since the seventies are mainly either medical technologies or information technologies—largely, technologies of simulation. They are technologies of what Jean Baudrillard and Umberto Eco called the “hyper-real,” the ability to make imitations that are more realistic than originals. The postmodern sensibility, the feeling that we had somehow broken into an unprecedented new historical period in which we understood that there is nothing new; that grand historical narratives of progress and liberation were meaningless; that everything now was simulation, ironic repetition, fragmentation, and pastiche—all this makes sense in a technological environment in which the only breakthroughs were those that made it easier to create, transfer, and rearrange virtual projections of things that either already existed, or, we came to realize, never would. Surely, if we were vacationing in geodesic domes on Mars or toting about pocket-size nuclear fusion plants or telekinetic mind-reading devices no one would ever have been talking like this. The postmodern moment was a desperate way to take what could otherwise only be felt as a bitter disappointment and to dress it up as something epochal, exciting, and new. (...)

Why did the projected explosion of technological growth everyone was expecting—the moon bases, the robot factories—fail to happen? There are two possibilities. Either our expectations about the pace of technological change were unrealistic (in which case, we need to know why so many intelligent people believed they were not) or our expectations were not unrealistic (in which case, we need to know what happened to derail so many credible ideas and prospects).

by David Graeber, The Baffler |  Read more:
Illustration Mark Fisher

Cross-Section Tissue of Marram Grass Leaf

Sunday, June 10, 2012


Robert Doisneau

Neil Young


White peach in rose water syrup

Guitar Zero


Can science turn a psychologist into Jimi Hendrix?

Are musicians born or made? All my life, I wanted to become musical but I always assumed that I never had a chance. My ears are dodgy, my fingers too clumsy. I have no sense of rhythm and a lousy sense of pitch. I have always loved music, but could never sing, let alone play an instrument; in school, I came to believe that I was destined to be a spectator rather than a participant, no matter how hard I tried.

As I grew older, I figured my chances only diminished. Our lives, once we finish school, tend to focus on execution rather than enrichment. Whether we are breadwinners or caretakers, our success is measured by outcomes. The work it takes to achieve those outcomes, we are meant to understand, is something that should happen quickly and behind closed doors. If the conventional wisdom is right, by the time we are adults it's too late to learn anything new. Children may be able to learn anything, but if you wanted to learn French you should have started when you were six.

Until recently, science supported this theory. Virtually everybody in developmental psychology was a firm believer in "critical periods" of learning. The idea is that there are particular time windows in which complex skills, such as languages, can be learned; if you don't learn them by the time the window shuts, you never will. Case closed. But the more people have actually studied critical periods, the shakier the data has become. Although adults rarely achieve the same level of fluency that children do, the scientific research suggests that differences typically pertain more to accent than grammar.

There is also no magical window that slams shut the moment puberty begins. In fact, in recent years, scientists have identified people who have managed to learn languages with near-native fluency, even though they only started as adults.

If critical periods aren't quite so firm as once believed, a world of possibility emerges for the many adults who harbour secret dreams – whether to learn a language, become a pastry chef or pilot a plane. And quests such as these, no matter how quixotic they may seem, and whether they succeed, could bring unanticipated benefits, not just for their ultimate goals but for the journey itself.

Exercising our brains helps maintain them, by preserving plasticity (the capacity of the nervous system to learn new things), warding off degeneration and literally keeping the blood flowing. Beyond the potential benefits for our brains, there are benefits for our emotional wellbeing, too. There may be no better way to achieve lasting happiness – as opposed to mere fleeting pleasure – than pursuing a goal that helps broaden our horizons.

From primary school, every musical attempt I made ended in failure. The first time I tried to play guitar, a few years ago, my friend Dan Levitin (who had not yet finished his book This Is Your Brain on Music) kindly offered to give me a few lessons. When I came back to him after a week or two of practice, he quickly realised what my primary school teachers had realised long ago: that I had no sense of rhythm whatsoever. Dan offered me a metronome, and when that didn't help, he gave me something my teachers couldn't – a diagnosis: congenital arrhythmia.

And yet I never lost the desire to play. Music hasn't been studied as systematically as language in terms of critical periods, but there are certainly artists who started late and still became first-rate musicians. Tom Morello, the guitarist of Rage Against the Machine and among Rolling Stone magazine's greatest guitarists of all time, didn't start until he was 17. Patti Smith scarcely considered becoming a professional singer until she was in her mid-20s. Then there is the jazz guitar star Pat Martino, who relearned how to play after a brain aneurysm at the age of 35, and Dr John, who switched his primary allegiance from guitar to piano at 21 (after his left ring finger was badly injured in a bar-room fight) and won the first of his five Grammy awards in his late 40s.

Given my arrhythmia, I had no aspiration of reaching such heights, but at 38, long after I had completed my PhD and become a professor of cognitive psychology, I realised that my desire to become musical wasn't going away. I wanted to know whether I could overcome my intrinsic limits, my age and my lack of talent. Perhaps few people had less talent for music than I did, but few people more acutely wanted to be able to play.

I began to read the scientific literature. How did children learn music? Were there any lessons for adults? To my surprise, although children had been well studied, there was hardly any systematic research on people my age. Nobody seemed to know much about whether adults could learn to play late in life, and it wasn't just music that we knew little about; the literature on the capacity of adults to learn new skills in general was far sparser than I had imagined.

We know something about gradual declines in memory, but the only truly firm result I could find concerned perfect pitch (the ability to identify a single note in isolation). For that, one must indeed start early, but luckily for me and anyone else starting late, it is also clear that perfect pitch is more luxury than necessity. Duke Ellington didn't have it and neither did Igor Stravinsky (nor, for that matter, did Joey Ramone).

Other studies show some advantages for music learners who began earlier in life, but most of those don't take into account the total amount of practice. When it came to other aspects of music, such as the ability to improvise or compose, or even to learn a simple melody, there was almost no compelling literature. Although any number of studies have shown that the more you practise the better you get, startlingly few have compared what happens when people of different ages get the same amount of practice.

How could such a basic scientific question remain so unanswered? I wondered about this for months, until Caroline Palmer, a professor of psychology at McGill University in Montreal, explained the answer to me. The problem wasn't a lack of scientific interest – it was a lack of subjects. To learn a musical instrument, you need to put in a lot of work – 10,000 hours is an oft-mentioned (if somewhat oversold) number – and to do a proper study, you'd need a reasonably large sample of participants, which is to say a big group of adult novices with sufficient commitment. Nobody has studied the outcomes of adults who put in 10,000 hours of practice starting at 42 because most people of that age have lives and responsibilities – few adult learners are prepared to invest the kind of time that a teenager has. No subjects, no science. At that point, I decided to become a guinea pig.

by Gary Marcus, The Guardian |  Read more:
Photograph: Jan Persson/Redferns

How to Say Goodbye


We are taught to start our stories at the beginning. We open with “once upon a time,” hoping to capture the nascent moment when everything came to be. But there are few lessons — in our culture, in our schooling, in our socialization — in how to exit well. Our culture applauds the spirit, gumption and promise of beginnings. We admire the entry — the moment when people launch themselves into something new, plan and execute a new project, embark on important work, get married, take an adventure. Our habit is to tilt toward the future, perpetually poised for the next move, the strategic opportunity.

By contrast, our exits are often ignored or invisible. They seem to represent the negative spaces in our life narratives. There is little appreciation or applause when we decide (or it is decided for us) that it is time to move on. We often slink away in the night hoping that no one will notice; that the darkness will make the departure disappear. If the entry recalls a straight and erect posture, a person who is strong and determined; then we imagine a person stooped, weakened, and despairing as he makes his exit. (...)

Why is all of this so important? Why do we need to wrest our exits from the shadows of inattention and guilt? Why must we readjust our cultural lens in order to see and compose the exits in our lives? Because, I believe, our preoccupation with beginnings reveals only half the story, offering a partial and distorted view of the layers and trajectories of our growth and development, exaggerating the power and potential of our launches while neglecting the undertows of over-reaching. We might chart and judge our journeys very differently if we looked through the prism of our exits, a prism that would reveal the interplay of reflection and propulsion, hindsight and generativity that comes with navigating our endings well.

The wisdom and insights I gathered from listening to dozens of people tell their stories of exit — some in the midst of composing them, others anticipating their departures, still others looking back over long years; revisiting the ancient narratives that had changed their lives — point to a radical reframing of the meaning and worthiness of exits, moving exits from the shadows to the light, from the invisible to the visible. In order for exits to be productive and expansive, we must give them our full attention, and grapple with the range of emotions they stir up in us; the often paradoxical sensations of loss and liberation, grief and jubilation, and pain and beauty that accompany our departures from our relationships, families, institutions, and communities; from our former identities. And the daily practice of navigating the small exits that punctuate our days — a hug at the door, a lullaby at bedtime, a thank you as you leave the office — helps us design and enact the grander send-offs with intentionality and care. The micro and the macro seem to be inextricably linked, the former informing and heralding the latter.

Another paradox: The exit signs are bold and blurred; clear and confusing. On the one hand, people can recall the exact moment — in bold relief, the blood-red exit sign in a darkened movie theater — when they decided to leave, when they felt that they no longer had a choice, when all the forces and sensations came together in a perfect storm and they said to themselves, “I’m out of here.” On the other hand, those who take leave see the messiness and ambivalence of their departures through their rear-view mirrors: the long process of retreat that came well before the marked moment of announced leaving and the many aftershocks of exiting that followed. Exits feel both abrupt and final — a leap of faith, a moment of reckless abandon — and gnawingly cautious and iterative. Those who exit must be ready to ride out these paradoxical sensations.

by Sara Lawrence-Lightfoot, Salon |  Read more:
Credit: Rose-Marie Henriksson via Shutterstock

Gustav Klimt, Portrait of Adele Bloch Bauer

Looking for the umpteenth time at Gustav Klimt’s “Portrait of Adele Bloch-Bauer I” (1907) at the estimable Neue Galerie, on the occasion of a show celebrating Klimt’s hundred and fiftieth birthday, I’ve changed my mind. The gold- and silver-encrusted picture, bought by the museum’s co-founder Ronald Lauder for a headline-grabbing hundred and thirty-five million dollars, in 2006, isn’t a peculiarly incoherent painting, as I had once thought. It’s not a painting at all, but a largish, flattish bauble: a thing. It is a classic less of its time than of ours, by sole dint of the money sunk in it.

“Adele” belongs to a special class of iffy art works whose price is their object. One example is a dispirited version in pastels of Edvard Munch’s “The Scream,” which fetched a hundred and nineteen million last month. Another is the sadly discolored van Gogh “Sunflowers,” which set a market record—forty million—in 1987, when sold to a Japanese insurance company. (The purchase amounted to a cherry on top of Japan’s then ballooning, doomed real-estate bubble.) And I remember asking the director of Australia’s National Gallery why, in 1973, he had plunked down an unheard-of two million for Jackson Pollock’s amazing but, to my eye, overworked “Blue Poles.” He mused, “Well, I’ve always liked blue.”

by Peter Schjeldahl, The New Yorker | Changing My Mind About Gustav Klimt's "Adele" |  Read more:

Landscape in Cagnes, 1923 Felix Vallotton (by BoFransson)

Warehouse Area, San Francisco by Minor White

In 1839, a year after the first photo containing a human being was made, photography pioneer Robert Cornelius made the first ever portrait of a human being.

On a sunny day in October, Robert Cornelius set up his camera in the back of his father’s gas lamp-importing business on Chestnut Street in Center City, Philadelphia. After removing the lens cap, he sprinted into the frame, where he sat for more than a minute before covering up the lens. The picture he produced that day was the first photographic self-portrait. It is also widely considered the first successful photographic portrait of a human being.

[…] the words written on the back of the self-portrait, in Cornelius’ own hand, said it all: “The first light Picture ever taken. 1839.”

See also: History of Photography  (Wikipedia)

The Ultralightlife



Lightweight trail shoes that zip completely into themselves to minimize the space they take up in your pack. Ultralightlife.
via: YMFY

A Roe, by Any Other Name



The swampy Atchafalaya Basin is a far cry from the cold waters of the Caspian Sea. And its lowly native bowfin, often derided as a throwaway fish, is no prized sturgeon. Yet it is laying golden eggs.

Bowfin caviar, from the single-employee Louisiana Caviar Company (motto: “Laissez-les manger beaucoup Cajun caviar!”) is earning a place on the menus at such top-notch establishments here as Commander’s Palace and Restaurant Stella. The executive chef of Galatoire’s Restaurant, Michael Sichel, served it up at the New Orleans Wine and Food Experience last month, an annual bacchanal.

And now, even the Russians are coming.

“There’s pretty good demand from lots of clients,” said Igor Taksir, a Russian-born exporter who ships the glistening roe, which is actually black but turns yellow-gold when cooked, to Moscow and Ukraine. Mr. Taksir said he was “skeptical in the beginning,” when he discovered bowfin caviar at a seafood show in Boston three years ago. “But when we started tasting,” he said, “we realized the quality was surprisingly good.”

Still, this is not the caviar of gilded dreams. If beluga sturgeon from the Caspian Sea, the king of them all, is paired best with Champagne, then bowfin from the bayou, some of it infused with hot pepper and served deep-fried, might go better with a beer. It represents a populist twist and an accommodation by chefs to the environmental and ethical realities that come with serving Russian and Iranian caviar.

Global efforts to all but ban the international trade of caviar from the Caspian Sea, where overfishing and pollution have depleted sturgeon populations, have opened enormous opportunities for affordable substitutes from unlikely places in America. Even landlocked Montana, North Dakota and Oklahoma have thriving markets based on wild river fish.

“I think any chef or any food person with a conscience is only eating domestic or farmed caviar,” said Mitchell Davis, executive vice president of the James Beard Foundation.

The world has come to have a taste for the growing American market of caviar and fish roe. Between 2001 and 2010, annual exports of white sturgeon, shovelnose sturgeon (also called hackleback) and paddlefish roe increased to about 37,712 pounds from roughly 5,214 pounds, with a majority of wild origin, according to the American branch of the Convention on International Trade in Endangered Species of Wild Fauna and Flora and the federal Fish and Wildlife Service.

Seventy percent of the total caviar and roe exported from the United States in 2010 went to countries in the European Union, Ukraine and Japan. (...)

Some caviar enthusiasts will never agree.

“I haven’t sampled bowfin myself, and quite frankly wouldn’t want to,” said Ryan Sutton, the food critic at Bloomberg News, who has lived and studied in Russia. Mr. Sutton was also critical of American paddlefish caviar, which he described as lacking both texture and flavor.

In caviar, a taster wants firmness and pop, “with a clean flavor of the sea,” Mr. Sutton said. (...)

Even the fact that the Food and Drug Administration allows the roe from fish other than the sturgeon to be called caviar — as long as it is qualified by the fish’s name, as in “bowfin caviar” — rubs some people the wrong way.

“The F.D.A. looks at the word ‘caviar’ as synonymous with roe, but that is not true,” said Douglas Peterson, associate professor of fisheries and aquaculture research at the Warnell School of Forestry and Natural Resources at the University of Georgia. “Caviar only comes from sturgeon,” he said. “Everything else is fish eggs.”

by Susan Saulny, NY Times |  Read more:
Photo: William Widmer for The New York Times

Saturday, June 9, 2012

Herbie Hancock & Leonard Cohen


[ed. Sorry about the ad at the beginning. You can skip it after a few seconds...]

A Universe of Self-Replicating Code


...What's, in a way, missing in today's world is more biology of the Internet. More people like Nils Barricelli to go out and look at what's going on, not from a business or what's legal point of view, but just to observe what's going on.

Many of these things we read about in the front page of the newspaper every day, about what's proper or improper, or ethical or unethical, really concern this issue of autonomous self-replicating codes. What happens if you subscribe to a service and then as part of that service, unbeknownst to you, a piece of self-replicating code inhabits your machine, and it goes out and does something else? Who is responsible for that? And we're in an increasingly gray zone as to where that's going.

The most virulent codes, of course, are parasitic, just as viruses are. They're codes that go out and do things, particularly codes that go out and gather money. Which is essentially what these things like cookies do. They are small strings of code that go out and gather valuable bits of information, and they come back and sell it to somebody. It's a very interesting situation. You would have thought this was inconceivable 20 or 30 years ago. Yet, you probably wouldn't have to go … well, we're in New York, not San Francisco, but in San Francisco, you wouldn't have to go five blocks to find five or 10 companies whose income is based on exactly that premise. And doing very well at it.

Walking over here today, just three blocks from my hotel, the street right out front is blocked off. There are 20 police cars out there and seven satellite news vans, because Apple is releasing a new code. They're couching it as releasing a new piece of hardware, but it's really a new gateway into the closed world of Apple's code. And that's enough to block human traffic.

Why is Apple one of the world's most valuable companies? It's not only because their machines are so beautifully designed, which is great and wonderful, but because those machines represent a closed numerical system. And they're making great strides in expanding that system. It's no longer at all odd to have a Mac laptop. It's almost the normal thing.

But I'd like to take this to a different level, if I can change the subject... Ten or 20 years ago I was preaching that we should look at digital code as biologists: the Darwin Among the Machines stuff. People thought that was crazy, and now it's firmly the accepted metaphor for what's going on. And Kevin Kelly quoted me in Wired, he asked me for my last word on what companies should do about this. And I said, "Well, they should hire more biologists."

But what we're missing now, on another level, is not just biology, but cosmology. People treat the digital universe as some sort of metaphor, just a cute word for all these products. The universe of Apple, the universe of Google, the universe of Facebook, that these collectively constitute the digital universe, and we can only see it in human terms and what does this do for us?

We're missing a tremendous opportunity. We're asleep at the switch because it's not a metaphor. In 1945 we actually did create a new universe. This is a universe of numbers with a life of their own, that we only see in terms of what those numbers can do for us. Can they record this interview? Can they play our music? Can they order our books on Amazon? If you cross the mirror in the other direction, there really is a universe of self-reproducing digital code. When I last checked, it was growing by five trillion bits per second. And that's not just a metaphor for something else. It actually is. It's a physical reality.

by George Dyson, Edge |  Read more:

What is Cool?

[ed. Is there really a Journal of Individual Differences?]

Do rebelliousness, emotional control, toughness and thrill-seeking still make up the essence of coolness?

Can performers James Dean and Miles Davis still be considered the models of cool?

Research led by a University of Rochester Medical Center psychologist and published by the Journal of Individual Differences has found the characteristics associated with coolness today are markedly different than those that generated the concept of cool.

“When I set out to find what people mean by coolness, I wanted to find corroboration of what I thought coolness was,” said Ilan Dar-Nimrod, Ph.D., lead author of “Coolness: An Empirical Investigation.” “I was not prepared to find that coolness has lost so much of its historical origins and meaning—the very heavy countercultural, somewhat individualistic pose I associated with cool.

“James Dean is no longer the epitome of cool,” Dar-Nimrod said. “The much darker version of what coolness is still there, but it is not the main focus. The main thing is: Do I like this person? Is this person nice to people, attractive, confident and successful? That’s cool today, at least among young mainstream individuals.”  (...)

“We have a kind of a schizophrenic coolness concept in our mind,” Dar-Nimrod said. “Almost any one of us will be cool in some people’s eyes, which suggests the idiosyncratic way coolness is evaluated. But some will be judged as cool in many people’s eyes, which suggests there is a core valuation to coolness, and today that does not seem to be the historical nature of cool. We suggest there is some transition from the countercultural cool to a generic version of it’s good and I like it. But this transition is by no way completed.”

by Michael Wentzel, University of Rochester |  Read more:

Charles Sheeler, The Upstairs (1938)

Bell X1