Tuesday, June 26, 2012

Joe Jackson


[ed. Repost. Just because Joe is so great..]

The Most Important Numbers of the Next Half-Century

In 1991, former MIT dean Lester Thurow wrote: "If one looks at the last 20 years, Japan would have to be considered the betting favorite to win the economy honors of owning the 21st century."

It hasn't, and it likely won't. But 20 years ago, the view was nearly universal. Japan's economy was breathtaking -- rapid growth, innovation, and efficiency like no one had seen. From 1960 to 1990, real per-capita GDP grew by nearly 6%, double the rate of America's.

But then it all stopped. Japan's economy isn't the scene of decline some depict it as, but its growth slowed to a trickle at best.

What happened?

You can write volumes of books analyzing Japan's decline (and some have), but one of the biggest contributors to its stagnation is simple: It got old.

Decades in the making

The story begins, as so many about the modern day do, with World War II. Japan's toll in the world war was among the highest as a percentage of its population. Some estimate 4.4% of the Japanese population died in the war (the figure is 0.3% for the United States).

Demographically, two things resulted from that population shock that would shape the country's economic fate for the next half-century. Like America, Japan underwent a "baby boom" immediately after the war as returning soldiers married and families were rebuilt. More than 8 million Japanese babies were born from 1947 to 1949 -- a staggering sum given a population of around 70 million at the time.

Yet post-war devastation couldn't be ignored. Its major cities largely reduced to rubble, Japan didn't have the infrastructure necessary to support its existing population, let alone growth -- a problem amplified by the country's relative lack of natural resources. Tokyo-based journalist Eamonn Fingleton explains what happened next:
[In] the terrible winter of 1945-6 ... newly bereft of their empire, the Japanese nearly starved to death. With overseas expansion no longer an option, Japanese leaders determined as a top priority to cut the birthrate. Thereafter a culture of small families set in that has continued to the present day.
This created an extreme bulge in the country's demographics: a spike in population immediately after the war followed by decades of low birthrates.

As Japan entered the 1970s and 1980s, the baby boom generation -- called "dankai," or the "massive group" -- hit their peak earning and spending years. They bought cars, built houses and took vacations, helping to fuel the country's economic boom (which turned into an epic bubble). Observers like Thurow extrapolated that growth and became dewy-eyed.

But as the 1990s rolled around, Japan's dankai not only waved goodbye to their prime spending years, they crept into retirement. Consumption growth dropped and the need for assistance rose. Meanwhile, the small-family culture endured. Japan's birth rate per 1,000 people has averaged 12.4 per year since 1960, compared with 16 per year in the U.S., according to the United Nations. Combine the two trends, and Japan's aging population has created a demographic brick wall that has kept economic growth low for the last two decades, and will likely worsen for more to come. Adult diapers outsold baby diapers in Japan last year for the first time ever. There's your sign, as they say.

by Morgan Housel, Motley Fool |  Read more:

Robert Longo, Men Trapped In Ice, 1979
via:

Edward Hopper, Seven A.M., 1948. Oil on canvas.
via:

How Many Computers to Identify a Cat? Machines Teaching Machines to Learn


In Google’s secretive X laboratory, known for inventing self-driving cars and augmented reality glasses, a small group of researchers began working several years ago on a simulation of the human brain.

There Google scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own.

Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats.

The neural network taught itself to recognize cats, which is actually no frivolous activity. This week the researchers will present the results of their work at a conference in Edinburgh, Scotland. The Google scientists and programmers will note that while it is hardly news that the Internet is full of cat videos, the simulation nevertheless surprised them. It performed far better than any previous effort by roughly doubling its accuracy in recognizing objects in a challenging list of 20,000 distinct items.

The research is representative of a new generation of computer science that is exploiting the falling cost of computing and the availability of huge clusters of computers in giant data centers. It is leading to significant advances in areas as diverse as machine vision and perception, speech recognition and language translation.

Although some of the computer science ideas that the researchers are using are not new, the sheer scale of the software simulations is leading to learning systems that were not previously possible. And Google researchers are not alone in exploiting the techniques, which are referred to as “deep learning” models. Last year Microsoft scientists presented research showing that the techniques could be applied equally well to build computer systems to understand human speech.

“This is the hottest thing in the speech recognition field these days,” said Yann LeCun, a computer scientist who specializes in machine learning at the Courant Institute of Mathematical Sciences at New York University.

And then, of course, there are the cats.

To find them, the Google research team, led by the Stanford University computer scientist Andrew Y. Ng and the Google fellow Jeff Dean, used an array of 16,000 processors to create a neural network with more than one billion connections. They then fed it random thumbnails of images, one each extracted from 10 million YouTube videos.

The videos were selected randomly and that in itself is an interesting comment on what interests humans in the Internet age. However, the research is also striking. That is because the software-based neural network created by the researchers appeared to closely mirror theories developed by biologists that suggest individual neurons are trained inside the brain to detect significant objects.

by John Markoff, NY Times |  Read more:
Photo: Jim Wilson/The New York Times

Can the Guardian Survive?

The Guardian is easy to mock for its sandal-wearing earnestness, its champagne socialism and congenital weakness for typos, but its readers en masse seemed like the kind any editor would be glad to have: curious, questioning, quick to laugh. Seeing the rapport between them and their paper, feeling its pull for the powerful and the talented, enjoying this brand-new festival that felt as if it had been going for years, you could easily have assumed that everything in the Guardian was rosy.

In many ways, it is. With its journalism, the Guardian has been having an astonishing run. For 20 years or more, ever since a bold reinvention led by Rusbridger’s predecessor Peter Preston in 1988, it has been the most stylish paper in the hyper-competitive British quality pack, the wittiest and best-designed, the strongest for features, the one most likely to reflect modern life. But it ruled only at what journalists call the soft end. In the 1970s, the age of Woodward and Bernstein, the Guardian’s best-remembered story was an April fool from 1977, which dreamt up the Pacific nation of San Serriffe – beautifully done but disclosing nothing more than its own sardonic wit. In the 1990s, the Guardian began to land some scoops, notably the scandals that brought down two Tory MPs, Jonathan Aitken and Neil Hamilton. But it still wasn’t known for big investigations, the kind of stories that demand courage, persistence and resources. This is where its culture has changed. It ran a sustained investigation into illicit payments by the arms giant BAE—first alleged in 2003, finally admitted in 2010, and now the subject of nine-figure compensation settlements. It did well with the Wikileaks diplomatic cables, and the English riots of 2011 and their causes.

Above all, it has led the way in the News International phone-hacking scandal, a farrago of power, corruption and lies, exposed by Nick Davies and other Guardian reporters. For two years, their investigation was lonely and scoffed at. A police chief urged Rusbridger to drop it; the mayor of London, Boris Johnson, who presides over the Metropolitan Police, called it “codswallop”. Then, last July, came the Guardian’s disclosure that the targets included the murdered teenager Milly Dowler. The story erupted across all the media. It has now led to the closure of the News of the World, the humbling of Rupert Murdoch, the fall of his son James, the arrest of his favourite Rebekah Brooks, multiple resignations by senior policemen and media executives, at least 50 more arrests, and six official investigations—three criminal ones, employing 150 police officers; one by a House of Commons select committee, one by the communications regulator Ofcom, and, most theatrically, the Leveson inquiry into the regulation of the media, which has spent months shining a fitful light on the mucky machinations of power. By the end of May, when it emerged that the Conservative-led coalition had allowed a former Murdoch editor to work at 10 Downing Street without the normal security vetting, the trail of dirt led all the way to David Cameron’s desk. (...)

This triumph of old-school reporting has been accompanied by spectacular success in new media. The Guardian has never been a big-selling newspaper: among the 11 national dailies in Britain, it lies 10th, with only the Independent behind it. But on the internet, the Guardian lies second among British newspaper sites (behind the Mail, which cheerfully chases hits by aiming lower than its print sister) and in the top five in the world, rubbing shoulders with the New York Times. Where many newspapers treated the web with suspicion, the Guardian dived in, starting early (1995), experimenting widely, pioneering live-blogging, embracing citizen journalism, mastering slideshows and timelines and interactive graphics. By March 2012 it was putting up 400 pieces of content every 24 hours. Its network of sites had a daily average of 4m browsers, as many as the sites for Britain’s bestselling newspaper (the Sun) and its bestselling broadsheet (the Telegraph) put together. The Guardian’s total traffic, around 67m unique browsers a month, was still rising by 60-70% a year. (...)

A sceptic could point out that the Guardian might as well be owned by a billionaire, given the losses it has been able to stomach. It is owned by the Scott Trust, set up in 1936 “to secure the financial and editorial independence of the Guardian in perpetuity”. The trust became a limited company in 2008, but remains trust-like, with all the shares held by the trustees. It also owns most of Auto Trader magazine, a cash cow which usually covers the Guardian’s losses. The idea that journalists like to believe, that the service they provide is more important than any profit it might make, is enshrined in the Scott Trust’s constitution. And Rusbridger says it makes a big difference to what they publish: “The fact that it was the Guardian that did the phone-hacking [story] directly flowed from being a trust.” But being a trust leads, inevitably, to mistrust: rivals depict the Guardian as a trustafarian, not having to make a living in the real world. (...)

The Guardian is not against all charges for digital reading. It asks a token sum for its iPhone edition (£4.99 a year), and a more realistic one for the iPad (£9.99 a month). But it is fiercely resistant to charging for its website—a position it shares with the Mail, the Telegraph, the Washington Post and many others. Some editors stay out of these choppy waters, saying the decisions are made by their commercial colleagues. Rusbridger goes the other way—not only is he happy to defend the Guardian’s stance, he has built a theory around it. He calls it “open journalism”, and in March, in an online Q&A session with readers, he defined it: “Open journalism is journalism which is fully knitted into the web of information that exists in the world today. It links to it; sifts and filters it; collaborates with it and generally uses the ability of anyone to publish and share material to give a better account of the world.”

He has become quite evangelical about it. Where did that come from? “Set aside how you’re going to pay for all this, and say ‘what’s the big story about, what’s happening to information, what is the big challenge for journalism?’ Any journalist who thinks we’re still living in the 19th-, 20th-century world in which a newsroom here can adequately cover the world around us in competition with what’s available on the open web – well, I think that’s very questionable. You can probably do it if you’re the FT or the Wall Street Journal and you’re selling time-critical financial information. For a general newspaper, forgive me if you’ve heard it before but the simplest way of explaining it is this. You’ve got Michael Billington, distinguished theatre critic, in the front row at the National Theatre. Are you saying you don’t need Michael Billington any more? No, he’s the Guardian voice, he is the expert. But what about the other 900 people in the theatre, don’t they have interesting things to say? Well obviously they do, and if we don’t do something with that social experience, somebody else will. And out of those 900 people, 30 will be very knowledgeable. So let’s say Michael Billington is as good as it gets, he’s 9 out of 10, but the experience of these other knowledgeable people is 6 out of 10, so the margin is 3 out of 10, that’s what you’re charging for. You either say ‘we’ll take that then, we’ll build a big wall round Michael Billington.’ Or you say, ‘actually, let’s get them on to our platform as well,’ and you’ve got 9 + 6. So what do you do? If you don’t do this, that’s bad for professional journalism, because you’re hedging against what other people can do. If you do do it, you have a much better account of what happens in a theatre, and you begin to think that it was quite odd to send one person on one night and think that was enough. It’s just obviously better. 
Then the question is how do you edit them, and find the people who know their Brecht from their musicals, and that’s probably partly software and partly old-fashioned editing.

“And the next question is, if it works for theatre does it work for other areas of journalism? I think it works for everything—investigative, foreign, science, environment. By building networks, you’re going with the flow of history, and your journalism is going to be more comprehensive and better. If you reduce it instantly to paywalls, you’re not tackling the bigger issue of what’s happening to journalism.”

by Tim de Lisle, More Intelligent Life |  Read more:
Photo illustrations: Meeson

Monday, June 25, 2012

Our Underground Future

A finished basement can be a beautiful thing. With the right accoutrements and enough effort, what might otherwise be a damp, empty space lined with concrete can be turned into a cozy playroom, or a den, or an office and gym. Properly planned, the basement can become an integral part of a household, even a kind of engine that powers it from below.


The same is true for the far larger basement that all of us share: that vast space that exists under our feet wherever we go, out of sight and out of mind. Those of us who are city-dwellers already keep a lot of stuff down there—subway stations, sewer pipes, electrical lines—but as our cities grow more cramped, and real estate on the surface grows more valuable, the possibility that it can be used more inventively is starting to attract attention from planners around the world.

“It used to be, ‘How high can you go up into the sky?’” said Susie Kim, of the Boston-based urban design firm Koetter Kim & Associates. “Now it’s a matter of, ‘How low can you go and still be economically viable?’”

A cadre of engineers who specialize in tunneling and excavation say that we have barely begun to take advantage of the underground’s versatility. The underground is the next great frontier, they say, and figuring out how best to use it should be a priority as we look ahead to the shape our civilization will take.

“We have so much room underground,” said Sam Ariaratnam, a professor at Arizona State University and the chairman of the International Society for Trenchless Technology. “That underground real estate—people need to start looking at it. And they are starting to look at it.”

The federal government has taken an interest, convening a panel of specialists under the banner of the National Academy of Engineering to produce a report, due out later this year, on the potential uses for America’s underground space, and in particular its importance in building sustainable cities. The long-term vision is one in which the surface of the earth is reserved for the things we want to see and be around—houses, schools, yards, parks—while all the other facilities that are needed to make a city run, from water treatment plants to data banks to freight systems, hum away underground.

Though the basic idea has existed for decades, new engineering techniques and an increasing interest in sustainable urban growth have created fresh momentum for what once seemed like a notion out of Jules Verne. And the world has witnessed some striking new achievements. The city of Almere, in the Netherlands, built an underground trash network that uses suction tubes to transport waste out of the city at 70 kilometers per hour, making garbage trucks unnecessary. In Malaysia, a sophisticated new underground highway tunnel doubles as a discharge tunnel for floodwater. In Germany, a former iron mine is being converted into a nuclear waste repository, while scientists around the world explore the possibility of building actual nuclear power plants underground.

Overall, though, the cause of the underground has encountered resistance, in large part because digging large holes and building things inside them tends to be extremely expensive and technically demanding. Boston offers perfect examples of the pluses and minuses of the endeavor: Putting the Post Office Square parking lot underground created a park and a beloved urban amenity, but the much more ambitious Big Dig turned out to be a drawn-out and unspeakably costly piece of urban reengineering. And perhaps an even greater obstacle is the psychological one. As Ariaratnam put it, “Even in a condo tower, the penthouse on the top floor is the most attractive thing—everyone wants to be higher.” The underground, by contrast, calls to mind darkness, dirt, even danger—and when we imagine what it would look like for civilization to truly colonize it, we think of gophers and mole people. Little wonder that our politicians and urban designers don’t afford the underground anywhere near the level of attention and long-term vision they lavish on the surface. In a world where most people are accustomed to thinking of progress as pointing toward the heavens, it can be hard to retrain the imagination to aim downward.

by Leon Neyfakh, Boston Globe |  Read more:
Illustration: Jesse Lefkowitz

Waiting Game


During the two weeks of play that begin on Monday, professional tennis players at Wimbledon will return thousands of first serves. Many of those returns will be entertaining. Some will be remarkable. But all will give spectators an opportunity to improve the personal and professional decisions we make in all aspects of our lives, by helping us learn to manage delay.

Watch Novak Djokovic. His advantage over the other professionals at Wimbledon won’t be his agility or stamina or even his sense of humour. Instead, as scientists who study superfast athletes have found, the key to Djokovic’s success will be his ability to wait just a few milliseconds longer than his opponents before hitting the ball. That tiny delay is why most players won’t have a chance against him. Djokovic wins because he can procrastinate – at the speed of light.

During superfast reactions, the best-performing experts in sport, and in life, instinctively know when to pause, if only for a split-second. The same is true over longer periods: some of us are better at understanding when to take a few extra seconds to deliver the punchline of a joke, or when we should wait a full hour before making a judgment about another person. Part of this skill is gut instinct, and part of it is analytical. We get some of it from trial and error or by watching experts, but we also can learn from observing toddlers and even animals. There is both an art and a science to managing delay.

In 2008, when the financial crisis hit, I wanted to get to the heart of why our leading bankers, regulators and others were so short-sighted and wreaked such havoc on our economy: why were their decisions so wrong, their expectations of the future so catastrophically off the mark? I also wanted to figure out, for selfish reasons, whether my own tendency to procrastinate (the only light fixture in my bedroom closet has been broken for five years) was really such a bad thing.

Here is what I learnt from interviewing more than 100 experts in different fields and working through several hundred recent studies and experiments: given the fast pace of modern life, most of us tend to react too quickly. We don’t, or can’t, take enough time to think about the increasingly complex timing challenges we face. Technology surrounds us, speeding us up. We overreact to its crush every day, both at work and at home.

Yet good time managers are comfortable pausing for as long as necessary before they act, even in the face of the most pressing decisions. Some seem to slow down time. For the best decision-makers, as for the best tennis players, time is more flexible than a metronome or atomic clock.

by Frank Partnoy, FT |  Read more:

I analyzed the chords of 1300 popular songs for patterns. This is what I found.

For many people, listening to music elicits such an emotional response that the idea of dredging it for statistics and structure can seem odd or even misguided. But knowing these patterns can give one a deeper, more fundamental sense of how music works; for me this makes listening to music a lot more interesting. Of course, if you play an instrument or want to write songs, being aware of these things is of obvious practical importance.

In this article, we’ll look at statistics gathered from 1300 choruses, verses, and other sections of popular songs to answer a few basic questions. First we’ll look at the relative popularity of different chords, based on how frequently they appear in the chord progressions of popular music. Then we’ll begin to look at the relationships different chords have with one another. For example, if a chord appears in a song, what can we say about which chord is likely to come next?
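The next-chord question boils down to a bigram count over chord progressions: count how often chord B follows chord A, then normalize. Here is a minimal sketch in Python; the two progressions are invented stand-ins, not entries from the actual database:

```python
from collections import Counter, defaultdict

# Hypothetical chord progressions standing in for the real database.
progressions = [
    ["C", "G", "Am", "F", "C", "G", "F", "C"],
    ["C", "F", "G", "C", "Am", "F", "G", "C"],
]

# Count bigrams: how often chord b immediately follows chord a.
bigrams = defaultdict(Counter)
for prog in progressions:
    for a, b in zip(prog, prog[1:]):
        bigrams[a][b] += 1

# Convert raw counts into conditional probabilities P(next chord | current chord).
transition = {
    a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
    for a, nexts in bigrams.items()
}

# transition["G"] now maps each chord that followed G to its probability.
print(transition["G"])
```

With a real corpus, each row of this table answers exactly the question posed above: given the current chord, how likely is each possible successor.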

The Database

To make quantitative statements about music you need data; lots of it. Guitar tab websites have tons of information about the chord progressions songs use, but the quality is not very high. Just as important, the information is not in a format suitable for gathering statistics. So, over the past two years we’ve been slowly and painstakingly building up a database of songs, taken mainly from the Billboard 100, and analyzing them one at a time. At the moment the database has over 1300 songs indexed. The genre, and where the songs are taken from, is important: this is an analysis of mainly “popular” music, not jazz or classical, so the results are not meant to be treated as universal. If you’re interested, you can check out the database here. The entries contain raw information about the chords and melody, while leaving out information about arrangement and instrumentation.

We can use the information in the song database to answer all sorts of questions. In this introductory post, I’ll look at a few interesting preliminary results, but we invite you to propose your own questions in the comments at the end of the article.

Let’s get started.

1. Are some chords more commonly used than others?

This seems like such a basic question, but the answer doesn’t actually tell us much, because songs are written in different keys. A song written in C# will have lots of C# chords in it, while a song written in G will probably have lots of G’s. That G chords are more popular than C# chords likely reflects nothing more than the fact that G is easier to play on the guitar and piano. So instead of answering this meaningless question, I’ll answer a slightly more interesting one: what keys are most popular for the songs in the database?



C (and its relative minor, A) is the most common key by far. After that there is a general trend favoring key signatures with fewer sharps and flats, but it is not universal. Eb, with 3 flats, for instance, is slightly (though not statistically significantly) more common than F, with only 1 flat. Bb has only 2 flats but sits way at the end of the popularity scale, with only 4% of songs using it as the key.

2. What are the most common chords? Part 2

It’s much more interesting to look at songs written in a single common key. That way direct comparisons are possible and more illuminating. We transposed every song in the database to be in the key of C to make them directly comparable. Then we looked at the number of chord progressions that contained a given chord.
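The transpose-then-count step described above can be sketched in a few lines of Python. This is a simplified illustration with made-up songs, using sharps-only note spelling (a real version would need an enharmonic map to handle flat keys like Eb or Bb):

```python
from collections import Counter

# Chromatic scale using sharps only (a simplification for illustration).
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose_chord(chord, semitones):
    """Shift a chord's root by some number of semitones, preserving its quality."""
    # Split the root (with optional sharp) from the quality, e.g. "F#m" -> "F#" + "m".
    root = chord[:2] if len(chord) > 1 and chord[1] == "#" else chord[:1]
    quality = chord[len(root):]
    idx = (NOTES.index(root) + semitones) % 12
    return NOTES[idx] + quality

def transpose_to_c(chords, key):
    """Transpose a whole progression from its original key into C."""
    shift = -NOTES.index(key) % 12
    return [transpose_chord(c, shift) for c in chords]

# Hypothetical mini-database: (original key, chord progression).
songs = [
    ("G", ["G", "D", "Em", "C"]),  # I-V-vi-IV in G
    ("D", ["D", "A", "Bm", "G"]),  # the same progression in D
]

# After transposition both songs contribute to the same C-relative counts.
counts = Counter()
for key, chords in songs:
    counts.update(transpose_to_c(chords, key))

print(counts.most_common())
```

Both example songs collapse to the same C, G, Am, F progression once transposed, which is exactly why this normalization makes the chord frequencies directly comparable.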

Below we’ve plotted the relative frequency that different chords occurred in descending order.



by Hooktheory.com | Read more:

Spoiled Rotten

With the exception of the imperial offspring of the Ming dynasty and the dauphins of pre-Revolutionary France, contemporary American kids may represent the most indulged young people in the history of the world. It’s not just that they’ve been given unprecedented amounts of stuff—clothes, toys, cameras, skis, computers, televisions, cell phones, PlayStations, iPods. (The market for Burberry Baby and other forms of kiddie “couture” has reportedly been growing by ten per cent a year.) They’ve also been granted unprecedented authority. “Parents want their kids’ approval, a reversal of the past ideal of children striving for their parents’ approval,” Jean Twenge and W. Keith Campbell, both professors of psychology, have written. In many middle-class families, children have one, two, sometimes three adults at their beck and call. This is a social experiment on a grand scale, and a growing number of adults fear that it isn’t working out so well: according to one poll, commissioned by Time and CNN, two-thirds of American parents think that their children are spoiled.

The notion that we may be raising a generation of kids who can’t, or at least won’t, tie their own shoes has given rise to a new genre of parenting books. Their titles tend to be either dolorous (“The Price of Privilege”) or downright hostile (“The Narcissism Epidemic,” “Mean Moms Rule,” “A Nation of Wimps”). The books are less how-to guides than how-not-to’s: how not to give in to your toddler, how not to intervene whenever your teen-ager looks bored, how not to spend two hundred thousand dollars on tuition only to find your twenty-something graduate back at home, drinking all your beer.

Not long ago, Sally Koslow, a former editor-in-chief of McCall’s, discovered herself in this last situation. After four years in college and two on the West Coast, her son Jed moved back to Manhattan and settled into his old room in the family’s apartment, together with thirty-four boxes of vinyl LPs. Unemployed, Jed liked to stay out late, sleep until noon, and wander around in his boxers. Koslow set out to try to understand why he and so many of his peers seemed stuck in what she regarded as permanent “adultescence.” She concluded that one of the reasons is the lousy economy. Another is parents like her.

“Our offspring have simply leveraged our braggadocio, good intentions, and overinvestment,” Koslow writes in her new book, “Slouching Toward Adulthood: Observations from the Not-So-Empty Nest” (Viking). They inhabit “a broad savannah of entitlement that we’ve watered, landscaped, and hired gardeners to maintain.” She recommends letting the grasslands revert to forest: “The best way for a lot of us to show our love would be to learn to un-mother and un-father.” One practical tip that she offers is to do nothing when your adult child finally decides to move out. In the process of schlepping Jed’s stuff to an apartment in Carroll Gardens, Koslow’s husband tore a tendon and ended up in emergency surgery.

Madeline Levine, a psychologist who lives outside San Francisco, specializes in treating young adults. In “Teach Your Children Well: Parenting for Authentic Success” (HarperCollins), she argues that we do too much for our kids because we overestimate our influence. “Never before have parents been so (mistakenly) convinced that their every move has a ripple effect into their child’s future success,” she writes. Paradoxically, Levine maintains, by working so hard to help our kids we end up holding them back.

by Elizabeth Kolbert, The New Yorker |  Read more:
ILLUSTRATION: Christoph Abbrederis

by jamie heiden
via:

The Most Amazing Bowling Story Ever

When Bill Fong approaches the lane, 15-pound bowling ball in hand, he tries not to breathe. He tries not to think about not breathing. He wants his body to perform a series of complex movements that his muscles themselves have memorized. In short, he wants to become a robot.

Fong, 48 years old, 6 feet tall with broad shoulders, pulls the ball into his chest and does a quick shimmy with his hips. He swings the ball first backward, then forward, his arm a pendulum of kinetic energy, as he takes five measured steps toward the foul line. He releases the ball, and it glides across the oiled wooden planks like it’s floating, hydroplaning, spinning counterclockwise along a trajectory that seems to be taking it straight for the right-hand gutter. But as the ball nears the edge of the lane, it veers back toward the center, as if guided by remote control. The hook carries the ball back just in time. In a heartbeat, what was a wide, sneering mouth of pins is now—nothing.

He comes back to the table where his teammates are seated—they always sit and bowl in the same order—and they congratulate him the same way they have thousands of times over the last decade. But Fong looks displeased. His strike wasn’t good enough.

“I got pretty lucky that time,” he says in his distinctly Chicago accent. “The seven was hanging there before it fell. I’ve got to make adjustments.” With a pencil, he jots down notes on a folded piece of blue paper.

His teammates aren’t interested in talking about what he can do to make his strikes more solid, though, or even tonight’s mildly competitive league game. They’re still discussing a night two years ago. They mention it every week, without fail. In fact, all you have to do is say the words “That Night” and everyone at the Plano Super Bowl knows what you’re talking about. They also refer to it as “The Incident” or “That Incredible Series.” It’s the only time anyone can remember a local recreational bowler making the sports section of the Dallas Morning News. One man, an opponent of Fong’s that evening, calls it “the most amazing thing I’ve ever seen in a bowling alley.”

Bill Fong needs no reminders, of course. He thinks about that moment—those hours—every single day of his life.

Most people think perfection in bowling is a 300 game, but it isn’t. Any reasonably good recreational bowler can get lucky one night and roll 12 consecutive strikes. If you count all the bowling alleys all over America, somebody somewhere bowls a 300 every night. But only a human robot can roll three 300s in a row—36 straight strikes—for what’s called a “perfect series.” More than 95 million Americans go bowling, but, according to the United States Bowling Congress, there have been only 21 certified 900s since anyone started keeping track.

by Michael J. Mooney, D Magazine |  Read more:
Photo: allBowling.com

A Weapon We Can’t Control

The decision by the United States and Israel to develop and then deploy the Stuxnet computer worm against an Iranian nuclear facility late in George W. Bush’s presidency marked a significant and dangerous turning point in the gradual militarization of the Internet. Washington has begun to cross the Rubicon. If it continues, contemporary warfare will change fundamentally as we move into hazardous and uncharted territory.

It is one thing to write viruses and lock them away safely for future use should circumstances dictate it. It is quite another to deploy them in peacetime. Stuxnet has effectively fired the starting gun in a new arms race that is very likely to lead to the spread of similar and still more powerful offensive cyberweaponry across the Internet. Unlike nuclear or chemical weapons, however, countries are developing cyberweapons outside any regulatory framework.

There is no international treaty or agreement restricting the use of cyberweapons, which can do anything from controlling an individual laptop to disrupting an entire country’s critical telecommunications or banking infrastructure. It is in the United States’ interest to push for one before the monster it has unleashed comes home to roost.

Stuxnet was originally deployed with the specific aim of infecting the Natanz uranium enrichment facility in Iran. This required sneaking a memory stick into the plant to introduce the virus to its private and secure “offline” network. But despite Natanz’s isolation, Stuxnet somehow escaped into the cyberwild, eventually affecting hundreds of thousands of systems worldwide.

This is one of the frightening dangers of an uncontrolled arms race in cyberspace; once a virus is released, its developers generally lose control of their invention, which will inevitably seek out and attack the networks of innocent parties. Moreover, all countries that possess an offensive cyber capability will be tempted to use it now that the first shot has been fired.

by Misha Glenny, NY Times |  Read more:
Illustration: Henning Wagenbreth

Sunday, June 24, 2012

I Never Owned Any Music to Begin With

When an NPR Music intern admitted to paying for almost none of the 11,000 songs in her iTunes library, David Lowery, of Cracker and Camper Van Beethoven fame and lecturer for the University of Georgia's music business program, took it as an opportunity to explain the ethics of a sustainable music industry, and the debate went viral. Here's the initial blog post, with links to the professor's response and NPR's coverage of the debate below.

A few days before my internship at All Songs Considered started, Bob Boilen posted an article titled "I Just Deleted All My Music" on this blog. The post is about entrusting his huge personal music library to the cloud. Though this seemed like a bold step to many people who responded to the article, to me, it didn't seem so bold at all.

I never went through the transition from physical to digital. I'm almost 21, and since I first began to love music I've been spoiled by the Internet.

I am an avid music listener, concertgoer, and college radio DJ. My world is music-centric. I've only bought 15 CDs in my lifetime. Yet, my entire iTunes library exceeds 11,000 songs.

David Lowery's response and NPR's coverage.

by Henry Molofsky, 3 Quarks Daily |  Read more:

Saturday, June 23, 2012


Miso Soup (by CY Phang)

Recipe (the cheating way to do it fast):

1. Use 1 sachet of prepacked bonito dashi (bonito flake stock); I used a non-MSG version. You can find them in the Japanese food section of the supermarket. Boil 2 pints of water, then add the bonito dashi powder. Boil for about 10 minutes, then add 2 tbsp of miso (I used an organic miso).
2. Cut up the silken tofu into small cubes.
3. Wash a small packet of enoki mushrooms and add them to the boiling soup. Sprinkle in some dehydrated wakame (a type of seaweed; I bought the “instant” kind at the supermarket) and boil with the enoki mushrooms for about 3 minutes.
4. Dish the soup into a bowl and put in the diced silken tofu and sprinkle with some thinly sliced spring onion before serving hot.
5. ENJOY

Rendez-Vous (by mutablend)

Ellison Buys Lanai


Buying an inhabited Hawaiian island may sound extreme, even for a guy known for flaunting his fortune like a playboy – driving fancy cars, wooing beautiful women, flying his own jet and spending $200 million (€157.85 million) to build a Japanese-themed compound in California's Silicon Valley.

Larry Ellison won sailing's America's Cup in 2010, and then wrestled with San Francisco city leaders over his big plans for two waterfront piers. He owns a mansion atop a San Francisco hill with a sweeping view of the Golden Gate Bridge to the left and Alcatraz to the right. He has spent a fortune snapping up some of Southern California's most prized beachfront property in Malibu.

For Oracle CEO Ellison, an island in the middle of the Pacific is right up his alley.

He founded Oracle Corp. with $1,200 in 1977 and is now the world's sixth-richest person. He inked a deal, announced this week by the governor, to buy 98 percent of the island's 141 square miles.

While detailed plans for the island have yet to be revealed, he's likely to do something "epic and grand," said Mike Wilson, who wrote the first biography of Ellison, "The Difference Between God and Larry Ellison: God Doesn't Think He's Larry Ellison."

"He could build the world's largest rare butterfly sanctuary, a medical research facility to help him live forever or a really cool go-cart track," Wilson said Thursday – but only half-jokingly, because those are the kinds of outlandish interests Ellison has.

As a man who feels cheated by a limited life span, he's like a kid who never grew up, yet he is a great visionary, Wilson said.

Wilson said the high-tech maverick won't be concerned with how his lifestyle will jibe with a laid-back island where longtime residents are grappling with the loss of their pineapple fields to make way for luxury development: "I don't think his primary concern is fitting in with what Hawaiians want." (...)

While Lanaians are eager for someone who might restore agriculture to the island's economy or someone who appreciates the unique culture of Hawaii, residents also are familiar with living on what Castle & Cooke calls the largest privately held island in the United States.

"Lanai folks have always been sort of under this benevolent ownership, which goes back to the Dole days," University of Hawaii historian Warren Nishimoto said of Lanai's ownership in the 1920s by the founder of Dole Food Co. "They never felt comfortable about what the future is for the island. It's at the whims of an owner."

by Jennifer Sinco Kelleher, Huffington Post |  Read more:
Photo: Robin Kaye via Bloomberg News