Tuesday, October 2, 2012

Why There Needs to Be a Real (Grad) School of Rock


There is nothing quite like being a young rock musician walking into a good recording studio for the first time, with a record contract in your backpack, surveying the machinery. The towers of digital and analog sound-effect consoles, with their glowing gauges and blinking lights, are here for you—paid for by the label, available to you because you cut a basement demo that made people see dollar signs. Over the hum of the amplifiers you can almost hear the whir of the industry, the interns flirting, the promotion person on the phone with the terrestrial radio person, the booking agent negotiating with club managers in far-flung college towns. It's an apparatus built to make money but also to bring your songs to teenagers and twentysomethings who are like you, who scour the Internet and the Staff Picks rack for new music that will illuminate the sublime in desperate crushes and everyday despair.

From there, things tend to get more complicated. In the case of the band I was in during my mid-20s, we quickly figured out that we didn't have anywhere near enough time to lay down a good debut album in the recording schedule afforded us, especially given the greenness of our line-up and material. A few days after those transcendent first moments in the studio, the producer confirmed our worst suspicions: Because we had a song that had been gathering online buzz and sounded like a potential hit, he explained, the label had squeezed us into an unrealistic timeframe in hopes of introducing the song to college radio before the end of the school year. "They did it to you," he said, "they've done it to other bands, and they'll probably do it to some more." We panicked, blazing through each song as efficiently as possible.

I was in debt and couldn't stomach becoming homeless to promote an album that embarrassed me, so before we went on tour, I quit the band, took a day job, and went back to being a writer. The album didn't sell very well, but our "hit" was discovered by advertisers. The song in question, "Hey Now Now," was vaguely suicidal, written by our singer as he emerged from a black depression. The chorus went, "Hey now now/We're going down, down/And we'll ride the bus there/Pay the bus fare." But everybody misheard "We're going down, down" as "We're going downtown," and it was featured in a Pepsi commercial broadcast from South America to Europe to the Middle East, in which ethnically indeterminate rockers played the song in a practice space while the Brazilian soccer star Ronaldinho dribbled in an alley. Long after the band fell apart, it turned up again in an ad for Multi-Grain Pringles. Our legacy, in the end, was an 18-second fragment of one tune. We walked into the studio determined to make complex, aesthetically cohesive albums like our heroes in Arcade Fire; we wound up shills for snack food.

What my band needed was an Iowa Writers' Workshop for rock musicians, a Master of Fine Arts program at a university where respected veterans helped us learn to write good songs and perform them well. Such programs would establish a much-needed period of germination beyond the reach of commerce, in which young rock musicians could meet, form bands, and build a repertoire slowly, receiving feedback from seasoned rock musicians who don't have a pecuniary stake in their work. Such programs would cultivate good popular music by placing young musicians in an environment where aesthetic integrity is valued and financial strife held at bay. (...)

A rock and roll grad school wouldn't save rock musicians from the difficulties of life on the road or from the byzantine practices of an industry desperate to find new sources of revenue after seeing its sales decline from $14 billion a year to $7 billion in the age of file-sharing. But it would give them a period of time in which to find collaborators, give one another feedback, get good, discover who they are as artists, and acquire mentors before they're exploited or pressured to sell out. It wouldn't spare musicians hardship, but it would help them make better music.

When it was founded in 1936, the Iowa Writers' Workshop was a weird proposition; it brought an unscholarly pursuit into an academic setting. Writers were supposed to be renegades who refined their intuitive and unteachable art in bars and cafes. They were regarded, in other words, much the same way rock musicians are regarded now. But the Workshop went on to graduate 17 Pulitzer Prize winners, and there are now roughly 250 MFA programs in creative writing. Some are cash cows for universities. Some, like Iowa, are not unlike charitable organizations, in that they pay their students stipends to write whatever they want or give them work teaching undergrads.

Rock music should be wild, unprofessional, spontaneous, indifferent to convention—it shouldn't feel like a craft honed in school. But neither should literature, and many of the most radically original writers in recent American letters have passed through the MFA system: Raymond Carver and Denis Johnson went to Iowa, David Foster Wallace went to the program at the University of Arizona, George Saunders to the one at Syracuse, Karen Russell to the one at Columbia. If there are any romantic punk rockers out there concerned that MFA programs would inhibit budding Lou Reeds from drinking too much and sleeping around and doing drugs, or otherwise constrain their outré behavior, this Iowa graduate is here to reassure them that they would not.

by Benjamin Nugent, The Atlantic |  Read more:
Photo: Paramount

Monday, October 1, 2012

Hey, @SeattlePD: What’s the Latest?


Seattle -- The business of policing, as cops have known since at least the first bobbies on the beat, is partly about being seen on the job, having a local presence, even if it is just twirling a baton down the avenue.

But does “local” mean the same thing in the disembodied chatter of social media? The Seattle Police Department, which presides over one of the nation’s more tech-savvy — if not saturated — cities, is diving in to find out, in a project that began last week with 51 hyper-local neighborhood Twitter accounts providing moment-to-moment crime reports.

The project, called Tweets-by-beat, is the most ambitious effort of its kind in the nation, authorities in law enforcement and social media say, transforming the pen and ink of the old police blotter into the bits and bytes of the digital age. It allows residents — including, presumably, criminals — to know in almost real time about many of the large and small transgressions, crises, emergencies and downright weirdness in their neighborhoods.

Say you live on Olive Way east of downtown. There was an “intoxicated person” on your street at 3:31 a.m. Monday, so the neighborhood report said, as well as a “mental complaint,” unspecified and mysterious, nearby at 9:30 a.m. Sunday was busy for property crime on the beat, with two burglaries and a shoplifting case, along with a grab bag of noise and disturbance complaints, accident investigations and several reports of “suspicious vehicles.”

“More and more people want to know what’s going on on their piece of the rock,” said the chief of police, John Diaz. “They want to specifically know what’s going on in the areas around their home, around their work, where their children might be going to school. This is just a different way we could put out as much information as possible as quickly as possible.”

Not everything that happens in a neighborhood will automatically pop up in 140 characters or fewer. Sex crimes were excluded, on the theory that Web attention could discourage people from reporting a rape or sexual assault, and domestic violence cases will remain off the Twitter list as well for similar reasons. Drawing attention to a private matter and alerting neighbors, department officials said, could make things worse for the victim.

The reports are also structured with an automatic one-hour delay, aimed at preventing people from learning about an investigation in progress and swarming over to gawk and perhaps interfere.

“This is trailblazing stuff,” said Eugene O’Donnell, a professor of police studies at John Jay College of Criminal Justice in Manhattan. “It shows a willingness I haven’t seen in large supply to really affirmatively make available, warts and all, a clear picture to people of what’s going on.”

But Professor O’Donnell, a former New York City police officer and prosecutor, said he thought there could be unintended consequences. Increased awareness of local crime, he said, could lead people to a greater feeling of vulnerability or to the conclusion that the police are not resolving the local crime problem — even if it is a problem they might not have been aware of had the beat-tweet not informed them.

by Kirk Johnson, NY Times |  Read more:
Photo: Michael Hanson

Leap of Faith

In the context of historical evidence and outcomes, present market conditions give us no choice but to remain highly defensive. Valuations remain rich on the basis of normalized earnings (which are better correlated with subsequent returns than numerous popular alternatives based on forward operating earnings, the Fed Model and the like). Investor sentiment is overcrowded on the bullish side even as corporate insiders are liquidating at a rate of eight shares sold for every share purchased – a surge that Investors Intelligence describes as a “panic.” Market conditions remain steeply overbought on an intermediate and long-term basis, with the S&P 500 still near its upper Bollinger bands (two standard deviations above the 20-period moving average) on weekly and monthly resolutions. We continue to observe wide divergences in market action, from century-old criteria such as the weakness in transports versus industrials (which suggests an unwanted buildup of inventories) to more subtle divergences and signs of exhaustion in market internals.
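For readers unfamiliar with the indicator, the Bollinger-band condition described above (price near two standard deviations above the 20-period moving average) can be sketched numerically. This is an illustrative computation only, with made-up prices; it is not Hussman's actual model.

```python
# Sketch of the "upper Bollinger band" condition: a market is overbought
# in this sense when price sits near mean + 2 standard deviations of the
# trailing 20-period window. Prices below are hypothetical, not S&P data.

def bollinger_upper(prices, period=20, num_std=2.0):
    """Upper Bollinger band over the most recent `period` prices."""
    window = prices[-period:]
    mean = sum(window) / len(window)
    variance = sum((p - mean) ** 2 for p in window) / len(window)
    return mean + num_std * variance ** 0.5

# Hypothetical weekly closes in a steady advance:
closes = [1350 + 3 * i for i in range(25)]
upper = bollinger_upper(closes)
print(closes[-1] >= 0.99 * upper)  # within 1% of the upper band? prints True
```

The point of the example is simply that a persistent, low-volatility advance pins price against the upper band, which is the "steeply overbought" configuration the commentary refers to.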

Overall, we continue to estimate a steeply negative return/risk for stocks on horizons from two weeks to 18 months. I recognize that this is easy to treat as disposable news, given that the ensemble methods we developed to capture both post-war and Depression-era data have indicated a negative return/risk profile for stocks since April 2010, yet the S&P 500 is 18% higher today than it was at that time. Central bank interventions have certainly played a role in that gain. But then, our prospective return/risk estimates have been in the lowest 1% of historical data only since March, and the market loss that would erase the intervening gain since April 2010 is one that we would consider small from the perspective of present market conditions. The average cyclical bear market has historically wiped out more than half of the preceding bull market advance, and stocks have typically surrendered closer to 80% of their preceding bull market gains when the cyclical bear is part of a “secular” bear period such as the one we’ve experienced since 2000 (see the discussion of cyclical and secular fluctuations in A False Sense of Security). I remain convinced that we will observe numerous points in the market cycle ahead where the evidence will support a significant and even aggressive exposure to market fluctuations. Now is not one of those points.

While our estimates of prospective return/risk in stocks remain among the most negative instances we’ve observed in a century of market history, it is important to note that these estimates are largely independent of our conviction that the U.S. economy has already entered a recession. They are also independent of our concerns about instability in the European banking system. With regard to Europe’s banking strains, the capital needs of Spanish banks were estimated at modest levels last week only due to the heroic assumption that distressed banks would be able to massively deleverage their balance sheets without amplifying their losses – an assumption that ZeroHedge refers to as deus ex-fudge while begging the question “just who will these banks sell said debt to?”

by John P. Hussman, PhD., Hussman Funds |  Read more:

The Ripple Effect


Medinah, Ill. -- You never know which teachers’ lessons will stick with you. A teacher, for some reason, once told us that when you’re carrying liquid -- a bowl of soup, a cup filled too high with water, a coffee mug -- you shouldn’t look at the liquid while you walk.

Why? Because, she said, if you look at the water you will see it slosh and shake. And seeing that will make you shake. And the water will move a little more, making you shake a little more, making the liquid move a little more, making you shake even more, on and on, until you spill.

Here’s the part that sticks with me: She said that you will always underestimate the power of tiny ripples. You will always believe that you are steadier than you think you are. (...)

***

The coolest way to observe a Ryder Cup is through sound. Once the matches are rolling, you will hear sounds everywhere. Huge roars. Light cheers. Applause. Groans. U-S-A chants. Ole-ole-ole sing-song. It’s hard to tell where they come from -- they rattle up in the trees and come down all around you -- but after a while you learn to make out what they mean.

The sounds in the early part of Sunday’s golf suggested that everything was going OK for the U.S.A. It was cool on Sunday afternoon. The merchandise tent was overflowing -- they must make millions. The Europeans were trying to charge, but the U.S. was holding its own. Webb Simpson had a two-up lead on Ian Poulter early. The Johnson boys -- Dustin and Zach -- seemed to be controlling their matches. Jim Furyk seemed to be outplaying Sergio Garcia. There was no reason to believe that anything unusual was going to happen.

It’s hard to pinpoint exactly when things started to turn. Maybe it was when Poulter squared up his match against Simpson with back-to-back birdies. Poulter was the heartbeat for this European team. He’s brash, he’s a bit goofy, he’s a Twitter fanatic, he loves attention, and he has never quite broken through.

But he is a fierce Ryder Cup player. His record at the Ryder Cup is otherworldly -- 10-3 coming into Sunday’s singles -- and on Saturday evening, after the sun set, he made a putt in the dark that gave Europe a half point and inspired his teammates. “That was when we thought [a comeback] was possible,” Olazabal would say.

The crowd tried to give Poulter the hardest time -- he loves engaging the U.S. crowd. But the harder time they gave him, the better he played. When he evened the match with Simpson, maybe there was something a little bit bigger rippling.

***

Seve Ballesteros’s memory was ever-present all week at the Ryder Cup. Ballesteros, probably more than anyone, created the Ryder Cup as we now know it, with all the intensity and fervor and pressure. He died last year. People talked about him constantly around Medinah. There were images of him wherever you turned, especially around the European team. And, of course, his favorite playing partner, Olazabal, was coaching the Europeans … and nearly crying every time Seve’s name came up.

Ballesteros was a force of nature. He saw the Ryder Cup as a cause … a chance to prove to everyone that players in Europe were just as good as American players, a chance to show just how fierce they could be. He did not just lead the European teams to victory, he told his teammates (and players, when he was a coach) that they were tougher than the U.S. players, that they had more heart, that they would win, that they had ALREADY won, but they just didn’t know it yet.

His positive force pushed players beyond their expectations. Could any of this have played a role on Sunday, even with Seve gone? I guess it depends what you believe. The European players did talk about feeling his presence. “I have no doubt in my mind that he was with me today all day,” Sergio Garcia would say. The way it ended for Garcia and American Jim Furyk, it’s not hard to imagine the ghost of Seve Ballesteros being nearby.

by Joe Posnanski, Sports on Earth |  Read more:
Photo: Getty Images

Japan’s Tech Giants Are in a Free Fall


While electronics giants Apple and Samsung fight each other for market dominance, with hotly competitive product releases and tit-for-tat patent lawsuits, Japan’s consumer electronics makers find themselves in an increasingly perilous fight for relevance and, in some cases, survival.

Companies such as Sony, Panasonic and Sharp once controlled the industry, outclassing and outselling their U.S. rivals. But now they represent the most alarming telltale of corporate Japan’s two-decade struggle to adapt, downsize and innovate.

While the Japanese economy staggers, the consumer electronics companies are in an accelerated free fall, unable to catch on in the digital world of tablets and smartphones. They’re cycling through executives, watching their stock prices dip toward 10-year lows and laying off employees; Sharp recently reported plans to slash nearly one-fifth of its workforce. The companies — bleeding money on their once-profitable televisions — have also set off on a nontraditional hunt for profits, developing everything from solar panels to medical devices.

The companies still have famous brand names, and tech analysts say they still produce some of the world’s highest-quality hardware devices. But they face a fundamental problem: It’s been years since they’ve turned out products that people feel they need to have.

Those who study the consumer electronics industry describe a decade of missteps and miscalculations. Japan’s giants concentrated on stand-alone devices like televisions and phones and computers, but devoted little thought to software and the ways their devices synced with one another. As a result, their products don’t always work in harmony, in the way an iPhone connects naturally with a laptop and a digital music store.

In other cases, the Japanese companies were simply too slow to turn cutting-edge technology into usable technology. Sony, for instance, was early to embrace e-book technology, but struggled to pair it with intuitive software or an easy-to-download selection of books. The companies also completely missed the rapid rise of smartphones, with Apple and South Korea’s Samsung grabbing the majority of the market.

Even the Japanese companies’ strengths matter less now, as consumers have lost the willingness to pay a premium for quality. Sharp and Sony and Panasonic make among the world’s best televisions, for instance, but such Korean competitors as LG and Samsung have found ways to make products that are almost as good for far less money.

“In the past there was a huge gap between the best of breed and second best,” said Michael Gartenberg, an industry analyst at Gartner, a technology research company. “Now, maybe there’s still a small gap between a Sony high-definition screen and an LG screen, but most consumers can’t see it. And if most consumers can’t see it, it’s not there.

“Japanese companies,” Gartenberg added, “were busy defending old business models that the world simply bypassed.”

The pace of problems is accelerating. Sony hasn’t made a profit in four years. Panasonic has lost money in three of the past four years. Along with Sharp, the companies’ combined market value, according to Bloomberg, is $32 billion — making them one-fifth the value of Samsung and one-twentieth the value of Apple.

by Chico Harlan, Washington Post |  Read more:

Blood Test Accurately Detects Early Stages of Lung, Breast Cancer in Humans

Researchers at Kansas State University have developed a simple blood test that can accurately detect the beginning stages of cancer.

In less than an hour, the test can detect breast cancer and non-small cell lung cancer -- the most common type of lung cancer -- before symptoms like coughing and weight loss start. The researchers anticipate testing for the early stages of pancreatic cancer shortly.

The test was developed by Stefan Bossmann, professor of chemistry, and Deryl Troyer, professor of anatomy and physiology. Both are also researchers affiliated with Kansas State University's Johnson Cancer Research Center and the University of Kansas Cancer Center. Gary Gadbury, professor of statistics at Kansas State University, helped analyze the data from tests with lung and breast cancer patients. The results, data and analysis were recently submitted to the Kansas Bio Authority for accelerated testing.

"We see this as the first step into a new arena of investigation that could eventually lead to improved early detection of human cancers," Troyer said. "Right now the people who could benefit the most are those classified as at-risk for cancer, such as heavy smokers and people who have a family history of cancer. The idea is these at-risk groups could go to their physician's office quarterly or once a year, take an easy-to-do, noninvasive test, and be told early on whether cancer has possibly developed."

The researchers say the test would be repeated a short time later. If cancer is confirmed, diagnostic imaging could begin that would otherwise not be routinely pursued.

According to the American Cancer Society, an estimated 39,920 breast cancer deaths and 160,340 lung cancer deaths are expected in the U.S. in 2012.

With the exception of breast cancer, most types of cancer can be categorized in four stages based on tumor growth and the spread of cancer cells throughout the body. Breast and lung cancer are typically found and diagnosed in stage 2, the stage when people often begin exhibiting symptoms such as pain, fatigue and coughing. Numerous studies show that the earlier cancer is detected, the greater chance a person has against the disease.

"The problem, though, is that nobody knows they're in stage 1," Bossmann said. "There is often not a red flag to warn that something is wrong. Meanwhile, the person is losing critical time."

The test developed by Kansas State University's Bossmann and Troyer works by detecting increased enzyme activity in the body. Iron nanoparticles coated with amino acids and a dye are introduced to small amounts of blood or urine from a patient. The amino acids and dye interact with enzymes in the patient's urine or blood sample. Each type of cancer produces a specific enzyme pattern, or signature, that can be identified by doctors.

by Kansas State University, Science Daily |  Read more:

To Encourage Biking, Cities Lose the Helmets

[ed. I've posted about this before. If you'd like more information about the issues for and against bike helmet use see: cyclehelmets.org]

One spectacular Sunday in Paris last month, I decided to skip museums and shopping to partake of something even more captivating for an environment reporter: Vélib, arguably the most successful bike-sharing program in the world. In their short lives, Europe’s bike-sharing systems have delivered myriad benefits, notably reducing traffic and its carbon emissions. A number of American cities — including New York, where a bike-sharing program is to open next year — want to replicate that success.

So I bought a day pass online for about $2, entered my login information at one of the hundreds of docking stations that are scattered every few blocks around the city and selected one of Vélib’s nearly 20,000 stodgy gray bikes, with their basic gears, upright handlebars and practical baskets.

Then I did something extraordinary, something I’ve not done in a quarter-century of regular bike riding in the United States: I rode off without a helmet.

I rode all day at a modest clip, on both sides of the Seine, in the Latin Quarter, past the Louvre and along the Champs-Élysées, feeling exhilarated, not fearful. And I had tons of bareheaded bicycling company amid the Parisian traffic. One common denominator of successful bike programs around the world — from Paris to Barcelona to Guangzhou — is that almost no one wears a helmet, and there is no pressure to do so.

In the United States the notion that bike helmets promote health and safety by preventing head injuries is taken as pretty near God’s truth. Un-helmeted cyclists are regarded as irresponsible, like people who smoke. Cities are aggressive in helmet promotion.

But many European health experts have taken a very different view: Yes, there are studies that show that if you fall off a bicycle at a certain speed and hit your head, a helmet can reduce your risk of serious head injury. But such falls off bikes are rare — exceedingly so in mature urban cycling systems.

On the other hand, many researchers say, if you force or pressure people to wear helmets, you discourage them from riding bicycles. That means more obesity, heart disease and diabetes. And — Catch-22 — a result is fewer ordinary cyclists on the road, which makes it harder to develop a safe bicycling network. The safest biking cities are places like Amsterdam and Copenhagen, where middle-aged commuters are mainstay riders and the fraction of adults in helmets is minuscule.

“Pushing helmets really kills cycling and bike-sharing in particular because it promotes a sense of danger that just isn’t justified — in fact, cycling has many health benefits,” says Piet de Jong, a professor in the department of applied finance and actuarial studies at Macquarie University in Sydney. He studied the issue with mathematical modeling, and concludes that the benefits may outweigh the risks by 20 to 1.

He adds: “Statistically, if we wear helmets for cycling, maybe we should wear helmets when we climb ladders or get into a bath, because there are lots more injuries during those activities.” The European Cyclists’ Federation says that bicyclists in its domain have the same risk of serious injury as pedestrians per mile traveled.

by Elisabeth Rosenthal, NY Times |  Read more:
Photo: via Cyclehelmets.org

Sunday, September 30, 2012


De Niro. Scorsese.

My Life as a Replacement Ref: Three Unlikely Months Inside the NFL

Time's Sean Gregory spoke Friday with Jerry Frump, a long-time college football referee who served as a “replacement ref” during the recent NFL labor dispute. Highlights from the conversation, including Frump’s thoughts on the wide range of experience among his replacement colleagues, can be found here. Full transcript below:

Sean Gregory: When did you first start officiating? I believe you’ve done a bunch of games – what they used to call I-AA. How did you first start refereeing, when you were a kid?

Jerry Frump: I started in basketball first. And after one year in basketball an opportunity came up, a friend of mine said, “Do you want to try football?” I had never been a very good athlete, I was very small in high school, didn’t get my growth spurt, I guess if there was one, until later. But I got involved in officiating at a very young age.

How old were you when you started refereeing basketball?

I would have been 21.

And you played high school football?

I was a bench warmer. Small town. Like I said, I wasn’t very big, but I got my interest and what abilities I had probably after most guys had gotten involved and learned the fundamentals. But nonetheless I just loved sports. And so this became my passion.

I officiated basketball, I coached and officiated little league baseball, softball, semi-pro baseball, football, did a little bit of everything. And after a number of years my vocation caused me to move to the Chicago area. Starting off in a large area like this, it’s kind of starting over with your refereeing career, but I had an opportunity and got a few breaks with people and got involved and continued working at the high school level in the Chicago area then got involved working some junior college and Division III football. Along the way, it’s kind of a pecking order. You get some recognition, and somebody takes an interest in you at the next level and brings you along. And I had a supervisor at the Division III level who was very instrumental in pushing me to the next level and that was how I got involved in what was the Gateway Conference, which is now known as the Mountain Valley Conference. I had officiated that for 14 years. And shortly after getting involved in that back in 2001, you may recall that the NFL had another labor walkout and dispute. And I think I was one of about a half a dozen officials involved in the 2012 season who was also involved in 2001.

Circumstances in 2001 were significantly different. I think we had about four hours of training before they put us in a preseason game, but it was a very unique experience and something that I still remember to this day. Most of the guys worked also the first regular season game. I was one of the guys who could not get from my college game to the pro game the next day in time. So they had people that were on a crew and then they had some supplemental or alternatives that they had brought in for this purpose. At that time the NFL was willing to work with the college schedules, work around everybody’s timelines; this time it was made clear up front that that was not going to be the situation. They knew that this was going to be a more contentious negotiation. They said, “you have to make a choice.” As the NFL was putting out feelers for interested officials, the supervisors put out a notice that if you choose to make that decision, then obviously you’re sacrificing your college season – and probably your career. They didn’t say your career, but you could read between the lines. I’ve been officiating for over 40 years, this is my 41st or 42nd year of officiating football, period. It was an opportunity as I neared the end of my career, that I didn’t want to look back one day a year or two from now and say “gee, I wonder what if.”

So I rolled the dice and did that not knowing whether I would ever get on the field for a preseason game. And certainly not believing that it would go beyond that to get into the regular season, but you know, we did.

So you read between the lines, that if you worked for the NFL, you’d be out this season but possibly not be able to get back in.

That was the rumor. As a crew chief, there’s a lot of responsibilities put on you. I certainly knew and understood that in doing this, it left my supervisor in the lurch, and it was a business decision that he had to make. I didn’t take it as a threat. I knew that if this got into the regular season, he couldn’t at the last minute try to bring in and put in a new crew chief in place. So I understand why they had to make that ultimatum.

And have you reached out to them to see where you’re at?

I have not.

Are you operating under the assumption right now that they might not let you back this year or down the road?

Correct.

And you feel like it’s almost like a blacklisting?

No. I think it’s a matter of when you step aside, somebody else is going to take over. For me to come back as a crew chief means that they’ve either got to get rid of somebody else, there’s got to be another opening, and there’s no guarantees of that.

by Sean Gregory, Time |  Read more:
Photo: George Gojkovich/Getty Images

How to Make Almost Anything

A new digital revolution is coming, this time in fabrication. It draws on the same insights that led to the earlier digitizations of communication and computation, but now what is being programmed is the physical world rather than the virtual one. Digital fabrication will allow individuals to design and produce tangible objects on demand, wherever and whenever they need them. Widespread access to these technologies will challenge traditional models of business, aid, and education.

The roots of the revolution date back to 1952, when researchers at the Massachusetts Institute of Technology (MIT) wired an early digital computer to a milling machine, creating the first numerically controlled machine tool. By using a computer program instead of a machinist to turn the screws that moved the metal stock, the researchers were able to produce aircraft components with shapes that were more complex than could be made by hand. From that first revolving end mill, all sorts of cutting tools have been mounted on computer-controlled platforms, including jets of water carrying abrasives that can cut through hard materials, lasers that can quickly carve fine features, and slender electrically charged wires that can make long thin cuts.

Today, numerically controlled machines touch almost every commercial product, whether directly (producing everything from laptop cases to jet engines) or indirectly (producing the tools that mold and stamp mass-produced goods). And yet all these modern descendants of the first numerically controlled machine tool share its original limitation: they can cut, but they cannot reach internal structures. This means, for example, that the axle of a wheel must be manufactured separately from the bearing it passes through.

In the 1980s, however, computer-controlled fabrication processes that added rather than removed material (called additive manufacturing) came on the market. Thanks to 3-D printing, a bearing and an axle could be built by the same machine at the same time. A range of 3-D printing processes are now available, including thermally fusing plastic filaments, using ultraviolet light to cross-link polymer resins, depositing adhesive droplets to bind a powder, cutting and laminating sheets of paper, and shining a laser beam to fuse metal particles. Businesses already use 3-D printers to model products before producing them, a process referred to as rapid prototyping. Companies also rely on the technology to make objects with complex shapes, such as jewelry and medical implants. Research groups have even used 3-D printers to build structures out of cells with the goal of printing living organs.

Additive manufacturing has been widely hailed as a revolution, featured on the cover of publications from Wired to The Economist. This is, however, a curious sort of revolution, proclaimed more by its observers than its practitioners. In a well-equipped workshop, a 3-D printer might be used for about a quarter of the jobs, with other machines doing the rest. One reason is that the printers are slow, taking hours or even days to make things. Other computer-controlled tools can produce parts faster, or with finer features, or that are larger, lighter, or stronger. Glowing articles about 3-D printers read like the stories in the 1950s that proclaimed that microwave ovens were the future of cooking. Microwaves are convenient, but they don’t replace the rest of the kitchen.

The revolution is not additive versus subtractive manufacturing; it is the ability to turn data into things and things into data. That is what is coming; for some perspective, there is a close analogy with the history of computing. The first step in that development was the arrival of large mainframe computers in the 1950s, which only corporations, governments, and elite institutions could afford. Next came the development of minicomputers in the 1960s, led by Digital Equipment Corporation’s PDP family of computers, which was based on MIT’s first transistorized computer, the TX-0. These brought down the cost of a computer from hundreds of thousands of dollars to tens of thousands. That was still too much for an individual but was affordable for research groups, university departments, and smaller companies. The people who used these devices developed the applications for just about everything one does now on a computer: sending e-mail, writing in a word processor, playing video games, listening to music. After minicomputers came hobbyist computers. The best known of these, the MITS Altair 8800, was sold in 1975 for about $1,000 assembled or about $400 in kit form. Its capabilities were rudimentary, but it changed the lives of a generation of computing pioneers, who could now own a machine individually. Finally, computing truly turned personal with the appearance of the IBM personal computer in 1981. It was relatively compact, easy to use, useful, and affordable.

Just as with the old mainframes, only institutions can afford the modern versions of the early bulky and expensive computer-controlled milling devices. In the 1980s, first-generation rapid prototyping systems from companies such as 3D Systems, Stratasys, Epilog Laser, and Universal brought the price of computer-controlled manufacturing systems down from hundreds of thousands of dollars to tens of thousands, making them attractive to research groups. The next-generation digital fabrication products on the market now, such as the RepRap, the MakerBot, the Ultimaker, the PopFab, and the MTM Snap, sell for thousands of dollars assembled or hundreds of dollars as parts. Unlike the digital fabrication tools that came before them, these tools have plans that are typically freely shared, so that those who own the tools (like those who owned the hobbyist computers) can not only use them but also make more of them and modify them. Integrated personal digital fabricators comparable to the personal computer do not yet exist, but they will.

Personal fabrication has been around for years as a science-fiction staple. When the crew of the TV series Star Trek: The Next Generation was confronted by a particularly challenging plot development, they could use the onboard replicator to make whatever they needed. Scientists at a number of labs (including mine) are now working on the real thing, developing processes that can place individual atoms and molecules into whatever structure they want. Unlike 3-D printers today, these will be able to build complete functional systems at once, with no need for parts to be assembled. The aim is not only to produce the parts for a drone, for example, but to build a complete vehicle that can fly right out of the printer. This goal is still years away, but it is not necessary to wait: most of the computer functions one uses today were invented in the minicomputer era, long before they would flourish in the era of personal computing. Similarly, although today’s digital manufacturing machines are still in their infancy, they can already be used to make (almost) anything, anywhere. That changes everything.

by Neil Gershenfeld, Foreign Affairs |  Read more:
Photo: flickr / Mads Boedker

Will We Ever Predict Earthquakes?


In 1977, Charles Richter – the man who gave his name to a now-defunct scale of earthquake strength – wrote, “Journalists and the general public rush to any suggestion of earthquake prediction like hogs toward a full trough… [Prediction] provides a happy hunting ground for amateurs, cranks, and outright publicity-seeking fakers.” Susan Hough from the United States Geological Survey says the 1970s witnessed a heyday of earthquake prediction. “But the pendulum swung [because of too many false alarms],” says Hough, who wrote a book about the practice called Predicting the Unpredictable. “People became very pessimistic, and prediction got a really bad name.”

Indeed, some scientists, such as Robert Geller from the University of Tokyo, think that prediction is outright impossible. In a 1997 paper, starkly titled Earthquakes Cannot Be Predicted, he argues that the factors that influence the birth and growth of earthquakes are so numerous and complex that measuring and analysing them is a fool’s errand. Nothing in the last 15 years has changed his mind. In an email to me, he wrote: “All serious scientists know there are no prospects in the immediate future.”

Finding fault

Earthquakes start when two of the Earth’s tectonic plates – the huge, moving slabs of land that carry the continents – move around each other. The plates squash, stretch and catch against each other, storing energy which is then suddenly released, breaking and shaking the rock around them.

Those are the basics; the details are much more complex. Ross Stein from the United States Geological Survey explains the problem by comparing tectonic plates to a brick sitting on a desk, attached to a fishing rod by a rubber band. You can reel it in to mimic the shifting plates, and because the rubber band is elastic, just like the Earth’s crust, the brick doesn’t slide smoothly. Instead, as you turn the reel, the band stretches until, suddenly, the brick zips forward. That’s an earthquake.

If you did this 10 times, says Stein, you would see a huge difference in the number of turns it took to move the brick, or in the distance the brick slid before stopping. “Even when we simplify the Earth down to this ridiculous extreme, we still don’t get regular earthquakes,” he says. The Earth, of course, isn’t simple. The mass, elasticity and friction of the sliding plates vary between different areas, or even different parts of the same fault. All these factors can influence where an earthquake starts (which, Stein says, can be an area as small as your living room), when it starts, how strong it is, and how long it lasts. “We have no business thinking we’ll see regular periodic earthquakes in the crust,” he says.
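Stein’s brick-and-rubber-band analogy is, in physics terms, a stick-slip friction model, and its irregularity is easy to see in a toy simulation. The sketch below is not Stein’s own model; it is a minimal single-block version under assumed parameters, where a load point advances steadily, the “brick” stays stuck until the elastic force exceeds a randomly varying static-friction threshold (standing in for the heterogeneous mass, elasticity, and friction Stein describes), and each slip relaxes the force down to a dynamic-friction level. All names and numbers are illustrative.

```python
import random

def stick_slip_events(n_steps=100_000, pull_rate=0.001, stiffness=1.0,
                      f_static=1.0, f_dynamic=0.6, noise=0.2, seed=42):
    """Toy single-block stick-slip model: a 'brick' pulled by an elastic band.

    The load point advances steadily; the block stays stuck until the
    spring force exceeds a randomly varying static-friction threshold,
    then slips until the force drops to the dynamic-friction level.
    Returns (time_step, slip_size) pairs for each slip event.
    """
    rng = random.Random(seed)
    load, block = 0.0, 0.0
    threshold = f_static + noise * rng.random()  # heterogeneous "fault" strength
    events = []
    for t in range(n_steps):
        load += pull_rate                        # reel in the fishing line
        force = stiffness * (load - block)       # stretch of the rubber band
        if force > threshold:                    # static friction overcome
            slip = (force - f_dynamic) / stiffness  # relax to dynamic level
            block += slip                        # the brick zips forward
            events.append((t, slip))
            threshold = f_static + noise * rng.random()  # new sticking strength
    return events

events = stick_slip_events()
intervals = [b[0] - a[0] for a, b in zip(events, events[1:])]
print(f"{len(events)} events; intervals range "
      f"from {min(intervals)} to {max(intervals)} steps")
```

Even with everything simplified to one block and one band, the intervals between “quakes” vary noticeably from event to event, which is the point of Stein’s demonstration: irregular timing emerges from the mechanics themselves, before any of the real Earth’s added complexity.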

That hasn’t stopped people from trying to find “anomalies” that reliably precede an earthquake, including animals acting strangely, radon gas seeping from rocks, patterns of precursor earthquakes, and electromagnetic signals from pressurised rocks. None of these have been backed by strong evidence. Studying such “anomalies” may eventually tell us something useful about the physics of earthquakes, but their value towards a predictive test is questionable.

by Ed Yong, Not Exactly Rocket Science |  Read more:

The Keys to the Park


There are 383 aspirational keys in circulation in the Big City, each of them numbered and coded, all of them equipped to unlock any of four wrought-iron gates offering privileged access to undisturbed siestas or tranquil ambulation inside the tree-lined boundaries of Gramercy Park. At age 181, the only truly private park in Manhattan is lovelier and more ornamental than ever; yes, the colorful Calder sculpture swaying blithely in the breeze inside the fence is “Janey Waney,” on indefinite loan from the Calder Foundation.

Alexander Rower, a grandson of Mr. Calder, lives on Gramercy Park, as does Samuel G. White, whose great-grandfather was Stanford White, and who has taken on an advisory role in a major redesign of its landscaping. Both are key-holders who, validated by an impressive heritage, are exerting a significant influence on Gramercy Park’s 21st-century profile. Because Gramercy is fenced, not walled in, the Calder and the rest of the evolving interior scenery are visible in all seasons to passers-by and the legions of dog-walkers who daily patrol the perimeter.

Parkside residents rationalize that their communal front yard is privatized for its own protection. Besides, they, not the city it enhances, have footed its bills for nearly two centuries. Any of the 39 buildings on the park that fails to pay the yearly assessment fee of $7,500 per lot, which grants it two keys — fees and keys multiply accordingly for buildings on multiple lots — will have its key privileges rescinded. The penalty is so painful that it has never had to be applied.

For connection-challenged mortals, though, the park is increasingly problematic to appreciate from within, particularly now that Arthur W. and William Lie Zeckendorf, and Robert A. M. Stern, the architect of their 15 Central Park West project, are recalibrating property values in a stratospheric direction by bringing the neighborhood its first-ever $42 million duplex penthouse, at 18 Gramercy Park South, formerly a Salvation Army residence for single women.

The unique housewarming gift the Zeckendorfs decided to bestow on the buyer-who-has-everything types purchasing there is none other than a small metallic item they might not already own: a personal key to the park. (...)

The locks and keys are changed every year, and the four gates are, for further safekeeping, self-locking: the key is required for exiting as well as entering.

“In a way it’s kind of a priceless amenity,” said Maurice Mann, the landlord who restored 36 Gramercy Park East, “because everyone is so enamored with the park, and owning a key still holds a certain amount of bragging rights and prestige. Not everybody can have one, so it’s like, if there’s something I can’t have, I want it.”

by Robin Finn, NY Times |  Read more:
Photo: Chang W. Lee