Monday, October 1, 2012

Japan’s Tech Giants Are in a Free Fall


While electronics giants Apple and Samsung fight each other for market dominance, with hotly competitive product releases and tit-for-tat patent lawsuits, Japan’s consumer electronics makers find themselves in an increasingly perilous fight for relevance and, in some cases, survival.

Companies such as Sony, Panasonic and Sharp once controlled the industry, outclassing and outselling their U.S. rivals. But now they represent the most alarming telltale of corporate Japan’s two-decade struggle to adapt, downsize and innovate.

While the Japanese economy staggers, the consumer electronics companies are in an accelerated free fall, unable to catch on in the digital world of tablets and smartphones. They’re cycling through executives, watching their stock prices dip toward 10-year lows and laying off employees; Sharp recently reported plans to slash nearly one-fifth of its workforce. The companies — bleeding money on their once-profitable televisions — have also set off on a nontraditional hunt for profits, developing everything from solar panels to medical devices.

The companies still have famous brand names, and tech analysts say they still produce some of the world’s highest-quality hardware devices. But they face a fundamental problem: It’s been years since they’ve turned out products that people feel they need to have.

Those who study the consumer electronics industry describe a decade of missteps and miscalculations. Japan’s giants concentrated on stand-alone devices like televisions and phones and computers, but devoted little thought to software and the ways their devices synced with one another. As a result, their products don’t always work in harmony, in the way an iPhone connects naturally with a laptop and a digital music store.

In other cases, the Japanese companies were simply too slow to turn cutting-edge technology into usable technology. Sony, for instance, was early to embrace e-book technology, but struggled to pair it with intuitive software or an easy-to-download selection of books. The companies also completely missed the rapid rise of smartphones, with Apple and South Korea’s Samsung grabbing the majority of the market.

Even the Japanese companies’ strengths matter less now, as consumers have lost the willingness to pay a premium for quality. Sharp and Sony and Panasonic make among the world’s best televisions, for instance, but such Korean competitors as LG and Samsung have found ways to make products that are almost as good for far less money.

“In the past there was a huge gap between the best of breed and second best,” said Michael Gartenberg, an industry analyst at Gartner, a technology research company. “Now, maybe there’s still a small gap between a Sony high-definition screen and an LG screen, but most consumers can’t see it. And if most consumers can’t see it, it’s not there.”

“Japanese companies,” Gartenberg added, “were busy defending old business models that the world simply bypassed.”

The pace of problems is accelerating. Sony hasn’t made a profit in four years. Panasonic has lost money in three of the past four. Together with Sharp, the two companies have a combined market value, according to Bloomberg, of $32 billion — one-fifth the value of Samsung and one-twentieth the value of Apple.

by Chico Harlan, Washington Post |  Read more:

Blood Test Accurately Detects Early Stages of Lung, Breast Cancer in Humans

Researchers at Kansas State University have developed a simple blood test that can accurately detect the beginning stages of cancer.

In less than an hour, the test can detect breast cancer and non-small cell lung cancer -- the most common type of lung cancer -- before symptoms like coughing and weight loss start. The researchers anticipate testing for the early stages of pancreatic cancer shortly.

The test was developed by Stefan Bossmann, professor of chemistry, and Deryl Troyer, professor of anatomy and physiology. Both are also researchers affiliated with Kansas State University's Johnson Cancer Research Center and the University of Kansas Cancer Center. Gary Gadbury, professor of statistics at Kansas State University, helped analyze the data from tests with lung and breast cancer patients. The results, data and analysis were recently submitted to the Kansas Bio Authority for accelerated testing.

"We see this as the first step into a new arena of investigation that could eventually lead to improved early detection of human cancers," Troyer said. "Right now the people who could benefit the most are those classified as at-risk for cancer, such as heavy smokers and people who have a family history of cancer. The idea is these at-risk groups could go to their physician's office quarterly or once a year, take an easy-to-do, noninvasive test, and be told early on whether cancer has possibly developed."

The researchers say the test would be repeated a short time later. If cancer is confirmed, doctors could begin diagnostic imaging that would not otherwise be routinely pursued.

According to the American Cancer Society, an estimated 39,920 breast cancer deaths and 160,340 lung cancer deaths are expected in the U.S. in 2012.

With the exception of breast cancer, most types of cancer can be categorized into four stages based on tumor growth and the spread of cancer cells throughout the body. Breast and lung cancer are typically found and diagnosed in stage 2, the stage when people often begin exhibiting symptoms such as pain, fatigue and coughing. Numerous studies show that the earlier cancer is detected, the better a person’s chances against the disease.

"The problem, though, is that nobody knows they're in stage 1," Bossmann said. "There is often not a red flag to warn that something is wrong. Meanwhile, the person is losing critical time."

The test developed by Kansas State University's Bossmann and Troyer works by detecting increased enzyme activity in the body. Iron nanoparticles coated with amino acids and a dye are introduced to small amounts of blood or urine from a patient. The amino acids and dye interact with enzymes in the patient's urine or blood sample. Each type of cancer produces a specific enzyme pattern, or signature, that can be identified by doctors.
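At its core, reading a cancer “signature” from enzyme activity is a pattern-matching problem. The sketch below is purely illustrative and is not the Kansas State researchers’ method: the enzyme names, reference profiles and sample readings are all hypothetical, and the code simply picks whichever reference signature lies closest to the measured activity pattern.

```python
# Illustrative only: matching a measured enzyme-activity profile against
# reference "signatures". The enzyme names, reference values and sample
# readings are hypothetical, not taken from the Kansas State study.
import math

REFERENCE_SIGNATURES = {
    "breast cancer":              {"protease_A": 0.9, "protease_B": 0.2, "esterase_C": 0.7},
    "non-small cell lung cancer": {"protease_A": 0.3, "protease_B": 0.8, "esterase_C": 0.6},
    "healthy baseline":           {"protease_A": 0.1, "protease_B": 0.1, "esterase_C": 0.1},
}

def cosine_similarity(a: dict, b: dict) -> float:
    keys = sorted(set(a) | set(b))
    va = [a.get(k, 0.0) for k in keys]
    vb = [b.get(k, 0.0) for k in keys]
    dot = sum(x * y for x, y in zip(va, vb))
    norm = math.sqrt(sum(x * x for x in va)) * math.sqrt(sum(y * y for y in vb))
    return dot / norm if norm else 0.0

def classify(sample: dict) -> str:
    # Pick the reference signature most similar to the measured activities.
    return max(REFERENCE_SIGNATURES,
               key=lambda name: cosine_similarity(sample, REFERENCE_SIGNATURES[name]))

if __name__ == "__main__":
    measured = {"protease_A": 0.85, "protease_B": 0.25, "esterase_C": 0.65}
    print(classify(measured))  # -> "breast cancer" for this made-up profile
```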

by Kansas State University, Science Daily |  Read more:

To Encourage Biking, Cities Lose the Helmets

[ed. I've posted about this before. If you'd like more information about the issues for and against bike helmet use see: cyclehelmets.org]

One spectacular Sunday in Paris last month, I decided to skip museums and shopping to partake of something even more captivating for an environment reporter: Vélib, arguably the most successful bike-sharing program in the world. In their short lives, Europe’s bike-sharing systems have delivered myriad benefits, notably reducing traffic and its carbon emissions. A number of American cities — including New York, where a bike-sharing program is to open next year — want to replicate that success.

So I bought a day pass online for about $2, entered my login information at one of the hundreds of docking stations that are scattered every few blocks around the city and selected one of Vélib’s nearly 20,000 stodgy gray bikes, with their basic gears, upright handlebars and practical baskets.

Then I did something extraordinary, something I’ve not done in a quarter-century of regular bike riding in the United States: I rode off without a helmet.

I rode all day at a modest clip, on both sides of the Seine, in the Latin Quarter, past the Louvre and along the Champs-Élysées, feeling exhilarated, not fearful. And I had tons of bareheaded bicycling company amid the Parisian traffic. One common denominator of successful bike programs around the world — from Paris to Barcelona to Guangzhou — is that almost no one wears a helmet, and there is no pressure to do so.

In the United States the notion that bike helmets promote health and safety by preventing head injuries is taken as pretty near God’s truth. Un-helmeted cyclists are regarded as irresponsible, like people who smoke. Cities are aggressive in helmet promotion.

But many European health experts have taken a very different view: Yes, there are studies that show that if you fall off a bicycle at a certain speed and hit your head, a helmet can reduce your risk of serious head injury. But such falls off bikes are rare — exceedingly so in mature urban cycling systems.

On the other hand, many researchers say, if you force or pressure people to wear helmets, you discourage them from riding bicycles. That means more obesity, heart disease and diabetes. And — Catch-22 — a result is fewer ordinary cyclists on the road, which makes it harder to develop a safe bicycling network. The safest biking cities are places like Amsterdam and Copenhagen, where middle-aged commuters are mainstay riders and the fraction of adults in helmets is minuscule.

“Pushing helmets really kills cycling and bike-sharing in particular because it promotes a sense of danger that just isn’t justified — in fact, cycling has many health benefits,” says Piet de Jong, a professor in the department of applied finance and actuarial studies at Macquarie University in Sydney. He studied the issue with mathematical modeling, and concludes that the benefits may outweigh the risks by 20 to 1.
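De Jong’s conclusion comes out of a benefit-versus-risk calculation: the life-years gained from the exercise of everyday cycling set against the expected life-years lost to serious head injuries. The toy sketch below shows only the shape of such a calculation; the inputs are invented placeholders chosen to land near the quoted 20-to-1 figure, not his actual parameters.

```python
# A toy benefit-risk comparison for everyday cycling, in the spirit of the
# modeling described above. All numbers are hypothetical placeholders,
# not Piet de Jong's actual parameters.

def benefit_risk_ratio(life_years_gained_per_year: float,
                       head_injury_prob_per_year: float,
                       life_years_lost_per_injury: float) -> float:
    expected_loss = head_injury_prob_per_year * life_years_lost_per_injury
    return life_years_gained_per_year / expected_loss

if __name__ == "__main__":
    # Hypothetical inputs: 0.02 life-years gained per year of regular cycling,
    # a 1-in-10,000 annual chance of a serious head injury costing 10 life-years.
    ratio = benefit_risk_ratio(0.02, 1e-4, 10.0)
    print(f"benefits outweigh risks by roughly {ratio:.0f} to 1")
```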

He adds: “Statistically, if we wear helmets for cycling, maybe we should wear helmets when we climb ladders or get into a bath, because there are lots more injuries during those activities.” The European Cyclists’ Federation says that bicyclists in its domain have the same risk of serious injury as pedestrians per mile traveled.

by Elisabeth Rosenthal, NY Times |  Read more:
Photo: via Cyclehelmets.org

Sunday, September 30, 2012


De Niro. Scorsese.

My Life as a Replacement Ref: Three Unlikely Months Inside the NFL

Time's Sean Gregory spoke Friday with Jerry Frump, a long-time college football referee who served as a “replacement ref” during the recent NFL labor dispute. Highlights from the conversation, including Frump’s thoughts on the wide range of experience among his replacement colleagues, can be found here. Full transcript below:

Sean Gregory: When did you first start officiating? I believe you’ve done a bunch of games – what they used to call I-AA. How did you first start refereeing, when you were a kid?

Jerry Frump: I started in basketball first. And after one year in basketball an opportunity came up, a friend of mine said, “Do you want to try football?” I had never been a very good athlete, I was very small in high school, didn’t get my growth spurt, I guess if there was one, until later. But I got involved in officiating at a very young age.

How old were you when you started refereeing basketball?

I would have been 21.

And you played high school football?

I was a bench warmer. Small town. Like I said, I wasn’t very big, but I got my interest and what abilities I had probably after most guys had gotten involved and learned the fundamentals. But nonetheless I just loved sports. And so this became my passion.

I officiated basketball, I coached and officiated little league baseball, softball, semi-pro baseball, football, did a little bit of everything. And after a number of years my vocation caused me to move to the Chicago area. Starting off in a large area like this, it’s kind of starting over with your refereeing career, but I had an opportunity and got a few breaks with people and got involved and continued working at the high school level in the Chicago area then got involved working some junior college and Division III football. Along the way, it’s kind of a pecking order. You get some recognition, and somebody takes an interest in you at the next level and brings you along. And I had a supervisor at the Division III level who was very instrumental in pushing me to the next level and that was how I got involved in what was the Gateway Conference, which is now known as the Mountain Valley Conference. I had officiated that for 14 years. And shortly after getting involved in that back in 2001, you may recall that the NFL had another labor walkout and dispute. And I think I was one of about a half a dozen officials involved in the 2012 season who was also involved in 2001.

Circumstances in 2001 were significantly different. I think we had about four hours of training before they put us in a preseason game, but it was a very unique experience and something that I still remember to this day. Most of the guys worked also the first regular season game. I was one of the guys who could not get from my college game to the pro game the next day in time. So they had people that were on a crew and then they had some supplemental or alternatives that they had brought in for this purpose. At that time the NFL was willing to work with the college schedules, work around everybody’s timelines; this time it was made clear up front that that was not going to be the situation. They knew that this was going to be a more contentious negotiation. They said, “you have to make a choice.” As the NFL was putting out feelers for interested officials, the supervisors put out a notice that if you choose to make that decision, then obviously you’re sacrificing your college season – and probably your career. They didn’t say your career, but you could read between the lines. I’ve been officiating for over 40 years, this is my 41st or 42nd year of officiating football, period. It was an opportunity as I neared the end of my career, that I didn’t want to look back one day a year or two from now and say “gee, I wonder what if.”

So I rolled the dice and did that not knowing whether I would ever get on the field for a preseason game. And certainly not believing that it would go beyond that to get into the regular season, but you know, we did.

So you read between the lines, that if you worked for the NFL, you’d be out this season but possibly not be able to get back in.

That was the rumor. As a crew chief, there’s a lot of responsibilities put on you. I certainly knew and understood that in doing this, it left him in a lurch and it was a business decision that the supervisor had to make. I didn’t take it as a threat. I knew that if this got into the regular season, he couldn’t at the last minute try to bring in and put in a new crew chief in place. So I understand why they had to make that ultimatum.

And have you reached out to them to see where you’re at?

I have not.

Are you operating under the assumption right now that they might not let you back this year or down the road?

Correct.

And you feel like it’s almost like a blacklisting?

No. I think it’s a matter of when you step aside, somebody else is going to take over. For me to come back as a crew chief means that they’ve either got to get rid of somebody else, there’s got to be another opening, and there’s no guarantees of that.

by Sean Gregory, Time |  Read more:
Photo: George Gojkovich/Getty Images

How to Make Almost Anything

A new digital revolution is coming, this time in fabrication. It draws on the same insights that led to the earlier digitizations of communication and computation, but now what is being programmed is the physical world rather than the virtual one. Digital fabrication will allow individuals to design and produce tangible objects on demand, wherever and whenever they need them. Widespread access to these technologies will challenge traditional models of business, aid, and education.

The roots of the revolution date back to 1952, when researchers at the Massachusetts Institute of Technology (MIT) wired an early digital computer to a milling machine, creating the first numerically controlled machine tool. By using a computer program instead of a machinist to turn the screws that moved the metal stock, the researchers were able to produce aircraft components with shapes that were more complex than could be made by hand. From that first revolving end mill, all sorts of cutting tools have been mounted on computer-controlled platforms, including jets of water carrying abrasives that can cut through hard materials, lasers that can quickly carve fine features, and slender electrically charged wires that can make long thin cuts.
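The core idea, then and now, is that a program rather than a machinist decides where the cutter goes. The sketch below is a loose illustration of that idea, assuming a generic G-code-style dialect (real controllers differ in commands, setup and safety moves): it emits a toolpath that traces a circle as a series of short straight cuts.

```python
# A minimal sketch of "turning the screws with a program": emit a simple
# G-code-style toolpath that traces a circle as short linear moves.
# Illustrative only, not machine-ready; dialects vary by controller.
import math

def circular_toolpath(cx: float, cy: float, radius: float, depth: float,
                      feed: float, segments: int = 72) -> list:
    lines = [
        "G21            ; units in millimetres",
        "G90            ; absolute coordinates",
        f"G0 X{cx + radius:.3f} Y{cy:.3f} Z5.000   ; rapid move to start, above the stock",
        f"G1 Z{-depth:.3f} F{feed / 2:.0f}          ; plunge to cutting depth",
    ]
    for i in range(1, segments + 1):
        a = 2 * math.pi * i / segments
        x = cx + radius * math.cos(a)
        y = cy + radius * math.sin(a)
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed:.0f}")   # one short straight cut
    lines.append("G0 Z5.000        ; retract the tool")
    return lines

if __name__ == "__main__":
    print("\n".join(circular_toolpath(cx=0.0, cy=0.0, radius=25.0, depth=2.0, feed=300)))
```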

Today, numerically controlled machines touch almost every commercial product, whether directly (producing everything from laptop cases to jet engines) or indirectly (producing the tools that mold and stamp mass-produced goods). And yet all these modern descendants of the first numerically controlled machine tool share its original limitation: they can cut, but they cannot reach internal structures. This means, for example, that the axle of a wheel must be manufactured separately from the bearing it passes through.

In the 1980s, however, computer-controlled fabrication processes that added rather than removed material (called additive manufacturing) came on the market. Thanks to 3-D printing, a bearing and an axle could be built by the same machine at the same time. A range of 3-D printing processes are now available, including thermally fusing plastic filaments, using ultraviolet light to cross-link polymer resins, depositing adhesive droplets to bind a powder, cutting and laminating sheets of paper, and shining a laser beam to fuse metal particles. Businesses already use 3-D printers to model products before producing them, a process referred to as rapid prototyping. Companies also rely on the technology to make objects with complex shapes, such as jewelry and medical implants. Research groups have even used 3-D printers to build structures out of cells with the goal of printing living organs.

Additive manufacturing has been widely hailed as a revolution, featured on the cover of publications from Wired to The Economist. This is, however, a curious sort of revolution, proclaimed more by its observers than its practitioners. In a well-equipped workshop, a 3-D printer might be used for about a quarter of the jobs, with other machines doing the rest. One reason is that the printers are slow, taking hours or even days to make things. Other computer-controlled tools can produce parts faster, or with finer features, or that are larger, lighter, or stronger. Glowing articles about 3-D printers read like the stories in the 1950s that proclaimed that microwave ovens were the future of cooking. Microwaves are convenient, but they don’t replace the rest of the kitchen.

The revolution is not additive versus subtractive manufacturing; it is the ability to turn data into things and things into data. That is what is coming; for some perspective, there is a close analogy with the history of computing. The first step in that development was the arrival of large mainframe computers in the 1950s, which only corporations, governments, and elite institutions could afford. Next came the development of minicomputers in the 1960s, led by Digital Equipment Corporation’s PDP family of computers, which was based on MIT’s first transistorized computer, the TX-0. These brought down the cost of a computer from hundreds of thousands of dollars to tens of thousands. That was still too much for an individual but was affordable for research groups, university departments, and smaller companies. The people who used these devices developed the applications for just about everything one does now on a computer: sending e-mail, writing in a word processor, playing video games, listening to music. After minicomputers came hobbyist computers. The best known of these, the MITS Altair 8800, was sold in 1975 for about $1,000 assembled or about $400 in kit form. Its capabilities were rudimentary, but it changed the lives of a generation of computing pioneers, who could now own a machine individually. Finally, computing truly turned personal with the appearance of the IBM personal computer in 1981. It was relatively compact, easy to use, useful, and affordable.

Just as with the old mainframes, only institutions can afford the modern versions of the early bulky and expensive computer-controlled milling devices. In the 1980s, first-generation rapid prototyping systems from companies such as 3D Systems, Stratasys, Epilog Laser, and Universal brought the price of computer-controlled manufacturing systems down from hundreds of thousands of dollars to tens of thousands, making them attractive to research groups. The next-generation digital fabrication products on the market now, such as the RepRap, the MakerBot, the Ultimaker, the PopFab, and the MTM Snap, sell for thousands of dollars assembled or hundreds of dollars as parts. Unlike the digital fabrication tools that came before them, these tools have plans that are typically freely shared, so that those who own the tools (like those who owned the hobbyist computers) can not only use them but also make more of them and modify them. Integrated personal digital fabricators comparable to the personal computer do not yet exist, but they will.

Personal fabrication has been around for years as a science-fiction staple. When the crew of the TV series Star Trek: The Next Generation was confronted by a particularly challenging plot development, they could use the onboard replicator to make whatever they needed. Scientists at a number of labs (including mine) are now working on the real thing, developing processes that can place individual atoms and molecules into whatever structure they want. Unlike 3-D printers today, these will be able to build complete functional systems at once, with no need for parts to be assembled. The aim is to not only produce the parts for a drone, for example, but build a complete vehicle that can fly right out of the printer. This goal is still years away, but it is not necessary to wait: most of the computer functions one uses today were invented in the minicomputer era, long before they would flourish in the era of personal computing. Similarly, although today’s digital manufacturing machines are still in their infancy, they can already be used to make (almost) anything, anywhere. That changes everything.

by Neil Gershenfeld, Foreign Affairs |  Read more:
Photo: flickr / Mads Boedker

Will We Ever Predict Earthquakes?


In 1977, Charles Richter – the man who gave his name to a now-defunct scale of earthquake strength – wrote, “Journalists and the general public rush to any suggestion of earthquake prediction like hogs toward a full trough… [Prediction] provides a happy hunting ground for amateurs, cranks, and outright publicity-seeking fakers.” Susan Hough from the United States Geological Survey says the 1970s witnessed a heyday of earthquake prediction. “But the pendulum swung [because of too many false alarms],” says Hough, who wrote a book about the practice called Predicting the Unpredictable. “People became very pessimistic, and prediction got a really bad name.”

Indeed, some scientists, such as Robert Geller from the University of Tokyo, think that prediction is outright impossible. In a 1997 paper, starkly titled Earthquakes Cannot Be Predicted, he argues that the factors that influence the birth and growth of earthquakes are so numerous and complex that measuring and analysing them is a fool’s errand. Nothing in the last 15 years has changed his mind. In an email to me, he wrote: “All serious scientists know there are no prospects in the immediate future.”

Finding fault

Earthquakes start when two of the Earth’s tectonic plates – the huge, moving slabs of land that carry the continents – move around each other. The plates squash, stretch and catch against each other, storing energy which is then suddenly released, breaking and shaking the rock around them.

Those are the basics; the details are much more complex. Ross Stein from the United States Geological Survey explains the problem by comparing tectonic plates to a brick sitting on a desk, and attached to a fishing rod by a rubber band. You can reel it in to mimic the shifting plates, and because the rubber band is elastic, just like the Earth’s crust, the brick doesn’t slide smoothly. Instead, as you turn the reel, the band stretches until, suddenly, the brick zips forward. That’s an earthquake.

If you did this 10 times, says Stein, you would see a huge difference in the number of turns it took to move the brick, or in the distance the brick slid before stopping. “Even when we simplify the Earth down to this ridiculous extreme, we still don’t get regular earthquakes,” he says. The Earth, of course, isn’t simple. The mass, elasticity and friction of the sliding plates vary between different areas, or even different parts of the same fault. All these factors can influence where an earthquake starts (which, Stein says, can be an area as small as your living room), when it starts, how strong it is, and how long it lasts. “We have no business thinking we’ll see regular periodic earthquakes in the crust,” he says.
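Stein’s brick-and-rubber-band setup is essentially the classic spring-slider, or “stick-slip,” toy model. The sketch below uses arbitrary, made-up parameters and is nothing like a real earthquake simulator; it only shows how steady loading combined with slightly variable friction produces slip events at irregular intervals and of irregular sizes, which is exactly the point Stein is making.

```python
# A toy stick-slip ("brick on a rubber band") simulation. Even with steady
# reeling, a little randomness in the friction threshold makes the slip
# events irregular in timing and size. Parameters are arbitrary.
import random

def simulate(steps: int = 2000, reel_rate: float = 1.0, stiffness: float = 0.05,
             mean_threshold: float = 10.0, seed: int = 1):
    rng = random.Random(seed)
    stretch = 0.0                                   # how far the band is stretched
    threshold = rng.gauss(mean_threshold, 2.0)      # static friction to overcome
    events = []                                     # (time step, slip size)
    for t in range(steps):
        stretch += reel_rate                        # steady loading
        if stiffness * stretch >= threshold:        # friction gives way
            events.append((t, stretch))             # the brick zips forward
            stretch = 0.0                           # stored stress is released
            threshold = rng.gauss(mean_threshold, 2.0)  # friction varies each time
    return events

if __name__ == "__main__":
    evts = simulate()
    intervals = [b[0] - a[0] for a, b in zip(evts, evts[1:])]
    print("slip intervals (time steps):", intervals)   # irregular despite steady loading
```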

That hasn’t stopped people from trying to find “anomalies” that reliably precede an earthquake, including animals acting strangely, radon gas seeping from rocks, patterns of precursor earthquakes, and electromagnetic signals from pressurised rocks. None of these have been backed by strong evidence. Studying such “anomalies” may eventually tell us something useful about the physics of earthquakes, but their value towards a predictive test is questionable.

by Ed Yong, Not Exactly Rocket Science |  Read more:

The Keys to the Park


There are 383 aspirational keys in circulation in the Big City, each of them numbered and coded, all of them equipped to unlock any of four wrought-iron gates offering privileged access to undisturbed siestas or tranquil ambulation inside the tree-lined boundaries of Gramercy Park. At age 181, the only truly private park in Manhattan is lovelier and more ornamental than ever; yes, the colorful Calder sculpture swaying blithely in the breeze inside the fence is “Janey Waney,” on indefinite loan from the Calder Foundation.

Alexander Rower, a grandson of Mr. Calder, lives on Gramercy Park, as does Samuel G. White, whose great-grandfather was Stanford White, and who has taken on an advisory role in a major redesign of its landscaping. Both are key-holders who, validated by an impressive heritage, are exerting a significant influence on Gramercy Park’s 21st-century profile. Because Gramercy is fenced, not walled in, the Calder and the rest of the evolving interior scenery are visible in all seasons to passers-by and the legions of dog-walkers who daily patrol the perimeter.

Parkside residents rationalize that their communal front yard is privatized for its own protection. Besides, they, not the city it enhances, have footed its bills for nearly two centuries. Any of the 39 buildings on the park that fails to pay the yearly assessment fee of $7,500 per lot, which grants it two keys — fees and keys multiply accordingly for buildings on multiple lots — will have its key privileges rescinded. The penalty is so painful that it has never had to be applied.

For connection-challenged mortals, though, the park is increasingly problematic to appreciate from within, particularly now that Arthur W. and William Lie Zeckendorf, and Robert A. M. Stern, the architect of their 15 Central Park West project, are recalibrating property values in a stratospheric direction by bringing the neighborhood its first-ever $42 million duplex penthouse, at 18 Gramercy Park South, formerly a Salvation Army residence for single women.

The unique housewarming gift the Zeckendorfs decided to bestow on the buyer-who-has-everything types purchasing there is none other than a small metallic item they might not already own: a personal key to the park. (...)

The locks and keys are changed every year, and the four gates are, for further safekeeping, self-locking: the key is required for exiting as well as entering.

“In a way it’s kind of a priceless amenity,” said Maurice Mann, the landlord who restored 36 Gramercy Park East, “because everyone is so enamored with the park, and owning a key still holds a certain amount of bragging rights and prestige. Not everybody can have one, so it’s like, if there’s something I can’t have, I want it.”

by Robin Finn, NY Times |  Read more:
Photo: Chang W. Lee

Saturday, September 29, 2012

Fender: A Guitar Maker Aims to Stay Plugged In

In 1948, a radio repairman named Leo Fender took a piece of ash, bolted on a length of maple and attached an electronic transducer.

You know the rest, even if you don’t know you know the rest.

You’ve heard it — in the guitar riffs of Buddy Holly, Jimi Hendrix, George Harrison, Keith Richards, Eric Clapton, Pete Townshend, Bruce Springsteen, Mark Knopfler, Kurt Cobain and on and on.

It’s the sound of a Fender electric guitar. Mr. Fender’s company, now known as the Fender Musical Instruments Corporation, is the world’s largest maker of guitars. Its Stratocaster, which made its debut in 1954, is still a top seller. For many, the Strat’s cutting tone and sexy, double-cutaway curves mean rock ’n’ roll.

But this heart of rock isn’t beating quite the way it once did. Like many other American manufacturers, Fender is struggling to hold on to what it’s got in a tight economy. Sales and profits are down this year. A Strat, after all, is what economists call a consumer discretionary item — a nonessential.

More than macroeconomics, however, is at work here. Fender, based in Scottsdale, Ariz., is also being buffeted by powerful forces on Wall Street.

A private investment firm, Weston Presidio, controls nearly half of the company and has been looking for an exit. It pushed to take Fender public in March, to howls in the guitar-o-sphere that Fender was selling out. But, to Fender’s embarrassment, investors balked. They were worried about the lofty price and, even more, about how Fender can keep growing.

And that, really, is the crux of the matter. Times have changed, and so has music. In the 1950s, ’60s and ’70s, electric guitars powered rock and pop. Today, turntable rigs, drum machines and sampler synthesizers drive music like hip-hop. Electric guitars, huge as they are, have lost some of their old magic in this era of Jay-Z, Kanye West and “The Voice.”

Games like Guitar Hero have helped underpin sales, but teenagers who once might have hankered after guitars now get by making music on laptops. It’s worth remembering that the accordion was once the most popular instrument in America.

Granted, Fender is such a powerful brand that it can ride out the lean times. But sales of all kinds of musical instruments plunged during the recession, and they still haven’t recovered fully. Sales of all instruments in the United States totaled $6.5 billion last year, down roughly 13 percent from their peak in 2005, according to Music Trades, which tracks the industry.

Many of the guitars that are selling these days are cheap ones made in places like China — ones that cost a small fraction of, say, a $1,599 Fender Artist “Eric Clapton” Strat. Fender has been making its own lines of inexpensive guitars overseas for years, but the question is how the company can keep growing and compete profitably in a fast-moving, global marketplace. Its margins are already under pressure.

“What possible niche is left unexploited by Fender?” asks Jeffrey Bronchick, founder of Cove Street Capital, an investment advisory firm in El Segundo, Calif., and the owner of some 40 guitars, including four Fenders.

by Janet Morrissey, NY Times |  Read more:
Photo: Monica Almeida/

How Wikipedia Works


Imagine a world without Wikipedia: a place where this encyclopaedic behemoth couldn’t be scanned by anxious school students keen to speed up their homework by rewriting the first four paragraphs of hard-working Wikipedia editors. Can you envision University undergraduates actually picking up a dead-tree media copy of Encyclopaedia Britannica, or actually physically visiting a brick and mortar library to ferret out original sources? Neither can I – especially when considering the fact Wikipedia is now one of the most established and well patronised reference websites in the contemporary world.

Not only is Wikipedia one of the most used resources for data-gathering and seemingly instantaneous information retrieval – it’s also free to use with no advertisements clogging up the interface (except those quirky requests to donate). So just how does such a key knowledge resource function, and who are the faces behind such an indispensable modern take on the traditional Encyclopaedia?

In 1994, Ward Cunningham created a website format previously unknown to Internet users at the time: one that would drive knowledge creation and collation to new heights. This website was called a Wiki.

This style of website propelled the user into the collaboration spotlight, by encouraging anyone to update and edit internet-hosted content in real time. Although this early version of collaborative content creation now seems standard, remember that this was both pre social media and prior to the multitudes of current platforms and applications that cater for users who are keen to create, edit and share information in an aggregation space.

On his personal web page, Ward says of the Wiki:
The idea of a “Wiki” may seem odd at first, but dive in, explore its links and it will soon seem familiar. “Wiki” is a composition system; it’s a discussion medium; it’s a repository; it’s a mail system; it’s a tool for collaboration. We don’t know quite what it is, but we do know it’s a fun way to communicate asynchronously across the network.
In 2000, this original concept of tying large amounts of content to an open, network-based collation system propelled Jimmy Wales and Larry Sanger to create Nupedia. Nupedia (unlike Wikipedia) was initially created as a for-profit project that acted as an online encyclopaedia funded by Bomis. The project had lofty ideals: it was to be free to access for all users, and all content was to be academically robust, with a mandatory peer-review policy as well as a 7-step review process.

From the outset, Nupedia had performance problems. In its first year, Nupedia had only 20 or so articles approved for publication, with as many as 150 drafts stagnating in the yet-to-be-published vault. The founders of Nupedia also assumed that scholars and experts would volunteer high-end content, despite the absence of incentives to do so. Then there was the infighting between Sanger and Wales, with Sanger determined to maintain strict control over the content of all published material and demanding more reliable contributions (Sanger would go on to create a more academically robust alternative, Citizendium, which is still in operation).

What further added to the demise of Nupedia was the fact that both Sanger and Wales wanted to adopt the Wiki format in order to utilise elements they thought would act to enhance the existing Nupedia model, such as ease of editing, less restricted review processes and a more inclusive and open approach to information organization. Thus, Wikipedia was born as a side-project to help enhance Nupedia – instead, it ended up eclipsing it and providing the trigger for Nupedia’s eventual demise.

When the non-profit Wikipedia project went live in January 2001, both Sanger and Wales had no idea that this side-project would become the force it is today. By 2002, this knowledge repository contained upwards of 20,000 entries: at the end of 2006, it had reached the 1 million article mark. Wikipedia itself provides the latest up-to-date stats regarding the current statistics concerning the present contributor base and popularity:

As of September 2012, Wikipedia includes over 23 million freely usable articles in 285 languages, written by over 36 million registered users and numerous anonymous contributors worldwide. According to Alexa Internet, Wikipedia is the world’s sixth-most-popular website, visited monthly by around 12% of all internet users.

by Mez Breeze, The Next Web |  Read more:
Photo: Mandel Ngan/Getty Images

[ed. Golf ball impact at 150 mph. Today's PGA tour pros normally hit the ball around 120 mph, so this is a bit of an exaggeration but still quite impressive.]

[ed. I've been experiencing this a lot lately and it pisses me off (sometimes even the ad won't load). My usual response is to just shut the whole thing down and move on. Hopefully video delivery systems will find a better way to monetize their content than by alienating viewers.]

Native Tongues


The scene is a mysterious one, beguiling, thrilling, and, if you didn’t know better, perhaps even a bit menacing. According to the time-enhanced version of the story, it opens on an afternoon in the late fall of 1965, when without warning, a number of identical dark-green vans suddenly appear and sweep out from a parking lot in downtown Madison, Wisconsin. One by one they drive swiftly out onto the city streets. At first they huddle together as a convoy. It takes them only a scant few minutes to reach the outskirts—Madison in the sixties was not very big, a bureaucratic and academic omnium-gatherum of a Midwestern city about half the size of today. There is then a brief halt, some cursory consultation of maps, and the cars begin to part ways.

All of this first group of cars head off to the south. As they part, the riders wave their farewells, whereupon each member of this curious small squadron officially commences his long outbound adventure—toward a clutch of carefully selected small towns, some of them hundreds and even thousands of miles away. These first few cars are bound to cities situated in the more obscure corners of Florida, Oklahoma, and Alabama. Other cars that would follow later then went off to yet more cities and towns scattered evenly across every corner of every mainland state in America. The scene as the cars leave Madison is dreamy and tinted with romance, especially seen at the remove of nearly fifty years. Certainly nothing about it would seem to have anything remotely to do with the thankless drudgery of lexicography.

But it had everything to do with the business, not of illicit love, interstate crime, or the secret movement of monies, but of dictionary making. For the cars, which would become briefly famous, at least in the somewhat fame-starved world of lexicography, were the University of Wisconsin Word Wagons. All were customized 1966 Dodge A100 Sportsman models, purchased en masse with government grant money. Equipped for long-haul journeying, they were powered by the legendarily indestructible Chrysler Slant-Six 170-horsepower engine and appointed with modest domestic fixings that included a camp bed, sink, and stove. Each also had two cumbersome reel-to-reel tape recorders and a large number of tape spools.

The drivers and passengers who manned the wagons were volunteers bent to one overarching task: that of collecting America’s other language. They were being sent to more than a thousand cities, towns, villages, and hamlets to discover and record, before it became too late and everyone started to speak like everybody else, the oral evidence of exactly what words and phrases Americans in those places spoke, heard, and read, out in the boondocks and across the prairies, down in the hollows and up on the ranges, clear across the great beyond and in the not very long ago.

These volunteers were charged with their duties by someone who might at first blush seem utterly unsuitable for the task of examining American speech: a Briton, born in Kingston, of a Canadian father and a Jamaican mother: Frederic Gomes Cassidy, a man whose reputation—he died twelve years ago, aged ninety-two—is now about to be consolidated as one of the greatest lexicographers this country has ever known. Cassidy’s standing—he is now widely regarded as this continent’s answer to James Murray, the first editor of the Oxford English Dictionary; Cassidy was a longtime English professor at the University of Wisconsin, while Murray’s chops were earned at Oxford—rests on one magnificent achievement: his creation of a monumental dictionary of American dialect speech, conceived roughly half a century ago, and over which he presided for most of his professional life.

The five-thousand-page, five-volume book, known formally as the Dictionary of American Regional English and colloquially just as DARE, is now at last fully complete. The first volume appeared in 1985: it listed tens of thousands of geographically specific dialect words, from tall flowering plants known in the South as “Aaron’s Rod,” to a kind of soup much favored in Wisconsin, made from duck’s blood, known as “czarina.” The next two volumes appeared in the 1990s, the fourth after 2000, so assiduously planned and organized by Cassidy as to be uninterrupted by his passing. The fifth and final volume, the culminating triumph of this extraordinary project, is being published this March—it offers up regionalisms running alphabetically from “slab highway” (as concrete-covered roads are apparently still known in Indiana and Missouri) to “zydeco,” not the music itself, but a kind of raucous and high-energy musical party that is held in a long swathe of villages arcing from Galveston to Baton Rouge.

“Aaron’s rod” to “zydeco”—between these two verbal bookends lies an immense and largely hidden American vocabulary, one that surely, more than perhaps any other aspect of society, reveals the wonderfully chaotic pluribus out of which two centuries of commerce and convention have forged the duller reality of the unum. Which was precisely what Cassidy and his fellow editors sought to do—to capture, before it faded away, the linguistic coat of many colors of this immigrant-made country, and to preserve it in snapshot, in part for strictly academic purposes, in part for the good of history, and in part, maybe, on the off chance that the best of the lexicon might one day be revived.

by Simon Winchester, Lapham's Quarterly |  Read more:
Photo: UW-Madison Archives

Color of Blues