Wednesday, October 23, 2013

The Decline of Wikipedia


The sixth most widely used website in the world is not run anything like the others in the top 10. It is not operated by a sophisticated corporation but by a leaderless collection of volunteers who generally work under pseudonyms and habitually bicker with each other. It rarely tries new things in the hope of luring visitors; in fact, it has changed little in a decade. And yet every month 10 billion pages are viewed on the English version of Wikipedia alone. When a major news event takes place, such as the Boston Marathon bombings, complex, widely sourced entries spring up within hours and evolve by the minute. Because there is no other free information source like it, many online services rely on Wikipedia. Look something up on Google or ask Siri a question on your iPhone, and you’ll often get back tidbits of information pulled from the encyclopedia and delivered as straight-up facts.

Yet Wikipedia and its stated ambition to “compile the sum of all human knowledge” are in trouble. The volunteer workforce that built the project’s flagship, the English-language Wikipedia—and must defend it against vandalism, hoaxes, and manipulation—has shrunk by more than a third since 2007 and is still shrinking. The volunteers who remain seem incapable of fixing the flaws that keep Wikipedia from becoming a high-quality encyclopedia by any standard, including the project’s own. Among the significant problems that aren’t getting resolved is the site’s skewed coverage: its entries on Pokémon and female porn stars are comprehensive, but its pages on female novelists or places in sub-Saharan Africa are sketchy. Authoritative entries remain elusive. Of the 1,000 articles that the project’s own volunteers have tagged as forming the core of a good encyclopedia, most don’t earn even Wikipedia’s own middle-ranking quality scores.

The main source of those problems is not mysterious. The loose collective running the site today, estimated to be 90 percent male, operates a crushing bureaucracy with an often abrasive atmosphere that deters newcomers who might increase participation in Wikipedia and broaden its coverage.

In response, the Wikimedia Foundation, the 187-person nonprofit that pays for the legal and technical infrastructure supporting Wikipedia, is staging a kind of rescue mission. The foundation can’t order the volunteer community to change the way it operates. But by tweaking Wikipedia’s website and software, it hopes to steer the encyclopedia onto a more sustainable path.

The foundation’s campaign will bring the first major changes in years to a site that is a time capsule from the Web’s earlier, clunkier days, far removed from the easy-to-use social and commercial sites that dominate today. “Everything that Wikipedia is was utterly appropriate in 2001 and it’s become increasingly out of date since,” says Sue Gardner, executive director of the foundation, which is housed on two drab floors of a downtown San Francisco building with a faulty elevator. “This is very much our attempt to get caught up.” She and Wikipedia’s founder, Jimmy Wales, say the project needs to attract a new crowd to make progress. “The biggest issue is editor diversity,” says Wales. He hopes to “grow the number of editors in topics that need work.”

Whether that can happen depends on whether enough people still believe in the notion of online collaboration for the greater good—the ideal that propelled Wikipedia in the beginning. But the attempt is crucial; Wikipedia matters to many more people than its editors and students who didn’t make time to read their assigned books. More of us than ever use the information found there, both directly and via other services. Meanwhile, Wikipedia has either killed off the alternatives or pushed them down the Google search results.

by Tom Simonite, MIT Technology Review | Read more:
Image: Wikipedia

Edison’s Revenge


Fiddly cables, incompatible plugs and sockets, and the many adaptors needed to fit them all together used to be the traveller’s bane. But the USB (Universal Serial Bus) has simplified matters. Most phones and other small gadgets can charge from a simple USB cable plugged into a computer or an adaptor. Some 10 billion of them are already in use. Hotel rooms, aircraft seats, cars and new buildings increasingly come with USB sockets as a standard electrical fitting.

Now a much bigger change is looming. From 2014, a USB cable will be able to provide power to bigger electronic devices. In the long term this could change the way homes and offices use electricity, cutting costs and improving efficiency.

The man who invented the USB, Ajay Bhatt of Intel, a chipmaker, barely thought about power. His main aim was to cut the clutter and time-wasting involved in plugging things into a computer. The keyboard, mouse, speakers and so forth all required different cables, and often drivers (special bits of software) as well. The USB connection’s chief role was to help computers and devices negotiate and communicate.

Mr Bhatt did not think he was creating a new charging system. Indeed, the trickle of electricity (up to ten watts on the existing standard) is still barely enough for devices such as an iPad. Yet USB charging is now the default for phones, e-readers and other small gadgets. Some mobile-phone manufacturers are already shipping their products without a power adaptor. Ingenious inventors have eked out the slender USB power supply to run fans, tiny fridges and toy rocket-launchers.

The big change next year will be a new USB PD (Power Delivery) standard, which brings much more flexibility and ten times as much oomph: up to 100 watts. (...)

Current affairs

That could presage a much bigger shift, reviving the cause of direct current (DC) as the preferred way to power the growing number of low-voltage devices in homes and offices. DC has been something of a poor relation in the electrical world since it lost out to alternating current (AC) in a long-ago battle in which AC’s champion, Nikola Tesla (pictured, left), trounced DC’s advocate, Thomas Edison (right). Tesla won, among other reasons, because AC was (in those days) easier to shift between different voltages. It was therefore a better system for transmitting and distributing electricity.

But the tide may be turning. Turning AC into the direct current required to power transistors (the heart of all electronic equipment) is a nuisance. The usual way is through a mains adaptor. These ubiquitous little black boxes are now cheap and light. But they are often inefficient, turning power into heat. And they are dumb: they run night and day, regardless of whether the price of electricity is high or low. It would be better to have a DC network, of the kind Mr Daniel has rigged up, for all electronic devices in a home or office.

This is where USB cables come in. They carry direct current and also data. That means they can help set priorities between devices that are providing power and those that are consuming it: for example, a laptop that is charging a mobile phone. “The computer can say ‘I need to start the hard disk now, so no charging for the next ten seconds’,” says Mr Bhatt. The new standard, with variable voltage and greater power, enlarges the possibilities. So does another new feature: that power can flow in any direction.
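Mr Bhatt’s laptop-and-phone example suggests a simple budgeting logic on the power-providing side. The sketch below is purely illustrative: the class and method names are invented for this article, and real USB PD negotiation follows the standard’s own message protocol, but it shows the kind of source-side power accounting described above, assuming the new 100-watt ceiling.

```python
# Toy model of source-side power budgeting, loosely inspired by what the
# USB PD standard enables. Names and behaviour are illustrative only.

class PowerSource:
    """A device (e.g. a laptop port) that can offer power up to a budget."""

    def __init__(self, max_watts):
        self.max_watts = max_watts
        self.reserved = 0.0

    def request(self, watts):
        """Grant a sink's request only if the remaining budget allows it."""
        if self.reserved + watts <= self.max_watts:
            self.reserved += watts
            return True
        return False

    def release(self, watts):
        """Hand power back, e.g. 'no charging for the next ten seconds'
        while the laptop spins up its hard disk."""
        self.reserved = max(0.0, self.reserved - watts)

laptop_port = PowerSource(max_watts=100)  # the new USB PD ceiling
assert laptop_port.request(60)            # a monitor asks for 60 W: granted
assert laptop_port.request(7.5)           # a phone asks for 7.5 W: granted
assert not laptop_port.request(60)        # a second monitor: over budget
laptop_port.release(7.5)                  # the phone pauses charging
```

Because USB PD also lets power flow in either direction, a real implementation would additionally track which end is currently acting as source and which as sink; this sketch models only the budgeting half.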

This chimes with another advantage. A low-voltage DC network works well with solar panels. These produce DC power at variable times and in variable amounts. They are increasingly cheap, and can fit in windows or on roofs. Though solar power is tricky to feed into the AC mains grid, it is ideally suited to a low-voltage local DC network. When the sun is shining, it can help charge all your laptops, phones and other battery-powered devices.

by The Economist |  Read more:
Image: Matt Herring

$10 Smartphone to Digital Microscope Conversion


The world is an interesting place, but it's fascinating up close. Through the lens of a microscope you can find details that you would otherwise never notice. With the stand described below, now you can.

This instructable will show you how to build a stand for about $10 that will transform your smartphone into a powerful digital microscope. This DIY conversion stand is more than capable of functioning in an actual laboratory setting. With magnification levels as high as 175x, plant cells and their nuclei are easily observed! In addition to allowing the observation of cells, this setup also produces stunning macro photography.
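The Instructable doesn’t spell out the optics, but the 175x figure is consistent with the standard simple-magnifier relation (magnification relative to the eye’s 250 mm near point); treating the phone’s add-on lens as a simple magnifier is an assumption here:

```latex
M \approx \frac{250\ \text{mm}}{f}
\quad\Rightarrow\quad
f \approx \frac{250\ \text{mm}}{175} \approx 1.4\ \text{mm}
```

In other words, magnification that high implies a lens with a focal length of only about a millimetre and a half, which is why conversions like this rely on a very small, strongly curved lens rather than an ordinary magnifying glass.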

The photos in this instructable were taken with an iPhone 4S.
 

by Yoshinok, Instructables |  Read more:
Image: Yoshinok

Emil Nolde, Alps Mountain Landscape 1930.
via:

Tuesday, October 22, 2013

Are We Puppets in a Wired World?

Internet activities like online banking, social media, web browsing, shopping, e-mailing, and music and movie streaming generate tremendous amounts of data, while the Internet itself, through digitization and cloud computing, enables the storage and manipulation of complex and extensive data sets. Data—especially personal data of the kind shared on Facebook and the kind sold by the state of Florida, harvested from its Department of Motor Vehicles records, and the kind generated by online retailers and credit card companies—is sometimes referred to as “the new oil,” not because its value derives from extraction, which it does, but because it promises to be both lucrative and economically transformative.

In a report issued in 2011, the World Economic Forum called for personal data to be considered “a new asset class,” declaring that it is “a new type of raw material that’s on par with capital and labour.” Morozov quotes an executive from Bain and Company, which coauthored the Davos study, explaining that “we are trying to shift the focus from purely privacy to what we call property rights.” It’s not much of a stretch to imagine who stands to gain from such “rights.”

Individually, data points are typically small and inconsequential, which is why, day to day, most people are content to give them up without much thought. They only come alive in aggregate and in combination and in ways that might never occur to their “owner.” For instance, records of music downloads and magazine subscriptions might allow financial institutions to infer race and deny a mortgage. Or search terms plus book and pharmacy purchases can be used to infer a pregnancy, as the big-box store Target has done in the past. (...)

This brings us back to DARPA and its quest for an algorithm that will sift through all manner of seemingly disconnected Internet data to smoke out future political unrest and acts of terror. Diagnosis is one thing, correlation something else, prediction yet another order of magnitude, and for better and worse, this is where we are taking the Internet. Police departments around the United States are using Google maps, together with crime statistics and social media, to determine where to patrol, and half of all states use some kind of predictive data analysis when making parole decisions. More than that, gush the authors of Big Data:
In the future—and sooner than we may think—many aspects of our world will be augmented or replaced by computer systems that today are the sole purview of human judgment…perhaps even identifying “criminals” before one actually commits a crime.
The assumption that decisions made by machines that have assessed reams of real-world information are more accurate than those made by people, with their foibles and prejudices, may be correct generally and wrong in the particular; and for those unfortunate souls who might never commit another crime even if the algorithm says they will, there is little recourse. In any case, computers are not “neutral”; algorithms reflect the biases of their creators, which is to say that prediction cedes an awful lot of power to the algorithm creators, who are human after all. Some of the time, too, proprietary algorithms, like the ones used by Google and Twitter and Facebook, are intentionally biased to produce results that benefit the company, not the user, and some of the time algorithms can be gamed. (There is an entire industry devoted to “optimizing” Google searches, for example.)

But the real bias inherent in algorithms is that they are, by nature, reductive. They are intended to sift through complicated, seemingly discrete information and make some sort of sense of it, which is the definition of reductive. But it goes further: the infiltration of algorithms into everyday life has brought us to a place where metrics tend to rule. This is true for education, medicine, finance, retailing, employment, and the creative arts. There are websites that will analyze new songs to determine if they have the right stuff to be hits, the right stuff being the kinds of riffs and bridges found in previous hit songs.

Amazon, which collects information on what readers do with the electronic books they buy—what they highlight and bookmark, if they finish the book, and if not, where they bail out—not only knows what readers like, but what they don’t, at a nearly cellular level. This is likely to matter as the company expands its business as a publisher. (Amazon already found that its book recommendation algorithm was more likely than the company’s human editors to convert a suggestion into a sale, so it eliminated the humans.)

Meanwhile, a company called Narrative Science has an algorithm that produces articles for newspapers and websites by wrapping current events into established journalistic tropes—with no pesky unions, benefits, or sick days required. Call me old-fashioned, but in each case, idiosyncrasy, experimentation, innovation, and thoughtfulness—the very stuff that makes us human—is lost. A culture that values only what has succeeded before, where the first rule of success is that there must be something to be “measured” and counted, is not a culture that will sustain alternatives to market-driven “creativity.”

by Sue Halpern, NY Review of Books |  Read more:
Image: Eric Edelman

Katsushika Hokusai - Kanagawa oki nami ura (1830-31) (variation)

Healthcare.gov: It Could Be Worse

On October 1st, the first day of the government shutdown, the U.S. Centers for Medicare & Medicaid Services launched Healthcare.gov, a four-hundred-million-dollar online marketplace designed to help Americans research and purchase health insurance. In its first days, only a small fraction of users could create an account or log in. The problems were initially attributed to high demand. But as days turned into weeks, Healthcare.gov’s troubles only seemed to multiply. Reports appeared of applications freezing half-completed and of the system “putting users in inescapable loops, and miscalculating healthcare subsidies.” Politico reported that “Web brokers … have been unable to connect to the federal system.” Healthcare.gov is the public face of the Obama Administration’s signature policy achievement, and its launch has been widely derided as a disaster. But it could have been worse.

On September 11, 2001, the F.B.I. was still using a computer system that couldn’t store or display pictures; entering data was time-consuming and awkward, and retrieving it even more so. A 9/11 Commission staff report concluded that “the FBI’s primary information management system, designed using 1980s technology already obsolete when installed in 1995, limited the Bureau’s ability to share its information internally and externally.” But an overhaul of that system had already begun in the months leading up to 9/11. In June, 2001, the F.B.I. awarded the contractor Science Applications International Corp. (S.A.I.C.) a fourteen-million-dollar contract to upgrade the F.B.I.’s computer systems. The project was called Virtual Case File, or V.C.F., and it would ultimately cost over six hundred million dollars before finally being abandoned, in early 2005, unfinished and never deployed. V.C.F. was then replaced with a project called Sentinel, expected to launch in 2009, which was “designed to be everything V.C.F. was not, with specific requirements, regular milestones and aggressive oversight,” according to F.B.I. officials who spoke to the Washington Post in 2006. But by 2010, Sentinel was also being described as “troubled,” and only two out of a planned four phases had been completed. Sentinel was finally deployed on July 1, 2012, after the F.B.I. took over the project from the contractor Lockheed Martin in 2010, bringing it in-house for completion—at an ultimate cost of at least four hundred and fifty-one million dollars. In the end, the upgrade took the F.B.I. more than a decade and over a billion dollars.

Healthcare.gov is not so much a Web site as an interface for accessing a collection of databases and information systems. Behind the nicely designed Web forms are systems to create accounts, manage user logins, and collect insurance-application data. There’s a part that determines subsidy eligibility, a part that sends applications to the right insurance company, and other parts that glue these things together. Picture the dashboard of your car, which has a few knobs and buttons, some switches, and a big wheel—simple controls for a lot of complex machinery under the hood. All of these systems, whether in your car or on Healthcare.gov, have to communicate the right information at the right time for any of it to work properly. In the case of Healthcare.gov, we don’t know what precisely has gone wrong, because the system isn’t open-source—meaning the code used to build it isn’t available for anyone to see—and nobody involved has released technical information. But the multiple databases and subsystems are probably distributed all over the country, written in a variety of computer languages, and handle data in very different ways. Some are brand new, others are old.

For large software projects, failure is generally determined early in the process, because failures almost exclusively have to do with planning: the failure to create a workable plan, to stick to it, or both. Healthcare.gov reportedly involved over fifty-five contractors, managed by a human-services agency that lacked deep experience in software engineering or project management. The final product had to be powerful enough to navigate any American through a complex array of different insurance offerings, secure enough to hold sensitive private data, and robust enough to withstand peak traffic in the hundreds of thousands, if not millions, of concurrent users. It also had to be simple enough so that anyone who can open a Web browser could use it. In complexity, this is a project on par with the F.B.I.’s V.C.F. or Sentinel. The number and variety of systems to be connected may not be quite as large, but the interface had to be usable by anyone, without special training. And, unlike V.C.F., Healthcare.gov was given only twenty-two months from contract award to launch—less than two years for a project similar to one that took the F.B.I. more than ten years and over twice the budget.

by Rusty Foster, New Yorker |  Read more:
Image: Michael Kupperman

Reflections on a Paris Left Behind


Even Hemingway struggled with this city, working on a memoir of his poor early days, “A Moveable Feast,” off and on for years, before it was finally published after his death. Christopher Hitchens once called it “an ur-text of the American enthrallment with Paris,” identifying an unthinking nostalgia “as we contemplate a Left Bank that has since become a banal tourist enclave in a Paris where the tough and plebeian districts are gone, to be replaced by seething Muslim banlieues all around the periphery.”

Sometimes, reading about Paris in newspapers, magazines and on Web sites devoted to tourism, I feel the clichés piling high enough to touch the Eiffel Tower — or even the still-hideous Tour Montparnasse, which for decades has given skyscrapers a bad name here.

All the clichés are still there, if that’s as far as you’re willing to look, from the supposedly haughty waiters to the baguettes and croissants and the nighttime lights on the Notre-Dame de Paris, shimmering with a faith now largely abandoned. (...)

There are parts of Paris that are “cool,” to be sure, but not the way London is, or Berlin, or even Amsterdam. Paris is a city of the well-to-do, mostly white, and their careful pleasures: museums, restaurants, opera, ballet and bicycle lanes. Bertrand Delanoë, the Paris mayor since 2001, is a Socialist Michael Bloomberg — into bobo virtues like health and the environment and very much down on cars.

Adam Gopnik, a New Yorker writer, finds “the Parisian achievement” to have created, in the 19th century, two concepts of society: “the Haussmannian idea of bourgeois order and comfort, and the avant-garde of ‘la vie de bohème.’ ” While these two societies seemed to be at war, he suggests, in fact they were “deeply dependent on each other.”

Today, however, the balance is gone, and Paris is too ordered, too antiseptic and too tightly policed to have much of a louche life beyond bourgeois adulteries. In that sense, something important has been lost. (...)

Paris is the most beautiful city in the world; to me, only Prague comes close. But Paris is also filthy. While tourists regard Paris with awe and respect, many Parisians treat it with studied indifference, a high virtue here, or with contempt.

It is the Parisians who leave dog excrement on the sidewalks, who ignore the trash containers. With smoking now supposedly banned inside restaurants, the terraces of cafes become more crowded. But the streets have become ashtrays, and the rubbish defeats the traditional sluicing of the gutters with city water by men with long green nylon brushes. Large parts of Paris remind me of how, in the never quite-so-bad old days, Times Square used to look at 8 a.m. on a Sunday.

France still gets more foreign tourists than any other country: 83 million in 2012, and 83 percent of them from Europe, compared with only 29.3 million who visited Britain. Paris alone gets 33 million tourists a year, half of them foreigners, many in search of that mythical place where Charles Aznavour meets Catherine Deneuve meets Zidane meets Dior, all drinking Champagne and nibbling foie gras, truffles, oysters and langouste.

While tourists to Israel sometimes suffer from the Jerusalem syndrome, imagining themselves in direct contact with God, some Japanese tourists suffer from what is called the “Paris Syndrome,” distraught at the difference between what they imagine and what they find. Of course, as Walt Whitman wrote about himself, Paris contains multitudes, and most visitors go away having found just enough of what they craved to develop a lifelong yearning to return.

by Steven Erlanger, NY Times |  Read more:
Image: Kosuke Okahara

New Technique Holds Promise for Hair Growth

Scientists have found a new way to grow hair, one that they say may lead to better treatments for baldness.

So far, the technique has been tested only in mice, but it has managed to grow hairs on human skin grafted onto the animals. If the research pans out, the scientists say, it could produce a treatment for hair loss that would be more effective and useful to more people than current remedies like drugs or hair transplants.

Present methods are not much help to women, but a treatment based on the new technique could be, the researchers reported Monday in Proceedings of the National Academy of Sciences.

Currently, transplants move hair follicles from the back of the head to the front, relocating hair but not increasing the amount. The procedure can take eight hours and leave a large scar on the back of the head. The new technique would remove a smaller patch of cells involved in hair formation from the scalp, culture them in the laboratory to increase their numbers, and then inject them back into the person’s head to fill in bald or thinning spots. Instead of just shifting hair from one spot to another, the new approach would actually add hair. (...)

In the current study, Dr. Christiano worked with researchers from Durham University in Britain. They focused on dermal papillae, groups of cells at the base of hair follicles that give rise to the follicles. Researchers have known for more than 40 years that papilla cells from rodents could be transplanted and would lead to new hair growth. The cells from the papillae have the ability to reprogram the surrounding skin cells to form hair follicles.

But human papilla cells, grown in culture, mysteriously lose the ability to make hair follicles form. A breakthrough came when the researchers realized they might be growing the cells the wrong way.

One of Dr. Christiano’s partners from Durham University, Dr. Colin Jahoda, noticed that the rodent papilla cells formed clumps in culture, but the human cells did not. Maybe the clumps were important, he reasoned. So, instead of trying to grow the cells the usual way, in a flat, one-cell layer on a petri dish, he turned to an older method called the “hanging drop culture.”

That method involves putting about 3,000 papilla cells — the number in a typical papilla — into a drop of culture medium on the lid of a dish, and then flipping the lid over so that the drops are hanging upside down.

“The droplets aren’t so heavy that they drip off,” Dr. Christiano said. “The force of gravity just takes the 3,000 cells and draws them into an aggregate at the bottom of the drop.”

The technique made all the difference. The cells seem to need to touch one another in three dimensions rather than two to send and receive the signals they need to induce hair formation.

by Denise Grady, NY Times |  Read more:
Image: Ruth Fremson

Monday, October 21, 2013


Andy Warhol, Kimiko
via:

Why Have Young People in Japan Stopped Having Sex?

Ai Aoyama is a sex and relationship counsellor who works out of her narrow three-storey home on a Tokyo back street. Her first name means "love" in Japanese, and is a keepsake from her earlier days as a professional dominatrix. Back then, about 15 years ago, she was Queen Ai, or Queen Love, and she did "all the usual things" like tying people up and dripping hot wax on their nipples. Her work today, she says, is far more challenging. Aoyama, 52, is trying to cure what Japan's media calls sekkusu shinai shokogun, or "celibacy syndrome".

Japan's under-40s appear to be losing interest in conventional relationships. Millions aren't even dating, and increasing numbers can't be bothered with sex. For their government, "celibacy syndrome" is part of a looming national catastrophe. Japan already has one of the world's lowest birth rates. Its population of 126 million, which has been shrinking for the past decade, is projected to plunge a further one-third by 2060. Aoyama believes the country is experiencing "a flight from human intimacy" – and it's partly the government's fault. (...)

The number of single people has reached a record high. A survey in 2011 found that 61% of unmarried men and 49% of women aged 18-34 were not in any kind of romantic relationship, a rise of almost 10% from five years earlier. Another study found that a third of people under 30 had never dated at all. (There are no figures for same-sex relationships.) Although there has long been a pragmatic separation of love and sex in Japan – a country mostly free of religious morals – sex fares no better. A survey earlier this year by the Japan Family Planning Association (JFPA) found that 45% of women aged 16-24 "were not interested in or despised sexual contact". More than a quarter of men felt the same way.

Many people who seek her out, says Aoyama, are deeply confused. "Some want a partner, some prefer being single, but few relate to normal love and marriage." However, the pressure to conform to Japan's anachronistic family model of salaryman husband and stay-at-home wife remains. "People don't know where to turn. They're coming to me because they think that, by wanting something different, there's something wrong with them." (...)

Marriage has become a minefield of unattractive choices. Japanese men have become less career-driven, and less solvent, as lifetime job security has waned. Japanese women have become more independent and ambitious. Yet conservative attitudes in the home and workplace persist. Japan's punishing corporate world makes it almost impossible for women to combine a career and family, while children are unaffordable unless both parents work. Cohabiting or unmarried parenthood is still unusual, dogged by bureaucratic disapproval.

Aoyama says the sexes, especially in Japan's giant cities, are "spiralling away from each other". Lacking long-term shared goals, many are turning to what she terms "Pot Noodle love" – easy or instant gratification, in the form of casual sex, short-term trysts and the usual technological suspects: online porn, virtual-reality "girlfriends", anime cartoons. Or else they're opting out altogether and replacing love and sex with other urban pastimes. (...)

Aversion to marriage and intimacy in modern life is not unique to Japan. Nor is growing preoccupation with digital technology. But what endless Japanese committees have failed to grasp when they stew over the country's procreation-shy youth is that, thanks to official shortsightedness, the decision to stay single often makes perfect sense. This is true for both sexes, but it's especially true for women. "Marriage is a woman's grave," goes an old Japanese saying that refers to wives being ignored in favour of mistresses. For Japanese women today, marriage is the grave of their hard-won careers.

by Abigail Haworth, Guardian |  Read more:
Image: Eric Rechsteiner

Free Thinkers

José Urbina López Primary School sits next to a dump just across the US border in Mexico. The school serves residents of Matamoros, a dusty, sunbaked city of 489,000 that is a flash point in the war on drugs. There are regular shoot-outs, and it’s not uncommon for locals to find bodies scattered in the street in the morning. To get to the school, students walk along a white dirt road that parallels a fetid canal. On a recent morning there was a 1940s-era tractor, a decaying boat in a ditch, and a herd of goats nibbling gray strands of grass. A cinder-block barrier separates the school from a wasteland—the far end of which is a mound of trash that grew so big, it was finally closed down. On most days, a rotten smell drifts through the cement-walled classrooms. Some people here call the school un lugar de castigo—“a place of punishment.”

For 12-year-old Paloma Noyola Bueno, it was a bright spot. More than 25 years ago, her family moved to the border from central Mexico in search of a better life. Instead, they got stuck living beside the dump. Her father spent all day scavenging for scrap, digging for pieces of aluminum, glass, and plastic in the muck. Recently, he had developed nosebleeds, but he didn’t want Paloma to worry. She was his little angel—the youngest of eight children.

After school, Paloma would come home and sit with her father in the main room of their cement-and-wood home. Her father was a weather-beaten, gaunt man who always wore a cowboy hat. Paloma would recite the day’s lessons for him in her crisp uniform—gray polo, blue-and-white skirt—and try to cheer him up. She had long black hair, a high forehead, and a thoughtful, measured way of talking. School had never been challenging for her. She sat in rows with the other students while teachers told the kids what they needed to know. It wasn’t hard to repeat it back, and she got good grades without thinking too much. As she headed into fifth grade, she assumed she was in for more of the same—lectures, memorization, and busy work.

Sergio Juárez Correa was used to teaching that kind of class. For five years, he had stood in front of students and worked his way through the government-mandated curriculum. It was mind-numbingly boring for him and the students, and he’d come to the conclusion that it was a waste of time. Test scores were poor, and even the students who did well weren’t truly engaged. Something had to change.

He too had grown up beside a garbage dump in Matamoros, and he had become a teacher to help kids learn enough to make something more of their lives. So in 2011—when Paloma entered his class—Juárez Correa decided to start experimenting. He began reading books and searching for ideas online. Soon he stumbled on a video describing the work of Sugata Mitra, a professor of educational technology at Newcastle University in the UK. In the late 1990s and throughout the 2000s, Mitra conducted experiments in which he gave children in India access to computers. Without any instruction, they were able to teach themselves a surprising variety of things, from DNA replication to English.

Juárez Correa didn’t know it yet, but he had happened on an emerging educational philosophy, one that applies the logic of the digital age to the classroom. That logic is inexorable: Access to a world of infinite information has changed how we communicate, process information, and think. Decentralized systems have proven to be more productive and agile than rigid, top-down ones. Innovation, creativity, and independent thinking are increasingly crucial to the global economy.

And yet the dominant model of public education is still fundamentally rooted in the industrial revolution that spawned it, when workplaces valued punctuality, regularity, attention, and silence above all else. (In 1899, William T. Harris, the US commissioner of education, celebrated the fact that US schools had developed the “appearance of a machine,” one that teaches the student “to behave in an orderly manner, to stay in his own place, and not get in the way of others.”) We don’t openly profess those values nowadays, but our educational system—which routinely tests kids on their ability to recall information and demonstrate mastery of a narrow set of skills—doubles down on the view that students are material to be processed, programmed, and quality-tested. School administrators prepare curriculum standards and “pacing guides” that tell teachers what to teach each day. Legions of managers supervise everything that happens in the classroom; in 2010 only 50 percent of public school staff members in the US were teachers.

The results speak for themselves: Hundreds of thousands of kids drop out of public high school every year. Of those who do graduate from high school, almost a third are “not prepared academically for first-year college courses,” according to a 2013 report from the testing service ACT. The World Economic Forum ranks the US just 49th out of 148 developed and developing nations in quality of math and science instruction. “The fundamental basis of the system is fatally flawed,” says Linda Darling-Hammond, a professor of education at Stanford and founding director of the National Commission on Teaching and America’s Future. “In 1970 the top three skills required by the Fortune 500 were the three Rs: reading, writing, and arithmetic. In 1999 the top three skills in demand were teamwork, problem-solving, and interpersonal skills. We need schools that are developing these skills.”

That’s why a new breed of educators, inspired by everything from the Internet to evolutionary psychology, neuroscience, and AI, is inventing radical new ways for children to learn, grow, and thrive. To them, knowledge isn’t a commodity that’s delivered from teacher to student but something that emerges from the students’ own curiosity-fueled exploration. Teachers provide prompts, not answers, and then they step aside so students can teach themselves and one another. They are creating ways for children to discover their passion—and uncovering a generation of geniuses in the process.

by Joshua Davis, Wired |  Read more:
Image: Peter Yang

Without Copyrights: Piracy, Publishing, and the Public Domain. What Exactly is "Piracy" in the Digital Age?

“PIRACY,” the newly created National Intellectual Property Rights Protection Coordination Center (IPR Center) informs DVD viewers, “is not a victimless crime.” Setting aside the fact that the IPR Center and its partners in the FBI and Department of Homeland Security target this message at precisely the wrong audience — those who’ve chosen to purchase or rent a DVD — the campaign raises a couple of questions. Is this “piracy” actually a “crime”? And more importantly, what exactly is “piracy”?

While content-industry trade groups like the Recording Industry Association of America (RIAA), Motion Picture Association of America (MPAA), and Association of American Publishers (AAP) would doubtless like to take credit for popularizing the term to mean “using creative products without the permission of the creator or rights holder,” “piracy” has meant that for centuries, as Robert Spoo points out in his new book Without Copyrights: Piracy, Publishing, and the Public Domain (Oxford). But it’s never been so simple, particularly in the United States, long a holdout from international copyright norms. “Piracy” is always a term of rhetoric, suggesting a legal force that it frequently does not have; the word was and is a tool to sway the public and lawmakers. And even as their copyright protections were dramatically expanded in the late 20th century, rights holders sought to broaden the definition of “piracy” and concomitantly shrink the public domain, that ocean of content free for all of us to use.

In Without Copyrights, Spoo provides a deeply researched case study of the complicated American copyright situation surrounding the great literary landmark of the 20th century, James Joyce’s 1922 novel Ulysses. He shows that lax and fuzzy copyright laws in the US created a large and fertile public domain that infuriated writers, benefited readers, and provided publishers an opportunity for informal self-governance. But most importantly for the current American debate about intellectual property, Spoo makes clear that “piracy” has never been a clear-cut concept. Rights holders like to define “piracy” as any act of which they disapprove, even when — as with unauthorized publication of Ulysses in the US, or sampling of funk records in 1980s rap recordings, or uploading clips from TV awards shows to YouTube — those acts are expressly or plausibly legal. In part by using loaded terms like “piracy” to influence legislators and law enforcement agencies, rights holders have sought, with recent success, to expand the legal meaning of those terms and to contract the cultural commons.

The context Spoo ably recreates, though, is the legal environment governing American publishing from the early 19th century through the post-World War II period. In the 19th century, the so-called “reprint industry,” which mined previously published books, largely British, dominated American publishing. And while reprinters bore most of the fixed costs facing any publishing concern (labor, materials, advertising, distribution) they had one great competitive advantage: they didn’t have to pay their authors. Until 1891, US law extended copyright protection only to works by American citizens, so these reprinters made a business model out of selling British books, generally without ever contacting (much less entering into an agreement with) their authors. It’s hard to think of a more obvious example of “piracy” than this, and authors from Dickens to Wilde fumed about their vast lost revenue. A familiar anecdote describes Dickens fans, desperate to find out whether Little Nell was dead, storming the New York wharves as ships laden with the latest issue of Master Humphrey’s Clock docked. Some of those impatient fans, though, were probably publishers’ agents, frantic to grab their copies, get back to their presses, and be the first ones to market with a “pirated,” but entirely legal, American edition of the novel.

Frustrating as it was to aggrieved British authors, the law had some justification. The US was a vast but largely under-booked nation in the early 1800s. In keeping with the spirit of the US Constitution’s Copyright Clause, which emphasizes that the real goal of copyright is not first and foremost the protection of an author’s rights but the promotion “of Science and useful Arts,” the law subsidized the production and dissemination of books. A lot of books. A lot of cheap books that would, Congress hoped, spread across (and educate) our widely dispersed and unschooled nation. And while the 1790 Copyright Act assured American citizens of copyright protection, ironically it did little to cultivate a native literary culture: why sign up an American author and pay royalties when one could print a guaranteed seller like Tennyson or George Eliot instead, and pocket the difference? As a result, British literature dominated American reading through the 19th century (with notable exceptions such as Uncle Tom’s Cabin, which was, in a neat turnabout, widely “pirated” in Britain).

If anyone could publish any British author, how, then, did the American publishing industry not consume itself through self-destructive cost-cutting? A professor at the University of Tulsa College of Law, Spoo is sensitive to the important distinctions between common law, legislated law, and informal community norms that carry the force of law, and thus identifies “trade courtesy” as the mechanism that saved publishing houses from bankrupting themselves through competitive discounting. These “pirate” publishers behaved more like a genteel cartel than like bootlegging gangsters, Spoo makes clear. A publisher would make it known among the community of reprinters that he intended to publish a given author or a book. Other publishers, parties to this informal gentlemen’s agreement, respected that publisher’s claim to that title, and renegades were punished through public shaming (manifested in advertisements that questioned the quality or authenticity of their texts) or, in the cases of particularly obstinate transgressors, commercial retaliation. Like Wal-Mart meeting Main Street, colluding reprinters would print their own editions of a violator’s books, pricing them ruinously low or even at a loss in pursuit of the greater good of the stability of the industry. At this time, in fact, while British authors referred to the entire American industry as “pirates,” publishers used the word internally to describe those members of their community who deviated from norms of trade courtesy.

by Greg Barnhisel, LA Review of Books |  Read more:
Image: Oxford University Press

Shovels & Rope