Saturday, April 27, 2013

Fapstinence

Traditionally, people undergo a bit of self-examination when faced with a potentially fatal rupture in their long-term relationship. Thirty-two-year-old Henry* admits that what he did was a little more extreme. “If you’d told me that I wasn’t going to masturbate for 54 days, I would have told you to fuck off,” he says.

Masturbation had been part of Henry’s daily routine since childhood. Although he remembered a scandalized babysitter who “found me trying to have sex with a chair” at age 5, Henry says he never felt shame about his habit. While he was of the opinion that a man who has a committed sexual relationship with porn was probably not going to have as successful a relationship with a woman, he had no qualms about watching it. Which he did most days.

Then, early last year and shortly before his girlfriend of two years moved to Los Angeles, Henry happened to watch a TED talk by the psychologist Philip Zimbardo called “The Demise of Guys.” It described males who “prefer the asynchronistic Internet world to the spontaneous interactions in social relationships” and therefore fail to succeed in school, work, and with women. When his girlfriend left, Henry went on to watch a TEDx talk by Gary Wilson, an anatomist and physiologist, whose lecture series, “Your Brain on Porn,” claims, among other things, that porn conditions men to want constant variety—an endless set of images and fantasies—and requires them to experience increasingly heightened stimuli to feel aroused. A related link led Henry to a community of people engaged in attempts to quit masturbation on the social news site Reddit. After reading the enthusiastic posts claiming improved virility, Henry began frequenting the site.

by Emily Witt, New York Magazine | Read more:
Photo: Bobby Doherty/New York Magazine



Pierrot and Harlequin - Anya Stasenko & Slava Leontiev
via:

Life in the City Is Essentially One Giant Math Problem


The systematic study of cities dates back at least to the Greek historian Herodotus. In the early 20th century, scientific disciplines emerged around specific aspects of urban development: zoning theory, public health and sanitation, transit and traffic engineering. By the 1960s, the urban-planning writers Jane Jacobs and William H. Whyte used New York as their laboratory to study the street life of neighborhoods, the walking patterns of Midtown pedestrians, the way people gathered and sat in open spaces. But their judgments were generally aesthetic and intuitive (although Whyte, photographing the plaza of the Seagram Building, derived the seat-of-the-pants formula for bench space in public spaces: one linear foot per 30 square feet of open area). “They had fascinating ideas,” says Luís Bettencourt, a researcher at the Santa Fe Institute, a think tank better known for its contributions to theoretical physics, “but where is the science? What is the empirical basis for deciding what kind of cities we want?” Bettencourt, a physicist, practices a discipline that shares a deep affinity with quantitative urbanism. Both require understanding complex interactions among large numbers of entities: the 20 million people in the New York metropolitan area, or the countless subatomic particles in a nuclear reaction.

The birth of this new field can be dated to 2003, when researchers at SFI convened a workshop on ways to “model”—in the scientific sense of reducing to equations—aspects of human society. One of the leaders was Geoffrey West, who sports a neatly trimmed gray beard and retains a trace of the accent of his native Somerset. He was also a theoretical physicist, but had strayed into biology, exploring how the properties of organisms relate to their mass. An elephant is not just a bigger version of a mouse, but many of its measurable characteristics, such as metabolism and life span, are governed by mathematical laws that apply all up and down the scale of sizes. The bigger the animal, the longer but the slower it lives: A mouse heart rate is around 500 beats per minute; an elephant’s pulse is 28. If you plotted those points on a logarithmic graph, comparing size with pulse, every mammal would fall on or near the same line. West suggested that the same principles might be at work in human institutions. From the back of the room, Bettencourt (then at Los Alamos National Laboratory) and José Lobo, an economist at Arizona State University (who majored in physics as an undergraduate), chimed in with the motto of physicists since Galileo: “Why don’t we get the data to test it?”

Out of that meeting emerged a collaboration that produced the seminal paper in the field: “Growth, Innovation, Scaling, and the Pace of Life in Cities.” In six pages dense with equations and graphs, West, Lobo and Bettencourt, along with two researchers from the Dresden University of Technology, laid out a theory about how cities vary according to size. “What people do in cities—create wealth, or murder each other—shows a relationship to the size of the city, one that isn’t tied just to one era or nation,” says Lobo. The relationship is captured by an equation in which a given parameter—employment, say—varies exponentially with population. In some cases, the exponent is 1, meaning whatever is being measured increases linearly, at the same rate as population. Household water or electrical use, for example, shows this pattern; as a city grows bigger its residents don’t use their appliances more. Some exponents are greater than 1, a relationship described as “superlinear scaling.” Most measures of economic activity fall into this category; among the highest exponents the scholars found were for “private [research and development] employment,” 1.34; “new patents,” 1.27; and gross domestic product, in a range of 1.13 to 1.26. If the population of a city doubles over time, or comparing one big city with two cities each half the size, gross domestic product more than doubles. Each individual becomes, on average, 15 percent more productive. Bettencourt describes the effect as “slightly magical,” although he and his colleagues are beginning to understand the synergies that make it possible. Physical proximity promotes collaboration and innovation, which is one reason the new CEO of Yahoo recently reversed the company’s policy of letting almost anyone work from home. The Wright brothers could build their first flying machines by themselves in a garage, but you can’t design a jet airliner that way.

Unfortunately, new AIDS cases also scale superlinearly, at 1.23, as does serious crime, 1.16. Lastly, some measures show an exponent of less than 1, meaning they increase more slowly than population. These are typically measures of infrastructure, characterized by economies of scale that result from increasing size and density. New York doesn’t need four times as many gas stations as Houston, for instance; gas stations scale at 0.77; total surface area of roads, 0.83; and total length of wiring in the electrical grid, 0.87.

Remarkably, this phenomenon applies to cities all over the world, of different sizes, regardless of their particular history, culture or geography. Mumbai is different from Shanghai is different from Houston, obviously, but in relation to their own pasts, and to other cities in India, China or the U.S., they follow these laws. “Give me the size of a city in the United States and I can tell you how many police it has, how many patents, how many AIDS cases,” says West, “just as you can calculate the life span of a mammal from its body mass.”
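
[ed. To make the scaling arithmetic concrete, here is a rough Python sketch of the power law the paper describes, Y = Y0 * N^beta, using the exponents quoted above. The baseline constant and the city populations are placeholders of mine, so only the relative comparisons mean anything.]

# Illustrative sketch of the urban scaling law described above: Y = Y0 * N**beta.
# The exponents come from the article; the baseline constant and the populations
# are hypothetical placeholders, so only ratios between city sizes are meaningful.

EXPONENTS = {
    "gross domestic product": 1.15,   # article quotes a range of 1.13 to 1.26
    "new patents": 1.27,
    "serious crime": 1.16,
    "road surface area": 0.83,        # sublinear: economies of scale
    "gas stations": 0.77,
}

def scaled_quantity(population, exponent, baseline=1.0):
    """Predict an urban indicator from population via Y = baseline * N**exponent."""
    return baseline * population ** exponent

def per_capita_change_on_doubling(exponent):
    """Fractional change in the per-resident quantity when population doubles."""
    return 2 ** (exponent - 1) - 1

for name, beta in EXPONENTS.items():
    change = per_capita_change_on_doubling(beta) * 100
    print(f"{name:24s} beta={beta:.2f}  per-capita change on doubling: {change:+.0f}%")

# Rough city-proper populations, purely to illustrate the gas-station comparison.
new_york, houston = 8_400_000, 2_100_000
ratio = scaled_quantity(new_york, 0.77) / scaled_quantity(houston, 0.77)
print(f"gas stations, New York relative to Houston: about {ratio:.1f}x, not 4x")

# With beta near 1.15 for GDP, doubling a city's population leaves each resident
# roughly 10 to 15 percent more productive; with beta below 1, per-capita
# infrastructure needs shrink as the city grows.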

by Jerry Adler, Smithsonian |  Read more:
Illustration by Traci Daberko

Their Master’s Voice - Michael Sowa
via:

Everything is Rigged: The Biggest Price-Fixing Scandal Ever

Conspiracy theorists of the world, believers in the hidden hands of the Rothschilds and the Masons and the Illuminati, we skeptics owe you an apology. You were right. The players may be a little different, but your basic premise is correct: The world is a rigged game. We found this out in recent months, when a series of related corruption stories spilled out of the financial sector, suggesting the world's largest banks may be fixing the prices of, well, just about everything.

You may have heard of the Libor scandal, in which at least three – and perhaps as many as 16 – of the name-brand too-big-to-fail banks have been manipulating global interest rates, in the process messing around with the prices of upward of $500 trillion (that's trillion, with a "t") worth of financial instruments. When that sprawling con burst into public view last year, it was easily the biggest financial scandal in history – MIT professor Andrew Lo even said it "dwarfs by orders of magnitude any financial scam in the history of markets."

That was bad enough, but now Libor may have a twin brother. Word has leaked out that the London-based firm ICAP, the world's largest broker of interest-rate swaps, is being investigated by American authorities for behavior that sounds eerily reminiscent of the Libor mess. Regulators are looking into whether or not a small group of brokers at ICAP may have worked with up to 15 of the world's largest banks to manipulate ISDAfix, a benchmark number used around the world to calculate the prices of interest-rate swaps.

Interest-rate swaps are a tool used by big cities, major corporations and sovereign governments to manage their debt, and the scale of their use is almost unimaginably massive. It's about a $379 trillion market, meaning that any manipulation would affect a pile of assets about 100 times the size of the United States federal budget.

It should surprise no one that among the players implicated in this scheme to fix the prices of interest-rate swaps are the same megabanks – including Barclays, UBS, Bank of America, JPMorgan Chase and the Royal Bank of Scotland – that serve on the Libor panel that sets global interest rates. In fact, in recent years many of these banks have already paid multimillion-dollar settlements for anti-competitive manipulation of one form or another (in addition to Libor, some were caught up in an anti-competitive scheme, detailed in Rolling Stone last year, to rig municipal-debt service auctions). Though the jumble of financial acronyms sounds like gibberish to the layperson, the fact that there may now be price-fixing scandals involving both Libor and ISDAfix suggests a single, giant mushrooming conspiracy of collusion and price-fixing hovering under the ostensibly competitive veneer of Wall Street culture.

Why? Because Libor already affects the prices of interest-rate swaps, making this a manipulation-on-manipulation situation. If the allegations prove to be right, that will mean that swap customers have been paying for two different layers of price-fixing corruption. If you can imagine paying 20 bucks for a crappy PB&J because some evil cabal of agribusiness companies colluded to fix the prices of both peanuts and peanut butter, you come close to grasping the lunacy of financial markets where both interest rates and interest-rate swaps are being manipulated at the same time, often by the same banks.

"It's a double conspiracy," says an amazed Michael Greenberger, a former director of the trading and markets division at the Commodity Futures Trading Commission and now a professor at the University of Maryland. "It's the height of criminality."

The bad news didn't stop with swaps and interest rates. In March, it also came out that two regulators – the CFTC here in the U.S. and the Madrid-based International Organization of Securities Commissions – were spurred by the Libor revelations to investigate the possibility of collusive manipulation of gold and silver prices. "Given the clubby manipulation efforts we saw in Libor benchmarks, I assume other benchmarks – many other benchmarks – are legit areas of inquiry," CFTC Commissioner Bart Chilton said.

But the biggest shock came out of a federal courtroom at the end of March – though if you follow these matters closely, it may not have been so shocking at all – when a landmark class-action civil lawsuit against the banks for Libor-related offenses was dismissed. In that case, a federal judge accepted the banker-defendants' incredible argument: If cities and towns and other investors lost money because of Libor manipulation, that was their own fault for ever thinking the banks were competing in the first place. (...)

Libor, which measures the prices banks charge one another to borrow money, is a perfect example, not only of this basic flaw in the price-setting system but of the weakness in the regulatory framework supposedly policing it. Couple a voluntary reporting scheme with too-big-to-fail status and a revolving-door legal system, and what you get is unstoppable corruption.

Every morning, 18 of the world's biggest banks submit data to an office in London about how much they believe they would have to pay to borrow from other banks. The 18 banks together are called the "Libor panel," and when all of these data from all 18 panelist banks are collected, the numbers are averaged out. What emerges, every morning at 11:30 London time, are the daily Libor figures.

Banks submit numbers about borrowing in 10 different currencies across 15 different time periods, e.g., loans as short as one day and as long as one year. This mountain of bank-submitted data is used every day to create benchmark rates that affect the prices of everything from credit cards to mortgages to currencies to commercial loans (both short- and long-term) to swaps.
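
[ed. As a rough illustration of the averaging step described above, here is a small Python sketch. The submitted rates are invented; the article says the panel's numbers are averaged, and the published methodology reportedly also discards the highest and lowest submissions before averaging, which is assumed here.]

# Hypothetical illustration of the daily benchmark calculation described above.
# The 18 submitted rates are invented. The trimming of the highest and lowest
# quartiles is an assumption about the methodology, not a claim from the article.

def benchmark(submissions, trim_fraction=0.25):
    """Average the middle of the panel's submissions after dropping the extremes."""
    ranked = sorted(submissions)
    k = int(len(ranked) * trim_fraction)          # how many to drop from each end
    middle = ranked[k:len(ranked) - k] if k else ranked
    return sum(middle) / len(middle)

# 18 hypothetical panel submissions for one currency and one maturity (percent).
honest = [0.51, 0.52, 0.52, 0.53, 0.53, 0.54, 0.54, 0.55, 0.55,
          0.55, 0.56, 0.56, 0.57, 0.57, 0.58, 0.58, 0.59, 0.60]

# Six submitters shade their numbers down to help a trading position.
shaded = honest[:]
for i in range(6):
    shaded[i] -= 0.05

print(f"benchmark from honest submissions: {benchmark(honest):.4f}%")
print(f"benchmark from shaded submissions: {benchmark(shaded):.4f}%")
# Because more banks shade their numbers than the trim discards, the published
# figure still drops (here by a full basis point), and that shift flows through
# every contract indexed to it.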

Dating back perhaps as far as the early Nineties, traders and others inside these banks were sometimes calling up the company geeks responsible for submitting the daily Libor numbers (the "Libor submitters") and asking them to fudge the numbers. Usually, the gimmick was the trader had made a bet on something – a swap, currencies, something – and he wanted the Libor submitter to make the numbers look lower (or, occasionally, higher) to help his bet pay off.

Famously, one Barclays trader monkeyed with Libor submissions in exchange for a bottle of Bollinger champagne, but in some cases, it was even lamer than that. This is from an exchange between a trader and a Libor submitter at the Royal Bank of Scotland:

SWISS FRANC TRADER: can u put 6m swiss libor in low pls?...
PRIMARY SUBMITTER: Whats it worth
SWISS FRANC TRADER: ive got some sushi rolls from yesterday?...
PRIMARY SUBMITTER: ok low 6m, just for u
SWISS FRANC TRADER: wooooooohooooooo. . . thatd be awesome

Screwing around with world interest rates that affect billions of people in exchange for day-old sushi – it's hard to imagine an image that better captures the moral insanity of the modern financial-services sector.

by Matt Taibbi, Rolling Stone |  Read more:
Illustration by Victor Juhasz


Christian Faur, Crayon Photography
via:

Searing Squid

Squid is easy to cook but hard to sear. It releases so much moisture when it hits the pan that it tends to steam rather than brown. And since it cooks so quickly (two to three minutes will do it), it is usually done before much of the liquid evaporates.

If I’m cooking squid in a sauce, the excess pan liquid is an asset. It has a wonderful ocean flavor, like fish stock without the work.

But sometimes, a pale golden sear, with its gentle toasty notes, is what I’m after. The secret is the meeting of an extremely hot pan with some extremely dry squid.

Since squid continues to ooze juices as it sits, vigilant wiping is a necessity. I like to rinse the sea creatures thoroughly, then cut their slim bodies into rings (tentacles can be left whole or halved as desired). I lay the rings out (cut side up) on a clean dish towel or several layers of paper towels and pat them dry. If I’ve planned ahead, I’ll let them air dry, allowing them to sit out for up to an hour.

Meanwhile, I’ll heat a heavy-duty pan for at least five minutes. Don’t use nonstick here; it impedes browning.

Then (and this is the crucial part) transfer the squid from the towels to a plate before moving it to the pan. The reason for this is that as the squid sits, it will release liquid and glue itself to the toweling. Transferring it to a plate first unsticks it, encouraging it to slide into the hot pan in one fell swoop so all of it cooks at the same rate.

Unless the pan is large and the quantity of squid small, cook the squid in batches, taking care not to overcrowd the pan. If you cram the bodies in like a rush-hour subway car in August, they’re bound to sweat.

Seared squid, deeply saline and caramelized, doesn’t need much in terms of seasonings. But garlic, fresh mint and sliced jalapeño add a welcome kick.

Recipe: Sauteed Squid with Chiles, Mint and Lime

by Melissa Clark, NY Times |  Read more:
Image: Andrew Scrivani for The New York Times

Friday, April 26, 2013

Bad Land

America is full of guns—one gun for every citizen—and Americans often use them to shoot one another. After this week’s failure of gun-control legislation to survive the Senate, it’s not enough anymore to say Americans love their guns. The question is: Why do we kill?

The essential American soul is hard, isolate, stoic, and a killer. It has never yet melted. —D.H. Lawrence
(...) When I started my research, I must admit, I assumed the academic literature would turn up unassailable arguments along the lines of this headline from Harvard: “Where there are more guns there is more homicide.”

But the literature on guns is just as messy as the statistics—often completely contradictory, with some studies showing convincing correlation between guns and homicide, while others equally convincingly show none. In the end, most of it shares an unfortunate quality: too much of the language implies causation (or lack thereof) between guns and death but only really shows varying levels of correlation. This is a problem in much social science research, but it seems particularly vivid here. A Centers for Disease Control task force in 2003 summed up the situation nicely: “The application of imperfect methods to imperfect data has commonly resulted in inconsistent and otherwise insufficient evidence with which to determine the effectiveness of firearms laws in modifying violent outcomes.”

Talk about a moving target.

But the ambiguity can be useful, if you’re willing to explore the dark gap between correlation and causation. So if we can give up on guns as the root of the problem, just for a while, there are a host of possible other causes for the special American brand of rich country violence. Let’s start with what we are not going to discuss. The list runs very roughly from least relevant to most relevant:

Hurricanes. Tornadoes. Riots. Terrorists. Gangs. Lone criminals
Race
Mental illness
Drug use
Religiosity or lack thereof
Violent media and video games
Poverty
Gun control laws or lack thereof
Crises of masculinity
Culture of honor
Public faith or lack thereof in government
Inequality

Of all these, income inequality rings the most true—and there is high correlation between inequality and homicide in studies—but beneath even that there is another issue that transcends all the standard bugaboos of race, class, and poverty, one possibly rooted deep in the primate building blocks of humanity.

It’s called social capital, and while it’s a relatively new term, it is an old concept, with American roots reaching as far back as Alexis de Tocqueville and his classic analysis of the United States in the 1830s, in which he identified both American individualism and an American propensity to gather into groups “very general and very particular, immense and very small.”

“No sooner do you set foot upon the American soil, than you are stunned by a kind of tumult; a confused clamour is heard on every side; and a thousand simultaneous voices demand the immediate satisfaction of their social wants,” he wrote. “Everything is in motion around you; here, the people of one quarter of a town are met to decide upon the building of a church; there, the election of a representative is going on; a little further, the delegates of a district are posting to the town in order to consult upon some local improvements; or in another place the labourers of a village quit their ploughs to deliberate upon the project of a road or a public school.”

This engagement was central to de Tocqueville’s understanding of American democracy. He saw voluntary groups spreading like wildfire (among primarily white males, of course) and filling a gap between family on the local end and the state on the more distant end. And it was in this middle ground that de Tocqueville perceived a budding sense of a new and better common good.

Both in academia and the wider culture, social capital burst into the national consciousness in the mid-1990s, driven by political scientist Robert Putnam, who defined it as “the collective value of all ‘social networks’ (who people know) and the inclinations that arise from these networks to do things for each other (‘norms of reciprocity’).”

It includes everything from voting to dinner parties to Little League, from religious groups to farmer’s markets and the local zoning board. It includes Facebook, yoga classes, picnics of all kinds, hanging out on the stoop, and watching over the neighbor’s kids. Putnam raised an alarm about declining social capital, pointing to precipitous drops in the very voluntary associations—the Rotary Club, the Boy Scouts, the Jaycees—that de Tocqueville had gushed over and writing that “we are becoming mere observers of our collective destiny.” He attributed the decline to sprawl, television, and demographic shifts, but his underlying focus was on the struggles of dual-income middle-class families whose overworked members were not able to participate in wider society as they once did.

There are two kinds of social capital—bonding and bridging—and each impacts a society differently. Bonding capital is what you get within a given group. These tend to be closer and more reliable bonds that form the foundation of our social capital. Yet bonding social capital is not always positive: Tight-knit groups can turn insular, reaching their logical conclusion in gangs and militias but with negative effects found in everything from families to groups of friends to certain kinds of religious communities.

In contrast, bridging social capital reaches across a societal divide such as race, region or religion and is by nature weak. But it also promotes empathy and tolerance and enlarges our radius of trust, allowing us to see other people as people, not as a faceless other.

This sense of bridging a divide is especially important in the U.S. because, contrary to popular opinion, we regularly put the needs of the group ahead of the needs of the individual in a way Europeans don’t. In surveys, Western Europeans are more likely than Americans to say citizens should follow their conscience and break an unjust law or that citizens should defy their homeland if they believe their country is acting immorally.

On the other hand, Americans are more likely to believe they control their own fate and to believe in a more laissez-faire relationship with the state. It’s a more complex mix than our myths allow for, and the end result is that it can be hard to fathom just how different Americans are from the rest of the world.

by Nathan Hegedus, TMN |  Read more:
Image: Georg Baselitz, Das Motiv im Grand Canyon, 2003. Copyright © Georg Baselitz. Courtesy Galleri Bo Bjerggaard, Copenhagen.

Merzbild 29A. Bild mit Drehrad (1920)
via:

unfriend
via:

John Bogle: The “Train Wreck” Awaiting American Retirement

In terms of the evolution of America’s retirement health, we’ve moved from defined benefit programs, pensions, to defined contribution. … Describe what’s happening to our retirement health as these [different] instruments are introduced and the rush of people into mutual funds. …

… In my new book, which is called The Clash of the Cultures, I have a chapter on future retirement planning, and it says our retirement system is … headed for a train wreck unless we do something about it.

I start off, simply put, with Social Security, which has to be changed in gradual, small ways to become solvent again. … Then you go to corporate defined benefit plans. They are assuming — and state and local government defined benefit plans even worse — they are all assuming that the market return in their portfolio will be 8 percent a year.

There is no way under the sun that they’re going to earn 8 percent. It’s just impossible. No matter what they do, they’re stuck in a bind given the kind of markets we expect in stocks and bonds. … The best they can really hope for is a 5 percent return unless some wonderful, attractive scenario for the future unfolds, which is really unimaginable. If anything, it’s going to be worse.

So if you think about them compounding their returns at 4 percent instead of the 8 percent that they build into the plan, they’re going to have to start putting a lot of money into those plans. They’re going to be bankrupt.
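
[ed. The gap Bogle is describing is just compound interest. A quick hypothetical check in Python; the dollar amount and the 30-year horizon are chosen for illustration, not taken from the interview.]

# Hypothetical illustration of the difference between the 8 percent return the
# plans assume and a more realistic 4 percent. The principal and horizon are
# made up; only the ratio between the two outcomes matters.

def grow(principal, annual_return, years):
    """Compound a lump sum at a fixed annual return."""
    return principal * (1 + annual_return) ** years

principal = 1_000_000    # a hypothetical $1 million already sitting in the plan
years = 30

assumed = grow(principal, 0.08, years)
realistic = grow(principal, 0.04, years)

print(f"assumed 8% return:   ${assumed:,.0f}")
print(f"realistic 4% return: ${realistic:,.0f}")
print(f"shortfall: {1 - realistic / assumed:.0%}")
# 1.08**30 is about 10.1x, 1.04**30 about 3.2x: roughly a two-thirds shortfall
# that sponsors would have to make up with much larger contributions.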

Those plans have been dying out for a long time.

… They’re dropping out. They’re changing to defined contribution plans, the corporations are. But if you have a bad year, you don’t make any contributions for your employees, the management says, “Can’t afford it this year,” well, that’s the year they should afford it. So the defined contribution system is deeply flawed.

And what it really is — when you look at IRA and 401(k), and particularly 401(k) thrift plans — they are thrift plans. They are not retirement plans. They were never designed to be retirement plans, but we’re using them to build a retirement plan now, and it simply is not going to work. …

The 401(k) arrived. What prompted its creation?

… Some very smart people found a sort of loophole in the law, not a bad loophole, where you could have companies put their money in and employees put their money in together, and you could get clearance to make sure that didn’t have any taxes on it. That’s the 401(k) plan in essence.

But you can get out of it when you want. You can say you have an emergency when you want. And here’s the worst of it: You can pick any fund that you want. …

If you want to gamble with your retirement money, all I can say is be my guest, but be aware of the mathematical reality. The chances you will do better playing that game are infinitely small. If I want to put a number on it, let me just say [off the] top of the head that maybe you have 0.1 of 1 percent chance of beating the market over time.

Now, think about this for a minute. You’re 25 years old, and you’re going to invest for the next 50 years, so you’re going to buy an index fund and hold it all that time. You never have to worry about the manager. There aren’t new brooms that come in and sweep clean.

Now you buy an actively managed fund. First of all, half of the actively managed mutual funds that are out there today aren’t going to be around 10 years from now. There’s going to be a 50 percent failure rate. We’ve had that in the past, in the last 20 years. …

So how can you be a long-term investor if the fund you own doesn’t last for the long term? And then there’s something else. Even if you’re lucky enough to be in that half of funds that does survive, they’re going to have a new manager every five years. That’s how long a portfolio manager lasts in this business.

So if you have, say, four mutual funds, you’re going to have four managers every five years, and if you take that to 50 years, you’re going to have 40 managers. Think about the possibility of 40 mutual fund managers with those high fees coming anywhere near the return of an unmanaged low-expense index fund. It just isn’t there. Mathematically can’t be there.
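
[ed. The "mathematically can't be there" point is also a compounding argument, this time about costs. A minimal Python sketch; the gross market return and the fee levels are illustrative assumptions, not Bogle's figures.]

# Hypothetical comparison of a low-cost index fund with a chain of actively
# managed funds over a 50-year horizon. The 7 percent gross return and the cost
# levels are assumptions for illustration.

YEARS = 50
GROSS_RETURN = 0.07      # assumed annual market return before costs
INDEX_COSTS = 0.001      # assumed 0.1% expense ratio for the index fund
ACTIVE_COSTS = 0.02      # assumed 2% all-in costs (fees plus turnover) for active funds

def final_multiple(gross, costs, years):
    """Growth of one dollar after compounding net-of-cost returns."""
    return (1 + gross - costs) ** years

index_fund = final_multiple(GROSS_RETURN, INDEX_COSTS, YEARS)
active_funds = final_multiple(GROSS_RETURN, ACTIVE_COSTS, YEARS)

print(f"index fund:   $1 grows to ${index_fund:,.2f}")
print(f"active funds: $1 grows to ${active_funds:,.2f}")
print(f"growth given up to costs: {1 - active_funds / index_fund:.0%}")
# A two-point annual cost gap compounds into keeping only about 40 percent as
# much after 50 years, before even counting fund closures or manager turnover.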

The marketing tells you otherwise. And the industry has created legends, such as Peter Lynch at Fidelity Magellan Fund, who outperform the broad market, outperform your index fund, year after year after year. So in the interest of giving people choices, the industry puts forward funds like Magellan and gives you an opportunity to beat the market. Isn’t that a good thing?

Well, if only the past were prologue it would be a great thing. But look, the Magellan Funds are a great example. … The pressure from employers to bring in outside funds, to have “open architecture” for their investors, was so powerful that we allowed them to add Magellan Fund.

Bad judgment. Magellan Fund reached its peak over the market in 1992. It had $105 billion of assets in 1992. It has been pretty much an abject failure, worse than mediocre, in the 20 years that followed. Way below par. And the fund now has assets of $10 billion. That’s $95 billion smaller, 92 percent smaller than it was in 1992. Everybody’s getting out of Magellan now. …

by PBS, Frontline |  Read more:
Image: uncredited

Georges Braque, The Pantry. 1920
via:

The 'Napster Moment' for 3-D Food Printing

The revolution in 3D printing is seeing enthusiasts sharing designs for everything from chairs to guns to faces. With small steps, it's even making its way into the world of food.

That might seem the most natural of all, on the face of it. Food is a social thing, from the sharing of recipes to the sharing of a meal. But it's a different kind of sharing to the one we associate with other arts. Sharing a recipe isn't an economic issue for the food industry like sharing a song is to the music industry -- but what if you could print off not just a hamburger, but a Big Mac? For a look at how this future might turn out, let's look at the Coca-Cola recipe. (...)

The spread of Open Cola is interesting, given this framework of secrecy. The terms of the Open Cola GNU license are such that anyone can take the recipe and adapt it, as long as they put their own version online for others to also take advantage of. Take Open Soda in the US, which produces a range of different colas and sodas both for fun and for selling at large events. Its latest recipe, as of April 2009, has some significant differences with Open Cola, but it's still a cola. It's still an attempt at cloning something famous. (...)

Imagine yourself in twenty years sitting down in your kitchen and wanting a glass of cola and a hamburger. You could download Coca-Cola's classic recipe to go with a McDonald's Big Mac, but you could also download that extra-caffeinated cola someone's hacked onto the server along with a Big Mac with a particularly smoky ketchup in place of the banal, "official" version. Or you could knock something new up yourself, a drink that's sugar- and caffeine-free and with an extra shot of vitamin B and a burger bun that's gluten-free.

Open Cola can be seen as a first, extremely crude example of this change. Once the infrastructure for 3D printing is in place -- the cultural expectation of being able to get home, slot a cartridge into the machine, and print out anything you want -- then the food industry is going to struggle to keep its secrets safe. In large part, the mystique around the brand is what protects Coca-Cola -- in For God, Country & Coca-Cola, Pendergrast is told by a Coke spokesperson that he could safely print the real recipe if he had it and go into competition with Coke, but there's no way an upstart would be able to match the real thing for price, distribution, marketing, history, and all the other things that maintain Coke's position around the world.

But if it did want to sue someone who overcame these hurdles -- as decentralised 3D printing might well facilitate -- then that's made trickier for the copyright/patent holder with the legal grey area recipes lie within. A list of ingredients isn't something that can be copyrighted, but their preparation in a certain way can be -- that's how you can copyright a Jaffa Cake, but not the ingredients within it. Coca-Cola currently relies on established legal precedent, such as that in the Coco v Clark case of 1969 that established an employee leaking a trade secret was in breach of a confidentiality contract, and could be sued.

Kurman and Lipson have collaborated on Fabricated: the new world of 3D printing, a book exploring the social issues that will come from the spread of 3D printing. Lipson said: "The moment somebody is making money off the recipes, that's when you'll see digital rights management around it. But it's very social, there's a big social component through sharing these things, and therefore it will propagate and follow the same path [as music]."

by Ian Steadman, Wired UK |  Read more:
Image: Shutterstock

Game Theory in Teaching

[ed. Alternatively titled 'Why I Let My Students Cheat on Their Exam' although, technically, they weren't really cheating...]

On test day for my Behavioral Ecology class at UCLA, I walked into the classroom bearing an impossibly difficult exam. Rather than being neatly arranged in alternate rows with pen or pencil in hand, my students sat in one tight group, with notes and books and laptops open and available. They were poised to share each other’s thoughts and to copy the best answers. As I distributed the tests, the students began to talk and write. All of this would normally be called cheating. But it was completely OK by me.

Who in their right mind would condone and encourage cheating among UCLA juniors and seniors? Perhaps someone with the idea that concepts in animal behavior can be taught by making their students live those concepts. (...)

Much of evolution and natural selection can be summarized in three short words: “Life is games.” In any game, the object is to win—be that defined as leaving the most genes in the next generation, getting the best grade on a midterm, or successfully inculcating critical thinking into your students. An entire field of study, Game Theory, is devoted to mathematically describing the games that nature plays. Games can determine why ant colonies do what they do, how viruses evolve to exploit hosts, or how human societies organize and function.
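
[ed. A toy example of the kind of analysis he means, applied to the exam itself: each student chooses whether to work alone or to pool answers with the class. The payoffs below are invented for illustration, not taken from the article.]

# A hypothetical 2x2 game for the group exam. Each entry is my expected score
# given my choice and what most of the class chooses; the numbers are made up.

PAYOFFS = {
    ("pool",  "pool"):  90,   # everyone shares, and the best answers win out
    ("pool",  "alone"): 60,   # I share but get little back
    ("alone", "pool"):  70,   # I sit out while the rest of the class collaborates
    ("alone", "alone"): 50,   # an impossibly hard test taken solo
}

def best_response(majority_choice):
    """The choice that maximizes my score, given what the rest of the class does."""
    return max(("pool", "alone"), key=lambda mine: PAYOFFS[(mine, majority_choice)])

for majority in ("pool", "alone"):
    print(f"if most of the class chooses '{majority}', my best response is '{best_response(majority)}'")

# With these payoffs, pooling is the best response no matter what the others do,
# so full cooperation is the equilibrium, which is roughly how the class behaved.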

So last quarter I had an intriguing thought while preparing my Game Theory lectures. Tests are really just measures of how the Education Game is proceeding. Professors test to measure their success at teaching, and students take tests in order to get a good grade. Might these goals be maximized simultaneously? What if I let the students write their own rules for the test-taking game? Allow them to do everything we would normally call cheating?

A week before the test, I told my class that the Game Theory exam would be insanely hard—far harder than any that had established my rep as a hard prof. But as recompense, for this one time only, students could cheat. They could bring and use anything or anyone they liked, including animal behavior experts. (Richard Dawkins in town? Bring him!) They could surf the Web. They could talk to each other or call friends who’d taken the course before. They could offer me bribes. (I wouldn’t take them, but neither would I report it to the dean.) Only violations of state or federal criminal law such as kidnapping my dog, blackmail, or threats of violence were out of bounds.

Gasps filled the room. The students sputtered. They fretted. This must be a joke. I couldn’t possibly mean it. What, they asked, is the catch?

“None,” I replied. “You are UCLA students. The brightest of the bright. Let’s see what you can accomplish when you have no restrictions and the only thing that matters is getting the best answer possible.”

by Peter Nonacs, Zócalo Public Square |  Read more:
Image: mrfishersclass

A Messenger for the Internet of Things

[ed. For more information on the IoT see: this, this and this.]

The vision of the Internet of Things is inspiring, if much-hyped. Billions of digital devices, from smartphones to sensors in homes, cars and machines of all kinds, will communicate with each other to automate tasks and make life better.

But some daunting obstacles litter the road to this mechanized nirvana. A crucial challenge is figuring out how all the smartish gadgets will talk to each other. A group of technology companies — including Cisco Systems, I.B.M., Red Hat and Tibco — thinks a technology with a mouthful of a name is the answer. On Thursday, they are officially introducing the Message Queuing Telemetry Transport protocol as an open standard through an international standards organization, Oasis.

MQTT, the less-than-catchy abbreviation for the software, is not really a lingua franca for machine-to-machine communication, but a messenger and carrier for data exchange. MQTT’s advocates compare its potential role in the Internet of Things to that played by the Hypertext Transfer Protocol, or HTTP, on the Web. HTTP is the foundation of data communication on the Web.
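
[ed. For a flavor of the publish/subscribe pattern MQTT uses, here is a minimal Python sketch built on the open-source Eclipse Paho client (pip install paho-mqtt). The broker address and topic are placeholders, and the snippet assumes a broker is already running somewhere on the network.]

# Minimal MQTT publish/subscribe sketch using the Eclipse Paho client for Python.
# The broker host, topic, and payload are hypothetical placeholders.

import paho.mqtt.client as mqtt

BROKER = "broker.example.com"             # hypothetical broker address
TOPIC = "home/living-room/temperature"    # hypothetical topic name

def on_connect(client, userdata, flags, rc):
    # Subscribe once connected so the subscription survives reconnects.
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # Every client subscribed to the topic receives messages published to it.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)   # 1883 is the standard MQTT port

# A sensor elsewhere on the network would act as its own client and publish
# readings to the same topic; shown here in the same script for brevity.
client.publish(TOPIC, "21.5")

client.loop_forever()   # block and dispatch incoming messages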

MQTT’s origins go back nearly two decades. Its co-inventor, Andy Stanford-Clark, who holds the title of distinguished engineer at I.B.M., has long been a passionate home-automation tinkerer. His laboratory has been his house, a 16th-century stone cottage with a thatched roof on the Isle of Wight, in the English Channel. His electronic gadgets range from temperature and energy monitors to an automated mousetrap. His TedX talk explains the back story.

by Steve Lohr, NY Times |  Read more:
Image via: