Wednesday, June 4, 2014


Anja Rubik for Exhibition Magazine March 2014 by Luigi & Daniele + Iango

Scalpers Inc.

Early in the afternoon of 6 May 2010, the leading stock market index in the US, the Dow Jones Industrial Average, suddenly started falling. There was no evident external reason for the fall – no piece of news or economic data – but the market, which had been drifting slowly downwards that day, in a matter of minutes dropped by 6 per cent. There was pandemonium: some stocks in the Dow were trading for prices as low as 1 cent, others for prices as high as $100,000, in both cases with no apparent rationale. A 15-minute period saw a loss of roughly $1 trillion in market capitalisation.

So far, so weird, but it wasn’t as if nothing like this had ever happened before. Strange things happen in markets, often with no obvious trigger other than mass hysteria; there’s a good reason one of the best books about the history of finance is called Manias, Panics and Crashes. What was truly bizarre and unprecedented, though, was what happened next. Just as quickly as the market had collapsed, it recovered. Prices bounced back, and at the end of a twenty-minute freak-out, the Dow was back where it began. It’s the end of the world! Oh wait, no, it’s just a perfectly normal Thursday.

This incident became known as the Flash Crash. The official report from the Securities and Exchange Commission blamed a single badly timed and unhelpfully large stock sale for the crash, but that explanation failed to convince informed observers. Instead, many students of the market blamed a new set of financial techniques and technologies, collectively known as high-frequency trading or flash trading. This argument rumbles on, and attribution of responsibility is still hotly contested. The conclusion, which becomes more troubling the more you think about it, is that nobody entirely understands the Flash Crash.

The Flash Crash was the first moment in the spotlight for high-frequency trading. This new type of market activity had grown to such a degree that most share markets were now composed not of humans buying and selling from one another, but of computers trading with no human involvement other than in the design of their algorithms. By 2008, 65 per cent of trading on public stock markets in the US was of this type. Actual humans buying and selling made up only a third of the market. Computers were (and are) trading shares in thousandths of a second, exploiting tiny discrepancies in price to make a guaranteed profit. Beyond that, though, hardly anybody knew any further details – or rather, the only people who did were the people who were making money from it, who had every incentive to keep their mouths shut. The Flash Crash dramatised the fact that public equity markets, whose whole rationale is to be open and transparent, had arrived at a point where most of their activity was secret and mysterious.
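The trade described here can be sketched in a few lines. This toy example (my illustration, not the book's) shows the arithmetic of cross-venue arbitrage: when one exchange's bid sits above another's ask, buying and selling simultaneously locks in the gap, however small.

```python
# Toy cross-venue arbitrage: buy where the stock is offered cheap,
# sell where it is bid high, pocket the difference.
# Prices and sizes here are invented for illustration.

def arbitrage_profit(bid: float, ask: float, shares: int) -> float:
    """Profit from buying `shares` at `ask` on one exchange and
    simultaneously selling them at `bid` on another."""
    return max(0.0, (bid - ask) * shares)

# A one-cent discrepancy on a 10,000-share order, repeated thousands
# of times a day, adds up:
profit = arbitrage_profit(bid=30.01, ask=30.00, shares=10_000)
print(round(profit, 2))
```

The profit on any single trade is trivial; the business model depends on volume and on being the fastest to spot the discrepancy.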

Enter Michael Lewis. Flash Boys is a number of things, one of the most important being an exposition of exactly what is going on in the stock market; it’s a one-stop shop for an explanation of high-frequency trading (hereafter, HFT). The book reads like a thriller, and indeed is organised as one, featuring a hero whose mission is to solve a mystery. The hero is a Canadian banker called Brad Katsuyama, and the mystery is, on the surface of it, a simple one. Katsuyama’s job involved buying and selling stocks. The problem was that when he sat at his computer and tried to buy a stock, its price would change at the very moment he clicked to execute the trade. The apparent market price was not actually available. He raised the issue with the computer people at his bank, who first tried to blame him, and then when he demonstrated the problem – they watched while he clicked ‘Enter’ and the price changed – went quiet.

Katsuyama came to realise that his problem was endemic across the financial industry. The price was not the price. The picture of the market given by stable prices moving across screens was an illusion; the real market was not available to him. Very many people across the industry must have asked themselves what the hell was going on, but what’s unusual about Katsuyama is that he didn’t let the question go: he kept going until he found an answer. Part of that answer came in correctly formulating the question, what the hell is the market anyway?
The market was by now a pure abstraction. There was no obvious picture to replace the old one people carried around in their heads. The same old ticker tape ran across the bottom of television screens – even though it represented only a tiny fraction of the actual trading. Market experts still reported from the floor of the New York Stock Exchange, even though trading no longer happened there. For a market expert truly to get inside the New York Stock Exchange, he’d need to climb inside a tall black stack of computer servers locked inside a fortress guarded by a small army of heavily armed men and touchy German shepherds in Mahwah, New Jersey. If he wanted an overview of the stock market – or even the trading in a single company like IBM – he’d need to inspect the computer printouts from twelve other public exchanges scattered across northern New Jersey, plus records of the private deals that occurred inside the growing number of dark pools. If he tried to do this, he’d soon learn that there was no computer printout. At least no reliable one. It didn’t seem possible to form a mental picture of the new financial market.
We want a market to be people buying and selling to and from each other, in a specific physical location, ideally with visible prices. In this new market, the principal actors are not human beings, but algorithms; the real action happens inside computers at the exchanges, and the old market is now nothing more than a stage set whose main function is to be a backdrop for news stories about the stock market. As for the prices, they move when you try to act on them, and anyway, as Lewis says, there’s the problem of the ‘dark pools’, which are in effect private stock markets, owned for the most part by big investment banks, whose entire function is to execute trades out of sight of the wider public: nobody knows who’s buying, nobody knows who’s selling, and nobody knows the prices paid. The man who did most to help Katsuyama understand this new market was an Irish telecoms engineer called Ronan Ryan. Ryan’s job involves the wiring inside stock exchanges, and he explained to Katsuyama just how crucial speed has become to the process of trading. All the exchanges now allow ‘co-location’, in which private firms install their own computer equipment alongside the exchanges’ own computers, in order to benefit from the tiny advantage this proximity gives in trading time.
As Ryan spoke, he filled huge empty spaces on Brad’s mental map of the financial markets. ‘What he said told me that we needed to care about microseconds and nanoseconds,’ said Brad. The US stock market was now a class system, rooted in speed, of haves and have-nots. The haves paid for nanoseconds; the have-nots had no idea that a nanosecond had value. The haves enjoyed a perfect view of the market; the have-nots never saw the market at all. What had once been the world’s most public, most democratic, financial market had become, in spirit, something like a private viewing of a stolen work of art.
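Some rough arithmetic (my own illustration, not Lewis's) shows what those nanoseconds cost. Light in optical fiber travels at roughly two-thirds of its vacuum speed, so every kilometre of cable between a trader and an exchange's matching engine adds several microseconds each way; co-location shrinks that distance to metres.

```python
# Back-of-the-envelope fiber latency. The 0.66 factor is a typical
# refractive-index slowdown for optical fiber; distances are illustrative.

LIGHT_SPEED = 299_792_458   # metres per second, in a vacuum
FIBER_FACTOR = 0.66         # approximate fraction of light speed in fiber

def one_way_latency_us(distance_km: float) -> float:
    """One-way signal time over fiber, in microseconds."""
    return distance_km * 1000 / (LIGHT_SPEED * FIBER_FACTOR) * 1e6

print(round(one_way_latency_us(0.3), 1))   # a co-located server: ~1.5 us
print(round(one_way_latency_us(50.0), 1))  # a desk ~50 km away: ~250 us
```

On these assumed figures, the co-located firm sees the market more than a hundred microseconds before the distant one, an eternity for a machine that trades in millionths of a second.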
by John Lanchester, LRB |  Read more:
Image: Bloomberg

Mommy-Daddy Time

The reputation of parenthood has not fared well in the modern era. Social science has concluded that parents are either no happier than people without children, or decidedly unhappier. Parents themselves have grown competitively garrulous on the subject of their dissatisfactions. Confessions of child-rearing misery are by now so unremarkable that the parent who doesn’t merrily cop to the odd infanticidal urge is considered a rather suspect figure. And yet, the American journalist Jennifer Senior argues in her earnest book about modern parenthood, it would be wrong to conclude that children only spoil their parents’ fun. Most parents, she writes, reject the findings of social science as a violation of their ‘deepest intuitions’. In fact, most parents – even the dedicated whingers – will say that the benefits of raising children ultimately outweigh the hardships.

Senior’s characterisation of parenthood as a wondrous ‘paradox’ – a nightmare slog that in spite of everything delivers transcendent joy – has gone down very well in America, where parents seem reassured to find a cheerful, pro-kids message being snatched from the jaws of sleep deprivation and despondency. The book spent six weeks on the bestseller list and has earned Senior the ultimate imprimatur of a lecturing gig at the TED conference. ‘All Joy and No Fun inspired me to think differently about my own experience as a parent,’ Andrew Solomon observed in his New York Times review. ‘Over and over again, I find myself bored by what I’m doing with my children: how many times can we read Angelina Ballerina or watch a Bob the Builder video? And yet I remind myself that such intimate shared moments, snuggling close, provide the ultimate meaning of life.’

It is possible, of course, that some parents are lying, or at least sentimentalising the truth, when they offer up this sort of rosy ‘end-of-the-day’ verdict on parenthood. (There are strong social and emotional incentives for not publicly expressing remorse about one’s reproductive choices.) But Senior rejects this surmise as unduly bleak. Having children, she contends, has always been a ‘high cost/high reward’ activity. If today’s parents appear to be having a horrible time, it is not because they aren’t getting the rewards, but because various aspects of modern life have conspired to make them feel the costs more acutely.

By ‘today’s parents’ Senior means American, middle-class, heterosexual, married parents. These are the people she interviews and about whom she generalises throughout her book. She has deliberately excluded the poor because the problems they encounter as parents are hard to separate from their more general money problems. She has also left out the rich because they can afford to outsource the arduous or tedious parts of child-rearing. Why she has chosen to glance only fleetingly – and pityingly – at the case of single parents is less clear. Given that the marriage rate in the US is the lowest it’s been in more than a century and that in 2013 nearly half of the first-time births in the US were to unmarried women, her focus on the nuclear family seems a bit quaint.

Senior identifies three main reasons why modern parents (according to her limited definition) feel more burdened by parenthood than their forebears. One is that they tend to have greater expectations of the existential satisfaction that children – and life in general – will bring them. With their unprecedented array of ‘lifestyle options’, their tendency to regard happiness and self-actualisation as entitlements and their habit of constantly taking their own emotional temperature, contemporary adults are poorly prepared, she argues, for the self-sacrificing work that child-rearing demands. They also suffer, she believes, from a general confusion about how childcare duties should be divided. Most mothers now work, but guidelines for how they should share domestic labour with their partners have yet to be established, leaving couples with the stressful task of improvising (and fighting about) their own labour-sharing arrangements.

Lastly – and in Senior’s estimation, most significantly – modern parents have to cope with the drastically elevated status of modern children. The useful little trainee adults who, just a century ago, were toiling in fields and factories and contributing to the family purse have been transformed into family pets – ‘economically useless but emotionally priceless’ in the words of the sociologist Viviana Zelizer. (Senior doesn’t mention it, but it’s worth noting that for some years the default phrase used by Americans to congratulate their offspring on any sort of achievement has been, ‘Good job!’ – a wishful idiom, it seems, designed to confer on a child’s economically useless exploits the illusion of the dignity of labour.) Rearing the priceless modern child is now a high performance, perfectible project, requiring an unprecedented outlay of money and time. In 1965, Senior observes, when women had yet to become a sizeable presence in the workforce, mothers spent 3.7 fewer hours per week on childcare than in 2008, even though women in 2008 were working almost three times as many paid hours. Fathers spent more than three times as many hours with their children in 2008 as in 1965.

What were these parents doing with all their extra parenting hours? Specifically, they were reading to their children, playing with them, helping them build replicas of the Giza pyramids, ferrying them to ballet class, taekwondo class, soccer practice, chess lessons, Scouts. Generally, they were attempting to maximise their children’s potential, to optimise their CVs, to ensure their psychological well-being – to make them happy.

by Zoe Heller, LRB | Read more:
Image: Ecco. All Joy and No Fun

Google Search: 15 Hidden Features

Calculator
Google's calculator function is far more powerful than most people realise. As well as doing basic maths (5+6 or 3*2) it can do logarithmic calculations, and it knows constants (like e and pi), as well as functions like Cos and Sin. Google can also translate numbers into binary code – try typing '12*3 in binary'.
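Each of those calculator examples can be reproduced in an ordinary programming language; here is a quick Python check (the queries are Google's, the code is mine):

```python
import math

print(5 + 6)          # basic maths: 11
print(3 * 2)          # basic maths: 6
print(math.log10(1000))             # logarithms: 3.0
print(round(math.cos(math.pi), 1))  # constants and trig: -1.0
print(bin(12 * 3))    # '12*3 in binary' -> 0b100100
```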


Conversions
Currency conversions and unit conversions can be found by using the syntax: <amount> <unit1> in <unit2>. So for example, you could type '1 GBP in USD', '20 C in F' or '15 inches in cm' and get an instant answer.
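Under the hood these are simple arithmetic. A minimal Python sketch covering two of the examples above (the factors shown are only for inches-to-centimetres and Celsius-to-Fahrenheit; a real converter would carry a full table, and currency rates change constantly):

```python
def inches_to_cm(inches: float) -> float:
    return inches * 2.54          # exact: the inch is defined as 2.54 cm

def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32

print(round(inches_to_cm(15), 1))   # '15 inches in cm' -> 38.1
print(celsius_to_fahrenheit(20))    # '20 C in F'       -> 68.0
```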


Translations
A quick way to translate foreign words is to type 'translate <word> to <language>'. So for example, 'translate pomme to english' returns the result 'apple', and 'translate pomme to spanish' returns the result 'manzana'.

Check flight status
If you type in a flight number, the top result is the details of the flight and its status. So, for example, typing in BA 335 reveals that British Airways flight 335 departs Paris at 15.45 today and arrives at Heathrow Terminal 5 at 15.48 local time.


by Sophie Curtis, Telegraph |  Read more:
Images: Google

Tuesday, June 3, 2014

Nicki Bluhm and The Gramblers


[ed. Repost- Mar. 27, 2012. Pretty amazing cover. Special credit for the kazoo solo.]



The Role of Yik Yak in a Free Society


The horror stories are all over the Internet. Anonymous-social-media app Yik Yak tore a Connecticut high school apart. (Among the anonymous “gems”: “L. M. is affiliated with Al Qaeda.” “The cheer team couldn’t get uglier.” “K. is a slut.” “Nobody is taking H. to prom because nobody has a forklift.”) A high school in San Clemente, California was placed on lockdown after an anonymous bomb threat was posted on Yik Yak. Two teenagers in Mobile, Alabama were arrested after using the app to make threats about a campus shooting. Even an article in the Boston Globe titled “The Good News About Yik Yak” emphasized how teenagers are rejecting its dark side. 


It goes without saying that threats of cyberbullying and violence are reprehensible and need to be addressed quickly and effectively. But let’s not forget that anonymous speech -- and that’s what Yik Yak and similar apps like Whisper and Secret encourage -- plays an important role in a free society. The United States was founded on it. Thomas Paine, the “Father of the American Revolution,” signed his influential pamphlet Common Sense as simply “Written by an Englishman” lest his identity become known and he be hanged for treason. The authors of the Federalist Papers, the key documents used to interpret the Constitution, published under the pseudonym “Publius.” The U.S. Supreme Court has held that the right to speak anonymously, on and off the Internet, is guaranteed by the First Amendment to the Constitution.


And, in fact, anonymity apps have brought positives along with the negatives. Not long ago, a post on Secret reported that Google had acquired the poster’s five-person company and had hired everyone but her. Later posts revealed that she was the only female at the company and had been there since it was founded. The thread became the talk of Silicon Valley, generating a lively debate about suppressed sexism in the start-up community. The poster’s ability to remain anonymous was key to this information coming out. She could stand up to power, speak without embarrassment, and avoid alienating potential employers who might take a dim view of her controversial statements. That’s exactly why the First Amendment protects anonymous speech, and that’s why the value of anonymity apps like Yik Yak shouldn’t be summarily dismissed.

The targets of anonymous speech often resort to the courts to try to unmask the speaker. The plaintiff will sue a fictitious “John Doe” and immediately serve a subpoena seeking to force the Internet service provider to give up the defendant’s name. In one typical case, an anonymous poster on a Yahoo! Finance message board took part in a heated debate about the company’s managers, including the aggrieved plaintiff. The defendant posted a message saying that a male executive had made a New Year’s resolution to perform oral sex on the plaintiff though she had “fat thighs, a fake medical degree, ‘queefs’ and ha[d] poor feminine hygiene.” The plaintiff sued and served a subpoena on Yahoo! seeking the poster’s identity. The court quashed the subpoena, concluding that the statement didn’t convey libelous facts, but instead was merely crude, satirical hyperbole uttered in the course of a heated online discussion. The outcome in this case was correct, but not inevitable. (As one commentator notes, most of these John Doe lawsuits are about censorship, not money). The point is that anonymous speech can be a good thing, and often there are powerful entities out there that want to stop it.

by Robert Rotstein, Boing Boing | Read more:
Image: via:

Programming the Moral Robot


The U.S. Navy’s Office of Naval Research is funding an effort, by scientists at Tufts, Brown, and RPI, to develop military robots capable of moral reasoning:
The ONR-funded project will first isolate essential elements of human moral competence through theoretical and empirical research. Based on the results, the team will develop formal frameworks for modeling human-level moral reasoning that can be verified. Next, it will implement corresponding mechanisms for moral competence in a computational architecture.
That sounds straightforward. But hidden in those three short sentences are, so far as I can make out, at least eight philosophical challenges of extraordinary complexity:
  • Defining “human moral competence”
  • Boiling that competence down to a set of isolated “essential elements”
  • Designing a program of “theoretical and empirical research” that would lead to the identification of those elements
  • Developing mathematical frameworks for explaining moral reasoning
  • Translating those frameworks into formal models of moral reasoning
  • “Verifying” the outputs of those models as truthful
  • Embedding moral reasoning into computer algorithms
  • Using those algorithms to control a robot operating autonomously in the world
Barring the negotiation of a worldwide ban, which seems unlikely for all sorts of reasons, military robots that make life-or-death decisions about human beings are coming (if they’re not already here). So efforts to program morality into robots are themselves now morally necessary. It’s highly unlikely, though, that the efforts will be successful — unless, that is, we choose to cheat on the definition of success.
Selmer Bringsjord, head of the Cognitive Science Department at RPI, and Naveen Govindarajulu, a post-doctoral researcher working with him, are focused on how to engineer ethics into a robot so that moral logic is intrinsic to these artificial beings. Since the scientific community has yet to establish what constitutes morality in humans, the challenge for Bringsjord and his team is severe.
We’re trying to reverse-engineer something that wasn’t engineered in the first place.

by Nicholas Carr, Rough Type |  Read more:
Image: Frankenstein

Machines v. Lawyers

Law schools are in crisis, facing their most substantial decline in enrollment in decades, if not in the history of legal education. Applications have fallen over 40 percent since 2004. The legal workplace is troubled, too. Benjamin Barton, of the University of Tennessee College of Law, has shown that attorneys in “small law,” such as solo practitioners, have been hurting for a decade. Attorney job growth has been flat; partner incomes at large firms have recently recovered from the economic downturn, but the going rate for associates, even at the best firms, has stagnated since 2007.

Some observers, not implausibly, blame the recession for these developments. But the plight of legal education and of the attorney workplace is also a harbinger of a looming transformation in the legal profession. Law is, in effect, an information technology—a code that regulates social life. And as the machinery of information technology grows exponentially in power, the legal profession faces a great disruption not unlike that already experienced by journalism, which has seen employment drop by about a third and the market value of newspapers devastated. The effects on law will take longer to play themselves out, but they will likely be even greater because of the central role that lawyers play in public life.

The growing role of machine intelligence will create new competition in the legal profession and reduce the incomes of many lawyers. The job category that the Bureau of Labor Statistics calls “other legal services”—which includes the use of technology to help perform legal tasks—has already been surging, over 7 percent per year from 1999 to 2010. As a consequence, the law-school crisis will deepen, forcing some schools to close and others to reduce tuitions. While lawyers and law professors may mourn the loss of more lucrative professional opportunities, consumers of modest means will enjoy access to previously cost-prohibitive services.

A decline in the clout of law schools and lawyers could have potentially broader political effects. For the last half-century, many law professors and lawyers have pressed for more government intervention in the economy. This isn’t surprising. Lawyers in the modern regulatory state reap rewards from big government because their expertise is needed to understand and comply with (or exploit) complicated and ever-changing rules. In contrast, the entrepreneurs and innovators driving our computational revolution benefit more from a stable regulatory regime and limited government. As they replace lawyers in influence, they’re likely to shape a politics more friendly to markets and less so to regulation. (...)

Discovering information, finding precedents, drafting documents and briefs, and predicting the outcomes of lawsuits—these tasks encompass the bulk of legal practice. The rise of machine intelligence will therefore disrupt and transform the legal profession.

by John O. McGinnis, City Journal |  Read more:
Image: Arnold Roth

Monday, June 2, 2014

Tom Petty and the Heartbreakers



Paper Was Toast


Our eyes tell us that the words and pictures on a screen are pretty much identical to the words and pictures on a piece of paper. But our eyes lie. What we’re learning now is that reading is a bodily activity. We take in information the way we experience the world — as much with our sense of touch as with our sense of sight. Some scientists believe that our brain actually interprets written letters and words as physical objects, a reflection of the fact that our minds evolved to perceive things, not symbols of things.

via: The eunuch’s children

Angelina Jolie’s Perfect Game

Most of us don’t know a life before People magazine. It was started in 1974 as a spin-off of the “People” section in Time magazine, and with the heft of Time Inc. behind it, it enjoyed one of the most successful launches in publishing history. And in the 40 years since its launch, it’s become a publishing juggernaut.

People has dominated a category of “personality journalism” that it created, telling stories, as its first editorial proclaimed, about “the active personalities of our time — in all fields.” Its success sparked dozens of copycats: USA Today, Entertainment Tonight, and one, founded in 1978, funded by the New York Times Company. It was called…Us Magazine.

Over the next decade, the magazine would switch hands several times before publisher Jann Wenner, best known as the wunderkind responsible for Rolling Stone, took full control in 1989. He experimented with different formats, but by 1999 the magazine was losing $10 million a year and was known in the trades as “Wenner’s folly.”

Until, that is, Wenner made the decision to funnel $50 million into a complete redesign and, in 2002, hired as editor-in-chief Bonnie Fuller, notorious for her sensational yet tremendously successful tenure at Cosmopolitan and Glamour. Fuller — and her successor, Janice Min — popularized a feature that we joke about today, but one that had tremendous ramifications on the industry at large and which, as you’ll soon see, dictated the coverage of Pitt and Jolie.

That feature was “Stars: They’re Just Like Us.” You’ve almost certainly seen it, or seen it satirized, but what it did was take photos of stars doing mundane activities — pumping gas, going to the grocery store — and caption them to suggest that stars are, in fact, just like us. As I highlighted earlier, it’s nothing new, ideologically, but it was a brilliant business move. Because, as Fuller put it, “people don’t like to read,” she flooded the magazine’s pages with photos — but the cheapest kind available, namely, paparazzi photos of celebrities doing unremarkable things.

Until the late ’90s, paparazzi had been a rarefied vocation. Unless contracted to a specific agency, an individual paparazzo had to bear the cost of an expensive camera, miles of film, development, and distribution. But with the rise of digital technologies at the turn of the millennium, it had become increasingly easy — and cheap — to track a celebrity’s quotidian activities. Anyone with a digital camera and an internet connection could take and sell unauthorized photos of celebrities. The number of paparazzi grew from a “handful” in 1995 to 80 in 2004 and 150 in 2005. (...)

But as Us began to slowly encroach on People’s circulation and advertising dollars, the two began to engage in massive bidding wars over exclusive rights to various photos. With Time Inc. behind it, People was able to offer huge amounts of money for all types of photos, even ones it did not plan to use. For example, People spent $75,000 for a photo of Jennifer Lopez reading Us Weekly, simply to prevent Us from publishing the photo. People was driving up prices, hoping to shut out other magazines with smaller operating budgets from scooping them on any story, no matter how small.

People would always have more buying power, but Us relied on its wiles, as evidenced by the magazine’s scoop on the first photos of the Pitt-Jolie romance. People believed it had secured the rights at $320,000, and Us countered with an offer of $500,000, but only if the agency would sign a contract immediately, without going back to People.

People tried to retaliate with a $1 million offer, but the deal was done, and the magazine had to watch as Us took the glory. When, a year later, the bidding began for the first images of Shiloh Jolie-Pitt, People refused to be outbid by Us, even if it meant paying a startling $4.1 million, which became a story in and of itself, especially when Jolie and Pitt turned around and donated that money to African charities.

Throughout this period, gossip blogs were gradually becoming a regular fixture — Perez Hilton, most notoriously, but also Just Jared, The Superficial, Go Fug Yourself, Oh No They Didn’t, and Lainey Gossip — all of which exploited the newly massive stream of digital paparazzi photos. Us and People provided weekly updates, but the blogs helped keep the Brangelina narrative in constant circulation, inundating web users with daily, even hourly updates.

The transformation of Pitt and Jolie’s “scandal” to one of “happy global family” could not have happened, at least not with the efficiency and clarity that it did, if not for the seismic changes in the gossip industry taking place at the same time. Indeed, the successful navigation of the potential scandal of their relationship could have been a fluke — if not for the masterful negotiation of the decade of Brangelina publicity to come.

Looking back, the Brangelina publicity strategy is deceptively simple. In fact, it’s a model of the strategy that has subconsciously guided star production for the last hundred years: the star should be at once ordinary and extraordinary, “just like us” and absolutely nothing like us. Gloria Swanson is the most glamorous star in the world — who loves to make dinner for her children. Paul Newman is the most handsome man in Hollywood — whose favorite pastime is making breakfast in his socks and loafers.

Jolie’s post-2005 image took the ordinary — she was a working mom trying to make her relationship work — and not only amplified it, but infused it with the rhetoric and imagery of globalism and liberalism. She’s not just a mom, but a mom of six. Instead of teaching her kids tolerance, she creates a family unit that engenders it; instead of reading books on kindness and generosity, she models it all over the globe. As for her partner, he isn’t just handsome — he’s the Sexiest Man Alive. And she doesn’t just have a job; instead, her job is being the most important — and influential — actress in the world.

Her image was built on the infrastructure of the status quo — a straight, white, doting mother engaged in a long-term monogamous relationship — but made just extraordinary enough to truly entice but never offend. The line between the tantalizing and the scandalizing is notoriously difficult to tread (just ask Kanye), but Jolie was able to negotiate it via three tactics: first, and most obviously, she accumulated (or, more generously, adopted and gave birth to) a dynamic group of children who were beautiful to observe; second, she figured out how to talk about her personal life in a way that seemed confessional while, in truth, revealing very little; and third, she exploited the desire for inside access into control of that access.

by Anne Helen Petersen, Buzz Feed |  Read more:
Image: uncredited

Fashion Rio spring/summer 2014 collections

The Shawshank Residuals

Bob Gunton is a character actor with 125 credits to his name, including several seasons of "24" and "Desperate Housewives" and a host of movie roles in films such as the Oscar-winning "Argo." Vaguely familiar faces like his are common in the Los Angeles area where he lives, and nobody pays much attention. Many of his roles have been forgotten.

But every day, the 68-year-old actor says, he hears the whispers—from cabdrivers, waiters, the new bag boy at his neighborhood supermarket: "That's the warden in 'Shawshank.' "

He also still gets residual payments—not huge, but steady, close to six figures by the film's 10th anniversary in 2004. Since then, he has continued to get "a very substantial income" long past the age when residuals usually dry up.

"I suspect my daughter, years from now, will still be getting checks," he said.

"Shawshank" was an underwhelming box-office performer when it hit theaters 20 years ago this September, but then it began to redeem itself, finding an audience on home video and later becoming a fixture on cable TV.

The film has taken a near-mystical hold on viewers that shows no sign of abating. Steven Spielberg once told the film's writer-director Frank Darabont that he had made "a chewing-gum movie—if you step on it, it sticks to your shoe," says Mr. Darabont, who went on to create "The Walking Dead" for AMC.

The movie's enduring popularity manifests itself in ways big and small. "Shawshank" for years has been rated by users of imdb.com as the best movie of all time (the first two "Godfather" films are second and third). On a Facebook page dedicated to the film, fans show off tattoos of quotes, locations and the rock hammer that Andy, played by Tim Robbins, used to tunnel out of prison. Type "370,000" into a Google search and the site auto-completes it with "in 1966." Andy escapes in 1966 with $370,000 of the warden's ill-gotten gains. The small Ohio city where it was filmed is a tourist attraction.

In the days when videocassettes mattered, "Shawshank" was the top rental of 1995. On television, as cable grew, it has consistently been among the most-aired movies.

In a shifting Hollywood landscape, film libraries increasingly are the lifeblood of studios. "Shawshank's" enduring appeal on television has made it more important than ever—a reliable annuity to help smooth the inevitable bumps in a hit-or-miss box-office business. When studios sell a package of films—many of them stinkers—a "Shawshank" acts as a much-needed locomotive to drag the others behind it.

"It's an incredible moneymaking asset that continues to resonate with viewers," said Jeff Baker, executive vice president and general manager of Warner Bros. Home Entertainment's theatrical catalog.

Warner Bros. wouldn't say how much money it has gleaned from "Shawshank," one of 6,000 feature films in a library that last year helped generate $1.5 billion in licensing fees from television, plus an additional $2.2 billion from home video and electronic delivery, according to SEC filings. But it's on the shortlist of films including "The Wizard of Oz," "A Christmas Story" and "Caddyshack" that drive much of the library's value, current and former Warner Bros. executives say.

by Russell Adams, WSJ |  Read more:
Image: Columbia Pictures/Everett Collection

Frank Lloyd Wright Tried to Solve the City

Frank Lloyd Wright hated cities. He thought that they were cramped and crowded, stupidly designed, or, more often, built without any sense of design at all. He once wrote, “To look at the plan of a great City is to look at something like the cross-section of a fibrous tumor.” Wright was always looking for a way to cure the cancer of the city. For him, the central problem was that cities lacked essential elements like space, air, light, and silence. Looking at the congestion and overcrowding of New York City, he lamented, “The whole city is in agony.”

A show currently at the Museum of Modern Art—“Frank Lloyd Wright and the City: Density vs. Dispersal”—documents Wright’s attempts to fix the problem of the city. As it turns out, Wright wavered on the matter. Sometimes he favored urban density. Other times he dreamed a suburban or rural fantasy. (...)

The subtitle of the MOMA show—“Density vs. Dispersal”—suggests a dilemma, a choice. Yet the more you look at Wright’s plans—mile-high skyscrapers on the one hand, meticulously designed, spread-out, semi-rural communities on the other—the more you realize that Wright wasn’t conflicted about density versus dispersal at all. These were just two versions of the same impulse to escape. Wright was a man saying, “Get me the hell out of here.” Sometimes he wanted to go up. Sometimes he wanted to go out. If he pushed hard enough, upward or outward, Wright thought that he could find enough space for us to fix the dehumanizing problems of the city.

Wright spent his early childhood in a place he called “the Valley,” in Ixonia, Wisconsin. The Valley, Wright wrote in his 1932 autobiography, was “lovable,” “lying fertile between two ranges of diversified soft hills, with a third ridge intruding and dividing it in two smaller valleys at the upper end.” There were natural lines of demarcation between different kinds of terrain. Areas of bare land were set apart from concentrations of vegetable growth. Little houses were tucked in groves of trees here and there, along lanes “worm-fenced with oak-rails split in the hillside forests.” A root house was “partially dug into the ground and roofed with a sloping mound of grass-covered earth.” In short, there was room for each thing to be just what it needed to be.

The Valley made such an impression on Wright’s sensibilities that he created a code that would make modern cities more like the Valley. He wrote plans and rulebooks for how skyscrapers should be built and cities designed, trying to find the right amount of space between structures and over all. For Wright, implicit rules for “proper spacing” were simply true and universal. They were cosmic rules, written into the land from time immemorial. As an architect and urban planner, Wright’s job was simply to translate these rules into plans for the building of structures and cities.

In this way, Broadacre City makes a very specific kind of sense. Horizontal “spread” would leave room for parks, for personal space, for residential areas, for open vistas, and for light and air. Wright’s vertical ambitions are a little harder to understand. How would towering skyscrapers holding a hundred thousand people create a sense of freedom and space? The answer is in the context. The mile-high Illinois is not a building that stands alone. It makes space in the city. It allows for the other buildings to find their own height, even to be small. That’s the wonder of Wright’s city concepts. He envisioned his incredible urban structures as vertical “spreaders,” just as he envisioned his planned communities like Broadacre City to be horizontal spreaders, giving different aspects of a community room to exist.

by Morgan Meis, New Yorker |  Read more:
Image: Frank Lloyd Wright

Butterfly. Sacto. CA. 2014.
via: