Friday, December 26, 2014

The Conventional Wisdom On Oil Is Always Wrong

In 2008, I moved to Dallas to cover the oil industry for The Wall Street Journal. Like any reporter on a new beat, I spent months talking to as many experts as I could. They didn’t agree on much. Would oil prices — then over $100 a barrel for the first time — keep rising? Would post-Saddam Iraq ever return to the ranks of the world’s great oil producers? Would China overtake the U.S. as the world’s top consumer? A dozen experts gave me a dozen different answers.

But there was one thing pretty much everyone agreed on: U.S. oil production was in permanent, terminal decline. U.S. oil fields pumped 5 million barrels of crude a day in 2008, half as much as in 1970 and the lowest rate since the 1940s. Experts disagreed about how far and how fast production would decline, but pretty much no mainstream forecaster expected a change in direction.

That consensus turns out to have been totally, hilariously wrong. U.S. oil production has increased by more than 50 percent since 2008 and is now near a three-decade high. The U.S. is on track to surpass Saudi Arabia as the world’s top producer of crude oil; add in ethanol and other liquid fuels, and the U.S. is already on top.

The standard narrative of that stunning turnaround is familiar by now: Even as Big Oil abandoned the U.S. for easier fields abroad, a few risk-taking wildcatters refused to give up on the domestic oil industry. By combining the techniques of hydraulic fracturing (“fracking”) and horizontal drilling, they figured out how to tap previously inaccessible oil reserves locked in shale rock – and in so doing sparked an unexpected energy boom.

That narrative isn’t necessarily wrong. But in my years watching the transformation up close, I took away a lesson: When it comes to energy, and especially shale, the conventional wisdom is almost always wrong.

It isn’t just that experts didn’t see the shale boom coming. It’s that they underestimated its impact at virtually every turn. First, they didn’t think natural gas could be produced from shale (it could). Then they thought production would fall quickly if natural gas prices dropped (they did, and it didn’t). They thought the techniques that worked for gas couldn’t be applied to oil (they could). They thought shale couldn’t reverse the overall decline in U.S. oil production (it did). And they thought rising U.S. oil production wouldn’t be enough to affect global oil prices (it was).

Now, oil prices are cratering, falling below $55 a barrel from more than $100 earlier this year. And so, the usual lineup of experts — the same ones, in many cases, who’ve been wrong so many times in the past — are offering predictions for what plunging prices will mean for the U.S. oil boom. Here’s my prediction: They’ll be wrong this time, too.

by Ben Casselman, FiveThirtyEight |  Read more:
Image: Energy Information Administration

For Kids, By Kids—But Not For Long

Being a teen idol has always been a difficult balancing act. How to simultaneously project awesomeness and authenticity? How to convince a mass audience that you are worthy of their attention while retaining an aura of utter normalcy?

In many ways, today’s online stars are dealing in the same simulated intimacy that teenage celebrity has always relied on, from the goofy approachability of The Monkees to Taylor Swift’s knack for sounding as if she’s just a regular girl baring her soul to her locker neighbour. With YouTubers, though, this intimacy is turned up to extraordinary new levels. “Celebrity is more like a faraway kind of thing and this is like, you’re in their bedrooms,” 17-year-old Allie Cox explained to me while we waited in line to meet three English YouTubers, including Will Darbyshire, a 21-year-old who just started his YouTube channel earlier this year. Cox considered for a moment. “I mean… that’s kind of freaky. But at the same time you feel like you know them.”

The founding myth of YouTube is of some digital meritocracy where the line between producer and consumer has been erased and anyone with something to say can gain an audience. Many of the kids at Buffer Festival weren’t just fans, but creators of their own videos. Corey Vidal, the festival’s founder and a prominent YouTuber himself, was a poster boy for the transformative power of the humble online video. Vidal had struggled through high school. He’d been homeless, couch-surfing and spending time in a shelter. Then a geeky video he made of himself lip-syncing an a cappella Star Wars song went viral. Now he’s the head of a YouTube production company, the guy in charge of a festival that brings all of his favourite people to Toronto. It was easy for the teenagers in the audience to imagine themselves one day on the stage, hanging out with their idols, collaborating with their fellow video makers. (...)

In many ways, YouTube is the perfect technology to fulfill a long-held teenage desire. When I was 13, the funniest, coolest people I could think of weren’t the lip-glossed stars of Hollywood or the wrinkled “teenagers” of Aaron Spelling productions—they were the kids a few grades ahead who played guitar in the hallway. They were people like the beautiful, effortlessly cool daughter of a family friend who came by one afternoon before starting university with a buzz cut, casually explaining to my enraptured sister and me that she was “just tired of men looking at me.” They were the older brothers of friends who, during camp-outs on the Toronto Islands, would ramble through the bushes, wild and high-spirited, cracking lewd jokes and shooting Roman candles out over the lake, talking about girls and music and comics in a way that made you feel as if you were getting a peek into a thrilling world that would soon be yours to inhabit.

What 13-year-old wouldn’t want to hang out with people like that, to get a glimpse into that world, even from a distance? (...)

In so many kids’ books, the sharpest moments of sadness come from the inevitable approach of adulthood—the moment you’re no longer allowed into Narnia, the time you try to use the enchanted cupboard or the secret bell and find it no longer works, that the magic’s gone. There is nothing more melancholy than being 15 and realizing you will never, ever experience 14 again. When your heroes grow up, when the people you thought you knew so well shift their loyalties to the adult world, it can feel like a kind of betrayal.

In some ways, this sense of nostalgia hung over the festivities. On Twitter, a local YouTuber suggested that next year the programmers devote a showing to the “golden age of YouTube.” The idea that a technology still in its infancy might have already seen its best days seems absurd, but still there was the sense that, in some vital ways, the purest days of vlogging were over. Many of the stars at the festival began their YouTube careers years ago, when they were teenagers fooling around with a new technology, making silly videos for the hell of it. Now they’ve gotten older. With agents involved and sponsorship opportunities and TV deals in the air, it has become increasingly difficult to maintain the fiction that the person behind the camera is just another normal kid. Buffer Festival was ushering in a new age of professional YouTube, but it seemed not all the fans were ready. The stars, meanwhile, were awkwardly trying to make the same transition that pop singers and Disney kids and other teen idols have always had to navigate, feeling their way into adulthood and hoping their fans follow.

Last month, Charlie McDonnell posted a video simply called “Thank you :).” “The past couple of years has been very… transitional for me,” he says, smiling into the camera. “I’ve been attempting to deal with the fact that I am now growing up by doing my best to embrace it. By drinking more grown up drinks and wearing slightly more grown-up shoes and, maybe most apparently for things on your end, doing my best to make more grown-up stuff.” The video is at once a gentle explanation and a plea for understanding. He reassures his viewers that he still really, really likes making silly videos. He apologizes for neglecting his fans. He thanks those who have stuck with him. “You’re here,” he says. “Not everybody who watched me two years ago still is. But you are,” he says sincerely. The video is pitched as a note of gratitude. Mostly, though, it reads like an apology for growing up.

by Nicholas Hune-Brown, Hazlitt |  Read more:
Image: uncredited

Thursday, December 25, 2014


Nike Innovation Summit
via:

Indifference is a Power

We do this to our philosophies. We redraft their contours based on projected shadows, or give them a cartoonish shape like a caricaturist emphasising all the wrong features. This is how Buddhism becomes, in the popular imagination, a doctrine of passivity and even laziness, while Existentialism becomes synonymous with apathy and futile despair. Something similar has happened to Stoicism, which is considered – when considered at all – a philosophy of grim endurance, of carrying on rather than getting over, of tolerating rather than transcending life’s agonies and adversities.

No wonder it’s not more popular. No wonder the Stoic sage, in Western culture, has never obtained the popularity of the Zen master. Even though Stoicism is far more accessible, not only does it lack the exotic mystique of Eastern practice; it’s also regarded as a philosophy of merely breaking even while remaining determinedly impassive. What this attitude ignores is the promise proffered by Stoicism of lasting transcendence and imperturbable tranquility.

It ignores gratitude, too. This is part of the tranquility, because it’s what makes the tranquility possible. Stoicism is, as much as anything, a philosophy of gratitude – and a gratitude, moreover, rugged enough to endure anything. Philosophers who pine for supreme psychological liberation have often failed to realise that they belong to a confederacy that includes the Stoics. ‘According to nature you want to live?’ Friedrich Nietzsche taunts the Stoics in Beyond Good and Evil (1886):
O you noble Stoics, what deceptive words these are! Imagine a being like nature, wasteful beyond measure, indifferent beyond measure, without purposes and consideration, without mercy and justice, fertile and desolate and uncertain at the same time; imagine indifference itself as a power – how could you live according to this indifference? Living – is that not precisely wanting to be other than this nature? Is not living – estimating, preferring, being unjust, being limited, wanting to be different? And supposing your imperative ‘live according to nature’ meant at bottom as much as ‘live according to life’ – how could you not do that? Why make a principle of what you yourself are and must be?
This is pretty good, as denunciations of Stoicism go, seductive in its articulateness and energy, and therefore effective, however uninformed. (...)

The truth is, indifference really is a power, selectively applied, and living in such a way is not only eminently possible, with a conscious adoption of certain attitudes, but facilitates a freer, more expansive, more adventurous mode of living. Joy and grief are still there, along with all the other emotions, but they are tempered – and, in their temperance, they are less tyrannical.

If we can’t always go to our philosophers for an understanding of Stoicism, then where can we go? One place to start is the Urban Dictionary. Check out what this crowdsourced online reference to slang gives as the definition of a ‘stoic’:
stoic
Someone who does not give a shit about the stupid things in this world that most people care so much about. Stoics do have emotions, but only for the things in this world that really matter. They are the most real people alive.
Group of kids are sitting on a porch. Stoic walks by.
Kid – ‘Hey man, yur a fuckin faggot an you suck cock!’
Stoic – ‘Good for you.’
Keeps going.
You’ve gotta love the way the author manages to make mention of a porch in there, because Stoicism has its root in the word stoa, which is the Greek name for what today we would call a porch. Actually, we’re more likely to call it a portico, but the ancient Stoics used it as a kind of porch, where they would hang out and talk about enlightenment and stuff. The Greek scholar Zeno is the founder, and the Roman emperor Marcus Aurelius the most famous practitioner, while the Roman statesman Seneca is probably the most eloquent and entertaining. But the real hero of Stoicism, most Stoics agree, is the Greek philosopher Epictetus. (...)

Among those Epictetus has taught indirectly is a whole cast of the distinguished, in all fields of endeavour. One of these is the late US Navy Admiral James Stockdale. A prisoner of war in Vietnam for seven years during that conflict, he endured broken bones, starvation, solitary confinement, and all other manner of torture. His psychological companions through it all were the teachings of Epictetus, with which he had familiarised himself after graduating from college and joining the Navy, studying philosophy at Stanford University on the side. He kept those teachings close by in Vietnam, never letting them leave his mind even when things were at their most dire. Especially then. He knew what they were about, those lessons, and he came to know their application much better than anyone should have to.

Stockdale wrote a lot about Epictetus, in speeches and memoirs and essays, but if you want to travel light (and, really, what Stoic doesn’t?), the best thing you could take with you is a speech he gave at King’s College London in 1993, published as Courage Under Fire: Testing Epictetus’s Doctrines in a Laboratory of Human Behavior (1993). That subtitle is important. Epictetus once compared the philosopher’s lecture room to a hospital, from which the student should walk out in a little bit of pain. ‘If Epictetus’s lecture room was a hospital,’ Stockdale writes, ‘my prison was a laboratory – a laboratory of human behaviour. I chose to test his postulates against the demanding real-life challenges of my laboratory. And as you can tell, I think he passed with flying colours.’

Stockdale rejected the false optimism proffered by Christianity, because he knew, from direct observation, that false hope is how you went insane in that prison. The Stoics themselves believed in gods, but ultimately those resistant to religious belief can take their Stoicism the way they take their Buddhism, even if they can’t buy into such concepts as karma or reincarnation. What the whole thing comes down to, distilled to its briefest essence, is making the choice that choice is really all we have, and that all else is not worth considering. ‘Who [...] is the invincible human being?’ Epictetus once asked, before answering the question himself: ‘One who can be disconcerted by nothing that lies outside the sphere of choice.’

Any misfortune ‘that lies outside the sphere of choice’ should be considered an opportunity to strengthen our resolve, not an excuse to weaken it. This is one of the truly great mind-hacks ever devised, this willingness to convert adversity to opportunity, and it’s part of what Seneca was extolling when he wrote what he would say to one whose spirit has never been tempered or tested by hardship: ‘You are unfortunate in my judgment, for you have never been unfortunate. You have passed through life with no antagonist to face you; no one will know what you were capable of, not even you yourself.’ We do ourselves an immense favour when we consider adversity an opportunity to make this discovery – and, in the discovery, to enhance what we find there. (...)

How did we let something so eminently understandable become so grotesquely misunderstood? How did we forget that that dark passage is really the portal to transcendence?

by Lary Wallace, Aeon |  Read more:
Image: Raymond Depardon/Magnum

Orwell's World


If there were to be a statue outside the BBC’s new offices in central London that captured the spirit of its modish interior of “workstation clusters”, “back-to-back booths” and “touchdown areas”, and the daily struggle of the 5,500 employees to produce content across multiple platforms for an audience of 240m, it might be that of the anxious, well-fed, middle-aged, middle-class white male, with a lanyard dangling over his hi-vis jacket, who is running late for his meeting and struggling to fold his Brompton bicycle. That would be Ian Fletcher, the over-stretched head of values (played by Hugh Bonneville) and central character in “W1A”, the BBC’s sprightly satire about itself. But Fletcher is not the one who will be on the plinth outside Broadcasting House. In 2016 a statue of George Orwell—paid for by Michael Frayn, Tom Stoppard, David Hare and Rowan Atkinson among others—will be unveiled, a few yards beyond the outdoor ping-pong table. (...)

The interest in Orwell, his literary executor Bill Hamilton tells me, “is accelerating and expanding practically daily”. Since his death, 65 years ago, the estate has been handled by A.M. Heath (who also look after Hilary Mantel). In his office in Holborn, overlooking the Family Courts, Hamilton describes the onward march of “1984”. “We’re selling far more. We’re licensing far more stage productions than we’ve ever done before. We’re selling in new languages—Breton, Friuli, Occitan. We’ve recently done our first Kurdish deals too. We suddenly get these calls from, say, Istanbul, from the local publisher saying, ‘I want to distribute a thousand copies to the demonstrators in the square outside as part of the campaign,’ and you think, good grief, this is actually a political tool, this book. As a global recognised name, it’s at an absolute peak.” A new Hollywood movie of “1984” is in the pipeline, “Animal Farm” is also in development as a feature film, and Lee Hall, who wrote “Billy Elliot”, is writing both a stage musical version of “Animal Farm” and a television adaptation of “Down and Out in Paris and London”. It’s boom time for Orwell: “total income”, Hamilton says, “has grown 10% a year for the last three years.”

Type “#Orwellian” into the search box on Twitter and a piece in the South China Morning Post says the Communist Party mouthpiece, the People’s Daily, has attacked the pro-democracy demonstrators in Hong Kong on the Orwellian grounds that they are “anti-democratic”. An article in Forbes magazine warns of an Orwellian future in which driverless cars catch on and computer hackers track “rich people in traffic and sell this information to fleets of criminal motorcyclists”. A story in the Wall Street Journal reports the Supreme Court judge Sonia Sotomayor warning that unmanned drones will create an Orwellian future. In a piece in Politico, Timothy Snyder, professor of history at Yale, advises, “To understand Putin, read Orwell.” By Orwell, he means “1984”: “The structure and the wisdom of the book are guides, often frighteningly precise ones, to current events.” This is just the top end of the range. Barely a minute goes by when Orwell isn’t namechecked on Twitter. Only two other novelists have inspired adjectives so closely associated in the public mind with the circumstances they set out to attack: Dickens and Kafka. And they haven’t set the terms of reference in the way Orwell has. One cartoon depicts a couple, with halos over their heads, standing on a heavenly cloud as they watch a man with a halo walk towards them. “Here comes Orwell again. Get ready for more of his ‘I told you so’.” A satirical website, the Daily Mash, has the headline “Everything ‘Orwellian’, say idiots”, below which an office worker defines the word as “people monitoring everything you do, like when my girlfriend called me six times while I was in the pub with my mates. That was totally Orwellian.”

We could be using another hashtag entirely. If Orwell had stuck to the surname he had been christened with, we might now have two types of #Blairism. As Eric Blair, he was casting around for a pseudonym for “Down and Out in Paris and London” in case his low-life adventures embarrassed his family. “The name I always use when tramping etc”, he told his agent, “is P.S. Burton, but if you don’t think this sounds a probable kind of name, what about Kenneth Miles, George Orwell, H. Lewis Allways. I rather favour George Orwell.” His pseudonym, borrowed from a river in Suffolk (where his parents lived), sounds very like “all well”, but has come, in the public imagination, to stand for All Wrong. (...)

The vision of the future Aldous Huxley had conjured up in “Brave New World”, of a society rendered passive by a surplus of comforts and distraction, seemed more prescient. In 1985, the cultural critic Neil Postman argued in “Amusing Ourselves to Death” that Orwell feared that what we hate would ruin us while Huxley feared that what we love would ruin us. In 2002 J.G. Ballard, reviewing a biography of Huxley, said that “Brave New World” was “a far shrewder guess at the likely shape of a future tyranny than Orwell’s vision of Stalinist terror…‘1984’ has never really arrived, but ‘Brave New World’ is around us everywhere.”

The appearance in 1998 of “The Complete Works of George Orwell”, a massive work of scholarship taking up 20 volumes, left even some of its most admiring reviewers wondering why, out of all the British writers of the 1930s and 1940s, it was Orwell who had been singled out for this monumental tribute. New biographies appeared for the centenary of his birth in 2003—drawing on the wealth of material in the Complete Works—but that, surely, had to be it: the Orwell industry had run its course. At the end of a three-day conference in Wellesley, Massachusetts, to mark the centenary, the Orwell scholar John Rodden wondered: “Was 2003 his swan song?”

The opposite turned out to be the case. As Bill Hamilton says, “It all came roaring back with a vengeance.” At the Q&A with the cast of “1984”, I asked the actors what they had researched in terms of everyday life in 2014 to help them understand the world of the play. One answer was Edward Snowden on YouTube showing how the National Security Agency (NSA) snoops on ordinary Americans, another was news footage from the pro-democracy demonstrations in Hong Kong, and a third—from the actress playing Julia (who hadn’t been channelling Kate Middleton)—was that the most useful research for her had been living in New York in the wake of 9/11. It wasn’t the horror of the two planes going into the twin towers: it was the fear and paranoia that followed. When George Bush first heard about the attacks, he had been reading a story to children in an elementary school in Florida and he went on and finished the task in hand. After that exemplary display of statesmanship, things deteriorated. As the novelist Andrew O’Hagan wrote recently, “9/11 unleashed terrible furies in the minds of America and its allies…it literally drove the security agencies and their leaders mad with the wish to become all-knowing.” With his “war on terror”, Bush made the mistake—which Orwell would have eviscerated him for—of picking a fight with an abstract noun. Then came rendition, Guantánamo, waterboarding and the industrial-scale expansion of homeland security. “In the past”, we’re told in “1984”, “no government had the power to keep its citizens under constant surveillance.” Now the FBI can activate the camera on a laptop without the light going on to alert the user.

by Robert Butler, More Intelligent Life |  Read more:
Image: Shonagh Rae

Wednesday, December 24, 2014


Carolyn S. Murray, The LA Times California Home Book, 1982
via:

Elvis

A Brand New World In Which Men Ruled

In the history of American higher education, it is hard to top the luck and timing of the Stanford class of 1994, whose members arrived on campus barely aware of what an email was, and yet grew up to help teach the rest of the planet to shop, send money, find love and navigate an ever-expanding online universe.

They finished college precisely when and where the web was stirring to life, and it swept many of them up, transforming computer science and philosophy majors alike into dot-com founders, graduates with uncertain plans into early employees of Netscape, and their 20-year reunion weekend here in October into a miniature biography of the Internet.

A few steps from the opening party, the original Yahoo servers, marked “1994,” stood at the entrance to an engineering building, enshrined in glass cases like religious artifacts. Brunch the next morning was hosted by a venture capitalist who had made a key investment in Facebook. At a football game, the alumni brandished “Fear the Nerds” signs and gossiped about a classmate who had recently sold the messaging service WhatsApp for over $20 billion.

“I tell people I graduated from Stanford the day the web was born,” said another alumnus, Justin Kitch, whose senior thesis turned into a start-up that turned into an Intuit acquisition.

The reunion told a more particular strand of Internet history as well. The university, already the most powerful incubator in Silicon Valley, embarked back then on a bold diversity experiment, trying to dismantle old gender and racial barriers. While women had traditionally lagged in business and finance, these students were present for the creation of an entirely new field of human endeavor, one intended to topple old conventions, embrace novel ways of doing things and promote entrepreneurship.

In some fields, the women of the class went on to equal or outshine the men, including an Olympic gold medalist and the class’s best-known celebrity. Nearly half the 1,700-person class were women, and plenty were adventurous and inventive, tinkerers and computer camp veterans who competed fiercely in engineering contests; one won mention in the school paper for creating a taco-eating machine.

Yet instead of narrowing gender gaps, the technology industry created vast new ones, according to interviews with dozens of members of the class and a broad array of Silicon Valley and Stanford figures. “We were sitting on an oil boom, and the fact is that the women played a support role instead of walking away with billion-dollar businesses,” said Kamy Wicoff, who founded a website for female writers.

It was largely the men of the class who became the true creators, founding companies that changed behavior around the world and using the proceeds to fund new projects that extended their influence. Some of the women did well in technology, working at Google or Apple or hopping from one start-up adventure to the next. Few of them described experiencing the kinds of workplace abuses that have regularly cropped up among women in Silicon Valley.

But even the most successful women could not match some of their male classmates’ achievements. Some female computer science majors had dropped out of the field, and few black or Hispanic women ever worked in technology at all. The only woman to ascend through the ranks of venture capital was shunted aside by her firm. Another appeared on the cover of Fortune magazine as a great hope for gender in Silicon Valley — just before unexpectedly leaving the company she had co-founded.

Dozens of women stayed in safe jobs, in or out of technology, while they watched their spouses or former lab partners take on ambitious quests. If the wealth among alumni traveled across gender lines, it was mostly because so many had wed one another. When Jessica DiLullo Herrin, a cheerleader turned economics whiz, arrived at the tailgate party, her classmates quietly stared: She had founded two successful start-ups, a living exception to the rule.

As the class of 1994 held its reunion weekend, its members were also coming to terms with the transformative industry some had had a hand in building. Amid the school colors and the proud families were the emblems of a world-beating class of entrepreneurs.

Not everyone was troubled by the imbalance. “If meritocracy exists anywhere on earth, it is in Silicon Valley,” David Sacks, an early figure at PayPal who went on to found other companies, emailed that weekend from San Francisco, where he was renovating one of the most expensive homes ever purchased in the city.

Without even setting foot back on campus for the reunion, he was stirring up old ghosts. Because Stanford was so intertwined with the businesses it fostered, the relationships and debates of the group’s undergraduate years had continued to ripple through Silicon Valley, imprinting a new industry in ways no one had anticipated. Mr. Sacks had fought the school’s diversity efforts bitterly; those battles had first made him an outcast among many of his classmates, and then sparked his technology career.

Every reunion is a reckoning about merit, choice and luck, but as the members of the class of ’94 told their stories, that weekend and in months of interviews before, they were also grappling with the nature of the industry some had helped create. Had the Internet failed to fulfill its promise to democratize business, or had the women missed the moment? Why did Silicon Valley celebrate some kinds of outsiders but not others?

“The Internet was supposed to be the great equalizer,” said Gina Bianchini, the woman who had appeared on the cover of Fortune. “So why hasn’t our generation of women moved the needle?”

by Jodi Kantor, NY Times |  Read more:
Image: Stanford Yearbook and Max Whittaker

The Rise and Fall of the Superfan

During last Tuesday night’s warm-ups, the Brooklyn Nets wore neon T-shirts honoring the man they knew as Jeffrey Gamblero. His dancing, explosive enthusiasm and eclectic outfits had made him something of an unofficial Nets mascot, and an air of sadness had settled on the team after his death a few days before. Gamblero had been a member of that strangest of tribes, the Superfans. Men and women who, while supporting the famous, become famous themselves. Gamblero had become intertwined with the Nets to such an extent that he was part of their identity. And what troubled many was whether his devotion to the team contributed to his death. (...)

The hipster-like aspect of the Gamblero persona included the fact that he appeared to be ironically taking his cues from the classic Superfan archetype, a character that pops up in sporting events all over. The Superfan usually wears “look at me” outfits (wigs, body paint, and outrageous clothes all in team colors) while engaging in ridiculous behavior. There’s even a classic Simpsons episode about the phenomenon, where Homer Simpson briefly finds his true calling as “Dancin’ Homer.”

They’re not universally beloved figures, Superfans, even among their fellow supporters. To some, they’re entertaining distractions. To others, they’re annoying, deserving objects of outright scorn for making the rest of us look bad.

The anger is understandable. The Superfan, after all, is the stereotypical example of one’s fandom, a figure few want to identify with: the shirtless guy wearing body paint in subzero temperatures; the overzealous screaming buffoon; the sports talk radio regular ranting about how “we” are going to run the table and win the championship every season, year after year, until the heat death of the universe, etc.

The truth is that being a fan often requires an unstable combination of ridiculousness and lack of self-consciousness. To be a fan is to be irrational, to act in ways that are unacceptable in other contexts or that completely contradict our everyday selves. Self-conscious types who never dance will boogie victoriously while their team is blowing out rivals. Stoic men’s men who never cry will blubber like babies when their team win a championship.

When we do all of this we usually look stupid. Maybe there’s a cultural need for the Superfan. The Superfan distracts attention, not to mention the camera, away from us and our often regrettable behavior during sporting events. We can look at the Superfan and say “hey at least we’re not that bad.” (...)

Superfans like Clipper Darrell and the New York Jets’ Fireman Ed (who retired his persona as the team’s struggles took their toll on him), though, are mostly regional figures and usually not the most recognizable of their ilk. Maybe the most famous examples of Superfans aren’t anything of the sort. They are celebrity fans mostly in the sense that they are fans of the idea of celebrity, and they target sporting events because they’re among the best places to be seen.

While the two types share a desire for reinvention and attention, this second type is a more disruptive figure whose desire for fame doesn’t come under the cover of “I’m just a passionate fan.” Through the 70s and 80s being a faux-Superfan, a celebrity spectator really, was mostly a way to become a reality TV star back before reality TV existed. They were the pioneers of the photobomb.

by Hunter Felt, The Guardian |  Read more:
Image: Forbes

Tuesday, December 23, 2014

The Beatles

My Unhealthy Obsession with Bob Dylan's Christmas Lights

I have mixed feelings about Christmas decorations. Often, I like them. Rarely do I find them inspiring.

I live in Malibu, California. The average version of a decorated yard in my neighborhood looks more or less like an outdoor restaurant courtyard at a four-star hotel. The decorating style is consistent because many of the residents hire the same company to wrap their trees and shrubbery for them. They all look somewhat like this:


So intimidated was I that for many years I never decorated my yard. It seemed too daunting, too expensive. And then, in 2008, the veil was lifted. A Christmas miracle occurred. A new role model appeared before me, and as luck or serendipity would have it, it was the same role model I used to turn to for creative and lifestyle advice as a teenager.

I speak now of the first time I noticed that Bob Dylan had wedged a small, decidedly uneven, single strand of Christmas lights into the hedge in front of his estate.

It's possible that they were there before 2008. I'm embarrassed to say that before then I wasn't really paying attention. But it also makes sense that this was the very beginning, since he released his one and (so far) only Christmas album, Christmas in the Heart, in 2009.

Here is the earliest known photo I took of Mr. Dylan's holiday oeuvre.


I was immediately taken by his distinctive approach to decorating. Much the way he forged his own path in music, he exhibited an independence of style significantly different from that of the other homes in the area.

If a professional decorating staff was enlisted, their work was subtle to the point of being invisible, deeply disguised by a faux-naïve approach that recalls Matisse or Chagall. The string of lights seemed to say, "We have been casually tossed into this hedge by someone in a hurry." But of course, this was no randomly displayed, haphazardly arranged, string of colored bulbs. What we had here was the work of Bob Dylan: prolific poet and songwriter, painter, filmmaker, paterfamilias to a whole generation of creative offspring, gate welder, patron of Christmas, born-again Christian and born-again Jew, seer, genius.

So I returned the following year, in 2009, to once again stare at the ever more erratically shaped curvilinear lines.


Having grown up in a world where nothing Bob Dylan has ever done is considered too small to merit serious consideration and scrutiny, this was the year I began to wonder if these lights contained a deeper meaning. Using Christmas lights as a medium, was there something beneath the surface that Mr. Dylan was trying to tell us?

I decided to embark on a multi-year quest. My goal: to contribute to the existing body of knowledge about this legendary artist. Thus did I return, season after season, much like the holidays themselves, as I sought to uncover the subtext behind these deceptively simple annual statements.

The serious student of Mr. Dylan will not be surprised to learn that careful examination did indeed reveal many hidden layers. What first appeared random was, in fact, the complete opposite.

by Merrill Markoe, Vice |  Read more:
Images: Merrill Markoe

Understanding “New Power”

We all sense that power is shifting in the world. We see increasing political protest, a crisis in representation and governance, and upstart businesses upending traditional industries. But the nature of this shift tends to be either wildly romanticized or dangerously underestimated.

There are those who cherish giddy visions of a new techno-utopia in which increased connectivity yields instant democratization and prosperity. The corporate and bureaucratic giants will be felled and the crowds coronated, each of us wearing our own 3D-printed crown. There are also those who have seen this all before. Things aren’t really changing that much, they say. Twitter supposedly toppled a dictator in Egypt, but another simply popped up in his place. We gush over the latest sharing-economy start-up, but the most powerful companies and people seem only to get more powerful.

Both views are wrong. They confine us to a narrow debate about technology in which either everything is changing or nothing is. In reality, a much more interesting and complex transformation is just beginning, one driven by a growing tension between two distinct forces: old power and new power.

Old power works like a currency. It is held by few. Once gained, it is jealously guarded, and the powerful have a substantial store of it to spend. It is closed, inaccessible, and leader-driven. It downloads, and it captures.

New power operates differently, like a current. It is made by many. It is open, participatory, and peer-driven. It uploads, and it distributes. Like water or electricity, it’s most forceful when it surges. The goal with new power is not to hoard it but to channel it.

The battle and the balancing between old and new power will be a defining feature of society and business in the coming years. In this article, we lay out a simple framework for understanding the underlying dynamics at work and how power is really shifting: who has it, how it is distributed, and where it is heading.

by Jeremy Heimans and Henry Timms, HBR | Read more:
Image: uncredited

Monday, December 22, 2014


[ed. My weakness]
via:

Unhappy Truckers and Other Algorithmic Problems


When Bob Santilli, a senior project manager at UPS, was invited in 2009 to his daughter’s fifth grade class on Career Day, he struggled with how to describe exactly what he did for a living. Eventually, he decided he would show the class a travel optimization problem of the kind he worked on, and impress them with how fun and complex it was. The challenge was to choose the most efficient route among six different stops, in a typical suburban-errands itinerary. The class devised their respective routes, then began picking them over. But one girl thought past the question of efficiency. “She says, my mom would never go to the store and buy perishable things—she didn’t use the word perishable, I did—and leave it in the car the whole day at work,” Santilli tells me.

Her comment reflects a basic truth about the math that runs underneath the surface of nearly every modern transportation system, from bike-share rebalancing to airline crew scheduling to grocery delivery services. Modeling a simplified version of a transportation problem presents one set of challenges (and they can be significant). But modeling the real world, with constraints like melting ice cream and idiosyncratic human behavior, is often where the real challenge lies. As mathematicians, operations research specialists, and corporate executives set out to mathematize and optimize the transportation networks that interconnect our modern world, they are re-discovering some of our most human quirks and capabilities. They are finding that their job is as much to discover the world, as it is to change it.

The problem that Santilli posed to his daughter’s class is known as a traveling salesman problem. Algorithms solving this problem are among the most important and most commonly implemented in the transportation industry. Generally speaking, the traveling salesman problem asks: Given a list of stops, what is the most time-efficient way for a salesman to make all those stops? In 1962, for example, a Procter and Gamble advertisement tasked readers with such a challenge: To help “Toody and Muldoon,” co-stars of the Emmy-award-winning television show Car 54, Where Are You?, devise a 33-city trip across the continental United States. “You should plan a route for them from location to location,” went the instructions, “which will result in the shortest total mileage from Chicago, Illinois, back to Chicago, Illinois.”

A mathematician claimed the prize, and a regal $10,000. But the contest organizers could only verify that his solution was the shortest of those submitted, and not that it was the shortest possible route. That’s because solving a 33-city problem by calculating every route individually would require 28 trillion years—on the Department of Energy’s 129,000-core supercomputer Roadrunner (which is among the world’s fastest clusters). It’s for this reason that William J. Cook, in his book In Pursuit of the Traveling Salesman, calls the traveling salesman problem “the focal point of a larger debate on the nature of complexity and possible limits to human knowledge.” Its defining characteristic is how quickly the complexity scales. A six-city tour has only 720 possible paths, while a 20-city tour has—by Cook’s quick calculations on his Mac—more than 100 quadrillion possible paths.
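
[ed. To make that blow-up concrete, here's a minimal brute-force sketch in Python. The six stops and their (x, y) coordinates are invented for illustration — the point is that the permutation count, not the arithmetic, is what kills you:]

```python
# Brute-force traveling salesman: check every possible ordering.
# The cities below are hypothetical; any (x, y) layout works.
import itertools
import math

cities = {
    "A": (0, 0), "B": (1, 5), "C": (4, 2),
    "D": (6, 6), "E": (3, 7), "F": (7, 1),
}

def tour_length(order):
    """Total distance of a round trip visiting stops in the given order."""
    legs = zip(order, order[1:] + order[:1])  # wrap back to the start
    return sum(math.dist(cities[a], cities[b]) for a, b in legs)

start, *rest = cities  # iterating a dict yields its keys; fix "A" as the start
# Six cities = 6! = 720 directed tours, or 120 once the start is fixed.
# The same loop for the 33-city Car 54 contest would need 32! ≈ 2.6e35 passes.
best = min(((start,) + perm for perm in itertools.permutations(rest)),
           key=tour_length)
print(best, round(tour_length(best), 2))
```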

There are answers to some traveling salesman problems. Cook himself has produced an iPhone app that will crack 100 cities, using relaxed linear programming and other algorithmic techniques. And every few years or so, teams armed with sophisticated hardware and programming approaches set the bar higher. In 2006, for example, an optimal tour was produced by a team led by Cook for an 85,900-city problem. It did not, of course, given the computing constraints mentioned above, involve checking each route individually. “There is no hope to actually list all the road trips between New York and Los Angeles,” he says. Instead, almost all of the computation went into proving that there is no tour shorter than the one his team found. In essence, there is an answer, but there is not a solution. “By solution,” writes Cook, “we mean an algorithm, that is a step-by-step recipe for producing an optimal tour for any example we may now throw at it.”
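
[ed. The other half of that story — an answer without a proof — is just as easy to sketch. A greedy nearest-neighbor pass (same invented coordinates as above, and again only a sketch) produces a tour almost instantly, but it carries no certificate that nothing shorter exists; closing that gap is where nearly all the computation in results like Cook's goes:]

```python
# Nearest-neighbor heuristic: always hop to the closest unvisited stop.
# Fast and often decent -- but it yields *an* answer, not a proof of optimality.
import math

cities = {
    "A": (0, 0), "B": (1, 5), "C": (4, 2),
    "D": (6, 6), "E": (3, 7), "F": (7, 1),
}

def nearest_neighbor_tour(start="A"):
    unvisited = set(cities) - {start}
    tour = [start]
    while unvisited:
        here = tour[-1]
        # Greedily pick the closest remaining stop.
        nxt = min(unvisited, key=lambda c: math.dist(cities[here], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbor_tour()
length = sum(math.dist(cities[a], cities[b])
             for a, b in zip(tour, tour[1:] + tour[:1]))
print(tour, round(length, 2))  # an answer, not a proof
```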

And that solution may never come. The traveling salesman problem is at the heart of an ongoing question—the question—in computer science: whether or not P equals NP. As summarized with blunt elegance by MIT’s news office, “roughly speaking, P is a set of relatively easy problems, NP is a set of incredibly hard problems, and if they’re equal, then a large number of computer science problems that seem to be incredibly hard are actually relatively easy.” The Clay Mathematics Institute offers a $1 million reward for solving a meta-problem hovering like a mothership over the Car 54 challenge and its ilk: proving that P does or does not equal NP.

By now it should be clear that we are not talking just about the routing needs of salesmen, for even the most trenchant of regional reps does not think about hitting 90,000 far-flung burghs on a call. But the traveling salesman problem and its intellectual cousins are far from theoretical; indeed, they are at the invisible heart of our transportation networks. Every time you want to go somewhere, or you want something to get to you, the chances are someone is thinking at that very moment how to make that process more efficient. We are all of us traveling salesmen.

by Tom Vanderbilt, Nautilus |  Read more:
Image: Peter and Maria Hoey

John Cameron Mitchell on "Hedwig and the Angry Inch"


[ed. A friend and I were talking movies the other day and he mentioned that he hadn't seen Hedwig and the Angry Inch. By coincidence, this interview with John Cameron Mitchell (who wrote, directed and starred in the movie) came up on my YouTube scroll just a couple days later. Great interview (and it sounds like a sequel is in the works). If you haven't seen Hedwig, you have to! Such a wonderfully conceived and sympathetic character. (The music alone is worth it - see last link below). A favorite.]


[ed. Mostly just by muddling along between planned and unplanned life events.]

NY Times: Prosecute Torturers and Their Bosses

[ed. About time. Actually, way past time.]

Since the day President Obama took office, he has failed to bring to justice anyone responsible for the torture of terrorism suspects — an official government program conceived and carried out in the years after the attacks of Sept. 11, 2001.

He did allow his Justice Department to investigate the C.I.A.'s destruction of videotapes of torture sessions and those who may have gone beyond the torture techniques authorized by President George W. Bush. But the investigation did not lead to any charges being filed, or even any accounting of why they were not filed.

Mr. Obama has said multiple times that “we need to look forward as opposed to looking backwards,” as though the two were incompatible. They are not. The nation cannot move forward in any meaningful way without coming to terms, legally and morally, with the abhorrent acts that were authorized, given a false patina of legality, and committed by American men and women from the highest levels of government on down. (...)

The American Civil Liberties Union and Human Rights Watch are to give Attorney General Eric Holder Jr. a letter Monday calling for the appointment of a special prosecutor to investigate what appears increasingly to be “a vast criminal conspiracy, under color of law, to commit torture and other serious crimes.”

The question everyone will want answered, of course, is: Who should be held accountable? That will depend on what an investigation finds, and as hard as it is to imagine Mr. Obama having the political courage to order a new investigation, it is harder to imagine a criminal probe of the actions of a former president.

But any credible investigation should include former Vice President Dick Cheney; Mr. Cheney’s chief of staff, David Addington; the former C.I.A. director George Tenet; and John Yoo and Jay Bybee, the Office of Legal Counsel lawyers who drafted what became known as the torture memos. There are many more names that could be considered, including Jose Rodriguez Jr., the C.I.A. official who ordered the destruction of the videotapes; the psychologists who devised the torture regimen; and the C.I.A. employees who carried out that regimen. (...)

Starting a criminal investigation is not about payback; it is about ensuring that this never happens again and regaining the moral credibility to rebuke torture by other governments. Because of the Senate’s report, we now know the distance officials in the executive branch went to rationalize, and conceal, the crimes they wanted to commit. The question is whether the nation will stand by and allow the perpetrators of torture to have perpetual immunity for their actions.

by Editors, NY Times |  Read more:
Image: Win McNamee/Getty Images