Monday, December 29, 2014
The Fall of the Creative Class
[ed. For more background on Richard Florida's work see: Questioning the Cult of the Creative Class. Also, if you're interested: the response and counter-response to this essay.]
In the late 1990s, my wife and I got in a U-Haul, hit I-90 and headed west for a few days until we came to Portland, Oregon. We had no jobs, no apartment, and no notion other than getting out of Minnesota.
We chose Portland mainly because it was cheaper than the other places we’d liked on a month-long road trip through the West (San Francisco, Seattle, Missoula), because it had a great book store we both fell in love with, and because I had a cousin who lived there in the northeast part of the city, which was somewhat less trendy back then. (Our first night, police found a body in the park across the street.) The plan was to stay a year, then try the other coast, then who knows? We were young! But we loved it and stayed for nearly five years. Then, when we started thinking of breeding, like salmon, we decided to swim back to the pool in which we were bred.
For a variety of not-very-well-thought-out reasons, this brought us to Madison, Wisconsin. It wasn’t too far from our families. It had a stellar reputation. And for the Midwest, it possessed what might pass for cachet. It was liberal and open-minded. It was a college town. It had coffee shops and bike shops. Besides, it had been deemed a “Creative Class” stronghold by Richard Florida, the prophet of prosperous cool. We had no way of knowing how wrong he was about Madison…and about everything.
Florida’s idea was a nice one: Young, innovative people move to places that are open and hip and tolerant. They, in turn, generate economic innovation. I loved this idea because, as a freelance writer, it made me important. I was poor, but somehow I made everyone else rich! It seemed to make perfect sense. Madison, by that reasoning, should have been clamoring to have me, since I was one of the mystical bearers of prosperity. (...)
For some reason, these and most other relationships never quite blossomed the way we’d hoped, the way they had in all the other places we’d lived. For a time, my wife had a soulless job with a boss who sat behind her, staring at the back of her head. I found work in a dusty tomb of a bookstore, doing data entry with coworkers who complained about their neurological disorders, or who told me about the magical creatures they saw on their way home, and who kept websites depicting themselves as minotaurs.
I’m not sure what exactly I expected, but within a year or two it was clear that something wasn’t right. If Madison was such a Creative Class hotbed overflowing with independent, post-industrial workers like myself, we should have fit in. Yet our presence didn’t seem to matter to anyone, creatively or otherwise. And anyway, Madison’s economy was humming along with unemployment around four percent, while back in fun, creative Portland, it was more than twice that, at eight and a half percent. This was not how the world according to Florida was supposed to work. I started to wonder if I’d misread him. Around town I encountered a few other transplants who also found themselves scratching their heads over what the fuss had been about. Within a couple years, most of them would be gone. (...)
Jamie Peck is a geography professor who has been one of the foremost critics of Richard Florida’s Creative Class theory. He now teaches at the University of British Columbia in Vancouver, but at the time Florida’s book was published in 2002, he was also living in Madison. “The reason I wrote about this,” Peck told me on the phone, “is because Madison’s mayor started to embrace it. I lived on the east side of town, probably as near to this lifestyle as possible, and it was bullshit that this was actually what was driving Madison’s economy. What was driving Madison was public sector spending through the university, not the dynamic Florida was describing.”
In his initial critique, Peck said The Rise of the Creative Class was filled with “self-indulgent forms of amateur microsociology and crass celebrations of hipster embourgeoisement.” That’s another way of saying that Florida was just describing the “hipsterization” of wealthy cities and concluding that this was what was causing those cities to be wealthy. As some critics have pointed out, that’s a little like saying that the high number of hot dog vendors in New York City is what’s causing the presence of so many investment bankers. So if you want banking, just sell hot dogs. “You can manipulate your arguments about correlation when things happen in the same place,” says Peck.
What was missing, however, was any actual proof that the presence of artists, gays and lesbians or immigrants was causing economic growth, rather than economic growth causing the presence of artists, gays and lesbians or immigrants. Some more recent work has tried to get to the bottom of these questions, and the findings don’t bode well for Florida’s theory. In a four-year, $6 million study of thirteen cities across Europe called “Accommodating Creative Knowledge,” published in 2011, researchers found that one of Florida’s central ideas—the migration of creative workers to places that are tolerant, open and diverse—was simply not happening.
“They move to places where they can find jobs,” wrote author Sako Musterd, “and if they cannot find a job there, the only reason to move is for study or for personal social network reasons, such as the presence of friends, family, partners, or because they return to the place where they have been born or have grown up.” But even if they had been pouring into places because of “soft” factors like coffee shops and art galleries, according to Stefan Krätke, author of a 2010 German study, it probably wouldn’t have made any difference, economically. Krätke broke Florida’s Creative Class (which includes accountants, realtors, bankers and politicians) into five separate groups and found that only the “scientifically and technologically creative” workers had an impact on regional GDP. Krätke wrote “that Florida’s conception does not match the state of findings of regional innovation research and that his way of relating talent and technology might be regarded as a remarkable exercise in simplification.”
Perhaps one of the most damning studies was in some ways the simplest. In 2009 Michele Hoyman and Chris Faricy published a study using Florida’s own data from 1990 to 2004, in which they tried to find a link between the presence of creative class workers and any kind of economic growth. “The results were pretty striking,” said Faricy, who now teaches political science at Washington State University. “The measurement of the creative class that Florida uses in his book does not correlate with any known measure of economic growth and development. Basically, we were able to show that the emperor has no clothes.” Their study also questioned whether the migration of the creative class was happening. “Florida said that creative class presence—bohemians, gays, artists—will draw what we used to call yuppies in,” says Hoyman. “We did not find that.”
by Frank Bures, Thirty Two Magazine | Read more:
Image: Will Dinski
Kazuo Ishiguro, The Art of Fiction No. 196
[ed. One of my favorite authors has a new book coming out in March: The Buried Giant.]
The man who wrote The Remains of the Day in the pitch-perfect voice of an English butler is himself very polite. After greeting me at the door of his home in London’s Golders Green, he immediately offered to make me tea, though to judge from his lack of assurance over the choice in his cupboard he is not a regular four P.M. Assam drinker. When I arrived for our second visit, the tea things were already laid out in the informal den. He patiently began recounting the details of his life, always with an amused tolerance for his younger self, especially the guitar-playing hippie who wrote his college essays using disembodied phrases separated by full stops. “This was encouraged by professors,” he recalled. “Apart from one very conservative lecturer from Africa. But he was very polite. He would say, Mr. Ishiguro, there is a problem about your style. If you reproduced this on the examination, I would have to give you a less-than-satisfactory grade.”

Kazuo Ishiguro was born in Nagasaki in 1954 and moved with his family to the small town of Guildford, in southern England, when he was five. He didn’t return to Japan for twenty-nine years. (His Japanese, he says, is “awful.”) At twenty-seven he published his first novel, A Pale View of Hills (1982), set largely in Nagasaki, to near unanimous praise. His second novel, An Artist of the Floating World (1986), won Britain’s prestigious Whitbread award. And his third, The Remains of the Day (1989), sealed his international fame. It sold more than a million copies in English, won the Booker Prize, and was made into a Merchant Ivory movie starring Anthony Hopkins, with a screenplay by Ruth Prawer Jhabvala. (An earlier script by Harold Pinter, Ishiguro recalls, featured “a lot of game being chopped up on kitchen boards.”) Ishiguro was named an Officer of the Order of the British Empire and, for a while, his portrait hung at 10 Downing Street. Defying consecration, he surprised readers with his next novel, The Unconsoled (1995), more than five hundred pages of what appeared to be stream-of-consciousness. Some baffled critics savaged it; James Wood wrote that “it invents its own category of badness.” But others came passionately to its defense, including Anita Brookner, who overcame her initial doubts to call it “almost certainly a masterpiece.” The author of two more acclaimed novels—When We Were Orphans (2000) and Never Let Me Go (2005)—Ishiguro has also written screenplays and teleplays, and he composes lyrics, most recently for the jazz chanteuse Stacey Kent. Their collaborative CD, Breakfast on the Morning Tram, was a best-selling jazz album in France.
In the pleasant white stucco house where Ishiguro lives with his sixteen-year-old daughter, Naomi, and his wife, Lorna, a former social worker, there are three gleaming electric guitars and a state-of-the-art stereo system. The small office upstairs where Ishiguro writes is custom designed in floor-to-ceiling blond wood with rows of color-coded binders neatly stacked in cubbyholes. Copies of his novels in Polish, Italian, Malaysian, and other languages line one wall. (...)
INTERVIEWER
You had success with your fiction right from the start—but was there any writing from your youth that never got published?
KAZUO ISHIGURO
After university, when I was working with homeless people in west London, I wrote a half-hour radio play and sent it to the BBC. It was rejected but I got an encouraging response. It was kind of in bad taste, but it’s the first piece of juvenilia I wouldn’t mind other people seeing. It was called “Potatoes and Lovers.” When I submitted the manuscript, I spelled potatoes incorrectly, so it said potatos. It was about two young people who work in a fish-and-chips café. They are both severely cross-eyed, and they fall in love with each other, but they never acknowledge the fact that they’re cross-eyed. It’s the unspoken thing between them. At the end of the story they decide not to marry, after the narrator has a strange dream where he sees a family coming toward him on the seaside pier. The parents are cross-eyed, the children are cross-eyed, the dog is cross-eyed, and he says, All right, we’re not going to marry.
INTERVIEWER
What possessed you to write that story?
by Susannah Hunnewell, Paris Review | Read more:
Image: Matt Carr/Getty Images
The Foreign Spell
It’s fashionable in some circles to talk of Otherness as a burden to be borne, and there will always be some who feel threatened by—and correspondingly hostile to—anyone who looks and sounds different from themselves. But in my experience, foreignness can as often be an asset. The outsider enjoys a kind of diplomatic immunity in many places, and if he seems witless or alien to some, he will seem glamorous and exotic to as many others. In open societies like California, someone with Indian features such as mine is a target of positive discrimination, as strangers ascribe to me yogic powers or Vedic wisdom that couldn’t be further from my background (or my interest).
Besides, the very notion of the foreign has been shifting in our age of constant movement, with more than fifty million refugees; every other Torontonian you meet today is what used to be called a foreigner, and the number of people living in lands they were not born to will surpass 300 million in the next generation. Soon there’ll be more foreigners on earth than there are Americans. Foreignness is a planetary condition, and even when you walk through your hometown—whether that’s New York or London or Sydney—half the people around you are speaking in languages and dealing in traditions different from your own. (...)
Growing up, I soon saw that I was ill-equipped for many things by my multi-continental upbringing—I would never enjoy settling down in any one place, and I wouldn’t vote anywhere for my first half-century on earth—but I saw, too, that I had been granted a kind of magic broomstick that few humans before me had ever enjoyed. By the age of nine, flying alone over the North Pole six times a year—between my parents’ home in California and my schools in England—I realized that only one generation before, when my parents had gone to college in Britain, they had had to travel for weeks by boat, sometimes around the stormy Cape of Good Hope. When they bid goodbye to their loved ones—think of V. S. Naipaul hearing of his father’s death while in England, but unable to return to Trinidad—they could not be sure they’d ever see them again.
At seventeen, I was lucky enough to spend the summer in India, the autumn in England, the winter in California, and the spring bumping by bus from Tijuana down to Bolivia—and then up the west coast of South America. I wasn’t rich, but the door to the world was swinging open for those of us ready to live rough and call ourselves foreigners for life. If my native India, the England of my childhood, and the America of my official residence were foreign, why not spend time in Yemen and on Easter Island?
In retrospect, it seems inevitable that I would move, in early adulthood, to what still, after twenty-seven years of residence, remains the most foreign country I know, Japan. However long I live here, even if I speak the language fluently, I will always be a gaikokujin, an “outsider person,” whom customs officials strip-search and children stare at as they might a yeti. I’m reminded of this on a daily basis. Even the dogs I pass on my morning walks around the neighborhood bark and growl every time they catch wind of this butter-reeking alien.
Japan remains itself by maintaining an unbreachable divide between those who belong to the group and those who don’t. This has, of course, left the country behind in an ever more porous world of multiple homes, and is a source of understandable frustration among, say, those Koreans who have lived in the country for generations but were—until relatively recently—obliged to be fingerprinted every year and denied Japanese passports. Yet for a lifelong visitor, the clarity of its divisions is welcome; in free-and-easy California, I always feel as accepted as everyone else, but that doesn’t make me feel any more Californian. Besides, I know that Japan can work as smoothly as it does only by having everyone sing their specific parts from the same score, creating a single choral body. The system that keeps me out produces the efficiency and harmony that draws me in.
I cherish foreignness, personally and internationally, and feel short-shrifted when United Airlines, like so many multinationals today, assures me in a slogan, “The word foreign is losing its meaning”; CNN, for decades, didn’t even use the word, in deference to what it hoped would be a global audience. Big companies have an investment in telling themselves—and us—that all the world’s a single market. Yet all the taco shacks and Ayurvedic doctors and tai chi teachers in the world don’t make the depths of other cultures any more accessible to us. “Read The Sheltering Sky,” I want to tell my neighbors in California as they talk about that adorable urchin they met in the souk in Marrakesh. Next time you’re in Jamaica—or Sri Lanka or Cambodia—think of Forster’s Marabar Caves as much as of the postcard sights that leave you pleasantly consoled. Part of the power of travel is that you stand a good chance of being hollowed out by it. The lucky come back home complaining about crooked rug merchants and dishonest taxi drivers; the unlucky never come home at all.
by Pico Iyer, Lapham's Quarterly | Read more:
Image: Islands, by Brad Kunkle, 2012
Going Aboard
When Herman Melville was twenty-one, he embarked on the whaleship Acushnet, out of New Bedford. We all know what that led to. This past summer, Mystic Seaport finished their five-year, 7.5-million-dollar restoration of the 1841 whaleship Charles W. Morgan, the sister ship to the Acushnet. The Morgan is in many ways identical to Melville’s fictional Pequod, save that sperm whale jawbone tiller and a few other sinister touches. Mystic Seaport celebrated the completion by sailing the Morgan around New England for a couple months. I went aboard for a night and a day, intent on following in Ishmael’s footsteps, hoping to breathe a little life into my idea of the distant, literary ship.
by Ben Shattuck, Paris Review | Read more:
Image: Ben Shattuck
2014: The Year When Activist Documentaries Hit the Breaking Point
If I were making a documentary about the uniformity that has infested modern documentaries, it would go something like this: Open with a sequence detailing the extent of the problem, flashing on examples of its reach, cutting in quick, declarative sound bites, scored with music of steadily mounting tension that climaxes just as the title is revealed. Over the next 90-120 minutes, I would lay out the problem in greater detail, primarily via copious interviews with experts on the subject, their data points illustrated via scores of snazzily animated infographics. Along the way, I would introduce the viewer to a handful of Regular Folk affected by the issue at hand, and show how their daily lives have become a struggle (or an inspiration). But lest I send the viewer staggering from the theater bereft of hope, I’d conclude by explaining, in the simplest terms possible, exactly how to solve the problem. And then, over the end credits, I would tell you, the viewer, what you can do to help — beginning with a visit to my documentary’s official website.
What you would learn from this film is that too many of today’s documentaries have become feature-length versions of TV newsmagazine segments, each a 60 Minutes piece stretched out to two hours, two pounds of sugar in a five-pound bag. And perhaps this viewer became more aware of it in 2014 because, early in the year, I saw a film that was like a case study in what’s wrong with this approach: Fed Up, a position-paper doc on the obesity epidemic. It’s got the thesis-paragraph pre-title opening, the animated graphics (complete with cutesy, nonstop sound effects), the closing-credit instructions. And then, as if its TV-news style isn’t obvious enough, it’s even got the comically commonplace “headless fat people walking down the streets” B-roll and narration by, no kidding, Katie Couric.
Fed Up plays like something made to burn off time on MSNBC some Saturday afternoon between reruns of Caught On Camera and Lock-Up, but nope: I saw it at the beginning of 2014 because it was playing at the Sundance Film Festival. It received a simultaneous theatrical and VOD release in May; last month, Indiewire reported that its robust earnings in both have made it one of the year’s most successful documentaries.
Look, this could just be a matter of pet peeves and personal preferences, and of trends that have merely made themselves apparent to someone whose vocation requires consumption of more documentaries than the average moviegoer. But this formula, and the style that goes hand in hand with it, is infecting more and more nonfiction films, lending an air of troubling sameness to activist docs like Ivory Tower (on the financial crisis of higher education) and Citizen Koch (on the massive casualties of the Citizens United decision). But it’s been in the air for some time, with earlier films like Food Inc., Bully, The Invisible War, Waiting for “Superman,” and the granddaddy of the movement, Davis Guggenheim’s Oscar-winning An Inconvenient Truth — a film, lest we forget, about a PowerPoint presentation. And it doesn’t stop there; even a profile movie like Nas: Time Is Illmatic has a big, state-the-premise pre-title sequence, which plays, in most of these films, like the teaser before the first commercial break.
The formulaic construction of these documentaries — as set in stone as the meet-cute/hate/love progression of rom-coms or the rise/addiction/fall/comeback framework of the music biopic — is particularly galling because it’s shackling a form where even fewer rules should apply. The ubiquity (over the past decade and a half) of low-cost, low-profile, high-quality video cameras and user-friendly, dirt-cheap non-linear editing technology has revolutionized independent film in general, allowing young filmmakers opportunities to create professional-looking product even directors of the previous generation could only dream of. (...)
It’s easy to arrive at that point with these diverse subjects, the logic goes, but a more straightforward, news-doc approach is required for aggressive, activist documentaries with points to make and moviegoers to educate — and the commonness of that thinking is perhaps why so many critics have gone nuts for CITIZENFOUR, Laura Poitras’ account of Edward Snowden’s leak of NSA documents detailing surveillance programs around the world. That’s a giant topic, but the surprise of the picture is how intimate and personal it is, primarily due to the filmmaker’s place within the story: she was the contact point for Snowden, hooked in to his actions via encrypted messages, in the room with the whistleblower as he walked through the documents with Glenn Greenwald.
As a result, much of the film is spent in Snowden’s Hong Kong hotel, Poitras’ camera capturing those explanations and strategy sessions, a procedural detailing logistics, conferences, and conversations. There are no expert talking heads to provide (unnecessary, I would argue) context; there are no jazzy charts and graphs to explain it all to the (presumably) slower folks in the audience. The only such images come in a quick-cut montage of illustrations within the leaked documents, and they’re solely that — illustrations. The most powerful and informative graphics in the film are the mesmerizing images of encrypted messages from Snowden to Poitras, which fill the screen with impenetrable numbers, letters, and symbols, before clearing away to reveal the truth underneath, a powerful metaphor for Snowden’s actions (and the film itself).
by Jason Bailey, Flavorwire | Read more:
Image: Fed Up
Sunday, December 28, 2014
The Capitalist Nightmare at the Heart of Breaking Bad
Back in October, you could have gone to Toys ’R’ Us and picked up the perfect present for the Breaking Bad fan in your family. Fifty bucks (all right, let’s assume you’re in Albuquerque) would buy you “Heisenberg (Walter White)” complete with a dinky little handgun clutched in his mitt; his sidekick Jesse, in an orange hazmat suit, was yours for $40. But then a Florida mom (it’s always a mom; it’s often in Florida) objected, and got a petition going, needless to say. “While the show may be compelling viewing for adults, its violent content and celebration of the drug trade make this collection unsuitable to be sold alongside Barbie dolls and Disney characters,” she wrote.
It’s worth noting, perhaps, that if Barbie’s proportions had their equivalent in an adult female, that woman would have room for only half a liver and a few inches of intestine; her tiny feet and top-heavy frame would oblige her to walk on all fours. A great role model? I’m not so sure. (And Disney is not always entirely benign. My mother was five when Snow White came out; I’m not sure she ever really recovered from her encounter with its Evil Queen.)
“I’m so mad, I am burning my Florida Mom action figure in protest,” Bryan Cranston tweeted when the storm broke. Cranston went from advertising haemorrhoid cream (“Remember – oxygen action is special with Preparation H”) to playing Hal, the goofy dad on Malcolm in the Middle, to full-on superstardom as Breaking Bad became a talisman of modern popular culture. The show began broadcasting in the US in January 2008 and ran for five seasons. Stephen King called it the best television show in 15 years; it was showered with dozens of awards; Cranston took the Emmy for Outstanding Lead Actor in a Drama Series for four out of the show’s five seasons.
So get over it, Florida Mom. Breaking Bad was, and remains (at least for the time being), the apogee of water-cooler culture: serious but seriously cool, and the nerd’s revenge, to boot. Walter White – for those of you who are yet to have your lives devoured by the show – is a high-school chemistry teacher: you might think that’s a respected, reasonably well-compensated profession, but in 21st-century America he’s got to have a second job at a carwash just to make ends meet. When he is diagnosed, as the series begins, with terminal lung cancer, his terror (his existential, male, white-collar terror) focuses not on the prospect of his own death, but on how he will provide for his family. A chance encounter with a former student, Jesse Pinkman – a classic dropout nogoodnik with a sideline in drug sales – sets his unlikely career as a drug baron in motion. As his alter ego “Heisenberg” (the name a knowing echo of that icon of uncertainty), Walter has chemical skills that enable him to cook some of the purest methamphetamine the world has ever known . . . and the rest, as they say, is history. (...)
But here’s the thing: Florida Mom is on to something, even if she’s wrong about exactly what it is she was objecting to. “A celebration of the drug trade”? I don’t think so. But why did Breaking Bad get under my skin? Why does it still bother me, all these months later? And why do I think, in an era of exceptional television, that it’s the best thing I have ever seen? (...)
Not everyone wants to use words such as “metadiegetic” when talking about telly, and the close analysis of everything from the show’s vision of landscape to its use of music, or “the epistemological implications of the use of a criminal pseudonym”, may be exhausting for some. Yet Pierson’s essay, which opens the volume, draws attention to one of the chief reasons the show has such a terrible and enduring resonance.
Breaking Bad is, he argues, a demonstration of the true consequences of neoliberal ideology: the idea that “the market should be the organising agent for nearly all social, political, economic and personal decisions”. Under neoliberal criminology, the criminal is not a product of psychological disorder, but “a rational-economic actor who contemplates and calculates the risks and the rewards of his actions”. And there is Walter White in a nutshell.
by Erica Wagner, New Statesman | Read more:
Image: Ralph Steadman
Saturday, December 27, 2014
Bob Dylan
[ed. NSA, CIA, VA, Health Insurance, Hospitals, Facebook, Citicorp, College Tuition, Transportation, Public Utilities, Climate Change, Publishing, Big Pharma, Net Neutrality, Minimum Wage, Guantanamo, AfghanIraq, Torture, K-Street, Wall Street, Congress (and much, much more). Happy New Year.]
Fish Cakes Conquer Their Shyness
A Recipe for Spicy Fish Cakes
The typical fish cake does not call attention to itself. Potato-rich, monochromatic and satisfying, it is the kind of thing you’d make for a homey dinner when the food wasn’t the point.
Not so with these fish cakes, which, with their mix of aromatic chiles and herbs, are a brighter and more compelling take. The recipe starts out like any other by combining cooked white fillets with mashed potatoes, bread crumbs and eggs. After chilling, the mixture is coated in flour and fried until crisp and brown.
But that’s all for the similarities. I’ve added flavor in every step. Instead of merely boiling the fish, I sear it with garlic, then steam it in vermouth or white wine. After the fish is done, the potatoes are simmered in the same pan as a way to deglaze it and incorporate the tasty browned bits stuck to its bottom. I leave the garlic cloves in the pan, too, to thoroughly soften along with the potatoes, then I mash the roots all together. Those garlicky mashed potatoes make a rich and pungent base for the fish.
For seasoning, I stir in minced scallions, cilantro and basil, grated lime zest and hot green chiles. The cakes are speckled with green in the center, rather than dull all-white. And the flavor is vibrant and spicy — though the degree of spice depends on your chile. A small serrano will give you a mild but persistent heat. Substituting a jalapeño takes it down a notch, while using a Thai chile could make it fiery enough for your cheeks to flush.
by Melissa Clark, NY Times | Read more:
Image: NY Times
Lesbianism Made Easy
The easiest way to pick up a straight woman, which is so obvious you’ll be embarrassed you didn’t think of it, is to pick up her boyfriend and/or husband. Male heterosexuals, for reasons no one really understands, find the practice of lesbianism — particularly when utilizing their favorite film stars or own personal girlfriends — a particularly appealing way of spending time, second perhaps only to receiving blow jobs. In this, they are united with their homosexual brothers, except for the lesbian part.
Surprisingly many female heterosexuals attached to males are willing to please their boyfriends in this fashion. Of course, there is no reason, other than logic and common decency, to expect the female in question to admit the pleasure she may receive from this hobby of her boyfriend’s — particularly if it has ever been a little hobby of hers in those bouncy college days or other times in her excitingly varied life.
Should you not wish to be offended or disappointed by the degree of open enthusiasm your heterosexual displays about having carnal knowledge of, with, or on you, it pays to adopt a hardened veneer so as to allow certain statements typical of her kind to bounce off your chest without injuring either your self-esteem or any future chances of being called upon for another go at enhancing her sacred relationship.
These statements will usually take the form of: “This isn’t really my thing”; “I’m not into women”; “I’m only doing this because I really really love Ted”; and “Oooh! That was — I mean, not that I’d ever want to do it again, but God, you’re … sweet.”
There are several possible responses to such clearly desperate, if insulting, statements. You may consider a reply along the lines of, “I don’t know what it is; I usually find sleeping with women much wilder, more uninhibited and multiorgasmic than this!” or a classically simple, “I never want to do that again.” These insults to your female heterosexual’s performance and appeal will, if she’s a woman worth having, effectively provoke her to prove to you, and herself, that you very much enjoyed sleeping with her, whatever you may think you’re pulling now. No doubt she will even be forced to make you repeat various acts until she’s satisfied it’s clear to all concerned that while she may not choose to enjoy what you’re doing together, you can’t deny that you find it fairly … compelling. You should feel free to continue denying your enjoyment, so that she will be forced to call you late into the evening to reiterate her point, during which time you can explain to her that the phone truly isn’t the place for such discussions so why doesn’t she come over so you can clear the air once and for all?
by Helen Eisenbach, Medium | Read more:
Image: uncredited
Friday, December 26, 2014
The Secret to the Uber Economy is Wealth Inequality
The same goes for services. When I lived there, a man came around every morning to collect my clothes and bring them back crisply ironed the next day; he would have washed them, too, but I had a washing machine.
These luxuries are not new. I took advantage of them long before Uber became a verb, before the world saw the first iPhone in 2007, even before the first submarine fibre-optic cable landed on our shores in 1997. In my hometown of Mumbai, we have had many of these conveniences for at least as long as we have had landlines—and some even earlier than that.
It did not take technology to spur the on-demand economy. It took masses of poor people.
In San Francisco, another peninsular city on another west coast on the other side of the world, a similar revolution of convenience is underway, spurred by the unstoppable rise of Uber, the on-demand taxi service, which went from offering services in 60 cities around the world at the end of last year to more than 200 today.
Uber’s success has sparked a revolution, covered in great detail this summer by Re/code, a tech blog, which ran a special series about “the new instant gratification economy.” As Re/code pointed out, after Uber showed how it’s done, nearly every pitch made by starry-eyed technologists “in Silicon Valley seemed to morph overnight into an ‘Uber for X’ startup.”
Various companies are described now as “Uber for massages,” “Uber for alcohol,” and “Uber for laundry and dry cleaning,” among many, many other things (“Uber for city permits”). So profound has been their cultural influence in 2014, one man wrote a poem about them for Quartz. (Nobody has yet written a poem dedicated to the other big cultural touchstone of 2014 for the business and economics crowd, French economist Thomas Piketty’s smash hit, Capital in the Twenty-First Century.)
The conventional narrative is this: enabled by smartphones, with their GPS chips and internet connections, enterprising young businesses are using technology to connect a vast market willing to pay for convenience with small businesses or people seeking flexible work.
This narrative ignores another vital ingredient, without which this new economy would fall apart: inequality.
There are only two requirements for an on-demand service economy to work, and neither is an iPhone. First, the market being addressed needs to be big enough to scale—food, laundry, taxi rides. Without that, it’s just a concierge service for the rich rather than a disruptive paradigm shift, as a venture capitalist might say. Second, and perhaps more importantly, there needs to be a large enough labor class willing to work at wages that customers consider affordable and that the middlemen consider worthwhile for their profit margins. (...)
There is no denying the seductive nature of convenience—or the cold logic of businesses that create new jobs, whatever quality they may be. But the notion that brilliant young programmers are forging a newfangled “instant gratification” economy is a falsehood. Instead, it is a rerun of the oldest sort of business: middlemen insinuating themselves between buyers and sellers.
by Leo Mirani, Quartz | Read more:
Image: Reuters
The Conventional Wisdom On Oil Is Always Wrong
In 2008, I moved to Dallas to cover the oil industry for The Wall Street Journal. Like any reporter on a new beat, I spent months talking to as many experts as I could. They didn’t agree on much. Would oil prices — then over $100 a barrel for the first time — keep rising? Would post-Saddam Iraq ever return to the ranks of the world’s great oil producers? Would China overtake the U.S. as the world’s top consumer? A dozen experts gave me a dozen different answers.
But there was one thing pretty much everyone agreed on: U.S. oil production was in permanent, terminal decline. U.S. oil fields pumped 5 million barrels of crude a day in 2008, half as much as in 1970 and the lowest rate since the 1940s. Experts disagreed about how far and how fast production would decline, but pretty much no mainstream forecaster expected a change in direction.
That consensus turns out to have been totally, hilariously wrong. U.S. oil production has increased by more than 50 percent since 2008 and is now near a three-decade high. The U.S. is on track to surpass Saudi Arabia as the world’s top producer of crude oil; add in ethanol and other liquid fuels, and the U.S. is already on top.
The standard narrative of that stunning turnaround is familiar by now: Even as Big Oil abandoned the U.S. for easier fields abroad, a few risk-taking wildcatters refused to give up on the domestic oil industry. By combining the techniques of hydraulic fracturing (“fracking”) and horizontal drilling, they figured out how to tap previously inaccessible oil reserves locked in shale rock – and in so doing sparked an unexpected energy boom.
That narrative isn’t necessarily wrong. But in my years watching the transformation up close, I took away a lesson: When it comes to energy, and especially shale, the conventional wisdom is almost always wrong.
It isn’t just that experts didn’t see the shale boom coming. It’s that they underestimated its impact at virtually every turn. First, they didn’t think natural gas could be produced from shale (it could). Then they thought production would fall quickly if natural gas prices dropped (they did, and it didn’t). They thought the techniques that worked for gas couldn’t be applied to oil (they could). They thought shale couldn’t reverse the overall decline in U.S. oil production (it did). And they thought rising U.S. oil production wouldn’t be enough to affect global oil prices (it was).
Now, oil prices are cratering, falling below $55 a barrel from more than $100 earlier this year. And so, the usual lineup of experts — the same ones, in many cases, who’ve been wrong so many times in the past — are offering predictions for what plunging prices will mean for the U.S. oil boom. Here’s my prediction: They’ll be wrong this time, too.
by Ben Casselman, FiveThirtyEight | Read more:
Image: Energy Information Administration
For Kids, By Kids—But Not For Long
Being a teen idol has always been a difficult balancing act. How to simultaneously project awesomeness and authenticity? How to convince a mass audience that you are worthy of their attention while retaining an aura of utter normalcy?
In many ways, today’s online stars are dealing in the same simulated intimacy that teenage celebrity has always relied on, from the goofy approachability of The Monkees to Taylor Swift’s knack for sounding as if she’s just a regular girl baring her soul to her locker neighbour. With YouTubers, though, this intimacy is turned up to extraordinary new levels. “Celebrity is more like a faraway kind of thing and this is like, you’re in their bedrooms,” 17-year-old Allie Cox explained to me while we waited in line to meet three English YouTubers, including Will Darbyshire, a 21-year-old who just started his YouTube channel earlier this year. Cox considered for a moment. “I mean… that’s kind of freaky. But at the same time you feel like you know them.”
The founding myth of YouTube is of some digital meritocracy where the line between producer and consumer has been erased and anyone with something to say can gain an audience. Many of the kids at Buffer Festival weren’t just fans, but creators of their own videos. Corey Vidal, the festival’s founder and a prominent YouTuber himself, was a poster boy for the transformative power of the humble online video. Vidal had struggled through high school. He’d been homeless, couch-surfing and spending time in a shelter. Then a geeky video he made of himself lip-syncing an a cappella Star Wars song went viral. Now he’s the head of a YouTube production company, the guy in charge of a festival that brings all of his favourite people to Toronto. It was easy for the teenagers in the audience to imagine themselves one day on the stage, hanging out with their idols, collaborating with their fellow video makers. (...)
In many ways, YouTube is the perfect technology to fulfill a long-held teenage desire. When I was 13, the funniest, coolest people I could think of weren’t the lip-glossed stars of Hollywood or the wrinkled “teenagers” of Aaron Spelling productions—they were the kids a few grades ahead who played guitar in the hallway. They were people like the beautiful, effortlessly cool daughter of a family friend who came by one afternoon before starting university with a buzz cut, casually explaining to my enraptured sister and me that she was “just tired of men looking at me.” They were the older brothers of friends who, during camp-outs on the Toronto Islands, would ramble through the bushes, wild and high-spirited, cracking lewd jokes and shooting roman candles out over the lake, talking about girls and music and comics in a way that made you feel as if you were getting a peek into a thrilling world that would soon be yours to inhabit.
What 13-year-old wouldn’t want to hang out with people like that, to get a glimpse into that world, even from a distance? (...)
In so many kids’ books, the sharpest moments of sadness come from the inevitable approach of adulthood—the moment you’re no longer allowed into Narnia, the time you try to use the enchanted cupboard or the secret bell and find it no longer works, that the magic’s gone. There is nothing more melancholy than being 15 and realizing you will never, ever experience 14 again. When your heroes grow up, when the people you thought you knew so well shift their loyalties to the adult world, it can feel like a kind of betrayal.
In some ways, this sense of nostalgia hung over the festivities. On Twitter, a local YouTuber suggested that next year the programmers devote a showing to the “golden age of YouTube.” The idea that a technology still in its infancy might have already seen its best days seems absurd, but still there was the sense that, in some vital ways, the purest days of vlogging were over. Many of the stars at the festival began their YouTube careers years ago, when they were teenagers fooling around with a new technology, making silly videos for the hell of it. Now they’ve gotten older. With agents involved and sponsorship opportunities and TV deals in the air, it has become increasingly difficult to maintain the fiction that the person behind the camera is just another normal kid. Buffer Festival was ushering in a new age of professional YouTube, but it seemed not all the fans were ready. The stars, meanwhile, were awkwardly trying to make the same transition that pop singers and Disney kids and other teen idols have always had to navigate, feeling their way into adulthood and hoping their fans follow.
Last month, Charlie McDonnell posted a video simply called “Thank you :).” “The past couple of years has been very… transitional for me,” he says, smiling into the camera. “I’ve been attempting to deal with the fact that I am now growing up by doing my best to embrace it. By drinking more grown up drinks and wearing slightly more grown-up shoes and, maybe most apparently for things on your end, doing my best to make more grown-up stuff.” The video is at once a gentle explanation and a plea for understanding. He reassures his viewers that he still really, really likes making silly videos. He apologizes for neglecting his fans. He thanks those who have stuck with him. “You’re here,” he says. “Not everybody who watched me two years ago still is. But you are,” he says sincerely. The video is pitched as a note of gratitude. Mostly, though, it reads like an apology for growing up.
by Nicholas Hune-Brown, Hazlitt | Read more:
Image: uncredited
Labels: Celebrities, Culture, Relationships, Technology