Friday, March 15, 2013

Paul Krugman Is Brilliant, but Is He Meta-Rational?

Nobel laureate, Princeton economics professor, and New York Times columnist Paul Krugman is a brilliant man. I am not so brilliant. So when Krugman makes strident claims about macroeconomics, a complex subject on which he has significantly more expertise than I do, should I just accept them? How should we evaluate the claims of people much smarter than ourselves?

A starting point for thinking about this question is the work of another Nobelist, Robert Aumann. In 1976, Aumann showed that under certain strong assumptions, disagreement on questions of fact is irrational. Suppose that Krugman and I have read all the same papers about macroeconomics, and we have access to all the same macroeconomic data. Suppose further that we agree that Krugman is smarter than I am. All it should take, according to Aumann, for our beliefs to converge is for us to exchange our views. If we have common “priors” and we are mutually aware of each other’s views, then if we do not agree ex post, at least one of us is being irrational.
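[ed. A minimal sketch, in Python, of the belief-exchange process behind Aumann's result, in the spirit of Geanakoplos and Polemarchakis's "We Can't Disagree Forever." The states, partitions, and event below are hypothetical numbers chosen for illustration, not anything from Dourado's essay: two agents share a uniform prior, each observes a different partition of the states, and they take turns announcing their posterior for an event until the announcements agree.]

```python
# A toy run of the "agreeing to disagree" exchange: two Bayesians with a
# common prior announce posteriors back and forth until they match.
from fractions import Fraction

STATES = set(range(1, 10))                     # hypothetical states of the world
PRIOR = {s: Fraction(1, 9) for s in STATES}    # common prior: uniform
EVENT = {3, 4}                                 # the proposition under dispute

# Each agent's private information is a partition of the state space.
PARTITION_A = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
PARTITION_B = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]

def cell(partition, state):
    """The partition cell containing `state`."""
    return next(c for c in partition if state in c)

def posterior(event, info):
    """P(event | info) under the common prior."""
    p_info = sum(PRIOR[s] for s in info)
    p_both = sum(PRIOR[s] for s in info & event)
    return p_both / p_info

def exchange(true_state, max_rounds=10):
    """Agents alternate announcements; each announcement becomes common knowledge."""
    common = set(STATES)          # states not yet ruled out by past announcements
    partitions = [PARTITION_A, PARTITION_B]
    last = [None, None]
    for t in range(max_rounds):
        i = t % 2
        my_info = cell(partitions[i], true_state) & common
        announced = posterior(EVENT, my_info)
        last[i] = announced
        print(f"round {t}: agent {'AB'[i]} announces P(EVENT) = {announced}")
        if last[0] is not None and last[0] == last[1]:
            print("agreement reached")
            return
        # Everyone rules out states in which agent i would have announced differently.
        common = {s for s in common
                  if posterior(EVENT, cell(partitions[i], s) & common) == announced}

exchange(true_state=1)
```

With these made-up partitions and true state 1, the announcements run 1/3, 1/2, 1/3, 1/3: each announcement rules out states for the other agent, and the posteriors settle on a common value, which is the convergence Aumann's theorem guarantees under common priors.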

It seems natural to conclude, given these facts, that if Krugman and I disagree, the fault lies with me. After all, he is much smarter than I am, so shouldn’t I move much further toward his view than he moves toward mine?

Not necessarily. One problem is that if I change my belief to match Krugman’s, I would still disagree with a lot of really smart people, including many people as smart as or possibly even smarter than Krugman. These people have read the same macroeconomics literature that Krugman and I have, and they have access to the same data. So the fact that they all disagree with each other on some margin suggests that very few of them behave according to the theory of disagreement. There must be some systematic problem with the beliefs of macroeconomists.

In their paper on disagreement, Tyler Cowen and Robin Hanson grapple with the problem of self-deception. Self-favoring priors, they note, can help to serve other functions besides arriving at the truth. People who “irrationally” believe in themselves are often more successful than those who do not. Because pursuit of the truth is often irrelevant in evolutionary competition, humans have an evolved tendency to hold self-favoring priors and self-deceive about the existence of these priors in ourselves, even though we frequently observe them in others.

Self-deception is in some ways a more serious problem than mere lack of intelligence. It is embarrassing to be caught in a logical contradiction, as a stupid person might be, because it is often impossible to deny. But when accused of disagreeing due to a self-favoring prior, such as having an inflated opinion of one’s own judgment, people can and do simply deny the accusation.

How can we best cope with the problem of self-deception? Cowen and Hanson argue that we should be on the lookout for people who are “meta-rational,” honest truth-seekers who choose opinions as if they understand the problem of disagreement and self-deception. According to the theory of disagreement, meta-rational people will not have disagreements among themselves caused by faith in their own superior knowledge or reasoning ability. The fact that disagreement remains widespread suggests that most people are not meta-rational, or—what seems less likely—that meta-rational people cannot identify one another.

by Eli Dourado, The Umlaut | Read more:
Photo: David Shankbone

The Problem with Tumblr and Photography

A little over ten years ago, when I started blogging about photography, most photoblogs presented a single photographer’s work, one photograph at a time, usually one per day. They were maintained by the photographers themselves. The scene was very small, and there was maybe a slightly naive earnestness about how it was done, which made following those blogs an appealing experience.

In the years since, for better or for worse, many of the ideas driving those early photoblogs have fallen by the wayside, with new formats and platforms replacing each other in a bewildering fashion. Many more photographers have come to embrace the web, in particular the social-networking bits.

Before looking at this in more detail, it might be worthwhile to point out that the internet seems made for photography. Photographs offer an immediacy that survives even under the most adverse, aka attention-deficit-disorder-plagued, circumstances.

Interestingly enough, Tumblr is nothing but a variant of the very early photoblogs on steroids. The basic format is the same: present usually one photograph (or another short snippet of information, like a video, an animated GIF, or a text) at a time. Following other Tumblrs then adds the steroids. While in the past one needed to visit one photoblog after another, Tumblr now offers a seemingly incessant stream of work, all in one place. What’s more, showcasing other people’s photographs appears to have overtaken showcasing one’s own.

This all sounds pretty great, except that there’s a multitude of problems, some of them well-known, others not so much. For starters, a large number of photographers are massively concerned about copyright. If everybody were to ask photographers for permission to showcase their work, Tumblr would grind to a halt in less time than it takes to say the word “copyright.”

This isn’t to say that concerns about copyright are invalid. But photographers worried about it might want to ask themselves what damage is done to their work (and income) if someone showcases their pictures to a possibly larger, possibly different audience, for noncommercial reasons. If a photographer is very concerned, a simple solution would be to not put photographs online. The moment they’re on the web, the medium’s properties kick in; the nature of the internet makes copyright violations incredibly simple. Then again, it also makes it easier to detect and go after those violations (as retailer DKNY just found out).

As far as I’m concerned, the bigger issue is the sloppy attribution of photographs, especially on Tumblr. Often, I find photographs where the source is not given at all, or where it is given in such a way that tracking down the photographer involves considerable work. This translates to a non-fair-use copyright violation, and unfortunately, many Tumblr users — often photographers themselves — are woefully uninformed or unconcerned about this.

by Jörg Colberg, Hyperallergic |  Read more:
Photo: Alec Soth

Thursday, March 14, 2013

The Planet Trillaphon


[ed. After posting a review this morning, I realized that I had never read DFW's The Planet Trillaphon as It Stands in Relation to the Bad Thing. Wonders of the internet, here it is (pdf).]

Gene Kloss (1903-1996) - Rain Cloud at Evening
via:

Elliott Erwitt, New York, 1946
via:

The Curse of “You May Also Like”


Of all the startups that launched last year, Fuzz is certainly one of the most intriguing and the most overlooked. Describing itself as a “people-powered radio” that is completely “robot-free,” Fuzz bucks the trend toward ever greater reliance on algorithms in discovering new music. Fuzz celebrates the role played by human DJs—regular users who are invited to upload their own music to the site in order to create and share their own “radio stations.”

The idea—or, perhaps, hope—behind Fuzz is that human curators can still deliver something that algorithms cannot; it aspires to be the opposite of Pandora, in which the algorithms do all the heavy lifting. As its founder, Jeff Yasuda, told Bloomberg News last September, “there’s a big need for a curated type of experience and just getting back to the belief that the most compelling recommendations come from a human being.”

But while Fuzz's launch attracted little attention, the growing role of algorithms in all stages of artistic production is becoming impossible to ignore. Most recently, this role was highlighted by Andrew Leonard, the technology critic for Salon, in an intriguing article about House of Cards, Netflix's first foray into original programming. The series' origin myth is by now well-known: Having studied its user logs, Netflix discovered that a remake of the British series of the same name could be a huge hit, especially if it also featured Kevin Spacey and was directed by David Fincher.

“Can the auteur survive in an age when computer algorithms are the ultimate focus group?” asked Leonard. He wondered how the massive amounts of data that Netflix has gathered while users were streaming the first season of the series—how many times did they click the pause button?—would affect future episodes.

Many other industries are facing similar questions. For example, Amazon, through its Kindle e-reader, collects vast troves of information about the reading habits of its users: what books they finish and what books they don't; what sections they tend to skip and which they read most diligently; how often they look up certain words in the dictionary and underline passages. (Amazon is hardly alone here: Other e-book players are just as guilty.)

Based on all these data, Amazon can predict the ingredients that will make you keep clicking to the very end of the book. Perhaps Amazon could even give you alternate endings—just to make you happier. As a recent paper on the future of entertainment puts it, ours is a world where “stories can become adaptive algorithms, creating a more engaging and interactive future.”

Just as Netflix has figured out that, given all its data, it would be stupid not to enter the filmmaking business, so has Amazon discovered that it would be stupid not to enter the publishing business. Amazon's knowledge, however, goes deeper than Netflix's: Since it also runs a site where we buy books, it knows everything there is to know about our buying behavior and the prices that we are willing to pay. Today Amazon runs half a dozen publishing imprints and plans to add more.

by Evgeny Morozov, Slate | Read more:
Photo by Beck Diefenbach/Reuters

Boredom vs. Depression

David Foster Wallace walked into great literature, as Trotsky said of Céline, the way other people walk into their homes. From the publication of his undergraduate fiction thesis, The Broom of the System (1987), to the unfinished manuscript he left after his suicide in 2008, The Pale King, Wallace’s life was an object of interest for even the most inert cultural bystanders. His cockiness, insecurity, ambition, anthropological precision and meticulous avoidance of the ordinary sentence – all of this won Wallace the double-edged honour of being regularly proclaimed “the voice of his generation”. For Americans who came of age in the 1990s and worried whether their times would produce a writer of the same cultural heft as the giants of the post-war decades, Wallace’s battleship of a book, Infinite Jest (1996), and his flotilla of stories and essays arrived just in time. Now, in lock step with the worthies he once called “The Great Male Narcissists” – John Updike, Norman Mailer, Philip Roth – Wallace has a biography, a hallowed archive, and a swooning field of “Wallace studies”. (...)

Wallace came into his own as a writer at Amherst College in Massachusetts in the 1980s, where he arrived as an interloper from Illinois among well-heeled preppy peers. “Midwestern boys might teach or read or make ironic fun of novels”, writes Max in one of his bizarre asides about the heartland, “but they did not go to college to learn how to write them.” Fiction on campus, Wallace would claim, was the province of “foppish aesthetes”, who “went around in berets stroking their chins”. Max’s portrait of these years is of a student getting the top marks, lest anyone mistake him for not being the cleverest boy in the room. (When, years later, the film Good Will Hunting came out, Wallace not only seemed to identify with Matt Damon’s character, but actually tried to follow the blurry equations on the chalkboard.) Max describes a regime that reserved forty-five minutes for dental hygiene, afternoon bong hits, six-hour bouts with the books, and evening whisky shots on the library steps. We get good glimpses of Wallace’s table-talk: “Does anyone want to see Friedrich Hayek get hit on by a girl from Wilton, Connecticut?”. Wallace’s fanbase in future years would consist of fellow liberal arts graduates who saw his work as an opportunity to exercise their education while savouring the pop-cultural references in his prose. But it was Wallace’s style itself, at once laid back and hilariously precise, that seduced a generation. Take this classic passage where Wallace pre-emptively mourns the etiquette of the old-school telephone call:

“A traditional aural-only conversation – utilizing a hand-held phone whose earpiece contained only 6 little pinholes but whose mouthpiece (rather significantly, it later seemed) contained (6²) or 36 little pinholes – let you enter into a kind of highway-hypnotic semi-attentive fugue: while conversing, you could look around the room, doodle, fine-groom, peel tiny bits of dead skin away from your cuticles, compose phone-pad haiku, stir things on the stove; you could even carry on a whole separate additional sign-language-and-exaggerated-facial-expression type of conversation with people right there in the room with you, all while seeming to be right there attending closely to the voice on the phone. And yet – this was the retrospectively marvelous part – even if you were dividing your attention between the phone call and all sorts of other little fugue-like activities, you were somehow never haunted by the suspicion that the person on the other end’s attention might be similarly divided. During a traditional call, e.g., as you let’s say performed a close tactile blemish-scan of your chin, you were in no way oppressed by the thought that your phonemate was perhaps also devoting a good percentage of her attention to a close tactile blemish-scan.”

This is a snippet of a much larger passage, but it’s cherishable, not only for the way it mimics the fleetingness of our attention spans, but also for the truth it delivers about our socially repugnant self-centredness. The huge interference and distracting pleasures that we conspire to build between us would become one of Wallace’s great subjects.

Wallace’s struggle with depression is one of the main points of orientation for Max’s biography, and it’s the most valuable contribution of the book. Twice during college, Wallace was forced to leave school and return home to Champaign, Illinois, where he tried to ride out his illness on a drug called Tofranil. It’s no exaggeration to say depression was one of Wallace’s reasons for writing fiction in the first place. In “The Planet Trillaphon as it Stands in Relation to the Bad Thing”, his first published story in the Amherst Review, Wallace enters the mind of a Brown undergraduate who goes on anti-depressants after trying to kill himself. The power of the story lies in Wallace’s ability to convey what the “Bad Thing” feels like from the inside. The story begins, as Max notes, with a seemingly loose Salingeresque introduction:

“I’ve been on antidepressants for, what, about a year now, and I suppose I feel as if I’m pretty qualified to tell what they’re like. They’re fine, really, but they’re fine in the same way that, say, living on another planet that was warm and comfortable and had food and fresh water would be fine: it would be fine, but it wouldn’t be good old Earth, obviously. I haven’t been on Earth now for almost a year, because I wasn’t doing very well on Earth. I’ve been doing somewhat better here where I am now, on the planet Trillaphon, which I suppose is good news for everyone involved.”

The repetitions and played-up quaintness here give the sense of a consciousness that has been lulled into congeniality. But as the story unfolds, and the imprecisions come into focus, the narrator comes to see that depression is not “just sort of really intense sadness, like what you feel when your very good dog dies, or when Bambi’s mother gets killed in Bambi”. Rather, it’s a kind of auto-immune deficiency of the self:

“All this business about people committing suicide when they’re ‘severely depressed;’ we say, ‘Holy cow, we must do something to stop them from killing themselves!’ That’s wrong. Because all these people have, you see, by this time already killed themselves, where it really counts. By the time these people swallow entire medicine cabinets or take naps in the garage or whatever, they’ve already been killing themselves for ever so long. When they ‘commit suicide,’ they’re just being orderly.”

by Thomas Meaney, TLS | Read more: 
Photo: David Foster Wallace, 1996 © Garry Hannabarger/Corbis
h/t 3 Quarks Daily

Creeping Around Happiness


Lately I’ve become a “creeper.” I didn’t even know what this word meant until about a month ago, when my ninth grade students enlightened me. We had just started reading “Lord of the Flies” and on our first day of discussion, one student raised her hand. “What are all these creepers all over the place?” she asked. Other students chimed in with echoes of confusion. She continued: “I mean, like, I thought there weren’t any other people on this island? I don’t get it.”

After a few minutes of confusion on both our parts, she realized that William Golding’s creepers were long, twisty jungle vines, and I realized that “creeper” to these students meant someone on Facebook who trolls through other people’s status updates but never posts their own; someone who constantly watches but never speaks.

We moved quickly on to the more important stuff (Symbolism! The conch shell! Piggy’s glasses!) but their definition, and subsequent derisive and scornful tone, lodged an uncomfortable feeling in my stomach – what I describe to my 5-year-old son as his “uh-oh feeling” (which, I have told him, he should always listen to, especially when the neighborhood children want to go bushwhacking through the forest to find the house of a friend who presumably lives “that way”).

For many years I have considered myself a happy person. Not so much the past few months. After a painful divorce (are these things ever painless?), I’ve been struggling to maintain a sense of normalcy for my son and my 2-year-old daughter while trying to figure out what really makes me happy.

I find myself looking almost desperately at the world around me, searching other people’s lives, scrutinizing them, wondering what it actually means to be happy. Last Halloween, as the children and I were making our way through the neighborhood streets in the deepening dusk, I watched as lights began to come on in the houses around us. I found myself peering in (see, this is creepy) as people went about their daily lives, making dinner, watching TV. As I watched each tiny pinprick of a moment, I found myself wondering if these people were happy. I watched and absorbed details as we passed.

I want that kind of window into the lives of people around me. I want to ask my girlfriends and my two sisters: What is your marriage like? Are you happy? For some reason, it always feels hard to have an honest conversation, as if each of us is putting up some kind of facade, so that we might at least seem happy to everyone else. Having children seems to exacerbate this. “Oh yes, I’m fine,” we reassure one another. Pause. “Oh, little Johnny did the funniest thing the other night…” and we move on to safer waters.

Why is it so hard to have honest conversations about things that really matter? Not politics or books or current events – those things are easy to talk about. It’s our own vulnerabilities that get stuck on our tongues. Is this true just for me?

by Amy Lawton, NY Times |  Read more:
Illustration: Edward Hopper, Night Windows via:

Wednesday, March 13, 2013

Rolling Stones


[ed. I think I've finally got Keith's spooky guitar part down.]

John William Waterhouse - Nymphs finding the head of Orpheus. 1900
via:

Radiohead


It Will Be Awesome if They Don't Screw It Up: 3D Printing

An Opportunity and a Warning

The next great technological disruption is brewing just out of sight. In small workshops, and faceless office parks, and garages, and basements, revolutionaries are tinkering with machines that can turn digital bits into physical atoms. The machines can download plans for a wrench from the Internet and print out a real, working wrench. Users design their own jewelry, gears, brackets, and toys with a computer program, and use their machines to create real jewelry, gears, brackets, and toys.

These machines, generically known as 3D printers, are not imported from the future or the stuff of science fiction. Home versions, imperfect but real, can be had for around $1,000. Every day they get better, and move closer to the mainstream.

In many ways, today’s 3D printing community resembles the personal computing community of the early 1990s. They are a relatively small, technically proficient group, all intrigued by the potential of a great new technology. They tinker with their machines, share their discoveries and creations, and are more focused on what is possible than on what happens after they achieve it. They also benefit from following the personal computer revolution: the connective power of the Internet lets them share, innovate, and communicate much faster than the Homebrew Computer Club could have ever imagined.

The personal computer revolution also casts light on some potential pitfalls that may be in store for the growth of 3D printing. When entrenched interests began to understand just how disruptive personal computing could be (especially massively networked personal computing) they organized in Washington, D.C. to protect their incumbent power. Rallying under the banner of combating piracy and theft, these interests pushed through laws like the Digital Millennium Copyright Act (DMCA) that made it harder to use computers in new and innovative ways. In response, the general public learned once-obscure terms like “fair use” and worked hard to defend their ability to discuss, create, and innovate. Unfortunately, this great public awakening came after Congress had already passed its restrictive laws.

Of course, computers were not the first time that incumbents welcomed new technologies by attempting to restrict them. The arrival of the printing press resulted in new censorship and licensing laws designed to slow the spread of information. The music industry claimed that home taping would destroy it. And, perhaps most memorably, the movie industry compared the VCR to the Boston Strangler preying on a woman home alone.

One of the goals of this whitepaper is to prepare the 3D printing community, and the public at large, before incumbents try to cripple 3D printing with restrictive intellectual property laws. By understanding how intellectual property law relates to 3D printing, and how changes might impact 3D printing’s future, this time we will be ready when incumbents come calling to Congress.

3D Printing

So what is 3D printing? Essentially, a 3D printer is a machine that can turn a blueprint into a physical object. Feed it a design for a wrench, and it produces a physical, working wrench. Scan a coffee mug with a 3D scanner, send the file to the printer, and produce thousands of identical mugs.

While even today there are a number of competing designs for 3D printers, most work in the same general way. Instead of taking a block of material and cutting away until it produces an object, a 3D printer actually builds the object up from tiny bits of material, layer by layer. Among other advantages, this allows a 3D printer to create structures that would be impossible if the designer needed to find a way to insert a cutting tool into a solid block of material. It also allows a 3D printer to form general-purpose material into a wide variety of diverse objects.

Because they create objects by building them up layer-by-layer, 3D printers can create objects with internal, movable parts. Instead of having to print individual parts and have a person assemble them, a 3D printer can print the object already assembled. Of course, a 3D printer can also print individual parts or replacement parts. In fact, some 3D printers can print a substantial number of their own parts, essentially allowing them to self-replicate.

by Michael Weinberg, Public Knowledge |  Read more:

Juan Gris - Still Life with Checked Tablecloth
via:

Drawing by Nicolas de Crécy


[ed. Watercolor magic.]

How Cops Became Soldiers


In 2007, journalist Radley Balko told a House subcommittee that one criminologist detected a 1,500% increase in the use of SWAT teams over the last two decades. That's reflective of a larger trend, fueled by the wars on drugs and terror, of police forces becoming heavily militarized.

Balko, an investigative reporter for the Huffington Post and author of the definitive report on paramilitary policing in the United States, has a forthcoming book on the topic, Rise of the Warrior Cop: The Militarization of America's Police Forces. He was kind enough to answer some questions about how our police turned into soldiers as well as the challenges of large-scale reform.

Motherboard: When did the shift towards militarized police forces begin in America? Is it as simple as saying it began with the War on Drugs or can we detect gradual signs of change when we look back at previous policies?

There's certainly a lot of overlap between the war on drugs and police militarization. But if we go back to the late 1960s and early 1970s, there were two trends developing simultaneously. The first was the development and spread of SWAT teams. Daryl Gates started the first SWAT team in L.A. in 1969. By 1975, there were 500 of them across the country. They were largely a reaction to riots, violent protest groups like the Black Panthers and Symbionese Liberation Army, and a couple mass shooting incidents, like the Texas clock tower massacre in 1966.

At the same time, Nixon was declaring an "all-out war on drugs." He was pushing policies like the no-knock raid, dehumanizing drug users and dealers, and sending federal agents to storm private homes on raids that were really more about headlines and photo-ops than diminishing the supply of illicit drugs.

But for the first decade or so after Gates invented them, SWAT teams were largely only used in emergency situations. There usually needed to be an immediate, deadly threat to send the SWAT guys. It wasn't until the early 1980s under Reagan that the two trends converged, and we started to see SWAT teams used on an almost daily basis -- mostly to serve drug warrants. (...)

How did 9/11 alter the domestic relationship between the military and police?

It really just accelerated a process that had already been in motion for 20 years. The main effect of 9/11 on domestic policing is the DHS grant program, which writes huge checks to local police departments across the country to purchase machine guns, helicopters, tanks, and armored personnel carriers. The Pentagon had already been giving away the same weapons and equipment for about a decade, but the DHS grants make that program look tiny.

But probably of more concern is the ancillary effect of those grants. DHS grants are lucrative enough that many defense contractors are now turning their attention to police agencies -- and some companies have sprung up solely to sell military-grade weaponry to police agencies who get those grants. That means we're now building a new industry whose sole function is to militarize domestic police departments. Which means it won't be long before we see pro-militarization lobbying and pressure groups with lots of (taxpayer) money to spend to fight reform. That's a corner it will be difficult to un-turn. We're probably there already. Say hello to the police-industrial complex.

Is police reform a battle that will have to be won legally? From the outside looking in, much of this seems to violate The Posse Comitatus Act of 1878. Are there other ways to change these policies? Can you envision a blueprint?

It won't be won legally. The Supreme Court has been gutting the Fourth Amendment in the name of the drug war since the early 1980s, and I don't think there's any reason to think the current Court will change any of that. The Posse Comitatus Act is often misunderstood. Technically, it only prohibits federal marshals (and, arguably, local sheriffs and police chiefs) from enlisting active-duty soldiers for domestic law enforcement. The president or Congress could still pass a law or executive order tomorrow ordering U.S. troops to, say, begin enforcing the drug laws, and it wouldn't violate the Constitution or the Posse Comitatus Act. The only barrier would be selling the idea to the public.

That said, I think the current state of police militarization probably violates the spirit of the Posse Comitatus Act, and probably more pertinent, the spirit and sentiment behind the Third Amendment. (Yes -- the one no one ever talks about.) When the country was founded, there were no organized police departments, and wouldn't be for another 50 to 60 years. Public order was maintained through private means, in the worst cases by calling up the militia.

The Founders were quite wary of standing armies and the threat they pose to liberty. They ultimately concluded -- reluctantly -- that the country needed an army for national defense. But they most feared the idea of troops patrolling city streets -- a fear colored by much of human history, and more immediately by the antagonism between British troops and residents of Boston in the years leading up to the American Revolution. The Founders could never have envisioned police as they exist today. And I think it's safe to say they'd have been absolutely appalled at the idea of a team of police, dressed and armed like soldiers, breaking into private homes in the middle of the night for the purpose of preventing the use of mind-altering drugs.

by Michael Arria, Motherboard | Read more:
Photo: via Oregon DOT/Flickr

Zenith Radio 1935
via:
[ed. Look familiar?]

Regrettable

A little more than a week ago, during an interview with Politico, Bob Woodward came forward to claim he’d been threatened in an email by a “senior White House official” for daring to reveal certain details about the negotiations over the budget sequester. The White House responded by releasing the email exchange Woodward was referring to, which turned out to be nothing more than a cordial exchange between the reporter and Obama’s economic adviser, Gene Sperling, who was clearly implying only that Woodward would “regret” taking a position that would soon be shown to be false.

A rather trivial scandal, but the incident did manage to raise important questions about Woodward’s behavior. Was he cynically trumping up the administration’s “threat,” or does he just not know how to read an email? Pretty soon, those questions tipped over into the standard Beltway discussion that transpires anytime Woodward does anything. How accurate is his reporting? Does he deserve his legendary status?

I believe I can offer some interesting answers to those questions. Thirty-one years ago, on March 5, 1982, Saturday Night Live and Animal House star John Belushi died of a drug overdose at the Chateau Marmont in Los Angeles—which, bear with me a moment, has more to do with the current coverage of the budget sequester than you might initially think.

Two years after Belushi died, Bob Woodward published Wired: The Short Life and Fast Times of John Belushi. While the Watergate sleuth might seem an odd choice to tackle such a subject, the book came about because he and Belushi grew up in the same small town of Wheaton, Ill. They had friends in common. Belushi, who despised Richard Nixon, was a big Woodward fan, and after he died, his widow, Judy Belushi, approached Woodward in his role as a reporter for the Washington Post. She had questions about the LAPD’s handling of Belushi’s death and asked Woodward to look into it. He took the access she offered and used it to write a scathing, lurid account of Belushi’s drug use and death.

When Wired came out, many of Belushi’s friends and family denounced it as biased and riddled with factual errors. “Exploitative, pulp trash,” in the words of Dan Aykroyd. Wired was so wrong, Belushi’s manager said, it made you think Nixon might be innocent. Woodward insisted the book was balanced and accurate. “I reported this story thoroughly,” he told Rolling Stone. Of the book’s critics, he said, “I think they wish I had created a portrait of someone who was larger than life, larger than he was, and that, somehow, this portrait would all come out different. But that’s a fantasy, not journalism.” Woodward being Woodward, he was given the benefit of the doubt. Belushi’s reputation never recovered.

Twenty years later, in 2004, Judy Belushi hired me, then an aspiring comedy writer, to help her with a new biography of John, this one titled Belushi: A Biography. As her coauthor, I handled most of the legwork, including all of the interviews and most of the research. What started as a fun project turned out to be a rather fascinating and unique experiment. Over the course of a year, page by page, source by source, I re-reported and rewrote one of Bob Woodward’s books. As far as I know, it’s the only time that’s ever been done.

Wired is an anomaly in the Woodward catalog, the only book he’s ever written about a subject other than Washington. As such, it’s rarely cited by his critics. But Wired’s outlier status is the very thing that makes it such a fascinating piece of Woodwardology. Because he was forced to work outside of his comfort zone, his strengths and his weaknesses can be seen in sharper relief. In Hollywood, his sources weren’t top secret and confidential. They were some of the most famous people in America. The methodology behind the book is right out there in the open, waiting for someone to do exactly what I did: take it apart and see how Woodward does what he does.

by Tanner Colby, Slate |  Read more:
Photo: Courtesy of Universal Pictures