Sunday, May 26, 2013

Hummers



I have hummingbirds at my feeder (an hour after setup). They're so cool. Food recipe: 4 parts water to one part sugar.

by markk

Tim Phillips, coconut palms, Australia
via:

Jerry Brown's Political Reboot


One Friday morning this spring, I drove to Washington’s Dulles airport at dawn, to catch the first nonstop flight to San Francisco. When I got off the plane six hours later, the morning sun still slanting through the terminal windows, my cellphone began ringing practically as soon as I turned it on.

“Okay, you’re here!” the man on the other end of the call said, cheerily. I’d been trying to arrange a visit to his office for quite a while, and just the previous evening he’d let me know that if I got there in a hurry, he’d have time to talk the next day, as well as over the weekend. As I walked through the airport, he began reeling off turn-by-turn instructions for reaching his office in Oakland in my rental car. “You’ll take the Bay Bridge to the exit for the 580 East and the 24. But don’t go all the way to the 24! That would send you out to Concord. Take the 980 West until the exit for 27th Street, and then …”

It was like a moment from a Saturday Night Live sketch of “The Californians”—which seemed appropriate, since the man I was talking with was the Californian, Jerry Brown. Brown began his first two terms as governor in 1974, at age 36, following one Republican former actor, Ronald Reagan. He returned to the office at age 72, following another, Arnold Schwarzenegger. In between he ran for president three times and the U.S. Senate once, all of course unsuccessfully; served eight years as Oakland’s mayor and four as California’s attorney general; and lived in both Japan (studying Zen meditation) and India (volunteering for Mother Teresa). He celebrated his 75th birthday the weekend I was in Oakland, which means that if he runs for reelection next year and if he wins, both of which are considered likely—his approval rating this year has been the envy of other politicians in the state—he could still be governor at age 80. “This is certainly a new identity for Brown, so flighty in his first ‘Governor Moonbeam’ period as governor,” Bruce Cain, a political scientist and an expert on California politics at Stanford’s Bill Lane Center for the American West, told me. “Now he is the most trusted, stable, and reliable leader around.” I asked Kevin Starr, of the University of Southern California and the author of the acclaimed Americans and the California Dream series of books, how Brown was seen in his return to office. “He is now liked,” Starr said. “Eccentric, but liked.”

Life and health are provisional, and within the past two years, Brown has undergone radiation treatment for early-stage prostate cancer (while maintaining his normal work schedule) and had a cancerous growth removed from his nose. But he moves, talks, reacts, and laughs like someone who is in no mood, and feels no need, to slow down. He is nearly a decade older than Bill Clinton but comes across as younger and bouncier.

“I love what I am doing,” he told me once I got to his Oakland office. “I love it much more than the first time. Back then I got bored because we didn’t have big problems. Now I am very enthusiastic. Everything’s interesting, and it’s complicated. There is a zest!” He likes to pound the desk or table as he talks, and this passage was punctuated: love (bang) … love (bang) … zest! (bang bang bang!). Anne Gust Brown, a former Gap executive in her mid‑50s, who became his wife eight years ago and is widely regarded as his most influential and practical-minded adviser, arched an eyebrow from the other side of the room, where she was half-listening while working at a computer. “Ed-mund!” she said smilingly, but being sure to get his attention. (His official name is Edmund Gerald Brown Jr., after his father, Edmund G. “Pat” Brown, who was governor for eight years before he lost to Ronald Reagan in 1966.) “Don’t get yourself too worked up!” As a note on nomenclature: apart from his wife’s occasional joking use of Edmund and my own antiquated sense that I should address him as Governor, every other person I heard speak about—or with—him called him Jerry.

by James Fallows, The Atlantic |  Read more:
Photo: Chris McPherson

Patricia Robert
via:

The Gift of Siblings

Given what a mouthy thing I grew up to be, it’s shocking to me that I began talking later than most children do. But I didn’t need words. I had my older brother, Mark.

The way my mother always recounted it, I’d squirm, pout, mewl, bawl or indicate my displeasure in some comparably articulate way, and before she could press me on what I wanted and perhaps coax actual language from me, Mark would rush in to solve the riddle.

“His blanket,” he’d say, and he’d be right.

“Another cookie,” he’d say, and he’d be even righter.

From the tenor of my sob or the twitch of one of my fat little fingers, Mark knew which chair I had designs on, which toy I was ogling. He decoded the signs and procured the goods. Only 17 months older, he was my psychic and my spokesman, my shaman and my Sherpa. With Mark around, I was safe.

This weekend he’s turning 50 — it’s horrifying, trust me — and we’ll all be together, as we were at his 40th and my 40th and seemingly every big milestone: he and I and our younger brother, Harry, and our sister, Adelle, the last one to come along. We marched (or, rather, crawled and toddled) into this crazy world together, and though we had no say in that, it’s by our own volition and determination that we march together still. Among my many blessings, this is the one I’d put at the top.

Two weeks ago, the calendar decreed that we Americans pause to celebrate mothers, as it does every year. Three weeks hence, fathers get their due. But as I await the arrival of my brothers, my sister and their spouses in Manhattan, which is where we’ll sing an off-key “Happy Birthday” to Mark and drink too much, my thoughts turn to siblings, who don’t have a special day but arguably have an even more special meaning to, and influence on, those of us privileged to have them.

“Siblings are the only relatives, and perhaps the only people you’ll ever know, who are with you through the entire arc of your life,” the writer Jeffrey Kluger observed to Salon in 2011, the year his book “The Sibling Effect” was published. “Your parents leave you too soon and your kids and spouse come along late, but your siblings know you when you are in your most inchoate form.”

Of course the “entire arc” part of Kluger’s comments assumes that untimely death doesn’t enter the picture, and that acrimony, geography or mundane laziness doesn’t pull brothers and sisters apart, to a point where they’re no longer primary witnesses to one another’s lives, no longer fellow passengers, just onetime housemates with common heritages.

That happens all too easily, and whenever I ponder why it didn’t happen with Mark, Harry, Adelle and me — each of us so different from the others — I’m convinced that family closeness isn’t a happy accident, a fortuitously smooth blend of personalities.

by Frank Bruni, NY Times |  Read more:
Image: Futurity

Saturday, May 25, 2013

Edwyn Collins



Hawaii Before Statehood: Photos, 1959 | LIFE.com
via:

The Suicide Epidemic

When Thomas Joiner was 25 years old, his father—whose name was also Thomas Joiner and who could do anything—disappeared from the family’s home. At the time, Joiner was a graduate student at the University of Texas, studying clinical psychology. His focus was depression, and it was obvious to him that his father was depressed. Six weeks earlier, on a family trip to the Georgia coast, the gregarious 56-year-old—the kind of guy who was forever talking and laughing and bending people his way—was sullen and withdrawn, spending days in bed, not sick or hungover, not really sleeping.

Joiner knew enough not to worry. He knew that the desire for death—the easy way out, the only relief—was a symptom of depression, and although at least 2 percent of those diagnosed make suicide their final chart line, his father didn’t match the suicidal types he had learned about in school. He wasn’t weak or impulsive. He wasn’t a brittle person with bad genes and big problems. Suicide was understood to be for losers, basically, the exact opposite of men like Thomas Joiner Sr.—a successful businessman, a former Marine, tough even by Southern standards.

But Dad had left an unmade bed in a spare room, and an empty spot where his van usually went. By nightfall he hadn’t been heard from, and the following morning Joiner’s mother called him at school. The police had found the van. It was parked in an office lot about a mile from the house, the engine cold. Inside, in the back, the police found Joiner’s father dead, covered in blood. He had been stabbed through the heart.

The investigators found slash marks on his father’s wrists and a note on a yellow sticky pad by the driver’s seat. “Is this the answer?” it read, in his father’s shaky scrawl. They ruled it a suicide, death by “puncture wound,” an impossibly grisly way to go, which made it all the more difficult for Joiner to understand. This didn’t seem like the easy way out.

Back home for the funeral, Joiner’s pain and confusion were compounded by ancient taboos. For centuries suicide was considered an act against God, a violation of law, and a stain on the community. He overheard one relative advise another to call it a heart attack. His girlfriend fretted about his tainted DNA. Even some of his peers and professors—highly trained, doctoral-level clinicians—failed to offer a simple “my condolences.” It was as though the Joiner family had failed dear old Dad, killed him somehow, just as surely as if they had stabbed him themselves. To Joiner, however, the only real failing was from his field, which clearly had a shaky understanding of suicide.

Survivors of a suicide are haunted by the same whys and hows, the what-ifs that can never be answered. Joiner was no different. He wanted to know why people die at their own hands: What makes them desire death in the first place? When exactly do they decide to end their lives? How do they build up the nerve to do it? But unlike most other survivors of suicide, for the last two decades he has been developing answers.

Joiner is 47 now, and a chaired professor at Florida State University, in Tallahassee. Physically, he is an imposing figure, 6-foot-3 with a lantern jaw and a head shaved clean with a razor. He wears an off-and-on beard, which grows in as heavy as iron filings. The look fits his work, which is dedicated to interrogating suicide as hard as anyone ever has, to finally understand it as a matter of public good and personal duty. He hopes to honor his father, by combating what killed him and by making his death a stepping stone to better treatment. “Because,” as he says, “no one should have to die alone in a mess in a hotel bathroom, in the back of a van, or on a park bench, thinking incorrectly that the world will be better off.”

He is the author of the first comprehensive theory of suicide, an explanation, as he told me, “for all suicides at all times in all cultures across all conditions.” He also has much more than a theory: he has a moment. This spring, suicide news paraded down America’s front pages and social-media feeds, led by a report from the Centers for Disease Control and Prevention, which called self-harm “an increasing public health concern.” Although the CDC revealed grabby figures—like the fact that there are more deaths by suicide than by road accident—the effort prompted only a tired spasm of talk about aging baby boomers and life in a recession. The CDC itself, in an editorial note, suggested that the party would rock on once the economy rebounded and our Dennis Hopper–cohort rode its hog into the sunset.

But suicide is not an economic problem or a generational tic. It’s not a secondary concern, a sideline that will solve itself with new jobs, less access to guns, or a more tolerant society, although all would be welcome. It’s a problem with a broad base and terrible momentum, a result of seismic changes in the way we live and a corresponding shift in the way we die—not only in America but around the world.

We know, thanks to a growing body of research on suicide and the conditions that accompany it, that more and more of us are living through a time of seamless black: a period of mounting clinical depression, blossoming thoughts of oblivion and an abiding wish to get there by the nonscenic route. Every year since 1999, more Americans have killed themselves than the year before, making suicide the nation’s greatest untamed cause of death. In much of the world, it’s among the only major threats to get significantly worse in this century than in the last.

The result is an accelerating paradox. Over the last five decades, millions of lives have been remade for the better. Yet within this brighter tomorrow, we suffer unprecedented despair. In a time defined by ever more social progress and astounding innovations, we have never been more burdened by sadness or more consumed by self-harm. And this may be only the beginning. If Joiner and others are right—and a landmark collection of studies suggests they are—we’ve reached the end of one order of human history and are at the beginning of a new order entirely, one beset by a whole lot of self-inflicted bloodshed, and a whole lot more to come.

by Tony Dokoupil, TDB/Newsweek |  Read more:
Images: Vincent van Gogh, Wikipedia; Virginia Woolf, Bettman/Corbis

Zachary Tate Porter, Tomb for Two Brothers (2012)
via:

20 Great Essays by David Foster Wallace

[ed. The Electric Typewriter has 20 Great Essays by David Foster Wallace. All for free. Here's an excerpt from his masterpiece Infinite Jest, describing why video-phones never really took off.]

'VIDEOPHONY' SUDDENLY COLLAPSED LIKE A KICKED TENT, SO THAT, BY THE YEAR OF THE DEPEND ADULT UNDERGARMENT, FEWER THAN 10% OF ALL PRIVATE TELEPHONE COMMUNICATIONS UTILIZED ANY VIDEO-IMAGE-FIBER DATA-TRANSFERS OR COINCIDENT PRODUCTS AND SERVICES, THE AVERAGE U.S. PHONE-USER DECIDING THAT S/HE ACTUALLY PREFERRED THE RETROGRADE OLD LOW-TECH BELL-ERA VOICE-ONLY TELEPHONIC INTERFACE AFTER ALL, A PREFERENTIAL ABOUT-FACE THAT COST A GOOD MANY PRECIPITANT VIDEO-TELEPHONY-RELATED ENTREPRENEURS THEIR SHIRTS, PLUS DESTABILIZING TWO HIGHLY RESPECTED MUTUAL FUNDS THAT HAD GROUND-FLOORED HEAVILY IN VIDEO-PHONE TECHNOLOGY, AND VERY NEARLY WIPING OUT THE MARYLAND STATE EMPLOYEES' RETIREMENT SYSTEM'S FREDDIE-MAC FUND, A FUND WHOSE ADMINISTRATOR'S MISTRESS'S BROTHER HAD BEEN AN ALMOST MANICALLY PRECIPITANT VIDEO-PHONE-TECHNOLOGY ENTREPRENEUR . . . AND BUT SO WHY THE ABRUPT CONSUMER RETREAT BACK TO GOOD OLD VOICE-ONLY TELEPHONING?

The answer, in a kind of trivalent nutshell, is: (1) emotional stress, (2) physical vanity, (3) a certain queer kind of self-obliterating logic in the microeconomics of consumer high-tech.

It turned out that there was something terribly stressful about visual telephone interfaces that hadn't been stressful at all about voice-only interfaces. Videophone consumers seemed suddenly to realize that they'd been subject to an insidious but wholly marvelous delusion about conventional voice-only telephony. They'd never noticed it before, the delusion — it's like it was so emotionally complex that it could be countenanced only in the context of its loss. Good old traditional audio-only phone conversations allowed you to presume that the person on the other end was paying complete attention to you while also permitting you not to have to pay anything even close to complete attention to her. A traditional aural-only conversation — utilizing a hand-held phone whose earpiece contained only 6 little pinholes but whose mouthpiece (rather significantly, it later seemed) contained (6²) or 36 little pinholes — let you enter a kind of highway-hypnotic semi-attentive fugue: while conversing, you could look around the room, doodle, fine-groom, peel tiny bits of dead skin away from your cuticles, compose phone-pad haiku, stir things on the stove; you could even carry on a whole separate additional sign-language-and-exaggerated-facial-expression type of conversation with people right there in the room with you, all while seeming to be right there attending closely to the voice on the phone. And yet — and this was the retrospectively marvelous part — even as you were dividing your attention between the phone call and all sorts of other idle little fuguelike activities, you were somehow never haunted by the suspicion that the person on the other end's attention might be similarly divided. During a traditional call, e.g., as you let's say performed a close tactile blemish-scan of your chin, you were in no way oppressed by the thought that your phonemate was perhaps also devoting a good percentage of her attention to a close tactile blemish-scan. It was an illusion and the illusion was aural and aurally supported: the phone-line's other end's voice was dense, tightly compressed, and vectored right into your ear, enabling you to imagine that the voice's owner's attention was similarly compressed and focused . . . even though your own attention was not, was the thing. This bilateral illusion of unilateral attention was almost infantilely gratifying from an emotional standpoint: you got to believe you were receiving somebody's complete attention without having to return it. Regarded with the objectivity of hindsight, the illusion appears arational, almost literally fantastic: it would be like being able both to lie and to trust other people at the same time.

Video telephony rendered the fantasy insupportable. Callers now found they had to compose the same sort of earnest, slightly overintense listener's expression they had to compose for in-person exchanges. Those callers who out of unconscious habit succumbed to fuguelike doodling or pants-crease-adjustment now came off looking rude, absentminded, or childishly self-absorbed. Callers who even more unconsciously blemish-scanned or nostril-explored looked up to find horrified expressions on the video-faces at the other end. All of which resulted in videophonic stress.

Even worse, of course, was the traumatic expulsion-from-Eden feeling of looking up from tracing your thumb's outline on the Reminder Pad or adjusting the old Unit's angle of repose in your shorts and actually seeing your videophonic interfacee idly strip a shoelace of its gumlet as she talked to you, and suddenly realizing your whole infantile fantasy of commanding your partner's attention while you yourself got to fugue-doodle and make little genital-adjustments was deluded and insupportable and that you were actually commanding not one bit more attention than you were paying, here. The whole attention business was monstrously stressful, video callers found.

(2) And the videophonic stress was even worse if you were at all vain. I.e. if you worried at all about how you looked. As in to other people. Which all kidding aside who doesn't. Good old aural telephone calls could be fielded without makeup, toupee, surgical prostheses, etc. Even without clothes, if that sort of thing rattled your saber. But for the image-conscious, there was of course no such answer-as-you-are informality about visual-video telephone calls, which consumers began to see were less like having the good old phone ring than having the doorbell ring and having to throw on clothes and attach prostheses and do hair-checks in the foyer mirror before answering the door.

But the real coffin-nail for videophony involved the way callers' faces looked on their TP screen, during calls. Not their callers' faces, but their own, when they saw them on video. It was a three-button affair, after all, to use the TP's cartridge-card's Video-Record option to record both pulses in a two-way visual call and play the call back and see how your face had actually looked to the other person during the call. This sort of appearance-check was no more resistible than a mirror. But the experience proved almost universally horrifying. People were horrified at how their own faces appeared on a TP screen. It wasn't just 'Anchorman's Bloat,' that well-known impression of extra weight that video inflicts on the face. It was worse. Even with high-end TPs' high-def viewer-screens, consumers perceived something essentially blurred and moist-looking about their phone-faces, a shiny pallid indefiniteness that struck them as not just unflattering but somehow evasive, furtive, untrustworthy, unlikable. In an early and ominous InterLace/G.T.E. focus-group survey that was all but ignored in a storm of entrepreneurial sci-fi-tech enthusiasm, almost 60% of respondents who received visual access to their own faces during videophonic calls specifically used the terms untrustworthy, unlikable, or hard to like in describing their own visage's appearance, with a phenomenally ominous 71% of senior-citizen respondents specifically comparing their video-faces to that of Richard Nixon during the Nixon-Kennedy debates of B.S. 1960.

The proposed solution to what the telecommunications industry's psychological consultants termed Video-Physiognomic Dysphoria (or VPD) was, of course, the advent of High-Definition Masking; and in fact it was those entrepreneurs who gravitated toward the production of high-definition videophonic imaging and then outright masks who got in and out of the short-lived videophonic era with their shirts plus solid additional nets.

Mask-wise, the initial option of High-Definition Photographic Imaging — i.e. taking the most flattering elements of a variety of flattering multi-angle photos of a given phone-consumer and — thanks to existing image-configuration equipment already pioneered by the cosmetics and law-enforcement industries — combining them into a wildly attractive high-def broadcastable composite of a face wearing an earnest, slightly overintense expression of complete attention — was quickly supplanted by the more inexpensive and byte-economical option of (using the exact same cosmetic-and-FBI software) actually casting the enhanced facial image in a form-fitting polybutylene-resin mask, and consumers soon found that the high up-front cost of a permanent wearable mask was more than worth it, considering the stress- and VPD-reduction benefits, and the convenient Velcro straps for the back of the mask and caller's head cost peanuts; and for a couple fiscal quarters phone/cable companies were able to rally VPD-afflicted consumers' confidence by working out a horizontally integrated deal where free composite-and-masking services came with a videophone hookup. The high-def masks, when not in use, simply hung on a small hook on the side of a TP's phone-console, admittedly looking maybe a bit surreal and discomfiting when detached and hanging there empty and wrinkled, and sometimes there were potentially awkward mistaken-identity snafus involving multi-user family or company phones and the hurried selection and attachment of the wrong mask taken from some long row of empty hanging masks — but all in all the masks seemed initially like a viable industry response to the vanity-, stress-, and Nixonian-facial-image problem.

by David Foster Wallace, Excerpt from Infinite Jest, The Electric Typewriter |  Read more:
Image: uncredited

Friday, May 24, 2013

The Rise and Fall of Charm in American Men


If one were to recast The Rockford Files, as Universal Pictures is intending to do, would the Frat Pack actor Vince Vaughn seem the wisest choice to play Jim Rockford, the character James Garner inhabited with such sly intelligence and bruised suavity? Universal apparently thinks so.

One can say many things about the talents of Vaughn, and were Universal embarking on a bit of polyester parody—remaking, say, Tony Rome, among the least of the neo-noirs—Vaughn’s gift for sending up low pop would be just so. But to aim low in this case is to miss the deceptive grace that Garner brought to the original, and prompts a bigger question: Whatever happened to male charm—not just our appreciation of it, or our idea of it, but the thing itself?

Yes, yes, George Clooney—let’s get him out of the way. For nearly 20 years, any effort to link men and charm has inevitably led to Clooney. Ask women or men to name a living, publicly recognized charming man, and 10 out of 10 will say Clooney. That there exists only one choice—and an aging one—proves that we live in a culture all but devoid of male charm.

Mention Clooney, and the subject turns next to whether (or to what extent) he’s the modern version of that touchstone of male charm, Cary Grant. Significantly, Grant came to his charm only when he came, rather late, to his adulthood. An abandoned child and a teenage acrobat, he spent his first six years in Hollywood playing pomaded pretty boys. In nearly 30 stilted movies—close to half of all the pictures he would ever make—his acting was tentative, his personality unformed, his smile weak, his manner ingratiating, and his delivery creaky. See how woodenly he responds to Mae West’s most famous (and most misquoted) line, in She Done Him Wrong: “Why don’t you come up sometime and see me?” But in 1937 he made the screwball comedy The Awful Truth, and all at once the persona of Cary Grant gloriously burgeoned. Out of nowhere he had assimilated his offhand wit, his playful knowingness, and, in a neat trick that allowed him to be simultaneously cool and warm, his arch mindfulness of the audience he was letting in on the joke.

Grant had developed a new way to interact with a woman onscreen: he treated his leading lady as both a sexually attractive female and an idiosyncratic personality, an approach that often required little more than just listening to her—a tactic that had previously been as ignored in the pictures as it remains, among men, in real life. His knowing but inconspicuously generous style let the actress’s performance flourish, making his co-star simultaneously regal and hilarious.

In short, Grant suddenly and fully developed charm, a quality that is tantalizing because it simultaneously demands detachment and engagement. Only the self-aware can have charm: It’s bound up with a sensibility that at best approaches wisdom, or at least worldliness, and at worst goes well beyond cynicism. It can’t exist in the undeveloped personality. It’s an attribute foreign to many men because most are, for better and for worse, childlike. These days, it’s far more common among men over 70—probably owing to the era in which they reached maturity rather than to the mere fact of their advanced years. What used to be called good breeding is necessary (but not sufficient) for charm: no one can be charming who doesn’t draw out the overlooked, who doesn’t shift the spotlight onto others—who doesn’t, that is, possess those long-forgotten qualities of politesse and civilité. A great hostess perforce has charm (while legendary hostesses are legion—Elizabeth Montagu, Madame Geoffrin, Viscountess Melbourne, Countess Greffulhe—I can’t think of a single legendary host), but today this social virtue goes increasingly unrecognized. Still, charm is hardly selfless. All of these acts can be performed only by one at ease with himself yet also intensely conscious of himself and of his effect on others. And although it’s bound up with considerateness, it really has nothing to do with, and is in fact in some essential ways opposed to, goodness. Another word for the lightness of touch that charm requires in humor, conversation, and all other aspects of social relations is subtlety, which carries both admirable and dangerous connotations. Charm’s requisite sense of irony is also the requisite for social cruelty (...)

by Benjamin Schwarz, The Atlantic |  Read more:
Illustration: Thomas Allen

Helen Frankenthaler, Broome Street at Night (1987)
via:

Glaeser on Cities

Edward Glaeser of Harvard University and author of Triumph of the City talks with EconTalk host Russ Roberts about American cities. The conversation begins with a discussion of the history of Detroit over the last century and its current plight. What might be done to improve Detroit's situation? Why are other cities experiencing similar challenges to those facing Detroit? Why are some cities thriving and growing? What policies might help ailing cities and what policies have helped those cities that succeed? The conversation concludes with a discussion of why cities have such potential for growth. [ed. Podcast]

Intro. [Recording date: April 15, 2013.] Russ: Topic is cities; start with recent post you had at the New York Times's blog, Economix, on Detroit. Give us a brief history of that city. It's not doing well right now, but it wasn't always that way, was it?

Guest: No. If you look back 120 years ago or so, Detroit looked like one of the most entrepreneurial places on the planet. It seemed as if there was an automotive genius on every street corner. If you look back 60 years ago, Detroit was among the most productive places on the planet, with the companies that were formed by those automotive geniuses coming to fruition and producing cars that were the technological wonder of the world. So, Detroit's decline is of more recent heritage, of the past 50 years. And it's an incredible story, an incredible tragedy. And it tells us a great deal about the way that cities work and the way that local economies function.

Russ: So, what went wrong? 

Guest: If we go back to those small-scale entrepreneurs of 120 years ago--it's not just Henry Ford; it's the Dodge brothers, the Fisher brothers, David Dunbar Buick, Billy Durant in nearby Flint--all of these men were trying to figure out how to solve this technological problem, making the automobile cost effective, produce cheap, solid cars for ordinary people to run in the world. They managed to do that, Ford above all, by taking advantage of each other's ideas, each other's supplies, financing that was collaboratively arranged. And together they were able to achieve this remarkable technological feat. The problem was the big idea was a vast, vertically integrated factory. And that's a great recipe for short run productivity, but a really bad recipe for long run reinvention. And a bad recipe for urban areas more generally, because once you've got a River Rouge plant, once you've got this mass vertically integrated factory, it doesn't need the city; it doesn't give to the city. It's very, very productive but you could move it outside the city, as indeed Ford did when he moved his plant from the central city of Detroit to River Rouge. And then of course once you are at this stage of the technology of an industry, you can move those plants to wherever it is that cost minimization dictates you should go. And that's of course exactly what happens. Jobs first suburbanized, then moved to lower cost areas. The work of Tom Holmes at the U. of Minnesota shows how remarkable the difference is in state policies towards unions, labor, how powerful those policies were in explaining industrial growth after 1947. And of course it globalizes. It leaves cities altogether. And that's exactly what happened in automobiles. In some sense--and what was left was relatively little, because it's a sort of inversion[?] of the natural resource curse, because it was precisely because Detroit had these incredibly productive machines that they squeezed out all other sources of invention--rather than having lots of small entrepreneurs you had middle managers for General Motors (GM) and Ford. And those guys were not going to be particularly adept at figuring out some new industry and new activity when the automobile production moved elsewhere or declined. And that's at least how I think about this--that successful cities today are marked by small firms, smart people, and connections to the outside world. And that was what Detroit was about in 1890 but it's not what Detroit was about in 1970. And I think that sowed the seeds of decline.

4:25 Russ: So, one way to describe what you are saying is in the early part of the 20th century, Detroit was something like Silicon Valley, a hub of creative talent, a lot of complementarity between the ideas and the supply chain and interactions between those people that all came together. Lots of competition, which encouraged people to try harder and innovate, or do the best they could. Are you suggesting then that Silicon Valley is prone to this kind of change at some point? If the computer were to become less important somewhere down the road or produced in a different way? 

Guest: The question is to what extent do the Silicon Valley firms become dominated by very strong returns to scale, a few dominant firms capitalize on it. I think it's built into the genes of every industry that they will eventually decline. The question is whether or not the region then reinvents itself. And there are two things that enable particular regions to reinvent themselves. One is skills, measured education, human capital. The year, the share or the fraction in the metropolitan area with a college degree as of 1940 or 1960 or 1970 has been a very good predictor of whether, particularly northeastern or northwestern metropolitan areas, have been able to turn themselves around. And a particular form of human capital, entrepreneurial human capital, also seems to be critical, despite the fact that our proxies for entrepreneurial talent are relatively weak. We typically use things like the number of establishments per worker in a given area, or the share of employment in startups from some initial time period. Those weak proxies are still very, very strong predictors of urban regeneration, places that have lots of little firms have managed to do much better than places that were dominated by a few large firms, particularly if they are in a single industry. So, let's think for a second about Silicon Valley. Silicon Valley has lots of skilled workers. That's good. But what I don't know is whether Silicon Valley is going to look like it's dominated by a few large firms, Google playing the role of General Motors. Or whether or not it will continue to have lots of little startups. There's nothing wrong with big firms in terms of productivity. But they tend to train middle managers, not entrepreneurs. So that's, I think the other thing to look for. And one of the things that we have seen historically is that those little entrepreneurs are pretty good at switching industries when they need to. Think about New York, which, the dominant industry in New York was garment manufacturing. It was a larger industrial cluster in the 1950s than automobile production was. But those small scale people who led those garment firms, they were pretty adept at doing something else when the industry jettisoned hundreds of thousands of jobs in the 1960s. In a way that the middle managers for U.S. Steel or General Motors were not.

by Edward Glaeser, Hosted by Russ Roberts, Library of Economics and Liberty |  Read more:
Photo: Julian Dufort, Money Magazine

Sandra Meisner, When the night falls.


My Dog Sighs. Disposable city art (free for the taking)
via:

Little Brother is Watching You

It’s clear that the “expectation of privacy” would vary a great deal based on circumstances, but the matter of “changing and varied social norms” bears further scrutiny. Is the proliferation of recording devices altering our concept of privacy itself? I asked Abbi, who is a P.P.E. major (Philosophy, Politics, and Economics), whether he thought the “expectation of privacy” had changed in his lifetime. His response was striking:
People my age know that there are probably twice as many photos on the Internet of us, that we’ve never seen, or even know were taken, as there are that we’ve seen. It’s a reality we live with; it’s something people are worried about, and try to have some control over, say by controlling the privacy on their social media accounts. 
But at the same time, people my age tend to know that nowhere is really safe, I guess. You’re at risk of being recorded all the time, and at least for me, and I think for a lot of people who are more reasonable, that’s only motivation to be the best person you can be; to exhibit as good character as you can, because if all eyes are on you, you don’t really have the option to be publicly immoral, or to do wrong without being accountable.
Kennerly had a different response to the same question:
In many ways, the ubiquity of recording devices (we all have one in our pockets) doesn’t really change the analysis: you’ve never had the guarantee, by law or by custom, that a roomful of strangers will keep your secrets, even if they say they will. Did Abbi violate some part of the social compact by deceiving Luntz? In my opinion, yes. But falsity has a place in our society, and, as the Supreme Court confirmed last summer in United States v. Alvarez, certain false statements (outside of defamation, fraud, and perjury) can indeed receive First Amendment protection. As Judge Kozinski said in that case (when it was in front of the 9th Circuit), “white lies, exaggerations and deceptions [ ] are an integral part of human intercourse.”
Let me quote Kozinski at length:
Saints may always tell the truth, but for mortals living means lying. We lie to protect our privacy (“No, I don’t live around here”); to avoid hurt feelings (“Friday is my study night”); to make others feel better (“Gee you’ve gotten skinny”); to avoid recriminations (“I only lost $10 at poker”); to prevent grief (“The doc says you’re getting better”); to maintain domestic tranquility (“She’s just a friend”); to avoid social stigma (“I just haven’t met the right woman”); for career advancement (“I’m sooo lucky to have a smart boss like you”); to avoid being lonely (“I love opera”); to eliminate a rival (“He has a boyfriend”); to achieve an objective (“But I love you so much”); to defeat an objective (“I’m allergic to latex”); to make an exit (“It’s not you, it’s me”); to delay the inevitable (“The check is in the mail”); to communicate displeasure (“There’s nothing wrong”); to get someone off your back (“I’ll call you about lunch”); to escape a nudnik (“My mother’s on the other line”); to namedrop (“We go way back”); to set up a surprise party (“I need help moving the piano”); to buy time (“I’m on my way”); to keep up appearances (“We’re not talking divorce”); to avoid taking out the trash (“My back hurts”); to duck an obligation (“I’ve got a headache”); to maintain a public image (“I go to church every Sunday”); to make a point (“Ich bin ein Berliner”); to save face (“I had too much to drink”); to humor (“Correct as usual, King Friday”); to avoid embarrassment (“That wasn’t me”); to curry favor (“I’ve read all your books”); to get a clerkship (“You’re the greatest living jurist”); to save a dollar (“I gave at the office”); or to maintain innocence (“There are eight tiny reindeer on the rooftop”)….
An important aspect of personal autonomy is the right to shape one’s public and private persona by choosing when to tell the truth about oneself, when to conceal, and when to deceive. Of course, lies are often disbelieved or discovered, and that, too, is part of the push and pull of social intercourse. But it’s critical to leave such interactions in private hands, so that we can make choices about who we are. How can you develop a reputation as a straight shooter if lying is not an option?

by Maria Bustillos, New Yorker |  Read more:
Illustration by Tom Bachtell

From Here You Can See Everything

In Infinite Jest, David Foster Wallace imagines a film (also called Infinite Jest) so entertaining that anyone who starts watching it will die watching it, smiling vacantly at the screen in a pool of their own soiling. It’s the ultimate gripper of eyeballs. Media, in this absurdist rendering, evolves past parasite to parasitoid, the kind of overly aggressive parasite that kills its host.

Wallace himself had a strained relationship with television. He said in his 1993 essay “E Unibus Pluram” that television “can become malignantly addictive,” which, he explained, means, “(1) it causes real problems for the addict, and (2) it offers itself as relief from the very problems it causes.” Though I don’t think he would have labeled himself a television addict, Wallace was known to indulge in multi-day television binges. One can imagine those binges raised to the power of Netflix Post-Play and all seven seasons of The West Wing.

That sort of binge-television viewing has become a normal, accepted part of American culture. Saturdays with a DVD box set, a couple bottles of wine, and a big carton of goldfish crackers are a pretty common new feature of American weekends. Netflix bet big on this trend with their release of House of Cards. They released all 13 episodes of the first season at once: roughly one full Saturday’s worth. It’s a show designed for the binge. The New York Times quoted the show’s producer as saying, with a laugh, “Our goal is to shut down a portion of America for a whole day.” They don’t say what kind of laugh it was.

The scariest part of this new binge culture is that hours spent bingeing don’t seem to displace other media consumption hours; we’re just adding them to our weekly totals. Lump in hours on Facebook, Pinterest, YouTube, and maybe even the occasional non-torrented big-screen feature film and you’re looking at a huge number of hours per person. (...)

In Wallace’s book, a Canadian terrorist informant of foggy allegiance asks an American undercover agent a form of the question: “If Americans would choose to press play on the film Infinite Jest, knowing it will kill them, doesn’t that mean they are already dead inside, that they have chosen entertainment over life?” Of course vanishingly few Americans would press play on a film that was sure to end their lives. But there’s a truth in this absurdity. Almost every American I know does trade large portions of his life for entertainment, hour by weeknight hour, binge by Saturday binge, Facebook check by Facebook check. I’m one of them.

by James A. Pearson, The Morning News |  Read more:
Image: Alistair Frost, Metaphors don't count, 2011. Courtesy the artist and Zach Feuer Gallery, New York.

Why You Like What You Like

Food presents the most interesting gateway to thinking about liking. Unlike music or art, we have a very direct relationship with what we eat: survival. Also, every time you sit down to a meal you have myriad “affective responses,” as psychologists call them.

One day, I join Debra Zellner, a professor of psychology at Montclair State University who studies food liking, for lunch at the Manhattan restaurant Del Posto. “What determines what you’re selecting?” Zellner asks, as I waver between the Heritage Pork Trio with Ribollita alla Casella & Black Cabbage Stew and the Wild Striped Bass with Soft Sunchokes, Wilted Romaine & Warm Occelli Butter.

“What I’m choosing, is that liking? It’s not liking the taste,” Zellner says, “because I don’t have it in my mouth.”

My choice is the memory of all my previous choices—“every eating experience is a learning experience,” as the psychologist Elizabeth Capaldi has written. But there is novelty here too, an anticipatory leap forward, driven in part by the language on the menu. Words such as “warm” and “soft” and “heritage” are not free riders: They are doing work. In his book The Omnivorous Mind, John S. Allen, a neuroanthropologist, notes that simply hearing an onomatopoetic word like “crispy” (which the chef Mario Batali calls “innately appealing”) is “likely to evoke the sense of eating that type of food.” When Zellner and I mull over the choices, calling out what “sounds good,” there is undoubtedly something similar going on.

As I take a sip of wine—a 2004 Antico Broilo, a Friulian red—another element comes into play: How you classify something influences how much you like it. Is it a good wine? Is it a good red wine? Is it a good wine from the refosco grape? Is it a good red wine from Friuli?

Categorization, says Zellner, works in several ways. Once you have had a really good wine, she says, “you can’t go back. You wind up comparing all these lesser things to it.” And yet, when she interviewed people about their drinking of, and liking for, “gourmet coffee” and “specialty beer” compared with “regular” versions such as Folgers and Budweiser, the “ones who categorized actually like the everyday beer much more than the people who put all beer in the same category,” she says. Their “hedonic contrast” was reduced. In other words, the more they could discriminate what was good about the very good, the more they could enjoy the less good. We do this instinctively—you have undoubtedly said something like “it’s not bad, for airport food.”

There is a kind of tragic irony when it comes to enjoying food: As we eat something, we begin to like it less. From a dizzy peak of anticipatory wanting, we slide into a slow despond of dimming affection, slouching into revulsion (“get this away from me,” you may have said, pushing away a once-loved plate of Atomic Wings).

In the phenomenon known as “sensory specific satiety,” the body in essence sends signals when it has had enough of a certain food. In one study, subjects who’d rated the appeal of several foods were asked about them again after eating one for lunch; this time they rated the food’s pleasantness lower. They were not simply “full,” but their bodies were striving for balance, for novelty. If you have ever had carb-heavy, syrup-drenched pancakes for breakfast, you are not likely to want them again at lunch. It’s why we break meals up into courses: Once you had the mixed greens, you are not going to like or want more mixed greens. But dessert is a different story.

Sated as we are at the end of a meal, we are suddenly faced with a whole new range of sensations. The capacity is so strong it has been dubbed the “dessert effect.” Suddenly there’s a novel, nutritive gustatory sensation—and how could our calorie-seeking brains resist that? As the neuroscientist Gary Wenk notes, “your neurons can only tolerate a total deprivation of sugar for a few minutes before they begin to die.” (Quick, apply chocolate!) As we finish dessert, we may be beginning to get the “post-ingestive” nutritional benefits of our main course. Sure, that chocolate tastes good, but the vegetables may be making you feel so satisfied. In the end, memory blurs it all. A study co-authored by Rozin suggests that the pleasure we remember from a meal has little to do with how much we consumed, or how long we spent doing it (under a phenomenon called “duration neglect”). “A few bites of a favorite dish in a meal,” the researchers write, “may do the full job for memory.”

by Tom Vanderbilt, Smithsonian |  Read more:
Image: Bartholomew Cooke