Tuesday, August 19, 2014
The Teaching Class
When Mary Margaret Vojtko died last September—penniless and virtually homeless and eighty-three years old, having been referred to Adult Protective Services because the effects of living in poverty made it seem to some that she was incapable of caring for herself—it made the news because she was a professor. That a French professor of twenty-five years would be let go from her job without retirement benefits, without even severance, sounded like some tragic mistake. In the Pittsburgh Post-Gazette op-ed that broke the story, Vojtko’s friend and attorney Daniel Kovalik describes an exchange he had with a caseworker from Adult Protective Services: “The caseworker paused and asked with incredulity, ‘She was a professor?’ I said yes. The caseworker was shocked; this was not the usual type of person for whom she was called in to help.” A professor belongs to the professional class, a professor earns a salary and owns a home, probably with a leafy yard, and has good health insurance and a retirement account. In the American imagination, a professor is perhaps disheveled, but as a product of brainy eccentricity, not of penury. In the American university, this is not the case.
Most university-level instructors are, like Vojtko, contingent employees, working on a contract basis year to year or semester to semester. Some of these contingent employees are full-time lecturers, and many are adjunct instructors: part-time employees, paid per class, often without health insurance or retirement benefits. This is a relatively new phenomenon: in 1969, 78 percent of professors held tenure-track positions. By 2009 this percentage had shrunk to 33.5. The rest of the professors holding jobs—whether part time or full time—do so without any job security. These are the conditions that left Vojtko in such a vulnerable position after twenty-five years at Duquesne University. Vojtko was earning between $3,000 and $3,500 per three-credit course. During years when she taught three courses per semester, and an additional two over the summer, she made less than $25,000, and received no health benefits through her employer. Though many universities limit the number of hours that adjunct professors can work each semester, keeping them nominally “part-time” employees, teaching three three-credit courses is certainly a full-time job. These circumstances are now the norm for university instructors, as the number of tenured and tenure-track positions shrinks and the ranks of contingent laborers swell.
A moment of full disclosure: I am an adjunct. I taught freshman composition at Columbia University for two years as a graduate student, then for a few semesters more as an adjunct after I finished my degree. I now tutor in a writing center in the City University of New York system. Many of my friends do this same kind of work at colleges around New York City, commuting from campus to campus, cobbling together more-than-full-time work out of multiple part-time jobs. We talk a lot about how to make adjuncting livable, comparing pay rates at different writing centers and English departments. We crowdsource answers to questions about how to go to the dentist, for example, since none of us has dental insurance—wait for a Groupon for a cleaning, or go to the student dentists at NYU for anything urgent. I do have health insurance at my current job, though I get an email a few times per year informing me that it may expire soon because negotiations between the union and the university over adjunct health insurance have stalled. This is mostly fine—my coverage has never actually been interrupted—but it is hard to swallow the notion that the university that employs me is constantly trying to get out of providing health insurance to teachers, particularly when it announces that it is giving our new chancellor an $18,000/month apartment for free.
So I have closely followed the news and op-ed coverage of the adjunct bubble that followed Vojtko’s death. And while I have been glad to see more attention being paid to the working conditions in higher education, I’ve been surprised that the issue is consistently framed as purely a workers’ rights problem. It is this, of course. But it is not only this.
by Rachel Riederer, Guernica | Read more:
Image: Zeke Berman

Everybody Smiley Poops
We were at a concert when Maggie saw the woman her husband had cheated on her with. She’d planned her response in advance: a simple introduction, then she’d just stand there and watch the other woman react to her last name with (ideally) horrified guilt. But in the moment, Maggie couldn’t speak. Her face flushed. Her hands shook. We left immediately.
On the walk home she finally spoke, the words coming so fast she almost choked. I rubbed her back. This is when face-to-face communication is crucial—nothing I could have said would have helped, but gestures (hugs, back rubs, concerned face) made it clear I’d heard her and understood. According to the psychiatric journal Activitas Nervosa Superior, “Emotion arises from sensory stimulation and is typically accompanied by physiological and behavioral changes in the body.” We express our emotions most clearly through physical reactions—Maggie’s flushed cheeks and shaking hands—and receive comfort most effectively this way too. In highly emotional situations, gestures and expressions can usually communicate better than words.
But gestures weren’t available the next day, so I did what any busy, young, too-broke-to-send-flowers New Yorker checking on a friend from work would do: I sent a few emojis. But what was the right combination of tiny pictures to say, “I’m sorry you’re going through this and I know it sucks hard now and I want you to know it won’t suck forever because you’re great”?
Turns out, for Maggie at least, the right emoji sequence to symbolize sympathizing with heartbreak is a series of smiley poops. Two days later we had dinner in person. “I’m feeling better,” she said. “I had a session with my therapist… And your text the other day made me laugh—that helped.”
Tiny cartoon poops helped.
It sounds stupid. Their cuteness makes any serious conversation about emojis difficult, like talking to a baby in a grown-up voice. It feels embarrassing to posit these little cartoons as a vehicle of emotion or even a global language, yet they really are an almost universally understood form of communication. Most studies by social scientists, linguists, and psychologists skew pro-emoji (“A smiling emoticon activates the same brain areas as the image of a person smiling and can therefore be considered as a representation of that emotion” reads one study), but that doesn’t stop them from seeming a little foolish: eensy cartoon images, designed to be typed alongside or in lieu of text, uncool in their enthusiasm and earnestness.
Though the most tweeted emoji is a heart, most other top emojis, lending weight to the emoticon/emotion connection, are faces:
…and the list goes on. Smiley poop is #90 out of 845, not bad as far as poop goes, but definitely not top 10. Even so, it’s been used more than eight million times since July of last year, on Twitter alone, according to the mesmerizing, potentially seizure-inducing real-time emojitracker.com.
Why can’t we stop using emojis? Are they ridiculous, turning us into the equivalent of button-pressing Neanderthals? Or are they brilliant, providing a global medium to express our emotions and creativity? “Emojis mean everything and they mean nothing at the same time,” designer Liza Nelson wrote on her emoji art tumblr, Emoji IRL. LOL. “They’re really quite stupid. And they’re the best thing that ever happened to our generation.”
by Mary Mann, Medium | Read more:
Images: Liza Nelson
Monday, August 18, 2014
The Breast of Times: 10 Years of Irrational Nipple Controversy
[ed. If it weren't for nipples, there'd be no point to tits.]
Janet Jackson at Super Bowl XXXVIII
The nipple-baring that started the national conversation about wardrobe malfunctions took place during the Super Bowl XXXVIII halftime show in 2004. When Justin Timberlake dance-ripped Jackson’s top, viewers caught a glimpse of her nipple for 9/16th of a second. 1.4 million people went on to complain to the Federal Communications Commission about the supposedly indecent exposure. (Yes, Americans complained more about 9/16th of a second of nipple on CBS than about the 10-plus years of Two and a Half Men the network has aired.) CBS was fined $550,000 by the FCC, and Jackson’s career arguably suffered after the fact. Timberlake is doing just fine for himself.
Nancy Grace on Dancing with the Stars
In 2011, justice-seeker Nancy Grace’s Dancing with the Stars routine ended up exposing one of her nipples briefly — as one can do when partaking in a vigorous physical activity while wearing a deep-cut dress. Grace vehemently denies it to this day. Meanwhile, contestants are getting entirely naked on Dancing with the Stars’ equivalent in Argentina.
The New Yorker on Facebook
When Mick Stevens posted a cartoon (below) on The New Yorker’s Facebook page in 2012, the magazine was temporarily banned from the site for violating Facebook’s terms of service.
As Bob Mankoff explained: “Some sleuthing showed that the offense was actually caused by the inclusion of these two dots in the cartoon, which, by the way, also contained these two non-offending dots.”
Now, we all know that The New Yorker is a filthy rag staffed by smut peddlers, but Facebook seemingly overreacted with this one.
by Gabriella Paiella, The Hairpin | Read more:
Image: New Yorker
Crab, Spinach & Coconut Soup
Prep Time: 15 minutes
Cook Time: 20 minutes
Total Time: 35 minutes
Yield: 4 to 6 servings
Ingredients:
4 slices bacon, chopped
1 medium yellow onion, chopped
2 cloves garlic, minced
1 medium russet potato, peeled and chopped
1 teaspoon salt
1/2 teaspoon black pepper
1/2 teaspoon red pepper flakes (more or less to taste)
1 teaspoon fresh thyme leaves, minced
2 cups chopped Roma tomatoes
4 cups chicken stock
1 can coconut milk
8 ounces lump crabmeat
5 cups fresh spinach
Sliced scallions, for garnish
Reserved bacon, for garnish
Fresh lime wedges, for garnish
Instructions:
1. Brown the bacon in a large Dutch oven or stock pot. Remove the bacon and set aside.
2. Add the onion to the pot and sauté for 3 to 4 minutes. Add the garlic and continue sautéing for 1 minute. Add the potato, salt, pepper, red pepper flakes, and thyme. Stir to combine.
3. Add tomatoes and chicken stock and bring to a boil. Reduce heat, cover and simmer for 10 minutes or until potatoes are tender.
4. Add coconut milk, stirring to combine. Add spinach and continue cooking until wilted. Immediately before serving, add crabmeat and stir gently so you don’t break up the lumps. Serve immediately with scallions, reserved bacon, and lime wedges.
The Lethality of Loneliness
Sometime in the late ’50s, Frieda Fromm-Reichmann sat down to write an essay about a subject that had been mostly overlooked by other psychoanalysts up to that point. Even Freud had only touched on it in passing. She was not sure, she wrote, “what inner forces” made her struggle with the problem of loneliness, though she had a notion. It might have been the young female catatonic patient who began to communicate only when Fromm-Reichmann asked her how lonely she was. “She raised her hand with her thumb lifted, the other four fingers bent toward her palm,” Fromm-Reichmann wrote. The thumb stood alone, “isolated from the four hidden fingers.” Fromm-Reichmann responded gently, “That lonely?” And at that, the woman’s “facial expression loosened up as though in great relief and gratitude, and her fingers opened.”
Fromm-Reichmann would later become world-famous as the dumpy little therapist mistaken for a housekeeper by a new patient, a severely disturbed schizophrenic girl named Joanne Greenberg. Fromm-Reichmann cured Greenberg, who had been deemed incurable. Greenberg left the hospital, went to college, became a writer, and immortalized her beloved analyst as “Dr. Fried” in the best-selling autobiographical novel I Never Promised You a Rose Garden (later also a movie and a pop song). Among analysts, Fromm-Reichmann, who had come to the United States from Germany to escape Hitler, was known for insisting that no patient was too sick to be healed through trust and intimacy. She figured that loneliness lay at the heart of nearly all mental illness and that the lonely person was just about the most terrifying spectacle in the world. She once chastised her fellow therapists for withdrawing from emotionally unreachable patients rather than risk being contaminated by them. The uncanny specter of loneliness “touches on our own possibility of loneliness,” she said. “We evade it and feel guilty.”
Her 1959 essay, “On Loneliness,” is considered a founding document in a fast-growing area of scientific research you might call loneliness studies. Over the past half-century, academic psychologists have largely abandoned psychoanalysis and made themselves over as biologists. And as they delve deeper into the workings of cells and nerves, they are confirming that loneliness is as monstrous as Fromm-Reichmann said it was. It has now been linked with a wide array of bodily ailments as well as the old mental ones.
In a way, these discoveries are as consequential as the germ theory of disease. Just as we once knew that infectious diseases killed, but didn’t know that germs spread them, we’ve known intuitively that loneliness hastens death, but haven’t been able to explain how. Psychobiologists can now show that loneliness sends misleading hormonal signals, rejiggers the molecules on genes that govern behavior, and wrenches a slew of other systems out of whack. They have proved that long-lasting loneliness not only makes you sick; it can kill you. Emotional isolation is ranked as high a risk factor for mortality as smoking. A partial list of the physical diseases thought to be caused or exacerbated by loneliness would include Alzheimer’s, obesity, diabetes, high blood pressure, heart disease, neurodegenerative diseases, and even cancer—tumors can metastasize faster in lonely people.
The psychological definition of loneliness hasn’t changed much since Fromm-Reichmann laid it out. “Real loneliness,” as she called it, is not what the philosopher Søren Kierkegaard characterized as the “shut-upness” and solitariness of the civilized. Nor is “real loneliness” the happy solitude of the productive artist or the passing irritation of being cooped up with the flu while all your friends go off on some adventure. It’s not being dissatisfied with your companion of the moment—your friend or lover or even spouse— unless you chronically find yourself in that situation, in which case you may in fact be a lonely person. Fromm-Reichmann even distinguished “real loneliness” from mourning, since the well-adjusted eventually get over that, and from depression, which may be a symptom of loneliness but is rarely the cause. Loneliness, she said—and this will surprise no one—is the want of intimacy.
Today’s psychologists accept Fromm-Reichmann’s inventory of all the things that loneliness isn’t and add a wrinkle she would surely have approved of. They insist that loneliness must be seen as an interior, subjective experience, not an external, objective condition. Loneliness “is not synonymous with being alone, nor does being with others guarantee protection from feelings of loneliness,” writes John Cacioppo, the leading psychologist on the subject. Cacioppo privileges the emotion over the social fact because—remarkably—he’s sure that it’s the feeling that wreaks havoc on the body and brain.
by Judith Shulevitz, TNR | Read more:
Image: Ariel Lee
The Mystery of Murakami
[ed. Something I've wondered about, too. Having read half a dozen Murakami books over the years including what are widely considered to be his masterpieces: 1Q84, The Wind-Up Bird Chronicle, Norwegian Wood, the plotting is always familiar, the sentences are indeed sometimes awful, and the story-line inevitably peters out somewhere deep in the woods. But somehow they still remain strangely entertaining.]
Murakami, who learned English by reading American crime novels, begins with an opening paragraph that would make David Goodis proud. Tsukuru Tazaki, recently turned 20, is planning his suicide: “From July of his sophomore year in college until the following January, all Tsukuru Tazaki could think about was dying.” But where Goodis would write something like “All right, he told himself firmly, let’s do it and get it over with,” Murakami is balletic, evoking metaphysical realms and a fine sense of the grotesque. “Crossing that threshold between life and death,” he writes, “would have been easier than swallowing down a slick, raw egg.” It is one of the key aspects of his style, this seamless transition from noirish dread to mystical rumination; the most perfect Murakami title, which really could have been used for any of the 13 novels he has written since 1979, remains Hard-Boiled Wonderland and the End of the World. In Murakamiland, death means merely traveling across a “threshold” between reality and some other world. It is not necessarily the end. In fact, as we soon learn, Tsukuru’s obsession with death is only the beginning. (...)
And page after page, we are confronted with the riddle that is Murakami's prose. No great writer writes as many bad sentences as Murakami does. His crimes include awkward construction ("Just as he appreciated Sara’s appearance, he also enjoyed the way she dressed”); cliché addiction (from a single, paragraph-long character description: “He really hustled on the field … He wasn’t good at buckling down … He always looked people straight in the eye, spoke in a clear, strong voice, and had an amazing appetite … He was a good listener and a born leader”); and lazy repetition (“Sara gazed at his face for some time before speaking,” followed shortly by “Sara gazed at Tsukuru for a time before she spoke”). The dialogue is often robotic, if charmingly so. (...)
How is the author of these lines capable of an atrocity like “Her smile had ratcheted up a notch”? The most charitable explanation is that in Murakami’s fiction, his ugly sentences, though often distracting, serve a strategic purpose. Like the hokey vernacular and use of brand names in Stephen King’s fiction, Murakami’s impoverished language situates us in a realm of utter banality, a simplified black-and-white world in which everything is as it appears. When, inevitably, we pass through a wormhole into an uncanny dimension of fantasy and chaos, the contrast is unnerving.
by Nathaniel Rich, The Atlantic | Read more:
Image: Richie Pope
[ed. Got my motorcycle license yesterday. Yay! Unbeknownst to me, someone took a picture after my last riding test. Too bad about that instructor...]
via:
Sunday, August 17, 2014
Honda’s Global Strategy? Go Local.
When financial journalist Jeffrey Rothfeder set out to understand why globalization has failed, he got pulled into the story of Honda, a company that has thrived as a multinational. In more than 60 years in business, Honda has never lost money. Its profit margins are the highest in the industry and its factories among the most productive. Rothfeder talked with The Washington Post about “Driving Honda,” in which he explores the enduring culture established by company founder Soichiro Honda, a perfectionist who embraced mistakes as a way to learn and improve. He also goes inside Honda’s plant in Lincoln, Ala., a model of flexible manufacturing. The following was edited for length and clarity.
How did this book come about?
I didn’t think I’d be writing about Honda or even a specific company. What interested me more was the issue of why globalization is failing, because for two decades now it’s been the guiding principle that runs U.S. economic policy — that there is going to be free trade and we’ll lose the borders and there’s no difference between General Electric here and General Electric in China. And essentially globalization was going to lift all boats economically.
But it isn’t working out the way people had hoped. Most multinational companies do not make money in their globalized operations. Classically, General Electric will say that they make more than 50 percent of their revenue outside the U.S., but they are losing money in many parts of the world.
So I was wondering, what would it take to make a successful multinational? I was also interested in the auto industry because it is such a global industry.

Why Honda?
Because it is one of the few multinational companies that has succeeded at globalization. Their profit margins are high in the auto industry. Almost everywhere they go — over 5 percent profit margins. In most markets, they consistently are in the top 10 of specific models that sell. They’ve never lost money. They’ve been profitable every year. And they’ve been around since 1949, 1950. And it’s a company that really does see the world as its market and thinks very hard about what it takes to be successful at that.
Everything it does — from corporate culture to its operational principles to the way it globalizes — was different from any other company I’ve ever looked at.
by Kelly Johnson, Washington Post | Read more:
Image: Blake J. Discher, AP
Reading Upward
How many times have we heard this opinion expressed? On this occasion the speaker was a literary critic on Canadian radio with whom I was discussing my recent blog post “Reading: The Struggle.” Needless to say the sentiment comes along with the regret that people are reading less and less these days and the notion of a hierarchy of writing with the likes of Joyce and Nabokov at the top and Fifty Shades of Grey at the bottom. Between the two it is assumed that there is a kind of neo-Platonic stairway, such that from the bottom one can pass by stages to the top, a sort of optimistic inversion of the lament that soft porn will lead you to hard and anyone smoking marijuana is irredeemably destined to descend through coke and crack to heroin. The user, that is, is always drawn to a more intense form of the same species of experience.
Of course, while the fear that one will descend from soft to hard drugs tends to be treated as a near certainty, the hope that one might ascend from Hermione Granger to Clarissa Dalloway is usually expressed as a tentative wish. Nevertheless, it serves to justify the intellectual’s saying, “Frankly, I don’t mind what they’re reading, etc.” (as if this were some kind of concession), and underwrites our cautious optimism when we see an adolescent son or daughter immersed in George R.R. Martin. It’s not Dostoevsky, but one day it might be, and in any event it’s better than a computer game or TV since these are not part of the reading stairway.
Is any of this borne out by reality? Do people really pass from Fifty Shades of Grey to Alice Munro? (Through how many intermediate steps? Never to return?) And if it is not true, why does a certain kind of intellectual continue to express it? To what end?
In 1948 W.H. Auden published an essay, “The Guilty Vicarage,” on what he calls his “addiction” to detective novels. The point he makes is that these schematic narratives serve the escapist needs of readers who share his particular psychological make-up. These people will not, as a rule, Auden claims, with some elaborate argument, be the same readers as readers of light romances or thrillers, or fantasy fiction. Each genre has its pull on different types of minds. In any event, if he, Auden, is to get any serious work done, he has to make sure that there are no detective novels around, since if there are he can’t resist opening them, and if he opens them he won’t close them till he’s reached the end. Or rather, no new detective novels; for Auden notes this difference between the stuff of his addiction and literature: that the detective novel is no sooner read than forgotten and never invites a second reading, as literature often does.
The implications are clear enough. Auden denies any continuity between literary novels and genre novels, or indeed between the different genres. One does not pass from lower to higher. On the contrary one might perfectly well fall from the higher to the lower, or simply read both, as many people eat both good food and junk food, the only problem being that the latter can be addictive; by constantly repeating the same gratifying formula (the litmus test of genre fiction) it stimulates and satisfies a craving for endless sameness, to the point that the reader can well end up spending all the time he has available for reading with exactly the same fare. (My one powerful experience of this was a spell reading Simenon’s Maigret novels; after five or six it gets harder and harder to distinguish one from another, and yet one goes on.)
Auden, it should be noted, does not propose to stop reading detective novels—he continues to enjoy them—and expresses no regret that people read detective novels rather than, say, Faulkner or Charlotte Brontë, nor any wish that they use detective novels as a stepping stone to “higher things.” He simply notes that he has to struggle to control his addiction, presumably because he doesn’t want to remain trapped in a repetitive pattern of experience that allows no growth and takes him nowhere. His essay, in fact, reads like the reasoning of someone determined to explain to himself why he must not waste too much time with detective novels, and at the same time to forgive himself for the time he does spend with them. If anything, genre fiction prevents engagement with literary fiction, rather than vice versa, partly because of the time it occupies, but more subtly because while the latter is of its nature exploratory and potentially unsettling the former encourages the reader to stay in a comfort zone.
by Tim Parks, NYR | Read more:
Image: Arnold Eagle: Boys Climbing the Fire Escape of a Deserted Building, 1935
Saturday, August 16, 2014
Beautiful Girl
When I was fifteen, I cut off the last joint of my left ring finger during a woodshop class. I was laughing at a joke while cutting a board on a table saw. The bite of the blade sent a great shock through me, and I didn’t dare look down, but the bleached faces of the other boys told me just how bad it was.
They didn’t reassemble bodies in those days. Later, I heard that one of the guys in the class had picked up the joint, complete with dirty fingernail, and scared some girls with it. No surprise, no hard feelings; it was the kind of thing I would’ve done, and not only because I was a jackass. The girls around me were coming into glorious bloom, and my way of pretending not to be in awe of them was to act as if we were still kids—to tease and provoke them.
I’d never had a girlfriend, not really. In sixth grade, in Seattle, my friend Terry and I used to meet his cousin Patty and another girl at the Admiral Theatre on Saturday nights. Patty and I sat in the back and made out for two hours without exchanging a word, while Terry did the same with Patty’s friend. After the movie, Terry and I left by the side exit so his aunt wouldn’t see him when she picked the girls up. Never a dance, never a soda with two straws.
That winter, I moved to a village in the Cascades. The elementary school had four rooms, where four teachers taught the eight grades. Of the ten kids in my class, nine were boys. Nevy drove us crazy, favoring this one, then that one. I had her attention for a while when I was new, and never again. Anyway, she was into horses, not boys.
The high school was in Concrete, thirty-two miles downriver. When we finally got there, we found girls, all right, but the pretty ones in our class got picked off by juniors and seniors, and the older ones wouldn’t look at us.
That was the situation as I woke one afternoon with two-thirds of a finger and a bandage as big as a boxing glove to find a beautiful girl smiling down at me from the foot of my bed. By then, I’d been in the Mount Vernon Hospital for almost a week, because my stump had got infected and there was a danger of gangrene. I was floating on a morphine cloud and could only stare. “Hi,” she said. “See, Daddy—just like Dr. Kildare!”
“That’s my girl, Joelle,” the man in the next bed said. There were five others on the ward, all men. Joelle sat on my bed and offered me a candy bar. She said that I looked exactly like Dr. Kildare. I didn’t speak, just listened to her husky voice. She had dark-red hair held back from her high brow by pink barrettes. Her skin was pale, pearly, with a few freckles across her cheeks. Her eyes were green, her lips red with lipstick. The other men watched us with amusement. They must have seen that I was in love.

by Tobias Wolff, New Yorker | Read more:
Image: Christian Gralingen
Friday, August 15, 2014
Surveillance as a Business Model
[ed. Read of the day. How surveillance became the default condition and principal business model in the Internet's evolution. See also: The Internet's Original Sin.]

It was pretty scary. I had never seen a permanent record, but I knew exactly what it must look like. It was bright red, thick, tied with twine. Full of official stamps.
The permanent record would follow you through life, and whenever you changed schools, or looked for a job or moved to a new house, people would see the shameful things you had done in fifth grade.
How wonderful it felt when I first realized the permanent record didn't exist. They were bluffing! Nothing I did was going to matter! We were free!
And then when I grew up, I helped build it for real.
Anyone who works with computers learns to fear their capacity to forget. Like so many things with computers, memory is strictly binary. There is either perfect recall or total oblivion, with nothing in between. It doesn't matter how important or trivial the information is. The computer can forget anything in an instant. If it remembers, it remembers for keeps.
This doesn't map well onto human experience of memory, which is fuzzy. We don't remember anything with perfect fidelity, but we're also not at risk of waking up having forgotten our own name. Memories tend to fade with time, and we remember only the more salient events.
Every programmer has firsthand experience of accidentally deleting something important. Our folklore as programmers is filled with stories of lost data, failed backups, inadvertently clobbering some vital piece of information, undoing months of work with a single keystroke. We learn to be afraid.
And because we live in a time when storage grows ever cheaper, we learn to save everything, log everything, and keep it forever. You never know what will come in useful. Deleting is dangerous. There are no horror stories—yet—about keeping too much data for too long.
Unfortunately, we've let this detail of how computers work percolate up into the design of our online communities. It's as if we forced people to use only integers because computers have difficulty representing real numbers.
Our lives have become split between two worlds with two very different norms around memory.
The offline world works like it always has. I saw many of you talking yesterday between sessions; I bet none of you has a verbatim transcript of those conversations. If you do, then I bet the people you were talking to would find that extremely creepy.
I saw people taking pictures, but there's a nice set of gestures and conventions in place for that. You lift your camera or phone when you want to record, and people around you can see that. All in all, it works pretty smoothly.
The online world is very different. Online, everything is recorded by default, and you may not know where or by whom. If you've ever wondered why Facebook is such a joyless place, even though we've theoretically surrounded ourselves with friends and loved ones, it's because of this need to constantly be wearing our public face. Facebook is about as much fun as a zoning board hearing.
It's interesting to watch what happens when these two worlds collide. Somehow it's always Google that does it.
One reason there's a backlash against Google glasses is that they try to bring the online rules into the offline world. Suddenly, anything can be recorded, and there's the expectation (if the product succeeds) that everything will be recorded. The product is called 'glass' instead of 'glasses' because Google imagines a world where every flat surface behaves by the online rules. [The day after this talk, it was revealed Google is seeking patents on showing ads on your thermostat, refrigerator, etc.]
Well, people hate the online rules!
Google's answer is, wake up, grandpa, this is the new normal. But all they're doing is trying to port a bug in the Internet over to the real world, and calling it progress.
You can dress up a bug and call it a feature. You can also put dog crap in the freezer and call it ice cream. But people can taste the difference.
by Maciej Cegłowski, Lecture: Beyond Tellerrand Web Conference, Germany, May 2014 | Read more:
Image: uncredited
Here is How to Be Sorry
"Here Is How to Be Sorry" - an erasure poem from page 175 of David Foster Wallace’s Infinite Jest
“Our attachments are our temple, what we worship, no? What we give ourselves to, what we invest with faith. Are we not all of us fanatics? I say only what you of the U.S.A. only pretend you do not know. Attachments are of great seriousness. Choose your attachments carefully. Choose your temple of fanaticism with great care.” — David Foster Wallace, Infinite Jest
Erasure poetry is at once a metaphor for death and a mechanism for dealing with it. We are all eventually erased, whether at the hands of time, illness or accident — opportunities for addition and revision over. What we leave in our stead, however, is never a complete absence. To cope with the loss, friends, family and colleagues each weave new stories from memories and mementos – stories that say not who we were, but who we were to them, stories that hold in spite of the gaps.
via: Jenni B. Baker