Monday, November 7, 2016
Life in Circadia
[ed. Another good reason we should get rid of daylight savings time.]
To be fully integrated with an ecosystem, an organism must cling to its niches, and one of those is a carefully carved-out temporal niche. For example, the first mammals’ warm-bloodedness allowed for successful colonisation of the nighttime world, when reptilian systems slowed down. Two similar species can comfortably occupy the same space if they do so at different times of day. Our modern built environment provides food, warmth and light at all hours, as well as safeguarding us from nocturnal predators. Superficially, we’ve been liberated from our own niche in the day/night environment but, under the surface, that desynchrony is causing all manner of problems. We have not outgrown our need for an internal timing system – far from it.
With so much of the talk about bodyclocks focused on sleep, it’s easy to forget that all of our biological processes around the clock are organised by circadian rhythms. Every day of the internal schedule is full of appointments. Nitrogen-fixing bacteria glean nitrogen from the atmosphere, and they also photosynthesise to store energy. But they can’t do both at once, so they alternate between nocturnal nitrogen-fixing and daytime photosynthesis. Mammals have many such processes to orchestrate, and just about everything our body does – from metabolism and DNA repair to immune responses and cognition – is under circadian control. In humans, normal organ functioning depends on a harmony in hierarchy: synchrony among molecular rhythms within each cell, among cells in each organ, and among organs in the body. Coordinated functioning ensures that the body doesn’t work against itself.
The human body is teeming with clocks, arranged in a hierarchy. At the helm, a master clock in the brain’s hypothalamus called the suprachiasmatic nucleus (SCN) sets the overall rhythm of the body. But each organ also has its own rhythm that’s generated internally. A clock, in the broadest sense, consists of any type of regular oscillation, and these clocks take the form of a transcription-translation feedback loop that circles back to the beginning in roughly 24 hours. Clock genes activate a process that results in protein synthesis, and once the concentration of those proteins in the cell reaches a critical threshold, they enter the nucleus and turn off the clock gene that produced them. Once the proteins have broken down, the gene switches on and the cycle begins again.
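[ed. For the technically curious, here is a minimal sketch of that feedback loop in Python. It is an illustration, not a fitted model of real clock genes: a single generic "clock protein" represses its own production after a delay standing in for transcription, translation and transport back into the nucleus, and the script measures whatever period emerges. All parameter values are invented for demonstration.]

```python
# Delayed negative feedback: dP/dt = beta / (1 + (P(t - tau)/K)^n) - gamma * P
# The delay tau stands in for the transcription -> translation -> nuclear
# re-entry lag described above. Parameters are illustrative, not measured.

beta, K, n = 1.0, 1.0, 4.0     # max production rate, repression threshold, Hill steepness
gamma = 0.15                   # protein degradation rate (per hour)
tau = 8.5                      # feedback delay (hours)
dt, total_hours = 0.01, 480.0  # integration step and simulated span

lag = int(tau / dt)
P = [0.1] * (lag + 1)          # history buffer so P(t - tau) is always available

for _ in range(int(total_hours / dt)):
    delayed = P[-lag - 1]                           # protein level tau hours ago
    production = beta / (1.0 + (delayed / K) ** n)  # repression by the delayed protein
    P.append(P[-1] + dt * (production - gamma * P[-1]))

# Estimate the period from upward mean-crossings in the second half of the run.
tail = P[len(P) // 2:]
mean = sum(tail) / len(tail)
ups = [i * dt for i in range(1, len(tail)) if tail[i - 1] < mean <= tail[i]]
gaps = [b - a for a, b in zip(ups, ups[1:])]
if gaps:
    print(f"estimated period: {sum(gaps) / len(gaps):.1f} hours")
```

With these made-up numbers the loop settles into stable oscillations with a period on the order of a day; shrink tau or raise gamma and the cycle shortens or dies out, which is the intuition behind clock-gene mutants with fast, slow or absent rhythms.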
Every day, the body corrects its clock to match its surroundings using daylight. A photoreceptor in the retina – the third photoreceptor after our black-and-white vision rods and colour vision cones – senses only overall light levels and reports directly to that master clock to reset it when it drifts off-course.
Those other clocks, some generated within the cell and others governing the workings of organs, have basically the same molecular organisation as the SCN but they are autonomous from it. They differ enormously in the extent to which their rhythms are coupled to the central clock and they can be influenced by other factors. For example, liver and pancreas clocks are easily reset by eating late at night, which overrides the SCN signals in those organs and puts them out of sync with the rest of the body. Jetlag’s groggy unpleasantness comes more from this uncoupling of clocks than from an earlier or later internal time, per se. It takes about a day per hour of time-change to reset the master clock, but it can take even longer to corral the organs into line with each other. The effects of circadian dysfunction can be disastrous in the long term – knock out the cellular clocks in just part of a mouse pancreas, for example, and diabetes quickly ensues.
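[ed. The "day per hour" rule invites a back-of-envelope sketch. In the snippet below, the SCN closes its offset to a new time zone at roughly one hour per day while a peripheral clock trails at a slower rate; both rates are assumptions for illustration, not measured values.]

```python
# Toy re-entrainment arithmetic: each clock closes its offset to the new time
# zone at a fixed number of hours per day. Rates are illustrative assumptions.

def days_to_entrain(shift_hours: float, hours_per_day: float) -> int:
    offset, days = abs(shift_hours), 0
    while offset > 0.25:                      # call it entrained within 15 minutes
        offset -= min(hours_per_day, offset)  # never overshoot the target
        days += 1
    return days

shift = 8  # e.g. flying across eight time zones
print("master clock (SCN):", days_to_entrain(shift, 1.0), "days")  # ~1 h/day
print("peripheral clock:  ", days_to_entrain(shift, 0.6), "days")  # assumed slower
```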
by Jessa Gamble, Aeon | Read more:
Image: Richard Wilkinson
Sunday, November 6, 2016
Man Hacks Alexa Into Singing Fish Robot
[ed. Hilarious.]
Man hacks Alexa into singing fish robot, terror ensues.
by Russel Brandom, The Verge | Read more:
Image: Brian Kane
Thursday, November 3, 2016
Books Should Send Us Into Therapy: On The Paradox of Bibliotherapy
As an advocate for both books and therapy, I determined, upon first hearing the word “bibliotherapy,” that this might be my bespoke profession. I go to group therapy. I read a lot of novels. I’m constantly recommending novels to my group. Members struggling with various problems typically don’t count on me to empathize through personal experience. They count on me for book recommendations. Your adult son is an expat in Europe and is exploring his sexuality? See Caleb Crain’s Necessary Errors. You feel alienated from your wealthy family but drawn to nagging spiritual questions about existence? Walker Percy’s The Moviegoer is for you. Gutted by the loss of a loved one? You could do worse than James Agee’s A Death in the Family (Men’s therapy group, by the way).
The concept of bibliotherapy — a word coined in 1916 — long teetered on the edge of trendiness. But lately it has tilted toward truth. The highbrow media has weighed in favorably — consider Ceridwen Dovey’s much discussed New Yorker profile on The School of Life’s bibliotherapy team. And then the books: Azar Nafisi’s Reading Lolita in Tehran, Andy Miller’s The Year of Reading Dangerously, William Deresiewicz’s A Jane Austen Education and, perhaps most notably, The Novel Cure by Ella Berthoud and Susan Elderkin. Each book, to varying degrees, suggests connections between reading and happiness. A Google Scholar’s worth of criticism — my obscure favorite being Keith Oatley’s “Why Fiction May Be Twice as True as Fact: Fiction as Cognitive and Emotional Simulation” (pdf) — has lent the idea scholarly heft. To be clear: nobody is arguing that reading books is a substitute for the medication required to treat acute mental illness. But the notion that novels might have a genuine therapeutic benefit for certain kinds of spiritual ailments seems legit.
If we concede that books can be therapeutic, then it seems appropriate to explore the potential pitfalls of asking literature to serve that cause. Of initial concern is the inherent presumptuousness of the endeavor. When I advise my fellow group therapy members — whom I know as intimately as I know anyone, if intimacy is defined by the sharing of anxiety, fear, and grief — what they should read, the assumption is that I’m able to divine how my interpretation of a novel will intersect with their predicted interpretations of the same novel. If reception theory tells us anything, it’s that this kind of interpretive foretelling, especially when refracted through the radical subjectivity of a novel, is a matter of great uncertainty, and maybe even an implicit form of lit bullying (“What? You didn’t pick up on that theme? What’s the matter with you?”).
Plus, novels don’t work this way. They aren’t narrative prescriptions. Even when done badly, novels are artistic expressions necessarily unmoored from reality, expressions that ultimately depend on idiosyncratic characters who act, think, and feel, thereby becoming emotionally, psychologically, intellectually, and even physically embodied — quite differently — in every reader’s mind. Yes, The Great Gatsby has universal appeal. But there’s a unique Gatsby for every reader who has passed eyes over the book. (Maybe even Donald Trump has one: “not great, not great; an overrated loser.”) Given the tenuousness and variability of this personal act of translation, it’s hard not to wonder: How could anyone expect to intuit how anyone else might react to certain characters in certain settings under certain circumstances?
In The Novel Cure, Berthoud and Elderkin aren’t hampered by this question. They match personal contemporary ailments with common literary themes as if they were complementary puzzle pieces. They do so under the assumption that the mere presence of a literary counterpart to a contemporary dilemma automatically imbues a novel with therapeutic agency. They advise that a person dealing with adultery in real life might want to read Madame Bovary. Or that someone who struggles to reach orgasm should read Lady Chatterley’s Lover. Does this kind of advice make any sense?
Consider the adultery example. How can Berthoud and Elderkin assess exactly how novelistic adultery will be translated into thoughts and feelings about something as deeply contextualized as real life adultery? How can they assess if it will be translated at all? Think of all the possible reactions. Use your imagination. A contemporary cuckold could go off the rails at any juncture in the Bovary narrative. He could become so immensely interested in Gustave Flaubert’s intimately detailed portrait of 19th-century provincial life, and the people in it, that he eventually finds the cuckolding theme a distraction, finishes the novel, quits his high-paying job, and commits himself to a graduate program in French social history. Books have driven people to do stranger things. Sure, it’s unlikely, but my point is this: Telling someone precisely what to take from a novel, based on the superficiality of a shared event, isn’t therapeutic. It’s fascist. A repression of a more genuine response.
More interesting would be to reverse the bibliotherapeutic premise altogether. Instead of asking “what’s wrong with you?” and assigning a book, assign a book and ask “what’s wrong with you?” When I lend books to friends outside of therapy, this strategy (upon reflection) is basically what I’m testing. I’m not trying to solve a person’s problem. I’m trying, in a way, to create one. I want to shake someone out of complacency. Great novels (and sometimes not so great ones) jar us, often unexpectedly. Ever have a novel sneak up on you and kick you in the gut, leaving you staring into space, dazed by an epiphany? Yes. Novels do this. They present obstacles that elicit the catharsis (from katharo, which means clearing obstacles) we didn’t think we needed. We should allow books to cause more trouble in our lives.
But the sanguine bibliotherapeutic mission will have none of that. Its premise is to take down obstacles and march us towards happiness. Proof is how easily this genre of therapy veers into self-help territory. The New York Public Library’s “Bibliotherapy” page suggests that readers check out David Brooks’s The Road to Character, Cheryl Strayed’s Brave Enough, and Elizabeth Gilbert’s Big Magic: Creative Living Beyond Fear. These books are assuredly smart books by smart writers, all of whom I admire. But the goal of this type of book is to help readers find some kind of stability. There’s obviously nothing wrong with that. But the problem from the perspective of literary fiction is that such “self-improvement” books seek to tamp down the very human emotions that literature dines out on: fear, insecurity, vulnerability, and the willingness to take strange paths to strange places. Imagine reading Fyodor Dostoyevsky’s Crime and Punishment without being at least a little off-kilter. You’d shut the book the moment Raskolnikov committed his murder. Being moved by fiction means being willing to be led astray a little. It helps if your rules are not ordinary.
It also seems prudent to wonder how the bibliotherapeutic pharmacy would bottle up the work of certain writers. Would it do so in a way that excludes literary genius? Almost assuredly it would. Cormac McCarthy, whom many critics consider one of the greatest writers ever, appears three times in The Novel Cure. Predictably, The Road is mentioned as a way to (a) gain insight into fatherhood and (b) achieve brevity of expression. That’s it — all talk of apocalypse and the survival instinct as integral influences on human morality is brushed aside. Inexplicably, Blood Meridian is listed as a book that sheds light on the challenge of going cold turkey. I have no idea here. None. But I do know that if you are a reader who grasps the totality of McCarthy’s work, your literary soul, as Cormac might put it, is drowning in a cesspool of roiling bile.
Because here is what bibliotherapy, as it’s now defined, has no use for: darkness. Real darkness. McCarthy’s greatest literary accomplishment is arguably Suttree, the culmination of a series of “Tennessee novels” that dealt in chilling forms of deviance — incest, necrophilia, self-imposed social alienation — that, on every page, sully the reader’s sense of decency. McCarthy’s greatest narrative accomplishment was likely No Country for Old Men, a blood-splattered thriller that features a psychopath who kills random innocent people with a captive bolt pistol. These works, much like the work of Henry Miller (none of whose sex-fueled books get mentioned in The Novel Cure), aestheticize evil — in this case violence and misogynistic sex — into brilliant forms of literary beauty. They are tremendously important and profoundly gorgeous books, albeit in very disturbing ways. They are more likely to send you into therapy than practice it.
The good news for bibliotherapy is that there are plenty of hardcore fiction readers who know all too well that concerted reading enhances the quality of their lives. A single book might destabilize, sending you tottering into emotional turmoil. But books — collectively consumed through the steady focus of serious reading — undoubtedly have for many readers a comforting, even therapeutic, effect. This brand of bibliotherapy, a brand born of ongoing submission to great literature — not unlike traditional therapy — does not necessarily seek to solve specific problems. (In my group therapy, members have been dealing with the same unresolved issues for years. We define each other by them.) Instead, what evolves through both consistent reading and therapy is a deep, even profound, understanding of the dramas that underscore the challenges of being human in the modern world.
by James McWilliams, The Millions | Read more:
Image: Cormac McCarthy, uncredited
Crony Beliefs
For as long as I can remember, I've struggled to make sense of the terrifying gulf that separates the inside and outside views of beliefs.
From the inside, via introspection, each of us feels that our beliefs are pretty damn sensible. Sure we might harbor a bit of doubt here and there. But for the most part, we imagine we have a firm grip on reality; we don't lie awake at night fearing that we're massively deluded.
But when we consider the beliefs of other people? It's an epistemic shit show out there. Astrology, conspiracies, the healing power of crystals. Aliens who abduct Earthlings and build pyramids. That vaccines cause autism or that Obama is a crypto-Muslim — or that the world was formed some 6,000 years ago, replete with fossils made to look millions of years old. How could anyone believe this stuff?!
No, seriously: how?
Let's resist the temptation to dismiss such believers as "crazy" — along with "stupid," "gullible," "brainwashed," and "needing the comfort of simple answers." Surely these labels are appropriate some of the time, but once we apply them, we stop thinking. This isn't just lazy; it's foolish. These are fellow human beings we're talking about, creatures of our same species whose brains have been built (grown?) according to the same basic pattern. So whatever processes beget their delusions are at work in our minds as well. We therefore owe it to ourselves to try to reconcile the inside and outside views. Because let's not flatter ourselves: we believe crazy things too. We just have a hard time seeing them as crazy.
So, once again: how could anyone believe this stuff? More to the point: how could we end up believing it?
After struggling with this question for years and years, I finally have an answer I'm satisfied with.
Beliefs as Employees
By way of analogy, let's consider how beliefs in the brain are like employees at a company. This isn't a perfect analogy, but it'll get us 70% of the way there.
Employees are hired because they have a job to do, i.e., to help the company accomplish its goals. But employees don't come for free: they have to earn their keep by being useful. So if an employee does his job well, he'll be kept around, whereas if he does it poorly — or makes other kinds of trouble, like friction with his coworkers — he'll have to be let go.
Similarly, we can think about beliefs as ideas that have been "hired" by the brain. And we hire them because they have a "job" to do, which is to provide accurate information about the world. We need to know where the lions hang out (so we can avoid them), which plants are edible or poisonous (so we can eat the right ones), and who's romantically available (so we know whom to flirt with). The closer our beliefs hew to reality, the better actions we'll be able to take, leading ultimately to survival and reproductive success. That's our "bottom line," and that's what determines whether our beliefs are serving us well. If a belief performs poorly — by inaccurately modeling the world, say, and thereby leading us astray — then it needs to be let go.
I hope none of this is controversial. But here's where the analogy gets interesting.
Consider the case of Acme Corp., a property development firm in a small town called Nepotsville. The unwritten rule of doing business in Nepotsville is that companies are expected to hire the city council's friends and family members. Companies that make these strategic hires end up getting their permits approved and winning contracts from the city. Meanwhile, companies that "refuse to play ball" find themselves getting sued, smeared in the local papers, and shut out of new business.
In this environment, Acme faces two kinds of incentives, one pragmatic and one political. First, like any business, it needs to complete projects on time and under budget. And in order to do that, it needs to act like a meritocracy, i.e., by hiring qualified workers, monitoring their performance, and firing those who don't pull their weight. But at the same time, Acme also needs to appease the city council. And thus it needs to engage in a little cronyism, i.e., by hiring workers who happen to be well-connected to the city council (even if they're unqualified) and preventing those crony workers from being fired (even when they do shoddy work).
Suppose Acme has just decided to hire the mayor's nephew Robert as a business analyst. Robert isn't even remotely qualified for the role, but it's nevertheless in Acme's interests to hire him. He'll "earn his keep" not by doing good work, but by keeping the mayor off the company's back.
Now suppose we were to check in on Robert six months later. If we didn't already know he was a crony, we might easily mistake him for a regular employee. We'd find him making spreadsheets, attending meetings, drawing a salary: all the things employees do. But if we look carefully enough — not at Robert per se, but at the way the company treats him — we're liable to notice something fishy. He's terrible at his job, and yet he isn't fired. Everyone cuts him slack and treats him with kid gloves. The boss tolerates his mistakes and even works overtime to compensate for them. God knows, maybe he's even promoted.
Clearly Robert is a different kind of employee, a different breed. The way he moves through the company is strange, as if he's governed by different rules, measured by a different yardstick. He's in the meritocracy, but not of the meritocracy.
And now the point of this whole analogy.
I contend that the best way to understand all the crazy beliefs out there — aliens, conspiracies, and all the rest — is to analyze them as crony beliefs. Beliefs that have been "hired" not for the legitimate purpose of accurately modeling the world, but rather for social and political kickbacks.
As Steven Pinker says,
“People are embraced or condemned according to their beliefs, so one function of the mind may be to hold beliefs that bring the belief-holder the greatest number of allies, protectors, or disciples, rather than beliefs that are most likely to be true.”
In other words, just like Acme, the human brain has to strike an awkward balance between two different reward systems:
- Meritocracy, where we monitor beliefs for accuracy out of fear that we'll stumble by acting on a false belief; and
- Cronyism, where we don't care about accuracy so much as whether our beliefs make the right impressions on others.
The point is, our brains are incredibly powerful organs, but their native architecture doesn't care about high-minded ideals like Truth. They're designed to work tirelessly and efficiently — if sometimes subtly and counterintuitively — in our self-interest. So if a brain anticipates that it will be rewarded for adopting a particular belief, it's perfectly happy to do so, and doesn't much care where the reward comes from — whether it's pragmatic (better outcomes resulting from better decisions), social (better treatment from one's peers), or some mix of the two. A brain that didn't adopt a socially-useful (crony) belief would quickly find itself at a disadvantage relative to brains that are more willing to "play ball." In extreme environments, like the French Revolution, a brain that rejects crony beliefs, however spurious, may even find itself forcibly removed from its body and left to rot on a pike. Faced with such incentives, is it any wonder our brains fall in line?
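[ed. A toy model may make the trade-off concrete. In the Python sketch below, a hypothetical brain "hires" whichever candidate belief scores highest on pragmatic accuracy plus socially weighted approval; every number is invented for illustration. As the social weight rises, the adopted belief flips from the true-but-unpopular one to the flattering crony.]

```python
# Two reward systems folded into one scoring rule:
#   pay = accuracy + social_weight * approval
# All payoffs are invented; the point is only how the maximum moves.

def adopted_belief(candidates, social_weight):
    # The brain "hires" the belief with the best combined payoff.
    return max(candidates, key=lambda b: b["accuracy"] + social_weight * b["approval"])

candidates = [
    {"name": "true but unpopular",   "accuracy": 1.0, "approval": -0.5},
    {"name": "false but flattering", "accuracy": 0.2, "approval":  1.0},
]

for w in (0.0, 0.3, 1.0, 3.0):  # from pure meritocracy toward heavy cronyism
    pick = adopted_belief(candidates, w)
    print(f"social weight {w}: the brain hires the {pick['name']} belief")
```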
Even mild incentives, however, can still exert pressure on our beliefs. Russ Roberts tells the story of a colleague who, at a picnic, started arguing for an unpopular political opinion — that minimum wage laws can cause harm — whereupon there was a "frost in the air" as his fellow picnickers "edged away from him on the blanket." If this happens once or twice, it's easy enough to shrug off. But when it happens again and again, especially among people whose opinions we care about, sooner or later we'll second-guess our beliefs and be tempted to revise them.
Mild or otherwise, these incentives are also pervasive. Everywhere we turn, we face pressure to adopt crony beliefs. At work, we're rewarded for believing good things about the company. At church, we earn trust in exchange for faith, while facing severe sanctions for heresy. In politics, our allies support us when we toe the party line, and withdraw support when we refuse. (When we say politics is the mind-killer, it's because these social rewards completely dominate the pragmatic rewards, and thus we have almost no incentive to get at the truth.) Even dating can put untoward pressure on our minds, insofar as potential romantic partners judge us for what we believe.
If you've ever wanted to believe something, ask yourself where that desire comes from. Hint: it's not the desire simply to believe what's true.
In short: Just as money can pervert scientific research, so everyday social incentives have the potential to distort our beliefs.
Posturing
So far we've been describing our brains as "responding to incentives," which gives them a passive role. But it can also be helpful to take a different perspective, one in which our brains actively adopt crony beliefs in order to strategically influence other people. In other words, we use crony beliefs to posture.
by Kevin Simler, Melting Asphalt | Read more:
Image: via
Labels: Critical Thought, Politics, Psychology, Relationships, Religion
What’s Your Ideal Community? The Answer Is Political
The American political map that has emerged over the last half-century, with blue cities and red beyond, is a product of both the ideological realignment of the two parties and geographic sorting among voters. It also raises a fascinating question about how our politics are shaped by where we live.
Is it simply that people who are already liberal choose dense urban environments and conservatives choose more suburban living? Or do these places influence how we feel about government — and each other — in ways that make us more liberal or conservative?
Political scientists, fortunately, cannot randomly assign people to cities, suburbs or rural outposts and then wait to see if their politics adapt. But their theories of why density might matter for partisanship add a provocative layer to how we think about the differences among us that are more often defined in an election year by education, income or race.
A large Pew survey of American political life two years ago found that self-described liberals overwhelmingly said they’d prefer to live where the homes are smaller and closer together but where the amenities are within walking distance. Conservatives chose the opposite trade-off: big homes, spaced farther apart, but with schools and restaurants miles away. The question got at a pattern underlying politics today: Beyond our disagreements about taxes, welfare or health care, partisans also fundamentally favor different kinds of places. (...)
Thomas Ogorzalek, a political scientist at Northwestern, argues that liberalism has its roots in big-city governments trying to solve the kinds of local problems that arise when diverse populations cram together. Compared with the suburbs or rural America, cities are more complex. They’re harder to govern, which means in many ways that they demand bigger government: a large transit agency to move people around, intricate parking rules to govern scarce spaces, a garbage truck armada to keep the streets clean.
“Externalities accumulate faster in dense places, and you need to do something about them,” Mr. Ogorzalek said. In other words, the trash piles up.
New York City, with its 24,000 restaurants and bars, needs a system of publicly posted health grades. A town with two restaurants may not. New York needs some colossal bridges connecting Manhattan and Brooklyn. A smaller community doesn’t need public-works projects on that scale. New York requires a large police force. A rural resident may need self-reliance when the closest officer is 10 miles away.
It’s conceivable that people who live in cities come to value more active government. Or they’re more receptive to investing in welfare because they pass the homeless every day. Or they appreciate immigration because their cab rides and lunch depend on immigrants. This argument is partly about the people we’re exposed to in cities (the poor, foreigners), and partly about the logistics of living there.
“As someone who’s lived in cities for almost all of my adult life, it’s impossible to conceive of a well-functioning city without a strong public works and a strong governmental infrastructure,” said Thomas Sugrue, a historian at New York University. Government has actively shaped suburbia, too, for example engineering the mortgage tax breaks that make owning large homes more affordable. But those government interventions are often less visible. “They’re not invisible,” Mr. Sugrue said, “when you’re going down Eighth Street as it’s being repaved and the sewer lines underneath it are being replaced.”
The political analyst William Schneider articulated a similarly plausible idea about the politics of suburbia in a classic 1992 article for The Atlantic. As cities require reliance on the commons, Mr. Schneider argued that “to move to the suburbs is to express a preference for the private over the public.” The suburbs entail private yards over public parks, private cars over public transit, private malls over public squares. Suburban living even buys a kind of private government, Mr. Schneider wrote, with the promise of local control of neighborhood schools and social services that benefit only the people who can afford to live there.
His theory supports self-selection; people who want that environment move to it. But Jessica Trounstine, a political scientist at the University of California, Merced, believes that people who move to the suburbs, apolitically, can also become part of a political ideology that they find benefits them and their pocketbooks.
by Emily Badger, NY Times | Read more:
Image: Pew Research Center
Wednesday, November 2, 2016
How Instagram is Changing the Way We Eat
I often post pictures of my food online before I have tasted it. I take the photo, adjust the brightness, contrast and saturation, upload it to my social media accounts and rejoice in how amazing it is. Sometimes, when I go on to eat the food in front of me, I don’t even like it. That pretty orange and pistachio thing I made is bitter because the oranges have gone rancid. The photogenic Italian sfogliatella pastry, which I bought more or less entirely to take a photo of, is actually pretty tough. I am left chewing the pastry long after the “likes” have stopped trickling in. The interaction was sweet while it lasted, though.
We love to share our food. Not necessarily in the physical sense, because that would mean giving away something substantive and delicious. That gesture is still reserved for the people around us who we love and care about. But for the rest of the world – the school pals and the random followers and our prying family friends – we share our food online. We are sharing more food in this way than ever before, and a huge amount of this hungry, food-centric media revolves around food photography and short videos on platforms such as Instagram, Snapchat and Facebook.
The annual Waitrose food and drink report, released on Wednesday, focuses on the way in which food has become social currency thanks to how we share and discuss it online. It is impossible to wade through the quagmire of social media without segueing into virtual treasure troves of #foodporn, #instafood and proudly #delicious content.
According to the report, one in five Brits has shared a food photo online or with friends in the past month. We have managed to forge what looks like a rare pure corner of social media, where pleasure is the order of the day. No matter the poster or the politics, food shines bright as something that all of us can aspire to, if only we curate our lives and our diets carefully enough.
Most of us who document our meals online are amateurs, but there exists a sizeable, and hugely profitable, industry of professional food bloggers and Instagrammers, whose pristine food styling sets the tone for a whole aesthetic movement.
Take Sarah Coates, who, off the back of the success of her blog The Sugar Hit and her 36,000 followers on Instagram, has released a cookbook and shaped a particular niche for herself in the online baking world. Hers is a self-avowedly saccharine, indulgent kind of food. Unlike much of the more earnest online food world, her photographs are bright, flooded with light and popping with flashes of colour, vibrancy and life. Punchy tones and patterns give the photos a kind of levity, in spite of the (wonderfully) butter-heavy, cloying sweetness of the food itself. Certain foods become emblems with a life of their own: waffles made in a round waffle-iron; doughnuts glazed or rolled in sugar; funfetti sprinkles. These posts amass huge amounts of interaction from followers, and spawn food trends of their own. First come the savvy Instagrammers, then the foodie public, and then, once we have all moved on to something new, the traditional food press.
[Image: Glazed doughnuts from the Sugar Hit blog]
Once these Instagram-friendly foods go viral, they can completely change the way we eat. Breakfast, for example, has shifted from a decidedly unphotogenic cereal or marmalade on toast to the bright hues of avocado toast (there are nearly 250,000 #avocadotoast hashtagged photos on Instagram) and smoothie bowls. Even the humble fry-up has been rebranded, in the hands of the Hemsley sisters, as an oven-baked, meticulously arranged, “healthier” big breakfast. It looks great and presumably tastes awful, the oven tray divided into neat strips of colour, from leathery lean oven bacon to overdone eggs.
Among the foods billed to gain traction in 2017, today’s Waitrose report points to Hawaiian poke and even, in an alarming twist, vegetable yoghurts. No doubt these will be helped along in the likability stakes with their colourful, snappy Instagram vibe.
There is a big generation gap in this movement, though. According to the Waitrose report, 18- to 24-year-olds are five times more likely to share photos of their food online than the over-55s – and that is certainly reflected in the types of cuisine, styling and tone that are popular in the online food world. So you are unlikely to find photos of old-fashioned sherry trifles, unpretty Irish stew or traditional meat-and-two-veg meals, unless it’s in a shrewdly ironic way.
Instead, there are fun, irreverent Instagram food circles, all funfetti and ice-cream sandwiches, and – in a twist that is so very 2016 that it makes my soul scream – flamingo pool float-shaped cakes. But just as popular are the serious, aspirational channels popularised by accounts such as @violetcakeslondon and @skye_mcalpine. Here, you will find beautifully shot, intricately staged photographs of the food and, crucially, the lifestyles of successful, creative thirtysomethings. These are wishful odes to how serene and perfect your life could be, if only you had the money, the £50 ceramic platters and the time. Perhaps in keeping with the broader asymmetry between the numbers of social media users in different generations, there’s a lot less to be seen of older people, or past food fashions, in this smart, moneyed, and overwhelmingly young world.
And yet it would be wrong to assume that this online culture doesn’t bleed through to tint the ways that real people cook and eat. For every wildly successful professional food blogger, there are countless amateurs posting the minutiae of their gastronomic day online. Meals that are Instagrammable – take, for instance, Borough Market’s Bread Ahead doughnuts, of which there are nearly 5,000 tagged photos on Instagram – become viral content in their own right. These foods become the must-eat and, more importantly, must-document meals of the moment. Restaurants such as London’s Bao keep punters queuing out the door just through the photogenic strength of one good dish. (For what it’s worth, I went to Bao to try their cloud-soft steamed buns and they were as good as they looked.)
Going green: fresh avocado and guacamole.
Increasingly, we are being influenced not just in the types of food that we eat, but how we cook and eat that food. The Waitrose report also states that almost half of us take more care over a dish if we think a photo might be taken of it, and nearly 40% claim to worry more about presentation than they did five years ago. We might include a garnish of picked thyme leaves to bring a pop of colour to a lemon drizzle cake, even if that thyme doesn’t really stand strong against the punch of the citrus. I am guilty of weeding out the messy and the misshapen from a batch of doughnuts or muffins before I take a photo. I might add a glaze that nobody wants, just because it will make the afternoon sun catch and glint in the furrows of the churros I just fried. It’s aesthetic first, taste later and, quite often, no taste at all.
by Ruby Tandoh, The Guardian | Read more:
Image: healthyeating_jo/Instagram

Tuesday, November 1, 2016
Sing to Me
It is strange to think of karaoke as an invention. The practice predates its facilitating devices, and the concept transcends its practice: Karaoke is the hobby of being a star; it is an adjuvant for the truest you an audience could handle.
Karaoke does have a parent. In the late 1960s, Daisuke Inoue was working as a club keyboardist, accompanying drinkers who wanted to belt out a song. “Out of the 108 club musicians in Kobe, I was the worst,” he told Time. One client, the head of a steel company, asked Inoue to join him at a hot springs resort where he’d hoped to entertain business associates. Inoue declined, but instead recorded a backing tape tailored to the client’s erratic singing style. It was a success. Intuiting a demand, Inoue built a jukebox-like device fitted with a car stereo and a microphone, and leased an initial batch to bars across the city in 1971. “I’m not an inventor,” he said in an interview. “I simply put things that already exist together, which is completely different.” He never patented the device (in 1983, a Filipino inventor named Roberto del Rosario acquired the patent for his own sing-along system), though years later he patented a solution to ward cockroaches and rats away from the wiring.
In 1999, Time named Inoue one of the “most influential Asians” of the last century; in 2004, he received the Ig Nobel prize, a semiserious Nobel parody presented by genuine Nobel laureates at Harvard University. At the ceremony, Inoue ended his acceptance speech with a few bars of the Coke jingle “I’d Like to Teach the World to Sing.” The crowd gave him a standing ovation, and four laureates serenaded him with “Can’t Take My Eyes Off You” in the style of Andy Williams. “I was nominated [as] the inventor of karaoke, which teaches people to bear the awful singing of ordinary citizens, and enjoy it anyway,” Inoue wrote in an essay. “That is ‘genuine peace,’ they told me.”
“While karaoke might have originated in Japan, it has certainly become global,” write Xun Zhou and Francesca Tarocco in Karaoke: The Global Phenomenon. “Each country has appropriated karaoke into its own existing culture.” My focus is limited to just a slice of North America, where karaoke has gone from a waggish punchline — an item on the list of Things We All Hate, according to late-night hosts and birthday cards — to an “ironic” pastime, to just a thing people like to do, in any number of forms. You can rent a box, or perform for a crowded bar; you can do hip-hop karaoke, metal karaoke, porno karaoke, or, in Portland, “puppet karaoke.” For the ethnography Karaoke Idols: Popular Music and the Performance of Identity, Dr. Kevin Brown spent two years in the late aughts frequenting a karaoke bar near Denver called Capone’s: “a place where the white-collar collides with the blue-collar, the straight mingle with the gay, and people of all colors drink their beer and whiskey side by side.” In university, a friend of mine took a volunteer slot hosting karaoke for inpatients at a mental health facility downtown. Years later I visited a friend at the same center on what happened to be karaoke night; we sang “It’s My Party.”
When I was growing up in Toronto, karaoke was reviled for reasons that now seem crass: There is nothing more nobodyish than pretending you’re somebody. Canada is an emphatically modest country, and the ’90s were a less extroverted age: Public attitudes were more condemnatory of those who showed themselves without seeming to have earned the right. The ’90s were less empathetic, too, and karaoke lays bare the need to be seen, and accepted; such needs are universal, and repulsive. We live now, you could say, in a karaoke age, in which you’re encouraged to show yourself, through a range of creative presets. Participating online implies that you’re worthy of being perceived, that some spark of you deserves to exist in public. Instagram is as public as a painting.
Karaoke is a social medium, a vector for a unit of your sensibility, just as mediated as any other, although it demands different materials. Twitter calls for wit, Instagram for aesthetic, but karaoke is supposed to present your nudest self.
by Alexandra Molotkow, Real Life | Read more:
Image: Farah Al-Qasimi

2016: A Liberal Odyssey
His face is turned toward the past. Where we perceive a chain of events, he sees one single catastrophe which keeps piling wreckage upon wreckage and hurls it in front of his feet. The Angel would like to stay, awaken the dead and make whole what has been smashed. But a storm is blowing from Paradise; it has got caught in his wings with such violence that the angel can no longer close them. The storm irresistibly propels him into the future to which his back is turned, while the pile of debris before him grows skyward. This storm is what we call progress.
~ Walter Benjamin - Angel of History
In a heart-wrenching letter published in the New York Times, U.S.-born journalist Michael Luo described his family’s recent encounter with the kind of bigoted outburst—culminating with the admonition that Luo’s family should “go back to China”—that, sadly, is quite common for Asian-Americans across the country. Indeed, for many people of varying races, ethnicities, sexualities, genders, and abilities, Luo’s letter trembled with darkly familiar echoes of discrimination, fear, hatred, and intolerance. Soon after, Luo took to Twitter to invite other Asian-Americans to share their experiences with racism using the hashtag #ThisIs2016. What really stood out in the tweeted testimonies was how frequent these experiences seem to be, how familiar they are to so many.
What is also strikingly familiar, though, is the premise of the hashtag #ThisIs2016. This exclamation has become a hallmark of liberal discourse, popping up in conversations, pundit patter, social media rants, and even in the titles of articles themselves (“It’s 2016, And Even the Dictionary Is Full of Sexist Disses,” “It’s 2016: Time for cargo shorts to give up and die,” etc.). You’ll also spot it in tweets from faux-authoritative web portals like Vox—“It’s 2016. Why is anyone still keeping elephants in circuses?”—to Hillary Clinton—“It’s 2016. Women deserve equal pay.” Whether we’re talking about racism, sexism, homophobia, or some other abhorrent trace of backwardness, it’s become customary to pepper our stock responses with this ritual affirmation of what progress should look like at this advanced stage of history.
Everyone seems surprised that, in the year 2016, intolerance still exists, yet flying cars do not. And people’s genuine shock that such dark remnants of our past continue to stain our progressive present exposes their deep faith that “2016” is the bearer of some liberal-minded saving grace: the grace of history and progress that will (or should) just make things better. But I think it’s time we address what 2016 really means: jack shit. And there’s a special poison running through the belief that it means anything more.
From the beginning, Donald Trump’s vision to “Make America Great Again” has peddled a dangerously tunnel-visioned nostalgia while appealing to the anxieties and discomforts of people who find themselves adrift in a crumbling now that no longer cares for or about them like it used to. Many spot-on and necessary critiques have been quick to connect the dots between Trump’s nostalgic wet dream of bygone glory and the kind of racism, xenophobia, misogyny, etc. that’s fueled his campaign from the beginning. Such criticism rightly points out that Trump supporters who yearn for the good old days are, in fact, longing for a time when “the good life” was actually built on the oppressive exclusion of non-whites, women, LGBTQ people, and others. Trump freely includes such excluded “others” in his list of scapegoats for people’s current anxieties, and the past he and his supporters long for is dangerously fetishized as a place where such scapegoats would either lose favor in the dominant culture or be eliminated entirely.
However, in railing against the backward desires that spur the claim on history Trump and his supporters are making, we can often blind ourselves to the fallacies of our own myopic historical vision. That’s how ideology works, after all: we don’t notice how it skews our own perceptions. Like death, it’s always something that afflicts someone else. But, while Trump and many of his supporters may fetishize a past that is deeply retrograde, liberals and progressives have also demonstrated a troubling tendency to fetishize a future that they presume is on their side. There’s something peculiarly telling about this kind of progress fetishism, which has been conscripted as ideology-of-first-resort for Clintonite New Democrats.
Whether we’re talking about the sleek glitz of technological advancement or the triumph of the values of liberal humanism, the teleological view of historical progress is counterproductive and potentially dangerous. When we’re stuck in the slow hell of rush-hour traffic, for instance, we may catch ourselves grumpily wondering why the hell we can’t teleport yet. But there’s an implied consumerist asterisk next to the “we.” What we mean is, “why haven’t those eggheads in lab coats figured this stuff out yet so the rest of us can live in the future we were promised?” While imposing on the future a specific trajectory, custom-fitted to what we imagine technological progress is supposed to give us, we also entrust the production of that future to experts who, we assume, want the same things we do. This is hazardously akin to the platitudinous futurism of Clintonism, which has smuggled in technocratic neoliberalism and a globally expansive military-industrial complex under the mantle of progressive wishful thinking. (...)
In 2016, liberal values enjoy a relatively dominant place in popular culture—from the Modern Family melting pot to the Hillary Clinton campaign’s multicultural basket of deployables. The world reflected back to us through various media is one that has generally accepted the familiar values of equality, tolerance, respect for difference, a very low-grade critique of corporate greed, etc. The culture wars are over, and we on the leftish side of things have reportedly “won”. . . which is probably why the rise of Trump was so shocking for many.
But Trumpism, among many other deviations from the scripted finale to history, didn’t come from nowhere, and it won’t just go away. One of the direst products of the 2016 election has been the stubborn refusal of liberals and progressives to reevaluate our unspoken presumption that the cultural ubiquity of our “shared liberal values” meant that there was no longer any need to defend or redefine those values. Trumpism should alert liberals that there is, and always will be, infinitely more work to do. Instead, it has only assured liberals of their infinite righteousness in comparison, confirming their conviction that something must be fundamentally outdated “in the hearts” of this “other side” whose followers have chosen to stand on the “wrong side of history.”
Our bizarre obsession with being on the “right side of history” has become another weapon of the “smug style” in American liberalism. Liberal smugness involves more than condescendingly talking down to others who don’t “get it,” reducing the complicated tissue of their souls to the ignominious personal traits of racism, misogyny, etc. Liberal smugness is a posture that permits us to simply take our own righteousness for granted—to the point that we don’t even see the need to defend our positions. Rather than confront the darker sides of our own beliefs, or face head-on the counterclaims on history that other political actors are making, we remain cocooned in our social echo chambers filled with people who already agree with us. We also find affirmation in the broader echo chamber of popular culture, whose dominance further reassures us of the wrongness of the beliefs of others. This is 2016; look around you. Stay woke.
To be on the right side of anything is, as everyone knows, a matter of perspective. In reserving the vanguard spot in the historical drama for ourselves, we’re confidently presuming to know what the perspective of posterity will be. But the more obnoxious aspect of this concern for “being on the right side of history” is its promotion of a singularly self-involved relationship with history itself. History is no longer the people’s furnace of cultural creation and political invention, producing a future whose shape has not yet been hammered out. Rather, in this rigidly schematized vision, history is reduced to the role of set template—divided down the middle with a “right” and “wrong” side for us to choose from—that will bear witness to and validate our personal choice. Is this not just a kind of eschatology? Are we in heaven yet?
by Maximillian Alvarez, The Baffler | Read more:
Image: NY Post, Paul Klee Angelus Novus
The End of Adolescence
Adolescence as an idea and as an experience grew out of the more general elevation of childhood as an ideal throughout the Western world. By the closing decades of the 19th century, nations defined the quality of their cultures by the treatment of their children. As Julia Lathrop, the first director of the United States Children’s Bureau, the first and only agency exclusively devoted to the wellbeing of children, observed in its second annual report, children’s welfare ‘tests the public spirit and democracy of a community’.
Progressive societies cared for their children by emphasising play and schooling; parents were expected to shelter and protect their children’s innocence by keeping them from paid work and the wrong kinds of knowledge; while health, protection and education became the governing principles of child life. These institutional developments were accompanied by a new children’s literature that elevated children’s fantasy and dwelled on its special qualities. The stories of Beatrix Potter, L Frank Baum and Lewis Carroll celebrated the wonderland of childhood through pastoral imagining and lands of Oz.
The United States went further. In addition to the conventional scope of childhood from birth through to age 12 – a period when children’s dependency was widely taken for granted – Americans moved the goalposts of childhood as a democratic ideal by extending protections to cover the teen years. The reasons for this embrace of ‘adolescence’ are numerous. As the US economy grew, it relied on a complex immigrant population whose young people were potentially problematic as workers and citizens. To protect them from degrading work, and society from the problems that they could create by idling on the streets, the sheltering umbrella of adolescence became a means to extend their socialisation as children into later years. The concept of adolescence also stimulated Americans to create institutions that could guide adolescents during this later period of childhood; and, as they did so, adolescence became a potent category.
With the concept of adolescence, American parents, especially those in the middle class, could predict the staging of their children’s maturation. But adolescence soon became a vision of normal development that was applicable to all youth – its bridging character (connecting childhood and adulthood) giving young Americans a structured way to prepare for mating and work. In the 21st century, the bridge is sagging at both ends as the innocence of childhood has become more difficult to protect, and adulthood is long delayed. While adolescence once helped frame many matters regarding the teen years, it is no longer an adequate way to understand what is happening to the youth population. And it no longer offers a roadmap for how they can be expected to mature.
In 1904, the psychologist G Stanley Hall enshrined the term ‘adolescence’ in two tomes dense with physiological, psychological and behavioural descriptions that were self-consciously ‘scientific’. These became the touchstone of most discussions about adolescence for the next several decades. As a visible eruption toward adulthood, puberty is recognised in all societies as a turning point, since it marks new strength in the individual’s body and the manifestation of sexual energy. But in the US, it became the basis for elaborate and consequential intellectual reflections, and for the creation of new institutions that came to define adolescence. Though the physical expression of puberty is often associated with a ritual process, there was nothing in puberty that required the particular cultural practices that grew around it in the US as the century progressed. As the anthropologist Margaret Mead argued in the 1920s, American adolescence was a product of the particular drives of American life.
Rather than simply being a turning point leading to sexual maturity and a sign of adulthood, Hall proposed that adolescence was a critical stage of development with a variety of special attributes all of its own. Dorothy Ross, Hall’s biographer, describes him as drawing on earlier romantic notions when he portrayed adolescents as spiritual and dreamy as well as full of unfocused energy. But he also associated them with the new science of evolution that early in the century enveloped a variety of theoretical perspectives in a scientific aura. Hall believed that adolescence mirrored a critical stage in the history of human development, through which human ancestors moved as they developed their full capacities. In this way, he endowed adolescence with great significance since it connected the individual life course to larger evolutionary purposes: at once a personal transition and an expression of human history, adolescence became an elemental experience. Rather than a short juncture, it was a highway of multiple transformations.
Hall’s book would provide intellectual cover for the two most significant institutions that Americans were creating for adolescents: the juvenile court and the democratic high school. (...)
On a much grander scale than the juvenile court, the publicly financed comprehensive high school became possibly the most distinctly American invention of the 20th century. As a democratic institution for all, not just a select few who had previously attended academies, it incorporated the visions of adolescence as a critically important period of personal development, and eventually came to define that period of life for the majority of Americans. In its creation, educators opened doors of educational opportunity while supervising rambunctious young people in an environment that was social as well as instructional. As the influential educational reformer Elbert Fretwell noted in 1931 about the growing extra-curricular realm that was essential to the new vision of US secondary schooling: ‘There must be joy, zest, active, positive, creative activity, and a faith that right is mighty and that it will prevail.’
In order to accommodate the needs of a great variety of students – vastly compounded by the many different sources of immigration – the US high school moved rapidly from being the site of education in subjects such as algebra and Latin (the basis for most instruction in the 19th century US and elsewhere in the West) to becoming an institution where adolescents could learn vocational and business skills, and join sports teams, musical productions, language clubs and cooking classes. In Extra-Curricular Activities in the High School (1925), Charles R Foster concluded: ‘Instead of frowning, as in olden days, upon the desire of the young to act upon their own initiative, we have learned that only upon these varied instincts can be laid the surest basis for healthy growth … The school democracy must be animated by the spirit of cooperation, the spirit of freely working together for the positive good of the whole.’ School reformers set out to use the ‘cooperative’ spirit of peer groups and the diverse interests and energy of individuals to create the comprehensive US high school of the 20th century.
Educators opened wide the doors of the high school because they were intent on keeping students there for as long as possible. Eager to engage the attention of immigrant youth, urban high schools made many adjustments to the curriculum as well as to the social environment. Because second-generation immigrants needed to learn a new way of life, keeping them in school longer was one of the major aims of the transformed high school. They succeeded beyond all possible expectations. By the early 1930s, half of all US youth between 14 and 17 was in school; by 1940, it was 79 per cent: astonishing figures when compared with the single-digit attendance at more elite and academically focused institutions in the rest of the Western world.
High schools brought young people together into an adolescent world that helped to obscure where they came from and emphasised who they were as an age group, increasingly known as teenagers. It was in the high schools of the US that adolescence found its home. And while extended schooling increased their dependence for longer periods of time, it was also here that young people created their own new culture. While its content – its clothing styles, leisure habits and lingo – would change over time, the common culture of teenagers provided the basic vocabulary that young people everywhere could recognise and identify with. Whether soda-fountain dates or school hops, jazz or rock’n’roll, rolled stockings or bobby sox, ponytails or duck-tail hairstyles – it defined the commonalities and cohesiveness of youth. By mid-century, high school was understood to be a ‘normal’ experience and the great majority of youth (of all backgrounds) were graduating from high schools, now a basic part of growing up in the US. It was ‘closer to the core of the American experience than anything else I can think of’, as the novelist Kurt Vonnegut concluded in an article for Esquire in 1970.
With their distinctive music and clothing styles, US adolescents had also become the envy of young people around the world, according to Jon Savage in Teenage (2007). They embodied not just a stage of life, but a state of privilege – the privilege not to work, the right to be supported for long periods of study, the possibility of future success. US adolescents basked in the wealth of their society, while for the rest of the world the US promise was personified by its adolescents. Neither the country’s high schools nor its adolescents were easily imitated elsewhere because both rested on the unique prosperity of the 20th-century US economy and the country’s growing cultural power. It was an expensive proposition that was supported even at the depth of the Great Depression. But it paid off in the skills of its graduates: not educated in Latin and Greek texts (the norm in lycées and gymnasia elsewhere), the majority were nonetheless sufficiently proficient in mathematics, English and rudimentary science to make for an unusually literate and skilled population.
by Paula S Fass, Aeon | Read more:
Image: Bruce Dale/National Geographic/Getty
Monday, October 31, 2016
Billionaire Governor Taxed the Rich and Increased the Minimum Wage — Now, His State’s Economy Is One of the Best in the Country
[ed. Sorry for all the link bait (Huffington Post, after all...) but this really is an achievement worth noting.]
The next time your right-wing family member or former high school classmate posts a status update or tweet about how taxing the rich or increasing workers’ wages kills jobs and makes businesses leave the state, I want you to send them this article.
When he took office in January of 2011, Minnesota governor Mark Dayton inherited a $6.2 billion budget deficit and a 7 percent unemployment rate from his predecessor, Tim Pawlenty, the soon-forgotten Republican candidate for the presidency who called himself Minnesota’s first true fiscally-conservative governor in modern history. Pawlenty prided himself on never raising state taxes — the most he ever did to generate new revenue was increase the tax on cigarettes by 75 cents a pack. Between 2003 and late 2010, when Pawlenty was at the head of Minnesota’s state government, he managed to add only 6,200 more jobs.
During his first four years in office, Gov. Dayton raised the state income tax from 7.85 to 9.85 percent on individuals earning over $150,000, and on couples earning over $250,000 when filing jointly — a tax increase of $2.1 billion. He’s also agreed to raise Minnesota’s minimum wage to $9.50 an hour by 2018, and passed a state law guaranteeing equal pay for women. Republicans like state representative Mark Uglem warned against Gov. Dayton’s tax increases, saying, “The job creators, the big corporations, the small corporations, they will leave. It’s all dollars and sense to them.” The conservative friend or family member you shared this article with would probably say the same if their governor tried something like this. But like Uglem, they would be proven wrong.
Between 2011 and 2015, Gov. Dayton added 172,000 new jobs to Minnesota’s economy — that’s 165,800 more jobs in Dayton’s first term than Pawlenty added in both of his terms combined. Even though Minnesota’s top income tax rate is the fourth highest in the country, it has the fifth lowest unemployment rate in the country at 3.6 percent. According to 2012-2013 U.S. census figures, Minnesotans had a median income that was $10,000 larger than the U.S. average, and their median income is still $8,000 more than the U.S. average today.
By late 2013, Minnesota’s private sector job growth exceeded pre-recession levels, and the state’s economy was the fifth fastest-growing in the United States. Forbes even ranked Minnesota the ninth best state for business (Scott Walker’s “Open For Business” Wisconsin came in at a distant #32 on the same list). Despite the fearmongering over businesses fleeing from Dayton’s tax hikes, 6,230 more Minnesotans filed in the top income tax bracket in 2013, just one year after the increases went through. As of January 2015, Minnesota has a $1 billion budget surplus, and Gov. Dayton has pledged to reinvest more than one third of that money into public schools. And according to Gallup, Minnesota’s economic confidence is higher than in any other state.
Gov. Dayton didn’t accomplish all of these reforms by shrewdly manipulating people — this article describes Dayton’s astonishing lack of charisma and articulateness. He isn’t a class warrior driven by a desire to get back at the 1 percent — Dayton is a billionaire heir to the Target fortune. Nor was he forced into it by a friendly legislative majority — Dayton had to work with a Republican-controlled legislature for his first two years in office. And unlike his Republican neighbor to the east, Gov. Dayton didn’t assert his will over an unwilling populace by creating obstacles between the people and the vote — Dayton actually created an online voter registration system, making it easier than ever for people to register to vote.
by C. Robert Gibson, Huffington Post | Read more:
Image: Glenn Stubbe, Star Tribune