Thursday, December 17, 2015
Kapu: When Hawaii Was Ruled by Shark-Like Gods
Polynesian voyagers first arrived in Hawai‘i around AD 1000 (not in the sixth century, as Moore writes based on outdated scholarship), part of an extraordinary diaspora that led, at roughly the same time, to the settlement of other remote islands including New Zealand and Easter Island. For the next four centuries, a tenuous link between Hawai‘i and the ancestral homeland in central Polynesia (especially Tahiti) was maintained by occasional voyages led by priest-navigators whose names are still celebrated in Hawaiian traditions. Then, for reasons still unclear, the voyaging ceased. Hawai‘i became an isolated world unto itself, with only an increasingly distant memory of those lands beyond the horizon, collectively labeled “Kahiki” (the Hawaiian name for Tahiti).
By the early eighteenth century, a unique variant of Polynesian culture had emerged in this large and fertile archipelago. Supported by irrigation works and dryland field systems that yielded bountiful harvests of taro and sweet potato, augmented by fishponds and the husbandry of hogs and dogs for food, the indigenous population had swelled to more than half a million (the exact number at the time of Cook’s visit is still debated). The great majority were commoners—farmers and fishermen—ruled over by a relatively small group of elites, called ali‘i. The commoners worked the land as part of their tributary obligations to the ali‘i, who in turn held large territorial estates (ahupua‘a) distributed (and frequently redistributed) by each island’s paramount chief or king.
The ali‘i were obsessed with genealogy and lineage. The most exalted of the nine ranks of chiefs, the product (called nī‘aupi‘o) of incestuous unions between high-ranking brothers and sisters, were regarded as divine beings. As the nineteenth-century Hawaiian historian David Malo put it, “the people held the chiefs in great dread and looked upon them as gods.” Metaphorically, the chiefs were regarded as sharks that traveled on the land, devouring all in sight.
Central to this hyperelaborated system of hereditary chiefship and divine kingship was the deeply rooted Polynesian concept of tapu, introduced into the English language as “taboo” thanks to the accounts of Captain Cook and other eighteenth-century voyagers. Susanna Moore zeroes in on kapu, the Hawaiian variant of tapu, as a key to understanding both the cloistered nature of Hawaiian society prior to 1778 and its subsequent dramatic unraveling.
The divinely descended Hawaiian ali‘i were understood as intermediaries through which mana—the supernatural force or power enabling life, fertility, success, and efficacy of all kinds—flowed from the gods to men. As kapu, sacred beings, the ali‘i had to be kept separate from polluting influences. Secluded in their kapu compounds, the highest-ranked ali‘i often traveled at night to avoid being seen by commoners. Any commoners encountering the ali‘i had to strip off their garments and lie prostrate on the ground until the entourage passed; to attempt a glance was to risk death.
The Hawaiian system of kapu had evolved far beyond anything elsewhere in Polynesia, pervading all aspects of daily life. Pigs, certain kinds of red fish (red was the sacred color), and bananas were kapu to women; indeed, the food of men and women had to be cooked in separate earth ovens while the two genders ate in separate houses. As Moore writes, “time itself could be placed under a kapu,” with nine days out of each lunar month consecrated to particular deities. Perhaps the most fearful kapu were those associated with the king’s war rituals, which were conducted on imposing stone temple platforms where human sacrifices were offered to the war god Kū. For a commoner, merely coughing near the warrior guard during such rituals could bring instant death.
Moore regards kapu as the invisible glue that held traditional Hawaiian society together, entwining ali‘i and commoners in bonds of mutual obligation:
Kapu served to establish order, requiring men to respect the land, to honor the chiefs who were the literal representatives of the gods, and to serve the thousands of omnipresent big and little gods. In return, the gods endowed the land and sea with bountiful food, and protected people from danger (often the gods themselves).
The arrival of Captain Cook, first at Kaua‘i in 1778 and then for a longer stay at Hawai‘i in 1779, made the first inroads in what would become an increasing assault on the kapu system and on the social and political order of Hawaiian civilization. At Kealakekua Bay, Hawaiian women “came to the ships to offer themselves to the sailors in exchange for scissors, beads, iron, and mirrors.” Below decks on the Resolution and Discovery, the women ate forbidden pork and bananas with the sailors. Their husbands and brothers, eager to receive the gifts of iron adz blades and trinkets, did not punish them for breaking the kapu. (...)
The beginning of the nineteenth century found Kamehameha established in the port village of Honolulu on O‘ahu Island, which increasingly became the archipelago’s center of commercial and political power. No longer needing to engage in war, Kamehameha quietly abandoned the rituals of human sacrifice—another rent in the kapu fabric.
Kamehameha had taken seventeen-year-old Ka‘ahumanu—granddaughter of the revered Maui king Kekaulike—as his third wife in 1785. Although of high rank, she was not considered sacred like Keōpūolani, the exalted chiefess who bore Kamehameha his royal heir and successor Liholiho (Kamehameha II). Indeed, Ka‘ahumanu produced no offspring; her power instead sprang from her influence over Kamehameha, with whom she shared a similar political cunning. Ka‘ahumanu, rather than his birth mother, watched over and raised young Liholiho. “As Liholiho’s guardian,” Moore writes, “the subtle Ka‘ahumanu was easily able to shape him to her liking, strengthening her already formidable position at the center of court.”
When Kamehameha eventually died of old age in Kona in 1819, Ka‘ahumanu was poised to bend the pliant Liholiho (then twenty-one years old) to her will. After a period of mourning in the northern part of the island, Liholiho returned to Kona to find Ka‘ahumanu waiting. “Holding Kamehameha’s favorite spear, she was dressed in the dead king’s feather cloak and war helmet, lest there be any lingering hope that Liholiho might rule the kingdom alone.” Ka‘ahumanu proclaimed that “we two shall share the rule of the land,” appointing herself to the newly created title of kuhina nui, or regent.
Ka‘ahumanu—who had for some years broken the kapu against women eating pork and shark meat—next engineered a remarkable act, inducing Liholiho to sit down at a feast and eat with the female ali‘i. “Six months after the death of his father, and with the urging of his stepmother and guardian and the quiet persuasion of his mother, the king ate with the women, bringing to an end a thousand years of kapu.” This famous act—the ‘ai noa, or “free eating”—marked the end of the entire kapu system. Shortly thereafter, Ka‘ahumanu commanded that the temples be dismantled and the wooden idols of the gods burned. As Moore writes, “the fixed world of the Hawaiians, governed by a hereditary ali‘i and priesthood with a distinctive system of kapu, suddenly became one of flux, if not chaos.”
by Patrick Vinton Kirch, NY Review of Books | Read more:
Image:Engraving by Thomas Cook after a drawing by John Webber, 1779
The Perfect Wedding Vow Template
Dear [INSERT PARTNER’S NAME],
I can’t believe this day has finally arrived. We’ve been together for [NUMBER] years, but it feels like only yesterday when we first met. Me, a [GIRL/BOY] from [NAME OF CITY] and you, a [NEPHEW/TALL WOMAN] from [NAME OF TOWNSHIP]. Standing beside you today, I’m taken back to our first [BIZARRE SEXUAL ACT] at the [NEAREST LOCAL PANERA BREAD]. At that moment, I knew you were the one with whom I wanted to share my [PROBABLE AMOUNT OF TIME UNTIL DEATH].
Thank you for being you. Thank you for being so [PARTNER’S OBJECTIVE BEAUTY LEVEL] and for having such an incredible [RACK/SET OF NUTS]. I can gaze into your [SEVERITY OF PARTNER’S DEPRESSION] eyes and can’t help but think about [ANIMAL YOU’RE ATTRACTED TO]. You are thoughtful, kind and your [DEGREE OF BURNS] face can brighten my worst day. I love your big [FAVORITE VERTEBRA] and your even bigger [LUNG CAPACITY]. I love that we both bonded over [BOOK YOU’VE LIED ABOUT READING]. I love how we both have the same [PERSONAL DEFINITION OF AMERICAN FUNDAMENTALISM]. I love that on Sunday mornings, you always wake me up and make me [ANY 18TH CENTURY POLISH DELICACY]. And I love that cute face you make when you talk about [EISENHOWER’S WORST ECONOMIC POLICY MISTAKE, IN YOUR OPINION]. I love you so much, that it’s hard to be without you. When you’re not by my side, I feel [THAT GREAT FEELING YOU GET WHEN YOU’RE AWAY FROM YOUR PARTNER].
Today, I vow before [NAMES OF TWO NEIGHBORHOOD SEX-OFFENDERS], to be loving, faithful and to always be at [MOUNTAIN YOU’VE SUMMITED] when you need me. I vow to respect you as a person, a partner, and a [TYCOON/CYBORG]. I vow to stand by [YOUR OWN NAME] and to stand up for [YOUR OWN NAME]. I vow to accept your [READING LEVEL], to encourage your [RECURRING NIGHTMARE], and inspire you to achieve your [CALF SIZE]. I promise to be the very best [APPROPRIATE SLUR] I can be. I promise to share your joy in good times, and in bad times, to bear your [LEVEL OF HORNINESS]. I promise to put [EITHER OF THE HEMSWORTH BROTHERS] first, and to do the hard [DERIVED UNIT OF ENERGY] of making now into always. I will support you while you’re working at [PARTNER’S CURRENT HUMILIATING JOB] and while I continue my work with [CHEMICAL ELEMENT YOU’VE DISCOVERED]. I will love you, for better or worse, in sickness and [ROBERT DUVALL’S CURRENT MEDICAL CONDITION], for richer or [NICOLAS CAGE’S CURRENT LEVEL OF SUCCESS] as long as [THE BEARD LENGTH OF YOUR COMMUNITY’S WISEST RABBI]. My love knows no bounds. I love you more than [YOUR FAT INTAKE TRANSLATED INTO BHUTANESE]. More than [YOUR MOTHER’S WEIGHT ON MERCURY]. More than [(YOUR CUP-SIZE/ YOUR LSAT SCORE) + (YOUR GUESS AS TO HOW MANY PEOPLE DIED IN THE GULF WAR WITHOUT GOOGLING) ^2].
by Gil Ozeri, McSweeney's | Read more:
Image: via:
Open to Inspection
Even if the spy, Allen Dulles, should arrive in heaven through somebody’s absentmindedness, he would begin to blow up the clouds, mine the stars, and slaughter the angels.
—Ilya Ehrenburg
I cannot think that espionage can be recommended as a technique for building an impressive civilization. It’s a lout’s game.
—Rebecca West
Reports of the CIA’s blunders tend to show up on the record well after the fact. I’ve been reading them with interest over the past fifty years, but they don’t come as a surprise. Long ago and in another country, America in 1957, I sought enlistment in the CIA and sat for an interview with a credentials committee ordained by God and country and Allen Dulles. From that day forward I’ve never doubted the agency’s talent for making a mess of almost any operation, overt or covert, beyond its capacity to perform.
In 1957 I was recently returned from a year at Cambridge University in England, where I had come to know several students who in October 1956 went to Budapest to join the uprising against the regime holding Hungary hostage to communist domination. Two of the young men died in the street fighting, and I didn’t need to be told by General Eisenhower that the communist hordes were at the gate of Western civilization. In my last year at Yale I had been tipped to the agency by an English professor (Shakespeare scholar, Tyrolean hat, former OSS), who passed on a phone number to call if I was prepared to take a shot at the dark. At the age of twenty-two I was willing to leave at once, preferably at night, with trench coat and code name, on the next train to Berlin.
In Washington the written, physical, and psychological examinations occupied the better part of a week before I was summoned to an interview with five operatives in their late twenties, all of them graduates of Yale and not unlike President George W. Bush in appearance and manner. The interview took place in a Quonset hut near the Lincoln Memorial. The design of the building imparted an air of urgent military purpose, as did the muted, offhand bravado of the young men asking the questions. Very pleased with themselves, they exchanged knowing nods to “that damned thing in Laos,” allowed me to understand that we were talking life and death, whether I had the right stuff to play for the varsity team in the big game against the Russians.
Prepared for nothing less, I had spent the days prior to the interview reading about Lenin’s train and Stalin’s prisons, the width of the Fulda Gap, the depth of the Black Sea. None of the study was called for. Instead of being asked about the treaties of Brest-Litovsk or the October Revolution, I was asked three questions bearing on my social qualifications for admission into what the young men at the far end of the table clearly regarded as the best fraternity on the campus of the free world:
1. When standing on the thirteenth tee at the National Golf Links in Southampton, which club does one take from the bag?
2. On final approach under sail into Hay Harbor on Fishers Island, what is the direction (at dusk in late August) of the prevailing wind?
3. Does Muffy Hamilton wear a slip?
The first and second questions I answered correctly, but Muffy Hamilton I knew only at a distance. In the middle 1950s she was a glamorous figure on the Ivy League weekend circuit, very beautiful and very rich, much admired for the indiscriminate fervor of her sexual enthusiasms. At the Fence Club in New Haven I had handed her a glass of brandy and milk (known to be her preferred drink by college football captains in five states) but about the mysteries of her underwear my sources were unreliable, my information limited to rumors of Belgian lace.
The three questions, however, put an end to my interest in the CIA. The smug complacence of my examiners was as smooth as their matching silk handkerchiefs and ties. When I excused myself from the interview (apologizing for having misread the job description and wasted everybody’s quality time) I remember being frightened by the presence of so much self-glorifying certainty and primogeniture crowded into so small a room. Here were people like Woodrow Wilson before them, after them Vice President Dick Cheney and Defense Secretary Donald Rumsfeld, who knew more about what was good for the world than the world—poor, lost, unhappy, un-American world—had managed to learn on its own. Even at the age of twenty-two I was old enough to recognize the attitude as not well positioned for intelligence gathering. It was better suited to the projection of monsters on the screens of deluded fantasy than to their destruction in a forest or a swamp.
By now it goes without saying or objection in most quarters of a once freedom-loving and democratic society that our lives, liberties, and pursuits of happiness are closely monitored by a paranoid surveillance apparatus possessed of the fond hopes and great expectations embedded in the fifteenth-century Spanish Inquisition. Our local fire departments don’t grant permits for burnings at the stake, but our federal intelligence agencies (seventeen at last count, staffed by more than 100,000 inquisitors petty and grand) make no secret of their missionary zeal.
Four months after the fall of the World Trade Center and President George W. Bush’s preaching of holy crusade against all the world’s evil, the Pentagon established an Information Awareness Office, adopting as an emblem for its letterhead and baseball cap the all-seeing eye of God. Under orders to secure the American future against the blasphemy of terrorist attack, the IAO’s director, Rear Admiral John Poindexter, presented plans for programming its hydra-headed computer screens and databanks to spot incoming slings and arrows of outrageous fortune well in advance of their ETA overhead the Washington Monument or Plymouth Rock—to conduct “truth maintenance” and deploy “market-based techniques for avoiding surprises”; to defeat and classify every once and future hound from hell on a near or far horizon; no envelope or email left unopened, no phone untapped, no suspicious beard or suitcase descending unnoticed from cruise ship or Toyota.
Thirteen years further along the roads to perdition, the dream of a risk-free future under the digital umbrellas of protective fantasy is the stuff of which our wars and movies now are made, the thousand natural shocks to which the flesh is heir, projected day and night on the hundred million screens that text and shred our collective consciousness, herd our public and private lives—the latter no longer distinguishable from the former—into the shelters of heavy law enforcement and harmless speech.
This issue of Lapham’s Quarterly looks into when and why the lout’s game of espionage became the saving grace that makes cowards of us all. I’m familiar with at least some of the story because I’m old enough to remember the provincial and easygoing American republic of the 1940s—wisecracking, open-hearted, not so scared of the undiscovered country from whose bourn no traveler returns. I also can remember the days when people weren’t afraid of cigarette smoke and saturated fats, when it was possible to apply for a job without submitting a blood or urine test, when civil liberty was a constitutional right and not a political favor, the White House unprotected by concrete revetments, and it was possible to walk the streets of New York without making a series of cameo appearances on surveillance camera. (...)
by Lewis Lapham, Lapham's Quarterly | Read more:
Image: Sir Francis Walsingham, attributed to John de Critz the Elder, c. 1585
Tuesday, December 15, 2015
Your Face Is Covered in Mites, and They're Full of Secrets
When you look in the mirror, you’re not just looking at you—you’re looking at a whole mess of face mites. Yeah, you’ve got ‘em. Guaranteed. The little arachnids have a fondness for your skin, shoving their tubular bodies down your hair follicles, feeding on things like oil or skin cells or even bacteria. The good news is, they don’t do you any harm. The better news is, they’ve got fascinating secrets to tell about your ancestry.
New research out today in the Proceedings of the National Academy of Sciences reveals four distinct lineages of the face mite Demodex folliculorum that correspond to different regions of the world. African faces have genetically distinct African mites, Asian faces have Asian mites, and so too do Europeans and Latin Americans have their own varieties. Even if your family moved to a different continent long ago, your forebears passed down their brand of mites to their children, who themselves passed them on down the line.
Looking even farther back, the research also hints at how face mites hitchhiked on early humans out of Africa, evolving along with them into lineages specialized for certain groups of people around the planet. It seems we’ve had face mites for a long, long while, passing them back and forth between our family members and love-ahs with a kiss—and a little bit of face-to-face skin contact.
Leading the research was entomologist Michelle Trautwein of the California Academy of Sciences, who with her colleagues scraped people’s faces—hey, there are worse ways to make a living—then analyzed the DNA of all the mites they’d gathered. “We found four major lineages,” says Trautwein, “and the first three lineages were restricted to people of African, Asian, and Latin American ancestry.”
The fourth lineage, the European variety, is a bit different. It’s not restricted—it shows up in the three other groups of peoples. But Europeans tend to have only European mites, not picking up the mites of African, Asian, or Latin American folks. (It should be noted that the study didn’t delve into the face mites of all the world’s peoples. The researchers didn’t test populations like Aboriginal Australians, for instance, so there may be still more lineages beyond the four.)
So what’s going on here? Well, ever since Homo sapiens radiated out of Africa, those four groups of people have evolved in their isolation in obvious ways, like developing darker or lighter skin color. But more subtly, all manner of microorganisms have evolved right alongside humans. And with different skin types come different environments for tiny critters like mites.
by Matt Simon, Wired | Read more:
Image: USDA
Monday, December 14, 2015
Why Are There So Many Mattress Stores?
Dear Cecil:
How do mattress stores manage to stay in business? They're all over the place, but the average adult buys a mattress once every five to ten years. With high overhead and infrequent purchases, how are they around? (This question was inspired by a friend, Bethany.)
— Not Bethany
Cecil replies:
I see your query, NB, and raise you. To my mind, it’s not just about how these stores manage to stay in business: the question is, moreover, how are there so goddamn many of them — particularly right now? Where I live, in Chicago, entire blocks are all but overrun with the places, which frankly don’t do much for a street’s aesthetics. In June a Texas Monthly article described the worrisome proliferation of mattress stores in Houston, where the venerably groovy Montrose neighborhood has become known as “the Mattrose” on account of all the new sleep shops. An April headline in the Northwest Indiana Times asked, apropos the town of Schererville, “Why the heck are so many mattress stores opening?” So: you and I aren’t the only ones wondering. What gives?
One thing that jars about this state of affairs is that, in the age of Amazon, there’s something very old-economy about mattress stores, beyond their relentlessly cheesy look. No one goes to bookstores to buy books anymore, right? Well, not exactly. A 2014 report by the consulting firm A.T. Kearney found that despite the digital hype, overall a full 90 percent of retail transactions still take place in physical stores. And according to an investor presentation by industry giant Mattress Firm, dedicated mattress stores account for 46 percent of total mattress sales, handily beating out furniture stores (35 percent) and department stores (5 percent) for the largest share of the market.
So mattress delivery by drone is still a ways off. But again, these stores aren’t just surviving, they’re flourishing — that market share has more than doubled in the last 20 years. Why open a mattress store when there’s another just down the street? Turns out the economics make perfect sense:
Running a mattress store doesn’t cost much.
by Cecil Adams, The Straight Dope | Read more:
Image: via:
How to build a better PhD
“Since 1977, we've been recommending that graduate departments partake in birth control, but no one has been listening,” said Paula Stephan to more than 200 postdocs and PhD students at a symposium in Boston, Massachusetts, in October this year.
Stephan is a renowned labour economist at Georgia State University in Atlanta who has spent much of her career trying to understand the relationships between economics and science, particularly biomedical science. And the symposium, 'Future of Research', discussed the issue to which Stephan finds so many people deaf: the academic research system is generating progeny at a startling rate. In biomedicine, said Stephan, “we are definitely producing many more PhDs than there is demand for them in research positions.”
The numbers show newly minted PhD students flooding out of the academic pipeline. In 2003, 21,343 science graduate students in the United States received a doctorate. By 2013, this had increased by almost 41% — and the life sciences showed the greatest growth. That trend is mirrored elsewhere. According to a 2014 report looking at the 34 countries that make up the Organisation for Economic Co-operation and Development, the proportion of people who leave tertiary education with a doctorate has doubled from 0.8% to 1.6% over the past 17 years.
Not all of these students want to pursue academic careers — but many do, and they find it tough because there has been no equivalent growth in secure academic positions. The growing gap between the numbers of PhD graduates and available jobs has attracted particular attention in the United States, where students increasingly end up stuck in lengthy, insecure postdoctoral research positions. Although the unemployment rate for people with science doctorates is relatively low, in 2013 some 42% of US life-sciences PhD students graduated without a job commitment of any kind, up from 28% a decade earlier. “But still students continue to enrol in PhD programmes,” Stephan wrote in her 2012 book How Economics Shapes Science. “Why? Why, given such bleak job prospects, do people continue to come to graduate school?”
One reason is that there is little institutional incentive to turn them away. Faculty members rely on cheap PhD students and postdocs because they are trying to get the most science out of stretched grants. Universities, in turn, know that PhD students help faculty members to produce the world-class research on which their reputations rest. “The biomedical research system is structured around a large workforce of graduate students and postdocs,” says Michael Teitelbaum, a labour economist at Harvard Law School in Cambridge, Massachusetts. “Many find it awkward to talk about change.”
But there are signs that the issue is becoming less taboo. In September, a group of high-profile US scientists (Harold Varmus, Marc Kirschner, Shirley Tilghman and Bruce Alberts, colloquially known as 'the Quartet') launched Rescuing Biomedical Research, a website where scientists can make recommendations on how to 'fix' different aspects of the broken biomedical research system in the United States — the PhD among them. “How can we improve graduate education so as to produce a more effective scientific workforce, while also reducing the ever-expanding PhD workforce in search of biomedical research careers?” the site asks.
Nature put a similar question to 33 PhD students, scientists, postdocs and labour economists and uncovered a range of opinions on how to build a better PhD system, from small adjustments to major overhauls. All agreed on one thing: change is urgent. “Academia really is going to have to be dragged kicking and screaming into the twenty-first century,” says Gary McDowell, a postdoctoral fellow at Tufts University in Medford, Massachusetts, and a leader of the group behind the Future of Research symposium. The renovation needs to happen now, says Jon Lorsch, director of the US National Institute of General Medical Sciences in Bethesda, Maryland. “We need to transform graduate education within five years. It's imperative. There's a lot at stake for scientists, and hence for science.”
by Julie Gould, Nature | Read more:
Image: Oliver Munday
Sunday, December 13, 2015
What Happens When Computers Learn to Read Books?
In Kurt Vonnegut's classic novel Cat's Cradle, the character Claire Minton has the most fantastic ability; simply by reading the index of the book, she can deduce almost every biographical detail about the author. From scanning a sample of text in the index, she is able to figure out with near certainty that a main character in the book is gay (and therefore unlikely to marry his girlfriend). Claire Minton knows this because she is a professional indexer of books.
And that's what computers are today -- professional indexers of books.
Give a computer a piece of text from the 1950s, and based on the frequency of just fifteen words, the machine will be able to tell you whether the race of the author is white or black. That's the claim from two researchers at the University of Chicago, Hoyt Long and Richard So, who deploy complicated algorithms to examine huge bodies of text. They feed the machine thousands of scanned novels' worth of data, which it analyzes for patterns in the language -- frequency, presence, absence and combinations of words -- and then they test big questions about literary style.
"The machine can always -- with greater than a 95 percent accuracy -- separate white and black writers," So says. "That's how different their language is."
This is just an example. The group is digging deeper on other questions of race in literature but isn't ready to share the findings yet. In this case, minority writers represent a tiny fraction of American literature's canonical text. They hope that by shining a spotlight at unreviewed, unpublished or forgotten authors -- now easier to identify with digital tools -- or by simply approaching popular texts with different examination techniques, they can shake up conventional views on American literature. Though far from a perfect tool, scholars across the digital humanities are increasingly training big computers on big collections of text to answer and pose new questions about the past.
"We really need to consider rewriting American literary history when we look at things at scale," So says.
Who Made Who
A culture's corpus of celebrated literature functions like its Facebook profile. Mob rule curates what to teach future generations and does so with certain biases. It's not an entirely nefarious scheme. According to Dr. So, people can only process about 200 books. We can only compare a few at a time. So all analysis is reductive. The novel changed our relationship with complicated concepts like superiority or how we relate to the environment. Yet we needed to describe -- and communicate -- those huge shifts with mere words.
In machine learning, algorithms process reams of data on a particular topic or question. This eventually allows a computer to recognize certain patterns, whether that means spotting tumors, cycles in the weather or a quirk of the stock market. Over the last decade this has given rise to the digital humanities, where professors with large corpuses of text -- or any data, really -- use computers to develop hard metrics for areas that might be previously seen as more abstract. (...)
Mark Algee-Hewitt's group in Stanford's English department used machines to examine paragraph structure in 19th century literature. We all know that in most literature, when the writer moves to a new paragraph, the topic of the paragraph will change. That's English 101.
But Algee-Hewitt says they also found something that surprised them: whether a paragraph had a single topic or multiple topics was not governed by the paragraph's length. One might think that a long paragraph would cover lots of ground. That wasn't the case. Topic variance within a paragraph has more to do with story genre and setting than with length.
Now they are looking for a pattern by narrative type.
"The truth is that we really don't know that much about the American novel because there's so much of it, so much was produced," says So. "We're finding that with these tools, we can do more scientific verification of these hypotheses. And frankly we often find that they're incorrect."
The Blind Men and The Elephant
But a computer can't read. In a human sense. Words create sentences, paragraphs, settings, characters, feelings, dreams, empathy and all the intangible bits in between. A computer simply detects, counts and follows the instructions provided by humans. No machine on earth understands Toni Morrison's Beloved.
At the same time, no human can examine, in any way, 10,000 books at a time. We're in this funny place where people assess the fundamental unit of literature (the story) while a computer assesses all the units in totality. The disparity -- that gap -- between what a human can understand and what a machine can understand is one of the root disagreements, among others, in academia when it comes to methodology around deploying computers to ask big questions about history.
Does a computer end up analyzing literature, itself or those who coded the question?
by Caleb Garling, Priceonomics | Read more:
Image: uncredited
Saturday, December 12, 2015
Adapting to Climate Change
Yesterday, Thomas Schelling gave a seminar on climate change here at the Center for Study of Public Choice. Schelling’s main argument was that lots of resources are going into predicting and understanding climate change, but very little thought and few resources are going into planning for adaptation.
If Washington, DC, Boston and Manhattan are to remain dry, for example, we are almost certainly going to need flood control efforts on the level of the Netherlands. It takes twenty years just to come up with a plan and figure out how to pay for these kinds of projects, let alone to actually implement them, so it’s not too early to begin planning for adaptation even if we don’t expect to need these adaptations for another forty or fifty years. So far, however, nothing is being done. Climate deniers think planning for adaptation is a waste, and many climate change proponents think planning for adaptation is giving up.
Schelling mentioned a few bold ideas. We could protect every city on the Mediterranean from Marseilles to Alexandria to Tel Aviv, or we could dam the Strait of Gibraltar. Damming the strait would be the world’s largest construction project–by far–yet by letting the Mediterranean evaporate somewhat it could also generate enough hydro-electric power to replace perhaps all of the fossil fuel stations in Europe and Africa.
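Schelling’s power claim invites a back-of-envelope check using the standard hydropower formula P = ρgQhη. The inflow, head and efficiency figures below are rough outside assumptions for illustration, not numbers from the seminar:

```python
# Back-of-envelope sketch of the Gibraltar dam's potential hydropower,
# using P = rho * g * Q * h * eta. The inflow, head and efficiency
# values are rough illustrative assumptions, not measured figures.
RHO = 1000     # water density, kg/m^3 (seawater is closer to 1025)
G = 9.81       # gravitational acceleration, m/s^2
Q = 70_000     # assumed net Atlantic inflow offsetting evaporation, m^3/s
H = 100        # assumed head if the sea were drawn down ~100 m
ETA = 0.9      # assumed turbine efficiency

power_watts = RHO * G * Q * H * ETA
print(f"{power_watts / 1e9:.0f} GW")  # on the order of tens of gigawatts
```

On these assumptions the formula yields roughly 60 GW; the inputs are guesses, but the point is that the claim reduces to measurable quantities.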
Schelling didn’t mention it, but in the 1920s the German engineer Herman Sörgel proposed such a project, calling it Atlantropa (more here). In addition to power, damming the strait would open up a huge swath of valuable land. Gene Roddenberry and Philip K. Dick were fans, but needless to say the idea never got very far. A cost-benefit analysis, however, might show that despite the difficulty, damming the strait would be cheaper than trying to save Mediterranean cities one by one. But, as Schelling argued, no one is thinking seriously about these issues.
I argued that capital depreciates, so even many of our buildings, the longest-lived capital, will need to be replaced anyway. Here, for example, is a map showing the age of every building in New York City. A large fraction, though by no means all, are less than one hundred years old. If we let the areas most under threat slowly deteriorate, the cost of moving inland won’t be as high as one might imagine–at least if the water rises slowly (not guaranteed!). Schelling agreed that this was the case for private structures, but he doubted that we would be willing to let the White House go.
by Alex Tabarrok, Marginal Revolution | Read more:
Image: via:
Labels: Cities, Design, Government, Politics, Science, Technology
Setting the Default
I recently did couples therapy with two gay men who’d gotten married a year or so ago. Since then one of them, let’s call him Adam, decided he was bored with his sex life and went to a club where they did some things I will not describe here. His husband, let’s call him Steve, was upset by what he considered infidelity, and they had a big fight. Both of them wanted to stay together for the sake of the kids (did I mention they adopted some kids?) but this club thing was a pretty big deal, so they decided to seek professional help.
Adam made the following proposal: he knew Steve was not very kinky, so Adam would go do his kinky stuff at the club, with Steve’s knowledge and consent. That way everyone could get what they wanted. Sure, it would involve having sex with other people, but it didn’t mean anything, and it was selfish for a spouse to assert some kind of right to “control” the other spouse anyway.
Steve made the following counterproposal: no. He liked monogamy and fidelity and it would make him really jealous and angry to think of Adam going out and having sex with other people, even in a meaningless way. He argued that if Adam didn’t like monogamy, maybe he shouldn’t have proposed entering into a form of life that has been pretty much defined by its insistence on monogamy for the past several thousand years and then sworn adherence to that form of life in front of everyone they knew. If Adam hadn’t liked monogamy, he had ample opportunity to avoid it before he had bound his life together with Steve’s. Now he was stuck.
Adam gave the following counterargument: yeah, marriage usually implies remaining monogamous, but that was all legal boilerplate. He had wanted to get married to symbolize his commitment to Steve – commitment that he still had! – and he hadn’t realized he was interested in fetish stuff at the time or else he would have brought it up.
Steve gave the following countercounterargument: okay, this is all very sad, but now we are stuck in this position, and clearly only one of the two people could get their preference satisfied, and given the whole marriage-implies-monogamy thing, it seemed pretty clear that that person should be him.
So then of course they both turned to me for advice.
by Scott Alexander, Slate Star Codex | Read more:
Image: via:
Labels: Critical Thought, Culture, Psychology, Relationships
The German War: A Nation Under Arms, 1939-45
[ed. I just finished reading Anthony Doerr's All the Light We Cannot See, a novel with a similar theme - the average French or German citizen's reaction to, and ultimately participation in, the Second World War. It made me think again about the issue of free will vs. determinism, and how a person's moral perspective and/or character could be subsumed (or elevated) by the momentum of larger forces - forces that determine one's fate long before they are felt.]
Most Germans did not want war in 1939. When it came, following Hitler’s invasion of Poland, there was no euphoria and flag-waving, as there had been in 1914, but dejection; the people were downcast, one diarist noted. The mood soon lifted, as the Third Reich overran its neighbours, but most Germans still hoped for a quick conclusion. As Nicholas Stargardt points out in his outstanding history of Germany during the second world war, the Nazi regime was most popular “when it promised peace, prosperity and easy victories”. And yet, German troops continued to fight an ever more protracted battle, with ever more brutality, while the home front held tight. Even when it was clear that all was lost, there was no collapse or uprising, as in 1918. Why?
There are two easy answers. After the war, many Germans claimed to have been cowed by an omnipotent terror apparatus. More recently, some historians have argued the opposite: the Nazi regime was buoyed by fervent support, with ordinary Germans backing Hitler to the end. Stargardt dismisses both answers convincingly. Domestic terror alone, though ever-present, did not ensure the war’s continuation. Neither did popular enthusiasm for nazism. Of course there was significant support for Hitler’s regime, at least as long as the campaign went well. “God couldn’t have sent us a better war,” one soldier wrote to his wife in summer 1940, as the Wehrmacht routed France. But opinion was fickle, fluctuating with the fortunes of war.
Grumbling about rationing and shortages began within weeks, and never ceased, even as the regime alleviated hardships at home through the ruthless exploitation of occupied Europe (midway through the war, almost 30% of Germany’s meat came from abroad). There was plenty of resentment, too, about the privileges of the Nazi elite, which gorged itself on delicacies as ordinary Germans chewed “cutlets” made from cabbage. As a popular joke had it: “When will the war end?” “When Göring fits into Goebbels’s trousers”. Resentment of the regime grew as allied bombs rained on Germany, displacing millions and killing more than 400,000. German civilians criticised their leaders for the porous air defences, and they also turned on each other. Evacuees from the cities complained about the “simple and stupid” peasants who hosted them, while the farmers accused the new arrivals of laziness and loose morals. Back in the urban centres, locals were relieved when they were spared because a different German city was hit instead. The supposedly unified Nazi “national community” was just a fiction.
Despite this lack of national cohesion and the growing war fatigue, Germans kept fighting. Most important, Stargardt suggests, were their feelings of “patriotic defiance”, arising less from fanatical nazism than familial bonds. They had to win the war at any cost, soldiers believed, to protect their loved ones and to make Germany impregnable. “Your father is away,” one soldier lectured his teenage son in 1942, “and is helping to prepare a better future for you, so that you don’t have to do it later yourselves.” Even Germans appalled by the genocidal war waged in their name rallied around their country. Their determination was fuelled by Nazi propaganda, which insisted that this was a defensive war, provoked by Germany’s enemies, and warned that defeat would mean the annihilation of the fatherland. This campaign, based on “strength through fear” (as a British commentator quipped), hit home. As another soldier wrote to his wife just weeks before the final surrender: “If we go to the dogs, then everything goes to the dogs.”
Propaganda and popular opinion are just two key themes in Stargardt’s sweeping history, which takes in almost everything, from battles to religion and entertainment. And although the focus is on wartime Germany, we also see the suffering the war brought to the rest of Europe: pulverised cities, ravaged countryside, countless victims. Crucially, the death and destruction wrought by the German conquerors was not hidden from the population back home. Germans knew that the regime relied on pillage and plunder, bolstering the war effort with raw materials and slave labour from across Europe. And they knew that huge numbers of Jews were murdered in the east.
Historians have long debunked the postwar myth of German ignorance about the Holocaust, and Stargardt presents further evidence that the genocide was an open secret. News spread via German soldiers and officials who witnessed massacres, or participated in them. “The Jews are being completely exterminated,” a policeman wrote in August 1941 to his wife in Bremen. Nazi propaganda also dropped heavy hints, creating a sense of societal complicity: in autumn 1941, for instance, the Nazi party displayed posters across the country, emblazoned with Hitler’s threat that a world war would lead to the “destruction of the Jewish race in Europe”. Ordinary Germans watched the deportations of their Jewish neighbours and purchased their abandoned property at bargain prices. Later on, the authorities distributed the belongings of Jews among bombed-out Germans, though this triggered new complaints about Nazi bigwigs grabbing the best bits and “laying their Aryan arses in the Jewish beds after they have exterminated the Jews”, as one employee in a Bavarian factory exclaimed. There was some popular unease about the genocide, and it came into the open during the intense allied bombing, in a rather twisted manner: many ordinary Germans bought into the Nazi propaganda picture of Jews pulling the strings in Britain and the USA, and understood the air raids as payback for the antisemitic pogroms and mass murders. In this way, writes Stargardt, the Germans “mixed anxieties about their culpability with a sense of their own victimhood”.
by Nikolaus Wachsmann, The Guardian | Read more:
Image: Popperfoto/Getty Images