Sunday, September 3, 2017
Deeper Than Deep
“It’s like the discovery of the New World,” David Reich tells me. “Everything is new, nobody’s looked at it in this way before, so how can things not be interesting?”
The excitement surrounding David Reich’s ancient genetics lab at Harvard Medical School is almost palpable. Journals like Science and Nature are unstinting in their praise of the work being done in the Reich Laboratory. Reich and his colleagues are rewriting the history of the human species. Like a scientific Cecil B. DeMille, they are working toward creating an epic cinematic reenvisioning of human history that takes us deep into the mists of the past, tens of thousands of years ago.
In February of this year the forty-three-year-old Reich was named corecipient (with his colleague Svante Pääbo at Germany’s Max Planck Institute) of the $1 million Dan David Prize in archaeology and natural selection for being “the world’s leading pioneer in analyzing ancient human DNA,” which led to the discovery that Neanderthals and humans interbred—“a quantum leap in reconstructing our evolutionary past.”
A discovery, I was to learn from Reich in a conversation that preceded the prize, that had been superseded by even more astonishing developments: evidence of interaction between humans and non-Neanderthal variants of hominids, including evanescent but once real “ghost populations.”
This is not “ancient history,” which goes back a few thousand years to the dawn of writing. This is deeper in the past than “deep history,” which takes us even further back—before the invention of agriculture, before the invention of language, before the invention of the wheel.
This is deep, deep history, tens of thousands of years ago. When, it’s now emerging, hordes of humans, vast tribes of variations of hominids—Homo sapiens, Neanderthals, the newly discovered “Denisovans,” the mysterious “ghost populations”—ranged and thronged and clashed and bred and interbred (and probably exterminated large portions of each other) across vast landscapes that were battlefields and graveyards.
It’s deep, deep history that’s beginning to unscroll a vast pageant through the wonders of big data crunching and the analysis of ancient DNA samples from fragments of bone and mummies that have been rotting away in the dusty basements of museums.
And not only in old bones and mummified objects. The evidence for many of these vast clashes and close encounters is something we carry around within us in microscopic stretches of DNA that are the only legacy left from extinct variant species of humans. In microscopic sequences of chemical bonds on the double helixes of heredity there are traces of ancient variations on human species who lived and thrived and left nothing else behind beyond a few random sequences of chemical bonds. The faintest of faint echoes of a prehistoric past we’re only beginning to grasp. It’s a shift in focus as radical as the one that allowed us to glimpse—through Hubble-era telescopes—the billions of galaxies of the knowable universe and radically shift our perspective on our place in deep space. Suddenly we are able to see, in the galaxies of genes within us and the stories they tell, a new way of envisioning our place in the history of the planet.
And this fellow David Reich, sitting across from me in a corner of his lab on Avenue Louis Pasteur in Boston, this skinny slip of a hominid, David Reich, clad in a T-shirt and slacks—the Zuckerberg couture of Harvard geniuses, you might say—is at the heart of what is likely to be remembered as one of the great scientific revolutions. One unimaginable just a few years ago. (...)
What Reich’s lab has begun to unveil is that at least two previously unknown hominid species interbred in the deep past with both humans and Neanderthals but are now extinct. Extinct but surviving within us as fragments of ancient DNA code that reflect memories of interactions—let’s be frank, sex—with other hominid variations. Proof of interbreeding and extinctions on a scale that suggests huge dramas—wars, migrations, invasions—that we (or, really, Reich) are only beginning to reconstruct. Just as we are only beginning to reconstruct those lost populations and deal with the realization that we have the ability to build a model of the billions of genetic combinations that make up modern humans.
It’s this realization—the kind of work Reich and his colleagues are doing—that makes people nervous about the powers the ancient DNA savants hold over the shape of humans to come. (...)
I turn hesitantly to the dark side of the genetic revolution, the one highlighted by the Washington Post story about the “secret Harvard meeting” incited by concern over synthetic human genomes and their revolutionary potential. “There’s been recent concern among bioethicists about just how rapid the ability to create genomes has become. There was some meeting a while ago that dealt with the downside of being able to create and implant genes in humans or viruses.” In viruses the concern is that if genes for illness can be disarmed, they can also be armed up—creating an “arms race” of germ warfare. “What’s your feeling about this whole kerfuffle?”
“Well, actually the person involved in that is down the hall in this building, but that is a very different branch of genetics from what I do. That is engineering. What I do is inference about the past. I’m just trying to learn about history, and they’re actually trying to modify genomes, so it’s completely different. I’m trying to read genomes; they’re trying to write genomes. It’s a very different thing, and I think it’s one of these modern technologies that is potentially disruptive to our very being. Genetics. You know, the ability to engineer genomes is the biological equivalent of nuclear weapons. It’s really a fundamentally powerful—”
The biological equivalent of nuclear weapons! His concern seems heartfelt. “That’s kind of breathtaking when you think about it. Splitting the atoms, splitting the genome, or whatever…”
“Yeah, yeah, it’s a kind of reversal of things you couldn’t or haven’t done. You couldn’t split an atom apart before nuclear technology, and you could not reverse engineer the genome before modern recombinant genetics. That’s a very powerful thing. It’s a powerful tool, and it could be used—or misused, presumably—used, and abused like other types, like nuclear technology. It’s quite a profound thing.”
“Do we even know the endpoint of that? Could we create life?”
“Presumably.”
by Ron Rosenbaum, Lapham's Quarterly | Read more:
Image: The British Museum
Saturday, September 2, 2017
The Enduring Legacy of Zork
In 1977, four recent MIT graduates who’d met at the Institute’s Laboratory for Computer Science used the lab’s PDP-10 mainframe to develop a computer game that captivated the world. Called Zork, a nonsense word then popular on campus, their creation would become one of the most influential computer games in the medium’s half-century-long history.
The text-based adventure challenged players to navigate a byzantine underground world full of caves and rivers as they battled gnomes, a troll, and a Cyclops to collect such treasures as a jewel-encrusted egg and a silver chalice.
During its 1980s heyday, commercial versions of Zork released for personal computers sold more than 800,000 copies. Today, unofficial versions of the game can be played online, on smartphones, and on Amazon Echo devices, and Zork is inspiring young technologists well beyond the gaming field.
It’s an impressive legacy for a project described by its developers as a hobby, a lark, and a “good hack.” Here’s the story of Zork’s creation, as recounted by its four inventors—and a look at its ongoing impact.
Tim Anderson, Marc Blank, Bruce Daniels, and Dave Lebling—who between them earned seven MIT degrees in electrical engineering and computer science, political science, and biology—bonded over their interest in computer games, then in their infancy, as they worked or consulted for the Laboratory for Computer Science’s Dynamic Modeling Group. By day, all of them but Blank (who was in medical school) developed software for the U.S. Department of Defense’s Advanced Research Projects Agency (DARPA), which funded projects at MIT. On nights and weekends, they used their coding skills—and mainframe access—to work on Zork.
In early 1977, a text-only game called Colossal Cave Adventure—originally written by MIT grad Will Crowther—was tweaked and distributed over the ARPANET by a Stanford graduate student. “The four of us spent a lot of time trying to solve Adventure,” says Lebling. “And when we finally did, we said, ‘That was pretty good, but we could do a better job.’”
By June, they’d devised many of Zork’s core features and building blocks, including a word parser that took words the players typed and translated them into commands the game could process and respond to, propelling the story forward. The parser, which the group continued to fine-tune, allowed Zork to understand far more words than previous games, including adjectives, conjunctions, prepositions, and complex verbs. That meant Zork could support intricate puzzles, such as one that let players obtain a key by sliding paper under a door, pushing the key out of the lock so it would drop onto the paper, and retrieving the paper. The parser also let players input sentences like “Take all but rug” to scoop up multiple treasures, rather than making them type “Take [object]” over and over.
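The parser mechanics described above can be sketched in a few lines of Python. This is purely illustrative: the real Zork parser was written in MDL and was far more capable, and every name here (`SYNONYMS`, `ROOM_OBJECTS`, `parse_command`, the sample objects) is invented for the example.

```python
# A toy sketch of a Zork-style command parser, showing how a sentence like
# "take all but rug" could expand into individual (verb, object) commands,
# and how player-typed synonyms can be mapped onto canonical verbs.

SYNONYMS = {"get": "take", "grab": "take"}  # words remapped to canonical verbs

ROOM_OBJECTS = ["lamp", "sword", "rug", "egg"]  # what the current room contains

def parse_command(text):
    """Translate a typed sentence into a list of (verb, object) commands."""
    words = [SYNONYMS.get(w, w) for w in text.lower().split()]
    verb = words[0]
    if "all" in words:
        # "take all but rug": take every object except those named after "but"
        excluded = words[words.index("but") + 1:] if "but" in words else []
        targets = [obj for obj in ROOM_OBJECTS if obj not in excluded]
    else:
        targets = [w for w in words[1:] if w in ROOM_OBJECTS]
    return [(verb, obj) for obj in targets]

print(parse_command("take all but rug"))
# -> [('take', 'lamp'), ('take', 'sword'), ('take', 'egg')]
```

The `SYNONYMS` table also mirrors the fine-tuning described later in the article: when the creators saw players repeatedly typing an unsupported word, they could add it as a synonym without touching the rest of the parser.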
Vibrant, witty writing set Zork apart. It had no graphics, but lines like “Phosphorescent mosses, fed by a trickle of water from some unseen source above, make [the crystal grotto] glow and sparkle with every color of the rainbow” helped players envision the “Great Underground Empire” they were exploring as they brandished such weapons as glowing “Elvish swords.” “We played with language just like we played with computers,” says Daniels. Wordplay also cropped up in irreverent character names such as “Lord Dimwit Flathead the Excessive” and “The Wizard of Frobozz.”
Within weeks of its creation, Zork’s clever writing and inventive puzzles attracted players from across the U.S. and England. “The MIT machines were a nerd magnet for kids who had access to the ARPANET,” says Anderson. “They would see someone running something called Zork, rummage around in the MIT file system, find and play the game, and tell their friends.” The MIT mainframe operating system (called ITS) let Zork’s creators remotely watch users type in real time, which revealed common mistakes. “If we found a lot of people using a word the game didn’t support, we would add it as a synonym,” says Daniels.
The four kept refining and expanding Zork until February 1979. A few months later, three of them, plus seven other Dynamic Modeling Group members, founded the software company Infocom. Its first product: a modified version of Zork, split into three parts, released over three years, to fit PCs’ limited memory size and processing power.
Nearly 40 years later, those PC games, which ran on everything from the Apple II to the Commodore 64 in their 1980s heyday, are available online—and still inspire technologists. Ben Brown, founder and CEO of Howdy.ai, says Zork helped him design AI-powered chatbots. “Zork is a narrative, but embedded within it are clues about how the user can interact with and affect the story,” he says. “It’s a good model for how chatbots should teach users how to respond to and use commands without being heavy-handed and repetitive.” For example, the line “You are in a dark and quite creepy crawlway with passages leaving to the north, east, south, and southwest” hints to players that they must choose a direction to move, but it doesn’t make those instructions as explicit as actually telling them, “Type ‘north,’ ‘east,’ ‘south,’ or ‘southwest.’” Brown’s chatbot, Howdy, operates similarly, using bold and highlighted fonts to draw attention to keywords, like “check in,” and “schedule,” that people can use to communicate with the bot.
Jessica Brillhart, a filmmaker who creates virtual-reality videos, also cites Zork as an influence: “It provides a great way to script immersive experiences and shows how to craft a full universe for people to explore.”
by Elizabeth Woyke, MIT Technology Review | Read more:
Image: Zork
[ed. Zork and its precursor Colossal Cave are like first loves you remember fondly for the rest of your life. Along with Eliza, they were my first experience with interactive computing. Read the comments section for similar tributes.]
The Perfect Fit
Shopping in Tokyo.
I’m not sure how it is in small families, but in large ones relationships tend to shift over time. You might be best friends with one brother or sister, then two years later it might be someone else. Then it’s likely to change again, and again after that. It doesn’t mean that you’ve fallen out with the person you used to be closest to but that you’ve merged into someone else’s lane, or had him or her merge into yours. Trios form, then morph into quartets before splitting into teams of two. The beauty of it is that it’s always changing.
Twice in 2014, I went to Tokyo with my sister Amy. I’d been seven times already, so was able to lead her to all the best places, by which I mean stores. When we returned in January of 2016, it made sense to bring our sister Gretchen with us. Hugh was there as well, and while he’s a definite presence, he didn’t figure into the family dynamic. Mates, to my sisters and me, are seen mainly as shadows of the people they’re involved with. They move. They’re visible in direct sunlight. But because they don’t have access to our emotional buttons—because they can’t make us twelve again, or five, and screaming—they don’t really count as players.
Normally in Tokyo we rent an apartment and stay for a week. This time, though, we got a whole house. The neighborhood it was in—Ebisu—is home to one of our favorite shops, Kapital. The clothes they sell are new but appear to have been previously worn, perhaps by someone who was shot or stabbed and then thrown off a boat. Everything looks as if it had been pulled from the evidence rack at a murder trial. I don’t know how they do it. Most distressed clothing looks fake, but not theirs, for some reason. Do they put it in a dryer with broken glass and rusty steak knives? Do they drag it behind a tank over a still-smoldering battlefield? How do they get the cuts and stains so . . . right?
If I had to use one word to describe Kapital’s clothing, I’d be torn between “wrong” and “tragic.” A shirt might look normal enough until you try it on, and discover that the armholes have been moved, and are no longer level with your shoulders, like a capital “T,” but farther down your torso, like a lowercase one.
Jackets with patches on them might senselessly bunch at your left hip, or maybe they poof out at the small of your back, where for no good reason there’s a pocket. I’ve yet to see a pair of Kapital trousers with a single leg hole, but that doesn’t mean the designers haven’t already done it. Their motto seems to be “Why not?”
Most people would answer, “I’ll tell you why not!” But I like Kapital’s philosophy. I like their clothing as well, though I can’t say that it always likes me in return. I’m not narrow enough in the chest for most of their jackets, but what was to stop me, on this most recent trip, from buying a flannel shirt made of five differently patterned flannel shirts ripped apart and then stitched together into a kind of doleful Frankentop? I got hats as well, three of them, which I like to wear stacked up, all at the same time, partly just to get it over with but mainly because I think they look good as a tower.
I draw the line at clothing with writing on it, but numbers don’t bother me, so I also bought a tattered long-sleeved T-shirt with “99” cut from white fabric and stitched onto the front before being half burned off. It’s as though a football team’s plane had gone down and this was all that was left. Finally, I bought what might be called a tunic, made of denim and patched at the neck with defeated scraps of corduroy. When buttoned, the front flares out, making me look like I have a potbelly. These are clothes that absolutely refuse to flatter you, that go out of their way to insult you, really, and still my sisters and I can’t get enough. (...)
There are three other branches of Kapital in Tokyo, and we visited them all, staying in each one until our fingerprints were on everything. “My God,” Gretchen said, trying on a hat that seemed to have been modelled on a used toilet brush, before adding it to her pile. “This place is amazing. I had no idea!”
by David Sedaris, New Yorker | Read more:
Image: Tamara Shopsin
Friday, September 1, 2017
The Kekulé Problem
I call it the Kekulé Problem because among the myriad instances of scientific problems solved in the sleep of the inquirer Kekulé’s is probably the best known. He was trying to arrive at the configuration of the benzene molecule and not making much progress when he fell asleep in front of the fire and had his famous dream of a snake coiled in a hoop with its tail in its mouth—the ouroboros of mythology—and woke exclaiming to himself: “It’s a ring. The molecule is in the form of a ring.” Well. The problem of course—not Kekulé’s but ours—is that since the unconscious understands language perfectly well or it would not understand the problem in the first place, why doesnt it simply answer Kekulé’s question with something like: “Kekulé, it’s a bloody ring.” To which our scientist might respond: “Okay. Got it. Thanks.”
Why the snake? That is, why is the unconscious so loathe to speak to us? Why the images, metaphors, pictures? Why the dreams, for that matter.
A logical place to begin would be to define what the unconscious is in the first place. To do this we have to set aside the jargon of modern psychology and get back to biology. The unconscious is a biological system before it is anything else. To put it as pithily as possibly—and as accurately—the unconscious is a machine for operating an animal.
All animals have an unconscious. If they didnt they would be plants. We may sometimes credit ours with duties it doesnt actually perform. Systems at a certain level of necessity may require their own mechanics of governance. Breathing, for instance, is not controlled by the unconscious but by the pons and the medulla oblongata, two systems located in the brainstem. Except of course in the case of cetaceans, who have to breathe when they come up for air. An autonomous system wouldnt work here. The first dolphin anesthetized on an operating table simply died. (How do they sleep? With half of their brain alternately.) But the duties of the unconscious are beyond counting. Everything from scratching an itch to solving math problems.
Problems in general are often well posed in terms of language and language remains a handy tool for explaining them. But the actual process of thinking—in any discipline—is largely an unconscious affair. Language can be used to sum up some point at which one has arrived—a sort of milepost—so as to gain a fresh starting point. But if you believe that you actually use language in the solving of problems I wish that you would write to me and tell me how you go about it.
I’ve pointed out to some of my mathematical friends that the unconscious appears to be better at math than they are. My friend George Zweig calls this the Night Shift. Bear in mind that the unconscious has no pencil or notepad and certainly no eraser. That it does solve problems in mathematics is indisputable. How does it go about it? When I’ve suggested to my friends that it may well do it without using numbers, most of them thought—after a while—that this was a possibility. How, we dont know. Just as we dont know how it is that we manage to talk. If I am talking to you then I can hardly be crafting at the same time the sentences that are to follow what I am now saying. I am totally occupied in talking to you. Nor can some part of my mind be assembling these sentences and then saying them to me so that I can repeat them. Aside from the fact that I am busy this would be to evoke an endless regress. The truth is that there is a process here to which we have no access. It is a mystery opaque to total blackness. (...)
Of the known characteristics of the unconscious its persistence is among the most notable. Everyone is familiar with repetitive dreams. Here the unconscious may well be imagined to have more than one voice: He’s not getting it, is he? No. He’s pretty thick. What do you want to do? I dont know. Do you want to try using his mother? His mother is dead. What difference does that make?
What is at work here? And how does the unconscious know we’re not getting it? What doesnt it know? It’s hard to escape the conclusion that the unconscious is laboring under a moral compulsion to educate us. (Moral compulsion? Is he serious?) (...)
We dont know what the unconscious is or where it is or how it got there—wherever there might be. Recent animal brain studies showing outsized cerebellums in some pretty smart species are suggestive. That facts about the world are in themselves capable of shaping the brain is slowly becoming accepted. Does the unconscious only get these facts from us, or does it have the same access to our sensorium that we have? You can do whatever you like with the us and the our and the we. I did. At some point the mind must grammaticize facts and convert them to narratives. The facts of the world do not for the most part come in narrative form. We have to do that. (...)
The unconscious seems to know a great deal. What does it know about itself? Does it know that it’s going to die? What does it think about that? It appears to represent a gathering of talents rather than just one. It seems unlikely that the itch department is also in charge of math. Can it work on a number of problems at once? Does it only know what we tell it? Or—more plausibly—has it direct access to the outer world? Some of the dreams which it is at pains to assemble for us are no doubt deeply reflective and yet some are quite frivolous. And the fact that it appears to be less than insistent upon our remembering every dream suggests that sometimes it may be working on itself. And is it really so good at solving problems or is it just that it keeps its own counsel about the failures? How does it have this understanding which we might well envy? How might we make inquiries of it? Are you sure?
by Cormac McCarthy, Nautilus | Read more:
Image: Don Kilpatrick III
[ed. See also: It’s Okay to “Forget” What You Read]
The Ontology of Circus Peanuts
I confess I am not by nature an early adopter. I still like manual typewriters, stick-shift cars, and simple appliances with on and off buttons instead of confusing symbols. I still do not know how to text. I am, however, very proud that I was in the vanguard when it came to hating the circus. I remember how out of sync I was when, at age nine, my parents took me to the circus at Madison Square Garden. I screamed in horror at the clowns, I was a whining bummer when the ringmaster with a whip made the frightened horses jump through fiery hoops, and I only perked up when the lion tamer stuck his head into the lion’s mouth. I was hoping he would be decapitated.
Now everyone has jumped on the “I hate the circus” bandwagon. It is under attack by animal-rights activists and fire departments and performers unions. The glory days of Barnum and Bailey are long gone. People with compassion no longer want to see elephants paraded down Main Street holding tail in trunk; the dirty-water hot dogs and rancid clouds of ancient cotton candy no longer hold sway with kids of all ages.
There is one tangential remnant of the circus that thrills me to the bone, and that is the low-grade confectionary candy called Circus Peanuts. Circus Peanuts, as far as I can tell, have literally nothing to do with circuses, or even with peanuts. They are usually found on the bottom candy shelf at gas-station convenience marts or at some chain drug stores.
A Circus Peanut is about two inches long, it is the anemic orange color of the astronauts’ favorite drink, Tang, and it has been machine stamped to vaguely resemble a shelled peanut. The most amazing thing about Circus Peanuts is they are always stale. Not rock-hard but weirdly deflated and tough. It is hard to make a marshmallow go stale. In my kitchen pantry, I have a bag of them that has seen me through four years of holiday yam casseroles, and they are still squishy and fresh. Therefore one can’t blame the problem with Circus Peanuts on the general pillowy constitution of the marshmallow. Maybe even more mysterious than the ubiquitous staleness is that, for no logical reason, Circus Peanuts are banana flavored. Real peanuts are none of these things.
I have a few theories.
Theory 1: Decades back, when the Circus Peanut was invented, no one thought much about lawsuits. Ladders did not warn you that you should not jump from the top of them and people assumed hot coffee was hot. It may well be that the peanut industry was highly litigious and ahead of its time and woe to anyone who dared call something a peanut that wasn’t. Hence orange skin and banana flavoring became a protective shield against potential wrath.
Theory 2: Perhaps someone who lived in, say, Antarctica and had never seen or tasted a peanut invented Circus Peanuts. These are imaginary peanuts, a fantasy.
Theory 3: Around World War II, when the Circus Peanut was invented, the manufacturer was worried about shortages. A big Quonset hut was purchased to warehouse tons of them. The reason they are all stale is that we are still eating the original batch today.
by Jane Stern, Paris Review | Read more:
Image: uncredited
Thursday, August 31, 2017
What Would the End of Football Look Like?
The NFL is done for the year, but it is not pure fantasy to suggest that it may be done for good in the not-too-distant future. How might such a doomsday scenario play out and what would be the economic and social consequences?
By now we’re all familiar with the growing phenomenon of head injuries and cognitive problems among football players, even at the high school level. In 2009, Malcolm Gladwell asked whether football might someday come to an end, a concern seconded recently by Jonah Lehrer.
Before you say that football is far too big to ever disappear, consider the history: If you look at the stocks in the Fortune 500 from 1983, for example, 40 percent of those companies no longer exist. The original version of Napster no longer exists, largely because of lawsuits. No matter how well a business matches economic conditions at one point in time, it’s not a lock to be a leader in the future, and that is true for the NFL too. Sports are not immune to these pressures. In the first half of the 20th century, the three big sports were baseball, boxing, and horse racing, and today only one of those is still a marquee attraction.
The most plausible route to the death of football starts with liability suits. Precollegiate football is already sustaining 90,000 or more concussions each year. If ex-players start winning judgments, insurance companies might cease to insure colleges and high schools against football-related lawsuits. Coaches, team physicians, and referees would become increasingly nervous about their financial exposure in our litigious society. If you are coaching a high school football team, or refereeing a game as a volunteer, it is sobering to think that you could be hit with a $2 million lawsuit at any point in time. A lot of people will see it as easier to just stay away. More and more modern parents will keep their kids out of playing football, and there tends to be a “contagion effect” with such decisions; once some parents have second thoughts, many others follow suit. We have seen such domino effects with the risks of smoking or driving without seatbelts, two unsafe practices that were common in the 1960s but are much rarer today. The end result is that the NFL’s feeder system would dry up and advertisers and networks would shy away from associating with the league, owing to adverse publicity and some chance of being named as co-defendants in future lawsuits.
It may not matter that the losses from these lawsuits are much smaller than the total revenue from the sport as a whole. As our broader health care sector indicates (try buying private insurance when you have a history of cancer treatment), insurers don’t like to go where they know they will take a beating. That means just about everyone could be exposed to fear of legal action.
This slow death march could easily take 10 to 15 years. Imagine the timeline. A couple more college players — or worse, high schoolers — commit suicide with autopsies showing CTE. A jury makes a huge award of $20 million to a family. A class-action suit shapes up with real legs, the NFL keeps changing its rules, but it turns out that less than concussion levels of constant head contact still produce CTE. Technological solutions (new helmets, pads) are tried and they fail to solve the problem. Soon high schools decide it isn’t worth it. The Ivy League quits football, then California shuts down its participation, busting up the Pac-12. Then the Big Ten calls it quits, followed by the East Coast schools. Now it’s mainly a regional sport in the southeast and Texas/Oklahoma. The socioeconomic picture of a football player becomes more homogeneous: poor, weak home life, poorly educated. Ford and Chevy pull their advertising, as does IBM and eventually the beer companies. (...)
Despite its undeniable popularity — and the sense that the game is everywhere — the aggregate economic effect of losing the NFL would not actually be that large. League revenues are around $10 billion per year while U.S. GDP is around $15,300 billion. But that doesn’t mean everyone would be fine.
by Tyler Cowen and Kevin Grier, Grantland | Read more:
Image: Rob Tringali/Getty Images
[ed. See also: ESPN Football Analyst Walks Away, Disturbed by Brain Trauma on Field]
Wednesday, August 30, 2017
After Decades of Pushing Bachelor’s Degrees, U.S. Needs More Tradespeople
FONTANA, Calif. — At a steel factory dwarfed by the adjacent Auto Club Speedway, Fernando Esparza is working toward his next promotion.
Esparza is a 46-year-old mechanic for Evolution Fresh, a subsidiary of Starbucks that makes juices and smoothies. He’s taking a class in industrial computing taught by a community college at a local manufacturing plant in the hope it will bump up his wages.
It’s a pretty safe bet. The skills being taught here are in high demand. That’s in part because so much effort has been put into encouraging high school graduates to go to college for academic degrees, rather than for training in industrial and other trades, that many fields like his now face worker shortages.
Now California is spending $6 million on a campaign to revive the reputation of vocational education, and $200 million to improve the delivery of it.
“It’s a cultural rebuild,” said Randy Emery, a welding instructor at the College of the Sequoias in California’s Central Valley.
Standing in a cavernous teaching lab full of industrial equipment on the college’s Tulare campus, Emery said the decades-long national push for high school graduates to get bachelor’s degrees left vocational programs with an image problem, and the nation’s factories with far fewer skilled workers than needed.
“I’m a survivor of that teardown mode of the ’70s and ’80s, that college-for-all thing,” he said.
This has had the unintended consequence of helping flatten out or steadily erode the share of students taking vocational courses. In California’s community colleges, for instance, it’s dropped to 28 percent from 31 percent since 2000, contributing to a shortage of trained workers with more than a high school diploma but less than a bachelor’s degree.
Research by the state’s 114-campus community college system showed that families and employers alike didn’t know of the existence or value of vocational programs and the certifications they confer, many of which can add tens of thousands of dollars per year to a graduate’s income.
“We needed to do a better job getting the word out,” said Van Ton-Quinlivan, the system’s vice chancellor for workforce and economic development.
High schools and colleges have struggled for decades to attract students to job-oriented classes ranging from welding to nursing. They’ve tried cosmetic changes, such as rebranding “vocational” courses as “career and technical education,” but students and their families have yet to buy in, said Andrew Hanson, a senior research analyst with Georgetown University’s Center on Education and the Workforce.
Federal figures show that only 8 percent of undergraduates are enrolled in certificate programs, which tend to be vocationally oriented.
Sen. Marco Rubio, R-Fla., last year focused attention on the vocational vs. academic debate by contending during his presidential campaign that “welders make more money than philosophers.”
The United States has 30 million jobs that pay an average of $55,000 per year and don’t require a bachelor’s degree, according to the Georgetown center. People with career and technical educations are actually slightly more likely to be employed than their counterparts with academic credentials, the U.S. Department of Education reports, and significantly more likely to be working in their fields of study.
At California Steel Industries, where Esparza was learning industrial computing, some supervisors without college degrees make as much as $120,000 per year and electricians also can make six figures, company officials said.
Skilled trades show among the highest potential among job categories, the economic-modeling company Emsi calculates. It says tradespeople also are older than workers in other fields — more than half were over 45 in 2012, the last period for which the subject was studied — meaning looming retirements could result in big shortages.
High schools and community colleges are the keys to filling industrial jobs, Hanson said, but something needs to change.
“You haven’t yet been able to attract students from middle-class and more affluent communities” to vocational programs, he said. “Efforts like California’s to broaden the appeal are exactly what we need.”
by Matt Krupnick, PBS Newshour | Read more:
Image: PBS
[ed. The main problem being that those jobs aren't as Instagramable as some bartender working at an upscale dive for $15/hr. When the world goes to hell and robots take over every administrative and technical job that ever existed, survivors will be those that have some marketable hands-on skill: plumbers, electricians, carpenters, mechanics, etc.]
The Transhumanist FAQ
1.1 What is transhumanism?
Transhumanism is a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase. We formally define it as follows:
(1) The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.
(2) The study of the ramifications, promises, and potential dangers of technologies that will enable us to overcome fundamental human limitations, and the related study of the ethical matters involved in developing and using such technologies.
Transhumanism can be viewed as an extension of humanism, from which it is partially derived. Humanists believe that humans matter, that individuals matter. We might not be perfect, but we can make things better by promoting rational thinking, freedom, tolerance, democracy, and concern for our fellow human beings. Transhumanists agree with this but also emphasize what we have the potential to become. Just as we use rational means to improve the human condition and the external world, we can also use such means to improve ourselves, the human organism. In doing so, we are not limited to traditional humanistic methods, such as education and cultural development. We can also use technological means that will eventually enable us to move beyond what some would think of as “human”.
It is not our human shape or the details of our current human biology that define what is valuable about us, but rather our aspirations and ideals, our experiences, and the kinds of lives we lead. To a transhumanist, progress occurs when more people become more able to shape themselves, their lives, and the ways they relate to others, in accordance with their own deepest values. Transhumanists place a high value on autonomy: the ability and right of individuals to plan and choose their own lives. Some people may of course, for any number of reasons, choose to forgo the opportunity to use technology to improve themselves. Transhumanists seek to create a world in which autonomous individuals may choose to remain unenhanced or choose to be enhanced and in which these choices will be respected.
Through the accelerating pace of technological development and scientific understanding, we are entering a whole new stage in the history of the human species. In the relatively near future, we may face the prospect of real artificial intelligence. New kinds of cognitive tools will be built that combine artificial intelligence with interface technology. Molecular nanotechnology has the potential to manufacture abundant resources for everybody and to give us control over the biochemical processes in our bodies, enabling us to eliminate disease and unwanted aging. Technologies such as brain-computer interfaces and neuropharmacology could amplify human intelligence, increase emotional well-being, improve our capacity for steady commitment to life projects or a loved one, and even multiply the range and richness of possible emotions. On the dark side of the spectrum, transhumanists recognize that some of these coming technologies could potentially cause great harm to human life; even the survival of our species could be at risk. Seeking to understand the dangers and working to prevent disasters is an essential part of the transhumanist agenda.
Transhumanism is entering the mainstream culture today, as increasing numbers of scientists, scientifically literate philosophers, and social thinkers are beginning to take seriously the range of possibilities that transhumanism encompasses. A rapidly expanding family of transhumanist groups, differing somewhat in flavor and focus, and a plethora of discussion groups in many countries around the world, are gathered under the umbrella of the World Transhumanist Association, a non-profit democratic membership organization.
1.2 What is a posthuman?
It is sometimes useful to talk about possible future beings whose basic capacities so radically exceed those of present humans as to be no longer unambiguously human by our current standards. The standard word for such beings is “posthuman”. (Care must be taken to avoid misinterpretation. “Posthuman” does not denote just anything that happens to come after the human era, nor does it have anything to do with the “posthumous”. In particular, it does not imply that there are no humans anymore.)
Many transhumanists wish to follow life paths which would, sooner or later, require growing into posthuman persons: they yearn to reach intellectual heights as far above any current human genius as humans are above other primates; to be resistant to disease and impervious to aging; to have unlimited youth and vigor; to exercise control over their own desires, moods, and mental states; to be able to avoid feeling tired, hateful, or irritated about petty things; to have an increased capacity for pleasure, love, artistic appreciation, and serenity; to experience novel states of consciousness that current human brains cannot access. It seems likely that the simple fact of living an indefinitely long, healthy, active life would take anyone to posthumanity if they went on accumulating memories, skills, and intelligence.
Posthumans could be completely synthetic artificial intelligences, or they could be enhanced uploads [see “What is uploading?”], or they could be the result of making many smaller but cumulatively profound augmentations to a biological human. The latter alternative would probably require either the redesign of the human organism using advanced nanotechnology or its radical enhancement using some combination of technologies such as genetic engineering, psychopharmacology, anti-aging therapies, neural interfaces, advanced information management tools, memory enhancing drugs, wearable computers, and cognitive techniques.
Some authors write as though simply by changing our self-conception, we have become or could become posthuman. This is a confusion or corruption of the original meaning of the term. The changes required to make us posthuman are too profound to be achievable by merely altering some aspect of psychological theory or the way we think about ourselves. Radical technological modifications to our brains and bodies are needed. It is difficult for us to imagine what it would be like to be a posthuman person. Posthumans may have experiences and concerns that we cannot fathom, thoughts that cannot fit into the three-pound lumps of neural tissue that we use for thinking. Some posthumans may find it advantageous to jettison their bodies altogether and live as information patterns on vast super-fast computer networks. Their minds may be not only more powerful than ours but may also employ different cognitive architectures or include new sensory modalities that enable greater participation in their virtual reality settings. Posthuman minds might be able to share memories and experiences directly, greatly increasing the efficiency, quality, and modes in which posthumans could communicate with each other. The boundaries between posthuman minds may not be as sharply defined as those between humans.
Posthumans might shape themselves and their environment in so many new and profound ways that speculations about the detailed features of posthumans and the posthuman world are likely to fail.
by Nick Bostrom, Oxford University | Read more: (pdf)
Labels: Critical Thought, Philosophy, Psychology, Technology
The Mandibles
Florence Darkly suffers from all the typical problems of the middle-class Brooklynite. With cabbage up to $20 a head, the grocery bill is a constant struggle. Owing to chronic water shortages, she and her teenage son can shower only once a week. Her morning ritual no longer includes coffee — climate change has ruined the arabica bean crop — or The New York Times, which has long since folded (“God rest its soul”), along with every other newspaper. As a white woman in an America where Latinos are now the socially dominant ethnic group, she remembers her marginalization every time a robotic voice on the phone instructs her to press 1 for Spanish and 2 for English.
But Florence is better off than most. She owns her own house in rapidly gentrifying East Flatbush. After years of being displaced at work by bots (now called robs, “for obvious reasons”), she knows her job at a homeless shelter in Fort Greene is secure — “the one thing New York City was bound never to run out of was homeless people.” She even has a safety net: the fortune amassed generations earlier by the family patriarch, Elliot Mandible, pieces of which trickle down in times of need.
This future is only 13 years away, as Lionel Shriver depicts it in “The Mandibles: A Family, 2029-2047,” her searing exemplar of a disquieting new genre — call it dystopian finance fiction. When the novel opens, America is perched on the cusp of catastrophe, though no one knows it yet. The population is still reeling from the aftershocks of “the Stonage” (an abridgment of Stone Age), the technology blackout in 2024 that brought the entire country to a halt, an event at least as traumatic for this generation as Sept. 11 was for their parents. China has already established itself as the world’s superpower, a position cemented by its usurpation of the number 1 as its international calling code. (The move is largely symbolic: Phone calls have become so rare that the sound of a ringtone triggers the fear that someone must have died.) The European Union has already dissolved, with the euro replaced by local currencies like the “nouveau franc.” Then the United States defaults on its loans; Treasury bills are rendered worthless. Overnight, the dollar crashes, supplanted on the international market by the “bancor,” a currency controlled by the New IMF. The stock market follows suit, taking the Mandible family fortune with it.
Most people assume the crisis is temporary, not unlike the economic downturn that coincided with Florence’s college graduation — born in the mid-1980s, she’s a millennial. But her son, Willing, watches the business news and has an uncanny sense of how all the pieces fit together. As one character puts it, “Complex systems collapse catastrophically.” Within a few years, Florence’s family will have lost literally everything they once thought they owned. (...)
Shriver has always seemed to be at least a few steps ahead of the rest of us, but her new novel establishes her firmly as the Cassandra of American letters. Like David Mitchell’s “Cloud Atlas” or Margaret Atwood’s “Oryx and Crake,” “The Mandibles” depicts a world that is at once familiar and horribly altered. What’s most disquieting isn’t the disruption of daily life (though it’s devastating) but the ease with which people adapt to their new circumstances. Shriver’s dystopia is imagined as minutely as a pointillist image, with every detail adding another dot to the overall picture. The devolution of civilized society happens slowly at first, then all at once. The niceties of life gradually disappear: citrus fruit, olive oil, toilet paper. Streets are no longer cleaned; once-upscale storefronts are boarded shut; even Zabar’s is vandalized and looted. Florence stops ironing to save electricity and wears a bandanna to disguise her unwashed hair. Willing gives away his beloved spaniel while the family can still afford to feed it, knowing that by the time they won’t be able to, no one else will either. Homeownership, the foundation of the American dream, proves to be the longest-lasting currency. Eventually, most of the Mandible clan will seek refuge with Florence in East Flatbush, including her sister, Avery, and brother-in-law, Lowell, a former economics professor at Georgetown who failed to predict the current situation and still doesn’t comprehend it. (Tenure is among the luxuries that society can no longer afford — Lowell has been summarily sacked.) Then Florence’s grandfather, Douglas Mandible, appears on the doorstep, now 97 years old and saddled with a wife suffering from dementia. All that remains of Bountiful House, his once-grand estate, is the silver service, each piece engraved with an M.
The M stands for Mandible, of course, but it might just as well stand for Money, the novel’s true subject. The Mandible descendants never laid hands on the cash, but it was always there in the background, silently working its mysteries on their psyches. “A family fortune introduced an element of corruption,” we are told early on. Its bite is felt in subtle ways. Back in the old days, Florence’s father, Carter, wondered if it was fair that he and his sister, Enola, would have to divide the fortune equally, considering that he had three children (Florence, Avery and their younger brother, Jarred) and four grandchildren, while Enola, a novelist, remained single, with no dependents. After the Renunciation, as the economic collapse is called, even a small amount of money can be psychologically transformative. When Lowell heads to the supermarket after finally receiving a summer’s worth of back pay, the sensation of “trouser pockets that bulged with banded cash” makes him feel like “a real man” for the first time in months. But his buoyant mood is pierced by the discovery that, thanks to inflation, the cash won’t even cover the groceries in his cart. (...)
As I walk the streets of Flatbush, the rapidly gentrifying Brooklyn neighborhood where my family recently bought a house of our own, scenes from “The Mandibles” replay in my head. I don’t remember the last time a novel held me so enduringly in its grip. “The line between owners of swank Washington townhouses and denizens of his sister-in-law’s Fort Greene shelter was perhaps thinner than he’d previously appreciated,” Lowell realizes late in the novel. The line separating us from our dystopian future may be equally thin. The curse of Cassandra, after all, was that she told the truth.
by Ruth Franklin, NY Times | Read more:
Image: Harper Publishing
A Boat Builder’s 30-Year Obsession Comes Together At Last
Noah is better known, but Steve Thon took the age-old craft of boat building to a higher level. Literally.
Rather than build a boat in a valley and end up on a mountain, Thon built his ark on a mountain and hauled it down to the sea.
Thon grew up in Minnesota, about as far as one can get from bluewater sailing. But he's long had a hankering for adventure. Minnesota is short on mountains too, but Thon has bagged peaks in the Rocky and Chugach mountains. He even made an attempt on Denali.
It was on an impromptu bid to climb Goat Rock, a precipitous peak near the end of the road in Eklutna Valley, that Thon drifted into his life's work, a sailboat big enough to carry him to any corner of the seven seas.
Finding his dreamboat
On his first night in Anchorage after moving north, Thon met Bob Linville, a commercial fisherman now living in Seward, at a mountaineering club meeting. In May 1980, Linville suggested climbing Goat Rock in Eklutna.
They were hiking up to the base of the mountain, behind where Rochelle's Ice Cream Stop is now, when they spied the large wooden mold of a boat hull outside a cabin. A conversation with the mold's owner, Preston Schultz, fanned a slow-burning fire in Thon.
Before long, Thon and Schultz agreed to work on the boat as partners.
Schultz was a commercial fisherman. Having sunk his boat in Prince William Sound during the winter crab season, he wanted to build a combination seiner/longliner that could fish far offshore for tuna, which would require refrigeration and a fair amount of storage. Fuel was expensive. A motor-sailer — part motorboat, part sailboat — seemed like a unique solution.
Schultz moved back to Anchorage and modified the original plans he had purchased for the hull. His new model was the "Spray," a decrepit sloop estimated to be at least 100 years old and completely rebuilt by Joshua Slocum in Massachusetts in the late 1890s. Slocum, who had been given the rotting hulk when he was down on his luck, subsequently sailed the Spray single-handedly around the world, the first to do so.
An Australian boat builder, Bruce Roberts, had stretched Spray's hull to make it sharper, adding more run in the aft sections. Those were the plans Schultz settled on: 47 feet long and 14 ½ feet across the beam.
The boat was designed to be seaworthy, "to behave herself and not terrify the crew," in Roberts' salty description. At her best sailing downwind on the deep-blue seas, she wouldn't necessarily be a willing partner into a headwind. Rather, she was designed to carry a lot of food and water, hold a steady course for days on end and sail sedately. A boat fit for a retired sailor. (...)
Building on a dream
Thon worked on the boat 30 years. Wait. That's not right. Putting it that way diminishes both Thon's dogged perseverance and the forehead-slapping miracle of a boat built from scratch.
Repeat after me: Thon worked on the boat for 30 years. What if every syllable of that sentence took three years? When your lips stopped moving, you'd still have three years of hard labor before the boat was ready to be taken home to the sea.
Thon worked on the boat after long days working as a carpenter and house-builder. He worked on the boat for a decade while exercising his carpentry skills with the Anchorage School District. He worked on weekends and holidays.
He worked on the boat in his sleep.
Alaska is a tough place to work outdoors, especially in winter. Before he could concentrate on the boat, Thon had to erect a shed over it. The firewood-heated, two-story building, which doubled as a shop and had enough room for Debbie's car on cold nights, was bigger than most houses.
Thinking ahead — well over the horizon, it turns out — Thon designed the front of the shed to be detachable. He cradled the boat's hull on heavy-duty metal frames so that, eventually, a trailer could be backed under it.
Boat parts are expensive. He'd save up several thousand dollars, buy a part, a necessary tool, a few rolls of fiberglass cloth or a barrel of resin, then problem-solve, fabricate and expend sweat equity until he had saved enough money for another round.
That's how, for example, he ended up with three welders, each more expensive than the last. When he brought home the third one, Thon's wife Debbie shook her head and rolled her eyes.
"Who knew there were three different kinds of welders?" she said.
by Rick Sinnott, ADN | Read more:
Image: Rick Sinnott
Tuesday, August 29, 2017