Monday, September 4, 2017

One Woman’s Snorkeling Death Might Help Save Lives

Nancy Peacock walked down a boat ramp that descends into the cool blue waters of Pohoiki Bay, eager to try her new full-face snorkeling mask in an environment where she could see parrotfish, Moorish idols, corals and other sea creatures.

She had ordered the mask on Amazon and tried it out in the local pool near her California home in preparation for the September trip to visit longtime friends on the Big Island.

But less than an hour after entering the relatively calm bay, Peacock was dead.

Five months later her husband, Guy Cooper, is still searching for answers.

Did she drown because of the mask’s unique design, which covers the entire face so you can breathe through your mouth and nose, as opposed to the traditional snorkel tube held in the mouth? Or was it a freak accident, even for a healthy 70-year-old who was at least somewhat familiar with Hawaii’s waters?

Cooper’s quest has brought to light significant gaps in data collection by government agencies, inadequate chain-of-custody policies for evidence and confounding decisions by the county medical examiner.

“My God, these masks could be killing others and no one has a clue,” Cooper said. “Isn’t that something you would want to know, that the public needs to know?” (...)

Robert Wintner, who owns Snorkel Bob’s snorkel-rental stores on four of the Main Hawaiian Islands, said his employees tested the full-face masks.

“They have been so aggressive in their marketing. ‘You’ve got to give it a try, you’ve got to give it a try, you’ve got to give it a try,'” he said. “We tested it and said, ‘No way. We won’t carry it.’”

Wintner cited the mask’s potential for carbon dioxide buildup due to its full-face design and its likelihood of leaking because it uses a cheap substitute for silicone to create a secure seal when worn.

“You have to base your assessments on experience, intuition and instinct,” he said. “When I saw that thing, it didn’t look right.”

Wintner said he could see how the mask could create a situation that causes its user to panic, which ocean-safety experts often identify — along with age and underlying health conditions — as a primary reason why so many visitors die while snorkeling in Hawaii. (...)

To Cooper, preserving the equipment that a person was using in a drowning is just the first step.

He ultimately wants to see a database that logs information about the equipment in each incident so authorities can identify dangerous trends, much the same way that the National Highway Traffic Safety Administration collects data to determine if a particular type of airbag is faulty in fatal car crashes.

Hawaii is not alone. Cooper has spent hours researching this issue but has been unable to find any government agency in the U.S. or abroad that has created a database that includes details about the equipment worn in a drowning or near-drowning incident.

“As I looked into it further, I was stunned to find that apparently no one in the world makes the connection,” he said. “No one is paying attention. In my wife’s case, neither the first responders nor the police nor the coroner had any concern for the equipment. My wife’s mask was just tossed in the trash. I also found no evidence of any independent testing or certification of these things.”

The Hawaii Department of Health’s Injury Prevention and Control Section compiles records about the number of ocean drownings, the location of the incident, the victim’s residence and what they were doing.

But there are few details beyond a label of “snorkeling,” for instance. Nothing about what brand of mask was worn, what type of snorkel or fins.

Cooper maintains that recording the make and manufacturer is critical. He said the Azorro brand of mask his wife wore seems to be a Chinese knock-off of the original French design.

The Azorro mask goes for $49.99 on Amazon, compared to up to $199 for the version by Tribord, which says on its website that it created the first full-face snorkeling mask “allowing you to breathe just as easily and naturally underwater as you would on land.”

Attempts to find contact information for Azorro were not successful.

by Nathan Eagle, Honolulu Civil Beat |  Read more:
Image: Nathan Eagle
[ed. See also: Stand Up Or Die: Snorkeling In Hawaii Is A Leading Cause Of Tourist Deaths]

Sunday, September 3, 2017

Toward a 21st-Century Labor Movement

Between the 1930s and the 1970s, unions and collective bargaining helped to power the creation of America’s vast middle class. Unions smoothed the distribution of wealth over the entire economy, constraining the percentage of wealth and income concentrated at the top of the economy while lifting up the bottom and the middle. But union strength has been on the wane since the 1950s and, beginning in the 1980s, suffered a catastrophic free fall in the private sector that continues to this day. The ability to form a union and bargain collectively is inaccessible to more than 93 percent of private-sector workers—a major reason why working people have experienced 40 years of wage stagnation even as the economy grew and the rich got richer.

Most progressive economists, scholars, think tank analysts, and centrist or left-of-center politicians in the United States agree: The scale has tipped too far in favor of business and away from workers. Generally, they support government measures to rebalance the power of capital and labor by improving the conditions for union organizing. Such measures include banning the permanent replacement of striking workers, increasing penalties for labor-law violations by employers, allowing workers to achieve union representation more quickly and simply, requiring binding arbitration in labor contract disputes, and repealing the 1947 Taft-Hartley Act (which restricted or banned many effective union tactics and permitted states to go “right to work” and thereby cripple many unions financially).

But these sorts of federal legislative strategies, which attempt to augment or restore America’s collective-bargaining framework, have failed repeatedly for the past 50 years: Unions have never been able to secure both a majority in the House and the required supermajority in the Senate, even when both bodies have had substantial Democratic majorities. And as union density fades with each passing year, the probability of gaining support from senators in states with no real union presence declines accordingly.

Underlying this failure is a more fundamental problem: American enterprise-based collective bargaining is an inherently weak model of industrial and labor relations compared with the possible alternatives.

Under America’s current “enterprise bargaining” framework, agreements are reached between a single union and a single employer. Under enterprise bargaining, the right to a voice in the workplace is considered an optional right that workers must opt into on a workplace-by-workplace basis via a majority vote. This means that only a minority of workers is ever likely to benefit from collective bargaining, a fact that weakens political support for unions and worker bargaining rights. It also means that employers are highly incentivized to avoid unions before they form or to crush them once they exist. Where unions do form and exist, employers who agree to union demands often perceive that they have been placed at a competitive disadvantage on price or flexibility within their industries—unless a supermajority of their competitors is also unionized. In addition, under the current system of enterprise bargaining, unions can’t require that employers negotiate over some of the most important factors in worker prosperity, such as the overall strategic direction of a firm; worker equity in a firm; or worker control of health, pension, and training funds.

The confluence of these facts means that unions are hard to form, difficult to maintain, and limited in the scope of their bargaining. It means they face constant workplace and political opposition from employers. That political opposition in turn leads to the repeated failure of labor-law reform in Congress. As Marx once speculated about capitalism, we can now say with some certainty about our system of collective bargaining: It sowed the seeds of its own destruction.

Organized labor’s legislative strategy since the 1950s—restoring the old model of union bargaining—is unlikely to prevail in the 21st century. That model thrived in an era of standardized industrial production, long-term or even lifelong employment in an industry or firm, and the relative geographic immobility of both workers and capital. This was also a period that witnessed mass worker militancy, industrial strikes, and rampant inter-union competition—overlaid with fears of communism abroad. Added to this mix was a domestic Communist Party that trained skilled anti-capitalist organizers; organized-crime syndicates that cynically promoted unions so they could loot union treasuries and extort employers; and a federal government broadly committed to using collective bargaining to maintain industrial stability during world wars, cold wars, and depressions. One could no more bring back such a unique set of historical factors and conditions than one could repeal refrigeration, globalization, or the Internet (each of which also in its own way helped hasten union decline).

But workers still need mechanisms to exercise power and to do so at a scale that improves the lives of millions of workers. They need to build organizations that can sustain worker bargaining power for the long haul. If 20th century–style unions as we knew them aren’t going to play that role, we’ll need to invent new forms of powerful, scalable, sustainable worker organizations if any effort to rebuild the middle class is going to succeed.

Such organizations might take several forms. Borrowing from labor law in other countries, from U.S. history, and from promising experiments happening in the United States today, there are several potential overlapping strategies for how future forms of worker power might operate and that suggest what U.S. labor policy might eventually look like.

Geographic and/or sectoral bargaining. With changes in federal law, unions could represent workers throughout an entire industry and not on a firm-by-firm basis, eliminating much of the dysfunction of firm-by-firm bargaining. But even without federal statute changes, cities or states could develop stakeholder or tripartite (government, company, and union) bargaining by geography or by industry. Wage-setting boards, for example, were commonplace at the state and municipal levels in the early 20th century. Representatives of workers, employers, and government could determine legally binding standards for wages and benefits throughout an industry or within a geographic area. This is similar to the stakeholder process we used in Seattle for the minimum-wage negotiations, and is exactly how New York’s fast-food workers achieved a $15 wage policy in 2015.

Co-determination. Common in Europe, co-determination allows employees a greater role in the management of a company, increasing worker voice and aligning incentives for quality and productivity between labor and management. Germany is home to the most successful example of this model, but a variation is used in the United States by health giant Kaiser Permanente. Under co-determination, labor agreements are made at the national level by unions and employer associations, and then local plants and firms meet with “works councils” to adjust the national agreements to local circumstances. In Germany, large firms are required to have worker representation on their boards of directors and workers elect works councils to solve problems at each worksite.

Since 1997, Kaiser and its 28 unions, which represent more than 100,000 workers, have partnered to give unions and individual workers a seat at the table in management decisions over quality, efficiency, and performance. Bargaining over employment conditions happens nationally. And in each facility, managers, unions, frontline workers, and physicians form thousands of Unit-Based Teams empowered to make patient-care decisions together. Two goals of the Kaiser Labor Management Partnership are to continuously improve the quality of health care Kaiser delivers while also becoming the employer of choice in the health-care industry.

by David Rolf, American Prospect |  Read more:
Image: via:

Peter McFarland
via:

Deeper Than Deep

“It’s like the discovery of the New World,” David Reich tells me. “Everything is new, nobody’s looked at it in this way before, so how can things not be interesting?”

The excitement surrounding David Reich’s ancient genetics lab at Harvard Medical School is almost palpable. Journals like Science and Nature are unstinting in their praise of the work being done in the Reich Laboratory. Reich and his colleagues are rewriting the history of the human species. Like a scientific Cecil B. DeMille, they are working toward creating an epic cinematic reenvisioning of human history that takes us deep into the mists of the past, tens of thousands of years ago.

In February of this year the forty-three-year-old Reich was named corecipient (with his colleague Svante Pääbo at Germany’s Max Planck Institute) of the $1 million Dan David Prize in archaeology and natural selection for being “the world’s leading pioneer in analyzing ancient human DNA,” which led to the discovery that Neanderthals and humans interbred—“a quantum leap in reconstructing our evolutionary past.”

A discovery, I was to learn from Reich in a conversation that preceded the prize, that had been superseded by even more astonishing developments: evidence of interbreeding between humans and non-Neanderthal variants of hominids, including evanescent but once real “ghost populations.”

This is not “ancient history,” which goes back a few thousand years to the dawn of writing. This is deeper in the past than “deep history,” which takes us even further back—before the invention of agriculture, before the invention of language, before the invention of the wheel.

This is deep, deep history, tens of thousands of years ago. When, it’s now emerging, hordes of humans, vast tribes of variations of hominids—Homo sapiens, Neanderthals, the newly discovered “Denisovans,” the mysterious “ghost populations”—ranged and thronged and clashed and bred and interbred (and probably exterminated large portions of each other) across vast landscapes that were battlefields and graveyards.

It’s deep, deep history that’s beginning to unscroll a vast pageant through the wonders of big data crunching and the analysis of ancient DNA samples from fragments of bone and mummies that have been rotting away in the dusty basements of museums.

And not only in old bones and mummified objects. The evidence for much of these vast clashes and close encounters is something we carry around within us in microscopic stretches of DNA that are the only legacy left from extinct variant species of humans. In microscopic sequences of chemical bonds on the double helixes of heredity there are traces of ancient variations on human species who lived and thrived and left nothing else behind beyond a few random sequences of chemical bonds. The faintest of faint echoes of a prehistoric past we’re only beginning to grasp. It’s a shift in focus as radical as the one that allowed us to glimpse—through Hubble-era telescopes—the billions of galaxies of the knowable universe and radically shift our perspective on our place in deep space. Suddenly we are able to see, in the galaxies of genes within us and the stories they tell, a new way of envisioning our place in the history of the planet.

And this fellow David Reich, sitting across from me in a corner of his lab on Avenue Louis Pasteur in Boston, this skinny slip of a hominid, David Reich, clad in a T-shirt and slacks—the Zuckerberg couture of Harvard geniuses, you might say—is at the heart of what is likely to be remembered as one of the great scientific revolutions. One unimaginable just a few years ago. (...)

What Reich’s lab has begun to unveil is that at least two previously unknown hominid species interbred in the deep past with both humans and Neanderthals but are now extinct. Extinct but survive within us as fragments of ancient DNA code that reflect memories of interactions—let’s be frank, sex—with other hominid variations. Proof of interbreeding and extinctions on a scale that suggests huge dramas—wars, migrations, invasions—we, or really Reich, are only beginning to reconstruct. Just as we are only beginning to reconstruct those lost populations and deal with the realization we have the ability to build a model of the billions of genetic combinations that make up modern humans.

It’s this realization—the kind of work Reich and his colleagues are doing—that makes people nervous about the powers the ancient DNA savants hold over the shape of humans to come. (...)

I turn hesitantly to the dark side of the genetic revolution, the one highlighted by the Washington Post story about the “secret Harvard meeting” incited by concern over synthetic human genomes and their revolutionary potential. “There’s been recent concern among bioethicists about just how rapid the ability to create genomes has become. There was some meeting a while ago that dealt with the downside of being able to create and implant genes in humans or viruses.” In viruses the concern is that if genes for illness can be disarmed, they can also be armed up—creating an “arms race” of germ warfare. “What’s your feeling about this whole kerfuffle?”

“Well, actually the person involved in that is down the hall in this building, but that is a very different branch of genetics from what I do. That is engineering. What I do is inference about the past. I’m just trying to learn about history, and they’re actually trying to modify genomes, so it’s completely different. I’m trying to read genomes; they’re trying to write genomes. It’s a very different thing, and I think it’s one of these modern technologies that is potentially disruptive to our very being. Genetics. You know, the ability to engineer genomes is the biological equivalent of nuclear weapons. It’s really a fundamentally powerful—”

The biological equivalent of nuclear weapons! His concern seems heartfelt. “That’s kind of breathtaking when you think about it. Splitting the atoms, splitting the genome, or whatever…”

“Yeah, yeah, it’s a kind of reversal of things you couldn’t or haven’t done. You couldn’t split an atom apart before nuclear technology, and you could not reverse engineer the genome before modern recombinant genetics. That’s a very powerful thing. It’s a powerful tool, and it could be used—or misused, presumably—used, and abused like other types, like nuclear technology. It’s quite a profound thing.”

“Do we even know the endpoint of that? Could we create life?”

“Presumably.”

by Ron Rosenbaum, Lapham's Quarterly |  Read more:
Image: The British Museum

Walter Becker (1950-2017)

Saturday, September 2, 2017

The Enduring Legacy of Zork

In 1977, four recent MIT graduates who’d met at MIT’s Laboratory for Computer Science used the lab’s PDP-10 mainframe to develop a computer game that captivated the world. Called Zork, which was a nonsense word then popular on campus, their creation would become one of the most influential computer games in the medium’s half-century-long history.

The text-based adventure challenged players to navigate a byzantine underground world full of caves and rivers as they battled gnomes, a troll, and a Cyclops to collect such treasures as a jewel-encrusted egg and a silver chalice.

During its 1980s heyday, commercial versions of Zork released for personal computers sold more than 800,000 copies. Today, unofficial versions of the game can be played online, on smartphones, and on Amazon Echo devices, and Zork is inspiring young technologists well beyond the gaming field.

It’s an impressive legacy for a project described by its developers as a hobby, a lark, and a “good hack.” Here’s the story of Zork’s creation, as recounted by its four inventors—and a look at its ongoing impact.

Tim Anderson, Marc Blank, Bruce Daniels, and Dave Lebling—who between them earned seven MIT degrees in electrical engineering and computer science, political science, and biology—bonded over their interest in computer games, then in their infancy, as they worked or consulted for the Laboratory for Computer Science’s Dynamic Modeling Group. By day, all of them but Blank (who was in medical school) developed software for the U.S. Department of Defense’s Advanced Research Projects Agency (DARPA), which funded projects at MIT. On nights and weekends, they used their coding skills—and mainframe access—to work on Zork.

In early 1977, a text-only game called Colossal Cave Adventure—originally written by MIT grad Will Crowther—was tweaked and distributed over the ARPANET by a Stanford graduate student. “The four of us spent a lot of time trying to solve Adventure,” says Lebling. “And when we finally did, we said, ‘That was pretty good, but we could do a better job.’”

By June, they’d devised many of Zork’s core features and building blocks, including a word parser that took words the players typed and translated them into commands the game could process and respond to, propelling the story forward. The parser, which the group continued to fine-tune, allowed Zork to understand far more words than previous games, including adjectives, conjunctions, prepositions, and complex verbs. That meant Zork could support intricate puzzles, such as one that let players obtain a key by sliding paper under a door, pushing the key out of the lock so it would drop onto the paper, and retrieving the paper. The parser also let players input sentences like “Take all but rug” to scoop up multiple treasures, rather than making them type “Take [object]” over and over.
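Infocom's actual parser was far more sophisticated (and written on the PDP-10, not in Python), but the two behaviors described above — folding synonyms into canonical verbs and handling commands like "Take all but rug" — can be sketched roughly like this. Every name and word list here is invented for illustration, not taken from Zork's source:

```python
# Toy sketch of a Zork-style parser: reduce a typed line to a
# canonical (verb, objects) command. Not Infocom's real design.

SYNONYMS = {"grab": "take", "get": "take", "go": "walk"}
FILLER = {"the", "a", "an", "at", "to"}  # words the parser ignores

def parse(line, known_objects):
    """Map free-form input to (verb, [objects]), or None if empty."""
    # Normalize case, fold synonyms, drop filler words.
    words = [SYNONYMS.get(w, w) for w in line.lower().split()]
    words = [w for w in words if w not in FILLER]
    if not words:
        return None
    verb, rest = words[0], words[1:]
    # "take all but rug" -> every known object except the excluded ones.
    if rest and rest[0] == "all":
        excluded = set(rest[2:]) if len(rest) > 1 and rest[1] == "but" else set()
        return (verb, [o for o in known_objects if o not in excluded])
    # Otherwise keep only words that name objects the game knows about.
    return (verb, [w for w in rest if w in known_objects])
```

Here `parse("grab the egg", ["egg", "rug"])` yields `("take", ["egg"])`, and `parse("Take all but rug", ["egg", "chalice", "rug"])` yields `("take", ["egg", "chalice"])` — the kind of shorthand that spared players from typing "Take [object]" repeatedly.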

Vibrant, witty writing set Zork apart. It had no graphics, but lines like “Phosphorescent mosses, fed by a trickle of water from some unseen source above, make [the crystal grotto] glow and sparkle with every color of the rainbow” helped players envision the “Great Underground Empire” they were exploring as they brandished such weapons as glowing “Elvish swords.” “We played with language just like we played with computers,” says Daniels. Wordplay also cropped up in irreverent character names such as “Lord Dimwit Flathead the Excessive” and “The Wizard of Frobozz.”

Within weeks of its creation, Zork’s clever writing and inventive puzzles attracted players from across the U.S. and England. “The MIT machines were a nerd magnet for kids who had access to the ARPANET,” says Anderson. “They would see someone running something called Zork, rummage around in the MIT file system, find and play the game, and tell their friends.” The MIT mainframe operating system (called ITS) let Zork’s creators remotely watch users type in real time, which revealed common mistakes. “If we found a lot of people using a word the game didn’t support, we would add it as a synonym,” says Daniels.

The four kept refining and expanding Zork until February 1979. A few months later, three of them, plus seven other Dynamic Modeling Group members, founded the software company Infocom. Its first product: a modified version of Zork, split into three parts, released over three years, to fit PCs’ limited memory size and processing power.

Nearly 40 years later, those PC games, which ran on everything from the Apple II to the Commodore 64 in their 1980s heyday, are available online—and still inspire technologists. Ben Brown, founder and CEO of Howdy.ai, says Zork helped him design AI-powered chatbots. “Zork is a narrative, but embedded within it are clues about how the user can interact with and affect the story,” he says. “It’s a good model for how chatbots should teach users how to respond to and use commands without being heavy-handed and repetitive.” For example, the line “You are in a dark and quite creepy crawlway with passages leaving to the north, east, south, and southwest” hints to players that they must choose a direction to move, but it doesn’t make those instructions as explicit as actually telling them, “Type ‘north,’ ‘east,’ ‘south,’ or ‘southwest.’” Brown’s chatbot, Howdy, operates similarly, using bold and highlighted fonts to draw attention to keywords, like “check in,” and “schedule,” that people can use to communicate with the bot.

Jessica Brillhart, a filmmaker who creates virtual-reality videos, also cites Zork as an influence: “It provides a great way to script immersive experiences and shows how to craft a full universe for people to explore.”

by Elizabeth Woyke, MIT Technology Review | Read more:
Image: Zork
[ed. Zork and its precursor Colossal Cave are like first loves you remember fondly for the rest of your life. Along with Eliza, they were my first experience with interactive computing. Read the comments section for similar tributes.]

HonoMobo Container Homes


HonoMobo’s container homes can be shipped anywhere in North America
[ed. I imagine there are other companies that repurpose shipping containers, I just stumbled across this one today. Great idea that doesn't get enough attention. Installation video here (and, mistakes to avoid if you go the DIY route here).]

The Perfect Fit

Shopping in Tokyo.

I’m not sure how it is in small families, but in large ones relationships tend to shift over time. You might be best friends with one brother or sister, then two years later it might be someone else. Then it’s likely to change again, and again after that. It doesn’t mean that you’ve fallen out with the person you used to be closest to but that you’ve merged into someone else’s lane, or had him or her merge into yours. Trios form, then morph into quartets before splitting into teams of two. The beauty of it is that it’s always changing.

Twice in 2014, I went to Tokyo with my sister Amy. I’d been seven times already, so was able to lead her to all the best places, by which I mean stores. When we returned in January of 2016, it made sense to bring our sister Gretchen with us. Hugh was there as well, and while he’s a definite presence, he didn’t figure into the family dynamic. Mates, to my sisters and me, are seen mainly as shadows of the people they’re involved with. They move. They’re visible in direct sunlight. But because they don’t have access to our emotional buttons—because they can’t make us twelve again, or five, and screaming—they don’t really count as players.

Normally in Tokyo we rent an apartment and stay for a week. This time, though, we got a whole house. The neighborhood it was in—Ebisu—is home to one of our favorite shops, Kapital. The clothes they sell are new but appear to have been previously worn, perhaps by someone who was shot or stabbed and then thrown off a boat. Everything looks as if it had been pulled from the evidence rack at a murder trial. I don’t know how they do it. Most distressed clothing looks fake, but not theirs, for some reason. Do they put it in a dryer with broken glass and rusty steak knives? Do they drag it behind a tank over a still-smoldering battlefield? How do they get the cuts and stains so . . . right?

If I had to use one word to describe Kapital’s clothing, I’d be torn between “wrong” and “tragic.” A shirt might look normal enough until you try it on, and discover that the armholes have been moved, and are no longer level with your shoulders, like a capital “T,” but farther down your torso, like a lowercase one.

Jackets with patches on them might senselessly bunch at your left hip, or maybe they poof out at the small of your back, where for no good reason there’s a pocket. I’ve yet to see a pair of Kapital trousers with a single leg hole, but that doesn’t mean the designers haven’t already done it. Their motto seems to be “Why not?”

Most people would answer, “I’ll tell you why not!” But I like Kapital’s philosophy. I like their clothing as well, though I can’t say that it always likes me in return. I’m not narrow enough in the chest for most of their jackets, but what was to stop me, on this most recent trip, from buying a flannel shirt made of five differently patterned flannel shirts ripped apart and then stitched together into a kind of doleful Frankentop? I got hats as well, three of them, which I like to wear stacked up, all at the same time, partly just to get it over with but mainly because I think they look good as a tower.

I draw the line at clothing with writing on it, but numbers don’t bother me, so I also bought a tattered long-sleeved T-shirt with “99” cut from white fabric and stitched onto the front before being half burned off. It’s as though a football team’s plane had gone down and this was all that was left. Finally, I bought what might be called a tunic, made of denim and patched at the neck with defeated scraps of corduroy. When buttoned, the front flares out, making me look like I have a potbelly. These are clothes that absolutely refuse to flatter you, that go out of their way to insult you, really, and still my sisters and I can’t get enough. (...)

There are three other branches of Kapital in Tokyo, and we visited them all, staying in each one until our fingerprints were on everything. “My God,” Gretchen said, trying on a hat that seemed to have been modelled on a used toilet brush, before adding it to her pile. “This place is amazing. I had no idea!”

by David Sedaris, New Yorker |  Read more:
Image: Tamara Shopsin

Friday, September 1, 2017

The Kekulé Problem

I call it the Kekulé Problem because among the myriad instances of scientific problems solved in the sleep of the inquirer Kekulé’s is probably the best known. He was trying to arrive at the configuration of the benzene molecule and not making much progress when he fell asleep in front of the fire and had his famous dream of a snake coiled in a hoop with its tail in its mouth—the ouroboros of mythology—and woke exclaiming to himself: “It’s a ring. The molecule is in the form of a ring.” Well. The problem of course—not Kekulé’s but ours—is that since the unconscious understands language perfectly well or it would not understand the problem in the first place, why doesnt it simply answer Kekulé’s question with something like: “Kekulé, it’s a bloody ring.” To which our scientist might respond: “Okay. Got it. Thanks.”

Why the snake? That is, why is the unconscious so loath to speak to us? Why the images, metaphors, pictures? Why the dreams, for that matter.

A logical place to begin would be to define what the unconscious is in the first place. To do this we have to set aside the jargon of modern psychology and get back to biology. The unconscious is a biological system before it is anything else. To put it as pithily as possible—and as accurately—the unconscious is a machine for operating an animal.

All animals have an unconscious. If they didnt they would be plants. We may sometimes credit ours with duties it doesnt actually perform. Systems at a certain level of necessity may require their own mechanics of governance. Breathing, for instance, is not controlled by the unconscious but by the pons and the medulla oblongata, two systems located in the brainstem. Except of course in the case of cetaceans, who have to breathe when they come up for air. An autonomous system wouldnt work here. The first dolphin anesthetized on an operating table simply died. (How do they sleep? With half of their brain alternately.) But the duties of the unconscious are beyond counting. Everything from scratching an itch to solving math problems.

Problems in general are often well posed in terms of language and language remains a handy tool for explaining them. But the actual process of thinking—in any discipline—is largely an unconscious affair. Language can be used to sum up some point at which one has arrived—a sort of milepost—so as to gain a fresh starting point. But if you believe that you actually use language in the solving of problems I wish that you would write to me and tell me how you go about it.

I’ve pointed out to some of my mathematical friends that the unconscious appears to be better at math than they are. My friend George Zweig calls this the Night Shift. Bear in mind that the unconscious has no pencil or notepad and certainly no eraser. That it does solve problems in mathematics is indisputable. How does it go about it? When I’ve suggested to my friends that it may well do it without using numbers, most of them thought—after a while—that this was a possibility. How, we dont know. Just as we dont know how it is that we manage to talk. If I am talking to you then I can hardly be crafting at the same time the sentences that are to follow what I am now saying. I am totally occupied in talking to you. Nor can some part of my mind be assembling these sentences and then saying them to me so that I can repeat them. Aside from the fact that I am busy this would be to evoke an endless regress. The truth is that there is a process here to which we have no access. It is a mystery opaque to total blackness. (...)

Of the known characteristics of the unconscious its persistence is among the most notable. Everyone is familiar with repetitive dreams. Here the unconscious may well be imagined to have more than one voice: He’s not getting it, is he? No. He’s pretty thick. What do you want to do? I dont know. Do you want to try using his mother? His mother is dead. What difference does that make?

What is at work here? And how does the unconscious know we’re not getting it? What doesnt it know? It’s hard to escape the conclusion that the unconscious is laboring under a moral compulsion to educate us. (Moral compulsion? Is he serious?) (...)

We dont know what the unconscious is or where it is or how it got there—wherever there might be. Recent animal brain studies showing outsized cerebellums in some pretty smart species are suggestive. That facts about the world are in themselves capable of shaping the brain is slowly becoming accepted. Does the unconscious only get these facts from us, or does it have the same access to our sensorium that we have? You can do whatever you like with the us and the our and the we. I did. At some point the mind must grammaticize facts and convert them to narratives. The facts of the world do not for the most part come in narrative form. We have to do that. (...)

The unconscious seems to know a great deal. What does it know about itself? Does it know that it’s going to die? What does it think about that? It appears to represent a gathering of talents rather than just one. It seems unlikely that the itch department is also in charge of math. Can it work on a number of problems at once? Does it only know what we tell it? Or—more plausibly—has it direct access to the outer world? Some of the dreams which it is at pains to assemble for us are no doubt deeply reflective and yet some are quite frivolous. And the fact that it appears to be less than insistent upon our remembering every dream suggests that sometimes it may be working on itself. And is it really so good at solving problems or is it just that it keeps its own counsel about the failures? How does it have this understanding which we might well envy? How might we make inquiries of it? Are you sure?

by Cormac McCarthy, Nautilus | Read more:
Image: Don Kilpatrick III
[ed. See also: It’s Okay to “Forget” What You Read]

The Ontology of Circus Peanuts

I confess I am not by nature an early adopter. I still like manual typewriters, stick-shift cars, and simple appliances with on and off buttons instead of confusing symbols. I still do not know how to text. I am, however, very proud that I was in the vanguard when it came to hating the circus. I remember how out of sync I was when, at age nine, my parents took me to the circus at Madison Square Garden. I screamed in horror at the clowns, I was a whining bummer when the ringmaster with a whip made the frightened horses jump through fiery hoops, and I only perked up when the lion tamer stuck his head into the lion’s mouth. I was hoping he would be decapitated.

Now everyone has jumped on the “I hate the circus” bandwagon. It is under attack by animal-rights activists and fire departments and performers’ unions. The glory days of Barnum and Bailey are long gone. People with compassion no longer want to see elephants paraded down Main Street holding tail in trunk; the dirty-water hot dogs and rancid clouds of ancient cotton candy no longer hold sway with kids of all ages.

There is one tangential remnant of the circus that thrills me to the bone, and that is the low-grade confectionary candy called Circus Peanuts. Circus Peanuts, as far as I can tell, have literally nothing to do with circuses, or even with peanuts. They are usually found on the bottom candy shelf at gas-station convenience marts or at some chain drug stores.

A Circus Peanut is about two inches long, it is the anemic orange color of the astronauts’ favorite drink, Tang, and it has been machine-stamped to vaguely resemble a shelled peanut. The most amazing thing about Circus Peanuts is they are always stale. Not rock-hard but weirdly deflated and tough. It is hard to make a marshmallow go stale. In my kitchen pantry, I have a bag of them that has seen me through four years of holiday yam casseroles, and they are still squishy and fresh. Therefore one can’t blame the problem with Circus Peanuts on the general pillowy constitution of the marshmallow. Maybe even more mysterious than the ubiquitous staleness is that, for no logical reason, Circus Peanuts are banana flavored. Real peanuts are none of these things.

I have a few theories.

Theory 1: Decades back, when the Circus Peanut was invented, no one thought much about lawsuits. Ladders did not warn you that you should not jump from the top of them and people assumed hot coffee was hot. It may well be that the peanut industry was highly litigious and ahead of its time and woe to anyone who dared call something a peanut that wasn’t. Hence orange skin and banana flavoring became a protective shield against potential wrath.

Theory 2: Perhaps someone who lived in, say, Antarctica and had never seen or tasted a peanut invented Circus Peanuts. These are imaginary peanuts, a fantasy.

Theory 3: Around World War II, when the Circus Peanut was invented, the manufacturer was worried about shortages. A big Quonset hut was purchased to warehouse tons of them. The reason they are all stale is that we are still eating the original batch today.

by Jane Stern, Paris Review |  Read more:
Image: uncredited

Thursday, August 31, 2017


Matteo Nannini, Wave Goodbye
via:

Trickle Down 12.0

What Would the End of Football Look Like?

The NFL is done for the year, but it is not pure fantasy to suggest that it may be done for good in the not-too-distant future. How might such a doomsday scenario play out and what would be the economic and social consequences?

By now we’re all familiar with the growing phenomenon of head injuries and cognitive problems among football players, even at the high school level. In 2009, Malcolm Gladwell asked whether football might someday come to an end, a concern seconded recently by Jonah Lehrer.

Before you say that football is far too big to ever disappear, consider the history: If you look at the companies in the Fortune 500 from 1983, for example, 40 percent of them no longer exist. The original version of Napster no longer exists, largely because of lawsuits. No matter how well a business matches economic conditions at one point in time, it’s not a lock to be a leader in the future, and that is true for the NFL too. Sports are not immune to these pressures. In the first half of the 20th century, the three big sports were baseball, boxing, and horse racing, and today only one of those is still a marquee attraction.

The most plausible route to the death of football starts with liability suits. Precollegiate football is already sustaining 90,000 or more concussions each year. If ex-players start winning judgments, insurance companies might cease to insure colleges and high schools against football-related lawsuits. Coaches, team physicians, and referees would become increasingly nervous about their financial exposure in our litigious society. If you are coaching a high school football team, or refereeing a game as a volunteer, it is sobering to think that you could be hit with a $2 million lawsuit at any point in time. A lot of people will see it as easier to just stay away. More and more modern parents will keep their kids out of playing football, and there tends to be a “contagion effect” with such decisions; once some parents have second thoughts, many others follow suit. We have seen such domino effects with the risks of smoking or driving without seatbelts, two unsafe practices that were common in the 1960s but are much rarer today. The end result is that the NFL’s feeder system would dry up and advertisers and networks would shy away from associating with the league, owing to adverse publicity and some chance of being named as co-defendants in future lawsuits.

It may not matter that the losses from these lawsuits are much smaller than the total revenue from the sport as a whole. As our broader health care sector indicates (try buying private insurance when you have a history of cancer treatment), insurers don’t like to go where they know they will take a beating. That means just about everyone could be exposed to fear of legal action.

This slow death march could easily take 10 to 15 years. Imagine the timeline. A couple more college players — or worse, high schoolers — commit suicide with autopsies showing CTE. A jury makes a huge award of $20 million to a family. A class-action suit shapes up with real legs. The NFL keeps changing its rules, but it turns out that even sub-concussive levels of constant head contact still produce CTE. Technological solutions (new helmets, pads) are tried and they fail to solve the problem. Soon high schools decide it isn’t worth it. The Ivy League quits football, then California shuts down its participation, busting up the Pac-12. Then the Big Ten calls it quits, followed by the East Coast schools. Now it’s mainly a regional sport in the southeast and Texas/Oklahoma. The socioeconomic picture of a football player becomes more homogeneous: poor, weak home life, poorly educated. Ford and Chevy pull their advertising, as do IBM and, eventually, the beer companies. (...)

Despite its undeniable popularity — and the sense that the game is everywhere — the aggregate economic effect of losing the NFL would not actually be that large. League revenues are around $10 billion per year while U.S. GDP is around $15.3 trillion. But that doesn’t mean everyone would be fine.

by Tyler Cowen and Kevin Grier, Grantland | Read more:
Image: Rob Tringali/Getty Images
[ed. See also: ESPN Football Analyst Walks Away, Disturbed by Brain Trauma on Field]

Wednesday, August 30, 2017

After Decades of Pushing Bachelor’s Degrees, U.S. Needs More Tradespeople

FONTANA, Calif. — At a steel factory dwarfed by the adjacent Auto Club Speedway, Fernando Esparza is working toward his next promotion.

Esparza is a 46-year-old mechanic for Evolution Fresh, a subsidiary of Starbucks that makes juices and smoothies. He’s taking a class in industrial computing taught by a community college at a local manufacturing plant in the hope it will bump up his wages.

It’s a pretty safe bet. The skills being taught here are in high demand. That’s in part because so much effort has been put into encouraging high school graduates to pursue academic degrees in college, rather than training in industrial and other trades, that many fields like his now face worker shortages.

Now California is spending $6 million on a campaign to revive the reputation of vocational education, and $200 million to improve the delivery of it.

“It’s a cultural rebuild,” said Randy Emery, a welding instructor at the College of the Sequoias in California’s Central Valley.

Standing in a cavernous teaching lab full of industrial equipment on the college’s Tulare campus, Emery said the decades-long national push for high school graduates to get bachelor’s degrees left vocational programs with an image problem, and the nation’s factories with far fewer skilled workers than needed.

“I’m a survivor of that teardown mode of the ’70s and ’80s, that college-for-all thing,” he said.

This has had the unintended consequence of helping flatten out or steadily erode the share of students taking vocational courses. In California’s community colleges, for instance, it’s dropped to 28 percent from 31 percent since 2000, contributing to a shortage of trained workers with more than a high school diploma but less than a bachelor’s degree.

Research by the state’s 114-campus community college system showed that families and employers alike didn’t know of the existence or value of vocational programs and the certifications they confer, many of which can add tens of thousands of dollars per year to a graduate’s income.

“We needed to do a better job getting the word out,” said Van Ton-Quinlivan, the system’s vice chancellor for workforce and economic development.

High schools and colleges have struggled for decades to attract students to job-oriented classes ranging from welding to nursing. They’ve tried cosmetic changes, such as rebranding “vocational” courses as “career and technical education,” but students and their families have yet to buy in, said Andrew Hanson, a senior research analyst with Georgetown University’s Center on Education and the Workforce.

Federal figures show that only 8 percent of undergraduates are enrolled in certificate programs, which tend to be vocationally oriented.

Sen. Marco Rubio, R-Fla., last year focused attention on the vocational vs. academic debate by contending during his presidential campaign that “welders make more money than philosophers.”

The United States has 30 million jobs that pay an average of $55,000 per year and don’t require a bachelor’s degree, according to the Georgetown center. People with career and technical educations are actually slightly more likely to be employed than their counterparts with academic credentials, the U.S. Department of Education reports, and significantly more likely to be working in their fields of study.

At California Steel Industries, where Esparza was learning industrial computing, some supervisors without college degrees make as much as $120,000 per year and electricians also can make six figures, company officials said.

Skilled trades show some of the highest growth potential of any job category, the economic-modeling company Emsi calculates. It says tradespeople also are older than workers in other fields — more than half were over 45 in 2012, the last period for which the subject was studied — meaning looming retirements could result in big shortages.

High schools and community colleges are the keys to filling industrial jobs, Hanson said, but something needs to change.

“You haven’t yet been able to attract students from middle-class and more affluent communities” to vocational programs, he said. “Efforts like California’s to broaden the appeal are exactly what we need.”

by Matt Krupnick, PBS Newshour |  Read more:
Image: PBS
[ed. The main problem being that those jobs aren't as Instagrammable as some bartender working at an upscale dive for $15/hr. When the world goes to hell and robots take over every administrative and technical job that ever existed, survivors will be those who have some marketable hands-on skill: plumbers, electricians, carpenters, mechanics, etc.] 

The Transhumanist FAQ

1.1 What is transhumanism?

Transhumanism is a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase. We formally define it as follows: 

(1) The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities. 

(2) The study of the ramifications, promises, and potential dangers of technologies that will enable us to overcome fundamental human limitations, and the related study of the ethical matters involved in developing and using such technologies. 

Transhumanism can be viewed as an extension of humanism, from which it is partially derived. Humanists believe that humans matter, that individuals matter. We might not be perfect, but we can make things better by promoting rational thinking, freedom, tolerance, democracy, and concern for our fellow human beings. Transhumanists agree with this but also emphasize what we have the potential to become. Just as we use rational means to improve the human condition and the external world, we can also use such means to improve ourselves, the human organism. In doing so, we are not limited to traditional humanistic methods, such as education and cultural development. We can also use technological means that will eventually enable us to move beyond what some would think of as “human”. 

It is not our human shape or the details of our current human biology that define what is valuable about us, but rather our aspirations and ideals, our experiences, and the kinds of lives we lead. To a transhumanist, progress occurs when more people become more able to shape themselves, their lives, and the ways they relate to others, in accordance with their own deepest values. Transhumanists place a high value on autonomy: the ability and right of individuals to plan and choose their own lives. Some people may of course, for any number of reasons, choose to forgo the opportunity to use technology to improve themselves. Transhumanists seek to create a world in which autonomous individuals may choose to remain unenhanced or choose to be enhanced and in which these choices will be respected. 

Through the accelerating pace of technological development and scientific understanding, we are entering a whole new stage in the history of the human species. In the relatively near future, we may face the prospect of real artificial intelligence. New kinds of cognitive tools will be built that combine artificial intelligence with interface technology. Molecular nanotechnology has the potential to manufacture abundant resources for everybody and to give us control over the biochemical processes in our bodies, enabling us to eliminate disease and unwanted aging. Technologies such as brain-computer interfaces and neuropharmacology could amplify human intelligence, increase emotional well-being, improve our capacity for steady commitment to life projects or a loved one, and even multiply the range and richness of possible emotions. On the dark side of the spectrum, transhumanists recognize that some of these coming technologies could potentially cause great harm to human life; even the survival of our species could be at risk. Seeking to understand the dangers and working to prevent disasters is an essential part of the transhumanist agenda. 

Transhumanism is entering the mainstream culture today, as increasing numbers of scientists, scientifically literate philosophers, and social thinkers are beginning to take seriously the range of possibilities that transhumanism encompasses. A rapidly expanding family of transhumanist groups, differing somewhat in flavor and focus, and a plethora of discussion groups in many countries around the world, are gathered under the umbrella of the World Transhumanist Association, a non-profit democratic membership organization. 

1.2 What is a posthuman?

It is sometimes useful to talk about possible future beings whose basic capacities so radically exceed those of present humans as to be no longer unambiguously human by our current standards. The standard word for such beings is “posthuman”. (Care must be taken to avoid misinterpretation. “Posthuman” does not denote just anything that happens to come after the human era, nor does it have anything to do with the “posthumous”. In particular, it does not imply that there are no humans anymore.) 

Many transhumanists wish to follow life paths which would, sooner or later, require growing into posthuman persons: they yearn to reach intellectual heights as far above any current human genius as humans are above other primates; to be resistant to disease and impervious to aging; to have unlimited youth and vigor; to exercise control over their own desires, moods, and mental states; to be able to avoid feeling tired, hateful, or irritated about petty things; to have an increased capacity for pleasure, love, artistic appreciation, and serenity; to experience novel states of consciousness that current human brains cannot access. It seems likely that the simple fact of living an indefinitely long, healthy, active life would take anyone to posthumanity if they went on accumulating memories, skills, and intelligence. 

Posthumans could be completely synthetic artificial intelligences, or they could be enhanced uploads [see “What is uploading?”], or they could be the result of making many smaller but cumulatively profound augmentations to a biological human. The latter alternative would probably require either the redesign of the human organism using advanced nanotechnology or its radical enhancement using some combination of technologies such as genetic engineering, psychopharmacology, anti-aging therapies, neural interfaces, advanced information management tools, memory enhancing drugs, wearable computers, and cognitive techniques. 

Some authors write as though simply by changing our self-conception, we have become or could become posthuman. This is a confusion or corruption of the original meaning of the term. The changes required to make us posthuman are too profound to be achievable by merely altering some aspect of psychological theory or the way we think about ourselves. Radical technological modifications to our brains and bodies are needed. It is difficult for us to imagine what it would be like to be a posthuman person. Posthumans may have experiences and concerns that we cannot fathom, thoughts that cannot fit into the three-pound lumps of neural tissue that we use for thinking. Some posthumans may find it advantageous to jettison their bodies altogether and live as information patterns on vast super-fast computer networks. Their minds may be not only more powerful than ours but may also employ different cognitive architectures or include new sensory modalities that enable greater participation in their virtual reality settings. Posthuman minds might be able to share memories and experiences directly, greatly increasing the efficiency, quality, and modes in which posthumans could communicate with each other. The boundaries between posthuman minds may not be as sharply defined as those between humans. 

Posthumans might shape themselves and their environment in so many new and profound ways that speculations about the detailed features of posthumans and the posthuman world are likely to fail.

by Nick Bostrom, Oxford University |  Read more: (pdf)

via:
[ed. Hey, was that a raindrop...?!]