Friday, March 18, 2016

Debriefing Mike Murphy

On a pleasant Super Tuesday afternoon — one of 10 or 11 Super Tuesdays we seem to be having this March — I am standing in the bloated carcass of that much-maligned beast known as The Establishment. In the unmarked suite of a generic mid-Wilshire office building (The Establishment can't be too careful, with all these populists sharpening pitchforks), I have come to Right to Rise, Jeb Bush's $118 million super-PAC, to watch Mike Murphy and his crew pack it in.

If you've been reading your Conventional Wisdom Herald, you know that Murphy, one of the most storied and furiously quick-witted political consultants of the last three decades, has lately been cast as the Titanic skipper who steered Jeb's nine-figure colossus smack into an iceberg. That donor loot helped buy Jeb all of four delegates before he dropped from the race, returning to a quiet life of low-energy contemplation. The Los Angeles Times called Right to Rise "one of the most expensive failures in American political history," which is among the more charitable assessments. (If you ever find yourself in Mike Murphy's position, never, ever look at Twitter.)

In his early career, profilers taking note of his long hair, leather jackets, and loud Hawaiian shirts made Murphy sound like a cross between the wild man of Borneo, Jimmy Buffett, and an unmade futon. These days, his hair is short, and there's a little less of it to account for. He looks more like a shambling film professor, in smart-guy faculty glasses, Lacoste half-zip, and khakis — his loud rainbow-striped socks being the only sartorial tell that he might still, as a Republican elder once told a reporter, be "in need of adult supervision." (...)

Murphy first cracked the political-consultant game back in 1982, cutting political ads from his dorm room and later dropping out of Georgetown's School of Foreign Service, figuring he'd dodged a career "stamping visas in Istanbul." Since then he's sold one political consultancy and his share of another, and is a partner in a third (Revolution), for which he mostly does corporate work. He generally prefers this to campaigns these days, since even though there's accountability to corporate boards, "you don't have to face 22-year-olds who have no experience, telling you how to do your job from their safe Twitter perch in journalism."

Murphy's clients have won around two dozen Senate and gubernatorial races (everyone from John Engler to Mitt Romney to Christine Todd Whitman to Arnold Schwarzenegger). If you notice a theme, it's that he often helps Republicans win in Democratic states. He's also played a major role in assisting three losing presidential candidates (McCain, Lamar! Alexander, and Jeb!). If you again notice a theme, it's that his presidential candidates sometimes seem more excited about their first names than the electorate does.

Like all hired guns in his trade, he's taken his share of mercenary money just for the check. But Murphy says when it comes to presidentials, he thinks it matters more and is a sucker for long shots. "I have friends I believe in who want to run. I'm a romantic, so I keep falling for that pitch." Jeb wasn't exactly a long shot, I remind him. Like hell he wasn't, says Murphy. It's a hard slog, not being a Grievance Candidate this year. "He was the guy who was handing out policy papers when Trump was handing out broken bottles."

A candidate is not permitted by law to discuss campaign specifics with his super-PAC once he declares, a law Murphy swears was strictly observed ("I'm too pretty to go to jail"). So I ask him what he would've told Jeb during the campaign had he been allowed to. Over the years, Murphy has forged a reputation for telling his candidates the truth, no matter how bitter the medicine. (He once had to tell a congressional client that his toupee was unconvincing.) Though Murphy's tongue is usually on a hair-trigger, he stops and ponders this question for a beat. He then says he would've told Jeb, "What the f — were we thinking?"

Even pre-campaign, however, when they were allowed to coordinate as Right to Rise was amassing its unprecedented war chest, well before Trump's ascendancy, both knew that despite the media billing Bush as the prohibitive favorite — a position they both detested — they were facing long odds. (The assumption was that Ted Cruz would be occupying the anger-candidate slot that Trump has instead so ably filled.)

Murphy says Bush regarded this election as a necessary tussle between the politics of optimism and grievance. At a preseason dinner, Murphy gave Bush his best guess of their chances of winning — under 50 percent. "He grinned," Murphy says, "and named an even lower number. I remember leaving the dinner with a mix of great pride in Jeb's principled courage and with a sense of apprehension about the big headwinds we would face." And though he'd also have told his friend, if he'd been allowed to speak to him, that he was proud of Jeb "for fighting his corner," ultimately, Murphy admits, "there is no campaign trick or spending level or candidate whisperer that can prevent a party from committing political suicide if it wants to."

by Matt Labash, New Republic |  Read more:
Image: Gary Locke

A History of the Amiga - Part 1: Genesis


[ed. My first computer was an Amiga 1000. As the joke goes, it was so far ahead of its time not even Commodore knew how to market it.]

The Amiga computer was a dream given form: an inexpensive, fast, flexible multimedia computer that could do virtually anything. It handled graphics, sound, and video as easily as other computers of its time manipulated plain text. It was easily ten years ahead of its time. It was everything its designers imagined it could be, except for one crucial problem: the world was essentially unaware of its existence.

With personal computers now playing such a prominent role in modern society, it's surprising to discover that a machine with most of the features of modern PCs actually first came to light back in 1985. Almost without exception, the people who bought and used Amigas became diehard fans. Many of these people would later look back fondly on their Amiga days and lament the loss of the platform. Some would even state categorically that despite all the speed and power of modern PCs, the new machines have yet to capture the fun and the spirit of their Amiga predecessors. A few still use their Amigas, long after the equivalent mainstream personal computers of the same vintage have been relegated to the recycling bin. Amiga users, far more than any other group, were and are extremely passionate about their platform.

So if the Amiga was so great, why did so few people hear about it? The world has plenty of books about the IBM PC and its numerous clones, and even a large library about Apple Computer and the Macintosh platform. There are also many books and documentaries about the early days of the personal computing industry. A few well-known examples are the excellent book Accidental Empires (which became a PBS documentary called Triumph of the Nerds) and the seminal work Fire in the Valley (which became a TV movie on TNT entitled Pirates of Silicon Valley).

These works tell an exciting tale about the early days of personal computing, and show us characters such as Bill Gates and Steve Jobs battling each other while they were still struggling to establish their new industry and be taken seriously by the rest of the world. They do a great job telling the story of Microsoft, IBM, and Apple, as well as of other companies that did not survive as those three did. But they mention Commodore and the Amiga rarely and in passing, if at all. Why?

When I first went looking for the corresponding story of the Amiga computer, I came up empty-handed. An exhaustive search for Amiga books came up with only a handful of old technical manuals, software how-to guides, and programming references. I couldn't believe it. Was the story so uninteresting? Was the Amiga really just a footnote in computing history, contributing nothing new and different from the other platforms?

As I began researching, I discovered the answer, and it surprised me even more than the existence of the computer itself. The story of Commodore and the Amiga was far more interesting than that of Apple or Microsoft. It is a tale of vision, of technical brilliance, of dedication and camaraderie. It is also a tale of deceit, of treachery, and of betrayal. It is a tale that has largely remained untold.

This series of articles attempts to explain what the Amiga was, what it meant to its designers and users, and why, despite its relative obscurity and early demise, it mattered so much to the computer industry. It follows some of the people whose lives were changed by their contact with the Amiga and shows what they are doing today. Finally, it looks at the small but dedicated group of people who have done what many thought was impossible and developed a new Amiga computer and operating system, ten years after the bankruptcy of Commodore. Long after most people had given up the Amiga for dead, these people have given their time, expertise and money in pursuit of this goal.

To many people, these efforts seem futile, even foolish. But to those who understand, who were there and lived through the Amiga at the height of its powers, they do not seem foolish at all.

But the story is about something else as well. More than a tale about a computer maker, this is the story of the age-old battle between mediocrity and excellence, the struggle between merely existing and trying to go beyond expectations. At many points in the story, the struggle plays out between two sides: the hard-working, idealistic engineers driven to the bursting point and beyond to create something new and wonderful, and the incompetent and often avaricious managers and executives who end up destroying that dream. But the story goes beyond even that. At its core, it is about people, not just the designers and programmers, but the users and enthusiasts, everyone whose lives were touched by the Amiga. And it is about me, because I count myself among those people, despite arriving over a decade too late to the party.

All these people have one thing in common. They understand the power of the dream.

by Jeremy Reimer, Ars Technica | Read more:
Image: Commodore

Thursday, March 17, 2016

Buddy Guy

The Mattering Instinct

We can’t pursue our lives without thinking that our lives matter—though one has to be careful here to distinguish the relevant sense of “matter.” Simply to take actions on the basis of desires is to act as if your life matters. It’s inconceivable to pursue a human life without these kinds of presumptions—that your own life matters to some extent. Clinical depression is the conviction that you don’t matter and never will. That’s a pathological attitude, and it highlights, by its pathology, the way in which the mattering instinct normally functions. To be a fully functioning, non-depressed person is to live and to act, to take it for granted that you can act on your own behalf, pursue your goals and projects, and to presume that you have a right to be treated in accord with your own commitment to your life’s mattering. We quite naturally flare up into outrage and indignation when others act in violation of the presumption grounding the pursuit of our lives. So this is what I mean by the mattering instinct—that commitment to one’s own life that is inseparable from pursuing a coherent human life.

But I want to distinguish more precisely the relevant sense of “mattering." The commitment to your own mattering is, first of all, not to presume that you cosmically matter—that you matter to the universe. My very firm opinion is that we don’t matter to the universe. The universe is lacking in all attitudes, including any attitude toward us. Of course, the religious point of view is that we do cosmically matter. The universe, as represented by God, takes an attitude toward us. That is not what I’m saying is presumed in the mattering instinct. To presume that one matters isn’t to presume that you matter to the universe, nor is it to presume that you matter more than others. There have been philosophers who asserted that some—for example, people of genius—absolutely matter more than others. Nietzsche asserted this. He said, for example, that all the chaos and misery of the French Revolution was justified because it brought forth the genius of Napoleon. The only justification for a culture, according to Nietzsche, is that it fosters a person who bears all the startling originality of a great work of art. All the non-originals—which are, of course, the great bulk of us—don’t matter. Nietzsche often refers to them as “the botched and the bungled.” According to Nietzsche there is an inequitable distribution of mattering. But I neither mean to be asserting anything religious nor anything Nietzsche-like in talking about our mattering instinct. I reject the one as firmly as the other. In fact, I would argue that the core of the moral point of view is that there is an equitable distribution of mattering among humans. To the extent that any of us matters—and just try living your life without presuming that you do—we all equally matter. (...)

When you figure out what matters to you and what makes you feel like you’re living a meaningful life, you universalize this. Say I’m a scientist and all my feelings about my own mattering are crystallized around my life as a scientist. It’s quite natural to slide from that into thinking that the life of science is the life that matters. Why doesn’t everybody get their sense of meaning from science? That false universalizing takes place quite naturally and imperceptibly, as we are unconsciously affected by the forces of the mattering map. In other people the need to justify their own sense of mattering slides into the religious point of view, and they end up concluding that, without a God to justify human mattering, life is meaningless: Why doesn’t everybody see that the life that matters is the life of religion? That’s false reasoning about mattering as well. These are the things I’m thinking about: What’s justified by the mattering instinct, which itself cannot and need not be justified, and what isn’t justified by it.

Yes, I want to explain the mattering instinct in terms of evolutionary psychology because I think everything about us, everything about human nature, demands an evolutionary explanation. And I do think that the outlines of such an explanation are quite apparent. That I matter, that my life demands the ceaseless attention I give it, is exactly what those genes would have any organism believing, if that organism was evolved enough for belief. The will to survive evolves, in a higher creature like us, into the will to matter. (...)

Science is science and philosophy is philosophy, and it takes a philosopher to do the demarcation. How does science differ from philosophy? That’s not a scientific question. In fact, what science is is not itself a scientific question; what science is is the basic question in the philosophy of science, or at least the most general one.

Here’s what I think science is: Science is this ingenious, motley collection of techniques and cognitive abilities that we use in order to try to figure out the questions of what is: What exists? What kind of universe are we living in? How is our universe ontologically furnished? People talk about the scientific method. There’s no method. That makes it sound like it’s a recipe: one, two, three, do this and you’re doing science. Instead, science is a grab bag of different techniques and cognitive abilities: observation, collecting of data, experimental testing, a priori mathematics, theorizing, model simulations; different scientific activities call for different talents, different cognitive abilities.

The abilities and techniques of a geologist collecting samples of soil and rocks to figure out thermal resistance, of a cognitive scientist building a computer simulation of long-term memory, of Albert Einstein performing a thought experiment—what would it be like to ride a light wave?—of a string theorist working out the mathematical implications of the 11 dimensions of M-theory, and of a computational biologist sifting through big data in order to spot genomic phenotypes are all very different. These are very different cognitive abilities and talents, and they’re all brought together in order to figure out what kind of universe we’re living in, what its constituents are and what the laws of nature governing those constituents are.

Here’s the wonderful trick about science: Given all of these motley attributes, talents, techniques, activities, in order for it to be science, you have to bring reality itself into the picture as a collaborator. Science is a way to prod reality to answer us back when we’re getting it wrong. It’s an amazing thing that we’ve figured out how to do it and it’s a good thing too because our intuitions are askew. Why shouldn’t they be? We’re just evolved apes, as we know through science. Our views about space and time, causality, individuation are all off. If we hadn’t developed an enterprise whose whole point is to prod reality to answer us back when we’re getting it wrong, we’d never know how out of joint our basic intuitions are.

Science has been able to correct this because no matter how theoretical it is, you have to be able to test your predictions. You have to be able to get reality to say, “So you think simultaneity is absolute, do you? No matter which frame of reference you’re measuring it in? Well, we’re just going to see about that.” And you perform the tests and, sure enough, our intuitions are wrong. That’s what science is. If philosophers think that they can compete with that, they’re off their rockers.
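[ed. A quick gloss on the simultaneity example, mine rather than Goldstein's: in special relativity the Lorentz transformation gives Δt′ = γ(Δt − vΔx/c²), with γ = 1/√(1 − v²/c²). Two events that are simultaneous in one frame (Δt = 0) but spatially separated (Δx ≠ 0) therefore have Δt′ ≠ 0 for an observer moving at velocity v. Simultaneity depends on the frame of reference, which is just what the experiments confirm.]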

That’s the mistake that a lot of scientists make. I call them philosophy jeerers—the ones who just dismiss philosophy as having nothing to add, because they think that philosophers are trying to compete with this amazing grab bag that we’ve worked out and that gets reality itself to be a collaborator. But there’s more to be done, more to be figured out, than just what kind of world we live in, the job description of science. In fact, everything I’ve just been saying, in defending science as our best means of figuring out the nature of our universe, hasn’t been science at all but rather philosophy, a kind of rewording of what Karl Popper had said.

Karl Popper, a philosopher, coined the term “falsifiability” to highlight this all-important ability of science to prod reality into being our collaborator. Popper is the one philosopher that scientists will cite. They like him. He has a very heroic view of scientists: they’re just out to falsify their theories. “A theory that we accept,” he says, “just hasn’t been falsified yet.” On this view scientists are never egotistically attached to their theories. It’s a very idealized view of science and scientists. No wonder scientists eat Popper up.

One of the things Popper said, and this relates very much to the whole idea of beauty in our scientific theories, is that we have to be able to test our theories in order for them to be scientific. But our whole way of framing our theories, the questions we want to solve, and the data we’re interested in looking at rests on certain metaphysical presumptions that we bring with us in order to do science at all, particularly in theory formation. These presumptions can’t be validated by science, but they’re implicit in the very carrying on of science. That there are metaphysical presumptions going into theory formation is an aspect of Popper’s description of science that most scientists forget he ever articulated.

One of these is that nature is law-like. If we find some anomaly, some contradiction to an existing law, we don’t say, “Oh, well, maybe nature just isn’t law-like. Maybe this was a miracle.” No. We say that we got the laws wrong and go back to the drawing board. Newtonian physics gets corrected, shown to be only a limiting case under the more general relativistic physics. We’re always presuming that nature is law-like in order to do science at all. We also bring with us our intuitions about beauty and, all things being equal, if we have two theories that are adequate to all the empirical evidence we have, we go with the one that’s more elegant, more beautiful. Usually that means more mathematically beautiful. That can be a very strong metaphysical ingredient in the formation of our theories.

It was particularly dramatic with Einstein, who had very strong views about the beauty and harmony of the laws of nature, views he put to work in general relativity. General relativity was published in 1915. It had to wait until 1919, when Eddington went to Africa and took pictures of the solar eclipse, for some empirical validation to be established. Sure enough, light waves were bent because of the mass of the sun; gravity distorted the geometry of space-time.

This was the first empirical verification of general relativity; there had been nothing before then. Einstein had jokingly said to somebody that if the empirical evidence had not validated his theory, he would’ve felt sorry for the dear Lord. He said to Hans Reichenbach, a philosopher of science and a physicist, that he knew before the empirical validation arrived in 1919 that the theory had to be true because it was too beautiful and elegant not to be true. That’s a very strong intuition, a metaphysical intuition that informed his formulation of the theory, which is exactly the kind of thing that Popper was talking about.
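[ed. For scale: general relativity predicts that light grazing the Sun is deflected by δ = 4GM/(c²R), about 1.75 arcseconds, roughly twice what a naive Newtonian calculation gives. Eddington's 1919 eclipse plates favored the larger, relativistic value.]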

The laws of nature are elegant, which usually means mathematically elegant. We’re moved by this. You can’t learn relativity theory and not be moved by the beauty of it.

Look, there are people who say that string theory is not science until you can somehow get reality to answer us back. It’s not science; it’s metaphysics—or so the argument goes.

The notion of the multiverse: It certainly seems that it’s hard to get any empirical evidence for parallel universes, and yet it’s a very elegant way of answering a lot of questions, like the so-called fine-tuning of the physical constants. These are places in which science might be slipping over into philosophy. What we have to do is keep working away at it, and perhaps we’ll be able to figure out an ingenious way for reality to answer us back.

by Rebecca Newberger Goldstein, Edge | Read more:
Image: uncredited

Tuesday, March 15, 2016

Why Do We Work So Hard?

When John Maynard Keynes mused in 1930 that, a century hence, society might be so rich that the hours worked by each person could be cut to ten or 15 a week, he was not hallucinating, just extrapolating. The working week was shrinking fast. Average hours worked dropped from 60 at the turn of the century to 40 by the 1950s. The combination of extra time and money gave rise to an age of mass leisure, to family holidays and meals together in front of the television. There was a vision of the good life in this era. It was one in which work was largely a means to an end – the working class had become a leisured class. Households saved money to buy a house and a car, to take holidays, to finance a retirement at ease. This was the era of the three-Martini lunch: a leisurely, expense-padded midday bout of hard drinking. This was when bankers lived by the 3-6-3 rule: borrow at 3%, lend at 6%, and head off to the golf course by 3pm.
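[ed. Keynes's "just extrapolating" is easy to reproduce. Here is a minimal back-of-the-envelope sketch in Python, using only the figures quoted above (about 60 hours around 1900, 40 by the 1950s); the straight-line assumption is mine, purely for illustration:

    # Hours fell from roughly 60 (c. 1900) to roughly 40 (c. 1950),
    # i.e. about 0.4 hours shed per year.
    def workweek(year):
        rate = (60 - 40) / (1950 - 1900)  # 0.4 hours per year
        return 60 - rate * (year - 1900)

    print(workweek(2030))  # -> 8.0 hours a week

Run the trend out a century from Keynes's 1930 essay and it lands at roughly eight hours a week, which makes his guess of ten or 15 look positively conservative.]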

The vision of a leisure-filled future occurred against the backdrop of the competition against communism, but it is a capitalist dream: one in which the productive application of technology rises steadily, until material needs can be met with just a few hours of work. It is a story of the triumph of innovation and markets, and one in which the details of a post-work world are left somewhat hazy. Keynes, in his essay on the future, reckoned that when the end of work arrived:
For the first time since his creation man will be faced with his real, his permanent problem – how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.
Karl Marx had a different view: that being occupied by good work was living well. Engagement in productive, purposeful work was the means by which people could realise their full potential. He’s not credited with having got much right about the modern world, but maybe he wasn’t so wrong about our relationship with work.

In those decades after the second world war, Keynes seemed to have the better of the argument. As productivity rose across the rich world, hourly wages for typical workers kept rising and hours worked per week kept falling – to the mid-30s, by the 1970s. But then something went wrong. Less-skilled workers found themselves forced to accept ever-smaller pay rises to stay in work. The bargaining power of the typical blue-collar worker eroded as technology and globalisation handed bosses a whole toolkit of ways to squeeze labour costs. At the same time, the welfare state ceased its expansion and began to retreat, swept back by governments keen to boost growth by cutting taxes and removing labour-market restrictions. The income gains that might have gone to workers, that might have kept living standards rising even as hours fell, that might have kept society on the road to the Keynesian dream, flowed instead to those at the top of the income ladder. Willingly or unwillingly, those lower down the ladder worked fewer and fewer hours. Those at the top, meanwhile, worked longer and longer.

It was not obvious that things would turn out this way. You might have thought that whereas, before, a male professional worked 50 hours a week while his wife stayed at home with the children, a couple of married professionals might instead each opt to work 35 hours a week, sharing more of the housework, and ending up with both more money and more leisure. That didn’t happen. Rather, both are now more likely to work 60 hours a week and pay several people to care for the house and children.

Why? One possibility is that we have all got stuck on a treadmill. Technology and globalisation mean that an increasing number of good jobs are winner-take-most competitions. Banks and law firms amass extraordinary financial returns, directors and partners within those firms make colossal salaries, and the route to those coveted positions lies through years of round-the-clock work. The number of firms with global reach, and of tech start-ups that dominate a market niche, is limited. Securing a place near the top of the income spectrum in such a firm, and remaining in it, is a matter of constant struggle and competition. Meanwhile the technological forces that enable a few elite firms to become dominant also allow work, in the form of those constantly pinging emails, to follow us everywhere.

This relentless competition increases the need to earn high salaries, for as well-paid people cluster together they bid up the price of the resources for which they compete. In the brainpower-heavy cities where most of them live, getting on the property ladder requires the sort of sum that can be built up only through long hours in an important job. Then there is conspicuous consumption: the need to have a great-looking car and a home out of Interiors magazine, the competition to place children in good (that is, private) schools, the need to maintain a coterie of domestic workers – you mean you don’t have a personal shopper? And so on, and on.

The dollars and hours pile up as we aim for a good life that always stays just out of reach. In moments of exhaustion we imagine simpler lives in smaller towns with more hours free for family and hobbies and ourselves. Perhaps we just live in a nightmarish arms race: if we were all to disarm, collectively, then we could all live a calmer, happier, more equal life.

But that is not quite how it is. The problem is not that overworked professionals are all miserable. The problem is that they are not.

Drinking coffee one morning with a friend from my home town, we discuss our fathers’ working habits. Both are just past retirement age. Both worked in an era in which a good job was not all-consuming. When my father began his professional career, the post-war concept of the good life was still going strong. He was a dedicated, even passionate worker. Yet he never supposed that work should be the centre of his life.

Work was a means to an end; it was something you did to earn the money to pay for the important things in life. This was the advice I was given as a university student, struggling to figure out what career to pursue in order to have the best chance at an important, meaningful job. I think my parents were rather baffled by my determination to find satisfaction in my professional life. Life was what happened outside work. Life, in our house, was a week’s holiday at the beach or Pop standing on the sidelines at our baseball games. It was my parents at church, in the pew or volunteering in some way or another. It was having kids who gave you grandkids. Work merely provided more people to whom to show pictures of the grandkids.

This generation of workers, on the early side of the baby boom, is marching off to retirement now. There are things to do in those sunset years. But the hours will surely stretch out and become hard to fill. As I sit with my friend it dawns on us that retirement sounds awful. Why would we stop working?

Here is the alternative to the treadmill thesis. As professional life has evolved over the past generation, it has become much more pleasant. Software and information technology have eliminated much of the drudgery of the workplace. The duller sorts of labour have gone, performed by people in offshore service-centres or by machines. Offices in the rich world’s capitals are packed not with drones filing paperwork or adding up numbers but with clever people working collaboratively.

The pleasure lies partly in flow, in the process of losing oneself in a puzzle with a solution on which other people depend. The sense of purposeful immersion and exertion is the more appealing given the hands-on nature of the work: top professionals are the master craftsmen of the age, shaping high-quality, bespoke products from beginning to end. We design, fashion, smooth and improve, filing the rough edges and polishing the words, the numbers, the code or whatever is our chosen material. At the end of the day we can sit back and admire our work – the completed article, the sealed deal, the functioning app – in the way that artisans once did, and those earning a middling wage in the sprawling service-sector no longer do.

The fact that our jobs now follow us around is not necessarily a bad thing, either. Workers in cognitively demanding fields, thinking their way through tricky challenges, have always done so at odd hours. Academics in the midst of important research, or admen cooking up a new creative campaign, have always turned over the big questions in their heads while showering in the morning or gardening on a weekend afternoon. If more people find their brains constantly and profitably engaged, so much the better.

Smartphones do not just enable work to follow us around; they also make life easier. Tasks that might otherwise require you to stay late in the office can be taken home. Parents can enjoy dinner and bedtime with the children before turning back to the job at hand. Technology is also lowering the cost of the support staff that make long hours possible. No need to employ a full-time personal assistant to run the errands these days: there are apps to take care of the shopping, the laundry and the dinner, walk the dog, fix the car and mend the hole in the roof. All of these allow us to focus ever more of our time and energy on doing what our jobs require of us.

There are downsides to this life. It does not allow us much time with newborn children or family members who are ill; or to develop hobbies, side-interests or the pleasures of particular, leisurely rituals – or anything, indeed, that is not intimately connected with professional success. But the inadmissible truth is that the eclipsing of life’s other complications is part of the reward.

It is a cognitive and emotional relief to immerse oneself in something all-consuming while other difficulties float by. The complexities of intellectual puzzles are nothing to those of emotional ones. Work is a wonderful refuge.

by Ryan Avent, The Economist |  Read more:
Image: Izhar Cohen

The Last Island of the Savages

The lumps of white coral shone round the dark mound like a chaplet of bleached skulls, and everything around was so quiet that when I stood still all sound and all movement in the world seemed to come to an end. It was a great peace, as if the earth had been one grave, and for a time I stood there thinking mostly of the living who, buried in remote places out of the knowledge of mankind, still are fated to share in its tragic or grotesque miseries. In its noble struggles too—who knows? The human heart is vast enough to contain all the world. It is valiant enough to bear the burden, but where is the courage that would cast it off?

                                                                            —Joseph Conrad, Lord Jim

Shortly before midnight on August 2, 1981, a Panamanian-registered freighter called the Primrose, which was traveling in heavy seas between Bangladesh and Australia with a cargo of poultry feed, ran aground on a coral reef in the Bay of Bengal. As dawn broke the next morning, the captain was probably relieved to see dry land just a few hundred yards from the Primrose’s resting place: a low-lying island, several miles across, with a narrow beach of clean white sand giving way to dense jungle. If he consulted his charts, he realized that this was North Sentinel Island, a western outlier in the Andaman archipelago, which belongs to India and stretches in a ragged line between Burma and Sumatra. But the sea was too rough to lower the lifeboats, and so—since the ship seemed to be in no danger of sinking—the captain decided to keep his crew on board and wait for help to arrive.

A few days later, a young sailor on lookout duty in the Primrose’s watchtower spotted several people coming down from the forest toward the beach and peering out at the stranded vessel. They must be a rescue party sent by the shipping company, he thought. Then he took a closer look at them. They were small men, well-built, frizzy-haired, and black. They were naked except for narrow belts that circled their waists. And they were holding spears, bows, and arrows, which they had begun waving in a manner that seemed not altogether friendly.

Not long after this, a wireless operator at the Regent Shipping Company’s offices in Hong Kong received an urgent distress call from the Primrose’s captain, asking for an immediate airdrop of firearms so that his crew could defend itself. “Wild men, estimate more than 50, carrying various homemade weapons are making two or three wooden boats,” the message read. “Worrying they will board us at sunset. All crew members’ lives not guaranteed.”

If the Primrose’s predicament seemed a thing less of the twentieth century than of the eighteenth—an episode, perhaps, from Captain Cook’s voyages in the Pacific—it is because the island where the ship lay grounded had somehow managed to slip through the net of history. Although its existence had been known for centuries, its inhabitants had had virtually no contact with the rest of humanity. Anthropologists referred to them as “Sentinelese,” but no one knew what they called themselves—indeed, no one even knew what language they spoke. And in any case, no one within living memory had gotten close enough to ask. Whether the natives’ prelapsarian state was one of savagery or innocence, no one knew either.

The same monsoon-whipped waves that had driven the Primrose onto the reef kept the tribesmen’s canoes at bay, and high winds blew their arrows off the mark. The crew kept up a twenty-four-hour guard with makeshift weapons—a flare gun, axes, some lengths of pipe—as news of the emergency slowly filtered to the outside world. (An Indian government spokesman denied reports in the Hong Kong press that the Sentinelese were “cannibals.” A Hong Kong government spokesman suggested that perhaps the Primrose’s radio officer had “gone bananas.”) After nearly a week, the Indian Navy dispatched a tugboat and a helicopter to rescue the besieged sailors.

The natives of North Sentinel must have watched the whirring aircraft as it hovered three times above the great steel hulk, lowering a rope ladder to pluck the men safely back into modernity. Then the strange machines departed, the sea calmed, and the island remained, lush and impenetrable, still waiting for its Cook or its Columbus.

Epochs of history rarely come to a sudden end, seldom announce their passing with anything so dramatic as the death of a king or the dismantling of a wall. More often, they withdraw slowly and imperceptibly (or at least unperceived), like the ebbing tide on a deserted beach.

That is how the Age of Discovery ended. For more than five hundred years, the envoys of civilization sailed through storms and hacked through jungles, startling in turn one tribe after another of long-lost human cousins. For an instant, before the inevitable breaking of faith, the two groups would face each other, staring—as innocent, both of them, as children, and blameless as if the world had been born afresh. To live such a moment seems, when we think of it now, to have been one of the most profound experiences that our planet in its vanished immensity once offered. But each time the moment repeated itself on each fresh beach, there was one less island to be found, one less chance to start everything anew. It began to repeat itself less and less often, until there came a time, maybe a century ago, when there were only a few such places left, only a few doors still unopened.

Sometime quite recently, the last door opened. I believe it happened not long before the end of the millennium, on an island already all but known, a place encircled by the buzzing, thrumming web of a world still unknown to it, and by the mesh of a history that had forever been drawing closer. (...)

This is how you get to the most isolated human settlement on earth: You board an evening flight at JFK for Heathrow, Air India 112, a plane full of elegant sari-clad women, London-bound businessmen, hippie backpackers. You settle in to watch a movie (a romantic comedy in which Harrison Ford and Anne Heche get stranded on a desert island) and after a quick nap you are in London.

Then you catch another plane. You read yesterday’s Times while flying above the corrugated gullies of eastern Turkey, watch a Hindi musical somewhere over Iran. That night, and for the week that follows, you are in New Delhi, where the smog lies on the ground like mustard gas, and where one day you see an elephant—an elephant!—in the midst of downtown traffic.

From New Delhi you go by train to Calcutta, where you must wait for a ship. And you must wait for a ticket. There are endless lines at the shipping company office, and jostling, and passing back and forth of black-and-white photographs in triplicate and hundred-rupee notes and stacks of documents interleaved with Sapphire brand carbon paper. Next you are on the ship, a big Polish-built steamer crawling with cockroaches. The steamer passes all manner of scenery: slim and fragile riverboats like craft from a pharaoh’s tomb; broad-beamed, lateen-rigged Homeric merchantmen. You watch the sun set into the Bay of Bengal, play cards with some Swedish backpackers, and take in the shipboard video programming, which consists of the complete works of Macaulay Culkin, subtitled in Arabic. On the morning of the sixth day your ship sails into a wide, sheltered bay—steaming jungles off the port bow, a taxi-crowded jetty to starboard—and you have arrived in the Andamans, at Port Blair.

In Port Blair you board a bus, finding a seat beneath a wall-mounted loudspeaker blaring a Hindi cover of “The Macarena Song.” The bus rumbles through the bustling market town, past barefoot men peddling betel nut, past a billboard for the local computer-training school (“I want to become the 21st century’s computer professional”). On the western outskirts you see a sawmill that is turning the Andaman forests into pencils on behalf of a company in Madras, and you see the airport, where workmen are busy extending the runway—out into a field where water buffalo graze—so that in a few years, big jetliners will be able to land here, bringing tour groups direct from Bangkok and Singapore. A little farther on, you pass rice paddies, and patches of jungle, and the Water Sports Training Centre, and thatched huts, and family-planning posters, and satellite dishes craning skyward. And then, within an hour’s time, you are at the ocean again, and on a very clear day you will see the island in the distance, a slight disturbance of the horizon.

by Adam Goodheart, American Scholar |  Read more:
Image: Ana Raquel S. Hernandes/Flickr

The World's Top Fighter Pilots Fear This Woman's Voice


All F/A-18 Super Hornet fighter jets come with a female voice that issues greetings and warnings, in tones ranging from stern and sharp to extremely urgent. It doesn't matter if the pilot is wearing a Malaysian, Kuwaiti, or Australian flag on his flight suit, the airplane speaks in a Tennessee twang that sounds a lot like Loretta Lynn in the middle of a very bad day. Embark on a miscue, and the jet issues an audible correction: “Roll right! Roll right!” or “Pull up! Pull up!”

American pilots refer to the voice of the Super Hornet as “Bitchin' Betty,” while among Britain's Royal Air Force she is known as “Nagging Nora.” But a real woman personifies the aircraft, 60-year-old Leslie Shook, and she recently retired after 35 years as an employee of Boeing Co. “I knew I had an accent which I did not think was desirable in the plane,” Shook said in an interview, the voice familiar to generations of fighter pilots coming in clear over a telephone. “No one ever said anything about it. I was my own worst critic as far as that goes.”

After powering up the F/A-18 and hearing Shook's greeting, a pilot won't typically hear much from her again unless the situation gets serious. You might be in danger of flying into a mountain, triggering a warning recorded by Shook. Or perhaps you have just drained half the fuel supply for the mission, in which case you will hear her repeat: “Bingo. Bingo.”

Nearly every aircraft has its own voice. The first digital voice in a U.S. combat jet was that of Kim Crow, a professional actor who still does voice-over work. For whatever reason, women’s voices have been common in fighter jets and numerous civilian aircraft.

Shook worked in St. Louis for McDonnell Douglas, which Boeing acquired in 1997. McDonnell was among the first to use voice commands on the flight deck, for both civilian and military jets, and the company favored women for the job.

Shook’s involvement with the Hornet came about by happenstance, as one more job to record following a long day in her work as a video-services coordinator for the defense contractor. That meant she helped arrange such things as video shoots, photography, audio recordings, television commercials, and speaking events.

In the mid-1990s, when an F/A-18 customer requested a voice command for the jet’s ground-avoidance system, Boeing arranged a recording session. Several people were involved, including a Navy officer, and the woman recording the command wasn’t working out.

“They did not like her voice; it was too sweet for the airplane,” Shook recalled. She was feeling tired and hungry that evening, ready to get home, and she stepped in with some voice advice. “I explained to them that Betty has a cadence, a sharpness to get your attention.”

The Navy officer suggested Shook do the recording, and the fighter jet quickly had its digital scold.

by Justin Bachman, Bloomberg | Read more:
Image: Boeing Co. and U.S. Air Force photo/Staff Sgt. Ben Fulton

Monday, March 14, 2016

In a Hail of Bullets and Fire

In late 2013, Jang Song-thaek, an uncle of Kim Jong-un, the North Korean leader, was taken to the Gang Gun Military Academy in a Pyongyang suburb.

Hundreds of officials were gathered there to witness the execution of Mr. Jang’s two trusted deputies in the administrative department of the ruling Workers’ Party.

The two men, Ri Ryong-ha and Jang Su-gil, were torn apart by antiaircraft machine guns, according to South Korea’s National Intelligence Service. The executioners then incinerated their bodies with flamethrowers.

Jang Song-thaek, widely considered the second-most powerful figure in the North, fainted during the ordeal, according to a new book published in South Korea that offers a rare glimpse into the secretive Pyongyang regime.

“Son-in-Law of a Theocracy,” by Ra Jong-yil, a former deputy director of the National Intelligence Service, is a rich biography of Mr. Jang, the most prominent victim of the purges his young nephew has conducted since assuming power in 2011.

Mr. Jang was convicted of treason in 2013. He was executed at the same place and in the same way as his deputies, the South Korean intelligence agency said.

The book asserts that although he was a fixture of the North Korean political elite for decades, he dreamed of reforming his country. “With his execution, North Korea lost virtually the only person there who could have helped the country introduce reform and openness,” Mr. Ra said during a recent interview.

Mr. Ra, who is also a professor of political science and a former South Korean ambassador to Japan and Britain, mined existing publications but also interviewed sources in South Korea, Japan and China, including high-ranking defectors from the North who spoke on the condition of anonymity.

Mr. Jang met one of the daughters of North Korea’s founder, Kim Il-sung, while both attended Kim Il-sung University in the mid-1960s. The daughter, Kim Kyong-hee, developed a crush on Mr. Jang, who was tall and humorous — and sang and played the accordion.

Her father transferred the young man to a provincial college to keep the two apart. But Ms. Kim hopped in her Soviet Volga sedan to see Mr. Jang each weekend.

Once they married in 1972, Mr. Jang’s career took off under the patronage of Kim Jong-il, his brother-in-law and the designated successor of the regime.

In his memoir, a Japanese sushi chef who cooked for Kim Jong-il from 1988 to 2001, and who goes by the alias Kenji Fujimoto, remembered Mr. Jang as a fun-loving prankster and a regular at banquets that could last until morning or even stretch over a few days. A key feature of the events was a “pleasure squad” of young, attractive women who would dance the cancan, sing American country songs or perform a striptease, according to the book and accounts by defectors.

Mr. Jang also mobilized North Korean diplomats abroad to import Danish dairy products, Black Sea caviar, French cognac and Japanese electronics — gifts Mr. Kim handed out during his parties to keep his elites loyal.

But North Korean diplomats who have defected to South Korea also said that during his frequent trips overseas to shop for Mr. Kim, Mr. Jang would drink heavily and speak dejectedly about people dying of hunger back home.

Few benefited more than Mr. Jang from the regime he loyally served. But he was never fully embraced by the Kim family because he was not blood kin. This “liminal existence” enabled him to see the absurdities of the regime more clearly than any other figure within it, Mr. Ra wrote.

Mr. Ra said Hwang Jang-yop, a North Korean party secretary who defected to Seoul in 1997 and lived here until his death in 2010, shared a conversation he once had with Mr. Jang. When told that the North’s economy was cratering, Mr. Jang responded sarcastically: “How can an economy already at the bottom go further down?”

Mr. Jang’s frequent partying with the “pleasure squad” strained his marriage. Senior defectors from the North said it was an open secret among the Pyongyang elite that the couple both had extramarital affairs.

Their only child, Jang Kum-song, killed herself in Paris in 2006. She overdosed on sleeping pills after the Pyongyang government caught wind of her dating a Frenchman and summoned her home.

Still, the marriage endured. When Kim Jong-il banished Mr. Jang three times for overstepping his authority, his wife intervened on his behalf.

After Mr. Kim suffered a stroke in 2008 and died in 2011, Mr. Jang helped his young nephew, Kim Jong-un, establish himself as successor. At the same time, he vastly expanded his own influence — and ambition.

by Choe Sang-Hun, NY Times | Read more:
Image: Kyodo, via Reuters

What Would It Mean To Have A 'Hapa' Bachelorette?

On a recent episode of The Bachelor, the ABC dating reality show that ends its 20th season Monday night, contestant Caila Quinn brings Ben Higgins home to meet her interracial family.

"Have you ever met Filipinos before?" Quinn's mother asks, leading Higgins into a dining room where the table is filled with traditional Filipino food.

"I don't know," he replies. "No. I don't think so."

As they sit around the adobo and pancit, Quinn's father talks to Higgins, white man to white man. What comes with dating Quinn, the father says, "is a very special Philippine community." Quinn grimaces.

"I had no idea what I was getting into when I married Caila's mother," the father says. But being married to a Filipina, he assures Higgins, has been "the most fun" and "magical."

This scene can be read as an attempt by The Bachelor franchise to dispel criticisms (and the memory of a 2012 lawsuit) concerning its whitewashed casts. It shows how these attempts can be clunky at best, offensive and creepy at worst.

Quinn's run also demonstrates how, as this rose-strewn, fantasy-fueled romance machine tries to include more people of color, diversification looks like biracial Asian-American women, often known as "hapa."

Among the 19 women who have won the "final rose" since The Bachelor premiered in 2002, two — Tessa Horst and Catherine Giudici — have been biracial Asian-white. All other winners, aside from Mary Delgado in 2004 who was Cuban-American, appear to have been white. As these handy graphics by writer and video artist Karen X. Cheng show, in the previous seven years, the only women of color who lasted into the final few weeks were of mixed-race Asian-white background. (...)

To understand why only a narrow group of women of color — biracial Asian-white women — survive in this world is to delve into romantic tropes, the stuff The Bachelor is made of.

"As objects of beauty, these women are benefiting from two helpful stereotypes about female desirability," said Ann Morning, associate professor of sociology at New York University. One is whiteness as the persisting standard of beauty. The other is Asian women as sexualized, exotic and submissive.

Taken alone, the first stereotype can be detrimental. "Today, being white is often perceived as a kind of boring, colorless identity," Morning said. But that stereotype about whiteness can work to balance negative stereotypes about Asian women.

Lily Anne Welty Tamai, curator of history at the Japanese American National Museum (and a friend of mine), explained where these stereotypes about Asian women come from. The trope of Puccini's 1904 Madama Butterfly paved the way for American incarnations of a tragic love story between an American soldier and Asian woman in the mid-20th century, when American soldiers brought home war stories — and sometimes brides — from Asia, where women were often part of the conquest. Popular narratives included the 1957 film Sayonara and the 1989 musical Miss Saigon. ("I guess they just never got around to making the Korea version," Tamai said.)

These stories cemented in the American consciousness the idea of the Asian woman as the foreign sex toy: the geisha, the china doll, the "me love you long time" sex worker.

by Akemi Johnson, NPR |  Read more:
Image: Kelsey McNeal/ABC via Getty Images

Mr. Spock at the “Sh*t Show”

Barack Obama, the rise of Trump and a world gone crazy

Barack Obama does not much appreciate being blamed for the rise of Donald Trump. But after reading Jeffrey Goldberg’s immense article on “The Obama Doctrine” in the Atlantic, which was based on several lengthy interviews with the president and his inner circle of advisors, I suspect Obama also knows that the charge contains a germ of truth, on a deep karmic or psychological level most of his critics are unlikely to grasp.

Goldberg’s Obama magnum opus is well written, highly intelligent and impressively researched. It’s also massively narcissistic and sycophantic, in vintage Beltway-insider style, marinating in details and locations and celebrating its author’s access to power. We’re in the White House dining room, or John Kerry’s private office at the State Department, or aboard Air Force One on the runway in Kuala Lumpur. We are discussing serious things: the perceived personality defects of Vladimir Putin, the contemporary relevance of Thomas Hobbes’ “Leviathan,” and the importance of America’s relationship to Asia — a question Obama hoped would be central to his presidency that got trumped, or Trumped, by other concerns.

Even as Goldberg sinks, little by little, into the swamp of Washington groupthink — honestly, he should know better than to refer to Hugo Chávez, the late Venezuelan president, as a “dictator” — he teases out an intriguing portrait of the blend of caution, calculation and hopefulness that have characterized Obama’s foreign policy. Through it all, we can also discern a depiction of the president’s central flaw. It may not be a flaw at all, depending on your view of such things; it is certainly preferable to other flaws we could name. (Obama is presumed to favor Hillary Clinton over Bernie Sanders, but Clinton is portrayed throughout the Goldberg article as an incautious military interventionist who drove the administration’s disastrous Libya policy and wanted to compound the error in Syria. It’s a lot more like an indictment than an endorsement.)

Obama is frequently described as cool or sardonic or detached or analytical or deliberative; he is without doubt the most intellectually gifted American president in many decades. (Woodrow Wilson would be the most recent candidate, and before him probably Lincoln.) But what underlies Obama’s impressive book-learning and nuanced strategic thinking is a well-known failing of the intellectual class: He doesn’t seem to know much about human nature, and appears continually surprised by how stupid, fearful and irrational his fellow citizens and fellow planetary inhabitants are. He has read more than enough of Hobbes and John Locke, more than enough foreign-policy papers by officials of the George H.W. Bush administration. (His avatar in that field is, no kidding, Brent Scowcroft, which explains a lot.) He’s a little short, so to speak, on Freud and Nietzsche, on “Sympathy for the Devil” and “Apocalypse Now” and “Blue Velvet.”

To my mind, the most illuminating of Goldberg’s numerous Obama-up-close anecdotes is the one about the presidential press conference at a G20 summit in Turkey last November, a few days after the ISIS attacks in Paris that killed 130 people. Obama seemed increasingly exasperated and puzzled that the press corps never asked him about climate change or the conflict in Ukraine or the Iranian nuclear negotiations or any other possible summit topics. Every question was about ISIS and terrorism, culminating in a CNN reporter’s infamous outburst: “Why can’t we take out these bastards?”

That was the week when Trump first proposed barring all Muslims from entering the United States, a suggestion that all normal and reasonable people immediately rejected as outrageous — and that propelled him to a huge lead in the Republican campaign he has yet to surrender. Every Republican elected official everywhere in the country seized on the Syrian refugee crisis as a potential wedge issue, demanding that no more migrants be admitted under any circumstances (except maybe the Christians, in Ted Cruz’s iteration). Obama barely appeared to have noticed any of this. An unnamed official told Goldberg that it wasn’t until the next day, after a flight to Manila, that the president’s advisors figured out that “everyone back home had lost their minds.”

Whoever said that — it may have been Obama himself — was entirely correct. America’s reaction to the Paris attacks came pretty close to mass hysteria, and had nothing to do with the actual danger represented by ISIS, which was and is insignificant. There are many valid criticisms to raise about Obama’s approach to foreign policy and national security, including the dubious effects of the drone war and his administration’s obsessive crackdown on leakers and whistleblowers. But we have been very fortunate, in my judgment, to have had a president for seven-plus years who has valued logic over panic when it comes to the issue of Islamic terrorism, and who has consistently sought to frame that problem in global and historical terms and not to exaggerate its importance.

I’m delighted to learn, via Goldberg, that Obama often reminds his staff that far more Americans die every year from falls in the bathtub than die at the hands of Muslim terrorists. (John Kerry, on the other hand, comes off like a raving lunatic, envisioning a future in which ISIS destroys European society and leads to the return of 1930s-style fascism. Thank Christ he didn’t get elected.) I can understand why he doesn’t say this stuff in public, but Obama is justified in describing Syria — and, by extension, the larger quagmire of the Middle East — as a “shit show” that won’t be solved during his presidency or the next one or the one after that. On any logical basis, it’s difficult to challenge his argument that the least bad choice in Middle East policy is to disentangle and disconnect to the greatest extent possible and pivot toward America’s relationship with the developing nations of Asia, Africa and Latin America, where constructive change is far more achievable.

But Americans, as you may have noticed, don’t do logic all that well. We do fear and passion and soaring guitar chords and “freedom isn’t free.” We love tough talk and paranoid fantasies and bizarre, apocalyptic delusions. Obama is a master of political oratory, which is what got him elected in the first place. But given his apparent puzzlement that the stupid, primitive and seductive passions that run throughout human history — and the deranged current of jingoistic nationalism that runs throughout American history — have not been conclusively vanquished by the light of reason, he still adds up to the strangest and unlikeliest president ever. So I think it’s about halfway true that Obama’s persistent mode of cool drove us crazy and paved the way for the empty hotness of Trump, although it’s a lot more true to say that we were already crazy and Trump was ready and waiting.

Trump is the anti-Obama, the distorted reflection, the choleric abreaction to Obama’s phlegmatic calm. If Obama is the most Apollonian political figure ever, Trump is the Dionysian comeback. If Obama is the only president ever to be compared with Mr. Spock — Goldberg does it at least twice — then Trump is the sadistic, bearded Spock from that alternate-universe “Star Trek” episode. (As Obama probably knows, that would be season 2, episode 4: “Mirror, Mirror.”) Obama did not create Trump, because the Trumpian force has always been with us and within us. But in relying too much on his misguided assumption that humans are governed by reason more than emotion, and that the atavism, tribalism and nihilism Trump so perfectly embodies were in retreat, Obama may unwittingly have released Trump from his dungeon in the American unconscious.

As is customary when he feels that his fundamental worldview is under attack, Obama managed to sound bemused rather than outraged when Margaret Brennan of CBS News brought up what a Wall Street Journal editorial has called the “Obama-Trump dialectic” at a White House press conference this week. “I have been blamed by Republicans for a lot of things,” the president said, “but being blamed for their primaries and who they’re selecting is novel.” He went on to say, reasonably enough, that “Republican political elites” and the right-wing media have been pouring poison into the lagoon for years, and should hardly be surprised that a mutant monster has come crawling out of it. He’s neither Republican nor Democrat! Neither man nor crustacean!

by Andrew O'Hehir, Salon |  Read more:
Image: Pete Souza

10 Awkward Friendships You Probably Have

Sunday, March 13, 2016

The Residents


William Adolphe Bouguereau, Fishing For Frogs, 1882

Pax, the 'iPhone of Vaporizers'

Josh, a banker at a hedge fund in Manhattan, loves his Pax vaporizer.

He uses the small, sleek gadget to heat up his stash of weed just to the right temperature in under a minute — hot enough so he can inhale it and feel the effects, but not so hot that it burns — a few times each week.

He presses a button to turn it on while it’s in his pocket, and about a minute later, pulls it out, puts it to his lips, and inhales.

“You breathe in, you breathe out, and you’re done,” he recently told Tech Insider.

Josh (we're not using his real name for obvious reasons) said he’s used his Pax to get high outside of a movie theater, on the subway, at concerts, and even on his way into the office.

He likes it because it’s not only convenient, but also inconspicuous. Because you’re not actually smoking, there isn’t as much of an odor, and it doesn’t create a huge cloud of smoke that draws attention.

Pax, he said, “has truly pushed the envelope and brought [smoking weed] into the 21st century.”

The Pax vaporizer is made by Pax Labs, one of the leading — and certainly one of the buzziest — companies cashing in on the growing demand for small, handheld battery-powered vaporizers that have revolutionized how people smoke weed in public.

The second generation Pax, the Pax 2, came out last year and is packed with technology: It has multiple sensors to measure temperature and an accelerometer that detects movement. The mouthpiece recognizes when your lips touch it, telling the heater to turn on and start heating up whatever you've packed inside of it. Insulation keeps the smooth exterior cool while the oven heats up to temperatures as high as 455 degrees Fahrenheit.
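
For the curious, here’s a toy sketch of how that kind of sensor-gated heating might be wired together. It’s purely illustrative — Pax Labs’ firmware is proprietary and not public, so every name and threshold below is hypothetical; the only figure taken from the article is the 455-degree ceiling.

```python
# Illustrative pseudo-firmware for a sensor-driven vaporizer like the Pax 2.
# Hypothetical names and thresholds throughout; only the 455°F cap comes
# from the article.

MAX_OVEN_F = 455  # the article's stated maximum oven temperature

class Vaporizer:
    def __init__(self, set_point_f=380):
        self.set_point_f = min(set_point_f, MAX_OVEN_F)
        self.oven_f = 70.0       # ambient temperature
        self.heater_on = False

    def on_lip_sense(self, lips_touching):
        # The mouthpiece sensor gates the heater: heat only while in use.
        self.heater_on = lips_touching

    def on_accelerometer(self, moving):
        # Motion wake-up: a device sitting still cools down to save battery.
        if not moving:
            self.heater_on = False

    def tick(self):
        # Simple thermostat: approach the set point, never exceed the cap.
        if self.heater_on and self.oven_f < self.set_point_f:
            self.oven_f = min(self.oven_f + 15, self.set_point_f)
        else:
            self.oven_f = max(self.oven_f - 5, 70.0)

pax = Vaporizer(set_point_f=390)
pax.on_accelerometer(moving=True)
pax.on_lip_sense(lips_touching=True)
for _ in range(25):
    pax.tick()
print(f"oven at {pax.oven_f:.0f}°F")  # reaches the set point after warm-up
```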

It's been referred to as "the iPhone of vaporizers," and, like the iPhone, it comes with a premium price tag — the Pax 2 will set you back $289.99.

by Tim Stenovec, Tech Insider | Read more:
Image: Pax

The Facebook Breakup

For Kate Sokoloff, a brand strategist in Portland, Ore., the Facebook mirror of her breakup with her boyfriend of three years was like “an emotional sucker punch,” she said. “Not 15 minutes after we broke up four years ago, and probably while he was still parked outside of my house, he changed his status to ‘single.’”

This meant that all of the couple’s Facebook friends, including her teenage sons, were instantly notified. “There was no hiding or time to cry on my own,” said Ms. Sokoloff, now 55.

She did message friends, asking them to remove any photos of herself and her former partner from their own Facebook albums, but she remembers wishing “there was a Facebook vacuum cleaner that could suck every trace of our relationship off the Internet. Photos, in particular. In fact, some just popped up yesterday.”

Since last November, there has been such a tool, part of a kit the social network has designed to manage and curate the digital archive that is growing with each relationship. It’s like cleaning your closet, said Kelly Winters, a product manager on Facebook’s designated “Compassion Team,” a changing squad of product managers and designers, engineers, researchers, social scientists and psychologists. “You don’t want to keep anything around that doesn’t spark joy,” she said, echoing the mantra of Marie Kondo, the Japanese decluttering guru.

Three million users have already deployed some aspect of the breakup flow, as it’s called, by choosing to minimize what they see of an ex going forward, and similarly hide their own postings, settings that can easily be reversed if the future brings a change of heart or a dulling of the ache.
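
As a rough mental model — and nothing more; Facebook’s internals aren’t public, and these field names are invented — the reversibility the article describes amounts to a pair of flags you can flip back:

```python
# A toy model of the reversible "take a break" settings described above.
# All names here are hypothetical stand-ins, not Facebook's actual API.

from dataclasses import dataclass

@dataclass
class BreakupFlowSettings:
    see_less_of_ex: bool = False       # ex appears only if you visit their profile
    hide_my_posts_from_ex: bool = False

    def take_a_break(self):
        self.see_less_of_ex = True
        self.hide_my_posts_from_ex = True

    def reverse(self):
        # "easily be reversed if the future brings a change of heart"
        self.see_less_of_ex = False
        self.hide_my_posts_from_ex = False

settings = BreakupFlowSettings()
settings.take_a_break()
print(settings)
settings.reverse()
print(settings)
```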

Undoing the vacuum tool (to use Ms. Sokoloff’s words for the engineering feat that harnesses what is known as distributed computing to untag hundreds or even thousands of images that no longer spark joy) is more laborious. (...)
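
The “distributed computing” the Times gestures at is, in spirit, a fan-out: split a batch of hundreds or thousands of photos across workers so the untags don’t run one at a time. Here’s a minimal sketch of that pattern, assuming a simple thread pool stands in for whatever Facebook actually runs; the untag_photo call and the IDs are hypothetical placeholders.

```python
# A minimal fan-out sketch: untag one user from many photos in parallel.
# untag_photo is a hypothetical stand-in for an internal API call.

from concurrent.futures import ThreadPoolExecutor

def untag_photo(user_id, photo_id):
    # In a real system this would issue the untag request for one photo.
    return f"user {user_id} untagged from {photo_id}"

def untag_all(user_id, photo_ids, workers=8):
    # Distribute the batch across a pool instead of looping serially.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda pid: untag_photo(user_id, pid), photo_ids))

results = untag_all("kate", [f"photo-{n}" for n in range(1000)])
print(len(results), "photos untagged")
```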

Finding the right tone was a big part of the design process, Ms. Albert said, language being crucial in creating a tool kit that would be flexible enough to address a 14-year-old breaking up with her boyfriend of four weeks as well as longtime married couples with children.

It also had to be neutral, not familiar, and not in any way hortatory. “If designers are in charge of surprise and delight,” she said, “what does it mean to design for aspects of life that are painful?”

Facebook language isn’t lyric poetry, by any means, but it does the trick. If you’re able to stumble onto the breakup flow (not an easy task, at this point; it’s only available on mobile and only in the United States), you should discover, as Ms. Winters described, a bento box of options.

“Take a Break. Here are some changes that might be helpful. We won’t notify Taylor of any changes you make. See less of Taylor. See Taylor on Facebook only if you visit his profile.” And so on. Mostly the language is like that of an instruction manual — “Turn on tag approvals for posts and photos you’re tagged in” — though at the end, it veers into self-care: “Reach out to people you trust for support. Stay Active. …”

There were some ideas that were, as Ms. Albert said, “out of scope to build, the idea of locking yourself out, temporarily, from one person’s account, trying to prevent that stalking behavior.” Technologically, she said, it was a bridge too far, and it led to a bigger conversation about what role Facebook wants to play in people’s lives. “It would be like Starbucks not accepting your credit card,” she said.

And just maybe such stalking is productive for some, a step toward resilience that would never accrue from watching baby sloth videos or mash-ups of Donald Trump tweets.

Ms. Sokoloff, the brand strategist who yearned for a digital vacuum cleaner, wondered if there wasn’t some emotional cost in making all traces of a relationship disappear. “Is there something important in the healing process that would be lost if we can essentially have the Facebook equivalent of the dream removers from ‘Eternal Sunshine of the Spotless Mind’?”

by Penelope Green, NY Times |  Read more:
Image: NY Times screenshots