Saturday, December 10, 2016

How the Twinkie Made the Superrich Even Richer

As fans gathered on Rockefeller Plaza in Manhattan, Al Roker pulled up in a big red delivery truck, ready to give America what it wanted: Twinkies.

The snack cakes flew through the air into the crowd pressed against metal barriers. One man shoved cream-filled treats into his mouth. Another “Today” host tucked Twinkies into the neckline of her dress.

Across the nation in the summer of 2013, there was a feeding frenzy for Twinkies. The iconic snack cake returned to shelves just months after Hostess had shuttered its bakeries and laid off thousands of workers. The return was billed on “Today” as “the sweetest comeback in the history of ever.”

Nowhere was it sweeter, perhaps, than at the investment firms Apollo Global Management and Metropoulos & Company, which spent $186 million in cash to buy some of Hostess’s snack cake bakeries and brands in early 2013.

Less than four years later, they sold the company in a deal that valued Hostess at $2.3 billion. Apollo and Metropoulos have now reaped a return totaling 13 times their original cash investment.

Behind the financial maneuvering at Hostess, an investigation by The New York Times found a blueprint for how private equity executives like those at Apollo have amassed some of the greatest fortunes of the modern era.

Deals like Hostess have helped make the men running the six largest publicly traded private equity firms collectively the highest-earning executives of any major American industry, according to a joint study that The Times conducted with Equilar, a board and executive data provider. The study covered thousands of publicly traded companies; privately held corporations do not report such data.

Stephen A. Schwarzman, a co-founder of Blackstone, took home the largest haul last year: nearly $800 million. He and other private equity executives receive more annually than the leaders of Facebook and Apple, companies that revolutionized the way society communicates.

The top executives at those six publicly traded private equity firms earned, on average, $211 million last year — which is about what Leon Black, a founder of Apollo, received. That amount was nearly 10 times what the average bank chief executive earned, though firms like Apollo face less public scrutiny on pay than banks do.

Private equity firms note that much of their top executives’ wealth stems from owning their own stock and that they have earned their fortunes bringing companies back to life by applying their operational and financial expertise. Hostess, a defunct snack brand that was quickly returned to profitability, is a textbook example of the success of this approach.

Yet even as private equity’s ability to generate huge profits is indisputable, the industry’s value to the work force and the broader economy is still a matter of debate. Hostess, which has bounced between multiple private equity owners over the last decade, shows how murky the jobs issue can be.

In 2012, the company filed for bankruptcy under the private equity firm Ripplewood Holdings. Months later, with Ripplewood having lost control and the company’s creditors in charge, Hostess was shut down and its workers sent home for good.

Without investment from Apollo and Metropoulos, Hostess brands and all those jobs might have vanished forever after the bankruptcy. The way these firms see it, they created a new company and new jobs with higher pay and generous bonuses.

But the new Hostess employs only 1,200 people, a fraction of the roughly 8,000 workers who lost their jobs at Hostess’s snack cake business during the 2012 bankruptcy.

And some Hostess employees who got their jobs back lost them again. Under Apollo and Metropoulos, Hostess shut down one of the plants it had reopened, in Illinois, eliminating 415 jobs.

The collapse and revival of Hostess illustrates how even in a business success, many workers don’t share in the gains. The episode also provides a snapshot of the economic forces that helped propel Donald J. Trump to the White House.

Since losing his job at Hostess in 2012, Mark Popovich has had three jobs, including one that paid about $10 an hour, half what he made at the Twinkie-maker. A lifelong Democrat and devoted “union man,” Mr. Popovich said he supported Mr. Trump, the first time he ever voted Republican.

“It’s getting old, getting bounced around all the time,” said Mr. Popovich, a 58-year-old Ohio resident.

Such frustrations stem from broader shifts in the economy, as all types of companies turn to automation to cut costs and labor unions lose their influence. While these changes have helped keep companies profitable, private equity has used these shifts in the workplace to supercharge wealth far beyond that of the typical chief executive.

And yet, Mr. Trump did not focus on private equity on the campaign trail, instead blaming the plight of the American working class on a shadowy cabal of elitist Democrats and Wall Street bankers who support trade deals that ship jobs overseas.

“People understand jobs going to China,” said Michael Hillard, an economics professor at the University of Southern Maine. “But no one has ever heard of these private equity firms that come in and do all this financial engineering. It is much more complicated and less visible.”

by Michael Corkery and Ben Protess, NY Times |  Read more:
Image: Shaw Nielsen

Khandahar & Vaudou Game


Todd Solondz, Wiener-Dog
via:

Dan Savage's Open Letter to Paul Allen on Lidding I-5

Dear Paul,

God, I fucking hate open letters. Some asshole writes something, addresses you in the first person, tosses it up somewhere, and you're somehow obligated to drop whatever you're doing and respond. I've made it a policy never to take the bait, Paul, because responding to one asshole's open letter means getting open letters from two dozen other assholes.

And fuck that, right?

But I'm writing you an open letter, Paul, because I'm an asshole, I guess. And because there's something I've wanted to say to you for 20 years, and I didn't run into you the one time I went to a Seahawks game, and the thing I've been wanting to say to your face all these years is suddenly relevant.

You really fucked up the Seattle Commons. But here's the good news, Paul: You have a chance to redeem yourself. There's a new effort to build a large urban park in the heart of Seattle. The Washington State Convention Center is holding an open house on December 7 to discuss a planned expansion and to showcase ideas for the public benefit package offered by the project—and they're inviting the Lid I-5 steering committee to speak. Maybe you should show up. Maybe you should get behind the Lid I-5 movement.

And by "get behind it," Paul, I mean "pay for it."

Readers of this open letter who aren't Paul Allen or weren't around in the mid-1990s may not have heard of the Seattle Commons. A brief recap: John Hinterberger, a columnist for the Seattle Times, thought Seattle needed a large central park—something like New York City's Central Park, Chicago's Grant Park, San Francisco's Golden Gate Park—and suggested creating one in South Lake Union [ed. near the Space Needle]. At the time, South Lake Union was a run-down, mostly empty neighborhood with some scattered (and cheap!) apartments, a little light industry, and acres of parking lots. Local developers had already transformed Belltown, a similar neighborhood, by packing it with rows of condo towers (and pushing out the cheap apartments and light industry) and they were eyeing South Lake Union.

You got behind the Seattle Commons, Paul, as the plan came to be known, and it immediately became associated with you and, by extension, with the tech boom and tech money and tech workers—and the resentments being stirred up by rent hikes and rising housing costs. (Sound familiar?) You loaned the Commons campaign $20 million to buy up property within the proposed boundaries of the park, you started buying up property around the perimeter of the park, you pledged millions more to endow a fund for maintenance and security, so that the park, if built, would not drain resources away from Seattle's other parks.

You were the man behind the Seattle Commons—the man underwriting it, the man who stood to profit most if the park got built.

And this is where you fucked up, Paul: Seattle voters were asked to tax themselves to pay for the construction of the park. The park would be built only if the voters agreed to cover the $111 million construction costs. The levy, over time and with interest, would cost taxpayers more than a quarter of a billion dollars.

We voted on the Seattle Commons twice: a 70-ish acre park in 1995, and a scaled-back 60-ish acre park in 1996. I supported the Seattle Commons, but the Stranger Election Control Board—now dominated by sensitive millennials, then dominated by fucking hippies—urged our readers to vote against the plan. It's possible the Commons would have passed if Seattle's Only Newspaper had backed it. So The Stranger fucked up too, Paul. But you fucked up worse.

Again, you stood to profit if the park got built. And the anti-Commons campaign pretended it could prevent you or anyone else from profiting off the redevelopment of South Lake Union by blocking construction of the park.

"The Commons proposes to change this commercial, light-industrial neighborhood to an upscale new 'urban village' using the park as a front lawn for luxury apartments and condominiums," the anti-Commons campaign wrote in the 1996 voters' pamphlet. "The Commons plan disregards the people who want to continue to live and work in South Lake Union."

Seattle voters rejected the Commons both times it went to the ballot. But voting down the park didn't save light industry in South Lake Union, or any of those cheap (and crumbling) apartments, and not a single parking lot was spared. The condos and office buildings went up anyway. Developers profited just the same—you profited—and displaced businesses didn't get any financial assistance to help them relocate, which had been part of the Commons plan, and the public didn't get a park out of the deal.

"Why haven't any of the new software billionaires this region has spawned put up the money?" Timothy Egan wondered aloud in the New York Times before the first Commons vote. You didn't take the hint.

This is what I've wanted to say to you for 20 years, Paul: You could have and you should have put up the money to build the park. There was a stock rally between the first and second Commons vote, and you made a billion dollars in one day. You should have cashed out a quarter of that day's take and paid for the park. (In all fairness, Paul, I can't find the headline about your billion-dollar windfall. Maybe it happened, maybe I was/am high. Even so, the value of your Microsoft stock—and you owned hundreds of millions of shares at the time—more than doubled between the first and second votes. You could have paid for the park by cashing out a minuscule slice of your stock.)

You should have called a press conference before the first vote and said, "Here's the money, build the park, name it after my mom."

Allen Park would be there now—Seattle's central park—and your name would be on the lips of Seattle residents. People would say, "We're headed to Allen Park," "My parents just moved into a condo near Allen Park," "We're playing softball today in Allen Park," "That's the part of Allen Park where the gays have sex in the bushes."

You've always wanted to leave your mark on the region, Paul, the place where you made your fortune. You hunger for a legacy. But no one is going to look at an office building or a condo tower 50 years from now and think, "Man, that Paul Allen, he was a visionary!"

What about the Museum of Pop Culture? Formerly the EMP Museum? Formerly the Experience Music Project and Science Fiction Museum and Hall of Fame? Formerly the Experience Music Project? Originally the Jimi Hendrix Museum? Sorry, Paul, but MoPOP is going to be a food court 10 minutes after you're dead. They'll be selling tacos in the lounges and playing laser tag in the Sky Church before you're in the ground. (And what did that building cost you? One hundred million dollars—more than paying for the Commons would've cost you, Paul, once you toss in staff, tchotchkes, programming, and paying consultants to change the name every five fucking years.)

Your philanthropic efforts, while worthy, are scattered and random. Your collections will be broken up, your sports teams will be sold off, newer office buildings and swanker condos will go up, MoPOP will be the world's bling-blang-iest food court. You will not be remembered, Paul. Bill Gates will be the man who cured malaria, and you'll be a footnote, if that, on a Wiki page about South Lake Union. (Strike that, Paul: You're already a footnote. Go search "South Lake Union" on Historylink.org. You're literally a footnote.)

But you have a second chance, Paul. A second chance to get this right.

by Dan Savage, The Stranger |  Read more:
Image: Levi Hastings

Friday, December 9, 2016

The Outline

[ed. Can't you just feel the excitement?! Me neither.]

In an introductory post to his colorful new project, The Outline, Joshua Topolsky writes that the site is “a new kind of publication for a new kind of human.” The veteran editor (formerly of Engadget, The Verge, and Bloomberg Digital) knows that’ll make some people roll their eyes. That’s OK. Per The Outline’s cool-kids-only tagline: “It’s not for everyone. It’s for you.”

For all its esoteric posturing, The Outline’s MO is actually pretty straightforward. Practically, it’s a source for articles, videos, quotes, photos, graphs, and games created and curated by members of the publication’s staff. Visually, it’s the amphetamine-addled cousin of Bloomberg’s irreverent 2014 redesign (which Topolsky also spearheaded). Functionally, it’s a discovery platform. And existentially, it’s a gamble on the way people will consume information in the future.

Topolsky is betting that an appetite exists not just for the content The Outline has on offer, but the way it presents that content. Its user interface is fashionably opaque—mysterious enough to keep you engaged, but not so impenetrable that it turns you away. This approach has worked before, most famously—and most successfully—for Snapchat. “We make it easy to play with,” Snap CEO Evan Spiegel recently said of his app’s interface. “You can’t break anything.”

The Outline takes that philosophy and applies it to online publishing. Everything on the platform—ads included—hinges on cards (Topolsky calls them “atoms”), customizable templates, assembled by the publication’s editors, that visitors to the site can thumb through on their phones. Swiping sideways takes you to a new bit of content, sight unseen. If you like what you find, you can swipe vertically to dive deeper into a narrative. If you decide, three paragraphs in, that the story isn’t for you, simply swipe to the next card. “What we really want is for people to explore,” Topolsky says.

The result is a less utilitarian approach to news consumption. “My feeling when I look at something like this is it’s the experience that I’m supposed to consume,” says Lanny Geffen, vice president of strategy and UX at design studio OneMethod. “I’m supposed to flip through and have a messy, curious, where-do-I-go-next kind of experience. And that’s authentically digital.”

Another sign of digital authenticity: The Outline acknowledges that most of its traffic will arrive via links from Facebook, Twitter, email, and chat. Once a visitor has arrived, the design’s job is to keep her engaged. Not with a nav bar or list of latest stories (The Outline’s organization places little emphasis on things like chronology or story hierarchy), but with its labyrinthine browsing experience. This can make it difficult to find your way back to a specific story, but it also encourages readers to swipe in search of content they haven’t seen.

It’s hard not to think of slot machines. In her book Addiction by Design, MIT anthropologist Natasha Schüll describes how casinos have optimized slots to maximize what they call time on device. By varying its payout, a well-designed slot machine can lull gamblers into a state of prolonged, undivided attention to the task at hand. App developers have their own version of time on device (they call it time in app), and they have their own versions of variable payout (spent any time on Tinder lately?). Online publishers like The Outline (and WIRED, for that matter) have a version of time on device, too. They call it time on site. Thumbing through The Outline, unsure what your next swipe might bring, one can’t help but sense that it’s designed to appeal to the same compulsions as Tinder, and, yes, slot machines.

by Liz Stinson, Wired |  Read more:
Image: The Outline

Color of the Year

Donna Summer


[ed. It's nice to have some good backup singers.]

Big Bother Is Watching

Why Slack is designed to never give you any.

In Silicon Valley, communicating is not something you do; it is a problem you solve. Slack, currently one of tech’s hottest properties, started out as a simple in-house chat app for a videogame company. But in the great tradition of startup pivots, the Slack team realized that the real action was in their chat app, not the convoluted game they were creating. In 2013 they decided to roll out Slack to do for others what it had done for them: improve their office communications.

Since then, the app has grown to become the biggest, most bloated minder to ever patrol the digitized workplace. Billing itself as the mega-app that will soon make email obsolete, it has three million daily users, including, as its sales team is keen to tell you, most of the Fortune 100. For those companies that hitch their wagon to it, Slack is increasingly the piece of software that mediates the entire work experience. You chat with your coworkers. You check your social media feeds. You store your documents, track your budgets, book your travel, update your calendars, wrangle your to-do lists, order your lunch. It is a constant, thrumming presence, a hive of notifications and tasks and chitchat that nags at workers and reminds them that there’s always more to do, more to catch up on—and that nothing goes unrecorded. Its name, despite the superficial connotation of hang-loose downtime, indicates its ultimate, soaring ambition: Slack, the company’s CEO, Stewart Butterfield, recently revealed, is an acronym for Searchable Log of All Conversation and Knowledge.

“Everything in Slack—messages, notifications, files, and all—is automatically indexed and archived so that you can have it at your fingertips whenever you want,” chirps the company’s marketing copy. A once harried, now grateful knowledge worker confronts Information with a capital “I,” swinging his sword at the looming pile. Slack cheers on the little guy: “Slice and dice your way to that one message in your communication haystack.”

Slack tracks and catalogs everything that passes through it, and that is supposed to be a perk. But if the little guy can find anything in the archive, so can his risk-mitigating boss.

The Game’s the Thing

Try Slack for the first time, and you will be struck by its informal vibe, cribbed, as far as I can tell, from Richard Scarry’s Busytown. There are a hundred cute ways to tell your coworker you “Got it,” where “it” is probably a sales report. The thumbs-up emoji is in heavy rotation. There are no forced salutations or stiff valedictions. (If “All best” is the first casualty of the email-less revolution, I am guessing no one will cry.) GIFs are tolerated—even encouraged. Never before have so many gyrating bananas, tiny clapping hands, and RuPaul eye rolls infiltrated the workplace.

Next to the other indignities of the office—drug tests, non-compete and non-disclosure agreements, morality clauses, polygraphs—an animated dancing fruit might come as a relief, one more piece of flair to lighten the drudgery. Yet the seemingly free-wheeling patter of Slack, organized into what the company calls “channels,” has about as much spontaneity as a dentist’s office poster. Before you can dance like no one is watching, you have to know that someone is.

Slack slots neatly into the trend toward the gamification of labor and everyday communication—which only seems fitting, given its humble videogame beginnings. Sometimes the game is quite explicit. As you trick out your account, trawling Slack’s directory of third-party add-ons, you might see one called Scorebot. With Scorebot’s help, you can compete with your coworkers for the honor of most “socially adept”; the worker with the most positive emojis gets the most points. “Make everyday conversation a competition,” Scorebot’s website crows. For a moment, I wondered if Scorebot was a joke, but it seems to be an earnest creation of Crema, a Kansas City design firm, attracted to the honey of Slack’s popularity. (Now that Slack has launched an $80-million investment fund for app-makers, the honey is even sweeter.) And joke or no, Scorebot is just another arbitrary assessment tool in a work culture that bristles with them.

We are, I think, on the verge of another Slack pivot, if it hasn’t happened quietly already. As its watchful bots continue to circle, archiving and analyzing, retrieving and praising, the company will be forced to acknowledge that the true value of Slack lies not in its ability to enable productivity, but rather to measure it. The metrics business is booming, after all. Forget the annual performance review; with Slack’s help, managers could track their employees even more closely, and in ever more granular ways. And why stop at performance analytics? Sentiment analysis could automatically alert supervisors when employees’ idle bickering tips into mutiny. Depressed or anxious employees could be automatically served with puppy videos and advice bots. (...)

The Personal Is Professional

The rise of Slack can be attributed in part to the makeup of its client base: journalists and media companies are among its most visible users. They’re also some of the program’s biggest critics, having passed through the requisite phases of early adoption and breathless evangelism into a performative cynicism.

Of course, for every disaffiliate, there is a full-blown Slack convert, with the expected litany of advice listicles, tutorial videos, power user how-to books, and other shibboleths of the highly optimized online life. The company’s multibillion-dollar valuation has pushed it firmly into unicorn territory, meaning that its origin story is already cast into myth. Stewart Butterfield, company founder and CEO, has advanced to the vanguard of the influencer circuit, putting in face-time on C-SPAN and conference keynotes. There has been the requisite Wired cover story with an insufferably cheeky headline (“The Most Fascinating Profile You’ll Ever Read About a Guy and His Boring Startup”), which delivered—if you appreciate that all superlatives are relative.

Butterfield has claimed that Slack is ultimately a work reducer, that it increases “transparency” and shortens the workday. The company abides by the philosophy of “work hard, go home”—an odd choice for a cloud-based, cross-platform app that wants a piece of your every device. It is precisely tools like Slack that allow employees to work anywhere, whenever. Slack users may go home at 6 p.m., but their jobs follow them, pinging them from their smartphones.

“Do more of your work from Slack,” the company urged this summer while unveiling its new “message buttons,” which allow users to click, for example, “approve” or “deny” on an expense report. It could have offered the same sentiment by commanding, “Live more of your life through Slack.”

This total, dystopian immersion of life into work should send a chill coursing down to the ends of our carpal-tunnel-stricken fingertips. But of course, it gets worse: we are now monetizing the workers’ dystopia across several platforms at once. In November, Microsoft unveiled Teams, a Slack competitor that will soon come standard with Office 365, the company’s popular suite of business tools. Facebook recently launched a Slack competitor called Workplace, which has been hailed as “a new messaging app that embodies the dissolving distinction between personal and professional digital spaces.” Whoever thought that pitch would sound good must have known that the target users of Workplace already count themselves as addicts, conditioned for constant validation from their electronic supervisors and craving their next hits of dopamine. (After all, Facebook practically invented this kind of stimulus.)

Now that apps like these effectively distill the history and future not only of your job, but also of your personal life, who’d want to quit completely? Who could? It would be like leaving your memoir-in-progress on the bus—a simile that no longer makes sense, since your memoir manuscript, obviously, would be stored in the cloud. It would also mean giving up access to the digital equivalent of the office water-cooler—though, again, this simile is nowhere near immersive enough for an ever-shifting social/work platform that constantly calls out for your attention and participation. According to the company’s CEO, the average Slack user is “actively” using the app for two hours and twenty minutes per day, with the program often running in the background throughout the day (along with pushing alerts to smartphones). That means that what some take to be the workplace’s one pleasure—interacting with other humans—is heavily mediated through an optimize-everything app that never forgets.

by Jacob Silverman, The Baffler |  Read more:
Image: Dandy/John J. Cussler

Meaningness

Let’s start this book in the middle. The main course is a ways off, and I want to give you a taste now.

Let’s talk about purpose. (Purpose is one of the dimensions of meaningness discussed in this book.)

Especially at turning points in life, people ask questions like:
  • Is there any purpose at all in living? Or is everything completely pointless?
  • What am I supposed to do?
  • How can I choose among the many ways I could spend the rest of my life?
  • Does everyone’s life have the same purpose, or does everyone have their own?
  • Where does purpose come from? Does it have some ultimate source, or is it just a personal invention?
Various religions, philosophies, and systems claim to have answers. Some are complicated, and they all seem quite different. When you strip away the details, though, there are only a half dozen fundamental answers. Each is appealing in its own way, but also problematic. Understanding clearly what is right and wrong about each approach can resolve the underlying problem.

Let’s go through these alternatives briefly. I will explain each one in detail in the middle part of the book.

Five confused attitudes to purpose

1. Everything has a fixed purpose, given by some sort of fundamental ordering principle of the universe. (This might be God, or Fate, or the Cosmic Plan, or something.) Humans too have a specific role to play in the proper order of the universe.

This is the stance of eternalism. It may be comfortable. If you just follow the eternal law, everything will come out right. Unfortunately, it often seems that much of life has no purpose. At any rate, you cannot figure out what it is supposed to be. Priests or other authority figures claim to know what the cosmic purposes are, but their advice often seems wrong for particular situations.

For these reasons, even people who are explicitly committed to eternalism generally fall into other stances at times.

2. Nothing has any purpose. Life is meaningless. Any purposes you imagine you have are illusions, errors, or lies.

This is the stance of nihilism. It appears quite logical. It might seem to follow naturally from some scientific facts: everything is made of subatomic particles; they certainly don’t have purposes; and you can’t get purpose by glomming together a bunch of purposeless bits.

It is easy to fall into nihilism in moments of despair; but, fortunately, it is difficult to maintain, and hardly anyone holds it for long. Nevertheless, the seemingly compelling logic of nihilism needs an answer. It turns out that it is quite wrong, as a matter again of science and logic. But because that is not obvious, three other stances try (and fail) to find a middle way between eternalism and nihilism.

3. The supposed cosmic purposes are doubtful at best, but obviously, people do have goals. There are human purposes no one can seriously doubt: survival, health, sex, romance, fame, power, enjoyable experiences, children, beautiful things. Realistically, those are what everyone pursues anyway. You might as well drop the hypocritical pretense of “higher” purposes and go for what you really want.

This is the stance of materialism. Realistically, most people adopt this stance much of the time. However, at times everyone does recognize the value of altruistic and creative purposes, which this stance rejects. Moreover, most recognize that materialism is an endless treadmill: the enjoyment of new goodies wears off quickly, and then you are left craving the next, better thing.

4. You can’t take it with you. After you are dead, it is meaningless how many toys you had. What matters is how you live your life: whether you create something of beauty or value for others. You have unique capabilities to improve the world, and it’s your responsibility to find and act on your personal gift.

This is the stance of mission. The problem is that no one actually has a “unique personal gift.” God does not have plans for us. People waste a lot of time and effort trying to find “their purpose in life,” and are miserable when they fail. Besides that, rejecting material purposes causes you to overlook genuine opportunities for enjoyment and satisfaction.

5. Since the universe (or God) does not supply us with purposes, they are human creations. Mostly people mindlessly adopt purposes that are handed to them by society. You need to throw those off, and choose your own purposes, as an act of creative will.

This is the stance of existentialism. It is based on the assumption that if purposes are not objective, or externally given, they must be subjective, or internally created. Existentialism holds out hope for freedom. But it is not actually possible to create your own purposes. Choosing at random would be pointless, and impossible; and what purely personal basis could you have for choosing one purpose over another?

Resolving confusion

Each of these confused stances treats meaning as fixed by an external force, or denies meaning or some aspect of it.

The central message of this book is that meaning is real (and cannot be denied), but is fluid (so it cannot be fixed). It is neither objective (given by God) nor subjective (chosen by individuals).

The book offers resolutions to problems of meaning that avoid denial, fixation, and the impossibility of total self-determination. These resolutions are non-obvious, and sometimes unattractive; but they are workable in ways the alternatives are not.

by David Chapman, Meaningness |  Read more:
Image: Meaningness

[ed. This is a free hypertext book on the subject of meaning in human existence. The short section above is from the opening chapter: An appetizer: purpose.]

Iguana Chased by Snakes - Planet Earth II: Islands


[ed. This is like my worst nightmare. There's also another version, narrated with Marshawn Lynch's epic Beast Quake run in 2011, which adds a little humor and helps dissipate the scariness a bit. Watch at your own (mental health) risk.]

Thursday, December 8, 2016


Costa Dvorezky
via:

Who Would Destroy the World?

Consider a seemingly simple question: If the means were available, who exactly would destroy the world? There is surprisingly little discussion of this question within the nascent field of existential risk studies. But the issue of “agential risks” is critical: What sort of agent would either intentionally or accidentally cause an existential catastrophe?

An existential risk is any future event that would either permanently compromise our species’ potential for advancement or cause our extinction. Oxford philosopher Nick Bostrom coined the term in 2002, but the concept dates back to the end of World War II, when self-annihilation became a real possibility for the first time in human history.

In the past 15 years, the concept of an existential risk has received growing attention from scholars in a wide range of fields. And for good reason: An existential catastrophe could only happen once in our history. This raises the stakes immensely, and it means that reacting to such a catastrophe after it occurs won’t work. Humanity must anticipate existential risks in order to avoid them.

So far, existential risk studies has focused mostly on the technologies—such as nuclear weapons and genetic engineering—that future agents could use to bring about a catastrophe. Scholars have said little about the types of agents who might actually deploy these technologies, either on purpose or by accident. This is a problematic gap in the literature, because the nature of the agents involved may matter just as much as, or even more than, the number of world-destroying weapons that exist.

Agents matter. To illustrate this point, consider the “two worlds” thought experiment: In world A, one finds many different kinds of weapons that are powerful enough to destroy the world, and virtually every citizen has access to them. Compare this with world B, in which there exists only a single weapon, and it is accessible to only one-fourth of the population. Which world would you rather live in? If you focus only on the technology, then world B is clearly safer.

Imagine, though, that world A is populated by peaceniks, while world B is populated by psychopaths. Now which world would you rather live in? Even though world A has more weapons, and greater access to them, world B is a riskier place to live. The moral is this: To accurately assess the overall probability of risk, as some scholars have attempted to do, it’s important to consider both sides of the agent-tool coupling.

Studying agents might seem somewhat trivial, especially for those with a background in science and technology. Humans haven’t changed much in the past 30,000 years, and we’re unlikely to evolve new traits in the coming decades, whereas the technologies available to us have changed dramatically. This makes studying the latter much more important. Nevertheless, studying the human side of the equation can suggest new ways to mitigate risk.

Agents of terror. “Terrorists,” “rogue states,” “psychopaths,” “malicious actors,” and so on—these are frequently lumped together by existential risk scholars without further elaboration. When one takes a closer look, though, one discovers important and sometimes surprising differences between various types of agents. For example, most terrorists would be unlikely to intentionally cause an existential catastrophe. Why? Because the goals of most terrorists—who are typically motivated by nationalist, separatist, anarchist, Marxist, or other political ideologies—are predicated on the continued existence of the human species.

The Irish Republican Army, for example, would obstruct its own goal of reclaiming Northern Ireland if it were to dismantle global society or annihilate humanity. Similarly, if the Islamic State were to use weapons of total destruction against its enemies, doing so would interfere with its vision for Muslim control of the Middle East.

The same could be said about most states. For example, North Korea’s leaders may harbor fantasies of world domination, and the regime could decide that launching nuclear missiles at the West would help achieve this goal. But insofar as North Korea is a rational actor, it is unlikely to initiate an all-out nuclear exchange, because this could produce a nuclear winter leading to global agricultural failures, which would negatively impact the regime’s ability to maintain control over large territories.

On the other hand, there are some types of agents that might only pose a danger after world-destroying technologies become widely available—but not otherwise. Consider the case of negative utilitarians. Individuals who subscribe to this view believe that the ultimate aim of moral conduct is to minimize the total suffering in the universe. As the Scottish philosopher R. N. Smart pointed out in a 1958 paper, the problem with this view is that it seems to call for the destruction of humanity. After all, if there are no humans around to suffer, there can be no human suffering. Negative utilitarianism—or at least some versions of it—suggests that the most ethical actor would be a “world-exploder.”

As powerful weapons become increasingly accessible to small groups and individuals, negative utilitarians could emerge as a threat to human survival. Other types of agents that could become major hazards in the future are apocalyptic terrorists (fanatics who believe that the world must be destroyed to be saved), future ecoterrorists (in particular, those who see human extinction as necessary to save the biosphere), idiosyncratic agents (individuals, such as school shooters, who simply want to kill as many people as possible before dying), and machine superintelligence.

Superintelligence has received considerable attention in the past few years, but it’s important for scholars and governments alike to recognize that there are human agents who could also bring about a catastrophe. Scholars should not succumb to the “hardware bias” that has so far led them to focus exclusively on superintelligent machines.

by Phil Torres, Bulletin of the Atomic Scientists | Read more:
Image: Dr. Strangelove

Starbucks's $10 Cup of Coffee Is Priced Just Right

[ed. I've been in the Roastery in Seattle on Pike (although I didn't know it at the time). I'd just stopped in to get a Serious Pie pizza, which, as I found out, is situated in a small enclave adjacent to the main dining room. The whole operation looked like a brewery to me... massive roasting equipment, drying bins, conveyor belts running all over the place and other stuff going on while hundreds of people sat around drinking coffee, working on laptops, taking pictures and just soaking in the vibe (and tourists pouring in all the time). It was, generally, kind of weird. I kept asking myself "what is everyone doing here?" (I'm not much of a coffee drinker myself), but people seemed to be outright giddy about being in the happening place.]

It's not about the $10 cup of coffee.

Critics have derided Starbucks Corp.'s push into higher-end coffee bars, which the chain discussed in detail on Thursday, at an all-day confab with Wall Street investors and analysts. Skeptics questioned whether millennials would pony up for pricey drinks such as the $10 Nitro cold-brew coffee, which is infused with nitrogen gas. Others poked fun at descriptions of exotic beans small-batch roasted in Seattle.

They are missing the point.

Starbucks is not trying to change its current business model by going even more upmarket than it already is, nor is it trying to convince folks to spend more on its run-of-the-mill drip coffee just because it's served by hipsters in hats and leather-lined cloth aprons (the new uniform of the higher-end Roastery stores).

On the contrary, Starbucks has launched a completely separate, brand new restaurant chain -- one that rejects the old model of brick-and-mortar ubiquity, while also getting back to the chain's roots.

Back in the 1980s, CEO Howard Schultz fashioned Starbucks in the vision of a "third place"-- corporate jargon for somewhere for people to hang out when not at work or home. But over the years, many of the 25,000 stores worldwide have basically turned into fast-food stations, where people get their coffee fix and get out.

Store Count

Starbucks aims to get up to 37,000 stores by 2021, which would overtake McDonald's store footprint

Many of the stores don't look all that different from McDonald's higher-end McCafés. Starbucks has invested in drive-thrus, mobile ordering and payment, and virtual baristas to speed up the process and get more money flowing through the chain.

The problem is, this model depends greatly on foot traffic. And with consumers spending increasingly less time at the malls and shopping centers Starbucks locations were built to serve, the company knows it can no longer rely on the "pull" model of intercepting existing foot traffic. Instead, it will have to create more of a "push" model, giving consumers reasons to get off their couch and come in to get a coffee.

Traffic Jam

Shopper traffic at retail stores is in a steep, years-long decline

Enter Starbucks Reserve and Roastery stores. They are more wine bar than coffeehouse, replete with tastings, mixologists and educated baristas. They're a place to take a date, to sit at the bar and experience the tastes and smells, as you'd do at a wine bar or craft brewery.

Indeed, customers at the already opened Roastery outpost in Seattle spend an average of 40 minutes at the restaurant, according to the company. Compare that to the few minutes it takes to swing by a Starbucks on your morning commute and pick up the vanilla latte you pre-ordered on your cell phone. It's a totally different business model.

by Shelly Banjo, Bloomberg |  Read more:
Image: via

Okinawa Churaumi Aquarium


Kuroshio Sea Tank, Okinawa Churaumi Aquarium, Japan.

Danseuse Earrings
via:

Frances Hammell Gearhart
via:

Wednesday, December 7, 2016

The True Story of America’s Sky-High Prescription Drug Prices

Let’s say you’re at the doctor. And the doctor hands you a prescription.

The prescription is for Humira, an injectable medication used to treat a lot of common conditions like arthritis and psoriasis. Humira is an especially popular medication right now. In 2015, patients all around the world spent $14 billion on Humira prescriptions — that’s roughly the size of Jamaica's entire economy.

Let’s say your doctor appointment is happening in the United Kingdom. There, your Humira prescription will cost, on average, $1,362. If you’re seeing a doctor in Switzerland, the drug runs around $822.

But if you’re seeing a doctor in the United States, your Humira prescription will, on average, run you $2,669.

How does this happen? Why does Humira cost so much more here than it does in other countries?

Humira is the exact same drug whether it’s sold in the United States, in Switzerland, or anywhere else. What’s different about Humira in the United States is the regulatory system we’ve set up around our pharmaceutical industry.

The United States is exceptional in that it does not regulate or negotiate the prices of new prescription drugs when they come onto market. Other countries will task a government agency to meet with pharmaceutical companies and haggle over an appropriate price. These agencies will typically make decisions about whether these new drugs represent any improvement over the old drugs — whether they’re even worth bringing onto the market in the first place. They’ll pore over reams of evidence about drugs’ risks and benefits.

The United States allows drugmakers to set their own prices for a given product — and allows every drug that's proven to be safe to come onto the market. And the problems that causes are easy to see, from the high copays at the drugstore to the people who can’t afford lifesaving medications.

What’s harder to see is that if we did lower drug prices, we would be making a trade-off. Lowering drug profits would make pharmaceuticals a less desirable industry for investors. And less investment in drugs would mean less research toward new and innovative cures.

There’s this analogy that Craig Garthwaite, a health economist, gave me that helped make this clear. Think about a venture capitalist who is deciding whether to invest $10 million in a social media app or a cure for pancreatic cancer.

“As you decrease the potential profits I’m going to make from pancreatic cures, I’m going to shift more of my investment over to apps or just keep the money in the bank and earn the money I make there,” Garthwaite says.

Right now America’s high drug prices mean that investing in pharmaceuticals can generate a whole bunch of profits — and that drugs can be too expensive for Americans to afford.

Let’s say you’re a pharmaceutical executive and you’ve discovered a new drug. And you want to sell it in Australia. Or Canada. Or Britain.

You’re going to want to start setting up some meetings with agencies that make decisions about drug coverage and prices.

These regulatory bodies generally evaluate two things: whether the country wants to buy your drug and, if so, how much they’ll pay for it. These decisions are often related, as regulators evaluate whether your new drug is enough of an improvement on whatever is already on the market to warrant a higher price.

So let’s say you want to sell your drug in Australia. You’ll have to submit an application to the Pharmaceutical Benefits Advisory Committee, where you’ll attempt to prove that your drug is more effective than whatever else is on the market right now.

The committee will then make a recommendation to the country’s national health care system of whether to buy the drug — and, if the recommendation is to buy it, the committee will suggest what price the health plan ought to pay.

Australia’s Pharmaceutical Benefits Advisory Committee is not easy to impress: It has rejected about half of the anti-cancer drug applications it received in the past decade because their benefits didn’t seem worth the price.

But if you do succeed — and Australia deems your drug worthy to cover — then you’ll have to decide whether the committee has offered a high enough price. If so, congrats! You’ve entered the Australian drug market.

Other countries regulate the price of drugs because they see them as a public utility

Countries like Australia, Canada, and Britain don’t regulate the price of other things that consumers buy, like computers or clothing. But they and dozens of other countries have made the decision to regulate the price of drugs to ensure that medical treatment remains affordable for all citizens, regardless of their income. Medication is treated differently because it is a good that some consumers, quite literally, can’t live without.

This decision comes with policy trade-offs, no doubt. Countries like Australia will often refuse to cover drugs that they don’t think are worth the price. In order for regulatory agencies to have leverage in negotiating with drugmakers, they have to be able to say no to the drugs they don’t think are up to snuff. This means certain drugs that sell in the United States aren’t available in other countries — and there are often public outcries when these agencies refuse to approve a given drug.

At the same time, just because there are more drugs on the American market, that doesn’t mean all patients can access them. “To think that patients have full access to a wide range of products isn’t right,” says Aaron Kesselheim, an associate professor of medicine at Harvard Medical School. “If the drugs are so expensive that you can’t afford them, that’s functionally the same thing as not even having them on the market.”

It also doesn’t mean we’re necessarily getting better treatment. Other countries’ regulatory agencies usually reject drugs when they don’t think they provide enough benefit to justify the price that drugmakers want to charge. In the United States, those drugs come onto the market anyway — which means we end up with expensive drugs that offer little additional benefit but are backed by especially effective marketing.

This happened in 2012 with a drug called Zaltrap, which treats colorectal cancer. The drug cost about $11,000 per month — twice as much as its competitors — while, in the eyes of doctors, offering no additional benefit.

“In most industries something that offers no advantage over its competitors and yet sells for twice the price would never even get on the market,” Peter Bach, an oncologist at Memorial Sloan-Kettering Cancer Center, wrote in a New York Times op-ed. “But that is not how things work for drugs. The Food and Drug Administration approves drugs if they are shown to be ‘safe and effective.’ It does not consider what the relative costs might be.”

by Sarah Kliff, Vox | Read more:
Image: uncredited

How the Internet Unleashed a Burst of Cartooning Creativity


In 1989 Bill Watterson, the writer of “Calvin and Hobbes”, a brilliant comic strip about a six-year-old child and his stuffed tiger, denounced his industry. In a searing lecture, he attacked bland, predictable comics, churned out by profit-driven syndicates. Cartooning, said Mr Watterson, “will never be more than a cheap, brainless commodity until it is published differently.”

In 2012 he is finally getting his way. As the newspaper industry continues its decline, the funnies pages have decoupled from print. Instead of working for huge syndicates, or for censored newspapers with touchy editors, cartoonists are now free to create whatever they want. Whether it is cutting satire about Chinese politics, or a simple joke about being a dog, everything can win an audience on the internet.

This burst of new life comes as cartoons seemed to be in terminal decline. Punch, once a fierce political satire magazine whose cartoons feature in almost every British history textbook, finally closed its doors in 2002. The edgier Viz magazine, which sold a million copies an issue in the early 1990s, now sells 65,000. In the United States, of the sprawling EC Comics stable, only Mad magazine remains, its circulation down from 2.1m in 1974 to 180,000. Meanwhile, the American newspaper industry, home of the cartoon strip, now makes less in advertising revenue than at any time since the 1950s. (...)

Triumph of the nerds

The decline of newspapers and the rise of the internet have broken that system. Newspapers no longer have the money to pay big bucks to cartoonists, and the web means anybody can get published. Cartoonists who want to make their name no longer send sketches to syndicates or approach newspapers: they simply set up websites and spread the word on Twitter and Facebook. Randall Munroe, the creator of “XKCD”, left a job at NASA to write his stick men strip, full of science and technology jokes (see above and below). Kate Beaton, a Canadian artist who draws “Hark, A Vagrant”, sketched her cartoons between shifts while working in a museum. Matthew Inman created his comic “The Oatmeal” by accident while trying to promote a dating website he built to escape his job as a computer coder.

The typical format for a web comic was established a decade or more ago, says Zach Weiner, the writer of “Saturday Morning Breakfast Cereal”, or “SMBC” (below). It has not changed much since. Most cartoonists update on a regular basis — daily, or every other day — and run in sequence. “I think that’s purely because that’s what the old newspapers used to do,” says Mr Weiner. But whereas many newspaper comics tried to appeal to as many people as possible, often with lame, fairly universal jokes, online cartoonists are free to be experimental, in both content and form.

Ryan North uses the same drawing every day for his “Dinosaur Comics” — the joke is in the dialogue, which he writes fresh every weekday, and the absurdity of dinosaurs discussing Shakespeare and dating. “SMBC” flicks between one-panel gags and extremely long, elaborate stories. Fred Gallagher, the writer of “Megatokyo”, has created an entire soap-opera-like world, drawn in beautiful Japanese manga style, accessible only to those who follow the saga regularly. Mr Munroe’s “XKCD” is usually a simple strip comic, but recently featured one explorable comic, entitled “Click and Drag”, which, if printed at high resolution, would be 46 feet wide.

Perhaps thanks to the technical skills needed to succeed, web cartoonists tend to be young — few are over 30 — well-educated and extremely geeky.

by The Economist, Medium |  Read more:
Image: XKCD