Friday, October 26, 2018

The Great Risk Shift

To many economic commentators, insecurity first reared its ugly head in the wake of the financial crisis of the late 2000s. Yet the roots of the current situation run much deeper. For at least 40 years, economic risk has been shifting from the broad shoulders of government and corporations onto the backs of American workers and their families.

This sea change has occurred in nearly every area of Americans’ finances: their jobs, their health care, their retirement pensions, their homes and savings, their investments in education and training, their strategies for balancing work and family. And it has affected Americans from all demographic groups and across the income spectrum, from the bottom of the economic ladder almost to its highest rungs.

I call this transformation “The Great Risk Shift” — the title of a book I wrote in the mid-2000s, which I’ve recently updated for a second edition. My goal in writing the book was to highlight a long-term trend toward greater insecurity, one that began well before the 2008 financial crisis but has been greatly intensified by it.

I also wanted to make clear that the Great Risk Shift wasn’t a natural occurrence — a financial hurricane beyond human control. It was the result of deliberate policy choices by political and corporate leaders, beginning in the late 1970s and accelerating in the 1980s and 1990s. These choices shredded America’s unique social contract, with its unparalleled reliance on private workplace benefits. They also left existing programs of economic protection more and more threadbare, penurious and outdated — and hence increasingly incapable of filling the resulting void.

To understand the change, we must first understand what is changing. Unique among rich democracies, the United States fostered a social contract based on stable long-term employment and widespread provision of private workplace benefits. As the figure below shows, our government framework of social protection is indeed smaller than those found in other rich countries. Yet when we take into account private health and retirement benefits — mostly voluntary, but highly subsidized through the tax code — we have an overall system that is actually larger in size than that of most other rich countries. The difference is that our system is distinctively private.


This framework, however, is coming undone. The unions that once negotiated and defended private benefits have lost tremendous ground. Partly for this reason, employers no longer wish to shoulder the burdens they took on during more stable economic times. In an age of shorter job tenure and contingent work, as Monica Potts will describe in her forthcoming contribution to this series, employers also no longer highly value the long-term commitments to workers that these arrangements reflected and fostered.

Of course, policymakers could have responded to these changes by shoring up existing programs of economic security. Yet at the same time as the corporate world was turning away from an older model of employment, the political world was turning away from a longstanding approach to insecurity known as “social insurance.” The premise of social insurance is that widespread economic risks can be dealt with effectively only through institutions that spread their costs across rich and poor, healthy and sick, able-bodied and disabled, young and old.

Social insurance works like any other insurance program: We pay in — in this case, through taxes — and, in return, are offered a greater degree of protection against life’s risks. The idea is most associated with FDR, but, from the 1930s well into the 1970s, it was promoted by private insurance companies and unionized corporations, too. During this era of rising economic security, both public and private policymakers assumed that a dynamic capitalist economy required a basic foundation of protection against economic risks.

That changed during the economic and political turmoil of the late 1970s. With the economy becoming markedly more unequal and conservatives gaining political ground, many policy elites began to emphasize a different credo — one premised on the belief that social insurance was too costly and inefficient and that individuals should be given “more skin in the game” so they could manage and minimize risks on their own. Politicians began to call for greater “personal responsibility,” a dog whistle that would continue to sound for decades.

Instead of guaranteed pensions, these policymakers argued, workers should have tax-favored retirement accounts. Instead of generous health coverage, they should have high-deductible health plans. Instead of subsidized child care or paid family leave, they should receive tax breaks to arrange for family needs on their own. Instead of pooling risks, in short, companies and government should offload them.

The transformation of America’s retirement system tells the story in miniature. Thirty years ago, most workers at larger firms received a guaranteed pension that was protected from market risk. These plans built on Social Security, then at its peak. Today, such “defined-benefit” pensions are largely a thing of the past. Instead, private-sector workers lucky enough to have a workplace retirement plan receive “defined-contribution” plans such as 401(k)s — tax-favored retirement accounts, first authorized in the early 1980s, that don’t require employer contributions and don’t provide guaranteed benefits. Meanwhile, Social Security has gradually declined as a source of secure retirement income for workers even as private guaranteed retirement income has been in retreat.

The results have not been pretty. We will not be able to assess the full extent of the change until today’s youngest workers retire. But according to researchers at Boston College, the share of working-age households at risk of being financially unprepared for retirement at age 65 has jumped from 31 percent in 1983 to more than 53 percent in 2010. In other words, more than half of younger workers are slated to retire without saving enough to maintain their standard of living in old age.

Guaranteed pensions have not been the only casualty of the Great Risk Shift. At the same time as employers have raced away from safeguarding retirement security, health insurance has become much less common in the workplace, even for college-educated workers. Indeed, coverage has risen in recent years only because more people have become eligible for Medicare and Medicaid and for subsidized plans outside the workplace under the Affordable Care Act. As late as the early 1980s, 80 percent of recent college graduates had health insurance through their job; by the late 2000s, the share had fallen to around 60 percent. And, of course, the drop has been far greater for less educated workers.

In sum, corporate retrenchment has come together with government inaction — and sometimes government retrenchment — to produce a massive transfer of economic risk from broad structures of insurance onto the fragile balance sheets of American households. Rather than enjoying the protections of insurance that pools risk broadly, Americans are increasingly facing economic risks on their own, and often at their peril.

The erosion of America’s distinctive framework of economic protection might be less worrisome if work and family were stable sources of security themselves. Unfortunately, they are not. The job market has grown more uncertain and risky, especially for those who were once best protected from its vagaries. Workers and their families now invest more in education to earn a middle-class living. Yet in today’s postindustrial economy, these costly investments are no guarantee of a high, stable, or upward-sloping path. [ed. See also: A Follow-Up on the Reasons for Prime Age Labor Force Non-Participation]

Meanwhile, the family, a sphere that was once seen as solely a refuge from economic risk, has increasingly become a source of risk of its own. Although median wages have essentially remained flat over the last generation, middle-income families have seen stronger income growth, with their real median incomes rising around 13 percent between 1979 and 2013. Yet this seemingly hopeful statistic masks the reality that the whole of this rise comes from women working many more hours outside the home than they once did. Indeed, without the increased work hours and pay of women, middle-class incomes would have fallen between 1979 and 2013.

by Jacob S. Hacker, TPM |  Read more:
Image: Christine Frapech/TPM

Unfair Advantage

Every year Americans make more and more purchases online, many of them at Amazon.com. What shoppers don’t see when browsing the selections at Amazon are the many ways the online store is transforming the economy. Our country is losing small businesses. Jobs are becoming increasingly insecure. Inequality is rising. And Amazon plays a key role in all of these trends.

Stacy Mitchell believes Amazon is creating a new type of monopoly. She says its founder and CEO, Jeff Bezos, doesn’t want Amazon to merely dominate the market; he wants it to become the market.

Amazon is already the world’s largest online retailer, drawing so much consumer Web traffic that many other retailers can compete only by becoming “Amazon third-party sellers” and doing business through their competitor. It’s a bit like the way downtown shops once had to move to the mall to survive — except in this case Amazon owns the mall, monitors the other businesses’ transactions, and controls what shoppers see.

From early in her career Mitchell has focused on retail monopolies. During the 2000s she researched the predatory practices and negative impacts of big-box stores such as Walmart. Her 2006 book, Big-Box Swindle: The True Cost of Mega-Retailers and the Fight for America’s Independent Businesses, documented the threat these supersized chains pose to independent local businesses and community well-being. (stacymitchell.com)

Now Amazon is threatening to overtake Walmart as the biggest retailer in the world. Mitchell says she occasionally shops at Amazon herself, when there’s something she can’t find locally, but this hasn’t stopped her from being a vocal critic of the way the company uses its monopoly power to stifle competition. She’s among a growing number of advocates who are calling for more vigorous enforcement of antitrust laws.
(...)

Frisch: Many consumers welcome Amazon as a wonderful innovation that makes shopping more convenient, but you say the corporation has a “stranglehold” on commerce. Why?

Mitchell: Without many of us noticing, Amazon has become one of the most powerful corporations in the U.S. It is common to talk about Amazon as though it were a retailer, and it certainly sells a lot of goods — more books than any other retailer online or off, and it will soon be the top seller of clothing, toys, and electronics. One of every two dollars Americans spend online now goes to Amazon. But to think of Amazon as a retailer is to miss the true nature of this company.

Amazon wants to control the underlying infrastructure of commerce. It’s becoming the place where many online shoppers go first. Even just a couple of years ago, most of us, when we wanted to buy something online, would type the desired product into a search engine. We might search for New Balance sneakers, for example, and get multiple results: sporting-goods stores, shoe stores, and, of course, Amazon. Today more than half of shoppers are skipping Google and going directly to Amazon to search for a product. This means that other companies, if they want access to those consumers, have to become sellers on Amazon. We’re moving toward a future in which buyers and sellers no longer meet in an open public market, but rather in a private arena that Amazon controls.

From this commanding position Amazon is extending its reach in many directions. It’s building out its shipping and package-delivery infrastructure, in a bid to supplant UPS and the U.S. Postal Service. Its Web-services division powers much of the Internet and handles data storage for entities ranging from Netflix to the CIA. Amazon is producing hit television shows and movies, publishing books, and manufacturing a growing share of the goods it sells. It’s making forays into healthcare and finance. And with the purchase of Whole Foods, it’s beginning to extend its online presence into the physical world. (...)

Frisch: We hear a lot about the power of “disruptive” ideas and technologies to transform our society. Amazon seems like the epitome of a disrupter.

Mitchell: Because Amazon grew alongside the Internet, it’s easy to imagine that the innovations and conveniences of online shopping are wedded to it. They aren’t. Jeff Bezos would prefer that we believe Amazon’s dominance is the inevitable result of innovation, and that to challenge the company’s power would mean giving up the benefits of the Internet revolution. But history tells us that when monopolies are broken up, there’s often a surge of innovation in their wake.

Frisch: You don’t think e-commerce in itself is a problem?

Mitchell: No. There’s no reason why making purchases through the Internet is inherently destructive. I do think a world without local businesses would be a bad idea, because in-person, face-to-face shopping generates significant social and civic benefits for a community. But lots of independent retailers have robust e-commerce sites, including my local bookstore, hardware store, and several clothing retailers. Being online gives customers another way to buy from them. We can even imagine a situation in which many small businesses might sell their wares on a single website to create a full-service marketplace. It wouldn’t be a problem as long as the rules that govern that website are fair, the retailers are treated equally, and power isn’t abused.

Frisch: But that’s not the case with Amazon?

Mitchell: No. As search traffic migrates to Amazon, independent businesses face a Faustian bargain: Do they continue to hang their shingle on a road that is increasingly less traveled or do they become an Amazon seller? It’s no easy decision, because once you become a third-party seller, 15 percent of your revenue typically goes to Amazon — more if you use their warehouse and fulfillment services. Amazon also uses the data that it gleans from monitoring your sales to compete against you by offering the same items. And it owns the customer relationship, particularly if you use Amazon’s fulfillment services — meaning you store your goods in its warehouses and pay it to handle the shipping. In that case, you cannot communicate with your customer except through Amazon’s system, and Amazon monitors those communications. If you go out of bounds, it can suspend you as a seller.

Frisch: What’s out of bounds? Let’s say a customer wants to know which product would be better, A or B. Can a seller tell them?

Mitchell: You’re allowed to respond to that question, but if, in the process of responding, you violate Amazon’s rules, you can be suspended from Amazon and see your livelihood disappear. An example of this is a small company that made custom-designed urns for ashes.

Frisch: For people who’ve been cremated?

Mitchell: Yes. They sold these urns through their website and also through Amazon. A customer contacted the urn maker through Amazon to ask about engraving. The company responded truthfully that there was no way to place an order for engraving through Amazon, but it could be done through the company’s website. Within twenty-four hours the urn maker got slapped down by Amazon. The rules for third-party sellers say you can never give a customer a URL, because Amazon does not want that customer going anywhere else — even in a case where Amazon can’t provide what the customer wants.

An independent retailer’s most valuable assets are its knowledge of products and ability to spot trends. Once you become a seller on Amazon, you forfeit your expertise to them. They use your sales figures to spot the latest trends. Researchers at Harvard Business School have found that when you start selling through Amazon, within a short time Amazon will have figured out what your most popular items are and begun selling them itself. Amazon is now producing thousands of products, from batteries to blouses, under its own brands. It’s copying what other companies are selling and then giving its own products top billing in its search results. For example, a company called Rain Design in San Francisco made a popular laptop stand and built a business selling it through Amazon. A couple of years ago Rain Design found that Amazon had introduced a nearly identical product. The only difference was that the company’s raindrop logo had been swapped for Amazon’s smiling arrow. (...)

Frisch: You’ve characterized Amazon as a throwback to the age of the robber barons. How so?

Mitchell: The robber barons were nineteenth-century industrialists who dominated industries like oil and steel. During the Gilded Age, toward the end of the nineteenth century, these industrialists gained control of a technology that was opening up a new way of doing business: the railroad. They used their command of the rails to disadvantage their competitors. John D. Rockefeller, who ran Standard Oil, for example, conspired with the railroad magnate Cornelius Vanderbilt to charge competing oil companies huge sums to ship their product by rail. The first antitrust laws were written in response to industrialists’ attempts to control access to the market.

It’s striking how similar this history is to what Amazon has done: a new technology comes along that gives people a novel way to bring their wares to market, but a single company gains control over it and uses that power to undermine competitors and create a monopoly.

Amazon sells nearly half of all print books and has more than 80 percent of the e-book market. That’s enough to make it a gatekeeper: if Amazon suppresses a book in its search results or removes the book’s BUY button, as it has done during disputes with certain publishers, it causes that book’s sales to plummet. That is a monopoly.

Frisch: When did the Gilded Age monopolies get broken up?

Mitchell: A turning point came in the 1930s, during Franklin D. Roosevelt’s second term as president. Roosevelt concluded that corporate concentration was impeding the economy by closing off opportunity and slowing job and wage growth. So he set about dusting off the nation’s antitrust policies and using them to go after monopolies. This aggressive approach lasted for decades. Republican and Democratic presidents alike talked about the importance of fighting monopolies.

Then in the 1970s a group of legal and economic scholars, led by Robert Bork, argued that corporate consolidation should be allowed to go unchecked as long as consumer prices stayed low. The Reagan administration embraced this view. Under Reagan the antitrust laws were left intact, but how the antitrust agencies interpreted and enforced the laws was radically altered. Antitrust policy was stripped of its original purpose and power. Subsequent administrations, including Democratic ones, followed suit.

All of the concerns that used to drive antitrust enforcement have collapsed into a single concern: low prices. But we aren’t just consumers. We’re workers who need to earn a living. We’re small businesspeople. We’re innovators and inventors. As the economy has grown more consolidated, with fewer and fewer companies dominating just about every industry, one consequence is lower wages. Economic consolidation means workers have fewer options for employment. This appears to be a big reason why wages have been stagnant now for decades. We should also remember that our antitrust laws, at their heart, are about protecting democracy. Amazon shouldn’t be allowed to decide which books succeed or fail, which companies are allowed to compete. (...)

Frisch: Before you took on Amazon, you helped galvanize community opposition to Walmart. Why should people be against the big-box retailer coming to their town?

Mitchell: Walmart’s pitch to communities is always that it will offer low prices and create jobs and tax revenue. Particularly for smaller communities, this seems like a great deal. But an overwhelming majority of research has found that Walmart is much more of an extractive force. Poverty actually rises in places where Walmart opens a store.

Independent businesses, on the other hand, help communities thrive, because they buy many goods and services locally. When a small business needs an accountant, it’s likely to hire someone nearby. When it needs a website, it hires a local web designer. It banks at the local bank and advertises on the local radio station. It also tends to carry more local and regional products. An independent bookstore, for example, might feature local authors prominently.

Economic relationships often involve other types of relationships, too. When you shop at a small business, you’re dealing with your neighbors. You’re buying from someone whose kids go to school with your kids. That matters for the health of communities.

When Walmart comes in, it systematically wipes out a lot of those relationships. Instead of circulating locally, most dollars spent at the Walmart store leave the community. You’re left with fewer jobs than you had to start with, and they’re low-wage positions.

by Tracy Frisch and Stacy Mitchell, The Sun |  Read more:
Image: uncredited

Tech to Blame for Ever-Growing Repair Costs

It's hard to remove a part from a new car without coming across a wire attached to it. As tech grows to occupy every spare corner of the car, many buyers might not realize that all that whiz-bang stuff is going to make collision repair an absolute bear.

Even seemingly minor damage to a vehicle's front end can incur costs nearing $3,000, according to new research from AAA. The study looked at three solid sellers in multiple vehicle segments, including a small SUV, a midsize sedan and a pickup truck, and calculated repair costs using original-equipment list prices and an established average for technician labor rates.

Let's use AAA's examples for some relatable horror stories. Mess up your rear bumper? Well, if you have ultrasonic parking sensors or radar back there, it could cost anywhere from $500 to $2,000 to fix. Knock off a side mirror equipped with a camera as part of a surround-view system? $500 to $1,100. (...)

AAA wasn't the first group to realize how nuts these costs can get. On a recent episode of Autoline, the CEO of a nonprofit focused on collision-repair education pointed out that a front-corner collision repair on a Kia K900 could cost as much as $34,000. Sure, it's a low-production luxury sedan, but is anyone truly ready to drop $34,000 repairing a car that starts around $50,000?

by Andrew Krok, CNET |  Read more:
Image: AAA

Thursday, October 25, 2018


David Michael Bowers, State of the nation
via:

Nominating Oneself for the Short End of a Tradeoff

I’ve gotten a chance to discuss The Whole City Is Center with a few people now. They remain skeptical of the idea that anyone could “deserve” to have bad things happen to them, based on their personality traits or misdeeds.

These people tend to imagine the pro-desert faction as going around, actively hoping that lazy people (or criminals, or whoever) suffer. I don’t know if this passes an Intellectual Turing Test. When I think of people deserving bad things, I think of them having nominated themselves to get the short end of a tradeoff.

Let me give three examples:

1. Imagine an antidepressant that works better than existing antidepressants, one that consistently provides depressed people real relief. If taken as prescribed, there are few side effects and people do well. If ground up, snorted, and taken at ten times the prescribed dose – something nobody could do by accident, something you have to really be trying to get wrong – it acts as a passable heroin substitute, you can get addicted to it, and it will ruin your life.

The antidepressant is popular and gets prescribed a lot, but a black market springs up, and however hard the government works to control it, a lot of it gets diverted to abusers. Many people get addicted to it and their lives are ruined. So the government bans the antidepressant, and everyone has to go back to using SSRIs instead.

Let’s suppose the government is being good utilitarians here: they calculated out the benefit from the drug treating people’s depression, and the cost from the drug being abused, and they correctly determined the costs outweighed the benefits.

But let’s also suppose that nobody abuses the drug by accident. The difference between proper use and abuse is not subtle. Everybody who knows enough to know anything about the drug at all has heard the warnings. Nobody decides to take ten times the recommended dose of antidepressant, crush it, and snort it, through an innocent mistake. And nobody has just never heard the warnings that drugs are bad and can ruin your life.

Somebody is going to get the short end of the stick. If the drug is banned, depressed people will lose access to relief for their condition. If the drug is permitted, recreational users will continue having the opportunity to destroy their lives. And we’ve posited that the utilitarian calculus says that banning the antidepressant would be better. But I still feel, in some way, that the recreational users have nominated themselves to get the worse end of this tradeoff. Depressed people shouldn’t have to suffer because you see a drug that says very clearly on the bottle “DO NOT TAKE TOO MUCH OF THIS YOU WILL GET ADDICTED AND IT WILL BE TERRIBLE” and you think “I think I shall take too much of this”.

(this story is loosely based on the history of tianeptine in the US)

2. Suppose you’re in a community where some guy is sexually harassing women. You tell him not to and he keeps doing it, because that’s just the kind of guy he is, and it’s unclear if he can even stop himself. Eventually he does it so much that you kick him out of the community.

Then one of his friends comes to you and says, “This guy harassed one woman per month, and not even that severely. On the other hand, kicking him out of the community costs him all of his friends, his support network, his living situation, and his job. He is a pretty screwed-up person and it’s unclear he will ever find more friends or another community. The cost to him of not being in this community is actually greater than the cost of being harassed is to a woman.”

Somebody is going to have their lives made worse. Either the harasser’s life will be worse because he’s kicked out of the community. Or women’s lives are worse because they are being harassed. Even if I completely believe the friend’s calculation that kicking him out will bring more harm on him than keeping him would bring harm to women, I am still comfortable letting him get the short end of the tradeoff.

And this is true even if we are good determinists and agree he only harasses somebody because of an impulse control problem secondary to an underdeveloped frontal lobe, or whatever the biological reason for harassing people might be.

(not going to bring up what this story is loosely based on, but it’s not completely hypothetical either)

3. Sometimes in discussions of basic income, someone expresses concern that some people’s lives might become less meaningful if they didn’t have a job to give them structure and purpose.

And I respond “Okay, so those people can work, basic income doesn’t prohibit you from working, it just means you don’t have to.”

And they object “But maybe these people will choose not to work even though work would make them happier, and they will just suffer and be miserable.”

Again, there’s a tradeoff. Somebody’s going to suffer. If we don’t grant basic income, it will be people stuck in horrible jobs with no other source of income. If we do grant basic income, it will be people who need work to have meaning in their lives, but still refuse to work. Since the latter group has a giant door saying “SOLUTION TO YOUR PROBLEMS” wide open in front of them but refuses to take it, I find myself sympathizing more with the former group. That’s true even if some utilitarian were to tell me that the latter group outnumbers them.

I find all three of these situations joining the increasingly numerous ranks of problems where my intuitions differ from utilitarianism. What should I do?

One option is to dismiss them as misfirings of the heuristic “expose people to the consequences of their actions so that they are incentivized to make the right action”. I’ve tried to avoid that escape by specifying in each example that even when they’re properly exposed and incentivized the calculus still comes out on the side of making the tradeoff in their favor. But maybe this is kind of like saying “Imagine you could silence this one incorrect person without any knock-on effects on free speech anywhere else and all the consequences would be positive, would you do it?” In the thought experiment, maybe yes; in the real world this either never happens, or never happens with 100% certainty, or never happens in a way that’s comfortably outside whatever Schelling fence you’ve built for yourself. I’m not sure I find that convincing because in real life we don’t treat “force people to bear the consequences of their action” as a 100% sacred principle that we never violate.

Another option is to dismiss them as people “revealing their true preferences”, eg if the harasser doesn’t stop harassing women, he must not want to be in the community too much. But I think this operates on a really sketchy idea of revealed preference, similar to the Caplanian one where if you abuse drugs that just means you like drugs so there’s no problem. Most of these situations feel like times when that simplified version of preferences breaks down.

A friend reframes the second situation in terms of the cost of having law at all. It’s important to be able to make rules like “don’t sexually harass people”, and adding a clause saying “…but we’ll only enforce these when utilitarianism says it’s correct” makes them less credible and creates the opportunity for a lot of corruption. I can see this as a very strong answer to the second scenario (which might be the strongest), although I’m not sure it applies much to the first or third.

I could be convinced that my desire to let people who make bad choices nominate themselves for the short end of tradeoffs is just the utilitarian justifications (about it incentivizing behavior, or it revealing people’s true preferences) crystallized into a moral principle. I’m not sure if I hold this moral principle or not. I’m reluctant to accept the ban-antidepressant, tolerate-harasser, and repeal-basic-income solutions, but I’m also not sure what justification I have for not doing so except “Here’s a totally new moral principle I’m going to tack onto the side of my existing system”.

But I hope people at least find this a more sympathetic way of understanding when people talk about “desert” than a caricatured story where some people just need to suffer because they’re bad.

by Scott Alexander, Slate Star Codex |  Read more:
[ed. I don't know what Scott's been doing in psychiatry these days since moving to SF, but his blog has benefited greatly. See also: Cognitive Enhancers: Mechanisms and Tradeoffs.] 

Wednesday, October 24, 2018

Elton John


via:
[ed. Buy quality.]

Uber's Secret Restaurant Empire



It’s a phenomenon that Jason Droege, vice president for Uber Everything, labels the “virtual restaurant.” Such places start with no storefronts and no seats; they operate out of a corner of a professional kitchen, inside a restaurant with a different name and menu.

via: (Bloomberg)

Innovation Under Socialism

I have friends who revel in arriving in a place and immediately investigating the neighborhood’s shortcuts, jogging down paths without a destination, wandering down wayward trails just to see where they lead. For those whose thirst for adventure is complemented by a healthy dose of spatial awareness and cognition, discovery is a thrill. Personally, I cannot relate to any of this. Nothing means less to me than the orientation of the sunrise and sunset. Your cardinal points are wasted on me, for I am a person endowed with no sense of direction whatsoever. Throw in any language other than my native fluency in French and English, along with a flailing Spanish, and my demise is guaranteed. Yet, in recent years, I have felt confident enough to explore places where I had never been before without knowing the local official language. In all this, my saving grace has been my iPhone—the powerful pocket-sized computer whose mapping and translating superpowers have convinced me almost no place is out of my reach. I’ll say it: I am a socialist and I love my iPhone.

This confession is music to the ears of the “capitalism made your iPhone” club. Indeed, proponents of capitalism often brandish rapid innovation as if it were an automatic checkmate on collectivist socioeconomic ideologies. To them, modern technology proves not only that capitalism works, but that it is the best system to stimulate innovation. The subtext of their retort is that a socialist economy could never generate technology this advanced. When coupled with a defense of “thought leaders” as obscenely rich as Steve Jobs, Elon Musk, and Jeff Bezos, their argument also contends that concentrating capital and power in the hands of a few billionaires is a small price to pay for the astronomical leaps in innovation from which we all benefit.

Capitalism’s fan base is not wrong that the iPhone, first released in 2007, is a product of America’s fiercely capitalist economy. I will also concede that without the vision of Steve Jobs, Apple’s late CEO and the 110th richest person in the world at his death, there would be no iPhone as we know it (although it is worth noting that the army of engineers and developers whose labor actually produced the iPhone might have come up with an equally wonderful smartphone). Nonetheless, their perspective is deeply misguided. It manages to both underestimate how much capitalism stifles innovation and misunderstand how much the fundamentals of a socialist economy make it the better system for stimulating innovation.

Innovation describes a four-step process that creates or ameliorates a thing or way of doing things. It begins with invention, the design of a device or process that did not previously exist in this form. The invention is then developed, meaning that it is improved with an eye towards eventual scaling, exchange or introduction on a market, and external use by others. At the production stage, the invention is built or reproduced. Finally, the invention is distributed to a wider audience. In our present economy, a minority of the innovation process happens at the individual level, from lonesome inventors and modern Benjamin Franklins who are able to conjure all sorts of contraptions in their garage. The majority, however, results from research and development (R&D) paid for by private firms, and by the public through government agencies, research institutions, and other recipients of federal and state funding.

The profit motive and exclusive proprietary rights are central to capitalist innovation. By law, private firms must prioritize the interest of their shareholders, which tends to be interchangeable with making as much money as possible. Accordingly, investments in any stage of the innovative process must eventually produce profits. To maximize profit, private firms jealously guard the value of their invention through regulations and restrictive contracts. Statutes and regulations help protect their trade secrets. The U.S. Patent and Trademark Office routinely grants them utility and design patents that “exclude others from making, using, offering for sale, or selling … or importing the invention” for twenty years after the patent is issued. They enforce licensing agreements that can limit the uses and dissemination of all or part of their inventions. To further frustrate efforts to innovate on the back of their inventions, private firms subject their former employees to non-compete agreements that can severely limit their ability to use their knowledge and skills on competing projects for a period following their departure. Breaches carry dire consequences like expensive lawsuits, big money judgments, and other enormous hassles.

By contrast, the public sector innovates under an academic model instead of for profit. Certainly, earning tenure or an executive position can be lucrative. In some industries, a revolving door gives individuals the opportunity to innovate in both the private and public sectors throughout their careers. However, innovation in this area is less motivated by extracting profit, and more so by signifiers of prestige, career appointments, recognition, publication, project funding, and prizes.

The capitalist model has its perks. At present, private firms raise massive amounts of capital from the government to fund research, but also from banks, private equity, and wealthy donors. This vast amount of capital can prove lucrative for certain classes of workers. Innovative talent might accumulate wealth through generous compensation packages, which play an important role in attracting and retaining them.

Private firms also boast a terrifying nimbleness that allows them to push projects and respond to change faster than government institutions. For instance, in the absence of unions and norms against firing workers at will, firms can turn over staff quickly, constrained only by the standard prohibitions against discriminatory practices. In other words, without the regulatory and administrative constraints that saddle publicly funded projects, private firms can move through the innovative process faster.

Another advantage of the capitalist model is that profits—potential and actual—provide some measure of how well a company is innovating. In particular, for the many private firms that sell some of their shares to the public on stock exchanges, prices serve as a form of feedback from investors and the market. Imagine that a publicly-traded retailer announces the imminent launch of an affordable, solar-powered computer that boasts power and speeds to rival Apple’s newest models. In the hours following the press release, the retailer’s stock value triples. A week later, while at a tech conference in the Colorado mountains, the retailer’s CEO lets it slip that the first prototype will actually retail for about four thousand dollars. Unfortunately for the CEO, he was wearing a hot mic. The quote is made public in an article titled “No debt-saddled, environmentally-conscious millennial will shed $4,000 for a computer!” The stock value immediately plummets.

The original rise in the retailer’s share value communicates that investors believe in the product as a profitable enterprise, and that they see this type of innovation as a worthwhile pursuit. The drop, on the other hand, suggests that they believe this specific product would be more marketable and therefore more profitable if it were developed for an audience beyond high-end consumers. The turn in the stock value can embolden the retailer—through its management, Board of Directors, or shareholders—to revisit its plan to innovate. It also signals to competitors that their innovation of a similar product could be well received, especially if they can overcome the original product’s weaknesses.

But prioritizing profit is a double-edged sword that can hamper innovation. Owning the proprietary rights allows private firms—through anti-competitive tools like non-compete agreements, patents, and licenses—to block the workers who put labor into the innovation process from applying their extensive technical expertise and intimate understanding of the product to improve it substantially. This becomes especially relevant once the workers leave the division in which they worked, or leave the firm altogether. Understandably, this lack of control and ownership will cause some workers, however passionate they may be about a project, to be less willing to maximize their contribution to the innovation.

Of course, the so-called nimbleness that allows firms to make drastic changes like mass layoffs is extremely harmful to the workers. This is no fluke. The capitalist economy thrives on a reserve army of labor. Inching closer to full employment makes workers scarcer, which empowers the labor force as a whole to bargain for higher wages and better work conditions. These threaten the firm’s bottom line. So, the capitalist economy is structured to keep the balance of power tilted toward the owners of capital. Positions that pay well (and less than well) come with the precariousness of at-will employment and disappearing union power. A constant pool of unemployed labor is maintained through layoffs and other tactics like higher interest rates, which the government imposes to help slow growth and thereby hiring. This system harms the potential for innovation, too.

The fear of losing work can dissuade workers from taking risks, experimenting, or speaking up when they identify changes that could improve an existing approach—all actions that foster innovation. Meanwhile, thousands of individuals who could be contributing to the innovative process are instead involuntarily unemployed. This model also encourages monopolization, as concentrating market power gives private firms the most control over how much profit they can extract. But squashing competition that could contribute fresh ideas hurts every phase of the innovation process, while leaving workers fewer workplaces in which to innovate.

Deferring to profit causes many areas of R&D to go unexplored. Private firms have less reason to invest in innovations likely to be made universally available for free if managers or investors do not see much upside for the firm’s bottom line. In theory, the slack in private research can be picked up by the public sector. In reality, however, decades of austerity measures threaten the public’s ability to underwrite risky and inefficient research. Both the Democratic and Republican parties increasingly adhere to a neoliberal ideology that vilifies “big government,” promotes running government like a business, pretends that government budgets should mirror household budgets or the private firm’s balance sheet, and rams through privatization under the guise of so-called public-private partnerships and private subcontractors.

In the United States, public investment in R&D has been trending downward. As documented in a 2014 report from the Information Technology & Innovation Foundation, “[f]rom 2010 to 2013, federal R&D spending fell from $158.8 to $133.2 billion … Between 2003 and 2008, state funding for university research, as a share of GDP, dropped on average by 2 percent. States such as Arizona and Utah saw decreases of 49 percent and 24 percent respectively.” Even if public investment in the least profitable aspects of research suddenly surged, in our current model the private sector would continue to be the primary driver of development, production, and distribution. Where there remains little potential for profit, private firms will be reluctant to advance to the next phases of the innovation process. Public-private projects raise similar concerns. Coordinated efforts can increase private investment by spreading some costs and risk to the public. But to attract private partners in the first place, the public sector has a greater incentive to prioritize R&D projects with more financial upsides.

This is how the quest for profits and a tight grip over proprietary rights, both important features of the capitalist model, discourage risk. Innovation is bound to plateau after a few years, as firms increasingly favor minor aesthetic tweaks and updates over bold ideas while preventing other avenues of innovation from blossoming. At the same time, massive amounts of capital continue to flow into the hands of a few. The price of innovating under capitalism is thus both decreased innovation and decreased equality. The idea that this approach to innovation must be our best and only option is a delusion.

As I see it, four ingredients are key to kindling innovation. First, there must be problems requiring solutions (an easy one to meet). Second, there must be capital and resources available to invent, develop, produce, and distribute the innovative product. Third, there must be actual human beings available to participate in every phase of the innovation process. And fourth, at least some of these human beings must have the creativity and motivation to participate in the innovation process. The question isn’t really whether a socialist economy can provide these four ingredients at all (it can), but rather whether it can innovate better than a capitalist economy (it can).

by Vanessa Bee, Current Affairs |  Read more:
Image: uncredited

How Fentanyl Took over Pennsylvania

The first time Nicki Saccomanno used fentanyl, she overdosed.

It was 2016, and the 38-year-old from Kensington hadn't known that the drugs she'd bought had been cut with the deadly synthetic opioid. She just remembers injecting herself with a bag, and then waking up surrounded by paramedics frantically trying to revive her.

Saccomanno, who has been addicted to heroin for 10 years, was shaken. But, before long, there was barely anything else to take but fentanyl to stave off the intense pain of withdrawal. Every corner, it seemed, was selling it. Saccomanno and other longtime heroin users found themselves forced to adapt.

For younger users, like the twentysomethings who live in the camps off Lehigh Avenue, fentanyl is all they've ever known. Like others before them, many graduated from using legal painkillers to illicit opioids in the last few years — except when they turned to the streets to feed their addictions, they were buying a drug much more powerful than their older counterparts had started on.

Young and old are paying for it with their lives. Fentanyl was present in 84 percent of Philadelphia's 1,217 fatal overdoses last year, and in 67 percent of the state's 5,456 overdose deaths in 2017, according to a wide-ranging report on the state of the opioid crisis in Pennsylvania released this month by the U.S. Drug Enforcement Administration.

The report shows how, over the last five years, the opioid crisis ballooned into an overdose crisis — how fentanyl contaminated the state's heroin supply, overwhelmed county morgues with overdose victims, and shocked advocates, people in addiction, and law-enforcement officials alike with its sudden ubiquity.

But to all of them, the explosion of fentanyl makes a kind of terrible sense: Fentanyl is significantly cheaper to produce than heroin. It draws a significantly larger profit. It's significantly more powerful and more addictive than heroin, even Kensington's supply, which has long been known as the cheapest and purest in the country.

These days, Saccomanno uses a combination of heroin and fentanyl, even though she hates it.

"You get sicker," she said. "You need to get more fentanyl more often. It makes being able to get well and stay well even harder. But you can't find anything else."

‘A dramatic shift’

Pure economics.

That's what law-enforcement officials say is driving the rise of fentanyl in Pennsylvania.

It has legitimate use as a drug to treat serious pain, like that in cancer patients, and has been on the illicit drug market for at least 15 years, said Pat Trainor, spokesman for the Philadelphia branch of the DEA. But it mostly turned up in unusual rashes of overdoses and would then disappear from the scene again.

"Two or three years ago, we really saw a pretty dramatic shift," Trainor said. "It was initially seen as a cut or an adulterant in low-quality heroin, and it's really shifted now that it's pretty much largely — but not completely — replaced most of the heroin supply in Philadelphia."

In Philadelphia, he said, a kilogram of heroin, or 2.2 pounds, sells for $50,000 to $80,000, and a drug trafficker can make about $500,000 in profit off it. A kilogram of fentanyl sells for $53,000 to $55,000, is 50 to 100 times stronger, and can turn a profit of up to $5 million.

"For a lot of drug-trafficking organizations, it's that simple," said Trainor.

Most of the fentanyl that ends up in Pennsylvania is manufactured in China and smuggled through Mexican drug-trafficking organizations into the United States along the same routes used to traffic heroin, according to the DEA report.

People have also tried to make it closer to home, however. Unlike heroin, which is derived from opium poppies, fentanyl and its analogues can be produced in a lab. Earlier this year, DEA agents raided what they thought was a methamphetamine lab in a hotel room in western Pennsylvania. To their surprise, it turned out that the room's occupant had been trying to make fentanyl.

Seeking out fentanyl

Earlier this year, researchers from the Philadelphia Department of Public Health, conducting a survey of opioid users at Kensington's needle exchange, posed a question to 400 people in active addiction.

They knew that most of the city's heroin supply had already been tainted with fentanyl, and wanted to know how people in addiction were reacting. And so they asked drug users what they would do if they knew that fentanyl was in the drugs they were buying.

The answers they received shocked them. Of the drug users the Health Department surveyed, 45 percent told researchers that they weren't trying to avoid fentanyl at all — that they would be more likely to use a bag of fentanyl.

"There was more acceptance — it had become part of the community in a way it hadn't been initially. It was actually something people were going for because it was an enhanced high," said Kendra Viner, manager of the department's Opioid Surveillance Program. "And people between 25 and 34 years old were significantly more likely to say they would seek out fentanyl."

by Aubrey Whelan, Philadelphia Inquirer | Read more:
Image: John Duchneskie

Eight Reasons a Financial Crisis is Coming


It's been about 10 years since the last financial crisis. FocusEconomics wants to know if another one is due.

The short answer is yes.

In the last 10 years not a single fundamental economic flaw has been fixed in the US, Europe, Japan, or China.

The Fed was behind the curve for years, contributing to the bubble. Massive rounds of QE in the US, EU, and Japan created extreme equity and junk bond bubbles.

Trump's tariffs are ill-founded, as is Congressional spending wasted on war.

Potential Catalysts
  1. Junk Bond Bubble Bursting
  2. Equity Bubble Bursting
  3. Italy
  4. Tariffs
  5. Brexit
  6. Pensions
  7. Housing
  8. China
Many will blame the Fed. The Fed is surely to blame, but it is prior bubble-blowing policy, not current rate hikes, that is the problem.

by Mike "Mish" Shedlock, MishTalk |  Read more:
Image: uncredited
[ed. See also: Smoot–Hawley Tariff Act (Prediction: you'll be hearing a lot about this in the coming few months). And: The Music Fades Out (John P. Hussman, Ph.D.)]

Nike Air Huarache Drift Breathe
via:

Axis lighting by SVOYA studio
via:

Tuesday, October 23, 2018

Gish Gallop

The Gish Gallop should not be confused with the argumentum ad nauseam, in which the same point is repeated many times. In a Gish Gallop, many bullshit points are given all at once.
“If I were wrong, then one would have been enough!” (Albert Einstein, commenting on the book 100 Authors Against Einstein)
The Gish Gallop (also known as proof by verbosity) is the fallacious debate tactic of drowning your opponent in a flood of individually-weak arguments in order to prevent rebuttal of the whole argument collection without great effort. The Gish Gallop is a belt-fed version of the on the spot fallacy, as it's unreasonable for anyone to have a well-composed answer immediately available to every argument present in the Gallop. The Gish Gallop is named after creationist Duane Gish, who often abused it.

Although it takes a trivial amount of effort on the Galloper's part to make each individual point before skipping on to the next (especially if they cite from a pre-concocted list of Gallop arguments), a refutation of the same Gallop may likely take much longer and require significantly more effort (per the basic principle that it's always easier to make a mess than to clean it back up again).

The tedium inherent in untangling a Gish Gallop typically allows for very little "creative license" or vivid rhetoric (in deliberate contrast to the exciting point-dashing central to the Galloping), which in turn risks boring the audience or readers, further loosening the refuter's grip on the crowd.

This is especially true in that the Galloper need only win a single one out of all his component arguments in order to be able to cast doubt on the entire refutation attempt. For this reason, the refuter must achieve a 100% success ratio (with all the yawn-inducing elaboration that goes with such precision). Thus, Gish Galloping is frequently employed (with particularly devastating results) in timed debates. The same is true for any time- or character-limited debate medium, including Twitter and newspaper editorials.

Examples of Gish Gallops are commonly found online, in crank "list" articles that claim to show "X hundred reasons for (or against) Y". At the highest levels of verbosity, with dozens upon dozens or even hundreds of minor arguments interlocking, each individual "reason" is — upon closer inspection — likely to consist of a few sentences at best.

Gish Gallops are almost always performed with numerous other logical fallacies baked in. The myriad component arguments constituting the Gallop may typically intersperse a few perfectly uncontroversial claims — the basic validity of which are intended to lend undue credence to the Gallop at large — with a devious hodgepodge of half-truths, outright lies, red herrings and straw men — which, if not rebutted as the fallacies they are, pile up into egregious problems for the refuter.

There may also be escape hatches or "gotcha" arguments present in the Gallop, which are — like the Gish Gallop itself — specifically designed to be brief to pose, yet take a long time to unravel and refute.

However, Gish Gallops aren't impossible to defeat — just tricky (not to say near-impossible for the unprepared). Upon closer inspection, many of the allegedly stand-alone component arguments may turn out to be nothing but thinly-veiled repetitions or simple rephrasings of the same basic points — which only makes the list taller, not more correct (hence "proof by verbosity"). This essential flaw in the Gallop means that a skilled rebuttal of one component argument may in fact be a rebuttal to many.

by Rational Wiki |  Read more:
Image: Rational Wiki
[ed. For example: It's Just Incredible What Some People Can Believe]

Not the Man They Think He Is at Home

There’s an incident from early on in Elton John’s career that reminds us how peculiar it has been. The year was 1970. John’s first album, Empty Sky, had been released in the U.K. but had gone nowhere. His label, supportive of him in fits and starts, eventually laid out for a decent producer and some lush orchestrations for his second album, which was self-titled and came out that spring. Today, we know Elton John as a lasting and flamboyant star; put that aside for right now and remember that back then, no one was thinking in those terms about the plainly talented but pudgy and somewhat morose 22-year-old they were working with at the time. The first single from Elton John, “Border Song,” was a flop. The label’s next move, for some reason, was to dig out a non-album track and release it as the second single, hoping to garner more attention that way. That release, “Rock and Roll Madonna,” went nowhere, either. The label went back to the album, poked around some more, and made a third try, with “Take Me to the Pilot.”

It wasn’t a hit.

By this time, other things were going on in John’s career. The shy boy behind the scenes found a raucous personality on stage; he and a small band had flown to America and had wowed the industry with a cacophonous six-night stand at the famous Troubadour nightclub in L.A. And by this time, John had finished a third album, Tumbleweed Connection, which was released that October.

That’s when something interesting happened. Late in the year, some radio DJs checked out the B-side of the “Pilot” single, which was a throwaway track from the Elton John album. They began to play it. This was not typical at the time. The B-side was a forlorn-sounding piano-based track.

The first words of the song went, “It’s a little bit funny / This feeling inside …”

A few months later, in early 1971, nearly a year after the release of the album, the B-side was a top-ten hit both in the U.S. and in the U.K. The track, “Your Song,” is a standard today, nearly 50 years on; it is one of the most played radio singles of all time, and has been covered by scores of artists, perhaps hundreds. But isn’t it weird that no one — label execs, marketers, or journos — thought it was a single back then, or even notable? For some reason, even experienced music industry people at the time couldn’t “hear” the song.

In some fundamental way, “Your Song” was unusual. The melody is sturdy, of course, and the chorus is plainly as lovely as can be, but there was something about its formal presentation — coursing strings, a prominent, tasteful bass, a subtle but insistent set of piano fills — that wasn’t registering. Maybe it just wasn’t cool enough. As for the lyrics, their premise — “I’m writing a song about writing a song for you” — has some remote Cole Porter overtones, I guess, but there’s nothing droll or arch about it; indeed, if anything, it suffers from over-sincerity. It was the end of the psychedelic era, remember, and the more somber singer-songwriters had their roots in folk and blues. “Your Song” is arguably a traditional pop ballad, but it’s conceived and performed with a somewhat shambling but definite rock-and-roll authenticity. The writing is elegant and prosaic at the same time; are the lyrics conversational and halting, or exquisitely crafted to sound that way? I think, in 1970, the people first confronted by the song couldn’t process what is, for us, today, its patent brilliance, because they hadn’t heard a song like it before. It’s familiar to us today, because we live in a world that Elton John has made his own.

Indeed, to many, John is a bit too obvious, now: the teddy-bear pop-rock star, the burbling sidekick of royalty, the aging, bewigged gay icon. But that cozy mien has always hidden something uncompromising and a bit strange underneath. He is a dubious figure set against the high intellectualism of Joni Mitchell, say, or the assuredly more dangerous work of Lou Reed, or that of Bowie, and on and on. But in his own way, originally, and then definitely as his acclaim grew, he found his own distinctive passage through the apocalypse of the post-Beatles pop landscape — and offered us ever more ambitious pop constructions, culminating in some sort of weird masterpiece, Goodbye Yellow Brick Road, and then an odd autobiographical song cycle, Captain Fantastic and the Brown Dirt Cowboy, in which he looked back to examine his life and the years of insecurity preceding his stardom.

Those were his artistic achievements. His commercial ones were even bigger. Bowie looms large in rock history now, but in the U.S., in the early ’70s, he was nothing close to a star. John famously took sartorial flamboyance to almost transvestite levels but was treated as a curiosity, and never registered as transgressive. He had seven No. 1 albums in a row in the U.S. These albums, in a three-and-a-half-year period, spent a total of 39 weeks at No. 1, a bit less than a quarter of that overall span. By Billboard’s rankings, he is by far the biggest album act of the 1970s (despite the fact that he didn’t have a top-ten album after 1976). He is also Billboard’s biggest singles act of the decade, and the magazine’s third-biggest singles artist of all time, with nine No. 1 singles and 27 top-ten hits, which is a lot. In all, he’s sold more than 150 million albums and 100 million singles.

Fifty years into his career, John has embarked on what is supposed to be his absolutely final Farewell-Good-bye-I’m-Retiring-I-Really-Mean-It Tour. The first time he announced his final show, for those keeping score, was in 1976. “Who wants to be a 45-year-old entertainer in Las Vegas like Elvis?” he said at the time. (He played his 449th and 450th Las Vegas shows, supposedly his final ones, this May, at the age of 71.) The new tour began in Allentown and Philadelphia and will come to Madison Square Garden on October 18 and 19, and then again on November 8 and 9 — after which the Farewell Yellow Brick Road tour has shows scheduled through 2019.

But for the record, it should be said that if there is one thing John is not, it’s obvious. He doesn’t write his own lyrics; he has spoken to us, if he has at all, through the words of other lyricists, most prominently Bernie Taupin, with whom he formed a songwriting partnership in 1967 that lasted through the entirety of his classic years. Over the decades, the themes and subjects of Taupin’s words have benignly reflected onto the singer’s persona, even though we have no reason to think they accurately represent it. And John’s songwriting process makes their significance even more obscure. The pair didn’t (and still don’t) work together; instead, John walks off with Taupin’s scrawls and, with uncanny speed and focus, makes the songs he wants out of them. (Band members and producers over the years have testified that the composition of some of his most famous works was accomplished in 15 or 20 minutes.) In effect, he has always made Taupin’s words mean what he wants them to mean, giving himself the room to identify with or distance himself from them at will. In other words, if you think you know Elton John through his songs — you don’t.

by Bill Wyman, Vulture |  Read more:
Image: Jack Robinson/Condé Nast via Getty Images

First Twitter Gave Me Power. Then I Felt Hopeless.

From October 2015 to the present day, I have lived approximately 168 different lives on the internet. I was Eve the Nobody before I was Eve the Sex Writer before I was Eve the Comedian before I was Eve the Depressed Girl before I was Eve the Drunk before I was Eve the Feminist before I was Eve the Tech Blogger before I was Eve the Democratic Socialist before I was Eve the Hater before I was Eve the Teetotaler before I was Eve the Professional Politics Writer before I was Eve the Sword Girl before I became whichever iteration of myself I am today.

Translating the essence of who you are into a digestible product is a strange way to live, especially when you’re a young adult and your sense of self is in flux. It was never my main intention to peddle my personality for a living, but in the era of social media, the personal brand reigns supreme; self-commodification was an inevitable outcome for a young writer like myself—extremely online, comfortable with confessing her most deranged impulses to a large audience, and looking for affirmation and love. Translating the ups and downs of my existence into my personal brand was a way of life for me. The more I viewed my life as something to be consumed by other people, capitalizing on all the pain and pleasure and resentment and fear that come along with being alive, the more compulsively I posted. My way of being online was always unsustainable, and each time I couldn’t sustain it any longer, I shed my skin, and evolved into a slightly more adept version of myself.

Let’s go back to October 2015: My life was about to change forever because I was about to post my first viral tweet. I had graduated college a year before, and even though I knew that I wanted to write for a living, I was unsure exactly how to realize that ambition. After a year of aimless drifting, I ran into a friend at a bar who was working as an editor at a small web publication, and I started freelancing personal essays and silly blog posts while working part-time at a coffee shop. Every now and then, I’d tweet a mundane observation or a link to an article, but I didn’t have enough followers to get in deep. (...)

Fast-forward to 2016: I am on Twitter for hours and hours and hours every day, so it’s not entirely surprising that I am also lonely and depressed. I am tweeting through it all and I am handsomely rewarded for my social media impulses: My follower count balloons to 10,000 and it just keeps getting bigger. To me, that means I am special and I am doing something right. I’ve successfully capitalized on the internet notoriety I received from my first viral tweet to realize my career ambitions—I am freelance writing for whoever will have me and my Twitter brand is key to my hustle. I date guys who don’t like me back and then get paid by publications like Cosmopolitan and New York Magazine to spill the details of my disastrous love life, among other things. I feel like a legitimate writer, and I am reveling in it, and yet I still feel empty. Even though I panic about the toll my social media compulsion is taking on me, I tweet and I tweet and I tweet some more. I do it because I tell myself I wouldn’t be where I am—eking out a living off writing—if it wasn’t for all my tweeting. It’s not like I get the majority of my work through any connection or secret “in.” Instead, it’s because people see me on Twitter. I feel indebted to the social platform, and unlike the thrill of my first viral tweet, it feels like a burden. I don’t want to admit it, but I am scared.

Now it’s March 2017: I have just started a new job covering politics for VICE. I don’t think I would’ve gotten this job without my Twitter; after all, I now have 40,000 followers, and those are the people who click on my articles, and that’s good for business. It’s what makes me a valuable asset. As I pivot from oversharing my personal plight to thoughtlessly spewing out half-formed ideas about our current political hell, my following surges. I’ll write an aggressive political take, and it will make some people mad and that will lead to more followers, and so it goes.

As 2018 swings into full gear, my life neatens up and I can no longer ignore the cracks in my personal brand. I have a full-time job and I am in a serious long-term relationship with an amazing man whose love and companionship nourishes me in ways the affirmation of thousands of strangers never could. I hate Twitter. I have 79,000 followers and I still fucking hate it. I also still use it constantly. My timeline is a stream of infinite negativity, of horrific news, and everybody yelling at one another, and maybe I’m just getting older, but suddenly I am exhausted by all the cyber-rage. Every day online feels like Gamergate. The internet is angrier and more savage than it’s ever been, and it’s not safe to use Twitter as loosely as I once did. For the first time in years, my impulse to inform the world of all my inane passing thoughts and feelings has fizzled out. Moreover, I am gripped with fear that an amorphous Twitter beast will punish me for all the crazy things I’ve publicly shared over the years, that all my meanest and most callous moments will come back to bite me in the ass.

by Eve Peyser, Vice |  Read more:
Image: Kitron Neuschatz & Lia Kantrowitz

Monday, October 22, 2018


Jean A. Mercier, Le Rêve de Jean-François, 1943
via:

Want to Know When You’re Going to Die?

It's the ultimate unanswerable question we all face: When will I die? If we knew, would we live differently? So far, science has been no more accurate at predicting life span than a $10 fortune teller. But that’s starting to change.

The measures being developed will never get good enough to forecast an exact date or time of death, but insurance companies are already finding them useful, as are hospitals and palliative care teams. “I would love to know when I’m going to die,” says Brian Chen, a researcher who is chief science officer for Life Epigenetics, a company that services the insurance industry. “That would influence how I approach life.”

The work still needs to be made more practical, and companies have to figure out the best uses for the data. Ethicists, meanwhile, worry about how people will cope with knowing the final secret of life. But like it or not, the death predictor is coming.

The clock

Steve Horvath, a UCLA biostatistician who grew up in Frankfurt, Germany, describes himself as “very straight,” while his identical twin brother is gay. So he had a personal interest when, a few years ago, a colleague asked him for help analyzing biological data from the saliva of twins with opposite sexual orientations. The colleague was trying to detect chemical changes that would indicate whether certain genes were turned on or off.

The hypothesis was that these so-called epigenetic changes, which alter the activity of DNA but not the DNA sequence itself, might help explain why two people with identical genes differ in this way. But Horvath found “zero signal” in the epigenetics of the twins’ saliva. Instead, what caught his attention was a powerful link between epigenetic changes and aging. “I was blown away by how strong the signal was,” he says. “I dropped most other projects in my lab and said: ‘This is the future.’”

Horvath became particularly intrigued by how certain chemical changes to cytosine—one of the four DNA bases, or “letters” of the genetic code—make genes more or less active. Given someone’s chronological age, looking for these changes in that person’s DNA can reveal whether the person’s body is aging unusually fast or slowly. His team tested this epigenetic clock on 13,000 blood samples collected decades ago, from people whose subsequent date of death was known. The results revealed that the clock can be used to predict mortality.

Because most common diseases—cancer, heart disease, Alzheimer’s—are diseases of aging, the ticking of Horvath’s clock predicts how long someone will live and how much of that life will be free of these diseases (though it doesn’t foretell which ones people will get). “After five years of research, there is nobody who disputes that epigenetics predicts life span,” he says. (...)

Slow the ticking

As we age, the cytosine at hundreds of thousands of spots in our DNA either gains or loses methyl chemical groups (CH3). Horvath’s insight was to measure these increases and decreases in methylation, find the 300 to 500 changes that matter most, and use those to make his clocks. His findings suggest that the speed of the clock is strongly influenced by underlying genes. He estimates that about 40% of the ticking rate is determined by genetic inheritance, and the rest by lifestyle and luck.
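
[ed. The article doesn't spell out the statistics, but clocks like this are typically built by letting a penalized regression pick the few hundred informative methylation sites out of hundreds of thousands measured. Here is a minimal sketch with synthetic data, using scikit-learn's ElasticNetCV; every number, the simulated age drift, and the site counts are illustrative assumptions, not Horvath's actual pipeline.]

```python
# Minimal synthetic sketch of an "epigenetic clock": predict age from
# methylation levels, letting the penalty select a subset of sites.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)

n_people, n_sites = 400, 2000        # stand-ins; real arrays cover far more sites
ages = rng.uniform(20, 90, n_people)

# Simulated methylation "beta values" in [0, 1]: a few hundred sites drift
# slightly with age, the rest are noise.
informative = rng.choice(n_sites, 300, replace=False)
X = rng.uniform(0.0, 1.0, (n_people, n_sites))
X[:, informative] += 0.004 * (ages[:, None] - 55)
X = np.clip(X, 0.0, 1.0)

# The elastic-net penalty shrinks most coefficients to exactly zero, so the
# fitted clock ends up depending on only a fraction of the sites.
clock = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X, ages)
selected = np.flatnonzero(clock.coef_)
print(f"sites retained by the clock: {selected.size}")

# "Epigenetic age" is the model's prediction; the gap between it and
# chronological age ("age acceleration") is what gets related to mortality.
age_acceleration = clock.predict(X) - ages
print(f"person 0 age acceleration: {age_acceleration[0]:+.1f} years")
```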

Morgan Levine, who completed postdoctoral research in Horvath’s lab and now runs her own lab at Yale, is starting to compare an individual’s epigenetic profile with the profile of cells from the lining of a healthy umbilical cord. The more people deviate from that standard, the worse off they are likely to be. She thinks she will eventually be able to compare various epigenetic age measures to predict even in childhood who is going to be at greatest risk of which diseases—when it’s still early enough to change that future. “Your genes aren’t your fate, but even less so with things like epigenetics,” she says. “There definitely should be things we can do to delay aging if we can just figure out what they are.”

by Karen Weintraub, MIT Review | Read more:
Image: Vera Kratochvil/public domain
[ed. Whether it's epigenetics, bionics, gene editing or transhumanist brain uploads, immortal life is coming. We just need to survive politicians, climate change, bio-terrorism and nuclear war first. See also: Actors are digitally preserving themselves to continue their careers beyond the grave.]