Tuesday, March 7, 2017

Bill Gates: The Robot That Takes Your Job Should Pay Taxes

[ed. See also: What's Wrong With Bill Gates' Robot Tax]

Robots are taking human jobs. But Bill Gates believes that governments should tax companies’ use of them, as a way to at least temporarily slow the spread of automation and to fund other types of employment.

It’s a striking position from the world’s richest man and a self-described techno-optimist who co-founded Microsoft, one of the leading players in artificial-intelligence technology.

In a recent interview with Quartz, Gates said that a robot tax could finance jobs taking care of elderly people or working with kids in schools, for which needs are unmet and to which humans are particularly well suited. He argues that governments must oversee such programs rather than relying on businesses, in order to redirect the jobs to help people with lower incomes. The idea is not totally theoretical: EU lawmakers considered a proposal to tax robot owners to pay for training for workers who lose their jobs, though on Feb. 16 the legislators ultimately rejected it.

“You ought to be willing to raise the tax level and even slow down the speed” of automation, Gates argues. That’s because the technology and business cases for replacing humans in a wide range of jobs are arriving simultaneously, and it’s important to be able to manage that displacement. “You cross the threshold of job replacement of certain activities all sort of at once,” Gates says, citing warehouse work and driving as some of the job categories that in the next 20 years will have robots doing them. (...)

Below is a transcript, lightly edited for style and clarity.

Quartz: What do you think of a robot tax? This is the idea that in order to generate funds for training of workers, in areas such as manufacturing, who are displaced by automation, one concrete thing that governments could do is tax the installation of a robot in a factory, for example.

Bill Gates: Certainly there will be taxes that relate to automation. Right now, the human worker who does, say, $50,000 worth of work in a factory, that income is taxed and you get income tax, social security tax, all those things. If a robot comes in to do the same thing, you’d think that we’d tax the robot at a similar level.

And what the world wants is to take this opportunity to make all the goods and services we have today, and free up labor, let us do a better job of reaching out to the elderly, having smaller class sizes, helping kids with special needs. You know, all of those are things where human empathy and understanding are still very, very unique. And we still deal with an immense shortage of people to help out there.

So if you can take the labor that used to do the thing automation replaces, and financially and training-wise and fulfillment-wise have that person go off and do these other things, then you’re net ahead. But you can’t just give up that income tax, because that’s part of how you’ve been funding that level of human workers.

And so you could introduce a tax on robots…

There are many ways to take that extra productivity and generate more taxes. Exactly how you’d do it, measure it, you know, it’s interesting for people to start talking about now. Some of it can come on the profits that are generated by the labor-saving efficiency there. Some of it can come directly in some type of robot tax. I don’t think the robot companies are going to be outraged that there might be a tax. It’s OK.

Could you figure out a way to do it that didn't disincentivize innovation?


Well, at a time when people are saying that the arrival of that robot is a net loss because of displacement, you ought to be willing to raise the tax level and even slow down the speed of that adoption somewhat to figure out, “OK, what about the communities where this has a particularly big impact? Which transition programs have worked and what type of funding do those require?”

You cross the threshold of job-replacement of certain activities all sort of at once. So, you know, warehouse work, driving, room cleanup, there’s quite a few things that are meaningful job categories that, certainly in the next 20 years, being thoughtful about that extra supply is a net benefit. It’s important to have the policies to go with that.

People should be figuring it out. It is really bad if people overall have more fear about what innovation is going to do than they have enthusiasm. That means they won’t shape it for the positive things it can do.

by Kevin J. Delaney, Quartz |  Read more:
Image: Quartz

Public Pensions Are in Better Shape Than You Think

The beleaguered condition of state and local pension plans is one of those ongoing disaster stories that crops up about once a week somewhere. The explanation usually goes something like this: Irresponsible politicians and greedy public employee unions created over-generous benefit schemes, leading to pension plans that aren't "fully-funded" and to eventual fiscal crisis. That in turn necessitates benefit cuts, contribution hikes, or perhaps even abolition of the pension scheme.

But a fascinating new paper from Tom Sgouros at UC Berkeley's Haas Institute makes a compelling argument that the crisis in public pensions is to a large degree the result of terrible accounting practices. (Stay with me, this is actually interesting.) He argues that the typical debate around public pensions revolves around accounting rules which were designed for the private sector — and their specific mechanics both overstate some dangers faced by public pensions and understate others.

To understand Sgouros' argument, it's perhaps best to start with what "fully-funded" means. This originally comes from the private sector, and it means that a pension plan has piled up enough assets to pay 100 percent of its existing obligations if the underlying business vanishes tomorrow. Thus if existing pensioners are estimated to collect $100 million in benefits before they die, but the fund only has $75 million, it has an "unfunded liability" of $25 million.

This approach makes reasonably good sense for a private company, because it really might go out of business and be liquidated at any moment, necessitating the pension fund to be spun off into a separate entity to make payouts to the former employees. But the Government Accounting Standards Board (GASB), a private group that sets standards for pension accounting, has applied this same logic to public pension funds as well, decreeing that they all should be 100 percent funded.

This makes far less sense for governments, because they are virtually never liquidated. Governments can and do suffer fiscal problems or even bankruptcy on occasion. But they are not businesses — you simply can't dissolve, say, Arkansas and sell its remaining assets to creditors because it's in financial difficulties. That gives governments a permanence and therefore a stability that private companies cannot possibly have.

The GASB insists that it only wants to set standards for measuring pension fund solvency. But its analytical framework has tremendous political influence. When people see "unfunded liability," they tend to assume that this is a direct hole in the pension funding scheme that will require some combination of benefit cuts or more funding. Governments across the nation have twisted themselves into knots trying to meet the 100-percent benchmark.

While all pensions have contributions coming in from workers, the permanence of those contributions is far more secure for public pensions. Plus, those contributions can be used to pay a substantial fraction of benefits.

Indeed, one could easily run a pension scheme on a pay-as-you-go basis, without any fund at all (this used to be common). That might not be a perfect setup, since it wouldn't leave much room for error, but practically speaking, public pension funds can and do cruise along indefinitely only 70 percent or so funded.

This ties into a second objection: how misleading the standard calculation of future pension liabilities is.

A future pension liability is determined by calculating the "present value" of all future benefit payments, with a discount rate to account for inflation and interest rates. But this single number makes no distinction between liabilities that are due tomorrow, and those that are due gradually over, say, decades.
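The point about timing can be made concrete with a short sketch. The numbers below are illustrative, not from the paper: two liability streams are discounted at the same rate and come out with nearly identical present values, even though one is due almost immediately and the other trickles out over three decades.

```python
# Illustrative sketch (hypothetical figures, in $ millions) of how a single
# present-value number hides the timing of pension obligations.

def present_value(cash_flows, discount_rate):
    """Discount a list of (year, payment) pairs back to today."""
    return sum(pmt / (1 + discount_rate) ** year for year, pmt in cash_flows)

rate = 0.05

# Liability A: $100M owed one year from now.
liability_a = [(1, 100.0)]

# Liability B: $6.5M a year for 30 years.
liability_b = [(year, 6.5) for year in range(1, 31)]

print(round(present_value(liability_a, rate), 1))  # ~95.2
print(round(present_value(liability_b, rate), 1))  # ~99.9
```

On paper the two "unfunded liabilities" look about the same size, but Liability A could sink a fund next year while Liability B leaves decades of room for small course corrections.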

Fundamentally, a public pension is a method by which retirees are supported by current workers and financial returns, and one of its great strengths is its long time horizon and large pool of mutual supporters. It gives great leeway to muddle through problems that only crop up very slowly over time. If huge problems really will pile up, but only over 70 years, there is no reason to lose our minds now — small changes, regularly adjusted, will do the trick.

Finally, a 100-percent funding level — the supposed best possible state for a responsible pension manager — can actually be dangerous. It means that current contributions are not very necessary to pay benefits, sorely tempting politicians to cut back contributions or increase benefits. And because asset values tend to fluctuate a lot, this can leave pension funds seriously overextended if there is a market boom — creating the appearance of full funding — followed by a collapse. Numerous state and local public pensions were devastated by just this process during the dot-com and housing bubbles.

by The Week |  Read more:
Image: Haas Institute

Monday, March 6, 2017


Gary Larson

Cressida Campbell
(Australian, b.1960), Verandah
via:

Instead of ‘1984,’ Read This

Although America’s political system seems unable to stimulate robust, sustained economic growth, it at least is stimulating consumption of a small but important segment of literature. Dystopian novels are selling briskly — Aldous Huxley’s “Brave New World” (1932), Sinclair Lewis’s “It Can’t Happen Here” (1935), George Orwell’s “Animal Farm” (1945) and “1984” (1949), Ray Bradbury’s “Fahrenheit 451” (1953) and Margaret Atwood’s “The Handmaid’s Tale” (1985), all warning about nasty regimes displacing democracy.

There is, however, a more recent and pertinent presentation of a grim future. Last year, in her 13th novel, “The Mandibles: A Family, 2029-2047,” Lionel Shriver imagined America slouching into dystopia merely by continuing current practices.

Shriver, who is fascinated by the susceptibility of complex systems to catastrophic collapses, begins her story after the 2029 economic crash and the Great Renunciation, whereby the nation, like a dissolute Atlas, shrugged off its national debt, saying to creditors: It’s nothing personal. The world is not amused, and Americans’ subsequent downward social mobility is not pretty.

Florence Darkly, a millennial, is a “single mother” but such mothers now outnumber married ones. Newspapers have almost disappeared, so “print journalism had given way to a rabble of amateurs hawking unverified stories and always to an ideological purpose.” Mexico has paid for an electronic border fence to keep out American refugees. Her Americans are living, on average, to 92, the economy is “powered by the whims of the retired,” and, “desperate to qualify for entitlements, these days everyone couldn’t wait to be old.” People who have never been told “no” are apoplectic if they can’t retire at 52. Antibiotic-resistant bacteria are ubiquitous, so shaking hands is imprudent. (...)

Social order collapses when hyperinflation follows the promiscuous printing of money after the Renunciation. This punishes those “who had a conscientious, caretaking relationship to the future.” Government salaries and Medicare reimbursements are “linked to an inflation algorithm that didn’t require further action from Congress. Even if a Snickers bar eventually cost $5 billion, they were safe.”

In a Reason magazine interview, Shriver says, “I think it is in the nature of government to infinitely expand until it eats its young.” In her novel, she writes:

“The state starts moving money around. A little fairness here, little more fairness there. . . . Eventually social democracies all arrive at the same tipping point: where half the country depends on the other half. . . . Government becomes a pricey, clumsy, inefficient mechanism for transferring wealth from people who do something to people who don’t, and from the young to the old — which is the wrong direction. All that effort, and you’ve only managed a new unfairness.”

Florence learns to appreciate “the miracle of civilization.” It is miraculous because “failure and decay were the world’s natural state. What was astonishing was anything that worked as intended, for any duration whatsoever.” Laughing mordantly as the apocalypse approaches, Shriver has a gimlet eye for the foibles of today’s secure (or so it thinks) upper middle class, from Washington’s Cleveland Park to Brooklyn. About the gentrification of the latter, she observes:

“Oh, you could get a facelift nearby, put your dog in therapy, or spend $500 at Ottawa on a bafflingly trendy dinner of Canadian cuisine (the city’s elite was running out of new ethnicities whose food could become fashionable). But you couldn’t buy a screwdriver, pick up a gallon of paint, take in your dry cleaning, get new tips on your high heels, copy a key, or buy a slice of pizza. Wealthy residents might own bicycles worth $5K, but no shop within miles would repair the brakes. . . . High rents had priced out the very service sector whose presence at ready hand once helped to justify urban living.”

by George F. Will, WSJ |  Read more:
Image: Sarah Lee

Inside the Loneliest Five-Star Restaurant in the World

You can eat foie gras at Antarctica's Concordia Station, but your closest neighbor is the International Space Station and you might not see oranges for three months.

Life in the kitchen is never easy—being a chef is a profession that involves an incredible amount of precision, creativity, and the ability to keep your cool in this uniquely stressful environment, even in the best of conditions. In a place like Antarctica's Concordia Station, one of the most isolated research facilities in the world, where day and night can last months on end and temperatures generally hover between -30 and -60 Celsius, the already stressful task of being a chef begins to sound downright hellish.

This, however, is not the opinion of Luca Ficara, who has been serving as the base's resident chef since November.

When I Skyped with Ficara last week, he was well into the first full week of perpetual darkness at the base, but despite the fact that he wouldn't be seeing the sun for another three months, he was all smiles and jokes. Ficara must operate in an environment that is a far cry from "the best of conditions," yet despite all the hardships his job description entails, it's the small things that he misses most: "It's been three months since I've had an orange," he told me with a melancholy that only three months without an orange can warrant.

Ficara, affectionately referred to as "the David Copperfield of the kitchen" by his crewmates, hails from Sicily, where he spent five years training as a chef in the IPSSAR Hospitality School in Catania, Italy. At 30, Ficara has spent years working in kitchens in Australia, England, and Spain, although working in a kitchen on the white continent was always little more than a dream.

"To be honest, [going to Antarctica] was not in my plan," said Ficara, laughing. "It was like a lottery—you just buy a scratch card, and if you're lucky, you're going to win. You always dream about it, but you never think you will be the winner."

Each year, the Italian National Program for Antarctic Research (which maintains the base along with the French Polar Institute Paul Emile Victor) holds a lottery to determine who will be spending the next year as the resident chef at Concordia. This lottery system has won the station something of a reputation for its food, which received a nod in the Lonely Planet as a place "considered by many to enjoy Antarctica's best cuisine, with fine wines and seven-course lunches on Sundays."

While Ficara didn't really expect to end up in the Concordia Kitchen, he turned out to be the perfect fit for the job given his diverse culinary repertoire. The chefs chosen by the PNRA must demonstrate not only proficiency as cooks, but also a robust knowledge of international culinary practices so that they can cater to the tastes of the 13-person Concordia winter crew, who hail from England, Switzerland, France, and Italy.

The winter-over crew at Concordia is living in near total isolation, their contact with the outside world limited to digital interactions during the eight months of the year when Antarctica is so cold that jet fuel turns to gel, prohibiting any visitors from reaching the base. In these isolated conditions, food takes on a special importance for everyone at the base. While the crew may be landlocked until November, Ficara nonetheless manages to allow his colleagues to return to their homes on a nightly basis, riding on aromas of Yorkshire pudding, foie gras, or chicken parmigiana.

In addition to trying to cater to the local tastes of the various crew members, Ficara also arranges for themed nights each Saturday, occasions for which he prepares some of his most lavish meals.

"You must understand that every day is the same. So to give some effect of the end of the week we try to make special events," said Ficara. "For example, for the French crew, I tried to make a very fancy French meal. I gave somebody a job as a sommelier and explained how to serve the food. We've done a few nights like this—very stylish."

Despite the festive atmosphere brought about by Ficara's elaborate feasts each week at the base, it wouldn't be much of a party without another crucial ingredient: alcohol. The crew keeps a decent variety of spirits on site, but only has access to them on Saturday evenings, when they eat, drink, and make merry to celebrate the end of another week at the base. In addition to downloading recipes for cocktails to experiment with over dinner, the crew is particularly fond of wine, the lifeblood of its Italian and French crew members.

"It's not like we have a wine bar, but we have a lot of wine—unfortunately, we just have French wine," said Ficara with a laugh. "I think the best wine for everybody is the wine from where you're born, but a glass of wine is always a pleasure [even if it's French]."

During the summer months (November to February), the Concordia population grows to around 75 people, which often requires the chef to take on some additional help in the kitchen. During the eight months when there are only a dozen other crew members on site, Ficara must crank out three meals a day on his own. A daunting task, but Ficara is not entirely without help—he keeps the kitchen door open, always ready to offer cooking lessons to his crewmates.

"Most of the time I'm alone in the kitchen, but sometimes I like to give cooking lessons to the crew, so I'll make some muffins with Beth [Concordia's English doctor] or some pizza with Mario [Concordia's Italian Mission Commander]," Ficara told me. In addition to instructing the crew on how to cook, Luca also entertains them with stories about how he came to learn about the dish they are preparing. "It's nice when we have meals because we share the experience of traveling or we share the ingredients we'd never have known. Each plate has some history from me, so I always explain how I know how to prepare something."

by Daniel Oberhaus, Munchies |  Read more:
Image: IPEV/PNRA

Learning to Love the Secret Language of Urine

[ed. See also: How much pee is in our swimming pools? New urine test reveals the truth. Yikes.]

Learning about the body’s many excretions, secretions and suppurations in medical school, I realized that each medical specialty has its own essential effluent. And I heard that some physicians choose their careers based on the bodily fluid they find least revolting. Thus, a doctor disgusted by stool and pus but able to stand the sight of blood might end up a hematologist, while one repulsed by urine and bile but tolerant of sputum might choose pulmonology.

Many physicians are actively drawn to a particular bodily fluid, intrigued by its unique diagnostic mysteries. Each fluid that runs through the body is a language in which diseases speak to physicians, telling them what is wrong with a patient. And specializing means becoming fluent in one specific fluid’s dialect, learning to interpret its colors, textures and consistencies, and spending a career pondering its secrets.

As a medical student, I saw that a bodily fluid could shape a career. And though I resisted settling on just one (I remain a generalist), I have always been partial to pee.

I’ve studied all the body’s fluids and used each in diagnosing disease, and urine stands out in the wealth of information it grants about a patient’s condition. Conceived in the kidneys — a pair of bean-shaped organs tucked away in the abdomen’s rear — urine runs down the ureters and is conveniently stored in the bladder, from which it is gathered in plastic cups for testing. Urine analysis is performed frequently enough by physicians to have earned the shorthand “urinalysis” — no other bodily fluid can claim to be on a nickname basis with the medical profession.

I remember the first time I watched a nephrologist turn a urine sample into a diagnosis. As a medical student at Cooper University Hospital in Camden, N.J., I followed behind as he carried a small, plastic urine cup to the microscope room in the nephrology department. He plunged a diagnostic dipstick into the fluid to reveal bits of blood and protein unseen by the naked eye. He then placed some urine into a centrifuge, which spun rapidly and concentrated floating cells into a sediment at the vial’s bottom. After peering through a microscope at a single drop of this stuff, noting stray bits of debris flung across the viewing field, the nephrologist wove a comprehensive diagnostic tale that encompassed all the patient’s symptoms and lab abnormalities. The diagnosis turned out to be glomerulonephritis, a rare form of kidney disease. He was able to look inside that patient with a clairvoyance that seemed positively sorcerous, with urine as his crystal ball. From that moment I was determined to learn urine’s subtle language.

by Jonathan Reisman, Washington Post |  Read more:
Image: Christine Glade/Istock

Imagine If You Will...


You can't see it from this angle, but just off-camera Rod Serling is smoking a cigarette and delivering a monologue. Maybe one like this:
The Monsters Are Due on Maple Street (1960)
Narrator: [Closing Narration] The tools of conquest do not necessarily come with bombs and explosions and fallout. There are weapons that are simply thoughts, attitudes, prejudices to be found only in the minds of men. For the record, prejudices can kill, and suspicion can destroy, and a thoughtless frightened search for a scapegoat has a fallout all of its own for the children, and the children yet unborn. And the pity of it is that these things cannot be confined to the Twilight Zone. 
by Tom Sullivan
[ed. See also: Donald Trump isn’t the only villain – the Republican party shares the blame]

Sunday, March 5, 2017


Hiroshi Yoshida (Japanese, 1876-1950), Winter in Taguchi
via:
“Sorry I’m late. It took me forever to find this place.”
via:

Newsrooms Are Making Leaking Easier–and More Secure–Than Ever

A growing number of disaffected government insiders have been approaching journalists to share information anonymously since the election in November and the inauguration just over a month ago. In response, news organizations have made it safer and easier for potential whistleblowers by actively encouraging them to use a variety of secure communication channels.

Many outlets have even posted instructions and assigned additional staff to monitor the information that arrives over these channels–such as the encrypted mobile application Signal and the dedicated whistleblowing platform SecureDrop. The Washington Post wrote a lengthy piece offering advice for leaking government documents. ProPublica updated its “How to Leak” page and posted an instructional video with Nieman Lab. And The New York Times published a page titled “Got a confidential news tip?” which details a number of secure channels, from encrypted email to plain manila envelopes, alongside basic instructions for using them safely.

But even as more news outlets promote secure channels for outreach from potential sources, it is still incredibly rare for these tools to be mentioned in published stories. Every newsroom has editorial policies regarding the treatment of anonymous sources, and most interpret the mere mention of tools like Signal or SecureDrop to be an unnecessary risk. As a result, the usefulness of these tools is underpublicized, and a study published by the Tow Center for Digital Journalism last year still offers the only account of SecureDrop’s value in newsrooms. Of the ten news outlets studied at the time, nine said that they regularly receive newsworthy information through SecureDrop.

The demand for secure communication tools has only risen since Trump’s election. The Times launched SecureDrop just a week after the election, while downloads of the Signal app rose 400 percent during the month of November. There are currently 22 active SecureDrop installations in newsrooms—nearly twice as many as there were just a year ago. A handful of freelance journalists and about a dozen non-profit groups also use SecureDrop.

Government employees, too, are taking advantage. Members of the Environmental Protection Agency, Foreign Service, and Department of Labor have been using Signal to communicate with the press against the President’s gag order. Aides to politicians are using Signal and a similar app called Confide not just for leaking, but for personal protection under increased suspicion and surveillance. These apps may pass unnoticed unless users are subjected to a “phone check,” like the one press secretary Sean Spicer allegedly demanded from a dozen communications staffers last week.

According to Derek Kravitz, research editor at ProPublica, a single source often uses multiple secure channels to communicate with a reporter. Signal has become the most common way for new sources to contact them, while SecureDrop mainly serves as a guarded vessel for documents and data dumps. “It’s mostly people contacting us on Signal or another medium,” Kravitz said, “and then we’ll go to SecureDrop to see if they’ve sent anything.”

Kravitz added that “the flow of tips and leaks has been consistent since inauguration,” and so has their quality: “Nearly all messages have had some news value or public interest.”

Tools like Signal and SecureDrop are not only resilient to attack, but also fairly user-friendly. They are designed to minimize risk, even for inexperienced users. “Not every source is an expert on being an anonymous source,” says Kevin Poulsen, the hacker and longtime Wired reporter who originally conceived of SecureDrop. “That’s not why they’re contacting a reporter. It’s because they’re an expert on something else.”

It makes sense that so many first-time whistleblowers are turning to Signal, in particular: There is little separating the experience of using Signal from typical texting and calling on a smartphone. Yet this ease does not come at the expense of security. Signal’s code, developed by Open Whisper Systems, is freely available for anyone to test and verify. Even Edward Snowden endorses Signal as the best secure communication tool for most people.

by Charles Berret, Columbia Journalism Review |  Read more:
Image: Getty

Harold Harvey
(English, 1874-1941), Marazion Marsh

The Most Broadly Overvalued Moment in Market History

"The issue is no longer whether the current market resembles those preceding the 1929, 1969-70, 1973-74, and 1987 crashes. The issue is only - are conditions like October of 1929, or more like April? Like October of 1987, or more like July? If the latter, then over the short-term, arrogant imprudence will continue to be mistaken for enlightened genius, while studied restraint will be mistaken for stubborn foolishness. We can't rule out further short-term gains, but those gains will turn bitter... Let's not be shy: regardless of short-term action, we ultimately expect the S&P 500 to fall by more than half, and the Nasdaq by two-thirds. Don't scoff without reviewing history first."
- John P. Hussman, Ph.D., Hussman Econometrics, February 9, 2000

"On Wall Street, urgent stupidity has one terminal symptom, and it is the belief that money is free. Investors have turned the market into a carnival, where everybody 'knows' that the new rides are the good rides, and the old rides just don't work. Where the carnival barkers seem to hand out free money just for showing up. Unfortunately, this business is not that kind - it has always been true that in every pyramid, in every easy-money sure-thing, the first ones to get out are the only ones to get out... Over time, price/revenue ratios come back in line. Currently, that would require an 83% plunge in tech stocks (recall the 1969-70 tech massacre). The plunge may be muted to about 65% given several years of revenue growth. If you understand values and market history, you know we're not joking."
- John P. Hussman, Ph.D., Hussman Econometrics, March 7, 2000

On Wednesday, the consensus of the most reliable equity market valuation measures we identify (those most tightly correlated with actual subsequent S&P 500 total returns in market cycles across history) advanced within 5% of the extreme registered in March 2000. Recall that following that peak, the S&P 500 did indeed lose half of its value, the Nasdaq Composite lost 80% of its value, and the tech-heavy Nasdaq 100 Index lost an oddly precise 83% of its value. With historically reliable valuation measures beyond those of 1929 and lesser peaks, capitalization-weighted measures are essentially tied with the most offensive levels in history. Meanwhile, the valuation of the median component of the S&P 500 is already far beyond the median valuations observed at the peaks of 2000, 2007 and prior market cycles, while our estimate for 10-12 year returns on a conventional 60/30/10 mix of stocks, bonds, and T-bills fell to a record low last week, making this the most broadly overvalued instant in market history.

There is a quick, knee-jerk response floating around these days, which asserts that “stocks are still cheap relative to interest rates.” This argument is quite popular with investors who haven’t spent much time getting their hands dirty with historical data, satisfied to repeat verbal arguments they’ve heard elsewhere as a substitute for analysis. It’s even an argument we recently heard, almost inexplicably, from one investor we’ve regularly agreed with at market extremes over several decades (more on that below). In 2007, as the market was peaking just before the global financial crisis, precisely the same misguided assertions prompted me to write Long-Term Evidence on the Fed Model and Forward Operating P/E Ratios. See also How Much Do Interest Rates Affect the Fair Value of Stocks? from May of that year. Let’s address this argument once again, in additional detail.

Valuations and interest rates


There’s no question that interest rates are relevant to the fair valuation of stocks. After all, a security is nothing but a claim to some future stream of cash flows that will be delivered into the hands of investors over time. The higher the price an investor pays for a given stream of future cash flows, the lower the long-term return the investor can expect to earn as those cash flows are received. Conversely, the lower the long-term return an investor can tolerate, the higher the price they will agree to pay for that stream of future cash flows. If interest rates are low, it’s not unreasonable to expect that investors would accept a lower expected future return on stocks. If rates are high, it’s not unreasonable to expect that investors would demand a higher expected future return on stocks.

The problem is that investors often misinterpret the form of this relationship, and become confused about when interest rate information is needed and when it is not. Specifically, given a set of expected future cash flows and the current price of the security, one does not need any information about interest rates at all to estimate the long-term return on that security. The price of the security and the cash flows are sufficient statistics to calculate that expected return. For example, if a security that promises to deliver a $100 cash flow in 10 years is priced at $82 today, we immediately know that the expected 10-year return is (100/82)^(1/10)-1 = 2%. Having estimated that 2% return, we can now compare it with competing returns on bonds, to judge whether we think it’s adequate, but no knowledge of interest rates is required to “adjust” the arithmetic.

There are three objects of interest here: the current price, the future stream of expected cash flows, and the long-term rate of return that converts one to the other. Given any two of these, one can estimate the third. For example, given a set of expected future cash flows and some “justified” return of the investor’s choosing, one can use those two pieces of information to calculate the price that will deliver that desired expected return. If I want a $100 future payment to give me a 5% future return over 10 years, I should be willing to pay no more than $100/(1.05)^10 = $61.39.

So when you want to convert a set of expected cash flows into an acceptable price today, interest rates may very well affect the “justified” rate of return you choose. But if you already know the current price, and the expected cash flows, you don’t need any information about prevailing interest rates in order to estimate the expected rate of return. One does not have to “factor in” the level of interest rates when observable valuations are used to estimate prospective long-term market returns, because interest rates are irrelevant to that calculation. The only thing that interest rates do at that point is to allow a comparison of the expected return that’s already baked in the cake with alternative returns available in the bond market.
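The arithmetic in the two examples above reduces to a pair of one-line functions. A quick sketch (Python used purely for illustration; the figures are the ones in the text):

```python
def expected_annual_return(price, future_cash_flow, years):
    # Price and expected cash flows are sufficient statistics for the
    # expected return; no interest rate enters this calculation.
    return (future_cash_flow / price) ** (1 / years) - 1

def price_for_return(future_cash_flow, required_return, years):
    # Going the other way: choose a "justified" return (here interest
    # rates may inform the choice) and back out the acceptable price.
    return future_cash_flow / (1 + required_return) ** years

# $100 delivered in 10 years, priced at $82 today: about a 2% annual return
print(round(expected_annual_return(82, 100, 10), 4))  # 0.02

# Demanding 5% on that same $100 payment caps today's price at $61.39
print(round(price_for_return(100, 0.05, 10), 2))      # 61.39
```

Note that interest rates appear only when choosing `required_return` in the second function, exactly as the passage argues; they never enter the first.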

The Fed Model is an artifact of just 16 years of history


There’s an additional problem. While it’s compelling to believe that the expected return on stocks and bonds should have a one-to-one relationship, history doesn’t bear that out at all. Indeed, over the past century, the correlation between bond and stock yields has historically gone in the entirely wrong direction except during the inflation-disinflation cycle from about 1970 to 1998. What investors may not realize is that the correlation between interest rates and earnings yields (as well as dividend yields) has been negative since 1998. Investors across history have not been consistent at all in treating stocks and bonds as closely competing substitutes.

As I noted during the bubbles that ended in 2000 and 2007, the problem with the Fed Model (which compares the S&P 500 forward operating earnings yield with the 10-year Treasury yield) is that this presumed one-to-one relationship between stock and bond yields is wholly an artifact of the disinflationary period from 1982 to 1998. The stock market advance from 1982 to 1998 represented one of the steepest movements from deep secular undervaluation to extreme secular overvaluation in stock market history. Concurrently, bond yields declined as inflation retreated from high levels of the 1970’s. What the Fed Model does is to overlay those two outcomes and treat them as if stocks were “fairly valued” the entire time.

The chart below shows the S&P 500 forward operating earnings yield alongside the 10-year Treasury bond yield. The inset reproduces the chart that appeared in Alan Greenspan’s 1997 Humphrey-Hawkins testimony, which is the entire basis upon which the Fed Model rests; the same segment of history is highlighted in the yellow block. Notice that this is the only segment of history in which the presumed one-to-one relationship actually held.


The Fed Model is not a fair-value relationship, but an artifact of a specific disinflationary segment of market history. It is descriptive of yield behavior during that limited period, but it has a very poor predictive record with regard to actual subsequent market returns.
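As defined above, the model's verdict reduces to a single comparison between the forward operating earnings yield and the Treasury yield. A minimal sketch (the function name and figures are mine, for illustration only):

```python
def fed_model_verdict(forward_eps, index_level, treasury_yield):
    # The Fed Model calls stocks attractive whenever the forward
    # operating earnings yield exceeds the 10-year Treasury yield,
    # regardless of absolute valuation levels.
    earnings_yield = forward_eps / index_level
    return "attractive" if earnings_yield > treasury_yield else "unattractive"

# Hypothetical: $130 of forward earnings on an index at 2,300 is a 5.7%
# earnings yield; against a 2.4% Treasury yield the model says "attractive"
# no matter how extreme the underlying multiple has become.
print(fed_model_verdict(130, 2300, 0.024))  # attractive
```

The objection developed in this section is precisely that this comparison described yield behavior only during 1982-1998, and has had a very poor record of predicting actual subsequent returns.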

When investors assert that stocks are “fairly valued relative to interest rates,” they are essentially invoking the Fed Model. What they seem to have in mind is that regardless of absolute valuation levels, stocks can be expected to achieve acceptably high returns as long as the S&P 500 forward operating earnings yield is higher than the 10-year Treasury yield.

No, no. That’s not how any of this works, and we have a century of evidence to show it. The deep undervaluation of stocks in 1982 was followed by glorious subsequent returns. The steep overvaluation of stocks in 1998 was followed by one crash, then another, which left S&P 500 total returns negative for more than a decade. I fully expect that current valuations, which are within a breath of 2000 extremes on the most historically reliable measures, will again result in zero or negative returns over the coming 10-12 years. Let’s dig into some data to detail the basis for those expectations.

First, a quick note on historically reliable valuation measures. The value of any security is based on the long-term stream of cash flows that it can be expected to deliver over decades and decades. While corporate earnings are certainly required to generate future cash flows, current earnings (or even forward earnings) are very poor “sufficient statistics” for that stream of cash flows. That’s true not only because of fluctuations in profit margins over the economic cycle, but also due to very long-term competitive forces that exert themselves over multiple economic cycles. From the standpoint of historical reliability, valuation measures that dampen or mute the impact of fluctuating profit margins dramatically outperform measures based on current earnings. Indeed, even the Shiller CAPE, which uses a 10-year average of inflation-adjusted earnings, provides substantially better results when one also adjusts for the embedded profit margin (the ratio of the CAPE denominator to S&P 500 revenues). For a brief primer on the importance of implied profit margins in evaluating market valuations, see Two Point Three Sigmas Above the Norm and Margins, Multiples, and the Iron Law of Valuation.

The chart below shows the ratio of nonfinancial market capitalization to corporate gross value-added, including estimated foreign revenues. I created this measure, MarketCap/GVA, as an apples-to-apples alternative to market capitalization/GDP that matches the object in the numerator with the object in the denominator, and also takes foreign revenues into account. We find this measure to be better correlated with actual subsequent S&P 500 total returns than any other measure we’ve studied in market cycles across history, including price/earnings, price/forward earnings, price/book, price/dividends, enterprise value/EBITDA, the Fed Model, Tobin’s Q, market cap/GDP, the NIPA profits cyclically-adjusted P/E (CAPE), and the Shiller CAPE.

MarketCap/GVA is shown below on an inverted log scale (blue line, left scale), along with the actual subsequent 12-year total return of the S&P 500 (red line, right scale). From current valuations, which now rival the most extreme levels in U.S. history, we estimate likely S&P 500 nominal total returns averaging less than 1% annually over the coming 12-year horizon. As a side note, we tend to prefer a 12-year horizon because that is the point where the autocorrelation profile of valuations drops to zero, and is therefore the horizon over which mean reversion is most reliable (see Valuations Not Only Mean-Revert, They Mean-Invert).


I’m often asked why we don’t “adjust” MarketCap/GVA for the level of interest rates. The answer, as detailed at the beginning of this comment, is that given both the price of a security and the expected stream of future cash flows (or a sufficient statistic for those cash flows), one does not need any information at all about interest rates in order to estimate the expected long-term return on that security. Each point in the chart below shows the actual 12-year subsequent total return of the S&P 500 index, along with two fitted values: one using MarketCap/GVA alone, the other adding the 10-year Treasury bond yield as an additional explanatory variable. That additional variable adds absolutely no incremental explanatory power; both fitted values have a 93% correlation with actual subsequent 12-year S&P 500 total returns.

We’re now in a position to say something very precise about current valuations and interest rates. Given the present level of interest rates, investors who are willing to accept likely prospective nominal total returns on the S&P 500 of less than 1% over the coming 12-year period are entirely welcome to judge stocks as “fairly valued relative to interest rates.” But understand that this is precisely what that phrase implies here.

Moreover, as one can see from the foregoing charts, there is not a single market cycle in history, whether in the period before the 1970s (when interest rates regularly hovered near current levels) or in recent decades, that has failed to raise prospective 10-12 year S&P 500 total returns to the 8-10% range or beyond by the completion of that cycle. So even if investors are willing to accept 10-12 year total returns of next to nothing, they should also be fully prepared for an interim market loss on the order of 50-60%, because that is the decline that would now be required to restore those 8-10% return expectations, without even breaking below historical valuation norms.
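The 50-60% figure follows directly from the same return arithmetic, holding expected cash flows fixed. A back-of-the-envelope sketch (the function name and the 12-year horizon are my assumptions, following the piece's 12-year convention):

```python
def required_decline(current_return, restored_return, years=12):
    # Price change needed so that the same future cash flows are priced
    # to deliver restored_return instead of current_return.
    price_ratio = ((1 + current_return) / (1 + restored_return)) ** years
    return 1 - price_ratio

# Moving 12-year expected S&P 500 total returns from roughly 1% back to
# the 8-10% range historically restored by the completion of each cycle:
print(round(required_decline(0.01, 0.08), 2))  # 0.55
print(round(required_decline(0.01, 0.10), 2))  # 0.64
```

On these assumptions the implied interim loss lands in the mid-50s to mid-60s percent, consistent with the "on the order of 50-60%" cited in the text.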

by John P. Hussman, Ph.D, Hussman Funds |  Read more:
Image: Hussman Strategic Advisors

What Writers Really Do When They Write

1
Many years ago, during a visit to Washington DC, my wife’s cousin pointed out to us a crypt on a hill and mentioned that, in 1862, while Abraham Lincoln was president, his beloved son, Willie, died, and was temporarily interred in that crypt, and that the grief-stricken Lincoln had, according to the newspapers of the day, entered the crypt “on several occasions” to hold the boy’s body. An image spontaneously leapt into my mind – a melding of the Lincoln Memorial and the Pietà. I carried that image around for the next 20-odd years, too scared to try something that seemed so profound, and then finally, in 2012, noticing that I wasn’t getting any younger, not wanting to be the guy whose own gravestone would read “Afraid to Embark on Scary Artistic Project He Desperately Longed to Attempt”, decided to take a run at it, in exploratory fashion, no commitments. My novel, Lincoln in the Bardo, is the result of that attempt, and now I find myself in the familiar writerly fix of trying to talk about that process as if I were in control of it.

We often discuss art this way: the artist had something he “wanted to express”, and then he just, you know … expressed it. We buy into some version of the intentional fallacy: the notion that art is about having a clear-cut intention and then confidently executing same.

The actual process, in my experience, is much more mysterious and more of a pain in the ass to discuss truthfully.

2
A guy (Stan) constructs a model railroad town in his basement. Stan acquires a small hobo, places him under a plastic railroad bridge, near that fake campfire, then notices he’s arranged his hobo into a certain posture – the hobo seems to be gazing back at the town. Why is he looking over there? At that little blue Victorian house? Stan notes a plastic woman in the window, then turns her a little, so she’s gazing out. Over at the railroad bridge, actually. Huh. Suddenly, Stan has made a love story. Oh, why can’t they be together? If only “Little Jack” would just go home. To his wife. To Linda.

What did Stan (the artist) just do? Well, first, surveying his little domain, he noticed which way his hobo was looking. Then he chose to change that little universe, by turning the plastic woman. Now, Stan didn’t exactly decide to turn her. It might be more accurate to say that it occurred to him to do so; in a split-second, with no accompanying language, except maybe a very quiet internal “Yes.”

He just liked it better that way, for reasons he couldn’t articulate, and before he’d had the time or inclination to articulate them.

An artist works outside the realm of strict logic. Simply knowing one’s intention and then executing it does not make good art. Artists know this. According to Donald Barthelme: “The writer is that person who, embarking upon her task, does not know what to do.” Gerald Stern put it this way: “If you start out to write a poem about two dogs fucking, and you write a poem about two dogs fucking – then you wrote a poem about two dogs fucking.” Einstein, always the smarty-pants, outdid them both: “No worthy problem is ever solved in the plane of its original conception.”

How, then, to proceed? My method is: I imagine a meter mounted in my forehead, with “P” on this side (“Positive”) and “N” on this side (“Negative”). I try to read what I’ve written uninflectedly, the way a first-time reader might (“without hope and without despair”). Where’s the needle? Accept the result without whining. Then edit, so as to move the needle into the “P” zone. Enact a repetitive, obsessive, iterative application of preference: watch the needle, adjust the prose, watch the needle, adjust the prose (rinse, lather, repeat), through (sometimes) hundreds of drafts. Like a cruise ship slowly turning, the story will start to alter course via those thousands of incremental adjustments.

The artist, in this model, is like the optometrist, always asking: Is it better like this? Or like this?

The interesting thing, in my experience, is that the result of this laborious and slightly obsessive process is a story that is better than I am in “real life” – funnier, kinder, less full of crap, more empathetic, with a clearer sense of virtue, both wiser and more entertaining.

And what a pleasure that is; to be, on the page, less of a dope than usual.

3
Revising by the method described is a form of increasing the ambient intelligence of a piece of writing. This, in turn, communicates a sense of respect for your reader. As text is revised, it becomes more specific and embodied in the particular. It becomes more sane. It becomes less hyperbolic, sentimental, and misleading. It loses its ability to create a propagandistic fog. Falsehoods get squeezed out of it, lazy assertions stand up, naked and blushing, and rush out of the room.

Is any of this relevant to our current political moment?

Hoo, boy.

When I write, “Bob was an asshole,” and then, feeling this perhaps somewhat lacking in specificity, revise it to read, “Bob snapped impatiently at the barista,” then ask myself, seeking yet more specificity, why Bob might have done that, and revise to, “Bob snapped impatiently at the young barista, who reminded him of his dead wife,” and then pause and add, “who he missed so much, especially now, at Christmas,” – I didn’t make that series of changes because I wanted the story to be more compassionate. I did it because I wanted it to be less lame.

But it is more compassionate. Bob has gone from “pure asshole” to “grieving widower, so overcome with grief that he has behaved ungraciously to a young person, to whom, normally, he would have been nice”. Bob has changed. He started out a cartoon, on which we could heap scorn, but now he is closer to “me, on a different day”.

How was this done? Via pursuit of specificity. I turned my attention to Bob and, under the pressure of trying not to suck, my prose moved in the direction of specificity, and in the process my gaze became more loving toward him (ie, more gentle, nuanced, complex), and you, dear reader, witnessing my gaze become more loving, might have found your own gaze becoming slightly more loving, and together (the two of us, assisted by that imaginary grouch) reminded ourselves that it is possible for one’s gaze to become more loving.

Or we could just stick with “Bob was an asshole,” and post it, and wait for the “likes”, and for the pro-Bob forces to rally, and the anti-barista trolls to anonymously weigh in – but, meanwhile, there’s poor Bob, grieving and misunderstood, and there’s our poor abused barista, feeling crappy and not exactly knowing why, incrementally more convinced that the world is irrationally cruel.

by George Saunders, The Guardian |  Read more:
Image: Yann Kebbi for Review

Saturday, March 4, 2017

The Plane So Good It's Still In Production After 60 Years

[ed. My first plane was a Cessna 140 (taildragger precursor to the 150). The next, a straight-tailed 1956 Cessna 172. I loved that plane. 206 nose gear, oversized tires, manual flaps. Just a joy.]

It can seat four people, in a squeeze, and weighs a little under 800kg without fuel or its passengers. It has a maximum speed of 140mph (226km/h), though you could push this up to 185mph at a pinch – but the manufacturer would rather you didn’t. And on a tank full of fuel, you could travel 800 miles (1,290km) – the equivalent of going from Berlin to Belfast, or New York to Madison, Wisconsin.

You might think this was a high-performance car with a little more-than-average leg room – but it’s a plane. The Cessna 172, which first rolled off the production line in 1956, is still in production today. And if any design could claim to be the world’s favourite aircraft, it’s the 172.

More than 43,000 Cessna 172s have been made so far. And while the 172 (also known as the Skyhawk) has undergone a myriad of tweaks and improvements over the past 60-odd years, the aircraft essentially looks much the same as it did when it was first built in the 1950s.

In the past 60 years, Cessna 172s have become a staple of flight training schools across the world. Generations of pilots have taken their first, faltering flights in a Cessna 172, and for good reason – it’s a plane deliberately designed to be easy to fly, and to survive less-than-accomplished landings.

“More pilots over the years have earned their wings in a 172 than any other aircraft in the world,” says Doug May, the vice-president of piston aircraft at Cessna’s parent company, Textron Aviation.

“The forgiving nature of the aircraft really does suit it to the training environment,” he says.

Light aircraft might not be updated as often as cars, but 60 years is still a very long time to keep producing an essentially unchanged vehicle. The only extended halt in production came in the late 1980s, when US product liability rules made building light aircraft prohibitively expensive. What is it about the 172 that has made it such a favourite for so long?

One answer comes from the fact that the Cessna 172 is a high-wing monoplane – meaning the wings sit high above the cockpit. This is very useful for student pilots because it gives them a better view of the ground and makes the aircraft much easier to land.

The 172 was based on an earlier Cessna design, the 170. This looked very similar apart from the fact it was a “taildragger” – instead of a wheel at the front, the 170 had a smaller wheel at the back, underneath the tailfin (like most aircraft before the arrival of jets). The 170 enjoyed the benefits of a light aircraft boom in the years following World War Two, as many of the companies that had produced tens of thousands of military aircraft now turned their attention to civilian aircraft.

The Cessna 170 was a very successful design – more than 5,000 were made in a production run of less than a decade – but its tailwheel layout demanded more skill on landing than many newcomers possessed. Cessna saw the opportunity in a version with tricycle landing gear, putting a nosewheel up front. So the basic design of the 170 was modified and made more robust; where the earliest 170s still used fabric-covered flying surfaces, the 172 was all-metal aluminium.

The tricycle undercarriage, with its nosewheel up front, made the aircraft so easy to fly and land that Cessna’s marketing department dubbed the 172 the “Land-O-Matic”.

“I think it’s really the robustness that’s been behind the aircraft’s success,” says May. “It’s able to take six to eight to 10 landings an hour, hour after hour.” May says the 172 is often the plane a student will take their first flight in – and it will often take them through their hours until they qualify for a pilot’s licence.

“The Cessna 172 was not built to minimum requirements,” says May. “I think they did an exceptional job of looking at the intended role, and actually providing a plane that would surpass those requirements.”

And during its history, that ease of use and reliability has led to some quite remarkable flights.

On 4 December 1958 two pilots called Robert Timm and John Cook climbed into a Cessna 172 at McCarran Airfield in Las Vegas. Their mission? To break the world record for the longest flight without landing.

This would be no easy feat. The previous record, which was set in 1949, was a colossal achievement – the two pilots had flown an aircraft very like Timm and Cook’s Cessna for a total of 46 days – all to raise money for a cancer fund.

The two pilots would need to keep their aircraft in the air for nearly seven weeks, without landing once. According to Jalopnik, the necessary modifications took more than a year to make – they included a small sink so the two pilots could brush their teeth and even bathe. The back seats were stripped out to make room for a mattress, so that while one pilot flew the plane, the other could sleep. And should they feel the need to shower? A small platform could be extended between the open cabin and the wing strut – allowing the relief pilot to shower out in the open air.

by Stephen Dowling, BBC | Read more:
Images: markk

Must It Always Be Wartime?

[ed. Interesting. It never occurred to me how a good portion of the military budget might be allocated to aid and nation building as a form of preventative national security.]

If the fight against terrorist groups is hard enough to classify, consider new and emerging security threats—such as cyberattacks on critical infrastructure or the use of bioengineered viruses—that do not involve the kinetic or explosive weapons of traditional war. Does it make sense to speak of “combatants” when the attacker is not an armed soldier but a hacker at a computer terminal or a scientist in a biology laboratory? And even if they are combatants, is it a proper response to such attacks to authorize shooting or bombing them from afar, as is permitted in a traditional armed conflict?

International humanitarian law is clearly in need of elaboration in order to address these newer forms of conflict, but it should at least provide the starting point. For example, biological warfare unleashing deadly pathogens or cyber warfare shutting down electrical facilities are disturbing in large part because they could inflict widespread indiscriminate and disproportionate civilian casualties—concepts that are central to humanitarian law.

Similarly, a firmer grounding in international human rights and humanitarian law would have helped to avoid the kinds of perversions of that law that were orchestrated by the Bush administration, whose attorney general, Alberto Gonzales, dismissed the Geneva Conventions as “quaint” and “obsolete” and whose Justice Department cited a “new kind of war” to authorize “enhanced interrogation techniques” such as waterboarding, a form of torture. In fact, despite Trump’s musings about reviving it, international law prohibits torture—indeed, makes it a crime—in times of both peace and war.

Greater attention to human rights principles might also have led Trump to temper his executive order temporarily banning visitors to the United States from seven mainly Muslim countries. Ostensibly designed to fight terrorism, it made no effort to limit its scope to people who posed any identifiable threat, at enormous personal cost, if upheld by the courts, to the 60,000 people whose visas were suddenly not recognized.

Complicating matters further is the expanding role of the US military. Today, counterinsurgency strategy is broadly understood to involve far more than fighting an opposing military. It also has come to mean protecting the civilian population and building government institutions that serve rather than prey upon people, including a legal system that protects rights. Trump is now questioning the utility of such “nation-building,” but in the meantime it has led the Pentagon to sponsor a variety of programs that have little to do with confronting enemy troops.

As Brooks describes it, US soldiers now undertake public health programs, agricultural reform efforts, small business development projects, and training in the rule of law. This expanding mandate, as Brooks shows, has enabled the Pentagon to dramatically increase its budget—few in Congress deny requests for more spending on national defense—even as austerity eviscerates the budgets of the agencies that traditionally carry out these tasks, such as the State Department and USAID.

The radically different budgets of the Pentagon and its civilian counterparts only reinforce the tendency to look to the military to address nonmilitary problems—to treat it as a “Super Walmart” ready to respond to the nation’s every foreign policy need. “It’s a vicious circle,” Brooks explains, “as civilian capacity has declined, the military has stepped into the breach.”

Yet there is a cost to a self-reinforcing cycle of militarizing US foreign policy. Pursuing economic development, undertaking agrarian reform, expanding the rule of law—these are tasks requiring considerable expertise, including linguistic skills and cultural sensitivity not usually associated with the average military recruit, still chosen foremost for strength and agility even in a world in which traditional military tasks diminish in importance.

Moreover, humanitarian and development workers have typically enjoyed a degree of protection in the field because of their neutrality—their dedication to offering services on the basis of need rather than political preference. The militarization of these efforts has contributed to the “shrinking of humanitarian space” in which aid workers give assistance; they are increasingly endangered because they are perceived as military assets. The US may not be well served by Congress’s reflexive preference for military solutions to civilian problems.

by Kenneth Roth, NYRB |  Read more:
Image: NATO

How Millions of Kids Are Being Shaped by Know-It-All Voice Assistants

Kids adore their new robot siblings.

As millions of American families buy robotic voice assistants to turn off lights, order pizzas and fetch movie times, children are eagerly co-opting the gadgets to settle dinner table disputes, answer homework questions and entertain friends at sleepover parties.

Many parents have been startled and intrigued by the way these disembodied, know-it-all voices — Amazon’s Alexa, Google Home, Microsoft’s Cortana — are impacting their kids’ behavior, making them more curious but also, at times, far less polite.

In just two years, the promise of the technology has already exceeded the marketing come-ons. The disabled are using voice assistants to control their homes, order groceries and listen to books. Caregivers to the elderly say the devices help with dementia, reminding users what day it is or when to take medicine.

For children, the potential for transformative interactions is just as dramatic — at home and in classrooms. But psychologists, technologists and linguists are only beginning to ponder the possible perils of surrounding kids with artificial intelligence, particularly as they traverse important stages of social and language development.

“How they react and treat this nonhuman entity is, to me, the biggest question,” said Sandra Calvert, a Georgetown University psychologist and director of the Children’s Digital Media Center. “And how does that subsequently affect family dynamics and social interactions with other people?”

With an estimated 25 million voice assistants expected to sell this year at $40 to $180 — up from 1.7 million in 2015 — there are even ramifications for the diaper crowd.

Toy giant Mattel recently announced the birth of Aristotle, a home baby monitor launching this summer that “comforts, teaches and entertains” using AI from Microsoft. As children get older, they can ask or answer questions. The company says, “Aristotle was specifically designed to grow up with a child.”

Boosters of the technology say kids typically learn to acquire information using the prevailing technology of the moment — from the library card catalogue, to Google, to brief conversations with friendly, all-knowing voices. But what if these gadgets lead children, whose faces are already glued to screens, further away from situations where they learn important interpersonal skills?

It’s unclear whether any of the companies involved are even paying attention to this issue. (...)

Today’s children will be shaped by AI much as their grandparents were shaped by a new device called television. But you couldn’t talk with a TV.

Ken Yarmosh, a 36-year-old Northern Virginia app developer and founder of Savvy Apps has multiple voice assistants in his family’s home, including those made by Google and Amazon. (The Washington Post is owned by Amazon founder Jeffrey P. Bezos, whose middle name is Preston, according to Alexa.)

Yarmosh’s 2-year-old son has been so enthralled by Alexa that he tries to speak with coasters and other cylindrical objects that look like Amazon’s device. Meanwhile, Yarmosh’s now 5-year-old son, in comparing his two assistants, came to believe Google knew him better.

by Michael S. Rosenwald, Washington Post | Read more:
Image: Bill O'Leary

Friday, March 3, 2017

Pharrell Williams



[ed. ... everybody stole my moves.]

This Is How Your Hyperpartisan Political News Gets Made

The websites Liberal Society and Conservative 101 appear to be total opposites. The former publishes headlines such as “WOW, Sanders Just Brutally EVISCERATED Trump On Live TV. Trump Is Fuming.” Its conservative counterpart has stories like “Nancy Pelosi Just Had Mental Breakdown On Stage And Made Craziest Statement Of Her Career.”

So it was a surprise last Wednesday when they published stories that were almost exactly the same, save for a few notable word changes.

After CNN reported White House counselor Kellyanne Conway was “sidelined from television appearances,” both sites whipped up a post — and outrage — for their respective audiences. The resulting stories read like bizarro-world versions of each other — two articles with nearly identical words and tweets optimized for opposing filter bubbles. The similarity of the articles also provided a key clue BuzzFeed News followed to reveal a more striking truth: These for-the-cause sites that appeal to hardcore partisans are in fact owned by the same Florida company.

Liberal Society and Conservative 101 are among the growing number of so-called hyperpartisan websites and associated Facebook pages that have sprung up in recent years, and that attracted significant traffic during the US election. A previous BuzzFeed News analysis of content published by conservative and liberal hyperpartisan sites found they reap massive engagement on Facebook with aggressively partisan stories and memes that frequently demonize the other side’s point of view, often at the expense of facts.

Jonathan Albright, a professor at Elon University, published a detailed analysis of the hyperpartisan and fake news ecosystem. Given the money at stake, he told BuzzFeed News he’s not surprised some of the same people operate both liberal and conservative sites as a way to “run up their metrics or advertising revenue.”

“One of the problems that is a little overlooked is that it’s not one side versus the other — there are people joining in that are really playing certain types of political [views] against each other,” Albright said.

And all it takes to turn a liberal partisan story into a conservative one is to literally change a few words. (...)

The stories read like they were stamped out of the same content machine because they were. Using domain registration records and Google Analytics and AdSense IDs, BuzzFeed News determined that both sites are owned by American News LLC of Miami.

That company also operates another liberal site, Democratic Review, as well as American News, a conservative site that drew attention after the election when it posted a false article claiming that Denzel Washington endorsed Trump. It also operates GodToday.com, a site that publishes religious clickbait.

Liberal Society, Democratic Review, and God Today all have the same Google Analytics ID in their source code, which means they are connected. Domain registration records show that American News LLC is the owner of God Today. (The other two sites have private ownership records.)
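This kind of attribution by shared tracking ID is straightforward to check yourself. A minimal sketch, using hypothetical page sources and ID values (not the actual sites' IDs): a Google Analytics tracking ID has the form UA-&lt;account&gt;-&lt;property&gt;, so pages whose IDs share the same account number are served from the same Analytics account.

```python
import re

# Hypothetical page sources; real pages embed the ID in their
# analytics snippet, e.g. ga('create', 'UA-XXXXXXX-Y', 'auto').
pages = {
    "site-a": "<script>ga('create', 'UA-1234567-1', 'auto');</script>",
    "site-b": "<script>ga('create', 'UA-1234567-2', 'auto');</script>",
    "site-c": "<script>ga('create', 'UA-7654321-1', 'auto');</script>",
}

# Capture only the account number from UA-<account>-<property>.
GA_ID = re.compile(r"UA-(\d{4,10})-\d{1,4}")

def analytics_account(html):
    """Return the GA account number found in a page's source, or None."""
    match = GA_ID.search(html)
    return match.group(1) if match else None

accounts = {site: analytics_account(html) for site, html in pages.items()}
# site-a and site-b share account '1234567', suggesting a common
# operator; site-c is tied to a different account.
```

The same approach works for AdSense publisher IDs (`pub-` followed by a long digit string), which is the second signal BuzzFeed News describes using.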

Conservative 101 and American News have the same Google AdSense ID and domain records show that the latter is also registered to American News LLC. Corporate records list John Crane and Tyler Shapiro as officers of the company, and Crane is listed in domain ownership records. They did not respond to three emails and a phone message from BuzzFeed News.

Domain records suggest they began as conservative news publishers. John Crane acquired the AmericanNews.com domain in 2014 and added Conservative101.com in May of 2016. The company moved into liberal partisan news with the registration of DemocraticReview.com in June of last year and LiberalSociety.com a month later. (Their religious clickbait site, GodToday.com, was registered in February of last year.)

They also appear to run several large Facebook pages that play a major role in helping their partisan content generate social engagement and traffic. Content from American News is pushed out via a page with more than 5 million fans, while Liberal Society’s stories are promoted on a page with over 2 million fans. (...)

Grant Stern is a progressive who writes a column for Occupy Democrats and is the executive director of Photography Is Not A Crime. BuzzFeed News sent him American News LLC’s liberal and conservative sites and asked him to comment on the fact that they’re run by the same company.

“Those websites are marketing websites,” he said after looking at the content, “and the product they’re pitching is outrage.”

by Craig Silverman, BuzzFeed |  Read more:
Image: Liberal Society / Conservative 101