Sunday, March 5, 2017

The Most Broadly Overvalued Moment in Market History

"The issue is no longer whether the currnet market resembles those preceding the 1929, 1969-70, 1973-74, and 1987 crashes. The issue is only - are conditions like October of 1929, or more like April? Like October of 1987, or more like July? If the latter, then over the short-term, arrogant imprudence will continue to be mistaken for enlightened genius, while studied restraint will be mistaken for stubborn foolishness. We can't rule out further short-term gains, but those gains will turn bitter... Let's not be shy: regardless of short-term action, we ultimately expect the S&P 500 to fall by more than half, and the Nasdaq by two-thirds. Don't scoff without reviewing history first."
- John P. Hussman, Ph.D., Hussman Econometrics, February 9, 2000

"On Wall Street, urgent stupidity has one terminal symptom, and it is the belief that money is free. Investors have turned the market into a carnival, where everybody 'knows' that the new rides are the good rides, and the old rides just don't work. Where the carnival barkers seem to hand out free money just for showing up. Unfortunately, this business is not that kind - it has always been true that in every pyramid, in every easy-money sure-thing, the first ones to get out are the only ones to get out... Over time, price/revenue ratios come back in line. Currently, that would require an 83% plunge in tech stocks (recall the 1969-70 tech massacre). The plunge may be muted to about 65% given several years of revenue growth. If you understand values and market history, you know we're not joking."
- John P. Hussman, Ph.D., Hussman Econometrics, March 7, 2000

On Wednesday, the consensus of the most reliable equity market valuation measures we identify (those most tightly correlated with actual subsequent S&P 500 total returns in market cycles across history) advanced within 5% of the extreme registered in March 2000. Recall that following that peak, the S&P 500 did indeed lose half of its value, the Nasdaq Composite lost 80% of its value, and the tech-heavy Nasdaq 100 Index lost an oddly precise 83% of its value. With historically reliable valuation measures now beyond those of 1929 and every lesser peak, capitalization-weighted measures are essentially tied with the most offensive levels in history. Meanwhile, the valuation of the median component of the S&P 500 is already far beyond the median valuations observed at the peaks of 2000, 2007 and prior market cycles, while our estimate for 10-12 year returns on a conventional 60/30/10 mix of stocks, bonds, and T-bills fell to a record low last week, making this the most broadly overvalued instant in market history.

There is a quick, knee-jerk response floating around these days, which asserts that “stocks are still cheap relative to interest rates.” This argument is quite popular with investors who haven’t spent much time getting their hands dirty with historical data, satisfied to repeat verbal arguments they’ve heard elsewhere as a substitute for analysis. It’s even an argument we recently heard, almost inexplicably, from one investor we’ve regularly agreed with at market extremes over several decades (more on that below). In 2007, as the market was peaking just before the global financial crisis, precisely the same misguided assertions prompted me to write Long-Term Evidence on the Fed Model and Forward Operating P/E Ratios. See also How Much Do Interest Rates Affect the Fair Value of Stocks? from May of that year. Let’s address this argument once again, in additional detail.

Valuations and interest rates


There’s no question that interest rates are relevant to the fair valuation of stocks. After all, a security is nothing but a claim to some future stream of cash flows that will be delivered into the hands of investors over time. The higher the price an investor pays for a given stream of future cash flows, the lower the long-term return the investor can expect to earn as those cash flows are received. Conversely, the lower the long-term return an investor can tolerate, the higher the price they will agree to pay for that stream of future cash flows. If interest rates are low, it’s not unreasonable to expect that investors would accept a lower expected future return on stocks. If rates are high, it’s not unreasonable to expect that investors would demand a higher expected future return on stocks.

The problem is that investors often misinterpret the form of this relationship, and become confused about when interest rate information is needed and when it is not. Specifically, given a set of expected future cash flows and the current price of the security, one does not need any information about interest rates at all to estimate the long-term return on that security. The price of the security and the cash flows are sufficient statistics to calculate that expected return. For example, if a security that promises to deliver a $100 cash flow in 10 years is priced at $82 today, we immediately know that the expected 10-year return is (100/82)^(1/10)-1 = 2%. Having estimated that 2% return, we can now compare it with competing returns on bonds, to judge whether we think it’s adequate, but no knowledge of interest rates is required to “adjust” the arithmetic.

There are three objects of interest here: the current price, the future stream of expected cash flows, and the long-term rate of return that converts one to the other. Given any two of these, one can estimate the third. For example, given a set of expected future cash flows and some “justified” return of the investor’s choosing, one can use those two pieces of information to calculate the price that will deliver that desired expected return. If I want a $100 future payment to give me a 5% future return over 10 years, I should be willing to pay no more than $100/(1.05)^10 = $61.39.

So when you want to convert a set of expected cash flows into an acceptable price today, interest rates may very well affect the “justified” rate of return you choose. But if you already know the current price, and the expected cash flows, you don’t need any information about prevailing interest rates in order to estimate the expected rate of return. One does not have to “factor in” the level of interest rates when observable valuations are used to estimate prospective long-term market returns, because interest rates are irrelevant to that calculation. The only thing that interest rates do at that point is to allow a comparison of the expected return that’s already baked in the cake with alternative returns available in the bond market.
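The arithmetic in the two examples above can be sketched in a few lines of code (the function names are ours, purely for illustration): given a price and a future cash flow, the implied return follows directly, and given a cash flow and a required return, so does the justified price. Notice that no interest-rate input appears anywhere in either calculation.

```python
def expected_return(price, cash_flow, years):
    """Annualized return implied by paying `price` today
    for a single `cash_flow` received `years` from now."""
    return (cash_flow / price) ** (1 / years) - 1

def justified_price(cash_flow, required_return, years):
    """Price to pay today so that a single `cash_flow` received
    `years` from now delivers `required_return` annually."""
    return cash_flow / (1 + required_return) ** years

# The two examples from the text:
# $100 in 10 years, priced at $82 today -> ~2% expected annual return
print(f"{expected_return(82, 100, 10):.1%}")    # 2.0%
# $100 in 10 years at a required 5% return -> $61.39 today
print(f"{justified_price(100, 0.05, 10):.2f}")  # 61.39
```

Interest rates enter only afterward, as a benchmark against which the already-computed 2% can be judged, which is exactly the point of the passage.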

The Fed Model is an artifact of just 16 years of history


There’s an additional problem. While it’s compelling to believe that the expected return on stocks and bonds should have a one-to-one relationship, history doesn’t bear that out at all. Indeed, over the past century, the correlation between bond and stock yields has historically gone in the entirely wrong direction except during the inflation-disinflation cycle from about 1970 to 1998. What investors may not realize is that the correlation between interest rates and earnings yields (as well as dividend yields) has been negative since 1998. Investors across history have not been consistent at all in treating stocks and bonds as closely competing substitutes.

As I noted during the bubbles that ended in 2000 and 2007, the problem with the Fed Model (which compares the S&P 500 forward operating earnings yield with the 10-year Treasury yield) is that this presumed one-to-one relationship between stock and bond yields is wholly an artifact of the disinflationary period from 1982 to 1998. The stock market advance from 1982 to 1998 represented one of the steepest movements from deep secular undervaluation to extreme secular overvaluation in stock market history. Concurrently, bond yields declined as inflation retreated from high levels of the 1970’s. What the Fed Model does is to overlay those two outcomes and treat them as if stocks were “fairly valued” the entire time.

The chart below shows the S&P 500 forward operating earnings yield alongside the 10-year Treasury bond yield. The inset of the chart is the chart that appeared in Alan Greenspan’s 1997 Humphrey-Hawkins testimony, and is the entire basis upon which the Fed Model rests. The same segment of history is highlighted in the yellow block. Notice that this is the only segment of history in which the presumed one-to-one relationship actually held.


The Fed Model is not a fair-value relationship, but an artifact of a specific disinflationary segment of market history. It is descriptive of yield behavior during that limited period, but it has a very poor predictive record with regard to actual subsequent market returns.

When investors assert that stocks are “fairly valued relative to interest rates,” they are essentially invoking the Fed Model. What they seem to have in mind is that regardless of absolute valuation levels, stocks can be expected to achieve acceptably high returns as long as the S&P 500 forward operating earnings yield is higher than the 10-year Treasury yield.

No, no. That’s not how any of this works, and we have a century of evidence to show it. The deep undervaluation of stocks in 1982 was followed by glorious subsequent returns. The steep overvaluation of stocks in 1998 was followed by one crash, then another, which left S&P 500 total returns negative for more than a decade. I fully expect that current valuations, which are within a breath of 2000 extremes on the most historically reliable measures, will again result in zero or negative returns over the coming 10-12 years. Let’s dig into some data to detail the basis for those expectations.

First, a quick note on historically reliable valuation measures. The value of any security is based on the long-term stream of cash flows that it can be expected to deliver over decades and decades. While corporate earnings are certainly required to generate future cash flows, current earnings (or even forward earnings) are very poor “sufficient statistics” for that stream of cash flows. That’s true not only because of fluctuations in profit margins over the economic cycle, but also due to very long-term competitive forces that exert themselves over multiple economic cycles. From the standpoint of historical reliability, valuation measures that dampen or mute the impact of fluctuating profit margins dramatically outperform measures based on current earnings. Indeed, even the Shiller CAPE, which uses a 10-year average of inflation-adjusted earnings, provides substantially better results when one also adjusts for the embedded profit margin (the denominator of the CAPE / S&P 500 revenues). For a brief primer on the importance of implied profit margins in evaluating market valuations, see Two Point Three Sigmas Above the Norm and Margins, Multiples, and the Iron Law of Valuation.

The chart below shows the ratio of nonfinancial market capitalization to corporate gross value-added, including estimated foreign revenues. I created this measure, MarketCap/GVA, as an apples-to-apples alternative to market capitalization/GDP that matches the object in the numerator with the object in the denominator, and also takes foreign revenues into account. We find this measure to be better correlated with actual subsequent S&P 500 total returns than any other measure we’ve studied in market cycles across history, including price/earnings, price/forward earnings, price/book, price/dividends, enterprise value/EBITDA, the Fed Model, Tobin’s Q, market cap/GDP, the NIPA profits cyclically-adjusted P/E (CAPE), and the Shiller CAPE.

MarketCap/GVA is shown below on an inverted log scale (blue line, left scale), along with the actual subsequent 12-year total return of the S&P 500 (red line, right scale). From current valuations, which now rival the most extreme levels in U.S. history, we estimate likely S&P 500 nominal total returns averaging less than 1% annually over the coming 12-year horizon. As a side note, we tend to prefer a 12-year horizon because that is the point where the autocorrelation profile of valuations drops to zero, and is therefore the horizon over which mean reversion is most reliable (see Valuations Not Only Mean-Revert, They Mean-Invert).


I’m often asked why we don’t “adjust” MarketCap/GVA for the level of interest rates. The answer, as detailed at the beginning of this comment, is that given both the price of a security and the expected stream of future cash flows (or a sufficient statistic for those cash flows), one does not need any information at all about interest rates in order to estimate the expected long-term return on that security. Each point in the chart below shows the actual 12-year subsequent total return of the S&P 500 index, along with two fitted values, one using MarketCap/GVA alone, and the other including the 10-year Treasury bond yield as an additional explanatory variable. That additional variable adds absolutely no incremental explanatory power. Both fitted values have a 93% correlation with actual subsequent 12-year S&P 500 total returns.

We’re now in a position to say something very precise about current valuations and interest rates. Given the present level of interest rates, investors who are willing to accept likely prospective nominal total returns on the S&P 500 of less than 1% over the coming 12-year period are entirely welcome to judge stocks as “fairly valued relative to interest rates.” But understand that this is precisely what that phrase implies here.

Moreover, as one can see from the foregoing charts, there’s not a single market cycle in history, whether in the period before the 1970s (when interest rates regularly hovered near current levels) or in recent decades, that has failed to raise prospective 10-12 year S&P 500 total returns to the 8-10% range or beyond over the completion of that cycle. So even if investors are willing to accept 10-12 year total returns of next to nothing, they should also be fully prepared for an interim market loss on the order of 50-60%, because that is the decline that would now be required to restore those 8-10% return expectations, without even breaking below historical valuation norms.

by John P. Hussman, Ph.D., Hussman Funds |  Read more:
Image: Hussman Strategic Advisors

What Writers Really Do When They Write

1
Many years ago, during a visit to Washington DC, my wife’s cousin pointed out to us a crypt on a hill and mentioned that, in 1862, while Abraham Lincoln was president, his beloved son, Willie, died, and was temporarily interred in that crypt, and that the grief-stricken Lincoln had, according to the newspapers of the day, entered the crypt “on several occasions” to hold the boy’s body. An image spontaneously leapt into my mind – a melding of the Lincoln Memorial and the Pietà. I carried that image around for the next 20-odd years, too scared to try something that seemed so profound, and then finally, in 2012, noticing that I wasn’t getting any younger, not wanting to be the guy whose own gravestone would read “Afraid to Embark on Scary Artistic Project He Desperately Longed to Attempt”, decided to take a run at it, in exploratory fashion, no commitments. My novel, Lincoln in the Bardo, is the result of that attempt, and now I find myself in the familiar writerly fix of trying to talk about that process as if I were in control of it.

We often discuss art this way: the artist had something he “wanted to express”, and then he just, you know … expressed it. We buy into some version of the intentional fallacy: the notion that art is about having a clear-cut intention and then confidently executing same.

The actual process, in my experience, is much more mysterious and more of a pain in the ass to discuss truthfully.

2
A guy (Stan) constructs a model railroad town in his basement. Stan acquires a small hobo, places him under a plastic railroad bridge, near that fake campfire, then notices he’s arranged his hobo into a certain posture – the hobo seems to be gazing back at the town. Why is he looking over there? At that little blue Victorian house? Stan notes a plastic woman in the window, then turns her a little, so she’s gazing out. Over at the railroad bridge, actually. Huh. Suddenly, Stan has made a love story. Oh, why can’t they be together? If only “Little Jack” would just go home. To his wife. To Linda.

What did Stan (the artist) just do? Well, first, surveying his little domain, he noticed which way his hobo was looking. Then he chose to change that little universe, by turning the plastic woman. Now, Stan didn’t exactly decide to turn her. It might be more accurate to say that it occurred to him to do so; in a split-second, with no accompanying language, except maybe a very quiet internal “Yes.”

He just liked it better that way, for reasons he couldn’t articulate, and before he’d had the time or inclination to articulate them.

An artist works outside the realm of strict logic. Simply knowing one’s intention and then executing it does not make good art. Artists know this. According to Donald Barthelme: “The writer is that person who, embarking upon her task, does not know what to do.” Gerald Stern put it this way: “If you start out to write a poem about two dogs fucking, and you write a poem about two dogs fucking – then you wrote a poem about two dogs fucking.” Einstein, always the smarty-pants, outdid them both: “No worthy problem is ever solved in the plane of its original conception.”

How, then, to proceed? My method is: I imagine a meter mounted in my forehead, with “P” on this side (“Positive”) and “N” on this side (“Negative”). I try to read what I’ve written uninflectedly, the way a first-time reader might (“without hope and without despair”). Where’s the needle? Accept the result without whining. Then edit, so as to move the needle into the “P” zone. Enact a repetitive, obsessive, iterative application of preference: watch the needle, adjust the prose, watch the needle, adjust the prose (rinse, lather, repeat), through (sometimes) hundreds of drafts. Like a cruise ship slowly turning, the story will start to alter course via those thousands of incremental adjustments.

The artist, in this model, is like the optometrist, always asking: Is it better like this? Or like this?

The interesting thing, in my experience, is that the result of this laborious and slightly obsessive process is a story that is better than I am in “real life” – funnier, kinder, less full of crap, more empathetic, with a clearer sense of virtue, both wiser and more entertaining.

And what a pleasure that is; to be, on the page, less of a dope than usual.

3
Revising by the method described is a form of increasing the ambient intelligence of a piece of writing. This, in turn, communicates a sense of respect for your reader. As text is revised, it becomes more specific and embodied in the particular. It becomes more sane. It becomes less hyperbolic, sentimental, and misleading. It loses its ability to create a propagandistic fog. Falsehoods get squeezed out of it, lazy assertions stand up, naked and blushing, and rush out of the room.

Is any of this relevant to our current political moment?

Hoo, boy.

When I write, “Bob was an asshole,” and then, feeling this perhaps somewhat lacking in specificity, revise it to read, “Bob snapped impatiently at the barista,” then ask myself, seeking yet more specificity, why Bob might have done that, and revise to, “Bob snapped impatiently at the young barista, who reminded him of his dead wife,” and then pause and add, “who he missed so much, especially now, at Christmas,” – I didn’t make that series of changes because I wanted the story to be more compassionate. I did it because I wanted it to be less lame.

But it is more compassionate. Bob has gone from “pure asshole” to “grieving widower, so overcome with grief that he has behaved ungraciously to a young person, to whom, normally, he would have been nice”. Bob has changed. He started out a cartoon, on which we could heap scorn, but now he is closer to “me, on a different day”.

How was this done? Via pursuit of specificity. I turned my attention to Bob and, under the pressure of trying not to suck, my prose moved in the direction of specificity, and in the process my gaze became more loving toward him (ie, more gentle, nuanced, complex), and you, dear reader, witnessing my gaze become more loving, might have found your own gaze becoming slightly more loving, and together (the two of us, assisted by that imaginary grouch) reminded ourselves that it is possible for one’s gaze to become more loving.

Or we could just stick with “Bob was an asshole,” and post it, and wait for the “likes”, and for the pro-Bob forces to rally, and the anti-barista trolls to anonymously weigh in – but, meanwhile, there’s poor Bob, grieving and misunderstood, and there’s our poor abused barista, feeling crappy and not exactly knowing why, incrementally more convinced that the world is irrationally cruel.

by George Saunders, The Guardian |  Read more:
Image: Yann Kebbi for Review

Saturday, March 4, 2017

The Plane So Good It's Still In Production After 60 Years

[ed. My first plane was a Cessna 140 (taildragger precursor to the 150). The next, a straight-tailed 1956 Cessna 172. I loved that plane. 206 nose gear, oversized tires, manual flaps. Just a joy.]

It can seat four people, in a squeeze, and weighs a little under 800kg without fuel or its passengers. It has a maximum speed of 140mph (226km/h), though you could push this up to 185mph at a pinch – but the manufacturer would rather you didn’t. And on a tank full of fuel, you could travel 800 miles (1,290km) – the equivalent of going from Berlin to Belfast, or New York to Madison, Wisconsin.

You might think this was a high-performance car with a little more-than-average leg room – but it’s a plane. The Cessna 172, which first rolled off the production line in 1956, is still in production today. And if any design could claim to be the world’s favourite aircraft, it’s the 172.

More than 43,000 Cessna 172s have been made so far. And while the 172 (also known as the Skyhawk) has undergone a myriad of tweaks and improvements over the past 60-odd years, the aircraft essentially looks much the same as it did when it was first built in the 1950s.

In the past 60 years, Cessna 172s have become a staple of flight training schools across the world. Generations of pilots have taken their first, faltering flights in a Cessna 172, and for good reason – it’s a plane deliberately designed to be easy to fly, and to survive less-than-accomplished landings.

“More pilots over the years have earned their wings in a 172 than any other aircraft in the world,” says Doug May, the vice-president of piston aircraft at Cessna’s parent company, Textron Aviation.

“The forgiving nature of the aircraft really does suit it to the training environment,” he says.

Light aircraft might not be updated as often as cars, but 60 years is still a very long time to produce a vehicle that has remained essentially unchanged. The only time its production ceased for an extended period was from the mid-1980s to the mid-1990s, when US product liability laws made the manufacture of light aircraft prohibitively expensive. What is it about the 172 that has made it such a favourite for so long?

One answer comes from the fact that the Cessna 172 is a high-wing monoplane – meaning the wings sit high above the cockpit. This is very useful for student pilots because it gives them a better view of the ground and makes the aircraft much easier to land.

The 172 was based on an earlier Cessna design called the 170. This looked very similar apart from the fact it was a “taildragger” – instead of a wheel at the front, the 170 had a smaller wheel at the back, underneath the tailfin (like most aircraft before the arrival of jets). The 170 enjoyed the benefits of a light aircraft boom in the years following World War Two, as many of the companies that had produced tens of thousands of military aircraft now turned their attention to civilian aircraft.

The Cessna 170 was a very successful design – more than 5,000 were made in less than a decade – but its tailwheel undercarriage demanded a deft touch on landing, just as novice pilots were flooding into flying schools. So the basic design of the 170 was modified with a nosewheel, and made more robust – where the first 170s had fabric-covered wings, the 172 was all-metal aluminium.

The new tricycle undercarriage made touchdowns so forgiving that Cessna’s marketing department dubbed the 172 the “land-o-matic” because it was so easy to fly and land.

“I think it’s really the robustness that’s been behind the aircraft’s success,” says May. “It’s able to take six to eight to 10 landings an hour, hour after hour.” May says the 172 is often the plane a student will take their first flight in – and it will often take them through their hours until they qualify for a pilot’s licence.

“The Cessna 172 was not built to minimum requirements,” says May. “I think they did an exceptional job of looking at the intended role, and actually providing a plane that would surpass those requirements.”

And during its history, that ease of use and reliability has led to some quite remarkable flights.

On 4 December 1958 two pilots called Robert Timm and John Cook climbed into a Cessna 172 at McCarran Airfield in Las Vegas. Their mission? To break the world record for the longest flight without landing.

This would be no easy feat. The previous record, which was set in 1949, was a colossal achievement – the two pilots had flown an aircraft very like Timm and Cook’s Cessna for a total of 46 days – all to raise money for a cancer fund.

The two pilots would need to keep their aircraft in the air for nearly seven weeks, without landing once. According to Jalopnik, the necessary modifications took more than a year to make – and included a small sink so the two pilots could brush their teeth and even bathe. The back seats were also stripped out to make room for a mattress: while one pilot flew the plane, the other would sleep. And should they feel the need to shower? A small platform could be extended between the open cabin and the wing strut – allowing the relief pilot to shower out in the open air.

by Stephen Dowling, BBC | Read more:
Images: markk

Must It Always Be Wartime?

[ed. Interesting. It never occurred to me how a good portion of the military budget might be allocated to aid and nation building as a form of preventative national security.]

If the fight against terrorist groups is hard enough to classify, consider new and emerging security threats—such as cyberattacks on critical infrastructure or the use of bioengineered viruses—that do not involve the kinetic or explosive weapons of traditional war. Does it make sense to speak of “combatants” when the attacker is not an armed soldier but a hacker at a computer terminal or a scientist in a biology laboratory? And even if they are combatants, is it a proper response to such attacks to authorize shooting or bombing them from afar, as is permitted in a traditional armed conflict?

International humanitarian law is clearly in need of elaboration in order to address these newer forms of conflict, but it should at least provide the starting point. For example, biological warfare unleashing deadly pathogens or cyber warfare shutting down electrical facilities are disturbing in large part because they could inflict widespread indiscriminate and disproportionate civilian casualties—concepts that are central to humanitarian law.

Similarly, a firmer grounding in international human rights and humanitarian law would have helped to avoid the kinds of perversions of that law that were orchestrated by the Bush administration, whose attorney general, Alberto Gonzales, dismissed the Geneva Conventions as “quaint” and “obsolete” and whose Justice Department cited a “new kind of war” to authorize “enhanced interrogation techniques” such as waterboarding, a form of torture. In fact, despite Trump’s musings about reviving it, international law prohibits torture—indeed, makes it a crime—in times of both peace and war.

Greater attention to human rights principles might also have led Trump to temper his executive order temporarily banning visitors to the United States from seven mainly Muslim countries. Ostensibly designed to fight terrorism, it made no effort to limit its scope to people who posed any identifiable threat, at enormous personal cost, if upheld by the courts, to the 60,000 people whose visas were suddenly not recognized.

Complicating matters further is the expanding role of the US military. Today, counterinsurgency strategy is broadly understood to involve far more than fighting an opposing military. It also has come to mean protecting the civilian population and building government institutions that serve rather than prey upon people, including a legal system that protects rights. Trump is now questioning the utility of such “nation-building,” but in the meantime it has led the Pentagon to sponsor a variety of programs that have little to do with confronting enemy troops.

As Brooks describes it, US soldiers now undertake public health programs, agricultural reform efforts, small business development projects, and training in the rule of law. This expanding mandate, as Brooks shows, has enabled the Pentagon to dramatically increase its budget—few in Congress deny requests for more spending on national defense—even as austerity eviscerates the budgets of the agencies that traditionally carry out these tasks, such as the State Department and USAID.

The radically different budgets of the Pentagon and its civilian counterparts only reinforce the tendency to look to the military to address nonmilitary problems—to treat it as a “Super Walmart” ready to respond to the nation’s every foreign policy need. “It’s a vicious circle,” Brooks explains, “as civilian capacity has declined, the military has stepped into the breach.”

Yet there is a cost to a self-reinforcing cycle of militarizing US foreign policy. Pursuing economic development, undertaking agrarian reform, expanding the rule of law—these are tasks requiring considerable expertise, including linguistic skills and cultural sensitivity not usually associated with the average military recruit, still chosen foremost for strength and agility even in a world in which traditional military tasks diminish in importance.

Moreover, humanitarian and development workers have typically enjoyed a degree of protection in the field because of their neutrality—their dedication to offering services on the basis of need rather than political preference. The militarization of these efforts has contributed to the “shrinking of humanitarian space” in which aid workers give assistance; they are increasingly endangered because they are perceived as military assets. The US may not be well served by Congress’s reflexive preference for military solutions to civilian problems.

by Kenneth Roth, NYRB |  Read more:
Image: NATO

How Millions of Kids Are Being Shaped by Know-It-All Voice Assistants

Kids adore their new robot siblings.

As millions of American families buy robotic voice assistants to turn off lights, order pizzas and fetch movie times, children are eagerly co-opting the gadgets to settle dinner table disputes, answer homework questions and entertain friends at sleepover parties.

Many parents have been startled and intrigued by the way these disembodied, know-it-all voices — Amazon’s Alexa, Google Home, Microsoft’s Cortana — are impacting their kids’ behavior, making them more curious but also, at times, far less polite.

In just two years, the promise of the technology has already exceeded the marketing come-ons. The disabled are using voice assistants to control their homes, order groceries and listen to books. Caregivers to the elderly say the devices help with dementia, reminding users what day it is or when to take medicine.

For children, the potential for transformative interactions is just as dramatic — at home and in classrooms. But psychologists, technologists and linguists are only beginning to ponder the possible perils of surrounding kids with artificial intelligence, particularly as they traverse important stages of social and language development.

“How they react and treat this nonhuman entity is, to me, the biggest question,” said Sandra Calvert, a Georgetown University psychologist and director of the Children’s Digital Media Center. “And how does that subsequently affect family dynamics and social interactions with other people?”

With an estimated 25 million voice assistants expected to sell this year at $40 to $180 — up from 1.7 million in 2015 — there are even ramifications for the diaper crowd.

Toy giant Mattel recently announced the birth of Aristotle, a home baby monitor launching this summer that “comforts, teaches and entertains” using AI from Microsoft. As children get older, they can ask or answer questions. The company says, “Aristotle was specifically designed to grow up with a child.”

Boosters of the technology say kids typically learn to acquire information using the prevailing technology of the moment — from the library card catalogue, to Google, to brief conversations with friendly, all-knowing voices. But what if these gadgets lead children, whose faces are already glued to screens, further away from situations where they learn important interpersonal skills?

It’s unclear whether any of the companies involved are even paying attention to this issue. (...)

Today’s children will be shaped by AI much as their grandparents were shaped by a new device called television. But you couldn’t talk with a TV.

Ken Yarmosh, a 36-year-old Northern Virginia app developer and founder of Savvy Apps, has multiple voice assistants in his family’s home, including those made by Google and Amazon. (The Washington Post is owned by Amazon founder Jeffrey P. Bezos, whose middle name is Preston, according to Alexa.)

Yarmosh’s 2-year-old son has been so enthralled by Alexa that he tries to speak with coasters and other cylindrical objects that look like Amazon’s device. Meanwhile, Yarmosh’s now 5-year-old son, in comparing his two assistants, came to believe Google knew him better.

by Michael S. Rosenwald, Washington Post | Read more:
Image: Bill O'Leary

Friday, March 3, 2017

Pharrell Williams



[ed. ... everybody stole my moves.]

This Is How Your Hyperpartisan Political News Gets Made

The websites Liberal Society and Conservative 101 appear to be total opposites. The former publishes headlines such as “WOW, Sanders Just Brutally EVISCERATED Trump On Live TV. Trump Is Fuming.” Its conservative counterpart has stories like “Nancy Pelosi Just Had Mental Breakdown On Stage And Made Craziest Statement Of Her Career.”

So it was a surprise last Wednesday when they published stories that were almost exactly the same, save for a few notable word changes.

After CNN reported White House counselor Kellyanne Conway was “sidelined from television appearances,” both sites whipped up a post — and outrage — for their respective audiences. The resulting stories read like bizarro-world versions of each other — two articles with nearly identical words and tweets optimized for opposing filter bubbles. The similarity of the articles also provided a key clue BuzzFeed News followed to reveal a more striking truth: These for-the-cause sites that appeal to hardcore partisans are in fact owned by the same Florida company.

Liberal Society and Conservative 101 are among the growing number of so-called hyperpartisan websites and associated Facebook pages that have sprung up in recent years, and that attracted significant traffic during the US election. A previous BuzzFeed News analysis of content published by conservative and liberal hyperpartisan sites found they reap massive engagement on Facebook with aggressively partisan stories and memes that frequently demonize the other side’s point of view, often at the expense of facts.

Jonathan Albright, a professor at Elon University, published a detailed analysis of the hyperpartisan and fake news ecosystem. Given the money at stake, he told BuzzFeed News he’s not surprised some of the same people operate both liberal and conservative sites as a way to “run up their metrics or advertising revenue.”

“One of the problems that is a little overlooked is that it’s not one side versus the other — there are people joining in that are really playing certain types of political [views] against each other,” Albright said.

And all it takes to turn a liberal partisan story into a conservative one is to literally change a few words. (...)

The stories read like they were stamped out of the same content machine because they were. Using domain registration records and Google Analytics and AdSense IDs, BuzzFeed News determined that both sites are owned by American News LLC of Miami.

That company also operates another liberal site, Democratic Review, as well as American News, a conservative site that drew attention after the election when it posted a false article claiming that Denzel Washington endorsed Trump. It also operates GodToday.com, a site that publishes religious clickbait.

Liberal Society, Democratic Review, and God Today all have the same Google Analytics ID in their source code, which means they are connected. Domain registration records show that American News LLC is the owner of God Today. (The other two sites have private ownership records.)

Conservative 101 and American News have the same Google AdSense ID and domain records show that the latter is also registered to American News LLC. Corporate records list John Crane and Tyler Shapiro as officers of the company, and Crane is listed in domain ownership records. They did not respond to three emails and a phone message from BuzzFeed News.
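The attribution technique here, matching shared Google Analytics and AdSense account IDs across the source code of different sites, can be sketched in a few lines. This is a minimal illustration, not BuzzFeed's actual tooling: the page snippets and account IDs below are invented, and real use would fetch each site's homepage HTML.

```python
import re

# Hypothetical page sources standing in for fetched HTML; the IDs are
# made up for illustration.
PAGES = {
    "liberalsociety.com":  '<script>ga("create","UA-1234567-1");</script>',
    "godtoday.com":        '<script>ga("create","UA-1234567-3");</script>',
    "conservative101.com": '<ins data-ad-client="ca-pub-9876543210"></ins>',
    "americannews.com":    '<ins data-ad-client="ca-pub-9876543210"></ins>',
}

# Classic Google Analytics property IDs look like UA-XXXXXXX-N: the
# UA-XXXXXXX part identifies the account, the -N suffix the property.
GA_RE = re.compile(r"UA-(\d{4,10})-\d+")
ADSENSE_RE = re.compile(r"ca-(pub-\d{10,16})")

def group_by_id(pages):
    """Map each analytics/ad account ID to the set of domains embedding it."""
    owners = {}
    for domain, html in pages.items():
        for pattern in (GA_RE, ADSENSE_RE):
            for match in pattern.finditer(html):
                owners.setdefault(match.group(1), set()).add(domain)
    return owners

if __name__ == "__main__":
    for account, domains in group_by_id(PAGES).items():
        if len(domains) > 1:
            print(account, "->", sorted(domains))
```

Any account ID shared by more than one domain suggests common ownership, which is exactly the signal described above; domain registration records then confirm it.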

Domain records suggest they began as conservative news publishers. John Crane acquired the AmericanNews.com domain in 2014 and added Conservative101.com in May of 2016. The company moved into liberal partisan news with the registration of DemocraticReview.com in June of last year and LiberalSociety.com a month later. (Their religious clickbait site, GodToday.com, was registered in February of last year.)

They also appear to run several large Facebook pages that play a major role in helping their partisan content generate social engagement and traffic. Content from American News is pushed out via a page with more than 5 million fans, while Liberal Society’s stories are promoted on a page with over 2 million fans. (...)

Grant Stern is a progressive who writes a column for Occupy Democrats and is the executive director of Photography Is Not A Crime. BuzzFeed News sent him American News LLC’s liberal and conservative sites and asked him to comment on the fact that they’re run by the same company.

“Those websites are marketing websites,” he said after looking at the content, “and the product they’re pitching is outrage.”

by Craig Silverman, Buzzfeed |  Read more:
Image: Liberal Society / Conservative 101

Astrid Trugg

via:

This Speck of DNA Contains a Movie, a Computer Virus, and an Amazon Gift Card

In 1895, the Lumière brothers, among the first filmmakers in history, released a movie called The Arrival of a Train at La Ciotat Station. Just 50 seconds long, it consists of a silent, unbroken, monochrome shot of a train pulling into a platform full of people. It was a vivid example of the power of “animated photographs”, as one viewer described them. Now, 122 years later, The Arrival of a Train is breaking new ground again. It has just become one of the first movies to be stored in DNA.

In the famous double-helices of life’s fundamental molecule, Yaniv Erlich and Dina Zielinski from the New York Genome Center and Columbia University encoded the movie, along with a computer operating system, a photo, a scientific paper, a computer virus, and an Amazon gift card.

They used a new strategy, based on the codes that allow movies to stream reliably across the Internet. In this way, they managed to pack the digital files into record-breakingly small amounts of DNA. A one terabyte hard drive currently weighs around 150 grams. Using their methods, Erlich and Zielinski can fit 215,000 times as much data in a single gram of DNA. You could fit all the data in the world in the back of a car.

Storing information in DNA isn’t new: life has been doing it for as long as life has existed. The molecule looks like a twisting ladder, whose rungs are made from four building blocks, denoted by the letters A, C, G, and T. The sequence of these letters encodes the instructions for building every living thing. And if you can convert the ones and zeroes of digital data into those four letters, you can use DNA to encode pretty much anything.

Why bother? Because DNA has advantages that other storage media do not. It takes up much less space. It is very durable, as long as it is kept cold, dry, and dark—DNA from mammoths that died thousands of years ago can still be extracted and sequenced. And perhaps most importantly, it has a 3.7-billion-year track record. Floppy disks, VHS, zip disks, laser disks, cassette tapes… every media format eventually becomes obsolete, and every new format forces people to buy new reading devices and update their archives. But DNA will never become obsolete. It has such central importance that biologists will always want to study it. Sequencers will continue to improve, but there will always be sequencers.

by Ed Yong, The Atlantic |  Read more:
Image: NY Genome Center

The Resistance Will Have All the Proper Permits

The U.S. has started becoming a country of protesters once again, largely in response to the proposed policies and rhetoric of President Donald Trump. In response, citing the ostensible grounds of disorderliness and manipulation, Republican lawmakers have introduced bills to curb protesting in at least 17 states, with possibly more to come. I don’t approve, and if you don’t either I have a sorry message for you: This trend has been the bipartisan thrust of American policy since the 1970s.

For decades, we’ve restricted protests to protect safety and public order, but an important part of our democracy has eroded, namely the constitutional right to public assembly. I outline this history in one chapter of my new book, “The Complacent Class: The Self-Defeating Quest for the American Dream”; it is just one of many examples of how Americans are giving up their former dynamism for greater security, and to the possible long-run detriment of society.

These days, there exists a mini-industry of “protest planners,” comparable to wedding or convention planners. They will help you coordinate with the police, set up stages and sound systems in the approved manner, and clean up after the event. A major protest is a bureaucratized event, accompanied by professional teams of public relations and media management. The right to assembly has not been banished; it’s simply been limited, and made more difficult and expensive. At the margin that will limit the number and diversity of protests.

To understand how we got to this point, consider the chaos of public protests from the 1960s and early 1970s. The 1968 to 1975 period saw more instances of anti-government violence than any time since the American Civil War, and eventually state and local governments decided that they would regulate protests more closely.

Take the famed Selma civil-rights march of 1965, when the protesters had obtained the legal right, through petition, to conduct a 52-mile, five-day march down an interstate highway. Of course, that blocked the highway and inconvenienced many motorists and truckers. America’s NIMBY mentality would most likely prevent a comparable event today.

Starting in the 1970s, the federal courts began to assert that public spaces are not automatically fair game for marches and demonstrations, and so local governments have sought to please the users of such facilities rather than marchers and protesters. For instance, during the 2004 Democratic National Convention, numerous would-be demonstrators ended up being confined to a “demonstration zone,” which one federal judge described as analogous to Piranesi’s etchings of a prison. The zone was ringed by barricades, fences and coiled razor wire.

Or take the Occupy Wall Street movement in 2011, which was in large part defanged by the authorities of the city of New York. Rather than opting for outright confrontation, and perhaps some publicity victories for the protesters, authorities waited for the winter to shut down the encampment. The city police also surrounded the main protest site, Zuccotti Park, with their cars and set up a watchtower to keep a vigil. Barricades were placed to keep pedestrians away from the site, and passers-by were encouraged to “keep moving.”

Washington is in some ways more restrictive yet. The National Park Service controls about 25 percent of the city, including many of the focal spots. If a protest is expected to be of any note, the organizers will be required to meet with the Park Police and possibly the Capitol Police to plan it out, accompanied by lawyers in many cases. Further complications arise if the Secret Service is involved, and virtually any protest can be stifled or shut down altogether by invoking national security or terrorism fears.

by Tyler Cowen, Bloomberg |  Read more:
Image: Sarah Morris/Getty

Fukushima is Worse Than Ever

Remember the Fukushima Daiichi nuclear accident following the 2011 earthquake and tsunami in Japan? I wrote about it at the time, here, here, here, here, and here, explaining that the accident was far worse than the public was being told and that it would take many decades — if ever — for the site to recover. Well it’s six years later and, if anything, the Fukushima situation is even worse. Far from being over, the nuclear meltdown is continuing, the public health nightmare increasing. Why aren’t we reading about this everywhere? Trump is so much more interesting, I guess.

“The radiation levels inside Japan’s damaged Fukushima Daiichi nuclear reactor No. 2 have soared in recent weeks, reaching a maximum of 530 sieverts per hour, a number experts have called ‘unimaginable.’”

That’s the most recent evidence. Click on it and you’ll find the original story underneath. What this means is there’s a puddle of molten uranium that has melted its way through the steel pressure vessel, through the reinforced concrete containment, through the reinforced concrete foundation of the nuclear facility, and is now working its way through whatever rock or soil lies underneath the foundation, dropping lower each day.

Remember the China Syndrome?

The danger here isn’t that molten uranium will make its way to the opposite side of the world because that’s impossible. Liquid uranium won’t flow uphill, so the furthest it could go is the center of the Earth’s core, which is probably a great place for nuclear disposal. The real problem is that these next hundreds or thousands of feet for the uranium to drop could well facilitate the transfer of radiation and radioactive materials into the environment. This is right by the sea, remember, where radioactive cooling water has been released continuously for six years already. If the molten uranium hits an underground aquifer, such a spread could get even worse.

Just in case you didn’t bother to read any of those old column links above, here’s why I am writing about this subject and why you should take me seriously. Back in 1979 I was hired by the White House to help investigate the nuclear accident at Three Mile Island. My friend Robert Bishop, whom I consulted for this column, was the only American at Chernobyl.

There is no solution to the Fukushima problem, but a few things can be done to mitigate the crisis, the main one being the so-called ice wall, which you may have read about.

Yeah, like in Game of Thrones.

Liquid nitrogen is pumped underground to freeze a ring of soil around the power plant. The idea isn’t to somehow cool the molten uranium, because that has its own source of heat that will last for a century or more. The point of the ice wall is to contain the poison while also minimizing the incursion of water. Starve the puddle of water, the idea goes, and just let the uranium do its thing. Water is bad not just because it might carry radiation and radioactive materials away from the site and into the environment; it is bad because the uranium’s heat will turn it into hydrogen and oxygen, which will then explode.

by Robert X. Cringely, I Cringely |  Read more:
Image: uncredited

The Worst Generation

[ed. Written 10 years ago and still as relevant as ever (and I'm a Boomer).] 

At a press gathering just after the 1992 election, David Broder, the dean of Washington reporters, commented to me that my Clintonista colleagues and I seemed so, well, so young to him. "I guess you Baby Boomers are really taking over," he said.

That's when it happened. I'd never been called a Boomer before. Poor Broder. My eyes got squinty and my face got red. The veins in my temples throbbed. The look on his face was horrible. He must have thought I was about to rip off his head and spit down his neck. Which I was.

"I am not a Baby Boomer," I snapped. "I am so tired of hearing about the goddamn Baby Boomers! I've spent my whole life swimming behind that garbage barge of a generation. They ruined everything they've passed through and left me in their wake."

Broder shook his head and walked away.

But the garbage barge just chugs on. As they enter late middle age, the Boomers still can't grow up. Guys who once dropped acid are now downing Viagra; women who once eschewed lipstick are now getting liposuction. At the risk of feeding their narcissism, I believe it's time someone stated the simple truth: The Baby Boomers are the most self-centered, self-seeking, self-interested, self-absorbed, self-indulgent, self-aggrandizing generation in American history.

I hate the Boomers.

I know it's a sin to hate, so let me put it this way: If they were animals, they'd be a plague of locusts, devouring everything in their path and leaving nothing but a wasteland. If they were plants, they'd be kudzu, choking off every other living thing with their sheer mass. If they were artists, they'd be abstract expressionists, interested only in the emotions of that moment--not in the lasting result of the creative process. If they were a baseball club, they'd be the Florida Marlins: prefab prima donnas who bought their way to prominence, then disbanded--a temporary association but not a team.

Of course, it is as unfair to demonize an entire generation as it is to characterize an entire gender or race or religion. And I don't literally mean that everyone born between 1946 and 1964 is a selfish pig. But generations can have a unique character that defines them, especially the elites of a generation--those lucky few who are blessed with the money or brains or looks or skills or education that typifies an era. Whether it was Fitzgerald and Hemingway defining the Lost Generation of World War I and the Roaring Twenties, or JFK and the other heroes of the World War II generation, or the high-tech whiz kids of the post-Boomer generation, certain archetypes define certain times.

You know who you are. If you grew your hair and burned your draft card on campus during the sixties; if you toked, screwed, and boogied your way through the seventies; if you voted for Reagan and believed "Greed is good" in the eighties; and if you're trying to make up for it now by nesting as you cluck about the collapse of "family values," you're it. If not, even if demographers call you a Boomer, you probably hate our generation's elite as much as I do.

by Paul Begala, Esquire |  Read more:
Image: via

Thursday, March 2, 2017


Ken Price, Heat Wave, 1995
via:

How Domino’s Became the Pizza for the People

The act of ordering Domino’s can be described only as a sublime experience. I open my computer and hit the bookmark that takes me to the sleek homepage. I log in to my personal pizza profile, and, with just one click, I pay for my pepperoni pan pizza with extra sauce, baked, well done. Then I sit back, watch the online tracker, and think of nothing else as it broadcasts the updates: Isabel puts my pizza in the oven. Darien executes the quality check. Moe carries it to the car and brings it to me. Then the real experience begins.

My two hands grab the box and open it hastily. Gratification is near. The cheese and swirls of sauce threaten to drip over the edge of the thick, buttery crust. I take a bite before it cools. The rush of fat and oil and robust tomato sauce and butter hit my tongue and light up my pleasure centers. Another bite. Another slice. A whole pie. Domino’s satisfies in a way nothing else can. Tell me, when’s the last time a Papa John’s pizza ever did that for you?

I’m firmly in Domino’s target demographic, if not a dream user. I ordered from the chain all the time as a college student, and following a brief separation, I continue to order Domino’s even though I’m an adult with a working knowledge of nutrition and a tendency toward heartburn. I ride for Domino’s like Jay Z rides for Beyoncé. (...)

Ruby Tandoh, 2013 Great British Bake Off contestant and cookbook author, agrees. “I’m not a legit Domino’s connoisseur but I’ve liked it every time I’ve had it,” she says. “I can say with pretty much complete certainty that part of the reason for this is because it is the anti-pizza. It’s the anti-artisanal, anti-natural, anti-handmade, anti-thin crust pizza. It’s … the pizza emoji pizza. I love it.”

Even “real New Yorkers” are believers. Jasmine Moy, a Manhattan-based lawyer in the food and drink industry, ordered Domino’s for the first time four months ago and was an immediate fan. “Listen, I’m a New Yorker, I know pizza. But I will say that pan pizza showed up and I thought it was not awful. I actually sort of liked it. Now I have a craving for it. The bread dough is almost like a focaccia … it’s crispy and it’s oily and it’s pretty delicious.” Now she and her boyfriend order a Supreme Pan Pizza with jalapeños every other week. He is a “Domino’s freak,” she says, and she’s becoming one too.

Thanks to unofficial celebrity endorsements from Allison Williams and Nigella Lawson and changes to its recipes, nobody needs to be ashamed of their fervent love (or even casual tolerance) for Domino’s pizza. Armed with the signature pan pizza and stock currently trading at $186 per share, the chain has earned enough fans that all who order no longer need to call it a guilty pleasure. It’s just a pleasure. And the fans are more like fanatics.

I am the first to admit that there’s a part of being a Domino’s enthusiast that is painful. It makes you feel a little terrible inside. I’m not talking about psychologically; it’s like your organs are protesting. Maybe that’s why Domino’s has made it its mission in the past six or seven years to shamelessly pander to fans (or maybe it’s just really profitable). Regardless of motive, Domino’s major innovations since 2009 have not only increased sales but also inspired fanaticism and viral discussion of the brand — except for maybe the introduction of salad. “There have been some misses,” admits chief marketing officer Joe Jordan.

In 2008, sales were at an all-time low. Its stock was worth $4 a share. A national taste test indicated that consumers considered it on par with Chuck E. Cheese’s in terms of taste. That’s dark. Sure, you could get it fast; the company was still well known for its 30-minutes-or-less guarantee. But what good was fast delivery if the crust tasted like cardboard and the sauce tasted like ketchup, as focus groups reported? (Let me ask you that again when you’re stoned.) Domino’s knew its pizza was bad, and this is where the brand’s revival story began.

The company made a few key changes starting in 2008. First, it released the Pizza Tracker, which gamified pizza ordering and, according to an unofficial poll I conducted, is the most beloved part of the Domino’s experience. (Moy, her boyfriend, and even a friend who doesn’t really like Domino’s agree on that.) It also set Domino’s up as the most technologically advanced pizza chain, a spirit that it’s continued by allowing people to order by emoji, working with Samsung to allow voice-ordering through smart TVs, and introducing drone deliveries in New Zealand.

The following year, the company launched its “new and inspired” pizza, a recipe that required 18 months and millions of dollars to perfect: 100 percent real mozzarella, flavored with a hint of provolone; a sweeter, bolder tomato sauce with herbs and red pepper; and a garlic-seasoned crust with parsley baked in.

But changing the pizza wasn’t enough. The company wanted to inspire people to return to the brand.

by Allison P. Davis, The Ringer |  Read more:
Image: Xenia Latii

Full Transcript: The Wall Street Journal’s Joanna Stern Talks Wireless Data Plans

Trying to figure this stuff out, but what would you say is the best all-around wireless plan you think you can get right now from the four big carriers in the U.S.?

Yeah, it is a really tough question. I think T-Mobile has done some really great things. What you see happening in the industry — and this is sort of T-Mobile’s biggest problem — is that they had a huge jump in subscribers last year, I think in the fourth quarter. But you shouldn’t switch, and I make this point in the column. You should not switch to a different carrier for a deal. It’s just the worst thing you can do.

You only want to switch if you know that that has the best service in the places you are. That’s what’s most important. Saying something has the best plan can be a little misleading to readers or people that follow this podcast. I don’t like to say that because I don’t want people to go run and get T-Mobile and then find out, “Oh no, I don’t have service, my vacation home or my office is a T-Mobile dead spot.” That’s one of the biggest problems with T-Mobile and also Sprint. AT&T and Verizon in the U.S. have, I wouldn’t say superb coverage but they have the best coverage that you can get. That’s what’s made them more of a premium offering.

All this said, I feel like I’ve couched this all. T-Mobile does have some great deals, now you see Verizon, specifically this week, trying to catch up with an unlimited deal. You have Sprint also just hustling with the old Verizon guy all over the place saying, “We are the best, and we will basically give you everything for free. We’re so desperate for customers.” AT&T has also followed.

What’s great about T-Mobile is they have now switched to “all-unlimited plans.” You get a very big bucket of data, and we can talk about this in a few minutes. You get a very big bucket of data but they also have put the fees inside that. There’s no hidden fees on your bill, I think that’s actually really important because I think everyone needs to understand what’s happening on their bill. Even if you don’t think you can save any money, just understand where your money is going. These companies will take you for everything.

Oh yeah.

They do take you for everything.

Absolutely.

It makes me feel better to just know where the money is going. For instance, in this process I called Verizon or went into a Verizon store and found out about like three things I didn’t know Verizon was offering, because it’s such a mess in their app and everything like that. Just understand what’s going with your bill.

First thing is to look up what the coverage is like in your area. You can do that using, what was that you mentioned? RootMetrics.

Yeah, RootMetrics is a great way to do that.

Do that, find out who offers the best coverage where you are.

Exactly, ask people. For example, one of the reasons I stay with Verizon is because my parents just got a house upstate and I know that Verizon has good service there, because I’ve seen other people with T-Mobile where there is no good service. That’s not something that would’ve come up on RootMetrics. Just ask around, or when you go to those places, see if anybody has that type of service or that carrier and decide then what you’d feel comfortable with.

This is a real tech reviewer problem, by the way, but I’m also on Verizon, and you know we’re constantly cycling through these loaner phones and new devices that come through. A lot of times some of the brand-new unlocked cool phones that I’m trying to test only operate on a GSM network and not CDMA, so I have to basically get a different SIM card and something that will operate on that network as well. It seems like it’s still a little bit easier to switch from device to device if you’re on GSM.

It does, but some of the phones have both of the radios in it, so iPhone you shouldn’t have that problem. Also the Pixel, the new Pixels have both the radios so you should be fine. Yeah, totally, AT&T and T-Mobile are going to be more compatible with more phones, especially if you travel internationally.

Right. What are some of the catches in these plans? I was reading through the Verizon fine print the other day after they announced they were bringing back this $80-per-month unlimited plan. It seems there are some instances where throttling is going to come into play. Some people may not even understand what throttling really means: They’re going to limit your data speeds on hotspots and tethering, which I do all the time.


Yeah.

What are some things that people should definitely be aware of if they’re thinking of some of these newer plans?

Let me try to describe it this way: With all of these plans now, all four of the big networks in the U.S. offer unlimited plans. Let’s use “unlimited” in quotes, because there really is no such thing as an unlimited plan. Just like there’s no such thing as anything in life is free, right? Nothing is free, nothing is truly unlimited. (...)

Okay, so let’s picture it this way: They give you an unlimited plan, right? That means you should have as many gigabytes in the world to stream video all day long and all night long. And I don’t know, watch all of Lauren’s videos and all my videos and download millions of files and all of these things, right? You should have enough to do all of that all through the month. Except that’s not really true.

The best way to think of it is like you have a highway and you’re cruising down the highway and you can go as fast as you want on that highway. You’re going to get full speeds with this unlimited data, but if you hit a cap when there’s a lot of people on the network — if you’re in a place where there’s a lot of people using their phones and you’ve used a lot of data that month — the carrier will deprioritize your phone.

They might say, “Oh in this one area,” — could be a concert, could be some place, I don’t know, a train station, lots of people are on their phone and are on our network right now — “Oh, Joanna over there, she used 50 gigabytes this month of data. We are going to take her down right now, because the network is so busy. We’re going to throttle her speeds down.” Her LTE speeds are not going to be as fast as Lauren over there who is one of our customers who doesn’t have unlimited data and she’s only used two gigabytes this month. When there are these times of heavy traffic on these networks, they take the people who have unlimited data and have used a lot of data and bring down their speeds. Does that make any sense? (...)

Oh, okay, interesting. Where do products like Google Fi or a company like Republic Wireless fit into this landscape? We’ve spent a lot of time talking about the four big carriers so far.

Yeah, I think these are fascinating, and I think unfortunately there’s such a monopoly by these four big carriers and that’s what most people think to go to when they’re getting their service. Google Fi is really interesting for two reasons: It’s sort of the model that you pay for what you use, whereas here we’re buying, like, unlimited data, or we’re buying eight gigs of data and you get 10 gigs because you get bonus data — who the hell knows what that means. There’s this model of you pay for what you use that makes a lot of sense. If you know what one gigabyte costs a month or whatever and you paid that and then you just kept going and you paid for what you used, it would make sense. I think that’s a really interesting model, and Google doesn’t really have much to lose because they don’t have the legacy of many years of service that the traditional carriers have.

I think there’s another interesting thing with wireless or Wi-Fi built into the plans. Obviously we have 5G on the horizon and other types of things, and we’re more and more often around great, fast Wi-Fi, so making the handoff between Wi-Fi networks and mobile data makes a lot of sense. A lot of that engineering also has to happen in the phone. That’s what Republic Wireless is so good at. Also Wi-Fi calling — Wi-Fi phone calls can sound so much better than traditional carrier phone calls. They have HD voice and things like that. I’ve thrown a lot in here. I think they’re really interesting and you should look at them, but also of course come back to: where’s that coverage, and does your device work with these companies?

by Recode Staff, Recode |  Read more:
Image: Morris MacMatzen - Pool/Getty Images

Wednesday, March 1, 2017

Eleven Thousand Bowls of Soup

It was almost noon on a Friday in the working-class Hong Kong neighborhood of Jordan, and Chiu Wing Keng was tired. The 28-year-old chef had been up until 6 that morning prepping ingredients in anticipation of a busy weekend at Kai Kai Dessert, his family’s two-floor storefront on Ning Po Street. Chiu’s father, Chiu Wai Yip, started Kai Kai nearly four decades ago, having learned the craft from his uncle. The family specializes in the kind of traditional Cantonese dessert soups that my mom, who immigrated to New York from Hong Kong in 1969, made when I was a kid: sweet red-bean soup with lotus seeds, silky egg-­custard pudding, glutinous sesame rice balls drowned in ginger syrup.

In 2015, the Michelin Guide almost caused the family business to close. Kai Kai Dessert was one of two dozen establishments honored in the prestigious culinary guide’s first-ever listings for street food, introduced in the Hong Kong and Macau guide. It’s a nice idea: giving international attention to a longtime local shop so it can bring old-fashioned, painstakingly crafted flavors to a new audience. But the downside came quickly. Customer traffic went up 30 percent in the first month, and a few weeks later, the Chius’ landlord more than doubled their rent, to $27,000 — the equivalent of about 11,000 bowls of soup. That’s more than half the restaurant’s total monthly income. (...)

High-end restaurants across the globe fret over the perks and perils of inclusion in the widely recognized Michelin food bible. How can I take advantage of the attention? Can I handle the increased business? Do I hire more people? Do I expand? Do I franchise? Do I do something — anything — different? But the new street-food category can threaten a restaurant’s very survival: For a hole in the wall serving $3 soups, a rent hike is disastrous. (...)

The Michelin recognition came as a surprise to the family. “We always thought the Michelin award was for fancy, high-class restaurants, not our kind of food,” Chiu the younger told me. The announcement of the street-food category — “a first in the history of the Michelin guides,” said Michael Ellis, the guides’ international director — also surprised many in the food world. Historically fine-dining focused, Michelin has been criticized in recent years for lacking relevance in a world where meals are exhaustively documented by the likes of Yelp, Eater, and any number of opinionated online food guides. Ellis explained the launch as specific to Hong Kong: “Street food is part of the local way of life. The city never sleeps, the streets are constantly bustling, and Hong Kong residents love to eat out, without necessarily sitting down and spending a lot of money.” Some observers, like the guide Lifestyle Asia, accused Michelin of getting gimmicky with “a ploy to show that they actually are in touch with how Hong Kong eats.”

In many ways, the listings are a natural culmination of the worldwide fetishization of street food: Places like Kogi BBQ, the L.A. taco truck that launched the Roy Choi mega-empire, have elevated humble foods to the status of haute cuisine. But they also reflect Michelin’s savvy investment in Asia. Originally focused on Europe, Michelin now has guides covering nearly 50 regions around the globe. The 2017 debuts include guides to Seoul and Shanghai, also street-food meccas, and the street-food category has expanded to Singapore.

Chiu told me that the street-food designation doesn’t bother him, mostly because he doesn’t understand what it’s supposed to mean. “Does ‘street food’ mean a lesser thing?” he asked. In any case, he said he would never dream of getting a star — it’s too much pressure. When I visited, the mood at Kai Kai was one of cheerful relief; two days before, Michelin had handed out the second year of Hong Kong street-food awards, and Kai Kai had made the list again. “Now I wonder about what happens if they stop giving it to us — that people will say, ‘What happened? They must have done something wrong.’”

by Bonnie Tsui, California Sunday Magazine | Read more:
Image: Pierfrancesco Celada

Now That We Can Alter Our Genetic Code, Should We?

A few days ago, I had just stepped off a podium at a cancer conference when a 50-year-old woman with a family history of breast cancer approached me. I had been discussing how my laboratory, among hundreds of other labs, was trying to understand how mutations in genes unleash the malignant behavior of cancer cells. She told me that she carried a mutation in the BRCA-1 gene—a mutation that she had likely inherited from her father.

Diagnosed with cancer in one of her breasts when she was 30, she had undergone surgery, chemo, radiation and hormonal therapy. But that grim sequence of diagnosis and treatment, she told me, was hardly the main source of her torment. Now, she worried about the development of cancer in her remaining breast, or in her ovaries. She was considering a double mastectomy and the surgical removal of her ovaries. A woman carrying a BRCA-1 mutation has a 60 to 70 percent chance of developing cancer in her breasts or ovaries during her lifetime, and yet it's difficult to predict when or where that cancer might occur. For such women, the future is often fundamentally changed by that knowledge, and yet it remains just as fundamentally uncertain; their lives and energies might be spent anticipating cancer and imagining survivorship—from an illness that they have not yet developed. A disturbing new word, with a distinctly Orwellian ring, has been coined to describe these women: previvors—pre-survivors.

The uncertainty and anxiety had cast such a pall over this woman's adult life that she did not want her grandchildren to suffer through this ordeal (her children had not been tested yet, but would likely be tested in the future). What if she wanted to eliminate that genetic heritage from her family? Could she ensure that her children, or her grandchildren, would never have to live with the fear of future breast cancer, or other cancers associated with the BRCA-1 gene? Rather than waiting to excise organs, could her children, or their children, choose to excise the cancer-linked gene?

That same morning, a National Academies of Sciences panel issued a report on the future prospects of "gene editing" in human embryos. Gene "editing" (more on this below) refers to a set of techniques that enables the deliberate alteration of the genetic code of a cell. In principle, if the BRCA-1 mutation could be altered in egg cells or in sperm cells bearing that genetic mutation, the gene would be "fixed" (or restored to its non-mutant form) forever.

To understand what the report proposes, we need to understand how genes function, and how we might be able to manipulate genes in the future. First, though, a quick primer: A gene, crudely put, is a unit of hereditary information. It carries information to specify a biological function (although a single gene might specify more than one function). To simplify somewhat: You might imagine genes as a set of master-instructions carried between cells, and between organisms, that inform a cell or an organism how to build, maintain, repair and reproduce itself.

The BRCA-1 gene specifies a protein that allows cells to repair other damaged genes. For a cell, a damaged gene is a catastrophe in the making. It signals the loss of information—a crisis. Soon after genetic damage, the BRCA1 protein is recruited to the damaged gene. In patients with the normal gene, the protein launches a chain reaction, recruiting dozens of proteins to the knife-edge of the broken gene to swiftly repair the breach. In patients with the mutated gene, however, the mutant BRCA1 is not appropriately recruited, and the breaks are not repaired. The mutation thus enables more mutations—like fire fueling fire—until the growth-regulatory and metabolic controls on the cell are snapped, ultimately leading to breast cancer. Breast cancer, even in BRCA1-mutated patients, requires multiple triggers. The environment clearly plays a role: Add X-rays, or a DNA-damaging agent, and the mutation rate and cancer risk climbs even higher. And other gene variants can change the risk: If a BRCA-1 mutation is present with other gene-variants that increase cancer risk, then the chance of developing cancer multiplies.

Until recently, a woman carrying a mutation in the BRCA-1 gene had the means to alter her personal genetic destiny, but no means to alter the transmission of that destiny in her children. She could choose to undergo intensive screening for early breast cancer, and intervene only if and when cancer is detected. She could choose to take hormonal medicines to reduce her risk. Or she could choose to remove her breast and ovaries, thereby drastically reducing the future chance of developing breast and ovarian cancer (although the mutations also increase the risk of other cancers, such as pancreatic cancer, or prostate cancer in men). But notably, until the 1990s, she could not prevent the transmission of the mutated gene to her children.

In April 1990, Nature magazine announced the birth of a new technology that raised the stakes of human genetic diagnosis. The technique relies on a peculiar idiosyncrasy of human embryology. When an embryo is produced by in vitro fertilization (IVF), it is typically grown for several days in an incubator before being implanted into a woman's womb. Bathed in a nutrient-rich broth in a moist incubator, the single-cell embryo divides to form a glistening ball of cells. At the end of three days, there are eight and then sixteen cells. Astonishingly, if you remove a few cells from that embryo, the remaining cells divide and fill in the gap of missing cells, and the embryo continues to grow normally as if nothing had happened. For a moment in our history, we are actually quite like salamanders or, rather, like salamanders' tails—capable of complete regeneration even after a fourth of us is cut away.

A human embryo can thus be "biopsied" at this early stage, with the few extracted cells used for genetic tests. Once the tests have been completed, cherry-picked embryos possessing the correct genes can be implanted. With some modifications, even oocytes—a woman's eggs—can be genetically tested before fertilization. These techniques together are called "preimplantation genetic diagnosis," or PGD. (...)

For a woman carrying a BRCA-1 mutation, preimplantation genetic diagnosis offers a new way to think about genetic selection in the future. An embryo (or even an egg) might be biopsied and diagnosed as a carrier for the BRCA-1 mutation, and the woman might choose not to implant that embryo. Some mathematics might put the choices into perspective: If a woman carrying the BRCA-1 mutation conceives a child with a man carrying no mutation, then the chance of having a child with the mutation is one in two. If the father also happens to carry the BRCA-1 mutation, then the chance increases to three in four (actually, for complicated reasons, the figure is closer to two in three). But with gene sequencing and PGD, a woman might be able to reduce the risk to zero—essentially erasing the BRCA-1 mutation from her future lineage.
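[ed. The inheritance arithmetic above can be checked by enumerating the four equally likely allele combinations in a Punnett square. The "complicated reasons" behind the two-in-three figure aren't spelled out in the article; one plausible reading, assumed in the sketch below, is that embryos inheriting two mutant copies are generally not viable, so among live births only three combinations remain:]

```python
from fractions import Fraction
from itertools import product

# Each parent is a heterozygous carrier: one mutant allele ("B*"), one normal ("b").
mother = ["B*", "b"]
father = ["B*", "b"]

# The four equally likely allele combinations (the Punnett square).
offspring = list(product(mother, father))

# Naive calculation: any child with at least one mutant allele carries the mutation.
carriers = [g for g in offspring if "B*" in g]
print(Fraction(len(carriers), len(offspring)))  # 3/4

# Assumption (not stated in the article): embryos with two mutant
# copies are not viable, leaving three equally likely outcomes.
viable = [g for g in offspring if g != ("B*", "B*")]
viable_carriers = [g for g in viable if "B*" in g]
print(Fraction(len(viable_carriers), len(viable)))  # 2/3
```

[ed. The one-in-two figure for a single carrier parent follows the same way: the carrier contributes "B*" or "b" with equal probability, and the non-carrier always contributes "b".]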

In the spring of 2011, Jennifer Doudna, a biochemist, and a bacteriologist, Emmanuelle Charpentier, discovered yet another powerful mechanism to manipulate the human genome. Doudna and Charpentier were working on a mechanism by which bacteria defend themselves against invading viruses—"the most obscure thing I ever worked on," as Doudna would later put it. Building on earlier work by microbiologists, Doudna and Charpentier began to dissect the way bacteria could inactivate viral genes. Some microbes, they found, encode genes that can specifically recognize viral DNA and deliver a targeted cut to it.

In 2012, Doudna and Charpentier realized that the system was "programmable." Bacteria, of course, only seek and destroy viruses; they have no reason to recognize or cut other genomes. But Doudna and Charpentier learned enough about the self-defense system to trick it: They could force the system to make intentional cuts in other genes and genomes. The same bacterial defense system might, in principle, be "reprogrammed" to deliver a cut to the BRCA-1 gene, or to any gene of choice. Scientists working at Harvard, MIT and other institutions refined the system further, enabling its use in human cells.

The system could be manipulated even further. By tweaking a cell's own repair mechanism in conjunction with cutting a desired genetic sequence, researchers found, they could introduce a genetic sequence into a gene. A defined, predetermined genetic change could thus be written into a genome: The mutant BRCA-1 gene could be reverted to its normal form. The technique has been termed genome editing.

The method still has some fundamental constraints. At times, the cuts are delivered to the wrong genes. Occasionally, the repair is not efficient, making it difficult to "rewrite" information into particular sites in the genome. But it works more easily, more powerfully, and more efficiently than virtually any other genome-altering method to date. Only a handful of such instances of scientific serendipity have occurred in the history of biology. An arcane microbial defense system has created a trapdoor to the transformative technology that geneticists had sought so longingly for decades, a method to achieve directed, efficient, and sequence-specific modification of the human genome.

Can gene editing be used to change the genetic information of a human embryo in a permanent, heritable manner? In other words: Could we envision using it to revert the dysfunctional BRCA-1 gene, say, to its functional version? The quick answer is yes, but only if we can overcome some significant technical hurdles.

by Siddhartha Mukherjee, Tonic |  Read more:
Image: Kitron Neuschatz

Politics 101

How the baby boomers destroyed everything

America last: The case for moral disengagement from politics in the age of Trump

President Trump Has Done Almost Nothing
Assailing the White House From Hollywood’s Glass House

Growing up Poor, With Trump on TV

Trump’s handling of the Ryan Owens affair was contemptibly cynical

[ed. Re: Trump's first speech to Congress (2/28/17). Generic, unmemorable, but surprisingly coherent (for a change). Not much hyperbole, not many details either. Somebody finally got him to use a teleprompter and stick to it. It's sad that our expectations have fallen so low these days that anything short of another vein-popping, finger-pointing, bug-eyed rant seems like a relief.]