Friday, November 10, 2017


Hideo Hagiwara, Muroto Cape, 1947

Here Comes the Dystopia

I don’t know why, but private fire services disturb the hell out of me. Actually, I do know why. It’s because, more than just about anything else, they seem to hint at what a neo-feudalist dystopia might look like. The wealthy can buy firefighters and have their homes saved, while those who can’t afford to be saved will have to watch as all of their possessions are incinerated. There will be two types of people in this world: those who can pay their way to safety, and those who will burn. In the fully privatized world, the amount of money you have determines literally everything, and the privatization of basic services like firefighting seems to draw us worryingly close to that world.

During the recent California wildfires, according to The Wall Street Journal, some people received a little more fire protection than others. Insurers sent in private firefighting forces to protect valuable homes, “sending out fire crews to clear brush, put down fire retardant around residents’ homes, set up sprinklers and then document the process via photos that are then sent to the homeowner.” Only those homes that had signed up for special policies would be coated in fire retardant.

The use of private firefighters for the wealthy has apparently grown over the past decade. A 2007 report documented AIG’s use of a “Wildfire Protection Unit” to serve its Private Client Group, a plan “offered only to homeowners in California’s most affluent ZIP Codes — including Malibu, Beverly Hills, Newport Beach and Menlo Park.” Just look at this extraordinary photo of AIG-branded firefighters. A homeowner recalled what it was like to know he was privately protected:

“Here you are in that raging wildfire. Smoke everywhere. Flames everywhere. Plumes of smoke coming up over the hills. Here’s a couple guys showing up in what looks like a firetruck who are experts trained in fighting wildfire and they’re there specifically to protect your home. … It was really, really comforting.”

Less comforting, perhaps, to the neighbors who also see a team of firefighters coming over the hills before realizing that they are only authorized to save AIG-insured homes. In fact, not even just AIG-insured homes: special elite AIG-insured homes. Though in one case, the company was willing to benevolently extend protection to AIG customers with ordinary plans: “AIG said it did apply fire retardant to some homes of standard policyholders if they happened to be nearby, because it made financial sense.” If saving your home doesn’t “make financial sense,” though, you’re screwed.

The economics of the whole arrangement make perfect sense. An insurance company really does not want to see a multi-million dollar house burn to the ground, so of course it will be eager to provide extra fire services if doing so will substantially affect the amount of the subsequent claim. And it’s clear why a wealthy person would want this kind of coverage: as the homeowner above says, it’s really comforting to know that a corporation is sending you personal firefighters who will look out for you and you alone. Private firefighting is just like private security, or the mercenary soldiers that rich people hired in post-Katrina New Orleans to protect their properties from looting.

But though these deals make perfect sense to the parties making them, they have alarming implications. There’s something outrageous about a world in which firefighters protect some people rather than others, and choose to let houses that could be saved burn to the ground. I am still haunted by the 2010 story of a local fire department that refused to put out a house fire because the owner hadn’t paid his $75 annual fire protection fee. Emergency services seem like one area in which there ought to be a consensus that money shouldn’t play a role. Obviously, that’s far from true, as anyone who has gotten stuck with a $1,000 bill for an ambulance ride knows well. But the more emergency response becomes a transaction, rather than an equal and universal guarantee, the more literally true it is that some people’s lives are worth more than others.

I am terrified of the future, and it’s partly because I don’t really see a way to stop these trends from getting worse. Public services will be under-resourced, and wealthy people will have a strong interest in contracting for private services instead. This is already the situation in medical care; it’s already the situation with police and the military; why shouldn’t it be the same with fire? In every other domain of life, how much money you have determines what you get in return. This just extends market logic to yet one more domain.

by Nathan J. Robinson, Current Affairs |  Read more:
Image: uncredited

Thursday, November 9, 2017

Chesterton's Fence


Chesterton's Fence
Image via: Wikipedia
[ed. Broad applicability to... well, just about everything. Politics, technology, education, law, economics, urban planning, and more.]

Where the Small-Town American Dream Lives On

Orange City, the county seat of Sioux County, Iowa, is a square mile and a half of town, more or less, population six thousand, surrounded by fields in every direction. Sioux County is in the northwest corner of the state, and Orange City is isolated from the world outside—an hour over slow roads to the interstate, more than two hours to the airport in Omaha, nearly four to Des Moines. Hawarden, another town, twenty miles away, is on the Big Sioux River, and was founded as a stop on the Northwestern Railroad in the eighteen-seventies; it had a constant stream of strangers coming through, with hotels to service them and drinking and gambling going on. But Orange City never had a river or a railroad, or, until recently, even a four-lane highway, and so its pure, hermetic culture has been preserved.

Orange City is small and cut off, but, unlike many such towns, it is not dying. Its Central Avenue is not the hollowed-out, boarded-up Main Street of twenty-first-century lore. Along a couple of blocks, there are two law offices, a real-estate office, an insurance brokerage, a coffee shop, a sewing shop, a store that sells Bibles, books, and gifts, a notions-and-antiques store, a hair-and-tanning salon, and a home-décor-and-clothing boutique, as well as the Sioux County farm bureau, the town hall, and the red brick Romanesque courthouse.

There are sixteen churches in town. The high-school graduation rate is ninety-eight per cent, the unemployment rate is two per cent. There is little crime. The median home price is around a hundred and sixty thousand dollars, which buys a three- or four-bedroom house with a yard, in a town where the median income is close to sixty thousand. For the twenty per cent of residents who make more than a hundred thousand dollars a year, it can be difficult to find ways to spend it, at least locally. There are only so many times you can redo your kitchen. Besides, conspicuous extravagance is not the Orange City way. “There are stories about people who are too showy, who ended up ruined,” Dan Vermeer, who grew up in the town, says. “The Dutch are comfortable with prosperity, but not with pleasure.”

The town was founded, in 1870, by immigrants from Holland looking for farmland, and until recently almost everyone who lived there was Dutch. Many of the stores on Central Avenue still bear Dutch names: Bomgaars farm-supply store, Van Maanen’s Radio Shack, Van Rooyen Financial Group, DeJong Chiropractic and Acupuncture, Woudstra Meat Market. The town’s police force consists of Jim Pottebaum, Duane Hulstein, Audley DeJong, Bruce Jacobsma, Chad Van Ravenswaay, Wes Van Voorst, and Bob Van Zee. When an Orange City teacher wants to divide her class in half, she will say, “A”s through “U”s to one side, “V”s through “Z”s to the other. Once, many years ago, an actual Dutch woman, from Rotterdam, moved to town with her American husband. She found the Dutchness of Orange City peculiar—the way that most people didn’t speak Dutch anymore but sprinkled their English with phrases that nobody had used in the Netherlands for a hundred years.

In the early part of the twentieth century, the question of how much Dutchness to retain caused a religious schism in the town: the American Reformed Church broke off from the First Reformed Church in order to conduct services in English. But, as the last Dutch speakers began to die off, Orange City took measures to embalm its heritage. The shops on the main stretch of Central Avenue are required to embellish their façades with “Dutch fronts”—gables in the shape of bells and step-edged triangles, painted traditional colors such as dark green, light gray, and blue, with white trim. Across the street from Bomgaars is Windmill Park, with its flower beds and six decorative windmills of varying sizes along a miniature canal. Each year, at the end of May, Orange City holds a tulip festival. Thousands of bulbs are imported from the Netherlands and planted in rows, and for three days much of the town dresses up in nineteenth-century Dutch costumes, sewn by volunteers—white lace caps and long aprons, black caps and knickers—and performs traditional dances in the street. There is a ceremonial street cleaning—kerchiefed boys throwing bucketfuls of water, aproned girls scrubbing with brooms—followed by a parade, in which the Tulip Queen and her court, high-school seniors, wave from their float, and the school band marches after them in clogs.

Every June, a couple of weeks after the tulip festival, another ritual is enacted: a hundred of the town’s children graduate from the high school. Each of them must then make a decision that will set the course of their lives—whether to leave Orange City or to stay. This decision will affect not just where they live but how they see the world and how they vote. The town is thriving, so the choice is not driven by necessity: to stay is not to be left behind but to choose a certain kind of life. Each year, some leave, but usually more decide to settle in—something about Orange City inspires loyalty. It is only because so many stay that the town has prospered. And yet to stay home is to resist an ingrained American belief about movement and ambition. (...)

Some of the kids who left Orange City left for a profession. There was work you couldn’t do there, lives you couldn’t live—there weren’t a lot of tech jobs, for instance, or much in finance. Not many left for the money; you might make a higher salary elsewhere, but the cost of living in Orange City was so low that you’d likely end up worse off. Some left for a life style: they wanted mountains to ski and hike in, or they wanted to live somewhere with sports teams and restaurants. But most left for the same reason Dan Vermeer did—for the chance to remake themselves.

In bigger places, when you started working you met new people, and your professional self became your identity. But in Orange City you would always be So-and-So’s kid, no matter what you accomplished. People liked to point out that even Jesus had this problem when he tried to preach in his home town:
They said, “Where did this man get all this? What is this wisdom that has been given to him? What deeds of power are being done by his hands! Is not this the carpenter, the son of Mary and brother of James and Joses and Judas and Simon, and are not his sisters here with us?” And they took offense at Him.
But, while this was for some kids a reason to leave, for others it was why they wanted to stay. In Orange City, you could feel truly known. You lived among people who had not only known you for your whole life but known your parents and grandparents as well. You didn’t have to explain how your father had died, or why your mother couldn’t come to pick you up. Some people didn’t feel that they had to leave to figure out who they were, because their family and its history already described their deepest self.

Besides these sentiments, which were widespread, there was another crucial fact about Orange City that enabled it to keep more of its young than other towns its size: it had a college. Northwestern College, a small Christian school of twelve hundred students, affiliated with the Dutch Reformed Church, was founded not long after the town itself. Northwestern offered a variety of liberal-arts majors, but was oriented toward Christian ministry and practical subjects like nursing and education.

Stephanie Schwebach, née Smit, graduated from the high school in 1997 and went to Northwestern to train as a teacher. She had never felt restless in Orange City. “I really didn’t have an adventurous spirit,” she says. “I’m going to stay with the people I know.” Her professional goal was to get a job teaching in the same school she’d gone to as a child.

When she was growing up, she lived next door to her grandparents, and every Sunday after church her family went to their house for lunch, as was the custom then in Orange City. She met her future husband, Eric, in seventh grade, and they started dating in eleventh. Eric came from a huge family—his father was one of sixteen. Most of Eric’s many aunts and uncles still lived in the area, and if anyone needed anything done, like laying cement for a driveway, the family would come and help out.

After high school, Eric thought about joining the military—he thought it would be fun to see a bit of the world—but Stephanie talked him into sticking around, so he stayed in his parents’ house and went to a local technical school to train as an electrician. When Stephanie was a junior in college, they became engaged. He got a job with the manufacturer of Blue Bunny ice cream, and she started teaching. They had two children.

Some years ago, Stephanie and Eric were both working in Le Mars, a town twenty minutes away, and they considered moving there. But then Stephanie thought, It just makes it harder to stop in and say hi to your parents if you don’t live in the same town, and the kids can’t wander over by themselves—we won’t be close in the same way. Instead, they moved into the house that Eric had grown up in, on an acreage at the edge of town, and his parents built a smaller house next to it.

When Stephanie thought about what she wanted for her children in the future, the first thing she thought was, Stay close. “I want them to live right next door, so I can be the grandma that takes care of their kids and gets to see them grow through all the different stages,” she says. “Our kids have told us that once Eric’s folks are dead we have to buy their house so they, our kids, can live in our house, next door. And that would be fine with me!”

In many towns, the most enterprising kids leave for college and stay away rather than starting businesses at home, which means that there are fewer jobs at home, which means that even more people leave; and, over time, the town’s population gets smaller and older, shops and schools begin to close, and the town begins to die. This dynamic has affected Iowa more than almost any other state: during the nineteen-nineties, only North Dakota lost a larger proportion of educated young people. In 2006, Iowa’s then governor, Tom Vilsack, undertook a walking tour of the state, with the theme “Come Back to Iowa, Please,” aimed at the young and educated. He threw cocktail parties in cities around the country, at which he begged these young emigrants to return, promising that Iowa had more to offer than “hogs, acres of corn, and old people.” But the campaign was a failure. In 2007, the legislature in Des Moines created the Generation Iowa Commission, to study why college graduates were leaving; two years later, a fifth of the members of the commission had themselves left the state.

The sociologists Patrick Carr and Maria Kefalas spent several months in a small Iowa town and found that children who appeared likely to succeed were from an early age groomed for departure by their parents and teachers. Other kids, marked as stayers, were often ignored in school. Everyone realized that encouraging the ambitious kids to leave was killing the town, but the ambition of the children was valued more than the life of the community. The kids most likely to make it big weren’t just permitted to leave—they were pushed.

In Orange City, that kind of pushing was uncommon. People didn’t seem to care about careers as much as they did in other places. “Even now, my friends there, I’m not sure what many of them do, and I don’t think they know what I do,” Dan Vermeer says. “That’s just not what you talk about.” You could be proud of a child doing something impressive in another part of the country, but having grown children and grandkids around you was equally a sign of success. Go to Northwestern, Orange City parents would say. And, when you get your degree, why not settle down here? There are plenty of jobs, and it’ll take you five minutes to drive to work. When you have children, we’ll help you take care of them. People here share your values, it’s a good Christian place. And they care about you: if anything happens, they’ll have your back.

by Larissa MacFarquhar, New Yorker | Read more:
Image: Brian Finke

Wednesday, November 8, 2017


Frantisek Gross, Breakfast

Apple at Its Best

The history of Apple being doomed doesn’t necessarily repeat, but it does rhyme.

Take the latest installment, from Professor Mohanbir Sawhney at the Kellogg School of Management (one of my former professors, incidentally):
Have we reached peak phone? That is, does the new iPhone X represent a plateau for hardware innovation in the smartphone product category? I would argue that we are indeed standing on the summit of peak “phone as hardware”: While Apple’s newest iPhone offers some impressive hardware features, it does not represent the beginning of the next 10 years of the smartphone, as Apple claims… 
As we have seen, when the vector of differentiation shifts, market leaders tend to fall by the wayside. In the brave new world of AI, Google and Amazon have the clear edge over Apple. Consider Google’s Pixel 2 phone: Driven by AI-based technology, it offers unprecedented photo-enhancement features and deeper hardware-software integration, such as real-time language translation when used with Google’s special headphones… The shifting vector of differentiation to AI and agents does not bode well for Apple… 
Sheets of glass are simply no longer the most fertile ground for innovation. That means Apple urgently needs to shift its focus and investment to AI-driven technologies, as part of a broader effort to create the kind of ecosystem Amazon and Google are building quickly. However, Apple is falling behind in the AI race, as it remains a hardware company at its core and it has not embraced the open-source and collaborative approach that Google and Amazon are pioneering in AI.
It is an entirely reasonable argument, particularly that last line: I myself have argued that Apple needs to rethink its organizational structure in order to build more competitive services. If the last ten years have shown us anything, though, it is that discounting truly great hardware — and the sort of company necessary to deliver that — is the surest way to be right in theory and wrong in reality.

by Ben Thompson, Stratechery |  Read more:
Image: Apple

The City


Belinda Eaton, Man with fish, 2001

Something is Wrong on the Internet

As someone who grew up on the internet, I credit it as one of the most important influences on who I am today. I had a computer with internet access in my bedroom from the age of 13. It gave me access to a lot of things which were totally inappropriate for a young teenager, but it was OK. The culture, politics, and interpersonal relationships which I consider to be central to my identity were shaped by the internet, in ways that I have always considered to be beneficial to me personally. I have always been a critical proponent of the internet and everything it has brought, and broadly considered it to be emancipatory and beneficial. I state this at the outset because thinking through the implications of the problem I am going to describe troubles my own assumptions and prejudices in significant ways.

One of the so-far hypothetical questions I ask myself frequently is how I would feel about my own children having the same kind of access to the internet today. And I find the question increasingly difficult to answer. I understand that this is a natural evolution of attitudes which happens with age, and at some point this question might be a lot less hypothetical. I don’t want to be a hypocrite about it. I would want my kids to have the same opportunities to explore and grow and express themselves as I did. I would like them to have that choice. And this belief broadens into attitudes about the role of the internet in public life as a whole.

I’ve also been aware for some time of the increasingly symbiotic relationship between younger children and YouTube. I see kids engrossed in screens all the time, in pushchairs and in restaurants, and there’s always a bit of a Luddite twinge there, but I am not a parent, and I’m not making parental judgments for or on anyone else. I’ve seen family members and friends’ children plugged into Peppa Pig and nursery rhyme videos, and it makes them happy and gives everyone a break, so OK.

But I don’t even have kids and right now I just want to burn the whole thing down.

Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level. Much of what I am going to describe next has been covered elsewhere, although none of the mainstream coverage I’ve seen has really grasped the implications of what seems to be occurring.

To begin: Kids’ YouTube is definitely and markedly weird. I’ve been aware of its weirdness for some time. Last year, there were a number of articles posted about the Surprise Egg craze. Surprise Eggs videos depict, often at excruciating length, the process of unwrapping Kinder and other egg toys. That’s it, but kids are captivated by them. There are thousands and thousands of these videos and thousands and thousands, if not millions, of children watching them. (...)

[ed. Increasingly bizarre and troubling examples of "kids" videos being produced.]

As another blogger notes, one of the traditional roles of branded content is that it is a trusted source. Whether it’s Peppa Pig on children’s TV or a Disney movie, whatever one’s feelings about the industrial model of entertainment production, they are carefully produced and monitored so that kids are essentially safe watching them, and can be trusted as such. This no longer applies when brand and content are disassociated by the platform, and so known and trusted content provides a seamless gateway to unverified and potentially harmful content.

(Yes, this is the exact same process as the delamination of trusted news media on Facebook feeds and in Google results that is currently wreaking such havoc on our cognitive and political systems and I am not going to explicitly explore that relationship further here, but it is obviously deeply significant.) (...)

Here are a few things which are disturbing me:

The first is the level of horror and violence on display. Sometimes it’s troll-y gross-out stuff; most of the time it seems deeper, and more unconscious, than that. The internet has a way of amplifying and enabling many of our latent desires; in fact, it’s what it seems to do best. I spend a lot of time arguing for this tendency, with regard to human sexual freedom, individual identity, and other issues. Here, it sometimes feels, that tendency is overwhelmingly a violent and destructive one.

The second is the levels of exploitation, not of children because they are children but of children because they are powerless. Automated reward systems like YouTube algorithms necessitate exploitation in the same way that capitalism necessitates exploitation, and if you’re someone who bristles at the second half of that equation then maybe this should be what convinces you of its truth. Exploitation is encoded into the systems we are building, making it harder to see, harder to think and explain, harder to counter and defend against. Not in a future of AI overlords and robots in the factories, but right here, now, on your screen, in your living room and in your pocket. (...)

A friend who works in digital video described to me what it would take to make something like this: a small studio of people (half a dozen, maybe more) making high volumes of low quality content to reap ad revenue by tripping certain requirements of the system (length in particular seems to be a factor). According to my friend, online kids’ content is one of the few alternative ways of making money from 3D animation because the aesthetic standards are lower and independent production can profit through scale. It uses existing and easily available content (such as character models and motion-capture libraries) and it can be repeated and revised endlessly and mostly meaninglessly because the algorithms don’t discriminate — and neither do the kids.

These videos, wherever they are made, however they come to be made, and whatever their conscious intention (i.e. to accumulate ad revenue) are feeding upon a system which was consciously intended to show videos to children for profit. The unconsciously-generated, emergent outcomes of that are all over the place.

To expose children to this content is abuse. We’re not talking about the debatable but undoubtedly real effects of film or videogame violence on teenagers, or the effects of pornography or extreme images on young minds, which were alluded to in my opening description of my own teenage internet use. Those are important debates, but they’re not what is being discussed here. What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. It’s down to that level of the metal.

This, I think, is my point: The system is complicit in the abuse.

And right now, right here, YouTube and Google are complicit in that system. The architecture they have built to extract the maximum revenue from online video is being hacked by persons unknown to abuse children, perhaps not even deliberately, but at a massive scale. (...)

This is a deeply dark time, in which the structures we have built to sustain ourselves are being used against us — all of us — in systematic and automated ways. It is hard to keep faith with the network when it produces horrors such as these. While it is tempting to dismiss the wilder examples as trolling, of which a significant number certainly are, that fails to account for the sheer volume of content weighted in a particularly grotesque direction. It presents many and complexly entangled dangers, including that, just as with the increasing focus on alleged Russian interference in social media, such events will be used as justification for increased control over the internet, increasing censorship, and so on. This is not what many of us want. (...)

What concerns me is not just the violence being done to children here, although that concerns me deeply. What concerns me is that this is just one aspect of a kind of infrastructural violence being done to all of us, all of the time, and we’re still struggling to find a way to even talk about it, to describe its mechanisms and its actions and its effects. 

by James Bridle, Medium |  Read more:
Image: YouTube

You Will Lose Your Job to a Robot—and Sooner Than You Think

I want to tell you straight off what this story is about: Sometime in the next 40 years, robots are going to take your job.

I don’t care what your job is. If you dig ditches, a robot will dig them better. If you’re a magazine writer, a robot will write your articles better. If you’re a doctor, IBM’s Watson will no longer “assist” you in finding the right diagnosis from its database of millions of case studies and journal articles. It will just be a better doctor than you.

And CEOs? Sorry. Robots will run companies better than you do. Artistic types? Robots will paint and write and sculpt better than you. Think you have social skills that no robot can match? Yes, they can. Within 20 years, maybe half of you will be out of jobs. A couple of decades after that, most of the rest of you will be out of jobs.

In one sense, this all sounds great. Let the robots have the damn jobs! No more dragging yourself out of bed at 6 a.m. or spending long days on your feet. We’ll be free to read or write poetry or play video games or whatever we want to do. And a century from now, this is most likely how things will turn out. Humanity will enter a golden age.

But what about 20 years from now? Or 30? We won’t all be out of jobs by then, but a lot of us will—and it will be no golden age. Until we figure out how to fairly distribute the fruits of robot labor, it will be an era of mass joblessness and mass poverty. Working-class job losses played a big role in the 2016 election, and if we don’t want a long succession of demagogues blustering their way into office because machines are taking away people’s livelihoods, this needs to change, and fast. Along with global warming, the transition to a workless future is the biggest challenge by far that progressive politics—not to mention all of humanity—faces. And yet it’s barely on our radar.

That’s kind of a buzzkill, isn’t it? Luckily, it’s traditional that stories about difficult or technical subjects open with an entertaining or provocative anecdote. The idea is that this allows readers to ease slowly into daunting material. So here’s one for you: Last year at Christmas, I was over at my mother’s house and mentioned that I had recently read an article about Google Translate. It turns out that a few weeks previously, without telling anyone, Google had switched over to a new machine-learning algorithm. Almost overnight, the quality of its translations skyrocketed. I had noticed some improvement myself but had chalked it up to the usual incremental progress these kinds of things go through. I hadn’t realized it was due to a quantum leap in software. (...)

The Industrial Revolution was all about mechanical power: Trains were more powerful than horses, and mechanical looms were more efficient than human muscle. At first, this did put people out of work: Those loom-smashing weavers in Yorkshire—the original Luddites—really did lose their livelihoods. This caused massive social upheaval for decades until the entire economy adapted to the machine age. When that finally happened, there were as many jobs tending the new machines as there used to be doing manual labor. The eventual result was a huge increase in productivity: A single person could churn out a lot more cloth than she could before. In the end, not only were as many people still employed, but they were employed at jobs tending machines that produced vastly more wealth than anyone had thought possible 100 years before. Once labor unions began demanding a piece of this pie, everyone benefited.

The AI Revolution will be nothing like that. When robots become as smart and capable as human beings, there will be nothing left for people to do because machines will be both stronger and smarter than humans. Even if AI creates lots of new jobs, it’s of no consequence. No matter what job you name, robots will be able to do it. They will manufacture themselves, program themselves, repair themselves, and manage themselves. If you don’t appreciate this, then you don’t appreciate what’s barreling toward us.

In fact, it’s even worse. In addition to doing our jobs at least as well as we do them, intelligent robots will be cheaper, faster, and far more reliable than humans. And they can work 168 hours a week, not just 40. No capitalist in her right mind would continue to employ humans. They’re expensive, they show up late, they complain whenever something changes, and they spend half their time gossiping. Let’s face it: We humans make lousy laborers.

If you want to look at this through a utopian lens, the AI Revolution has the potential to free humanity forever from drudgery. In the best-case scenario, a combination of intelligent robots and green energy will provide everyone on Earth with everything they need. But just as the Industrial Revolution caused a lot of short-term pain, so will intelligent robots. While we’re on the road to our Star Trek future, but before we finally get there, the rich are going to get richer—because they own the robots—and the rest of us are going to get poorer because we’ll be out of jobs. Unless we figure out what we’re going to do about that, the misery of workers over the next few decades will be far worse than anything the Industrial Revolution produced.

by Kevin Drum, Mother Jones |  Read more:
Image: Roberto Parada

Grimes

Tuesday, November 7, 2017


Tony Fitzpatrick
via:

Gerrymandering
via:
Not shown: “Area full of bad-memory land mines” - Also to be avoided.

Let the People Pick the President

The winners of Tuesday’s elections — Republican or Democrat, for governor, mayor or dogcatcher — all have one thing in common: They received more votes than their opponent. That seems like a pretty fair way to run an electoral race, which is why every election in America uses it — except the most important one of all.

Was it just a year ago that more than 136 million Americans cast their ballots for president, choosing Hillary Clinton over Donald Trump by nearly three million votes, only to be thwarted by a 200-year-old constitutional anachronism designed in part to appease slaveholders and ratified when no one but white male landowners could vote?

It feels more like, oh, 17 years — the last time, incidentally, that the American people chose one candidate for president and the Electoral College imposed the other.

In both cases the loser was a Democrat, a fact that has tempted more than a few people to dismiss complaints about the Electoral College as nothing but partisan sour grapes. That’s a mistake. For one thing, Republicans nearly suffered the same fate in 2004. A switch of just 60,000 votes in Ohio would have awarded the White House to John Kerry, who lost the national popular vote by roughly the same margin as Mr. Trump. More important, decades of polling have found that Americans of all stripes would prefer that the president be chosen directly by the people and not by 538 party functionaries six weeks after Election Day.

President Trump agrees, or at least he used to. In 2012, when he thought Barack Obama would lose the popular vote but still retake the White House, he called the Electoral College “a disaster for a democracy.” Last November, days after his own victory, Mr. Trump said: “I would rather see it where you went with simple votes. You know, you get 100 million votes, and somebody else gets 90 million votes, and you win. There’s a reason for doing this, because it brings all the states into play.”

He was right, even if he has since converted to an Electoral College advocate. The existing winner-take-all system, which awards all of a state’s electoral votes to the popular-vote winner in that state, no matter how close the race, is deeply anti-democratic. It treats tens of millions of Americans — from Republicans in Boston to Democrats in Biloxi — as if their voices don’t matter.

Defenders of the Electoral College argue that it was created to protect the interests of smaller states, whose voters would otherwise be overwhelmed by the much larger populations living in urban areas along the coasts. That’s wrong as a matter of history: The framers of the Constitution were concerned primarily with ensuring that the president wasn’t selected by uneducated commoners. The electors were meant to be a deliberative body of intelligent, well-informed men who would be immune to corruption. (The arrangement was also a gift to the Southern states, with their large, unenfranchised populations of slaves.)

But regardless of its original intent, the Electoral College today is, as Mr. Trump said, a disaster for a democracy. Modern presidential campaigns ignore almost all states, large and small alike, in favor of a handful that are closely divided between Republicans and Democrats — and even within those states, they focus on a few key regions. In 2016, two-thirds of all public campaign events were held in just six states: Michigan, Ohio, Florida, Pennsylvania, Virginia and North Carolina; toss in six more and you’ve got 94 percent of all campaign events.

This may be smart campaigning, but it’s terrible for the rest of the country, which is rendered effectively invisible, distorting our politics, our policy debates and even the distribution of federal funds. Candidates focus their platforms on the concerns of battleground states, and presidents who want to stay in office make sure to lavish attention, and money, on the same places. The emphasis on a small number of states also increases the risk to our national security, by creating an easy target for hackers who want to influence the outcome of an election. Perhaps most important, voters outside of swing states know their votes are devalued, if not worthless, and they behave accordingly. In 2012, 64 percent of swing-state voters showed up, compared with 57 percent everywhere else, a pattern that persisted in 2016. What better way to get more voters to register and go to the polls than to ensure that everyone’s vote is weighed equally?

The Electoral College has been the subject of more amendment efforts — 595 as of 2004 — than any other part of the Constitution. But amending the Constitution is a heavy lift. A quicker and more realistic fix is the National Popular Vote interstate compact, under which states agree to award all of their electoral votes to the winner of the national popular vote. The agreement kicks in as soon as states representing a total of 270 electoral votes sign on, ensuring that the popular vote will always pick the president. So far, 10 states and the District of Columbia have joined, representing 165 electoral votes. The problem is that they are all solidly Democratic, which only adds to the suspicion that this is no more than a partisan game. It’s not: When Mr. Trump is not making up stories about millions of illegal voters, he has argued that if the presidency were decided by popular vote, he would have campaigned differently and still would have won. He may well be right.

How can red states be persuaded to sign on and give all their citizens a voice? Some, like Georgia and Arizona, may not stay red for much longer. But even deep-red states would benefit from the infusion of attention and cash from campaigns seeking to rustle up every vote they can find.

This problem isn’t going away; if anything it’s going to get worse as Americans continue to cluster. Half the population now lives in just nine states. It’s time for states that have been on the fence about the national popular-vote compact to get off and sign on. Connecticut, Oregon and Delaware have all come close to passing the compact in recent years; they should get it done now. Yes, they’re three reliably blue states representing 17 electoral votes among them, but every vote counts.

by Editorial Board, NY Times |  Read more:
Image: Kiersten Essenpreis

Never Let Me Go and the Human Condition

I teach college English, which means that I spend a lot of time telling people not to write papers about “the human condition.” The human condition, I tell my students, is vague. What about the human condition are you interested in? I ask. Bodies? Money? Desire? Work? Power? These are things that we could make arguments about. The human condition is not. I give my students a handout of writing tips, a series of “dos and don’ts” I’ve honed over the course of a decade in college classrooms. Many of the “don’ts” regard specificity. Don’t write about society. Be more specific. Don’t tell me about gender norms. Be more specific. Don’t write about the human condition. Be more specific.

I hold to these rules tightly; I think they are good rules, and they certainly lead to more exacting student writing. But when I read that Kazuo Ishiguro won the Nobel Prize for literature last week, my first thought was: Ishiguro won the Nobel Prize because he writes about the human condition.

When I teach his 2005 novel Never Let Me Go, I have to break some rules.

For the uninitiated, Never Let Me Go is a novel about a group of young people who are also clones. These clones will grow up and begin to donate their organs in their late teens and twenties and then they will die slow, orchestrated deaths; their bodies will be used to save the lives of others. The clones have been created by a vast government program and there is no escape from it. Never Let Me Go is not a story of rebellion.

The novel is narrated by Kath, who is a carer, which means quite literally that she cares emotionally for other clones going through the donation process. In the first paragraph of the novel, Kath tells us that she’s about to wrap up her work as a carer, that she soon will become a donor. When the novel begins, we don’t quite know what this means. We find out everything very slowly. I have stated the premise of the book more clearly and explicitly than Ishiguro ever does.

Kath is what some people might call an unreliable narrator, but I prefer to think of her as clueless instead. Like the butler of Ishiguro’s earlier novel Remains of the Day, Kath never quite comprehends what’s going on around her, or what’s happening to her. In the parlance of the book, she’s been “told and not told” about her fate. Ishiguro reveals information slowly; the word “clone” doesn’t appear until more than halfway through the novel, and Kath speaks in the euphemisms of the donation program. To die, for example, is to “complete.”

Never Let Me Go is fantastic for developing students’ close reading skills; I start off teaching the novel by close reading its first paragraph with my class for a long time, longer than should be possible. The book is a teacher’s dream: there is almost too much to talk about. We discuss narration and epistemology (how do we know what we know in this book?), genre (is the book science fiction? A crime novel? A bildungsroman?). We can talk Foucault and surveillance, biopolitics, medical ethics, aesthetics, education, gender, sex: this book has everything, which is why I can—and do—fit it onto so many syllabi.

But I also teach this book because it gets under my students’ skin. They tell me this after class, in evaluations, in emails years later, but I can also see it in their faces in my classroom. Never Let Me Go gets to them because it gets them. The narrator is young and confused and sad: so are a lot of college students. Sure, the book is about a massive government program that raises children for slaughter, but so much of the book, a good 80% of it, I’d venture, is about daily childhood and teenage life: alliances between friends, art projects, soccer games, awkward sex ed classes, writing essays, falling in love. There are hazy and obscure threats from the adult world, which the clones feel but don’t quite understand. The clones feel powerless, they know that death is coming—kind of—and yet they live their lives anyway.

In this, the clones are just like us.

Whenever I teach Never Let Me Go, there’s a moment, almost always on the final day of class on the novel, when my students get demonstrably frustrated with Kath and the other clones. They ask: where is their anger? Why don’t they rebel? Why do they passively accept their deaths? Why don’t they do something? (The most the clones try, and fail, to do is temporarily defer—not circumvent—their donations.) I, summoning something in myself that I don’t usually summon, pause and then intone: why don’t you rebel? Where is your anger? Why do you passively accept your deaths?

I have taught this book many times, and I know how to orchestrate this moment. I lean forward in my chair. You guys know you’re going to die, too, right? Why don’t you do something?

Sometimes my students are silent, but sometimes they start arguing with me about details that I frankly don’t care about in this particular moment. The clones will die sooner than we will. (Will they? What do you know that I don’t?) We’re not oppressed by governments restricting what we do with our bodies. (It’s never a woman who says this.) We can go round and round, and I will always have an answer.

You are going to die and there’s nothing you can do about it, I tell them. It’s the human condition.

by Jacquelyn Ardam, Avidly | Read more:
Image: Goodreads

The Truth About the US ‘Opioid Crisis’ – Prescriptions Aren’t the Problem

The news media is awash with hysteria about the opioid crisis (or opioid epidemic). But what exactly are we talking about? If you Google “opioid crisis”, nine times out of 10 the first paragraph of whatever you’re reading will report on death rates. That’s right, the overdose crisis.

For example, the lead article on the “opioid crisis” on the US National Institutes of Health website begins with this sentence: “Every day, more than 90 Americans die after overdosing on opioids.”

Is the opioid crisis the same as the overdose crisis? No. One has to do with addiction rates, the other with death rates. And addiction rates aren’t rising much, if at all, except perhaps among middle-class whites.

Let’s look a bit deeper.

The overdose crisis is unmistakable. I reported on some of the statistics and causes in the Guardian last July. I think the most striking fact is that drug overdose is the leading cause of death for Americans under 50. Some people swallow, or (more often) inject, more opioids than their body can handle, which causes the breathing reflex to shut down. But drug overdoses that include opioids (about 63%) are most often caused by a combination of drugs (or drugs and alcohol) and most often include illegal drugs (eg heroin). When prescription drugs are involved, methadone and oxycontin are at the top of the list, and these drugs are notoriously acquired and used illicitly.

Yet the most bellicose response to the overdose crisis is that we must stop doctors from prescribing opioids. Hmmm. (...)

First, why not clarify that most of the abuse of prescription pain pills is not by those for whom they’re prescribed? Among those for whom they are prescribed, the onset of addiction (which is usually temporary) is about 10% for those with a previous drug-use history, and less than 1% for those with no such history. Note also the oft-repeated maxim that most heroin users start off on prescription opioids. Most divers start off as swimmers, but most swimmers don’t become divers.

Second, wouldn’t it be sensible for the media to distinguish street drugs such as heroin from pain pills? We’re talking about radically different groups of users.

Third, virtually all experts agree that fentanyl and related drugs are driving the overdose epidemic. These are many times stronger than heroin and far cheaper, so drug dealers often use them to lace or replace heroin. Yet, because fentanyl is a manufactured pharmaceutical prescribed for severe pain, the media often describe it as a prescription painkiller – however it reaches its users.

It’s remarkably irresponsible to ignore these distinctions and then use “sum total” statistics to scare doctors, policymakers and review boards into severely limiting the prescription of pain pills.

By the way, if you were either addicted to opioids or needed them badly for pain relief, what would you do if your prescription was abruptly terminated? Heroin is now easier to acquire than ever, partly because it’s available on the darknet and partly because present-day distribution networks function like independent cells rather than monolithic gangs – much harder to bust. And, of course, increased demand leads to increased supply. Addiction and pain are both serious problems, serious sources of suffering. If you were afflicted with either and couldn’t get help from your doctor, you’d try your best to get relief elsewhere. And your odds of overdosing would increase astronomically.

It’s doctors – not politicians, journalists, or professional review bodies – who are best equipped and motivated to decide what their patients need, at what doses, for what periods of time. And the vast majority of doctors are conscientious, responsible and ethical.

Addiction is not caused by drug availability. The abundant availability of alcohol doesn’t turn us all into alcoholics. No, addiction is caused by psychological (and economic) suffering, especially in childhood and adolescence (eg abuse, neglect, and other traumatic experiences), as revealed by massive correlations between adverse childhood experiences and later substance use. The US is at or near the bottom of the developed world in its record on child welfare and child poverty. No wonder there’s an addiction problem. And how easy it is to blame doctors for causing it.

by Marc Lewis, The Guardian |  Read more:
Image: John Moore/Getty Images
[ed. I'd agree, and also stress the difference between dependence and addiction. Many people are dependent on various medications and other things for a variety of physical and mental ailments, yet they continue to function productively without ever becoming addicted (Xanax, Adderall, SSRIs, caffeine, marijuana, etc.). Personal physicians are probably the best hope we have to respond to individual patient needs, especially now that the risks of opioids are so well documented and a more rigorous prescription tracking system is in place. Most importantly, we need to get people off the streets and away from uncontrolled products. More prescribing and dispensing restrictions will only make the problem worse.]

Monday, November 6, 2017

Tiny Human Brain Organoids Implanted Into Rodents, Triggering Ethical Concerns

Minuscule blobs of human brain tissue have come a long way in the four years since scientists in Vienna discovered how to create them from stem cells.

The most advanced of these human brain organoids — no bigger than a lentil and, until now, existing only in test tubes — pulse with the kind of electrical activity that animates actual brains. They give birth to new neurons, much like full-blown brains. And they develop the six layers of the human cortex, the region responsible for thought, speech, judgment, and other advanced cognitive functions.

These micro quasi-brains are revolutionizing research on human brain development and diseases from Alzheimer’s to Zika, but the headlong rush to grow the most realistic, most highly developed brain organoids has thrown researchers into uncharted ethical waters. Like virtually all experts in the field, neuroscientist Hongjun Song of the University of Pennsylvania doesn’t “believe an organoid in a dish can think,” he said, “but it’s an issue we need to discuss.”

Those discussions will become more urgent after this weekend. At a neuroscience meeting, two teams of researchers will report implanting human brain organoids into the brains of lab rats and mice, raising the prospect that the organized, functional human tissue could develop further within a rodent. Separately, another lab has confirmed to STAT that it has connected human brain organoids to blood vessels, the first step toward giving them a blood supply.

That is necessary if the organoids are to grow bigger, probably the only way they can mimic fully grown brains and show how disorders such as autism, epilepsy, and schizophrenia unfold. But “vascularization” of cerebral organoids also raises such troubling ethical concerns that, previously, the lab paused its efforts to even try it.

“We are entering totally new ground here,” said Christof Koch, president of the Allen Institute for Brain Science in Seattle. “The science is advancing so rapidly, the ethics can’t keep up.”

by Sharon Begley, STAT |  Read more:
Image: Xuyu Qian/Johns Hopkins University
[ed. If a thing can be achieved it will be... commonly known as Pandora's Box. There's a bit of dark irony too, putting human brain tissue in lab rats. As Lily Tomlin once said, "The trouble with the rat race is, even if you win you're still a rat."]

Leaked Documents Expose Secret Tale Of Apple’s Offshore Island Hop

It was May 2013, and Apple Inc. chief executive Tim Cook was angry.

He sat before the U.S. Senate Permanent Subcommittee on Investigations, which had completed an inquiry into how Apple avoided tens of billions of dollars in taxes by shifting profits into Irish subsidiaries that the subcommittee’s chairman called “ghost companies.”

“We pay all the taxes we owe, every single dollar,” Cook declared. “We do not depend on tax gimmicks. . . . We do not stash money on some Caribbean island.”

Five months later, Ireland bowed to international pressure and announced a crackdown on Irish firms, like Apple’s subsidiaries, that claimed that almost all of their income was not subject to taxes in Ireland or anywhere else in the world.

Now leaked documents, called the Paradise Papers, shine a light on how the iPhone maker responded to this move. Despite its CEO’s public rejection of island havens, that’s where Apple turned as it began shopping for a new tax refuge.

Apple’s advisers at one of the world’s top law firms, U.S.-headquartered Baker McKenzie, canvassed one of the leading players in the offshore world, a firm of lawyers called Appleby, which specialized in setting up and administering tax haven companies.

A questionnaire that Baker McKenzie emailed in March 2014 set out 14 questions for Appleby’s offices in the Cayman Islands, the British Virgin Islands, Bermuda, the Isle of Man, Guernsey and Jersey.

One asked that the offices: “Confirm that an Irish company can conduct management activities . . . without being subject to taxation in your jurisdiction.”

Apple also asked for assurances that the local political climate would remain friendly: “Are there any developments suggesting that the law may change in an unfavourable way in the foreseeable future?”

In the end, Apple settled on Jersey, a tiny island in the English Channel that, like many Caribbean havens, charges no tax on corporate profits for most companies. Jersey was to play a significant role in Apple’s newly configured Irish tax structure set up in late 2014. Under this arrangement, the MacBook-maker has continued to enjoy ultra-low tax rates on most of its profits and now holds much of its non-U.S. earnings in a $252 billion mountain of cash offshore. The Irish government’s crackdown on shadow companies, meanwhile, has had little effect.
The inside story of Apple’s hunt for a new avoidance strategy is among the disclosures emerging from a leak of secret corporate records that reveals how the offshore tax game is played by Apple, Nike, Uber and other multinational corporations – and how top law firms help them exploit gaps between differing tax codes around the world.

The documents come from the internal files of offshore law firm Appleby and corporate services provider Estera, two businesses that operated together under the Appleby name until Estera became independent in 2016.

The files show how Appleby played a cameo role in creating many cross-border tax structures. German newspaper Süddeutsche Zeitung obtained the records and shared them with the International Consortium of Investigative Journalists and its media partners, including The New York Times, Australia’s ABC, the BBC in the United Kingdom, Le Monde in France and CBC in Canada.

These disclosures come as the White House and Congress consider cutting the U.S. federal tax on corporate income, pushing its top rate of 35 percent down to 20 percent or lower. President Donald Trump has insisted that American firms are getting a bad deal from current tax rules.

The documents show that, in reality, many big U.S. multinationals pay income taxes at very low rates, thanks in part to complex corporate structures they set up with the help of a global network of elite tax advisers.

In this regard, Apple has led the field. Despite almost all design and development of its products taking place in the U.S., the iPhone-maker has for years been able to report that about two-thirds of its worldwide profits were made in other countries, where it has used loopholes to access ultra-low foreign tax rates.

Now leaked documents help show how Apple quietly carried out a restructuring of its Irish companies at the end of 2014, allowing it to carry on paying taxes at low rates on the majority of global profits.

Multinationals that transfer intangible assets to tax havens and adopt other aggressive avoidance strategies are costing governments around the world as much as $240 billion a year in lost tax revenue, according to a conservative estimate in 2015 by the Organization for Economic Cooperation and Development.

by Simon Bowers, ICIJ |  Read more:
Image: SEC
[ed. The question now is will the Paradise Papers even register with the American public? (Rhetorical, I know). Of course they won't. Our government/media/corporations/elite (GMCE? Hell, let's just call them the Matrix) won't let them. They'll be spun out, delegitimized, obfuscated... and, well, you know the drill. America's two-minute attention span will move on. And like a ripple on a pond, or an errant blip on a flatlined brain monitor, the matter will quickly be forgotten. I haven't looked at the latest Republican tax plan but I can guess what's in it (more tax cuts for corporations and the wealthy!). And, of course, a big bonus for 'repatriating' American corporate profits that should never have been allowed to be sequestered in the first place. (I love that term 'repatriating', like our money's been on vacation for years, which I guess it has). See also: Cohn Says Repatriation Tax Rate Will Be in the ‘10% Range’. Update. The process begins: Was it wrong to hack and leak the Panama Papers? and What the Paradise Papers Tell Us About Global Business and Political Elites]

Pat Metheny



[ed. Genius. How could someone even conceptualize a composition like this? Here's another one. See also: Just Jazz Guitar Interview - Pat Metheny]