Thursday, November 23, 2017

Gobo

Before the internet, we relied on newspapers and broadcasters to filter much of our information, choosing curators based on their styles, reputations and biases – did you want a Wall Street Journal or New York Times view of the world? Fox News or NPR? The rise of powerful search engines made it possible to filter information based on our own interests – if you’re interested in sumo wrestling, you can learn whatever Google will show you, even if professional curators don’t see the sport as a priority.

Social media has presented a new problem for filters. The theory behind social media is that we want to pay attention to what our friends and family think is important. In practice, paying attention to everything 500 or 1500 friends are interested in is overwhelming – Robin Dunbar theorizes that there is a hard limit to the number of relationships we can cognitively maintain. Twitter solves this problem with a social hack: it’s okay to miss posts on your feed because so many are flowing by… though Twitter now tries to catch you up on important posts if you had the temerity to step away from the service for a few hours.

Facebook and other social media platforms solve the problem a different way: the algorithm. Facebook’s news feed usually differs sharply from a list of the most recent items posted by your friends and pages you follow – instead, it’s been personalized using thousands of factors, meaning you’ll see posts Facebook thinks you’ll want to see from hours or days ago, while you’ll miss some recent posts the algorithm thinks won’t interest you. Research from the labs of Christian Sandvig and Karrie Karahalios suggests that even heavy Facebook users aren’t aware that algorithms shape their use of the service, and that many have experienced anxiety about not receiving responses to posts the algorithm suppressed.

Many of the anxieties about Facebook and other social platforms are really anxieties about filtering. The filter bubble, posited by Eli Pariser, is the idea that our natural tendencies towards homophily get amplified by filters designed to give us what we want, not ideas that challenge us, leading to ideological isolation and polarization. Fake news designed to mislead audiences and garner ad views relies on the fact that Facebook’s algorithms have a difficult time determining whether information is true or not, but can easily see whether information is new and popular, sharing information that’s received strong reactions from previous audiences. When Congress demands action on fake news and Kremlin propaganda, they’re requesting another form of filtering, based on who’s creating content and on whether it’s factually accurate. (...)

Algorithmic filters optimize platforms for user retention and engagement, keeping our eyes firmly on the site so that our attention can be sold to advertisers. We thought it was time that we all had a tool that let us filter social media the ways we choose. What if we could choose to challenge ourselves one day, encountering perspectives from outside our normal orbits, and relax another day, filtering for what’s funniest and most viral? So we built Gobo.

What’s Gobo?

Gobo is a social media aggregator with filters you control. You can use Gobo to control what’s edited out of your feed, or configure it to include news and points of view from outside your usual orbit. Gobo aims to be completely transparent, showing you why each post was included in your feed and inviting you to explore what was filtered out by your current filter settings. (...)

How does it work?

Gobo retrieves posts from people you follow on Twitter and Facebook and analyzes them using simple machine learning-based filters. You can set those filters – seriousness, rudeness, virality, gender and brands – to eliminate some posts from your feed. The “politics” slider works differently, “filtering in”, instead of “filtering out” – if you set the slider towards “lots of perspectives”, our “news echo” algorithm will start adding in posts from media outlets that you likely don’t read every day.
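[ed. For the technically curious, here is a toy sketch of how slider-style filtering like this might work. The function and field names below are invented for illustration and are not Gobo's actual code; the key mechanics are that most sliders filter posts out when a classifier score crosses a threshold, while the politics slider filters extra posts in:]

```python
# Hypothetical sketch of Gobo-style slider filtering. All names and scores
# here are illustrative assumptions, not Gobo's real implementation.

def apply_filters(posts, sliders, echo_posts=(), perspectives=0.0):
    """Split posts into (kept, removed), each tagged with filter reasons,
    so a transparent UI can show why a post was hidden or added."""
    kept, removed = [], []
    for post in posts:
        # a post is filtered out if any score exceeds its slider threshold
        reasons = [name for name, threshold in sliders.items()
                   if post["scores"].get(name, 0.0) > threshold]
        (removed if reasons else kept).append((post, reasons))
    # the politics slider "filters in": add outside-orbit news posts
    n_extra = int(perspectives * len(echo_posts))
    kept.extend((p, ["news echo"]) for p in echo_posts[:n_extra])
    return kept, removed

posts = [
    {"id": 1, "scores": {"rudeness": 0.9}},  # too rude for this user's setting
    {"id": 2, "scores": {"rudeness": 0.1}},
]
outside = [{"id": 99, "scores": {}}]  # from an outlet the user doesn't follow
kept, removed = apply_filters(posts, {"rudeness": 0.5}, outside, perspectives=1.0)
```

In a real system the scores would come from machine-learning classifiers; the sketch only shows the thresholding logic, and keeping the removed posts around is what makes the "explore what was filtered out" transparency possible.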

by Ethan Zuckerman, My Heart's in Accra |  Read more:
Image: Gobo

Tuesday, November 21, 2017

What Do We Do with the Art of Monstrous Men?

Roman Polanski, Woody Allen, Bill Cosby, William Burroughs, Richard Wagner, Sid Vicious, V. S. Naipaul, John Galliano, Norman Mailer, Ezra Pound, Caravaggio, Floyd Mayweather, though if we start listing athletes we’ll never stop. And what about the women? The list immediately becomes much more difficult and tentative: Anne Sexton? Joan Crawford? Sylvia Plath? Does self-harm count? Okay, well, it’s back to the men I guess: Pablo Picasso, Max Ernst, Lead Belly, Miles Davis, Phil Spector.

They did or said something awful, and made something great. The awful thing disrupts the great work; we can’t watch or listen to or read the great work without remembering the awful thing. Flooded with knowledge of the maker’s monstrousness, we turn away, overcome by disgust. Or … we don’t. We continue watching, separating or trying to separate the artist from the art. Either way: disruption. They are monster geniuses, and I don’t know what to do about them.

We’ve all been thinking about monsters in the Trump era. For me, it began a few years ago. I was researching Roman Polanski for a book I was writing and found myself awed by his monstrousness. It was monumental, like the Grand Canyon. And yet. When I watched his movies, their beauty was another kind of monument, impervious to my knowledge of his iniquities. I had exhaustively read about his rape of thirteen-year-old Samantha Gailey; I feel sure no detail on record remained unfamiliar to me. Despite this knowledge, I was still able to consume his work. Eager to. The more I researched Polanski, the more I became drawn to his films, and I watched them again and again—especially the major ones: Repulsion, Rosemary’s Baby, Chinatown. Like all works of genius, they invited repetition. I ate them. They became part of me, the way something loved does.

I wasn’t supposed to love this work, or this man. He’s the object of boycotts and lawsuits and outrage. In the public’s mind, man and work seem to be the same thing. But are they? Ought we try to separate the art from the artist, the maker from the made? Do we undergo a willful forgetting when we want to listen to, say, Wagner’s Ring cycle? (Forgetting is easier for some than others; Wagner’s work has rarely been performed in Israel.) Or do we believe genius gets special dispensation, a behavioral hall pass?

And how does our answer change from situation to situation? Certain pieces of art seem to have been rendered unconsumable by their maker’s transgressions—how can one watch The Cosby Show after the rape allegations against Bill Cosby? I mean, obviously it’s technically doable, but are we even watching the show? Or are we taking in the spectacle of our own lost innocence?

And is it simply a matter of pragmatics? Do we withhold our support if the person is alive and therefore might benefit financially from our consumption of their work? Do we vote with our wallets? If so, is it okay to stream, say, a Roman Polanski movie for free? Can we, um, watch it at a friend’s house?
*
But hold up for a minute: Who is this “we” that’s always turning up in critical writing anyway? We is an escape hatch. We is cheap. We is a way of simultaneously sloughing off personal responsibility and taking on the mantle of easy authority. It’s the voice of the middle-brow male critic, the one who truly believes he knows how everyone else should think. We is corrupt. We is make-believe. The real question is this: can I love the art but hate the artist? Can you? When I say we, I mean I. I mean you.
*
I know Polanski is worse, whatever that means, and Cosby is more current. But for me the ur-monster is Woody Allen.

The men want to know why Woody Allen makes us so mad. Woody Allen slept with Soon-Yi Previn, the child of his life partner Mia Farrow. Soon-Yi was a teenager in his care the first time they slept together, and he the most famous film director in the world.

I took the fucking of Soon-Yi as a terrible betrayal of me personally. When I was young, I felt like Woody Allen. I intuited or believed he represented me on-screen. He was me. This is one of the peculiar aspects of his genius—this ability to stand in for the audience. The identification was exacerbated by the seeming powerlessness of his usual on-screen persona: skinny as a kid, short as a kid, confused by an uncaring, incomprehensible world. (Like Chaplin before him.) I felt closer to him than seems reasonable for a little girl to feel about a grown-up male filmmaker. In some mad way, I felt he belonged to me. I had always seen him as one of us, the powerless. Post-Soon-Yi, I saw him as a predator.

My response wasn’t logical; it was emotional.
*
One rainy afternoon, in the spring of 2017, I flopped down on the living-room couch and committed an act of transgression. No, not that one. What I did was, I on-demanded Annie Hall. It was easy. I just clicked the OK button on my massive universal remote and then rummaged around in a bag of cookies while the opening credits rolled. As acts of transgression go, it was pretty undramatic.

I had watched the movie at least a dozen times before, but even so, it charmed me all over again. Annie Hall is a jeu d’esprit, an Astaire soft shoe, a helium balloon straining at its ribbon. It’s a love story for people who don’t believe in love: Annie and Alvy come together, pull apart, come together, and then break up for good. Their relationship was pointless all along, and entirely worthwhile. Annie’s refrain of “la di da” is the governing spirit of the enterprise, the collection of nonsense syllables that give joyous expression to Allen’s dime-store existentialism. “La di da” means, Nothing matters. It means, Let’s have fun while we crash and burn. It means, Our hearts are going to break, isn’t it a lark?

Annie Hall is the greatest comic film of the twentieth century—better than Bringing Up Baby, better even than Caddyshack—because it acknowledges the irrepressible nihilism that lurks at the center of all comedy. Also, it’s really funny. To watch Annie Hall is to feel, for just a moment, that one belongs to humanity. Watching, you feel almost mugged by that sense of belonging. That fabricated connection can be more beautiful than love itself. And that’s what we call great art. In case you were wondering.

Look, I don’t get to go around feeling connected to humanity all the time. It’s a rare pleasure. And I’m supposed to give it up just because Woody Allen misbehaved? It hardly seems fair.
*
When I mentioned in passing I was writing about Allen, my friend Sara reported that she’d seen a Little Free Library in her neighborhood absolutely crammed to its tiny rafters with books by and about Allen. It made us both laugh—the mental image of some furious, probably female, fan who just couldn’t bear the sight of those books any longer and stuffed them all in the cute little house.

Then Sara grew wistful: “I don’t know where to put all my feelings about Woody Allen,” she said. Well, exactly.
*
I told another smart friend that I was writing about Woody Allen. “I have very many thoughts about Woody Allen!” she said, all excited to share. We were drinking wine on her porch and she settled in, the late afternoon light illuminating her face. “I’m so mad at him! I was already pissed at him over the Soon-Yi thing, and then came the—what’s the kid’s name—Dylan? Then came the Dylan allegations, and the horrible dismissive statements he made about that. And I hate the way he talks about Soon-Yi, always going on about how he’s enriched her life.”

This, I think, is what happens to so many of us when we consider the work of the monster geniuses—we tell ourselves we’re having ethical thoughts when really what we’re having is moral feelings. We put words around these feelings and call them opinions: “What Woody Allen did was very wrong.” And feelings come from someplace more elemental than thought. The fact was this: I felt upset by the story of Woody and Soon-Yi. I wasn’t thinking; I was feeling. I was affronted, personally somehow.
*
Here’s how to have some complicated emotions: watch Manhattan.

by Claire Dederer, Paris Review | Read more:
Image: Manhattan

Mark Lanegan

[ed. Read the comments. At least the show highlighted former Screaming Trees frontman Mark Lanegan (even if Lanegan left Seattle quite a while ago). See also: Bourdain's field notes: Seattle.]

Erased


From 1951 to 1953, Robert Rauschenberg made a number of artworks that explore the limits and very definition of art. These works recall and effectively extend the notion of the artist as creator of ideas, a concept first broached by Marcel Duchamp (1887–1968) with his iconic readymades of the early twentieth century. With Erased de Kooning Drawing (1953), Rauschenberg set out to discover whether an artwork could be produced entirely through erasure—an act focused on the removal of marks rather than their accumulation.

Rauschenberg first tried erasing his own drawings but ultimately decided that in order for the experiment to succeed he had to begin with an artwork that was undeniably significant in its own right. He approached Willem de Kooning (1904–1997), an artist for whom he had tremendous respect, and asked him for a drawing to erase. Somewhat reluctantly, de Kooning agreed. After Rauschenberg completed the laborious erasure, he and fellow artist Jasper Johns (b. 1930) devised a scheme for labeling, matting, and framing the work, with Johns inscribing the following words below the now-obliterated de Kooning drawing:

ERASED de KOONING DRAWING
ROBERT RAUSCHENBERG
1953

The simple, gilded frame and understated inscription are integral parts of the finished artwork, offering the sole indication of the psychologically loaded act central to its creation. Without the inscription, we would have no idea what is in the frame; the piece would be indecipherable.

via: SFMOMA |  Read more:
[ed. This kind of 'art' drives me nuts (which, probably, is partially the intent). Here's another example, with an ironic twist.]
h/t YMFY

The FCC Will Move to Kill Net Neutrality Over Thanksgiving

FCC Chairman Ajit Pai is planning to make good on his promise to kill net neutrality this weekend, under cover of the holidays, ushering in an era in which the largest telecom corporations can extract bribes from the largest internet corporations to shut small, innovative and competitive internet services out of connecting to you.

Under this regime, a company like Fox News or Google could pay a bribe to a company like Comcast, and in exchange, Comcast would make sure that its subscribers would get slower connections to their rivals. This is billed as "free enterprise."

Net Neutrality was won in America thanks to an improbable uprising of everyday people who finally figured out that this boring, wonky policy issue would directly affect their futures, and the way they work, learn, raise their children, stay in touch with their families, start businesses, participate in the public sphere, stay healthy and elect their leaders. Millions of Americans called, wrote, marched and donated and won over the largest, best-funded corporations in America, beating them and forcing the Obama-era FCC to protect the free, fair, open internet.

Ironically, Trump owes his victory to the neutral internet, where insurgent racists and conspiracy theorists were able to gather and network in support of Trumpism without having to outbid mainstream political rivals. Across Trumpist forums, the brighter among his supporters are aghast that a Trump appointee is about to destroy the factors that made their communities possible.

Ajit Pai is planning to introduce his anti-neutrality fast-track vote over the holiday weekend in the hopes that we'll be too busy eating or shopping to notice.

He's wrong.

Thanksgiving is when students go home for the holidays to fix their internet connections and clean the malware out of their computers. Those students -- awake to the Trumpist threat to their futures -- will spend this weekend explaining to their parents why they need to participate in the fight for a neutral net.

Thanksgiving is when workers stay home from the office, participating in online forums and social media, where they will have raucous conversations about this existential threat -- because a free, fair and open internet isn't more important than climate change or gender discrimination or racism or inequality, but a free, fair and open internet is how we're going to win all those fights.

Sneaking in major policy changes over the holiday weekend is a bad look. People are better at understanding procedural irregularities than they are at understanding substance. It's hard to understand what "net neutrality" means -- but it's easy to understand that Ajit Pai isn't killing it in secret because he wants to make sure you're pleasantly surprised on Monday. (...)
Except this obfuscation plan isn't "devilishly brilliant," it's a massive underestimation of the brutal backlash awaiting the broadband industry and its myopic water carriers. Survey after survey (including those conducted by the cable industry itself) have found net neutrality has broad, bipartisan support. The plan is even unpopular among the traditional Trump trolls over at 4chan /pol/ that spent the last week drinking onion juice. It's a mammoth turd of a proposal, and outside of the color guard at the lead of the telecom industry's sockpuppet parade -- the majority of informed Americans know it.

Net neutrality has been a fifteen year fight to protect the very health of the internet itself from predatory duopolists like Comcast. Killing it isn't something you can hide behind the green bean amandine, and it's not a small scandal you can bury via the late Friday news dump. This effort is, by absolutely any measure, little more than a grotesque hand out to one of the least competitive -- and most disliked -- industries in America. Trying to obfuscate this reality via the holidays doesn't change that. Neither does giving the plan an Orwellian name like "Restoring Internet Freedom."

It's abundantly clear that if the FCC and supporters were truly proud of what they were doing, they wouldn't feel the need to try and hide it. If this was an FCC that actually wanted to have a candid, useful public conversation about rolling back net neutrality, it wouldn't be actively encouraging fraud and abuse of the agency's comment system. To date, the entire proceeding has been little more than a glorified, giant middle finger to the public at large, filled with scandal and misinformation. And the public at large -- across partisan aisles -- is very much aware of that fact.

FCC Plan To Use Thanksgiving To 'Hide' Its Attack On Net Neutrality Vastly Underestimates The Looming Backlash [Karl Bode/Techdirt]

by Cory Doctorow, Boing Boing |  Read more:
Image: Bryce Durbin/Techcrunch
[ed. See also: An Open Letter to the FCC (by NY Attny. Gen. Eric Schneiderman)]

The Republican War on College

There are two big problems with the GOP’s claim that its tax-reform proposals help the middle class.

The first, and most obvious, is that both the House bill, which passed last week, and the Senate bill would raise taxes on tens of millions of middle-class and low-income households by the end of the decade, according to several analyses of the bills.

The second reason is subtler, but perhaps equally significant. To pay for a permanent tax cut on corporations, the plan raises taxes on colleges and college students, which is part of a broader Republican war on higher education in the U.S. This is a big deal, because in the last half-century, the most important long-term driver of wage growth has arguably been college.

The House bill would reduce benefits for higher education by more than $60 billion in the coming decade. It would shock graduate students with sudden tax increases, punish student debtors, and force schools to raise tuition at a time when higher education already feels unaffordable for many students. On balance, the GOP plan would encourage large corporations to invest in new machines in the workplace, while discouraging American workers from investing in themselves.
* * *
The most dramatic attack on higher education in the GOP House bill is its tax on tuition waivers. This may sound arcane to non–graduate students, but it’s a huge deal. Nearly 150,000 graduate students, many of whom represent the future of scientific and academic research, don’t pay tuition in exchange for teaching classes or doing other work at university. The GOP tax plan would treat their unpaid tuition as income. So imagine you are a graduate student earning $30,000 from your school for teaching, while attending a $50,000 program. Under current law, your taxable income is $30,000. Under the GOP tax plan, your taxable income is $80,000. Your tax bill could quadruple. (The provision is not in the latest Senate version of the bill.)

“It’s a huge tax increase on income-constrained students, and the worst part of it is that it would come as a shock to thousands of grad students who aren’t expecting it,” says Kim Rueben, a senior fellow at the Urban Institute. In an essay for The New York Times, Erin Rousseau, a graduate student at MIT who studies mental-health disorders, said she is paid $33,000 for up to 80 hours of weekly work as a teacher and researcher, but she pays nothing for her tuition, which is priced at $51,000. By counting that tuition as income, the GOP plan would raise her tax bill by about $9,000. Graduate school would suddenly become unaffordable for thousands of students who aren’t independently wealthy or being subsidized by rich parents, and many who are currently enrolled would be forced to drop out.
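[ed. The arithmetic behind these numbers is worth making concrete. The sketch below uses deliberately simplified, made-up brackets rather than the actual 2017 tax schedule; the point is only the mechanism, that counting a $50,000 waiver as income can multiply the bill several times over:]

```python
# Back-of-the-envelope illustration of the tuition-waiver provision.
# The brackets are invented for illustration, NOT the real tax schedule.

def tax(income, brackets):
    """Progressive tax over a list of (upper_bound, rate) brackets."""
    owed, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income <= lower:
            break
        owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

# illustrative brackets: 10% on the first $30k, 25% on income above that
brackets = [(30_000, 0.10), (float("inf"), 0.25)]

stipend, waiver = 30_000, 50_000
current_bill = tax(stipend, brackets)            # waiver not taxed
proposed_bill = tax(stipend + waiver, brackets)  # waiver counted as income
```

With these toy numbers the bill jumps from $3,000 to $15,500, because the entire waiver lands in the higher bracket even though the student never sees a dollar of it.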

But that’s just the start. In fact, modern GOP economic policy amounts to a massive, coordinated, multilevel attack on higher education in the U.S. It starts with the elite colleges. The House bill would apply an excise tax to private universities with endowments worth more than $250,000 per full-time student. The measure is expected to raise about $2.5 billion from approximately 70 topflight universities.

Elite colleges are just the most prestigious sliver of the U.S.’s higher-education system. Despite their outsize role in national research, Harvard and Stanford, with their multibillion-dollar endowments, might not inspire much sympathy. There is a graver, yet subtler, threat to American higher education, according to Rueben. It is a domino set of public policies and their consequences, starting with Republican health-care and tax policy, continuing on to state budgets, and ending with higher tuition for students.

The first link of this causal chain is health-care spending. Republicans are obsessed with cutting federal spending on Medicaid. Doing so, Rueben points out, would result in many states spending much more on health and public welfare to make up the difference. All things equal, that means states have to raise taxes. But the House bill reduces, and the Senate bill eliminates, the deduction for state and local taxes. This would raise effective tax rates on rich residents in high-income states, making it harder for state legislators to raise taxes again, which would in turn create a dilemma for states needing more health-care dollars without higher taxes. One solution: Spend less on higher education, and force colleges to raise tuition. This wouldn’t be a new initiative so much as the continuation of an old one. Over the past 20 years, the most important contributor to higher tuition costs has been declining public spending on colleges, as cash-strapped states shift money to health care. When states pay less, students pay more.

If that chain of causation was too confusing, here’s an executive summary: By pressuring states to spend more on health care while hampering their ability to raise taxes (never an easy thing to begin with), GOP tax and budget policies could deprive public colleges of state funding, which would force American students to pay more.

This would almost certainly lead to a rise in student debt. So it would make sense to make that debt easier to pay off. The House bill does the opposite. It would eliminate a provision that allows low- and middle-income student debtors to deduct up to $2,500 in student-loan interest each year. (...)

For the white middle class, a turn against college is a profound historical irony. The GI Bill was more responsible than almost any other law in fashioning the 20th century’s middle class. Many Trump voters feel left behind, or worry that their children will grow up poorer. It’s extremely unlikely that these families will personally benefit from a large tax cut for General Electric and Apple. What they could use, instead, is some extra money today, plus an education system that prepares their kids for a new career, in a field that isn’t in structural decline.

Designing that sort of policy is totally possible, at least mathematically. For the multitrillion-dollar cost of reducing the corporate income tax from 35 percent to 20 percent, the U.S. could provide universal pre-K education and free tuition at public colleges for nonaffluent students. For far less, it could keep the tax code’s current higher-education benefits, which help millions to get a college degree and find a higher-paying job.

by Derek Thompson, The Atlantic |  Read more:
Image: Brian Snyder/Reuters

Can A.I. Be Taught to Explain Itself?

In September, Michal Kosinski published a study that he feared might end his career. The Economist broke the news first, giving it a self-consciously anodyne title: “Advances in A.I. Are Used to Spot Signs of Sexuality.” But the headlines quickly grew more alarmed. By the next day, the Human Rights Campaign and Glaad, formerly known as the Gay and Lesbian Alliance Against Defamation, had labeled Kosinski’s work “dangerous” and “junk science.” (They claimed it had not been peer reviewed, though it had.) In the next week, the tech-news site The Verge had run an article that, while carefully reported, was nonetheless topped with a scorching headline: “The Invention of A.I. ‘Gaydar’ Could Be the Start of Something Much Worse.”

Kosinski has made a career of warning others about the uses and potential abuses of data. Four years ago, he was pursuing a Ph.D. in psychology, hoping to create better tests for signature personality traits like introversion or openness to change. But he and a collaborator soon realized that Facebook might render personality tests superfluous: Instead of asking if someone liked poetry, you could just see if they “liked” Poetry Magazine. In 2014, they published a study showing that if given 200 of a user’s likes, they could predict that person’s personality-test answers better than their own romantic partner could.

After getting his Ph.D., Kosinski landed a teaching position at the Stanford Graduate School of Business and soon started looking for new data sets to investigate. One in particular stood out: faces. For decades, psychologists have been leery about associating personality traits with physical characteristics, because of the lasting taint of phrenology and eugenics; studying faces this way was, in essence, a taboo. But to understand what that taboo might reveal when questioned, Kosinski knew he couldn’t rely on human judgment.

Kosinski first mined 200,000 publicly posted dating profiles, complete with pictures and information ranging from personality to political views. Then he poured that data into an open-source facial-recognition algorithm — a so-called deep neural network, built by researchers at Oxford University — and asked it to find correlations between people’s faces and the information in their profiles. The algorithm failed to turn up much, until, on a lark, Kosinski turned its attention to sexual orientation. The results almost defied belief. In previous research, the best any human had done at guessing sexual orientation from a profile picture was about 60 percent — slightly better than a coin flip. Given five pictures of a man, the deep neural net could predict his sexuality with as much as 91 percent accuracy. For women, that figure was lower but still remarkable: 83 percent.

Much like his earlier work, Kosinski’s findings raised questions about privacy and the potential for discrimination in the digital age, suggesting scenarios in which better programs and data sets might be able to deduce anything from political leanings to criminality. But there was another question at the heart of Kosinski’s paper, a genuine mystery that went almost ignored amid all the media response: How was the computer doing what it did? What was it seeing that humans could not?

It was Kosinski’s own research, but when he tried to answer that question, he was reduced to a painstaking hunt for clues. At first, he tried covering up or exaggerating parts of faces, trying to see how those changes would affect the machine’s predictions. Results were inconclusive. But Kosinski knew that women, in general, have bigger foreheads, thinner jaws and longer noses than men. So he had the computer spit out the 100 faces it deemed most likely to be gay or straight and averaged the proportions of each. It turned out that the faces of gay men exhibited slightly more “feminine” proportions, on average, and that the converse was true for women. If this was accurate, it could support the idea that testosterone levels — already known to mold facial features — help mold sexuality as well.

But it was impossible to say for sure. Other evidence seemed to suggest that the algorithms might also be picking up on culturally driven traits, like straight men wearing baseball hats more often. Or — crucially — they could have been picking up on elements of the photos that humans don’t even recognize. “Humans might have trouble detecting these tiny footprints that border on the infinitesimal,” Kosinski says. “Computers can do that very easily.”

It has become commonplace to hear that machines, armed with machine learning, can outperform humans at decidedly human tasks, from playing Go to playing “Jeopardy!” We assume that is because computers simply have more data-crunching power than our soggy three-pound brains. Kosinski’s results suggested something stranger: that artificial intelligences often excel by developing whole new ways of seeing, or even thinking, that are inscrutable to us. It’s a more profound version of what’s often called the “black box” problem — the inability to discern exactly what machines are doing when they’re teaching themselves novel skills — and it has become a central concern in artificial-intelligence research. In many arenas, A.I. methods have advanced with startling speed; deep neural networks can now detect certain kinds of cancer as accurately as a human. But human doctors still have to make the decisions — and they won’t trust an A.I. unless it can explain itself.

This isn’t merely a theoretical concern. In 2018, the European Union will begin enforcing a law requiring that any decision made by a machine be readily explainable, on penalty of fines that could cost companies like Google and Facebook billions of dollars. The law was written to be powerful and broad but fails to define what constitutes a satisfying explanation or how exactly those explanations are to be reached. It represents a rare case in which a law has managed to leap into a future that academics and tech companies are just beginning to devote concentrated effort to understanding. As researchers at Oxford dryly noted, the law “could require a complete overhaul of standard and widely used algorithmic techniques” — techniques already permeating our everyday lives.

Those techniques can seem inescapably alien to our own ways of thinking. Instead of certainty and cause, A.I. works off probability and correlation. And yet A.I. must nonetheless conform to the society we’ve built — one in which decisions require explanations, whether in a court of law, in the way a business is run or in the advice our doctors give us. The disconnect between how we make decisions and how machines make them, and the fact that machines are making more and more decisions for us, has birthed a new push for transparency and a field of research called explainable A.I., or X.A.I. Its goal is to make machines able to account for the things they learn, in ways that we can understand. But that goal, of course, raises the fundamental question of whether the world a machine sees can be made to match our own.

by Cliff Kuang, NY Times |  Read more:
Image: Derek Brahney. Source photo: J.R. Eyerman/The Life Picture Collection/Getty
[ed. See also: Caught in the Web]

Sunday, November 19, 2017

The Dunning-Kruger Effect: Why Incompetent People Think They're Amazing

A New Supreme Court Case Could Cripple Public Employee Unions

As much as I support unions in theory, there is too often a difference between theory and practice. In this case, my beef is that many union leaderships regularly sell out their members. I am particularly disgusted with the conduct of the unions with respect to CalPERS, where they get know-nothing, potted plants on the board who rubber stamp staff's self-serving initiatives. Even worse, the SEIU's Terry Brennard and CSEA's David Low were cited by CalPERS staff as key players in getting its non-secret, tamper-friendly election procedures passed.

So if this decision goes against public employee unions, IMHO their leaders’ habit of power-seeking at the expense of the rank and file is a big part of the antipathy towards unions in America and laid the groundwork for cases like these.
***
By Bobbi Murray, a freelance journalist based in Los Angeles. Originally published at Capital and Main

Wisconsin provided early examples of scorched-earth labor policies. California unions took note.

Should Mark Janus prevail in his Supreme Court case, public-sector employees in California and other states who now pay agency fees instead of union dues will be able to opt out of any payment at all—even though they can still benefit from collective bargaining contracts and turn to the union with grievances, enjoying a free ride that drains union resources.

The ruling would undermine the ability of public-sector unions—about half of U.S. organized labor—to set standards for wage and workplace conditions. The resulting financial pressure will hamper unions from taking lead roles in policy debates on such issues as health care. “The short-term [goal] is to reduce the ability to collect dues,” said Raphael Sonenshein, executive director of the Pat Brown Institute for Public Affairs. “The long-term aim is to weaken collective bargaining.”

Anti-union forces, often funded by corporate-backed foundations, have been on the attack for decades. One stunning victory was the 2011 passage of Wisconsin’s Act 10, that state’s “budget repair” bill. Republican Governor Scott Walker, long a vocal enemy of public-sector unions, introduced it to address a $3.6 billion budget shortfall.

Act 10 gutted public-sector union collective bargaining rights, leaving unions unable to negotiate over pensions or working conditions such as hours, sick leave and vacations, and unable to bargain over wages beyond cost-of-living raises. In other words, all the things that, for many, make it worth paying union dues.

The law also loosened restrictions on local governments’ hiring and wage policies, while allowing wage freezes and requiring higher employee health-care contributions.

Act 10 knee-capped labor as a political force in an historically union state — the first to recognize public-sector unions. By 2014 the once-robust Wisconsin State Employees Union had lost 60 percent of its members; its annual budget dropped from $6 million to $2 million. Then came the defections. In 2013 the nearly 6,000 prison guards staffing Wisconsin’s correctional facilities voted to leave WSEU for the newly-created Wisconsin Association for Correctional Law Enforcement, which cut dues from WSEU’s roughly $36 monthly rate to WACLE’s $18. WACLE now represents approximately 5,900 state security workers.

“The two major public-sector unions both lost about 80 percent of dues-paying members,” Joel Rogers, a University of Wisconsin, Madison professor of law and sociology, told Capital & Main. Rogers is also the founder of an organization called COWS, touted as “the national high-road strategy center” think tank. Shrunken union budgets hobbled the ability to operate effectively on policy issues and support labor-friendly candidates. “They are basically nowhere near what they were in terms of political forces,” Rogers said.

Employees whose livelihoods had taken a hit with budget cuts weren’t in a mood to pay dues to a union without collective bargaining power. So they quit—bleeding unions of funds.

“Which is what it was all about,” said Rogers.

by Yves Smith, Naked Capitalism |  Read more:

X Marks the Self

In August, a man with a sword was arrested near Buckingham Palace on suspicion of preparing to commit an act of terrorism. Westminster Magistrates Court heard that the man, an Uber driver from Luton, had intended to go to Windsor Castle but his satnav directed him to a pub called The Windsor Castle instead. Without stopping for a drink, he drove on to Buckingham Palace. It isn’t clear if he was still relying on the satnav for the final stage of his journey, or whether rage at the mistake was a motivating factor in his alleged offence. Three police officers were said to have received minor injuries; presumably he hadn’t stopped to ask them for directions.

Greg Milner includes a few stories about satnav fails in Pinpoint, his lively history of satellite navigation technology – his central chapter is called ‘Death by GPS’ – but one of the eye-opening things about his book is quite how far-reaching the tech is. As well as guiding missiles and encouraging motorists not to pay attention to road signs or even to the road ahead of them, GPS is used in crop management, high frequency trading, weather forecasting, earthquake measurement, nuclear-detonation detection and space exploration, as well as the smooth running of countless infrastructure networks, from electricity grids to the internet.

GPS, which stands for Global Positioning System, was developed by the American military. The US Department of Defence currently spends more than a billion dollars a year maintaining it. There are 31 GPS satellites orbiting the earth, all monitored, along with hundreds of other military satellites, from Schriever Air Force Base in Colorado. For the system to work, a receiver on the ground – your mobile phone, for example – needs to have a 'line of sight' to at least four of the satellites (there are very few places on earth where it wouldn't). Each satellite continuously broadcasts its position, along with the time the signal left the satellite. The time it takes for the signal to reach you (measured in milliseconds) will tell you exactly how far away it is. Three of these signals provide enough information to pinpoint your position; the fourth confirms the time used in the calculations. GPS satellites, unlike mobile phones, carry super-accurate atomic clocks, which are continually synchronised with one another. This is necessary for the precision of the positioning system, but many of the applications of GPS make use of it primarily as a timekeeping device.
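[ed. The four-signal calculation described above can be sketched numerically. Each pseudorange is the true distance plus the unknown receiver clock error (expressed as a distance), and a few rounds of Gauss-Newton iteration recover both position and clock bias. The satellite positions and the 1 ms clock error below are invented purely for illustration.]

```python
import math

C = 299_792_458.0  # speed of light, m/s


def gauss_solve(A, v):
    """Tiny Gaussian elimination with partial pivoting for a small square system."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(M[k][i]))
        M[i], M[p] = M[p], M[i]
        for k in range(i + 1, n):
            f = M[k][i] / M[i][i]
            for j in range(i, n + 1):
                M[k][j] -= f * M[i][j]
    xs = [0.0] * n
    for i in range(n - 1, -1, -1):
        xs[i] = (M[i][n] - sum(M[i][j] * xs[j] for j in range(i + 1, n))) / M[i][i]
    return xs


def solve_position(sats, pseudoranges, iters=20):
    """Recover receiver (x, y, z) and clock bias (in metres) from four
    satellite pseudoranges by iterating on the linearised range equations."""
    x = y = z = 0.0  # start at the earth's centre, a standard initial guess
    b = 0.0          # receiver clock bias, expressed as c * dt
    for _ in range(iters):
        H, r = [], []
        for (sx, sy, sz), rho in zip(sats, pseudoranges):
            dx, dy, dz = x - sx, y - sy, z - sz
            dist = math.sqrt(dx * dx + dy * dy + dz * dz)
            # Partial derivatives of the pseudorange w.r.t. x, y, z, b
            H.append([dx / dist, dy / dist, dz / dist, 1.0])
            r.append(rho - (dist + b))
        delta = gauss_solve(H, r)
        x += delta[0]; y += delta[1]; z += delta[2]; b += delta[3]
    return (x, y, z), b


# Demo: receiver on the earth's surface, satellites at GPS orbital radius,
# receiver clock running 1 ms fast.
R = 26_600_000.0
sats = [(R, 0, 0), (0, R, 0), (0, 0, R),
        (R / 3 ** 0.5, R / 3 ** 0.5, R / 3 ** 0.5)]
truth = (6_371_000.0, 0.0, 0.0)
bias_m = C * 1e-3  # a 1 ms clock error corresponds to ~300 km of range
ranges = [math.dist(s, truth) + bias_m for s in sats]
pos, b = solve_position(sats, ranges)
```

Note how a millisecond of clock error corresponds to roughly 300 km of apparent range, which is why the fourth satellite, and the atomic clocks, matter so much.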

Since 1967, the second has been defined as ‘the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom’. A pendulum clock uses gravity to make a pendulum oscillate at a measurable frequency; a quartz clock uses electricity to make a quartz crystal oscillate at a measurable frequency; an atomic clock uses microwaves to make caesium (or similar) atoms oscillate at a measurable frequency. In the 1970s, the only way to synchronise your atomic clock with the one at the International Bureau of Weights and Measures was to take it to Paris with you and compare them side by side. Now it’s all done by satellite signals. GPS time is also what enables stocks and shares to change hands in microseconds, prevents power surges in vast electrical grids and keeps the internet ticking smoothly.

But before it was co-opted as the pocketwatch of late capitalism – a gift from the US government – GPS was developed as a way to help the US air force drop its bombs just where it wanted with as little risk as possible to American lives. As with any technological breakthrough, it took decades, with false starts, moments of inspiration, patient refinements, scepticism from the brass (‘We’re the navy, we know where we are’), inter-service rivalry and a more or less steady influx of government cash. Within days of Sputnik’s launch in 1957, two young engineers at Johns Hopkins University were using the Russian satellite’s radio signal to plot and then predict its position. GPS came of age in the 1991 Gulf War. (...)

There used to be two different GPS signals: a high-precision one, which only military receivers could decrypt, and a deliberately degraded one for civilian use, which gave your position to within a hundred metres or so. When US troops started shipping out to the Gulf after Saddam Hussein invaded Kuwait in August 1990, they had only 13 Manpack portable GPS receivers between them. Each one cost $40,000 and weighed 12 kg. The Department of Defence put in an order for thousands of Trimpacks, portable receivers built by a former Hewlett Packard engineer called Charlie Trimble (one of the many people Milner interviewed). But there still weren’t enough to go round, so a lot of soldiers ended up spending $1000 of their own money on mass-produced Magellan portable receivers, which were less accurate than Trimpacks but better than nothing in the middle of a hostile desert. The Magellans were made by Ed Tuck, an ex-military venture capitalist with a background in the tech industry. He’d imagined selling cheap (less than $300) GPS receivers to middle-aged men who didn’t like admitting they were lost or asking for directions, but many of his early customers were people with boats off the southern coast of Florida – drug dealers or people traffickers.

Because so many soldiers in Desert Storm were carrying GPS receivers that used the civilian signal, the military turned off the ‘selective availability’ software that degraded it. They turned it on again when the Gulf War was over, but amateur GPS enthusiasts would have noticed a sudden improvement in their receivers’ accuracy in September 1994, when US forces landed on Haiti to depose General Cédras and restore Jean-Bertrand Aristide to power. Meanwhile, commercial GPS receiver manufacturers were developing ways to overcome or work around selective availability, and make their products more accurate in spite of it. In May 2000 the military stopped degrading the civilian GPS signal. Sales of GPS receivers soared.

It isn’t just every phone and every Uber car that’s now fitted with GPS; in some parts of the world it’s every tractor too. And not because farmers need to be reminded of the way to their fields. Milner visited a sugar beet farm in Colorado, a few hours north of Schriever Air Force Base. Using GPS in combination with the Russian GLONASS system to achieve ‘sub-inch accuracy’, the beet farmer tills his field in strips, leaving a narrow band of fallow earth between each row to help keep water and nutrients in the soil. Each seed is planted in a precise, recorded position, with more of them in the more fertile parts of the field. Just the right amount of water and fertiliser is sprayed onto the beets. When they’re harvested, each and every one can be plucked entire from the earth (a broken beet is no use to anyone). Milner reckons that GPS is now worth billions a year to American farmers. An experiment in Uttar Pradesh, meanwhile, found that levelling the land on a two-acre farm using GPS nearly tripled the wheat yield. The farmer in Colorado told Milner that GPS gives him ‘intimate knowledge’ of the land, like his grandfather, who walked behind a horse looking at the ground beneath his feet. Still, hi-tech agriculture has its downsides. Not so many years ago, it took two men to harvest a beet field: one of them driving the tractor, the other operating the digger at the back. Now the tractor does almost everything itself; the driver merely has to turn it round at the end of the row. Soon, he won’t have to do even that. A former farmhand in East Yorkshire told me this summer that he had stopped driving tractors because he can’t understand the computers.

by Thomas Jones, LRB |  Read more:
Image: via:
[ed. I had a project that used some of the earliest commercial versions of GPS technology (around 1992) to map salmon streams in remote Alaskan back-country, then feed those locations into ArcInfo for mapping. Selective availability was a bitch and overcoming it required triangulation from several locations, some of which entailed a lot of frustrating bushwhacking to get to sites high enough on mountainsides to get good, reliable satellite signals. A lot of extra effort to overcome Defense Dept. paranoia.]

Amazon is Becoming the New Microsoft

My last column was about the recent tipping point signifying that cloud computing is guaranteed to replace personal computing over the next three years. This column is about the slugfest to determine which company's public cloud is most likely to prevail. I reckon it is Amazon's and I'll go further to claim that Amazon will shortly be the new Microsoft.

What I mean by The New Microsoft is that Amazon is starting to act a lot like the old Microsoft of the 1990s. You remember — the Bad Microsoft.

Microsoft in the Bill Gates era was truly full of itself, pushing competitors around, crushing enemies and occasionally breaking the law as a bevy of anti-trust settlements show. Microsoft was the second most valuable company on Earth after ExxonMobil and seemed to feel it could get away with anything. When I wrote something that displeased them, they’d summon me to Redmond for reeducation. Fortunately I didn’t give a shit — then or now — making me completely resistant to the technique.

Today in the public cloud space Amazon is behaving much like Microsoft did in the 90s. They are hugely dominant with more than 70 percent of the cloud market and growing. According to Gartner, Amazon Web Services (AWS) will soon have 80 percent of the public cloud market.

Understand there are only three players in the public cloud that matter at all — Amazon, Google and Microsoft. Forget about companies like IBM and Oracle because their market share is meaningless. Larry Ellison can talk about having lower cloud prices, but if he cannot support at least a million virtual seats (he can’t) his pricing doesn’t matter.

All three of the big public cloud companies are growing fast but Amazon is growing faster. This year AWS will spend $10 billion expanding. Microsoft and Google are spending billions, too, but not that many billions. Amazon may always be bigger.

And Amazon may always be faster, too. Part of the reason AWS is gaining market share is because Microsoft’s Azure doesn’t boot virtual machines quite as fast. Specifically it can take over a minute for storage to come on-line and in the public cloud world a minute to access your Dropbox is 40 seconds too long.

This too shall pass, but Microsoft will still be smaller. That’s why Redmond has staked out the Enterprise cloud market — alas, the segment most sensitive to such slow boot times.

But what about Google? They are definitely in the hunt and competitive in terms of performance, which is why Salesforce — which sees itself as eventually becoming part of Microsoft — instead chose to balance AWS recently by allying also with Google. But beyond this one deal it is hard to see Google making cloud inroads. The problem is that Google’s biggest cloud customer by far is, well, Google itself, and that customer is so demanding that the commercial cloud division hasn’t been able to get its act together. This can change but I’ll guarantee that Google as a cloud customer won’t become any less demanding, so it may not change at all.

AWS supports most startups as well as all 17 US intelligence agencies — taking 350,000 PCs out of places like the CIA. Thank Edward Snowden for that one. They are enjoying great success, though AWS partners aren't enjoying themselves quite as much. Put simply, AWS is a pain to deal with if you are a customer big enough to be in personal communication and not just a credit card number. This, too, is like the old Microsoft.

by Robert X. Cringely, I, Cringely |  Read more:
Image: uncredited

Orcas vs Great White Sharks: In a Battle of the Apex Predators Who Wins?

 The great white shark, Carcharodon carcharias, is considered the most voracious apex predator in temperate marine ecosystems worldwide, playing a key role in controlling ecosystem dynamics.

As a result, it is difficult to imagine a great white as prey. And yet, earlier this year the carcasses of five great whites washed ashore along South Africa’s Western Cape province. Ranging in size from 2.7 metres (9ft) to 4.9 metres (16ft), the two females and three males all had one thing in common: holes puncturing the muscle wall between the pectoral fins. Strangest of all, their livers were missing.

The bite marks inflicted, together with confirmed sightings, indicate that orcas, Orcinus orca, were responsible for this precisely-targeted predation. Although the opening scene from Jaws II immediately springs to mind, in which an orca washes up with huge bite marks on it, the reality has turned out to be the exact opposite.

When comparing these two apex predators alongside each other, the stats read like a game of Top Trumps. Max length: great white 6.4 metres, orca 9.6 metres; max weight: great white 2,268kg, orca 9,000kg; burst swim speed: great white 45km/h, orca 48km/h. On paper, at least, it does seem that orcas have the edge.

The diet of orcas is often geographic or population specific. Those populations predating in South African waters have been documented targeting smaller shark species for their livers. Cow sharks, blues and makos caught on longlines have had their livers removed by orcas, alongside the brains of the billfish also caught. Cow shark carcasses without livers have also washed ashore near Cape Town, and again, this followed nearby orca sightings.

With no doubt that orcas are using highly specialised hunting strategies to target the liver, the real question is: why?

Shark livers are large, typically accounting for 5% or more of a shark’s total body weight. They are oil rich, with a principal component, squalene, serving as an energy store and providing buoyancy in the absence of the swim-bladder found in teleosts (bony fish).

Analysis of white shark livers in particular shows an extremely high total lipid content, dominated by triacylglycerols (>93%). This results in an energy density that is higher than whale blubber. For the sharks this serves as an energy storage unit to fuel migrations, growth and reproduction (Pethybridge et al 2014). For the orcas this is like eating a deep fried Mars Bar with added vitamins. Generally speaking, livers contain vitamin C, vitamin B12, folate, vitamin B6, niacin, riboflavin, vitamin A, iron, sodium and of course fat, carbohydrate and protein energy sources.

Since the attraction of this delicacy to the orca is clear, how exactly does an orca go about removing a great white shark’s liver? The evidence we have shows that it is done with some precision – the shark carcasses were not obliterated.

During a 1997 encounter off the Farallon Islands, off the coast of San Francisco, a group of whale watchers witnessed an orca ramming into the side of a great white shark, momentarily stunning it and allowing the orca to flip it over and hold it in place (ventral/belly up) for around 15 minutes, after which the orca began consuming its prey, much to the surprise of the whale watchers on board. A similar incident was captured on film off Costa Rica in 2014 – this time the orca's prey was a tiger shark. And it's not just sharks; orcas have been observed doing the same to stingrays too.

What the orcas were exploiting to their own advantage is a curious phenomenon known as “tonic immobility” (TI). This is a natural state of paralysis, which occurs when elasmobranchs are positioned ventral side up in the water column. For certain species of shark like the great white, which is unable to pump water across its gills unless it keeps swimming, the consequence of being maintained within this ‘tonic’ state for too long is final. Effectively, the orcas have learned how to drown their prey whilst minimising their own predatory exertion.

by Lauren Smith, The Guardian |  Read more:
Image: Composite: Rex Features and Getty Images

Saturday, November 18, 2017


Catrin Arno aka Catrin Welz-Stein 

Bob Dylan


Shadows are falling and I been here all day
It's too hot to sleep and time is running away
Feel like my soul has turned into steel
I've still got the scars that the sun didn't heal
There's not even room enough to be anywhere
It's not dark yet, but it's getting there
Well my sense of humanity is going down the drain
Behind every beautiful thing, there's been some kind of pain
She wrote me a letter and she wrote it so kind
She put down in writin' what was in her mind
I just don't see why I should even care
It's not dark yet, but it's getting there
Well I been to London and I been to gay Paree
I followed the river and I got to the sea
I've been down to the bottom of a whirlpool of lies
I ain't lookin' for nothin' in anyone's eyes
Sometimes my burden is more than I can bear
It's not dark yet, but it's getting there
I was born here and I'll die here, against my will
I know it looks like I'm movin' but I'm standin' still
Every nerve in my body is so naked and numb
I can't even remember what it was I came here to get away from
Don't even hear the murmur of a prayer
It's not dark yet, but it's getting there

Friday, November 17, 2017

Nota Bene #10: Notes on $450,312,500

Leonardo da Vinci’s “Salvator Mundi” Sells for $450.3 Million, Shattering Auction Highs
  1. The hammer price was a round $400,000,000, which means that the buyer's premium alone was more than $50 million. By convention, the buyer's premium goes to the auction house for its troubles, but you can be sure that Christie's grossed much less than $50,312,500 last night. The seller will have negotiated "enhanced hammer," which means that the Rybolovlev family will be receiving significantly more than $400 million. On top of that, the lot had a third-party guarantee, which means that Christie's has to split its profits with the guarantor. That said, even after a multi-million-dollar marketing campaign, Christie's surely made a healthy profit on this lot.

  2. The last time this painting was sold by an auction house was only four years ago, in 2013, when Sotheby's sold it privately to Yves Bouvier for $80 million. That decision, to go with a private sale rather than a glitzy public auction, now looks very, very stupid.

  3. Bouvier then flipped the work to Rybolovlev for $127.5 million. When Rybolovlev found out how much Bouvier made on the deal, he was furious, and basically gave up art collecting entirely. His decision to sell the painting was made in anger, out of pique that he had been ripped off. Now it seems he has made more money off one painting, in four years, than most art collectors dream of making in a lifetime. There's probably a moral here, but I have no idea what it is.

  4. The difference between the 2013 sale and the 2017 sale isn't just four years and $300+ million, it's also the difference between a private sale and a public sale. A public sale, at least when it's orchestrated by Christie's in the way that this one was, involves glitz and expensive marketing videos and hour-long lines and lighting worthy of a Thomas Kinkade store; it also ensures maximum publicity for, and public ratification of, the final sale price. Which is to say: Rybolovlev didn't own a $450 million painting, he owned an $80 million painting which he overpaid for by almost $50 million. But the new owner absolutely owns a $450 million painting, the only one in the world.

  5. That said, he doesn't own a very good painting. Even if some part of it was actually painted by Leonardo 500 years ago, most of it wasn't, and there's nothing in the 2017 version of the painting which would, from a connoisseur's perspective, place Leonardo in any kind of artistic pantheon. 

  6. Which explains, at least in part, why a centuries-old painting was sold in a Contemporary Art sale, rather than in the Old Masters sale where you'd think it belonged. The world of Old Masters is, still, a place where connoisseurship matters. In the Contemporary Art world, by contrast, the only people driving valuations are collectors. Christie's realized that they could bypass the cognoscenti and go straight to the art-buying public. That strategy, it turns out, can pay off handsomely. Especially since, at these levels, it's fair to say that Christie's has a personal relationship with every human being on the planet who's willing and able to pay $400 million for a painting. You can be sure that all of them were contacted by the auction house at some point over the past month. And you don't need to know anything about art to spend $450 million on a painting; all you need is $450 million.
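[ed. The arithmetic in point 1 checks out. Applying a tiered buyer's-premium schedule of the kind Christie's published at the time — assumed here to be 25% of the first $250,000, 20% of the next $3.75 million, and 12.5% of everything above $4 million — to a $400 million hammer price reproduces the $50,312,500 premium exactly:]

```python
def buyers_premium(hammer):
    """Buyer's premium under an assumed tiered schedule: 25% of the first
    $250,000, 20% from $250,000 to $4 million, 12.5% above $4 million."""
    tiers = [(250_000, 0.25), (4_000_000, 0.20), (float("inf"), 0.125)]
    premium, prev_cap = 0.0, 0
    for cap, rate in tiers:
        premium += rate * max(0, min(hammer, cap) - prev_cap)
        prev_cap = cap
    return premium


hammer = 400_000_000
premium = buyers_premium(hammer)  # the ~$50 million that goes to Christie's
total = hammer + premium          # the headline sale price
```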
by Felix Salmon |  Read more:
Image: Drew Angerer/Getty Images via NY Times