Thursday, December 1, 2016

Humanity’s Greatest Fear is About Being Irrelevant

Genevieve Bell is an Australian anthropologist who has been working at tech company Intel for 18 years, where she is currently head of sensing and insights. She has given numerous TED talks and in 2012 was inducted into the Women in Technology hall of fame. Between 2008 and 2010, she was also South Australia’s thinker in residence.

Why does a company such as Intel need an anthropologist?

That is a question I’ve spent 18 years asking myself. It’s not a contradiction in terms, but it is a puzzle. When they hired me, I think they understood something that not everyone in the tech industry understood, which was that technology was about to undergo a rapid transformation. Computers went from being on an office desk spewing out Excel to inhabiting our homes and lives, and we needed to have a point of view about what that was going to look like. It was incredibly important to understand the human questions, such as: what on earth are people going to do with that computational power? If we could anticipate just a little bit, that would give us a business edge and the ability to make better technical decisions. But as an anthropologist that’s a weird place to be. We tend to be rooted in the present – what are people doing now and why? – rather than long-term strategic stuff. (...)

You are often described as a futurologist. A lot of people are worried about the future. Are they right to be concerned?

That technology is accompanied by anxiety is not a new thing. We have anxieties about certain types of technology and there are reasons for that. We’re coming up to the 200th anniversary of Mary Shelley’s Frankenstein and the images in it have persisted.

Shelley’s story worked because it tapped into a set of cultural anxieties. The Frankenstein anxiety is not the reason we worried about the motor car or electricity, but if you think about how some people write about robotics, AI and big data, those concerns have profound echoes going back to the Frankenstein anxieties 200 years ago.

What is the Frankenstein anxiety?

Western culture has some anxieties about what happens when humans try to bring something to life, whether it’s the Judeo-Christian stories of the golem or James Cameron’s The Terminator.

So what is the anxiety about? My suspicion is that it’s not about the life-making, it’s about how we feel about being human. What we are seeing now isn’t an anxiety about artificial intelligence per se, it’s about what it says about us. That if you can make something like us, where does it leave us? And that concern isn’t universal, as other cultures have very different responses to AI, to big data. The most obvious one to me would be the Japanese robotic tradition, where people are willing to imagine the role of robots as far more expansive than you find in the west. For example, the Japanese roboticist Masahiro Mori published a book called The Buddha in the Robot, where he suggests that robots would be better Buddhists than humans because they are capable of infinite invocations.

So are you suggesting that robots could have religion?

It’s an extraordinary provocation.

So you don’t agree with Stephen Hawking when he says that AI is likely “either the best or the worst thing ever to happen to humanity”?

Mori’s argument was that we project our own anxieties, and when we ask: “Will the robots kill us?”, what we are really asking is: “Will we kill us?” Coming from a Japanese man who lived through the 20th century, that might not be an unreasonable question. He wonders what would happen if we were to take as our starting point that technology could be our best angels, not our worst – it’s an interesting thought exercise. When I see some of the big thinkers of our day contemplating the arc of artificial intelligence, what I see is not necessarily a critique of the technology itself but a critique of us. We are building the engines, so what we build into them is what they will be. The question is not whether AI will rise up and kill us; rather, it is whether we will give it the tools to do so. (...)

A lot of the work you do examines the intersection between the intended use of a device and how people actually use it – the disconnection between the two. Could you talk about something you’re researching at the moment?

I’m interested in how animals are connected to the internet and how we might be able to see the world from an animal’s point of view. There’s something very interesting in someone else’s vantage point, which might have a truth to it. Take, for instance, the tagging of cows for automatic milking machines, so that the cows can choose when to be milked. Cows went from being milked twice a day to being milked three to six times a day, which is great for the farm’s productivity and results in happier cows. But it’s also faintly disquieting that the technology makes the desires of cows clear to us – visible in ways they weren’t before. So what does one do with that knowledge? One of the unintended consequences of big data and the internet of things is that some things will become visible and compel us to confront them.

by Ian Tucker, The Guardian |  Read more:
Image: Leah Nash/NYT/Eyevine