If you're a regular reader, you'll know that I believe two things about computers: first, that they are the most significant functional element of most modern artifacts, from cars to houses to hearing aids; and second, that we have dramatically failed to come to grips with this fact. We keep talking about whether 3D printers should be "allowed" to print guns, or whether computers should be "allowed" to make infringing copies, or whether your iPhone should be "allowed" to run software that Apple hasn't approved and put in its App Store.
Practically speaking, though, these all amount to the same question: how do we keep computers from executing certain instructions, even if the people who own those computers want to execute them? And the practical answer is, we can't.
Oh, you can make a device that goes a long way to preventing its owner from doing something bad. I have a blender with a great interlock that has thus far prevented me from absentmindedly slicing off my fingers or spraying the kitchen with a one-molecule-thick layer of milkshake. This interlock is the kind of thing that I'm very unlikely to accidentally disable, but if I decided to deliberately sabotage my blender so that it could run with the lid off, it would take me about ten minutes' work and the kind of tools we have in the kitchen junk-drawer.
This blender is a robot. It has an internal heating element that lets you use it as a slow-cooker, and there's a programmable timer for it. It's a computer in a fancy case that includes a whirling, razor-sharp blade. It's not much of a stretch to imagine the computer that controls it receiving instructions by network. Once you design a device to be controlled by a computer, you get the networked part virtually for free, in that the cheapest and most flexible commodity computers we have are designed to interface with networks and the cheapest, most powerful operating systems we have come with networking built in. For the most part, computer-controlled devices are born networked, and disabling their network capability requires a deliberate act.
My kitchen robot has the potential to do lots of harm, from hacking off my fingers to starting fires to running up massive power-bills while I'm away to creating a godawful mess. I am confident that we can do a lot to prevent this stuff: to prevent my robot from harming me through my own sloppiness, to prevent my robot from making mistakes that end up hurting me, and to prevent other people from taking over my robot and using it to hurt me.
The distinction here is between a robot that is designed to do what its owner wants – including asking "are you sure?" when its owner asks it to do something potentially stupid – and a robot that is designed to thwart its owner's wishes. The former is hard, important work and the latter is a fool's errand and dangerous to boot. (....)
Is there such a thing as a robot? An excellent paper by Ryan Calo proposes that there is such a thing as a robot, and that, moreover, many of the thorniest, most interesting legal problems on our horizon will involve them.
As interesting as the paper was, I am unconvinced. A robot is basically a computer that causes some physical change in the world. We can and do regulate machines, from cars to drills to implanted defibrillators. But the thing that distinguishes a power-drill from a robot-drill is that the robot-drill has a driver: a computer that operates it. Regulating that computer in the way that we regulate other machines – by mandating the characteristics of their manufacture – will be no more effective at preventing undesirable robotic outcomes than the copyright mandates of the past 20 years have been effective at preventing copyright infringement (that is, not at all).
But that isn't to say that robots are unregulatable – merely that the locus of the regulation needs to be somewhere other than in controlling the instructions you are allowed to give a computer. For example, we might mandate that manufacturers subject code to a certain suite of rigorous public reviews, or that the code be able to respond correctly in a set of circumstances (in the case of a self-driving car, this would basically be a driving test for robots). Insurers might require certain practices in product design as a condition of cover. Courts might find liability for certain programming practices and not for others. Consumer groups like Which? and Consumers Union might publish advice about things that purchasers should look for when buying devices. Professional certification bodies, such as national colleges of engineering, might enshrine principles of ethical software practice into their codes of conduct, and strike off members found to be unethical according to these principles.
by Cory Doctorow, The Guardian | Read more:
Image: Blutgruppe/Corbis