The PC is dead. Rising numbers of mobile, lightweight, cloud-centric devices don't merely represent a change in form factor. Rather, we're seeing an unprecedented shift of power from end users and software developers on the one hand, to operating system vendors on the other—and even those who keep their PCs are being swept along. This is a little for the better, and much for the worse.
For decades we've enjoyed a simple way for people to create software and share or sell it to others. People bought general-purpose computers—PCs, including those that say Mac. Those computers came with operating systems that took care of the basics. Anyone could write and run software for an operating system, and up popped an endless assortment of spreadsheets, word processors, instant messengers, Web browsers, e-mail, and games. That software ranged from the sublime to the ridiculous to the dangerous—and there was no referee except the user's good taste and sense, with a little help from nearby nerds or antivirus software. (This worked so long as the antivirus software was not itself malware, a phenomenon that turned out to be distressingly common.)
Choosing an OS used to mean taking a bit of a plunge: since software was anchored to it, a choice of, say, Windows over Mac meant a long-term choice between different available software collections. Even if a software developer offered versions of its wares for each OS, switching from one OS to another typically meant having to buy that software all over again.
That was one reason we ended up with a single dominant OS for over two decades. People had Windows, which made software developers want to write for Windows, which made more people want to buy Windows, which made it even more appealing to software developers, and so on. In the 1990s, both the U.S. and European governments went after Microsoft in a legendary and yet, today, easily forgettable antitrust battle. Their main complaint? That Microsoft had put a thumb on the scale in competition between its own Internet Explorer browser and its primary competitor, Netscape Navigator. Microsoft did this by telling PC makers that they had to ensure that Internet Explorer was ready and waiting on the user's Windows desktop when the user unpacked the computer and set it up, whether the PC makers wanted to or not. Netscape could still be prebundled with Windows, as far as Microsoft was concerned. Years of litigation and oceans of legal documents can thus be boiled down into an essential original sin: an OS maker had unduly favored its own applications.
When the iPhone came out in 2007, its design was far more restrictive. No outside code at all was allowed on the phone; all the software on it was Apple's. What made this unremarkable—and unobjectionable—was that it was a phone, not a computer, and most competing phones were equally locked down. We counted on computers to be open platforms—hard to think of them any other way—and understood phones as appliances, more akin to radios, TVs, and coffee machines.
Then, in 2008, Apple announced a software development kit for the iPhone. Third-party developers would be welcome to write software for the phone, in just the way they'd done for years with Windows and Mac OS. With one epic exception: users could install software on a phone only if it was offered through Apple's iPhone App Store. Developers were to be accredited by Apple, and then each individual app was to be vetted, at first under standards that could be inferred only through what made it through and what didn't. For example, apps that emulated or even improved on Apple's own apps weren't allowed.
The original sin behind the Microsoft case was made much worse. The issue wasn't whether it would be possible to buy an iPhone without Apple's Safari browser. It was that no other browser would be permitted—or, if permitted, it would be only through Apple's ongoing sufferance. And every app sold for the iPhone would have 30 percent of its price (and later, that of its "in-app purchases") go to Apple. Famously proprietary Microsoft never dared to extract a tax on every piece of software written by others for Windows—perhaps because, in the absence of consistent Internet access in the 1990s through which to manage purchases and licenses, there'd be no realistic way to make it happen.
Fast forward 15 years, and that's just what Apple did with its iOS App Store.
by Jonathan Zittrain, MIT