A new digital revolution is coming, this time in fabrication. It draws on the same insights that led to the earlier digitizations of communication and computation, but now what is being programmed is the physical world rather than the virtual one. Digital fabrication will allow individuals to design and produce tangible objects on demand, wherever and whenever they need them. Widespread access to these technologies will challenge traditional models of business, aid, and education.
The roots of the revolution date back to 1952, when researchers at the Massachusetts Institute of Technology (MIT) wired an early digital computer to a milling machine, creating the first numerically controlled machine tool. By using a computer program instead of a machinist to turn the screws that moved the metal stock, the researchers were able to produce aircraft components with shapes that were more complex than could be made by hand. From that first revolving end mill, all sorts of cutting tools have been mounted on computer-controlled platforms, including jets of water carrying abrasives that can cut through hard materials, lasers that can quickly carve fine features, and slender electrically charged wires that can make long thin cuts.
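To make the principle concrete, here is a brief sketch, invented purely for illustration rather than drawn from those early machines, of how a shape described as data becomes motion. A short program converts a list of coordinates into G-code, the command language that most computer-controlled mills and routers accept today; the square outline, cutting depth, and feed rate are made up for the example.

```python
# Illustrative sketch: how a numerically controlled machine is driven by data.
# A part outline is described as a list of (x, y) coordinates in millimeters,
# and a program translates it into G-code commands. The square path, depth,
# and feed rate below are invented for illustration.

square = [(0, 0), (40, 0), (40, 40), (0, 40), (0, 0)]  # a 40 mm square outline

def toolpath_to_gcode(points, depth_mm=-1.0, feed_mm_per_min=300):
    """Convert a 2-D outline into a single-pass cutting program."""
    lines = [
        "G21",          # use millimeters
        "G90",          # absolute coordinates
        "G0 Z5.0",      # lift the tool clear of the stock
        f"G0 X{points[0][0]:.3f} Y{points[0][1]:.3f}",  # rapid move to the start point
        f"G1 Z{depth_mm:.3f} F{feed_mm_per_min}",       # plunge to cutting depth
    ]
    for x, y in points[1:]:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed_mm_per_min}")  # cut along the outline
    lines += ["G0 Z5.0", "M2"]  # retract the tool and end the program
    return "\n".join(lines)

print(toolpath_to_gcode(square))
```

Running the sketch prints a handful of commands that a machine controller would step through, moving the cutting tool along the outline, which is all that "numerical control" means: the design file, not a machinist's hands, directs the machine.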
Today, numerically controlled machines touch almost every commercial product, whether directly (producing everything from laptop cases to jet engines) or indirectly (producing the tools that mold and stamp mass-produced goods). And yet all these modern descendants of the first numerically controlled machine tool share its original limitation: they can cut, but they cannot reach internal structures. This means, for example, that the axle of a wheel must be manufactured separately from the bearing it passes through.
In the 1980s, however, computer-controlled fabrication processes that added rather than removed material (called additive manufacturing) came on the market. Thanks to 3-D printing, a bearing and an axle could be built by the same machine at the same time. A range of 3-D printing processes are now available, including thermally fusing plastic filaments, using ultraviolet light to cross-link polymer resins, depositing adhesive droplets to bind a powder, cutting and laminating sheets of paper, and shining a laser beam to fuse metal particles. Businesses already use 3-D printers to model products before producing them, a process referred to as rapid prototyping. Companies also rely on the technology to make objects with complex shapes, such as jewelry and medical implants. Research groups have even used 3-D printers to build structures out of cells with the goal of printing living organs.
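The essential move in all these processes is the same: the machine deposits material one thin layer at a time, which is why a bearing and the axle inside it can be built together without ever being assembled. The toy sketch below, invented for this argument and not the software of any particular printer, slices a simple description of a shaft sitting inside a ring into printable layers; every dimension is made up.

```python
# Illustrative sketch of the layer-by-layer logic behind additive manufacturing.
# A part is described as a rule for deciding whether a point lies inside solid
# material; the machine then deposits material one thin layer at a time. The
# example part is a shaft inside a surrounding ring with a clearance gap, so
# the two pieces are built together yet remain free to rotate.

SHAFT_R, BORE_R, RING_R = 3.0, 4.0, 8.0   # radii in millimeters (invented)
HEIGHT, LAYER = 10.0, 2.5                 # part height and layer thickness

def solid(x, y, z):
    """Return True wherever material should be deposited."""
    r = (x * x + y * y) ** 0.5
    shaft = r <= SHAFT_R                  # the inner axle
    ring = BORE_R <= r <= RING_R          # the outer bearing, with a gap around the axle
    return 0.0 <= z <= HEIGHT and (shaft or ring)

def slice_layer(z, step=1.0, extent=9.0):
    """Rasterize one horizontal layer into a crude character map."""
    rows = []
    y = -extent
    while y <= extent:
        row = ""
        x = -extent
        while x <= extent:
            row += "#" if solid(x, y, z) else "."
            x += step
        rows.append(row)
        y += step
    return "\n".join(rows)

z = LAYER / 2
while z < HEIGHT:                         # the machine repeats this for every layer
    print(f"--- layer at z = {z:.2f} mm ---")
    print(slice_layer(z))
    z += LAYER
```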
Additive manufacturing has been widely hailed as a revolution, featured on the cover of publications from Wired to The Economist. This is, however, a curious sort of revolution, proclaimed more by its observers than its practitioners. In a well-equipped workshop, a 3-D printer might be used for about a quarter of the jobs, with other machines doing the rest. One reason is that the printers are slow, taking hours or even days to make things. Other computer-controlled tools can produce parts faster, or with finer features, or that are larger, lighter, or stronger. Glowing articles about 3-D printers read like the stories in the 1950s that proclaimed that microwave ovens were the future of cooking. Microwaves are convenient, but they don’t replace the rest of the kitchen.
The revolution is not additive versus subtractive manufacturing; it is the ability to turn data into things and things into data. That is what is coming; to put it in perspective, consider the close analogy with the history of computing. The first step in that development was the arrival of large mainframe computers in the 1950s, which only corporations, governments, and elite institutions could afford. Next came the development of minicomputers in the 1960s, led by Digital Equipment Corporation’s PDP family of computers, which was based on MIT’s first transistorized computer, the TX-0. Minicomputers brought the cost of a computer down from hundreds of thousands of dollars to tens of thousands. That was still too much for an individual but was affordable for research groups, university departments, and smaller companies. The people who used these machines developed the applications for just about everything one now does on a computer: sending e-mail, writing in a word processor, playing video games, listening to music. After minicomputers came hobbyist computers. The best known of these, the MITS Altair 8800, was sold in 1975 for about $1,000 assembled or about $400 in kit form. Its capabilities were rudimentary, but it changed the lives of a generation of computing pioneers, who could now own machines of their own. Finally, computing truly turned personal with the appearance of the IBM personal computer in 1981. It was relatively compact, easy to use, useful, and affordable.
Just as with the old mainframes, only institutions can afford the modern descendants of those early computer-controlled milling machines, which remain bulky and expensive. In the 1980s, first-generation rapid prototyping systems from companies such as 3D Systems, Stratasys, Epilog Laser, and Universal brought the price of computer-controlled manufacturing systems down from hundreds of thousands of dollars to tens of thousands, making them attractive to research groups. The next-generation digital fabrication products on the market now, such as the RepRap, the MakerBot, the Ultimaker, the PopFab, and the MTM Snap, sell for thousands of dollars assembled or hundreds of dollars as parts. Unlike the digital fabrication tools that came before them, these tools have plans that are typically freely shared, so that those who own the tools (like those who owned the hobbyist computers) can not only use them but also make more of them and modify them. Integrated personal digital fabricators comparable to the personal computer do not yet exist, but they will.
Personal fabrication has been around for years as a science-fiction staple. When the crew of the TV series Star Trek: The Next Generation was confronted by a particularly challenging plot development, they could use the onboard replicator to make whatever they needed. Scientists at a number of labs (including mine) are now working on the real thing, developing processes that can place individual atoms and molecules into whatever structure they want. Unlike 3-D printers today, these will be able to build complete functional systems at once, with no need for parts to be assembled. The aim is to produce not only the parts for a drone, for example, but a complete vehicle that can fly right out of the printer. This goal is still years away, but it is not necessary to wait: most of the computer functions one uses today were invented in the minicomputer era, long before they would flourish in the era of personal computing. Similarly, although today’s digital manufacturing machines are still in their infancy, they can already be used to make (almost) anything, anywhere. That changes everything.