Saturday, June 2, 2012
How Octopuses Make Themselves Invisible
[ed. Amazing. I knew they had camouflage capabilities, but this is beyond the beyonds.]
Do You Really Want to Live Forever?
Imagine you are offered a trustworthy opportunity for immortality in which your mind (perhaps also your body) will persist eternally. Let’s further stipulate that the offer includes perpetual youthful health and the ability to upgrade to any cognitive and physical technologies that become available in the future. There is one more stipulation: You could never decide later to die. Would you take it? Metaphysician and former British diplomat Stephen Cave thinks accepting such an offer would be a bad idea.
Cave’s fascinating new book, Immortality, posits that civilization is a major side effect of humanity's attempts to live forever. He argues that our sophisticated minds inexorably recognize that, like all other living things, we will one day die. Simultaneously, Cave asserts, “The one thing that these minds cannot imagine is that very state of nonexistence; it is literally inconceivable. Death therefore presents itself as both inevitable and impossible. This is what I will call the Mortality Paradox, and its resolution is what gives shape to the immortality narratives, and therefore to civilization.”
Cave identifies four immortality narratives that drive civilizations over time, which he calls: (1) Staying Alive, (2) Resurrection, (3) Soul, and (4) Legacy. Cave gracefully marches through his four immortality narratives, citing examples from history, psychology, and religion up to the modern day. “At its core, a civilization is a collection of life extension technologies: agriculture to ensure food in steady supply, clothing to stave off cold, architecture to provide shelter and safety, better weapons for hunting and defense, and medicine to combat injury and disease,” he writes.
In the Staying Alive narrative, Cave opens with the quest of the First Emperor of China to find the elixir of life, but soon lands us in the 21st century, where transhumanists aim to use modern science to finally achieve the goal of perpetual youthful life. He notes that in the last century, average human life expectancy has in fact doubled.
Why not simply repair the damage caused by aging, thus defeating physical death? This is the goal of transhumanists like theoretical biogerontologist Aubrey de Grey, who has devised the Strategies for Engineered Negligible Senescence (SENS) program. SENS technologies would include genetic interventions to rejuvenate cells, stem cell transplants to replace aged organs and tissues, and nano-machines to patrol our bodies to prevent infections and kill nascent cancers. Ultimately, Cave cannot argue that these life-extension technologies will not work for individuals, but suggests that they would produce problems like overpopulation and environmental collapse that would eventually subvert them. He also cites a demographer's calculation that, even assuming aging and disease are defeated by biomedical technology, accidents would still do in would-be immortals: the average life expectancy of these medical immortals would be 5,775 years. Frankly, I would be happy to take that.
Resurrection is his next immortality narrative. Of course, the most prevalent resurrection story is that of Jesus of Nazareth 2,000 years ago. The New Testament explicitly states that one day every individual will once again live in his or her real but improved physical bodies. Physical resurrection is also the orthodox belief of the other two Abrahamic religions, Judaism and Islam. Thus, Cave notes, half of the world’s population officially believes in the future resurrection of their physical bodies. He adds, however, that many Christians, Jews, and Muslims actually subscribe to another immortality narrative, Soul.
Cave identifies three major problems with the Resurrection narrative: the Cannibal problem, the Transformation problem, and the Duplication problem. Briefly, if resurrection is to mean anything, it must mean that a specific individual is brought back to life. The question is what happens when atoms have been shared by more than one person: Who gets to use the specific nitrogen and carbon atoms when everyone is brought back to life? I don’t think that that is much of a problem, since atoms are interchangeable and presumably God could simply put any random carbon and nitrogen atoms back in the same places they were in your physical body. They needn’t be the exact same atoms that you had when you died.
The Transformation problem is harder. Many believers will have died old, decrepit, and demented. That’s not how they believe they will be resurrected; they expect to get better, incorruptible bodies. By being thus transformed, would the resurrected believer really be the same person who had died, or a different person? And then there is the problem of duplication. God could reassemble a believer not just as she was when she died, but also as, say, a 5-year-old girl. Cave argues that these three problems call into question the notion that it would truly be a specific individual believer rising from the grave. (...)
Cave notes that this focus on preserving a person’s mind leads other modern would-be computational resurrectionists to argue for uploading minds (information encoded in an individual’s brain) onto another piece of hardware, an electronic avatar, a robot, or another brain which would be psychologically identical to the original mind. Cave argues that computational resurrection does not actually achieve immortality for a specific individual, but merely makes an exact psychological copy of him. There is the additional problem that if minds can be digitized they can be duplicated many times. If this occurs who then is the original resurrectee? “When you closed your eyes on your deathbed, you could not expect to open them again in silicon form,” he explains. The result of mind uploading “would all just be high-tech ways of producing a counterfeit you.” (...)
The most popular immortality narrative is Soul. Most Christians now believe that their souls, which persist after death, will be reunited with their resurrected bodies. Souls thus solve a lot of the identity problems associated with the earlier Resurrection narrative. Cave argues that Soul narrative resolves the Mortality Paradox by denying “that the failing body is the true self, identifying the person instead with exactly that mental life that seems so inextinguishable.” In Christianity all souls are equal before God, so if the omnipotent and omniscient Creator of the universe is interested in your life then who are your politicians to ignore your desires?
What about the afterlife? Cave cites American evangelist James L. Garlow who says that in Heaven “your every desire is satisfied more abundantly than you’ve ever dreamed.” But what if your desire is to be reunited with your wife who instead desires to spend her eternity with her childhood sweetheart? A more sophisticated theocentric view of the soul’s afterlife is that Heaven is the eternal exaltation of God. But what can this mean? Cave points out that an afterlife without time is not really a life at all. “Everything that makes up a human life—experience, learning, growth, communication, even singing hosannas—requires the passage of time. Without time, nothing can happen; it is a state of stasis, a cessation of thought and action,” he argues. “The attraction of the soul view was the unique aura it gave to every individual life, but its logical conclusion is an eternity of nothing, with life negated altogether.”
by Ronald Bailey, Reason | Read more:
Friday, June 1, 2012
Silence and Word
As we draw near to World Communications Day 2012, I would like to share with you some reflections concerning an aspect of the human process of communication which, despite its importance, is often overlooked and which, at the present time, it would seem especially necessary to recall. It concerns the relationship between silence and word: two aspects of communication which need to be kept in balance, to alternate and to be integrated with one another if authentic dialogue and deep closeness between people are to be achieved. When word and silence become mutually exclusive, communication breaks down, either because it gives rise to confusion or because, on the contrary, it creates an atmosphere of coldness; when they complement one another, however, communication acquires value and meaning.
Silence is an integral element of communication; in its absence, words rich in content cannot exist. In silence, we are better able to listen to and understand ourselves; ideas come to birth and acquire depth; we understand with greater clarity what it is we want to say and what we expect from others; and we choose how to express ourselves. By remaining silent we allow the other person to speak, to express him or herself; and we avoid being tied simply to our own words and ideas without them being adequately tested. In this way, space is created for mutual listening, and deeper human relationships become possible. It is often in silence, for example, that we observe the most authentic communication taking place between people who are in love: gestures, facial expressions and body language are signs by which they reveal themselves to each other. Joy, anxiety, and suffering can all be communicated in silence – indeed it provides them with a particularly powerful mode of expression. Silence, then, gives rise to even more active communication, requiring sensitivity and a capacity to listen that often makes manifest the true measure and nature of the relationships involved. When messages and information are plentiful, silence becomes essential if we are to distinguish what is important from what is insignificant or secondary. Deeper reflection helps us to discover the links between events that at first sight seem unconnected, to make evaluations, to analyze messages; this makes it possible to share thoughtful and relevant opinions, giving rise to an authentic body of shared knowledge. For this to happen, it is necessary to develop an appropriate environment, a kind of ‘eco-system’ that maintains a just equilibrium between silence, words, images and sounds.
The process of communication nowadays is largely fuelled by questions in search of answers. Search engines and social networks have become the starting point of communication for many people who are seeking advice, ideas, information and answers. In our time, the internet is becoming ever more a forum for questions and answers – indeed, people today are frequently bombarded with answers to questions they have never asked and to needs of which they were unaware. If we are to recognize and focus upon the truly important questions, then silence is a precious commodity that enables us to exercise proper discernment in the face of the surcharge of stimuli and data that we receive. Amid the complexity and diversity of the world of communications, however, many people find themselves confronted with the ultimate questions of human existence: Who am I? What can I know? What ought I to do? What may I hope? It is important to affirm those who ask these questions, and to open up the possibility of a profound dialogue, by means of words and interchange, but also through the call to silent reflection, something that is often more eloquent than a hasty answer and permits seekers to reach into the depths of their being and open themselves to the path towards knowledge that God has inscribed in human hearts.
by Pope Benedictus XVI | Read more:
Morals and the Machine
[ed. The emerging field of machine ethics. Can it keep pace with the development of robotic technology?]
In the classic science-fiction film “2001”, the ship’s computer, HAL, faces a dilemma. His instructions require him both to fulfil the ship’s mission (investigating an artefact near Jupiter) and to keep the mission’s true purpose secret from the ship’s crew. To resolve the contradiction, he tries to kill the crew.
As robots become more autonomous, the notion of computer-controlled machines facing ethical decisions is moving out of the realm of science fiction and into the real world. Society needs to find ways to ensure that they are better equipped to make moral judgments than HAL was.
A bestiary of robots
Military technology, unsurprisingly, is at the forefront of the march towards self-determining machines (see Technology Quarterly). Its evolution is producing an extraordinary variety of species. The Sand Flea can leap through a window or onto a roof, filming all the while. It then rolls along on wheels until it needs to jump again. RiSE, a six-legged robo-cockroach, can climb walls. LS3, a dog-like robot, trots behind a human over rough terrain, carrying up to 180kg of supplies. SUGV, a briefcase-sized robot, can identify a man in a crowd and follow him. There is a flying surveillance drone the weight of a wedding ring, and one that carries 2.7 tonnes of bombs.
Robots are spreading in the civilian world, too, from the flight deck to the operating theatre (see article). Passenger aircraft have long been able to land themselves. Driverless trains are commonplace. Volvo’s new V40 hatchback essentially drives itself in heavy traffic. It can brake when it senses an imminent collision, as can Ford’s B-Max minivan. Fully self-driving vehicles are being tested around the world. Google’s driverless cars have clocked up more than 250,000 miles in America, and Nevada has become the first state to regulate such trials on public roads. In Barcelona a few days ago, Volvo demonstrated a platoon of autonomous cars on a motorway.
As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming—or at least appearing to assume—moral agency. Weapons systems currently have human operators “in the loop”, but as they grow more sophisticated, it will be possible to shift to “on the loop” operation, with machines carrying out orders autonomously.
As that happens, they will be presented with ethical dilemmas. Should a drone fire on a house where a target is known to be hiding, which may also be sheltering civilians? Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Should a robot involved in disaster recovery tell people the truth about what is happening if that risks causing a panic? Such questions have led to the emergence of the field of “machine ethics”, which aims to give machines the ability to make such choices appropriately—in other words, to tell right from wrong.
by The Economist | Read more:
Illustration by Derek Bacon
The 1 Percent’s Problem
Let’s start by laying down the baseline premise: inequality in America has been widening for decades. We’re all aware of the fact. Yes, there are some on the right who deny this reality, but serious analysts across the political spectrum take it for granted. I won’t run through all the evidence here, except to say that the gap between the 1 percent and the 99 percent is vast when looked at in terms of annual income, and even vaster when looked at in terms of wealth—that is, in terms of accumulated capital and other assets. Consider the Walton family: the six heirs to the Walmart empire possess a combined wealth of some $90 billion, which is equivalent to the wealth of the entire bottom 30 percent of U.S. society. (Many at the bottom have zero or negative net worth, especially after the housing debacle.) Warren Buffett put the matter correctly when he said, “There’s been class warfare going on for the last 20 years and my class has won.”
So, no: there’s little debate over the basic fact of widening inequality. The debate is over its meaning. From the right, you sometimes hear the argument made that inequality is basically a good thing: as the rich increasingly benefit, so does everyone else. This argument is false: while the rich have been growing richer, most Americans (and not just those at the bottom) have been unable to maintain their standard of living, let alone to keep pace. A typical full-time male worker receives the same income today he did a third of a century ago.
From the left, meanwhile, the widening inequality often elicits an appeal for simple justice: why should so few have so much when so many have so little? It’s not hard to see why, in a market-driven age where justice itself is a commodity to be bought and sold, some would dismiss that argument as the stuff of pious sentiment.
Put sentiment aside. There are good reasons why plutocrats should care about inequality anyway—even if they’re thinking only about themselves. The rich do not exist in a vacuum. They need a functioning society around them to sustain their position. Widely unequal societies do not function efficiently and their economies are neither stable nor sustainable. The evidence from history and from around the modern world is unequivocal: there comes a point when inequality spirals into economic dysfunction for the whole society, and when it does, even the rich pay a steep price.
Let me run through a few reasons why.
The Consumption Problem
When one interest group holds too much power, it succeeds in getting policies that help itself in the short term rather than help society as a whole over the long term. This is what has happened in America when it comes to tax policy, regulatory policy, and public investment. The consequence of channeling gains in income and wealth in one direction only is easy to see when it comes to ordinary household spending, which is one of the engines of the American economy.
It is no accident that the periods in which the broadest cross sections of Americans have reported higher net incomes—when inequality has been reduced, partly as a result of progressive taxation—have been the periods in which the U.S. economy has grown the fastest. It is likewise no accident that the current recession, like the Great Depression, was preceded by large increases in inequality. When too much money is concentrated at the top of society, spending by the average American is necessarily reduced—or at least it will be in the absence of some artificial prop. Moving money from the bottom to the top lowers consumption because higher-income individuals consume, as a fraction of their income, less than lower-income individuals do.
In our imaginations, it doesn’t always seem as if this is the case, because spending by the wealthy is so conspicuous. Just look at the color photographs in the back pages of the weekend Wall Street Journal of houses for sale. But the phenomenon makes sense when you do the math. Consider someone like Mitt Romney, whose income in 2010 was $21.7 million. Even if Romney chose to live a much more indulgent lifestyle, he would spend only a fraction of that sum in a typical year to support himself and his wife in their several homes. But take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.
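The arithmetic here can be made concrete. The sketch below, in Python, compares total spending when $21.7 million goes to one household versus 500 jobs at $43,400 apiece; the propensities to consume are hypothetical figures chosen only to illustrate the direction of the effect the article describes, not estimates from the article itself.

```python
# Illustrative sketch: the same income generates more total spending
# when spread across many households, because lower-income households
# spend a larger fraction of what they earn.

TOTAL_INCOME = 21_700_000  # Romney's 2010 income, per the article

# Case 1: the entire sum goes to one high-income household.
# The fraction spent is an assumed, illustrative figure.
rich_propensity = 0.10
spending_concentrated = TOTAL_INCOME * rich_propensity

# Case 2: the same sum divided into 500 jobs, as in the article.
jobs = 500
wage = TOTAL_INCOME / jobs  # works out to $43,400 apiece
worker_propensity = 0.95    # assumed: modest incomes are almost all spent
spending_distributed = jobs * wage * worker_propensity

print(f"Wage per job:  ${wage:,.0f}")
print(f"Concentrated:  ${spending_concentrated:,.0f} spent")
print(f"Distributed:   ${spending_distributed:,.0f} spent")
```

Under any plausible pair of propensities with the rich spending a smaller fraction, the distributed case yields more aggregate consumption, which is the mechanism the next paragraph generalizes.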
The relationship is straightforward and ironclad: as more money becomes concentrated at the top, aggregate demand goes into a decline. Unless something else happens by way of intervention, total demand in the economy will be less than what the economy is capable of supplying—and that means that there will be growing unemployment, which will dampen demand even further. In the 1990s that “something else” was the tech bubble. In the first decade of the 21st century, it was the housing bubble. Today, the only recourse, amid deep recession, is government spending—which is exactly what those at the top are now hoping to curb.
by Joseph E. Stiglitz, Vanity Fair | Read more:
Stephen Doyle
So, no: there’s little debate over the basic fact of widening inequality. The debate is over its meaning. From the right, you sometimes hear the argument made that inequality is basically a good thing: as the rich increasingly benefit, so does everyone else. This argument is false: while the rich have been growing richer, most Americans (and not just those at the bottom) have been unable to maintain their standard of living, let alone to keep pace. A typical full-time male worker receives the same income today he did a third of a century ago.
From the left, meanwhile, the widening inequality often elicits an appeal for simple justice: why should so few have so much when so many have so little? It’s not hard to see why, in a market-driven age where justice itself is a commodity to be bought and sold, some would dismiss that argument as the stuff of pious sentiment.
Put sentiment aside. There are good reasons why plutocrats should care about inequality anyway—even if they’re thinking only about themselves. The rich do not exist in a vacuum. They need a functioning society around them to sustain their position. Widely unequal societies do not function efficiently and their economies are neither stable nor sustainable. The evidence from history and from around the modern world is unequivocal: there comes a point when inequality spirals into economic dysfunction for the whole society, and when it does, even the rich pay a steep price.
Let me run through a few reasons why.
The Consumption Problem
When one interest group holds too much power, it succeeds in getting policies that help itself in the short term rather than help society as a whole over the long term. This is what has happened in America when it comes to tax policy, regulatory policy, and public investment. The consequence of channeling gains in income and wealth in one direction only is easy to see when it comes to ordinary household spending, which is one of the engines of the American economy.
It is no accident that the periods in which the broadest cross sections of Americans have reported higher net incomes—when inequality has been reduced, partly as a result of progressive taxation—have been the periods in which the U.S. economy has grown the fastest. It is likewise no accident that the current recession, like the Great Depression, was preceded by large increases in inequality. When too much money is concentrated at the top of society, spending by the average American is necessarily reduced—or at least it will be in the absence of some artificial prop. Moving money from the bottom to the top lowers consumption because higher-income individuals consume, as a fraction of their income, less than lower-income individuals do.
In our imaginations, it doesn’t always seem as if this is the case, because spending by the wealthy is so conspicuous. Just look at the color photographs in the back pages of the weekend Wall Street Journal of houses for sale. But the phenomenon makes sense when you do the math. Consider someone like Mitt Romney, whose income in 2010 was $21.7 million. Even if Romney chose to live a much more indulgent lifestyle, he would spend only a fraction of that sum in a typical year to support himself and his wife in their several homes. But take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.
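The arithmetic in this passage can be sketched in a few lines. The income and salary figures come from the article; the marginal propensities to consume are illustrative assumptions, not the author's numbers:

```python
# Figures from the article: Romney's 2010 income, split into 500 jobs.
income = 21_700_000
jobs = 500
salary = income / jobs  # 43,400 per job, as in the article

# Assumed marginal propensities to consume (illustrative only):
# the wealthy spend a small fraction of income, wage earners spend most.
mpc_top, mpc_worker = 0.1, 0.9

spending_if_concentrated = income * mpc_top     # 2.17 million consumed
spending_if_distributed = income * mpc_worker   # 19.53 million consumed
extra_demand = spending_if_distributed - spending_if_concentrated
```

Under these assumed propensities, paying the same sum out as wages generates several times the consumer spending, which is the mechanism the paragraph describes.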
The relationship is straightforward and ironclad: as more money becomes concentrated at the top, aggregate demand goes into a decline. Unless something else happens by way of intervention, total demand in the economy will be less than what the economy is capable of supplying—and that means that there will be growing unemployment, which will dampen demand even further. In the 1990s that “something else” was the tech bubble. In the first decade of the 21st century, it was the housing bubble. Today, the only recourse, amid deep recession, is government spending—which is exactly what those at the top are now hoping to curb.
by Joseph E. Stiglitz, Vanity Fair | Read more:
Stephen Doyle
There’s No Stopping the Rise of E-Money
...All this activity has people once again talking about a cashless society. Because let’s face it: Cash is expensive. In the United States, for instance, studies indicate that maintaining a cash system—including printing new bills, recycling old ones, moving them about in armored trucks, using them to replenish automatic cash machines—costs the country about 1 percent of GDP. Those studies also show that the marginal cost of a cash transaction is around double that of a debit-card transaction.
Cash’s indirect costs are huge, too. In a 2011 study [PDF], Edgar L. Feige of the University of Wisconsin, in Madison, and Richard Cebula of Jacksonville University, in Florida, found that in the United States 18 to 19 percent of total reportable income is hidden from federal tax men, a shortfall of about US $500 billion. The Justice Department estimated in 2008 that secret offshore bank accounts were responsible for about one-fifth of the tax gap, suggesting that the remaining 80 percent is attributable to unreported cash. (...)
Thus the allure of the mobile phone as an alternative to cash. The enabling technology has finally arrived, and it’s taking root because the business drivers (that is, the high cost of cash) and the social drivers (cash’s disproportionate cost to the poor) were already there. And just as the plastic card and the Web made it easy for us to pay merchants, the mobile phone will soon make it easy for us to pay each other.
So let's assume that the mobile phone will take over and that in a few years’ time, you’ll be able to pay Walmart or your window cleaner or your niece with your mobile phone. In this world, switching among dollars and euros and frequent-flier miles and Facebook Credits and Google Bucks and any other form of money will be just a matter of choosing from a menu on the phone. The cost of introducing new currencies will collapse—anyone will be able to do it. The future of money, in other words, won’t be that single galactic currency of science fiction. (We already know that, because we can’t even make a single currency work between Germany and Greece, let alone Ganymede and Gamma Centauri.) Instead, we can look forward not merely to hundreds but thousands or even millions of currencies. And though regulators may oppose the trend, they can’t hold it back.
That must sound as crazy to you as the idea of paper money once did to your ancestors, but it really isn’t. Trying to imagine a wallet with a hundred currencies in it and a Coke machine with a hundred slots for them is, of course, nuts. But based on the available currencies in your mobile “wallet” and prevailing market conditions, your phone and the Coke machine will be able to negotiate an exchange rate in a fraction of a second.
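A toy sketch of that negotiation, in which every currency name, balance, rate, and fee is invented for illustration: the phone prices the purchase in each currency it holds and picks the cheapest affordable option.

```python
# All names and numbers below are hypothetical.
WALLET = {"USD": 5.00, "MSFT_MOOLA": 40.0, "AIR_MILES": 900.0}
RATE_TO_USD = {"USD": 1.00, "MSFT_MOOLA": 0.02, "AIR_MILES": 0.011}
CONVERSION_FEE = {"USD": 0.00, "MSFT_MOOLA": 0.01, "AIR_MILES": 0.03}

def negotiate(price_usd, wallet, rates, fees):
    """Pick the affordable currency with the lowest all-in cost."""
    offers = []
    for cur, balance in wallet.items():
        units = price_usd / rates[cur] * (1 + fees[cur])  # units needed incl. fee
        if units <= balance:
            offers.append((price_usd * (1 + fees[cur]), cur, units))
    if not offers:
        return None  # no single currency covers the price
    _, cur, units = min(offers)
    return cur, round(units, 2)
```

In practice the rates and fees would arrive over the network at the point of sale, but the selection step is just this kind of comparison, easily done in a fraction of a second.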
Likewise, I don’t want to carry around a hundred different retailer credit and loyalty cards, but my phone can hold a zillion. So when I go to Starbucks, my phone will select my Starbucks app, load up my Starbucks account, and generally not bother me about the details. When I walk next door into Target, my phone will select my Target app, fire up my Target card, and get down to business.
Who will want to issue these new currencies? Corporations, for starters. When Edward de Bono wrote The IBM Dollar: A Proposal for the Wider Use of “Target” Currencies back in 1994, he looked forward to a time when “the successors to Bill Gates will have put the successors to Alan Greenspan out of business,” arguing that it would be more efficient for companies to issue money than equity. Even if all I’ve got is Microsoft Moola, and you want to get paid in Samsung Shekels, who cares? Our phones can sort it out for us.
by David G.W. Birch, IEEE Spectrum | Read more:
Illustration: Harry Campbell
Thursday, May 31, 2012
Self-Portrait in a Sheet Mirror: On Vivian Maier
Imagine being the kind of person who finds everything provocative. All you have to do is set out on a walk through city streets, a Rolleiflex hanging from a strap around your neck, and your heart starts pounding in anticipation. In a world that never fails to startle, it is up to you to find the perfect angle of vision and make use of the available light to illuminate thrilling juxtapositions. You have the power to create extraordinary images out of ordinary scenes, such as two women crossing the street, minks hanging listlessly down the backs of their matching black jackets; or a white man dropping a coin in a black man’s cup while a white dog on a leash looks away, as if in embarrassment; or a stout old woman braced in protest, gripping the hands of a policeman; or three women waiting at a bus stop, lips set in grim response to the affront represented by your camera, their expressions saying “go away” despite the sign behind them announcing, “Welcome to Chicago.”
Welcome to this crowded stage of a city, where everyone is an actor—the poor, the rich, the policemen and street vendors, the nuns and nannies. Even a leaf, a balloon, a puddle, the corpse of a cat or horse can play a starring role. And you are there, too, as involved in the action of this vibrant theater as anyone else, caught in passing at just the right time, your self-portraits turned to vaporous mirages in store windows, outlined in the silhouettes of shadows and reflected in mirrors that you find in unexpected places. You have to be quick if you’re going to get the image you want. You are quick—so quick that you can snap the picture before the doorman has a chance to come outside and tell you to move on.
There is so much drama worth capturing on film; you don’t have the time or resources to turn all of your many thousands of negatives into prints. Anyway, prints aren’t the point of these adventures. It’s enough to delight in your own ingenuity over and over again, with each click of the shutter. You’ll leave the distribution of your art to someone else.
On a winter’s day in 2007, a young realtor named John Maloof paid $400 for a box full of negatives that was being sold by an auction house in Chicago. The box had been repossessed from a storage locker gone into arrears, and Maloof was hoping it contained images he could use to illustrate a book he was co-writing about the Chicago neighborhood of Portage Park. As it turned out, he had stumbled upon a much more valuable treasure: the work of a photographer who looks destined to take her place as one of the pre-eminent street photographers of the twentieth century.
Like all good stories, this one is full of false leads and startling surprises. Maloof was unimpressed initially by the negatives and disappointed that he hadn’t found any materials for his book on Portage Park. As he told a reporter from the Guardian, “Nothing was pertinent for the book so I thought: ‘Well, this sucks, but we can probably sell them on eBay or whatever.’” He created a blog and posted scans of the negatives, but after the blog received no visitors for months, he posted the scans on Flickr. People began to take notice, and their responses helped Maloof appreciate the importance of his purchase.
His growing excitement led him to take a crash course in photography, buy a Rolleiflex—the same kind of camera that had been used to capture the images on the negatives—and even build a darkroom in his attic. He tracked down other buyers who had been at the auction and persuaded them to sell him their boxes, ultimately accumulating a collection of more than 100,000 negatives and 3,000 prints, hundreds of rolls of film, home movies and audiotapes, as well as personal items like clothes, letters and books on photography. A second Chicago collector, Jeffrey Goldstein, held on to materials he acquired from one of the initial bidders. But Maloof estimates that he succeeded in gathering 90 percent of the photographer’s archive.
At some point between 2007 and 2009, Maloof set out to identify the person who had taken the photographs, though this portion of the story remains murky. According to the Chicago Sun-Times, Maloof was “sifting through the negatives in 2009 when he found” a name, that of Vivian Maier, “on an envelope and Googled it. What he found was an obit.” But in a discussion on Flickr, Maloof indicated that he had found Maier’s name earlier. He reported that he came across her name on a photo-label envelope a year after he’d purchased the materials from the auction house. He considered trying to meet Maier but was told by the auction house that she was ill. “I didn’t want to bother her,” he said. “Soooo many questions would have been answered if I had. It eats at me from time to time.” In April 2009 he Googled Maier’s name and found her obituary, which had been placed the previous day. “How weird?” Maloof commented on Flickr. (...)
In an interview with Chicago Magazine, Lane Gensburg described his former nanny as having “an amazing ability to relate to children.” Gensburg indicated that he wanted nothing unflattering said about Maier, not foreseeing how an offhand epithet would, for some, become the basis of her legacy: “She was like Mary Poppins,” he reportedly said, introducing a loving comparison that has been repeated less lovingly in subsequent accounts of Maier’s life. Maier may have left behind a huge archive of fascinating visual material that is inviting the world’s attention. But it’s not easy for Mary Poppins to be taken seriously as an artist.
by Joanna Scott, The Nation | Read more:
Photo: Vivian Maier, Self Portrait
Meet 'Flame, 'The Massive Spy Malware Infiltrating Iranian Computers
A massive, highly sophisticated piece of malware has been newly found infecting systems in Iran and elsewhere and is believed to be part of a well-coordinated, ongoing, state-run cyberespionage operation. (...)
Early analysis of Flame by the Lab indicates that it’s designed primarily to spy on the users of infected computers and steal data from them, including documents, recorded conversations and keystrokes. It also opens a backdoor to infected systems to allow the attackers to tweak the toolkit and add new functionality.
The malware, which is 20 megabytes when all of its modules are installed, contains multiple libraries, SQLite3 databases, various levels of encryption — some strong, some weak — and 20 plug-ins that can be swapped in and out to provide various functionality for the attackers. It even contains some code that is written in the Lua programming language — an uncommon choice for malware. (...)
“It’s a very big chunk of code. Because of that, it’s quite interesting that it stayed undetected for at least two years,” Gostev said. He noted that there are clues that the malware may actually date back to as early as 2007, around the same time period when Stuxnet and DuQu are believed to have been created.
Gostev says that because of its size and complexity, complete analysis of the code may take years.
“It took us half a year to analyze Stuxnet,” he said. “This is 20 times more complicated. It will take us 10 years to fully understand everything.”
Among Flame’s many modules is one that turns on the internal microphone of an infected machine to secretly record conversations that occur either over Skype or in the computer’s near vicinity; a module that turns Bluetooth-enabled computers into a Bluetooth beacon, which scans for other Bluetooth-enabled devices in the vicinity to siphon names and phone numbers from their contacts folder; and a module that grabs and stores frequent screenshots of activity on the machine, such as instant-messaging and e-mail communications, and sends them via a covert SSL channel to the attackers’ command-and-control servers.
The malware also has a sniffer component that can scan all of the traffic on an infected machine’s local network and collect usernames and password hashes that are transmitted across the network. The attackers appear to use this component to hijack administrative accounts and gain high-level privileges to other machines and parts of the network. (...)
Because Flame is so big, it gets loaded to a system in pieces. The machine first gets hit with a 6-megabyte component, which contains about half a dozen other compressed modules inside. The main component extracts, decompresses and decrypts these modules and writes them to various locations on disk. The number of modules in an infection depends on what the attackers want to do on a particular machine.
Once the modules are unpacked and loaded, the malware connects to one of about 80 command-and-control domains to deliver information about the infected machine to the attackers and await further instruction from them. The malware contains a hardcoded list of about five domains, but also has an updatable list, to which the attackers can add new domains if these others have been taken down or abandoned.
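The failover scheme described here (an attacker-updatable list consulted alongside a hardcoded one) can be sketched as follows; the domain names and the try-updates-first ordering are illustrative assumptions, not details recovered from the malware:

```python
# Placeholder domains; the real malware reportedly knew about 80.
HARDCODED = ["c2-alpha.example", "c2-beta.example", "c2-gamma.example"]

def pick_c2(updated, hardcoded, is_reachable):
    """Return the first reachable command-and-control domain,
    trying attacker-pushed updates before the built-in list."""
    for domain in list(updated) + list(hardcoded):
        if is_reachable(domain):
            return domain
    return None  # every known domain taken down or abandoned
```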
While the malware awaits further instruction, the various modules in it might take screenshots and sniff the network. The screenshot module grabs desktop images every 15 seconds when a high-value communication application is being used, such as instant messaging or Outlook, and once every 60 seconds when other applications are being used.
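The capture cadence reduces to a simple rule; the application names below are illustrative stand-ins, not identifiers from the malware itself:

```python
# Assumed examples of "high-value" applications (instant messaging, e-mail).
HIGH_VALUE_APPS = {"outlook", "messenger"}

def screenshot_interval(active_app: str) -> int:
    """Seconds between captures: 15 for high-value apps, 60 otherwise."""
    return 15 if active_app.lower() in HIGH_VALUE_APPS else 60
```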
Although the Flame toolkit does not appear to have been written by the same programmers who wrote Stuxnet and DuQu, it does share a few interesting things with Stuxnet.
Stuxnet is believed to have been written through a partnership between Israel and the United States, and was first launched in June 2009. It is widely believed to have been designed to sabotage centrifuges used in Iran’s uranium enrichment program. DuQu was an espionage tool discovered on machines in Iran, Sudan, and elsewhere in 2011 that was designed to steal documents and other data from machines. Stuxnet and DuQu appeared to have been built on the same framework, using identical parts and using similar techniques. But Flame doesn’t resemble either of these in framework, design or functionality.
by Kim Zetter, Wired | Read more:
Image: Courtesy of Kaspersky
Booktography is fast becoming a viral fad all over the web. The best examples are those that seamlessly integrate the book’s cover with a live person. (A dead person could serve the purposes of this meme too, but that’s rather macabre.) As with photobombs and jumping-in-the-air photos, the originator of the concept is unknown, but the creative idea will go on to spawn many more memes.
More here:
Freaks, Geeks and Microsoft
When the Kinect was introduced in November 2010 as a $150 motion-control add-on to Microsoft’s Xbox consoles, it drew attention from more than just video-gamers. A slim, black, oblong 11½-inch wedge perched on a base, it allowed a gamer to use his or her body to throw virtual footballs or kick virtual opponents without a controller, but it was also seen as an important step forward in controlling technology with natural gestures.
In fact, as the company likes to note, the Kinect set “a Guinness World Record for the fastest-selling consumer device ever.” And at least some of the early adopters of the Kinect were not content just to play games with it. “Kinect hackers” were drawn to the fact that the object affordably synthesizes an arsenal of sophisticated components — notably, a fancy video camera, a “depth sensor” to capture visual data in three dimensions and a multiarray microphone capable of a similar trick with audio.
Combined with a powerful microchip and software, these capabilities could be put to uses unrelated to the Xbox. Like: enabling a small drone to “see” its surroundings and avoid obstacles; rigging up a 3-D scanner to create small reproductions of most any object (or person); directing the music of a computerized orchestra with conductorlike gestures; remotely controlling a robot to brush a cat’s fur. It has been used to make animation, to add striking visual effects to videos, to create an “interactive theme park” in South Korea and to control a P.C. by the movement of your hands (or, in a variation developed by some Japanese researchers, your tongue).
At the International Consumer Electronics Show earlier this year, Steve Ballmer, Microsoft’s chief executive, used his keynote presentation to announce that the company would release a version specifically meant for use outside the Xbox context and to indicate that the company would lay down formal rules permitting commercial uses for the device. A result has been a fresh wave of Kinect-centric experiments aimed squarely at the marketplace: helping Bloomingdale’s shoppers find the right size of clothing; enabling a “smart” shopping cart to scan Whole Foods customers’ purchases in real time; making you better at parallel parking.
An object that spawns its own commercial ecosystem is a thing to take seriously. Think of what Apple’s app store did for the iPhone, or for that matter how software continuously expanded the possibilities of the personal computer. Patent-watching sites report that in recent months, Sony, Apple and Google have all registered plans for gesture-control technologies like the Kinect. But there is disagreement about exactly how the Kinect evolved into an object with such potential. Did Microsoft intentionally create a versatile platform analogous to the app store? Or did outsider tech-artists and hobbyists take what the company thought of as a gaming device and redefine its potential?
This clash of theories illustrates a larger debate about the nature of innovation in the 21st century, and the even larger question of who, exactly, decides what any given object is really for. Does progress flow from a corporate entity’s offering a whiz-bang breakthrough embraced by the masses? Or does techno-thing success now depend on the company’s acquiescing to the crowd’s input? Which vision of an object’s meaning wins? The Kinect does not neatly conform to either theory. But in this instance, maybe it’s not about whose vision wins; maybe it’s about the contest.
by Rob Walker, NY Times | Read more:
Illustration by Robbie Porter
Internet to Grow Fourfold in Four Years
Cisco Systems (NASDAQ: CSCO) put out its annual Visual Networking Index (VNI) forecast for 2011 to 2016. The huge router company projects that the Internet will be four times as large in four years as it is this year. The “wired” world, which has already changed human interaction and made information vastly more available, will explode if Cisco is correct.
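Taken at face value, a fourfold increase over four years implies a compound annual growth rate of roughly 41 percent; a quick back-of-the-envelope check:

```python
growth_factor = 4  # "four times as large"
years = 4
cagr = growth_factor ** (1 / years) - 1
print(f"{cagr:.0%}")  # roughly 41% per year
```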
It is hard to find an analogue to this expansion in recent business and social history. Perhaps the growth of the number of TV sets or cable use. Or, maybe the growth of car ownership at the beginning of the last century. At any rate, the growth cannot be matched by anything that has happened in recent memory. The Cisco forecast means that billions of people will be tethered to the Internet. Cisco does not believe its job is to say what the impact of this will be, but there are some reasonable guesses.
The path to the fourfold increase includes these things:
The weight of video use is likely to be the greatest burden on Internet systems. While news is probably a large part of this, entertainment is likely to be larger. Businesses modeled on companies like Netflix (NASDAQ: NFLX) and Google’s (NASDAQ: GOOG) YouTube will expand not just in America and Europe. Similar companies will be established in the most populous nations, with the largest probably coming from China, Russia and much of South America. No one knows yet from where the content for these new businesses, whether or not they are legitimate, will come. If the past is any indication, a great deal will originate from U.S. studios. It will either be a revenue windfall for them or part of the growing trouble with piracy.
by Douglas A. McIntyre, 24/7 Wall Street | Read more:
It is hard to find an analogue to this expansion in recent business and social history. Perhaps the growth of the number of TV sets or cable use. Or, maybe the growth of car ownership at the beginning of the last century. At any rate, the growth cannot be matched by anything that has happened in recent memory. The Cisco forecast means that billions of people will be tethered to the Internet. Cisco does not believe its job is to say what the impact of this will be, but there are some reasonable guesses.
It is to Cisco’s advantage to make telecom, cable and wireless providers believe these numbers, because the increased use of the company’s routers will be needed to carry the burgeoning load. But, based on recent history, it is not hard to believe that Cisco is right — at least directionally.
The path to the fourfold increase includes these things:
- By 2016, the forecast projects there will be nearly 18.9 billion network connections — almost 2.5 connections for each person on earth — compared with 10.3 billion in 2011.
- By 2016, there are expected to be 3.4 billion Internet users — about 45% of the world’s projected population, according to UN estimates.
- The average fixed broadband speed is expected to increase nearly fourfold, from 9 megabits per second (Mbps) in 2011 to 34 Mbps in 2016.
- By 2016, 1.2 million video minutes — the equivalent of 833 days (or more than two years) — will travel the Internet every second.
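The figures above can be sanity-checked with a little arithmetic. A minimal sketch, assuming "fourfold" means total traffic in 2016 is four times this year's (a four-year window, per the headline), and using only the numbers quoted in the bullets:

```python
# Sanity-check the headline figures in Cisco's VNI forecast.

# Implied compound annual growth rate for a fourfold increase
# over four years: 4^(1/4) - 1.
growth = 4 ** (1 / 4) - 1  # about 0.41, i.e. ~41% per year

# Video minutes crossing the Internet each second in 2016,
# converted to days of footage (60 min * 24 h = 1440 min/day).
video_minutes_per_second = 1.2e6
days_equivalent = video_minutes_per_second / (60 * 24)

print(f"implied CAGR: {growth:.0%}")          # ~41% per year
print(f"video per second: {days_equivalent:.0f} days")  # ~833 days
```

The 833-days figure in the bullet checks out exactly; the implied growth rate shows why carriers would need to keep expanding capacity every single year, not just once.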
The weight of video use is likely to be the greatest burden on Internet systems. While news is probably a large part of this, entertainment is likely to be larger. Businesses modeled on companies like Netflix (NASDAQ: NFLX) and Google’s (NASDAQ: GOOG) YouTube will expand beyond America and Europe. Similar companies will be established in the most populous nations, with the largest probably coming from China, Russia and much of South America. No one yet knows where the content for these new businesses, legitimate or not, will come from. If the past is any indication, a great deal will originate from U.S. studios. It will either be a revenue windfall for them or part of the growing trouble with piracy.
by Douglas A. McIntyre, 24/7 Wall Street | Read more: