Wednesday, August 31, 2016
Great Exploitations: The Poverty Industry
At least since the passage of California’s Proposition 13 in 1978—in which property owners voted to halve their property taxes—the United States has struggled with an anti-tax mentality revolving around the belief that government is ineffective. That sentiment is nowhere so clearly expressed as in wingnut Grover Norquist’s famous dictum that government should be small enough to drown in a bathtub. Indeed, the right’s efforts to starve government of the level of resources necessary for competent functioning have made a self-fulfilling prophecy of the claim that government is moribund.
Daniel L. Hatcher’s The Poverty Industry exposes one way that states have responded to the anti-tax climate and diminishing federal funds. Facing budget crises but reluctant to raise taxes, many state politicians treat federal dollars available for poverty-relief programs as an easy mark from which they can mine revenue without political consequence. They divert federal funding earmarked for social programs for children and the elderly, repurposing it for their general funds with the help of private companies that in effect launder money for them. A law professor at the University of Baltimore who has represented Maryland victims of such schemes, Hatcher presents a distressing picture of how states routinely defraud taxpayers of millions of federal dollars.
This is possible because there is a near-total absence of accountability for how states use federal money intended to fight poverty. Remarkably, states do not even have to pretend to have used all the funds for the stated purpose; they are only required to show that they are taking care of the populations for which the funds were intended. Medicaid, for example, operates as a matching program: states receive federal payments that match state spending on health care for low-income residents. The purpose of this “fiscal federalism” is to merge federal resources with states’ understandings of their own populations’ needs. But the grant system is rife with abuse. The more money that states claim they spend on qualifying Medicaid services, the more federal money they can receive. Hatcher demonstrates how states contract with companies to find ways to claim very high administrative costs for these social programs, which the federal government will reimburse, creating more money they can siphon off into the general fund.
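To see why padded claims are so attractive, here is a minimal sketch of how a matching grant rewards whatever a state says it spent. The flat 50 percent match rate and the dollar figures are illustrative assumptions for this sketch, not numbers from Hatcher's book.

```python
# Illustrative sketch of a Medicaid-style matching grant, not actual program rules.
# Assumes a flat 50% federal match rate (hypothetical); real rates vary by state,
# and the schemes Hatcher describes are more elaborate than this.

FEDERAL_MATCH_RATE = 0.50

def federal_reimbursement(claimed_spending: float) -> float:
    """Federal dollars received for a given amount of claimed qualifying spending."""
    return claimed_spending * FEDERAL_MATCH_RATE

# A state actually spends $100M on services, then a contractor helps it
# claim an additional $20M in "administrative costs."
honest_claim = 100_000_000
padded_claim = 120_000_000

extra = federal_reimbursement(padded_claim) - federal_reimbursement(honest_claim)
print(f"Extra federal money from the padded claim: ${extra:,.0f}")
# -> Extra federal money from the padded claim: $10,000,000
# Nothing in the claim itself forces that money to reach the children
# or seniors the program was meant to serve.
```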
Exacerbating states’ natural inclination toward grift, private companies have taken power at all stages of the welfare system and have done so with an eye on states’ and their own bottom lines. States almost universally contract with private corporations to administer their welfare programs. Welfare providers, such as hospitals, also hire private companies to help them maximize payment claims. States then hire additional private companies to help them reduce their payouts to providers and increase their claims from the federal government. The federal government hires the same or similar companies to audit Medicaid and other industries and to review state actions. These companies lobby heavily at the state and national levels for their own interests and with little public scrutiny brought to bear on how they conduct their business. Hatcher details how often conflicts of interest and pay-to-play arrangements influence the votes of state politicians, for example. At each step, the companies profit off a system designed to provide a safety net for our most vulnerable citizens. They are, quite literally, stealing from the poor. And although it has the authority to do so, the federal government rarely pursues prosecution against revenue maximization schemes.
Agency confidentiality statutes make it very difficult to prove misallocation claims against state agencies, which regularly rebuff efforts to collect evidence by saying that opening records would violate recipients’ privacy. In 2004 Alabama spent enough money on child welfare to pay each child $3,750 a month, but it paid its foster care providers only between $400 and $450 per month. The state claims the rest of the money is spent on services for the children, but accounting for nearly 90 percent of it remains all but impossible. And while Hatcher notes that some of it did go toward the intended services, much of it almost certainly found its way into poverty industry hands.
• • •
A leading corporate perpetrator of the poverty industry, in Hatcher’s telling, is MAXIMUS. Founded in 1975, the company works with governments around the globe as a private contractor for government aid programs. The company was found guilty of intentionally creating incorrect Medicaid claims while in a revenue maximization contract with the District of Columbia, and had to pay a $30 million federal fine in 2007. Yet its methods are so intensely profitable—for both states and itself—that it continues to win more state contracts. Hatcher uncovered MAXIMUS emails to Maryland officials warning that the state was losing out by not pocketing more money intended for poor children; in the same messages, the company offered to help with that process.
Companies that arose in the military-industrial complex, including Northrop Grumman and Lockheed Martin, are now helping to create the poverty-industrial complex by going into this profitable revenue maximization business themselves. These companies make money if they can remove children from welfare rolls. A whistleblower lawsuit revealed that WellCare—a company that had already paid a $10 million fine for defrauding Florida’s Medicaid and Healthy Kids programs, and that has acknowledged illegal campaign finance contributions—held a celebratory dinner after removing 425 babies from state welfare rolls, lessening its financial responsibility and increasing corporate profits. WellCare ultimately paid $137.5 million to the Justice Department to settle that lawsuit.
As Hatcher explains, there are a number of ways for states to make money off of foster children. For example, the state can declare them disabled and therefore eligible for Social Security benefits. The state then names itself their trustee and keeps the money. Hatcher argues that the same is true for children receiving veterans’ benefits. A guardian state can manufacture ways to increase the administrative costs of managing and disbursing benefits so that it can add those charges to the federal government’s tab. It may also place children in its care with foster families rather than seek out relatives who could care for them, because it profits from continuing to administer the children’s benefits. It might put children in its care on prescription drugs to sedate them so it can reduce staffing costs and charge for the medicines, even when their behavior can be managed without sedation. Tragically, states often treat vulnerable children in their care as cash machines.
by Erik Loomis, Boston Review | Read more:
Image: Mother And Child (1908) by Egon Schiele
Image: Suillus, hogfish (Lachnolaimus maximus), from Catesby's The Natural History of Carolina, Florida, and the Bahama Islands
History of Computer Design: Macintosh
The physical design of the Macintosh bears signs of this self-consciously revolutionary atmosphere. The members of the team each signed the mold used to cast the inside of the case - though mainly technicians would ever see them, they put their signatures on their work like the radical artists they felt they were. Housed in a durable case of carefully selected ABS plastic with a very fine texture that made scratches less apparent, the machine was meant to last. This concern for detail and endurance extended even to the colour of the plastic, a tawny brown called PMS 453 that Jerry Manock thought would age well, unlike the lighter plastic of the Lisa, which shifted to a bright orange with prolonged exposure to sunlight (Kunkel, 25).
Jobs encouraged the Macintosh team to learn from mistakes made by the large team designing the Lisa. The thick band of plastic over the Lisa's screen reminded Jobs of a Cro-Magnon forehead, and he guided the physical appearance of the Mac to make it seem more cheerful (Sculley, 160). Since the Macintosh was to be easy to use, it should have a friendly appearance. Like the Lisa, the Macintosh has its circuitry, disk drive and display in a single unit with a keyboard and mouse, a self-contained design requiring only three cables, including the power cord, and contributing to a far easier assembly for the user than the increasingly established PCs. However, its disk drive is below the display, making it taller, narrower, more symmetrical, and far more suggestive of a face. Rather than looking cantilevered, the display has only a small recess below to elevate it and give some room for the keyboard, but this also enhances the impression of a chin. The simple anthropomorphic quality of the case and the few cables contribute to the Macintosh's identity as a computer that ordinary people could understand.
The design of the case was closely guided by Steve Jobs, and his name appears on its design patent along with those of its designers, Terry Oyama and Jerry Manock. Oyama later said, "Even though Steve didn't draw any of the lines, his ideas and inspiration made the design what it is. To be honest, we didn't know what it meant for a computer to be 'friendly' until Steve told us" (Kunkel, 26). (...)
The way in which the Macintosh can be used is also strongly guided by physical design. The keyboard is like that of a typewriter except for the option and command keys, the latter sporting the Apple logo, which sit on either side to accommodate both left- and right-handed typists. It does not have the numerous function keys or even the cursor keys found on other computer keyboards. The lack of these keys is what Donald Norman calls a forcing device; without them, the user is forced to use the mouse. This was a deliberate strategy on Jobs's part to ensure that the Macintosh would be used as designed, with a mouse rather than with the then-familiar key commands. It also forced software developers to create applications that take advantage of the mouse-driven graphical interface, rather than simply reproduce existing software on the new platform (Levy, 194-5).
The ports on the back of the Macintosh are recessed to prevent users from trying to plug in incompatible peripherals. Each of these ports is labeled with an easily understood icon developed by Apple according to the Deutsche Industrie Norm (DIN) standard. These icons, on a clear plastic label applied by hand, help prevent damage to the computer and confusion for the user. To further simplify use, the power switch (the only switch on the computer; even ejecting a disk is controlled through the graphical interface) is located on the back where it cannot be hit accidentally, but has a smooth area around it in the otherwise textured plastic to guide the user's hand. Manock was proud to fine-tune his design in this way, and said, "That's the kind of detail that turns an ordinary product into an artifact." A similarly subtle detail is found on the underside of the handle at the top of the machine: ribs in the plastic there make it easier to grip the case (Kunkel, 24). (...)
The concern for details on the Macintosh was unprecedented for a computer and gave it a sense of personality. Many Macintosh owners feel a relationship with their computer that extends far beyond its functions. Upon its release, it was frequently described, not in terms of its technology, but as an art object. One early article advises caring for the machine as if signs of its normal use as a tool were unfortunate blemishes - it suggests users "clean the Macintosh's exterior with a soft sable paintbrush, which you can buy at any art store" (MacWorld, Dec. 1984, p. 45).
The Macintosh is clearly shaped to provide as much uniformity in user experience as possible. However, the limitations of the machine were sometimes resented. The Macintosh initially sold to technophiles, early adopters of innovations who tolerated an unrefined product in favour of novelty. These early Mac users were immediately passionate, but Douglas Adams typifies them by saying, "What I . . . fell in love with was not the machine itself, which was ridiculously slow and underpowered, but a romantic idea of the machine" (Levy, 187).
Ed Tracy, Landsnail | Read more:
Image: via:
Tuesday, August 30, 2016
Google Takes on Uber With New Ride-Share Service
Google is moving onto Uber Technologies Inc.’s turf with its own ride-sharing service in San Francisco that would help commuters carpool at far cheaper rates, according to a person familiar with the matter, jumping into a booming but fiercely competitive market.
Google, a unit of Alphabet Inc., began a pilot program around its California headquarters in May that enables several thousand area workers at specific firms to use the Waze navigation app to connect with fellow commuters. It now plans to open the program to all San Francisco-area Waze users this fall, the person said, with hopes of expanding the service if successful. Waze, which Google acquired in 2013, offers real-time driving directions based on information from other drivers.
Unlike Uber and its crosstown rival Lyft Inc., both of which largely operate as on-demand taxi businesses, Waze wants to connect riders with drivers who are already headed in the same direction. The company has said it aims to make fares low enough to discourage drivers from operating as taxi drivers. Waze’s current pilot charges riders at most 54 cents a mile—far less than most Uber and Lyft rides—and, for now, Google doesn’t take a cut.
Still, Google’s push into ride-sharing could portend a clash with Uber, a seven-year-old firm valued at roughly $68 billion that largely invented the concept of summoning a car with a smartphone app.
Google and Uber were once allies—Google invested $258 million in Uber in 2013—but increasingly see each other as rivals. Alphabet executive David Drummond said Monday that he resigned from Uber’s board because of the increasing competition between the companies. Uber, which has long used Google’s mapping software for its ride-hailing service, recently began developing its own maps.
by Jack Nicas, WSJ | Read more:
Image: Linda Davidson, WSJ/WP/Getty
We Are Nowhere Close to the Limits of Athletic Performance
[ed. See also: Born to Rest.]
For many years I lived in Eugene, Oregon, also known as “track-town USA” for its long tradition in track and field. Each summer high-profile meets like the United States National Championships or Olympic Trials would bring world-class competitors to the University of Oregon’s Hayward Field. It was exciting to bump into great athletes at the local cafe or ice cream shop, or even find myself lifting weights or running on a track next to them. One morning I was shocked to be passed as if standing still by a woman running 400-meter repeats. Her training pace was as fast as I could run a flat out sprint over a much shorter distance.
The simple fact was that she was an extreme outlier, and I wasn’t. Athletic performance follows a normal distribution, like many other quantities in nature. That means that the number of people capable of exceptional performance falls off exponentially as performance levels increase. While an 11-second 100-meter can win a high school student the league or district championship, a good state champion runs sub-11, and among 100 state champions only a few have any hope of running near 10 seconds.
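To give a feel for how quickly that tail thins out, here is a small sketch using the standard normal distribution; the thresholds are generic standard-deviation cutoffs chosen for illustration, not actual sprint times.

```python
# Sketch: the fraction of a normally distributed population that lies more
# than z standard deviations above the mean, for a few illustrative cutoffs.
from math import erf, sqrt

def fraction_above(z: float) -> float:
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

for z in (2, 3, 4, 5):
    frac = fraction_above(z)
    print(f"more than {z} SD above average: about 1 in {round(1 / frac):,}")

# Roughly: 1 in 44 at 2 SD, 1 in 740 at 3 SD, 1 in 32,000 at 4 SD,
# and about 1 in 3.5 million at 5 SD -- each step up is far rarer than the last.
```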
Keep going along this curve, and you get to the freaks among freaks—competitors who shatter records and push limits beyond imagination. When Carl Lewis dominated sprinting in the late 1980s, sub-10 second 100m times were rare, and anything in the 10-second flat range guaranteed a high finish, even at the Olympics. Lewis was a graceful 6 feet 2 inches, considered tall for a sprinter. Heights much greater than his were supposed to be a disadvantage for a sprinter, forcing a slower cadence and reduced speeds—at least that was the conventional wisdom.
So no one anticipated the coming of a Usain Bolt. At a muscular 6 feet 5 inches, and finishing almost half a second faster than the best of the previous generation, he seemed to come from another species entirely. His stride length can reach a remarkable 9.3 feet, and, in the words of a 2013 study in the European Journal of Physics, he has demonstrated performance that “is of physical interest since he can achieve, until now, accelerations and speeds that no other runner can.”
Bolt’s times weren’t just faster than anyone else’s in the world. They were considerably faster even than those of a world-class runner from the previous generation who was using performance-enhancing drugs. The Jamaican-born Canadian sprinter Ben Johnson achieved a world-record time of 9.79 seconds at the 1988 Olympic Games, beating Lewis and boasting that he’d have been faster if he hadn’t raised his hand in victory just ahead of the finish line. It would later come out that he’d been using steroids.
Even the combination of an elite runner and anabolic steroids, though, was not enough to outcompete a genetic outlier. Bolt achieved a time of 9.58 seconds at the 2009 World Athletics Championship, setting a world record and beating his own previous record by a full tenth of a second.
We find a similar story in the NBA with Shaquille O’Neal. O’Neal was the first 7-footer in the league who retained the power and agility of a much smaller man. Neither a beanpole nor a plodding hulk, he would have been an athletic 200-pounder if scaled down to 6 feet in height. When Shaq got the ball near the hoop, no man (or sometimes even two men) could stop him from dunking it. Soon after his entry into the league, basket frames had to be reinforced to keep them from being destroyed by his dunks. After the Lakers won three championships in a row, the NBA was forced to change its rules drastically—allowing zone defenses—in order to reduce Shaq’s domination of the game. Here was a genetic outlier whose performance was unequalled by anyone else in a league that has long been criticized for its soft anti-doping policy; for example, it only added blood testing for human growth hormone to its program last year. Whatever doping may have been going on, it wasn’t enough to get anyone to Shaq’s level.
By comparison, the potential improvements achievable by doping effort are relatively modest. In weightlifting, for example, Mike Israetel, a professor of exercise science at Temple University, has estimated that doping increases weightlifting scores by about 5 to 10 percent. Compare that to the progression in world record bench press weights: 361 pounds in 1898, 363 pounds in 1916, 500 pounds in 1953, 600 pounds in 1967, 667 pounds in 1984, and 730 pounds in 2015. Doping is enough to win any given competition, but it does not stand up against the long-term trend of improving performance that is driven, in part, by genetic outliers. As the population base of weightlifting competitors has increased, outliers further and further out on the tail of the distribution have appeared, driving up world records.
Similarly, Lance Armstrong’s drug-fuelled victory of the 1999 Tour de France gave him a margin of victory over second-place finisher Alex Zulle of 7 minutes, 37 seconds, or about 0.1 percent. That pales in comparison to the dramatic secular increase in speeds the Tour has seen over the past half century: Eddy Merckx won the 1971 tour, which was about the same distance as the 1999 tour, in a time 5 percent worse than Zulle’s. Certainly, some of this improvement is due to training methods and better equipment. But much of it is simply due to the sport’s ability to find competitors of ever more exceptional natural ability, further and further out along the tail of what’s possible.
We’re just scratching the surface of what genetic outliers can do. The normal distribution we see in athletic capabilities is a telltale signature of many small additive effects that are all independent from each other. Ultimately, these additive effects come from gene variants, or alleles, with small positive and negative consequences on traits such as height, muscularity, and coordination. It is now understood, for example, that great height is due to the combination of an unusually large number of positive variants, and possibly some very rare mutations that have a large effect on their own.
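A quick simulation illustrates the point about many small additive effects; the number of variants, their frequency, and the population size below are arbitrary choices made for the sketch, not estimates from the genetics literature.

```python
# Sketch: a trait built from many small, independent plus/minus variants
# comes out approximately bell-shaped, and the most extreme individuals are
# simply those who happened to inherit an unusual number of "plus" variants.
import random
import statistics

random.seed(0)
N_VARIANTS = 400          # hypothetical number of relevant variants
PLUS_FREQUENCY = 0.5      # chance that any given variant is the "plus" version
POPULATION = 20_000       # simulated individuals

def trait_score() -> int:
    """Number of plus variants carried by one simulated individual."""
    return sum(random.random() < PLUS_FREQUENCY for _ in range(N_VARIANTS))

scores = [trait_score() for _ in range(POPULATION)]
mean, sd = statistics.mean(scores), statistics.pstdev(scores)
best = max(scores)

print(f"mean {mean:.1f}, sd {sd:.1f}, best individual {best} "
      f"({(best - mean) / sd:.1f} SD above the mean)")
# The distribution is essentially binomial(400, 0.5), which is very close to
# normal; the best of 20,000 simulated people typically lands around four
# standard deviations above the mean.
```

On top of this additive background, a few rare mutations can have outsized effects of their own.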
The genomics researcher George Church maintains a list of some of these single mutations. They include a variant of LRP5 that leads to extra-strong bones, a variant of MSTN that produces extra lean muscle, and a variant of SCN9A that is associated with pain insensitivity.
Church has also been involved in one of the greatest scientific breakthroughs of recent decades: the development of a highly efficient gene editing tool called CRISPR, which has been approved for clinical trials for medical applications. If CRISPR-related technologies develop as anticipated, designer humans are at most a few decades away. Editing is most easily done soon after conception, when the embryo consists of only a small number of cells, but it is also possible in adults. Clinical trials of CRISPR, when they start this year, will edit existing cells in adults using an injection of a viral vector. It seems likely that CRISPR, or some improved version of it, will be established to be both safe and effective in the near future.
by Stephen Hsu, Nautilus | Read more:
Image: Cameron Spencer/Getty Images
A Brief History of the College Textbook Pricing Racket
When I recently wrote about airport stores, one of the most interesting (albeit minor) facets of the piece was the fact that airport travelers are generally considered a captive audience, making it easy for shops to jack up prices.
Airports, though, are amateur hour compared to the college textbook industry.
Any industry that can increase its prices by 1,041 percent over a 38-year period—as the textbook industry did between 1977 and 2015, according to an NBC News analysis—is one that knows how to keep, and hold, an audience. (It's almost like they're selling EpiPens.)
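For a sense of scale, that figure implies steady compounding; the short calculation below just unpacks NBC's number into an annual growth rate and is not additional data.

```python
# Back-of-the-envelope: what annual growth rate compounds into a 1,041%
# increase over 38 years? (A 1,041% increase means prices ended at about
# 11.4 times their starting level.)
total_multiple = 1 + 1041 / 100   # 11.41x
years = 38

annual_rate = total_multiple ** (1 / years) - 1
print(f"implied annual price growth: {annual_rate:.1%}")
# -> roughly 6.6% per year, every year, for nearly four decades
```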
And, as students across the country return to school, this is probably the perfect time of year to ask: Was it always this way? The answer: no, and you can blame a big shift in the '70s. (...)
What happened in the '70s? Let's ask someone from the '70s: In a 1975 piece for The Annals of the American Academy of Political and Social Science, journalist Phillip Whitten, who spent time running his own publishing firms, said that shifts in how textbooks were adopted, driven by a desire to standardize curricula as well as to make things easier for students, led to a significant increase in textbook use during this period.
But textbook companies of the era didn't have it easy. In his piece, Whitten crunched the numbers of a hypothetical textbook, one sold for $12.50 but generally offered to college stores at a wholesale price of $10. (In today's dollars, the book would have sold for $44.73 before markup by the bookstore—not a bad price, actually.)
In Whitten's example, the book sold 50,000 copies, bringing in half a million dollars in wholesale sales, which were offset by a variety of costs, including royalties, marketing, and manufacturing. Still, the book made $79,000 in pre-tax profit, a solid 15.8 percent margin. But he noted that the game for publishers was generally not that easy, because of the interplay of fixed and variable costs.
"If Sociology in Modern World had sold 20,000 copies, we would have lost $75,000; had it sold 10,000 copies—and there are many texts that do not do even that well—our loss would have been greater than $126,000," Whitten wrote.
(How does that compare to the modern day? Priceonomics writer Zachary Crockett, who spent time working for a textbook publisher, breaks down the math similarly to Whitten, though these days, publishers tend to make $40 in pure profit on a $180 book—a 22 percent margin.) (...)
Last year, two separate incidents occurred that raised the ire of textbook critics. In some ways, they kind of dovetail into one another.
The good professor, punished: Last October, Alain Bourget, an associate math professor at California State University, Fullerton, received a formal reprimand after choosing not to assign his students the $180 textbook recommended to him by the school, instead offering a cheaper $80 option, supplemented by online offerings. The school said this broke the rules, because he veered from the book every other introductory linear algebra course at the school was using. He fought the reprimand, but failed. (His hometown paper treated him like a hero.)
The economist who's made bank from a single book: Harvard University economist Gregory Mankiw was raked over the coals by The Oregonian last year for the high cost of his tome Principles of Economics, an introductory book that sells on Amazon for $333.35 and can be rented on Chegg for $49.99. The absurdity of the book's price, which exemplifies many of the economic disparities covered in the book itself, was further highlighted by writer Richard Read's story. When asked if he'd ever write an open-source textbook, Mankiw had this to say: "Let me fix that for you: Would you keep doing your job if you stopped being paid? Why or why not?" A fair point—until you realize that Mankiw has, by some estimates, made $42 million in royalties from this book alone.
by Ernie Smith, Priceonomics | Read more:
Image: m01229/CC BY 2.0
Reverse Voxsplaining: Drugs Vs. Chairs
EpiPens, useful medical devices which reverse potentially fatal allergic reactions, have recently quadrupled in price, putting pressure on allergy sufferers and those who care for them. Vox writes that this “tells us a lot about what’s wrong with American health care” – namely that we don’t regulate it enough:
The story of Mylan’s giant EpiPen price increase is, more fundamentally, a story about America’s unique drug pricing policies. We are the only developed nation that lets drugmakers set their own prices, maximizing profits the same way sellers of chairs, mugs, shoes, or any other manufactured goods would.
Let me ask Vox a question: when was the last time that America’s chair industry hiked the price of chairs 400% and suddenly nobody in the country could afford to sit down? When was the last time that the mug industry decided to charge $300 per cup, and everyone had to drink coffee straight from the pot or face bankruptcy? When was the last time greedy shoe executives forced most Americans to go barefoot? And why do you think that is?
The problem with the pharmaceutical industry isn’t that they’re unregulated just like chairs and mugs. The problem with the pharmaceutical industry is that they’re part of a highly-regulated cronyist system that works completely differently from chairs and mugs.
If a chair company decided to charge $300 for their chairs, somebody else would set up a woodshop, sell their chairs for $250, and make a killing – and so on until chairs cost normal-chair-prices again. When Mylan decided to sell EpiPens for $300, in any normal system somebody would have made their own EpiPens and sold them for less. It wouldn’t have been hard. Its active ingredient, epinephrine, is off-patent, was being synthesized as early as 1906, and costs about ten cents per EpiPen-load.
Why don’t they? They keep trying, and the FDA keeps refusing to approve them for human use. For example, in 2009, a group called Teva Pharmaceuticals announced a plan to sell their own EpiPens in the US. The makers of the original EpiPen sued them, saying that they had patented the idea of epinephrine-injecting devices. Teva successfully fended off the challenge and brought its product to the FDA, which rejected it because of “certain major deficiencies”. As far as I know, nobody has ever publicly said what the problem was – we can only hope they at least told Teva.
In 2010, another group, Sandoz, asked for permission to sell a generic EpiPen. Once again, the original manufacturers sued for patent infringement. According to Wikipedia, “as of July 2016 this litigation was ongoing”.
In 2011, Sanofi asked for permission to sell a generic EpiPen called e-cue. This got held up for a while because the FDA didn’t like the name (really!), but it was eventually approved under the name Auvi-Q (which, if I were a giant government agency that rejected things for having dumb names, would be going straight into the wastebasket). But after unconfirmed reports of incorrect dosage delivery, the company recalled all of the devices from the market.
This year, a company called Adamis decided that in order to get around the patent on devices that inject epinephrine, they would just sell pre-filled epinephrine syringes and let patients inject themselves. The FDA rejected it, noting that the company involved had done several studies but demanding that they do some more.
Also, throughout all of this a bunch of companies are merging and getting bought out by other companies and making secret deals with each other to retract their products and it’s all really complicated.
None of this is because EpiPens are just too hard to make correctly. Europe has eight competing versions. But aside from the EpiPen itself, only one competitor has ever made it past the FDA and onto the pharmacy shelf – a system called Adrenaclick.
And of course there’s a catch. With ordinary medications, pharmacists are allowed to interpret prescriptions for a brand name as prescriptions for the generic unless doctors ask them not to. For example, if I write a prescription for “Prozac”, a pharmacist knows that I mean anything containing fluoxetine, the chemical ingredient sold under the Prozac brand. They don’t have to buy it directly from Prozac trademark-holder Eli Lilly. It’s like if someone asks for a Kleenex and you give them a regular tissue, or if you suggest putting something in a Tupperware but actually use a plastic container made by someone other than the Tupperware Corporation.
EpiPens are protected from this substitution. If a doctor writes a prescription for “EpiPen”, the pharmacist must give an EpiPen-brand EpiPen, not an Adrenaclick-brand EpiPen. This is apparently so that children who have learned how to use an EpiPen don’t have to relearn how to use an entirely different device (hint: jam the pointy end into your body).
If you know anything at all about doctors, you know that they have way too much institutional inertia to change from writing one word on a prescription pad to writing a totally different word on a prescription pad, especially if the second word is almost twice as long, and especially especially if it’s just to do something silly like save a patient money. I have an attending who, whenever we are dealing with anything other than a life-or-death matter, just dismisses it with “Nobody ever died from X”, and I can totally hear him saying “Nobody ever died from paying extra for an adrenaline injector”. So Adrenaclick continues to languish in obscurity.
So why is the government having so much trouble permitting a usable form of a common medication?
There are a lot of different factors, but let me focus on the most annoying one. EpiPen manufacturer Mylan Inc spends about a million dollars on lobbying per year. OpenSecrets.org tells us what bills got all that money. They seem to have given the most to defeat S.214, the “Preserve Access to Affordable Generics Act”. The bill would ban pharmaceutical companies from bribing generic companies not to create generic drugs.
Did they win? Yup. In fact, various versions of this bill have apparently failed so many times that FDA Law Blog notes that “insanity is doing the same thing over and over again and expecting different result”.
So let me try to make this easier to understand.
Imagine that the government creates the Furniture and Desk Association, an agency which declares that only IKEA is allowed to sell chairs. IKEA responds by charging $300 per chair. Other companies try to sell stools or sofas, but get bogged down for years in litigation over whether these technically count as “chairs”. When a few of them win their court cases, the FDA shoots them down anyway for vague reasons it refuses to share, or because they haven’t done studies showing that their chairs will not break, or because the studies that showed their chairs will not break didn’t include a high enough number of morbidly obese people so we can’t be sure they won’t break. Finally, Target spends tens of millions of dollars on lawyers and gets the okay to compete with IKEA, but people can only get Target chairs if they have a note signed by a professional interior designer saying that their room needs a “comfort-producing seating implement” and which absolutely definitely does not mention “chairs” anywhere, because otherwise a child who was used to sitting on IKEA chairs might sit down on a Target chair the wrong way, get confused, fall off, and break her head.
(You’re going to say this is an unfair comparison because drugs are potentially dangerous and chairs aren’t – but 50 people die each year from falling off chairs in Britain alone and as far as I know nobody has ever died from an EpiPen malfunction.)
Imagine that this whole system is going on at the same time that IKEA donates millions of dollars lobbying senators about chair-related issues, and that these same senators vote down a bill preventing IKEA from paying off other companies to stay out of the chair industry. Also, suppose that a bunch of people are dying each year of exhaustion from having to stand up all the time because chairs are too expensive unless you have really good furniture insurance, which is totally a thing and which everybody is legally required to have.
And now imagine that a news site responds with an article saying the government doesn’t regulate chairs enough.
The problem with the pharmaceutical industry isn’t that they’re unregulated just like chairs and mugs. The problem with the pharmaceutical industry is that they’re part of a highly-regulated cronyist system that works completely differently from chairs and mugs.
If a chair company decided to charge $300 for their chairs, somebody else would set up a woodshop, sell their chairs for $250, and make a killing – and so on until chairs cost normal-chair-prices again. When Mylan decided to sell EpiPens for $300, in any normal system somebody would have made their own EpiPens and sold them for less. It wouldn’t have been hard. Its active ingredient, epinephrine, is off-patent, was being synthesized as early as 1906, and costs about ten cents per EpiPen-load.
Why don’t they? They keep trying, and the FDA keeps refusing to approve them for human use. For example, in 2009, a group called Teva Pharmaceuticals announced a plan to sell their own EpiPens in the US. The makers of the original EpiPen sued them, saying that they had patented the idea of epinephrine-injecting devices. Teva successfully fended off the challenge and brought its product to the FDA, which rejected it because of “certain major deficiencies”. As far as I know, nobody has ever publicly said what the problem was – we can only hope they at least told Teva.
In 2010, another group, Sandoz, asked for permission to sell a generic EpiPen. Once again, the original manufacturers sued for patent infringement. According to Wikipedia, “as of July 2016 this litigation was ongoing”.
In 2011, Sanofi asked for permission to sell a generic EpiPen called e-cue. This got held up for a while because the FDA didn’t like the name (really!), but it was eventually approved under the name Auvi-Q (which, if I were a giant government agency that rejected things for having dumb names, would be going straight into the wastebasket). But after unconfirmed reports of incorrect dosage delivery, the company recalled all of its products from the market.
This year, a company called Adamis decided that in order to get around the patent on devices that inject epinephrine, they would just sell pre-filled epinephrine syringes and let patients inject themselves. The FDA rejected it, noting that the company involved had done several studies but demanding that they do some more.
Also, throughout all of this a bunch of companies are merging and getting bought out by other companies and making secret deals with each other to retract their products and it’s all really complicated.
None of this is because EpiPens are just too hard to make correctly. Europe has eight competing versions. But aside from the EpiPen itself, only one competitor has ever made it past the FDA and onto the pharmacy shelf – a system called Adrenaclick.
And of course there’s a catch. With ordinary medications, pharmacists are allowed to interpret prescriptions for a brand name as prescriptions for the generic unless doctors ask them not to. For example, if I write a prescription for “Prozac”, a pharmacist knows that I mean anything containing fluoxetine, the chemical ingredient sold under the Prozac brand. They don’t have to buy it directly from Prozac trademark-holder Eli Lilly. It’s like if someone asks for a Kleenex and you give them a regular tissue, or if you suggest putting something in a Tupperware but actually use a plastic container made by someone other than the Tupperware Corporation.
EpiPens are protected from this substitution. If a doctor writes a prescription for “EpiPen”, the pharmacist must give an EpiPen-brand EpiPen, not an Adrenaclick-brand EpiPen. This is apparently so that children who have learned how to use an EpiPen don’t have to relearn how to use an entirely different device (hint: jam the pointy end into your body).
If you know anything at all about doctors, you know that they have way too much institutional inertia to change from writing one word on a prescription pad to writing a totally different word on a prescription pad, especially if the second word is almost twice as long, and especially especially if it’s just to do something silly like save a patient money. I have an attending who, whenever we are dealing with anything other than a life-or-death matter, just dismisses it with “Nobody ever died from X”, and I can totally hear him saying “Nobody ever died from paying extra for an adrenaline injector”. So Adrenaclick continues to languish in obscurity.
So why is the government having so much trouble permitting a usable form of a common medication?
There are a lot of different factors, but let me focus on the most annoying one. EpiPen manufacturer Mylan Inc spends about a million dollars on lobbying per year. OpenSecrets.org tells us what bills got all that money. They seem to have given the most to defeat S.214, the “Preserve Access to Affordable Generics Act”. The bill would ban pharmaceutical companies from bribing generic companies not to create generic drugs.
Did they win? Yup. In fact, various versions of this bill have apparently failed so many times that FDA Law Blog notes that “insanity is doing the same thing over and over again and expecting different results”.
So let me try to make this easier to understand.
Imagine that the government creates the Furniture and Desk Association, an agency which declares that only IKEA is allowed to sell chairs. IKEA responds by charging $300 per chair. Other companies try to sell stools or sofas, but get bogged down for years in litigation over whether these technically count as “chairs”. When a few of them win their court cases, the FDA shoots them down anyway for vague reasons it refuses to share, or because they haven’t done studies showing that their chairs will not break, or because the studies that showed their chairs will not break didn’t include a high enough number of morbidly obese people so we can’t be sure they won’t break. Finally, Target spends tens of millions of dollars on lawyers and gets the okay to compete with IKEA, but people can only get Target chairs if they have a note signed by a professional interior designer saying that their room needs a “comfort-producing seating implement” and which absolutely definitely does not mention “chairs” anywhere, because otherwise a child who was used to sitting on IKEA chairs might sit down on a Target chair the wrong way, get confused, fall off, and break her head.
(You’re going to say this is an unfair comparison because drugs are potentially dangerous and chairs aren’t – but 50 people die each year from falling off chairs in Britain alone and as far as I know nobody has ever died from an EpiPen malfunction.)
Imagine that this whole system is going on at the same time that IKEA donates millions of dollars lobbying senators about chair-related issues, and that these same senators vote down a bill preventing IKEA from paying off other companies to stay out of the chair industry. Also, suppose that a bunch of people are dying each year of exhaustion from having to stand up all the time because chairs are too expensive unless you have really good furniture insurance, which is totally a thing and which everybody is legally required to have.
And now imagine that a news site responds with an article saying the government doesn’t regulate chairs enough.
by Scott Alexander, Slate Star Codex | Read more:
Image: Jim Bourg/Reuters
Can We Save Venice Before It’s Too Late?
[ed. Short answer, it's already too late (as far as industrial tourism is concerned). Longer term question is how much of Venice will still be above water in 20 years?]
A deadly plague haunts Venice, and it’s not the cholera to which Thomas Mann’s character Gustav von Aschenbach succumbed in the Nobel laureate’s 1912 novella “Death in Venice.” A rapacious tourist monoculture threatens Venice’s existence, decimating the historic city and turning the Queen of the Adriatic into a Disneyfied shopping mall.
Millions of tourists pour into Venice’s streets and canals each year, profoundly altering the population and the economy, as many native citizens are banished from the island city and those who remain have no choice but to serve in hotels, restaurants and shops selling glass souvenirs and carnival masks.
Tourism is tearing apart Venice’s social fabric, cohesion and civic culture, growing ever more predatory. The number of visitors to the city may rise even further now that international travelers are avoiding destinations like Turkey and Tunisia because of fears of terrorism and unrest. This means that the 2,400 hotels and other overnight accommodations the city now has no longer satisfy the travel industry’s appetites. The total number of guest quarters in Venice’s historic center could reach 50,000 and take it over entirely.
Just along the Grand Canal, Venice’s main waterway, the last 15 years have seen the closure of state institutions, judicial offices, banks, the German Consulate, medical practices and stores to make way for 16 new hotels.
Alarm at this state of affairs led to last month’s decision by the United Nations Educational, Scientific and Cultural Organization to place Venice on its World Heritage in Danger list unless substantial progress to halt the degradation of the city and its ecosystem is made by next February. Unesco has so far stripped only one city of its status as a heritage site from the more than 1,000 on the list: Dresden, after German authorities ignored Unesco’s 2009 recommendations against building a bridge over the River Elbe that marred the Baroque urban ensemble. Will Venice be next to attain this ignominious status?
In its July report, Unesco’s committee on heritage sites expressed “extreme concern” about “the combination of ongoing transformations and proposed projects threatening irreversible changes to the overall relationship between the City and its Lagoon,” which would, in its thinking, erode the integrity of Venice.
Unesco’s ultimatum stems from several longstanding problems. First, the increasing imbalance between the number of the city’s inhabitants (which plummeted from 174,808 in 1951 to 56,311 in 2014, the most recent year for which numbers are available) and the tourists. Proposed large-scale development, including new deepwater navigation channels and a subway running under the lagoon, would hasten erosion and strain the fragile ecological-urban system that has grown up around Venice.
For now, gigantic cruise liners regularly parade in front of Piazza San Marco, the city’s main public square, mocking the achievements of the last 1,500 years. To mention but one, the M.S.C. Divina is 222 feet high, twice as tall as the Doge’s Palace, a landmark of the city that was built in the 14th century. At times, a dozen liners have entered the lagoon in a single day.
by Salvatore Settis, NY Times | Read more:
Image: Venice, uncredited
Monday, August 29, 2016
How to Make Omurice (Japanese Fried Rice Omelette)
There's a video on YouTube that I've watched several times over the past couple of years. In it, a chef in Kyoto makes a plate of omurice with a deftness and perfection of technique that may be unrivaled. He starts by frying rice in a carbon steel skillet, tossing it every which way until each grain is coated in a sheen of demi-glace and oil. Then he packs it into an oval mold and turns it out in a tight mound on a plate.
He then proceeds to make what is perhaps the greatest French omelette ever executed, cooking it in that same perfectly seasoned carbon steel skillet, stirring the egg with chopsticks, rolling it up, gently tossing it, rotating it, and finally tipping it out of the pan onto that mound of rice. He then grabs a knife and slices through the top of the omelette from end to end, unfurling it in a custardy cascade of soft-cooked egg curds. It's an act of such prowess, such beauty, such tantalizing food-porniness that it's easy to conclude there's no hope of ever making such a dish at home.
And that's where I want to step in. Because you absolutely can and should make this at home. I realized this while watching a cook make omurice on a trip to Japan back in July (my travel and lodging were paid for by the Tokyo Convention & Visitors Bureau). The cook was working with a flat griddle, not a carbon steel skillet. He fried the rice on that griddle, and, after mounding it on a plate, made the omelette on the griddle, too. Except that it wasn't a true rolled omelette. Instead, he poured the beaten eggs into a round on the griddle...and that was it. As soon as the eggs were set on the bottom and just slightly runny on top, he lifted the round with a couple of spatulas and set it down over the rice.
As fun as it is to master a French omelette, in this particular case, it's an unnecessary flourish that—while it makes for great showmanship—does little to improve the final dish, since you end up unrolling the omelette anyway. By not bothering to roll the omelette in the first place, you sidestep the entire technical challenge.
For those unfamiliar with omurice, it's a Japanese invention that combines an omelette with fried rice. You'll often hear it referred to as omuraisu (a contraction of the words "omuretsu" and "raisu," the Japanese pronunciations of "omelette" and "rice"), or omumeshi, which fully translates "rice" into Japanese. Some versions have the rice rolled up in the omelette; you can watch the very same Kyoto chef do that here.
by Daniel Gritzer, Serious Eats | Read more:
Image: YouTube
Yes We Scan
It’s the dead of winter in Stockholm and I’m sitting in a very small room inside the very inaptly named Calm Body Modification clinic. A few feet away sits the syringe that will, soon enough, plunge into the fat between my thumb and forefinger and deposit a glass-encased microchip roughly the size of an engorged grain of rice.
“You freaking out a little?” asks Calm’s proprietor, a heavily tattooed man named Chai, as he runs an alcohol-soaked cotton swab across my hand. “It’s all right. You’re getting a microchip implanted inside your body. It’d be weird if you weren’t freaking out a little bit.” Of Course It Fucking Hurts!, his T-shirt admonishes in bold type.
My choice to get microchipped was not ceremonial. It was neither a transhumanist statement nor the fulfillment of a childhood dream born of afternoons reading science fiction. I was here in Stockholm, a city that’s supposedly left cash behind, to see out the extreme conclusion of a monthlong experiment to live without cash, physical credit cards, and, eventually, later in the month, state-backed currency altogether, in a bid to see for myself what the future of money — as is currently being written by Silicon Valley — might look like.
Some of the most powerful corporations in the world — Apple, Facebook, and Google; the Goliaths, the big guys, the companies that make the safest bets and rarely lose — are pouring resources and muscle into the payments industry, historically a complicated, low-margin business. Meanwhile, companies like Uber and Airbnb have been forced to become payments giants themselves, helping to facilitate and process millions of transactions (and millions of dollars) each day. A recent report from the auditor KPMG revealed that global investment in fintech — financial technology, that is — totaled $19.1 billion in 2015, a 106% jump compared to 2014; venture capital investment alone nearly quintupled between 2012 and last year. In 2014, Americans spent more than $3.68 billion using tap-to-pay tech, according to eMarketer. In 2015, that number was $8.71 billion, and in 2019, it’s projected to hit $210.45 billion. As Apple CEO Tim Cook told (warned?) a crowd in the U.K. last November, “Your kids will not know what money is.”
To hear Silicon Valley tell it, the broken-in leather wallet is on life support. I wanted to pull the plug. Which is how, ultimately, I found myself in this sterile Swedish backroom staring down a syringe the size of a pipe cleaner. I was here because I wanted to see the future of money. But really, I just wanted to pay for some shit with a microchip in my hand.
------
The first thing you’ll notice if you ever decide to surrender your wallet is how damn many apps you’ll need in order to replace it. You’ll need a mobile credit card replacement — Apple Pay or Android Pay — for starters, but you’ll also need person-to-person payment apps like Venmo, PayPal, and Square Cash. Then don’t forget the lesser-knowns: Dwolla, Tilt, Tab, LevelUp, SEQR, Popmoney, P2P Payments, and Flint. Then you might as well embrace the cryptocurrency of the future, bitcoin, by downloading Circle, Breadwallet, Coinbase, Fold, Gliph, Xapo, and Blockchain. You’ll also want to cover your bases with individual retailer payment apps like Starbucks, Walmart, USPS Mobile, Exxon Speedpass, and Shell Motorist, to name but a few. Plus public and regular transit apps — Septa in Philadelphia, NJ Transit in New Jersey, Zipcar, Uber, Lyft. And because you have to eat and drink, Seamless, Drizly, Foodler, Saucey, Waitress, Munchery, and Sprig. The future is fractured.

This isn’t lost on Bryan Yeager, a senior analyst who covers payments for eMarketer. “This kind of piecemeal fragmentation is probably one of the biggest inhibitors out there,” he said. “I’ll be honest: It’s very confusing, not just to me, but to most customers. And it really erodes the value proposition that mobile payments are simpler.”
On a frigid January afternoon in Midtown Manhattan, just hours into my experiment, I found myself at 2 Bros., a red-tiled, fluorescent-lit pizza shop that operates with an aversion to frills. As I made my way past a row of stainless steel ovens, I watched the patrons in front of me grab their glistening slices while wordlessly forking over mangled bills, as has been our country’s custom for a century and a half. When my turn came to order, I croaked what was already my least-favorite phrase: “Do you, um, take Apple Pay?” The man behind the counter blinked four times before (wisely) declaring me a lost cause and moving to the next person in line.
This kind of bewildered rejection was fairly common. A change may be coming for money, but not everyone’s on board yet, and Yeager’s entirely correct that the “simple” value proposition hasn’t entirely come to pass. Paying with the wave of a phone, I found, pushes you toward extremes; to submit to the will of one of the major mobile wallets is to choose between big-box retailers and chain restaurants and small, niche luxury stores. The only business in my Brooklyn neighborhood that took Apple Pay or Android Pay was a cafe where a large iced coffee runs upwards of $5; globally, most of the businesses that have signed on as Apple Pay partners are large national chains like Jamba Juice, Pep Boys, Best Buy, and Macy’s.
Partially for this reason, the primary way most Americans are currently experiencing the great fintech boom isn’t through Apple or Android Pay at all, but through proprietary payment apps from chains such as Target, Walmart, and Starbucks — as of last October, an astonishing 1 in 5 of all Starbucks transactions in the U.S. were done through the company’s mobile app. It wouldn’t be all that hard to live a fully functional — if possibly boring — cash-free consumer life by tapping and swiping the proprietary apps of our nation’s biggest stores.
If that doesn’t feel revolutionary or particularly futuristic, it’s because it’s not really meant to. But the future of mobile retail is assuredly dystopian. Just ask Andy O’Dell, who works for Clutch, a marketing company that helps with consumer loyalty programs and deals with these kinds of mobile purchasing apps. “Apple Pay and the Starbucks payment app have nothing to do with actual payments,” he told me. “The power of payments and the future of these programs is in the data they generate.”
Imagine this future: Every day you go to Starbucks before work because it’s right near your house. You use the app, and to ensure your reliable patronage, Starbucks coughs up a loyalty reward, giving you a free cup of coffee every 15 visits. Great deal, you say! O’Dell disagrees. According to him, Starbucks is just hurting its margins by giving you something you’d already be buying. The real trick, he argued, is changing your behavior. He offers a new scenario where this time, instead of a free coffee every 15 visits, you get a free danish — which you try and then realize it goes great with coffee. So you start buying a danish once a week, then maybe twice a week, until it starts to feel like it was your idea all along.
In that case, O’Dell said, Starbucks has “changed my behavior and captured more share of my wallet, and they’ve also given me more of what I want.”
“That’s terrifying,” I told him.
“But that’s the brave new world, man,” he shot back. “Moving payments from plastic swipes to digital taps is going to change how companies influence your behavior. That’s what you’re asking, right? Well, that’s how we’re doing it.”
In this sense, the payments rush is, in no small part, a data rush. Creating a wallet that’s just a digital version of the one you keep in your pocket is not the endgame. But figuring out where you shop, when you shop, and exactly what products you have an affinity for, and then bundling all that information in digestible chunks to inform the marketers of the world? Being able to, as O’Dell puts it, “drive you to the outcome they want you to have like a rat in a maze by understanding, down to your personality, who you are”? That’s disruption worth investing in.
by Charlie Warzel, Buzzfeed | Read more:
Image: Katie Notopoulos / BuzzFeed News
Sunday, August 28, 2016
Colin Kaepernick Is Righter Than You Know: The National Anthem Is a Celebration of Slavery
[ed. Personally, I vote for America the Beautiful.]
Before a preseason game on Friday, San Francisco 49ers quarterback Colin Kaepernick refused to stand for the playing of “The Star-Spangled Banner.” When he explained why, he only spoke about the present: “I am not going to stand up to show pride in a flag for a country that oppresses black people and people of color. … There are bodies in the street and people getting paid leave and getting away with murder.”
Twitter then went predictably nuts, with at least one 49ers fan burning Kaepernick’s jersey.
Almost no one seems to be aware that even if the U.S. were a perfect country today, it would be bizarre to expect African-American players to stand for “The Star-Spangled Banner.” Why? Because it literally celebrates the murder of African-Americans.
Few people know this because we only ever sing the first verse. But read the end of the third verse and you’ll see why “The Star-Spangled Banner” is not just a musical atrocity, it’s an intellectual and moral one, too:
No refuge could save the hireling and slave
From the terror of flight or the gloom of the grave,
And the star-spangled banner in triumph doth wave
O’er the land of the free and the home of the brave.
“The Star-Spangled Banner,” Americans hazily remember, was written by Francis Scott Key about the Battle of Fort McHenry in Baltimore during the War of 1812. But we don’t ever talk about how the War of 1812 was a war of aggression that began with an attempt by the U.S. to grab Canada from the British Empire.
However, we’d wildly overestimated the strength of the U.S. military. By the time of the Battle of Fort McHenry in 1814, the British had counterattacked and overrun Washington, D.C., setting fire to the White House.
And one of the key tactics behind the British military’s success was its active recruitment of American slaves. As a detailed 2014 article in Harper’s explains, the orders given to the Royal Navy’s Admiral Sir George Cockburn read:
Let the landings you make be more for the protection of the desertion of the Black Population than with a view to any other advantage. … The great point to be attained is the cordial Support of the Black population. With them properly armed & backed with 20,000 British Troops, Mr. Madison will be hurled from his throne.

Whole families found their way to the ships of the British, who accepted everyone and pledged no one would be given back to their “owners.” Adult men were trained to create a regiment called the Colonial Marines, who participated in many of the most important battles, including the August 1814 raid on Washington.
Then on the night of September 13, 1814, the British bombarded Fort McHenry. Key, seeing the fort’s flag the next morning, was inspired to write the lyrics for “The Star-Spangled Banner.”
So when Key penned “No refuge could save the hireling and slave / From the terror of flight or the gloom of the grave,” he was taking great satisfaction in the death of slaves who’d freed themselves. His perspective may have been affected by the fact he owned several slaves himself.
With that in mind, think again about the next two lines: “And the star-spangled banner in triumph doth wave / O’er the land of the free and the home of the brave.”
The reality is that there were human beings fighting for freedom with incredible bravery during the War of 1812. However, “The Star-Spangled Banner” glorifies America’s “triumph” over them — and then turns that reality completely upside down, transforming their killers into the courageous freedom fighters.
by Jon Schwarz, The Intercept | Read more:
Image: Peter Joneleit/Cal Sport Media/AP Images
Stadiums and Other Sacred Cows
[ed. See also: 5 Amazing Things About the Minnesota Vikings' New Stadium.]
There’s a strange sort of reverence that surrounds our relationship with sports. Jay Coakley first noticed it as a graduate student at the University of Notre Dame, in Indiana; he was studying sociology, so perhaps it was hard not to analyze the sport-centered culture that surrounded him. He observed the hype around football weekends, the mania of pep rallies, and the fundraising enthusiasm of booster clubs. He noticed that football players always seemed to have the nicest cars—and heard through his wife, who worked at the registrar at the time, that sometimes transcripts were changed to keep players on the field.
He was so intrigued that he proposed doing his thesis and dissertation on the topic.
“My faculty advisor in the sociology department said, ‘Are you crazy? You have to focus on something serious, not sports,’ ” Coakley recalls. “I said, ‘How can anything be more serious than something that evokes almost 100 percent of the interest of 100 percent of the people on this campus for five to six weekends of the year at least?’ ”
Coakley ended up doing his dissertation on the racial and religious identities of black Catholic priests, and his Master’s thesis on the race violence seen around the country in 1968. Yet sport, laden as it was with many of the societal tensions he saw in his graduate work, continued to draw him back in. He proposed courses on sports and leisure; he conducted independent studies discussing what sports meant to various individuals; he worked with PTAs and parks and recreation departments; and he began focusing on coaching education. As the years passed, Coakley became one of the most respected authorities in the growing field of sports sociology—a much more serious field than his academic advisor might have ever expected.
Along the way, Coakley developed a theory that finally explained the strange behavior he had first seen at Notre Dame, and which he continued to see throughout the athletic world. He called it “The Great Sports Myth”: the widespread assumption that sport is, inherently, a force of good—despite the fact that it can both empower and humiliate, build bonds and destroy them, blur boundaries and marginalize.
Nautilus sat down with Coakley to talk about the unassailable mythos around sport, and the widespread impacts it can have on our society.
How did you come up with the idea of the Great Sports Myth?
I developed the Great Sports Myth when I was working here in Fort Collins, with a group that was opposing Colorado State University building a $220 million on-campus football stadium, the final cost of which—with just interest—would be over $400 million, and there will be cost overruns in addition to that. All when they have a stadium two miles from campus that needs some renovation, but nevertheless has been a decent place to play. I was working with a group that was opposing this—and by the way, 80 percent of the faculty opposed it, 65 percent of the students opposed it. But there were people who were talking about what this new stadium was going to do, and no matter what kinds of data you came up with to ask them to raise questions about their assumptions, they rejected the data. They rejected all the arguments.
It seemed to me that their position was grounded in something very much like religious faith. I was trying to figure out what was going on, and that’s when I came up with this notion of the Great Sports Myth, which they were using as a basis for rejecting facts, good studies, good logical arguments, and stating that: This is going to be good despite what anybody was saying in opposition.
What’s the historical context of this attitude?
It appears that sports became integrated into American society as a spinoff of what was going on in England. There was this sense that the sons of the elites in society needed something to make them into men, and sport was identified as the mechanism through which that could be done.
That idea was transferred to the United States, but in the United States the importance of sports was tied to a host of other factors as well: The need for productivity, the need to socialize and assimilate different immigrant groups, the need to create a military, the need to control young people running loose on the streets during the latter part of the 19th century. So what happened was that sports came to be identified as an important socializing mechanism for boys, a social control mechanism, and a developmental mechanism.
People became committed to sports because it was tied to their own interests as well. And it got put on a pedestal. We even revised Greek history to reify the purity and goodness of sports—talking about how sports were important for developmental purposes among the Greeks, and how they stopped wars to have the Olympic Games.
Sport then gets integrated into the schools in the United States, and all sorts of functions are attributed to it without our ever really examining whether those were valid or not. And so we’ve developed this sense that sport is beyond reproach. If there are any problems associated with sport, it has to be due to bad apples that are involved in it, who are somehow incorrigible enough that they can’t learn the lessons that sport teaches, so we have to get rid of them.
That fits into American culture as a whole, and our emphasis on individualism, personal choice and individual responsibility. (...)
How has this culture trickled down to other aspects of society?
Because sport is a source of excitement, pleasure, and joy, we are less likely to critique it. Sport has also served the interests of powerful people within our culture. It reifies competition and the whole notion of meritocracy, of distributing rewards to the winners, and that people who are successful deserve success. It becomes tied to all sorts of important factors within our culture.
There is this whole sense of the connection between sport and development, for example— both individual development and community development—that gets used by people who want to use sport to further their own interests. For example, by getting $500 million of public money for a stadium that they used to generate private profits.
by Brian J. Barth, Nautilus | Read more:
Image: Gabriel Heusi / Brasil2016.gov.br / Wikipedia
National Health Care Struggling
[ed. See also: Obamacare’s Faltering for One Simple Reason: Profit.]
With the hourglass running out for his administration, President Barack Obama's health care law is struggling in many parts of the country. Double-digit premium increases and exits by big-name insurers have caused some to wonder whether "Obamacare" will go down as a failed experiment.
If Democrat Hillary Clinton wins the White House, expect her to mount a rescue effort. But how much Clinton could do depends on finding willing partners in Congress and among Republican governors, a real political challenge.
"There are turbulent waters," said Kathleen Sebelius, Obama's first secretary of Health and Human Services. "But do I see this as a death knell? No."
Next year's health insurance sign-up season starts a week before the Nov. 8 election, and the previews have been brutal. Premiums are expected to go up sharply in many insurance marketplaces, which offer subsidized private coverage to people lacking access to job-based plans.
At the same time, retrenchment by insurers that have lost hundreds of millions of dollars means that more areas will become one-insurer markets, losing the benefits of competition. The consulting firm Avalere Health projects that seven states will only have one insurer in each of their marketplace regions next year.
Administration officials say insurers set prices too low in a bid to gain market share, and the correction is leading to sticker shock. Insurers blame the problems on sicker-than-expected customers, disappointing enrollment and a premium stabilization system that failed to work as advertised. They also say some people are gaming the system, taking advantage of guaranteed coverage to get medical care only when they are sick.
Not all state markets are in trouble. What is more important, most of the 11 million people covered through HealthCare.gov and its state-run counterparts will be cushioned from premium increases by government subsidies that rise with the cost.
But many customers may have to switch to less comprehensive plans to keep their monthly premiums down. And millions of people who buy individual policies outside the government marketplaces get no financial help. They will have to pay the full increases or go without coverage and risk fines. (People with employer coverage and Medicare are largely unaffected.)
Tennessee's insurance commissioner said recently that the individual health insurance market in her state is "very near collapse." Premiums for the biggest insurer are expected to increase by an average of 62 percent. Two competitors will post average increases of 46 percent and 44 percent.
But because the spigot of federal subsidies remains wide open, an implosion of health insurance markets around the country seems unlikely. More than 8 out of 10 HealthCare.gov customers get subsidies covering about 70 percent of their total premiums. Instead, the damage is likely to be gradual. Rising premiums deter healthy people from signing up, leaving an insurance pool that's more expensive to cover each succeeding year.
by Ricardo Alonso-Zaldivar, AP | Read more:
Saturday, August 27, 2016
[ed. Ok, I've changed my mind about Trump (for World President). Warning! - you can't unwatch this.]
The World Wide Cage
I’d taken up blogging early in 2005, just as it seemed everyone was talking about ‘the blogosphere’. I’d discovered, after a little digging on the domain registrar GoDaddy, that ‘roughtype.com’ was still available (an uncharacteristic oversight by pornographers), so I called my blog Rough Type. The name seemed to fit the provisional, serve-it-raw quality of online writing at the time.
Blogging has since been subsumed into journalism – it’s lost its personality – but back then it did feel like something new in the world, a literary frontier. The collectivist claptrap about ‘conversational media’ and ‘hive minds’ that came to surround the blogosphere missed the point. Blogs were crankily personal productions. They were diaries written in public, running commentaries on whatever the writer happened to be reading or watching or thinking about at the moment. As Andrew Sullivan, one of the form’s pioneers, put it: ‘You just say what the hell you want.’ The style suited the jitteriness of the web, that needy, oceanic churning. A blog was critical impressionism, or impressionistic criticism, and it had the immediacy of an argument in a bar. You hit the Publish button, and your post was out there on the world wide web, for everyone to see.
Or to ignore. Rough Type’s early readership was trifling, which, in retrospect, was a blessing. I started blogging without knowing what the hell I wanted to say. I was a mumbler in a loud bazaar. Then, in the summer of 2005, Web 2.0 arrived. The commercial internet, comatose since the dot-com crash of 2000, was up on its feet, wide-eyed and hungry. Sites such as MySpace, Flickr, LinkedIn and the recently launched Facebook were pulling money back into Silicon Valley. Nerds were getting rich again. But the fledgling social networks, together with the rapidly inflating blogosphere and the endlessly discussed Wikipedia, seemed to herald something bigger than another gold rush. They were, if you could trust the hype, the vanguard of a democratic revolution in media and communication – a revolution that would change society forever. A new age was dawning, with a sunrise worthy of the Hudson River School. (...)
The millenarian rhetoric swelled with the arrival of Web 2.0. ‘Behold,’ proclaimed Wired in an August 2005 cover story: we are entering a ‘new world’, powered not by God’s grace but by the web’s ‘electricity of participation’. It would be a paradise of our own making, ‘manufactured by users’. History’s databases would be erased, humankind rebooted. ‘You and I are alive at this moment.’
The revelation continues to this day, the technological paradise forever glittering on the horizon. Even money men have taken sidelines in starry-eyed futurism. In 2014, the venture capitalist Marc Andreessen sent out a rhapsodic series of tweets – he called it a ‘tweetstorm’ – announcing that computers and robots were about to liberate us all from ‘physical need constraints’. Echoing Etzler (and Karl Marx), he declared that ‘for the first time in history’ humankind would be able to express its full and true nature: ‘we will be whoever we want to be.’ And: ‘The main fields of human endeavour will be culture, arts, sciences, creativity, philosophy, experimentation, exploration, adventure.’ The only thing he left out was the vegetables.
Such prophecies might be dismissed as the prattle of overindulged rich guys, but for one thing: they’ve shaped public opinion. By spreading a utopian view of technology, a view that defines progress as essentially technological, they’ve encouraged people to switch off their critical faculties and give Silicon Valley entrepreneurs and financiers free rein in remaking culture to fit their commercial interests. If, after all, the technologists are creating a world of superabundance, a world without work or want, their interests must be indistinguishable from society’s. To stand in their way, or even to question their motives and tactics, would be self-defeating. It would serve only to delay the wonderful inevitable.
The Silicon Valley line has been given an academic imprimatur by theorists from universities and think tanks. Intellectuals spanning the political spectrum, from Randian right to Marxian left, have portrayed the computer network as a technology of emancipation. The virtual world, they argue, provides an escape from repressive social, corporate and governmental constraints; it frees people to exercise their volition and creativity unfettered, whether as entrepreneurs seeking riches in the marketplace or as volunteers engaged in ‘social production’ outside the marketplace. As the Harvard law professor Yochai Benkler wrote in his influential book The Wealth of Networks (2006):
This new freedom holds great practical promise: as a dimension of individual freedom; as a platform for better democratic participation; as a medium to foster a more critical and self-reflective culture; and, in an increasingly information-dependent global economy, as a mechanism to achieve improvements in human development everywhere.
Calling it a revolution, he said, is no exaggeration.
Benkler and his cohort had good intentions, but their assumptions were bad. They put too much stock in the early history of the web, when the system’s commercial and social structures were inchoate, its users a skewed sample of the population. They failed to appreciate how the network would funnel the energies of the people into a centrally administered, tightly monitored information system organised to enrich a small group of businesses and their owners.
The network would indeed generate a lot of wealth, but it would be wealth of the Adam Smith sort – and it would be concentrated in a few hands, not widely spread. The culture that emerged on the network, and that now extends deep into our lives and psyches, is characterised by frenetic production and consumption – smartphones have made media machines of us all – but little real empowerment and even less reflectiveness. It’s a culture of distraction and dependency. That’s not to deny the benefits of having easy access to an efficient, universal system of information exchange. It is to deny the mythology that shrouds the system. And it is to deny the assumption that the system, in order to provide its benefits, had to take its present form.
Late in his life, the economist John Kenneth Galbraith coined the term ‘innocent fraud’. He used it to describe a lie or a half-truth that, because it suits the needs or views of those in power, is presented as fact. After much repetition, the fiction becomes common wisdom. ‘It is innocent because most who employ it are without conscious guilt,’ Galbraith wrote in 1999. ‘It is fraud because it is quietly in the service of special interest.’ The idea of the computer network as an engine of liberation is an innocent fraud.
I love a good gizmo. When, as a teenager, I sat down at a computer for the first time – a bulging, monochromatic terminal connected to a two-ton mainframe processor – I was wonderstruck. As soon as affordable PCs came along, I surrounded myself with beige boxes, floppy disks and what used to be called ‘peripherals’. A computer, I found, was a tool of many uses but also a puzzle of many mysteries. The more time you spent figuring out how it worked, learning its language and logic, probing its limits, the more possibilities it opened. Like the best of tools, it invited and rewarded curiosity. And it was fun, head crashes and fatal errors notwithstanding.
In the early 1990s, I launched a browser for the first time and watched the gates of the web open. I was enthralled – so much territory, so few rules. But it didn’t take long for the carpetbaggers to arrive. The territory began to be subdivided, strip-malled and, as the monetary value of its data banks grew, strip-mined. My excitement remained, but it was tempered by wariness. I sensed that foreign agents were slipping into my computer through its connection to the web. What had been a tool under my own control was morphing into a medium under the control of others. The computer screen was becoming, as all mass media tend to become, an environment, a surrounding, an enclosure, at worst a cage. It seemed clear that those who controlled the omnipresent screen would, if given their way, control culture as well.
by Nicholas Carr, Aeon | Read more:
Image: Albert Gea/Reuters
Radical Flâneuserie
I started noticing the ads in the magazines I read. Here is a woman in an asymmetrical black swimsuit, a semitransparent palm tree superimposed on her head, a pink pole behind her. Here is a woman lying down, miraculously balanced on some kind of balustrade, in a white button-down, khaki skirt, and sandals, the same dynamic play of light and palm trees and buildings around her. In the top-right corner, the words Dans l’oeil du flâneur—“in the eye of the flâneur”—and beneath, the Hermès logo. The flâneur through whose “eye” we’re seeing seems to live in Miami. Not a well-known walking city, but why not—surely flânerie needn’t be confined to melancholic European capitals.
The theme was set by Hermès’s artistic director, Pierre-Alexis Dumas. While the media coverage of the campaign and the traveling exhibition that complemented it breathlessly adopted the term, Dumas gave a pretty illuminating definition of it. Flânerie, he explained, is not about “being idle” or “doing nothing.” It’s an “attitude of curiosity … about exploring everything.” It flourished in the nineteenth century, he continued, as a form of resistance to industrialization and the rationalization of everyday life, and “the roots of the spirit of Hermès are in nineteenth-century Flânerie.” This is pretty radical rhetoric for the director of a luxury-goods company with a €4.1 billion yearly revenue. Looking at the ads, as well as the merchandise—including an eight-speed bicycle called “The Flâneur” that retailed for $11.3k—it seems someone at Hermès didn’t share, or understand, Dumas’s vision.
There’s something so attractive about wandering aimlessly through the city, taking it all in (especially if we’re wearing Hermès while we do it). We all, deep down, want to detach from our lives. The flâneur, since everyone wants to be one, has a long history of being many different things to different people, to such an extent that the concept has become one of these things we point to without really knowing what we mean—a kind of shorthand for urban, intellectual, curious, cosmopolitan. This is what Hermès is counting on: that we will associate Hermès products with those values and come to believe that buying them will reinforce those aspects of ourselves.
The earliest mention of a flâneur is in the late sixteenth century, possibly borrowed from the Scandinavian flana, “a person who wanders.” It fell largely out of use until the nineteenth century, and then it caught on again. In 1806, an anonymous pamphleteer wrote of the flâneur as “M. Bonhomme,” a man-about-town who comes from sufficient wealth to be able to have the time to wander the city at will, taking in the urban spectacle. He hangs out in cafés and watches the various inhabitants of the city at work and at play. He is interested in gossip and fashion, but not particularly in women. In an 1829 dictionary, a flâneur is someone “who likes to do nothing,” someone who relishes idleness. Balzac’s flâneur took two main forms: the common flâneur, happy to aimlessly wander the streets, and the artist-flâneur, who poured his experiences in the city into his work. (This was the more miserable type of flâneur, who, Balzac noted in his 1837 novel César Birotteau, “is just as frequently a desperate man as an idle one.”) Baudelaire similarly believed that the ultimate flâneur, the true connoisseur of the city, was an artist who “sang of the sorry dog, the poor dog, the homeless dog, the wandering dog [le chien flâneur].” Walter Benjamin’s flâneur, on the other hand, was more feral, a figure who “completely distances himself from the type of the philosophical promenader, and takes on the features of the werewolf restlessly roaming a social wildness,” he wrote in the late 1930s. An “intoxication” comes over him as he walks “long and aimlessly through the streets.”
And so the flâneur shape-shifts according to time, place, and agenda. If he didn’t exist, we would have had to invent him to embody our fantasies about nineteenth-century Paris—or about ourselves, today.
Hermès is similarly ambiguous about who, exactly, the flâneur in their ads is. Is he the man (or woman?) looking at the woman on the balustrade? Or is she the flâneur, too? Is the flâneur the photographer, or the (male?) gaze he represents? Is there a flâneuse, in Hermès’ version? Are we looking at her? Are we—am I, holding the magazine—her?
But I can’t be, because I’m the woman holding the magazine, being asked to buy Hermès products. I click through the pictures of the exhibition Hermès organized on the banks of the Seine, Wanderland, and one of the curiosities on view—joining nineteenth-century canes, an array of ties, an Hermès purse handcuffed to a coatrack—is an image of an androgynous person crossing the road, holding a stack of boxes so high he or she can’t see around them. Is this flânerie, Hermès-style?
Many critics over the years have argued that shopping was at odds with the idle strolling of the flâneur: he walked the arcades, the glass-roofed shopping streets that were the precursor to the department store, but he did not shop. Priscilla Parkhurst Ferguson, writing on the flâneur in her book Paris as Revolution, argues that women could not flâner because women who were shopping in the grands magasins were caught in an economy of spectacle, being tricked into buying things, and having their desires stimulated. By contrast the flâneur’s very raison d’être was having no reason whatsoever.
Before the twentieth century, women did not have the freedom to wander idly through the streets of Paris. The only women with the freedom to circulate (and a limited freedom at that) were the streetwalkers and ragpickers; Baudelaire’s mysterious and alluring passante, immortalized in his poem “To a (Female) Passer-by,” is assumed to have been a woman of the night. Even the word flâneuse doesn’t technically exist in French, except, according to an 1877 dictionary entry, to designate a kind of lounge chair. (So Hermès’s woman reclining on a balustrade was right on the money, for the late nineteenth century.)
But why must the flâneuse be restricted to being a female version of a male concept, especially when no one can agree on what the flâneur is anyway? Why not look at what women were actually doing on the city streets? What could the flâneuse look like then?
by Lauren Elkin, Paris Review | Read more:
Image: John Singer Sargent, A Street in Venice