Monday, June 30, 2014
Ernest Hemingway’s Summer Camping Recipes
With regard to writing, Ernest Hemingway was a man of simple tastes. Were I to employ a metaphor, I’d describe Hem as the kind of guy who’d prefer an unadorned plum from William Carlos Williams’ icebox to Makini Howell’s Pesto Plum Pizza with Balsamic Arugula.
Don’t mistake that metaphor for real life, however. Judging by his 1920 Toronto Star how-to on maximizing comfort on camping vacations, he would not have stood for charred weenies and marshmallows on a stick. Rather, a little cookery know-how was something for a man to be proud of:
“…a frying pan is a most necessary thing to any trip, but you also need the old stew kettle and the folding reflector baker.”
Clearly, the man did not trust readers to independently seek out such sources as The Perry Ladies’ Cookbook of 1920 for instructions. Instead, he painstakingly details his method for successful preparation of Trout Wrapped in Bacon, including his preferred brands of vegetable shortening.
Would your mouth water less if I told you that literary food blog Paper and Salt has updated Hem’s trout recipe à la Emeril Lagasse, omitting the Crisco and tossing in a few fresh herbs? No campfire required. You can get ‘er done in the broiler:
Bacon-Wrapped Trout: (adapted from Emeril Lagasse)
2 (10-ounce) whole trout, cleaned and gutted
1/2 cup cornmeal
Salt and ground pepper, to taste
8 sprigs fresh thyme
1 lemon, sliced
6 slices bacon
Fresh parsley, for garnish
1. Preheat broiler and set oven rack 4 to 6 inches from heat. With a paper towel, pat trout dry inside and out. Dredge outside of each fish in cornmeal, then season cavity with salt and pepper. Place 4 sprigs of thyme and 2 lemon slices inside each fish.
2. Wrap 3 bacon slices around the middle of each fish, so that the edges overlap slightly. Line a roasting pan with aluminum foil, and place fish on pan. Broil until bacon is crisp, about 5 minutes. With a spatula, carefully flip fish over and cook another 5 minutes, until flesh is firm.
by Ayun Halliday, Open Culture | Read more:
Image:Chowstalker
The Bait-And-Switch Behind Today’s Hobby Lobby Decision
For many years, the Supreme Court struck a careful balance between protecting religious liberty and maintaining the rule of law in a pluralistic society. Religious people enjoyed a robust right to practice their own faith and to act according to the dictates of their own conscience, but they could not wield religious liberty claims as a sword to cut away the legal rights of others. This was especially true in the business context. As the Supreme Court held in United States v. Lee, “[w]hen followers of a particular sect enter into commercial activity as a matter of choice, the limits they accept on their own conduct as a matter of conscience and faith are not to be superimposed on the statutory schemes which are binding on others in that activity.”
With Monday’s decision in Burwell v. Hobby Lobby, however, this careful balance has been upended. Employers who object to birth control on religious grounds may now refuse to comply with federal rules requiring them to include contraceptive care in their health plans. The rights of the employer now trump the rights of the employee.
To achieve this outcome, Justice Samuel Alito’s opinion on behalf of a bare majority of the Court engages in a kind of legalistic bait-and-switch. It takes a law Congress enacted to serve one limited purpose, and expands that law to suit Hobby Lobby’s much more expansive purpose.
In its 1963 decision in Sherbert v. Verner, the Court announced that laws that impose an “incidental burden on the free exercise of [a person of faith's] religion” may only be applied to them if the law is “justified by a ‘compelling state interest in the regulation of a subject within the State’s constitutional power to regulate.’” As anyone who has studied constitutional law will immediately recognize, this “compelling state interest” framework is the language judges use when they wish to invoke a test known as “strict scrutiny” — the highest test that exists under American constitutional law. Typically, laws that are subjected to strict scrutiny fare very badly. Strict scrutiny is the constitutional standard used to evaluate laws that discriminate on the basis of race, for example, and it only permits laws to be enforced when they further a compelling government interest and when they use the least restrictive means of doing so.
It soon became clear, however, that when the Court considered religious liberty claims it was actually engaged in something much less rigorous than strict scrutiny. As Professor Adam Winkler documented, courts uphold less than one-third of all laws they subject to strict scrutiny — yet they rejected 59 percent of religious liberty claims. A different study reached even starker results, determining that nearly 88 percent of religious liberty plaintiffs lost under the standard announced in Sherbert.
The most likely explanation for this fact is that Sherbert and its progeny were careful to maintain the balance between religious liberty and third parties’ rights. In Sherbert itself, the justices emphasized that they were siding with a plaintiff who claimed a religious liberty right not to work on Saturday because “the recognition of the appellant’s right” did not “serve to abridge any other person’s religious liberties.” Less than a decade later, in a case called Wisconsin v. Yoder, the Court once again emphasized that it was exempting an Amish family from a law making school attendance mandatory because it did not perceive any harms to third parties. “This case,” the Court explained, “is not one in which any harm to the physical or mental health of the child or to the public safety, peace, order, or welfare has been demonstrated or may be properly inferred.”
Ten years after that, the Court decided the Lee case, with its proclamation that a business owner’s own religious views “are not to be superimposed on the statutory schemes which are binding on others” engaged in a similar business. Allowing an employer to ignore a law protecting its employees, the Court explained, “operates to impose the employer’s religious faith on the employees.”
In 1990, however, the Court briefly narrowed the protections offered to people who object to laws on religious grounds in an opinion authored by Justice Antonin Scalia. This unpopular decision inspired the Religious Freedom Restoration Act (RFRA), which formed the basis of Hobby Lobby’s legal claim. Yet the purpose of RFRA was not to change the longstanding balance between religious liberty and the rights of third parties. Rather, it was to restore the many decades of religious liberty law that began with the Sherbert opinion. Indeed, RFRA explicitly states that its purpose is to “restore the compelling interest test as set forth in Sherbert v. Verner [] and Wisconsin v. Yoder [].”
Justice Alito’s opinion, however, tosses this explicit statement of congressional purpose aside, although he offers little explanation for why he is justified in doing so. His best effort is a reference to a 2000 law that amended RFRA’s definition of an “exercise of religion” to take out an explicit reference to the First Amendment. According to Alito, the purpose of this amendment was “an obvious effort to effect a complete separation from First Amendment case law” as laid out by cases like Sherbert and Yoder. Yet it is difficult to square this interpretation with the fact that the RFRA statute still provides that its purpose is to “restore the compelling interest test as set forth” in Sherbert and Yoder.
The upshot of Alito’s opinion is that, for the first time in American history, people with religious objections to the law will be able to ignore many laws with impunity unless the government’s decision to enforce the law overcomes a very high legal bar that few laws survive. The full implications of Hobby Lobby, however, may not be known for years. When cases like Sherbert, Yoder and Lee were still good law at the federal level, plaintiffs claiming religious liberty argued that they could engage in race discrimination and discrimination against women, and they also claimed immunity from paying Social Security taxes and the minimum wage. Though the Supreme Court probably isn’t ready to revisit these cases, religious business owners are likely to find many other regulations they can now object to on religious grounds. And all of these objections will come to court with a vigorous tailwind.
by Ian Millhiser, Think Progress | Read more:
Image: Sy Mukherjee
The 'Internet's Own Boy' Free on Internet Archive
by Cory Doctorow, Boing Boing | Read more:
Video: Internet Archive
I Would Reunite 4 U: Prince’s Private Paisley Park Concert for Apollonia
[ed. Prince shreds.]
Eye want 2 tell U a story. Once upon a time in the land of Sinaplenty, there lived a Prince named Prince, who was always looking 4 his princess. Then he met a beautiful girl named Patricia Apollonia Kotero and cast her as the lead in his 1984 movie, Purple Rain. He dubbed her simply “Apollonia,” and after the departure of Denise “Vanity” Matthews, he assigned the remaining members of Vanity 6 (Susan Moonsie and Brenda Bennett) to be Apollonia’s backup singers in a new group called Apollonia 6.
Despite their intense connection on the silver screen in Purple Rain, Prince and Apollonia never had a romantic relationship in real life. Kotero was married, but her relationship status was kept a secret in order to better sell her image as a vixen. In addition to rumors about her relationship with Prince, the tabloids also linked Apollonia to Lorenzo Lamas and David Lee Roth. Prince had intended to give Apollonia 6 songs including “Manic Monday,” “Take Me With U,” and “The Glamorous Life” but soon realized that Apollonia was not a very good technical singer. She also hadn’t planned on being a Prince girl for all that long, and she did not intend to stay with Apollonia 6 after her contractual obligations to make an album and do Purple Rain were completed. Prince wrote all the songs on the group’s lone album but credited them to the group’s members, attributing the lead single, “Sex Shooter,” to Kotero herself.
But this was all long ago. Many years (30) have passed since Prince convinced Apollonia to purify herself in the waters of Lake Minnetonka. But some of us have held a torch for Prince and Apollonia all of these years. The hot, purple chemistry they had in Purple Rain was just 2 iconic. And for those some of us, on Saturday night, our dreams came true. Prince took Apollonia home 2 his kingdom of Paisley Park in Minnesota for the first time.
Paisley Park Studios, named after a song on 1985’s Around the World in a Day and Prince’s (now defunct) Paisley Park Records label, is a mysterious, magical $10 million complex that only a select few get to enter, and only by invitation from Prince. Essentially it is the Willy Wonka’s chocolate factory of funk. What is known about Paisley Park is that it has a Granite Room and a Wood Room that provide different acoustics for recording, and a complete soundstage that Prince used to shoot much of Purple Rain follow-up Graffiti Bridge. Prince still rehearses all of his tours on the Paisley Park soundstage, and it’s also been used as a practice space by the Beastie Boys, the Bee Gees, Neil Young, Kool & The Gang, and the Muppets. Paisley Park also contains “The Vault,” where Prince stores everything he has ever made, including B-sides, outtakes, and jam sessions he deemed 2 b 2 funky 4 human ears.
by Molly Lambert, Grantland | Read more:
Image: YouTube
Stash Pad
The buyer, an Italian, was in town for a week, with a million or so dollars to spend. We met one Sunday morning at 20 Pine, a Financial District condo building. She wore a red scarf, jangly jewelry, and a pair of lime-green sunglasses perched atop her curly hair, and she told me she would prefer to remain anonymous. Working through a shell company, she was looking to anchor some of her wealth in an advantageous port: New York City.
The building’s lobby, designed in leathery tones by Armani, swirled with polylingual property talk. As the Italian and I waited for her broker, an Asian man sitting on a couch next to us asked, “You see the apartment?” But he didn’t wait for an answer, leaping up to join a handful of women speaking a foreign language heading toward the elevators.
After a few minutes, a fashionably stubbled young man swung through 20 Pine’s revolving door: Santo Rosabianca, a broker with Wire International Realty. The firm, run by Rosabianca’s brother Luigi, an attorney, specializes in catering to overseas investors. A first-generation American, Santo greeted the buyer with kisses and briefed her in Italian. She was searching for a property that would generate substantial rental income. “Wall Street is not my favorite place,” she told me. “But he says it is very good for rent.”
Like several other buildings she was being shown, 20 Pine was developed at the height of the real-estate bubble. After the crash of 2008, it became an emblematic disaster, with the developers selling units in bulk at desperation prices, until opportunistic foreigners swooped in with cash offers. The salvage deals are long gone, but 20 Pine retains its international appeal. The one-bedroom the Italian was looking at, on the 27th floor, had a view of the Woolworth Building, sleek finishes, a bachelor-size kitchen, and access to an exclusive terrace reserved for upper-floor residents. It was first purchased by an investment banker in early 2008 for $1.3 million, was resold in 2011 for $850,000, and was now back on the market for close to its prerecession price. Rosabianca told the Italian it would rent for more than $4,000 a month, enough to assure a healthy cash flow while its value appreciated. “There’s really no safer way to get that kind of return,” he said, “than in New York City real estate.”
This is not exactly true—there’s plenty of risk in real estate, as the original crop of purchasers at 20 Pine discovered—but that hardly dampens the enthusiasm of foreign buyers, who have become an overpowering force in New York’s real-estate market. According to data compiled by the firm PropertyShark, since 2008, roughly 30 percent of condo sales in large-scale Manhattan developments have been to purchasers who either listed an overseas address or bought through an entity like a limited-liability corporation, a tactic rarely employed by local homebuyers but favored by foreign investors. Similarly, the firm Corcoran Sunshine, which markets luxury buildings, estimates that 35 percent of its sales since 2013 have been to international buyers, half from Asia, with the remainder roughly evenly split among Latin America, Europe, and the rest of the world. “The global elite,” says developer Michael Stern, “is basically looking for a safe-deposit box.” (...)
And so New Yorkers with garden-variety affluence—the kind of buyers who require mortgages—are facing disheartening price wars as they compete for scarce inventory with investors who may seldom even turn on a light switch. The Census Bureau estimates that 30 percent of all apartments in the quadrant from 49th to 70th Streets between Fifth and Park are vacant at least ten months a year.
To cater to the tastes of their transient residents, developers are designing their projects with features like hotel-style services. And the new economy has spawned new service businesses, like XL Real Property Management, which takes care of all the niggling details—repairs, insurance, condo fees—for absentee buyers. “I feel like foreign investors have gotten a bad rap,” says Dylan Pichulik, XL’s boyish chief executive, who recently took me to see a $15,000-a-month rental at the Gretsch, a condo building in Williamsburg, which he oversees for a Russian owner. “Because, you know, They’re evil, they’re coming in to buy all our real estate. But it’s a major driver of the market right now.”
Even those with less reflexively hostile reactions to foreign buying competition might still wonder: Who are these people? An entire industry of brokers, lawyers, and tight-lipped advisers exists largely to keep anyone from discovering the answer. This is because, while New York real estate has significant drawbacks as an asset—it’s illiquid and costly to manage—it has a major selling point in its relative opacity. With a little creative corporate structuring, the ownership of a New York property can be made as untraceable as a numbered bank account. And that makes the city an island haven for those who want to stash cash in an increasingly monitored global financial system. “With everything that is going on in Switzerland in terms of transparency, people are being forced to pay taxes on their capital that they used to hold there,” says Rodrigo Nino, the president of the Prodigy Network. “Real estate is a great alternative.”
Those on the New York end of the transaction often don’t know—or don’t care to find out—the exact derivation of foreign money involved in these transactions. “Sometimes they come in with wires,” says Luigi Rosabianca. “Sometimes they come in with suitcases.” Most of the time, the motivation behind this movement of cash, and buyers’ desire for privacy, is legitimate, but sometimes it’s not. An inquiry by the International Consortium of Investigative Journalists, a Washington-based nonprofit, has uncovered numerous cases in which New York real estate figured in foreign financial- and political-corruption scandals. “It’s something that is never discussed, but it’s the elephant in the room,” says Rosabianca. “Real estate is a wonderful way to cleanse money. Once you buy real estate, the derivation of that cash is forgotten.”
by Andrew Rice, NY Magazine | Read more:
Image: uncredited
Sunday, June 29, 2014
The Rise of the Personal Power Plant
At first glance, downtown Fort Collins, Colorado, looks like a sweet anachronism. Beautifully preserved 19th-century buildings beckon from leafy streets. A restored trolley car ding-dings its way along Mountain Avenue. It’s safe and spotless, vibrant and unrushed.
And yet this quaint district is ground zero for one of the most ambitious energy agendas of any municipality in the United States. Fort Collins, population 150,000, is trying to do something that no other community of its size has ever done: transform its downtown into a net-zero-energy district, meaning it will consume no more energy in a given year than it generates. And the city as a whole is aiming to reduce its carbon emissions by 80 percent by 2030, on the way to being carbon neutral by midcentury. To make all that happen, engineers there are preparing to aggressively deploy an array of advanced energy technologies, including combined-cycle gas turbines to replace aging coal-fired plants, as well as rooftop solar photovoltaics, community-supported solar gardens, wind turbines, thermal and electricity storage, microgrids, and energy-efficiency schemes.
It’s an audacious plan. But for Fort Collins Utilities, the local electric company, the less daring options were unacceptable. Like utilities all over the world, it is grappling with the dissolution of the traditional regulated-monopoly model of electricity production, with its single, centralized decision maker. The costs of solar and wind electricity generation have fallen to the point that countless consumers in many countries now produce their own electricity, often (but not always) with the blessing of regulators and policymakers. (...)
Customers are paying dearly for those upgrades: Electricity rates in Germany have doubled since 2002, to about 40 U.S. cents per kilowatt-hour. That’s more than four times the price of electricity in Illinois. Many other countries are now learning from these experiences, Kroposki adds, “to make sure that solar and wind systems integrate with the grid in ways that help overall system stability.”
The electricity industry is undergoing the same sort of fundamental change that has already transformed telecommunications and computing, says Clark Gellings, a fellow at the Electric Power Research Institute (EPRI), in Palo Alto, Calif. Recall the heyday of the telephone landline, when a monopoly provided reliable service, with few bells and no whistles. Today, a multitude of telecom providers offer more wired and wireless options and services than most people, frankly, care to contemplate. Computers, similarly, used to mean giant mainframes accessed via remote terminals. But when CPUs and memory became cheap enough and powerful enough, people could own their own computers, access and exchange information via the Internet, and leverage the power of distributed computation in the cloud.
Gellings envisions an analogue for electricity that he calls the ElectriNet: a highly interconnected and interactive network of power systems that also combines telecommunications, the Internet, and e-commerce. (Gellings first unveiled the then-heretical notion of electricity customers managing their own usage—a concept he called “demand-side load management”—in the December 1981 issue of IEEE Spectrum.) Such a network will allow traditional utilities to intelligently connect with individual households, service providers, and as yet unforeseen electricity players, fostering the billions of daily electricity “transactions” that will take place between generators and consumers. Smart appliances in the home will be able to respond to changes in electricity prices automatically by, for instance, turning themselves off or on as prices rise or fall. The ElectriNet will also allow for home security, data and communication services, and the like. [Listen to a podcast interview with Gellings on the future of the power grid.]
In addition, Gellings says, advanced sensors deployed throughout the network will let grid operators visualize the power system in real time, a key capability for detecting faults, physical attacks, and cyberattacks and for preventing or at least mitigating outages.
While distributed generation is already taking hold in many places, Gellings notes, “we have to move toward a truly integrated power system. That’s a system that makes the best use of distributed and central resources—because central power generation is not going to go away, although it may change in shape and form.” [For more on the undesirability of grid defection, see the sidebar, “The Slow Death of the Grid.”]
A highly intelligent and agile network that can handle the myriad transactions taking place among hundreds of thousands or even millions of individual energy producers and consumers isn’t just desirable, say experts. It has to happen, because the alternative would be grim.
Just ask the Germans. Generous subsidies, called feed-in tariffs, for renewable energy resulted in the country adding 30 gigawatts of solar and 30 gigawatts of wind power in just a few years. On a bright breezy day at noon, renewables can account for more than half of Germany’s generated electricity.
“That sounds like a good thing, but to the utility, it looked like a huge negative load,” notes Benjamin Kroposki, director of energy systems integration at the National Renewable Energy Laboratory in Golden, Colo. When a large amount of renewable power is being generated, the output of conventional central power plants is correspondingly reduced to keep the system balanced. But if a local outage or a voltage spike or some other grid disturbance occurs, protective circuitry quickly shuts down the photovoltaics’ inverters. (Inverters are semiconductor-based systems that convert the direct current from the solar cells to alternating current.) And that in turn can lead to cascading systemwide instabilities.
“If you lose 30 gigawatts in just 10 cycles”—two-tenths of a second, that is—“you can’t ramp up conventional generators quickly enough to compensate,” Kroposki notes. So the Germans had to spend the equivalent of hundreds of millions of dollars on smarter inverters and communication links that would allow the PV arrays to automatically ride through any disturbances rather than simply shut down.
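The “10 cycles” figure follows directly from grid frequency. A minimal sketch of the conversion, assuming Germany’s 50 Hz grid (which is what makes the quote’s arithmetic work out):

```python
def cycles_to_seconds(cycles, grid_hz=50.0):
    """Convert a number of AC cycles to seconds at a given grid frequency."""
    return cycles / grid_hz

# On a 50 Hz European grid, 10 cycles is 10 / 50 = 0.2 s,
# matching the "two-tenths of a second" in Kroposki's quote.
# (On a 60 Hz North American grid, 10 cycles would be ~0.167 s.)
print(cycles_to_seconds(10))
```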
Customers are paying dearly for those upgrades: Electricity rates in Germany have doubled since 2002, to about 40 U.S. cents per kilowatt-hour. That’s more than four times the price of electricity in Illinois. Many other countries are now learning from these experiences, Kroposki adds, “to make sure that solar and wind systems integrate with the grid in ways that help overall system stability.”
by Jean Kumagai, IEEE Spectrum | Read more:
Image: New Belgium Brewing
Saturday, June 28, 2014
The Disconnect
As reliably as autumn brings Orion to the night sky, spring each year sends a curious constellation to the multiplex: a minor cluster of romantic comedies and the couples who traipse through them, searching for love. These tend not to be people who have normal problems. She is poised, wildly successful in an ulcer-making job, lonely. He is sensitive, creative, equipped with a mysteriously vast apartment, unattached. For all these resources, nothing can allay their solitude. He tries to cook. She collects old LPs. He seeks love in the arms of chatty narcissists. She pulls all-nighters in her office. Eventually, her best friend, who may also be her divorced mother, tells her that something needs to change: she’s squandering her golden years; she’ll end up forlorn and alone. Across town, his stout buddy, who is married to someone named Debbee, rhapsodizes about the pleasures of cohabitation. None of this is helpful. As the movie’s first act nears its end point, we spy our heroine in the primal scene of rom-com solitude: curled up on her couch, wearing lounge pants, quaffing her third glass of wine, and excavating an enormous box of Dreyer’s. She is watching the same TV show that he is (whiskey half drained on his coffee table, Chinese takeout in his lap), and although this fact assures us of a destined romance, it is not so useful for the people on the screen. They are alone; their lives are grim. The show they’re watching seems, from the explosive flickering, to be about the invasion of Poland.
Few things are less welcome today than protracted solitude—a life style that, for many people, has the taint of loserdom and brings to mind such characters as Ted Kaczynski and Shrek. Does aloneness deserve a less untoward image? Aside from monastic seclusion, which is just another way of being together, it is hard to come up with a solitary life that doesn’t invite pity, or an enviable loner who’s not cheating the rules. (Even Henry David Thoreau, for all his bluster about solitude, ambled regularly into Concord for his mother’s cooking and the local bars.) Meanwhile, the culture’s data pool is filled with evidence of virtuous togetherness. “The Brady Bunch.” The March on Washington. The Yankees, in 2009. Alone, we’re told, is where you end up when these enterprises go south.
And yet the reputation of modern solitude is puzzling, because the traits enabling a solitary life—financial stability, spiritual autonomy, the wherewithal to buy more dishwashing detergent when the box runs out—are those our culture prizes. Plus, recent demographic shifts suggest that aloneness, far from fading out in our connected age, is on its way in. In 1950, four million people in this country lived alone. These days, there are almost eight times as many, thirty-one million. Americans are getting married later than ever (the average age of first marriage for men is twenty-eight), and bailing on domestic life with alacrity (half of modern unions are expected to end in divorce). Today, more than fifty per cent of U.S. residents are single, nearly a third of all households have just one resident, and five million adults younger than thirty-five live alone. This may or may not prove a useful thing to know on certain Saturday nights.
Eric Klinenberg, a sociologist at New York University, has spent the past several years studying aloneness, and in his new book, “Going Solo: The Extraordinary Rise and Surprising Appeal of Living Alone” (Penguin), he approaches his subject as someone baffled by these recent trends. Klinenberg’s initial encounter with the growing ranks of singletons, he explains, came while researching his first book, about the Chicago heat wave of 1995. During that crisis, hundreds of people living alone died, not just because of the heat but because their solitary lives left them without a support network. “Silently, and invisibly, they had developed what one city investigator who worked with them regularly called ‘a secret society of people who live and die alone,’ ” Klinenberg writes.
“Going Solo” is his attempt to see how this secret society fares outside the crucible of natural disaster. For seven years, Klinenberg and his research team interviewed more than three hundred people living alone, plus many of the caretakers, planners, and designers who help make that solitary life possible. Their sample included single people in everything from halfway hotels to elder-care facilities, and drew on fieldwork conducted primarily in seven cities: Austin, Texas; Chicago; Los Angeles; New York; San Francisco; Washington, D.C.; and Stockholm.
The results were surprising. Klinenberg’s data suggested that single living was not a social aberration but an inevitable outgrowth of mainstream liberal values. Women’s liberation, widespread urbanization, communications technology, and increased longevity—these four trends lend our era its cultural contours, and each gives rise to solo living. Women facing less pressure to stick to child care and housework can pursue careers, marry and conceive when they please, and divorce if they’re unhappy. The “communications revolution” that began with the telephone and continues with Facebook helps dissolve the boundary between social life and isolation. Urban culture caters heavily to autonomous singles, both in its social diversity and in its amenities: gyms, coffee shops, food deliveries, laundromats, and the like ease solo subsistence. Age, thanks to the uneven advances of modern medicine, makes loners of people who have not previously lived by themselves. By 2000, sixty-two per cent of the widowed elderly were living by themselves, a figure that’s unlikely to fall anytime soon.
What turns this shift from demographic accounting to a social question is the pursuit-of-happiness factor: as a rule, do people live alone because they want to or because they have to?
by Nathan Heller, New Yorker | Read more:
Image: Jean-Francois Martin
Why Free Parking is Bad for Everyone
Over the past century, we've come to regard parking as a basic public good that should be freely shared — partly because of the sheer historical accident that parking meters didn't come along until the 1930s, a few decades after the car.
"By then, the custom of free parking was well-established," Shoup says. "It's hard to start charging people for something that the government owns and had been free." Consequently, parking is still free, he calculates, for 99 percent of all car trips made in the country.
But a parking spot, unlike things we normally consider to be public goods, is finite. It can only be used by one car at a time. So if we let the market set the price, in cities, it'd certainly go above zero — and there's not really any compelling reason why it alone should be kept free. "We pay for everything else about our cars — the car itself, the gas, the tires, the insurance," Shoup says. "Why is it that parking should be different?"
One counterexample you might point to is roads, which are ostensibly free public goods. But there are mechanisms in place, such as the gas tax, that try to ensure roads are largely paid for by automobile users, in proportion to their use of them. The gas tax system may be broken, but it still reflects the idea that car drivers should pay for the roads.
When we find an open spot on the street, and there's no meter, it seems free — but it too is the result of government spending. The cost of the land, pavement, street cleaning, and other services related to free parking spots come directly out of tax dollars (usually municipal or state funding sources). Each on-street parking space is estimated to cost around $1,750 to build and $400 to maintain annually.
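The per-space figures imply a rough annual subsidy for each "free" curb spot. A back-of-envelope sketch, where the 15-year pavement lifetime is an assumption of mine, not a figure from the article:

```python
def annual_cost_per_space(build_cost=1750.0, annual_maintenance=400.0,
                          lifetime_years=15):
    """Rough annualized cost of one on-street parking space:
    straight-line amortization of construction plus yearly maintenance.
    Defaults use the article's $1,750 build / $400 maintenance figures;
    the 15-year lifetime is an illustrative assumption."""
    return build_cost / lifetime_years + annual_maintenance

# Roughly $500+ per space per year under these assumptions,
# paid out of tax dollars rather than by the drivers who park there.
print(round(annual_cost_per_space(), 2))
```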
"That parking doesn't just come out of thin air," Shoup says. "So this means people who don't own cars pay for other peoples' parking. Every time you walk somewhere, or ride a bike, or take a bus, you're getting shafted."
All our free street parking also leads to a secondary problem: most city governments (with the exception of New York, San Francisco, and a few other dense cities) require all new buildings to include a specified number of added parking spaces — partly because otherwise, the free street parking would be swamped by new residents. "In most of the country, you can't build a new apartment building without two parking spaces per unit," Shoup says.
This too costs money. In Washington DC, the underground spots many developers build to comply with these minimum requirements cost between $30,000 and $50,000 each. Whether they're constructed along with apartment buildings or shopping complexes, this cost ultimately gets passed along to consumers, in the form of rent or the price of goods.
"Wherever you go — a grocery store, say — a little bit of the money you pay for products is siphoned away to pay for parking," Shoup says. "My idea is simple: if somebody doesn't have a car, they shouldn't have to pay for parking."
If just the way we paid for free parking were unfair, it might not be all that big of a deal. But there are a few totally unrelated and negative consequences of keeping parking free.
"By then, the custom of free parking was well-established," Shoup says. "It's hard to start charging people for something that the government owns and had been free." Consequently, parking is still free, he calculates, for 99 percent of all car trips made in the country.
But a parking spot, unlike things we normally consider to be public goods, is finite. It can only be used by a one car at a time. So if we let the market set the price, in cities, it'd certainly go above zero — and there's not really any compelling reason why it alone should be kept free. "We pay for everything else about our cars — the car itself, the gas, the tires, the insurance," Shoup says. "Why is it that parking should be different?"
One counterexample you might point to are roads, which are ostensibly free public goods. But there are mechanisms in place, such as the gas tax, that try to ensure roads are largely paid for by automobile users, in proportion to their use of it. The gas tax system may be broken, but it still reflects the idea that car drivers should pay for the roads.
When we find an open spot on the street, and there's no meter, it seems free — but it too is the result of government spending. The cost of the land, pavement, street cleaning, and other services related to free parking spots come directly out of tax dollars (usually municipal or state funding sources). Each on-street parking space is estimated to cost around $1,750 to build and $400 to maintain annually.
"That parking doesn't just come out of thin air," Shoup says. "So this means people who don't own cars pay for other peoples' parking. Every time you walk somewhere, or ride a bike, or take a bus, you're getting shafted."
All our free street parking also leads to secondary problem: most city governments (with the exception of New York, San Francisco, and a few other dense cities) require all new buildings to include specified large numbers of added parking spaces — partly because otherwise, the free street parking would be swamped by new residents. "In most of the country, you can't build a new apartment building without two parking spaces per unit," Shoup says.
This too costs money. In Washington DC, the underground spots many developers build to comply with these minimum requirements cost between $30,000 and $50,000 each. Whether they're constructed along with apartment buildings or shopping complexes, this cost ultimately gets passed along to consumers, in the form of rent or the price of goods.
"Wherever you go — a grocery store, say — a little bit of the money you pay for products is siphoned away to pay for parking," Shoup says. "My idea is simple: if somebody doesn't have a car, they shouldn't have to pay for parking."
If just the way we paid for free parking were unfair, it might not be all that big of a deal. But there are a few totally unrelated and negative consequences of keeping parking free.
by Joseph Stromberg, Vox | Read more:
Image: City Lab
Friday, June 27, 2014
Thursday, June 26, 2014
An Employee Dies, and the Company Collects the Insurance
[ed. It never ends.]
Employees at The Orange County Register received an unsettling email from corporate headquarters this year. The owner of the newspaper, Freedom Communications, was writing to request workers’ consent to take out life insurance policies on them.
But the beneficiary of each policy would not be the survivors or estate of the insured employee, but the Freedom Communications pension plan. Reporters and editors resisted, uncomfortable with the notion that the company might profit from their deaths.
After an intensive lobbying campaign by Freedom Communications management, a modified plan was ultimately put in place. Yet Register employees were left shaken.
The episode at The Register reflects a common but little-known practice in corporate America: Companies are taking out life insurance policies on their employees, and collecting the benefits when they die.
Because so-called company-owned life insurance offers employers generous tax breaks, the market is enormous; hundreds of corporations have taken out policies on thousands of employees. Banks are especially fond of the practice. JPMorgan Chase and Wells Fargo hold billions of dollars of life insurance on their books, and count it as a measure of their ability to withstand financial shocks.
But critics say it is immoral for companies to profit from the death of employees, while employees themselves do not directly benefit. And despite a law enacted in 2006 that sought to curb the practice — companies now are restricted to insuring only the highest-paid 35 percent of employees, who must give their consent — it remains a growing, opaque and legal source of corporate profit.
“Companies are holding this humongous amount of coverage on the lives of human beings,” said Michael D. Myers, a lawyer in Houston who has brought class-action lawsuits against several companies with such policies.
Companies and banks say earnings from the insurance policies are used to cover long-term health care, deferred compensation and pension obligations.
“Life insurance is one of the ways of strengthening the long-term health of the pension plan and ensuring its ability to pay benefits,” Freedom Communications’ chief executive, Aaron Kushner, said in an interview.
And because such life insurance policies receive generous tax breaks — investment returns on the policies are tax-free, as are the death benefits eventually received — they are ideal investment vehicles for companies looking to set aside money to pay for pension plans. Companies argue that if they had to finance such obligations with investments taxed at a normal rate, they would incur losses and would not be able to offer the benefits to employees.
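The tax advantage described here compounds over time. A hedged illustration of why these policies make attractive funding vehicles — the 5 percent return, 35 percent tax rate, and 30-year horizon are illustrative assumptions, not figures from the article:

```python
def future_value(principal, rate, years, tax_rate=0.0):
    """Grow `principal` for `years` at annual `rate`, taxing each
    year's return at `tax_rate`. A tax_rate of 0.0 models the
    tax-free growth inside a company-owned life insurance policy."""
    effective = rate * (1 - tax_rate)
    return principal * (1 + effective) ** years

tax_free = future_value(1_000_000, 0.05, 30)                # COLI-style growth
taxed = future_value(1_000_000, 0.05, 30, tax_rate=0.35)    # ordinary investment
# Under these assumptions, the untaxed account ends up well over
# half again as large as the annually taxed one.
print(round(tax_free), round(taxed))
```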
But in many cases, companies and banks can use the tax-free gains for whatever they choose. “If you want to take that money and go build a new bank branch, fine,” said Joseph E. Yesutis, a partner at the law firm Alston & Bird who specializes in banking regulation. “Companies don’t promise regulators they will use it for any specific purpose.”
Hundreds of billions of dollars of such policies are in place, providing companies with a steady stream of income as current and former employees die, even decades after they have retired or left the company.
Aon Hewitt estimates that new policies worth at least $1 billion are being put in place annually, and that about one-third of the 1,000 largest companies in the country have such policies. Industry analysts estimate that as much as 20 percent of all new life insurance is taken out by companies on their employees.
by David Gelles, NY Times | Read more:
Employees at The Orange County Register received an unsettling email from corporate headquarters this year. The owner of the newspaper, Freedom Communications, was writing to request workers’ consent to take out life insurance policies on them.
But the beneficiary of each policy would not be the survivors or estate of the insured employee, but the Freedom Communications pension plan. Reporters and editors resisted, uncomfortable with the notion that the company might profit from their deaths.
After an intensive lobbying campaign by Freedom Communications management, a modified plan was ultimately put in place. Yet Register employees were left shaken.
The episode at The Register reflects a common but little-known practice in corporate America: Companies are taking out life insurance policies on their employees, and collecting the benefits when they die.
Because so-called company-owned life insurance offers employers generous tax breaks, the market is enormous; hundreds of corporations have taken out policies on thousands of employees. Banks are especially fond of the practice. JPMorgan Chase and Wells Fargo hold billions of dollars of life insurance on their books, and count it as a measure of their ability to withstand financial shocks.
But critics say it is immoral for companies to profit from the death of employees, while employees themselves do not directly benefit. And despite a law enacted in 2006 that sought to curb the practice — companies now are restricted to insuring only the highest-paid 35 percent of employees, who must give their consent — it remains a growing, opaque and legal source of corporate profit.
“Companies are holding this humongous amount of coverage on the lives of human beings,” said Michael D. Myers, a lawyer in Houston who has brought class-action lawsuits against several companies with such policies.
Companies and banks say earnings from the insurance policies are used to cover long-term health care, deferred compensation and pension obligations.
“Life insurance is one of the ways of strengthening the long-term health of the pension plan and ensuring its ability to pay benefits,” Freedom Communications’ chief executive, Aaron Kushner, said in an interview.
And because such life insurance policies receive generous tax breaks — investment returns on the policies are tax-free, as are the death benefits eventually received — they are ideal investment vehicles for companies looking to set aside money to pay for pension plans. Companies argue that if they had to finance such obligations with investments taxed at a normal rate, they would incur losses and would not be able to offer the benefits to employees.
But in many cases, companies and banks can use the tax-free gains for whatever they choose. “If you want to take that money and go build a new bank branch, fine,” said Joseph E. Yesutis, a partner at the law firm Alston & Bird who specializes in banking regulation. “Companies don’t promise regulators they will use it for any specific purpose.”
Hundreds of billions of dollars of such policies are in place, providing companies with a steady stream of income as current and former employees die, even decades after they have retired or left the company.
Aon Hewitt estimates that new policies worth at least $1 billion are being put in place annually, and that about one-third of the 1,000 largest companies in the country have such policies. Industry analysts estimate that as much as 20 percent of all new life insurance is taken out by companies on their employees.
by David Gelles, NY Times | Read more:
Image: Monica Almeida/The New York Times
It Wasn't Over, It Still Isn't Over
Today is the tenth anniversary of The Notebook, and I'd like to take us all back to the mid-aughts for a moment. It was a dark time; George Bush was President, and trucker hats were having a moment. Then on June 26, 2004, Nicholas Sparks, Nick Cassavetes, Ryan Gosling, Rachel McAdams, Gena Rowlands, James Garner, and of course E from Entourage teamed up to tell the greatest, most powerful love story ever told.
Then a year later, for a brief moment in time, the world was exactly as it was meant to be: Rachel McAdams and Ryan Gosling started dating in real life. They were happy together, and we were happy too. If we couldn't be with Ryan Gosling ourselves, then at least Rachel McAdams should be there in our place. They made out at MTV Award shows, she was cool with him wearing his "Darfur" t-shirt to formal events, and they could both be openly Canadian together! Life seemed perfect for a brief moment in time.
by Michelle Markowitz, The Hairpin | Read more:
Image: YouTube
YAPC::NA 2014 Keynote: Programming Perl in 2034
[ed. Fascinating read. Don't be put off by the 'Programming Perl' subject in the title]
....Now, let's go and borrow that time machine and take a look at 2034.
2034 superficially looks a lot like 2014, only not. After all, most of 2034 is already here, for real, in 2014.
The one stunningly big difference is that today we're still living through exponential change: by 2034, the semiconductor revolution will have slowed down to the steady state of gradual incremental changes I described earlier. Change won't have stopped — but the armature of technological revolution will have moved elsewhere.
Now for a whistle-stop tour of 2034:
Of the people alive in 2014, about 75% of us will still be alive. (I feel safe in making this prediction because if I'm wildly wrong — if we've undergone a species extinction-level event — you won't be around to call me on my mistake. That's the great thing about futurology: when you get it really wrong, nobody cares.)
About two-thirds of the buildings standing in 2034 are already there in 2014. Except in low-lying areas where the well-known liberal bias of climatological science has taken its toll.
Automobiles look pretty much the same, although a lot more of them are electric or diesel-electric hybrids, and they exhibit a mysterious reluctance to run over pedestrians, shoot stop lights, or exceed the speed limit. In fact, the main force opposing the universal adoption of self-driving automobiles will probably be the police unions: and it's only a matter of time before the insurance companies arm-wrestle the traffic cops into submission.
Airliners in 2034 look even more similar to those of 2014 than the automobiles. That's because airliners have a design life of 30 years; about a third of those flying in 2034 are already in service in 2014. And another third are new-build specimens of models already flying — Boeing 787s, Airbus 350s.
Not everything progresses linearly. Every decade brings a WTF moment or two to the history books: 9/11, Edward Snowden, the collapse of the USSR. And there are some obvious technology-driven radical changes. By 2034 Elon Musk has either declared bankruptcy or taken his fluffy white cat and retired to his billionaire's lair on Mars. China has a moon base. One of Apple, Ford, Disney, or Boeing has gone bust or fallen upon hard times, their niche usurped by someone utterly unpredictable. And I'm pretty sure that there will be some utterly bizarre, Rumsfeldian unknown-unknowns to disturb us all. A cure for old age, a global collapse of the financial institutions, a devastating epidemic of Martian hyper-scabies. But most of the changes, however radical, are not in fact very visible at first glance.
Most change is gradual, and it's only when we stack enough iterative changes atop one another that we get something that's immediately striking from a distance. The structures we inhabit in 2034 are going to look much the same: I think it's fairly safe to say that we will still live in buildings and wear clothes, even if the buildings are assembled by robots and the clothes emerge fully-formed from 3D printers that bond fibres suspended in a liquid matrix, and the particular fashions change. The ways we use buildings and clothes seem to be pretty much immutable across deep historical time.
So let me repeat that: buildings and clothing are examples of artifacts that may be manufactured using a variety of different techniques, some of which are not widespread today, but where the use-case is unlikely to change.
But then, there's a correspondingly different class of artifact that may be built or assembled using familiar techniques but put to utterly different uses.
Take the concrete paving slabs that sidewalks are made from, for example. Our concrete paving slab of 2034 is likely to be almost identical to the paving slab of 2014 — except for the trivial addition of a dirt-cheap microcontroller powered by an on-die photovoltaic cell, with a handful of MEMS sensors and a low power transceiver. Manufactured in bulk, the chip in the paving slab adds about a dollar to its price — it makes about as much of a difference to the logistics of building a pavement as adding a barcoded label does to the manufacture and distribution of t-shirts. But the effect of the change, of adding an embedded sensor and control processor to a paving stone, is revolutionary: suddenly the sidewalk is part of the internet of things.
What sort of things does our internet-ified paving slab do?
For one thing, it can monitor its ambient temperature and warn its neighbors to tell incoming vehicle traffic if there's a danger of ice, or if a pot-hole is developing. Maybe it can also monitor atmospheric pressure and humidity, providing the city with a micro-level weather map. Genome sequencing is rapidly becoming the domain of micro-electromechanical systems, MEMS, which as semiconductor devices are amenable to Moore's law: we could do ambient genome sequencing, looking for the tell-tale signs of pathogens in the environment. Does that puddle harbor mosquito larvae infected with malaria parasites?
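The slab's decision loop Stross imagines could be sketched in a few lines of Python; every name and threshold below is invented for illustration, not drawn from the keynote:

```python
# Hypothetical paving-slab node: read ambient conditions, decide what
# warnings to broadcast to neighboring slabs. Thresholds are invented.
from dataclasses import dataclass

@dataclass
class SlabReading:
    temperature_c: float
    humidity_pct: float

def assess(reading: SlabReading) -> list:
    """Return the warnings this slab would broadcast to its neighbors."""
    warnings = []
    # Near-freezing and humid: surface ice is plausible.
    if reading.temperature_c <= 1.0 and reading.humidity_pct >= 80.0:
        warnings.append("ice-risk")
    return warnings

print(assess(SlabReading(temperature_c=0.5, humidity_pct=90.0)))  # ['ice-risk']
```

In practice each slab would push these warnings over its low-power transceiver rather than print them, but the shape of the logic — sense locally, warn the mesh — is the same.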
With low-power transceivers our networked sidewalk slab can ping any RFID transponders that cross it, thereby providing a slew of rich metadata about its users. If you can read the unique product identifier labels in a random pedestrian's clothing you can build up a database that identifies citizens uniquely — unless they habitually borrow each other's underwear. You can probably tell from their gait pattern if they're unwell, or depressed, or about to impulsively step out into the road. In which case your internet-of-things enabled sidewalk can notify any automobiles in the vicinity to steer wide of the self-propelled traffic obstacle.
It's not just automobiles and paving slabs that have internet-connected processors in them in 2034, of course. Your domestic washing machine is going to have a much simpler user interface, for one thing: you shove clothing items inside it and it asks them how they want to be washed, then moans at you until you remove the crimson-dyed tee shirt from the batch of whites that will otherwise come out pink.
And meanwhile your cheap Indonesian toaster oven has a concealed processor embedded in its power cable that is being rented out by the hour to spammers or bitcoin miners or whatever the equivalent theft-of-service nuisance threat happens to be in 2034.
In fact, by 2034, thanks to the fallout left behind by the end of Moore's law and its corollary Koomey's law (that power consumption per MIP decreases by 50% every 18 months), we can reasonably assume that any object more durable than a bar of soap and with a retail value of over $5 probably has as much computing power as your laptop today — and if you can't think of a use for it, the advertising industry will be happy to do so for you (because we have, for better or worse, chosen advertising as the underlying business model for monetizing the internet: and the internet of things is, after all, an out-growth of the internet).
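A quick back-of-the-envelope check of that Koomey's-law arithmetic, assuming for the sake of the calculation that the law held all the way to 2034 (Stross argues it will taper off before then, so treat this as an upper bound):

```python
# Koomey's law: energy per computation halves roughly every 18 months.
# Over the 20 years from 2014 to 2034, that compounds dramatically.
years = 2034 - 2014
halvings = years * 12 / 18        # ~13.3 halvings in 240 months
factor = 2 ** halvings            # ~10,000x less energy per computation
print(f"{halvings:.1f} halvings -> ~{factor:,.0f}x improvement")
```

Roughly four orders of magnitude: enough that a photovoltaic cell the size of a postage stamp could plausibly power 2014-laptop-class computation, which is the premise behind the five-dollar smart object.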
The world of 2034 is going to superficially, outwardly, resemble the world of 2014, subject to some obvious minor differences — more extreme weather, more expensive gas — but there are going to be some really creepy differences under the surface. In particular, with the build-out of the internet of things and the stabilization of standards once the semiconductor revolution has run its course, the world of 2034 is going to be dominated by metadata.
Today in 2014 we can reasonably expect to be tracked by CCTV whenever we show our faces in public, and for any photograph of us to be uploaded to Facebook and tagged by location, time, and identity using face recognition software. We know our phones are tracking us from picocell to picocell and, at the behest of the NSA, can be turned into bugging devices without our knowledge or consent (as long as we're locked out of our own baseband processors).
By 2034 the monitoring is going to be even more pervasive. The NETMIT group at MIT's Computer Science and Artificial Intelligence Lab are currently using WiFi signals to detect the breathing and heart rate of individuals in a room: wireless transmitters with steerable phased-array antennae that can beam bandwidth through a house are by definition excellent wall-penetrating radar devices, and just as the NSA has rooted many domestic routers to inspect our packets, so we can expect the next generation of spies to attempt to use our routers to examine our bodies.
The internet of things needs to be able to rapidly create dynamic routing tables so that objects can communicate with each other, and a corollary of that requirement is that everything knows where it is, who it belongs to, and who has permission to use it. This has good consequences and bad consequences.
by Charles Stross, Charlie's Diary | Read more:
“And did you get what
you wanted from this life, even so?
I did.
And what did you want?
To call myself beloved, to feel myself
beloved on the earth.”
~Raymond Carver, A New Path to the Waterfall
[ed. Also the inscription on his tombstone.]
Virtual Economies Are the Future of Consumption
All those people staring down at their phones while stuck in cars, sitting on the subway, lounging in parks, or getting quick hits of workday distraction? They’re not just catapulting angry birds or crushing candy. They’re contributing to a lively economy of mobile gaming, where each app download or purchase of a few extra lives in-game is contributing to a $20.9 billion global market in 2014, according to a new report from Juniper Research. And this virtual economy—where large amounts of real money are traded for digital goods that have no use in the real world—is only getting bigger.
Juniper Research, a decade-old British research firm specializing in mobile commerce that has worked with clients including Apple and IBM, predicts that the mobile gaming market will grow to over $40 billion by 2019. These big numbers might prompt a question: Why are we paying so much money for things that don’t actually exist? I previously wrote about the economies of online video games like World of Warcraft and EVE, where digital goods are sold for real money among players. But over the past year millions more people have begun participating in virtual economies through their smartphones, and a new branch of economics is growing out of those users.
A new book from MIT Press, Virtual Economies: Design and Analysis, by economists Vili Lehdonvirta and Edward Castronova, is an informative and surprisingly entertaining primer on these new markets. Every economy is based on “scarce means,” they write, or a discrepancy between supply and demand. It’s easy to see how that might function with a resource like gold—there is always less available than participants in the economy might want. But in a digital space, where objects are made up of infinitely replicable combinations of ones and zeroes, scarcity is harder to define. Yet it’s precisely this scarcity, Lehdonvirta and Castronova argue, that makes virtual economies function, and indeed boom to billions of dollars. (...)
Those virtual offerings often take the form of “content”—“artificially scarce resources … that create challenge and competition,” as defined by Virtual Economies, within games. If you’ve ever paid for new levels of Angry Birds, you’ve participated in the virtual content economy. Designing content gives merchants a unique advantage in the digital ecosystem: They can make supply exactly fit demand at all levels of their market, thus optimizing profits. So rather than charging a flat fee for subscriptions, they can price different content packages higher or lower, drawing in money from both casual players and committed addicts.
This model can also explain the different content structures of games over time. A high-end “triple-A” game like Halo offers lots of content immediately, with fans paying a premium for access. Online role-playing games like World of Warcraft consistently release or update content over time, trailing off once the game passes into irrelevance and subscribers lose interest. But free-to-play games that target diverse customers by charging for small packages of added content have a much longer tail—new content is often created long after the game’s release. It’s like a slow drip of morphine rather than a single injection of heroin—it might not produce as much of a bang, but it’ll probably keep you hooked and paying longer.
As more services and media move to an online-only format—think books, movies, and shopping experiences—the systems pioneered by gaming’s virtual economies will doubtless trickle into other facets of our lives as consumers. Whether we play video games or not, we’ll soon be dealing with the consequences of their economics.
by Kyle Chayka, Pacific Standard | Read more:
Image: Farmville. (Photo: Zynga)
Wednesday, June 25, 2014
Melody Gardot
[ed. Delete annoying ads by clicking on the "x" in the upper right hand corner. Beautiful song.]
Why We Play
I can still hear the quick crunch of his vertebrae cracking. That's the meddling of hindsight, of course — he was too far away, out in the middle of the night-dark field, and there were too many people around me and around him: the fans heckling, the grunts and dull thud of 16 men crashing together in the scrum, then an ominous silence. People breathing hard, whispering, yelling for help.
But whatever I heard or didn't hear, whatever tricks memory has since played, I knew as soon as the scrum collapsed in on itself that something was wrong. It was clear in the collective intake of breath from the crowd, in the way the other players shifted their feet and paced in circles while they waited for the stretcher to arrive. I was in my ninth year of competitive rugby and I had seen plenty of men and women carried off the field, but in all those other instances the spinal boards had been only precautionary. Everyone knew, this time, that something was different.
By the next day, or the day after, the news was all over the rugby community at the small-town British university where I was a graduate student and a member of a women's team. He'd been in the front row when the scrum caved in, and he'd been driven headfirst into the ground. His neck was broken, and apart from a twitching bicep, he was paralyzed from the shoulders down.
"He was so young," people said, defaulting to the past tense. "He was only 20 years old."
And then, the inevitable Band-Aid: "He was doing what he loved."
I thought about that phrase over and over again in the weeks after that night, and about its implication that paying a physical, or even fatal, price for the sports we love is worthwhile. At our next practice, the younger girls were deeply shaken — some contemplated quitting the team, calculating that it wasn't worth the risk. I reassured them that catastrophic spinal injuries were rare enough in men's rugby, and extraordinarily so in the women's game. But I wondered: How would I react, if my sport forced me to pay a price beyond the bruises, bone chips, blood and pulled muscles I'd already offered up?
If "doing what I loved" cost me the use of my legs and my arms, or the full use of my brain, would I say it was worth it? Could I measure the sport's rewards and stack them against the risks, and if I did, what would that balance sheet look like? What had the sport given me, and how much was I willing to pay in return? (...)
I'd always gravitated toward sports, though I'd never excelled at them. I'd been a mediocre little league softball player — my specialty was stealing bases, not a huge challenge in a league where very few girls could throw from home plate to second base with any accuracy — and a half-decent soccer player, relying on hustle and natural speed more than skill. I swam, and I ran track, and one summer I flirted with tennis. But none of those sports ever felt like they truly fit.
In junior high, I boasted to classmates that I would play rugby when I got to high school, despite knowing absolutely nothing about the sport. When I showed up at my first practice, I'd never even seen a rugby ball or watched a test on TV. All I knew was that it was a tough, violent game, and that, unlike football, girls were allowed to play it.
But whatever I heard or didn't hear, whatever tricks memory has since played, I knew as soon as the scrum collapsed in on itself that something was wrong. It was clear in the collective intake of breath from the crowd, in the way the other players shifted their feet and paced in circles while they waited for the stretcher to arrive. I was in my ninth year of competitive rugby and I had seen plenty of men and women carried off the field, but in all those other instances the spinal boards had been only precautionary. Everyone knew, this time, that something was different.
By the next day, or the day after, the news was all over the rugby community in the small-town British university where I was a graduate student, and a member of a women's team. He'd been in the front row when the scrum caved in, and he'd been driven headfirst into the ground. His neck was broken, and apart from a twitching bicep, he was paralyzed from the shoulders down.
"He was so young," people said, defaulting to the past tense. "He was only 20 years old."
And then, the inevitable Band-Aid: "He was doing what he loved."
I thought about that phrase over and over again in the weeks after that night, and about its implication that paying a physical, or even fatal, price for the sports we love is worthwhile. At our next practice, the younger girls were deeply shaken — some contemplated quitting the team, calculating that it wasn't worth the risk. I reassured them that catastrophic spinal injuries were rare enough in men's rugby, and extraordinarily so in the women's game. But I wondered: How would I react, if my sport forced me to pay a price beyond the bruises, bone chips, blood and pulled muscles I'd already offered up?
If "doing what I loved" cost me the use of my legs and my arms, or the full use of my brain, would I say it was worth it? Could I measure the sport's rewards and stack them against the risks, and if I did, what would that balance sheet look like? What had the sport given me, and how much was I willing to pay in return? (...)
I'd always gravitated toward sports, though I'd never excelled at them. I'd been a mediocre little league softball player — my specialty was stealing bases, not a huge challenge in a league where very few girls could throw from home plate to second base with any accuracy — and a half-decent soccer player, relying on hustle and natural speed more than skill. I swam, and I ran track, and one summer I flirted with tennis. But none of those sports ever felt like they truly fit.
In junior high, I boasted to classmates that I would play rugby when I got to high school, despite knowing absolutely nothing about the sport. When I showed up at my first practice, I'd never even seen a rugby ball or watched a test on TV. All I knew was that it was a tough, violent game, and that, unlike football, girls were allowed to play it.
In 2006, researchers in the U.K. published a survey of 24 peer-reviewed studies examining people's motivations for playing sports. Echoing the Nike ad, the survey found that teenage girls increased their self-esteem and tapped into new social support networks when they joined a sports team. But those rewards came with a corresponding sacrifice. "While many girls wanted to be physically active," the researchers wrote, "a tension existed between wishing to appear feminine and attractive and the sweaty muscular image attached to active women ... A clear opposition can be seen between girls wanting to be physically active and at the same time feminine."
I remember seeing that tension play out on fields and hardwood gym floors throughout my school years: athletic girls catching themselves trying too hard to make a catch or swing a bat, visibly pulling themselves up short, then falling back on the safety of giggles and halfhearted efforts instead of striving for excellence, afraid of who might see. But I don't remember feeling all that torn about it, myself. I was awkward and inept when it came to "girl stuff" — slow to grasp the nuances of clothes, hair or makeup, flat-chested, and apparently incapable of anything resembling flirting. Rather than trying and failing to exist in that world, it seemed easier to me to embrace the tomboy cliché. When I joined the high school rugby team at age 15, it felt like I had completed a fumbling journey toward an identity that I could wrap myself in, and be shielded from the outside world.
I loved the early morning practices, riding the city bus through the darkness and running laps around the school's hallways while we waited for the snow to melt off our field in spring. I played in the back line in my early seasons, the row of leaner, fleeter players who ran the ball after the burlier forwards had scrummed over it, and I loved learning set plays and then learning the secret code names to go with them. ("Jerry Springer!" we'd yell across the field. "Sally Jessy Raphael!") I was tentative in my first season, afraid of hitting people and afraid of being hit. But I soon learned that the fear itself, the anticipation of pain, was almost always worse than the reality. Soon I loved the sound of an opposing player's bones jangling together when I drove her into the ground, and I even learned to love the sickening, stomach-churning moments before an open-field tackle, wondering if I would miss or make the hit.
I loved being yelled at by my last name. I loved the scrapes and lumps I racked up on shins, thighs, and shoulders, the line of yellowed fingerprint bruises running down my arms in my prom photo. I loved the belligerence of the T-shirts that were handed out at tournaments: "Give Blood, Play Rugby." "Suck It Up, Princess." "You Only Wish You Could Play Like a Girl."
Most of all, I loved my teammates. Though I'd started playing rugby in part because of my discomfort in Girl World, the team didn't just provide a refuge: It also drew me back out again. It was in the dressing room after practices and games that I learned to stop hiding in a bathroom stall to change, learned to be comfortable with and even proud of my body. Girls from the team dragged me to the mall, and to school dances, stuffing me into dresses and heels that gradually came to feel less foreign. When I graduated after four seasons of high school rugby, and prepared to head off for four more seasons in college, I felt transformed. I no longer called myself a tomboy, and rugby was no longer a crutch.
So much for the revenue side of the balance sheet. Rugby had, for a time, given me everything. But around the same time I'd begun to outgrow my need for it, I'd also begun to understand its potential cost. I racked up pulled muscles and strained ligaments, and chipped a bone in my ankle that still aches under pressure, more than 15 years later. I played with women sporting twin scars on their knees from ACL surgeries. I saw a man come off the pitch one afternoon with his ear torn half off. I helped concussed teammates stagger off the field, unable to remember their own names, and suffered one concussion myself — a minor one, but still an injury with the terrifying power to reach back in time and erase my memories from even before the hit. I had one friend, on my college's men's team, who swore he would quit after three concussions, but he only counted the big ones. Once, I saw him pick himself up after a collision and line up alongside the wrong team. And then, finally, I watched that young man break his neck under the floodlights on a cold night in northern England. I was haunted by the question of my own potential regrets.
In the end, I quit the sport not by choice, but because I became an itinerant freelance writer, lived out of a suitcase for a year and a half, and eventually moved to the Yukon Territory in northern Canada, where there was no rugby to play. The question lingered, though. Here, people paddled whitewater rapids and tumbled off mountain bikes and ventured into the mountainous backcountry on skis and snowmobiles. Every year, boaters drowned or died of exposure, skiers were buried in avalanches, and hikers and mountaineers were rescued, or not reached in time. I bought a can of bear spray, learned to ice climb, capsized a canoe in an icy rapid for the first time. I faced a new set of rewards and a new set of risks. I wondered about the price my friends and I would be willing to pay to "do what we love."
by Eva Holland, SB Nation | Read more:
Image: Eva Holland
Liner Notes
When I was a teen-ager, reading liner notes like these, I was often swept away, imagining what the recording “scene” must have been like. Who were all those cute girls in that photo on the inside foldout of the Allman Brothers Band’s “Brothers and Sisters” album? Did they have Southern accents? Were some of them actually Brits? That would be hot. If I had showed up at the session, would the girls have been like: “Y’all, look at this cute Yankee preteen—let’s have him off for a quick towboat in the bloody lorry?” Here, there is no need to speculate. I was there. I saw it. I was there for the entire nine-day orgy of talent and spontaneous creativity, an orgy that was oddly unsexual, and in which I was sometimes the only participant, as everyone else had gone out for dinner and apparently forgotten to invite me. Sometimes—inspired, dazed, loaded, pulsing, reverberating, high on the music—we would wander out into the upstate night and just gaze up at the stars, realizing we were part of history. I remember Bruce Springsteen musing, “Folks, folks, we are part of history.” Come to think of it, that might be where we first got that idea about us being part of history, from Bruce saying that. And I remember Bono firing back, “Yes, Bruce, but isn’t it the case, technically, that everyone is part of history?” “You got me there, Bon,” Bruce said, and everyone roared with laughter, having just nearly witnessed, we realized, a real clash of the titans, there by the campfire. 
Unfortunately, the Bruce-Bono contribution—a speculative number in which Woody Guthrie and Tom Joad teleport back in time and tell a story to Pocahontas about a New Jersey state trooper accosting Gandhi outside a Paterson night club in 1955, with its rousing chorus (“Just because he looked / Like a person of color”) and then its somewhat less rousing subchorus (“That incident, concerning color / Had put us all in a sort of dolor”)—had to be cut, simply for reasons of time (it was more than eleven hours long). (...)
We recorded at “the farm,” an abandoned barn off the New York State Thruway littered with dead sheep and (before Prince took charge and paid to have him carted away) a dead farmer, and also some abandoned sheep and even, at one point, an abandoned farmer. It got pretty crowded in there, but, swept away by the Dionysian energy, no one minded, even when the abandoned farmer fell asleep across the mixing board and deleted the Ritchie Blackmore solo on Don Henley’s version of “The Wheels on the Bus.”
What can I say about those crazy days and nights? I was there. You weren’t. You only wish you were. And I wish you were. Or, as the English majors say, I wish you had been. That is called, they tell me, “past-perfect tense.” O.K., whatever, Shakespeare. Still, there’s something to that. Those nights were past, they were perfect—and they were tense. I remember once when all our gear went missing. What a crisis! All the harmonicas were in that bus! We’d parked outside a diner full of hostile locals and state troopers. After a rather scary meal, we came out to find the bus missing. We glanced back into the diner and all the hostile locals and state troopers were looking down at their plates, the beginnings of a smile flickering across their face. Across their faces. What I mean to say is, the beginnings of a smile, one per face, were flickering, there on the various— There must have been about, I’d say, forty faces in there. Plus, this one guy had no face. He must have been in an accident or something. Or, come to think of it, maybe he was a robber, wearing a pair of panty hose over his face? But, anyway, even that guy was smiling. They were all looking pretty smug in there, so happy that us long-haired creative types had been stymied by their cornpone antics. Because slowly it had begun to dawn on us: they’d stolen our bus! Until someone realized we’d gone out the wrong door. We raced around the rest stop to confirm this, some of us, including Billie Holiday and Mick Jagger, racing back through the diner, only to find our bus—sure enough—just where we’d left it! We had a good laugh about that. Inside, the hostile locals and state troopers and that guy with the panty hose over his face were also having a good laugh about it. And I thought, Ain’t that America? For you and me? Ain’t that America, the land of the—
Which was when the hostile locals and state troopers and that guy with the panty hose over his face all raced out and beat us up, for having wrongly accused them of stealing our bus. After the beating, though—indicating the complexity of America—all the hostile locals and state troopers and that guy with the panty hose over his face invited us over for apple pie. And there were, like, as I said above, about forty of them. So we had to slog from house to house all night, eating pie after pie, when we should have been back at “the farm,” recording. Some of those pies were better than others. I guess that’s not surprising. It would have been pretty weird if every hostile local and state trooper and that guy with the panty hose over his face had all served us pie that was exactly equally good. Unless, I guess, they’d all bought their pie from the same place. Like Walmart. Or BJ’s Wholesale Club.
Anyway.
There were so many amazing moments during the making of this record. I suppose for many of us the high point was when Bob Dylan shuffled into the studio and sang his own composition, “As I Drive This Nation Sublime”:
As I drive this nation sublime / I drive my lady crazy / Get off my back woman / I'm free as that river flowing / Green as that grass you're mowing / Don't be so gender-inflexible / Where is it written that the dude must be the one who / Mows the lawn / Chick?
But in the end that had to be left off, too.
by George Saunders, New Yorker | Read more:
Image: Zohar Lazar
14621 Neighborhood, Amanda and Her Flower Dress. Rebecca Norris Webb/Magnum Photos
via: Last Days of Kodak Town
[ed. The town I was born in. Good old Rochester, NY.]
Tuesday, June 24, 2014
Citizen Bezos
[ed. Bezos should chill. There's no reason to push so hard. This isn't Apple fighting some monolithic competitor, Amazon already owns the market.]
In the mid-1990s, when Amazon emerged as an online bookseller, publishers welcomed the company as a “savior” that could provide an alternative to the stifling market power of that era’s dominant chain stores, Barnes & Noble and Borders. Book publishers with exceptional foresight may have understood that they “had to view Amazon as both an empowering retail partner and a dangerous competitor,” as Brad Stone puts it in The Everything Store, his deeply reported, fiercely independent-minded account of Amazon’s rise.
Yet at first, Amazon seemed innovative and supportive. The company’s founder, Jeff Bezos, a Princeton-educated computer scientist and former Wall Street hedge fund strategist, had married a novelist; he often expressed a passionate devotion to books, particularly science fiction and management guides. In its early days of creative chaos, Amazon seemed to want to use the Internet to expand the potential of readers and publishers alike. Bezos hired writers and editors who supplied critical advice about books and tried to emulate on Amazon’s website “the trustworthy atmosphere of a quirky independent bookstore with refined literary tastes,” as Stone puts it.
Among the management books Bezos read devotedly were ones by and about Walmart executives. He became inspired by Walmart’s example of delivering low prices to customers and profits to shareholders by wringing every dime possible out of suppliers. By 2004, Amazon had acquired significant market power. It then began to squeeze publishers for more favorable financial terms. If a book publisher did not capitulate to Amazon, it would modify its algorithms to reduce the visibility of the offending publisher’s books; within a month, “the publisher’s sales usually fell by as much as 40 percent,” Stone reports, and the chastened victim typically returned to the negotiating table.
“Bezos kept pushing for more” and suggested that Amazon should negotiate with small publishers “the way a cheetah would pursue a sickly gazelle.” This remark—a joke, one of Bezos’s lieutenants insisted—yielded a negotiating program that Amazon executives referred to as “the Gazelle Project,” under which the company pressured the most vulnerable publishers for concessions. Amazon’s lawyers, presumably nervous that such a direct name might attract an antitrust complaint, insisted that it be recast as the Small Publisher Negotiation Program.
Around this time, Amazon also jettisoned its in-house writers and editors and replaced them with an algorithm, Amabot, that relied on customer data rather than editorial judgment to recommend books. The spread of aggression and automation within Amazon as the company grew larger and larger echoed classics of the science fiction genre to which Bezos was devoted. An anonymous employee bought an ad in a Seattle newspaper to protest the change. “DEAREST AMABOT,” the ad began. “If you only had a heart to absorb our hatred… Thanks for nothing, you jury-rigged rust bucket. The gorgeous messiness of flesh and blood will prevail!”
Will it, though? Over the last decade, Amazon’s growing market share and persistent bullying, particularly in the realm of digital books, where it now controls about two thirds of the market, raise the question of how well competition and antitrust law can protect diverse authors and publishers. Amazon has become a powerful distribution bottleneck for books at the same time that it is also moving to create its own books, in competition with the very publishers it is squeezing.
The evidence to date is that Amazon and the attorneys that advise it do not fear antitrust enforcement. You might suppose, for example, that the publication of Stone’s book, which contains extensive on-the-record interviews with former Amazon executives describing the company’s most dubious practices, would have chastened it and caused it to pull back from strong-arming publishers—to avoid bad publicity, if for no other reason. Yet that has not proved to be the case.
This spring, Amazon has again launched a negotiating campaign to force publishers to accept concessions on the percentage of revenue it takes from e-book sales. And Amazon has again punished those who resist. Its most prominent target has been Hachette, the French publishing group, which will bring out The Everything Store in paperback in October. As of early June, as part of its pressure tactics, Amazon had removed the link on its website that would allow customers to preorder the paperback edition of Stone’s book, as well as links that would facilitate other preorders of Hachette books.
Jeff Bezos’s conceit is that Amazon is merely an instrument of an inevitable digital disruption in the book industry, that the company is clearing away the rust and cobwebs created by inefficient analog-era “gatekeepers”—i.e., editors, diverse small publishers, independent bookstores, and the writers this system has long supported. In Bezos’s implied argument, Amazon’s catalytic “creative destruction,” in the economist Joseph Schumpeter’s phrase, will clarify who will prosper in an unstoppably faster, more interconnected economy.
“Amazon is not happening to book selling,” Bezos once told Charlie Rose. “The future is happening to book selling.” Yet the more Amazon uses its vertically integrated corporate power to squeeze publishers who are also competitors, the more Bezos’s claim looks like a smokescreen. And the more Amazon uses coercion and retaliation as means of negotiation, the more it looks to be restraining competition.
Toward the end of his account, Stone asks the essential question: “Will antitrust authorities eventually come to scrutinize Amazon and its market power?” His answer: “Yes, I believe that is likely.” It is “clear that Amazon has helped damage or destroy competitors small and large,” in Stone’s judgment.
In view of Amazon’s recent treatment of The Everything Store, Stone may now end up as a courtroom witness. Yet there are reasons to be wary about who will prevail in such a contest, if it ever takes place. As Stone notes, “Amazon is a masterly navigator of the law.” And crucially, as in so many other fields of economic policy, antitrust law has been reshaped in recent decades by the spread of free-market fundamentalism. Judges and legislators have reinterpreted antitrust law to emphasize above all the promotion of low prices for consumers, which Amazon delivers, rather than the interests of producers—whether these are authors, book publishers, or mom-and-pop grocery stores—that are threatened by giants.
by Steve Coll, NY Review of Books | Read more:
Image: Nicolò Minerbi/LUZ/Redux