Thursday, July 25, 2013
Say Goodbye to the Tech Sounds You’ll Never Hear Again
The boops and beeps of bygone technology can be used to chart its evolution. From the zzzzzzap of the Tesla coil to the tap-tap-tap of Morse code being sent via telegraph, what were once the most important nerd sounds in the world are now just historical signposts. But progress marches forward, and for every irritatingly smug Angry Pigs grunt we have to listen to, we move further away from the sound of the Defender ship exploding.
Let's celebrate the dying cries of technology's past. The following sounds are either gone forever, or definitely on their way out. Bow your heads in silence and bid them a fond farewell.
The Telephone Slam
Ending a heated telephone conversation by slamming the receiver down in anger was so incredibly satisfying. There was no better way to punctuate your frustration with the person on the other end of the line. And when that receiver hit the phone, the clack of plastic against plastic was accompanied by a slight ringing of the phone's internal bell. That's how you knew you were really pissed -- when you slammed the phone so hard, it rang.
There are other sounds we'll miss from the phone. The busy signal died with the rise of voicemail (although my dad refuses to get voicemail or call waiting, so he's still OG), and the rapid click-click-click of the dial on a rotary phone is gone. But none of those compare with hanging up the phone with a forceful slam.
Tapping a touchscreen just does not cut it. So the closest thing we have now is throwing the pitifully fragile smartphone against the wall.
The Modem
Go ahead and imitate the bleep bleep boop hiss of a 56k modem in front of anyone under the age of 20. They'll give you the same look a dog makes when you present it with a hardcover book as a toy. But those noises were the first indication that you were joining what was, at the time, a new and wonderful world. A connected world where information (most of it wrong) flowed freely, and you could talk with both friends and complete strangers without running up a huge phone bill. Now, everything is constantly connected, and internet access is like electricity. It's just there. But there was a time when your desire to chat on IRC and check on your Geocities guestbook was behind a magical handshake of beeps and hisses, all coming out of a tiny box plugged into your landline.
by Roberto Baldwin, Wired | Read more:
Photo: Ariel Zambelich/Wired
How Publix's People-First Culture Is Winning The Grocer War
Passing through Publix’s sliding doors to escape the blistering Lakeland, Fla. heat is a welcome relief, but it isn’t just the air-conditioning that jumps out at you. As you walk the aisles, bag boys and clerks in sage-green shirts and black aprons routinely smile and ask questions: “How are you today? Can we help you with anything?”
When a middle-aged woman asks about a box of crackers, no aisle number is blurted out. Instead, an employee races off to find the item, just as he is trained to do. At checkout, shoppers move to the front quickly, thanks to a two-customer-per-line goal enforced by proprietary, predictive staffing software. Baggers, a foggy memory at most large supermarket chains, carry purchases to the parking lot. Even Publix’s president, Todd Jones, who started out as a bagger 33 years ago, stoops down to pick up specks of trash on the store floor.
“We believe that there are three ways to differentiate: service, quality and price,” Jones says. “You’ve got to be good at two of them, and the best at one. We make service our number one, then quality and then price.”
If that’s a dig at Wal-Mart–traditional slogan: “Always low prices”–which has recently targeted Publix’s home turf, Florida, it’s a subtle one. The more direct retort comes via the numbers. As best we can tell, Publix is the most profitable grocery chain in the nation: Its net margins, 5.6% in 2012, trounced Wal-Mart’s (3.8%), as well as those of every public competitor, ranging from mass market Kroger (1.6%) to hoity-toity Whole Foods (3.9%).
Those numbers in a field notorious for razor-thin margins stem from another heady fact: Publix, the seventh-largest private company in the U.S. ($27.5 billion in sales) and one of the least understood thanks to decades of media reticence, is also the largest employee-owned company in America. For 83 years Publix has thrived by delivering top-rated service to its shoppers by turning thousands of its cashiers, baggers, butchers and bakers into the company’s largest collective shareholders. All staffers who have put in 1,000 work hours and a year of employment receive an additional 8.5% of their total pay in the form of Publix stock. (Though private, the board sets the stock price every quarter based on an independent valuation; it’s pegged at $26.90 now, up nearly 20% already this year.) How rich can employees get? According to Publix, a store manager who has worked at the company for 20 years and earns between $100,000 and $130,000 likely has $300,000 in stock and has received another $30,000 in dividends.
The route to that payday is completely transparent. Publix almost exclusively promotes from within, and every store displays advancement charts showing the path each employee can take to become a manager. Fifty-eight thousand of the company’s 159,000 employees have officially registered their interest in advancement. Associates are encouraged to rotate through various divisions, from grocery to real estate to distribution, to get a broad sense of the business. A former cake decorator in a store bakery is now in charge of all strategy for its bakeries. A distribution-center manager overseeing 800 associates got his start unloading railcars. When Lakeland store manager Edd Dean started bagging groceries as a teenager, he never expected to still be working in a supermarket 30 years later. “When I graduated college I had been seven years at Publix, and I started looking for a ‘real job,’” he says. “I interviewed at a lot of companies, but the manager I was working with kept hounding me to come to Publix. Eventually it just clicked.” Dean is one of 34,000 employees who have more than ten years of tenure.
“I’m always amazed that more companies don’t recognize the power of associate ownership,” says Publix CEO Ed Crenshaw, 62, the grandson of founder George Jenkins and the fourth family member to run the company. While Crenshaw has a 1.1% stake in Publix, worth $230 million, and his entire family has 20%, worth $4.2 billion (see box, p. 102), the employees (and former employees) are the controlling shareholders, with an 80% stake, worth $16.6 billion. Not surprisingly, none of them belongs to a union.
by Brian Solomon, Forbes | Read more:
Image: Bob Croslin for Forbes
America Has a Stadium Problem
[ed. I became acquainted with CVM during the Exxon Valdez Oil Spill litigation process, i.e., "how much would you pay not to have oiled beaches, or dead sea otters, etc.?" As I recall, the results (and potential financial liabilities) were quite controversial at the time.]
Economists have long known stadiums to be poor public investments. Most of the jobs created by stadium-building projects are either temporary, low-paying, or out-of-state contracting jobs—none of which contribute greatly to the local economy. (Athletes can easily circumvent most taxes in the state in which they play.) Most fans do not spend additional money as a result of a new stadium; they re-direct money they would have spent elsewhere on movies, dining, bowling, tarot-card reading, or other businesses. And for every out-of-state fan who comes into the city on game day and buys a bucket of Bud Light Platinum, another non-fan decides not to visit and purchases his latte at the coffee shop next door. All in all, building a stadium is a poor use of a few hundred million dollars.
This isn’t news, by any stretch, but it turns out we’re spending even more money on stadiums than we originally thought. In her new book Public/Private Partnerships for Major League Sports Facilities, Judith Grant Long, associate professor of Urban Planning at the Harvard University Graduate School of Design, shatters previous conceptions of just how much money the public has poured into these deals. By the late ’90s, the first wave of damning economic studies conducted by Robert Baade and Richard Dye, James Quirk and Rodney Fort, and Roger Noll and Andrew Zimbalist came to light, but well afterwards, from 2001 to 2010, 50 new sports facilities were opened, receiving $130 million more, on average, than those opened in the preceding decade. (All figures from Long’s book adjusted for 2010 dollars.) In the 1990s, the average public cost for a new facility was estimated at $142 million, but by the end of the 2000s, that figure jumped to $241 million: an increase of 70 percent.
Economists have also been, according to Long, drastically underestimating the true cost of these projects. They fail to consider public subsidies for land and infrastructure, the ongoing costs of operations, capital improvements (we need a new scoreboard!), municipal services (all those traffic cops), and foregone property taxes (almost every major-league franchise located in the U.S. does not pay property taxes “due to a legal loophole with questionable rationale” as the normally value-neutral Long put it). Due to these oversights, Long calculates that economists have been underestimating public subsidies for sports facilities by 25 percent, raising the figure to $259 million per facility in operation during the 2010 season. (...)
The basic evolution behind subsidies for sports stadiums is as follows: owner wants new stadium to make more money and increase the value of the franchise. Owner threatens to move team. Politicians save face by pretending they won’t offer millions of dollars in subsidies. Politicians eventually offer millions of dollars in subsidies and keep the team in the city. If there’s a justification for all this, it comes from the concept of a public good.
“The traditional definition of a public good is that the benefits aren’t scarce, they’re non-rival and non-excludable, so the consumption by one person doesn’t limit the consumption by someone else,” Professor J.C. Bradbury, a sports economist at Kennesaw State University and author of Hot Stove Economics, told me over the phone. “So if I’m happy Charlotte has a basketball team, that doesn’t make anyone else less happy.” The stadium itself, though, is a private good. There are only a limited number of seats, and if my ass is in Section 101, Row V, Seat 21, your ass isn’t.
Still, the thinking goes, a fan can enjoy a team without giving the franchise a penny. If you don’t buy Sunday Ticket, don’t attend any games, and don’t purchase any merchandise, then your favorite football team won’t see any of your money, no matter how passionately you follow them. But how do you quantify this? This is where Contingent Valuation Method (CVM), a survey method originally designed by environmental economists to value public park space or clean air, comes into play.
by Aaron Gordon, Pacific Standard | Read more:
Image: KKIMPHOTOGRAPHY/Flickr
Wednesday, July 24, 2013
Lost in the Forest
The new edition of the DSM replaces DSM-IV, which appeared in 1994. The DSM is the standard – and standardising – work of reference issued by the American Psychiatric Association, but its influence reaches into every nook and cranny of psychiatry, everywhere. Hence its publication has been greeted by a flurry of discussion, hype and hostility across all media, both traditional and social. Most of it has concerned individual diagnoses and the ways they have changed, or haven’t. To invoke the cliché for the first time in my life, most critics attended to the trees (the kinds of disorder recognised in the manual), but few thought about the wood. I want to talk about the object as a whole – about the wood – and will seldom mention particular diagnoses, except when I need an example.
Many worries have already been aired. In mid-May an onslaught was delivered by the Division of Clinical Psychology of the British Psychological Society, which is sceptical about the very project of standardised diagnosis, especially of schizophrenia and bipolar disorders. More generally, it opposes the biomedical model of mental illness, to the exclusion of social conditions and life-course events. On a quite different score, Allen Frances, the chief editor of DSM-IV, has for years been blogging his criticisms of the modifications leading to DSM-5. More and more kinds of behaviour are now being filed as disorders, opening up vast fields of profit for drug companies. I shall discuss none of these important issues, and will try to be informative and even supportive until the very end of this piece, where I address a fundamental flaw in the enterprise.
Who needs the 947 pages of the DSM-5? All that most consumers need is the DSM-5 Diagnostic Criteria Mobile App. The more interesting question is who needs the DSM anyway? First of all, bureaucracies. Everyone in North America who hopes their health insurance will cover or at least defray the cost of treatment for their mental illness must first receive a diagnosis that fits the scheme and bears a numerical code. For example, opening the book at random, I find 308.3 for Acute Stress Disorder. The coding is required both by American private insurers and by Medicare. It is also required for the universal health insurance plans provided in Canadian provinces.
There is another quite different bureaucratic use. Why is this a ‘statistical’ manual? Because its classifications can be used for studying the prevalence of various types of illness. For that one requires a standardised classification. In a sense, the manual has its origins in 1844, when the American Psychiatric Association, in the year of its founding, produced a statistical classification of patients in asylums. It was soon incorporated into the decennial US census. During the First World War it was used for assessing army recruits, perhaps the first time it was put to diagnostic use.
by Ian Hacking, London Review of Books | Read more:
Image: APA
Tuesday, July 23, 2013
On Tour
Friends often ask which were my favourite places to visit, but the truth is I can’t hold them all in my mind. What makes a place nice to visit, anyway? The pleasure it provides for its visitors? Who am I that Thailand must delight me? I’m horrified when a country is described as having a ‘warm people’, as though each citizen must please the sweaty strangers who choke the streets. Thailand — or any other place — can only exist. And by existing can only remind the traveller that other modalities are possible, that no way of living is a natural consequence of being alive on this planet. That should be enough.
by Claire Evans, Aeon | Read more:
Photo courtesy: Claire Evans
The Blip
Picture this, arranged along a time line.
For all of measurable human history up until the year 1750, nothing happened that mattered. This isn’t to say history was stagnant, or that life was only grim and blank, but the well-being of average people did not perceptibly improve. All of the wars, literature, love affairs, and religious schisms, the schemes for empire-making and ocean-crossing and simple profit and freedom, the entire human theater of ambition and deceit and redemption took place on a scale too small to register, too minor to much improve the lot of ordinary human beings. In England before the middle of the eighteenth century, where industrialization first began, the pace of progress was so slow that it took 350 years for a family to double its standard of living. In Sweden, during a similar 200-year period, there was essentially no improvement at all. By the middle of the eighteenth century, the state of technology and the luxury and quality of life afforded the average individual were little better than they had been two millennia earlier, in ancient Rome.
Then two things happened that did matter, and they were so grand that they dwarfed everything that had come before and encompassed most everything that has come since: the first industrial revolution, beginning in 1750 or so in the north of England, and the second industrial revolution, beginning around 1870 and created mostly in this country. That the second industrial revolution happened just as the first had begun to dissipate was an incredible stroke of good luck. It meant that during the whole modern era from 1750 onward—which contains, not coincidentally, the full life span of the United States—human well-being accelerated at a rate that could barely have been contemplated before. Instead of permanent stagnation, growth became so rapid and so seemingly automatic that by the fifties and sixties the average American would roughly double his or her parents’ standard of living. In the space of a single generation, for most everybody, life was getting twice as good.
At some point in the late sixties or early seventies, this great acceleration began to taper off. The shift was modest at first, and it was concealed in the hectic up-and-down of yearly data. But if you examine the growth data since the early seventies, and if you are mathematically astute enough to fit a curve to it, you can see a clear trend: The rate at which life is improving here, on the frontier of human well-being, has slowed.
If you are like most economists—until a couple of years ago, it was virtually all economists—you are not greatly troubled by this story, which is, with some variation, the consensus long-arc view of economic history. The machinery of innovation, after all, is now more organized and sophisticated than it has ever been, human intelligence is more efficiently marshaled by spreading education and expanding global connectedness, and the examples of the Internet, and perhaps artificial intelligence, suggest that progress continues to be rapid.
But if you are prone to a more radical sense of what is possible, you might begin to follow a different line of thought. If nothing like the first and second industrial revolutions had ever happened before, what is to say that anything similar will happen again? Then, perhaps, the global economic slump that we have endured since 2008 might not merely be the consequence of the burst housing bubble, or financial entanglement and overreach, or the coming generational trauma of the retiring baby boomers, but instead a glimpse at a far broader change, the slow expiration of a historically singular event. Perhaps our fitful post-crisis recovery is no aberration. This line of thinking would make you an acolyte of a 72-year-old economist at Northwestern named Robert Gordon, and you would probably share his view that it would be crazy to expect something on the scale of the second industrial revolution to ever take place again.
“Some things,” Gordon says, and he says it often enough that it has become both a battle cry and a mantra, “can happen only once.”
by Benjamin Wallace-Wells, NY Magazine | Read more:
Illustration by Mario Hugo