Thursday, November 16, 2017

Vatican 2.0

“Humans are distinguished from other species”, says Peter Thiel, one of Silicon Valley’s high priests, “by our ability to work miracles. We call these miracles technology.” Thiel inadvertently touches on a pervasive paradox: we see ourselves as both the miracle-makers of technology and the earthly audience, looking on in wonder. But if the miracle was once the automobile, the modern equivalent of the “great gothic cathedrals”, in Roland Barthes’s famous formulation, now it is surely the internet: conceived by unknown forces, built on the toil of a hidden workforce, and consumed more or less unthinkingly by whole populations. The internet’s supposed immateriality masks not only the huge infrastructure that sustains it, including vast, heavily polluting data centres, but also the increasingly narrow corporate interests that shape it and, in turn, us – the way we think, work and live. Algorithms are at the heart of this creative process, guiding us through internet searches and our city’s streets with a logic steeped in secrecy, filtered down from above – namely, the boardrooms of the Big Five: Amazon, Apple, Alphabet Inc. (the parent company of Google), Facebook and Microsoft, those companies that have come to dominate the digital realm. (...)

The scale of their supremacy is remarkable. Google controls more than 90 per cent of search-engine traffic in Europe, and 88 per cent in America; YouTube, also owned by Alphabet, is the largest music-streaming site in the world; Amazon takes more than half of every new US dollar spent online; Facebook, having swallowed competition like Instagram and WhatsApp, holds 77 per cent of all mobile social media traffic, and recently reached 2 billion members. With Apple and Microsoft, these are now the five largest companies in the world – hence the group moniker – with a collective net worth of $3 trillion. Together they spend more on government lobbying than the five biggest banks or the five largest oil companies. “Not since the turn of the twentieth century,” writes Jonathan Taplin, the author of Move Fast and Break Things, “when Roosevelt took on the monopolies of Rockefeller and Morgan, has the country faced such a concentration of wealth and power.” All this, he adds wryly, on the back of an invention that was supposed to “eliminate the gatekeepers”. Not that Silicon Valley sees anything sinister in all this. As Taplin puts it, they have “a very high opinion of their place in history”. According to Sergey Brin, Google’s co-founder, the company aims to be “the perfect search engine . . . like the mind of God”. Earlier this year, Mark Zuckerberg even seemed to suggest that Facebook should emulate, if not replace, the Church as the bedrock on which communities are built. “Mark Zuckerberg will never be my pastor”, one reporter wrote in Christian Today.

For all their missionary zeal, Google and Facebook remain predominantly advertising companies – and their algorithms are ordered accordingly. They give their services away free, and make billions collecting data and selling it on. And like the advertising industry itself, their aims are riven with contradictions. Claims to cultivate the unique character of the individual, with increasingly personalized products, conflict with the greater goals of consumerist conformity and capturing user attention: self-expression can run free, so long as it plays out along predictable, profitable lines. Hence, for example, Facebook’s increasing move towards auto-play video on its newsfeed, and the auto-queue on YouTube that sends you to a song with more views. In terms of “innovation” these companies present themselves as radical pioneers – they move fast and break things, as the mantra goes – but in practice, this only veils the homogenizing effects of their services. The music industry is an emblematic case: when the algorithms of streaming services push us towards what is already popular, the result is a trade in which 80 per cent of the revenue derives from just 1 per cent of the product.
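
[ed. The feedback loop described here is easy to make concrete. Below is a minimal, hypothetical simulation – not any real service's algorithm, just a rich-get-richer rule in which each new play goes to a track with probability proportional to the plays it already has. Even with every track starting equal, early luck compounds and the most-played 1 per cent ends up with several times the share an even split would give them; real ranking, charts and promotion push the concentration much further.]

```rust
// Toy "auto-queue": each play is awarded to a track in proportion to its
// existing play count (preferential attachment). This models no real
// service; it only illustrates how popularity-weighted selection
// concentrates attention even when all tracks start out identical.
fn main() {
    const TRACKS: usize = 1_000;
    const PLAYS: u64 = 100_000;

    let mut plays = vec![1u64; TRACKS]; // every track starts with one play
    let mut total = TRACKS as u64;
    let mut seed: u64 = 42; // tiny deterministic LCG, so no crates needed

    for _ in 0..PLAYS {
        seed = seed
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let mut pick = seed % total; // (modulo bias is fine for a toy)
        // Walk the cumulative counts to select a track with probability
        // proportional to how often it has already been played.
        for count in plays.iter_mut() {
            if pick < *count {
                *count += 1;
                break;
            }
            pick -= *count;
        }
        total += 1;
    }

    plays.sort_unstable_by(|a, b| b.cmp(a));
    let top: u64 = plays[..TRACKS / 100].iter().sum();
    println!(
        "top 1% of tracks took {:.1}% of all plays",
        100.0 * top as f64 / total as f64
    );
}
```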

As Taplin stresses, however, perhaps the most worrying side of Silicon Valley’s self-aggrandizing aspirations is neither the money the companies make nor even the consequences so far. It is their contempt for whatever stands in their way. Democratic processes, the state, the public, tax, a basic sense of social responsibility, all are seen as so much baggage on a holy quest to better the lot of humankind. Ayn Rand’s refrain in The Fountainhead – “Who will stop me?” – has become a guiding principle. The Big Five were members of the American Legislative Exchange Council, a shadowy organization made up of the biggest corporations dedicated to advancing the free market, until a public outcry forced them to quit. While their founders and executives deliver sermons on, say, the need for a (tax-funded) universal basic income, the companies themselves excel in avoiding tax. In 2014, Facebook’s UK offices managed to pay £4,327 in corporation tax, less than the average British worker pays; in the same year, bonuses to its UK employees reached a total of £35 million. In 2013, Google paid £20.4 million in UK tax on a revenue of £3.8 billion. Taplin also points to a tellingly popular “sea-steading” movement, which sees Silicon Valley dream of “building an artificial island in the middle of the sea, not hosted by any nation-state” for the purpose of innovation. Thiel, one of its main funders, pictures it as an “escape from politics in all its forms” (and, one imagines, tax). They recently reached an agreement with the French Polynesian government to take over a portion of its territory instead. Meanwhile Larry Page, Google’s co-founder and Alphabet’s CEO, echoes the idea that governments should “set aside a small part of the world” for them alone – the company is now looking for land to build its own city; the venture capitalist Tim Draper has suggested that Silicon Valley become its own state. Add to this these companies’ vast, self-contained campuses, and one is left with the clear sense of something like a modern monastery, cut off from society but with the weight of humanity’s hopes on its shoulders: Silicon Vatican.

by Samuel Earle, TLS |  Read more:
Image: Google Headquarters, London © PENSON/REX/Shutterstock

Wednesday, November 15, 2017

Self-Driving Trucks Are Now Delivering Refrigerators

If you live in Southern California and you’ve ordered one of those fancy new smart refrigerators in the past few weeks, it may have hitched a ride to you on a robotruck.

Since early October, autonomous trucks built and operated by the startup Embark have been hauling Frigidaire refrigerators 650 miles along the I-10 freeway, from a warehouse in El Paso, Texas, to a distribution center in Palm Springs, California. A human driver rides in the cab to monitor the computer chauffeur for now, but the ultimate goal of this (auto) pilot program is to dump the fleshbag and let the trucks rumble solo down the highway.

“This is the first time someone has demonstrated this end-to-end,” Embark CEO Alex Rodrigues says. “It showcases the way that we see self-driving playing into the logistics industry.” (...)

They’ve got some good arguments. First off, making a robot that can drive itself on the highway, where trucks spend nearly all their time, is relatively easy. You don’t have to account for pedestrians, cyclists, traffic lights, or other variables. The big rig just has to stay in its lane and keep a safe distance from fellow travelers.

Better yet, the need for autonomous trucks is very real: Trucks carry 70 percent of goods shipped around the US, but truckers are scarce. According to the American Trucking Associations, the industry is now short 50,000 drivers. As current drivers retire or quit, that number could hit 175,000 by 2024. Cut down the need for the human, and that shortage stops being a problem. And a self-driving truck isn't subject to rules that ban humans from spending more than 11 hours at a time behind the wheel.

Indeed, make a truck that doesn’t tire (or text), the thinking goes, and you save lives: In the US, more than 4,000 people die in crashes involving trucks every year, crashes that nearly always result from human error. That’s why the American Trucking Associations has embraced the new tech, recently issuing its first autonomous vehicle policy, calling for uniform federal laws that could help developers and researchers make automated and connected vehicles safer than humans. (The Teamsters are less enthused, and have pushed against the inclusion of commercial vehicles in coming federal legislation.)

For now, the Embark milk runs are designed to test logistics as well as the safety of the technology. On each trip, a human driver working for Ryder (a major trucking company and Embark’s partner on this venture) heads over to the Frigidaire lot in El Paso, picks up a load of refrigerators, hauls them to the rest stop right off the highway, and unhitches the trailer. Then, a driver working for Embark hooks that trailer up to the robotruck, cruises onto the interstate, pops it into autonomous mode, and lets it do its thing. The truck mostly sticks to the right lane and always follows the speed limit. Once in Palm Springs, the human pulls off the highway, unhitches the trailer, and passes the load to another Ryder driver, who takes it the last few miles to Frigidaire’s SoCal distribution center.

by Alex Davies, Wired |  Read more:
Image: Embark
[ed. They're here. The speed of this transformation is going to be surprising to many.]

Introducing the New Firefox: Firefox Quantum

It’s fast. Really fast. Firefox Quantum is over twice as fast as Firefox from 6 months ago, built on a completely overhauled core engine with brand new technology stolen from our advanced research group, and graced with a beautiful new look designed to get out of the way and let you do what you do best: surf a ton of pages, open a zillion tabs, all guilt free because Firefox Quantum uses less memory than the competition. Your computer will thank you.

It’s by far the biggest update we’ve had since we launched Firefox 1.0 in 2004; it’s just flat-out better in every way. If you go and install it right now, you’ll immediately notice the difference, accompanied by a feeling of mild euphoria. If you’re curious about what we did, read on.

The first thing you’ll notice is the speed. Go on, open some tabs and have some fun. The second thing you’ll notice is the new User Interface (UI). We call this initiative Photon, and its goal is to modernize and unify anything that we call Firefox while taking advantage of the speedy new engine. You guessed it: the Photon UI itself is incredibly fast and smooth. To create Photon, our user research team studied how people browsed the web. We looked at real-world hardware to make Firefox look great on any display, and we made sure that Firefox looks and works like Firefox regardless of the device you’re using. Our designers created a system that not only scales to current hardware but lets us expand in the future. Plus, our Pocket integration goes one step further, including Pocket recommendations alongside your most visited pages.

As part of our focus on user experience and performance in Firefox Quantum, Google will also become our new default search provider in the United States and Canada. With more than 60 search providers pre-installed across more than 90 languages, Firefox has more choice in search providers than any other browser.

We made many, many performance improvements in the browser’s core and shipped a new CSS engine, Stylo, that takes better advantage of today’s hardware with multiple cores that are optimized for low power consumption. We’ve also improved Firefox so that the tab you’re on gets prioritized over all others, making better use of your valuable system resources. We’ve done all this work on top of the multi-process foundation that we launched this past June. And we’re not done yet.
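
[ed. The post describes the architecture only in prose, but the central idea behind Stylo – computing the styles of many DOM nodes in parallel across CPU cores – is easy to sketch. Stylo itself is written in Rust on top of a work-stealing thread pool (rayon); the toy below is a minimal, hypothetical sketch that substitutes the standard library's scoped threads and invented Node/ComputedStyle types, so nothing here is Stylo's real code or data model.]

```rust
use std::thread;

// Hypothetical stand-ins for a DOM node and its computed style; Stylo's
// real types are far richer. These exist only to give the threads work.
struct Node { id: usize }
struct ComputedStyle { color: u32 }

// Pretend per-node style resolution (selector matching, the cascade,
// etc.), here reduced to a cheap multiplicative hash of the node id.
fn compute_style(node: &Node) -> ComputedStyle {
    ComputedStyle { color: (node.id as u32).wrapping_mul(2654435761) }
}

fn main() {
    let nodes: Vec<Node> = (0..100_000).map(|id| Node { id }).collect();
    let workers = thread::available_parallelism().map(|n| n.get()).unwrap_or(4);
    let chunk_size = (nodes.len() + workers - 1) / workers;

    // Scoped threads let each worker borrow a disjoint chunk of the node
    // list; the compiler proves there are no data races, which is the
    // property that made shipping a parallel CSS engine practical.
    let styles: Vec<ComputedStyle> = thread::scope(|s| {
        let handles: Vec<_> = nodes
            .chunks(chunk_size)
            .map(|chunk| s.spawn(move || chunk.iter().map(compute_style).collect::<Vec<_>>()))
            .collect();
        handles.into_iter().flat_map(|h| h.join().unwrap()).collect()
    });

    let checksum = styles.iter().fold(0u32, |acc, s| acc.wrapping_add(s.color));
    println!(
        "computed {} styles (checksum {:x}) using up to {} threads",
        styles.len(),
        checksum,
        workers
    );
}
```

[The real engine steals work dynamically rather than using fixed chunks, which matters when some nodes are far more expensive to style than others, but the memory-safety story is the same.]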

by Mark Mayo, Mozilla Blog | Read more:
Image: Mozilla

The Pity of It All

Ken Burns achieved renown with lengthy film histories of the Civil War, World War II, jazz, and baseball, but he describes his documentary The Vietnam War, made in close collaboration with his codirector and coproducer Lynn Novick, as “the most ambitious project we’ve ever undertaken.” Ten years in the making, it tells the story of the war in ten parts and over eighteen hours. Burns and Novick have made a film that conveys the realities of the war with extraordinary footage of battles in Vietnam and antiwar demonstrations in the United States. (...)

For those under forty, for whom the Vietnam War seems as distant as World War I or II, the film will serve as an education; for those who lived through it, the film will serve as a reminder of its horrors and of the official lies that drove it forward. In many ways it is hard to watch, and its battle scenes will revive the worst nightmares of those who witnessed them firsthand.

Asked why he and Novick took on this project, Burns said that more than forty years after the war ended, we can’t forget it, and we are still arguing about it. We are all, Novick added, “searching for some meaning in this terrible tragedy.” Their aim, the filmmakers said, was to explore whether the war was a terrible mistake that could have been avoided. They might have added that some consider it no mistake but the result of a deliberate policy. Nonetheless, she and Burns do provide answers to some questions Americans may still be asking about the war. (...)

From the very start, they strongly suggest that the US could not have won the war. To prove this point, which is still disputed, mostly by military men, they cite the private statements of John F. Kennedy, Lyndon Johnson, and many of their top advisers, who said that the measures they were taking were inadequate. It was feared that the recommendations of the Joint Chiefs of Staff for more troops and more bombing would not convince the enemy to give up its goal of reunifying Vietnam—and might in fact lead to a larger war with China. The presidents and their advisers nonetheless persisted, unwilling to give up what Kennedy called a “piece of territory” to the Communists.

Burns and Novick say the US was initially “trapped in the logic of the cold war.” As Kennedy’s phrase suggests, the war was never really about South Vietnam. Rather, Washington viewed it as a piece on a chessboard, or a domino whose fall to communism might have caused the rest of Southeast Asia to fall. Before the commitment of American combat troops in 1965, Burns and Novick make clear, there were several occasions when the US could have withdrawn without much public opposition. One was after the assassination of South Vietnamese President Ngo Dinh Diem in 1963, since his successor, General Duong Van Minh, favored a French proposal for a negotiated settlement and a neutral Vietnam. Another came after Johnson’s victory in the 1964 election, when the military junta that had taken power in Saigon earlier that year fell apart, leaving a vacuum of authority. “This is the year of minimum political risk for the Johnson administration,” Vice President Hubert Humphrey said. In other words, the war could have ended in 1965, if not before.

Burns and Novick suggest that the strategy the Americans adopted was primarily responsible for the enormous casualties on both sides. When General William Westmoreland took command of the first regular American troops in 1965, he knew that the NLF controlled three quarters of the South Vietnamese countryside. Although he never told the press, he had no hope for “pacification” unless his troops could kill more North Vietnamese soldiers than could be replaced, a threshold he called “the crossover point.”

Since US troops could rarely find the enemy, much less “take and hold” the vast jungles of the highlands, his strategy was to deploy small American units to serve as “bait” for North Vietnamese attacks and then kill the enemy with artillery and air strikes. The “body count”—or the “kill ratio”—became the standard measure for whether progress was being made. As a result, commanders on the ground often inflated the number of enemy soldiers killed to please their superiors, who in turn inflated the figures even more. Meanwhile many American combat troops were killed and wounded.

The “body count” did worse than mislead. It changed the nature of the war, as many American soldiers killed indiscriminately. The filmmakers show a helicopter gunner shooting a man in black pajamas running away in his rice field. They show the famous footage of Morley Safer watching as soldiers torched a village, and footage of soldiers blowing a hole in a hut where grain was stored and killing the people hiding there. Commanders designated enemy-held territories as “free fire zones” and shelled them every night, even though many civilians lived in those areas. We also see troops calling Vietnamese “gooks” or “slopes.”

The voiceover rarely editorializes, but the film suggests how the standard of the “body count” helps explain how the My Lai massacre—when American troops killed hundreds of unarmed Vietnamese civilians in March 1968—could have occurred. In late 1968, General Julian Ewell sent troops and aircraft into the densely populated Mekong Delta, killing 10,899 people in six months and seizing only 748 weapons. (The army inspector general later estimated that roughly half of those killed were in fact unarmed civilians.) Ewell was made a three-star general and given command of the largest army field force in Vietnam.

At the heart of the documentary are lengthy interviews with a number of American veterans, taking them through their war, often with archival footage of the battles they fought. A large number of these men came from small towns; many had fathers or uncles who had served in World War II and some had gone to West Point. As teenagers, they had always aspired to military service, and they couldn’t imagine not signing up to protect their country. Generally they knew nothing about Vietnam, but they wanted to show that they were warriors, as their dads had been. They went through basic training and were transported by air to Vietnam, where they were sent into battle in a land totally unfamiliar to them. We see some of them slogging through elephant grass and triple canopy jungles, always fearing a booby trap or an enemy ambush.

All of the men interviewed were brave and decent soldiers. One gave up a Rhodes scholarship to go into active duty out of loyalty to his friends. Several tell of their fears and anger. Many were wounded. Eventually some changed their minds about the war. (“What are we doing here?” “Are we fighting on the wrong side?”) One of them began to sympathize with the antiwar protesters and joined the peace movement after he left military service.

These segments of the film are the most affecting and yet these soldiers aren’t entirely representative. Not all American troops were as honorable as they were—or as articulate. Furthermore, as the filmmakers note, eight of ten Americans sent to Vietnam never saw combat. The majority were what the combat troops called REMFs, or Rear Echelon Mother Fuckers—public relations officers, construction men, and the like—who had good food, access to swimming pools, the run of the well-stocked post exchanges, and the liberty to go to Saigon to drink and pick up girls.

by Frances FitzGerald, NYRB |  Read more:
Image: Larry Burrows

Tuesday, November 14, 2017


Kasimir Malevich, White on White (1918)
via: Why Beauty Is Not Universal

Kiss the Good Times Goodbye

It saddens me to say it, but we are approaching the end of the automotive era.

The auto industry is on an accelerating change curve. For hundreds of years, the horse was the prime mover of humans and for the past 120 years it has been the automobile.

Now we are approaching the end of the line for the automobile because travel will be in standardized modules.

The end state will be the fully autonomous module with no capability for the driver to exercise command. You will call for it, it will arrive at your location, you'll get in, input your destination and go to the freeway.

On the freeway, it will merge seamlessly into a stream of other modules traveling at 120, 150 mph. The speed doesn't matter. You have a blending of rail-type with individual transportation.

Then, as you approach your exit, your module will enter deceleration lanes, exit and go to your final destination. You will be billed for the transportation. You will enter your credit card number or your thumbprint or whatever it will be then. The module will take off and go to its collection point, ready for the next person to call.

Most of these standardized modules will be purchased and owned by the Ubers and Lyfts and God knows what other companies that will enter the transportation business in the future.

A minority of individuals may elect to have personalized modules sitting at home so they can leave their vacation stuff and the kids' soccer gear in them. They'll still want that convenience.

The vehicles, however, will no longer be driven by humans because in 15 to 20 years — at the latest — human-driven vehicles will be legislated off the highways.

The tipping point will come when 20 to 30 percent of vehicles are fully autonomous. Countries will look at the accident statistics and figure out that human drivers are causing 99.9 percent of the accidents.

Of course, there will be a transition period. Everyone will have five years to get their car off the road or sell it for scrap or trade it on a module.

The big fleets

CNBC recently asked me to comment on a study showing that people don't want to buy an autonomous car because they would be scared of it. They don't trust traditional automakers, so the only autonomous car they'd buy would have to come from Apple or Google. Only then would they trust it.

My reply was that we don't need public acceptance of autonomous vehicles at first. All we need is acceptance by the big fleets: Uber, Lyft, FedEx, UPS, the U.S. Postal Service, utility companies, delivery services. Amazon will probably buy a slew of them. These fleet owners will account for several million vehicles a year. Every few months they will order 100,000 low-end modules, 100,000 medium and 100,000 high-end. The low-cost provider that delivers the specification will get the business.

These modules won't be branded Chevrolet, Ford or Toyota. They'll be branded Uber or Lyft or who-ever else is competing in the market.

The manufacturers of the modules will be much like Nokia — basically building handsets. But that's not where the value is going to be in the future. The value is going to be captured by the companies with the fully autonomous fleets.

The end of performance

These transportation companies will be able to order modules of various sizes — short ones, medium ones, long ones, even pickup modules. But the performance will be the same for all because nobody will be passing anybody else on the highway. That is the death knell for companies such as BMW, Mercedes-Benz and Audi. That kind of performance is not going to count anymore.

In each size vehicle, you will be able to order different equipment levels. There will be basic modules, and there will be luxury modules that will have a refrigerator, a TV and computer terminals with full connectivity. There will be no limit to what you can cram into these things because drinking while driving or texting while driving will no longer be an issue.

The importance of styling will be minimized because the modules in the high-speed trains will have to be blunt at both ends. There will be minimum separation in the train. Air resistance will be minimal because the modules will just be inserted into the train and spat out when you get close to your exit.

The future of dealers?


Unfortunately, I think this is the demise of automotive retailing as we know it.

Think about it: A horse dealer had a stable of horses of all ages, and you would come in and get the horse that suited you. You'd trade in your old horse and take your new horse home.

Car dealers will continue to exist as a fringe business for people who want personalized modules or who buy reproduction vintage Ferraris or reproduction Formula 3 cars. Automotive sport — using the cars for fun — will survive, just not on public highways. It will survive in country clubs such as Monticello in New York and Autobahn in Joliet, Ill. It will be the well-to-do, to the amazement of all their friends, who still know how to drive and who will teach their kids how to drive. It is going to be an elitist thing, though there might be public tracks, like public golf courses, where you sign up for a certain car and you go over and have fun for a few hours.

And like racehorse breeders, there will be manufacturers of race cars and sports cars and off-road vehicles. But it will be a cottage industry.

Yes, there will be dealers for this, but they will be few and far between. People will be unable to drive the car to the dealership, so dealers will probably all be on these motorsports and off-road dude ranches. It is there where people will be able to buy the car, drive it, get it serviced and get it repainted. In the early days, those tracks may be relatively numerous, but they will decline over time.

So auto retailing will be OK for the next 10, maybe 15 years as the auto companies make autonomous vehicles that still carry the manufacturer's brand and are still on the highway.

But dealerships are ultimately doomed. And I think Automotive News is doomed. Car and Driver is done; Road & Track is done. They are all facing a finite future. They'll be replaced by a magazine called Battery and Module read by the big fleets.

The era of the human-driven automobile, its repair facilities, its dealerships, the media surrounding it — all will be gone in 20 years.

by Bob Lutz, Automotive News | Read more:
Image: uncredited
[ed. Insurance companies will figure prominently in this decline as well. Rates for human-driven cars will skyrocket, making it nearly impossible to own one.]

History of the Amiga

The Amiga computer was a dream given form: an inexpensive, fast, flexible multimedia computer that could do virtually anything. It handled graphics, sound, and video as easily as other computers of its time manipulated plain text. It was easily ten years ahead of its time. It was everything its designers imagined it could be, except for one crucial problem: the world was essentially unaware of its existence.

With personal computers now playing such a prominent role in modern society, it's surprising to discover that a machine with most of the features of modern PCs actually first came to light back in 1985. Almost without exception, the people who bought and used Amigas became diehard fans. Many of these people would later look back fondly on their Amiga days and lament the loss of the platform. Some would even state categorically that despite all the speed and power of modern PCs, the new machines have yet to capture the fun and the spirit of their Amiga predecessors. A few still use their Amigas, long after the equivalent mainstream personal computers of the same vintage have been relegated to the recycling bin. Amiga users, far more than any other group, were and are extremely passionate about their platform.

So if the Amiga was so great, why did so few people hear about it? The world has plenty of books about the IBM PC and its numerous clones, and even a large library about Apple Computer and the Macintosh platform. There are also many books and documentaries about the early days of the personal computing industry. A few well-known examples are the excellent book Accidental Empires (which became a PBS documentary called Triumph of the Nerds) and the seminal work Fire in the Valley (which became a TV movie on TNT entitled Pirates of Silicon Valley).

These works tell an exciting tale about the early days of personal computing, and show us characters such as Bill Gates and Steve Jobs battling each other while they were still struggling to establish their new industry and be taken seriously by the rest of the world. They do a great job telling the story of Microsoft, IBM, and Apple, as well as of other companies that did not survive as those three did. But they mention Commodore and the Amiga rarely and in passing, if at all. Why?

When I first went looking for the corresponding story of the Amiga computer, I came up empty-handed. An exhaustive search for Amiga books came up with only a handful of old technical manuals, software how-to guides, and programming references. I couldn't believe it. Was the story so uninteresting? Was the Amiga really just a footnote in computing history, contributing nothing new and different from the other platforms?

As I began researching, I discovered the answer, and it surprised me even more than the existence of the computer itself. The story of Commodore and the Amiga was, by far, even more interesting than that of Apple or Microsoft. It is a tale of vision, of technical brilliance, dedication, and camaraderie. It is also a tale of deceit, of treachery, and of betrayal. It is a tale that has largely remained untold.

This series of articles attempts to explain what the Amiga was, what it meant to its designers and users, and why, despite its relative obscurity and early demise, it mattered so much to the computer industry. It follows some of the people whose lives were changed by their contact with the Amiga and shows what they are doing today. Finally, it looks at the small but dedicated group of people who have done what many thought was impossible and developed a new Amiga computer and operating system, ten years after the bankruptcy of Commodore. Long after most people had given up the Amiga for dead, these people have given their time, expertise and money in pursuit of this goal.

To many people, these efforts seem futile, even foolish. But to those who understand, who were there and lived through the Amiga at the height of its powers, they do not seem foolish at all.

But the story is about something else as well. More than a tale about a computer maker, this is the story about the age-old battle between mediocrity and excellence, the struggle between merely existing and trying to go beyond expectations. At many points in the story, the struggle is manifested by two sides: the hard-working, idealistic engineers driven to the bursting point and beyond to create something new and wonderful, and the incompetent and often avaricious managers and executives who end up destroying that dream. But the story goes beyond that. At its core, it is about people, not just the designers and programmers, but the users and enthusiasts, everyone whose lives were touched by the Amiga. And it is about me, because I count myself among those people, despite being over a decade too late to the party.

All these people have one thing in common. They understand the power of the dream.

by Jeremy Reimer, ARS Technica |  Read more:
Image: Commodore Amiga
[ed. My first computer was an Amiga 1000. I kept it for decades until I finally gifted it to an electronics enthusiast when I moved out of Alaska (I wish now that I hadn't). As the saying goes, it was so far advanced even Commodore didn't know how to market it. Parts 2-11 of the Amiga story here.]

Friday, November 10, 2017


Hideo Hagiwara, Muroto Cape, 1947
via:

Here Comes the Dystopia

I don’t know why, but private fire services disturb the hell out of me. Actually, I do know why. It’s because, more than just about anything else, they seem to hint at what a neo-feudalist dystopia might look like. The wealthy can buy firefighters and have their homes saved, while those who can’t afford to be saved will have to watch as all of their possessions are incinerated. There will be two types of people in this world: those who can pay their way to safety, and those who will burn. In the fully-privatized world, the amount of money you have determines literally everything, and the privatization of basic services like firefighting seems to draw us worryingly close to that world.

During the recent California wildfires, according to The Wall Street Journal, some people received a little more fire protection than others. Insurers sent in private firefighting forces to protect valuable homes, “sending out fire crews to clear brush, put down fire retardant around residents’ homes, set up sprinklers and then document the process via photos that are then sent to the homeowner.” Only those homes that had signed up for special policies would be coated in fire retardant.

The use of private firefighters for the wealthy has apparently grown over the past decade. A 2007 report documented AIG’s use of a “Wildfire Protection Unit” to serve its Private Client Group, a plan “offered only to homeowners in California’s most affluent ZIP Codes — including Malibu, Beverly Hills, Newport Beach and Menlo Park.” Just look at this extraordinary photo of AIG-branded firefighters. A homeowner recalled what it was like to know he was privately protected:

“Here you are in that raging wildfire. Smoke everywhere. Flames everywhere. Plumes of smoke coming up over the hills. Here’s a couple guys showing up in what looks like a firetruck who are experts trained in fighting wildfire and they’re there specifically to protect your home. … It was really, really comforting.”

Less comforting, perhaps, to the neighbors who also see a team of firefighters coming over the hills before realizing that they are only authorized to protect AIG-insured homes. In fact, not even just AIG-insured homes: special elite AIG-insured homes. Though in one case, the company was willing to benevolently extend protection to AIG customers with ordinary plans: “AIG said it did apply fire retardant to some homes of standard policyholders if they happened to be nearby, because it made financial sense.” If saving your home doesn’t “make financial sense,” though, you’re screwed.

The economics of the whole arrangement make perfect sense. An insurance company really does not want to see a multi-million dollar house burn to the ground, so of course it will be eager to provide extra fire services if doing so will substantially affect the amount of the subsequent claim. And it’s clear why a wealthy person would want this kind of coverage: as the homeowner above says, it’s really comforting to know that a corporation is sending you personal firefighters who will look out for you and you alone. Private firefighting is just like private security, or the mercenary soldiers that rich people hired in post-Katrina New Orleans to protect their properties from looting.

But though these deals make perfect sense to the parties making them, they have alarming implications. There’s something outrageous about a world in which firefighters protect some people rather than others, and choose to let houses burn to the ground that could be saved. I am still haunted by the 2010 story of a local fire department who refused to put out a house fire because the owner hadn’t paid his $75 annual fire protection fee. Emergency services seem like one area in which there ought to be a consensus that money shouldn’t play a role. Obviously, that’s far from true, as anyone who has gotten stuck with a $1000 bill for an ambulance ride knows well. But the more emergency response becomes a transaction, rather than an equal and universal guarantee, the more literally true it is that some people’s lives are worth more than others.

I am terrified of the future, and it’s partly because I don’t really see a way to stop these trends from getting worse. Public services will be under-resourced and wealthy people will have a strong interest in contracting for private services instead. This is already the situation in medical care, it’s already the situation with police and the military, why shouldn’t it be the same with fire? In every other domain of life, how much money you have determines what you will get in return, this just extends market logic to yet one more domain.

by Nathan J. Robinson, Current Affairs |  Read more:
Image: uncredited

Thursday, November 9, 2017

Chesterton's Fence


Chesterton's Fence
Image via: Wikipedia
[ed. Broad applicability to... well, just about everything. Politics, technology, education, law, economics, urban planning, and more.]

Where the Small-Town American Dream Lives On

Orange City, the county seat of Sioux County, Iowa, is a square mile and a half of town, more or less, population six thousand, surrounded by fields in every direction. Sioux County is in the northwest corner of the state, and Orange City is isolated from the world outside—an hour over slow roads to the interstate, more than two hours to the airport in Omaha, nearly four to Des Moines. Hawarden, another town, twenty miles away, is on the Big Sioux River, and was founded as a stop on the Northwestern Railroad in the eighteen-seventies; it had a constant stream of strangers coming through, with hotels to service them and drinking and gambling going on. But Orange City never had a river or a railroad, or, until recently, even a four-lane highway, and so its pure, hermetic culture has been preserved.

Orange City is small and cut off, but, unlike many such towns, it is not dying. Its Central Avenue is not the hollowed-out, boarded-up Main Street of twenty-first-century lore. Along a couple of blocks, there are two law offices, a real-estate office, an insurance brokerage, a coffee shop, a sewing shop, a store that sells Bibles, books, and gifts, a notions-and-antiques store, a hair-and-tanning salon, and a home-décor-and-clothing boutique, as well as the Sioux County farm bureau, the town hall, and the red brick Romanesque courthouse.

There are sixteen churches in town. The high-school graduation rate is ninety-eight per cent, the unemployment rate is two per cent. There is little crime. The median home price is around a hundred and sixty thousand dollars, which buys a three- or four-bedroom house with a yard, in a town where the median income is close to sixty thousand. For the twenty per cent of residents who make more than a hundred thousand dollars a year, it can be difficult to find ways to spend it, at least locally. There are only so many times you can redo your kitchen. Besides, conspicuous extravagance is not the Orange City way. “There are stories about people who are too showy, who ended up ruined,” Dan Vermeer, who grew up in the town, says. “The Dutch are comfortable with prosperity, but not with pleasure.”

The town was founded, in 1870, by immigrants from Holland looking for farmland, and until recently almost everyone who lived there was Dutch. Many of the stores on Central Avenue still bear Dutch names: Bomgaars farm-supply store, Van Maanen’s Radio Shack, Van Rooyen Financial Group, DeJong Chiropractic and Acupuncture, Woudstra Meat Market. The town’s police force consists of Jim Pottebaum, Duane Hulstein, Audley DeJong, Bruce Jacobsma, Chad Van Ravenswaay, Wes Van Voorst, and Bob Van Zee. When an Orange City teacher wants to divide her class in half, she will say, “A”s through “U”s to one side, “V”s through “Z”s to the other. Once, many years ago, an actual Dutch woman, from Rotterdam, moved to town with her American husband. She found the Dutchness of Orange City peculiar—the way that most people didn’t speak Dutch anymore but sprinkled their English with phrases that nobody had used in the Netherlands for a hundred years.

In the early part of the twentieth century, the question of how much Dutchness to retain caused a religious schism in the town: the American Reformed Church broke off from the First Reformed Church in order to conduct services in English. But, as the last Dutch speakers began to die off, Orange City took measures to embalm its heritage. The shops on the main stretch of Central Avenue are required to embellish their façades with “Dutch fronts”—gables in the shape of bells and step-edged triangles, painted traditional colors such as dark green, light gray, and blue, with white trim. Across the street from Bomgaars is Windmill Park, with its flower beds and six decorative windmills of varying sizes along a miniature canal. Each year, at the end of May, Orange City holds a tulip festival. Thousands of bulbs are imported from the Netherlands and planted in rows, and for three days much of the town dresses up in nineteenth-century Dutch costumes, sewn by volunteers—white lace caps and long aprons, black caps and knickers—and performs traditional dances in the street. There is a ceremonial street cleaning—kerchiefed boys throwing bucketfuls of water, aproned girls scrubbing with brooms—followed by a parade, in which the Tulip Queen and her court, high-school seniors, wave from their float, and the school band marches after them in clogs.

Every June, a couple of weeks after Tulip Festival, another ritual is enacted: a hundred of the town’s children graduate from the high school. Each of them must then make a decision that will set the course of their lives—whether to leave Orange City or to stay. This decision will affect not just where they live but how they see the world and how they vote. The town is thriving, so the choice is not driven by necessity: to stay is not to be left behind but to choose a certain kind of life. Each year, some leave, but usually more decide to settle in—something about Orange City inspires loyalty. It is only because so many stay that the town has prospered. And yet to stay home is to resist an ingrained American belief about movement and ambition. (...)

Some of the kids who left Orange City left for a profession. There was work you couldn’t do there, lives you couldn’t live—there weren’t a lot of tech jobs, for instance, or much in finance. Not many left for the money; you might make a higher salary elsewhere, but the cost of living in Orange City was so low that you’d likely end up worse off. Some left for a life style: they wanted mountains to ski and hike in, or they wanted to live somewhere with sports teams and restaurants. But most left for the same reason Dan Vermeer did—for the chance to remake themselves.

In bigger places, when you started working you met new people, and your professional self became your identity. But in Orange City you would always be So-and-So’s kid, no matter what you accomplished. People liked to point out that even Jesus had this problem when he tried to preach in his home town:
They said, “Where did this man get all this? What is this wisdom that has been given to him? What deeds of power are being done by his hands! Is not this the carpenter, the son of Mary and brother of James and Joses and Judas and Simon, and are not his sisters here with us?” And they took offense at Him.
But, while this was for some kids a reason to leave, for others it was why they wanted to stay. In Orange City, you could feel truly known. You lived among people who had not only known you for your whole life but known your parents and grandparents as well. You didn’t have to explain how your father had died, or why your mother couldn’t come to pick you up. Some people didn’t feel that they had to leave to figure out who they were, because their family and its history already described their deepest self.

Besides these sentiments, which were widespread, there was another crucial fact about Orange City that enabled it to keep more of its young than other towns its size: it had a college. Northwestern College, a small Christian school of twelve hundred students, affiliated with the Dutch Reformed Church, was founded not long after the town itself. Northwestern offered a variety of liberal-arts majors, but was oriented toward Christian ministry and practical subjects like nursing and education.

Stephanie Schwebach, née Smit, graduated from the high school in 1997 and went to Northwestern to train as a teacher. She had never felt restless in Orange City. “I really didn’t have an adventurous spirit,” she says. “I’m going to stay with the people I know.” Her professional goal was to get a job teaching in the same school she’d gone to as a child.

When she was growing up, she lived next door to her grandparents, and every Sunday after church her family went to their house for lunch, as was the custom then in Orange City. She met her future husband, Eric, in seventh grade, and they started dating in eleventh. Eric came from a huge family—his father was one of sixteen. Most of Eric’s many aunts and uncles still lived in the area, and if anyone needed anything done, like laying cement for a driveway, the family would come and help out.

After high school, Eric thought about joining the military—he thought it would be fun to see a bit of the world—but Stephanie talked him into sticking around, so he stayed in his parents’ house and went to a local technical school to train as an electrician. When Stephanie was a junior in college, they became engaged. He got a job with the manufacturer of Blue Bunny ice cream, and she started teaching. They had two children.

Some years ago, Stephanie and Eric were both working in Le Mars, a town twenty minutes away, and they considered moving there. But then Stephanie thought, It just makes it harder to stop in and say hi to your parents if you don’t live in the same town, and the kids can’t wander over by themselves—we won’t be close in the same way. Instead, they moved into the house that Eric had grown up in, on an acreage at the edge of town, and his parents built a smaller house next to it.

When Stephanie thought about what she wanted for her children in the future, the first thing she thought was, Stay close. “I want them to live right next door, so I can be the grandma that takes care of their kids and gets to see them grow through all the different stages,” she says. “Our kids have told us that once Eric’s folks are dead we have to buy their house so they, our kids, can live in our house, next door. And that would be fine with me!”

In many towns, the most enterprising kids leave for college and stay away rather than starting businesses at home, which means that there are fewer jobs at home, which means that even more people leave; and, over time, the town’s population gets smaller and older, shops and schools begin to close, and the town begins to die. This dynamic has affected Iowa more than almost any other state: during the nineteen-nineties, only North Dakota lost a larger proportion of educated young people. In 2006, Iowa’s then governor, Tom Vilsack, undertook a walking tour of the state, with the theme “Come Back to Iowa, Please,” aimed at the young and educated. He threw cocktail parties in cities around the country, at which he begged these young emigrants to return, promising that Iowa had more to offer than “hogs, acres of corn, and old people.” But the campaign was a failure. In 2007, the legislature in Des Moines created the Generation Iowa Commission, to study why college graduates were leaving; two years later, a fifth of the members of the commission had themselves left the state.

The sociologists Patrick Carr and Maria Kefalas spent several months in a small Iowa town and found that children who appeared likely to succeed were from an early age groomed for departure by their parents and teachers. Other kids, marked as stayers, were often ignored in school. Everyone realized that encouraging the ambitious kids to leave was killing the town, but the ambition of the children was valued more than the life of the community. The kids most likely to make it big weren’t just permitted to leave—they were pushed.

In Orange City, that kind of pushing was uncommon. People didn’t seem to care about careers as much as they did in other places. “Even now, my friends there, I’m not sure what many of them do, and I don’t think they know what I do,” Dan Vermeer says. “That’s just not what you talk about.” You could be proud of a child doing something impressive in another part of the country, but having grown children and grandkids around you was equally a sign of success. Go to Northwestern, Orange City parents would say. And, when you get your degree, why not settle down here? There are plenty of jobs, and it’ll take you five minutes to drive to work. When you have children, we’ll help you take care of them. People here share your values, it’s a good Christian place. And they care about you: if anything happens, they’ll have your back.

by Larissa MacFarquhar, New Yorker | Read more:
Image: Brian Finke

Wednesday, November 8, 2017


Frantisek Gross, Breakfast
via:

Apple at Its Best

The history of Apple being doomed doesn’t necessarily repeat, but it does rhyme.

Take the latest installment, from Professor Mohanbir Sawhney at the Kellogg School of Management (one of my former professors, incidentally):
Have we reached peak phone? That is, does the new iPhone X represent a plateau for hardware innovation in the smartphone product category? I would argue that we are indeed standing on the summit of peak “phone as hardware”: While Apple’s newest iPhone offers some impressive hardware features, it does not represent the beginning of the next 10 years of the smartphone, as Apple claims… 
As we have seen, when the vector of differentiation shifts, market leaders tend to fall by the wayside. In the brave new world of AI, Google and Amazon have the clear edge over Apple. Consider Google’s Pixel 2 phone: Driven by AI-based technology, it offers unprecedented photo-enhancement features and deeper hardware-software integration, such as real-time language translation when used with Google’s special headphones… The shifting vector of differentiation to AI and agents does not bode well for Apple…
Sheets of glass are simply no longer the most fertile ground for innovation. That means Apple urgently needs to shift its focus and investment to AI-driven technologies, as part of a broader effort to create the kind of ecosystem Amazon and Google are building quickly. However, Apple is falling behind in the AI race, as it remains a hardware company at its core and it has not embraced the open-source and collaborative approach that Google and Amazon are pioneering in AI.
It is an entirely reasonable argument, particularly that last line: I myself have argued that Apple needs to rethink its organizational structure in order to build more competitive services. If the last ten years have shown us anything, though, it is that discounting truly great hardware — and the sort of company necessary to deliver that — is the surest way to be right in theory and wrong in reality.

by Ben Thompson, Stratechery |  Read more:
Image: Apple


Belinda Eaton, Man with fish, 2001
via:

Something is Wrong on the Internet

As someone who grew up on the internet, I credit it as one of the most important influences on who I am today. I had a computer with internet access in my bedroom from the age of 13. It gave me access to a lot of things which were totally inappropriate for a young teenager, but it was OK. The culture, politics, and interpersonal relationships which I consider to be central to my identity were shaped by the internet, in ways that I have always considered to be beneficial to me personally. I have always been a critical proponent of the internet and everything it has brought, and broadly considered it to be emancipatory and beneficial. I state this at the outset because thinking through the implications of the problem I am going to describe troubles my own assumptions and prejudices in significant ways.

One of the so-far hypothetical questions I ask myself frequently is how I would feel about my own children having the same kind of access to the internet today. And I find the question increasingly difficult to answer. I understand that this is a natural evolution of attitudes which happens with age, and at some point this question might be a lot less hypothetical. I don’t want to be a hypocrite about it. I would want my kids to have the same opportunities to explore and grow and express themselves as I did. I would like them to have that choice. And this belief broadens into attitudes about the role of the internet in public life as a whole.

I’ve also been aware for some time of the increasingly symbiotic relationship between younger children and YouTube. I see kids engrossed in screens all the time, in pushchairs and in restaurants, and there’s always a bit of a Luddite twinge there, but I am not a parent, and I’m not making parental judgments for or on anyone else. I’ve seen family members and friends’ children plugged into Peppa Pig and nursery rhyme videos, and it makes them happy and gives everyone a break, so OK.

But I don’t even have kids and right now I just want to burn the whole thing down.

Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level. Much of what I am going to describe next has been covered elsewhere, although none of the mainstream coverage I’ve seen has really grasped the implications of what seems to be occurring.

To begin: Kid’s YouTube is definitely and markedly weird. I’ve been aware of its weirdness for some time. Last year, there were a number of articles posted about the Surprise Egg craze. Surprise Eggs videos depict, often at excruciating length, the process of unwrapping Kinder and other egg toys. That’s it, but kids are captivated by them. There are thousands and thousands of these videos and thousands and thousands, if not millions, of children watching them. (...)

[ed. Increasingly bizarre and troubling examples of "kids" videos being produced.]

As another blogger notes, one of the traditional roles of branded content is that it is a trusted source. Whether it’s Peppa Pig on children’s TV or a Disney movie, whatever one’s feelings about the industrial model of entertainment production, they are carefully produced and monitored so that kids are essentially safe watching them, and can be trusted as such. This no longer applies when brand and content are disassociated by the platform, and so known and trusted content provides a seamless gateway to unverified and potentially harmful content.

(Yes, this is the exact same process as the delamination of trusted news media on Facebook feeds and in Google results that is currently wreaking such havoc on our cognitive and political systems and I am not going to explicitly explore that relationship further here, but it is obviously deeply significant.) (...)

Here are a few things which are disturbing me:

The first is the level of horror and violence on display. Some of the times it’s troll-y gross-out stuff; most of the time it seems deeper, and more unconscious than that. The internet has a way of amplifying and enabling many of our latent desires; in fact, it’s what it seems to do best. I spend a lot of time arguing for this tendency, with regards to human sexual freedom, individual identity, and other issues. Here, and overwhelmingly it sometimes feels, that tendency is itself a violent and destructive one.

The second is the levels of exploitation, not of children because they are children but of children because they are powerless. Automated reward systems like YouTube algorithms necessitate exploitation in the same way that capitalism necessitates exploitation, and if you’re someone who bristles at the second half of that equation then maybe this should be what convinces you of its truth. Exploitation is encoded into the systems we are building, making it harder to see, harder to think and explain, harder to counter and defend against. Not in a future of AI overlords and robots in the factories, but right here, now, on your screen, in your living room and in your pocket. (...)

A friend who works in digital video described to me what it would take to make something like this: a small studio of people (half a dozen, maybe more) making high volumes of low quality content to reap ad revenue by tripping certain requirements of the system (length in particular seems to be a factor). According to my friend, online kids’ content is one of the few alternative ways of making money from 3D animation because the aesthetic standards are lower and independent production can profit through scale. It uses existing and easily available content (such as character models and motion-capture libraries) and it can be repeated and revised endlessly and mostly meaninglessly because the algorithms don’t discriminate — and neither do the kids.

These videos, wherever they are made, however they come to be made, and whatever their conscious intention (i.e. to accumulate ad revenue), are feeding upon a system which was consciously intended to show videos to children for profit. The unconsciously generated, emergent outcomes of that are all over the place.

To expose children to this content is abuse. We’re not talking about the debatable but undoubtedly real effects of film or videogame violence on teenagers, or the effects of pornography or extreme images on young minds, which were alluded to in my opening description of my own teenage internet use. Those are important debates, but they’re not what is being discussed here. What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. It’s down to that level of the metal.

This, I think, is my point: The system is complicit in the abuse.

And right now, right here, YouTube and Google are complicit in that system. The architecture they have built to extract the maximum revenue from online video is being hacked by persons unknown to abuse children, perhaps not even deliberately, but at a massive scale. (...)

This is a deeply dark time, in which the structures we have built to sustain ourselves are being used against us, all of us, in systematic and automated ways. It is hard to keep faith with the network when it produces horrors such as these. While it is tempting to dismiss the wilder examples as trolling, and a significant number of them certainly are, doing so fails to account for the sheer volume of content weighted in a particularly grotesque direction. It presents many, complexly entangled dangers, including that, just as with the increasing focus on alleged Russian interference in social media, such events will be used as justification for increased control over the internet, increased censorship, and so on. This is not what many of us want. (...)

What concerns me is not just the violence being done to children here, although that concerns me deeply. What concerns me is that this is just one aspect of a kind of infrastructural violence being done to all of us, all of the time, and we’re still struggling to find a way to even talk about it, to describe its mechanisms and its actions and its effects. 

by James Bridle, Medium |  Read more:
Image: YouTube

You Will Lose Your Job to a Robot—and Sooner Than You Think

I want to tell you straight off what this story is about: Sometime in the next 40 years, robots are going to take your job.

I don’t care what your job is. If you dig ditches, a robot will dig them better. If you’re a magazine writer, a robot will write your articles better. If you’re a doctor, IBM’s Watson will no longer “assist” you in finding the right diagnosis from its database of millions of case studies and journal articles. It will just be a better doctor than you.

And CEOs? Sorry. Robots will run companies better than you do. Artistic types? Robots will paint and write and sculpt better than you. Think you have social skills that no robot can match? Yes, they can. Within 20 years, maybe half of you will be out of jobs. A couple of decades after that, most of the rest of you will be out of jobs.

In one sense, this all sounds great. Let the robots have the damn jobs! No more dragging yourself out of bed at 6 a.m. or spending long days on your feet. We’ll be free to read or write poetry or play video games or whatever we want to do. And a century from now, this is most likely how things will turn out. Humanity will enter a golden age.

But what about 20 years from now? Or 30? We won’t all be out of jobs by then, but a lot of us will—and it will be no golden age. Until we figure out how to fairly distribute the fruits of robot labor, it will be an era of mass joblessness and mass poverty. Working-class job losses played a big role in the 2016 election, and if we don’t want a long succession of demagogues blustering their way into office because machines are taking away people’s livelihoods, this needs to change, and fast. Along with global warming, the transition to a workless future is the biggest challenge by far that progressive politics—not to mention all of humanity—faces. And yet it’s barely on our radar.

That’s kind of a buzzkill, isn’t it? Luckily, it’s traditional that stories about difficult or technical subjects open with an entertaining or provocative anecdote. The idea is that this allows readers to ease slowly into daunting material. So here’s one for you: Last year at Christmas, I was over at my mother’s house and mentioned that I had recently read an article about Google Translate. It turns out that a few weeks previously, without telling anyone, Google had switched over to a new machine-learning algorithm. Almost overnight, the quality of its translations skyrocketed. I had noticed some improvement myself but had chalked it up to the usual incremental progress these kinds of things go through. I hadn’t realized it was due to a quantum leap in software. (...)

The Industrial Revolution was all about mechanical power: Trains were more powerful than horses, and mechanical looms were more efficient than human muscle. At first, this did put people out of work: Those loom-smashing weavers in Yorkshire—the original Luddites—really did lose their livelihoods. This caused massive social upheaval for decades until the entire economy adapted to the machine age. When that finally happened, there were as many jobs tending the new machines as there used to be doing manual labor. The eventual result was a huge increase in productivity: A single person could churn out a lot more cloth than she could before. In the end, not only were as many people still employed, but they were employed at jobs tending machines that produced vastly more wealth than anyone had thought possible 100 years before. Once labor unions began demanding a piece of this pie, everyone benefited.

The AI Revolution will be nothing like that. When robots become as smart and capable as human beings, there will be nothing left for people to do because machines will be both stronger and smarter than humans. Even if AI creates lots of new jobs, it’s of no consequence. No matter what job you name, robots will be able to do it. They will manufacture themselves, program themselves, repair themselves, and manage themselves. If you don’t appreciate this, then you don’t appreciate what’s barreling toward us.

In fact, it’s even worse. In addition to doing our jobs at least as well as we do them, intelligent robots will be cheaper, faster, and far more reliable than humans. And they can work 168 hours a week, not just 40. No capitalist in her right mind would continue to employ humans. They’re expensive, they show up late, they complain whenever something changes, and they spend half their time gossiping. Let’s face it: We humans make lousy laborers.

If you want to look at this through a utopian lens, the AI Revolution has the potential to free humanity forever from drudgery. In the best-case scenario, a combination of intelligent robots and green energy will provide everyone on Earth with everything they need. But just as the Industrial Revolution caused a lot of short-term pain, so will intelligent robots. While we’re on the road to our Star Trek future, but before we finally get there, the rich are going to get richer—because they own the robots—and the rest of us are going to get poorer because we’ll be out of jobs. Unless we figure out what we’re going to do about that, the misery of workers over the next few decades will be far worse than anything the Industrial Revolution produced.

by Kevin Drum, Mother Jones |  Read more:
Image: Roberto Parada