Thursday, August 24, 2017

There Are No More Low-Priced Homes

Sales of both newly built and existing homes fell unexpectedly in July, and while it's just one month's data, it may be a signal that the housing market has hit an insurmountable hurdle. It is just plain too expensive. Home prices are higher at virtually every price point, but the gains are biggest at the low end where demand is highest.

The median price of a home sold in July hit $258,300, the highest July price on record, according to the National Association of Realtors. The Realtors divide sales figures into six different price "buckets" in their monthly report. Sales in the range of $100,000 or below were down 14 percent compared with a year ago, while sales of million-dollar and higher homes jumped nearly 20 percent.

More telling is that at the start of 2013, when home prices were just beginning to bounce off the bottom of the housing crash, the share of homes sold above $500,000 was just 9 percent of all sales. Today that share is more than 14 percent. The share of lowest-priced home sales today is less than half of what it was then as well.

"On the lower end, there is virtually no property at a very low price level anymore," said Lawrence Yun, chief economist for the National Association of Realtors. "The same property has been moved up to a different price bucket just because the prices have been rising strongly, over 40 percent price appreciation in the past five years. We are not getting the transactions on the lower end because there is virtually no inventory on the lower end."

by Diana Olick, CNBC |  Read more:
Image: Getty

Wednesday, August 23, 2017

Eliminating the Human

I have a theory that much recent tech development and innovation over the last decade or so has an unspoken overarching agenda. It has been about creating the possibility of a world with less human interaction. This tendency is, I suspect, not a bug—it’s a feature. We might think Amazon was about making books available to us that we couldn’t find locally—and it was, and what a brilliant idea—but maybe it was also just as much about eliminating human contact.

The consumer technology I am talking about doesn’t claim or acknowledge that eliminating the need to deal with humans directly is its primary goal, but it is the outcome in a surprising number of cases. I’m sort of thinking maybe it is the primary goal, even if it was not consciously aimed at. Judging by the evidence, that conclusion seems inescapable.

This, then, is the new norm. Most of the tech news we get barraged with is about algorithms, AI, robots, and self-driving cars, all of which fit this pattern. I am not saying that such developments are not efficient and convenient; this is not a judgment. I am simply noticing a pattern and wondering if, in recognizing that pattern, we might realize that it is only one trajectory of many. There are other possible roads we could be going down, and the one we’re on is not inevitable or the only one; it has been (possibly unconsciously) chosen.

I realize I’m making some wild and crazy assumptions and generalizations with this proposal—but I can claim to be, or to have been, in the camp that would identify with the unacknowledged desire to limit human interaction. I grew up happy but also found many social interactions extremely uncomfortable. I often asked myself if there were rules somewhere that I hadn’t been told, rules that would explain it all to me. I still sometimes have social niceties “explained” to me. I’m often happy going to a restaurant alone and reading. I wouldn’t want to have to do that all the time, but I have no problem with it—though I am sometimes aware of looks that say “Poor man, he has no friends.” So I believe I can claim some insight into where this unspoken urge might come from.

Human interaction is often perceived, from an engineer’s mind-set, as complicated, inefficient, noisy, and slow. Part of making something “frictionless” is getting the human part out of the way. The point is not that making a world to accommodate this mind-set is bad, but that when one has as much power over the rest of the world as the tech sector does over folks who might not share that worldview, there is the risk of a strange imbalance. The tech world is predominantly male—very much so. Testosterone combined with a drive to eliminate as much interaction with real humans as possible for the sake of “simplicity and efficiency”—do the math, and there’s the future.

by David Byrne, MIT Technology Review |  Read more:
Image: Rolling Stone

Louise Linton's Guide to Fall Fashion for Poors

“Louise Linton, the labels-loving wife of Steven Mnuchin, replied condescendingly to an Instagram poster about her lifestyle and belittled the woman, Jenni Miller, a mother of three from Portland, Ore., for having less money than she does.” — New York Times, 8/22/17

Greetings #peasants, it’s me, Louise Linton, in a beautiful #hermesscarf and #tomford sunnies. You may know my husband, Steve Mnuchin, America’s Secretary of Treasure. Or you may be familiar with my work as a film star, from my turn as Samantha in Crew 2 Crew to 2013’s The Power of Few, where I played the role of “Cory’s Mother” #crew2crew #corysmom. I am also #rich, and probably paid more taxes on my #ferragamo pants than you have in your entire worthless life.

Anyway, I found a spare moment between my annual three weeks of employment to put my diamond-studded #carandache pen to paper and write some fall fashion tips for the #greatunwashedmasses. Consider it yet another handout from me to you.

First off, that funky patterned #versace blouse may have gotten you through a summer of #leeching off the goodwill of the donor class, but now that fall is rolling around, it’s time to go understated. Try pairing a dark #ysl cashmere with a #givenchy suede jacket, or #burberry if you’re on a budget. #parasites

Don’t worry ladies, unlike my villa in #cabosanlucas, athleisure isn’t just for summer. This September onward, stay cool and comfortable in a pair of #gucci mesh-cut leggings and #marcjacobs sneakers. You’ll look downright #cute as you desperately jog away from the kinds of #sacrifices my loving husband and I have made for this country.

I know what you’re thinking – how can I grab even more out of silver screen #legend Louise Linton’s pockets, and what about makeup? A maroon or burgundy lip by #dior is my go-to, and should be yours too. It contrasts beautifully with any skin tone, from #porcelain to #ivory to #shell. And remember, only some people deserve to have nice things! I learned that while writing my memoir, In Congo’s Shadow, about my time in #Africa, a place where people do not even have basic amenities like #prada driving gloves and #dolce leopard-print handbags.

People are going to remember 2017 for just one thing: leather. And for genuine Italian #leather, you can’t do better than #cesarepaciotti, founded in 1980, the same year my husband got into Yale, and I was born. Have you even heard of Yale, you gluttonous fucking freeloader? #takethehighroad

For every fashion #do there’s a fashion #dont. First of all, don’t overdo it on the black – it makes you look old. Instead, keep it as youthful as possible with a dark gray #fendi vest and lighter #chanel accents. I know deep down Steve will replace me eventually. Second, don’t cheap out on the jewelry. When I’m staring at the reflection of my autumn #rosegold necklace in my #infinitypool, contemplating the ever-widening pit that is a myopic existence defined by greed and unmitigated materialism, you won’t catch me in anything less than #tiffanyandco. And here’s the biggest no-no I can give folks like you: for the love of God, do not breed! #mnugenics

Oh, and please check out my film Odious later this year; I believe I am in one of the scenes.

by Ziyad Gower, McSweeney's |  Read more:
Image: Twitter

Yasujiro Ozu, An Autumn Afternoon, 1962.
via:

Winner-Takes-All Effects in Autonomous Cars

There are now several dozen companies trying to make the technology for autonomous cars, across OEMs, their traditional suppliers, existing major tech companies and startups. Clearly, not all of these will succeed, but enough of them have a chance that one wonders what and where the winner-take-all effects could be, and what kinds of leverage there might be. Are there network effects that would allow the top one or two companies to squeeze the rest out, as happened in smartphone or PC operating systems? Or might there be room for five or ten companies to compete indefinitely? And for what layers in the stack does victory give power in other layers?

These kinds of questions matter because they point to the balance of power in the car industry of the future. A world in which car manufacturers can buy commodity ‘autonomy in a box’ from any of half a dozen companies (or make it themselves), much as they buy ABS today, is very different from one in which Waymo and perhaps Uber are the only real options, and can set the business model of their choice, as Google did with Android. Microsoft and Intel found choke points in the PC world, and Google did in smartphones - what might those points be in autonomy?

To begin with, it seems pretty clear that the hardware and sensors for autonomy - and, probably, for electric - will be commodities. There is plenty of science and engineering in these (and a lot more work to do), just as there is in, say, LCD screens, but there is no reason why you have to use one rather than another just because everyone else does. There are strong manufacturing scale effects, but no network effect. So, LIDAR, for example, will go from a ‘spinning KFC bucket’ that costs $50k to a small solid-state widget at a few hundred dollars or less, and there will be winners within that segment, but there’s no network effect, while winning LIDAR doesn’t give leverage at other layers of the stack (unless you get a monopoly), any more than making the best image sensors (and selling them to Apple) helps Sony’s smartphone business. In the same way, it’s likely that batteries (and motors and battery/motor control) will be as much of a commodity as RAM is today - again, scale, lots of science and perhaps some winners within each category, but no broader leverage.

On the other hand, there probably won’t be direct parallels to the third party software developer ecosystems that we see in PCs or smartphones. Windows squashed the Mac and then iOS and Android squashed Windows Phone because of the virtuous circle of developer adoption above anything else, but you won’t buy a car (if you own a car at all, of course) based on how many apps you can run on it. They’ll all run Uber and Lyft and Didi, and have Netflix embedded in the screens, but any other apps will happen on your phone (or watch, or glasses).

Rather, the place to look is not within the cars directly but still further up the stack - in the autonomous software that enables a car to move down a road without hitting anything, in the city-wide optimisation and routing that mean we might automate all cars as a system, not just each individual car, and in the on-demand fleets of 'robo-taxis' that will ride on all of this. The network effects in on-demand are self-evident, but they will get much more complex with autonomy (which will cut the cost of an on-demand ride by three quarters or more). On-demand robo-taxi fleets will dynamically pre-position their cars, and both these and quite possibly all other cars will co-ordinate their routes in real time for maximum efficiency, perhaps across fleets, to avoid, for example, all cars picking the same route at the same time. This in turn could be combined not just with surge pricing but with all sorts of differential road pricing - you might pay more to get to your destination faster in busy times, or pick an arrival time by price.

From a technological point of view, these three layers (driving, routing & optimisation, and on-demand) are largely independent - you could install the Lyft app in a GM autonomous car and let the pre-installed Waymo autonomy module drive people around, hypothetically. Clearly, some people hope there will be leverage across layers, or perhaps bundling - Tesla says that it plans to forbid people from using its autonomous cars with any on-demand service other than its own. This doesn't work the other way - Uber won't insist you use only its own autonomous systems. But though Microsoft cross-leveraged Office and Windows, both of these won in their own markets with their own network effects: a small OEM insisting you use its small robo-taxi service would be like Apple insisting you buy AppleWorks instead of Microsoft Office in 1995. I suspect that a more neutral approach might prevail. This would especially be the case if we have cross-city co-ordination of all vehicles, or even vehicle-to-vehicle communication at junctions - you would need some sort of common layer (though my bias is always towards decentralised systems).

All this is pretty speculative, though, like trying to predict what traffic jams would look like from 1900. The one area where we can talk about what the key network effects might look like is in autonomy itself. This is about hardware, and sensors, and software, but mostly it's about data, and there are two sorts of data that matter for autonomy - maps and driving data. First, ‘maps.’

Our brains are continuously processing sensor data and building a 3D model of the world around us, in real time and quite unconsciously, such that when we run through a forest we don’t trip over a root or bang our head on a branch (mostly). In autonomy this is referred to as SLAM (Simultaneous Localisation And Mapping) - we map our surroundings and localise ourselves within them. This is obviously a basic requirement for autonomy - AVs need to work out where they are on the road and what features might be around (lanes, turnings, curbs, traffic lights etc), and they also need to work out what other vehicles are on the road and how fast they’re moving.

Doing this in real time on a real road remains very hard. Humans drive using vision (and sound), but extracting a sufficiently accurate 3D model of your surroundings from imaging alone (especially 2D imaging) remains an unsolved problem: machine learning makes it conceivable but no-one can do it yet with the accuracy necessary for driving. So, we take shortcuts. This is why almost all autonomy projects are combining imaging with 360 degree LIDAR: each of these sensors has its limitations, but by combining them (‘sensor fusion’) you can get a complete picture. Building a model of the world around you with imaging alone will certainly be possible at some point in the future, but using more sensors gets you there a lot quicker, even given that you have to wait for the cost and form factor of those sensors to become practical. That is, LIDAR is a shortcut to get to a model of the world around you. Once you've got that, you often use machine learning to understand what's in it - that shape is a car, or a cyclist - but for this there doesn't seem to be a network effect (or at least not a strong one): you can get enough images of cyclists yourself without needing a fleet of cars.
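To make the ‘sensor fusion’ idea above a little more concrete, here is a minimal sketch in Python of one common pattern: projecting LIDAR points into the camera image so that accurate LIDAR depth can be attached to whatever the camera-based detector finds. The intrinsics, the co-located sensors and the detection box are all invented for illustration; this is not any particular vendor's pipeline.

```python
import numpy as np

def project_lidar_to_image(points_xyz, K, T_cam_from_lidar):
    """Project 3D LIDAR points (N, 3) into pixel coordinates."""
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # homogeneous coords
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]                 # LIDAR frame -> camera frame
    in_front = pts_cam[:, 2] > 0.1                                  # drop points behind the lens
    uvw = (K @ pts_cam[in_front].T).T
    pixels = uvw[:, :2] / uvw[:, 2:3]
    return pixels, pts_cam[in_front, 2]                             # pixel coords and depths (m)

def depth_for_detection(box, pixels, depths):
    """Median LIDAR depth of the points that fall inside a 2D detection box."""
    u1, v1, u2, v2 = box
    inside = (pixels[:, 0] >= u1) & (pixels[:, 0] <= u2) & \
             (pixels[:, 1] >= v1) & (pixels[:, 1] <= v2)
    return float(np.median(depths[inside])) if inside.any() else None

# Toy usage: one camera detection (say, a cyclist) and three LIDAR returns.
K = np.array([[700.0, 0.0, 640.0],     # assumed pinhole camera intrinsics
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)                           # assume the two sensors are co-located
lidar = np.array([[0.5, 0.0, 12.0], [0.6, 0.1, 12.2], [5.0, 1.0, 40.0]])
pix, depths = project_lidar_to_image(lidar, K, T)
print(depth_for_detection((600, 300, 700, 420), pix, depths))   # ~12 m to the cyclist
```

Real stacks add calibration, time synchronisation and object tracking on top of this, but the division of labour between the two sensors is the same.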

If LIDAR is one shortcut to SLAM, the other and more interesting one is to use prebuilt maps, which actually means ‘high-definition 3D models’. You survey the road in advance, process all the data at leisure, build a model of the street and then put it onto any car that’s going to drive down the road. The autonomous car doesn’t now have to process all that data and spot the turning or traffic light against all the other clutter in real-time at 65 miles an hour - instead it knows where to look for the traffic light, and it can take sightings of key landmarks against the model to localise itself on the road at any given time. So, your car uses cameras and LIDAR to work out where it is on the road and where the traffic signals etc are by comparing what it can see with a pre-built map instead of having to do it from scratch, and also uses those inputs to spot other vehicles around it in real time.
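As a rough sketch of that localisation step (an illustration of the general idea, not anyone's actual method), the core can be as simple as aligning the landmarks the car currently sees with their surveyed positions in the pre-built map and reading position and heading off the best-fit rigid transform. The landmark coordinates below are made up.

```python
import numpy as np

def localise(map_xy, observed_xy):
    """Estimate the car's (x, y, heading) from matched landmark pairs.

    map_xy:      (N, 2) landmark positions in global map coordinates
    observed_xy: (N, 2) the same landmarks as measured in the car's own frame
    """
    mu_map, mu_obs = map_xy.mean(axis=0), observed_xy.mean(axis=0)
    A, B = observed_xy - mu_obs, map_xy - mu_map
    U, _, Vt = np.linalg.svd(A.T @ B)        # 2D Kabsch / Procrustes alignment
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection solution
        Vt[1, :] *= -1
        R = Vt.T @ U.T
    t = mu_map - R @ mu_obs
    heading = np.arctan2(R[1, 0], R[0, 0])
    return float(t[0]), float(t[1]), float(heading)

# Toy usage: three surveyed landmarks (two traffic lights and a sign) as seen
# from a car that is really at (100, 50) with heading 0.
survey_map = np.array([[110.0, 48.0], [110.0, 52.0], [105.0, 55.0]])
seen_from_car = np.array([[10.0, -2.0], [10.0, 2.0], [5.0, 5.0]])
print(localise(survey_map, seen_from_car))   # ≈ (100.0, 50.0, 0.0)
```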

Maps have network effects. When any autonomous car drives down a pre-mapped road, it is both comparing the road to the map and updating the map: every AV can also be a survey car. If you have sold 500,000 AVs and someone else has only sold 10,000, your maps will be updated more often and be more accurate, and so your cars will have less chance of encountering something totally new and unexpected and getting confused. The more cars you sell the better all of your cars are - the definition of a network effect.
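A toy simulation, with made-up fleet sizes and trip counts, of the claim above: the bigger fleet keeps nearly every road segment freshly surveyed, while the smaller fleet's map goes stale.

```python
import random

def avg_map_staleness(fleet_size, n_segments=5_000, trips_per_car_per_day=20, days=10):
    """Average days since each road segment was last driven (and thus re-surveyed)."""
    last_seen = [None] * n_segments
    for day in range(days):
        for _ in range(fleet_size * trips_per_car_per_day):
            last_seen[random.randrange(n_segments)] = day   # each trip re-surveys one segment
    return sum(days - d if d is not None else days for d in last_seen) / n_segments

random.seed(0)
print(avg_map_staleness(2_000))   # large fleet: nearly every segment seen within the last day
print(avg_map_staleness(50))      # small fleet: maps are several days stale on average
```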

The risk here is that in the long term it is possible that just as cars could do SLAM without LIDAR, they could also do it without pre-built maps - after all, again, humans do. When and whether that would happen is unclear, but at the moment it appears that it would be long enough after autonomous cars go on sale that all the rest of the landscape might look quite different as well (that is, 🤷🏻‍♂️).

So, maps are the first network effect in data - the second comes in what the car does once it understands its surroundings. Driving on an empty road, or indeed on a road full of other AVs, is one problem, once you can see it, but working out what the other humans on the road are going to do, and what to do about it, is another problem entirely.

by Benedict Evans |  Read more:
Image: Venngage

You’ve Shot Your Moose. Are You Strong Enough to Pack It Out?

We ran as fast as the muskeg would allow, working against a deadline of the setting sun to get to the moose we had just taken before darkness closed in. Suddenly, the enormous animal was there before us, impossibly huge and hauntingly still. As I stood there, a queasy feeling brought a knot to my stomach.

My hunting partner looked over, the pained expression on his face evidence that he, too, had the knot. We'd both been around dead moose before. Our dads were hunters, and we would inevitably help butcher and pack meat when a moose was downed and thus had firsthand experience in field-dressing, quartering and packing.

Facing it on our own was a very different reality and responsibility for a couple of 14-year-olds, as we were back then.

Daunting task

Any first-time moose hunter who's never experienced the process should realize that bringing a downed moose from the field to the table is daunting. The enormity of the animals, which seems obvious while observing them in the field, suddenly becomes a cold reality when a big carcass is lying in a muskeg swamp. Experience with smaller big game helps, but it does not completely prepare one for dealing with the largest deer on the planet.

Before you press the trigger on a moose, questions should be answered. Do you have experience in field-dressing big game? If you have experience with deer or caribou, field-dressing a moose is basically a larger version that requires considerably more effort. Explaining the entire process here would be far too lengthy, and even a detailed explanation would only offer a general idea.

There are so many variables to contend with in the field that real-time experience is a must. There's also a wealth of information on the internet, including a good video on the Alaska Department of Fish and Game website.

The basics are to get the animal skinned out without tainting meat (by puncturing intestines or stomach), cut it into packable pieces while keeping it clean and dry, and cool it quickly. A good skinning knife with a sharpening device or one of the new change-blade knives, a lightweight tarp in the 6-by-8-foot range, parachute cord, close-weave game bags, a bone saw and citric acid spray are the basic equipment requirements.

If packing the meat out will take more than a day, you'll want to get it off the ground, where air can circulate around it, and you'll want to hang a tarp over it in case of rain.

The longer it takes to get the meat out, the greater the possibility of a bear finding it. Pay attention when you return to the carcass and be prepared in case a bear has taken possession of your moose. As inviting as it will be to leave your rifle behind while you pack meat, it may not be in your best interest. But remember, killing a bear to defend your game meat is not legal; it's not considered a defense-of-life-and-property situation.

A heavy load

How will you get the animal from where it's shot back to the vehicle that brought you to your hunting destination? Unless you are in an area where you can drive some sort of ATV right up to the animal, or you have horses or some other pack animal, you are going to have to put it on your back and haul it out.

That being determined, you then must ask yourself, "Am I capable of tying a moose hindquarter onto a pack frame, lifting it to my shoulders and walking away with it?" The hindquarters of a typical yearling bull moose will weigh around 100 pounds. A mature bull will have hindquarters weighing 150 pounds or more.

by Steve Meyer, ADN |  Read more:
Image: Steve Meyer
[ed. It's a young man's sport, unless you have an ATV or boat with a jet unit that can get you somewhere near where you might actually bag a moose. But then again, so do a lot of other folks.]

Tuesday, August 22, 2017

Why Generation X Might Be Our Last, Best Hope

Demographics are destiny. We grew up in the world and mind of the baby-boomers simply because there were so many of them. They were the biggest, easiest, most free-spending market the planet had ever known. What they wanted filled the shelves and what fills the shelves is our history. They wanted to dance so we had rock ‘n’ roll. They wanted to open their minds so we had LSD. They did not want to go to war so that was it for the draft. We will grow old in the world and mind of the millennials because there are even more of them. Because they don’t know what they want, the culture will be scrambled and the screens a never-ending scroll. They are not literally the children of the baby-boomers but might as well be—because here you have two vast generations, linking arms over our heads, akin in the certainty that what they want they will have, and that what they have is right and good.

The members of the in-between generation have moved through life squeezed fore and aft, with these tremendous populations pressing on either side, demanding we grow up and move away, or grow old and die—get out, delete your account, kill yourself. But it’s become clear to me that if this nation has any chance of survival, of carrying its traditions deep into the 21st century, it will in no small part depend on members of my generation, Generation X, the last Americans schooled in the old manner, the last Americans that know how to fold a newspaper, take a joke, and listen to a dirty story without losing their minds.

Just think of all the things that have come and gone in our lifetimes, all the would-be futures we watched age into obsolescence—CD, DVD, answering machine, Walkman, mixtape, MTV, video store, mall. There were still some rotary phones around in our childhood—now it’s nothing but virtual buttons.

Though much derided, members of my generation turn out to be something like Humphrey Bogart in Casablanca—we’ve seen everything and grown tired of history and all the fighting and so have opened our own little joint at the edge of the desert, the last outpost in a world gone mad, the last light in the last saloon on the darkest night of the year. It’s not those who stormed the beaches and won the war, nor the hula-hooped millions who followed, nor what we have coming out of the colleges now—it’s Generation X that will be called the greatest.

The philosophy of the boomers, their general outlook and disposition, which became our culture, is based on a misunderstanding. In the boomers, those born after World War II but before the Kennedy assassination—some of this is less about dates, which are in dispute, than about sensibility—you’re seeing a rebellion. They’d say it was against Richard Nixon, or the Vietnam War, or the conformity of the 1950s, or disco, but it was really against their parents, specifically their fathers. It was a rejection of bourgeois life, the man in his gray flannel suit, his suburbs and corporate hierarchy and commute, the simple pleasures of his seemingly unadventurous life. But the old man did not settle beneath the elms because he was boring or empty or plastic. He did it because, 10 years before you were born, he killed a German soldier with his bare hands in the woods. Many of the boomers I know believe their parents hid themselves from the action. In truth, those World War II fathers were neither hiding nor settling. They were seeking. Peace. Tranquility. They wanted to give their children a fantasy of stability not because they knew too little but because they’d seen too much. Their children read this quest as emptiness and went away before the fathers could transmit the secret wisdom, the ancient knowledge that allows a society to persist and a person to get through a Wednesday afternoon.

In this way, the chain was broken, and the boomers went zooming into the chaos. Which explains the saving attitude of Generation X, those born between the mid-1960s and the early 1980s, say. We are a revolt against the boomers, a revolt against the revolt, a market correction, a restoration not of a power elite but of a philosophy. I always believed we had more in common with the poets haunting the taverns on 52nd Street at the end of the 30s than with the hippies at Woodstock. Cynical, wised up, sane. We’d seen what became of the big projects of the boomers as that earlier generation had seen what became of all the big social projects. As a result we could not stand to hear the Utopian talk of the boomers as we cannot stand to hear the Utopian talk of the millennials. We know that most people are rotten to the core, but some are good, and proceed accordingly.

by Rich Cohen, Vanity Fair |  Read more:
Image: No credit, Gramercy Pictures/Everett Collection, from Warner Bros./Neal Peters Collection. Center, from Matador Records, Miramax/Everett Collection, Columbia Pictures/Everett Collection, Universal Pictures/Everett Collection. Bottom: No credit, by Frans Schellekens/Redferns/Getty Images.
[ed. See also: The Bromance of Justin Trudeau and Emmanuel Macron, Gen X Dynamos of Democracy.]

Camouflage Is the New Black

I have always loved shopping: in real life, online, even from a plane thirty-thousand feet above the earth, courtesy of SkyMall. I buy clothes, handbags, makeup, perfume, kitchen items—nothing that any other woman would find strange. But if you click the history tab on my computer, you’ll now see long lists of military tactical gear heading my way via UPS and Amazon Prime.

With the jaw-dropping exploits of the Special Operations Forces (Navy SEALs, Army Rangers, American Snipers, and Lone Survivors) brought to our attention by movies, books, and video games, a new breed of groupies has made its presence (and buying power) known. You no longer need to join the armed forces to look the part.

I have a friend named Mike Ritland who is a former Navy SEAL. Last month, during a visit to Texas, I tagged along as he made a call to ITS Tactical near Dallas. ITS stands for “Imminent Threat Solutions” and is a very successful online business. This might have been a classic “thanks, but I’ll wait in the car” moment for me. I assumed ITS was not up on designer hair-care products or sexy bras; little did I know I was walking into my newest obsession.

The ITS showroom is a Disneyland for gearheads. It is filled with a panoply of items you probably don’t think you need but will soon convince yourself you do, desperately.

I can think of no good reason why I should buy a digital desert-camo elastic MOLLE strapped combat backpack with a place to attach a Velcro patch embroidered with my blood type, but I did. Two.

Perusing the merchandise on the shelves at ITS set off red warning lights in my brain. My short visit to ITS turned out to be a gateway drug, and like the first firework snort of cocaine, I was instantly hooked on military tactical gear.

In fact, my view of the world shifted, and I no longer felt safe. I was not prepared for calamity and became dreadfully aware of how vulnerable I was. How could I have been so reckless as to not have a vacuum-sealed bag of QuikClot Combat Gauze with me at all times? If I’m gutshot in front of the Starbucks in Ridgefield, Connecticut, this stuff will stop the massive flow of blood until I reach the nearest MASH unit. I mean, local hospital.

When I returned home from Texas, I went online at two A.M. to peruse ITS’s seductive website. Soon I had filled my virtual cart with a pair of fire-resistant Escape shoelaces made of Kevlar, which, when removed from your shoes, you can use to friction saw through plastic wrist restraints. I also bought a small wallet-size lock-picking set and a jazzy Velcro-backed helmet patch that says, I RUN TOWARDS GUNFIRE. My helmet options are still undecided, but I favor the kind with netting that allows the insertion of small leafy tree branches.

After a cup of chamomile tea to settle my adrenaline-spiked shopping nerves, I retired for the evening thinking of George Orwell’s words: “We sleep soundly in our beds because rough men stand ready in the night to visit violence on those who would do us harm.” Instead of sheep, I counted the hours until the next UPS delivery.

I soon learned that not every tactical-gear groupie shops for one item at a time, as I had been doing. You can sign up for a year’s worth of goodies by joining the Crate Club. Brandon Webb runs the Crate Club. Webb is a former Navy SEAL, celebrated combat sniper, and best-selling author of The Killing School and The Red Circle. Of all Webb’s prolific enterprises, I like the Crate Club best; it’s a sort of fruit-of-the-month club for badasses.

With a credit card and the push of a computer key, you can choose from various tiers of membership. The top-of-the-line crate is the Premier Crate, which sells for around five hundred dollars a year, and as with the less-deluxe Standard Crate or Pro Crate, you decide how often you would like it delivered. What you won’t know is what is in the box until you open it. That’s where the fun begins.

by Jane Stern, Paris Review |  Read more:
Image: Buzznet

Who Owns the Internet?

Thirty years ago, almost no one used the Internet for anything. Today, just about everybody uses it for everything. Even as the Web has grown, however, it has narrowed. Google now controls nearly ninety per cent of search advertising, Facebook almost eighty per cent of mobile social traffic, and Amazon about seventy-five per cent of e-book sales. Such dominance, Jonathan Taplin argues, in “Move Fast and Break Things: How Facebook, Google, and Amazon Cornered Culture and Undermined Democracy” (Little, Brown), is essentially monopolistic. In his account, the new monopolies are even more powerful than the old ones, which tended to be limited to a single product or service. Carnegie, Taplin suggests, would have been envious of the reach of Mark Zuckerberg and Jeff Bezos.

Taplin, who until recently directed the Annenberg Innovation Lab, at the University of Southern California, started out as a tour manager. He worked with Judy Collins, Bob Dylan, and the Band, and also with George Harrison, on the Concert for Bangladesh. In “Move Fast and Break Things,” Taplin draws extensively on this experience to illustrate the damage, both deliberate and collateral, that Big Tech is wreaking.

Consider the case of Levon Helm. He was the drummer for the Band, and, though he never got rich off his music, well into middle age he was supported by royalties. In 1999, he was diagnosed with throat cancer. That same year, Napster came along, followed by YouTube, in 2005. Helm’s royalty income, which had run to about a hundred thousand dollars a year, according to Taplin, dropped “to almost nothing.” When Helm died, in 2012, millions of people were still listening to the Band’s music, but hardly any of them were paying for it. (In the years between the founding of Napster and Helm’s death, total consumer spending on recorded music in the United States dropped by roughly seventy per cent.) Friends had to stage a benefit for Helm’s widow so that she could hold on to their house.

Google entered and more or less immediately took over the music business when it acquired YouTube, in 2006, for $1.65 billion in stock. As Taplin notes, just about “every single tune in the world is available on YouTube as a simple audio file (most of them posted by users).” Many of these files are illegal, but to Google this is inconsequential. Under the Digital Millennium Copyright Act, signed into law by President Bill Clinton shortly after Google went live, Internet service providers aren’t liable for copyright infringement as long as they “expeditiously” take down or block access to the material once they’re notified of a problem. Musicians are constantly filing “takedown” notices—in just the first twelve weeks of last year, Google received such notices for more than two hundred million links—but, often, after one link is taken down, the song goes right back up at another one. In the fall of 2011, legislation aimed at curbing online copyright infringement, the Stop Online Piracy Act, was introduced. It had bipartisan support in Congress, and backing from such disparate groups as the National District Attorneys Association, the National League of Cities, the Association of Talent Agencies, and the International Brotherhood of Teamsters. In January, 2012, the bill seemed headed toward passage, when Google decided to flex its market-concentrated muscles. In place of its usual colorful logo, the company posted on its search page a black rectangle along with the message “Tell Congress: Please don’t censor the web!” The resulting traffic overwhelmed congressional Web sites, and support for the bill evaporated. (Senator Marco Rubio, of Florida, who had been one of the bill’s co-sponsors, denounced it on Facebook.)

Google itself doesn’t pirate music; it doesn’t have to. It’s selling the traffic—and, just as significant, the data about the traffic. Like the Koch brothers, Taplin observes, Google is “in the extraction industry.” Its business model is “to extract as much personal data from as many people in the world at the lowest possible price and to resell that data to as many companies as possible at the highest possible price.” And so Google profits from just about everything: cat videos, beheadings, alt-right rants, the Band performing “The Weight” at Woodstock, in 1969.

“I wasn’t always so skeptical,” Franklin Foer announces at the start of “World Without Mind: The Existential Threat of Big Tech” (Penguin Press). (...)

“I hope this book doesn’t come across as fueled by anger, but I don’t want to deny my anger either,” he writes. “The tech companies are destroying something precious. . . . They have eroded the integrity of institutions—media, publishing—that supply the intellectual material that provokes thought and guides democracy. Their most precious asset is our most precious asset, our attention, and they have abused it.”

Much of Foer’s anger, like Taplin’s, is directed at piracy. “Once an underground, amateur pastime,” he writes, “the bootlegging of intellectual property” has become “an accepted business practice.” He points to the Huffington Post, since shortened to HuffPost, which rose to prominence largely by aggregating—or, if you prefer, pilfering—content from publications like the Times and the Washington Post. Then there’s Google Books. Google set out to scan every book in creation and make the volumes available online, without bothering to consult the copyright holders. (The project has been hobbled by lawsuits.) Newspapers and magazines (including this one) have tried to disrupt the disrupters by placing articles behind paywalls, but, Foer contends, in the contest against Big Tech publishers can’t win; the lineup is too lopsided. “When newspapers and magazines require subscriptions to access their pieces, Google and Facebook tend to bury them,” he writes. “Articles protected by stringent paywalls almost never have the popularity that algorithms reward with prominence.”

Foer acknowledges that prominence and popularity have always mattered in publishing. In every generation, the primary business of journalism has been to stay in business. In the nineteen-eighties, Dick Stolley, the founding editor of People, developed what might be thought of as an algorithm for the pre-digital age. It was a formula for picking cover images, and it ran as follows: Young is better than old. Pretty is better than ugly. Rich is better than poor. Movies are better than music. Music is better than television. Television is better than sports. And anything is better than politics.

But Stolley’s Law is to Chartbeat what a Boy Scout’s compass is to G.P.S. It is now possible to determine not just which covers sell magazines but which articles are getting the most traction, who’s e-mailing and tweeting them, and how long individual readers are sticking with them before clicking away. This sort of detailed information, combined with the pressure to generate traffic, has resulted in what Foer sees as a golden age of banality. He cites the “memorable yet utterly forgettable example” of Cecil the lion. In 2015, Cecil was shot with an arrow outside Hwange National Park, in Zimbabwe, by a dentist from Minnesota. For whatever reason, the killing went viral and, according to Foer, “every news organization” (including, once again, this one) rushed to get in on the story, “so it could scrape some traffic from it.” He lists with evident scorn the titles of posts from Vox—“Eating Chicken Is Morally Worse Than Killing Cecil the Lion”—and The Atlantic’s Web site: “From Cecil the Lion to Climate Change: A Perfect Storm of Outrage.” (In July, Cecil’s son, Xanda, was shot, prompting another digital outpouring.)

Donald Trump, Foer argues, represents “the culmination” of this trend. In the lead-up to the campaign, Trump’s politics, such as they were, consisted of empty and outrageous claims. Although none deserved to be taken seriously, many had that coveted viral something. Trump’s utterances as a candidate were equally appalling, but on the Internet apparently nobody knows you’re a demagogue. “Trump began as Cecil the Lion, and then ended up president of the United States,” Foer writes. (...)

Either out of conviction or simply out of habit, the gatekeepers of yore set a certain tone. They waved through news about state budget deficits and arms-control talks, while impeding the flow of loony conspiracy theories. Now Chartbeat allows everyone to see just how many (or, more to the point, how few) readers there really are for that report on the drought in South Sudan or that article on monopoly power and the Internet. And so it follows that there will be fewer such reports and fewer such articles. The Web is designed to give people what they want, which, for better or worse, is also the function of democracy.

Post-Cecil, post-fact, and mid-Trump, is there anything to be done? Taplin proposes a few fixes.

by Elizabeth Kolbert, New Yorker |  Read more:
Image: Nishant Choksi

Monday, August 21, 2017

Rosé Is Exhausting

We are now deep into rosé season, and by season, I mean the rest of our lives. We have also acquired a new summer rite. Each year, a number of daring publications venture the rather interesting question, “Have we hit peak rosé?” only to provide the thrilling answer, which I will summarize for you here: No.

If you are youngish and urbanish and drinking tonight, it’s very possible you’re drinking rosé. This is more likely if you are a woman, according to the wine industry, which claims that women are driving rosé sales, though men drink rosé too — a phenomenon that has been labeled “brosé.” Bros who brosé signify their courage to push the boundaries of masculinity by wearing colorful socks, which is practically like wearing jewelry, and through their willingness to be photographed holding a glass of something pink up to their stubbled faces. Frosé is also a thing. It’s a slushie made out of rosé, sometimes festooned with strawberries or extra booze, like Aperol or elderflower liqueur, and if you think that wine is improved when rendered palatable to a small child, you might be a fan.

Rosé is not a varietal. It is made from lightly extracted red grapes, including — but not limited to — Grenache, Syrah, Cinsault, and Pinot Noir. However, it is classified in sales simply as rosé because despite the huge diversity of what this means — there are sparkling rosés and Pét-Nat rosés, and there are dry ones and less dry ones — it’s pink and you generally know exactly what you’re getting. This presents an issue: Rosé is only good when it’s kind of surprising, but most rosé is the exact opposite of surprising, and that’s exactly why it is popular. It’s light, it’s uncomplicated — you sip, you swallow, then you drink some more.

Whether you’re buying haute rosé or supermarket rosé, what you must never forget is to be drinking it all the time, and thus never not living the rosé lifestyle: Go on a rosé cruise, take in a rosé sunset, have a rosé night. Tie your rose gold hair back with a rosé-colored silk scarf so it doesn’t get in your rosé while you write a text on your rose gold iPhone that says, “rosé o’clock, bitches.” You can also sip it all day — why else would the hashtag #roséallday exist? “At a low 11.3 percent alcohol, you could easily drink this wine all day long,” a 2016 Vine Pair article confirms. The founder of Wine Savvy, Sayle Milne, recently told Refinery29: "You should be drinking rosé when you wake up. You should have it at lunch, you should have it at dinner. You should have it with a straw."

Rosé is alcohol, and if you drink it all day, you will eventually black out and wake up under a porch in Fair Harbor, and you will be covered in ticks.

I feel a little bad yelling at rosé. It never meant to hurt anyone. It’s been around for a long time. The Greeks and the Romans made rosé. Monks made rosé. And, like all wine, rosé comes in delightful forms, less delightful forms, and fairly disgusting forms, and it does so at every price point. The annoying thing about rosé is that it isn’t just a wine, like California Chardonnay or cheap Bordeaux — it’s “a state of mind” or “a lifestyle” or “a way of life.”

But just because rosé has a lot of bullshit surrounding it doesn’t mean there aren’t great rosés. Trust me, I know. I wish I had a good bottle of Chablis for every time someone told me that I would like rosé if I only got rosé. I am not saying that no rosé is good — just that maybe 80 or 90 percent of them aren’t, and while no one can deny that rosé rhymes with #allday and #yesway and s’il vous plait, for me, the truly telling coincidence is that it rhymes with okay.

Rosé used to just be some swill your dad bought when, newly divorced and preparing to host his first date, he helplessly thought, “Ladies like, uhh... wine?” Then the ’80s became the ’90s, the ’90s turned into the ’00s, and then the ’00s became a big horrible blur known as “post-9/11,” so people were like, “What can we resurrect from the past? How can we comfort ourselves with nostalgia while still honoring our newfound cosmopolitanism?” Rosé was there for us.

by Sarah Miller, Eater |  Read more:
Image: Mateus
[ed. See also: Starbucks is Now Selling Sushi Burritos.]

Disrupt the Citizen

By the time Uber and Lyft breached the levees of transport regulations, the American taxi system had already endured several waves of uneven deregulation. In the 1960s, in New York, the majority of taxi drivers formed a union with the aid of the mayor, Robert Wagner. Negotiations produced results: cabbies received a weekly paycheck, vacations, benefits, and a degree of job security. It was already standard for cab companies to insure their drivers, maintain their fleet, and check their drivers’ histories. Although they were private entities, cab companies were subject to heavy control because they were a public utility, a form of municipal transportation. As deregulation became the norm in the ’70s and ’80s, the US experimented with taxi deregulation, too. Cities like San Diego, Seattle, and Dallas increased the number of licenses, bringing thousands more taxis onto the streets. New York’s Taxi and Limousine Commission reclassified drivers as independent contractors, which made the job harder and put an end to the union. More and more drivers went full-time, chiefly to ensure they could pay off the leasing fees. Conditions deteriorated throughout the ’90s as pay failed to keep up with expenses. The National Taxi Workers Alliance was founded in 1998 to give an increasingly diverse workforce a voice in a complex and punishing industry. When they launched successful strikes against low fares and harsh fines against drivers, it seemed like the industry might turn a corner.

Then came Uber and Lyft, under cover of app-enabled darkness, to induce more drastic deregulation. By 2015, the taxi industry in Chicago — a sprawling city with a smaller fleet than New York’s, where ride-sharing was poised to do best — reported that they had lost somewhere between 30 to 40 percent of their business to ride-sharing apps.

Uber and Lyft claimed their success was due to better software, better algorithms, and better responsiveness, but their overwhelming advantage came from breaking the law. They flooded streets with unlicensed cars acting as taxis, first in San Francisco and then in cities everywhere, because they thought nobody would stop them. Fare prices, set by the city to be equitable and predictable for taxis, were put entirely out of city control and made subject to whatever the companies considered demand: low on lazy Saturday afternoons, high on Saturday nights, and even higher after events like terrorist attacks. Taxi fares and tips were unreliable in their own way, but drivers faced a new level of capriciousness when ride-sharing companies began to set the fares. Fare prices not only changed throughout the day; the wage floor could be slashed at the whim of the company with little or no notice to drivers. This was viable because unlike the taxi industry, Uber and Lyft swim Scrooge McDuck–like through piles of venture capital. They don’t have to rely on fares as their only source of revenue.

Uber has also lied to drivers about how much they can make. As late as 2015, the company claimed that drivers could earn $90,000 a year working for them. In an exposé for the Philadelphia City Paper, reporter Emily Guendelsberger worked as an UberX driver and found this to be far from the truth. “If I worked 10 hours a day, six days a week with one week off, I’d net almost $30,000 a year before taxes,” she wrote. “But if I wanted to net that $90,000-a-year figure that so many passengers asked about, I would only have to work, let’s see . . . 27 hours a day, 365 days a year.” That doesn’t include the money required to maintain and insure the car. Thanks to financing from Goldman Sachs, Uber offers its drivers predatory “deep subprime” loans to acquire their cars, which drivers then have to work extra hours to service. (...)
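Guendelsberger's arithmetic is easy to check. Here is a quick back-of-envelope version, taking her "almost $30,000" to mean roughly $28,000 (my assumption); it reproduces both her implied hourly rate and the absurd 27-hour day.

```python
# Rough check of the figures quoted above; none of these numbers come from Uber.
hours_per_year = 10 * 6 * 51            # 10 hrs/day, 6 days/week, one week off
net_per_year = 28_000                   # stands in for "almost $30,000" before taxes
net_per_hour = net_per_year / hours_per_year

advertised = 90_000                     # the figure Uber quoted to drivers
hours_needed = advertised / net_per_hour
print(round(net_per_hour, 2))           # ≈ 9.15 dollars an hour
print(round(hours_needed / 365, 1))     # ≈ 27 hours a day, every day of the year
```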

Philadelphia was one of the last cities in Pennsylvania to permit Uber and Lyft. As a kind of trial run, ride-sharing services were made temporarily legal in the city just in time for last summer’s Democratic National Convention. All the convention materials advertised its presence. The first night’s party, hosted by the company, was picketed by taxi workers. Hundreds of Democrats walked past them.

Once Uber and Lyft were legalized in a city, it became impossible to hold them to existing regulations. When the companies refused to submit to driver-fingerprinting laws in Austin, the city put the requirement to a referendum and voters booted ride-sharing out of town. But in 2017, the Texas legislature overruled the city’s voters. The fingerprinting requirements were lifted, and Texans were left with no choice but to accept ride-sharing. The companies are believed to have spent $2.3 million on lobbying Texas lawmakers this year alone.

In their antiregulatory crusade, Uber and Lyft have fostered a divided society, pitting one kind of worker against another, one kind of user against another. The largest group of Uber drivers is white (40 percent), with black non-Hispanics the second largest (19.5 percent); the largest group of taxi drivers is black (over 30 percent), with white drivers in second (26 percent). Most Uber drivers are younger and have college experience, and many have degrees; most taxi drivers are older, married, and have never been to college. Though Uber is generally cheaper, its ridership is younger and richer than taxi riders, with most identifying themselves in the “middle 50 percent” of incomes (around $45,000 a year); seniors, the disabled, and the poor make up a higher percentage of taxi clientele than their share of the general population.

The political strategy behind ride-sharing lies in pitting the figure of the consumer against the figure of the citizen. As the sociologist Wolfgang Streeck has argued, the explosion of consumer choices in the 1960s and ’70s didn’t only affect the kinds of products people owned. It affected the way those people regarded government services and public utilities, which began to seem shabby compared with the vibrant world of consumer goods. A public service like mass transit came to seem less like a community necessity and more like one choice among many. Dissatisfied with goods formerly subject to collective provision, such as buses, the affluent ceased to pay for them, supporting private options even when public ones remained.

The promise of ride-sharing is that it complements public transit. In practice, ride-sharing eliminates public transit where it exists. The majority of ride-sharing trips in San Francisco take place in neighborhoods with the highest concentration of buses and subways, and even before New York’s summer of subway hell, train ridership had dipped. Bus ridership has decreased, too. What happened to all of those riders? Some are biking, some walking — but many are in cars on the streets. A 2017 study of traffic patterns proved conclusively that congestion in New York City has increased since the introduction of ride-sharing. Meanwhile, Uber and Lyft are negotiating with cities to replace public buses with subsidized rides.

by The Editors, N+1 |  Read more:
Image: uncredited

The Black Sun (detail), from the alchemical treatise “Splendor Solis”, 1582.
via:

On Identity Politics

Donald Trump’s victory last November was a shattering event for American liberalism. Surveying the destruction, the liberal Columbia University humanities professor Mark Lilla wrote that “one of the many lessons of the recent presidential election campaign and its repugnant outcome is that the age of identity liberalism must be brought to an end.” When his essay arguing for that claim appeared in The New York Times, it caused controversy on the left, because it dared to question one of American liberalism’s most dogmatically held beliefs.

Lilla has turned that op-ed piece into a short book called The Once And Future Liberal: After Identity Politics, which appears in bookstores today. It’s a thin but punchy book by a self-described “frustrated liberal” for liberals. Lilla is tired of losing elections, and tired of watching his own side sabotage itself. In an e-mail exchange, Lilla answered a few questions I put to him about the book: (...)

There is a barbed, pithy phrase toward the end of your book: “Black Lives Matter is a textbook example of how not to build solidarity.” You make it clear that you don’t deny the existence of racism and police brutality, but you do fault BLM’s political tactics. Would you elaborate?

There is no denying that by publicizing and protesting police mistreatment of African-Americans the BLM movement mobilized people and delivered a wake-up call to every American with a conscience. But then the movement went on to use this mistreatment to build a general indictment of American society and its racial history, and all its law enforcement institutions, and to use Mau-Mau tactics to put down dissent and demand a confession of sins and public penitence (most spectacularly in a public confrontation with Hillary Clinton, of all people). Which, again, only played into the hands of the Republican right.

As soon as you cast an issue exclusively in terms of identity you invite your adversary to do the same. Those who play one race card should be prepared to be trumped by another, as we saw subtly and not so subtly in the 2016 presidential election.

But there’s another reason why this hectoring is politically counter-productive. It is hard to get people willing to confront an injustice if they do not identify in some way with those who suffer it. I am not a black male motorist and can never fully understand what it is like to be one. All the more reason, then, that I need some way to identify with him if I am going to be affected by his experience. The more the differences between us are emphasized, the less likely I will be to feel outrage at his mistreatment.

There is a reason why the leaders of the Civil Rights Movement did not talk about identity the way black activists do today, and it was not cowardice or a failure to be woke. The movement shamed America into action by consciously appealing to what we share, so that it became harder for white Americans to keep two sets of books, psychologically speaking: one for “Americans” and one for “Negroes.” That those leaders did not achieve complete success does not mean that they failed, nor does it prove that a different approach is now necessary. There is no other approach likely to succeed. Certainly not one that demands that white Americans confess their personal sins and agree in every case on what constitutes discrimination or racism today. In democratic politics it is suicidal to set the bar for agreement higher than necessary for winning adherents and elections.

Chris Arnade, I believe it was, once wrote that college has replaced the church in catechizing America. You contend that “liberalism’s prospects depend in no small measure on what happens in our institutions of higher education.” What do you mean?

Up until the Sixties, those active in liberal and progressive politics were drawn largely from the working class or farm communities, and were formed in local political clubs or on union-dominated shop floors. Today they are formed almost exclusively in our colleges and universities, as are members of the mainly liberal professions of law, journalism, and education. This was an important political change, reflecting a deep social one, as the knowledge economy came to dominate manufacturing and farming after the sixties. Now most liberals learn about politics on campuses that are largely detached socially and geographically from the rest of the country – and in particular from the sorts of people who once were the foundation of the Democratic Party. They have become petri dishes for the cultivation of cultural snobbery. This is not likely to change by itself. Which means that those of us concerned about the future of American liberalism need to understand and do something about what has happened there.

And what has happened is the institutionalization of an ideology that fetishizes our individual and group attachments, applauds self-absorption, and casts a shadow of suspicion over any invocation of a universal democratic we. It celebrates movement politics and disprizes political parties, which are machines for reaching consensus through compromise – and actually wielding power for those you care about. Republicans understand this, which is why for two generations they have dominated our political life by building from the bottom up.

“Democrats have daddy issues,” you write. I’d like you to explain that briefly, but also talk about why you use pointed phrasing like that throughout your polemic. I think it’s funny, and makes The Once And Future Liberal more readable. But contemporary liberalism is not known for its absence of sanctimony when its own sacred cows are being gored.

I was referring to Democrats’ single minded focus on the presidency. Rather than face up to the need to get out into the heartland of the country and start winning congressional, state, and local races – which would mean engaging people unlike themselves and with some views they don’t share – they have convinced themselves that if they just win the presidency by getting a big turnout of their constituencies on the two coasts they can achieve their goals. They forget that Clinton and Obama were stymied at almost every turn by a recalcitrant Congress and Supreme Court, and that many of their policies were undone at the state level. They get Daddy elected and then complain and accuse him of betrayal if he can’t just make things happen magically. It’s childish.

As for my writing, maybe Buffon was right that le style c’est l’homme même [style is the man — RD]. I find that striking, pithy statements often force me to think more than elaborate arguments do. And I like to provoke. I can’t bear American sanctimony, self-righteousness, and moral bullying. We are a fanatical people.

As a conservative reading The Once and Future Liberal, I kept thinking how valuable this book is for my side. You astutely point out that before he beat Hillary Clinton, Donald Trump trounced the GOP establishment. Republicans may hold the high ground in Washington today, but I see no evidence that the GOP is ready for the new “dispensation,” as you call the time we have entered. It’s all warmed-over, think-tank Reaganism. What lessons can conservatives learn from your book?

I hope not too many, and not until we get our house in order! But of course if Palin-Trumpism – we shouldn’t forget her role as Jane the Baptist – has taught us anything, it is that the country has a large stake in having two responsible parties that care about truth and evidence, accept the norms of democratic comportment, and devote themselves to ennobling the demos rather than catering to its worse qualities. Democrats won’t be able to achieve anything lasting if they don’t have responsible partners on the other side. So I don’t mind lending a hand.

I guess that if I were a reformist Republican the lessons I would draw from The Once and Future Liberal would be two. The first is to abandon dogmatic, anti-government libertarianism and learn to start speaking about the common good again. This is a country, a republic, not a campsite or a parking lot where we each stay in our assigned spots and share no common life or purpose. We not only have rights in relation to government and our fellow citizens, we have reciprocal duties toward them. The effectiveness, not the size, of government is what matters. We have a democratic one, fortunately. It is not an alien spaceship sucking out our brains and corrupting the young. Learn to use it, not demonize it.

The second would be to become reality-based again. Reaganism may have been good for its time, but it cannot address the problems that the country – and Republican voters – face today. What is happening to the American family? How are workers affected by our new capitalism? What kinds of services (e.g., maternity leave, worker retraining) and regulations (e.g., anti-trust) would actually help the economy perform better and benefit us all? What kind of educational system will make our workers more highly skilled and competitive (wrong answer: home schooling)? If you don’t believe me, simply read Ross Douthat and Reihan Salam’s classic Grand New Party, which laid this all out brilliantly and persuasively a decade ago. It’s been sitting on shelves gathering dust all this time while the party has skidded down ring after ring of the Inferno. (A conservative publisher should bring out an updated version…) Or take a look at the reformicon public policy journal National Affairs.

Oh, and a bonus bit of advice: get off the tit of Fox News. Now. It rots the brain, makes you crazy, ruins your judgment, and turns the demos into a mob, not a people. Find a more centrist Republican billionaire to set up a good, reality-based conservative network. And relegate that tree-necked palooka Sean Hannity to a job he’s suited for, like coaching junior high wrestling…

As you know, there is a lot of pessimistic talk now about the future of liberal democracy. There’s a striking line in your book: “What’s extraordinary — and appalling — about the past four decades of our history is that politics have been dominated by two ideologies that encourage and even celebrate the unmaking of citizens.” You’re talking about the individualism that has become central to our politics, both on the left and the right. I would say that our political consciousness has been and is being powerfully formed by individualism and consumerism — tectonic forces that work powerfully against any attempt to build solidarity. Another tectonic force is what Alasdair MacIntyre calls “emotivism” — the idea that feelings are a reliable guide to truth. Could it be the case that identity politics are the only kind of politics of solidarity possible in a culture formed by these pre-political forces?

It’s an interesting argument that, if I’m not mistaken, Ross Douthat has made in other terms. I can see that they might be gestures toward solidarity, but real solidarity comes when you identify more fully with the group and make a commitment to it, parking your individuality for the moment. Identitarian liberals have a hard time doing that.

Take the acronym LGBTQ as an example. It’s been fascinating to see how this list of letters has grown as each subgroup calls for recognition, rather than people in the groups finally settling on a single word as a moniker – say “gay,” or “queer,” or whatever. I don’t see how ID politics makes solidarity possible. Instead it just feeds what I call in the book the Facebook model of identity, one in which I “like” the groups I temporarily identify with, and “unlike” them when I no longer do, or get bored, or just want to move on.

by Rod Dreher, American Conservative |  Read more:
Image: Christophe Dellory
[ed. See also: Back to the Progressive Future.]

Sunday, August 20, 2017


Ōhno Bakufu 大野麦風 (1888–1976).
via:

The Electric-Bike Conundrum

It was nighttime, a soft summer night, and I was standing on Eighty-second Street and Second Avenue, in Manhattan, with my wife and another couple. We were in the midst of saying goodbye on the small island between the bike lane and the avenue when a bike whooshed by, soundless and very fast. I had been back in New York for only a week. As is always the case when I arrive after a period of months away, I was tuned to any change in the city’s ambient hum. When that bike flew past, I felt a shift in the familiar rhythm of the city as I had known it. I watched the guy as he travelled on the green bike path. He was speeding down the hill, but he wasn’t pedalling and showed no sign of exertion. For a moment, the disjunction between effort and velocity confused me. Then it dawned on me that he was riding an electric bike.

Like most of the guys you see with electric bikes in New York, he was a food-delivery guy. Their electric bikes tend to have giant batteries, capable of tremendous torque and horsepower. They are the vanguard, the visible part of the iceberg, but they are not indicative of what is to come. Their bikes are so conspicuously something other than a bike, for one thing. For another, the utility of having a battery speed up your delivery is so straightforward that it forecloses discussion. What lies ahead is more ambiguous. The electric bikes for sale around the city now have batteries that are slender, barely visible. The priority is not speed so much as assisted living.

I grew up as a bike rider in Manhattan, and I also worked as a bike messenger, where I absorbed the spartan, libertarian, every-man-for-himself ethos: you needed to get somewhere as fast as possible, and you did what you had to do in order to get there. The momentum you give is the momentum you get. Bike messengers were once faddish for their look, but it’s this feeling of solitude and self-reliance that is, along with the cult of momentum, the essential element of that profession. The city—with its dedicated lanes and greenways—is a bicycle nirvana compared with what it once was, and I have had to struggle to remake my bicycle life in this new world of good citizenship. And yet, immediately, there was something about electric bikes that offended me. On a bike, velocity is all. That guy on the electric bike speeding through the night was probably going to have to brake hard at some point soon. If he had pedalled that fast to reach top speed on the Second Avenue hill that sloped down from the high Eighties, then it was his right to squander it. But he hadn’t worked to go that fast. And, after he braked—for a car, or a pedestrian, or a turn—he wouldn’t have to work to pick up speed again.

“It’s a cheat!” my friend Rob Kotch, the owner of Breakaway Courier Systems, said when I got him on the phone and asked him about electric bikes. “Everyone cheats now. They see Lance Armstrong do it. They see these one-percenters making a ton of money without doing anything. So they think, why do I have to work hard? So now it’s O.K. for everyone to cheat. Everyone does it.” It took me a few minutes to realize that Kotch’s indignation on the subject of electric bikes was not coming from his point of view as a courier-system owner—although there is plenty of that. (He no longer employs bike messengers as a result of the cost of workers’ compensation and the competition from UberEATS, which doesn’t have to pay workers’ comp.) Kotch’s strong feelings were driven—so to speak—by his experience as someone who commutes twenty-three miles on a bicycle each day, between his home in New Jersey and his Manhattan office. He has been doing this ride for more than twenty years. (...)

I laughed and told him about a ride I took across the Manhattan Bridge the previous night, where several electric bikes flew by me. It was not, I insisted, an ego thing about who is going faster. Lots of people who flew by me on the bridge were on regular bikes. It was a rhythm thing, I said. On a bike, you know where the hills are, you know how to time the lights, you calibrate for the movement of cars in traffic, other bikes, pedestrians. The electric bike was a new velocity on the streets.

And yet, for all our shared sense that something was wrong with electric bikes, we agreed that, by any rational measure, they are a force for good.

“The engines are efficient, they reduce congestion,” he said.

“Fewer cars, more bikes,” I said.

We proceeded to list a few other Goo-Goo virtues. (I first encountered this phrase—short for good-government types—in Robert Caro’s “The Power Broker,” about Robert Moses, the man who built New York for the automobile.)

“If it’s such a good thing, why do we have this resentment?” I asked.

He wasn’t sure, he said. He confessed that he had recently tried a friend’s electric bike and found the experience appealing to the point of corruption.

“It’s only a matter of time before I get one,” he said ruefully. “And then I’ll probably never get on a real bike again.”

In some ways, the bike-ification of New York City can be seen as the ultimate middle finger raised to Robert Moses, a hero for building so many parks who then became a crazed highway builder who wanted to demolish part of Greenwich Village to make room for a freeway. But are all the bikes a triumph for his nemesis, Jane Jacobs, and her vision of cohesive neighborhoods anchored by street life, by which she meant the world of pedestrians on the sidewalk?

“The revolution under Bloomberg was to see the city as a place where pedestrians come first,” a longtime city bike rider and advocate I know, who didn’t wish to be named, said. “This electric phenomenon undermines this development. The great thing about bikes in the city is that, aesthetically and philosophically, you have to be present and aware of where you are, and where others are. When you keep introducing more and more power and speed into that equation, it goes against the philosophy of slowing cars down—of traffic calming—in order to make things more livable,” he said.

by Thomas Beller, New Yorker | Read more:
Image: Sophia Foster-Dimino

Bengt G. Pettersson, Boat Bridge at Evening, Denmark, 1973.
via:

It’s Complicated

Have you ever thought about killing someone? I have, and I confess that it brought me peculiar feelings of pleasure to fantasize about putting the hurt on someone who had wronged me. I am not alone. According to the evolutionary psychologist David Buss, who asked thousands of people this same question and reported the data in his 2005 book, The Murderer Next Door, 91 percent of men and 84 percent of women reported having had at least one vivid homicidal fantasy in their life. It turns out that nearly all murders (90 percent by some estimates) are moralistic in nature—not cold-blooded killing for money or assets, but hot-blooded homicide in which perpetrators believe that their victims deserve to die. The murderer is judge, jury, and executioner in a trial that can take only seconds to carry out.

What happens in brains and bodies at the moment humans engage in violence with other humans? That is the subject of Stanford University neurobiologist and primatologist Robert M. Sapolsky’s Behave: The Biology of Humans at Our Best and Worst. The book is Sapolsky’s magnum opus, not just in length, scope (nearly every aspect of the human condition is considered), and depth (thousands of references document decades of research by Sapolsky and many others) but also in importance as the acclaimed scientist integrates numerous disciplines to explain both our inner demons and our better angels. It is a magnificent culmination of integrative thinking, on par with similar authoritative works, such as Jared Diamond’s Guns, Germs, and Steel and Steven Pinker’s The Better Angels of Our Nature. Its length and detail are daunting, but Sapolsky’s engaging style—honed through decades of writing editorials, review essays, and columns for The Wall Street Journal, as well as popular science books (Why Zebras Don’t Get Ulcers, A Primate’s Memoir)—carries the reader effortlessly from one subject to the next. The work is a monumental contribution to the scientific understanding of human behavior that belongs on every bookshelf and many a course syllabus.

Sapolsky begins with a particular behavioral act, and then works backward to explain it chapter by chapter: one second before, seconds to minutes before, hours to days before, days to months before, and so on back through adolescence, the crib, the womb, and ultimately centuries and millennia in the past, all the way to our evolutionary ancestors and the origin of our moral emotions. He gets deep into the weeds of all the mitigating factors at work at every level of analysis, which is multilayered, not just chronologically but categorically. Or more to the point, uncategorically, for one of Sapolsky’s key insights to understanding human action is that the moment you proffer X as a cause—neurons, neurotransmitters, hormones, brain-specific transcription factors, epigenetic effects, gene transposition during neurogenesis, dopamine D4 receptor gene variants, the prenatal environment, the postnatal environment, teachers, mentors, peers, socioeconomic status, society, culture—it triggers a cascade of links to all such intervening variables. None acts in isolation. Nearly every trait or behavior he considers results in a definitive conclusion, “It’s complicated.”

Does this mean we are relieved of moral culpability for our actions? As the old joke goes: nature or nurture—either way, it’s your parents’ fault. With all these intervening variables influencing our actions, where does free will enter the equation? Like most scientists, Sapolsky rejects libertarian free will: there is no homunculus (or soul, or separate entity) calling the shots for you, but even if there were a mini-me inside of you making choices, that mini-me would need a mini-mini-me inside of it, ad infinitum. That leaves two options: complete determinism and compatibilism, or “mitigated free will,” as Sapolsky calls it. A great many scientists are compatibilists, accepting the brute fact of a deterministic world with governing laws of nature that apply fully to humans, while conceding that such factors as brain injury, alcoholism, drug addiction, moments of uncontrollable rage, and the like can account for some criminal acts.

Sapolsky will have none of this. (...) Sapolsky quotes American cognitive scientist Marvin Minsky in support of the position that free will is really just “internal forces I do not understand.”

This is the part of Behave where the academic rubber meets the legal road as Sapolsky ventures into the areas of morality and criminal justice, which he believes needs a major overhaul. No, we shouldn’t let dangerous criminals out of jail to wreak havoc on society, but neither should we punish them for acts that, if we believe the science, they were not truly responsible for committing. Punishment as retribution is meaningless unless it is meted out in Skinnerian doses with the goal of deterring unwanted behaviors. Some progress has been made on this front. People who regularly suffer epileptic seizures are not allowed to drive, for example, but we don’t think of this ban as “punishing” them for their affliction. “Crowds of goitrous yahoos don’t excitedly mass to watch the epileptic’s driver’s license be publicly burned,” Sapolsky writes in his characteristic style. “We’ve successfully banished the notion of punishment in that realm. It may take centuries, but we can do the same in all our current arenas of punishment.”

by Michael Shermer, American Scholar |  Read more:
Image: Angelica Kauffman, Self-Portrait Hesitating between the Arts of Music and Painting, 1791

Steve Jobs’s Mock Turtleneck Gets a Second Life

Of the many technological and artistic triumphs of the fashion designer Issey Miyake—from his patented pleating to his soulful sculptural forms—his most famous piece of work will end up being the black mock turtleneck indelibly associated with Apple co-founder Steve Jobs.

The model was retired from production in 2011, after Jobs’s death, but in July, Issey Miyake Inc.—the innovative craftsman’s eponymous clothing brand—is releasing a $270 garment called the Semi-Dull T. It’s 60 percent polyester, 40 percent cotton, and guaranteed to inspire déjà vu.

Don’t call it a comeback. The company is at pains to state that the turtleneck, designed by Miyake protégé Yusuke Takahashi with a trimmer silhouette and higher shoulders than the original, isn’t a reissue. And even if the garment were a straight-up imitation, its importance as a cultural artifact is more about the inimitable way Jobs wore it.

For Jobs, this way of dressing was a kind of consolation prize after employees at Apple Inc. resisted his attempts to create a company uniform. In the early 1980s he’d visited Tokyo to tour the headquarters of Sony Corp., which had 30,000 employees in Japan. And all of them—from co-founder Akio Morita to each factory worker, sales rep, and secretary—wore the same thing: a traditional blue-and-white work jacket.

In the telling of Jobs biographer Walter Isaacson, Morita explained to Jobs that Sony had imposed a uniform since its founding in 1946. The workers of a nation humiliated in war were too broke to dress themselves, and corporations began supplying them with clothes to keep them looking professional and create a bond with their colleagues. In 1981, for Sony’s 35th anniversary, Morita had commissioned Miyake, already a fashion star after showing innovative collections in Paris, to design a jacket. Miyake returned with a futuristic taupe nylon model with no lapels, and sleeves that unzipped to convert it into a vest.

Jobs loved it and commissioned Miyake to design a vest for Apple, which he then unsuccessfully pitched to a crowd in Cupertino, Calif. “Oh, man, did I get booed off the stage,” Jobs told Isaacson. “Everybody hated the idea.” Americans, with their cult of individuality, tend not to go in for explicit uniformity, conforming instead to dress codes that aren’t even written yet.

This left Jobs to contrive a uniform for himself, and he drew his daily wardrobe from a closet stocked with Levi’s 501s, New Balance 991s, and stacks of black mock turtlenecks—about 100 in total—supplied by Miyake.

How Jobs came to settle on this particular item of clothing isn’t recorded, but it had long been a totem of progressive high-culture types—San Francisco beatniks, Left Bank chanteuses, and Samuel Beckett flinching at the lens of Richard Avedon.

In the analysis of costume historian Anne Hollander, the existentialist black turtleneck indicates “the kind of freedom from sartorial convention demanded by deep thought,” and it’s tempting to read Jobs’s as the descendant of that symbol. His turtleneck was an extension of his aesthetic aspirations: severe but serene, ascetic but cushy. The garment, as Jobs wore it, was the vestment of a secular monk.

The shirt put an especially cerebral spin on the emerging West Coast business-casual look, implying that the Apple chief had evolved past such relics as neckties—an anti-establishment gesture that set a template for hoodie-clad Mark Zuckerbergs and every other startup kid disrupting a traditional dress code. In its minimalism and simplicity, the black turtleneck gave a flatscreen shimmer to Jobs’s self-presentation, with the clean lines of a blank slate and no old-fashioned buttons.

by Troy Patterson, Bloomberg |  Read more:
Image: Ted Cavanaugh for Bloomberg Pursuits, Stylist: Chloe Daley