Sunday, August 20, 2017


Ōhno Bakufu 大野麦風 (1888–1976).
via:

The Electric-Bike Conundrum

It was nighttime, a soft summer night, and I was standing on Eighty-second Street and Second Avenue, in Manhattan, with my wife and another couple. We were in the midst of saying goodbye on the small island between the bike lane and the avenue when a bike whooshed by, soundless and very fast. I had been back in New York for only a week. As is always the case when I arrive after a period of months away, I was tuned to any change in the city’s ambient hum. When that bike flew past, I felt a shift in the familiar rhythm of the city as I had known it. I watched the guy as he travelled on the green bike path. He was speeding down the hill, but he wasn’t pedalling and showed no sign of exertion. For a moment, the disjunction between effort and velocity confused me. Then it dawned on me that he was riding an electric bike.

Like most of the guys you see with electric bikes in New York, he was a food-delivery guy. Their electric bikes tend to have giant batteries, capable of tremendous torque and horsepower. They are the vanguard, the visible part of the iceberg, but they are not indicative of what is to come. Their bikes are so conspicuously something other than a bike, for one thing. For another, the utility of having a battery speed up your delivery is so straightforward that it forecloses discussion. What lies ahead is more ambiguous. The electric bikes for sale around the city now have batteries that are slender, barely visible. The priority is not speed so much as assisted living.

I grew up as a bike rider in Manhattan, and I also worked as a bike messenger, where I absorbed the spartan, libertarian, every-man-for-himself ethos: you needed to get somewhere as fast as possible, and you did what you had to do in order to get there. The momentum you give is the momentum you get. Bike messengers were once faddish for their look, but it’s this feeling of solitude and self-reliance that is, along with the cult of momentum, the essential element of that profession. The city—with its dedicated lanes and greenways—is a bicycle nirvana compared with what it once was, and I have had to struggle to remake my bicycle life in this new world of good citizenship. And yet, immediately, there was something about electric bikes that offended me. On a bike, velocity is all. That guy on the electric bike speeding through the night was probably going to have to brake hard at some point soon. If he wanted to pedal that fast to attain top speed on the Second Avenue hill that sloped down from the high Eighties, then it was his right to squander it. But he hadn’t worked to go that fast. And, after he braked—for a car, or a pedestrian, or a turn—he wouldn’t have to work to pick up speed again.

“It’s a cheat!” my friend Rob Kotch, the owner of Breakaway Courier Systems, said, when I got him on the phone and asked him about electric bikes. “Everyone cheats now. They see Lance Armstrong do it. They see these one-percenters making a ton of money without doing anything. So they think, why do I have to work hard? So now it’s O.K. for everyone to cheat. Everyone does it.” It took me a few minutes to realize that Kotch’s indignation on the subject of electric bikes was not coming from his point of view as a courier-system owner—although there is plenty of that. (He no longer employs bike messengers as a result of the cost of workers’ compensation and the competition from UberEATS, which doesn’t have to pay workers’ comp.) Kotch’s strong feelings were driven—so to speak—by his experience as someone who commutes twenty-three miles on a bicycle each day, between his home in New Jersey and his Manhattan office. He has been doing this ride for more than twenty years. (...)

I laughed and told him about a ride I had taken across the Manhattan Bridge the previous night, during which several electric bikes flew by me. It was not, I insisted, an ego thing about who is going faster. Lots of people who flew by me on the bridge were on regular bikes. It was a rhythm thing, I said. On a bike, you know where the hills are, you know how to time the lights, you calibrate for the movement of cars in traffic, other bikes, pedestrians. The electric bike was a new velocity on the streets.

And yet, for all our shared sense that something was wrong with electric bikes, we agreed that, by any rational measure, they are a force for good.

“The engines are efficient, they reduce congestion,” he said.

“Fewer cars, more bikes,” I said.

We proceeded to list a few other Goo-Goo virtues. (I first encountered this phrase—short for good-government types—in Robert Caro’s “The Power Broker,” about Robert Moses, the man who built New York for the automobile.)

“If it’s such a good thing, why do we have this resentment?” I asked.

He wasn’t sure, he said. He confessed that he had recently tried a friend’s electric bike and found the experience appealing to the point of corruption.

“It’s only a matter of time before I get one,” he said ruefully. “And then I’ll probably never get on a real bike again.”

In some ways, the bike-ification of New York City can be seen as the ultimate middle finger raised to Robert Moses, a hero for building so many parks who then became a crazed highway builder who wanted to demolish part of Greenwich Village to make room for a freeway. But are all the bikes a triumph for his nemesis, Jane Jacobs, and her vision of cohesive neighborhoods anchored by street life, by which she meant the world of pedestrians on the sidewalk?

“The revolution under Bloomberg was to see the city as a place where pedestrians come first,” a longtime city bike rider and advocate I know, who didn’t wish to be named, said. “This electric phenomenon undermines this development. The great thing about bikes in the city is that, aesthetically and philosophically, you have to be present and aware of where you are, and where others are. When you keep introducing more and more power and speed into that equation, it goes against the philosophy of slowing cars down—of traffic calming—in order to make things more livable,” he said.

by Thomas Beller, New Yorker | Read more:
Image: Sophia Foster-Dimino

Bengt G. Pettersson, Boat Bridge at Evening, Denmark, 1973.
via:

It’s Complicated

Have you ever thought about killing someone? I have, and I confess that it brought me peculiar feelings of pleasure to fantasize about putting the hurt on someone who had wronged me. I am not alone. According to the evolutionary psychologist David Buss, who asked thousands of people this same question and reported the data in his 2005 book, The Murderer Next Door, 91 percent of men and 84 percent of women reported having had at least one vivid homicidal fantasy in their life. It turns out that nearly all murders (90 percent by some estimates) are moralistic in nature—not cold-blooded killing for money or assets, but hot-blooded homicide in which perpetrators believe that their victims deserve to die. The murderer is judge, jury, and executioner in a trial that can take only seconds to carry out.

What happens in brains and bodies at the moment humans engage in violence with other humans? That is the subject of Stanford University neurobiologist and primatologist Robert M. Sapolsky’s Behave: The Biology of Humans at Our Best and Worst. The book is Sapolsky’s magnum opus, not just in length, scope (nearly every aspect of the human condition is considered), and depth (thousands of references document decades of research by Sapolsky and many others) but also in importance as the acclaimed scientist integrates numerous disciplines to explain both our inner demons and our better angels. It is a magnificent culmination of integrative thinking, on par with similar authoritative works, such as Jared Diamond’s Guns, Germs, and Steel and Steven Pinker’s The Better Angels of Our Nature. Its length and detail are daunting, but Sapolsky’s engaging style—honed through decades of writing editorials, review essays, and columns for The Wall Street Journal, as well as popular science books (Why Zebras Don’t Get Ulcers, A Primate’s Memoir)—carries the reader effortlessly from one subject to the next. The work is a monumental contribution to the scientific understanding of human behavior that belongs on every bookshelf and many a course syllabus.

Sapolsky begins with a particular behavioral act, and then works backward to explain it chapter by chapter: one second before, seconds to minutes before, hours to days before, days to months before, and so on back through adolescence, the crib, the womb, and ultimately centuries and millennia in the past, all the way to our evolutionary ancestors and the origin of our moral emotions. He gets deep into the weeds of all the mitigating factors at work at every level of analysis, which is multilayered, not just chronologically but categorically. Or more to the point, uncategorically, for one of Sapolsky’s key insights to understanding human action is that the moment you proffer X as a cause—neurons, neurotransmitters, hormones, brain-specific transcription factors, epigenetic effects, gene transposition during neurogenesis, dopamine D4 receptor gene variants, the prenatal environment, the postnatal environment, teachers, mentors, peers, socioeconomic status, society, culture—it triggers a cascade of links to all such intervening variables. None acts in isolation. Nearly every trait or behavior he considers results in a definitive conclusion, “It’s complicated.”

Does this mean we are relieved of moral culpability for our actions? As the old joke goes: nature or nurture—either way, it’s your parents’ fault. With all these intervening variables influencing our actions, where does free will enter the equation? Like most scientists, Sapolsky rejects libertarian free will: there is no homunculus (or soul, or separate entity) calling the shots for you, but even if there were a mini-me inside of you making choices, that mini-me would need a mini-mini-me inside of it, ad infinitum. That leaves two options: complete determinism and compatibilism, or “mitigated free will,” as Sapolsky calls it. A great many scientists are compatibilists, accepting the brute fact of a deterministic world with governing laws of nature that apply fully to humans, while conceding that such factors as brain injury, alcoholism, drug addiction, moments of uncontrollable rage, and the like can account for some criminal acts.

Sapolsky will have none of this. (...) Sapolsky quotes American cognitive scientist Marvin Minsky in support of the position that free will is really just “internal forces I do not understand.”

This is the part of Behave where the academic rubber meets the legal road as Sapolsky ventures into the areas of morality and criminal justice, the latter of which he believes needs a major overhaul. No, we shouldn’t let dangerous criminals out of jail to wreak havoc on society, but neither should we punish them for acts that, if we believe the science, they were not truly responsible for committing. Punishment as retribution is meaningless unless it is meted out in Skinnerian doses with the goal of deterring unwanted behaviors. Some progress has been made on this front. People who regularly suffer epileptic seizures are not allowed to drive, for example, but we don’t think of this ban as “punishing” them for their affliction. “Crowds of goitrous yahoos don’t excitedly mass to watch the epileptic’s driver’s license be publicly burned,” Sapolsky writes in his characteristic style. “We’ve successfully banished the notion of punishment in that realm. It may take centuries, but we can do the same in all our current arenas of punishment.”

by Michael Shermer, American Scholar |  Read more:
Image: Angelica Kauffman, Self-Portrait Hesitating between the Arts of Music and Painting, 1791

Steve Jobs’s Mock Turtleneck Gets a Second Life

Of the many technological and artistic triumphs of the fashion designer Issey Miyake—from his patented pleating to his soulful sculptural forms—his most famous piece of work will end up being the black mock turtleneck indelibly associated with Apple co-founder Steve Jobs.

The model was retired from production in 2011, after Jobs’s death, but in July, Issey Miyake Inc.—the innovative craftsman’s eponymous clothing brand—is releasing a $270 garment called the Semi-Dull T. It’s 60 percent polyester, 40 percent cotton, and guaranteed to inspire déjà vu.

Don’t call it a comeback. The company is at pains to state that the turtleneck, designed by Miyake protégé Yusuke Takahashi with a trimmer silhouette and higher shoulders than the original, isn’t a reissue. And even if the garment were a straight-up imitation, its importance as a cultural artifact is more about the inimitable way Jobs wore it.

For Jobs, this way of dressing was a kind of consolation prize after employees at Apple Inc. resisted his attempts to create a company uniform. In the early 1980s he’d visited Tokyo to tour the headquarters of Sony Corp., which had 30,000 employees in Japan. And all of them—from co-founder Akio Morita to each factory worker, sales rep, and secretary—wore the same thing: a traditional blue-and-white work jacket.

In the telling of Jobs biographer Walter Isaacson, Morita explained to Jobs that Sony had imposed a uniform since its founding in 1946. The workers of a nation humiliated in war were too broke to dress themselves, and corporations began supplying them with clothes to keep them looking professional and create a bond with their colleagues. In 1981, for Sony’s 35th anniversary, Morita had commissioned Miyake, already a fashion star after showing innovative collections in Paris, to design a jacket. Miyake returned with a futuristic taupe nylon model with no lapels and sleeves that unzipped to convert it into a vest.

Jobs loved it and commissioned Miyake to design a vest for Apple, which he then unsuccessfully pitched to a crowd in Cupertino, Calif. “Oh, man, did I get booed off the stage,” Jobs told Isaacson. “Everybody hated the idea.” Americans, with their cult of individuality, tend not to go in for explicit uniformity, conforming instead to dress codes that aren’t even written yet.

This left Jobs to contrive a uniform for himself, and he drew his daily wardrobe from a closet stocked with Levi’s 501s, New Balance 991s, and stacks of black mock turtlenecks—about 100 in total—supplied by Miyake.

How Jobs came to settle on this particular item of clothing isn’t recorded, but it had long been a totem of progressive high-culture types—San Francisco beatniks, Left Bank chanteuses, and Samuel Beckett flinching at the lens of Richard Avedon.

In the analysis of costume historian Anne Hollander, the existentialist black turtleneck indicates “the kind of freedom from sartorial convention demanded by deep thought,” and it’s tempting to read Jobs’s as the descendant of that symbol. His turtleneck was an extension of his aesthetic aspirations: severe but serene, ascetic but cushy. The garment, as Jobs wore it, was the vestment of a secular monk.

The shirt put an especially cerebral spin on the emerging West Coast business-casual look, implying that the Apple chief had evolved past such relics as neckties—an anti-establishment gesture that set a template for hoodie-clad Mark Zuckerbergs and every other startup kid disrupting a traditional dress code. In its minimalism and simplicity, the black turtleneck gave a flatscreen shimmer to Jobs’s self-presentation, with the clean lines of a blank slate and no old-fashioned buttons.

by Troy Patterson, Bloomberg |  Read more:
Image: Ted Cavanaugh for Bloomberg Pursuits, Stylist: Chloe Daley

Saturday, August 19, 2017

Aging Parents With Lots of Stuff, and Children Who Don’t Want It

Mothers and daughters talk about all kinds of things. But there is one conversation Susan Beauregard, 49, of Hampton, Conn., is reluctant to have with her 89-year-old mother, Anita Shear: What to do — eventually — with Mrs. Shear’s beloved set of Lenox china?

Ms. Beauregard said she never uses her own fine china, which she received as a wedding gift long ago. “I feel obligated to take my mom’s Lenox, but it’s just going to sit in the cupboard next to my stuff,” she said.

The only heirlooms she wants from her mother, who lives about an hour away, in the home where Ms. Beauregard was raised, are a few pictures and her mother’s wedding band and engagement ring, which she plans to pass along to her son.

So, in a quandary familiar to many adults who must soon dispose of the beloved stuff their parents would love them to inherit, Ms. Beauregard has to break it to her mother that she does not intend to keep the Hitchcock dining room set or the buffet full of matching Lenox dinnerware, saucers and gravy boats.

As baby boomers grow older, the volume of unwanted keepsakes and family heirlooms is poised to grow — along with the number of delicate conversations about what to do with them. According to a 2014 United States census report, more than 20 percent of America’s population will be 65 or older by 2030. As these waves of older adults start moving to smaller dwellings, assisted living facilities or retirement homes, they and their kin will have to part with household possessions that the heirs simply don’t want.

“We went from a 3,000-square-foot colonial with three floors to a single-story, 1,400-square-foot living space,” said Tena Bluhm, 76, formerly of Fairfax, Va. She and her 77-year-old husband, Ray Bluhm, moved this month to a retirement community in Lake Ridge, Va.

Before the move, their two adult children took a handful of items, including a new bed and a dining table and chairs. But Mrs. Bluhm could not interest them in “the china and the silver and the crystal,” her own generation’s hallmarks of a properly furnished, middle-class home.

The competitive accumulation of material goods, a cornerstone of the American dream, dates to the post-World War II economy, when returning veterans fled the cities to establish homes and status in the suburbs. Couples married when they were young, and wedding gifts were meant to be used — and treasured — for life.

“Americans spent to keep up with the Joneses, using their possessions to make the statement that they were not failing in their careers,” wrote Juliet B. Schor, the Boston College sociologist, in her 1998 book, “The Overspent American: Why We Want What We Don’t Need.”

But for a variety of social, cultural, and economic reasons, this is no longer the case. Today’s young adults tend to acquire household goods that they consider temporary or disposable, from online retailers or stores like Ikea and Target, instead of inheriting them from parents or grandparents.

This represents a significant shift in material culture, said Mary Kay Buysse, executive director of the National Association of Senior Move Managers, a professional organization of moving specialists who help older people downsize.

“This is the first time we’re seeing a kink in the chain of passing down mementos from one generation to another,” Ms. Buysse said in a telephone interview from the group’s headquarters in Hinsdale, Ill.

Accordingly, the senior move management industry has experienced unprecedented growth in recent years, Ms. Buysse said.

by Tom Verde, NY Times |  Read more:
Image: T.J. Kirkpatrick

'A Bit More'

Last year I fell in love with a toaster.

It looks like most others. A brushed, stainless-steel housing. Four slots, to accommodate the whole family’s bread-provisioning needs. It is alluring but modest, perched atop the counter on proud haunches.

But at a time when industry promises disruptive innovation, Breville, the Australian manufacturer of my toaster, offers something truly new and useful through humility rather than pride.

The mechanism that raises and lowers the bread from the chassis is motorized. After I press a button atop the frame, the basket silently lowers the bread into the device to become toast. On its own, this feature seems doomed to mechanical failure. But the risk is worthwhile to facilitate the toaster’s star ability: the “A Bit More” button. That modest attribute offers a lesson for design of all stripes—one that could make every designed object and experience better.

Toast is an imperfect art. Different breads brown at different rates. Even with the very same bread, similar toaster settings can produce varied results. When my bread doesn’t come up dark enough, I dial in a guess for another browning run. Usually I go overboard and burn the toast in the process. It’s a toaster telephone game.

The “A Bit More” button enters here, at the friction point between good and great toast. When the toast reveals itself to me above the Breville’s chassis, I visually gauge its brownness. If insufficient, I press the button, which actuates the basket motor. Down it goes for a brief return visit to the coil. Then back up again, having been toasted, well, just a bit more.

The button also makes toasting bread, normally a quantitative act, more qualitative. The lever dials in numerical levels of browning, and the “A Bit More” button cuts it with you-know-what-I-mean ambiguity. That dance between numbers and feelings apologizes even for a slightly over-browned slice of toast by endearing the eater to the result the button helped produce.
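
For the programmatically inclined, the button’s behavior is easy to caricature. What follows is a hedged sketch, not Breville’s firmware: the class, the dial-to-seconds mapping, and the 15 percent figure are all invented for illustration.

```python
# Hypothetical sketch of the "A Bit More" logic; every name, timing,
# and mapping here is an assumption, not Breville's implementation.

class Toaster:
    BIT_MORE_FRACTION = 0.15  # assumed: a brief slice of the last cycle

    def __init__(self) -> None:
        self.last_cycle_seconds = 0.0

    def toast(self, level: int) -> None:
        """Quantitative path: heat for a time set by the numbered dial."""
        seconds = 30 + 15 * level  # assumed dial-to-seconds mapping
        self._run_cycle(seconds)
        self.last_cycle_seconds = seconds

    def a_bit_more(self) -> None:
        """Qualitative path: re-run a short fraction of the last cycle."""
        if self.last_cycle_seconds:
            self._run_cycle(self.BIT_MORE_FRACTION * self.last_cycle_seconds)

    def _run_cycle(self, seconds: float) -> None:
        # Stand-in for the motorized basket and the heating coil.
        print(f"basket down -> heat {seconds:.0f}s -> basket up")


toaster = Toaster()
toaster.toast(level=3)  # comes up a shade too light...
toaster.a_bit_more()    # ...so back down, briefly, for just a bit more
```

The asymmetry is the point of the sketch: toast() is absolute, keyed to a numbered dial, while a_bit_more() is defined only relative to whatever just happened.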

Sure, I’m talking about toast. But Breville’s “A Bit More” Button is nothing short of brilliant. It highlights an obvious but still unseen problem with electric toasters, devices that have been around for more than a century. And then it solves that problem in an elegant way that is also delightful to use. It’s just the kind of solution that designers desperately hope to replicate, and users hope to discover in ordinary products. But agreeing on a method for accomplishing such achievements is harder.

The “A Bit More” Button was conceived by the industrial designer Keith Hensel, who worked for Sunbeam and then as Breville’s principal designer until his unexpected death in 2013, at the age of 47. His specialty was household products, like toasters, kettles, and blenders.

Breville’s head designer, Richard Hoare, tells me that Hensel, with whom he worked closely, fell upon the idea by “focusing on user empathy.” Hensel had been pondering the problem people have with toasters. “Your bread comes up too light, so you put it back down, then get distracted and forget, and it goes through a full cycle and burns,” Hoare relates. “Keith thought, why can’t the consumer have more control? Why can’t they have ‘A Bit More?’”

According to Hoare, the design team called the button by that name from the start. Some people within Breville thought it was too colloquial, and other options were considered. “Extra Darkness” was one, and “10% Extra” another. “These were confusing and clunky,” says Hoare. “In the end ‘A Bit More’ was the clearest.” Breville, which holds several patents in motorized toaster basket tech, started selling toasters with the feature in 2008. (...)

Hoare’s recollection corresponds with a trend in contemporary design practice—and one that claims to be particularly adept at producing outcomes like “A Bit More.” It’s called user-experience, or UX, design, a discipline that strives to craft pleasurable and useful encounters between people and things. Originally derived from human-computer interaction, or HCI, where user-interface design was its ancestor, UX purports to offer a general approach to design of all kinds, from software design to product design to architecture and urban planning.

But UX practice talks out of both sides of its mouth. On the one hand, UX fancies itself an empirical discipline. Its processes include ethnographic user research, specification drafting, iterative design, user testing, and so forth. UX inherits mid-century form-follows-function design ideals. It also embraces more recent trends, like participatory design, which deeply integrates stakeholders into the design process. Data are often incorporated into UX for affirming, denying, or redirecting a design team’s attention.

On the other hand, UX design also privileges out-of-the-box genius to solve design problems. Apple, often considered to typify UX, is famous for conducting design in secret via a small cadre of geniuses. Steve Jobs is the ultimate example, a figure who held that “people don’t know what they want until you show it to them.” In the design-genius mentality, how a toaster (or smartphone, or building) ought to work becomes a type of soothsaying, whereby the designer earns the status of mastermind. Research becomes retrospective justification, the designer’s ingenuity validated by user adoption of the product—irrespective of how well it really serves their goals or interests.

Neither polarity of UX-style design really helps explain how one might best arrive at Breville’s “A Bit More” button. On one side is intuition. Keith Hensel, the genius who died too soon, possessed a sixth sense for taming the Maillard reaction and a congenial manner for proselytizing his solution. On the other side is evidence, via the research and participant observation conducted to cash out the “user empathy” Hoare cites as a compass bearing.

UX proponents tell tall tales about how good design really takes place. Bottom-up, evidentiary design implies that the designer is ultimately unnecessary, a mere facilitator who draws out a solution from the collective. The designer becomes a bureaucrat. And top-down, genius design becomes indistinguishable from salesmanship. As a result, design dissolves into other, more established disciplines like business intelligence, product marketing, and corporate evangelism. It’s an error that makes good design look far easier and more replicable than it really is. And worse, it allows people to conclude that their own expertise—from data analytics to advertising to illustration—is a sufficient stand-in for design. (...)

Allow me to indulge an analogy from philosophy. In both the genius and consensus registers, UX design predicates its success on knowledge: either the second sight of the designer, or the negotiated consensus of the user. Philosophers call the study of knowledge epistemology, and this approach to design is entirely epistemological. Just find the proper knowledge and the right design will emerge.

But when conducted best—including in Breville’s case, and despite Hoare’s insistence otherwise—design is more related to the philosophy of what things are, called ontology. It is a discipline of essence, that great bugbear of contemporary life, not of knowledge. Pursuing greater compatibility with a thing’s essence requires that the designer focus on the abstraction formed by the designed object and its human users together—whether it be toasting, dwelling, publishing, socializing, or anything else.

The designer’s job is not to please or comfort the user, but to make an object even more what it already is. Design is the stewardship of essence—not the pursuit of utility, or delight, or form. This is the orientation that produces solutions like the Breville “A Bit More” button. The design opportunities that would otherwise go unnoticed emerge not from what people know about or desire for toasting, but from deeply pursuing the nature of toasting itself.

by Ian Bogost, The Atlantic |  Read more:
Image: Breville

Mika Mäkinen, Dinosaur jr. - Gig poster project
via:

Pornhub Is the Kinsey Report of Our Time

The streaming sex empire may have done more to expand the sexual dreamscape than Helen Gurley Brown, Masters and Johnson, or Sigmund Freud.

Waking up on a Sunday morning, I received a text about what happened after I left the previous night’s party. “Everyone got high and we played truth or dare. Ted and Ivan docked.”

“Are you serious?” I replied. “I thought that only happened in porn.” Defined by Urban Dictionary as “the act of placing the head of one’s penis inside the foreskin of another’s penis,” docking is an act that, until that fateful night, nobody at the party had attempted or witnessed firsthand. (Or so they claimed.) But once you know a thing is a thing, sometimes you can’t get it out of your mind. And in a fit of libidinous boredom, or idle curiosity, or lust, or who even knows why anyone does anything anyway — you do that thing. Because that thing exists, and so do you. At some point, someone had to.

On the internet, there is a maxim known as Rule 34, which states: If you can imagine it, there is porn of it. No exceptions. And now that we are solidly into the age of internet pornography, I believe we are ready for another maxim: If there is porn of it, people will try it. (Maybe we can call it Rule 35.) And if people are trying that thing, then inevitably some of them will make videos of that thing and upload those to the internet. The result: an infinitely iterating feedback loop of sexual trial and error. Once upon a time, someone would try something new on film and it would take years to circulate on VHS or DVD through a relatively small community of porn watchers. But today, even the mainstream is porn-literate, porn-saturated, and porn-conversant. For a sexual butterfly effect to take place, you don’t even need to try that thing with your body — you can watch it, text about it, post jokes about it on Tumblr, chat about it on Grindr, masturbate while thinking about it, and type its name into so many search engines as to alter the sexual universe. There is such a thing, now, as a sexual meme — erotic acts and fantasies that replicate and spread like wildfire.

For we are living in a golden age of sexual creativity — an erotic renaissance that is, I believe, unprecedented in human history. Today you can, in a matter of minutes, see more boners than the most orgiastic member of Caligula’s court would see in a lifetime. This is, in itself, enough to revolutionize sexual culture at every level. But seeing isn’t even the whole story — because each of us also has the ability to replicate, share, and reinvent everything we see. Taken as a whole, this vast trove of smut is the Kinsey Report of our time, shedding light on the multiplicity of erotic desires and sexual behaviors in our midst. (...)

As long as there has been porn, there have been people worrying that porn is damaging sex. I’m not here to join that debate. The deeper we go down the internet-porn wormhole, the more it seems narrow-minded to understand porn exclusively in terms of what kind of sex it “teaches” us to have. Because in the streaming era, the amount and diversity of porn we watch exponentially outpaces that of the sex we have. Porn is bigger than its real-sex analog, and the difference isn’t just volume: The porn we see is weirder, wilder, and more particular than what most of us will ever have — or want — in our own lives. An expansive erotic landscape unto itself, pornography exists adjacent to and in constant conversation with real sex — but is much more capricious and capacious and creative. Pornography is more than a mere causal agent in the way we screw. It has also become a laboratory of the sexual imagination — and as such, it offers insight into a collective sexual consciousness that is in a state of high-speed evolution.

The speed of that evolution may be best observed in the deluge of sexual memes that depart from traditional real-world sexual behavior. In addition to acts like pussy-slapping and ball-squeezing — which could theoretically be included in some crazily updated version of The Joy of Sex — the new generation of sexual memes includes a new set of narrative memes. Pornographic scene-setting, erotic situations, and role-playing are being reinvented, and imaginations have expanded to accommodate a never-ending supply of novel stimuli. Some of these memes seem to live almost entirely within the realm of porn. (Does anybody enjoy being searched by the TSA?) Some may have real-world origins, but have undergone so much reimagining as to approach derivative art. (When homemade-porn versions of the video game Overwatch spiked last year, had there been a preceding spike in dirty talk in the headsets of Overwatch players?) And others are only acceptable when they don’t have real-world analogs. “Is it me or is there way too much stepdaughter porn lately?” a straight man recently asked. He was right, and it doesn’t stop there: In the U.S. in 2015 and 2016, the most popular search term on Pornhub was “stepmom.” Though he said he was “immensely insulted” by the genre, that didn’t prevent him from watching. “If I ignore the title and the girl looks hot, I open it.” And no, “stepsister” porn has not made him feel any different about his sisters, and I can go to hell for asking. (...)

How users navigate that material in private — what they choose to watch, in what sequence and for how long — is a sexual-sociological gold mine. MindGeek’s understanding of its users’ autoerotic habits is almost terrifyingly precise. Like Facebook, Google, Netflix, and every other major player online, Pornhub collects and analyzes a staggering amount of user data — some of which it uses, like those other companies, to help curate content and determine what a user sees. Pornhub also publicizes some of its anonymized findings on the company’s data-analytics blog, Pornhub Insights. (Which means the X-rated version of Netflix is actually more forthcoming with its data than the real Netflix. Knowledge of the human condition, in the age of big data, is idiosyncratic and subject to corporate marketing strategies.) To celebrate the website’s tenth anniversary, Pornhub Insights analyzed a decade’s worth of data — and provided access to that data, granting us an unusual peek into the internet’s collective id. And it’s an id that is constantly shape-shifting — sometimes very rapidly. New sexual memes are invented daily, and when they explode in popularity, they can spawn thousands of spinoffs and imitators. And sometimes they fade away just as quickly — another porn fad that came, conquered, and vanished. Overnight.

by Maureen O’Connor, The Cut | Read more:
Image: Ben Wiseman
[ed. See also: What We Learned About Sexual Desire From 10 Years of Pornhub User Data]

There Is More to Becoming an Elite Route Runner Than Meets the Eye

Save for the lucky few anointed as quarterbacks, every kid who picks up a football starts as a wide receiver. At their core, backyard games are a series of one-on-one clashes between pass catchers and defensive backs, and the first challenge any aspiring gridiron star faces is learning how to get open. No skill on a football field is more relatable. No goal is more familiar.

That shared experience is part of what makes route running at the highest level so misunderstood. On one level, the idea of beating the person across from you is among the simplest in football. But against NFL cornerbacks, creating space requires as much nuance and attention to detail as any undertaking in the sport. “It’s all about efficiency,” Packers wide receiver turned running back Ty Montgomery tells The Ringer. “I think you learn that through repetition. How many steps [are you] taking at the top? How [are you] getting off the line? How are you creating separation? What ways are you able to make the same route look different every time you run it?”

Route running is a skill that’s both oft-discussed and underappreciated, and it’s become increasingly coveted in an era when many prospects come from spread backgrounds and have less formal training in that respect than ever before. The question, then, is what distinguishes a novice route runner from an expert—and how improvement happens. I talked to some of the league’s best receiving coaches and route runners to find out what goes into a part of the game that’s far more complex than it sounds.

When practices begin each season, Bengals wide receivers coach James Urban starts at square one with his players. Whether he’s working with six-time Pro Bowler A.J. Green or rookie first-round draft pick John Ross, Urban teaches every one of his receivers how to line up in a proper stance, which involves positioning the outside foot forward in order to create an initial burst with the back leg. “We use those foundations so when something kicks up or something isn’t quite as clean as we want it to be or doesn’t look right or the timing’s not right, I can say, ‘Hey, fix your stance,’” Urban says. “And then they know what that means.”

Part of the goal is to create consistency among the receiving corps. Part of it is correcting the mistakes of players who have used the wrong get-off for years. Cardinals receivers coach Darryl Drake claims that making quick adjustments is especially crucial when it comes to young players. “It has to become a habit more than anything else,” Drake says. “And it takes a while when you’ve been doing it [wrong] for four or five years.”

From there, the next step is reinforcing the fundamentals: pushing off—and not dropping back—the outside foot at the snap, learning which foot to plant with on inside and outside cuts, and keeping one’s shoulders over the knees in order to stay balanced and give off the illusion of running a vertical route for as long as possible. These are the types of things that go unnoticed to the casual fan watching on TV, but serve as the building blocks for every receiver. And even for stalwarts like Packers star Jordy Nelson, there is room for small tweaks that can make a huge difference on the field.

When Green Bay wide receivers coach Luke Getsy arrived on the staff as a quality-control assistant in 2014, he introduced a new method for getting in and out of the break at the top of routes. By first planting on the inside foot—as opposed to the outside foot—when getting to the break of a route, the Packers receivers eliminated one small step and created a subtle but vital advantage. “By allowing us to get to that drop in [three steps] and letting our plant foot hit before or at the same time as the DB, we’re going to be successful no matter how good the DB is,” Nelson says. With 98 catches for 1,519 yards and 13 touchdowns, the 2014 season also happened to be the most productive of Nelson’s career.

For younger players, picking up on these types of tricks during film sessions and drills can mean transforming from an average route runner into a devastating one. During the early years of his career, Ravens running back Danny Woodhead had the privilege of playing alongside some of the best route runners at their respective positions that the game has ever seen: LaDainian Tomlinson, Antonio Gates, and Wes Welker. Each taught Woodhead something he’s carried with him for the rest of his career. “I’ve been fortunate because I’ve been able to play with some Hall of Famers,” Woodhead says. “It’s huge when you can watch someone who’s done it before, and not only done it before, but done it before at the highest, highest level.”

The mantra that Woodhead took from Welker was to try to make every route look identical until the last possible moment. These days, Woodhead will ask Baltimore’s linebackers if any slight lean or misstep gives away his routes during practice. For running backs, the goal when route running is to mimic the same release out of the backfield on every play. For receivers, the key is pushing vertically to make defenders think that they are streaking down the field each time they come off the ball. “That’s what scares a DB the most—[a wideout] going by him,” Rams wide receivers coach Eric Yarber says. “Something that’s going to strike up the band and get the fans going. That makes a DB tremble and poo-poo in his pants.”

Yarber says that the main weakness most young players have is a lack of patience. They lift their chests too early, tipping their hand and letting opposing cornerbacks know it’s time to slow down. Other young wideouts have a tendency to flail their arms to the side as they come to a halt—“the air brakes,” as Urban calls them.

Good route runners keep their bodies compact as they move up the field; the greats eliminate any possible indicator as to which direction they’re going. This obsession with deception has led some receivers to have coverage preferences that may seem counterintuitive at first blush. Cowboys slot receiver Cole Beasley says that while no receiver likes to be manhandled, he’ll take matching up with a tight press-coverage corner over trying to beat a defender who cedes a few yards of ground any day.

“I feel like from further off [from a defender], you have to be more precise with your movements,” Beasley says. “You could give something away easier because they’re looking at you from a further distance. They can see your whole body. But when you’re right there, there’s not much for their eyes to focus on.”

Learning how to master the mechanics is only part of the equation, though. To rise into the upper echelon, receivers must not only have a keen awareness of their technique; they must also develop a sense for what the defense is trying to accomplish.

by Robert Mays, The Ringer |  Read more:
Image: Getty Images/Ringer Illustration

Friday, August 18, 2017

Hooper's Law of Drug Development

We've come to expect technology to improve each year. Moore's Law is justifiably famous, with its remarkable ability to explain the past and predict the future. It states that the number of transistors squeezed onto integrated circuits doubles every two years; this pattern has held true for half a century. More transistors on chips allow computers to perform faster mathematical calculations.

Moore's Law is optimistic and reflects the ability of humans to "chip" away at a problem, making sequential, cumulative advances. Much of technology fits this pattern. One glaring exception, tragically, is the drug development conducted by pharmaceutical companies. It is hugely expensive and has gotten more so each year. If costs continue to grow at 7.5 percent per year, real costs will more than double every 10 years. The pharmaceutical industry seems to be operating under a reverse-Moore's Law. I call it Hooper's Law. Here's the short version: Drug development costs double every decade. Why? Simple: the U.S. Food and Drug Administration is steadily increasing the cost per clinical trial participant and the number of required participants per clinical trial.

Technology and Moore's Law

The Cray 1 supercomputer that I used at NASA in the early 1980s cost an inflation-adjusted $28 million. Today's iPhone 7, at a cost of $650, has the computing power of 2,000 Cray 1 supercomputers. Per dollar, the iPhone 7 performs 90 million times as many calculations as the Cray 1. And for that price, you get a phone too.
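
Those two comparisons are mutually consistent, as a quick check using only the numbers in the paragraph above shows:

```python
cray_cost = 28_000_000  # inflation-adjusted Cray 1 price, per the text
iphone_cost = 650       # iPhone 7 price, per the text
compute_ratio = 2_000   # one iPhone 7 equals 2,000 Cray 1s, per the text

# Calculations per dollar: the raw compute ratio scaled by the cost ratio
per_dollar = compute_ratio * (cray_cost / iphone_cost)
print(f"{per_dollar:,.0f}x")  # ~86 million, i.e. the "90 million times" figure
```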

Why shouldn't drug research and development fit this pattern? Every year scientists learn more about biology, physiology, pharmacology, and the natural history of diseases. They study what has worked and what hasn't. Their tools become more precise and more powerful. And yet the field of drug research and development seems immune to the powers that drive Moore's Law.

Drug Development is Expensive

Each year, to launch a certain number of new medicines, companies plow more and more money into research and development. Joseph DiMasi, Henry Grabowski, and Ronald Hansen, in a study performed for the Tufts Center for the Study of Drug Development, have estimated that the cost of bringing a new drug to market, in 2013 dollars, is $2.558 billion ($2.69 billion in 2017 dollars). Further, as a condition for approval, the FDA often requires drug companies to conduct post-marketing clinical trials to answer some remaining questions. Those post-marketing studies add $312 million, on average, to a drug's cost, raising the overall price tag to $2.87 billion in 2013 dollars ($3.02 billion in 2017 dollars).

Why is this number so large? One reason is that much of R&D is spent on the roughly 95 percent of drugs that fail along the way. The 95 percent failure rate is an average; some drugs have a 50 percent chance of success and others have a 1 percent chance. It depends on the drug, the therapeutic area, and the stage of the drug's development. A 2014 study by researchers at Cleveland Clinic found that 99.6 percent of more than 400 Alzheimer's clinical trials had failed. The $2.558 billion tab accounts for those "dry holes." (...)

Reasons for Expensive Clinical Trials

Why have drugs become more expensive to develop? Some examples illustrate the trend.

When I worked at Merck in the early 1990s, one of its biggest drugs was Vasotec (enalapril). It was tested in 2,987 patients before FDA approval. Mevacor (lovastatin), another of Merck's big drugs at the time, was tested in 6,582 patients in the EXCEL Study. At the time, that was thought to be a massive trial.

Now the situation is different.

Orexigen Therapeutics was conducting clinical trials on the obesity compound Contrave (naltrexone/bupropion). In 2011, the FDA asked the company to conduct a trial on between 60,000 and 100,000 patients. This clinical trial would have been enormously expensive, especially considering the resources available to a small company like Orexigen. In response to this request, Orexigen discontinued the development of Contrave and all of its other obesity drugs. During this period, the firm's stock price dropped 70 percent, and Orexigen laid off 40 percent of its staff.

Later, after negotiations with the FDA, Orexigen eventually ran a clinical trial on fewer than 10,000 patients. While the reduced requirement enabled the trial to proceed, it was still huge and hugely expensive.

The REVEAL trial, in which Merck is currently testing the experimental drug anacetrapib, includes a whopping 30,000 subjects and is being conducted at 430 hospitals and clinics in the United Kingdom, North America, China, Germany, Italy, and Scandinavia.

Between 1999 and 2005, the average length of a clinical trial grew from 460 days to 780 days, while the number of procedures on each patient (e.g., blood draws, scans) grew similarly, from 96 to 158. Comparing the 2001-2005 period to the 2011-2015 period, one study found that the number of study participant visits to care providers (e.g., hospitals, clinics, doctors' offices) increased 23-29 percent; the number of distinct procedures increased 44-59 percent; the total number of procedures performed increased 53-70 percent; and the cost per study volunteer per visit increased 34-61 percent.

The protocols for clinical trials—those written recipes for how patients are to be recruited, dosed, and evaluated—have become more complex, as well. Dr. Gerry Messerschmidt, chief medical officer at Precision Oncology, reports, "When I was writing protocols 20 years ago, they were one-third the size that they are now. The change has really been quite dramatic."

Clinical trials are more expensive now because the cost per participant has increased at the same time that the number of participants has grown. Why? Again, the answer is the FDA. (...)

Pharmaceutical companies typically estimate the future expenses and revenues for each prospective drug, looking forward 20 years. In some cases I know of intimately, they hire consultants to estimate expenses, revenues, and probabilities of success at each phase of development. They use these data to compute the financial value of each pharmaceutical project and, if the expected value (probability-adjusted value) of the project is negative, the consultants recommend discontinuing development.
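
To make that arithmetic concrete, here is a minimal sketch of such a screen. Every phase name, cost, probability, and revenue figure below is hypothetical, invented for illustration rather than drawn from any real project or consultant's model.

```python
# Hypothetical expected-value screen for a drug project (all numbers invented).
phases = [
    # (name, cost in $M if the phase is reached, probability of passing it)
    ("Phase I",    25, 0.60),
    ("Phase II",   60, 0.35),
    ("Phase III", 250, 0.60),
    ("FDA review",  5, 0.90),
]
revenue_npv = 900  # $M, present value of future revenues if approved

expected_value = 0.0
p_reach = 1.0  # probability the project survives to reach the current phase
for name, cost, p_pass in phases:
    expected_value -= p_reach * cost  # the cost is incurred only if reached
    p_reach *= p_pass                 # odds of advancing to the next phase

expected_value += p_reach * revenue_npv
print(f"P(approval) = {p_reach:.1%}, expected value = {expected_value:+.1f} $M")
# A negative expected value is the signal to recommend discontinuation.
```

In this made-up example the project keeps an 11 percent chance of approval yet still nets out to about minus $12 million, so the screen says to kill it, which is exactly the kind of recommendation described in the paragraph above.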

Many new medicines are discarded for reasons that have nothing to do with safety and efficacy. Where the prospects looked poor, consultants have suggested killing drugs for brain cancer, ovarian cancer, melanoma, hemophilia, and other important conditions. Even though millions of dollars may have already been spent, these consultants would never recommend that a company knowingly proceed on a path toward losing more money unless some other crucial non-financial objective was being achieved.

by Charles L. Hooper, Econlib |  Read more:
Image: uncredited

I Spent a Month Learning Guitar on the Internet and It Actually Worked

As the legend goes, the Sex Pistols' bass player, Sid Vicious, didn't know how to play his instrument. High on amphetamines, he stayed up one night picking along to a Ramones album on a beat-up Fender. The next day, he was no Jaco Pastorius—but in the same way I hold a guitar after spending a month using the online guitar-learning platform Fender Play, he had a much better grip on things. (Well, musically speaking...)

Since the phenomenon known as the online instructional exploded in the form of the massive online open course (MOOC), YouTube tutorials, and master classes, I've been wary of the "watch (a screen) and learn" approach. I'm a social learner, for one, so if a lesson doesn't have a clear narrative flow between what I'm hearing and what I'm seeing, I'll make different connections than the person explaining something wants me to. Production-wise, they leave a lot to be desired, causing my mind to wander, and unfortunately Aldous Huxley's fantasy of "sleep learning" only sort-of works. My biggest gripe, though, is that I learn by asking questions, and I'm hesitant to try any mass-education program that doesn't allow them, even—and often, especially—if that's due to a platform's own limitations.

But guitar-playing isn't quite the same as eschewing history to devote more time to computer science, and a revolving door of guitar teachers, from junior high to the present, left me with enough glaring gaps in my own understanding of my instrument that I feared I'd be stuck mangling cover songs until I could nail down a reliable network of fellow noobs.

So, armed with an American Professional Stratocaster (full disclosure: Fender sent me one to keep as a tool with which to do my review), I spent four weeks on my computer and smartphone, working my way through 67 individual lessons spread out over five difficulty levels, covering everything from basic picking through 12-bar blues and how-to guides to rock classics like Heart's "Barracuda." Unless you're training to out-fiddle the devil—which I wasn't—a typical guitar lesson lasts about an hour, once a week. This was 13.5 hours at home over the course of four weeks. And you know what? It worked.

Before you ask me to play "Eruption," however, allow me to qualify: I'm no Anna Calvi—hell, I'd be lucky to call myself a Johnny Depp—but what I am is, finally, equipped with the basic building blocks upon which I can grow and develop my own skills and style as a player. Here's why:

by Emerson Rosenthal, Creator |  Read more:
Image: Fender
[ed. I'm posting this just so people understand that it's not that difficult to learn guitar. But...! Save yourself some bucks ($20/mo?!). Just go to these sites: Justin Guitar (or his Justin Guitar YouTube site), and Guitar Jamz (or its subset: Marty Music). And, if you're really adventurous: Tondr.]

James McMurtry

The $70m High School Stadium

It cost over $70m and has 12,000 seats, multi-tiered stands, a $1.8m video screen and an exterior that lights up in the colours of the home team. None of which seems extraordinary in the gaudy world of Texas high school football.

What might be most striking about the state’s latest student sports palace is not the arena itself, but the wide-angle view encompassing what is next to it: another high school football stadium, neatly landscaped with a giant screen of its own and a capacity of almost 10,000.

The Texas high school building frenzy is often dubbed an arms race – in which case, Katy Independent school district (ISD), near Houston, is tooled up like few others.

Legacy Stadium, the nation's latest and most expensive high school venue, had its opening ceremony on Thursday night. The red-clad Tigers of Katy high school and other local teams will play there in the coming season. Rhodes Stadium, which opened in 1981, will still be used by the district's sides – sometimes on the same day as Legacy, with kick-offs an hour apart.

These are heady times for Robert McSpadden, aka “Texas Bob”, a stadium aficionado who lives in Katy. That the fast-growing region needed another field is not in question, though whether it had to be so lavish is another matter.

“We had seven teams playing in one stadium, so we had Friday night lights and Thursday night lights, Saturday night lights and Saturday afternoon,” McSpadden said.

“More controversial than the cost, in my opinion, is building it right next to the other stadium. I think it’s ingenious. [At first] I did not like it at all – I thought, ‘That’s the dumbest thing I’ve ever heard of,’ but there’s so many shared resources.”

The city of Allen, near Dallas, ushered in this decade’s mega-stadium era with a $60m, 18,000-capacity venue that made national headlines when it opened in 2012, and then when it was embarrassingly forced to close temporarily in 2014 because of cracks in the concrete.

Another Dallas suburb, McKinney, is expected to complete a $70m, 12,000-capacity arena around the turn of the year. A third, Prosper, plans to open a $48m stadium complex in 2019. Alvin, 25 miles south of downtown Houston, will welcome a 10,000-capacity, $41m stadium next year. The city’s population is about 26,000.

It all makes the brand new place in the Austin suburb of Pflugerville – named The Pfield – seem a relative bargain at $25.8m for 10,000 seats.

There are 1,202 high school football stadiums used for regular-season varsity games in Texas, with a combined seating capacity of over 4.2m; 17% of them have video scoreboards, according to TexasBob.com, which says that a growing trend is to replace grass with artificial turf.

This works out to roughly one seat per seven Texans, or more seats than there are residents in 24 states and the District of Columbia.

Expenditure is not limited to stadiums. The Dallas Morning News last year found that Texas communities spent about $500m on 144 indoor practice facilities with artificial turf football fields over the past two decades – including two dozen that cost more than $5m in the Dallas-Fort Worth area alone.

by Tom Dart, The Guardian |  Read more:
Image: Tom Dart