Tuesday, October 31, 2017

Why You Hate Contemporary Architecture

The British author Douglas Adams had this to say about airports: “Airports are ugly. Some are very ugly. Some attain a degree of ugliness that can only be the result of special effort.” Sadly, this truth is not applicable merely to airports: it can also be said of most contemporary architecture.

Take the Tour Montparnasse, a black, slickly glass-panelled skyscraper, looming over the beautiful Paris cityscape like a giant domino waiting to fall. Parisians hated it so much that the city was subsequently forced to enact an ordinance forbidding any further skyscrapers higher than 36 meters.

Or take Boston’s City Hall Plaza. Downtown Boston is generally an attractive place, with old buildings and a waterfront and a beautiful public garden. But Boston’s City Hall is a hideous concrete edifice of mind-bogglingly inscrutable shape, like an ominous component found left over after you’ve painstakingly assembled a complicated household appliance. In the 1960s, before the first batch of concrete had even dried in the mold, people were already begging preemptively for the damn thing to be torn down. There’s a whole additional complex of equally unpleasant federal buildings attached to the same plaza, designed by Walter Gropius, an architect whose chuckle-inducing surname belies the utter cheerlessness of his designs. The John F. Kennedy Building, for example—featurelessly grim on the outside, infuriatingly unnavigable on the inside—is where, among other things, terrified immigrants attend their deportation hearings, and where traumatized veterans come to apply for benefits. Such an inhospitable building sends a very clear message, which is: the government wants its lowly supplicants to feel confused, alienated, and afraid.

The fact is, contemporary architecture gives most regular humans the heebie-jeebies. Try telling that to architects and their acolytes, though, and you’ll get an earful about why your feeling is misguided, the product of some embarrassing misconception about architectural principles. One defense, typically, is that these eyesores are, in reality, incredible feats of engineering. After all, “blobitecture”—which, we regret to say, is a real school of contemporary architecture—is created using complicated computer-driven algorithms! You may think the ensuing blob-structure looks like a tentacled turd, or a crumpled kleenex, but that’s because you don’t have an architect’s trained eye.

Another thing you will often hear from design-school types is that contemporary architecture is honest. It doesn’t rely on the forms and usages of the past, and it is not interested in coddling you and your dumb feelings. Wake up, sheeple! Your boss hates you, and your bloodsucking landlord too, and your government fully intends to grind you between its gears. That’s the world we live in! Get used to it! Fans of Brutalism—the blocky-industrial-concrete school of architecture—are quick to emphasize that these buildings tell it like it is, as if this somehow excused the fact that they look, at best, dreary, and, at worst, like the headquarters of some kind of post-apocalyptic totalitarian dictatorship.

Let’s be really honest with ourselves: a brief glance at any structure designed in the last 50 years should be enough to persuade anyone that something has gone deeply, terribly wrong with us. Some unseen person or force seems committed to replacing literally every attractive and appealing thing with an ugly and unpleasant thing. The architecture produced by contemporary global capitalism is possibly the most obvious visible evidence that it has some kind of perverse effect on the human soul. Of course, there is no accounting for taste, and there may be some among us who are naturally deeply disposed to appreciate blobs and blocks. But polling suggests that devotees of contemporary architecture are overwhelmingly in the minority: aside from monuments, few of the public’s favorite structures are from the postwar period. (When the results of the poll were released, architects harrumphed that it didn’t “reflect expert judgment” but merely people’s “emotions,” a distinction that rather proves the entire point.) And when it comes to architecture, as distinct from most other forms of art, it isn’t enough to simply shrug and say that personal preferences differ: where public buildings are concerned, or public spaces which have an existing character and historic resonances for the people who live there, to impose an architect’s eccentric will on the masses, and force them to spend their days in spaces they find ugly and unsettling, is actually oppressive and cruel.

The politics of this issue, moreover, are all upside-down. For example, how do we explain why, in the aftermath of the Grenfell Tower tragedy in London, more conservative commentators were calling for more comfortable and home-like public housing, while left-wing writers staunchly defended the populist spirit of the high-rise apartment building, despite ample evidence that the majority of people would prefer not to be forced to live in or among such places? Conservatives who critique public housing may have easily-proven ulterior motives, but why so many on the left are wedded to defending unpopular schools of architectural and urban design is less immediately obvious.

There have, after all, been moments in the history of socialism—like the Arts & Crafts movement in late 19th-century England—where the creation of beautiful things was seen as part and parcel of building a fairer, kinder world. A shared egalitarian social undertaking, ideally, ought to be one of joy as well as struggle: in these desperate times, there are certainly more overwhelming imperatives than making the world beautiful to look at, but to decline to make the world more beautiful when it’s in your power to do so, or to destroy some beautiful thing without need, is a grotesque perversion of the cooperative ideal. This is especially true when it comes to architecture. The environments we surround ourselves with have the power to shape our thoughts and emotions. People trammeled in on all sides by ugliness are often unhappy without even knowing why. If you live in a place where you are cut off from light, and nature, and color, and regular communion with other humans, it is easy to become desperate, lonely, and depressed. The question is: how did contemporary architecture wind up like this? And how can it be fixed?

For about 2,000 years, everything human beings built was beautiful, or at least unobjectionable. The 20th century put a stop to this, evidenced by the fact that people often go out of their way to vacation in “historic” (read: beautiful) towns that contain as little postwar architecture as possible. But why? What actually changed? Why does there seem to be such an obvious break between the thousands of years before World War II and the postwar period? And why does this seem to hold true everywhere? (...)

Architecture’s abandonment of the principle of “aesthetic coherence” is doing serious damage to ancient cityscapes. The belief that “buildings should look like their times” rather than “buildings should look like the buildings in the place where they are being built” leads toward a hodge-podge, with all the benefits that come from a distinct and orderly local style being destroyed by a few buildings that undermine the coherence of the whole. This is partly a function of the free market approach to design and development, which sacrifices the possibility of ever again producing a place on the village or city level that has an impressive stylistic coherence. A revulsion (from progressives and capitalist individualists alike) at the idea of “forced uniformity” leads to an abandonment of any community aesthetic traditions, with every building fitting equally well in Panama City, Dubai, New York City, or Shanghai. Because decisions over what to build are left to the individual property owner, and rich people often have horrible taste and simply prefer things that are huge and imposing, all possibilities for creating another city with the distinctiveness of a Venice or Bruges are erased forever. (...)

How, then, do we fix architecture? What makes for a better-looking world? If everything is ugly, how do we fix it? Cutting through all of the colossally mistaken theoretical justifications for contemporary design is a major project. But a few principles may prove helpful.

by Brianna Rennix & Nathan J. Robinson, Current Affairs | Read more:
Image: uncredited
[ed. If I could, my house and everything in it would be designed Art Deco.]

The Melancholy of Subculture Society

If you crack open some of the mustier books about the Internet - you know the ones I’m talking about, the ones which invoke Roland Barthes and discuss the sexual transgressing of MUDs - one of the few still relevant criticisms is the concern that the Internet, by uniting small groups, will divide larger ones.

SURFING ALONE

You may remember this as the Bowling Alone thesis applied to the Internet; it got some traction in the late 1990s. The basic idea is: electronic entertainment devices grow in sophistication and inexpensiveness as the years pass, until by the 1980s and 1990s, they have spread across the globe and have devoured multiple generations of children; these devices are more pernicious than traditional geeky fare inasmuch as they are often best pursued solo. Spending months mastering Super Mario Bros. - all alone - is a bad way to grow up normal.

AND THEN THERE WERE NONE

The 4- or 5-person Dungeons & Dragons party (with a dungeon master) gives way to the classic arcade with its heated duels and one-upmanship; the arcade gives way to the flickering console in the bedroom, with one playing Final Fantasy VII - alone. The increased graphical realism, the more ergonomic controllers, the introduction of genuinely challenging AI techniques… Trend after trend was rendering a human opponent unnecessary. And gamer after gamer was now playing alone.

Perhaps, the critic says, the rise of the Internet has ameliorated that distressing trend - the trends favored no connectivity at first, but then there was finally enough surplus computing power and bandwidth for massive connectivity to become the order of the day.

It is much more satisfactory and social to play MMORPGs on your PC than single-player RPGs, much more satisfactory to kill human players in Halo matches than alien AIs. The machines finally connect humans to humans, not human to machine. We’re forced to learn some basic social skills, to maintain some connections. We’re no longer retreating into our little cocoons, interacting with no humans.

WELCOME TO THE N.H.K.!

But, the critic continues, things still are not well. We are still alienated from one another. The rise of the connected machines still facilitates withdrawal and isolation. It presents the specter of the hikikomori - the person who ceases to exist in the physical realm as much as possible. It is a Japanese term, of course. They are 5 years further in our future than we are (or perhaps one should say, were). Gibson writes, back in 2001:
The Japanese seem to the rest of us to live several measurable clicks down the time line. The Japanese are the ultimate Early Adopters, and the sort of fiction I write behooves me to pay serious heed to that. If you believe, as I do, that all cultural change is essentially technologically driven, you pay attention to the Japanese. They’ve been doing it for more than a century now, and they really do have a head start on the rest of us, if only in terms of what we used to call future shock (but which is now simply the one constant in all our lives).
Gibson also discusses the Mobile Girl and text messaging; that culture began really showing up in America around 2005 - Sidekicks, Twitter, etc. You can do anything with a cellphone: order food, do your job, read & write novels, maintain a lively social life, engage in social status envy (She has a smaller phone, and a larger collection of collectibles on her cellphone strap! OMG!)… Which is just another way of saying You can do anything without seeing people, just by writing digital messages. (And this in a country with one of the most undigitizable writing systems in existence! Languages are not created equal.)

The hikikomori withdraws from all personal contact. The hikikomori does not hang out at the local pub, swilling down the brewskis as everyone cheers on the home team. The hikikomori is not gossiping at the Rotary club nor with the Lions or mummers or Veterans or Knights. Hikikomoris do none of that. They aren’t working, they aren’t hanging out with friends.
The paradoxical solitude and omnipotence of the otaku, the new century’s ultimate enthusiast: the glory and terror inherent in the absolute narrowing of personal bandwidth. –William Gibson, Shiny Balls of Mud (TATE 2002)
So what are they doing with their 16 waking hours a day?

OPTING OUT

But it’s better for us not to know the kinds of sacrifices the professional-grade athlete has made to get so very good at one particular thing…the actual facts of the sacrifices repel us when we see them: basketball geniuses who cannot read, sprinters who dope themselves, defensive tackles who shoot up with bovine hormones until they collapse or explode. We prefer not to consider closely the shockingly vapid and primitive comments uttered by athletes in postcontest interviews or to consider what impoverishments in one’s mental life would allow people actually to think the way great athletes seem to think. Note the way up close and personal profiles of professional athletes strain so hard to find evidence of a rounded human life – outside interests and activities, values beyond the sport. We ignore what’s obvious, that most of this straining is farce. It’s farce because the realities of top-level athletics today require an early and total commitment to one area of excellence. An ascetic focus. A subsumption of almost all other features of human life to one chosen talent and pursuit. A consent to live in a world that, like a child’s world, is very small…[Tennis player Michael] Joyce is, in other words, a complete man, though in a grotesquely limited way…Already, for Joyce, at twenty-two, it’s too late for anything else; he’s invested too much, is in too deep. I think he’s both lucky and unlucky. He will say he is happy and mean it. Wish him well. –David Foster Wallace, The String Theory (July 1996 Esquire)
They’re not preoccupied with our culture - they’re participating in their own subculture. It’s the natural progression of the otaku. They are fighting on Azeroth, or fiercely pursuing their dojinshi career, or… There are many subcultures linked and united by the Internet, for good and ill. For every charitable or benevolent subculture (e.g. free software), there is one of mixed benefits (World of Warcraft), and one outright harmful (e.g. fans of eating disorders, child pornography).

The point the critic wants to make is that life is short and a zero-sum game. You lose a third of the day to sleep, another third to making a living, and now you’ve little left. To be really productive, you can’t divide your energies across multiple cultures - you can’t be truly successful in mainstream culture, and at the same time be able to devote enough effort to the field of, say, mechanical models, to be called an Otaking. A straddler takes onto his head the overhead of learning and participating in both, and receives no benefits (he will suffer socially in the esteem of the normals, and will be able to achieve little in his hobby due to lack of time and a desire to not go overboard).

The otaku & hikikomori recognizes this dilemma and he chooses - to reject normal life! He rejects life in the larger culture for his subculture. It’s a simple matter of comparative advantage; it’s easier to be a big fish in a small pond than in a large one.

THE BIGGER SCREEN

Have you ever woken up from a dream that was so much more pleasant than real life that you wish you could fall back to sleep and return to the dream?…For some, World of Warcraft is like a dream they don’t have to wake up from - a world better than the real world because their efforts are actually rewarded –[Half Sigma, Status, masturbation, wasted time, and WoW]
EVE Online is unique in gaming in that we have always played on the same massive server in the same online universe since May 2003 when it first went live. We not only understand the harsh penalties for failure, but also how longevity and persistence is rewarded with success. When you have over 60,000 people on weekends dealing, scheming, and shooting each other it attracts a certain type of gamer. It’s not a quick fix kind of game. We enjoy building things that last, be they virtual spaceships or real life friendships that together translate into massive Empires and enduring legacies. Those of us who play understand that one man really can truly make a difference in our world. –Mark Seleene Heard, Vile Rat eulogy 2012
As ever more opt out, the larger culture is damaged. The culture begins to fragment back into pieces. The disconnect can be profound; an American anime geek has more in common with a Japanese anime geek (who is of a different ethnicity, a different culture, a different religion, a different language…) than he does with an American involved in the evangelical Christian subculture. There is essentially no common ground - our 2 countrymen probably can’t even agree on objective matters like governance or evolution!

With enough of these gaps, where is American or French culture? Such cultural identities take centuries to coalesce - France did not speak French until the 1900s (as The Discovery of France recounts), and Han China is still digesting & assimilating its many minorities & outlying regions. America, of course, had it easy in starting with a small founder population which could just exterminate the natives.

The national identity fragments under the assault of burgeoning subcultures. At last, the critic beholds the natural endpoint of this process: the nation is some lines on a map, some laws you follow. No one particularly cares about it. The geek thinks, Meh: here, Canada, London, Japan, Singapore - as long as FedEx can reach me and there’s a good Internet connection, what’s the difference? (Nor are the technically-inclined alone in this.)

You can test this yourself. Tell yourself - The country I live in now is the best country in the world for people like me; I would be terribly unhappy if I was exiled. If your mental reply goes something like, Why, what’s so special about the USA? It’s not particularly economically or politically free, it’s not the only civilized English-speaking country, it’s not the wealthiest…, then you are headed down the path of opting out.

This is how the paradox works: the Internet breaks the larger culture by letting members flee to smaller subcultures. And the critics think this is bad. They like the broader culture, they agree with Émile Durkheim about atomization and point to examples like South Korea, and deep down, furries and latex fetishists really bother them. They just plain don’t like those deviants.

BUT I CAN GET A HIGHER SCORE!

In the future, everyone will be world-famous for 15 minutes.

Let’s look at another angle.

MONOCULTURE

Irony has only emergency use. Carried over time, it is the voice of the trapped who have come to enjoy their cage.
One can’t opt out of culture. There is no view from nowhere. To a great extent, we are our cultural artifacts - our possessions, our complexes of memes, our habits and objects of disgust are all cultural. You are always part of a culture.

Suppose there were only 1 worldwide culture, with no subcultures. The overriding obsession of this culture will be… let’s make it money. People are absolutely obsessed with money - how it is made, acquired, degraded, etc. More importantly, status is defined just by how much you have earned in your life; in practice, tie-breakers include how fast you made it, what circumstances you made it in (everyone admires a person who became a billionaire in a depression more than a good-times billionaire, in the same way we admire the novelist in the freezing garret more than the comfortable academic), and so on.

This isn’t too absurd a scenario: subjects feed on themselves and develop details and complexity as effort is invested in them. Money could well absorb the collective efforts of 7 billion people - already many people act just this way.

But what effect does this have on people? I can tell you: the average person is going to be miserable. If everyone genuinely buys into this culture, then they have to be. Their talents at piano playing, or cooking, or programming, or any form of artistry or scholarly pursuit are denigrated and count for naught. The world has become too big - it did not use to be so big, nor people so powerless over what is going on:
“Society is composed of persons who cannot design, build, repair, or even operate most of the devices upon which their lives depend…In the complexity of this world people are confronted with extraordinary events and functions that are literally unintelligible to them. They are unable to give an adequate explanation of man-made phenomena in their immediate experience. They are unable to form a coherent, rational picture of the whole.
Under the circumstances, all persons do, and indeed must, accept a great number of things on faith…Their way of understanding is basically religious, rather than scientific; only a small portion of one’s everyday experience in the technological society can be made scientific…The plight of members of the technological society can be compared to that of a newborn child. Much of the data that enters its sense does not form coherent wholes. There are many things the child cannot understand or, after it has learned to speak, cannot successfully explain to anyone…Citizens of the modern age in this respect are less fortunate than children. They never escape a fundamental bewilderment in the face of the complex world that their senses report. They are not able to organize all or even very much of this into sensible wholes….”
You can’t make a mark on it unless there are almost as many ways to make marks as there are persons.

To put it another way: women suffer enough from comparing themselves to media images. If you want a vision of this future, imagine everyone being an anorexic teenager who hates her body - forever.

We all value social esteem. We need to know somebody thinks well of us. We’re tribal monkeys; ostracism means death.
Jaron Lanier: I’d like to hypothesize one civilizing force, which is the perception of multiple overlapping hierarchies of status. I’ve observed this to be helpful in work dealing with rehabilitating gang members in Oakland. When there are multiple overlapping hierarchies of status there is more of a chance of people not fighting their superior within the status chain. And the more severe the imposition of the single hierarchy in people’s lives, the more likely they are to engage in conflict with one another. Part of America’s success is the confusion factor of understanding how to assess somebody’s status. 
Steven Pinker: That’s a profound observation. There are studies showing that violence is more common when people are confined to one pecking order, and all of their social worth depends on where they are in that hierarchy, whereas if they belong to multiple overlapping groups, they can always seek affirmations of worth elsewhere. For example, if I do something stupid when I’m driving, and someone gives me the finger and calls me an asshole, it’s not the end of the world: I think to myself, I’m a tenured professor at Harvard. On the other hand, if status among men in the street was my only source of worth in life, I might have road rage and pull out a gun. Modernity comprises a lot of things, and it’s hard to tease them apart. But I suspect that when you’re not confined to a village or a clan, and you can seek your fortunes in a wide world, that is a pacifying force for exactly that reason. 
Think of the people you know. How many of them can compete on purely financial grounds? How many can compare to the chimps at the top of the financial heap without feeling like an utter failure, a miserable loser? Not many. I can’t think of anyone I know who wouldn’t be at least a little unhappy. Some of them are pretty well off, but it’s awfully hard to compare with billionaires in their department. There’s no way to prove that this version of subcultures is the right one (perhaps fragmenting the culture fragments the possible status), but when I look at simple models, this version seems plausible to me, and seems to explain some deep trends like monogamy.
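
The "simple models" mentioned here are not spelled out in this excerpt, so the following is one possible toy version (a sketch of my own, not the author's): assume felt status anxiety grows with the logarithm of how many people outrank you in whatever group you compare yourself against, and that only a Dunbar-sized circle of roughly 150 people at the top of that group is really visible. Fragmenting one 300-million-person culture into subcultures then shrinks the average perceived gap and multiplies the number of people who get to stand near the top of something.

import math

# Toy status model (added for illustration; not from the essay): felt status
# anxiety grows with the log of how many people outrank you in your reference
# group, and only a top circle of ~150 people (the Dunbar number) is visible.

def fragment(population, subcultures, dunbar=150):
    """Split `population` people evenly into `subcultures` reference groups.
    Return (average log of the number of people ranked above you within your
    group, share of people sitting inside a top-`dunbar` circle of some group)."""
    size = population // subcultures
    # Sum of log(1 + people_above) over ranks 0..size-1 is log(size!),
    # which math.lgamma(size + 1) gives in closed form.
    avg_log_gap = math.lgamma(size + 1) / size
    top_share = min(dunbar, size) * subcultures / population
    return avg_log_gap, top_share

N = 300_000_000  # roughly the population of the United States
for k in (1, 1_000, 100_000):
    gap, share = fragment(N, k)
    print(f"{k:>7} subculture(s): average log status gap {gap:5.1f}, "
          f"{share:9.5%} of people inside a top-150 circle")

The exact functional form is arbitrary; the point of the sketch is only that both numbers move in the essay's direction as the culture fragments.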

SUBCULTURES SET YOU FREE

If you compare yourself with others, you may become vain or bitter, for always there will be greater and lesser persons than yourself. Enjoy your achievements as well as your plans. Keep interested in your own career, however humble; it is a real possession in the changing fortunes of time.
Having a society in which an artist can mingle as a social equal with the billionaire, the Nobel scientist, and the philanthropist is fundamental to our mental health! If I’m a programmer, I don’t need to be competing with 7 billion people, and the few hundred billionaires, for self-esteem. I can just consider the computing community. Better yet, I might only have to consider the functional programming community, or perhaps just the Haskell programming community. Or to take another example: if I decide to commit to the English Wikipedia subculture, as it were, instead of American culture, I am no longer mentally dealing with 300 million competitors and threats; I am dealing with just a few thousand.

It is a more manageable tribe. It’s closer to the Dunbar number, which still applies online. Even if I’m on the bottom of the Wikipedia heap, that’s fine. As long as I know where I am! I don’t have to be a rich elite to be happy; a master craftsman is content, and a cat may look at a king.

Leaving a culture, and joining a subculture, is a way for the monkey mind to cope with the modern world.

by Gwern Branwen, Gwern.net | Read more:
[ed. Damn. Sometimes I stumble across a site that's just, indescribable... if you're up for taking a deep dive down the rabbit hole into some weird and exhilarating essays, check this out.]

Monday, October 30, 2017

Neoliberalism 101

Every Successful Relationship is Successful for the Same Exact Reasons

Hey, guess what? I got married two weeks ago. And like most people, I asked some of the older and wiser folks around me for a couple quick words of advice from their own marriages to make sure my wife and I didn’t shit the (same) bed. I think most newlyweds do this, especially after a few cocktails from the open bar they just paid way too much money for.

But, of course, not being satisfied with just a few wise words, I had to take it a step further.

See, I have access to hundreds of thousands of smart, amazing people through my site. So why not consult them? Why not ask them for their best relationship/marriage advice? Why not synthesize all of their wisdom and experience into something straightforward and immediately applicable to any relationship, no matter who you are?

Why not crowdsource THE ULTIMATE RELATIONSHIP GUIDE TO END ALL RELATIONSHIP GUIDES™ from the sea of smart and savvy partners and lovers here?

So, that’s what I did. I sent out the call the week before my wedding: anyone who has been married for 10+ years and is still happy in their relationship, what lessons would you pass down to others if you could? What is working for you and your partner? And if you’re divorced, what didn’t work previously?

The response was overwhelming. Almost 1,500 people replied, many of whom sent in responses measured in pages, not paragraphs. It took almost two weeks to comb through them all, but I did. And what I found stunned me…

They were incredibly repetitive.

That’s not an insult or anything. Actually, it’s kind of the opposite. These were all smart and well-spoken people from all walks of life, from all around the world, all with their own histories, tragedies, mistakes, and triumphs…

And yet they were all saying pretty much the same dozen things.

Which means that those dozen or so things must be pretty damn important… and more importantly, they work.

Here’s what they are: (...)

The most important factor in a relationship is not communication, but respect

What I can tell you is the #1 thing, most important above all else is respect. It’s not sexual attraction, looks, shared goals, religion or lack of, nor is it love. There are times when you won’t feel love for your partner. That is the truth. But you never want to lose respect for your partner. Once you lose respect you will never get it back. 
– Laurie
As we scanned through the hundreds of responses we received, my assistant and I began to notice an interesting trend.

People who had been through divorces and/or had only been with their partners for 10-15 years almost always talked about communication being the most important part of making things work. Talk frequently. Talk openly. Talk about everything, even if it hurts.

And there is some merit to that (which I’ll get to later).

But we noticed that the thing people with marriages going on 20, 30, or even 40 years talked about most was respect.

My sense is that these people, through sheer quantity of experience, have learned that communication, no matter how open, transparent and disciplined, will always break down at some point. Conflicts are ultimately unavoidable, and feelings will always be hurt.

And the only thing that can save you and your partner, that can cushion you both to the hard landing of human fallibility, is an unerring respect for one another, the fact that you hold each other in high esteem, believe in one another—often more than you each believe in yourselves—and trust that your partner is doing his/her best with what they’ve got.

Without that bedrock of respect underneath you, you will doubt each other’s intentions. You will judge their choices and encroach on their independence. You will feel the need to hide things from one another for fear of criticism. And this is when the cracks in the edifice begin to appear.
My husband and I have been together 15 years this winter. I’ve thought a lot about what seems to be keeping us together, while marriages around us crumble (seriously, it’s everywhere… we seem to be at that age). The one word that I keep coming back to is “respect.” Of course, this means showing respect, but that is too superficial. Just showing it isn’t enough. You have to feel it deep within you. I deeply and genuinely respect him for his work ethic, his patience, his creativity, his intelligence, and his core values. From this respect comes everything else—trust, patience, perseverance (because sometimes life is really hard and you both just have to persevere). I want to hear what he has to say (even if I don’t agree with him) because I respect his opinion. I want to enable him to have some free time within our insanely busy lives because I respect his choices of how he spends his time and who he spends time with. And, really, what this mutual respect means is that we feel safe sharing our deepest, most intimate selves with each other. 
– Nicole
You must also respect yourself. Just as your partner must also respect him/herself. Because without that self-respect, you will not feel worthy of the respect afforded by your partner. You will be unwilling to accept it and you will find ways to undermine it. You will constantly feel the need to compensate and prove yourself worthy of love, which will just backfire.

Respect for your partner and respect for yourself are intertwined. As a reader named Olov put it, “Respect yourself and your wife. Never talk badly to or about her. If you don’t respect your wife, you don’t respect yourself. You chose her—live up to that choice.” (...)

Respect goes hand-in-hand with trust. And trust is the lifeblood of any relationship (romantic or otherwise). Without trust, there can be no sense of intimacy or comfort. Without trust, your partner will become a liability in your mind, something to be avoided and analyzed, not a protective homebase for your heart and your mind.

by Mark Manson, Quartz |  Read more:
Image: Reuters/Lucy Nicholson
[ed. Good advice. I'd post this once a month if I could only remember...]

The Infinite Suburb Is an Academic Joke

The elite graduate schools of urban planning have yet another new vision of the future. Lately, they see a new-and-improved suburbia—based on self-driving electric cars, “drone deliveries at your doorstep,” and “teardrop-shaped one-way roads” (otherwise known as cul-de-sacs)—as the coming sure thing. It sounds suspiciously like yesterday’s tomorrow, the George Jetson utopia that has been the stock-in-trade of half-baked futurism for decades. It may be obvious that for some time now we have lived in a reality-optional culture, and it’s vividly on display in the cavalcade of techno-narcissism that passes for thinking these days in academia.

Exhibit A is an essay that appeared last month in The New York Times Magazine titled “The Suburb of the Future is Almost Here,” by Alan M. Berger of the MIT urban design faculty and author of the book Infinite Suburbia—on the face of it a perfectly inane notion. The subtitle of his Times Magazine piece argued that “Millennials want a different kind of suburban development that is smart, efficient, and sustainable.”

Note the trio of clichés at the end, borrowed from the lexicon of the advertising industry. “Smart” is a meaningless anodyne that replaces the worn out tropes “deluxe,” “super,” “limited edition,” and so on. It’s simply meant to tweak the reader’s status consciousness. Who wants to be dumb?

“Efficient” and “sustainable” are actually at odds. The combo ought to ring an alarm bell for anyone tasked with designing human habitats. Do you know what “efficient” gets you in terms of ecology? Monocultures, such as GMO corn grown on sterile soil mediums jacked with petroleum-based fertilizers, herbicides, and fast-depleting fossil aquifer water. It’s a method that is very efficient for producing corn flakes and Cheez Doodles, but has poor prospects for continuing further into this century—as does conventional suburban sprawl, as we’ve known it. Efficiency in ecological terms beats a path straight to entropy and death.

Real successful ecologies, on the other hand, are the opposite of efficient. They are deeply redundant. They are rich in diverse species and functions, many of which overlap and duplicate, so that a problem with one failed part or one function doesn’t defeat the whole system. This redundancy is what makes them resilient and sustainable. Swamps, prairies, and hardwood forests are rich and sustainable ecologies. Monocultures, such as agri-biz style corn crops and “big box” retail monopolies are not sustainable and they’re certainly not even ecologies, just temporary artifacts of finance and engineering. What would America do if Walmart went out of business? (And don’t underestimate the possibility as geopolitical tension and conflict undermine global supply lines.)

Suburbia of the American type is composed of monocultures: residential, commercial, industrial, connected by the circulatory system of cars. Suburbia is not a sustainable human ecology. Among other weaknesses, it is fatally prone to Liebig’s “law of the minimum,” which states that the overall health of a system depends on the amount of the scarcest of the essential resources that is available to it. This ought to be self-evident to an urbanist, who must ipso facto be a kind of ecologist.

Yet techno-narcissists such as MIT’s Berger take it as axiomatic that innovation of-and-by itself can overcome all natural limits on a planet with finite resources. They assume the new-and-improved suburbs will continue to run on cars, only now they will be driverless and electric, and everything in their paradigm follows from that.

I don’t think so. Like it or not, the human race has not yet found a replacement for fossil fuels, especially oil, which has been the foundation of techno-industrial economies for a hundred years, and it is getting a little late in the game to imagine an orderly segue to some as-yet-undiscovered energy regime.

By the way, electricity is not an energy source. It is just a carrier of energy generated in power plants. We have produced large quantities of it at the grand scale using fossil fuels, hydropower, and nuclear fission (which is dependent on fossil fuels to operate). And, by the way, all of our nuclear power plants are nearing the end of their design life, with no plans or prospects for them to be replaced by new ones. We have maxed out on potential hydroelectric sites and the existing big ones are silting up, which will take them out of service inside of this century.

Electricity can also be produced by solar cells and wind turbines, but at nowhere near the scale necessary, on their own, for running contemporary American life. The conceit that we can power suburbia, the interstate highway system, truck-based distribution networks, commercial aviation, the U.S. military, and Walt Disney World on anything besides fossil fuels is going to leave a lot of people very disappointed.

The truth is that we have been running all this stuff on an extravagant ramp-up of debt for at least a decade to compensate for the troubles that exist in the oil industry, oil being the primary and indispensable resource for our way of life. These troubles are often lumped under the rubric peak oil, but the core of the trouble must be seen a little differently: namely, a steep decline in the Energy Return on Investment (EROI) across the oil industry. The phrase might seem abstruse on the face of it. It means simply that it is becoming uneconomical to extract oil from the ground, even with the so-called miracle of “fracking” shale oil deposits. It doesn’t pay for itself, and the EROI is still headed further down. (...)
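
The EROI arithmetic can be made concrete with a small sketch (my own illustration, using hypothetical round ratios rather than figures from the article): EROI is energy returned divided by energy invested, and the share of gross energy output left over for the rest of the economy is 1 - 1/EROI, which collapses toward zero as EROI slides toward 1.

# A minimal sketch of the EROI (energy return on investment) arithmetic.
# The ratios below are hypothetical round numbers chosen for illustration,
# not figures taken from the article.

def surplus_share(eroi: float) -> float:
    """Fraction of gross energy output left for the rest of the economy
    after paying the energy cost of producing the energy itself."""
    return 1.0 - 1.0 / eroi

for eroi in (100, 20, 10, 5, 2):
    print(f"EROI {eroi:>3}:1 -> {surplus_share(eroi):5.1%} of gross energy is surplus")

At 100:1, virtually all the energy produced is surplus; at 2:1, half of it goes right back into extraction, which is why a falling EROI squeezes an economy long before the oil literally runs out.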

The world’s major oil companies are cannibalizing themselves to stay in business, with balance sheets cratering, and next-to-zero new oil fields being discovered. The shale oil producers haven’t made a net dime since the project got ramped up around 2005. Their activities have been financed on junk lending made possible by arbitrages on the near-zero Fed fund rate, itself an historical abnormality. The shale-oil drillers are producing all out to service their loans, and have thus driven down oil prices, negating their profit. Low oil prices are not the sign of a healthy industry but of a failing industrial economy, the latter currently expressing itself in a sinking middle class and the election of Donald Trump.

All the techno-grandiose wishful thinking in the world does not alter this reality. The intelligent conclusion from all this ought to be obvious: Restructuring the American living arrangement to something other than “infinite” suburban sprawl based on limitless car dependency.

As it happens, the New Urbanist movement recognized this dynamic beginning in the early 1990s and proposed a return to traditional walkable neighborhoods, towns, and cities as the remedy. It has been a fairly successful reform effort, with hundreds of municipal land-use codes rewritten to avert the inevitable suburban sprawl mandates of the old codes. The movement also produced hundreds of new town projects all over the country to demonstrate that good urbanism was possible in new construction, as well as downtown makeovers in places earlier left for dead like Providence, Rhode Island, and Newburgh, New York.

When the elite graduate schools finally noticed the New Urbanism movement, it provoked extreme jealousy and hostility because they hadn’t thought of it themselves—it was a product of the property-development industry. Harvard’s Graduate School of Design, in particular, had been lost for decades in raptures of Buck Rogers modernism, concerned solely with “cutting edge” aesthetics—that is, architectural fashion statements aimed at status seeking. They affected to be offended by the retrograde front porches and picket fences of the New Urbanists, but they were unable to develop any coherent alternative vision of a plausible future urbanism—because there really wasn’t one.

Instead, around 2002 Harvard came up with a loopy program they called “Landscape Urbanism,” which was a half-baked revision of Ian McHarg’s old Design with Nature idea from the 1970s. Design with Nature had spawned hundreds of PUDs (Planned Unit Developments) of single-family houses nestled in bosky, natural settings and sheathed in environmental-looking cedar, and scores of university housing “complexes” bermed into the terrain (with plenty of free parking). Mostly, McHarg’s methodology was concerned with managing water runoff. It did not result in holistic towns, neighborhoods, or cities.

The projects of so-called Landscape Urbanism were not about buildings, and especially the relationship between buildings, other buildings, and the street. They viewed suburbia as a nirvana that simply required better storm-water drainage and the magic elixir of “edginess” to improve its long-term prospects. (...)

Berger’s P-Rex lab showed absolutely no interest in the particulars of traditional urban design: street-and-block grids, street and building typologies, code-writing for standards and norms in construction, et cetera. They showed no interest in the human habitat per se. Berger and his gang were simply promoting a fantasy they called the “global suburbia.” Their fascination with the suburbs rested on three pillars: 1) the fact that suburbia was already there; 2) the presumption that mass car use would continue to enable that settlement pattern; and 3) a religious faith in technological deliverance from the resource and capital limits that boded darkly for the continuation of suburban sprawl.

I will tell you without ceremony what the future actually holds for the inhabited terrain of North America. The big cities will have to contract severely and the process will be fraught and disorderly. The action will move to the small cities and small towns, especially the places that have a meaningful relationship with farming, food production, and the continent’s inland waterways. The suburbs have three destinies, none of them mutually exclusive: slums, salvage, and ruins. The future has mandates of its own. If we want to remain civilized, we will be compelled to return to a landscape composed of relationships between town and country, at a scale that comports with the resource realities of the future.

by James Howard Kunstler, American Conservative | Read more:
Image: “The Jetsons” (Warner Bros. publicity)

Hillary Clinton Releases Thousands of Pythons in Florida to Win the 2016 Election

How much more evidence do we need?!

When today’s news broke, I was dumbfounded and horrified. Not because I didn’t expect it, but because the mainstream media has once again totally botched the biggest story of the 2016 election. Of course, I am referring to the new allegations that, during the presidential election, Hillary Clinton personally oversaw an effort to set giant pythons loose in Florida to eat all of the Trump voters.

Also, don’t listen to the Mueller stuff. Just stay away from that, okay?

The facts of today’s news are incontrovertible. In 2016, Hillary Clinton visited Florida more than any other state except for Ohio and Pennsylvania. Fact. Also, Florida currently has an infestation of Burmese pythons that are causing chaos in the Everglades. Fact. Finally, there could be a recording out there of Hillary Clinton saying, and I quote, “The Clinton Foundation is a front for raising thousands of snakes that I train to consume people who are likely to vote for my political opponents.” Fact.

Do you know what’s not a fact? The new indictments in the Russia investigation. You can be indicted for anything these days — it doesn’t necessarily mean that you compromised American democracy in cahoots with a foreign power. That being said, Hillary Clinton should be indicted.

We only need one reason to see why: every time she visited Florida during the 2016 campaign, she was completely out of the public eye for literally minutes at a time. During each of those episodes, she had every chance to quietly slip away, creep to a warm nest of mangrove roots or marsh grass, and empty a bag full of baby pythons carefully bred to devour Republican voters in swing districts.

It’s obvious that the entire Mueller investigation is a total charade. This is the real story: Hillary Clinton may very well have personally deposited thousands of pythons throughout Florida with the express intent of murdering thousands of Americans and replacing them with, I assume, liberal robots.

These claims should be given even more weight due to the eerily suspicious timing of the whole so-called “official inquiry.” Do you really think it’s a coincidence that a federal grand jury approved the charges in the special investigation just days before the news broke about the Clinton campaign’s Python Strategy? Seems to me that we’ve got a 2016 presidential candidate spewing out random nonsense in an effort to distract Americans, cast doubt on perfectly legitimate investigations, and slither to safety. That candidate’s name is Hillary Clinton, and do you know what else slithers? Giant man-eating pythons. I am deliberate in my word choice.

If you still doubt this real story that has been reported by numerous real sources, then try proving to yourself that Hillary Clinton didn’t spend the past four years working to build a self-sustaining reptile colony that has been genetically engineered to find Trump supporters tasty. You can’t, because that would require proving a negative, which is impossible, but also because she did it. Maybe she even did it with Russia. Maybe they’re Russian Burmese pythons.

by Matthew Disler, McSweeney's |  Read more:
Image: via

Sunday, October 29, 2017


Tom Gauld
via:

Who Killed Reality? And Can We Bring It Back?

It has taken me nearly forever to notice a stupid and obvious fact about Donald Trump. He rose to fame as a reality TV star, and the one thing everyone understands about reality TV — people who love it and people who hate it — is that reality TV is not reality. It’s something else: the undermining of reality, the pirated version of reality, the perverted simulation of reality. If reality is Hawkins, Indiana, then reality TV is the Upside Down.

So I’m not sure how many of the people who voted for Trump actually thought they were getting a real president of a real country in the real world. (I feel badly for those people, though not as badly as they should feel about themselves.) That whole "real world" thing has sort of worn out its appeal. They wanted a devious goblin-troll from another dimension who would make the libtards howl and pee their panties, and so far they have had no reason to be disappointed.

Zero legislative accomplishments, an utterly incoherent foreign policy, a wink-and-nod acquaintanceship with neo-Nazis and white supremacists and an ever-lengthening list of broken promises and blatant falsehoods? Whatever, Poindexter: Fake news! Anyway, it’s all worth it to watch people in suits with Ivy League educations turn red on TV and start talking about history and the Constitution and all that other crap.

A year ago last August, in what felt like a noxious political environment but now looks like a different nation, a different historical era and perhaps a different reality, I wrote a mid-campaign Zeitgeist report that contained a strong premonition of what was coming. It wasn’t the only premonition I had while covering the 2016 presidential campaign. But I’m honestly not congratulating myself here, because like many other people who write about politics, I covered up my moments of dark insight with heaping doses of smug and wrong. (...)

Since then, I have come to the conclusion that the real innovators and disruptors in this dynamic were not the Bannon-Hannity Trump enablers in the media but the Trump demographic itself, which was more substantial and more complicated than we understood at the time. Trump’s supporters are mostly either studied with anthropological condescension or mocked as a pack of delusional racists hopped up on OxyContin and Wendy’s drive-through, who have halfway convinced themselves that their stagnant family incomes and sense of spiritual aimlessness are somehow the fault of black people and Muslims and people with PhDs. But in some ways they were ahead of the rest of us.

Don’t get me wrong: A lot of them are delusional racists who believe all sorts of untrue and unsavory things. But MAGAmericans have also imbibed a situational or ontological relativism that would impress the philosophy faculty at those coastal universities their grandkids will not be attending. They have grasped something important about the nature of reality in the 21st century — which is that reality isn’t important anymore. (...)

When Trump exults on Twitter over the perceived defeat of his enemies, Republican or Democrat or whomever, it often appears ludicrous and self-destructive to those of us out here in the realm of reality. But he’s making the same point over and over, and I think his followers get it: I’m down here in the labyrinth gnawing on the bones, and you haven’t even figured out how to fight me! To get back to the “Stranger Things” references, there must be a rule in Dungeons & Dragons that covers this scenario: There’s no point in attacking an imaginary creature with a real sword. (...)

Repeatedly hitting people over the head with a rolled-up newspaper, as if they were disobedient doggies, while telling them that Donald Trump is a liar and a fraud is pretty much the apex state of liberal self-parody. They know that. That’s why they like him.

Trump is a prominent symbol of the degradation or destruction of reality, but he didn’t cause it. He would not conceivably be president today — an eventuality that will keep on seeming fictional, as long as it lasts — if all of us, not just Republicans or the proverbial white working class, hadn’t traveled pretty far down the road into the realm of the not-real. Reality just wasn’t working out that well. God is dead, or at least he moved really far away with no phone and no internet, and a lot of reassuring old-time notions of reality loaded in his van. The alternative for many Americans is dead-end service jobs, prescription painkillers and blatantly false promises that someday soon technology and entrepreneurship will make everything better.

by Andrew O'Hehir, Salon |  Read more:
Image: AP/Getty/Shutterstock/Salon

How Big Medicine Can Ruin Medicare for All

In 2013, Senator Bernie Sanders, a self-described “democratic socialist,” couldn’t find a single co-sponsor for his healthcare plan, which would replace private insurance with Medicare-like coverage for all Americans regardless of age or income.

Today, the roll call of supporters for his latest version includes the leading lights of the Democratic party, including many with plausible presidential aspirations. It’s enough to make an exasperated Dana Milbank publish a column in the Washington Post under the headline ‘The Democrats have become socialists’.

But have they? Actually, no.

Real socialized medicine might work brilliantly, as it has in some other countries. In the United Kingdom, the socialist government of Labour’s Clement Attlee nationalized the healthcare sector after the second world war, and today the British government still owns and operates most hospitals and directly employs most healthcare professionals.

The UK’s National Health Service has its problems, but it produces much more health per dollar than America’s – largely because it doesn’t overpay specialists or waste money on therapies and technologies of dubious clinical value. Though they smoke and drink more, Britons live longer than Americans while paying 40% less per capita for healthcare.

But what advocates of “single payer” healthcare in this country are talking about, often without seeming to realize it, is something altogether different. What they are calling for, instead, is vastly expanding eligibility for the existing Medicare program, or for a new program much like it.

So, what does Medicare do? It doesn’t produce healthcare. Rather, it pays bills submitted by private healthcare providers.

What would happen if such a system replaced private healthcare insurance in the United States by becoming the “single payer” of all healthcare bills? If adopted here a generation ago, it could have led to substantial healthcare savings.

After Canada adopted such a system in the early 1970s, each of its provincial governments became the sole purchaser of healthcare within its own borders. These governments then used their concentrated purchasing power to negotiate fee schedules with doctors and fixed budgets with hospitals and medical suppliers that left Canadians with a far thriftier, more efficient system than the United States even as they gained access to more doctors’ visits per capita, better health, and longer lives.

The same might well have happened in the United States if we had adopted a single-payer system then. But we didn’t. Instead, we created something more akin to a “single-provider” system by allowing vast numbers of hospital mergers and other corporate combinations that have left most healthcare markets in the United States highly monopolized. And what happens when a single payer meets a single provider? It’s not pretty.

Healthcare delivery in the United States a generation ago was still in many ways a cottage industry, but not any more. Not only have 60 drug companies combined into 10, but hospitals, outpatient facilities, physician practices, labs, and other healthcare providers have been merging into giant, integrated, corporate healthcare platforms that increasingly dominate the supply side of medicine in most of the country.

According to a study headed by Harvard economist David M Cutler, between 2003 and 2013 the share of hospitals controlled by large holding companies increased from 7% to 60%. A full 40% of all hospital stays now occur in healthcare markets where a single entity controls all hospitals.

If you want a hint of what a single-payer healthcare system would look like today if grafted on to our currently highly monopolized system, think about how well our “single-payer” Pentagon procurement system does when it comes to bargaining with sole-source defense contractors.

In theory, the government could just set the price it’s willing to pay for the next generation of fighter jets or aircraft carriers and refuse to budge. But in practice, a highly consolidated military-industrial complex has enough economic and political muscle to ensure not only that it is paid well, but also that Congress appropriates money for weapons systems the Pentagon doesn’t even want.

The dynamic would be much the same if a single-payer system started negotiating with the monopolies that control America’s healthcare delivery systems.

by Phillip Longman, The Guardian | Read more:
Image: Michael Reynolds/EPA
[ed. A longer and more detailed version of this article can be found in the Washington Monthly.] 

Saturday, October 28, 2017

Smile and Say, "Money"

Julie Andrews is one of the most iconic and remarkable performers of our time. She’s played some of our most favorite roles and has become part of our childhoods. Who among us hasn’t sat in front of Mary Poppins hundreds of times or watched The Princess Diaries over and over?

Not only is she a talented performer and accomplished actress, she is also a delightful and beautiful person. Julie Andrews appeared on The Late Show with Stephen Colbert and offered some very handy advice for the host. In the age of constant selfies, this is advice we can all use.


Andrews gifts Stephen Colbert with a bit of advice about how to appear natural and effortless in photos. Much like she always does. The trick is to say “money” instead of cheese, and it works every time. Andrews says it’s foolproof.

“There’s something about it,” she adds. “It drops the jaw a bit and makes you smile nicely.”(...)

You must count down from three and then say the word “money” out loud. For it to really work, however, you should probably think about money too. And just for good measure, you could think about all the new shoes and tacos money could buy you. Because, like Colbert says, “it puts you in a good mood.”

by Sundi Rose, Hello Giggles |  Read more:
Image: YouTube

Cryptonomicon

Neal Stephenson came along a little late in the game to be considered one of the Web’s genuine prophets. His first novel, a spiky academic satire called “The Big U,” was published in 1984—the same year William Gibson, in “Neuromancer,” coined the term “cyberspace.” Stephenson didn’t plant both feet in science fiction until 1992, when his novel “Snow Crash”—a swashbuckling fantasy that largely takes place in a virtual world called the Metaverse—became an instant, and deserved, cyberpunk classic.

“Snow Crash” remains the freshest and most fully realized exploration of the hacker mythology. Set in a futuristic southern California, the novel rolls out a society in ruins. Citizens live in heavily fortified “Burbclaves,” pizza delivery services are run with military precision by a high-tech Cosa Nostra, and the Library of Congress (now comprising digital information uploaded by swarms of freelance Matt Drudge types) has merged with the C.I.A. and “kicked out a big fat stock offering.” In reality, Stephenson’s hero delivers pizza. In the Metaverse, he’s a warrior on a mission to squelch a deadly computer virus.

Sounds perfectly preposterous—and it is. But Stephenson, a former code writer, has such a crackling high style and a feel for how computers actually function that he yanks you right along. Despite all the high-tech frippery, there’s something old-fashioned about Stephenson’s work. He cares as much about telling good stories as he does about farming out cool ideas. There’s a strong whiff of moralism in his books, too. The bad guys in his fiction—that is, anyone who stands in a well-intentioned hacker’s way—meet bad ends. In “Snow Crash,” one nasty character tries to rape a young woman, only to find out she’s installed an intrauterine device called a “dentata.”

Stephenson’s antiquated commitment to narrative, his Dickensian brio, is part of what makes his gargantuan new novel, “Cryptonomicon,” distinct from the other outsize slabs of post-modern fiction we’ve seen recently—David Foster Wallace’s “Infinite Jest,” Don DeLillo’s “Underworld,” Thomas Pynchon’s “Mason & Dixon.” For all the pleasures scattered throughout those books, they’re dry, somewhat forbidding epics that beckon industrious graduate students while checking the riffraff at the door. “Cryptonomicon,” on the other hand, is a wet epic—as eager to please as a young-adult novel, it wants to blow your mind while keeping you well fed and happy. For the most part, it succeeds. It’s brain candy for bitheads.

“Cryptonomicon” could have easily been titled “Incoming Mail.” It’s a sprawling, picaresque novel about code making and code breaking, set both during World War II, when the Allies were struggling to break the Nazis’ fabled Enigma code, and during the present day, when a pair of entrepreneurial hackers are trying to create a “data haven” in the Philippines—a place where encrypted data (and an encrypted electronic currency) can be kept from the prying eyes of Big Brother. It is, at heart, a book about people who want to read one another’s mail.

“Cryptonomicon” is so crammed with incident—there are dozens of major characters, multiple plots and subplots, at least three borderline-sentimental love stories and discursive ruminations on everything from Bach’s organ music and Internet start-ups to the best way to eat Cap’n Crunch cereal—that it defies tidy summary. Suffice it to say that some early scenes are set at Princeton University in the 1940’s, where an awkward young mathematical genius named Lawrence Pritchard Waterhouse befriends the computer pioneer Alan Turing. (Turing’s interest in Waterhouse goes beyond their bicycle rides and theoretical discussions; he makes “an outlandish proposal to Lawrence involving penises. It required a great deal of methodical explanation, which Alan delivered with lots of blushing and stuttering.”) When war breaks out, Turing is dispatched to Bletchley Park in Britain, where he helps break Enigma. Waterhouse is ultimately assigned to a top-secret outfit called Detachment 2702, led by a gung-ho marine named Bobby Shaftoe, whose mission it is to prevent the Nazis from discovering that their code has been cracked wide open.

Stephenson cheerfully stretches historical plausibility—there are some absurdly heroic (if electrifying) battle scenes, hilarious cameos by Ronald Reagan and Douglas MacArthur, and shadowy conspiracies involving U-boat captains and fallen priests—but plays the mathematics of code breaking straight. We witness, in detail, Turing’s early attempts to create a bare-bones computer that will help decode Nazi messages, and are plunged into the organized chaos at Bletchley Park, where “demure girls, obediently shuffling reams of gibberish through their machines, shift after shift, day after day, have killed more men than Napoleon.” (...)

Stephenson intercuts these wartime scenes with chapters about Waterhouse’s grandson, Randy, who works for a start-up company called Epiphyte that plans not only to create a data haven but also to use a cache of gold buried by the Japanese Army during World War II to back an electronic currency protected by state-of-the-art encryption. These chapters are the book’s shaggiest and most winsome, if only because Stephenson is so plugged into Randy’s hacker sensibilities. As he skims along, Stephenson riffs on everything from Wired magazine—here called Turing, with the motto “So Hip, We’re Stupid!”—and Microsoft’s legal team’s “state-of-the-art hellhounds” to pretentious cultural-theory academics and Silicon Valley’s interest in cryogenics. (Some hackers here wear bracelets offering a $100,000 reward to medics who freeze their dead bodies.) That “Cryptonomicon” contains the greatest hacking scene ever put to paper, performed by Randy while under surveillance in a Philippine prison, should further endear this novel to computer freaks on both coasts. I expect to see, for the next decade or so, dog-eared copies of this novel rattling around on the floorboards of the Toyotas (or, increasingly, Range Rovers) that fill Silicon Valley parking lots.

Should anyone else bother with it? My answer is a guarded yes. Stephenson could have easily cut this novel by a third, and it’s terrifying that he imagines this 900-plus-page monster to be the “first volume” in an even longer saga. Worse, he strains too hard at reconciling the book’s multiple plot strands. We can understand the subtle links between World War II code breaking and today’s politicized encryption battles—and the spiritual links between cryptographers in the 1940’s and hackers in the 1990’s—without the nonsense about secret gold deposits and coded messages that filter down (improbably) through generations. Stephenson, I suspect, simply can’t help himself; he’s having too good a time to ever consider applying the brakes.

by Dwight Garner, NY Times |  Read more:
Image: via:
[ed. I stumbled across Snow Crash and Neal Stephenson just a few months ago and am now deeply into Cryptonomicon. It's a wonderful (and wonderfully dense) novel, full of intrigue and history. So delighted to find someone of this literary talent that I'd somehow overlooked all these years.]

Friday, October 27, 2017

The End of an Error?

In 1998 the Lancet, one of Elsevier’s most prestigious journals, published a paper by Andrew Wakefield and twelve colleagues that suggested a link between the MMR vaccine and autism. Further studies were quickly carried out, which failed to confirm such a link. In 2004, ten of the twelve co-authors publicly dissociated themselves from the claims in the paper, but it was not until 2010 that the paper was formally retracted by the Lancet, soon after which Wakefield was struck off the UK medical register.

A few years after Wakefield’s article was published, the Russian mathematician Grigori Perelman claimed to have proved Thurston’s geometrization conjecture, a result that gives a complete description of mathematical objects known as 3-manifolds, and in the process proves a conjecture due to Poincaré that was considered so important that the Clay Mathematics Institute had offered a million dollars for a solution. Perelman did not submit a paper to a journal; instead, in 2002 and 2003 he simply posted three preprints to the arXiv, a preprint server used by many theoretical physicists, mathematicians and computer scientists. It was difficult to understand what he had written, but such was his reputation, and such was the importance of his work if it was to be proved right, that a small team of experts worked heroically to come to grips with it, correcting minor errors, filling in parts of the argument where Perelman had been somewhat sketchy, and tidying up the presentation until it finally became possible to say with complete confidence that a solution had been found. For this work Perelman was offered a Fields Medal and the million dollars, both of which he declined.

A couple of months ago, Norbert Blum, a theoretical computer scientist from Bonn, posted to the arXiv a preprint claiming to have answered another of the Clay Mathematics Institute’s million-dollar questions. Like Perelman, Blum was an established and respected researcher. The preprint was well written, and Blum made clear that he was aware of many of the known pitfalls that await anybody who tries to solve the problem, giving careful explanations of how he had avoided them. So the preprint could not simply be dismissed as the work of a crank. After a few days, however, by which time several people had pored over the paper, a serious problem came to light: one of the key statements on which Blum’s argument depended directly contradicted a known (but not at all obvious) result. Soon after that, a clear understanding was reached of exactly where he had gone wrong, and a week or two later he retracted his claim.

These three stories are worth bearing in mind when people talk about how heavily we rely on the peer review system. It is not easy to have a paper published in the Lancet, so Wakefield’s paper presumably underwent a stringent process of peer review. As a result, it received a very strong endorsement from the scientific community. This gave a huge impetus to anti-vaccination campaigners and may well have led to hundreds of preventable deaths. By contrast, the two mathematics preprints were not peer reviewed, but that did not stop the correctness or otherwise of their claims being satisfactorily established.

An obvious objection to that last sentence is that the mathematics preprints were in fact peer-reviewed. They may not have been sent to referees by the editor of a journal, but they certainly were carefully scrutinized by peers of the authors. So to avoid any confusion, let me use the phrase “formal peer review” for the kind that is organized by a journal and “informal peer review” for the less official scrutiny that is carried out whenever an academic reads an article and comes to some sort of judgement on it. My aim here is to question whether we need formal peer review. It goes without saying that peer review in some form is essential, but it is much less obvious that it needs to be organized in the way it usually is today, or even that it needs to be organized at all.

What would the world be like without formal peer review? One can get some idea by looking at what the world is already like for many mathematicians. These days, the arXiv is how we disseminate our work, and the arXiv is how we establish priority. A typical pattern is to post a preprint to the arXiv, wait for feedback from other mathematicians who might be interested, post a revised version of the preprint, and send the revised version to a journal. The time between submitting a paper to a journal and its appearing is often a year or two, so by the time it appears in print, it has already been thoroughly assimilated. Furthermore, looking a paper up on the arXiv is much simpler than grappling with most journal websites, so even after publication it is often the arXiv preprint that is read and not the journal’s formatted version. Thus, in mathematics at least, journals have become almost irrelevant: their main purpose is to provide a stamp of approval, and even then one that gives only an imprecise and unreliable indication of how good a paper actually is. (...)

Defences of formal peer review tend to focus on three functions it serves. The first is that it is supposed to ensure reliability: if you read something in the peer-reviewed literature, you can have some confidence that it is correct. This confidence may fall short of certainty, but at least you know that experts have looked at the paper and not found it obviously flawed.

The second is a bit like the function of film reviews. We do not want to endure a large number of bad films in order to catch the occasional good one, so we leave that to film critics, who save us time by identifying the good ones for us. Similarly, a vast amount of academic literature is being produced all the time, most of it not deserving of our attention, and the peer-review system saves us time by selecting the most important articles. It also enables us to make quick judgements about the work of other academics: instead of actually reading the work, we can simply look at where it has been published.

The third function is providing feedback. If you submit a serious paper to a serious journal, then whether or not it is accepted, it has at least been read, and if you are lucky you receive valuable advice about how to improve it. (...)

It is not hard to think of other systems that would provide feedback, but it is less clear how they could become widely adopted. For example, one common proposal is to add (suitably moderated) comment pages to preprint servers. This would allow readers of articles to correct mistakes, make relevant points that are missing from the articles, and so on. Authors would be allowed to reply to these comments, and also to update their preprints in response to them. However, attempts to introduce systems like this have not, so far, been very successful, because most articles receive no comments. This may be partly because only a small minority of preprints are actually worth commenting on, but another important reason is that there is no moral pressure to do so. Throwing away the current system risks throwing away all the social capital associated with it and leaving us impoverished as a result. (...)

Why does any of this matter? Defenders of formal peer review usually admit that it is flawed, but go on to say, as though it were obvious, that any other system would be worse. But it is not obvious at all. If academics put their writings directly online and systems were developed for commenting on them, one immediate advantage would be a huge amount of money saved. Another would be that we would actually get to find out what other people thought about a paper, rather than merely knowing that somebody had judged it to be above a certain not very precise threshold (or not knowing anything at all if it had been rejected). We would be pooling our efforts in useful ways: for instance, if a paper had an error that could be corrected, this would not have to be rediscovered by every single reader.

by Timothy Gowers, TLS |  Read more:
Image: “Perelman-Poincaré” by Roberto Bobrow, 2010
[ed. Crowdsourcing peer review. Why not? Possibly because it would threaten the current dysfunctional scientific journal business and its outsized influence on what gets published and therefore deemed important.]

Thursday, October 26, 2017

Dallas Killers Club

There were three horrible public executions in 1963. The first came in February, when the prime minister of Iraq, Abdul Karim Qassem, was shot by members of the Ba’ath party, to which the United States had furnished money and training. A film clip of Qassem’s corpse, held up by the hair, was shown on Iraqi television. “We came to power on a CIA train,” said one of the Ba’athist revolutionaries; the CIA’s Near East division chief later boasted, “We really had the Ts crossed on what was happening.”

The second execution came in early November 1963: the president of Vietnam, Ngo Dinh Diem, was shot in the back of the head and stabbed with a bayonet, in a coup that was encouraged and monitored by the United States. President Kennedy was shocked at the news of Diem’s gruesome murder. “I feel we must bear a good deal of responsibility for it,” he said. “I should never have given my consent to it.” But Kennedy sent a congratulatory cable to Henry Cabot Lodge Jr., the ambassador to South Vietnam, who had been in the thick of the action. “With renewed appreciation for a fine job,” he wrote.

The third execution came, of course, later that month, on November 22. I was six when it happened. I wasn’t in school because we were moving to a new house with an ivy-covered tree in front. My mother told me that somebody had tried to kill the president, who was at the hospital. I asked how, and she said that a bullet had hit the president’s head, probably injuring his brain. She used the word “brain.” I asked why, and she said she didn’t know. I sat on a patch of carpeting in an empty room, believing that the president would still get better, because doctors are good and wounds heal. A little while later I learned that no, the president was dead.

Since that day, till very recently, I’ve avoided thinking about this third assassination. Any time I saw the words “Lee Harvey Oswald” or “grassy knoll” or “Jack Ruby,” my mind quickly skipped away to other things. I didn’t go to see Oliver Stone’s JFK when it came out, and I didn’t read DeLillo’s Libra, or Gaeton Fonzi’s The Last Investigation, or Posner’s Case Closed, or any of the dozens of mass-market paperbacks—many of them with lurid black covers and red titles—that I saw reviewed, blamed, praised.

But eventually you have to face up to it somehow: a famous, smiling, waving New Englander, wearing a striped, monogrammed shirt, sitting in a long blue Lincoln Continental next to his smiling, waving wife, has his head blown open during a Texas parade. How could it happen? He was a good-looking person, with an attractive family and an incredible plume of hair, and although he wasn’t a very effective or even, at times, a very well-intentioned president—he increased the number of thermonuclear warheads, more than doubled the budget for chemical and biological weapons, tripled the draft, nearly got us into an end-time war with Russia, and sent troops, napalm, and crop defoliants into Vietnam—some of his speeches were, even so, noble and true and ringingly delivered and permanently inspiring. He was a star; they loved him in Europe. And then suddenly he was just a dead, naked man in a hospital, staring fixedly upward, with a dark hole in his neck. Autopsy doctors were poking their fingers in his wounds and taking pictures and measuring, and burning their notes afterward and changing their stories. “I was trying to hold his hair on,” Jacqueline Kennedy told the Warren Commission when they asked her to describe her experience in the limousine. She saw, she said, a wedge-shaped piece of his skull: “I remember it was flesh colored with little ridges at the top.” The president, the motorcade he rode in, the whole country, had been, to use a postmortem word, “avulsed”—blasted inside out.

Who or what brought this appalling crime into being? Was it a mentally unstable ex-Marine and lapsed Russophile named Oswald, aiming down at the back of Kennedy’s head through leafy foliage from the book depository, all by himself, with no help? Many bystanders and eyewitnesses—including Jean Hill, whose interview was broadcast on NBC about half an hour after the shooting, and Kennedy advisers Kenny O’Donnell and Dave Powers, who rode in the presidential motorcade—didn’t think so: hearing the cluster of shots, they looked first toward a little slope on the north side of Dealey Plaza, and not back at the alleged sniper’s window.

A young surgeon at Parkland Memorial Hospital, Charles Crenshaw, who watched Kennedy’s blood and brains drip into a kick bucket in Trauma Room 1, also knew immediately that the president had been fatally wounded from a location toward the front of the limousine, not from behind it. “I know trauma, especially to the head,” Crenshaw writes in JFK Has Been Shot, published in 1992, republished with updates in 2013. “Had I been allowed to testify, I would have told them”—that is, the members of the Warren Commission—“that there is absolutely no doubt in my mind that the bullet that killed President Kennedy was shot from the grassy knoll area.”

No, the convergent gunfire leads one to conclude that the shooting had to have been a group effort of some kind, a preplanned, coordinated crossfire: a conspiracy. But if it was a group effort, what affiliation united the participants? Did the CIA and its hypermilitaristic confederates—Cold Warrior bitter-enders—engineer it? That’s what Mark Lane, James DiEugenio, Gerald McKnight, and many other sincere, brave, long-time students of the assassination believe. “Kennedy was removed from office by powerful and irrational forces who opposed his revisionist Cuba policy,” writes McKnight in Breach of Trust, a closely researched book about the blind spots and truth-twistings of the Warren Commission. James Douglass argues that Kennedy was killed by “the Unspeakable”—a term from Thomas Merton that Douglass uses to describe a loose confederacy of nefarious plotters who opposed Kennedy’s “turn” towards reconciliatory back-channel negotiation. “Because JFK chose peace on earth at the height of the Cold War, he was executed,” Douglass writes.

This is the message, also, of Oliver Stone’s artful, fictionalized epic JFK: Kennedy shied away from the invasion of Cuba, he wanted us out of Vietnam, he wouldn’t bow to the military-industrial combine, and none of that was acceptable to the hard-liners who surrounded him—so they had him killed. “The war is the biggest business in America, worth $80 billion a year,” Kevin Costner says, in JFK’s big closing speech. “President Kennedy was murdered by a conspiracy that was planned in advance at the highest levels of our government, and it was carried out by fanatical and disciplined cold warriors in the Pentagon and CIA’s covert-operation apparatus.”

Well, there’s no question that the CIA was and is an invasive weed, an eyes-only historical horror show that has, through plausibly deniable covert action, brought generations of instability and carnage into the world. There is no question, either, that under presidents Truman, Eisenhower, and Kennedy, the CIA’s string of pre-Dallas coups d’état—in Africa, in the Middle East, in Southeast Asia, in Latin America—contributed to an international climate of political upheaval and bad karma that made Kennedy’s own violent death a more conceivable outcome. There’s also no question that the CIA enlisted mobsters to kill Castro—Richard Bissell, who did the enlisting, later conceded that it was “a great mistake to involve the Mafia in an assassination attempt”—and no question that the CIA’s leading lights have, for fifty years, distorted and limited the available public record of the Kennedy assassination, doing whatever they could to distance the agency from its demonstrable interest in the accused killer, Oswald. It’s also true, I think, that there were some CIA extremists, fans of “executive action,” including William Harvey and, perhaps, James Jesus Angleton, that orchid-growing Anubis of spookitude, who were secretly relieved that Kennedy was shot, and may even have known in advance that he was probably going to die down south. (“I don’t want to sober up today,” Harvey reportedly told a colleague in Rome. “This is the day the goddamned president is gonna get himself killed!” Harvey also was heard to say: “This was bound to happen, and it’s probably good that it did.”) We are in debt to the CIA-blamers for their five decades of work, often in the face of choreographed media smears. They have brought us closer to the truth. But, having now read less than one-tenth of one percent of the available books on the subject, I believe, with full consciousness that I’m only a newcomer, that they’re barking up the wrong conspiracy. I think it was basically a Mafia hit: Kennedy’s death wouldn’t have happened without Carlos Marcello.

The best, saddest, fairest assassination book I’ve read, David Talbot’s Brothers, provides an important beginning clue. Robert Kennedy, who was closer to his brother and knew more about his many enraged detractors than anyone else, told a friend that the Mafia was principally responsible for what happened November 22. In public, for the five years that remained of his life, Bobby Kennedy made no criticisms of the nine-hundred-page Warren Report, which pinned the murder on a solo killer, a “nut” (per Hoover) and “general misanthropic fella” (per Warren Commission member Richard Russell) who had dreams of eternal fame. Attorney General Kennedy said, when reporters asked, that he had no intention of reading the report, but he endorsed it in writing and stood by it. Yet on the very night of the assassination, as Bobby began his descent into a near-catatonic depression, he called one of his organized-crime experts in Chicago and asked him to find out whether the Mafia was involved. And once, when friend and speechwriter Richard Goodwin (who had worked closely with JFK) asked Bobby what he really thought, Bobby replied, “If anyone was involved it was organized crime.”

To Arthur Schlesinger, Bobby was (according to biographer Jack Newfield) even more specific, ascribing the murder to “that guy in New Orleans”—meaning Carlos Marcello, the squat, tough, smart, wealthy mobster and tomato salesman who controlled slot machines, jukebox concessions, narcotics shipments, strip clubs, bookie networks, and other miscellaneous underworldly activities in Louisiana, in Mississippi, and, through his Texas emissary Joe Civello, in Dallas. In the early sixties, the syndicate run by Marcello and his brothers made more money than General Motors; the Marcellos owned judges, police departments, and FBI bureau chiefs. And when somebody failed to honor a debt, they killed him, or they killed someone close to him.

According to an FBI informant, Carlos Marcello confessed to the assassination. Some years before he died in 1993, Marcello said—as revealed by Lamar Waldron in three confusingly thorough books, the latest and best of which is The Hidden History of the JFK Assassination—“Yeah, I had the little son of a bitch killed,” meaning President Kennedy. “I’m sorry I couldn’t have done it myself.” As for Jack Ruby, the irascible strip-club proprietor and minor Marcello operative who silenced Lee Harvey Oswald in the Dallas police station, Bobby Kennedy exclaimed, on looking over the record of Ruby’s pre-assassination phone calls, “The list was almost a duplicate of the people I called before the Rackets Committee.” And then in 1968, Bobby Kennedy himself, having just won the California primary, was shot to death in a hotel kitchen in Los Angeles by an anti-Zionist cipher with gambling debts who had been employed as a groom at the Santa Anita racetrack. The racetrack was controlled by Carlos Marcello’s friend Mickey Cohen. The mob’s palmprints were, it seems, all over the war on the Kennedy brothers. Senator John Kennedy, during the labor-racketeering hearings in 1959, said, “If they’re crooks, we don’t wound them, we kill them.” Ronald Goldfarb, who worked for Bobby Kennedy’s justice department, wrote in 1995, “There is a haunting credibility to the theory that our organized crime drive prompted a plan to strike back at the Kennedy brothers.”

Lamar Waldron’s Hidden History is a primary source for a soon-to-be-produced movie, with Robert De Niro reportedly signed to play Marcello and Leonardo DiCaprio in the part of jailhouse informant Jack Van Laningham. Other new books that offer the Mafia-did-it view are Mark Shaw’s The Poison Patriarch—which contains an interesting theory about Ruby’s celebrity lawyer, Melvin Belli, and fingers “Marcello in collusion with Trafficante, while Hoffa cheered from the sidelines”—and Stefano Vaccara’s Carlos Marcello: The Man Behind the JFK Assassination, which has just been translated. “Dallas was a political assassination because it was a Mafia murder,” writes Vaccara, an authority on the Sicilian Mafia. “The Mafia went ahead with the hit once it understood that the power structure or the ‘establishment’ would not be displeased by the possibility.” Burton Hersh, in his astute and effortlessly well-written Bobby and J. Edgar, a revised version of which appeared in 2013, calls the Warren Commission Report a “sloppily executed magic trick, a government-sponsored attempt to stuff a giant wardrobe of incongruous information into a pitifully small valise.” Carlos Marcello, Hersh is convinced, was “the organizing personality behind the murder of John Kennedy.”

by Nicholson Baker, The Baffler |  Read more:
Image: Michael Duffy
[ed. Sorry for all The Baffler articles lately... they've been putting out some really great stuff (lately and pastly). See also: Stephen King's 11/22/63, one of his best.]