Friday, November 3, 2017

Time Passing

So here's the problem. If you don't believe in God or an afterlife; or if you believe that the existence of God or of an afterlife is a fundamentally unanswerable question; or if you do believe in God or an afterlife but you accept that your belief is just that, a belief, something you believe rather than something you know -- if any of that is true for you, then death can be an appalling thing to think about. Not just frightening, not just painful. It can be paralyzing. The fact that your lifespan is an infinitesimally tiny fragment in the life of the universe, and that there is, at the very least, a strong possibility that when you die, you disappear completely and forever, and that in five hundred years nobody will remember you and in five billion years the Earth will be boiled into the sun: this can be a profound and defining truth about your existence that you reflexively repulse, that you flinch away from and refuse to accept or even think about, consistently pushing to the back of your mind whenever it sneaks up, for fear that if you allow it to sit in your mind even for a minute, it will swallow everything else. It can make everything you do, and everything anyone else does, seem meaningless, trivial to the point of absurdity. It can make you feel erased, wipe out joy, make your life seem like ashes in your hands. Those of us who are skeptics and doubters are sometimes dismissive of people who fervently hold beliefs they have no evidence for simply because they find them comforting -- but when you're in the grip of this sort of existential despair, it can be hard to feel like you have anything but that handful of ashes to offer them in exchange.

But here's the thing. I think it's possible to be an agnostic, or an atheist, or to have religious or spiritual beliefs that you don't have certainty about, and still feel okay about death. I think there are ways to look at death, ways to experience the death of other people and to contemplate our own, that allow us to feel the value of life without denying the finality of death. I can't make myself believe in things I don't actually believe -- Heaven, or reincarnation, or a greater divine plan for our lives -- simply because believing those things would make death easier to accept. And I don't think I have to, or that anyone has to. I think there are ways to think about death that are comforting, that give peace and solace, that allow our lives to have meaning and even give us more of that meaning -- and that have nothing whatsoever to do with any kind of God, or any kind of afterlife.

Here's the first thing. The first thing is time, and the fact that we live in it. Our existence and experience are dependent on the passing of time, and on change. No, not dependent -- dependent is too weak a word. Time and change are integral to who we are, the foundation of our consciousness, and its warp and weft as well. I can't imagine what it would mean to be conscious without passing through time and being aware of it. There may be some form of existence outside of time, some plane of being in which change and the passage of time are an illusion, but it certainly isn't ours.

And inherent in change is loss. The passing of time has loss and death woven into it: each new moment kills the moment before it, and its own death is implied in the moment that comes after. There is no way to exist in the world of change without accepting loss, if only the loss of a moment in time: the way the sky looks right now, the motion of the air, the number of birds in the tree outside your window, the temperature, the placement of your body, the position of the people in the street. It's inherent in the nature of having moments: you never get to have this exact one again.

And a good thing, too. Because all the things that give life joy and meaning -- music, conversation, eating, dancing, playing with children, reading, thinking, making love, all of it -- are based on time passing, and on change, and on the loss of an infinitude of moments passing through us and then behind us. Without loss and death, we don't get to have existence. We don't get to have Shakespeare, or sex, or five-spice chicken, without allowing their existence and our experience of them to come into being and then pass on. We don't get to listen to Louis Armstrong without letting the E-flat disappear and turn into a G. We don't get to watch "Groundhog Day" without letting each frame of it pass in front of us for a 24th of a second and then move on. We don't get to walk in the forest without passing by each tree and letting it fall behind us; we don't even get to stand still in the forest and gaze at one tree for hours without seeing the wind blow off a leaf, a bird break off a twig for its nest, the clouds moving behind it, each manifestation of the tree dying and a new one taking its place.

And we wouldn't want to have it if we could. The alternative would be time frozen, a single frame of the film, with nothing to precede it and nothing to come after. I don't think any of us would want that. And if we don't want that, if instead we want the world of change, the world of music and talking and sex and whatnot, then it is worth our while to accept, and even love, the loss and the death that make it possible.

Here's the second thing. Imagine, for a moment, stepping away from time, the way you'd step back from a physical place, to get a better perspective on it. Imagine being outside of time, looking at all of it as a whole -- history, the present, the future -- the way the astronauts stepped back from the Earth and saw it whole.

Keep that image in your mind. Like a timeline in a history class, but going infinitely forward and infinitely back. And now think of a life, a segment of that timeline, one that starts in, say, 1961, and ends in, say, 2037. Does that life go away when 2037 turns into 2038? Do the years 1961 through 2037 disappear from time simply because we move on from them and into a new time, any more than Chicago disappears when we leave it behind and go to California?

It does not. The time that you live in will always exist, even after you've passed out of it, just like Paris exists before you visit it, and continues to exist after you leave. And the fact that people in the 23rd century will probably never know you were alive... that doesn't make your life disappear, any more than Paris disappears if your cousin Ethel never sees it. Your segment on that timeline will always have been there. The fact of your death doesn't make the time that you were alive disappear.

And it doesn't make it meaningless. Yes, stepping back and contemplating all of time and space can be daunting, can make you feel tiny and trivial. And that perception isn't entirely inaccurate. It's true; the small slice of time that we have is no more important than the infinitude of time that came before we were born, or the infinitude that will follow after we die.

But it's no less important, either.

I don't know what happens when we die. I don't know if we come back in a different body, or if we get to hover over time and space and view it in all its glory and splendor, or if our souls dissolve into the world-soul the way our bodies dissolve into the ground, or if, as seems very likely, we simply disappear. I have no idea. And I don't know that it matters. What matters is that we get to be alive. We get to be conscious. We get to be connected with each other, and with the world, and we get to be aware of that connection and to spend a few years mucking about in its possibilities. We get to have a slice of time and space that's ours. As it happened, we got the slice that has Beatles records and Thai restaurants and AIDS and the Internet. People who came before us got the slice that had horse-drawn carriages and whist and dysentery, or the one that had stone huts and Viking invasions and pigs in the yard. And the people who come after us will get the slice that has, I don't know, flying cars and soybean pies and identity chips in their brains. But our slice is no less important because it comes when it does, and it's no less important because we'll leave it someday. The fact that time will continue after we die does not negate the time that we were alive. We are alive now, and nothing can erase that.

Greta Christina, Greta's Blog |  Read more:
Image: uncredited
[ed. Repost]

Lava Faces


photos: markk
[ed. I was just going through the archives today looking for something and found these. I need to get back there more often.]

Liberalism is Dead

Silicon Valley’s techtopian libertarianism points to a disruptive left fascism for the 21st century.

Liberalism is dead. Like other dead historical forms, for example Christianity or cinema, liberalism lumbers around zombie-like, continuing to define lives and wield material power. But its place in the dustbin of history is already assured. It has no future; it is just a question of its long, slow slide into irrelevancy.

Liberalism, and its preferred governmental form liberal democracy, is collapsing because the nation-state—the concept that animated liberalism and gave it historical force through the European wars and revolutions of the 18th and 19th centuries—has transformed from a necessary tool for capitalist development to a hindrance to growth. Capitalism won an epochal victory in the Cold War and has spread to all but the most distant reaches of the world. But it has done so amid a long crisis of profitability, beginning in the 1970s, that leaves the world system teetering atop elaborate debt and logistics structures. This increasingly brittle system relies on instantaneous modes of global transmission and constant access to world markets to keep money circulating. As a result, borders and nations have become most profitable through their absence. And so capitalism, to stave off its demise, has begun eating its favorite children.

Contra anti-materialist “leftists” who claim that the far right has risen in response to online identity politics, the resurgent ethnonationalist right has emerged in response to these dual collapses of nation and capital. This right looks to fill the political and libidinal void left by zombie liberalism and to reinvigorate and reorganize a “nation” capable of surviving the slow-burning capitalist crisis. As with all far-right projects, it seeks to achieve this by restricting the concept of the citizen and those favored with political and social rights solely to the straight white male property owner, as it was in the good old days of naked settler colonialism. Once sufficiently organized and empowered, such a nation of propertied men could happily flourish under a corporatocratic police state, a dictatorship of capital stripped of protections for the workers who produce their wealth and openly genocidal toward those not deemed true members of the nation.

This genocidal attitude toward “unnecessary” populations, a constant feature of American statecraft, has become increasingly visible of late in mainstream American politics. Republicans have been trying to murder millions of Americans via health-care “reform” in Congress while Trump turns dog whistles into air horns and an ethnonationalist movement sprouts like fungus in his shadow. The American right’s slide from crypto-fascism to out-and-out fascism nears completion.

This three-decades-long ideological and organizational transformation on the right has not been matched with an equivalent strengthening of American liberalism. Rather the 2016 electoral losses of the presidency, both houses, and most governorships illustrate the inefficacy of the liberal project and its empty vision. The Democratic #resistance, rather than offering a concrete vision of a better world or even a better policy program, instead romanticizes a “center” status quo whose main advantage is that it destroys the environment and kills the poor at a slightly slower rate than the Republicans’ plan. Liberalism isn’t failing because the Democrats have chosen unpopular leaders. It is instead a result of the material limits of the debt-dependent economic policy to which it is devoted. Neoliberal economic policy has produced growth through a series of debt bubbles, but that series is reaching its terminal limits in student and medical debt. Liberalism today has nothing to offer but the symbolic inclusion of a small number of token individuals into the increasingly inaccessible upper classes.

As liberalism collapses, so too does the left-right divide that has marked the past century of domestic politics in the capitalist world. The political conflict of the future will not be between liberalism (or its friendlier European cousin, social democracy) and a conservatism that basically agrees with the principles of liberal democracy but wishes the police would swing their billy clubs a lot harder. Instead, the political dichotomy going forward will be between a “left” and “right” fascism. One is already ascendant, and the other is new but quickly growing.

Jürgen Habermas and various other 20th century Marxists used “left fascism” as a generic slander against their ideological opponents, but I am using it to refer to something more specific: the corporatocratic libertarianism that is the counterpart of right fascism’s authoritarian ethnonationalism, forming the two sides of the same coin. When, in the wake of the imminent economic downturn, Mark Zuckerberg runs for president on the promise of universal basic income and a more “global citizen”-style American identity in 2020, he will represent this new “left” fascism: one that, unlike Trump’s, sheds the nation-state as a central concept. A truly innovative and disruptive fascism for the 21st century.

Rather than invoke Herrenvolk principles and citizenship based on blood and soil, these left fascists will build nations of “choice” organized around brand loyalty and service use. Rather than citizens, there will be customers and consumers, CEOs and boards instead of presidents and congresses, terms of service instead of social contracts. Workers will be policed by privatized paramilitaries and live in company towns. This is, in fact, how much of early colonialism worked, with its chartered joint-stock companies running plantation microstates on opposite sides of the world. Instead of the crown, however, there will be the global market: no empire, just capital. (...)

In America, the right fascists find their base in agribusiness, the energy industry, and the military-industrial complex, all relying heavily on state subsidies, war, and border controls to produce their wealth. Although they hate taxes and civil rights, they rely on American imperialism, with its more traditional trade imbalances, negotiation of energy “agreements,” and forever wars to make their profits. But the left fascists, based in tech, education, and services, do best through global labor flows and free trade. Their reliance on logistics, global supply chains, and just-in-time manufacturing, combined with their messianic belief in the singularity and technological fixes for social problems, means they see the nation-state mostly as a hindrance and the military as an inefficient solution to global problems.

Both sides agree that the state should be used to cut wages, police the mobs, and eliminate regulatory oversight. The right fascists, the more traditional of the two, want to solve the question of class war once and for all in a final solution of blood and fire, while the left-fash imagine they can disrupt the class war away by creating much smaller and more easily controlled states and providing basic subsistence.

One side sees the people as subjects; the other, customers. The difference between a dictator-subject relationship and a business-customer relationship is that the brutality and exploitation of the latter is masked behind layers of politeness and seduction, and so sometimes can be mistaken for generosity. We’ve already seen this confusion in action. Last February it was a big news story when Apple refused to help the FBI crack the company’s iPhone encryption. Most people understood this as Apple standing up for its customers, protecting their privacy rights. This was an absurd misreading that requires that one willfully forget everything else Apple does with customer data. In fact, it was a play for sovereignty, a move pointed at demonstrating the independence of Apple in particular and Silicon Valley in general from the state, a step toward the left-fascist politics of the future. In understanding the move as a form of protective noblesse oblige, Apple customers revealed nothing so much as their willingness to become customer-subjects of Apple Nation™.

The left fascists, then, will try in the coming years to wrest control of the Democratic Party. Some on the left will inevitably support them in this effort, as they will come bearing such policies as universal basic income, a loosening of border regimes, a multicultural society, and a multipolar world. Many will be bamboozled by these promises coming from the new tech billionaires, and they will provide cover for the left-fascist project of corporatocratic sovereign devolution.

It is a strange time, when fascists see the future more clearly than the revolutionary left. But the left has so long imagined its route to power comes through capturing the nation-state that it can’t see that such a method doesn’t even work for capital anymore. To crush fascism, we’ll have to dramatically reorient our understanding of the future. Revolutionaries have to get over their fetishization of both nation and state, and fast, if they hope to truly destroy this world, let alone having a shot at building a new one.

by Willie Osterweil, TNI | Read more:
Image: uncredited: All Watched Over by Machines of Loving Grace, 2011

Thursday, November 2, 2017


Jack 47, Green Barn Farms (Jack Herer and AK-47 hybrid)
via: Five Strains of Pot for Sale in Seattle That You Should Try Right Now
[ed. Two of my favorite strains.]

Florida: Why Panic?

Once the hysteria sets in, we tend to forget that the real problem is not accounting for the gas or the water, but the booze.

Allow me to share a Florida man's experience in the second person: You try to get out in front of the storm by stocking up a week in advance, but then you have a surfeit of booze in the house and a reduced work schedule, and you’re watching a loop of apocalyptic news reports, so you invite some friends over and start drinking while the storm is still a hundred miles from Martinique; and then you’re getting drunker as the menacing eye-wall batters Barbuda; and then Gov. Rick Scott is wearing a Navy hat during a press conference and he’s telling you to evacuate your house, but first you have to Google whether Scott was actually in the Navy; and then you doze off as the storm cruises into the Caribbean, and two days go by and you wake up to see Jim Cantore on the Weather Channel in a helmet and bulletproof vest and can’t believe you aren’t dreaming, so you keep drinking, and by the time the storm reaches the Florida Straits, you’re being told that it’s coming straight up the gut and you’re already out of booze and Home Depot is out of plywood and the only thing still open is the Circle K, but the Beer Cave has been turned upside down, so you buy the last two jugs of rotgut burgundy, and the clerk tells you he can’t break a 50, so you tell him to keep the change.

We Floridians have something of a toxic relationship with hurricanes. They provide a great deal of excitement during what are otherwise the dog days of summer. What else is there in late August, or early September, when we’re in “peak” hurricane season? In the sports world, “nothing but the sun-dazed and inconsequential third fifth of the baseball season,” and the likewise inconsequential first Saturday of college football.

We track hurricanes with perverse pleasure. They turn us into amateur meteorologists. They also turn professional meteorologists into amateur meteorologists. (A hurricane might cut us all down to size, but the moment of impact seems like an unexpected nadir for the weather professionals, who are relegated to standing outside in the wind and the rain to tell us it’s windy and rainy.)

Maybe we are spoiled, because the ideal outcome happens to be common in Florida: That forecast cone from the National Hurricane Center casts a crimson shadow over the state, but the storm gets sheared somewhere in the Caribbean or veers out to sea at the last minute, and you end up receiving a fierce, but brief torrent of rain, some downed tree limbs, and an afternoon power outage. You’ve spent days prepping for what ends up being the perfect excuse to drink rum (assuming you have enough), boil hot dogs on a camp stove, and play candlelight Scrabble without any preoccupation except baseline survival — an atavistic fantasy.

The poet and critic (and honorary Floridian) Michael Hofmann believes this perversion is a ritual of American theatrics and spectatorship: Wars, hurricanes, and national championships all “begin as an expensive orgy of logistics and end as a pretext for snacking,” when they should give us pause for other, more existential concerns.

In hurricane season, we can also count on the large base of residents who cannot be made to leave their homes, no matter how perilous the forecast. (And they won’t soon forget the preposterous, sensationalist news coverage Irma received.) Locals don’t leave; leaving is capitulation. They’ll just as soon go down swinging (or shooting, in this case). Holding down one’s fort is a point of pride — a metric of advanced stewardship. “Old Florida hands … measure out their lives in hurricane names,” Hofmann writes. “They remember particular angles of attack, depths of flooding, wind velocities and force measurements, destructiveness in dollar amounts … a form of higher geekishness, each man (and of course they’re usually men) his own survivalist.”

Such folks could not be stirred to evacuate, but their panic and dread were heightened to an exceptional degree this time. Harvey, no doubt, stoked their fears: He wasn't even finished dumping rain on Houston when this giant matzo ball called Irma showed up in the Atlantic. Our memory of these storms and their power seems to reset every few years, but Harvey’s impact was attendant and grimly illustrative.

The Atlantic coast had a week to prepare, and then, after the track suddenly shifted into the Gulf of Mexico, it seemed like the entire state had been boarded up and emptied. The evacuations that were ordered in Miami earlier in the week were now directed at Naples and Ft. Myers — all told, 6.5 million people, one of the largest evacuations in U.S. history. Folks from the Atlantic side had fled to areas that ended up being more dangerous than whence they came.

All eyes were rightly fixed on Southwest Florida and the Keys, which would suffer the brunt. But by dint of counterclockwise rotations, favorable tides, and a storm surge that had apparently been lost at sea, much of the Gulf Coast was spared.

It was Northeast Florida, nearly 400 miles from Irma’s landfall, that left everyone scratching their heads. (...)

The morning after Irma eclipsed the state, I woke up to a video in my inbox of a dude riding a Jet Ski through his front yard. The yard was in Middleburg, on Black Creek, south of downtown Jacksonville. The creek had flooded overnight, up to 30 feet above its normal level in some areas. The water was up to the rafters on his house; the roof of his car breached like a turtle shell.

A few days later, once the water had receded, I drove to Middleburg and found a diner that was starting to fill up around lunchtime. Vaulted ceilings and heart-pine joists held small, lacquered replicas of tugs and ferries. Anatomy diagrams on the walls illustrated the freshwater fish varieties you’d find in the surrounding lakes and tributaries. The only thing that seemed out of place was the flat-screen TV mounted to the ceiling. CMT was playing pop-country music videos. I followed along, then looked out the window at the landscape — soft, inert hills, and meandering country roads bisecting forests of slash and longleaf pine. I thought about how North Florida resembles South Georgia, South Georgia resembles Southeast Alabama and so forth, then looked back at the TV. Those starchy, cloying Blake Shelton videos look nothing like the world they claim to inhabit.

by Jordan Blumetti, The Bitter Southerner | Read more:
Image: Jordan Blumetti

Wednesday, November 1, 2017


Edmund Lewandowski, Dynamo, 1948.

Mark Morrisroe, Untitled (Embrace) 1985.
via:

Desperately Seeking Cities

A half-century after the urban crisis, it appeared that the American city was becoming a source of national hope. In the 2016 presidential election, there were few indicators of how one would vote more salient than whether one lived in a city or far outside one. This result has given rise to the idea that cities would increasingly form the nucleus of the soi-disant “resistance” to right-wing nationalism and Donald Trump. Since last November, marches have repeatedly converged on urban cores; against the threats of the Attorney General, mayors touted their cities’ “sanctuary” status; and environmental standards retired federally have been upheld municipally. If the US had any chance to build a progressive, cosmopolitan future, the path lay through the cities.

Then came the contest to locate Amazon’s second headquarters. It turned out that the unifying power of hating Trump was nothing compared to the overwhelming national ardor for Amazon. Over the last two months, cities of every size and in every part of the country fell over themselves in a lurid, nauseating pageant of suitors. To whom would Amazon give the rose? The solidarity supposedly endemic to urban life was revealed to be the narcissism of minor differences, an inveterate competitive streak, a zeal to scrap every public plan in a fever of tax breaks. Faced with a corporation with monopoly power as great as the old railroads, cities genuflected. Millenarian apocalyptic rhetoric over Trump gave way to salvific paeans to Amazon. The company took on the form of a 21st-century Christ, offering its living water to the thirsty urban samaritans. Only San Antonio—appropriately, the city that once housed stolid, reliable, tedious pleasures like Tim Duncan—distinguished itself, refusing to enter a bid. “Sure, we have a competitive toolkit of incentives,” the city’s mayor wrote, at once inhabiting and parodying the language of the corporate brochure, “but blindly giving away the farm isn’t our style.”

I live in Philadelphia, where every day, the prospect of Amazon HQ2 competed with the corruption of our most powerful local congressman for the top story. The city unveiled a website dedicated to its bid—more attractive and user-friendly than any other municipal page—that gloried in Philadelphia being the “biggest small city in America.” “A lot of people don’t know this,” Randy LoBasso, the head of the Bicycle Coalition of Greater Philadelphia says in a video dedicated to “Livability,” “but Philadelphia is the most ‘biked’ per capita big city in the United States”—a sentence so thick with the jargon of urbanism that it is virtually indecipherable. In an emblematic piece, Jon Geeting, a former journalist who is currently the engagement director for Philadelphia 3.0—a “dark money” political group trying to put more business-friendly candidates into office—wrote that the city “could potentially have a real shot at this,” because of its “strength of legacy assets (elite universities, extensive regional rail system, tons of park land, walkable street grid and narrow streets for biking) that we’ve inherited from previous generations.” In comment sections, in conversation, in social media, Philadelphians turned overnight from citizens into urban branding experts. Years of reading Curbed and thinking about “smart cities” had had their effect. Person after person blandly laid out the humble virtues of Philadelphia as a case for Amazon’s noblesse oblige.

Most city dwellers, it turns out, live lives of quiet desperation for Amazon. What was happening to Philadelphia disclosed the emptiness not just of this city, but of what people all over the country had learned to think cities were good for. The value of the Amazon contest is that it has laid bare a fundamental contradiction of contemporary urban life. Amazon appealed to cities—cannily, it must be said—to narrate themselves: what makes them unique, such that Amazon should locate there? The result was that all cities ended up putting forward the same, boring virtues and “legacy assets”: some parks, some universities, some available land, some tax breaks, some restaurants. Each city, it turned out, was indistinguishable from every other city: “thirty-six hours . . . in the same beer garden, museum, music venue, and ‘High Line’-type urban park.” By the same token, all cities were forced to realize their basic inadequacy: that ultimately, all their tireless work to cultivate their urbanity amounted to nothing if they did not have Amazon.

Amazon has bankrupted the ideology it claimed to appeal to: the ideology of “urbanism.” Since the early 20th century at least, critics, reformers, and architects from Daniel Burnham to Ebenezer Howard to Lewis Mumford have tried to solve the “problem” of the city. The solutions that came into being—threading the city with highways and clearing “slums”—lacked their idealism, damaging the city and city planning with it. The upheavals of urban renewal and the cataclysms of the urban crisis gave birth to the idea that cities were on the verge of extinction; the best way to save them was simultaneously to trumpet their inherent virtues and adopt itsy-bitsy policies to improve their basic livability. Against the pummeling, wrecking-ball visions of Robert Moses, Ed Bacon, and Justin Herman, a superficial reading of Jane Jacobs held that the network of urban eyes and the ballet of street life made cities what they were. (Her idea that cities ought to accommodate a diversity of industries and classes did not enter the discussion.) Under the reign of urbanism, cities, effects of a mercantile and then capitalist economy, became fetish objects: one had to love cities, constantly praise them, and find new ways of adoring them. (...)

The most serious academic riposte to the urbanist ideology has been Michael Storper’s Keys to the City (2013), which demonstrates comprehensively what one might always have guessed, and what the Amazon contest has proven: the location of businesses, rather than the walkability, density, and diversity of a city, determines its economic health. A statistically insignificant portion of the country will up and move to Dallas because they are fiending for breakfast tacos that they can sort of walk to, near a private-public partnership-funded park that caps a freeway where they can sort of enjoy them. Most people, however, move to a place in search of jobs, not “urbanism.” “Even though London, New York, and Paris have central-city neighborhoods that are consumption playgrounds for the rich of the world,” Storper writes, “they are above all major productive hubs in the global economy. The vast majority of their people come to these cities in order to work. The world urban system—from its richest to poorest cities—is not a set of playgrounds or amenity parks but instead a vast system of interlinked workshops.” That this even needs to be argued suggests the level of delusion that persists about what metropolitan regions actually do, and why people live in them. It is a delusion that has taken hold not only on the lecture circuit and PowerPoint presentations and websites that lend their names to “ideas festivals,” but among ordinary city-dwellers.

Urbanism has helped to obscure how implicated cities are in the broader changes of our time. The city we have today is, like everything, characterized by a spectacular level of inequality. To look at a standard city map is to miss an invisible overlay of policies and business incentives that pit one part of the city against the other, much as Amazon pitted all cities against one another. Private-public partnerships make some parks better than others; tax abatements create high-end residential and commercial construction in the urban core, while the poor in the urban periphery enjoy the indignity of under-regulated “enterprise zones” or having to compete for federal “promise zones”; property values spike in the vicinity of a “good” school and dive in the neighborhood of a “bad” one; closely related are the policies that entice cavalcades of police to one area and not to others, ensuring that the carceral state weighs on some neighborhoods like an unshakable stone. The determining center remains, as it has for generations, and as it will in the age of Amazon, the wide floor plates in the heavily air-conditioned offices of the country’s major corporations, which pay proportionally less in taxes to the city and state where they reside than your average middle-class family. As a substitute for more concerted city planning, urbanism has had little success in encouraging the diversity it claims to seek. As a cover for the true nature of the neoliberal city, it has been a triumph.

by Nikil Saval, N+1 | Read more:
Image: philadelphiadelivers.com

Kathy Mattea


[ed. I like Michelle Shocked's version the best but can't seem to find it on YT. Here's the original by Jean Ritchie.]

How Big Coal Created a Culture of Dependence

We are, one hears, spending too much time on Appalachia. There are too many dispatches from woebegone towns, coastal reporters parachuting in to ascertain that, yes, the hard-bitten locals are still with their man Donald Trump. There are too many odes to the beleaguered coal miner, even though that entire industry now employs fewer people than Arby’s. Enough already, says the exasperated urban liberal. Frank Rich captured this sentiment in March in a New York magazine piece entitled “No Sympathy for the Hillbilly.” “Maybe,” he mused, “they’ll keep voting against their own interests until the industrial poisons left unregulated by their favored politicians finish them off altogether. Either way, the best course for Democrats may be to respect their right to choose.”

The superficial “downtrodden Trump voter” story has indeed become an unproductive cliché. And upheavals in industries with larger, more diverse workforces than coal, such as retail, deserve close attention as well.

But our decades-long fixation with Appalachia is still justified. For starters, the political transformation of the region is genuinely stunning. West Virginia was one of just six states that voted for Jimmy Carter in 1980; last year, it gave Trump his second-largest margin of victory, forty-two points.

More importantly, the region’s afflictions cannot simply be cordoned off and left to burn out. The opioid epidemic that now grips whole swaths of the Northeast and Midwest got its start around the turn of the century in central Appalachia, with the shameless targeting of a vulnerable customer base by pharmaceutical companies hawking their potent painkillers. The epidemic spread outward from there, sure as an inkblot on a map. People like Frank Rich may be callous enough to want to consign Appalachians to their “poisons,” but the quarantine is not that easy.

We should be thankful, then, for what Steven Stoll, a historian at Fordham University, has delivered in Ramp Hollow: not just another account of Appalachia’s current plight, but a journey deeper in time to help us understand how the region came to be the way it is. For while much has been written about the region of late, the historical roots of its troubles have received relatively little recent scrutiny. Hillbilly Elegy, J. D. Vance’s best-selling memoir of growing up in an Appalachian family transplanted from eastern Kentucky to the flatlands of southwestern Ohio, cast his people’s afflictions largely as a matter of a culture gone awry, of ornery self-reliance turned to resentful self-destruction. In White Trash, the historian Nancy Isenberg traced the history of the country’s white underclass to the nation’s earliest days, but she focused more on how that underclass was depicted and scorned than on the material particulars of its existence.

Stoll offers the ideal complement. He has set out to tell the story of how the people of a sprawling region of our country—one of its most physically captivating and ecologically bountiful—went from enjoying a modest but self-sufficient existence as small-scale agrarians for much of the eighteenth and nineteenth centuries to a dreary dependency on the indulgence of coal barons or the alms of the government. (...)

Yet it was the area’s very natural bounty that would ultimately spell the end of this self-sufficiency. The Civil War’s incursions into the Shenandoah Valley and westward exposed the region’s riches in exactly the minerals demanded by a growing industrial economy. (By 1880, there were 56,500 steam engines in the country, all voracious for coal.) “Her hills and valleys are full of wealth which only needs development to attract capitalists like a magnet,” declared one joint-stock company. In swarmed said capitalists, often in cahoots with local power brokers from Charleston and Wheeling.

The confused legal property claims offered the aspiring coal barons a window: they could approach longtime inhabitants and say, essentially, “Look, we all know you don’t have full title to this land, but if you sell us the mineral rights, we’ll let you stay.” With population growth starting to crimp the wide-ranging agrarian existence, some extra cash in hand was hard to reject. Not that it was very much: one farmer turned over his 740 acres for a mere $3.58 per acre—around $80 today. By 1889, a single company, Flat Top Land Trust, had amassed rights to 200,000 acres in McDowell County in southern West Virginia; just thirteen years later, McDowell was producing more than five million tons of coal per year.

The coal industry had a positively soft touch in the early going, though, compared to timber. Stoll describes the arrival of the “steam skidder,” which “looks like a locomotive with a ship’s mast.” It “clanks and spits, chugs steam, and sweats grease from its wheels and pistons” as workers use cables extending from the mast to grab fallen trees, “pulling or skidding the logs hundreds of feet to a railroad flatbed.” The steam skidder crews would cut everything they could, “leaving the slopes barren but for the stumps, branches, and bark that burned whenever a spark from a railroad wheel or glowing ash from a tinderbox fell on the detritus.”

The harvest was staggering: “Of the 10 million acres that had never been cut in 1870, only 1.5 million stood in 1910.” Stoll quotes one witness from the time: “One sees these beautiful hills and valleys stripped of nature’s adornment; the hills denuded of their forests, the valleys lighted by the flames of coke-ovens and smelting furnaces; their vegetation seared and blackened . . . and one could wish that such an Arcadia might have been spared such ravishment. But the needs of the race are insatiable and unceasing.” Indeed, they were. As one northern lumberman put it: “All we want here is to get the most we can out of this country, as quick as we can, and then get out.”

Such rapaciousness did not leave much of the commons that had sustained the makeshift agrarian existence. Of course, there was a new life to replace it: mining coal or logging trees. By 1929, 100,000 men, out of a total state population of only 1.7 million, worked in 830 mines across West Virginia alone. But it is in that very shift that Stoll identifies the region’s turn toward immiseration. With the land spoiled and few non-coal jobs available, workers were at the mercy of whichever coal company dominated their corner of the region. They lived in camps and were paid in scrip usable only at the company store; even the small gardens they were allowed in the camps were geared less toward self-reliance than toward cutting the company’s costs to feed them.

Stoll quotes a professor at Berea College in eastern Kentucky who captured the new reality in a 1924 book: The miner “had not realized that he would have to buy all his food. . . He has to pay even for water to drink.” Having moved their families to a shanty in the camp, miners owed rent even when the mine closed in the industry’s cyclical downturns, which served to “bind them as tenants by compulsion . . . under leases by which they can be turned out with their wives and children on the mountainside in midwinter if they strike.” As Stoll sums it up, “Their dependency on company housing and company money spent for food in company-owned stores amounted to a constant threat of eviction and starvation.” Of course, Merle Travis had this dynamic nailed way back in his 1947 classic, “Sixteen Tons”: “You load sixteen tons, what do you get? / Another day older and deeper in debt. / Saint Peter, don’t you call me, ’cause I can’t go, / I owe my soul to the company store.”

Nor did the industries bring even a modicum of mass prosperity to compensate for this dependency. By 1960, more than half the homes in central Appalachia still lacked indoor plumbing, helping give rise to all manner of cruel stereotypes and harsh commentary, such as this, from the British historian Arnold Toynbee: “The Appalachians present the melancholy spectacle of a people who have acquired civilization and then lost it.” An extensive 1981 study of eighty Appalachian counties by the Highlander Research and Educational Center in Tennessee confirmed that, in Stoll’s summary, coal company capital had brought “stagnation, not human betterment,” and a “correlation between corporate control and inadequate housing.”

“Banks in coal counties couldn’t invest in home construction or other local improvements because the greater share of their deposits belonged to the companies,” Stoll writes. “No sooner did that capital flow in than it flowed out, depriving banks of funds stable enough for community lending.” Not only had the coal industry, along with timber, supplanted an earlier existence, but it was actively stifling other forms of growth and development.

Stoll recounts a scene from 1988, when a man named Julian Martin got up at a public hearing to oppose a proposed strip-mining project in West Virginia. Martin described the disappearance of Bull Creek along the Coal River, which he had explored as a kid decades earlier. He pointed out that places that had seen the most strip mining had also become the very poorest in the state. “My daddy was a coal miner, and I understand being out of work, okay?” Martin said. “I’ve been down that road myself. And I know you’ve got to provide for your family. But I’m saying they’re only giving us two options. They’re saying, ‘Either starve—or destroy West Virginia.’ And surely to God there must be another option.”

It’s a powerful moment, and it captures the tragic political irony that is one of the most lasting fruits of the region’s dependency: despite all the depredations of resource extraction—all the mine collapses and explosions (twenty-nine killed at Upper Big Branch in 2010) and slurry floods (125 killed in the Buffalo Creek disaster of 1972) and chemical spills (thousands without drinking water after the contamination of the Elk River in 2014)—many inhabitants, and their elected representatives, remain fiercely protective of the responsible industries. Even the empathetic Stoll can’t help but let his frustration show, as he urges the “white working class of the southern mountains to stop identifying their interests with those of the rich and powerful, a position that leaves them poorer and more powerless than they have ever been.”

Well, yes, but many a book has been written to explain why exactly the opposite trend has been happening, as Appalachia turns ever redder. It shouldn’t be that hard to make sense of the coal-related part of this political turn, and voters’ rightful assessment that coastal Democrats are hostile to the industry. The region has been dominated by mining for so long that coal has become deeply interwoven with its whole sense of self. Just last month, I was speaking with a couple of retired union miners in Fairmont, West Virginia, who are highly critical of both coal companies and Trump, and suffer the typical physical ailments from decades spent underground. Yet both said without hesitation that they missed the work for the camaraderie and sense of purpose it provided. Their ancestors identified as agrarians; they identified as miners.

by Alec MacGillis, Washington Monthly | Read more:
Image: Library of Congress/Wikimedia Commons
[ed. See also: Awaiting Trump's coal comeback, miners reject retraining.]

Tuesday, October 31, 2017

Why You Hate Contemporary Architecture

The British author Douglas Adams had this to say about airports: “Airports are ugly. Some are very ugly. Some attain a degree of ugliness that can only be the result of special effort.” Sadly, this truth is not applicable merely to airports: it can also be said of most contemporary architecture.

Take the Tour Montparnasse, a black, slickly glass-panelled skyscraper, looming over the beautiful Paris cityscape like a giant domino waiting to fall. Parisians hated it so much that the city was subsequently forced to enact an ordinance forbidding any further buildings higher than 36 meters.

Or take Boston’s City Hall Plaza. Downtown Boston is generally an attractive place, with old buildings and a waterfront and a beautiful public garden. But Boston’s City Hall is a hideous concrete edifice of mind-bogglingly inscrutable shape, like an ominous component found left over after you’ve painstakingly assembled a complicated household appliance. In the 1960s, before the first batch of concrete had even dried in the mold, people were already begging preemptively for the damn thing to be torn down. There’s a whole additional complex of equally unpleasant federal buildings attached to the same plaza, designed by Walter Gropius, an architect whose chuckle-inducing surname belies the utter cheerlessness of his designs. The John F. Kennedy Building, for example—featurelessly grim on the outside, infuriatingly unnavigable on the inside—is where, among other things, terrified immigrants attend their deportation hearings, and where traumatized veterans come to apply for benefits. Such an inhospitable building sends a very clear message, which is: the government wants its lowly supplicants to feel confused, alienated, and afraid.

The fact is, contemporary architecture gives most regular humans the heebie-jeebies. Try telling that to architects and their acolytes, though, and you’ll get an earful about why your feeling is misguided, the product of some embarrassing misconception about architectural principles. One defense, typically, is that these eyesores are, in reality, incredible feats of engineering. After all, “blobitecture”—which, we regret to say, is a real school of contemporary architecture—is created using complicated computer-driven algorithms! You may think the ensuing blob-structure looks like a tentacled turd, or a crumpled kleenex, but that’s because you don’t have an architect’s trained eye.

Another thing you will often hear from design-school types is that contemporary architecture is honest. It doesn’t rely on the forms and usages of the past, and it is not interested in coddling you and your dumb feelings. Wake up, sheeple! Your boss hates you, and your bloodsucking landlord too, and your government fully intends to grind you between its gears. That’s the world we live in! Get used to it! Fans of Brutalism—the blocky-industrial-concrete school of architecture—are quick to emphasize that these buildings tell it like it is, as if this somehow excused the fact that they look, at best, dreary, and, at worst, like the headquarters of some kind of post-apocalyptic totalitarian dictatorship.

Let’s be really honest with ourselves: a brief glance at any structure designed in the last 50 years should be enough to persuade anyone that something has gone deeply, terribly wrong with us. Some unseen person or force seems committed to replacing literally every attractive and appealing thing with an ugly and unpleasant thing. The architecture produced by contemporary global capitalism is possibly the most obvious visible evidence that it has some kind of perverse effect on the human soul. Of course, there is no accounting for taste, and there may be some among us who are naturally deeply disposed to appreciate blobs and blocks. But polling suggests that devotees of contemporary architecture are overwhelmingly in the minority: aside from monuments, few of the public’s favorite structures are from the postwar period. (When the results of the poll were released, architects harrumphed that it didn’t “reflect expert judgment” but merely people’s “emotions,” a distinction that rather proves the entire point.) And when it comes to architecture, as distinct from most other forms of art, it isn’t enough to simply shrug and say that personal preferences differ: where public buildings are concerned, or public spaces which have an existing character and historic resonances for the people who live there, to impose an architect’s eccentric will on the masses, and force them to spend their days in spaces they find ugly and unsettling, is actually oppressive and cruel.

The politics of this issue, moreover, are all upside-down. For example, how do we explain why, in the aftermath of the Grenfell Tower tragedy in London, more conservative commentators were calling for more comfortable and home-like public housing, while left-wing writers staunchly defended the populist spirit of the high-rise apartment building, despite ample evidence that the majority of people would prefer not to be forced to live in or among such places? Conservatives who critique public housing may have easily-proven ulterior motives, but why so many on the left are wedded to defending unpopular schools of architectural and urban design is less immediately obvious.

There have, after all, been moments in the history of socialism—like the Arts & Crafts movement in late 19th-century England—where the creation of beautiful things was seen as part and parcel of building a fairer, kinder world. A shared egalitarian social undertaking, ideally, ought to be one of joy as well as struggle: in these desperate times, there are certainly more overwhelming imperatives than making the world beautiful to look at, but to decline to make the world more beautiful when it’s in your power to do so, or to destroy some beautiful thing without need, is a grotesque perversion of the cooperative ideal. This is especially true when it comes to architecture. The environments we surround ourselves with have the power to shape our thoughts and emotions. People trammeled in on all sides by ugliness are often unhappy without even knowing why. If you live in a place where you are cut off from light, and nature, and color, and regular communion with other humans, it is easy to become desperate, lonely, and depressed. The question is: how did contemporary architecture wind up like this? And how can it be fixed?

For about 2,000 years, everything human beings built was beautiful, or at least unobjectionable. The 20th century put a stop to this, evidenced by the fact that people often go out of their way to vacation in “historic” (read: beautiful) towns that contain as little postwar architecture as possible. But why? What actually changed? Why does there seem to be such an obvious break between the thousands of years before World War II and the postwar period? And why does this seem to hold true everywhere? (...)

Architecture’s abandonment of the principle of “aesthetic coherence” is creating serious damage to ancient cityscapes. The belief that “buildings should look like their times” rather than “buildings should look like the buildings in the place where they are being built” leads toward a hodge-podge, with all the benefits that come from a distinct and orderly local style being destroyed by a few buildings that undermine the coherence of the whole. This is partly a function of the free market approach to design and development, which sacrifices the possibility of ever again producing a place on the village or city level that has an impressive stylistic coherence. A revulsion (from both progressives and capitalist individualists alike) at the idea of “forced uniformity” leads to an abandonment of any community aesthetic traditions, with every building fitting equally well in Panama City, Dubai, New York City, or Shanghai. Because decisions over what to build are left to the individual property owner, and rich people often have horrible taste and simply prefer things that are huge and imposing, all possibilities for creating another city with the distinctiveness of a Venice or Bruges are erased forever. (...)

How, then, do we fix architecture? What makes for a better-looking world? If everything is ugly, how do we fix it? Cutting through all of the colossally mistaken theoretical justifications for contemporary design is a major project. But a few principles may prove helpful.

by Brianna Rennix & Nathan J. Robinson, Current Affairs | Read more:
Image: uncredited
[ed. If I could, my house and everything in it would be designed Art Deco.]

The Melancholy of Subculture Society

If you crack open some of the mustier books about the Internet - you know the ones I’m talking about, the ones which invoke Roland Barthes and discuss the sexual transgressing of MUDs - one of the few still relevant criticisms is the concern that the Internet by uniting small groups will divide larger ones.

SURFING ALONE

You may remember this as the Bowling Alone thesis applied to the Internet; it got some traction in the late 1990s. The basic idea is: electronic entertainment devices grow in sophistication and inexpensiveness as the years pass, until by the 1980s and 1990s, they have spread across the globe and have devoured multiple generations of children; these devices are more pernicious than traditional geeky fare inasmuch as they are often best pursued solo. Spending months mastering Super Mario Bros - all alone - is a bad way to grow up normal.

AND THEN THERE WERE NONE

The 4 or 5 person Dungeons & Dragons party (with a dungeon master) gives way to the classic arcade with its heated duels and one-upmanship; the arcade gives way to the flickering console in the bedroom with one playing Final Fantasy VII - alone. The increased graphical realism, the more ergonomic controllers, the introduction of genuinely challenging AI techniques… Trend after trend was rendering a human opponent unnecessary. And gamer after gamer was now playing alone.

Perhaps, the critic says, the rise of the Internet has ameliorated that distressing trend - the trends favored no connectivity at first, but then there was finally enough surplus computing power and bandwidth for massive connectivity to become the order of the day.

It is much more satisfactory and social to play MMORPGs on your PC than single-player RPGs, much more satisfactory to kill human players in Halo matches than alien AIs. The machines finally connect humans to humans, not human to machine. We’re forced to learn some basic social skills, to maintain some connections. We’re no longer retreating into our little cocoons, interacting with no humans.

WELCOME TO THE N.H.K.!

But, the critic continues, things still are not well. We are still alienated from one another. The rise of the connected machines still facilitates withdrawal and isolation. It presents the specter of the hikikomori - the person who ceases to exist in the physical realm as much as possible. It is a Japanese term, of course. They are 5 years further in our future than we are (or perhaps one should say, were). Gibson writes, back in 2001:
The Japanese seem to the rest of us to live several measurable clicks down the time line. The Japanese are the ultimate Early Adopters, and the sort of fiction I write behooves me to pay serious heed to that. If you believe, as I do, that all cultural change is essentially technologically driven, you pay attention to the Japanese. They’ve been doing it for more than a century now, and they really do have a head start on the rest of us, if only in terms of what we used to call future shock (but which is now simply the one constant in all our lives).
Gibson also discusses the Mobile Girl and text messaging; that culture began really showing up in America around 2005 - Sidekicks, Twitter, etc. You can do anything with a cellphone: order food, do your job, read & write novels, maintain a lively social life, engage in social status envy (She has a smaller phone, and a larger collection of collectibles on her cellphone strap! OMG!)… Which is just another way of saying You can do anything without seeing people, just by writing digital messages. (And this in a country with one of the most undigitizable writing systems in existence! Languages are not created equal.)

The hikikomori withdraws from all personal contact. The hikikomori does not hang out at the local pub, swilling down the brewskis as everyone cheers on the home team. The hikikomori is not gossiping at the rotary club nor with the Lions or mummers or Veterans or Knights. Hikikomoris do none of that. They aren’t working, they aren’t hanging out with friends.
The Paradoxical solitude and omnipotence of the otaku, the new century’s ultimate enthusiast: the glory and terror inherent of the absolute narrowing of personal bandwidth. –William Gibson, Shiny balls of Mud (TATE 2002)
So what are they doing with their 16 waking hours a day?

OPTING OUT

But it’s better for us not to know the kinds of sacrifices the professional-grade athlete has made to get so very good at one particular thing…the actual facts of the sacrifices repel us when we see them: basketball geniuses who cannot read, sprinters who dope themselves, defensive tackles who shoot up with bovine hormones until they collapse or explode. We prefer not to consider closely the shockingly vapid and primitive comments uttered by athletes in postcontest interviews or to consider what impoverishments in one’s mental life would allow people actually to think the way great athletes seem to think. Note the way up close and personal profiles of professional athletes strain so hard to find evidence of a rounded human life – outside interests and activities, values beyond the sport. We ignore what’s obvious, that most of this straining is farce. It’s farce because the realities of top-level athletics today require an early and total commitment to one area of excellence. An ascetic focus. A subsumption of almost all other features of human life to one chosen talent and pursuit. A consent to live in a world that, like a child’s world, is very small…[Tennis player Michael] Joyce is, in other words, a complete man, though in a grotesquely limited way…Already, for Joyce, at twenty-two, it’s too late for anything else; he’s invested too much, is in too deep. I think he’s both lucky and unlucky. He will say he is happy and mean it. Wish him well. –David Foster Wallace, The String Theory (July 1996 Esquire)
They’re not preoccupied with our culture - they’re participating in their own subculture. It’s the natural progression of the otaku. They are fighting on Azeroth, or fiercely pursuing their dojinshi career, or… There are many subcultures linked and united by the Internet, for good and ill. For every charitable or benevolent subculture (e.g. free software) there is one of mixed benefits (World of Warcraft), and one outright harmful (e.g. fans of eating disorders, child pornography).

The point the critic wants to make is that life is short and a zero-sum game. You lose a third of the day to sleep, another third to making a living, and now you’ve little left. To be really productive, you can’t divide your energies across multiple cultures - you can’t be truly successful in mainstream culture, and at the same time be able to devote enough effort in the field of, say, mechanical models, to be called an Otaking. A straddler takes onto his head the overhead of learning and participating in both, and receives no benefits (he will suffer socially in the esteem of the normals, and will be able to achieve little in his hobby due to lack of time and a desire to not go overboard).

The otaku & hikikomori recognizes this dilemma and he chooses - to reject normal life! He rejects life in the larger culture for his subculture. It’s a simple matter of comparative advantage; it’s easier to be a big fish in a small pond than in a large one.

THE BIGGER SCREEN

Have you ever woken up from a dream that was so much more pleasant than real life that you wish you could fall back to sleep and return to the dream?…For some, World of Warcraft is like a dream they don’t have to wake up from - a world better than the real world because their efforts are actually rewarded –[Half Sigma, Status, masturbation, wasted time, and WoW]
EVE Online is unique in gaming in that we have always played on the same massive server in the same online universe since May 2003 when it first went live. We not only understand the harsh penalties for failure, but also how longevity and persistence is rewarded with success. When you have over 60,000 people on weekends dealing, scheming, and shooting each other it attracts a certain type of gamer. It’s not a quick fix kind of game. We enjoy building things that last, be they virtual spaceships or real life friendships that together translate into massive Empires and enduring legacies. Those of us who play understand that one man really can truly make a difference in our world. –Mark Seleene Heard, Vile Rat eulogy 2012
As ever more opt out, the larger culture is damaged. The culture begins to fragment back into pieces. The disconnect can be profound; an American anime geek has more in common with a Japanese anime geek (who is of a different ethnicity, a different culture, a different religion, a different language…) than he does with an American involved in the evangelical Christian subculture. There is essentially no common ground - our 2 countrymen probably can’t even agree on objective matters like governance or evolution!

With enough of these gaps, where is American or French culture? Such cultural identities take centuries to coalesce - France did not speak French until the 1900s (as The Discovery of France recounts), and Han China is still digesting & assimilating its many minorities & outlying regions. America, of course, had it easy in starting with a small founder population which could just exterminate the natives.

The national identity fragments under the assault of burgeoning subcultures. At last, the critic beholds the natural endpoint of this process: the nation is some lines on a map, some laws you follow. No one particularly cares about it. The geek thinks, Meh: here, Canada, London, Japan, Singapore - as long as FedEx can reach me and there’s a good Internet connection, what’s the difference? (Nor are the technically-inclined alone in this.)

You can test this yourself. Tell yourself - The country I live in now is the best country in the world for people like me; I would be terribly unhappy if I was exiled. If your mental reply goes something like, Why, what’s so special about the USA? It’s not particularly economically or politically free, it’s not the only civilized English-speaking country, it’s not the wealthiest…, then you are headed down the path of opting out.

This is how the paradox works: the Internet breaks the larger culture by letting members flee to smaller subcultures. And the critics think this is bad. They like the broader culture, they agree with Émile Durkheim about atomization and point to examples like South Korea, and deep down, furries and latex fetishists really bother them. They just plain don’t like those deviants.

BUT I CAN GET A HIGHER SCORE!

In the future, everyone will be world-famous for 15 minutes.

Let’s look at another angle.

MONOCULTURE

Irony has only emergency use. Carried over time, it is the voice of the trapped who have come to enjoy their cage.
One can’t opt out of culture. There is no view from nowhere. To a great extent, we are our cultural artifacts - our possessions, our complexes of memes, our habits and objects of disgust are all cultural. You are always part of a culture.

Suppose there were only 1 worldwide culture, with no subcultures. The overriding obsession of this culture will be… let’s make it money. People are absolutely obsessed with money - how it is made, acquired, degraded, etc. More importantly, status is defined just by how much you have earned in your life; in practice, tie-breakers include how fast you made it, what circumstances you made it in (everyone admires a person who became a billionaire in a depression more than a good-times billionaire, in the same way we admire the novelist in the freezing garret more than the comfortable academic), and so on.

This isn’t too absurd a scenario: subjects feed on themselves and develop details and complexity as effort is invested in them. Money could well absorb the collective efforts of 7 billion people - already many people act just this way.

But what effect does this have on people? I can tell you: the average person is going to be miserable. If everyone genuinely buys into this culture, then they have to be. Their talents at piano playing, or cooking, or programming, or any form of artistry or scholarly pursuit are denigrated and count for naught. The world has become too big - it did not use to be so big, nor people so powerless over what is going on:
“Society is composed of persons who cannot design, build, repair, or even operate most of the devices upon which their lives depend…In the complexity of this world people are confronted with extraordinary events and functions that are literally unintelligible to them. They are unable to give an adequate explanation of man-made phenomena in their immediate experience. They are unable to form a coherent, rational picture of the whole.
Under the circumstances, all persons do, and indeed must, accept a great number of things on faith…Their way of understanding is basically religious, rather than scientific; only a small portion of one’s everyday experience in the technological society can be made scientific…The plight of members of the technological society can be compared to that of a newborn child. Much of the data that enters its sense does not form coherent wholes. There are many things the child cannot understand or, after it has learned to speak, cannot successfully explain to anyone…Citizens of the modern age in this respect are less fortunate than children. They never escape a fundamental bewilderment in the face of the complex world that their senses report. They are not able to organize all or even very much of this into sensible wholes….”
You can’t make a mark on it unless there are almost as many ways to make marks as there are persons.

To put it another way: women suffer enough from comparing themselves to media images. If you want a vision of this future, imagine everyone being an anorexic teenager who hates her body - forever.

We all value social esteem. We need to know somebody thinks well of us. We’re tribal monkeys; ostracism means death.
Jaron Lanier: I’d like to hypothesize one civilizing force, which is the perception of multiple overlapping hierarchies of status. I’ve observed this to be helpful in work dealing with rehabilitating gang members in Oakland. When there are multiple overlapping hierarchies of status there is more of a chance of people not fighting their superior within the status chain. And the more severe the imposition of the single hierarchy in people’s lives, the more likely they are to engage in conflict with one another. Part of America’s success is the confusion factor of understanding how to assess somebody’s status. 
Steven Pinker: That’s a profound observation. There are studies showing that violence is more common when people are confined to one pecking order, and all of their social worth depends on where they are in that hierarchy, whereas if they belong to multiple overlapping groups, they can always seek affirmations of worth elsewhere. For example, if I do something stupid when I’m driving, and someone gives me the finger and calls me an asshole, it’s not the end of the world: I think to myself, I’m a tenured professor at Harvard. On the other hand, if status among men in the street was my only source of worth in life, I might have road rage and pull out a gun. Modernity comprises a lot of things, and it’s hard to tease them apart. But I suspect that when you’re not confined to a village or a clan, and you can seek your fortunes in a wide world, that is a pacifying force for exactly that reason. 
Think of the people you know. How many of them can compete on purely financial grounds? How many can compare to the chimps at the top of the financial heap without feeling like an utter failure, a miserable loser? Not many. I can’t think of anyone I know who wouldn’t be at least a little unhappy. Some of them are pretty well off, but it’s awfully hard to compare with billionaires in their department. There’s no way to prove that this version of subcultures is the right one (perhaps fragmenting the culture fragments the possible status), but when I look at simple models, this version seems plausible to me and to explain some deep trends like monogamy.
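The “simple models” gestured at here can be made concrete. Below is a hypothetical sketch of my own construction (not the author’s actual model): each agent draws an independent percentile rank in each of k status hierarchies, and their felt status is the best rank they hold anywhere. With one global pecking order the average agent sits at the 50th percentile by definition; with several overlapping hierarchies, almost everyone is near the top of something.

```python
import random

def mean_best_percentile(n_agents: int = 20000, k_hierarchies: int = 5,
                         seed: int = 0) -> float:
    """Each agent draws an independent percentile in each of k status
    hierarchies; their felt status is the best rank they hold anywhere."""
    rng = random.Random(seed)
    best = [max(rng.random() for _ in range(k_hierarchies))
            for _ in range(n_agents)]
    return sum(best) / n_agents

# One global pecking order: the average agent is, by definition, average.
print(round(mean_best_percentile(k_hierarchies=1), 2))  # ~0.50
# Five overlapping hierarchies: the average agent's best standing is ~5/6.
print(round(mean_best_percentile(k_hierarchies=5), 2))  # ~0.83
```

The expected best of k independent uniform percentiles is k/(k+1), so each additional hierarchy raises the average person’s best standing, with diminishing returns.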

SUBCULTURES SET YOU FREE
If you compare yourself with others, you may become vain or bitter, for always there will be greater and lesser persons than yourself. Enjoy your achievements as well as your plans. Keep interested in your own career, however humble; it is a real possession in the changing fortunes of time.
Having a society in which an artist can mingle as a social equal with the billionaire, the Nobel scientist, and the philanthropist is fundamental to our mental health! If I’m a programmer, I don’t need to be competing with 7 billion people, and the few hundred billionaires, for self-esteem. I can just consider the computing community. Better yet, I might only have to consider the functional programming community, or perhaps just the Haskell programming community. Or to take another example: if I decide to commit to the English Wikipedia subculture, as it were, instead of American culture, I am no longer mentally dealing with 300 million competitors and threats; I am dealing with just a few thousand.

It is a more manageable tribe. It’s closer to the Dunbar number, which still applies online. Even if I’m on the bottom of the Wikipedia heap, that’s fine. As long as I know where I am! I don’t have to be a rich elite to be happy; a master craftsman is content, and a cat may look at a king.

Leaving a culture, and joining a subculture, is a way for the monkey mind to cope with the modern world.

by Gwern Branwen, Gwern.net | Read more:
[ed. Damn. Sometimes I stumble across a site that's just, indescribable... if you're up for taking a deep dive down the rabbit hole into some weird and exhilarating essays, check this out.]

Monday, October 30, 2017

Neoliberalism 101

Every Successful Relationship is Successful for the Same Exact Reasons

Hey, guess what? I got married two weeks ago. And like most people, I asked some of the older and wiser folks around me for a couple quick words of advice from their own marriages to make sure my wife and I didn’t shit the (same) bed. I think most newlyweds do this, especially after a few cocktails from the open bar they just paid way too much money for.

But, of course, not being satisfied with just a few wise words, I had to take it a step further.

See, I have access to hundreds of thousands of smart, amazing people through my site. So why not consult them? Why not ask them for their best relationship/marriage advice? Why not synthesize all of their wisdom and experience into something straightforward and immediately applicable to any relationship, no matter who you are?

Why not crowdsource THE ULTIMATE RELATIONSHIP GUIDE TO END ALL RELATIONSHIP GUIDES™ from the sea of smart and savvy partners and lovers here?

So, that’s what I did. I sent out the call the week before my wedding: anyone who has been married for 10+ years and is still happy in their relationship, what lessons would you pass down to others if you could? What is working for you and your partner? And if you’re divorced, what didn’t work previously?

The response was overwhelming. Almost 1,500 people replied, many of whom sent in responses measured in pages, not paragraphs. It took almost two weeks to comb through them all, but I did. And what I found stunned me…

They were incredibly repetitive.

That’s not an insult or anything. Actually, it’s kind of the opposite. These were all smart and well-spoken people from all walks of life, from all around the world, all with their own histories, tragedies, mistakes, and triumphs…

And yet they were all saying pretty much the same dozen things.

Which means that those dozen or so things must be pretty damn important… and more importantly, they work.

Here’s what they are: (...)

The most important factor in a relationship is not communication, but respect

What I can tell you is the #1 thing, most important above all else is respect. It’s not sexual attraction, looks, shared goals, religion or lack of, nor is it love. There are times when you won’t feel love for your partner. That is the truth. But you never want to lose respect for your partner. Once you lose respect you will never get it back. 
– Laurie
As we scanned through the hundreds of responses we received, my assistant and I began to notice an interesting trend.

People who had been through divorces and/or had only been with their partners for 10-15 years almost always talked about communication being the most important part of making things work. Talk frequently. Talk openly. Talk about everything, even if it hurts.

And there is some merit to that (which I’ll get to later).

But we noticed that the thing people with marriages going on 20, 30, or even 40 years talked about most was respect.

My sense is that these people, through sheer quantity of experience, have learned that communication, no matter how open, transparent and disciplined, will always break down at some point. Conflicts are ultimately unavoidable, and feelings will always be hurt.

And the only thing that can save you and your partner, that can cushion you both against the hard landing of human fallibility, is an unerring respect for one another, the fact that you hold each other in high esteem, believe in one another—often more than you each believe in yourselves—and trust that your partner is doing his/her best with what they’ve got.

Without that bedrock of respect underneath you, you will doubt each other’s intentions. You will judge their choices and encroach on their independence. You will feel the need to hide things from one another for fear of criticism. And this is when the cracks in the edifice begin to appear.
My husband and I have been together 15 years this winter. I’ve thought a lot about what seems to be keeping us together, while marriages around us crumble (seriously, it’s everywhere… we seem to be at that age). The one word that I keep coming back to is “respect.” Of course, this means showing respect, but that is too superficial. Just showing it isn’t enough. You have to feel it deep within you. I deeply and genuinely respect him for his work ethic, his patience, his creativity, his intelligence, and his core values. From this respect comes everything else—trust, patience, perseverance (because sometimes life is really hard and you both just have to persevere). I want to hear what he has to say (even if I don’t agree with him) because I respect his opinion. I want to enable him to have some free time within our insanely busy lives because I respect his choices of how he spends his time and who he spends time with. And, really, what this mutual respect means is that we feel safe sharing our deepest, most intimate selves with each other. 
– Nicole
You must also respect yourself. Just as your partner must also respect himself or herself. Because without that self-respect, you will not feel worthy of the respect afforded by your partner. You will be unwilling to accept it and you will find ways to undermine it. You will constantly feel the need to compensate and prove yourself worthy of love, which will just backfire.

Respect for your partner and respect for yourself are intertwined. As a reader named Olov put it, “Respect yourself and your wife. Never talk badly to or about her. If you don’t respect your wife, you don’t respect yourself. You chose her—live up to that choice.” (...)

Respect goes hand-in-hand with trust. And trust is the lifeblood of any relationship (romantic or otherwise). Without trust, there can be no sense of intimacy or comfort. Without trust, your partner will become a liability in your mind, something to be avoided and analyzed, not a protective homebase for your heart and your mind.

by Mark Manson, Quartz |  Read more:
Image: Reuters/Lucy Nicholson
[ed. Good advice. I'd post this once a month if I could only remember...]

The Infinite Suburb Is an Academic Joke

The elite graduate schools of urban planning have yet another new vision of the future. Lately, they see a new-and-improved suburbia—based on self-driving electric cars, “drone deliveries at your doorstep,” and “teardrop-shaped one-way roads” (otherwise known as cul-de-sacs)—as the coming sure thing. It sounds suspiciously like yesterday’s tomorrow, the George Jetson utopia that has been the stock-in-trade of half-baked futurism for decades. It may be obvious that for some time now we have lived in a reality-optional culture, and it’s vividly on display in the cavalcade of techno-narcissism that passes for thinking these days in academia.

Exhibit A is an essay that appeared last month in The New York Times Magazine titled “The Suburb of the Future is Almost Here,” by Alan M. Berger of the MIT urban design faculty and author of the book Infinite Suburbia—on the face of it a perfectly inane notion. The subtitle of his Times Magazine piece argued that “Millennials want a different kind of suburban development that is smart, efficient, and sustainable.”

Note the trio of clichés at the end, borrowed from the lexicon of the advertising industry. “Smart” is a meaningless anodyne that replaces the worn-out tropes “deluxe,” “super,” “limited edition,” and so on. It’s simply meant to tweak the reader’s status consciousness. Who wants to be dumb?

“Efficient” and “sustainable” are actually at odds. The combo ought to ring an alarm bell for anyone tasked with designing human habitats. Do you know what “efficient” gets you in terms of ecology? Monocultures, such as GMO corn grown on sterile soil mediums jacked with petroleum-based fertilizers, herbicides, and fast-depleting fossil aquifer water. It’s a method that is very efficient for producing corn flakes and Cheez Doodles, but has poor prospects for continuing further into this century—as does conventional suburban sprawl, as we’ve known it. Efficiency in ecological terms beats a path straight to entropy and death.

Real successful ecologies, on the other hand, are the opposite of efficient. They are deeply redundant. They are rich in diverse species and functions, many of which overlap and duplicate, so that a problem with one failed part or one function doesn’t defeat the whole system. This redundancy is what makes them resilient and sustainable. Swamps, prairies, and hardwood forests are rich and sustainable ecologies. Monocultures, such as agri-biz style corn crops and “big box” retail monopolies are not sustainable and they’re certainly not even ecologies, just temporary artifacts of finance and engineering. What would America do if Walmart went out of business? (And don’t underestimate the possibility as geopolitical tension and conflict undermine global supply lines.)

Suburbia of the American type is composed of monocultures: residential, commercial, industrial, connected by the circulatory system of cars. Suburbia is not a sustainable human ecology. Among other weaknesses, it is fatally prone to Liebig’s “law of the minimum,” which states that the overall health of a system depends on the amount of the scarcest of the essential resources that is available to it. This ought to be self-evident to an urbanist, who must ipso facto be a kind of ecologist.
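Liebig’s law is simple enough to state as code. A minimal sketch (the resource names and numbers below are illustrative inventions of mine, not figures from the essay):

```python
def liebig_capacity(resources: dict) -> float:
    """Liebig's law of the minimum: a system's carrying capacity is
    capped by its scarcest essential input, not the average or total."""
    return min(resources.values())

# Hypothetical relative abundance of inputs suburbia depends on (0 to 1).
suburb = {"cheap_oil": 0.2, "road_capacity": 0.9, "credit": 0.8, "land": 1.0}
print(liebig_capacity(suburb))  # 0.2 -- abundant land and roads don't help
```

The point of the min() rather than an average: no surplus of the plentiful inputs can compensate for the one that runs short.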

Yet techno-narcissists such as MIT’s Berger take it as axiomatic that innovation of-and-by itself can overcome all natural limits on a planet with finite resources. They assume the new-and-improved suburbs will continue to run on cars, only now they will be driverless and electric, and everything in their paradigm follows from that.

I don’t think so. Like it or not, the human race has not yet found a replacement for fossil fuels, especially oil, which has been the foundation of techno-industrial economies for a hundred years, and it is getting a little late in the game to imagine an orderly segue to some as-yet-undiscovered energy regime.

By the way, electricity is not an energy source. It is just a carrier of energy generated in power plants. We have produced large quantities of it at the grand scale using fossil fuels, hydropower, and nuclear fission (which is dependent on fossil fuels to operate). And, by the way, all of our nuclear power plants are nearing the end of their design life, with no plans or prospects for them to be replaced by new ones. We have maxed out on potential hydroelectric sites and the existing big ones are silting up, which will take them out of service inside of this century.

Electricity can also be produced by solar cells and wind turbines, but at nowhere near the scale necessary, on their own, for running contemporary American life. The conceit that we can power suburbia, the interstate highway system, truck-based distribution networks, commercial aviation, the U.S. military, and Walt Disney World on anything besides fossil fuels is going to leave a lot of people very disappointed.

The truth is that we have been running all this stuff on an extravagant ramp-up of debt for at least a decade to compensate for the troubles that exist in the oil industry, oil being the primary and indispensable resource for our way of life. These troubles are often lumped under the rubric peak oil, but the core of the trouble must be seen a little differently: namely, a steep decline in the Energy Return on Investment (EROI) across the oil industry. The phrase might seem abstruse on the face of it. It means simply that it is becoming uneconomical to extract oil from the ground, even with the so-called miracle of “fracking” shale oil deposits. It doesn’t pay for itself, and the EROI is still headed further down. (...)
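EROI is just a ratio of energy out to energy in, and the fraction of gross energy left over for everything else is 1 - 1/EROI. A toy calculation (round numbers chosen for illustration, not industry data) shows why a falling EROI bites harder the lower it goes:

```python
def net_energy_fraction(eroi: float) -> float:
    """EROI = energy_out / energy_in, so extraction itself consumes
    1/EROI of gross output; the remainder is net energy for society."""
    return 1.0 - 1.0 / eroi

for eroi in (50, 20, 10, 5, 2):
    print(f"EROI {eroi:>2}: {net_energy_fraction(eroi):.0%} of gross energy is net")
```

Falling from an EROI of 50 to 20 costs society only a few percent of gross output, but falling from 5 to 2 cuts the net share from 80% to 50%: the curve is flat at the top and a cliff at the bottom.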

The world’s major oil companies are cannibalizing themselves to stay in business, with balance sheets cratering, and next-to-zero new oil fields being discovered. The shale oil producers haven’t made a net dime since the project got ramped up around 2005. Their activities have been financed on junk lending made possible by arbitrages on the near-zero Fed fund rate, itself an historical abnormality. The shale-oil drillers are producing all out to service their loans, and have thus driven down oil prices, negating their profit. Low oil prices are not the sign of a healthy industry but of a failing industrial economy, the latter currently expressing itself in a sinking middle class and the election of Donald Trump.

All the techno-grandiose wishful thinking in the world does not alter this reality. The intelligent conclusion from all this ought to be obvious: Restructuring the American living arrangement to something other than “infinite” suburban sprawl based on limitless car dependency.

As it happens, the New Urbanist movement recognized this dynamic beginning in the early 1990s and proposed a return to traditional walkable neighborhoods, towns, and cities as the remedy. It has been a fairly successful reform effort, with hundreds of municipal land-use codes rewritten to avert the inevitable suburban sprawl mandates of the old codes. The movement also produced hundreds of new town projects all over the country to demonstrate that good urbanism was possible in new construction, as well as downtown makeovers in places earlier left for dead like Providence, Rhode Island, and Newburgh, New York.

When the elite graduate schools finally noticed the New Urbanism movement, it provoked extreme jealousy and hostility because they hadn’t thought of it themselves—it was a product of the property-development industry. Harvard’s Graduate School of Design, in particular, had been lost for decades in raptures of Buck Rogers modernism, concerned solely with “cutting edge” aesthetics—that is, architectural fashion statements aimed at status seeking. They affected to be offended by the retrograde front porches and picket fences of the New Urbanists, but they were unable to develop any coherent alternative vision of a plausible future urbanism—because there really wasn’t one.

Instead, around 2002 Harvard came up with a loopy program they called “Landscape Urbanism,” which was a half-baked revision of Ian McHarg’s old Design with Nature idea from the 1970s. Design with Nature had spawned hundreds of PUDs (Planned Unit Developments) of single-family houses nestled in bosky, natural settings and sheathed in environmental-looking cedar, and scores of university housing “complexes” bermed into the terrain (with plenty of free parking). Mostly, McHarg’s methodology was concerned with managing water runoff. It did not result in holistic towns, neighborhoods, or cities.

The projects of so-called Landscape Urbanism were not about buildings, and especially the relationship between buildings, other buildings, and the street. They viewed suburbia as a nirvana that simply required better storm-water drainage and the magic elixir of “edginess” to improve its long-term prospects. (...)

Berger’s P-Rex lab showed absolutely no interest in the particulars of traditional urban design: street-and-block grids, street and building typologies, code-writing for standards and norms in construction, et cetera. They showed no interest in the human habitat per se. Berger and his gang were simply promoting a fantasy they called the “global suburbia.” Their fascination with the suburbs rested on three pillars: 1) the fact that suburbia was already there; 2) the presumption that mass car use would continue to enable that settlement pattern; and 3) a religious faith in technological deliverance from the resource and capital limits that boded darkly for the continuation of suburban sprawl.

I will tell you without ceremony what the future actually holds for the inhabited terrain of North America. The big cities will have to contract severely and the process will be fraught and disorderly. The action will move to the small cities and small towns, especially the places that have a meaningful relationship with farming, food production, and the continent’s inland waterways. The suburbs have three destinies, none of them mutually exclusive: slums, salvage, and ruins. The future has mandates of its own. If we want to remain civilized, we will be compelled to return to a landscape composed of relationships between town and country, at a scale that comports with the resource realities of the future.

by James Howard Kunstler, American Conservative | Read more:
Image: “The Jetsons” (Warner Bros. publicity)