Tuesday, October 29, 2019

‘So Alien! So Other!’

How Western TV Gets Japanese Culture Wrong

“It just feels so alien! So other! So extraordinarily strange!” So said Sue Perkins as she walked across Tokyo’s most crowded zebra crossing in the opening sequence of her travelogue. But shouldn’t this all be more familiar by now?

After all, BBC One’s Japan With Sue Perkins, which aired last month, was only the latest in a long run of British TV programmes inviting us to boggle at the east Asian country. These shows always feature a shot of the aforementioned Shibuya Crossing, items on AI and sumo wrestling, and a concerned interview with an undersexed young man (sometimes called otaku) and/or an overexcited young woman (something to do with kawaii). Only rarely do they offer fresh insight.

At least the upcoming Queer Eye: We’re in Japan! on Netflix and BBC Two’s drama Giri/Haji have obeyed the most basic rule of making British TV about Japan: don’t name it after that slightly racist 1980s hit about masturbation. That’s where Channel 5’s Justin Lee Collins: Turning Japanese went wrong. Or rather, that was only the first of the many ways in which the since-disgraced comic’s 2011 travelogue erred.

TV’s other orientalist missteps are less daft, but more common. The premise of Queer Eye – five sophisticates make over a sad sack – puts Karamo Brown in danger of doing an accidental “Lawrence of Arabia” when he arrives in Tokyo. That is, updating the colonial yarn of the westerner who is eventually accepted by an alien community and then asserts his inherent superiority by embodying the culture better than the locals. Happily, Queer Eye has addressed that risk by including Kiko Mizuhara, a Japanese-American model and Tokyo resident, as its guide.

The illuminating presence of Mizuhara is, however, unusual. “British television programmes have a tendency to represent Japanese people as stereotypically odd or kooky, without explaining the cultural context,” says Professor Perry R Hinton, an expert in intercultural communication.

This kind of othering reveals a narrow-mindedness. As Shinichi Adachi, the Japanese-British film-maker behind the YouTube culinary series The Wagyu Show, explains, Japanese culture isn’t particularly strange, just more accepting of humanity’s strangeness. “They respect people, even if they don’t understand them. People don’t really care if others have weird hobbies.” (...)

“To attract viewers, it’s understandable,” says Chiho Aikman, of the Daiwa Anglo-Japanese Foundation in London, “but the reality of Japanese culture is quite different.” She suggests architecture, regional cuisine (“We don’t just eat sushi!”) and the spread of hate speech as topics that don’t get enough attention. Instead, shows about hikikomori (modern-day hermits) and 40-year-old virgins with huge hentai (manga/anime porn) collections give the impression that subcultures typify an entire nation. In truth, such selections often say more about the audience than they do about the subject. So, if anyone comes out of this looking like socially inadequate, culturally insular, sex-obsessed pervs, well, it’s not the Japanese, is it?

by Ellen E Jones, The Guardian | Read more:
Image: Composite: ITV; Alamy Stock Photo; BBC
[ed. See also: Can We Ever Make It Suntory Time Again? (Longreads).]

The Everything Bubble

I’m not going to call it “tech,” because most of the startups in that so-called tech space aren’t tech companies. They’re companies in mundane businesses. And many of these companies aren’t startups anymore but mature companies that have been in business for over a decade and now have tens of thousands of employees. And then there is the entire shale-oil and gas space that has turned the US into the largest oil and gas producer in the world.

They all share two things in common:
  • One, they’re fabulously efficient, finely tuned, and endlessly perfected cash-burn machines.
  • And two, investors in these companies count on new cash from new investors to bail out and remunerate the existing investors.
This scheme is a fundamental part of the Everything Bubble, and there is a huge amount of money involved, and it has a big impact on the real economy in cities where this phenomenon has boomed, and everyone loves it, until these hoped-for new investors start seeing the scheme as what it really is, and they’re suddenly reluctant to get cleaned out, and they refuse to bail out and remunerate existing investors. And suddenly the money runs out. Then what?

Calling these companies “tech” is a misnomer, designed to create hype about them and drive up their “valuations.” They engage in mundane activities such as leasing office space, running taxi operations, doing meal delivery, producing and selling fake-meat hamburgers and hot dogs, providing banking and brokerage services, providing real estate services, and renting personal transportation equipment, such as e-bikes and e-scooters.

And let’s just put this out there right now: e-scooters appeared in public for the first time in the late 1800s, along with electric cars and trucks.

Then there is the endless series of new social media platforms, in addition to the old social media platforms of Facebook, Twitter, WhatsApp, Instagram, and the like, where people post photos, videos, promos, and messages about whatever.

That’s the “tech” sphere mostly today.

There are some tech startups in that group, however. And that technology is about spying on Americans and others and datamining their personal events, purchases, and thoughts to be used by advertisers, government intelligence agencies, law enforcement agencies, political parties and candidates running for office, and whoever is willing to pay for it.

And there is some real tech work going on in the automation scene, which includes self-driving vehicles, but most of this work isn’t done by startups these days – though some of it is – but by big companies such as Google, big chipmakers such as Nvidia, and just about all global automakers.

And there is a slew of big publicly traded companies that stopped being startups years ago, that are burning huge amounts of cash to this day, and that need to constantly get even more cash from investors to have more fuel to burn. This includes Tesla, which succeeded in extracting another $2.7 billion in cash in early May from investors. Tesla duly rushed to burn this cash. And it includes Netflix, which extracted another $2.2 billion in April. From day one, these companies – just Netflix and Tesla – have burned tens of billions of dollars in cash and continue to do so, though they’re mature companies.

And it includes Uber, which received another $8 billion from investors during its IPO in May – cash it is now busy burning up in its cash-burn machine.

Don’t even get me started about the entire shale oil-and-gas space – though there is some real technology involved.

That entire space has burned a mountain of cash. Many of these shale oil companies are privately owned, including by private equity firms, and it’s hard to get cash-flow data on them. But just to get a feel for the magnitude: by sorting through 29 publicly traded shale oil companies, the Institute for Energy Economics and Financial Analysis found that from 2010 through 2018, they burned $181 billion in cash. In 2019, they’re burning an additional pile of cash because oil prices have plunged again. And shale drilling started on a large scale before 2010. Plus, there’s the cash burned by the privately held companies. So, the total cash burned is likely in the neighborhood of several hundred billion bucks.

These companies and industries are “disruptive.” They claim that they change, and some of them actually do change, the way things used to be done.

But they have not figured out how to have a self-sustaining business model, or how to actually make money doing it. It’s easy to quote-unquote “disrupt” an industry if you can lose billions of dollars a year, if you keep getting funded by new investors, while everyone else in this industry would go bankrupt and disappear if they used a similar business model.

The only reason these companies have had such growth is because investors didn’t care about the business model, profits, and positive cash flows. (...)

This scheme is a key feature of the Everything Bubble. And it has had a large impact on the real economy.

When a company has a negative cash flow, which these companies all do, it means that it spends more investor money in the real economy than it takes out. This acts like a massive stimulus of the local economy and even of the broader economy.

They’re paying wages, and these employees spend those wages on rent or house payments, on cars, electronics, food, craft beer, shoes, and they become bank customers, buy insurance, go to restaurants, and pay taxes at every twist and turn. Few of those employees end up saving much. Most of them spend most of their wages, and this money goes to other companies and their employees, and it gets recycled over and over again, allowing for more hiring and more wages and more consumption to percolate through the economy.

Some of this money that is circulating comes from revenues, and is thereby extracted from the economy to be recycled. But the rest of the money – the amount that companies spend that exceeds their revenues, so the negative cash flow – comes from investors. And this is pure stimulus.

This is how the $10 billion that Softbank sank into WeWork was and will be recycled via salaries and office leases and purchases, and via local taxes, and purchases of furniture and decorations and rehabbing offices whereby the money was recycled by construction crews and electricians and flooring suppliers. Softbank’s money was routed via WeWork into the various local economies where WeWork is active. And it helped pump up commercial real estate prices and office rents along the way.

The shale oil-and-gas sector spends a lot of the negative cash flow in the oil patch, but also in the locations where the equipment it buys is manufactured, such as sophisticated computer equipment, the latest drilling rigs, big generators, high-pressure pumps, and the like.

So an oil driller in Texas will transfer some investor money to manufacturers in distant cities. And the employees at these manufacturing plants buy trucks and boats and used cars, and they buy houses, and all kinds of stuff, and all of those hundreds of billions of dollars that investors plowed into the industry got transferred and recycled endlessly.

This is the multiplier effect of investors plowing their cash into money-losing negative cash-flow operations.
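
[ed. A back-of-the-envelope way to see this multiplier effect: assume each dollar injected gets partially re-spent, round after round. Below is a minimal sketch; the 0.7 re-spend rate is an invented assumption for illustration, not a figure from the article.]

```python
# Toy spending-multiplier sketch for the recycling described above.
# Assumption: each dollar that lands in the local economy is re-spent at a
# fixed rate ("respend_rate"); 0.7 is an invented example value.

def total_spending(initial_injection: float, respend_rate: float, rounds: int = 50) -> float:
    """Sum the successive rounds of re-spending triggered by one cash injection."""
    total = 0.0
    current = initial_injection
    for _ in range(rounds):
        total += current
        current *= respend_rate  # portion spent again in the next round
    return total

if __name__ == "__main__":
    injection = 10_000_000_000  # e.g. $10 billion of investor cash, as in the WeWork example
    print(f"${injection:,.0f} injected -> roughly ${total_spending(injection, 0.7):,.0f} in total spending")
    # Closed form for comparison: injection / (1 - respend_rate), about 3.3x the injection here
```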

So what happens when investors figure out that this money is gone, and that any new money they might give these companies will also be gone?

by Wolf Richter, Wolfstreet |  Read more:

Monday, October 28, 2019

Taylor Swift

Why We Can't Tell the Truth About Aging

Reading through a recent spate of books that deal with aging, one might forget that, half a century ago, the elderly were, as V. S. Pritchett noted in his 1964 introduction to Muriel Spark’s novel “Memento Mori,” “the great suppressed and censored subject of contemporary society, the one we do not care to face.” Not only are we facing it today; we’re also putting the best face on it that we possibly can. Our senior years are evidently a time to celebrate ourselves and the wonderful things to come: travelling, volunteering, canoodling, acquiring new skills, and so on. No one, it seems, wants to disparage old age. Nora Ephron’s “I Feel Bad About My Neck” tries, but is too wittily mournful to have real angst. Instead, we get such cheerful tidings as Mary Pipher’s “Women Rowing North: Navigating Life’s Currents and Flourishing as We Age,” Marc E. Agronin’s “The End of Old Age: Living a Longer, More Purposeful Life,” Alan D. Castel’s “Better with Age: The Psychology of Successful Aging,” Ashton Applewhite’s “This Chair Rocks: A Manifesto Against Ageism,” and Carl Honoré’s “Bolder: Making the Most of Our Longer Lives”—five chatty accounts meant to reassure us that getting old just means that we have to work harder at staying young. (...)

These authors aren’t blind to the perils of aging; they just prefer to see the upside. All maintain that seniors are more comfortable in their own skins, experiencing, Applewhite says, “less social anxiety, and fewer social phobias.” There’s some evidence for this. The connection between happiness and aging—following the success of books like Jonathan Rauch’s “The Happiness Curve: Why Life Gets Better After 50” and John Leland’s “Happiness Is a Choice You Make: Lessons from a Year Among the Oldest Old,” both published last year—has very nearly come to be accepted as fact. According to a 2011 Gallup survey, happiness follows the U-shaped curve first proposed in a 2008 study by the economists David Blanchflower and Andrew Oswald. They found that people’s sense of well-being was highest in childhood and old age, with a perceptible dip around midlife.

Lately, however, the curve has invited skepticism. Apparently, its trajectory holds true mainly in countries where the median wage is high and people tend to live longer or, alternatively, where the poor feel resentment more keenly during middle age and don’t mind saying so. But there may be a simpler explanation: perhaps the people who participate in such surveys are those whose lives tend to follow the curve, while people who feel miserable at seventy or eighty, whose ennui is offset only by brooding over unrealized expectations, don’t even bother to open such questionnaires.

One strategy of these books is to emphasize that aging is natural and therefore good, an idea that harks back to Plato, who lived to be around eighty and thought philosophy best suited to men of more mature years (women, no matter their age, could not think metaphysically). His most famous student, Aristotle, had a different opinion; his “Ars Rhetorica” contains long passages denouncing old men as miserly, cowardly, cynical, loquacious, and temperamentally chilly. (Aristotle thought that the body lost heat as it aged.) These gruff views were formed during the first part of Aristotle’s life, and we don’t know if they changed before he died, at the age of sixty-two. The nature-is-always-right argument found its most eloquent spokesperson in the Roman statesman Cicero, who was sixty-two when he wrote “De Senectute,” liberally translated as “How to Grow Old,” a valiant performance that both John Adams (dead at ninety) and Benjamin Franklin (dead at eighty-four) thought highly of.

Montaigne took a more measured view. Writing around 1580, he considered the end of a long life to be “rare, extraordinary, and singular . . . ’tis the last and extremest sort of dying: and the more remote, the less to be hoped for.” Montaigne, who never reached sixty, might have changed his mind upon learning that, in the twenty-first century, people routinely live into their seventies and eighties. But I suspect that he’d still say, “Whoever saw old age, that did not applaud the past, and condemn the present times?” No happiness curve for him.

There is, of course, a chance that you may be happier at eighty than you were at twenty or forty, but you’re going to feel much worse. (...)

In short, the optimistic narrative of pro-aging writers doesn’t line up with the dark story told by the human body. But maybe that's not the point. “There is only one solution if old age is not to be an absurd parody of our former life,” Simone de Beauvoir wrote in her expansive 1970 study “The Coming of Age,” “and that is to go on pursuing ends that give our existence a meaning—devotion to individuals, to groups, or to causes—social, political, intellectual, or creative work.” But such meaning is not easily gained. In 1975, Robert Neil Butler, who had previously coined the term “ageism,” published “Why Survive? Being Old in America,” a Pulitzer Prize-winning study of society’s dereliction toward the nation’s aging population. “For many elderly Americans old age is a tragedy, a period of quiet despair, deprivation, desolation and muted rage,” he concluded. (...)

A contented old age probably depends on what we were like before we became old. Vain, self-centered people will likely find aging less tolerable than those who seek meaning in life by helping others. And those fortunate enough to have lived a full and productive life may exit without undue regret. But if you’re someone who—oh, for the sake of argument—is unpleasantly surprised that people in their forties or fifties give you a seat on the bus, or that your doctors are forty years younger than you are, you just might resent time’s insistent drumbeat. Sure, there’s life in the old boy yet, but certain restrictions apply. The body—tired, aching, shrinking—now quite often embarrasses us. Many older men have to pee right after they pee, and many older women pee whenever they sneeze. Pipher and company might simply say “Gesundheit” and urge us on. Life, they insist, doesn’t necessarily get worse after seventy or eighty. But it does, you know. I don’t care how many seniors are loosening their bedsprings every night; something is missing.

It’s not just energy or sexual prowess but the thrill of anticipation. Even if you’re single, can you ever feel again the rush of excitement that comes with the first brush of the lips, the first moment when clothes drop to the floor? Who the hell wants to tear his or her clothes off at seventy-five? Now we dim the lights and fold our slacks and hope we don’t look too soft, too wrinkled, too old. Yes, mature love allows for physical imperfections, but wouldn’t we rather be desired for our beauty than forgiven for our flaws? These may seem like shallow regrets, and yet the loss of pleasure in one’s own body, the loss of pleasure in knowing that one’s body pleases others, is a real one.

I can already hear the objections: If my children are grown and happy; if my grandchildren light up when they see me; if I’m healthy and financially secure; if I’m reasonably satisfied with what I’ve accomplished; if I feel more comfortable now that I no longer have to prove myself—why, then, the loss of youth is a fair trade-off. Those are a lot of “if”s, but never mind. We should all make peace with aging. And so my hat is off to Dr. Oliver Sacks, who chose to regard old age as “a time of leisure and freedom, freed from the factitious urgencies of earlier days, free to explore whatever I wish, and to bind the thoughts and feelings of a lifetime together.” At eighty-two, he rediscovered the joy of gefilte fish, which, as he noted, would usher him out of life as it had ushered him into it.

“No wise man ever wished to be younger,” Swift asserted, never having met me. But this doesn’t mean that we have to see old age as something other than what it is. It may complete us, but in doing so it defeats us. “Life is slow dying,” Philip Larkin wrote before he stopped dying, at sixty-three—a truth that young people, who are too busy living, cavalierly ignore. Should it give them pause, they’ll discover that just about every book on the subject advocates a “positive” attitude toward aging in order to maintain a sense of satisfaction and to achieve a measure of wisdom. And yet it seems to me that a person can be both wise and unhappy, wise and regretful, and even wise and dubious about the wisdom of growing old.

by Arthur Krystal, New Yorker | Read more:
Image: Joost Swarte
[ed. See also: Put down the self-help books. Resilience is not a DIY endeavour (The Globe and Mail).]

Sunday, October 27, 2019

Hillary Clinton Spoils the Party

In the middle of October, Hillary Clinton managed to perform a minor political miracle. By baselessly speculating that Rep. Tulsi Gabbard, D-Hawaii, was a “favorite of the Russians” who was preparing to run as an independent, she revived one of the more quixotic, eccentric and moribund campaigns of this election cycle while spoiling a primary that has proved shockingly substantive for a major party in the United States.

Gabbard “clapped back,” tweeting that Hillary was “the queen of warmongers, embodiment of corruption, and personification of the rot that has sickened the Democratic Party.” The congresswoman then proceeded to parlay Clinton’s political anti-genius for hauling feckless enemies out of political obscurity and crowning them with a notoriety they’d never be able to achieve on their own, into a brief turn in the media spotlight. Gabbard even went on the eponymous Fox News show “Hannity,” which makes Tucker Carlson’s white power hour look like the School of Athens, to complain about her treatment by a woman she blames for the last two decades of American wars, and to echo Republican procedural complaints about the ongoing impeachment inquiry into Donald Trump.

Clinton’s record as secretary of state speaks for itself. Her avid cheerleading for the disastrous “intervention” in Libya alone should be tattooed on her forehead and carved onto her eventual monument as a warning to the next hundred generations. But Gabbard’s own anti-war bona fides are themselves questionable, appealing to suckers and desperate contrarians alike. Scratch the surface and her foreign policy reveals itself as little more than pre-Bush realpolitik, with a Kissinger-ian preference for an archipelago of U.S.-aligned strongman governments to keep the dual threats of “Islamic Terrorism” and pan-Arabism in line. That foreign policy includes robust American expeditionary forces and drone warfare capabilities to prosecute the so-called War on Terror.

Clinton, meanwhile, seems constitutionally incapable of letting go of the bogus narrative that she lost to Donald Trump in 2016 not because she ran a lousy campaign that couldn’t turn out the vote in critical states, but because of Jill Stein’s third-party run, which garnered less than one third of the votes of fellow third-party candidate Gary Johnson. Combined with the still-nebulous conspiracy of “Russian interference,” of which Jill Stein is and is not a part, depending on the theorist, this keeps getting Clinton in trouble.

Much like Trump himself, the Clintons have long surrounded themselves with a coterie of slavish hangers-on, so it follows that there is no one left in their inner circle to say, Mrs. Clinton, maybe you’d better not. Ironically, in picking this fight with Gabbard, Hillary could be recapitulating the very error that she and her husband made in 2015, when Bill infamously encouraged Trump to run as a Republican spoiler, inadvertently elevating the one character Hillary was least equipped to confront and defeat.

Gabbard is no Trump: she lacks his odious magnetism, his greedy horniness for fame and notoriety. And unlike Trump, for whom a tacky, gross American ordinariness is a huge part of his successful public charm, she is a genuine eccentric—a bundle of personal and political contradictions totally out of keeping with the aggressive someone-oughtta-do-something resentments of the angry America that elected our current president.

But Hillary Clinton is no Hillary Clinton; not anymore. And on the vastly diminished stage of Twitter spats and cable media hits, she cannot hope to win here. Even were she to manage to make some political enemy look small, she can only look smaller, this figure who could have retired to a life of philanthropy, for which she would have been feted by cultural tastemakers, and out of which she might have actually engendered the very sentiment for which she is so obviously and ineffectively clamoring now: a sentimental, hypothetical nostalgia for that which might have been had she won.

This makes all the more grotesquely poignant the recent New York Times report that a “half-dozen Democratic donors” had gathered in Manhattan at the Whitby Hotel, “a celebration of contemporary art and design . . . on the doorstep of some of New York’s leading restaurants, galleries and museums, including MoMA.” (Including MoMA! Lord save us from the Manhattan provincialism of the stupidly rich.) These donors were getting together to ask themselves seemingly the only question their wealth and privilege will allow them about the Democratic primary: “Is there anyone else?” Could they, in other words, draft some other centrist sucker into the race: the already-abandoned Howard Schultz? Former Attorney General Eric Holder? The perennial will-he/won’t-he billionaire, Michael Bloomberg? Hillary?

by Jacob Bacharach, Truthdig | Read more:
Image: Julio Cortez/AP
[ed. : )  Intramurals.]

What the End of Modern Philosophy Would Look Like

Philosophy is, no doubt, the slowest-moving branch of human inquiry. The best proof of this can be seen in its peculiar use of the word “modern.”

When musicians speak of “modern jazz,” they are generally referring to the emergence of bop in the 1940s, as in the music of Charlie Parker and Dizzy Gillespie, and thus to a period of music that is still honored today even if supplanted by later developments. Modern architecture looks further back in time, to the early-twentieth-century rejection of the Beaux-Arts and Neoclassical styles. Although modern architecture still produces original variants even now, there would be an inherent challenge in arguing that “modernism” is still an accurate description of that field today. Modern art goes back even further, and is often traced to Édouard Manet’s canvases of 1863. Here, there is wider consensus that modernism is dead, replaced by a “postmodern” period identified as running from the 1960s through the present.

Is modern philosophy a thing of the past, in the way that one might argue for modern jazz, modern architecture, and modern painting? It may be surprising for readers to learn that “modern philosophy” is taken to begin with René Descartes (1596-1650), who abandoned traditional Aristotelianism in favor of regrounding the discipline in the immediate evidence of the thinking human subject: “I think, therefore I am.” A nuance is usually added: Descartes and his fellow seventeenth-century thinkers are often qualified as “early moderns,” while modern philosophy proper (the kind that is still practiced today) is defined temporally by the ideas of the Scottish skeptic David Hume (1711-1776) and the pivotal German thinker Immanuel Kant (1724-1804). No philosopher is likely to be taken seriously if they attempt to return to the period prior to Hume. A case in point is the twentieth-century English philosopher Alfred North Whitehead (1861-1947), who, while widely respected as a mathematician, is not universally recognized as part of the canon of great philosophers. The reason for this can be found largely in his rejection of the basic presuppositions of Hume/Kant modernism. By and large, those who wish to be taken seriously in academic departments of philosophy need to accept these presuppositions.

This article takes a contrary view. As I see it, we are long overdue for a revision of what counts as an acceptable starting point for philosophy, and, therefore, for a decisive parting with modernism, which the other fields mentioned above completed as long as half a century ago.

The reason for Descartes’ famous principle (“I think, therefore I am”) was his wish to ground philosophy in a rigorous starting point worthy of mathematics or the natural sciences. To do this, he undertook his famous method of radical doubt. Am I really so sure that I am not dreaming or deluded at this very moment? This question has become the basis of much that popular culture considers “philosophical”: in films ranging from The Wizard of Oz to Fight Club to The Matrix, the philosophically minded director is thought to be one who challenges our commonsense notion of reality, to the point that this has become a cliché: “it was all just a dream.” But the science fiction writer J.G. Ballard claimed the opposite: given that we are now surrounded with fictions in everyday life (propaganda, advertisements, conspiracy theories), the role of the artist has been reversed, and should now involve creating realities able to hold together the many fictions that perplex us.

In any case, Descartes asks us to imagine the worst-case scenario of an evil God who deceives us about absolutely everything, so that even my body and the facts of my everyday life are “fake news.” But even under this nightmare scenario, Descartes held, I must still be thinking in order to be deceived. If I did not at least exist as a thinker, the evil God would have had no one to deceive. Therefore, to repeat: “I think, therefore I am.” From there, Descartes provides further arguments that strike most contemporary readers as more naïve, and, therefore, as not fully modern (but just “early modern”), whereas Hume and Kant can still be called modern in the full-fledged sense. Namely, Descartes said that since I have an idea of perfection in my mind but no experience of anything perfect, the idea of perfection must have been put in my mind from the outside. This must have been done by a perfect God, who (since He is perfect) could not be deceiving us constantly, and, as a result, we cannot be experiencing a world of sheer illusion. We do make many errors, of course, but for Descartes these errors result only from an improper use of our reasoning powers. If we use our reason correctly, the truth is well within our grasp. Aside from his philosophical work, Descartes was also a pioneer in the use of mathematical reasoning in physics, and is a key figure in the scientific revolution no less than in modern philosophy itself.

Hume and Kant strike us as more fully modern because they do not resort to the argument from God, but base their philosophies on the evidence of immediate experience. For Hume, all we see in experience are perceptions, qualities, or ideas, not objects. The apple sitting before me need not be an independent object called an “apple,” since all we really experience are its shifting qualities: red, ripe, spherical, shiny, and so forth. In other words, the apple is just a “bundle of qualities” rather than an object, and by analogy, what I take to be my mind is really just a “bundle of perceptions” with no evidence of an identical “soul” or even “mind” that persists from birth to death and beyond. By the same token, we have no evidence of causal relations between things outside the mind. Although it seems as if every time we touch fire it burns and hurts us—meaning that we would do well never to put our hands in fire—there is no way to prove that the next time we touch fire it will not freeze us instead. All we have is experience. Enduring objects, our own enduring minds, and the apparent causal powers of both, are known only from experience and cannot be established as existing outside it.

Kant wrote that Hume’s writings awoke him “from his dogmatic slumber.” Hume, he held, was basically right, but with disastrous consequences for human knowledge. Accordingly, Kant created a famous divide between what he called the thing-in-itself (the real world outside us) and appearances (the world as it seems in experience). Although he agreed with Hume that we could never prove the existence of individual objects or souls, let alone causal relations outside human experience, he denied that knowledge of such things was impossible. Instead, we merely need to accept that no knowledge is possible of the thing-in-itself outside experience, but that knowledge can be attained of the unvarying structures of human experience itself. Examples of such structures include the fact that time moves in one direction from past to future, that space is experienced as having exactly three dimensions, and that human understanding functions according to twelve “categories” which are so basic that we cannot even imagine a kind of experience that would not follow them. For Kant, cause and effect is one of these categories: perhaps in the world of the thing-in-itself things happen randomly, or do not happen at all, but for human experience it is an absolute law that everything happens according to a law of cause and effect.

Now, philosophers since then have by no means accepted everything said by these two thinkers. Many do not accept Hume’s notion that mathematics deals with logic alone, and practically no one accepts Kant’s idea of a thing-in-itself beyond all human experience. Nonetheless, Hume and Kant still feel like contemporaries, because nearly every self-respecting philosopher accepts the two pillars of philosophy borrowed from them.

I call these pillars “Taxonomical.” Usually, taxonomy refers to the classification of the various types of anything that exists: the numerous kinds of birds, fish, or berries there are in the world. But Taxonomy is very simple in the case of modern philosophy, which recognizes two, and only two, kinds of things: (1) human thought, and (2) everything else. While this may look absurd at first glance – why should a fragile and recent animal species like human beings deserve to fill up half of philosophy? – we recall that Descartes gave what looked like a good reason for it. Namely, human thought is immediately given to me and must exist, since otherwise I cannot even be dreaming or hallucinating, while everything else (including God and the world) is only known derivatively by comparison with thought.

In short, Modern Taxonomy can be rewritten as: (1) that which is immediately given, and (2) that which is known only in mediated fashion. This gave rise to what might be called the intellectual division of labor in the modern world, in which philosophy is left to puzzle over the thought-world relation, while the relation between any two parts of the world itself is reserved for natural science.

This, in turn, leads us to another variant of Modern Taxonomy: (1) restriction of philosophy to the thought-world relation, and (2) science-worship when it comes to the relation between inanimate objects. By “science-worship” I mean something very specific: the notion that in treating topics such as causality, time, space, and individuals, philosophy must not stray far from the current state of “the best science we have.” We saw that Hume and Kant set the general horizon for what counts as respectable philosophizing today. The best way to look like a philosophical crackpot is to reject one or both pillars of Modern Taxonomy: if you say that we can think about the features belonging to all relations and not just that between thought and world, or if you claim that philosophy has something to add about “nature” that the natural sciences are not already saying better, you are likely to look suspicious, even if you are as great a speculative thinker as Whitehead himself.

In this spirit, and returning to the title of this article, what would the end of modern philosophy look like? It would amount to the end of the two pillars of Taxonomy: (1) philosophy is primarily concerned with the relation between world and thought, and (2) science-worship as concerns the relation between anything in the world outside thought. If we reject these two principles, have we not automatically become pre-modern crackpots? Not at all. Let’s take them one at a time.

by Graham Harman, The Philosophical Salon |  Read more:
Image: uncredited

Saturday, October 26, 2019

When GoFundMe Gets Ugly

GoFundMe has become the largest crowdfunding platform in the world—50 million people gave more than $5 billion on the site through 2017, the last year for which fundraising totals were released. The company used to take 5 percent of each donation, but two years ago, when Facebook eliminated some charges for fundraisers, GoFundMe announced that it would do the same and just ask donors for tips. (Company officials wouldn’t say whether this model is profitable, though the site does have other sources of revenue, such as selling its online tools to nonprofits; the “grand ambition,” Solomon told me, is to have all internet charity, whether initiated by individuals or large organizations, flow through GoFundMe.)

The spectacularly fruitful GoFundMes are the ones that make the news—$24 million for Time’s Up, Hollywood’s legal-defense fund to fight sexual harassment; $7.8 million for the victims of the Pulse nightclub shooting in Orlando—but most efforts fizzle without coming close to their financial goals. Comparing the hits and misses reveals a lot about what matters most to us, our divisions and our connections, our generosity and our pettiness. And even the blockbuster successes, the stories that make the valedictory lap that is GoFundMe’s homepage, are much more complicated than any viral marketer would care to admit. (...)

GoFundMe campaigns that go viral tend to follow a template similar to Chauncy’s Chance: A relatively well-off person stumbles upon a downtrodden but deserving “other” and shares his or her story; good-hearted strangers are moved to donate a few dollars, and thus, in the relentlessly optimistic language of GoFundMe, “transform a life.” The call-and-response between the have-nots and the haves poignantly testifies to the holes in our safety net—and to the ways people have jerry-rigged community to fill them. In an era when membership in churches, labor unions, and other civic organizations has flatlined, GoFundMe offers a way to help and be helped by your figurative neighbor.

What doesn’t fit neatly into GoFundMe’s salvation narratives are the limits of private efforts like Matt White’s. GoFundMe campaigns blend the well-intentioned with the cringeworthy, and not infrequently bring to mind the “White Savior Industrial Complex”—the writer Teju Cole’s phrase for the way sentimental stories of uplift can hide underlying structural problems. “The White Savior Industrial Complex is not about justice,” Cole wrote in 2012. “It is about having a big emotional experience that validates privilege.” (...)

Search the GoFundMe site for cancer or bills or tuition or accident or operation and you’ll find pages of campaigns with a couple thousand, or a couple hundred, or zero dollars in contributions. While the platform can be a stopgap solution for families on the financial brink—one study estimated that it prevented about 500 bankruptcies from medical-related debt a year, the most common reason for bankruptcy in the U.S.—the average campaign earns less than $2,000 from a couple dozen donors; the majority don’t meet their stated goal. (...)

Part of the allure of GoFundMe is that it’s a meritocratic way to allocate resources—the wisdom of the crowd can identify and reward those who most need help. But researchers analyzing medical crowdfunding have concluded that one of the major factors in a campaign’s success is who you are—and who you know. Which sounds a lot like getting into Yale. Most donor pools are made up of friends, family, and acquaintances, giving an advantage to relatively affluent people with large, well-resourced networks. A recent Canadian study found that people crowdfunding for health reasons tend to live in high-income, high-education, and high-homeownership zip codes, as opposed to areas with greater need. As a result, the authors wrote, medical crowdfunding can “entrench or exacerbate socioeconomic inequality.” Solomon calls this “hogwash.” The researchers made assumptions based on “limited data sets,” he said, adding that GoFundMe could not give them better information, because of privacy concerns.

The Roys did not have a robust social-media network, or real-life one, for that matter. A native of England, Richard has no family nearby, and his wife’s only relatives are her aging mother and a sister. Laila had deleted her Facebook account not long after her twins’ premature birth, a tense, precarious time when vague well wishes and “likes” from acquaintances only made her feel more alone. Richard worked from home and had only a couple hundred Facebook friends. “Maybe if he worked for a large local company and I worked for a large local company, maybe if we were churchgoers—that’s another network. But I don’t go to church, and he doesn’t either,” Laila said. “I have been told explicitly by social workers that you should go to church just to network. But I try not to be a hypocrite.”

What’s wrong with you also influences whether you score big with medical crowdfunding, according to the University of Washington at Bothell medical anthropologist Nora Kenworthy and the media scholar Lauren Berliner, who have been studying the subject since 2013. Successful campaigns tend to focus on onetime fixes (a new prosthetic, say) rather than chronic, complicated diagnoses like Laila’s. Terminal cases and geriatric care are also tough to fundraise for, as are stigmatized conditions such as HIV and addiction- or obesity-related problems.

“It’s not difficult to imagine that people who are traditionally portrayed as more deserving, who benefit from the legacies of racial and social hierarchies in the U.S., are going to be seen as more legitimate and have better success,” Kenworthy told me. At the same time, the ubiquity of medical crowdfunding “normalizes” the idea that not everyone deserves health care just because they’re sick, she said. “It undermines the sense of a right to health care in the U.S. and replaces it with people competing for what are essentially scraps.”

by Rachel Monroe, The Atlantic | Read more:
Image: Akasha Rabut

The Not-Com Bubble Is Popping

It is easy to look at today’s crop of sinking IPOs—like Uber, Lyft, and Peloton—or scuttled public offerings, like WeWork, and see an eerie resemblance to the dot-com bubble that popped in 2000.
  • Both then and now, consumer-tech companies spent lavishly on advertising and struggled to find a path to profit.
  • Both then and now, companies that bragged about their ability to change the world admitted suddenly that they were running out of money.
  • Both then and now, the valuations of once-heralded tech enterprises were halved in a matter of weeks.
  • Both then and now, there was a widespread sense of euphoria curdling into soberness, washed down with the realization that thousands of workers in once-promising firms were poised to lose their jobs.
But if you look closer, today’s correction isn’t much like the dot-com bubble at all. In fact, it might be more accurate to say that what’s happening today is the very opposite of the dot-com bubble.

Let’s first understand what exactly that bubble was: a mania of stock speculation, in which ordinary investors—from taxi drivers to Laundromat owners to shoe-shiners—bid up the price of internet-related companies for no good reason other than “because, internet.” Companies realized that they could boost their stock price by simply adding the prefix e- (as in “e-Bay”) or the suffix .com (as in Amazon.com) to their corporate names to entice, and arguably fool, nonprofessionals. “Americans could hardly run an errand without picking up a stock tip,” The New York Times reported in its postmortem.

As prices became untethered from reality, the Nasdaq index doubled in value between 1999 and 2000 without “any plausible candidate for fundamental news to support such a large revaluation,” as the economists J. Bradford DeLong and Konstantin Magin wrote in a paper on the bubble. The crash was equally swift and arbitrary. Between February 2000 and February 2002, the Nasdaq lost three-quarters of its value “again without substantial negative fundamental news,” DeLong and Magin wrote. By late 2000, more than $5 trillion in wealth had been wiped out. This sudden rise and sudden collapse in asset prices—without much change in information about the underlying assets—is the very definition of a bubble.

The current situation is different, in at least two important ways.

by Derek Thompson, The Atlantic |  Read more:
Image: Brendan McDermid/Reuters

Bright Leaf

A habit. A comfort. An addiction. An indulgence. A nuisance. A crime. A vice. A sin. An error. A joy.

Surprising cigarette-smoking locations:

The dentist’s chair in Italy. The dentist was a friend of the family with whom I was staying. His name was Gigi and he could see me that afternoon. It was a gum abscess, quite painful. He was both quick and careful while fixing it.

Then I felt faint.

“Just stay there,” he said. “Don’t get up.” He brought me some water and put his hand on my arm. “Have a cigarette,” he said. “It’ll make you feel better.”

It did.

Driving lessons at the age of thirty-five. Driving made me nervous, which was why I’d put off learning for so long. I went for one jerky spin around the parking lot with my instructor. Then: “Pull over.”

I came to an abrupt stop.

“Lady,” he asked, “do you smoke?”

“Yes.”

“So will you please have a cigarette. You’re too tense.”

“I don’t know if I can smoke and drive at the same time,” I said.

“You gotta learn. Might as well do it all at once.”

In Charles de Gaulle airport. Decades after smoking had been banned in the air and then almost everywhere else, I found a small, yellowed room on the floor below gate access with a sign on the door in three languages: SMOKING. Inside, several travelers were stoking up for their voyages. I joined them. I had four cigarettes in a row, enough to make me feel sick to my stomach. I was halfway across the Atlantic before I wanted another, and by then I thought I could make it. (...)

People do not hesitate to tell me to stop smoking and to inform me how dangerous it is, in case I haven’t heard about that. Some kinds of bad behavior are off-limits for comment: drinking too much, eating too much, spending time with idiots or losers. Doing those things may provoke disapproval, but almost nobody will criticize the person in public. Smoking is not like that. People often justify this by talking about secondhand smoke, but since I don’t smoke inside and they aren’t exposed to my secondhand smoke, this argument doesn’t have much weight.

At eight-thirty one morning I was walking to work at my proofreading job through an almost-deserted Harvard Square. Another woman was walking about fifteen feet in front of me. I lit a cigarette. She began waving her hands around her head and making little coughing sounds. After a block of this, she turned around.

“Would you put that cigarette out,” she said. She had a disdainful, pained expression.

“I think Harvard Square is big enough for both of us,” I said.

I crossed the street, but I kept smoking. (...)

What is smoking for? People who don’t smoke think it’s for feeding an addiction, and they’re right. Once you’ve started smoking, it’s hard to stop. But that’s a narrow definition. There’s a physical addiction and there’s also a spiritual addiction. Perhaps you could call it a metaphysical addiction.

To have a cigarette is to step out of day-to-day existence and into a private, solitary existence. It’s just you and your cigarette. Hello, says the cigarette, You’ve come to visit me. And you say, Yes, hello—but really, you know that you’ve come to visit yourself. The cigarette is a method of being alone and listening to yourself, of having nobody but yourself to listen to or to be with.

It’s also a way to stop time. Time spent smoking is not real time. Nothing else is happening. There is no progress. There is no trying to start something or complete something or even forget something. Since smokers have been excommunicated from indoor life, this contemplative aspect of smoking has come to the fore. I’m grateful that I can’t smoke inside anymore. Now, about once an hour, I can stop whatever I’m doing without making an excuse for stopping it, and go outside. Then I am with birds and trees, or with skyscrapers and trucks, or with rain, or with the sunset that is beginning, pink and streaky, over in the west. The whole world is there and I am also there, but I have nothing to do except watch it or ignore it and smoke my cigarette.

Smoking is also a punctuation mark, probably a period, but sometimes an exclamation point. Dinner’s in the oven, time for a cigarette. Did all the errands on my list, cigarette! Finished reading that book, emptied the dishwasher, got through to that person who never answers the phone: cigarette, cigarette, cigarette.

And a clock. Smoking is both a marker of the passage of time and a way to elude time. I know how long an hour is because nicotine tells me. Then the cigarette gives me four or five minutes (I am not sure how many minutes it takes to smoke a cigarette) that are not exactly minutes. They are pure existence. (...)

Though it’s embarrassing to admit, I didn’t want to participate in the general wellness culture. I didn’t want to be one of the many people who were improving themselves by going on juice fasts, cutting out red meat, or meditating daily. Smoking was my meditation. I didn’t want to hear from people who’d been telling me or even begging me to stop smoking how wonderful it was that I had finally done so. I had (and still have) an adolescent kind of rebelliousness. I saw that, and I knew it was ridiculous and petulant and inappropriate (a terrible word used by people who were on juice fasts or who didn’t eat red meat) for a supposedly adult person. That was one reason.

The main reason, though, was that I enjoyed it.

by Susanna Kaysen, N+1 |  Read more:
Image: Federico Faruffini: 'La Lectora'. Wikimedia Commons.

Friday, October 25, 2019


Njideka Akunyili Crosby, Bush Babies, 2017
via:

George Jones


Mick Mather, Manifest
via:

Just 6% of US Adults on Twitter Account for 73% of Political Tweets

A small number of prolific U.S. Twitter users create the majority of tweets, and that extends to Twitter discussions around politics, according to a new report from the Pew Research Center out today. Building on an earlier study, which discovered that 10% of users created 80% of tweets from U.S. adults, the organization today says that just 6% of U.S. adults on Twitter account for 73% of tweets about national politics.

Though your experience on Twitter may differ, based on who you follow, the majority of Twitter users don’t mention politics in their tweets.

In fact, Pew found that 69% never tweeted about politics or tweeted about the topic just once. Meanwhile, across all tweets from U.S. adults, only 13% of tweets were focused on national politics.

The study was based on 1.1 million public tweets from June 2018 to June 2019, Pew says (2,427 users participated).

Similar to its earlier report about how prolific users dominate the overall conversation, Pew found there’s also a small group of very active Twitter users dominating the conversation about national politics — and they all tend to be heavy news consumers and more polarized in their viewpoints.

Only 22% of U.S. adults even have a Twitter account, and of those, only 31% are defined as “political tweeters” — that is, they’ve posted at least five tweets and have posted at least twice about politics during the study period.

Within this broader group of political tweeters, just 6% are defined as “prolific” — meaning they’ve posted at least 10 tweets and at least 25% of their tweets mention national politics.

This small subset then goes on to create 73% of all tweets from U.S. adults on the subject of national politics.

What’s concerning about the data is that it’s those who are either far to the left or far to the right who dominate the political conversation on Twitter’s platform. A majority of the prolific political tweeters (55%) say they identify as either “very liberal” or “very conservative.” Among the non-political tweeting crowd, only 28% chose a more polarized label for themselves.

This polarized subgroup also heavily leans left. For example, those who strongly approve of President Trump generated 25% of all tweets mentioning national politics. But those who strongly disapprove of Trump generated 72% of all tweets mentioning national politics. (They’re also responsible for 80% of all tweets from U.S. adults on the platform.)


This isn’t a fully representative picture of U.S. politics. The share of U.S. adults on Twitter who strongly disapprove of Trump (55%) is 7 percentage points higher than the share of the general public that holds this view (48%).

Trump supporters, as a result, are under-represented on Twitter. Perhaps this is because they’ve flocked to alternate platforms; or because they don’t tweet their views as often in public; or because they violate Twitter’s policies more often, resulting in bans. Or as is likely, it’s a combination of factors. In any event, the reasoning was beyond the scope of this study.

The study also found the prolific tweeters are highly engaged with the news cycle; 92% follow the news “most of the time,” compared to 58% of non-prolific political tweeters and 53% of non-political tweeters. They’re also civically engaged, as 34% have attended a political rally or event, 57% have contacted an elected official and 38% have donated to campaigns.

Also of note, the political tweets are more likely to come from older users. Those ages 65 and older produce only 10% of all tweets from U.S. adults, but they contribute 33% of tweets related to national politics. And those 50 and older produce 29% of all tweets but contribute 73% of tweets mentioning national politics.

by Sarah Perez, TechCrunch |  Read more:
Image: Pew Research Center

Hiroshi Yoshida, Otenjo, 1926
via:

Thursday, October 24, 2019

Why We Need to Dream Bigger Than Bike Lanes

There’s a quote that’s stuck with me for some time from Aaron Sorkin’s The Newsroom: “You know why people don't like liberals? Because they lose. If liberals are so f***ing smart, how come they lose so goddamn always?”

American urbanists and bike advocates are smart, or at least well informed. We know how important cycling is. We are educated about cycling cities in other parts of the world and how they are so much better for health, well-being, economics, traffic, pollution, climate, equity, personal freedom, and on and on.

But if we’re so smart how come we lose so goddamn always?

Why is the best we seem to be able to accomplish just a few miles of striped asphalt bike “lanes,” or if we’re lucky, a few blocks of plastic pylons—“protected” bike lanes?

Our current model is to beg for twigs

More often than not, bike infrastructure is created reactively. Typically in response to a collision or near collision with a car, an individual or advocacy group identifies a single route that needs better infrastructure. We gather community support and lobby local officials for the desired change, trying as hard as we can to ask for the cheapest, smallest changes so that our requests will be seen as realistic.

What’s the problem with this model?

It’s like imagining a bridge and asking for twigs—useless, unable to bear any meaningful weight, easily broken. And it’s treating bike infrastructure like a hopeless charity case.

This makes bike infrastructure seem like a small, special-interest demand that produces no real results in terms of shifting to sustainable transportation, and it makes those giving up road space and tax dollars feel as though they are supporting a hopeless charity.

But when roads, highways, and bridges are designed and built, they aren’t done one neighborhood at a time, one city-council approval at a time. We don’t build a few miles of track, or lay down some asphalt wherever there is “local support” and then leave 10-mile gaps in between.

And yet this is exactly how we “plan” bike infrastructure.

Bike lanes are intermittent at best in most North American cities, and since they are usually paint jobs that put cyclists between fast-moving traffic and parked cars with doors that capriciously swing open, only experienced riders brave them. The lanes are easily blocked anyway, by police, delivery trucks, and film crews, if not random cars banking on the low likelihood of being ticketed.

This kind of bike “infrastructure” doesn’t actually do very much to protect existing cyclists, let alone encourage and inspire the general population to start cycling.

Why are we settling for easily broken twigs? The total number of people on bikes and other micromobility modes like scooters and skateboards is large and growing. An enormous force has been divided and conquered, splintered among thousands of neighborhoods.

In the grand scheme of things, the twig bike lanes we fight for aren’t going to create the significant mode shift needed for the environmental, social, and safety gains we hope to achieve. No one wants to fight for twigs. This cycle does nothing to inspire and grow a strong pro-micromobility movement.

Cars and trucks get billions in federal, state, and local money. Governments can mindlessly belch out vast sums for highway widenings—see the $1.6 billion spent on a single-lane addition to the 405 freeway in Los Angeles, even though we’ve known for years that it would not make a dent in travel times. With all this money seemingly available for car infrastructure, some of which is absolutely useless or makes traffic worse, there’s only a pittance devoted to robust bike networks. Why?

Bigger is better for infrastructure projects

The larger an infrastructure project, the bigger the engineering and construction firms vying for its lucrative contracts, the more jobs it creates, and the bigger the ribbon-cutting ceremonies politicians get to attend. Expensive projects get media coverage, fire up the imagination, and grab hold of valuable mind share.

Our tweets and op-eds may vaunt the vital virtues of car-free mobility, but our infrastructure demands and budget sizes sadly do not. By lowballing our demands, we micromobilists are pitching ourselves as a niche, special-interest group: We are tacitly agreeing that cars are and should be the dominant mode of transportation, making our near nonexistent position in the budgetary pecking order inevitable. We also leave billions on the table by doing little to go after state and federal transportation funds.

by Terenig Topjian, City Lab |  Read more:
Image: Robert Galbraith/Reuters
[ed. Bike-centric planning, no. Micromobility, yes. There are lots of ways to get around (I like golf carts myself).] 

On Achieving Quantum Supremacy

In a paper published today in Nature and a company blog post, Google researchers claim to have attained “quantum supremacy” for the first time. Their 53-qubit quantum computer, named Sycamore, took 200 seconds to perform a calculation that, according to Google, would have taken the world’s fastest supercomputer 10,000 years. (A draft of the paper was leaked online last month.)

The calculation has almost no practical use—it spits out a string of random numbers. It was chosen just to show that Sycamore can indeed work the way a quantum computer should. Useful quantum machines are many years away, the technical hurdles are huge, and even then they’ll probably beat classical computers only at certain tasks. (See “Here’s what quantum supremacy does—and doesn’t—mean for computing.”)

But still, it’s an important milestone—one that Sundar Pichai, Google’s CEO, compares to the 12-second first flight by the Wright brothers. I spoke to him to understand why Google has already spent 13 years on a project that could take another decade or more to pay off.

The interview has been condensed and edited for clarity. (Also, it was recorded before IBM published a paper disputing Google’s quantum supremacy claim.)

MIT TR: You got a quantum computer to perform a very narrow, specific task. What will it take to get to a wider demonstration of quantum supremacy?

Sundar Pichai: You would need to build a fault-tolerant quantum computer with more qubits so that you can generalize it better, execute it for longer periods of time, and hence be able to run more complex algorithms. But you know, if in any field you have a breakthrough, you start somewhere. To borrow an analogy—the Wright brothers. The first plane flew only for 12 seconds, and so there is no practical application of that. But it showed the possibility that a plane could fly.

A number of companies have quantum computers. IBM, for example, has a bunch of them online that people can use in the cloud. Why can their machines not do what Google’s has done?

The main thing I would comment on is why Google, the team, has been able to do it. It takes a lot of systems engineering—the ability to work on all layers of the stack. This is as complicated as it gets from a systems engineering perspective. You are literally starting with a wafer, and there is a team which is literally etching the gates, making the gates and then [working up] layers of the stack all the way to being able to use AI to simulate and understand the best outcome.

The last sentence of the paper says “We’re only one creative algorithm away from valuable near-term applications.” Any guesses as to what those might be?

The real excitement about quantum is that the universe fundamentally works in a quantum way, so you will be able to understand nature better. It’s early days, but where quantum mechanics shines is the ability to simulate molecules, molecular processes, and I think that is where it will be the strongest. Drug discovery is a great example. Or fertilizers—the Haber process produces 2% of carbon [emissions] in the world [see Note 1]. In nature the same process gets done more efficiently.

So how far away do you think an application like improving the Haber process might be?

I would think a decade away. We are still a few years away from scaling up and building quantum computers that will work well enough. Other potential applications [could include] designing better batteries. Anyway, you’re dealing with chemistry. Trying to understand that better is where I would put my money on.

Even people who care about them say quantum computers could be like nuclear fusion: just around the corner for the next 50 years. It seems almost an esoteric research project. Why is the CEO of Google so excited about this?

Google wouldn’t be here today if it weren’t for the evolution we have seen in computing over the years. Moore’s Law has allowed us to scale up our computational capacity to serve billions of users across many products at scale. So at heart, we view ourselves as a deep computer science company. Moore’s Law is, depending on how you think about it, at the end of its cycle. Quantum computing is one of the many components by which we will continue to make progress in computing.

The other reason we’re excited is—take a simple molecule. Caffeine has 2^43 states or something like that [actually 10^48—see Note 2]. We know we can’t even understand the basic structure of molecules today with classical computing. So when I look at climate change, when I look at medicines, this is why I am confident one day quantum computing will drive progress there.

by Gideon Lichfield, MIT Technology Review |  Read more:
Image: Google/MIT Technology Review
[ed. See also: Quantum supremacy from Google? Not so fast, says IBM. (MIT Technology Review).]

Why They Bulldozed Your Block

My first memories of life are in a public-housing project. My parents, then college students, had two kids, and then quickly three, and soon found subsidized housing in a new high-rise in Philadelphia, with brightly colored plastic doors and gray concrete terraces, where we lived for three years. At the tail end of the great period of the fifties Western, all the kids on the concrete balconies played at “Davy Crockett” and “Gunsmoke,” riding hobbyhorses and firing cap guns up and down their gray length, a form of play as alien now as Homeric poetry.

This was the heyday of urban redevelopment, when city planners, doing what was then called “slum clearance,” created high-density, low-cost public housing, often on a Corbusian model, with big towers on broad concrete plazas. In the still optimistic late fifties and early sixties, it was possible to imagine and actually use public housing as its original postwar planners had imagined it could be used: not as a life sentence but as a cheerful, clean platform that people of various racial and ethnic backgrounds without much money could use in a transition to another realm of life.

It was a dream that was over almost before it began and has since been condemned by all sides: by urbanists who came to hate the uniformity of its structures and their negation of street life; by minority communities who increasingly recognized these places as artificial ghettos, without the distinctive character and variety of real neighborhoods; and by the city officials who had to police the plazas. As Alex Krieger, a Harvard professor of urban design, writes in “City on a Hill: Urban Idealism in America from the Puritans to the Present” (Harvard), “Having an address in such places was like wearing a scarlet letter—perhaps a P, as in ‘I am Poor.’ ” Such places were publicly executed throughout the eighties and nineties, imploded with dynamite by despairing state and city governments. (All the great implosion videos are of either casinos or public housing, a sign of the American times.) Schuylkill Falls, the public-housing project of my happy early memory, was among them, demolished in 1996 after sitting abandoned and desolate for twenty years.

Now, however, for the first time in a half century, the people who built the bad stuff are reëmerging as possible models of how we might yet build good stuff—with a reclamation of such once-banished terms as “urban renewal” and “high-rise housing.” This revival has been pushed forward by the same force that has recently pushed other forms of public neo-progressivism, at least rhetorically: a desire for public action in the face of the obvious impasse of the private, with free-market mechanisms having left city housing so costly that teachers and cops often live two hours outside the neighborhoods they serve. You “can’t trust the private sector to protect the public interest” was the city planner Edward Logue’s most emphatic aphorism on the subject, and it is one that has taken on new life.

Even New York’s “master builder,” Robert Moses himself, a hate object for later urbanists, who preferred preservation to innovation and the small-scale to the large, has come in for a revisionist look: whatever his faults, he built city amenities for city people—playgrounds and parks and the Triborough Bridge—rather than splinters filled with condos for the ultra-rich. Not since the Beaux-Arts revival of the mid-seventies, when neoclassical ornament and elaborate façades became fashionable again—when Philip Johnson could put a Chippendale edifice on the A.T. & T. building—has there been such a return of the architectural repressed. It is even possible to speak again in praise of the brutalist style in which much of that fifties and sixties public building was done. When people begin to cast a fonder eye on the Port Authority Bus Terminal, it means an epoch has altered.

Ed Logue was the consensus villain of the old urban planning. In a 2001 interview between the writer James Kunstler and the sainted urbanist Jane Jacobs, Logue was the subject of an extended hate:
Q: He went on to inadvertently destroy both New Haven and much of central Boston by directing Modernist urban renewal campaigns in the 1960s. Did you watch these schemes unfold and what did you think of them? 
A: I thought they were awful. And I thought he was a very destructive man and I came to that opinion during the first time I met him, which was in New Haven.
Lizabeth Cohen’s new book, “Saving America’s Cities: Ed Logue and the Struggle to Renew Urban America in the Suburban Age” (Farrar, Straus & Giroux), is an attempt to salvage the villain’s reputation, mostly by putting it in the Tragedy of Good Intentions basket instead of the Arrogance of Élitist Certainties basket, albeit recognizing that these are adjacent baskets. Cohen, an American historian at Harvard, reminds the reader, as any first-rate historian would, that what look, in the retrospective cartooning of polemical history, like obvious choices and clear moral lessons are usually gradated and surprising. Logue, whose career was more far reaching and ambitious than that of any other urbanist of his time, helped remake New Haven, Boston, and New York, and his ambitions for city planning were thoroughly progressive: “To demonstrate that people of different incomes, races, and ethnic origins can live together . . . and that they can send their children to the same public schools.” Despite his reputation as a “slum-clearer,” Logue was uncompromising about the primacy of integration. “The pursuit of racial, not just income, diversity in residential projects animated all his work,” Cohen writes. (Jane Jacobs, to put it charitably, didn’t really notice that her beloved Hudson Street, in the West Village, tended toward the monochrome.)

Simple sides-taking exercises between good guys and bad guys turn out to betray the far more complicated fabric of big-city life. Logue’s mixed achievement is a testament either to the inadequacies of his proposals or to the intractability of his problems, and probably to both at once. (...)

What defeated Logue’s vision in the magazines and universities was the rise of Jane Jacobs and the conservationist left. For Jacobs, “dated stores, modest personal services, and cheap luncheonettes” were the city. In “The Death and Life of Great American Cities” (1961), she showed a generation how small enterprise helped sustain the complex ecology of mutual unplanned effort that makes cities work. Logue and Jacobs once had an onstage debate, in which Logue needled Jacobs about her highly romantic vision of her West Village neighborhood—he’d been out there at 8 p.m. and hadn’t seen the ballet of the street that she cooed over. (Jacobs was an instinctive Whitmanesque poet, not a data collector: you don’t count the angels on the head of a small merchant.) Logue also made the serious point that the emerging anti-renewal consensus was fine for someone who already had a safe place in the West Village. For those who didn’t, it was just a celebration of other people’s security.

But what defeated people like Logue on the ground was the increasingly agonized racial politics of big cities. In 1967, Logue ran for mayor of Boston, and, though regarded as a serious contender, was squeezed between another reformist candidate, Kevin White, and Louise Day Hicks, a ferocious anti-busing activist. (Her slogan: “You know where I stand.”) Determined to protect Irish neighborhoods from interfering outsiders who wanted to bus their children, and from “the element”—that is, minorities who wanted to take over their beloved blocks—Hicks is a reminder that the fault lines visible now in America are a long-standing feature of the American foundation.

Cohen makes a larger point about the context in which Logue and his colleagues rose and fell. In the early years of the Cold War, “expertise” was seen as a powerful support of liberal democracies. This was the expertise of engineers and architects—and of a growing class of professionals who had been able to go to colleges that their parents could not attend. The traumas of the sixties upended faith in experts. The same people who designed the Strategic Hamlet Program, in Vietnam, had remade downtown New Haven (and, one could argue, on similar principles: replacing the exposed, organic village with a secured fortress, the mall). The expertise of the urban planner was undermined as well, by the new prestige attached to the preservationist, which, for good or ill, remains undiminished. As the next generation of development would show, however, what tends to replace expertise is not the intelligence of the street. What replaces expertise is the idiocy of the deal.

by Adam Gopnik, New Yorker |  Read more:
Image: David Plunkert; photograph by LeRoy Ryan / The Boston Globe / Getty (man)