Saturday, July 7, 2018

The Best Antivirus Is Not Traditional Antivirus

We set out to do a standard Wirecutter guide to the best antivirus app, so we spent months researching products, reading reports from independent testing labs and institutions, and consulting experts on safe computing. And after all that, we learned that most people should neither pay for a traditional antivirus suite, such as McAfee, Norton, or Kaspersky, nor use free programs like Avira, Avast, or AVG. The “best antivirus” for most people to buy, it turns out, is not a traditional antivirus package.

Information security experts told us that the built-in Windows Defender is good-enough antivirus for most Windows PC owners, and that both Mac and Windows users should consider using Malwarebytes Premium, an anti-malware program that augments both operating systems’ built-in protections. These options provide reliable protection without slowing your computer significantly, installing unwanted add-ons, or harassing you about upgrades.

Malwarebytes is not an all-in-one option for protecting your system against exploits, malware, and other bad stuff. But information security experts repeatedly recommended it as a useful anti-malware layer, one of multiple layers of security you need for your devices, coupled with good habits. Relying on any one app to protect your system, data, and privacy is a bad bet, especially when almost every security app—including Malwarebytes and Windows Defender—has proven vulnerable on occasion. You should have good virus and malware protection, yes, but you also need secure passwords, two-factor logins, data encryption, and smart privacy tools added to your browser. Check out our guide to setting up those layers here.

Why you should trust us

As writers and editors for Wirecutter, we have combined decades of experience with different computers and mobile devices, and their inherent vulnerabilities. We spent dozens of hours for this guide reading results from independent labs like AV-Test and AV-Comparatives, features at many publications such as Ars Technica and PCMag, and white papers and releases by institutions and groups like Usenix, Google’s Project Zero, and IEEE. We also read up on the viruses, ransomware, spyware, and other malware of recent years to learn what threats try to get onto most people’s computers today.

Then we interviewed experts, including computer-security journalists, experienced security researchers, and the information security team at The New York Times (parent company of Wirecutter), whose responsibilities include (but are not limited to) protecting reporters and bureaus both overseas and here in the US from hacking and surveillance.

These experts helped us reach a more nuanced consensus than the typical table-tennis headlines: antivirus is increasingly useless; actually, it's still pretty handy; antivirus is unnecessary; wait, no it isn't; and so on. Although we often test all the products we're considering, we can't test the performance of antivirus suites any better than the experts at independent test labs already do, so we relied on their expertise.

Furthermore, every information security expert we talked to agreed that most people shouldn’t pay for a traditional antivirus suite: The virus and malware protection built into Windows and macOS, combined with good habits, are enough for most people. Malwarebytes is a nonintrusive additional layer, one that may catch things written to work around Windows Defender or the Mac’s inherent defenses. So we tested Malwarebytes on Windows and macOS to learn how easy the app was to use, if it noticeably slowed performance or interfered with other apps, or if it had any annoying notifications.

Why we don’t recommend a traditional antivirus suite

It’s insufficient for a security app to just protect against a single set of known “viruses.” There are potentially infinite malware variations that have been crypted—encoded to look like regular, trusted programs—and that deliver their system-breaking goods once opened. Although antivirus firms constantly update their detection systems to outwit crypting services, they’ll never be able to keep up with malware makers intent on getting through.

A quick terminology primer: The word malware just means “bad software” and encompasses anything that runs on your computer with unintended and usually harmful consequences. In contrast, antivirus is an out-of-date term that software makers still use because viruses, Trojan horses, and worms were huge, attention-getting threats in the 1990s and early 2000s. Technically, all viruses are a kind of malware, but not all malware is a virus.

Although each expert we interviewed had their own preferred solutions to the endless stream of computer threats, none recommended buying a traditional antivirus app. So why shouldn't you install a full antivirus suite from a known brand, just to be on the safe side? For many good reasons, from slowed performance to unwanted add-ons and upgrade nagging.

For these reasons, we don't recommend that most people spend the time or the money to add traditional antivirus software to their personal computer. We didn't consider newer antivirus products that have not yet been tested by known independent research labs or that aren't available to individuals.

Two caveats to our recommendations on malware protection:

If you have a laptop provided by your work, school, or another organization, and it has antivirus or other security tools installed, do not uninstall them. Organizations have systemwide security needs and threat models that differ from those of personal computers, and they have to account for varying levels of technical aptitude and safe habits among their staff. Do not make your IT department’s hard job even more difficult.
People with sensitive data to protect (medical, financial, or otherwise), or with browsing habits that take them into riskier parts of the Internet, have unique threats to consider. Our security and habit recommendations are still a good starting point, but such situations may call for more intense measures than we cover here.

by Kevin Purdy, Wirecutter |  Read more:
Image: Kyle Fitzgerald

Distilled Golf: The 3-Club Challenge

If you’re a regular golfer you probably have one club in your bag that you love more than others. One that’s as reliable as Congress is dysfunctional. For Kevin Costner in Tin Cup it was his trusty seven iron. For someone like Henrik Stenson, probably his three wood. For me, it’s my eight iron. Certain clubs, whether through experience, ability, or default, just seem to stand out.

Then there are those that just give us the willies. For example, unlike Henrik I’d put my three wood in that category. I’m convinced no amount of practice will ever make me better with that club. Invariably, I chunk or thin it and rarely hit it straight, but keep carrying it around because I’m convinced I need it - like when a situation calls for a 165 yard blooper on a 210 yard approach. A friend of mine has problems with his driver. He'll carry it around for weeks or months at a time but never use it, because “it’s just not working”. Little wonder.

If you’ve been golfing for a while you’ve probably indulged in the ‘what if’ question. I’m not talking about the misery stories you hear in a clubhouse after every round - those tear-in-the-beer laments like ‘what if I’d only laid up instead of trying to cut that corner’, or, ‘what if I hadn’t bladed that bunker shot into the lake’? Bad decisions and bad breaks. Conversations like those will go on for as long as golf exists and really aren’t that interesting (except for the person drowning their sorrows).

No, what I’m talking about is a more existential question, one that goes to the heart of every golfer’s game: if you could play with only three clubs, which ones would you choose? And why?

It’s a fun thought experiment because it makes you think about your abilities in a more distilled perspective: how well do I hit my clubs and what’s the best combination to use to get around a course in the lowest possible score?

Maybe you’ve had the chance to compete in a three-club tournament. They’re out there. Once in a while someone puts one together and they sound like a lot of fun. I’ve never played in one myself, but have wondered at times what clubs I'd choose if given the opportunity. Recently, I got to find out, with some surprising results.

Caveat: I’m not here to suggest that there’s one right mix of clubs for everyone, but I will say that it’s possible to shoot par golf (or better) with only three golf clubs.

First, some background. I’m a senior golfer who’s been playing the game for nearly 25 years. High-single to low-double-digit handicap (I’m guessing, since I don’t keep one). I usually shoot in the low to mid-80s with an occasional excursion into the high 70s.

Lately I’ve been playing on a nice nine hole course that rarely sees more than a dozen golfers at any time, even on the weekends. It’s not an executive course or goat-track by any means. In fact it’s as challenging a course as any muni, if not more so, and definitely in better condition. The greens-keeping staff keep it in excellent shape and share resources with a nearby Nicklaus-designed course. It’s your average really nice nine hole course, and would command premium prices if expanded to 18 holes.

Anyway, because there’s hardly anyone around I usually play three balls, mainly for exercise and practice. I’ve always carried my bag, so it’s easy to drive up, unload my stuff, stick three balls in my pocket and take off.

A while back we had some strong winds. Stiff, persistent winds that lasted for days. I don’t mind playing in wind, but these were strong enough that my bag kept falling over when I set it down, and twisting around my body, throwing me off balance when walking up and down fairways. I’m sure I must have looked a bit like a drunk staggering around (not an uncommon sight on some of the courses I’ve played), so I decided to dump the bag and just play with three clubs.

But which ones? Keep in mind that everyone is different, and the clubs I selected are the ones that I thought would work best for me.
***
To begin with, I realized that two spots were already taken. First, I’d need a putter. According to Golf Digest and Game Golf, you need a putter roughly 41 percent of the time on average. I don’t know about you, but I’m not going to try putting with a driver, three wood, or hybrid, no matter how utilitarian they might be. It just feels too weird. Perhaps that’s just personal preference, and if it’s not a big deal to you, go for it.

The next club I selected was something that could get me close from 120 yards out, help around the fringe, and get me out of a bunker. No-brainer: sand wedge. I thought about a lob wedge, but it didn’t have the distance, and a gap or pitching wedge was just too tough out of the sand and didn’t have enough loft for short flops to tight pins.

Finally, my last club: a six iron. Why the six? A number of reasons. First, and probably most important: I suck at my six iron. Not as bad as my three wood, but for some reason the six has always given me problems. Maybe it’s because I’ve never been fit for clubs, and it always stood out as being more difficult than most of the others in my bag. I don’t know why, really. In any case, I thought, “why not get a little more practice and see if I can get this guy under control?” It also has the distance. When I hit it flush, I can get it to go maybe 170 yards. Maybe. So that completed the set, and my new streamlined self was ready for the wind.

Here’s where it gets interesting. Given that most Par 4s are generally in the 350 – 450 yard range or less (see here and here) and Par 5s generally about 450-690 yards (see here), it’s not that hard if you’re hitting a 160 - 170 yard six iron to get on the green in two on shorter Par 4s, and on in three for shorter Par 5s. Even on longer holes if you come up short, you’re still close enough that it’s a sand wedge into the green, usually pitching or chipping from 50 yards or less. Then it’s just a putt for par. Plus, the second or third shot is usually from the middle of the fairway, so there’s an excellent chance that you’ll put your wedge in a good position. I’ve been pleasantly surprised to find that I can make at least one, sometimes two or three pars (even a birdie sometimes), with just three clubs and three balls. It all depends on the length of the hole and the accuracy of my chipping and putting (and of course the wind). It’s a great way to get better at iron play and, especially, short game from 100 yards in.

But there’s more, and here’s where it really gets fun. For various reasons, sometimes I’ll find myself somewhere in the 120 – 160 yard range coming into a green. Too long for a sand wedge but too short for a six iron, so I’ve had to learn to dial it back a bit. Hitting a six iron 140 yards is not that much different from hitting a half-swing pitch, but with more control and easier effort. The fun thing is learning how much swing is needed for various distances within that 40 yard gap. For a while, I’d frequently come up 10 yards short of the green or 10 yards long, but it’s getting better, and again, it’s been another opportunity to sharpen up my short game.

I’ve tried substituting a five iron and even a hybrid for more distance off the tee, but the second shot seems harder to control with less lofted clubs (and is tougher to dial back on short Par 3s). Maybe those clubs would work better for other golfers depending on their skill set, but dialing it back is the trickiest part for me. To each his own. The six iron just seemed to strike the right balance. The main thing is finding the right clubs that will give you the greatest accuracy, distance, and control.

Now I’ve got a whole new perspective on the game. Besides being in the fairway more often, I’m hitting more greens in regulation and, when short, still chipping or pitching up to putt for par. There’s also a new sense of creativity. Too often in the past I’d just take whatever club was at the outer limits of my abilities and swing away, full blast (with variable directional and distance control). Now I don’t mind taking a lesser club and swinging easier. To top it off, my iron play and short game have improved considerably. My sand wedge used to be my go-to 80-90 yard club, and now tops out at 115. My six iron went from a shaky 165 to a reliable 170. My putting still stinks. Maybe the pros can dial in pinpoint accuracy with every club, but given the variability I have throughout my bag (and the varying shaft lengths of the clubs themselves), it’s been much more helpful to focus on just these three and improve on what each can do.

It also speeds up the game considerably.

So, last week I took my full bag out, thinking I needed to tune up my driver, three wood and other clubs because I didn’t want those skills to get too rusty. Guess what? I shot worse than I did with my three club setup - mainly because I was all over the fairway, short and long of the greens, and in the woods again. I’m not ready to give up on all my clubs yet, but it’s gratifying to know there are still a few new ways to rediscover the game and enjoy new challenges. Give it a try sometime. Maybe you'll find less is more.

by markk, Duck Soup |  Read more:
Image: markk

via:
[ed. I think I know where this is.]

Friday, July 6, 2018

Same As It Ever Was

Despite public and political pressure, pharmaceutical giant Pfizer keeps raising the prices of its drugs—standing apart from some of its rivals who have vowed to rein in periodic price hiking.

Around 100 of Pfizer’s drugs got higher list prices this week, the Financial Times first reported. The affected drugs include big sellers, such as Lyrica pain capsules, Chantix smoking-cessation medication, Norvasc blood-pressure pills, and the lung-cancer treatment Xalkori.

The price hikes mark a second round of increases for Pfizer this year. While many of the price changes in the individual rounds hover at or under 10 percent—many at 9.4 percent—the hikes collectively boost many drugs’ prices by double-digit percentages for the year overall. For instance, Chantix’s price jumped nearly 17 percent this year; Pfizer gave it a 9.4 percent increase in January and another seven percent boost July 1, bringing the list price of a 56-tablet bottle to $429, the Wall Street Journal noted. Likewise, Pfizer’s erectile dysfunction drug Viagra saw a 9.4 percent increase July 1 after a similar hike in January. Those hikes bring the list price of a month’s supply to $2,211.

Such twice-a-year price increases of around 10 percent used to be commonplace in the US pharmaceutical industry. But notable, eye-popping hikes have made such bumps a flashpoint for consumers and lawmakers. For instance, public fury ignited at Martin Shkreli’s abrupt 5,000 percent price increase of an old, cheap anti-parasitic drug—one often given to babies and people with HIV/AIDS. And Mylan’s gradual 400 percent price increase for the life-saving EpiPen further enraged the public and Congressional committees.

In the aftermath, many—but not all—of Pfizer’s rivals pledged to raise prices just once a year and generally keep the hikes to under 10 percent. Moreover, President Donald Trump suggested on May 30 that the industry was poised to make “massive” voluntary price cuts in the coming weeks.

No such cuts have been announced, and Pfizer’s continued increases belie that notion. “The latest increases signal that it is ‘business as usual’ rather than the voluntary concessions that Trump indicated were coming,” Michael Rea, chief executive of Rx Savings Solutions, told the Financial Times.

by Beth Mole, Ars Technica |  Read more:
Image: Getty/Bloomberg

Forget About It

In the uneasy months following 9/11, the Bush Administration provoked a minor controversy when it announced the name of a new office dedicated to protecting the United States from terrorism and other threats. “Homeland security” had unsavory associations: the Nazis often spoke of Heimat, which was also used in the 1920s and 1930s by an Austrian right-wing paramilitary group, the Heimwehr or Heimatschutz. Even Donald Rumsfeld, Bush’s secretary of defense, who had been in discussions about the term months before its introduction, had been discomfited. “The word ‘homeland’ is a strange word,” he wrote in a memo on February 27, 2001. “ ‘Homeland’ Defense sounds more German than American.” Barbara J. Fields, a historian at Columbia University, predicted in 2002 that the term would “remain a resident alien rather than a naturalized citizen in American usage.”

Seventeen years and a television series later, “homeland” no longer unsettles. The Department of Homeland Security has a $40 billion budget, 240,000 employees, and a Cabinet seat. Last summer, as journalists, academics, and intellectuals debated whether a fascist had invaded the White House, a bill reauthorizing DHS sailed through the House of Representatives by a bipartisan vote of 386 to 41. The phrase has found a home in the United States. It is a naturalized citizen.

One of the benefits of turning fifty, which I did in November, is that your memories become useful in unexpected ways. Throughout most of the Bush years, I was in my thirties — old enough to remember a time when there wasn’t a Department of Homeland Security, young enough to feel the novelties of the era. Middle age provides you a different perch. You get to watch, in real time, the shock of the new get absorbed by the soft cushions of the American tradition.

When Bush left office in 2009, he was widely loathed, with an approval rating of 33 percent. Today, 61 percent of the population approves of him, with much of that increase coming from Democrats and independents. A majority of voters under thirty-five view him favorably, which they didn’t while he was president. So jarring is the switch that Will Ferrell was inspired to reprise his impersonation of Bush on Saturday Night Live. “I just wanted to address my fellow Americans tonight,” he said, “and remind you guys that I was really bad. Like, historically not good.”

This is how a member of the younger generation viewed Bush in 2003, after the United States had invaded Iraq on the basis of false claims that the country possessed weapons of mass destruction:
The damage to this country and our body politic is staggering. . . . For our Government to be lying to us as they invoke our ideals in their rhetoric sickens me to the core of my being. It means something has gone so rotten. . . . It’s the bile you swallow in the back of your throat but keeps rising back up. It is a pattern, a pattern of cruelty, trickery, deceit, crass politics, and manipulative actions. It’s something that I can no longer ignore and it is absolutely shattering my optimism. . . . And that is a terrible thing, when our Government destroys the idealism of our young.
Strong stuff, suggesting the kind of experience you don’t easily recover from. If such feelings of betrayal don’t overwhelm you with a corrosive cynicism, inducing you to withdraw from politics, they provoke an incipient realism or an irrepressible radicalism. The Gulf War, which happened when I was twenty-three, set me on the latter path, guided, I’d like to think, by some sense of the former. But whether one opts for realism or radicalism or both, such great disillusionment would seem to preclude making statements like this, fifteen years later:
I’ve never seen anything as cynical in politics as Republicans spending four months refusing to reauthorize the Children’s Health Insurance Program, then attaching reauthorization to another controversial bill, then blaming Democrats for not supporting CHIP. It’s breathtaking.
You get to lose your innocence only once. But Ezra Klein, the author of both these statements, loses his every night as he scans the day’s report of the latest Republican Party outrage. American liberalism is also a party of the born-again.

The United States of Amnesia: true to form, we don’t remember who coined the phrase. It’s been attributed to Gore Vidal and to Philip Rahv, though it also appears in a syndicated column from 1948. But more than forgetfulness is at work in our ceremonies of innocence repeatedly drowned. And while it’s tempting to chalk up these rituals to a native simplicity or a preternatural naïveté — a parody of a Henry James novel, in which you get soiled by crossing the Potomac rather than the Atlantic — even our most knowing observers perform them.

The distance of a decade, for example, was all it took for Philip Roth to completely rewrite his experience of the Nixon Administration. There was a “sense,” Roth said of those years,
of living in a country with a government morally out of control and wholly in business for itself. Reading the morning New York Times and the afternoon New York Post, watching the seven and then again the eleven o’clock TV news — all of which I did ritualistically — became for me like living on a steady diet of Dostoevsky. . . . One even began to use the word “America” as though it was the name not of the place where one had been raised and to which one had a patriotic attachment, but of a foreign invader that had conquered the country and with whom one refused, to the best of one’s strength and ability, to collaborate. Suddenly America had turned into “them.”
That was in 1974, when Watergate and the Vietnam War were not yet a memory. In 1984, with Reagan straddling the horizon, Roth recalled the era differently: “Watergate made life interesting when I wasn’t writing, but from nine to five every day I didn’t think too much about Nixon or about Vietnam.” And while Roth had been unsparing about Nixon in 1974 — “Of course there have been others as venal and lawless in American politics, but even a Joe McCarthy was more identifiable as human clay than this guy is” — in 2017 Nixon had become, for Roth, a benign counter to Trump. Neither Nixon nor Bush
was anything like as humanly impoverished as Trump is: ignorant of government, of history, of science, of philosophy, of art, incapable of expressing or recognizing subtlety or nuance, destitute of all decency, and wielding a vocabulary of seventy-seven words that is better called Jerkish than English.
Donald Trump is making America great again — not by his own hand but through the labor of his critics, who posit a more perfect union less as an aspiration for the future than as the accomplished fact of a reimagined past. (...)

Ever since the 2016 presidential election, we’ve been warned against normalizing Trump. That fear of normalization misstates the problem, though. It’s never the immediate present, no matter how bad, that gets normalized — it’s the not-so-distant past. Because judgments of the American experiment obey a strict economy, in which every critique demands an outlay of creed and every censure of the present is paid for with a rehabilitation of the past, any rejection of the now requires a normalization of the then.

We all have a golden age in our pockets, ready as a wallet. Some people invent the memory of more tenderhearted days to dramatize and criticize present evil. Others reinvent the past less purposefully. Convinced the present is a monster, a stranger from nowhere, or an alien from abroad, they look to history for parent-protectors, the dragon slayers of generations past. Still others take strange comfort from the notion that theirs is an unprecedented age, with novel enemies and singular challenges. Whether strategic or sincere, revisionism encourages a refusal of the now.

Or so we believe.

The truth is that we’re captives, not captains, of this strategy. We think the contrast of a burnished past allows us to see the burning present, but all it does is keep the fire going, and growing. Confronting the indecent Nixon, Roth imagines a better McCarthy. Confronting the indecent Trump, he imagines a better Nixon. At no point does he recognize that he’s been fighting the same monster all along — and losing. Overwhelmed by the monster he’s currently facing, sure that it is different from the monster no longer in view, Roth loses sight of the surrounding terrain. He doesn’t see how the rehabilitation of the last monster allows the front line to move rightward, the new monster to get closer to the territory being defended.

by Corey Robin, Harper's |  Read more:
Image: via
[ed. See also: The People Are the Problem]

Thursday, July 5, 2018


Erika Stone: Amish children, Pennsylvania, 1981
via:

Truth and a Good Life

'Count no man happy until he is dead.' So says Aristotle, quoting Solon, one of the wise men of ancient Athens, and agreeing with him.

Even death may be too soon if Aristotle is right - and I think he is - that happiness is not a short- or even a long-term state of mind, not something that belongs in a list with a burst of elation, a pang of sorrow, a twinge of pain, a bout of giddiness, and the like. Nor is it a longer-term feeling of well-being such as alcohol, marijuana, and other substances can induce.

Nor is a happy life one where pleasant states of mind overbalance unpleasant ones. There are pleasant states of mind to be had through anticipating things you are going to do or undergo later on. Think of how enjoyable it can be to plan a holiday. Such states of mind are sometimes illusory in that upon engaging in the activity or the passivity, frustration, disappointment and boredom overtake you. An unhappy life could be one in which less time was spent feeling frustrated, disappointed and bored than was spent in pleasant anticipation. Or a life could be full of pleasant remembrances, nostalgic delight occupying most of one's time in the otherwise dismal atmosphere of a prison cell, the new tyrant having ended your pleasing days of ruling and living luxuriously.

Aristotle's word that we translate as 'happy' was 'eudaimon', and Greek thinkers had a lot to say about eudaimonia. Some recent writers prefer to translate the Greek word as 'flourishing' or 'well-being'. Others stick with 'happiness' and ask us to notice the meaning of 'happy' in 'happy outcome' or 'happy ending' or in a phrase such as 'these happy isles' to speak of Britain in its glory days. That meaning is the thing to keep in mind when you think about what living a good life is supposed to be.

The infelicity of 'happy' as the word we want is also indicated by this: When we imagine a human life entirely free of fear, of sorrow, of grief, utterly devoid of distress or suffering, are we not imagining a shallow life? I am not suggesting that it is a good thing to let suffering thrive, to see to it that poverty, sickness and hunger enhance the lives of the lower orders. In that connection, Simone Weil's words are apt: 'In the social realm it is our duty to eliminate as much suffering as we can; there will always be enough left over for the elect'.

Aristotle and Plato both held that a necessary condition of living a flourishing life, of eudaimonia, was that it had to be a just life, an ethical life. Plato had a striking notion of being just as achieving harmony of the soul. We can get the word 'soul' out of the idea by speaking, in post-Christian terms, of being decent, getting your act together and having your priorities right. I don't myself mind talk of the soul so long as it does not mean a substantial item, a thing which can exist without a body. Simone Weil is helpful here too. She speaks of the soul in terms of harm that can be done to the life of a human being without injury to the body.

Plato and Aristotle, despite their agreement about the necessity of decency for a good life, disagreed about the sufficiency of it. Roughly speaking, Aristotle thought that happiness, eudaimonia, was vulnerable to the vicissitudes of fortune. No matter how good a person you might be, the world could still crap on you. Plato knew perfectly well that the world could crap on the just man, but he thought that could do no harm. The crap could not blight or diminish the excellence of the life of a just man. The harmony of the soul persists no matter how discordant the surroundings happen to be. Plato argued as he did perhaps because he so much hoped that, in the long run, the unjust would get their come-uppance. The 20th Century was blessed with a man who held Plato's view. A leading principle of Groucho Marxism is a pithy expression of Plato's ethics: 'Time wounds all heels'. I am sure many of you have heard of the fallacy of deducing what ought to be the case from what is the case. Groucho here indulges in the converse fallacy, deducing from what ought to be the case that, eventually at least, it will be. He derives 'is' from 'ought'.

So far I have wanted to bring out how reflection on living well, being happy, flourishing, leads to some idea of judging or evaluating the life of a person. And the opening remark 'Count no man happy until he is dead' suggests that such an evaluative conception is not a subjective matter; the person whose life is evaluated is not the only one to make such an evaluation. It is possible for the person whose life is under consideration to get it wrong, to suffer from error. A good deal needs to be said about the source of such errors: how much it is a matter of illusion or delusion, of self-deception, of willful avoidance of knowledge, of victimization by deceit, and so on.

The main thing Aristotle and Solon had in mind was the possibility of events occurring after someone's death that bear on any judgment to be made about his or her life. Central concerns and ambitions in someone's life may well be on track at death, but suffer derailment afterward. It may be as clear cut as an earthquake killing a man's entire family at his graveside. Surely, in such a case, we can pity the person, perhaps with the words: 'The poor sonofabitch'. Of course someone may react differently, saying 'Well, he doesn't know and won't find out, no need for pity.' If we think differently on this we are having a more or less serious evaluative, even I would say, ethical disagreement. Maybe it is more like an aesthetic disagreement. I don't really care exactly what sort of disagreement it is. It is not, no matter how you think of it, that one of us is making sense, the other failing to.

I want to focus on one strand in the conceptual fabric we find here: beliefs of great importance to a person, beliefs central to his or her grip on who he or she is and what his or her life amounts to. How much does it matter if such beliefs are false? A way to filter out the relevant beliefs from others, many of which are sure to be false, is to ask whether, and to what extent, a person would be devastated by finding out the truth. By devastation I mean what can lead people to think of their lives as no longer making sense, no longer seeming worthwhile, drained of meaning. I shall also consider the question of the extent to which, if at all, we should disabuse others of such false beliefs if we are placed to do so. What it is to be so placed is itself of interest. Whose business is it? Surely not just that of anybody who knows the relevant truth. That will also involve the issue of how important truth, in the aspect of truthfulness to others, is. That is a nice topic in its own right, though I don't think anybody anymore holds the medieval Christian view, and the view of Immanuel Kant, that lying is never justified.

So there are two questions. First, can not knowing insulate a person from harm, exempt his or her life from being a proper object of pity? Is it right to say that what you don't know doesn't hurt you even if it is something that, had you known it, would have devastated you? Second, what is appropriate if you are placed in a position to inform someone of what will devastate him and refraining from doing so will itself be a refusal of truthfulness on your part? Reflection here is not about the nature of truth, or about the ludicrous idea that there is no such thing as truth; I am taking truth for granted and asking about how much and why it matters.

In his drama The Wild Duck, Henrik Ibsen provides an instance where we readily judge that it would have been better to leave others with false beliefs, in particular beliefs about the relations a man's wife had to a powerful benefactor of hers and his, before and perhaps during the marriage. A child's paternity is rendered uncertain by revelations a friend of the husband is determined to make. The friend is zealous about openness and honesty in marriage. It is all a disaster; the daughter, 14 years old, an attractive and loving child, kills herself out of an induced need to prove she loves her father via a sacrifice of something precious to her. The zealous friend has urged the girl to kill a cherished wild duck that is kept in the attic. The girl applies the advice, but not to the duck, shooting herself in the way she is told to shoot the duck, one bullet in just the right spot on her breast.

In this play, one cannot say that the man and his family are flourishing; they are not doing very well, and he is as asinine in his way as his zealous friend is in his. There is no doubt that, as one of the likeable characters in the play, a Dr. Relling, says, it would have been better to leave them as they were. Relling thinks that it is generally better to leave people with their lies; he speaks of helping people to construct or hold on to lies that enable them to carry on.

Still we do have here a case of truth being devastating, or received as devastating by the self-dramatizing father. A feature of the play is that the main victim of the destructive force of truth is the child, who is not mired in false belief. She is devoted to her father, or the man who may not be, but may be, her father. But he cruelly rejects her when he learns he is probably not her father. The mother is uncertain and, 15 years having passed with her as a caring and dutiful wife and mother, she cannot see why it matters. Few are those who do not stand on her side.

I do not think that we can say of Hjalmar, the father, anything like 'poor fellow'; something more like 'silly clod' comes to mind. He is not really a victim of deceit, though it is true that his wife was not completely open with him. The only character to be pitied is the child. With Hjalmar, we are, or I am, disposed to deplore his agonies over not being a biological father when he has been quite a good father to the child and she a devoted and charming child. Anyway, we do not have here a case of a life's worth or happiness blighted by illusion or delusion. Hjalmar's discovery of the truth about his benefactor and his wife, and his doubt about his paternity, do not blight a good life; at most a somewhat shabby life has its brightest patch removed: the attractive, lively and devoted daughter, Hedvig.

We certainly get a case here for saying 'What they don't know is not hurting them'. And for saying that Hjalmar, with his false belief, which was somewhat the product of avoidance of evidence, would have been better off, lived a happier life, without the intrusion of his friend's zealousness. Dr. Relling's therapy of letting people live with their falsehoods, with their lies as he puts it, is vindicated.

I now have to resort to my own, perhaps perverse, aptitude for fiction for further cases. There are two.

Here is one scenario: George, a very successful salesman, is on his deathbed at the relatively early age of 60. His wife Mabel and best friend, Fred, are at his side, each holding one of George's hands. George is dying in a glow of warm conviction and remembrance of the loyalty of these two, the most important people in his life. Tender farewells are exchanged and George expires. Fred and Mabel look across at each other lustfully, shovel George out of the bed, and proceed to vigorous humping in the deathbed, a corpse sprawled in the centre of the room. We learn from their talk that, whenever possible - and George was, after all, a traveling salesman - they have indulged in these orgiastic delights for 30 years.

The words that come to mind as I envisage the scene and attend to the crumpled body on the floor are those I have used earlier: 'The poor sonofabitch'. If George is not an appropriate object of pity, I do not understand what pity is. But as I said earlier, if you are inclined to withhold pity in the light of the thought that George never knew of the duplicity of Mabel and Fred and was generally pleased with his work and life, then you and I are having some kind of ethical disagreement. I agree with Thomas Nagel, who says that it is hard to see how it can be that knowing something could be harmful, as devastating as some things can be, and yet be harmless if unknown.

Maybe our disagreement belongs to ethics somewhat indirectly. If you think no harm has been done because Fred and Mabel succeeded in hiding their disloyalty, mustn't you think that they were not acting wrongly or badly? A strict utilitarian must, I think, hold that since their disloyalty did not, in fact, cause George to suffer, Fred and Mabel did no wrong. Maybe their haste in having at it in the deathbed is a kind of disrespect for the dead, but their pleasure might easily outweigh that, if such things can be weighed against each other. Strict utilitarianism has anyway to stretch to deplore disrespect for the dead.

Here is my second story: Giovanni is dying. He is a prosperous and respected Sicilian winegrower. His prosperity was much helped by his two sons, who emigrated to New York many years ago. Giovanni worked hard when the boys were young to get them educated and gain admission to Harvard, where they both studied law and joined large corporations in NYC. The sons visited Sicily frequently and invested in their father's winery, which prospered. Giovanni took in his best friend and neighbour, Luigi, as a partner, absorbing Luigi's smaller, adjacent winery. The partners have been fortunate to avoid the depredations of the Mafia, and their winery has flourished, as has their friendship.

The sons and their families are expected soon for a final visit, Luigi having telephoned with the sad news of impending death some weeks ago. Giovanni is so fragile that his doctor forbids him a telephone on his sickbed. Luigi has assured him that he will take any calls and contact the sons again if they do not arrive as expected. They do not, and Luigi pursues the matter. What he learns is appalling. The sons, while indeed having studied at Harvard, were enticed into being lawyers for the Mafia, eventually into occasional assassinations of public officials and businessmen who were threats to Mafia operations. One of the inducements to getting entangled with the Mafia was assurance that their father's business in Sicily would never be touched.

Luigi uncovers all this along with the devastating news that the sons will almost certainly be sentenced to death in the electric chair, their crimes having finally caught up with them. Giovanni is sure that Luigi has looked into the tardiness of the sons, and when Luigi next visits the sickbed, about to be a deathbed, Giovanni asks him what is happening: why are the sons not here to bid farewell to their father? It is, after all, their great success in America and their generosity to him and their own flourishing lives that have done so much to make his own life worthwhile, especially given how much he worked and sacrificed early in his life to set them on a hopeful path.

Luigi is aware of all this and aware that another thing in Giovanni's life, and his own, has been their friendship and trusting partnership. No joy or sorrow, no elation or frustration, no satisfaction or disappointment, no problem or plight or crisis was ever anything but shared or communicated between them. Luigi knows it will devastate Giovanni to learn about his sons, especially their impending executions. But Giovanni has asked him what is happening. Luigi has no doubt that if Giovanni realizes or finds out that he is being lied to by Luigi, that will itself have some devastating force. He has to decide whether or not to lie to his dying friend.

I have no good idea as to how to finish the story. One thing I am sure of, though, is that, whether Giovanni learns the truth or not, I could not say that he had a good or a happy or a flourishing life. The prospering of his winery was, unbeknownst to him, fostered by crime. He is surely as proper an object of pity as was George in my earlier tale. As for Luigi, well, I would not like to be in his shoes. I do not see how he can avoid doing something he will find it hard to live with.

I have given cases where it makes sense to pity a person and to evaluate a life more or less negatively, in spite of the person involved being ignorant of or deceived about the relevant facts. So it can be that a life fails to be a happy one, a flourishing life, despite the likelihood that the one who lives the life would think of it as a good life. But can I generalize? Can I maintain that for any life, the falsehood, illusoriness or even delusiveness of deeply important beliefs, despite the falsehood never being revealed, is a blight on that life, a basis for pitying the person embedded in falsehood?

by Lloyd Reinhardt, Philosopher | Read more:
Image: via

Rickie Fowler, 2016 Ryder Cup at Hazeltine National Golf Club
via:
[ed. Don't worry, Rickie's doing ok.]

Ice Poseidon’s Lucrative, Stressful Life as a Live Streamer

A strange creature stalks Los Angeles, hunting for content. He is pale and tall, as skinny as a folded-up tripod. His right hand holds a camera on a stick, which he waves like an explorer illuminating a cave painting. His left hand clutches a smartphone close to his face. Entering a restaurant, he wraps his left wrist around the door handle, so that he can pull the door open while still looking at the phone.

Chaos follows him. The restaurant starts getting a lot of unusual phone calls. The callers say that they are Paul Denino’s father or his mother and they urgently need to talk to their son, who is autistic. An employee asks the man if he is Paul Denino. He says yes, but then explains that the callers are pranking him. He is live-streaming through the camera on the stick, and some of the thousands of people watching are trying to fuck with him. The calls grow more disturbing. Callers claim that Denino is a pedophile trying to lure children to his lair, or that the large backpack he’s wearing contains a bomb, rather than a two-thousand-dollar cellular transmitter. The restaurant manager asks Denino to leave. Almost immediately, the restaurant’s rating on Yelp begins to plummet. Dozens of one-star reviews flood the page within seconds. They’re full of obscure references to Denino and to the Purple Army, the name of the legion of virtual fans who follow him wherever he goes.

Denino is twenty-three years old, and his job is broadcasting his life to thousands of obsessed viewers. He wakes up at two in the afternoon, then streams for between two and six hours at a time for the rest of the day. When I first met him, in January, he said that he was on track to make sixty thousand dollars that month, through sponsorships and donations from viewers. On average, ten thousand people watch him at any given time, though once, when he staged a boxing match between viewers in his ex-girlfriend’s back yard, sixty-five thousand tuned in. He sometimes arranges elaborate events for his stream, but more often he does things that a typical twenty-three-year-old does, such as go on dates, barhop, and smoke weed in his apartment. Even then, he is not simply recording his daily life. He is performing the role of a foulmouthed trickster called Ice Poseidon. If you watch his stream, you might see Ice Poseidon using boorish lines to pick up women on the street, or rolling around Los Angeles in a giant transparent ball, or tearfully recounting his lonely childhood. Ice Poseidon’s catchphrase is “Fuck it, dude.” When I watch him, I find myself cringing from disgust, secondhand embarrassment, and a sense of impending disaster. I also can’t help but laugh sometimes.

Denino is the most notorious of what are known as I.R.L. streamers. The I.R.L., or “in real life,” distinguishes them from people who broadcast themselves playing video games, which is what Denino did until he decided to take his act out of his bedroom. Now he treats the world as a game. The goal is to generate entertainment for his viewers. He keeps one eye on his phone, where a chat room fills with comments. If his viewers enjoy what he is doing, they post laughing emojis and cries of “content!” If they don’t, they write “ResidentSleeper,” a reference to one of the most boring streaming moments of all time, in which a gamer fell asleep at his computer. The ResidentSleeper thing really gets to Denino. His viewers love to needle him—to “trigger” him, as they say—and they know his vulnerabilities as well as anyone in his life does. (...)

The fact that people can now broadcast live video from wherever they are seems like a relatively small development in the history of technology, but for streaming fans it is as exciting as the invention of television. Live streamers laud the way the medium allows them to connect directly with their viewers. Most streams are accompanied by a chat room, where viewers can offer instant feedback, and a stream often plays out as an extended conversation between the streamer and the audience. To Denino and his fans, social media, once hailed as the gold standard of authenticity, now appears artificial. Denino told me that he hates the whitewashed, feel-good version of life portrayed in the Instagram posts of online influencers. Every moment of uncontrolled chaos that unfolds on Ice Poseidon’s stream emphasizes that he is showing his viewers how things really are.

Live streaming began in 1996, when a nineteen-year-old college student named Jennifer Ringley started broadcasting grainy images of her life in her dorm room. Nothing very interesting happened at first, but millions of people tuned in; she appeared on Letterman and in countless news stories as a herald of a new age of transparency. Professional live streaming was born in 2011, with the launch of Twitch, the video-game streaming platform. Twitch offered a number of ways to monetize a live stream and attracted a huge audience of young gamers who, to their parents’ confusion, wanted not only to watch people play video games for hours but also to give money to their favorite streamers in the form of subscriptions and tips. Today, top streamers can make millions of dollars a year. The best live streamers please their audience while maintaining the creative freedom to grow, though the fact that fickle viewers are also a live streamer’s investors makes this balance more precarious than it is in perhaps any other form of entertainment. Simply changing the type of game they play has sent many streamers’ audience numbers, and income, tumbling.

Successful streamers often rely as much on their personalities as on their skill at playing video games. Like everything else, Denino has taken this idea to the extreme. As he has moved away from games, he has turned his life into a self-produced reality show. Denino’s viewers know his home address and his blood pressure. Everyone in his life is part of the show. “If I don’t know what to do on a certain day, I’ll just call someone over and we can develop their character,” he told me. These characters are given names like Anything4Views, Hampton Brandon, Salmon Andy, Mexican Andy, Asian Andy, and Motorcycle Andy. (Andy is a nickname that his viewers like to apply to minor characters.) His fans make memes about his parents, his former employers, and his childhood photos. Denino believes that such transparency will make his viewers feel invested in the never-ending journey of his life rather than just in the content he can produce. In a little more than two years, they have watched Ice Poseidon go from a gamer who lived in his parents’ house and worked as a line cook at an Italian restaurant to a geek rock star whose life is awash in Monster Energy drink, pot smoke, and hot chicks.

If your job is to constantly share your life, your life becomes a product that you are selling, and every moment, even the worst one, can be a lucrative opportunity to please your audience. Denino often lands at the top of a message board called LivestreamFails, which functions as a micro-TMZ for the personal lives of live-streaming celebrities. Last year, the biggest story on LivestreamFails was the revelation by a popular video-game streamer called Dr. DisRespect that he was cheating on his wife. Dr. DisRespect posted a tearful apology and disappeared for months. Streamers claim to hate drama, but they also understand that a popular post on LivestreamFails can be great for their numbers. “Drama equals views equals money,” Denino told me. In February, when Dr. DisRespect made a triumphant return, it was one of the most watched live streams in history, with about three hundred and eighty thousand viewers. (...)

Denino has lived in Los Angeles for a year and a half, and during that time he has been kicked out of six apartments. The moves have been exhausting for him, but for viewers they offer an easy way to delineate eras in the Ice Poseidon show—“seasons,” as one put it to me. Denino’s first apartment was a two-bed-two-bath in a brand-new building in the heart of Hollywood. “I just Googled apartments in L.A., and it was literally the first one that popped up,” he told me. The prominent placement on search engines is probably related to the fact that the building was reportedly once the home of Logan Paul, the popular YouTuber. It is now a mecca for online-content creators, and it seemed like the perfect environment for Denino. “Most of the people who lived there were loud as fuck, did YouTube stuff,” Denino said. “We would throw balls of bread off the balcony to see how far we could throw it.” His viewers recall the era fondly. But Denino was kicked out after six months. “The building’s office was getting mass-called by my viewers every day, just non-stop, like ‘Hey, we know Paul Denino lives there. He’s burning down his apartment.’ ”

The biggest problem was the swattings. People would call 911 with false reports of hostage situations or bomb threats, in order to get a swat team sent to Denino’s apartment. Swatting has its origins in the subculture of Internet trolls, where it is a favorite tactic for harassing and bullying people. Swatting has exploded in popularity in recent years, owing in part to the rise of live streaming. Previously, the hoaxer would have to imagine his target’s distress when a team of heavily armed police officers broke down his door. But, if the target is broadcasting himself live, the hoaxer can see his handiwork play out in real time.

by Adrian Chen, New Yorker |  Read more:
Image: Siggi Eggertsson
[ed. Ever wonder what everyone's looking at, noses glued to their smartphones all the time? Artificial life.]

How Smart TVs in Millions of U.S. Homes Track More Than What’s On Tonight

The growing concern over online data and user privacy has been focused on tech giants like Facebook and devices like smartphones. But people’s data is also increasingly being vacuumed right out of their living rooms via their televisions, sometimes without their knowledge.

In recent years, data companies have harnessed new technology to immediately identify what people are watching on internet-connected TVs, then using that information to send targeted advertisements to other devices in their homes. Marketers, forever hungry to get their products in front of the people most likely to buy them, have eagerly embraced such practices. But the companies watching what people watch have also faced scrutiny from regulators and privacy advocates over how transparent they are being with users.

Samba TV is one of the bigger companies that track viewer information to make personalized show recommendations. The company said it collected viewing data from 13.5 million smart TVs in the United States, and it has raised $40 million in venture funding from investors including Time Warner, the cable operator Liberty Global and the billionaire Mark Cuban.

Samba TV has struck deals with roughly a dozen TV brands — including Sony, Sharp, TCL and Philips — to place its software on certain sets. When people set up their TVs, a screen urges them to enable a service called Samba Interactive TV, saying it recommends shows and provides special offers “by cleverly recognizing onscreen content.” But the screen, which contains the enable button, does not detail how much information Samba TV collects to make those recommendations.

Samba TV declined to provide recent statistics, but one of its executives said at the end of 2016 that more than 90 percent of people opted in.

Once enabled, Samba TV can track nearly everything that appears on the TV on a second-by-second basis, essentially reading pixels to identify network shows and ads, as well as programs on Netflix and HBO and even video games played on the TV. Samba TV has even offered advertisers the ability to base their targeting on whether people watch conservative or liberal media outlets and which party’s presidential debate they watched.

The big draw for advertisers — which have included Citi and JetBlue in the past, and now Expedia — is that Samba TV can also identify other devices in the home that share the TV’s internet connection.

Samba TV, which says it has adhered to privacy guidelines from the Federal Trade Commission, does not directly sell its data. Instead, advertisers can pay the company to direct ads to other gadgets in a home after their TV commercials play, or one from a rival airs. Advertisers can also add to their websites a tag from Samba TV that allows them to determine if people visit after watching one of their commercials.

If it sounds a lot like the internet — a company with little name recognition tracking your behavior, then slicing and dicing it to sell ads — that’s the point. But consumers do not typically expect the so-called idiot box to be a savant.

“It’s still not intuitive that the box maker or the software embedded by the box maker is going to be doing this,” said Justin Brookman, director of consumer privacy and technology policy at the advocacy group Consumers Union and a former policy director at the Federal Trade Commission. “I’d like to see companies do a better job of making that clear and explaining the value proposition to consumers.” (...)

Samba TV’s language is clear, said Bill Daddi, a spokesman. “Each version has clearly identified that we use technology to recognize what’s onscreen, to create benefit for the consumer as well as Samba, its partners and advertisers,” he added.

Still, David Kitchen, a software engineer in London, said he was startled to learn how Samba TV worked after encountering its opt-in screen during a software update on his Sony Bravia set.

The opt-in read: “Interact with your favorite shows. Get recommendations based on the content you love. Connect your devices for exclusive content and special offers. By cleverly recognizing onscreen content, Samba Interactive TV lets you engage with your TV in a whole new way.”

The language prompted Mr. Kitchen to research Samba TV’s data collection and raise concerns online about its practices.

Enabling the service meant that consumers agreed to Samba TV’s terms of service and privacy policy, the opt-in screen said. But consumers couldn’t read those unless they went online or clicked through to another screen on the TV. The privacy policy, which provided more details about the information collected through the software, was more than 4,000 words, and the terms exceeded 6,500 words.

“The thing that really struck me was this seems like quite an enormous ask for what seems like a silly, trivial feature,” Mr. Kitchen said. “You appear to opt into a discovery-recommendation service, but what you’re really opting into is pervasive monitoring on your TV.”

by Sapna Maheshwari, NY Times | Read more:
Image: uncredited

Plugspreading


John Atkinson
via:

Wednesday, July 4, 2018

Everything You Love Will Be Eaten Alive

The Efficient City’s war on the Romantic City…

Here are two different visions for what a city ought to be. Vision 1: the city ought to be a hub of growth and innovation, clean, well-run, high-tech, and business-friendly. It ought to attract the creative class, the more the better, and be a dynamic contributor to the global economy. It should be a home to major tech companies, world-class restaurants, and bold contemporary architecture. It should embrace change, and be “progressive.” Vision 2: the city ought to be a mess. It ought to be a refuge for outcasts, an eclectic jumble of immigrants, bohemians, and eccentrics. It should be a place of mystery and confusion, a bewildering kaleidoscope of cultures and classes. It should be a home to cheap diners, fruit stands, grumpy cabbies, and crumbling brownstones. It should guard its traditions, and be “timeless.”

It should be immediately obvious that not only are these views in tension, but that the tension cannot ever be resolved without one philosophy succeeding in triumphing over the other. That’s because the very things Vision 2 thinks make a city worthwhile are the things Vision 1 sees as problems to be eliminated. If I believe the city should be run like a business, then my mission will be to clear up the mess: to streamline everything, to eliminate the weeds. If I’m a Vision 2 person, the weeds are what I live for. I love the city because it’s idiosyncratic, precisely because things don’t make sense, because they are inefficient and dysfunctional. To the proponent of the progressive city, a grumpy cabbie is a bad cabbie; we want friendly cabbies, because we want our city to attract new waves of innovators. (Hence a meritocratic star-rating system for ride-share app drivers is unquestionably a good thing.) To the lover of the City of Mystery, brash personalities are part of what adds color to life. In the battle of the entrepreneurs and the romantics, the entrepreneurs hate what the romantics love, and the romantics hate what the entrepreneurs love. In the absence of a Berlin-like split, there can be no peace accord, it must necessarily be a fight to the death. What’s more, neither side is even capable of understanding the other: a romantic can’t see why anyone would want to clean up the dirt that gives the city its poetry, whereas an entrepreneur can’t see why anyone would prefer more dirt to less dirt.

Vanishing New York: How A Great City Lost Its Soul, based on the blog of the same name, is a manifesto for the Romantic Vision of the city, with Michael Bloomberg cast as the chief exponent of the Entrepreneurial Vision. “Nostalgic” will probably be the word most commonly used to capture Jeremiah Moss’s general attitude toward New York City, and Moss himself embraces the term and argues vigorously for the virtues of nostalgia. But I think in admitting to being “nostalgic,” he has already ceded too much. It’s like admitting to being a “preservationist”: they accuse you of being stuck in the past, and you reply “Damn right, I’m stuck in the past. The past was better.” But this isn’t simply about whether to preserve a city’s storied past or charge forward into its gleaming future. If that were the case, the preservationists would be making an impossible argument, since we’re heading for the future whether they like it or not. It’s also about different conceptions of what matters in life. The entrepreneurs want economic growth, the romantics want jazz and sex and poems and jokes. To frame things as a “past versus future” divide is to grant the entrepreneurs their belief that the future is theirs.

Moss’s book is about a city losing its “soul” rather than its “past,” and he spends a lot of time trying to figure out what a soul is and how a city can have one or lack one. He is convinced that New York City once had one, and increasingly does not. And while it is impossible to identify precisely what the difference is, since the quality is of the “you know it when you see it” variety, Moss does describe what the change he sees actually means. Essentially, New York City used to be a gruff, teeming haven for weirdos and ethnic minorities. Now, it is increasingly full of hedge fund managers, rich hipsters, and tourists. Tenements and run-down hotels have been replaced with glass skyscrapers full of luxury condos. Old bookshops are shuttered, designer clothes stores in their place. Artisanal bullshit is everywhere, meals served on rectangular plates. You used to be able to get a pastrami and a cup of coffee for 50 cents! What the hell happened to this place?

It’s very easy, as you can see, for this line of thought to rapidly slip from critiquing to kvetching, and Moss does frequently sound like a cranky old man. But that’s half the point, he wants to show us that the cranky old men are not crazy, that we should actually listen to them. It’s not a problem with them for complaining that the neighborhoods of their childhood are being destroyed, it’s a problem with us for not caring about that destruction. (See Leonard Nimoy talking about the tragic redevelopment of his vibrant multiethnic childhood neighborhood in Boston.) Moss is a psychoanalyst, and he does not see “nostalgia” as irrational, but as a healthy and important part of being a person. We are attached to places, to the memories we make in them, and if you bulldoze those places, if you tear away what people love, you’re causing them a very real form of pain.

Moss loves a lot of places, and because New York City is transitioning from being a city for working-class people to a city for the rich, he is constantly being wounded by the disappearance of beloved institutions. CBGB, the dingy punk rock music club where the Ramones and Patti Smith got their start, is forced out after its rent is raised to $35,000 a month. Instead, we get a commemorative CBGB exhibit at the Met, with a gift shop selling Sid Vicious pencil sets and thousand-dollar handbags covered in safety pins. The club itself becomes a designer clothing store selling $300 briefs. The ornate building that once housed the socialist Jewish Daily Forward newspaper, the exterior of which featured bas-relief sculptures of Marx and Engels, is converted to luxury condos. Its ethnic residents largely squeezed out, bits of Little Italy are carved off and rebranded as “Nolita” for the purpose of real estate brochures, since—as one developer confesses—the name “Little Italy” still connotes “cannoli.” A five-story public library in Manhattan, home to the largest collection of foreign-language books in the New York library system, is flattened and replaced with a high-end hotel (a new library is opened in the hotel’s basement, with hardly any books). Harlem’s storied Lenox Lounge is demolished, its stunning art-deco facade gone forever. Rudy Giuliani demolishes the Coney Island roller coaster featured in Annie Hall. Cafe Edison, a Polish tea house, is evicted and replaced with a chain restaurant called “Friedman’s Lunch,” named after right-wing economist Milton Friedman. (I can’t believe that’s true, but it is.) Judaica stores, accordion repairmen, auto body shops: all see their rent suddenly hiked from $3,000 to $30,000, and are forced to leave. All the newsstands in the city are shuttered and replaced; they go from being owner-operated to being controlled by a Spanish advertising corporation called Cemusa.
Times Square gets Disneyfied, scrubbed of its adult bookstores, strip joints, and peep shows. New York University buys Edgar Allan Poe’s house and demolishes it. (“We do not accept the views of preservationists who say nothing can ever change,” says the college’s president.) (...)

This complaint against the demise of the mom-n-pops and the takeover of chain retailers is now decades old. And it has its flaws: sometimes labor practices can be better at large corporations than at the celebrated “small business,” because there is actually recourse for complaints against abusive managers. If the only person above you is the owner, and the owner is a tyrant, there’s not much you can do. Still, the core critique is completely valid. Chain retail exists to make the world more efficient, but ends up turning the world uninteresting. I have actually noticed that I am less inclined to travel because of this. Why would I go to New York, when I can see a Starbucks right here? Monoculture is such a bleak future; local variation is part of what makes the world so wonderful. You can measure whether a place is succeeding by whether it’s possible to write a good song or poem about it. It’s almost literally impossible to write a good non-ironic poem about an Applebee’s. Compare that with nearly any greasy spoon or dive bar. (...)

The effort to replace poor people with rich people is often couched in what Moss calls “propaganda and doublespeak.” One real estate investment firm claims to “turn under-achieving real estate into exceptional high-yielding investments,” without admitting that this “under-achieving real estate” often consists of people’s family homes. (Likewise, people often say things like “Oh, nobody lives there” about places where… many people live.) One real estate broker said they aspired to “a well-cultivated and curated group of tenants, and we really want to help change the neighborhood.” “Well-cultivated” almost always means “not black,” but the assumption that neighborhoods actually need to be “changed” is bad enough on its own.

In fact, one of the primary arguments used against preservationists is the excruciating two-word mantra: cities change. Since change is inevitable and desirable, those who oppose it are irrational. Why do you hate change? You don’t believe that change is good? Because it’s literally impossible to stop change, the preservationist is accused of being unrealistic. Note, however, just how flimsy this reasoning is: “Well, cities change” is as if a murderer were to defend himself by saying “Well, people die.” The question is not: is change inevitable? Of course change is inevitable. The question is what kinds of changes are desirable, and which should be encouraged or inhibited by policy. What’s being debated is not the concept of change, but some particular set of changes.

Even “gentrification” doesn’t describe just one thing. It’s a word I hate, because it captures a lot of different changes, some of which are insidious and some of which seem fine. There are contentious debates over whether gentrification produces significant displacement of original residents, and what its economic benefits might be for those residents. The New York Times chided Moss, calling him “impeded by myopia,” for failing to recognize that those people who owned property in soon-to-be-gentrified areas could soon be “making many millions of dollars.” But that is precisely Moss’s point: he is concerned with the way that the pursuit of many millions of dollars erodes the very things that make a city special, that give it life and make it worth spending time in. A pro-gentrification commentator, in a debate with Moss, said that he didn’t really see any difference, because “people come for the same reason they always have: to make as much money as possible.” That’s exactly the conception that Moss is fighting. People came to New York, he says, because it was a place worth living in, not because they wanted to make piles of money.

by Nathan J. Robinson, Current Affairs |  Read more:
Image: Jeremiah Moss
[ed. Companion piece to the post following this one re: Seattle. See also: Is Housing Inequality the Main Driver of Economic Inequality?]

Is Bezos Holding Seattle Hostage? The Cost of Being Amazon's Home

However they see Amazon, for good or ill, residents of the fastest-growing city in the US largely agree on the price Seattle has paid to be the home of the megacorporation: surging rents, homelessness, traffic-clogged streets, overburdened public transport, an influx of young men in polo shirts and a creeping uniformity rubbing against the city’s counterculture.

But the issue of Jeff Bezos’s balls is far from settled. “Have you seen the Bezos balls?” asked Dave Christie, a jewellery maker at a waterfront market who makes no secret of his personal dislike for the man who founded and still runs Amazon. “No one wanted them. They’ve disfigured downtown. Giant balls say everything about the man. Bezos is holding Seattle hostage.”

It’s not strictly true to say everyone is against the three huge plant-forested glass spheres at what Amazon calls its “campus” in the heart of the city. The Bezos balls, as the conservatories are popularly known, are modelled on the greenhouses at London’s Kew Gardens, feature walkways above fig trees, ferns and rhododendrons, and provide hot-desking for Amazon workers looking for a break from the neighbouring office tower.

“They are absolutely gorgeous. There was nothing in that area 10 years ago,” said Jen Reed, selling jerky from another market stall. “I don’t hate Amazon the way that a lot of people hate them. Seattle has changed a lot. My rent’s gone from $500 to $1,000, but outside of that Amazon have been beneficial. It’s give and take, and anyway we invited them here.”

But even those sympathetic to the biggest retailer in the US are questioning whether there has been more take than give. Amazon has long been accused of stretching the city’s transit and education systems, and its highly paid workers have driven up prices of goods and housing.

The resentful murmur recently became a roar after Amazon reacted to the city’s latest tax proposal, which would have charged large businesses an annual $275 per employee, by resorting to what critics call blackmail. In mid-June, less than a month after unanimously passing the tax, Seattle’s council abandoned it in the face of threats from the corporation. The tension has sharpened the debate about whether the city can retain its identity as one of the most progressive in the country, or is destined to be just another tech hub.

Ironically, given Amazon’s much-publicised “city sweepstakes”, in which municipalities in North America are competing to land the company’s second headquarters, Seattle did not reach a Faustian pact with Amazon to lure it in the first place. The city gave no tax breaks and passed no anti-union laws, although the fact that Washington state law bars income tax was certainly appealing. The council did encourage the firm’s massive growth, however, with accommodations on building regulations that helped drive $4bn in construction.

Amazon has remade Seattle in many ways beyond new buildings. The city’s population has surged by about 40% since the company was founded, and nearly 20,000 people a year are moving there, often drawn by the company and its orbit. The tech industry has brought higher-paying jobs, with an average salary of about $100,000. But that is double what half the workers in the city earn, and those workers’ spending power is dropping sharply, creating a clear economic divide between longtime residents and the new arrivals.

The better-paid have driven up house prices by 70% in five years, and rents with them, as they suck up the limited housing stock. The lower-paid are being forced out of the city, into smaller accommodation or on to the streets. The Seattle area now has the highest homeless population in the country after New York and Los Angeles, with more than 11,000 people without a permanent home, many living in tent camps under bridges, in parks and in cemeteries.

“It’s incredibly difficult to find housing in Seattle now,” said Nicole Keenan-Lai, executive director of Puget Sound Sage, a Seattle thinktank focused on low-income and minority communities. “Two years ago a study came out that said 35% of Seattle’s homeless population has some college or a college degree.”

John Burbank of the Economic Opportunity Institute said there is a direct link between the surge in highly paid jobs and the number of people forced on to the street.

“There’s an incredible correlation between the increase in homelessness and the increase in the number of people who have incomes in excess of $250,000,” he said. “That has grown by almost 50% between 2011 and 2017. The population of homeless kids in the Seattle public schools has grown from 1,300 kids to 4,200.” (...)

This is not how large numbers of people in Seattle think things should work. They argue that Amazon should contribute to upgrading a transport system that’s struggling under the influx it created, and improving schools that provide the educated workforce the company benefits from.

“It is sort of a bipolar relationship because we do have a progressive city in some respects, we have a progressive city council in some respects, and then we have an environment that embraces individual wealth,” said Burbank.

“I think Amazon’s attitude has to do with the difference between social liberties and economic equality. Bezos was helpful in the campaign for same-sex marriage, but he also put $100,000 in opposition to the [initiative on introducing an] income tax that we ran in our state in 2010.

“So if it has to do with personal freedom, that’s OK. But if it has to do with actually trying to create a shared quality of life which entails taxation of the affluent or higher taxation, that’s not OK. And so this is a really good city for him because we have a lot of personal freedom and we have no taxation. Of course chickens are going to come home to roost at some point.”

Bezos’s company has arguably done much to erode the liberal and progressive culture of the city that first attracted him. Unlike other locally based giant corporations such as Microsoft, Starbucks and Boeing, Amazon planted itself in the heart of the city, and the influx of well-paid tech workers has changed the feel of Seattle. Keenan-Lai sees it in the erosion of the identity of her old neighbourhood on the city’s Capitol Hill and the disappearance of older, quirky restaurants, driven out by newer, more polished places.

“I can’t begrudge people moving to try to find opportunity,” she said. “But I do hear a lot of people say ‘I don’t want those programmers coming to Seattle’. It does create a lot of tension. Amazon represents both innovation and progress, and also dystopian fears for a lot of folks.”

Reed, at the market stall, added: “It’s definitely weird when you go into a dive bar that used to have bike gangs and now everyone’s in a polo shirt.”

by Chris McGreal, The Guardian | Read more:
Image: Ted S. Warren/AP
[ed. I don't like to drive near Seattle, let alone through it anymore.]