Monday, September 12, 2016

Wells Fargo Exec Who Headed Phony Accounts Unit Collected $125 Million

[ed. So, just to get this straight: 5,300 employees were fired over a five-year period even though the criminal practices they allegedly were fired for were still being actively encouraged by management? How did that work? And the company could have saved $45 million by firing this "sandbagger-in-chief" but let her retire instead, thus protecting her severance payout? And the company explains this by saying it's because of their consistent "commitment to customers"? See also: Why It’s Unlikely Anyone Will Go to Jail Over Wells Fargo’s Massive Fraud Scheme.]

Wells Fargo & Co’s “sandbagger”-in-chief is leaving the giant bank with an enormous pay day—$124.6 million.

In fact, despite beefed-up “clawback” provisions instituted by the bank shortly after the financial crisis, and the recent revelations of massive misconduct, it does not appear that Wells Fargo is requiring Carrie Tolstedt, the Wells Fargo executive who was in charge of the unit where employees opened more than 2 million largely unauthorized customer accounts—a seemingly routine practice that employees internally referred to as “sandbagging”—to give back any of her nine-figure pay.

On Thursday, Wells Fargo agreed to pay $185 million, including the largest penalty ever imposed by the Consumer Financial Protection Bureau, to settle claims that it defrauded its customers. The bank’s shareholders will ultimately have to swallow the cost of that settlement. The bank also said it had fired 5,300 employees over five years related to the bad behavior.

Tolstedt, however, is walking away from Wells Fargo with a very full bank account—and praise. In the July announcement of her exit, which made no mention of the soon-to-be-settled case, Wells Fargo’s CEO John Stumpf said Tolstedt had been one of the bank’s most important leaders and “a standard-bearer of our culture” and “a champion for our customers.”

On Thursday, Richard Cordray, the head of the CFPB, said, “It is quite clear that [the actions of Tolstedt’s unit] are unfair and abusive practices under federal law. They are a violation of trust and an abuse of trust.”

A spokesperson for Wells Fargo said that the timing of Tolstedt’s exit was the result of a “personal decision to retire after 27 years” with the bank. The spokesperson declined to comment on whether the bank was considering clawing back Tolstedt’s pay.

In a statement following the settlement, Wells Fargo said, “Wells Fargo reached these agreements consistent with our commitment to customers and in the interest of putting this matter behind us. Wells Fargo is committed to putting our customers’ interests first 100% of the time, and we regret and take responsibility for any instances where customers may have received a product that they did not request.”

Shortly after the financial crisis, big banks in the nation, including Wells Fargo, promised that their top bankers would not be able to keep large paydays if it was found that those rewards were gained through harmful conduct. It was supposed to be the stick to the carrot of Wall Street bonuses. But the latest example of fraud at Wells Fargo shows that the big banks are unwilling to wield those sticks, especially when it comes to their top executives.

It is not clear how closely, if at all, Tolstedt was responsible for or even aware of the widespread abusive tactics at the bank. Neither the CFPB nor the Los Angeles City Attorney’s office, which sued the bank, named Tolstedt directly. Wells Fargo said the 5,300 firings happened over five years, and included managers as well as employees. It’s likely that Tolstedt oversaw at least part of that purge. But in bringing the charges, an official from the CFPB said Wells Fargo had been aware of the behavior for longer than it should have been without putting a stop to it.

What’s more, Tolstedt ran the community banking division of the bank, which included its retail banking and credit card divisions, during the entire period in which the customer abuse was alleged, which goes back to 2011. The CFPB said about three quarters of the unauthorized accounts opened by employees of Wells Fargo were bank deposit accounts. Another 565,000 were unauthorized credit card applications. Tolstedt took over the division in 2008, after Wells Fargo merged with Wachovia during the financial crisis. (...)

Tolstedt was regularly praised for her unit’s ability to get customers to open numerous accounts. For a number of years, Wells Fargo’s proxy statement, which details executive pay, cited high “cross-selling ratios” as a reason that Tolstedt had earned her roughly $9 million in annual pay. For instance, in Wells Fargo’s 2015 proxy statement, the company said that its compensation committee had authorized Tolstedt’s $7.3 million stock and cash bonus that year, because “under her leadership, Community Banking achieved a number of strategic objectives, including continued strong cross-sell ratios, record deposit levels, and continued success of mobile banking initiatives.”

Later that year, the L.A. City Attorney’s office sued the bank because of its sales tactics, saying that many of the abusive practices came from intense pressure on Wells Fargo’s employees to get customers to open up numerous accounts. A separate class action brought by former employees alleges they were fired for not meeting cross-selling goals or for refusing to go along with the aggressive sales tactics.

Earlier this year when Wells Fargo released its annual proxy statement, it once again said that in order to justify her multimillion dollar bonus, Tolstedt’s division had “achieved a number of strategic objectives.” But this time, for the first time in years, cross-selling wasn’t listed as one of them.

When Tolstedt leaves Wells Fargo later this year, on top of the $1.7 million in salary she has received over the past few years, she will be walking away with $124.6 million in stock, options, and restricted Wells Fargo shares. Some of that hasn’t vested yet. But Tolstedt gets to keep all of it because she technically retired. Had she been fired, Tolstedt would have had to forfeit at least $45 million of that exit payday, and possibly more.

by Stephen Gandel, Fortune | Read more:
Image: iStock

Sunday, September 11, 2016

How I Moved On From My What Not To Wear Style

[ed. Never saw What Not To Wear, but I like this woman's perspective. See also: Designers refuse to make clothes to fit American women.]

At my age, if you aren’t Oprah or a man, the stigma of getting older starts to take shape. I’m 47. I am seriously and officially middle-aged. Like, deep into it. I'm here, but heck if I know how I got here so fast. I certainly don’t feel it. In a sense, I’ve grown up without becoming a conventional grown-up. Meaning, I’m not married. I don’t have kids, a second home, or a mortgage. I don’t run an office full of employees. I don’t go to the same job every day. And because of this, sometimes people (myself included) find it hard to measure my value without the traditional milestones of a life lived or a collection of identifiable CliffsNotes at the ready.

There are moments when this unconventional approach to aging feels freeing, and I can romanticize it. Not being able to be labeled so easily has its advantages. I’m a curiosity of sorts. I’m a mystery. An enigma. People seemingly want to know more about me, because I haven’t played by conventional social rules. I don’t “act” my age. That was cute when I was the precocious youngest woman in the room. It can be equally as enticing as the oldest. But my point is that I am usually the oldest in the room these days. Almost all my friends are younger than I am. I simply don’t have as much in common with friends my age who got married and had kids. My younger friends haven’t had to make these life choices yet. They enjoy the kind of freedom that I do. But for all my freedom, as I age, I’m not always sure where or with whom I belong. I’m a new classification of person, really. And like anything new, the unknown can feel a bit scary.

Don’t get me wrong. I'm happy with where I'm at. I am very proud of my career and all that I’ve accomplished. I get joy from work, and that probably keeps me somewhat youthful in disposition. But there seemed to be so much time back when I was 32. It wasn’t this “decision” written in stone that I wouldn’t get married or have kids. Maybe I still will. What has happened is I’ve had to let go of the age when all things were possible (32) and started to look at what is (47). I am part of the first generation of women not truly dependent on anyone. My feminist mom was married, had kids, got divorced, and made a career for herself. Does only being able to check the last box make me a pariah or a pioneer? Because in my opinion, they dress differently, I can tell you that.

One thing I am sure of: I didn’t really start to think about my age until I started to feel that all clothes were not appropriate for me. Now, of course, not all clothes and not all trends are appropriate for everyone. I spent years and years telling everybody yes to this, no to that. But when I started to ask myself if a dress was too short or showed too much skin or the eyeshadow I wanted was a little too bright, I realized my style wasn’t in Kansas anymore. (Or maybe it was only allowed in Kansas. Hard to say. Not sure where I was going with this metaphor.) I’ve been dying to wear that LoveShackFancy pink cotton tiered halter minidress that I got at the sample sale. But every time I put it on I laugh, proof positive that my brain has NOT caught up to my age. She (my young girl brain) still loves too much sparkle and skirts that twirl. But at 47, I really don’t want to go for a Suicide Squad-Harley Quinn-looking pouf skirt. (I know, she wears underwear most of the movie, but you get my point.) For me, that dress simply reinforces that I may not act my age, but I can’t avoid aging. I can make choices that allow me connections with people younger than myself, but I am no longer young.  (...)

It isn’t simply that I no longer play by the gender rulebook, it’s that the rules suddenly feel stacked against me. We still live in a culture where men grow more handsome, distinguished, and even trustworthy with age. Women are not afforded the same. Sociobiologically speaking, in caveman days, if we could no longer bear children our use-value dropped sharply and inevitably. And it was rather convenient that our lifespans were short enough that we would generally die soon after childbearing age anyway. So what’s a modern-day woman, who could live to be 120, going to do with all this extra time in the middle? In the middle of the middle? Current culture leads me to believe I’m supposed to attempt to look 25 for the next 50 years. Even if we’re past bearing children, are we meant to look as if we still can? Is that what Botox and fillers and peels and exercising 11 times a week are meant to do for us? Hang on. What?

What’s so bad about growing older when it’s revered in almost every society except ours? (All of you who hate my gray streak because you say it makes me look "old"? I don’t see why that can’t be a compliment.) Of course we want to stay strong and healthy as long as possible, but young? Why don’t we embrace age for all of its positive attributes? Because to value those things above youth and a particular kind of beauty requires a change in thinking (and seeing) much like changing the way we perceive a woman like me. You don’t need to ask me about my feelings on marriage or children. You can invite me over to dinner parties, even when it’s just married couples. (I have a boyfriend, but even if I didn’t!) Really! It’s okay! You can ask me about politics, the stock market, the best movies of the 1970s, what I think of this election, and of course whether or not you should keep the dress you wore once three years ago. (The answer to that is OF COURSE NOT.) I don’t want to be defined by my age. But I consider it to be a great asset. You can ask me about heartbreak and disappointment, about triumph and fear and courage. I’ve had more experience with it because I’ve had more TIME to have experience. And I want my style to reflect that experience.

by Stacy London, Refinery29 |  Read more:
Image: Winnie Au

Banned in the USA

I’ll never forget that day,” says C.J. Pierce, guitarist for Dallas metal band Drowning Pool, of the day no one can forget. “I was laying in my bunk on the bus — a little hungover from the night before, of course, this is rock ’n’ roll, I had a couple drinks, whatever — and Clint Lowery from Sevendust comes running on my bus: ‘They’re bombing our country!’ I just remember him yelling, ‘They’re bombing our country!’”

Their bands were scheduled to play a show in Wisconsin. “It was an arena. It was a big show. And a lot of people showed up. The fans showed up, so we’re gonna play. I remember, we did a moment of silence. Each band. We still played the show.”

It seemed obvious. Maybe it was. “What else can you do?”

The idea, after the terrorist attacks of September 11, 2001, was to do exactly what you’d done before, and listen to whatever you liked to listen to while you did it. Or the terrorists win.

For example. Jennifer Lopez’s “I’m Real,” featuring Ja Rule, was the no. 1 song in America. Maxwell’s Now was the no. 1 album. Jay Z’s The Blueprint, Bob Dylan’s Love and Theft, and Slayer’s God Hates Us All came out that very day. If you’re inclined to view history through the prism of the music that inadvertently soundtracked it, 9/11 is unbeatable for tragedy, absurdity, and pitch-black comedy. But 15 years later, it’s the songs the radio wouldn’t play that tell you the most.

In the week after the attacks, Clear Channel Communications, the Texas-based radio empire then controlling nearly 1,200 radio stations reaching 110 million listeners nationwide, drew up an informal blacklist of sorts — more than 150 songs its DJs should avoid, so as not to upset or offend anyone. As a Snopes investigation subsequently revealed, adherence was voluntary, and many stations ignored it; at the time, sheepish anonymous employees described it to The New York Times as a corporate memo gone wrong, snowballing thanks to an “overzealous regional executive” who kept adding more songs and soliciting more input. A wayward reply-all email debacle made sentient.

And then the list leaked, and became an invaluable source of mild outrage and desperately needed comic relief.

“Imagine.” “Ruby Tuesday.” “Rocket Man.” Rage Against the Machine’s entire catalog. Seven AC/DC songs, from “TNT” to “Dirty Deeds Done Dirt Cheap.” “American Pie.” “Free Fallin’.” “Rock the Casbah.” “Dancing in the Streets.” “It’s the End of the World as We Know It (And I Feel Fine).” The list is uncomfortably corporate and painfully human, as notable for what it omits (there’s very little country, and no rap) as what it includes. It’s usually presented in alphabetical order, but you can plot the stages, follow the bonkers logic.

Phase one: Contemporary hits from various rock and metal bands, some with violent imagery, some just with the wrong vibe. Metallica. Godsmack. Soundgarden. Third Eye Blind’s “Jumper.” Tool’s “Intolerance.”

Next, pop hits of any era with vaguely confrontational, or war-adjacent, or morbid imagery. Pat Benatar’s “Hit Me With Your Best Shot” and “Love Is a Battlefield.” The Gap Band’s “You Dropped a Bomb on Me.” “Great Balls of Fire.” “Dust in the Wind.” “Knockin’ on Heaven’s Door,” both the Bob Dylan and Guns N’ Roses versions.

Then, songs about aviation: Lenny Kravitz’s “Fly Away.” Red Hot Chili Peppers’ “Aeroplane.” Foo Fighters’ “Learn to Fly.” Steve Miller Band’s “Jet Airliner.” Peter, Paul and Mary’s “Leaving on a Jet Plane.”

Finally, and most hilariously, the irony tier: songs so peaceful and utopian they might scan now as oblique taunts. “What a Wonderful World.” “Bridge Over Troubled Water.” “Ob-La-Di, Ob-La-Da.” And just to be safe, Alanis Morissette’s “Ironic.”

At the time, this story was an uneasy delight — in the teeth of the alleged Death of Irony, with Saturday Night Live and The Onion and all the late-night talk shows respectfully silent, you took your laughs where you could get them, and not much back then was funnier than “things are so bad out there they banned ‘Imagine.’”

But for the active artists who made the list, however unofficial and well-meaning it might’ve been, it had a profound effect. (The company, since rebranded iHeartMedia and still the medium’s dominant power, declined comment.) It bumped singles, stalled albums, derailed promising careers. And to listeners, to the American people, it was a whimsical interlude to the grim dystopia of George W. Bush’s war years, clearly signaling that the major corporations dominating the music industry were susceptible to panicked censorship and misguided patriotism.

It was a clumsy, confusing message to a shocked and thoroughly shook populace just when it needed music the most. Any music. Whatever you’re into. Whatever works. And it’s hard to interpret the act of banning every Rage Against the Machine song as anything but a quick and dirty attempt to manufacture a chilling effect on protest songs overall; indeed, with dismayingly few exceptions, prominent protest songs were sorely lacking as the wars in Afghanistan and Iraq unfolded. Songs that did directly address the national mood tended toward the pandering, the jingoistic, the geopolitically disingenuous: Think Toby Keith’s alarmingly brazen “Courtesy of the Red, White, and Blue (The Angry American),” or Darryl Worley’s syrupy strawman broadside “Have You Forgotten?”

Music was hardly the biggest focal point or the hardest-hit entity after 9/11, but the ripple effect was profound and dismaying all the same. The Clear Channel list was mostly comic relief, but the mood had changed dramatically two years later, when the Dixie Chicks dissed George W. Bush onstage in London, and triggered an instant, near-total blacklist so thorough and visceral they made a movie about it. This list was the first indication that both fallible, well-meaning humans and at least slightly less benevolent megacorporations had enormous influence over who and what you heard and saw. And as the national mood got darker and heavier, that influence grew more sinister in turn.

Here, in their own words, are recollections from five of the artists whose songs made Clear Channel’s 9/11 memo.

by Rob Harvilla, The Ringer | Read more:
Image: Getty Images/Ringer illustration

Saturday, September 10, 2016

Les Paul and Mary Ford



[ed. Mary could kick ass on guitar, too. Check out this medley (especially around 3:00).]

Eddie Colla, Thailand 2014
via:

Apocalypse Tourism

On Aug. 16, the Crystal Serenity set out from Seward, Alaska, carrying 1,700 passengers and crew, and escorted by a comparatively minuscule, 1,800-ton icebreaker. She circled west and north around the Alaska Peninsula and through the Bering Strait before heading east into the maze of straits and sounds that constitute the Northwest Passage. For centuries, explorers tried to establish a sea route here between Europe and Asia. Many met with ruin. A few stranded sailors famously ate their boots—and each other. When the Crystal Serenity emerged free and clear of the maze on Sept. 5, there were no accounts of scurvy or cannibalism, only tales of bingeing on themed buffets and grumbles from shutterbugs about the Arctic’s monotonous landscape.

Operated by Crystal Cruises, the Serenity became on that day the first passenger liner to successfully ply the Northwest Passage. As climate change melts Arctic sea ice twice as fast as models predicted, more and larger ships have made their way along these fatal shores. In 2013, the Nordic Orion was the first bulk cargo carrier to transit the Passage, hauling a load of coal.

Rates on the Serenity started at around $22,000 per person. For that, passengers were anointed, by Slate, “the world’s worst people”—for venturing into a vulnerable ecosystem in a diesel-burning, 69,000-ton behemoth. Canada’s National Post described the cruise as an “invasion” of indigenous communities. Britain’s Telegraph hinted at Titanic hubris, asking, Is this “the world’s most dangerous cruise”?

by Katie Orlinsky and Eva Holland, Bloomberg | Read more:
Image: Katie Orlinsky

Letter of Recommendation: Glass Bricks

Like so many seemingly innocuous things, glass bricks were created to make the world a better place. They were invented at the turn of the 20th century to provide factory workers with more natural light. Soon they moved beyond the world of industry, as Art Deco architects took to their sleek modernism, using them to adorn building exteriors and divide interiors. A 1930 issue of Popular Science speculated about a future in which skyscrapers would be made almost entirely of glass bricks.

Today glass bricks are most closely associated with the decadence of 1980s architecture, which channeled the elegance and streamlined surfaces of Art Deco — an ’80s callback to the retro future imagined in the ’20s. Though they are designed to look pristinely high-gloss forever, time and dirt take their toll on most. There’s something kind of sleazy about them and not just because they show up in the background of so many scenes shot in Encino porn houses. You find them in corner bars and mini-malls, reflecting neon or LED light. They’re often installed to replace windows, providing translucency but keeping the outside firmly out. If they once signaled progress, nowadays glass bricks signify an oddly compelling sort of decline. And for me, they evoke my own Los Angeles childhood. At some point, I became completely obsessed.

My fixation was fueled, in part, by a fear that glass bricks were becoming endangered. Los Angeles has morphed in recent years. Ritzy apartment buildings and trendy hotels went up in formerly decaying areas like downtown and Hollywood, turning decrepit blocks into lavish playgrounds. This year it was announced that the Los Angeles Memorial Sports Arena, in Exposition Park, would be bulldozed to make way for a more modern stadium. The arena had been around since 1959, and in Los Angeles, a building from 1959 is considered fairly historic. But its datedness was exactly what I loved about the arena; it felt like a portal to a Los Angeles I’d never seen, whose ghosts I could sense. It was also full of glass bricks, and I feared they would be ground to crystalline dust.

I started taking long, aimless drives down major city-spanning thoroughfares like Beverly Boulevard and through suburban neighborhoods like Glendale, searching for glass bricks that I could capture with my camera phone. I started to see them everywhere in Los Angeles, and through Twitter, I learned that they were actually everywhere. A #glassbricks hashtag I started as a joke became real when people began sending me photos of their own sightings from all corners of the world: Amsterdam, Tokyo, Zurich. Last week, a friend sent me a shot of some glass bricks near Chernobyl that probably haven’t been touched since 1986. There are glass bricks at the end of the world.

by Molly Lambert, NY Times |  Read more:
Image: Coley Brown

Thursday, September 8, 2016

Wells Fargo Fined for Fraudulently Opening Accounts for Customers

[ed. It never ends. Will anyone higher up ever be prosecuted? One guess. The fact that there's a done deal before the news ever reached the public should tell you everything you need to know - either about the regulators enforcing banking regulations, or the regulations themselves (and Wells Fargo, who somehow thought it wasn't important enough to disclose the investigation in recent regulatory filings).]

For years, Wells Fargo employees secretly issued credit cards without a customer’s consent. They created fake email accounts to sign up customers for online banking services. They set up sham accounts that customers learned about only after they started accumulating fees.

On Thursday, these illegal banking practices cost Wells Fargo $185 million in fines, including a $100 million penalty from the Consumer Financial Protection Bureau, the largest such penalty the agency has issued.

Federal banking regulators said the practices, which date back to 2011, reflected serious flaws in the internal culture and oversight at Wells Fargo, one of the nation’s largest banks. The bank has fired at least 5,300 employees who were involved.

In all, Wells Fargo employees opened roughly 1.5 million bank accounts and applied for 565,000 credit cards that may not have been authorized by customers, the regulators said in a news conference. The bank has 40 million retail customers.

Some customers noticed the deception when they were charged unexpected fees, received credit or debit cards in the mail that they did not request, or started hearing from debt collectors about accounts they did not recognize. But most of the sham accounts went unnoticed, as employees would routinely close them shortly after opening them. Wells has agreed to refund about $2.6 million in fees that may have been inappropriately charged.

Wells Fargo is famous for its culture of cross-selling products to customers — routinely asking, say, a checking account holder if she would like to take out a credit card. Regulators said the bank’s employees had been motivated to open the unauthorized accounts by compensation policies that rewarded them for opening new accounts; many current and former Wells employees told regulators they had felt extreme pressure to open as many accounts as possible.

“Unchecked incentives can lead to serious consumer harm, and that is what happened here,” said Richard Cordray, director of the Consumer Financial Protection Bureau.

Wells said the employees who were terminated included managers and other workers. A bank spokeswoman declined to say whether any senior executives had been reprimanded or fired in the scandal.

“Wells Fargo is committed to putting our customers’ interests first 100 percent of the time, and we regret and take responsibility for any instances where customers may have received a product that they did not request,” the bank said in a statement. (...)

Banking regulators said the widespread nature of the illegal behavior showed that the bank lacked the necessary controls and oversight of its employees. Ensuring that large banks have tight controls has been one of the central preoccupations of banking regulators after the mortgage crisis.

Such pervasive problems at Wells Fargo, which has headquarters in San Francisco, stand out given all of the scrutiny that has been heaped on large, systemically important banks since 2008.

“If the managers are saying, ‘We want growth; we don’t care how you get there,’ what do you expect those employees to do?” said Dan Amiram, an associate business professor at Columbia University.

It is a particularly ugly moment for Wells, one of the few large American banks that have managed to produce consistent profit increases since the financial crisis. Wells has earned a reputation on Wall Street as a tightly run ship that avoided many of the missteps of the mortgage crisis because it took fewer risks than many of its competitors. At the same time, Wells has managed to be enormously profitable, as other large banks continued to stumble because of tighter regulations and a choppy economy.

Analysts have marveled at the bank’s ability to cross-sell mortgages, credit cards and auto loans to customers. The strategy is at the core of modern-day banking: Rather than spend too much time and money recruiting new customers, sell existing customers on new products.

by Michael Corkery, NY Times |  Read more:
Image: Eric Thayer

Thomas Kaltenbach, Bundeswehrlager Kassel, 2007
via:

The Privacy Wars Are About to Get A Whole Lot Worse

[ed. I can't wait until some massive data collection operation like Google or AT&T or Facebook gets their system hacked and suddenly hundreds of millions of people's web surfing habits (and other personal data) are available for searching, by anyone. You know it's going to happen eventually. Then we'll all finally know where the bear shits in the buckwheat.]

It used to be that server logs were just boring utility files whose most dramatic moments came when someone forgot to write a script to wipe out the old ones and so they were left to accumulate until they filled the computer’s hard-drive and crashed the server.

Then, a series of weird accidents turned server logs into the signature motif of the 21st century, a kind of eternal, ubiquitous exhaust from our daily lives, the CO2 of the Internet: invisible, seemingly innocuous, but harmful enough, in aggregate, to destroy our world.

Here’s how that happened: first, there were cookies. People running web-servers wanted a way to interact with the people who were using them: a way, for example, to remember your preferences from visit to visit, or to identify you through several screens’ worth of interactions as you filled and cashed out a virtual shopping cart.
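
[ed. Not in Doctorow's piece, but for anyone curious about the mechanics: a minimal sketch in Python (using Flask) of the benign, first-party version of the cookie he describes - a server remembering a visitor's preference from visit to visit. The routes and the "theme" cookie name are invented purely for illustration.]

from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/")
def home():
    # If the browser sent our cookie back, "remember" the visitor's preference.
    theme = request.cookies.get("theme", "default")
    resp = make_response("Welcome back. Saved theme: " + theme)
    # Refresh the cookie so the preference survives until the next visit (about a year).
    resp.set_cookie("theme", theme, max_age=60 * 60 * 24 * 365)
    return resp

@app.route("/set/<name>")
def set_theme(name):
    # Store a new preference in the visitor's browser.
    resp = make_response("Theme saved: " + name)
    resp.set_cookie("theme", name, max_age=60 * 60 * 24 * 365)
    return resp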

Then, Google and a few other companies came up with a business model. When Google started, no one could figure out how the company would ever repay its investors, especially as the upstart search-engine turned up its nose at the dirtiest practices of the industry, such as plastering its homepage with banner ads or, worst of all, selling the top results for common search terms.

Instead, Google and the other early ad-tech companies worked out that they could place ads on other people’s websites, and that those ads could act as a two-way conduit between web users and Google. Every page with a Google ad was able to both set and read a Google cookie with your browser (you could turn this off, but no one did), so that Google could get a pretty good picture of which websites you visited. That information, in turn, could be used to target you for ads, and the sites that placed Google ads on their pages would get a little money for each visitor. Advertisers could target different kinds of users – users who had searched for information about asbestos and lung cancer, about baby products, about wedding planning, about science fiction novels. The websites themselves became part of Google’s ‘‘inventory’’ where it could place the ads, but they also improved Google’s dossiers on web users and gave it a better story to sell to advertisers.
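
[ed. Again, not from the essay and certainly not Google's actual code - just a rough sketch, under the same Flask assumption as above, of how a third-party ad or tracking endpoint can tie one cookie ID to every site that embeds it. The "/ad" route, the "vid" cookie, and the in-memory profile store are all made up for the example.]

import uuid

from flask import Flask, make_response, request

app = Flask(__name__)

# Stand-in for the ad network's profile database: visitor ID -> pages seen.
profiles = {}

@app.route("/ad")
def ad_pixel():
    # Reuse the visitor's existing ID cookie, or mint a fresh one.
    visitor_id = request.cookies.get("vid") or uuid.uuid4().hex
    # The Referer header names the page that embedded the ad, so one cookie
    # quietly accumulates a browsing history across every participating site.
    page = request.headers.get("Referer", "unknown")
    profiles.setdefault(visitor_id, []).append(page)
    # Nothing visible needs to come back; the request itself did the work.
    resp = make_response("", 204)
    resp.set_cookie("vid", visitor_id, max_age=60 * 60 * 24 * 365)
    return resp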

The idea caught the zeitgeist, and soon everyone was trying to figure out how to gather, aggregate, analyze, and resell data about us as we moved around the web.

Of course, there were privacy implications to all this. As early breaches and tentative litigation spread around the world, lawyers for Google and for the major publishers (and for publishing tools, the blogging tools that eventually became the ubiquitous ‘‘Content Management Systems’’ that have become the default way to publish material online) adopted boilerplate legalese, those ‘‘privacy policies’’ and ‘‘terms of service’’ and ‘‘end user license agreements’’ that are referenced at the bottom of so many of the pages you see every day, as in, ‘‘By using this website, you agree to abide by its terms of service.’’

As more and more companies twigged to the power of ‘‘surveillance capitalism,’’ these agreements proliferated, as did the need for them, because before long, everything was gathering data. As the Internet everted into the physical world and colonized our phones, we started to get a taste of what this would look like in the coming years. Apps that did innocuous things like turning your phone into a flashlight, or recording voice memos, or letting your kids join the dots on public domain clip-art, would come with ‘‘permissions’’ screens that required you to let them raid your phone for all the salient facts of your life: your phone number, e-mail address, SMSes and other messages, e-mail, location – everything that could be sensed or inferred about you by a device that you carried at all times and made privy to all your most sensitive moments.

When a backlash began, the app vendors and smartphone companies had a rebuttal ready: ‘‘You agreed to let us do this. We gave you notice of our privacy practices, and you consented.’’

This ‘‘notice and consent’’ model is absurd on its face, and yet it is surprisingly legally robust. As I write this in July of 2016, US federal appellate courts have just ruled on two cases that asked whether End User Licenses that no one reads and no one understands and no one takes seriously are enforceable. The cases differed a little in their answer, but in both cases, the judges said that they were enforceable at least some of the time (and that violating them can be a felony!). These rulings come down as the entirety of America has been consumed with Pokémon Go fever, only to have a few killjoys like me point out that merely by installing the game, all those millions of players have ‘‘agreed’’ to forfeit their right to sue any of Pokémon’s corporate masters should the companies breach all that private player data. You do, however, have 30 days to opt out of this forfeiture; if Pokémon Go still exists in your timeline and you signed up for it in the past 30 days, send an e-mail to with the subject ‘‘Arbitration Opt-out Notice’’ and include in the body ‘‘a clear declaration that you are opting out of the arbitration clause in the Pokémon Go terms of service.’’

Notice and consent is an absurd legal fiction. Jonathan A. Obar and Anne Oeldorf-Hirsch, a pair of communications professors from York University and the University of Connecticut, published a working paper in 2016 called ‘‘The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services.’’ The paper details how the profs gave their students, who are studying license agreements and privacy, a chance to beta-test a new social network (this service was fictitious, but the students didn’t know that). To test the network, the students had to create accounts, and were given a chance to review the service’s terms of service and privacy policy, which prominently promised to give all the users’ personal data to the NSA, and demanded the students’ first-born children in return for access to the service. As you may have gathered from the paper’s title, none of the students noticed either fact, and almost none of them even glanced at the terms of service for more than a few seconds.

Indeed, you can’t examine the terms of service you interact with in any depth – it would take more than 24 hours a day just to figure out what rights you’ve given away that day. But as terrible as notice-and-consent is, at least it pretends that people should have some say in the destiny of the data that evanesces off of their lives as they move through time, space, and information.

The next generation of networked devices are literally incapable of participating in that fiction.

The coming Internet of Things – a terrible name that tells you that its proponents don’t yet know what it’s for, like ‘‘mobile phone’’ or ‘‘3D printer’’ – will put networking capability in everything: appliances, lightbulbs, TVs, cars, medical implants, shoes, and garments. Your lightbulb doesn’t need to be able to run apps or route packets, but the tiny, commodity controllers that allow smart lightswitches to control the lights anywhere (and thus allow devices like smart thermostats and phones to integrate with your lights and home security systems) will come with full-fledged computing capability by default, because that will be more cost-efficient than customizing a chip and system for every class of devices. The thing that has driven computers so relentlessly, making them cheaper, more powerful, and more ubiquitous, is their flexibility, their character of general-purposeness. That fact of general-purposeness is inescapable and wonderful and terrible, and it means that the R&D that’s put into making computers faster for aviation benefits the computers in your phone and your heart-monitor (and vice-versa). So everything’s going to have a computer.

You will ‘‘interact’’ with hundreds, then thousands, then tens of thousands of computers every day. The vast majority of these interactions will be glancing, momentary, and with computers that have no way of displaying terms of service, much less presenting you with a button to click to give your ‘‘consent’’ to them. Every TV in the sportsbar where you go for a drink will have cameras and mics and will capture your image and process it through facial-recognition software and capture your speech and pass it back to a server for continuous speech recognition (to check whether you’re giving it a voice command). Every car that drives past you will have cameras that record your likeness and gait, that harvest the unique identifiers of your Bluetooth and other short-range radio devices, and send them to the cloud, where they’ll be merged and aggregated with other data from other sources.

In theory, if notice-and-consent was anything more than a polite fiction, none of this would happen. If notice-and-consent is necessary to make data-collection legal, then without notice-and-consent, the collection is illegal.

But that’s not the realpolitik of this stuff: the reality is that when every car has more sensors than a Google Streetview car, when every TV comes with a camera to let you control it with gestures, when every medical implant collects telemetry that is gathered by a ‘‘services’’ business and sold to insurers and pharma companies, the argument will go, ‘‘All this stuff is both good and necessary – you can’t hold back progress!’’

It’s true that we can’t have self-driving cars that don’t look hard at their surroundings all the time, and pay especially close attention to humans to make sure that they’re not killing them. However, there’s nothing intrinsic to self-driving cars that says that the data they gather needs to be retained or further processed. Remember that for many years, the server logs that recorded all your interactions with the web were flushed as a matter of course, because no one could figure out what they were good for, apart from debugging problems when they occurred. (...)

The next iteration of this is the gadgets that will spy on us from every angle, in every way, all the time. The data that these services collect will be even more toxic in its potential to harm us. Consider that today, identity thieves merge data from several breaches in order to piece together enough information to get a duplicate deed for their victims’ houses and sell those houses out from under them; that voyeurs use untargeted attacks to seize control over people’s laptops to capture nude photos of them and then use those to blackmail their victims into performing live sex-acts on camera; that every person who ever applied for security clearance in the USA had their data stolen by Chinese spies, who broke into the Office of Personnel Management’s servers and stole more than 20,000,000 records.

The best way to secure data is never to collect it in the first place. Data that is collected is likely to leak. Data that is collected and retained is certain to leak. A house that can be controlled by voice and gesture is a house with a camera and a microphone covering every inch of its floorplan.

The IoT will rupture notice-and-consent, but without some other legal framework to replace it, it’ll be a free-for-all that ends in catastrophe.

by Cory Doctorow, Locus |  Read more:
Image: Cory Doctorow uncredited

How Global Entertainment Killed Culture

A large number of studies in recent years have looked to define the characteristics of contemporary culture within the context of the globalization of capitalism and of markets, and the extraordinary revolution in technology. One of the most incisive of these studies is Gilles Lipovetsky and Jean Serroy’s La cultura-mundo: Respuesta a una sociedad desorientada (Culture-World: Response to a Disoriented Society). It puts forward the idea that there is now an established global culture—a culture-world—that, as a result of the progressive erosion of borders due to market forces, and of scientific and technical revolutions (especially in the field of communications), is creating, for the first time in history, certain cultural values that are shared by societies and individuals across the five continents, values that can be shared equally despite different traditions, beliefs, and languages. This culture, unlike what had previously been defined as culture, is no longer elitist, erudite and exclusive, but rather a genuine “mass culture”:

“Diametrically opposed to hermetic and elitist vanguard movements, this mass culture seeks to offer innovations that are accessible to the widest possible audience, which will entertain the greatest number of consumers. Its intention is to amuse and offer pleasure, to provide an easy and accessible escapism for everyone without the need for any specific educational background, without concrete and erudite references. What the culture industries invent is a culture transformed into articles of mass consumption.”

This mass culture, according to the authors, is based on the predominance of image and sound over the word. The film industry, in particular Hollywood, “globalizes” movies, sending them to every country, and within each country, reaching every social group, because, like commercially available music and television, films are accessible to everyone and require no specialist background to be enjoyed. This process has been accelerated by the cybernetic revolution, the creation of social networks and the universal reach of the Internet. Not only has information broken through all barriers and become accessible to all, but almost every aspect of communication, art, politics, sport, religion, etc., has felt the reforming effects of the small screen: “The screen world has dislocated, desynchronized and deregulated the space-time of culture.”

All this is true, of course. What is not clear is whether what Lipovetsky and Serroy call the “culture-world” or mass culture (in which they include, for example, even the “culture of brands” of luxury objects), is, strictly speaking, culture, or if we are referring to essentially different things when we speak, on one hand, about an opera by Wagner or Nietzsche’s philosophy and, on the other hand, the films of Alfred Hitchcock and John Ford (two of my favorite directors), and an advertisement for Coca-Cola. They would say yes, that both categories are culture, while I think that there has been a change, or a Hegelian qualitative leap, that has turned this second category into something different from the first.

Furthermore, some assertions of La cultura-mundo seem questionable, such as the proposition that this new planetary culture has developed extreme individualism across the globe. Quite the reverse: the ways in which advertising and fashion shape and promote cultural products today are a major obstacle to the formation of independent individuals, capable of judging for themselves what they like, what they admire, or what they find disagreeable, deceitful or horrifying in these products. Rather than developing individuals, the culture-world stifles them, depriving them of lucidity and free will, causing them to react to the dominant “culture” with a conditioned, herd mentality, like Pavlov’s dogs reacting to the bell that rings for a meal.

Another of Lipovetsky’s and Serroy’s ideas that seems questionable is the assertion that because millions of tourists visit the Louvre, the Acropolis and the Greek amphitheaters in Sicily, then culture has lost none of its value, and still enjoys “a great legitimacy.” The authors seem not to notice that these mass visits to great museums and classic historical monuments do not illustrate a genuine interest in “high culture” (the term they use), but rather simple snobbery because the fact of having been in these places is part of the obligations of the perfect postmodern tourist. Instead of stimulating an interest in the classical past and its arts, these visits replace any form of serious study and investigation. A quick look is enough to satisfy people that their cultural conscience is clear. These tourist visits “on the lookout for distractions” undermine the real significance of these museums and monuments, putting them on the same level as other obligations of the perfect tourist: eating pasta and dancing a tarantella in Italy, applauding flamenco and cante jondo in Andalucía, and tasting escargots, visiting the Louvre and the Folies-Bergère in Paris.

In 2010, Flammarion in Paris published Mainstream by the sociologist Frédéric Martel. This book demonstrates that, to some extent, the “new culture” or the “culture-world” that Lipovetsky and Serroy speak of is already a thing of the past, out of kilter with the frantic maelstrom of our age. Martel’s book is fascinating and terrifying in its description of the “entertainment culture” that has replaced almost everywhere what scarcely half a century ago was understood as culture. Mainstream is, in effect, an ambitious study, drawing on hundreds of interviews from many parts of the world, of what, thanks to globalization and the audiovisual revolution, is now shared by people across five continents, despite differences in languages, religions and customs.

Martel’s study does not talk about books—the only one mentioned in its several hundred pages is Dan Brown’s The Da Vinci Code, and the only woman writer mentioned is the film critic Pauline Kael—or about painting and sculpture, or about classical music and dance, or about philosophy or the humanities in general. Instead it talks exclusively about films, television programs, videogames, manga, rock, pop and rap concerts, videos and tablets and the “creative industries” that produce and promote them: that is, the entertainment enjoyed by the vast majority of people that has been replacing (and will end up finishing off) the culture of the past.

The author approves of this change, because, as a result, mainstream culture has swept away the cultural life of a small minority that had previously held a monopoly over culture; it has democratized it, putting it within everyone’s reach, and because the contents of this new culture seem to him to be perfectly attuned to modernity, to the great scientific and technological inventions of our era.

The accounts and the interviews collected by Martel, along with his own analysis, are instructive and quite representative of a reality that, until now, sociological and philosophical studies have not dared to address. The great majority of humanity does not engage with, produce or appreciate any form of culture other than what used to be considered by cultured people, disparagingly, as mere popular pastimes, with no links to the intellectual, artistic, and literary activities that were once at the heart of culture. This former culture is now dead, although it still survives in small social enclaves, without any influence on the mainstream.

The essential difference between the culture of the past and the entertainment of today is that the products of the former sought to transcend mere present time, to endure, to stay alive for future generations, while the products of the latter are made to be consumed instantly and disappear, like cake or popcorn. Tolstoy, Thomas Mann, still more Joyce and Faulkner, wrote books that looked to defeat death, outlive their authors and continue attracting and fascinating readers in the future. Brazilian soaps, Bollywood movies, and Shakira concerts do not look to exist any longer than the duration of their performance. They disappear and leave space for other equally successful and ephemeral products. Culture is entertainment and what is not entertaining is not culture.

Martel’s investigation shows that this is today a global phenomenon, something that is occurring for the first time in history, in which developed and underdeveloped countries participate, no matter how different their traditions, beliefs or systems of government, although, of course, each country and society will display certain differences in terms of detail and nuance with regard to films, soap operas, songs, manga, animation, etc.

by Mario Vargas Llosa, Literary Hub | Read more:
Image: The Truman Show

Wolf Alice


[ed. Pretty excellent band (never heard of them till today). At least somebody's still making interesting music these days. See also: Wolf Alice - Lollapalooza 2016.]

Wednesday, September 7, 2016

The Pleasures of Protest: Taking on Gentrification in Chinatown

[ed. If there's anywhere I'd like to live in Seattle (if there was anywhere I could live in Seattle), it would be the Asian district - or Beacon Hill, the outermost reaches of its influence. Sadly, those communities won't be around in their present form much longer, I'm afraid.]

On a cold night in the early winter months of 2007, I was with a group of tenants — all Latino and Chinese immigrant families — clustered together in front of their home, two buildings on Delancey Street that straddled the border between Chinatown and the Lower East Side. We were there, shivering in the cold, to protest their landlords.

Ever since they bought the two buildings in 2001, the owners of 55 Delancey and 61 Delancey Street — Nir Sela, Michael Daniel, and 55 Delancey Street Realty LLC — had been attempting to kick out the Chinese and Latino families who had lived there, but in recent months, the situation had come to a head. They had begun aggressively bringing tenants to housing court, often on trumped up charges (one lawsuit argued that, based on the number of shoes displayed inside the apartment, it was obvious that more than just one family lived there); offered several families significant buyouts to leave; and had refused to make basic repairs. For stretches at a time, and in the coldest days of winter, there had been no heat or hot water.

That evening, huddled in our winter coats and clutching hand-made signs, we waited for the arrival of one of the owners, who had agreed to meet with us and discuss our demands.

I had been volunteering with CAAAV, a tenant organizing group in Chinatown, and in the months prior, I had spent many of my nights going from apartment to apartment, often with Zhi Qin Zheng, a resident of the building as well as an organizer at CAAAV, helping to painstakingly document their living conditions and assisting residents in calling the city’s 311 hotline so that each housing code violation would be on record.

Their apartments were cramped, even rundown, but for these families, it was home, and they wanted to stay. Over the years, each building had become a small community, one where people felt comfortable leaving their doors open and asking each other to watch their children. “If we left, where would we go?” Sau Ying Kwok, a feisty grandmother with a nimbus of frizzy hair, wondered aloud. She had become one of the more vocal leaders in the building, along with the soft-spoken You Liu Lin, a man in his middle years with a penchant for Brylcreeming his hair as well as shoving bottles of water and perfect Fuji apples into my hands whenever I knocked on his door.

I often questioned why I was there on those trips. I had moved to the city three years prior from Texas, fresh out of college and possessing a vague notion that I would put my Asian American Studies degree to use and, in the words of 1960s radicals inspired by Mao Zedong, “serve the people.”

In a way, I was continuing the tradition of those who were part of the Asian American movement of the 1960s — young, mostly college-educated Chinese, Japanese, and Filipino Americans who not only coined the term “Asian American” but also immersed themselves in ethnic enclaves like Chinatown on the east and west coasts.

In Serve The People: Making Asian America in the Long Sixties, her book chronicling the Asian American movement, Karen Ishizuka wrote that people had to become Asian American. It wasn’t about your ethnic background, but “a political identity developed out of the oppositional consciousness of the Long Sixties, in order to be seen and heard.”

But there has always been a disconnect between these Asian American activists and the people they served, who tended to be primarily working-class immigrants, a disconnect that I felt keenly. What was I, an ABC (American-born Chinese) doing in a mostly immigrant community, with my barely passable Mandarin? I didn’t really know, but I felt a complicated sense of belonging that I had never experienced before, complicated because I was, in many ways, an outsider — someone not from the neighborhood or embedded in its history, who wasn’t threaded through the day-to-day life that makes a grouping of city blocks a community. Yet the residents didn’t treat me as an outsider when they invited me into their homes; being Chinese, it seemed, was enough.

It was easy to understand why the owners would want to wholesale evict these families, who all lived in rent-stabilized apartments where rents were, on average, $1000 or less, far below what the owners could charge in the hot real estate market of lower Manhattan, where people fought for the right to pay $3000 a month for a two-bedroom apartment.

That night, I got a lesson in what some have called the pleasures of protest. When Nir Sela and his wife arrived and saw the mass of people waiting for them on the sidewalk, when they saw the cameras, they quickly turned around and walked away. We began following them, scores of people chanting, “Shame on you! Shame on you!” They quickly got into a cab and sped away. Despite the abrupt cancellation of the meeting we had planned, everyone seemed pleased, smiles on their faces.

Soon after, the tenants decided to go on a rent strike. It was a success — a few months later, the owners capitulated, agreeing to make all the necessary repairs and to end eviction proceedings, along with a payment of $3000 to each household. Less than a year later, I would join the staff of CAAAV as a full-time housing organizer, still high off the success of that campaign victory.

But in a city where finance capital reigns, this sense of victory wouldn’t last for long.

* * *

Chinatown as we know it today didn’t really exist until the 1970s, when, in the wake of the 1965 Immigration Act, Chinese immigrants began arriving in large numbers.

Yet as early as the 1850s, one could find a small bachelor community of Chinese men living in what was then known as Five Points (and what some today have called “America’s first slum”), a neighborhood that had arisen on top of a landfill whose residents were free blacks as well as Irish, Jewish, and Italian immigrants. Jacob Riis in his influential 1890 book, How the Other Half Lives: Studies Among the Tenements of New York, devoted an entire chapter to Chinatown, writing dismissively, “Chinatown as a spectacle is disappointing. Next door to the Bend, it has little of its outdoor stir and life, none of its gayly-colored rags or picturesque filth and poverty.” Yet the neighborhood, he noted, had already taken on the tinge of the exotic, New Yorkers believing it was rife with far more opium dens than actually existed.

The black residents fled after an anti-abolition riot; the Chinese men, sailors as well as workers who had moved from the west coast in the wake of increasingly oppressive laws and racist mob violence, stayed because they had nowhere else to go. “Residents of New York Chinatown could not cross Canal Street into Little Italy without the risk of being beaten up;” wrote John Kuo Wei Tchen, the historian and founder of the Museum of Chinese in America, “laundry men in the scattered boroughs and suburbs illegally lived in the back of their shops because they could not rent apartments.”

By the early 1960s, there were only 5,000 residents of Chinatown, mostly elderly men who lived on the blocks clustered around Columbus Park. The neighborhood surrounding it was in decline, the Irish having moved away decades prior, and the Jewish and Italian immigrants who had come to define the Lower East Side having already begun fleeing in rapid numbers.

Without the 1965 Immigration Act, Chinatown would have faded away. But as tens of thousands of immigrants began flocking to New York City, the empty tenements and boarded up storefronts filled with families and small businesses, and the old garment factories once again hummed with the sound of sewing machines, this time manned by a workforce of Chinese immigrant women. Chinatown mushroomed over the next two decades, spreading until it was bordered by Soho and Tribeca to the west and the East River on the opposite end, with Delancey Street settled as the line delineating Chinatown from the Lower East Side.

According to the scholar Peter Kwong, this expansion ended by the mid-1990s, halted by the revitalization of the neighborhoods bordering Chinatown. The events of 9/11 further destabilized the neighborhood, located as it was so close to the Financial District, but, as Kwong put it in the New York Times: “The root cause of the decline of Chinatown predated the 9/11 attack; the collapse of the garment industry and years of harm done by real estate speculation had already taken their toll on the community.”

I didn’t know any of this history when I came to New York City in 2004 and moved into an apartment in central Harlem, itself a neighborhood in flux, where I paid $750 each month to live with two roommates. Like most people, all I knew was that Chinatown had a lot of Chinese people, and that fact alone drew me to the neighborhood on evenings after work and on weekends. Having grown up in south Texas, I had moved in large part out of a desire to live somewhere I could feel a sense of belonging that I hadn’t had as a child.

People expressed a lot of strange beliefs about Chinatown, ideas that seemed increasingly bizarre to me the more time I spent in the neighborhood. It’s often described as “gritty” or “dirty,” or as “exotic.” Other commonly used descriptors are “authentic” and “unchanging.”

Those descriptions made me cringe, not only because of the casual racism underpinning them and, in the words of the scholar Lisa Lowe in her book Immigrant Acts, “the gaze that seeks to exoticize [Chinatown] as antiquated artifact,” but also because they miss an essential truth of the neighborhood — that what is thought of as exotic or authentic to some is simply the minutiae of life for others. (...)

And yet I too was guilty of a sort of fetishization, for I had my own foolish, romantic notions of the neighborhood, tinged with a nostalgia for a home I had never had. Eating dumplings wasn’t just a meal — it was embracing my culture. During the four years that I worked in the neighborhood, I was quickly disabused of these notions by the everyday life and reality that I saw around me. I began to understand that Chinatown was a vibrant neighborhood of the present, the kind that urban planning writer and activist Jane Jacobs described as displaying the “exuberant diversity” that she believed characterized the best cities, the ones that thrived.

by Esther Wang, Longreads |  Read more:
Image: Katie Kosma

Leo Berne, Hunger, February 2016
via:

The State of the Menswear Union

[ed. I have more nice clothes to wear than I have opportunities to wear them.]

A man in his early thirties relaxes outside a barber shop on Crosby Street one humid New York afternoon. He scrolls intently through his iPhone, the square ice cubes in a cup resting by his elbow tinted brown by what little remains of his coffee.

He looks great: thick black dreads piled in a haphazardly perfect manner atop his head, an off-white linen shirt that's both stylish and functionally appropriate for the unrelenting heat, baby blue pants that hug — not squeeze — his body, canvas sneakers, no socks. He's the modern man, cool and comfortable and aesthetically aware.

The other guys wandering down Crosby Street look similar, many with skinny black jeans rolled at the ankles, the better to show off bright new Nikes. The coolest dude pairs pants that have huge holes in the knees with an oversized white tee under button-down chambray, plus a flat-brim hat. He disappears into a building that's under construction. Even the worst-dressed men — five bros loudly recounting the previous night's exploits — look pretty good. They make their athleisure tracksuit pants work with the simple shirts and sneakers they chose after waking up hungover that morning.

Yes, this is Crosby Street, one of the most fashionable blocks in New York City, where signs herald the imminent opening of a Rick Owens boutique and idle stoop-sitters could be professional models. Guys should dress well here. But the focus on clothes has spread far, far beyond Soho.

We're witnessing a fascinating, exciting, very specific moment, a "choose-your-own-adventure time of menswear, where guys are letting their freak flags fly," in the words of Jian DeLeon, senior menswear editor for trend forecasting company WGSN. Information has never been more readily available, and online shopping has lowered the barrier to entry significantly. (...)

Traditionally, conversations about men's style have been quieter than the ones about women's, happening constantly but easy to miss unless you knew where to look. In the last decade or so, though, they've become easier to find. The discussion moved online in the midaughts, when forums like Ask Andy About Clothes and blogs like The Sartorialist started to enter the consciousness of a certain type of man. Guys geeking out about fashion could find each other, sharing tips about designers, history, whatever. Age mattered less than disposition. On the message boards and in the comments sections, no one knew or cared who was a teenager in Iowa or a thirtysomething in Manhattan. The only thing that mattered was that the poster had a smart sense of style, which meant focusing on timeless quality rather than of-the-moment trends, and offered an intelligent opinion.

Fast forward a few years, and the menswear conversation shifted to Tumblr, where you could find an endless stream of guys dressing to impress, often to the point of absurdity. This became known as #menswear, a reference to the Tumblr hashtag, and was epitomized by images of wannabe tastemakers peacocking at Pitti Uomo. (The mockumentary The Life of Pitti Peacocks features garish paisley suit jackets, absurd floral-print pants, and more in just its first half-minute; it illustrates the see-and-be-seen insanity perfectly, as do so many Instagram photos.) In response, satirical Tumblrs like Kevin Burrows and Lawrence Schlossman's Fuck Yeah Menswear cropped up, injecting a bit of fun into the increasingly self-serious #menswear movement. It was, after all, just clothes.

The ultimate distillation of this scene came with Four Pins, the Complex-owned site headed by Schlossman and his team of rabble-rousers. They took aim at anything and everything, mixing biting commentary with long explainers that placed trends in historical context. Readers had their laughs while learning about the clothes they were wearing, or at least aspired to own.

When Four Pins shut down in January, it felt like the end of an era. "It wasn't like someone was going to make their own Four Pins," says Schlossman, who now works as a brand director at the resale site Grailed. "It was more like if Four Pins can't succeed, then maybe this movement is done. It wasn't that the door was open. It was like the door was slammed shut."

Green agrees. "If ever there was a menswear punk-rock era, where it was like the Wild, Wild West — a bunch of uncool dudes talking shit and building this following that no one had ever really seen before, having fun, and making fun of these designers and men's clothing — that was it," he says. "As annoying as some of those guys are and as corny as some of them are, I think a lot of them are really witty and really smart. We made fun of it at the time, but I gotta say, I think it was special."

While #menswear might be dead, menswear has never been bigger. Online menswear sales in particular grew faster than any other category between 2010 and 2015, and show no signs of slowing down; research firm Euromonitor International projects that the global menswear market will rise from $29 billion in 2015 to $33 billion by 2020. (By comparison, the women's clothing market actually declined by 0.9 percent annually between 2011 and 2016, according to research company IBISWorld.) One-third of men reported they'd like to spend more money on clothes in 2016 than they did in 2015, according to Rupa Ghosh, a retail analyst at Mintel.

Menswear is moving to the masses.

by Noah Davis, Racked |  Read more:
Image: Lindsay Mound

Why Luck Plays a Big Role in Making You Rich

Robert Frank was playing tennis one cold Saturday morning in Ithaca, N.Y., when his heart stopped. Sudden cardiac arrest—a short-circuit in the heart’s electrical signaling—kills 98 percent of its victims and leaves most of the rest permanently impaired.

Yet two weeks later, Frank was back on the tennis court.

How did this happen? There was a car accident a few hundred yards away from where Frank collapsed. Two ambulances responded, but the injuries were minor and only one was needed. The other ambulance, usually stationed five miles away, reached Frank in minutes.

“I’m alive today because of pure dumb luck,” says Frank, a 71-year-old economics professor at Cornell University. Or you can call it a miracle. Either way, Frank can’t take credit for surviving that day. From coincidence or the divine, he got help. Nine years later, he is still grappling with the concept of luck. And, applied to his field of economics, it’s led him into some dangerous territory: wealth.

Talk about luck and money in the same sentence, he says, and prepare to deal with “unbridled anger.” U.S. Democratic Senator Elizabeth Warren of Massachusetts and President Barack Obama were pilloried for suggesting rich Americans should be grateful for what Obama called “this unbelievable American system that we have that allowed you to thrive.” Even referring to the wealthy as “the luckiest among us”—as I did a few months ago—can spark some unhinged reactions.

“There are people who just don’t want to hear about the possibility that they didn’t do it all themselves,” Frank says.

Mild-mannered and self-effacing, he isn’t about to tell the rich “you didn’t build that,” as Obama did (and likely regretted). Frank’s new book, Success and Luck: Good Fortune and the Myth of Meritocracy, is a study in diplomacy. Combining memoir with academic research, it’s an earnest argument that all of us—even the rich—would be better off recognizing how luck can lead to success. (...)

Winner-take-all markets

For more than 20 years, Frank has been studying the rise of winner-take-all markets—fields of fierce economic competition in which only a few top performers take home the bulk of the rewards. More and more of the economy is starting to look like sports or music, Frank says, where millions of people compete and the winners are paid thousands of times more than the runners-up.

Another example he gives is the humble neighborhood accountant. In the 20th century, the typical accountant was competing against nearby rivals. If you worked hard, there was a good chance of winning over the most lucrative clients in town. Today, neighborhood accountants face much more competition: Sophisticated global accounting firms can swoop in and sign up their biggest clients. Tax preparation, an accountant’s bread and butter, has been mostly swallowed up by two large players—H&R Block for storefront preparation and TurboTax online.

“Technology has enabled people who are best at what they do to extend their reach geographically,” Frank says. TurboTax was initially just one of a number of tax software programs on the market. But, as happened with search engines and social media sites, it was able to win over customers early, and its competitive advantage snowballed. TurboTax now dominates online tax preparation—thousands of local accountants replaced by one company.

In these winner-take-all markets, luck can play a huge role. A simulation conducted by Frank shows how: Imagine a tournament in which every contestant is randomly assigned a score representing their skill. In this simple scenario, the most skilled person wins. The more competitors there are, the higher the score the winner will likely have.

Now introduce chance by randomly assigning each participant a “luck” score. That score, however, can play only a tiny role in the ultimate outcome, just 2 percent compared with 98 percent allotted to skill. This minor role for chance is enough to tilt the contest away from the top-skilled people. In a simulation with 1,000 participants, the person with the top skill score prevailed only 22 percent of the time. The more competition there is, the harder it is for skill alone to win out. With 100,000 participants, the most skilled person wins just 6 percent of the time.
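
Those numbers are easy to check for yourself. Below is a minimal sketch, not Frank's actual code, of the kind of Monte Carlo tournament the article describes, under the assumption that skill and luck are both drawn uniformly between 0 and 100 and weighted 98 percent to 2 percent; the exact win rates depend on those assumptions, but they come out close to the 22 percent and 6 percent figures he reports.

import numpy as np

def top_skill_win_rate(n_contestants, luck_weight=0.02, trials=2000, seed=0):
    """Estimate how often the contestant with the highest skill also posts
    the highest overall performance, where
    performance = (1 - luck_weight) * skill + luck_weight * luck."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(trials):
        # Assumption (not spelled out in the article): scores are uniform on [0, 100].
        skill = rng.uniform(0, 100, n_contestants)  # innate ability
        luck = rng.uniform(0, 100, n_contestants)   # pure chance
        performance = (1 - luck_weight) * skill + luck_weight * luck
        # Count a "win" only when the top performer is also the most-skilled contestant.
        wins += int(performance.argmax() == skill.argmax())
    return wins / trials

if __name__ == "__main__":
    for n in (1_000, 100_000):
        print(f"{n:,} contestants: most-skilled person wins "
              f"{top_skill_win_rate(n):.0%} of the time")

The intuition behind the result: with enough contestants, several people end up with skill scores within a fraction of a point of the maximum, so even a 2 percent weighting on luck is enough to reorder the top of the field.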

Frank writes:

“Winning a competition with a large number of contestants requires that almost everything go right. And that, in turn, means that even when luck counts for only a trivial part of overall performance, there’s rarely a winner who wasn’t also very lucky.”

Winner-take-all markets can end up creating vast wealth differences between the lucky and unlucky. One person—smart, persistent, but unlucky—struggles, while an equally (or even slightly less) talented and hard-working person gets a lucky break that can reap millions, or billions, of dollars.

by Ben Steverman, Bloomberg |  Read more:
Image: William Andrew/Getty Images