Tuesday, December 19, 2017

Don’t Forget Testosterone

Well, I guess I should have seen this coming:
We have to stop seeing sexual harassment and sexual assault as some sort of flattery of women gone awry. In truth, sexual assault has nothing to do with sex, or sexuality, or flirting, or courtship, or love. Rather, sexual assault is a kind of hate. The men who gratify themselves by abusing women aren’t getting off on those women, but on power. These men don’t sexually assault women because they like women but because they despise them as subordinate creatures.
Here’s a question. If sexual harassment, abuse, and assault are entirely about misogyny, sexism, and hate, how do you explain the cases of Kevin Spacey and Bryan Singer and James Levine? Their patterns seem very similar to many of the other heterosexual cases — and worse than many. And yet there are no women involved whatsoever. What gives?

My own suggestion of an answer to this conundrum is a combination of two things: the resilient human ability (which knows no gender) to abuse power; and the role that testosterone plays in making sex an area in which men abuse that power far more frequently than women. I’m sure that if you’ve endured a lifetime of male depredations (as many women have) it’s utterly understandable why you might see this as entirely about misogyny — and in many cases, you’d be at least partly right. But it’s also, it seems to me, about what testosterone does to men’s minds and bodies, whether there are women around or not.

I’ve been fascinated by this question for quite a while now — my interest was sparked by my own medical use of testosterone as part of my HIV regimen, and I explored the issue at length here. To experience a sudden surge in testosterone — and to see oneself almost structurally altered by it — is to wake up to forces that are so much part of the background we can forget they’re there at all. Men have ten times as much testosterone as women, and testosterone is deeply connected with aggression, power, ambition, drive, pride, stubbornness, strength, and violence. In every species, testosterone makes one gender the more risk-taking, the more physically powerful, and the more assertive, and this includes the small number of species in which testosterone is predominant among females. It is also worth reflecting (for a few seconds, at least) on the simple physical fact that human reproduction requires the male to penetrate a female repeatedly in order to orgasm. This cannot happen in reverse. In the act itself, if it is to achieve its most obvious purpose, sex and power are inherently fused.

And so it is no big surprise that gay male sexuality, for example, has more in common with straight male sexuality than most of us want to acknowledge — because we’re afflicted and blessed with the same psyche-forming hormone. Many gay men, especially younger ones, want to get laid any time, all the time, and will drop anything at any moment to get it. Gay men also objectify other men in exactly the same way straight men objectify women (“locker room talk” is by no means an exclusively straight phenomenon, except with gays, it’s other men whose body parts get scrutinized). If you want to know what handsy can really mean, check out the middle of the dance floor. And yes, the gay male sex drive leads us into blind alleys, and horrible blunders (as well as some of the greatest loves humans can ever know). We can often see sex as an act rather than as a relationship. We can be blind to the feelings of others. There’s a ruthlessness to the hierarchy of beauty and youth in many parts of gay culture that would be instantly recognizable to any woman. On the apps, where most gay sexual socializing now takes place, we broadcast desire with all the subtlety of a Breitbart op-ed.

The absence of women, moreover, removes most obstacles to getting laid any time you really want to. So gay men are particularly vulnerable to drowning, or at least getting swept up, in the undertow of testosterone. Gay men, like straight men, risk jobs, relationships, marriages, you name it … for a quick and ready lay. And when we’re really horny, most of our brains disappear out the window in an obsessive pursuit of the nut, seconds after which we come to, shake our heads, and wonder “How on Earth did I end up here?” This has never been better expressed than in Shakespeare’s Sonnet 129, and I don’t usually get a chance to air the Bard, so check out this small slice of genius:

The expense of spirit in a waste of shame
Is lust in action; and till action, lust
Is perjured, murderous, bloody, full of blame,
Savage, extreme, rude, cruel, not to trust;
Enjoy’d no sooner but despised straight;
Past reason hunted; and no sooner had,
Past reason hated, as a swallowed bait,
On purpose laid to make the taker mad:
Mad in pursuit, and in possession so;
Had, having, and in quest to have, extreme;
A bliss in proof, and proved, a very woe;
Before, a joy proposed; behind, a dream.

All this the world well knows; yet none knows well
To shun the heaven that leads men to this hell.


“All this the world well knows.” Except today’s debate about men and women seems to have missed it.

I’m not praising or lamenting this. I’m just recognizing it. It excuses nothing with respect to abuse, assault, harassment, and so on. There’s a bright line here, and I see little moral difference between Spacey’s foulness and Weinstein’s. But testosterone helps explain why male power so often gravitates toward sex, why sexual abuse occurs much more often among men, and why separating sex and power from male sexuality is to miss something important. It is always about both. If we are to have a conversation about men and women, work and play, power and love, then ignoring nature — pretending that this is all about social power dynamics or even hatred — is deeply misleading.

by Andrew Sullivan, NY Magazine |  Read more:
Image: via

Why All Designers Should Read Cyberpunk

Molly Millions is cool. Her augmented eyes are coated in mirrors, and beneath her immaculately manicured nails, quicksilver daggers wait to be sprung. Her boyfriend was Johnny Mnemonic, a human hard drive, gray matter encrypted with a passcode that only the highest bidder can unlock. But that was before he died. Now, Molly is a “razorgirl”: a lithe assassin periodically hired for jobs involving computer espionage. Not that she jacks into cyberspace herself. She leaves that to her charges, the console cowboys she’s paid to protect as they slump in their VR rigs.

You might never have heard of Molly Millions, the street-samurai heroine of William Gibson’s Neuromancer, but in a way, you’re living in her era. Like Helen of Troy, hers is a face that has launched a thousand ships: Companies like Google and Facebook and Amazon and Snapchat have all—in one way or another—been directly inspired by cyberpunk, the once-obscure ’80s genre of science fiction to which Molly Millions belongs and which is now more relevant to designers than ever.

Writer Bruce Bethke coined the term “cyberpunk” in 1983, in his short story of the same title. He created the word to refer to what he thought would be the true disruptors of the 21st century: “the first generation of teenagers who grew up ‘truly speaking’ computer.” Other authors, inspired by the more psycho-literary science fiction of J.G. Ballard and Philip K. Dick from the ’60s and ’70s, embraced the term. The enduring works of cyberpunk of the ’80s and ’90s—Neuromancer, or Neal Stephenson’s Snow Crash, about a virus so potent it can be transmitted through speech and hack the human mind—examine dystopian futures in which the lines between virtual and authentic, human and machine, have blurred. The protagonists of cyberpunk novels are renegade hackers; the villains, all too often, monolithic mega-corporations.

You need only look to Hollywood to see that cyberpunk is big right now. Blade Runner 2049 is in theaters, Mr. Robot is on TV. At Fox, Deadpool’s Tim Miller is hard at work on a Neuromancer movie; Amazon has a Snow Crash mini-series on the production slate. Even Steven Spielberg is getting in on the action, with the movie version of Ready Player One, the popular cyberpunk novel by Ernest Cline. The reason is simple: The fantastical themes of cyberpunk—the tension between man and machine, virtual and real—have never been more real. And a large part of that is because the people who read cyberpunk as kids grew up to be the major movers and shakers of Silicon Valley, which now sets the world’s cultural compass.

Take Mark Zuckerberg, for example. The Facebook founder famously suggests that all his employees read Snow Crash. For cyberpunk aficionados, then, it was no surprise when, in 2014, Facebook dropped $2 billion on Oculus VR, the company behind the Oculus Rift headset. A huge chunk of Snow Crash happens in what Stephenson calls the Metaverse, a virtual social network that is accessed exclusively through VR headsets. Inspired by the book, Zuckerberg had already created half of the Metaverse; by buying Oculus, his company made a long-term investment in turning its CEO’s teenage sci-fi dream into reality.

There are plenty of other analogues. For example, Google named its Nexus devices in a nod to the Nexus series of replicants in Blade Runner. Apple’s whole design motif is essentially cyberpunk, in the way it makes high technology feel organic: sleek, sexy, silver, and glass, the new iPhone X is a street samurai of a phone. Likewise, augmented-reality products like Google Glass, Snapchat’s Spectacles, Apple’s ARKit, and Magic Leap are attempts to make real, at least in part, Molly Millions’s mirrored eyes, folding the virtual into the real.

The examples go on and on. Virtual assistants like Siri that whisper into your ear through wireless AirPods. Consumer genetic testing such as 23andMe. Apps that translate foreign languages in real time. High-speed, vacuum-sealed rail networks like the Hyperloop. Artificial retinas and cochlear implants. Hacker collectives like Anonymous. All of these have their direct equivalents in cyberpunk.

There’s a reason, then, that cyberpunk has suddenly become a thing again in the cultural zeitgeist. Look at filmmaker Denis Villeneuve’s widely admired Blade Runner sequel, Blade Runner 2049. I won’t spoil anything for you, but the movie poses several questions that, for the first time ever, are relevant to your average person in ways that the 1982 original was not.
  • What does it mean to be “human”? In the world of Blade Runner, this is about the distinction between humans, AIs, and android replicants. But it’s just as relevant to our world, where the average person might behave very differently in real life than they do on Facebook, or where it’s unclear which of the president’s more zealous Twitter followers are human and which are bots.
  • What is the difference between a real memory and a fake one? In Blade Runner, memories can be implanted, and they can be either real or virtual. Even if one of your memories is “real,” though, it might not be one you made; it could have been altered, or somehow even copied from someone else. Sound familiar in an era in which Facebook and Google “remind” you of your memories from a certain date, which are then served back to you, altered with Instagram filters or other neural-network-driven improvements?
  • Where does real life end and the virtual begin? In the world of Blade Runner 2049, holographic ads interact with each person, AIs cater to our every need when we’re at home, and augmented-reality glasses allow people to “exist” in multiple places at once. How different is this from our world, where each person receives individually targeted web ads? Where Siri- and HomeKit-connected houses are quickly becoming the norm? Where all of us carry a virtual world everywhere with us, within our smartphones?
All of these questions would have been solely the purview of sci-fi back in the analog ’80s. Now, though, they are eerily relevant to everyone. Tech has caught up.

by John Brownlee, Magenta |  Read more:
Image: via
[ed. See reviews for: Snow Crash and Cryptonomicon (which anticipates digital/cryptocurrencies long before bitcoin came into existence).]

via:
[ed. Sorry postings have been so thin lately. Traveling.]

Monday, December 18, 2017

Steamed Fish w/Chung Choi (Salted Turnip)


Ingredients

2 lb. onaga*, cleaned
3 TB Aloha shoyu
1 TB vegetable oil
1 TB sesame seed oil
1 piece chung choi (salted turnip), rinsed, minced
2 tsp. ginger, peeled, grated
¼ c. + ¼ c. green onion, minced
1 bunch Chinese parsley, chopped
¼ c. vegetable oil

*Red snapper is a great alternative to onaga. Try to get the fish whole with skin on.

Cooking Process:

In a flat dish, combine soy sauce, 1 TB of vegetable oil, sesame seed oil and chung choi. Dip fish on both sides into mixture. Place in steamer; top with ginger and ¼ cup of green onion. Drizzle with soy sauce mixture. Steam 5-6 minutes, until cooked through. In a skillet over medium heat, warm remaining ¼ cup vegetable oil. Drizzle over fish; top with remaining green onion and Chinese parsley.

Serves 4

by Deirdre K. Todd, Cooking Hawaiian Style | Read more:
Image: James Temple
[ed. This should work well with any whole firm, white-fleshed fish. You don't even need a steamer, just microwave so it's partly cooked, then throw in the oven in a shallow foil-covered pan with a little bit of water. Stuff with whatever you like... shrimp, bacon, sautéed scallops, mayonnaise, lap cheong... My brother uses smoking peanut oil to finish, drizzling over the cooked fish and garnish so it sizzles.]

Sunday, December 17, 2017

Facebook Says It's Bad For You and It Has a Solution

Yesterday (Dec. 15), a strange post went up on Facebook’s corporate blog. It was strange because it suggested that Facebook might, in fact, be bad for you.

What solution can the social network provide? The same answer it gives to every question: namely, more Facebook.

The post was the latest in Facebook’s somewhat new series, “Hard Questions.” This set of blog posts aims to address concerns that social media broadly, and Facebook specifically, might be having a negative impact on society. Topics include “Hate Speech,” “How We Counter Terrorism,” and the latest one, “Is Spending Time on Social Media Bad for Us?”

The structure of these posts is usually the same. Step one: identify some ill in society. Step two: admit that people think technology, and Facebook, might be contributing to that ill. Step three: assert that more Facebook, not less, is the cure for said ill.

In the new post on the potential downside of social media, the authors, who are researchers at Facebook, begin by correctly saying that people are worried about the effect social media has on relationships and mental health. They then point to research that suggests scrolling through Facebook, and blindly hitting the “like” button, makes people feel like crap. “In general, when people spend a lot of time passively consuming information—reading but not interacting with people—they report feeling worse afterward,” they write.

The key phrase is “passively consuming.” The authors’ solution to this problem is not, as you might think, using Facebook less. It is using it more, and more actively. Instead of just liking things, and scrolling through our feeds, they suggest that we should be all-in. Send more messages, post more updates, leave more comments, click more reaction buttons. “A study we conducted with Robert Kraut at Carnegie Mellon University found that people who sent or received more messages, comments and Timeline posts reported improvements in social support, depression and loneliness,” they cheerily note.

They then add a caveat that “simply broadcasting status updates wasn’t enough; people had to interact one-on-one with others in their network.” But wait. Isn’t Facebook a social network, connecting me to hundreds or thousands of other people? I don’t need Facebook to interact one-on-one, over text, email, or coffee.

Facebook might admit it has some negative effects, but it is unwilling to face up to the fact that the solution might be using it less. This latest post mentions Facebook’s “take a break” feature. This will hide your ex-partner’s profile updates for you after a break-up, to help in “emotional recovery.” Because, sure, that seems healthier than just not using Facebook at all for a little while.

by Nikhil Sonnad, Quartz |  Read more:
Image: via

A Public Internet is Possible

A favorite party trick of neoliberalism is claiming that whatever the public sector can do, the private market can do better. For the last three decades, the two major political parties have teamed with the corporate sector to inform us that government is a rusty machine: too burdened by red tape to innovate, too slow-churning to adapt to change and operate efficiently. Then they turn around and sell us a plethora of terrible ideas—private schools, private prisons, private emergency services. Because the privatized approach is more complicated, less democratized, and overall less appealing than the public version, its proponents have to paint the public option as an utter failure. And what better way to ensure this failure than by actively investing in its destruction? Then they dare ask us to believe that privatization is actually increasing our choices. It was only a matter of time before the privatization mafia turned its attention to the internet.

(Yes, this is an article about net neutrality. Please don’t stop reading. Yes, there are a million pieces out there about net neutrality. If you’re like me, you probably avidly avoid them. But as boring, overplayed, and obnoxiously hyped as this issue is by internet bros, and as much as there are infinite other pressing attacks on our common humanity, this is actually really important and it’s a fantastic microcosm for American politics right this minute. And there is a little-discussed alternative even better than the net neutrality status quo—a true public option! Stay with me!)

I probably don’t need to extol the virtues of broadband access for you. Odds are, you logged onto Twitter or Facebook or Current Affairs’s website to get to this very paragraph. So you already know that for better or worse (and full disclosure: I lean towards “better”), the internet has revolutionized the way we consume and process information, the way we communicate and connect with each other, and the way we buy and sell things, including our time. For the most part, we’ve been able to enjoy it with little interference from internet service providers, which are overwhelmingly private corporations dominated by a few behemoths at the top of the market.

Since ISPs control the infrastructure that connects us to the internet, they are technically able to control a lot of things about that access. For example, your ISP can determine how quickly you upload or download information. For a long time, ISPs could only really limit internet access speed as a whole. ISPs did what you might expect someone in control of access speeds writ large to do: they charged more for faster access, or limited the amount of data one household could access unless it paid for more.

Other restrictions like age-limits (for websites that advertise alcohol or sexual content) or the order of search results by search engines are imposed by the actual websites and not ISPs. Until now, the law prevented ISPs from having much more of a say in what websites we visited. So if you were willing to pay for a certain internet speed or amount of data, then your ISP could not subject you to slower speeds to punish you for looking at a website that it did not like, or for looking at a website that hadn’t paid the ISP a separate fee. This version of the internet that we’re used to, that we probably take for granted, is net neutrality.

The world without net neutrality is the world where ISPs can decide which consumers get to access which websites at which speeds. Imagine how attractive this is for the ISPs. They know that people really like Facebook, cat videos, or holiday gift catalog hate blogs. So they can simultaneously charge consumers extra money if they want to access holiday gift catalog hate blogs and even more extra money if they want to access those hate blogs at reasonable speeds. They can also charge the blogs for making their sites available to customers at all. They sit between the audience and the content and can extort money from both sides at once. (Excited for Comcast to determine what is going viral, then charge you more to view it? Internet surge pricing, anyone?) (...)

Even if you believe in competition and the free market generally, it’s really not a thing here. Most people can’t just hop over to another ISP if they don’t like the one they have. With infinite wisdom and foresight, our forebears encouraged the ISPs to form regional monopolies. So many of us live in cities with no more than three or four ISPs altogether, if that many. But that’s city-wide. We also live in buildings or neighborhoods with a single option for internet service. Unless we’re willing to forgo having the internet at home altogether, thousands of us will probably give in to spiking prices for website access, subject to the whims of our ISP overlords. (...)

What to do? In the last few years, several cities or counties around the country entered the broadband market themselves. Their goal was to provide cheaper and faster internet. The list includes Chattanooga in Tennessee, Lafayette in Louisiana, and Wilson in North Carolina. The results have been astonishing. For example, Chattanooga was able to provide discounted prices to lower income residents and sell internet access at speeds that surpassed Google Fiber, which until then was the fastest internet in the country. Other cities like Sandy, Oregon were able to offer fast speeds for prices lower than the average ISP’s packages. The winners in all this were the consumers rescued from the Invisible Consolidating Hand’s shortcomings by a government able to prioritize the provision of an important public good over maximizing profit.

You could imagine a future in which every city and state ran an ISP that would ignore the FCC’s repeal vote and provide faster and cheaper internet on the basis of net neutrality principles. What reasonable consumer would ever choose a private ISP over the public broadband? If we followed this model in more cities, we could create a world where the repeal of net neutrality doesn’t matter. To compete with the public broadband, the private ISPs would have to ditch their restrictions and actually have to lower their prices as well.

But it wouldn’t be an American tale if the next part of the story didn’t include an intervention to thwart a public success. Thanks to the lobbies friendly to the interests of private internet service providers, like the National Association of Regulatory Utility Commissioners, and the many politicians whose coffers they line with cash, almost half of states have passed laws to prevent cities from running their own broadband service. This means that states are making sure only the worst corporate conglomerates can provide internet access, even if local residents have voted for a public option and even if the public option would confer the most benefit on the state’s residents.

by Vanessa A. Bee, Current Affairs |  Read more:
Image: uncredited

Thursday, December 14, 2017



via: here and here

That Giving Feeling

The central question that private bankers ask their clients is: “What does your money mean to you?” It’s a fundamental moral issue at all levels of wealth. Revealing answers range from the odious (controlling the lives of your family members) to the visionary (saving the world).

Eventually, bankers say, the newly wealthy enjoy the luxury lifestyle for about five years before they start looking for some purpose in their lives.

New and old Asian wealth have confused and conflated the meaning of charity versus philanthropy, and the need to accomplish more with their vast assets. The best analogy is that charity is when you hand money to the Salvation Army in the street, which then decides how to distribute it. Philanthropy is when you stand in the street and decide by yourself who to hand money to.

Living with the obligations and responsibilities of wealth isn’t easy. Big money creates its own gravity, forcing its owners’ lives into orbit. Gift giving as a form of charity is certainly commendable and flexible, allowing donors to shift the management of charity to established organisations.

But this concept is becoming inadequate, even corrupted, considering the super wealth being created by technology success. And charities are also becoming a source of potential abuse. (...)

Here’s a twist on the spirit of giving. In his recent Facebook post, Mark Zuckerberg said he intended to divest between 35 million and 75 million Facebook shares in the next 18 months to fund his charity. He currently holds 53 per cent of the voting stock. If he sold 35 million shares, his voting stake would be reduced to 50.6 per cent.

But, according to the Financial Times, if he sold 75 million shares, he would be dependent on the votes of co-founder Dustin Moskovitz to exercise control over a majority of votes. So Zuckerberg’s advisers cooked up a stock reclassification that would have solved this problem by effectively creating a third, non-voting share class. Objections and the threat of a lawsuit from investors stopped his plan.

Once the US$12 billion of proceeds from the stock sale is transferred to his foundation, all investment income is tax free. He only needs to donate 5 per cent of principal per year to charity. Most foundations and family investment offices of that magnitude can make investment returns of more than 5 per cent per annum. So the principal in the foundation never, ever actually needs to be disbursed for charity.
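
To see the arithmetic, here is a minimal back-of-the-envelope sketch (the 7 per cent return is an illustrative assumption; the 5 per cent payout is the US legal minimum for private foundations):

    # Toy model: a foundation that earns more than it pays out never shrinks.
    principal = 12_000_000_000   # the US$12 billion transferred to the foundation
    annual_return = 0.07         # assumed investment return (illustrative)
    payout_rate = 0.05           # minimum annual charitable disbursement

    for year in range(1, 21):
        giving = payout_rate * principal
        principal = principal * (1 + annual_return) - giving
        if year % 5 == 0:
            print(f"Year {year:2d}: gave ${giving / 1e9:.2f}B, principal now ${principal / 1e9:.1f}B")

At those rates the principal compounds at roughly 2 per cent a year even after the giving, so the endowment grows forever while the donor's tax deduction was banked up front.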

For many foundations, the present value of the tax subsidy to the tycoon personally far exceeds the net disbursement of the principal from the foundation on charity.

New technology wealth seems fixated on funding scalable charity projects built on the same model as their companies. Or on projects that benefit their companies.

Unfortunately, many poverty alleviation projects can’t be scaled, such as finding clean water for poor villages in Africa. It would be more practical and noble if Zuckerberg would simply give away the US$12 billion, rather than playing games with tax planning.

by Peter Guy, South China Morning Post |  Read more:
Image: uncredited
[ed. See also: 2017 Was Bad for Facebook. 2018 Will Be Worse.]

Wednesday, December 13, 2017

The Future is Here – AlphaZero

Imagine this: you tell a computer system how the pieces move — nothing more. Then you tell it to learn to play the game. And a day later — yes, just 24 hours — it has figured the game out to a level that convincingly beats the strongest programs in the world! DeepMind, the company that recently created the strongest Go program in the world, turned its attention to chess, and came up with this spectacular result.

DeepMind and AlphaZero

About three years ago, DeepMind, a company owned by Google that specializes in AI development, turned its attention to the ancient game of Go. Go had been the one game that eluded all computer efforts to reach world-class play; even at the time of the announcement, that goal was deemed a decade away. That was how large the gap was. When a public challenge and match was organized against the legendary player Lee Sedol, a South Korean whose track record had him in the ranks of the greatest ever, everyone thought it would be an interesting spectacle, but a certain win by the human. The question wasn’t even whether the program AlphaGo would win or lose, but how much closer it had come to the Holy Grail goal. The result was a crushing 4-1 victory, and a revolution in the Go world. In spite of a ton of second-guessing by the elite, who could not accept the loss, eventually they came to terms with the reality of AlphaGo, a machine that was among the very best, albeit not unbeatable. It had lost a game, after all.

The saga did not end there. A year later a new updated version of AlphaGo was pitted against the world number one of Go, Ke Jie, a young Chinese player whose genius is not without parallels to Magnus Carlsen in chess. At the age of just 16 he won his first world title and by the age of 17 was the clear world number one. That had been in 2015, and now at age 19, he was even stronger. The new match was held in China itself, and even Ke Jie knew he was most likely a serious underdog. There were no illusions anymore. He played superbly but still lost by a perfect 3-0, a testimony to the amazing capabilities of the new AI.

Many chess players and pundits had wondered how it would do in the noble game of chess. There were serious doubts on just how successful it might be. Go is a huge and long game with a 19x19 grid, in which all pieces are the same, and not one moves. Calculating ahead as in chess is an exercise in futility, so pattern recognition is king. Chess is very different. There is no questioning the value of knowledge and pattern recognition in chess, but the royal game is supremely tactical, and a lot of knowledge can be compensated for by simply outcalculating the opponent. This has been true not only of computer chess, but of humans as well.

However, there were some very startling results in the last few months that need to be understood. DeepMind’s interest in Go did not end with that match against the number one. You might ask yourself what more there was to do after that? Beat him 20-0 and not just 3-0? No, of course not. However, the super Go program became an internal litmus test of sorts. Its standard was unquestioned and quantified, so if one wanted to test a new self-learning AI and see how good it was, throwing it at Go and seeing how it compared to the AlphaGo program would be a way to measure it.

A new AI was created, called AlphaZero. It differed in several striking ways. The first was that it was not shown tens of thousands of master games in Go to learn from; instead it was shown none. Not a single one. It was merely given the rules, without any other information. The result was a shock. Within just three days its completely self-taught Go program was stronger than the version that had beaten Lee Sedol, a result the previous AI had needed over a year to achieve. Within three weeks it was beating the strongest AlphaGo, the version that had defeated Ke Jie. What is more: while the Lee Sedol version had used 48 highly specialized processors to create the program, this new version used only four!

Approaching chess might still seem unusual. After all, although DeepMind had already shown near revolutionary breakthroughs thanks to Go, that had been a game that had yet to be ‘solved’. Chess already had its Deep Blue 20 years ago, and today even a good smartphone can beat the world number one. What is there to prove exactly?

It needs to be remembered that Demis Hassabis, the founder of DeepMind, has a profound chess connection of his own. He had been a chess prodigy in his own right, and at age 13 was the second-highest-rated player under 14 in the world, second only to Judit Polgar. He eventually left the chess track to pursue other things, like founding his own PC video game company at age 17, but the link is there. There was still a burning question on everyone’s mind: just how well would AlphaZero do if it was focused on chess? Would it just be very smart, but smashed by the number-crunching engines of today where a single ply is often the difference between winning or losing? Or would something special come of it?

A new paradigm

On December 5 the DeepMind group published a new paper on arXiv, Cornell University’s preprint server, called "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm", and the results were nothing short of staggering. AlphaZero had done more than just master the game; it had attained new heights in ways considered inconceivable. The proof is in the pudding, of course, so before going into some of the fascinating nitty-gritty details, let’s cut to the chase. It played a match against the latest and greatest version of Stockfish, and won by an incredible score of 64 : 36, and not only that, AlphaZero had zero losses (28 wins and 72 draws)!

Stockfish needs no introduction to ChessBase readers, but it's worth noting that it was examining nearly 900 times more positions per second. Indeed, AlphaZero was calculating roughly 80 thousand positions per second, while Stockfish, running on a PC with 64 threads (likely a 32-core machine), was running at 70 million positions per second. To better understand how big a deficit that is: if a version of Stockfish were to run 900 times slower, it would search roughly 8 plies less deep. How is this possible?
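
A rough way to sanity-check that figure (my own arithmetic, not the paper's): with alpha-beta pruning the number of positions searched grows roughly as b^d for an effective branching factor b, so a 900-fold node deficit costs about log(900)/log(b) plies:

    import math

    speed_ratio = 70_000_000 / 80_000   # Stockfish vs. AlphaZero, positions per second (~875)
    b = 2.3   # assumed effective branching factor, a commonly cited value for alpha-beta engines

    depth_deficit = math.log(speed_ratio) / math.log(b)
    print(f"~{speed_ratio:.0f}x fewer nodes costs about {depth_deficit:.1f} plies of depth")
    # prints: ~875x fewer nodes costs about 8.1 plies of depth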

The paper explains:
“AlphaZero compensates for the lower number of evaluations by using its deep neural network to focus much more selectively on the most promising variations – arguably a more “human-like” approach to search, as originally proposed by Shannon. Figure 2 shows the scalability of each player with respect to thinking time, measured on an Elo scale, relative to Stockfish or Elmo with 40ms thinking time. AlphaZero’s MCTS scaled more effectively with thinking time than either Stockfish or Elmo, calling into question the widely held belief that alpha-beta search is inherently superior in these domains.”
In other words, instead of a hybrid brute-force approach, which has been the core of chess engines today, it went in a completely different direction, opting for an extremely selective search that emulates how humans think. A top player may be able to outcalculate a weaker player in both consistency and depth, but it still remains a joke compared to what even the weakest computer programs are doing. It is the human’s sheer knowledge and ability to filter out so many moves that allows them to reach the standard they do. Remember that although Garry Kasparov lost to Deep Blue, it is not clear at all that it was genuinely stronger than him even then, and this was despite reaching speeds of 200 million positions per second. If AlphaZero is really able to use its understanding not only to compensate for examining 900 times fewer positions, but to surpass engines that search so much more, then we are looking at a major paradigm shift.
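
To give a feel for that selective search: at each node of its tree, AlphaZero picks the move maximizing Q + U, where Q is the average evaluation seen so far and U is an exploration bonus weighted by the network's prior probability for the move. A minimal sketch of the selection rule (the data structure and names are mine, not DeepMind's code):

    import math
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        prior: float              # P(s,a): the network's probability for this move
        visit_count: int = 0      # N(s,a): how often the search has tried it
        total_value: float = 0.0  # W(s,a): summed evaluations from below
        children: list = field(default_factory=list)

    def select_child(node: Node, c_puct: float = 1.5) -> Node:
        """PUCT selection: prefer moves with a high average value Q, plus a
        bonus U for moves the network likes that are still rarely visited."""
        total_visits = sum(c.visit_count for c in node.children)
        def score(c: Node) -> float:
            q = c.total_value / c.visit_count if c.visit_count else 0.0
            u = c_puct * c.prior * math.sqrt(total_visits) / (1 + c.visit_count)
            return q + u
        return max(node.children, key=score)

Each simulation walks down the tree with this rule, evaluates the leaf with the neural network instead of a handcrafted evaluation function, and backs the result up: no alpha-beta, no handcrafted pruning.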

How does it play?

Since AlphaZero did not benefit from any chess knowledge (no master games, no opening theory), it had to discover opening theory on its own. And do recall that this is the result of only 24 hours of self-learning. The team produced fascinating graphs showing the openings it discovered as well as the ones it gradually rejected as it grew stronger!

by Albert Silver, Chess News |  Read more:
Image: uncredited

Tuesday, December 12, 2017

The Transhumanist FAQ

1.1 What is transhumanism?

Transhumanism is a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase. We formally define it as follows: 

(1) The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities. 

(2) The study of the ramifications, promises, and potential dangers of technologies that will enable us to overcome fundamental human limitations, and the related study of the ethical matters involved in developing and using such technologies.

Transhumanism can be viewed as an extension of humanism, from which it is partially derived. Humanists believe that humans matter, that individuals matter. We might not be perfect, but we can make things better by promoting rational thinking, freedom, tolerance, democracy, and concern for our fellow human beings. Transhumanists agree with this but also emphasize what we have the potential to become. Just as we use rational means to improve the human condition and the external world, we can also use such means to improve ourselves, the human organism. In doing so, we are not limited to traditional humanistic methods, such as education and cultural development. We can also use technological means that will eventually enable us to move beyond what some would think of as “human”. 

It is not our human shape or the details of our current human biology that define what is valuable about us, but rather our aspirations and ideals, our experiences, and the kinds of lives we lead. To a transhumanist, progress occurs when more people become more able to shape themselves, their lives, and the ways they relate to others, in accordance with their own deepest values. Transhumanists place a high value on autonomy: the ability and right of individuals to plan and choose their own lives. Some people may of course, for any number of reasons, choose to forgo the opportunity to use technology to improve themselves. Transhumanists seek to create a world in which autonomous individuals may choose to remain unenhanced or choose to be enhanced and in which these choices will be respected. 

Through the accelerating pace of technological development and scientific understanding, we are entering a whole new stage in the history of the human species. In the relatively near future, we may face the prospect of real artificial intelligence. New kinds of cognitive tools will be built that combine artificial intelligence with interface technology. Molecular nanotechnology has the potential to manufacture abundant resources for everybody and to give us control over the biochemical processes in our bodies, enabling us to eliminate disease and unwanted aging. Technologies such as brain-computer interfaces and neuropharmacology could amplify human intelligence, increase emotional well-being, improve our capacity for steady commitment to life projects or a loved one, and even multiply the range and richness of possible emotions. On the dark side of the spectrum, transhumanists recognize that some of these coming technologies could potentially cause great harm to human life; even the survival of our species could be at risk. Seeking to understand the dangers and working to prevent disasters is an essential part of the transhumanist agenda. 

Transhumanism is entering the mainstream culture today, as increasing numbers of scientists, scientifically literate philosophers, and social thinkers are beginning to take seriously the range of possibilities that transhumanism encompasses. A rapidly expanding family of transhumanist groups, differing somewhat in flavor and focus, and a plethora of discussion groups in many countries around the world, are gathered under the umbrella of the World Transhumanist Association, a non-profit democratic membership organization. 

1.2 What is a posthuman?

It is sometimes useful to talk about possible future beings whose basic capacities so radically exceed those of present humans as to be no longer unambiguously human by our current standards. The standard word for such beings is “posthuman”. (Care must be taken to avoid misinterpretation. “Posthuman” does not denote just anything that happens to come after the human era, nor does it have anything to do with the “posthumous”. In particular, it does not imply that there are no humans anymore.) 

Many transhumanists wish to follow life paths which would, sooner or later, require growing into posthuman persons: they yearn to reach intellectual heights as far above any current human genius as humans are above other primates; to be resistant to disease and impervious to aging; to have unlimited youth and vigor; to exercise control over their own desires, moods, and mental states; to be able to avoid feeling tired, hateful, or irritated about petty things; to have an increased capacity for pleasure, love, artistic appreciation, and serenity; to experience novel states of consciousness that current human brains cannot access. It seems likely that the simple fact of living an indefinitely long, healthy, active life would take anyone to posthumanity if they went on accumulating memories, skills, and intelligence. 

Posthumans could be completely synthetic artificial intelligences, or they could be enhanced uploads [see “What is uploading?”], or they could be the result of making many smaller but cumulatively profound augmentations to a biological human. The latter alternative would probably require either the redesign of the human organism using advanced nanotechnology or its radical enhancement using some combination of technologies such as genetic engineering, psychopharmacology, anti-aging therapies, neural interfaces, advanced information management tools, memory enhancing drugs, wearable computers, and cognitive techniques. 

Some authors write as though simply by changing our self-conception, we have become or could become posthuman. This is a confusion or corruption of the original meaning of the term. The changes required to make us posthuman are too profound to be achievable by merely altering some aspect of psychological theory or the way we think about ourselves. Radical technological modifications to our brains and bodies are needed. It is difficult for us to imagine what it would be like to be a posthuman person. Posthumans may have experiences and concerns that we cannot fathom, thoughts that cannot fit into the three-pound lumps of neural tissue that we use for thinking. Some posthumans may find it advantageous to jettison their bodies altogether and live as information patterns on vast super-fast computer networks. Their minds may be not only more powerful than ours but may also employ different cognitive architectures or include new sensory modalities that enable greater participation in their virtual reality settings. Posthuman minds might be able to share memories and experiences directly, greatly increasing the efficiency, quality, and modes in which posthumans could communicate with each other. The boundaries between posthuman minds may not be as sharply defined as those between humans. 

Posthumans might shape themselves and their environment in so many new and profound ways that speculations about the detailed features of posthumans and the posthuman world are likely to fail.

by Nick Bostrom, Oxford University |  Read more: (pdf)
[ed. Repost]

Naked 9 – 4
blood moon

via:

For the Good of Society - Delete Your Map App

I live on an obnoxiously quaint block in South Berkeley, California, lined with trees and two-story houses. There’s a constant stream of sidewalk joggers before and after work, and plenty of (good) dogs in the yards. Trick-or-treaters from distant regions of the East Bay invade on Halloween.

Once a week, the serenity is interrupted by the sound of a horrific car crash. Sometimes, it’s a tire screech followed by the faint din of metal on metal. Other times, a boom stirs the neighbors outside to gawk. It’s always at the intersection of Hillegass, my block, and Ashby, one of the city’s thoroughfares. It generally happens around rush hour, when the street is clogged with cars.

It wasn’t always this way. In 2001, the city designated the street as Berkeley’s first “bicycle boulevard,” presumably due to some combination of it being relatively free of traffic and its offer of a direct route from the UC Berkeley campus down into Oakland. But since that designation, another group has discovered the exploit. Here, for the hell of it, are other events that have occurred since 2001:

2005: Google Maps is launched.
2006: Waze is launched.
2009: Uber is founded.
2012: Lyft is founded.

“The phenomenon you’re experiencing is happening all over the U.S.,” says Alexandre Bayen, director of transportation studies at UC Berkeley.

Pull up a simple Google search for “neighborhood” and “Waze,” and you’re bombarded with local news stories about similar once-calm side streets now the host of rush-hour jams and late-night speed demons. It’s not only annoying as hell, it’s a scenario ripe for accidents; among the top causes of accidents are driver distraction (say, by looking at an app), unfamiliarity with the street (say, because an app took you down a new side street), and an increase in overall traffic.

“The root cause is the use of routing apps,” says Bayen, “but over the last two to three years, there’s the second layer of ride-share apps.” (...)

All that extra traffic down previously empty streets has created an odd situation in which cities are constantly playing defense against the algorithms.

“Typically, the city or county, depending on their laws, doesn’t have a way to fight this,” says Bayen, “other than by doing infrastructure upgrades.”

Fremont, California, has mounted some of the harshest resistance, instituting rush-hour restrictions, and adding stop signs and traffic lights at points of heavy congestion. San Francisco is considering marking designated areas where people can be picked up or dropped off by ride-shares (which, hmm, seems familiar). Los Angeles has tinkered with speed bumps and changing two-way streets into one-ways. (Berkeley has finally decided to play defense on my block by installing a warning system that will slow cars at the crash-laden intersection; it will be funded by taxpayers.) (...)

Perhaps you see the problem. If cities thwart map apps and ride-share services through infrastructure changes with the intent to slow traffic down, it has the effect of slowing traffic down. So the algorithm may tell drivers to go down another side street, and the residents who’ve been griping to the mayor may be pleased, but traffic across the city as a whole has been negatively affected, making everyone’s travel longer than before. “It’s nuts,” says Bayen, “but this is the reality of urban planning.”

Bayen points out that this is sort of a gigantic version of the prisoner’s dilemma. “If everybody’s doing the selfish thing, it’s bad for society,” says Bayen. “That’s what’s happening here.” Even though the app makes the route quicker for the user, that’s only in relation to other drivers not using the app, not to their previous drives. Now, because everyone is using the app, everyone’s drive-times are longer compared to the past. “These algorithms are not meant to improve traffic, they’re meant to steer motorists to their fastest path,” he says. “They will give hundreds of people the shortest paths, but they won’t compute for the consequences of those shortest paths.”
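
The classic classroom illustration of this (Pigou's example from routing-game theory, not something Bayen cites here) makes the arithmetic concrete. Suppose a highway always takes one hour, while a shortcut takes x hours when a fraction x of all drivers use it:

    # Pigou's two-route example: selfish routing vs. the social optimum.
    def average_time(x: float) -> float:
        """Average travel time when a fraction x of drivers take the shortcut."""
        shortcut_time = x    # congestible: slows down as more drivers pile on
        highway_time = 1.0   # fixed one hour, regardless of load
        return x * shortcut_time + (1 - x) * highway_time

    print(average_time(1.0))   # selfish equilibrium: the shortcut never loses -> 1.00 hours
    print(average_time(0.5))   # socially optimal split -> 0.75 hours

Because the shortcut is never slower than the highway, every selfish router takes it, congesting it until nobody saves any time; a coordinator who sent half the drivers each way would cut the average commute by a quarter.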

by Rick Paulas, Select/All | Read more:
Image: Waze

How Email Open Tracking Quietly Took Over the Web

"I just came across this email," began the message, a long overdue reply. But I knew the sender was lying. He’d opened my email nearly six months ago. On a Mac. In Palo Alto. At night.

I knew this because I was running the email tracking service Streak, which notified me as soon as my message had been opened. It told me where, when, and on what kind of device it was read. With Streak enabled, I felt like an inside trader whenever I glanced at my inbox, privy to details that gave me maybe a little too much information. And I certainly wasn’t alone.

There are some 269 billion emails sent and received daily. That’s roughly 35 emails for every person on the planet, every day. Over 40 percent of those emails are tracked, according to a study published last June by OMC, an “email intelligence” company that also builds anti-tracking tools.

The tech is pretty simple. Tracking clients embed a line of code in the body of an email—usually in a 1x1 pixel image, so tiny it's invisible, but also in elements like hyperlinks and custom fonts. When a recipient opens the email, the tracking client registers that the pixel has been downloaded, along with when, where, and on what device. Newsletter services, marketers, and advertisers have used the technique for years to collect data about their open rates; major tech companies like Facebook and Twitter followed suit in their ongoing quest to profile and predict our behavior online.
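
Mechanically, that is the whole trick: a tracker serves a unique image URL per email and logs whoever fetches it. Here is a minimal sketch of such a server (the URL scheme and details are illustrative inventions, not Streak's actual service):

    import datetime
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # A 1x1 transparent GIF: the classic invisible tracking pixel.
    PIXEL = (b'GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00'
             b'!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00'
             b'\x00\x02\x02D\x01\x00;')

    class PixelHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            token = self.path.strip('/')                 # per-email token, e.g. 'abc123.gif'
            agent = self.headers.get('User-Agent', '?')  # reveals device and mail client
            ip = self.client_address[0]                  # rough location via a geo-IP lookup
            print(f"{datetime.datetime.now()} opened {token} from {ip} ({agent})")
            self.send_response(200)
            self.send_header('Content-Type', 'image/gif')
            self.end_headers()
            self.wfile.write(PIXEL)

    # The sender embeds something like this in the email body:
    #   <img src="http://track.example.com/abc123.gif" width="1" height="1" alt="">
    HTTPServer(('', 8080), PixelHandler).serve_forever()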

But lately, a surprising—and growing—number of tracked emails are being sent not from corporations, but acquaintances. “We have been in touch with users that were tracked by their spouses, business partners, competitors,” says Florian Seroussi, the founder of OMC. “It's the wild, wild west out there.”

According to OMC's data, a full 19 percent of all “conversational” email is now tracked. That’s one in five of the emails you get from your friends. And you probably never noticed.

“Surprisingly, while there is a vast literature on web tracking, email tracking has seen little research,” noted an October 2017 paper published by three Princeton computer scientists. All of this means that billions of emails are sent every day to millions of people who have never consented in any way to be tracked, but are being tracked nonetheless. And Seroussi believes that some, at least, are in serious danger as a result. (...)

I stumbled upon the world of email tracking last year, while working on a book about the iPhone and the notoriously secretive company that produces it. I’d reached out to Apple to request some interviews, and the PR team had initially seemed polite and receptive. We exchanged a few emails. Then they went radio silent. Months went by, and my unanswered emails piled up. I started to wonder if anyone was reading them at all.

That’s when, inspired by another journalist who’d been stonewalled by Apple, I installed the email tracker Streak. It was free, and took about 30 seconds. Then, I sent another email to my press contact. A notification popped up on my screen: My email had been opened almost immediately, inside Cupertino, on an iPhone. Then it was opened again, on an iMac, and again, and again. My messages were not only being read, but widely disseminated. It was maddening, watching the grey little notification box—“Someone just viewed ‘Regarding book interviews’”—pop up over and over and over, without a reply.

So I decided to go straight to the top. If Apple’s PR team was reading my emails, maybe Tim Cook would, too.

I wrote Cook a lengthy email detailing the reasons he should join me for an interview. When I didn’t hear back, I drafted a brief follow-up, enabled Streak, hit send. Hours later, I got the notification: My email had been read. Yet one glaring detail looked off. According to Streak, the email had been read on a Windows Desktop computer.

Maybe it was a fluke. But after a few weeks, I sent another follow up, and the email was read again. On a Windows machine.

That seemed crazy, so I emailed Streak to ask about the accuracy of its service, disclosing that I was a journalist. In the confusing email exchange with Andrew from Support that followed, I was told that Streak is “very accurate,” as it can let you know what time zone or state your lead is in—but only if you’re a salesperson. Andrew stressed that “if you’re a reporter and wanted to track someone's whereabouts, [it’s] not at all accurate.” It quickly became clear that Andrew had the unenviable task of threading a razor-thin needle: maintaining that Streak both supplied very precise data and was a friendly, non-intrusive product. After all, Streak users want the most accurate information possible, but the public might chafe if it knew just how accurate that data was—and considered what it could be used for besides honing sales pitches. This is the paradox that threatens to pop the email tracking bubble as it grows into ubiquity. No wonder Andrew got Orwellian: “Accuracy is entirely subjective,” he insisted, at one point.

Andrew did, however, unequivocally say that if Streak listed the kind of device used—as opposed to listing unknown—then that info was also “very accurate.” Even if it pertained to the CEO of Apple.

by Brian Merchant, Wired |  Read more:
Image: Getty

He Made Masterpieces with Manure

On the acknowledgements page of Traces of Vermeer, Jane Jelley thanks one friend who tracked down pig bladders and another who harvested mussel shells from a freshwater moat. Jelley, a painter, takes her research on the Dutch Golden Age painter Johannes Vermeer (1632–75) out of galleries and archives and into the studio. Her experiments are two parts Professor Branestawm, one part Great British Bake Off. She discovers that she can make yellow ‘lakes’ – pigments produced from dyes of the kind used by Vermeer and his contemporaries to create subtle ‘glazed’ effects – in her kitchen at home. First, you collect some unripe buckthorn berries from a hedgerow or the flowers of the broom shrub. Next, ‘You have to boil up the plants; and then you need some chalk, some alum; some coffee filters; and a large turkey baster.’ She reminds us how fortunate modern artists are to be able to buy their paint in ready-mixed tubes from Winsor & Newton.

Before he laid down even a dot of paint, Vermeer would have weighed, ground, burned, sifted, heated, cooled, kneaded, washed, filtered, dried and oiled his colours. Some pigments – the rare ultramarine blue made from lapis lazuli from Afghanistan, for example – had to be plunged into cold vinegar. Others – such as lead white – needed to be kept in a hut filled with horse manure. The fumes caused the lead to corrode, creating flakes of white carbonate that were scraped off by hand.

Vermeer knew how to soak old leather gloves to extract ‘gluesize’, applied as a coating to artists’ canvas. Or he might have followed the recipe for goat glue in Cennino Cennini’s painters’ manual The Craftsman’s Handbook: boiled clippings of goat muzzles, feet, sinews and skin. This was best made in January or March, in ‘great cold or high winds’, to disperse the goaty smell.

An artist had to be a chemist – and he had to have a strong stomach. He would have known, writes Jelley, ‘the useful qualities of wine, ash, urine, and saliva’. ‘Do not lick your brush or spatter your mouth with paint,’ warned Cennini. Lead white and arsenic yellow were poisonous, goat glue merely unpleasant. The art historian Jan Veth, writing in 1908 about Girl with a Pearl Earring (c 1665–7), fancied that Vermeer had painted with ‘the dust of crushed pearl’. Forensics have since revealed the earthier truth.

by Laura Freeman, Literary Review |  Read more:
Image: Wikipedia

Monday, December 11, 2017


Kimi Werner
via:
[ed. Free diver extraordinaire (and rider of great white sharks).]

Jonas Wood, Scholl Canyon 2, 2017
via:

via:
[ed. Mondays]

What to Make of New Positive NSI-189 Results?

I wanted NSI-189 to be real so badly.

Pharma companies used to love antidepressants. Millions of people are depressed. Millions of people who aren’t depressed think they are. Sell them all a pill per day for their entire lifetime, and you’re looking at a lot of money. So they poured money into antidepressant research, culminating in the ’80s and ’90s with the discovery of selective serotonin reuptake inhibitors (SSRIs) like Prozac. Since then, research has moved into exciting new areas, like “more SSRIs”, “even more SSRIs”, “drugs that claim to be SNRIs but on closer inspection are mostly just SSRIs”, and “drugs that claim to be complicated serotonin modulators but realistically just work as SSRIs”. Some companies still go through the pantomime of inventing new supposedly-not-SSRI drugs, and some psychiatrists still go through the pantomime of pretending to be excited about them, but nobody’s heart is really in it anymore.

How did it come to this? Apparently discovering new antidepressants is really hard. Part of it is that depression has such a high placebo response rate (realistically probably mostly regression to the mean) that it’s hard for even a good medication to separate much from placebo. Another part is that psychopharmacology is just a really difficult field even at the best of times. Pharma companies tried, tried some more, and gave up. All the new no-really-not-SSRIs are the fig leaf to cover their failure. Now people are gradually giving up on even pretending. There are still lots of exciting possibilities coming from the worlds of academia and irresponsible self-experimentation, but the Very Serious People have left the field. This is a disaster, insofar as they’re the only people who can get things through the FDA and into the mass market where anyone besides fringe enthusiasts will use them.

Enter NSI-189. A tiny pharma company called Neuralstem announced that they had a new antidepressant that worked directly on neurogenesis – a totally new mechanism! nothing at all like SSRIs! – and seemed to be getting miraculous results. Lots of people (including me) suspect neurogenesis is pretty fundamental to depression in a way serotonin isn’t, so the narrative really worked – we’ve finally figured out a way to hit the root cause of depression instead of fiddling around with knobs ten steps away from the actual problem. Irresponsible self-experimenters managed to synthesize and try some of it, and reported miraculous stories of treatment-resistant depressions vanishing overnight. Someone had finally done the thing!

There are many theories about what place our world holds in God’s creation. Here’s one with as much evidence as any other: Earth was created as a Hell for bad psychiatrists. For one thing, it would explain why there are so many of them here. For another, it would explain why – after getting all of our hopes so high – NSI-189 totally flopped in FDA trials.

I don’t think the data have been published anywhere (more evidence for the theory!), but we can read off the important parts of the story from Neuralstem’s press release. In Stage 1, they put 44 patients on 40 mg NSI-189 daily, another 44 patients on 80 mg daily, and 132 patients on placebo for six weeks. In Stage 2, they took the people from the placebo group who hadn’t gotten better in Stage 1 and put half of them on NSI-189, leaving the other half on placebo – I think this was a clever trick to get a group pre-selected for not responding to placebo, avoiding the usual problem where everyone does so well on placebo that the trial is a washout. But all of this was for nothing. On the primary endpoint – a depression rating instrument called MADRS – the NSI-189 group failed to significantly outperform placebo during either stage.
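
[ed. To see why that Stage 2 trick helps, here is a minimal simulation sketch (my toy model, not Neuralstem's actual protocol; the propensity distribution and the 10-point drug effect are invented for illustration). If placebo response varies from patient to patient, screening out the Stage 1 placebo responders leaves a Stage 2 pool with a visibly lower placebo response rate:]

    import random
    from statistics import mean

    random.seed(0)
    N = 100_000          # large pool so the averages are stable
    DRUG_EFFECT = 0.10   # invented: drug adds 10 points of response probability

    # Toy model: each patient has a personal placebo-response propensity,
    # drawn uniformly between 0 and 0.8, so an unselected sample responds
    # to placebo about 40% of the time.
    patients = [random.uniform(0, 0.8) for _ in range(N)]

    # Stage 1: everyone here gets placebo; responders are screened out.
    nonresponders = [p for p in patients if random.random() >= p]

    # Stage 2 expected response rates among the placebo non-responders:
    placebo_arm = mean(nonresponders)
    drug_arm = mean(min(p + DRUG_EFFECT, 1.0) for p in nonresponders)

    print(f"ordinary trial, placebo arm: {mean(patients):.2f}")  # ~0.40
    print(f"stage 2, placebo arm:        {placebo_arm:.2f}")     # ~0.31
    print(f"stage 2, drug arm:           {drug_arm:.2f}")        # ~0.41

[ed. In this additive toy model the drug-placebo gap itself doesn't change, but the enriched Stage 2 placebo arm stops looking spuriously good, which is exactly the failure mode the design is meant to avoid.]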

Neuralstem’s stock fell 61% on news of the study. Financial blog Seeking Alpha advised readers that Neuralstem Is Doomed. Investors tripped over themselves to withdraw support from a corporation that apparently was unable to handle the absolute bread-and-butter most basic job of a pharma company – fudging clinical trial results so that nobody figures out they were negative until half the US population is on their drug.

From last month’s New York Times:
The first thing you feel when a [drug] trial fails is a sense of shame. You’ve let your patients down. You know, of course, that experimental drugs have a poor track record – but even so, this drug had seemed so promising (you cannot erase the image of the cancer cells dying under the microscope). You feel as if you’ve shortchanged the Hippocratic Oath […]
There’s also a more existential shame. In an era when Big Pharma might have macerated the last drips of wonder out of us, it’s worth reiterating the fact: Medicines are notoriously hard to discover. The cosmos yields human drugs rarely and begrudgingly – and when a promising candidate fails to work, it is as if yet another chemical morsel of the universe has been thrown into the dumpster. The meniscus of disappointment rises inside you: That domain of human biology that the medicine hoped to target may never be breached therapeutically.
And so the rest of us gave a heavy sigh, shed a single tear, and went back to telling ourselves that maybe vortioxetine wasn’t exactly an SSRI, in some ways.

II.

But the reason I’m writing about all of this now is that Neuralstem has just put out a new press release saying that actually, good news! NSI-189 works after all! Their stock rose 67%! Investment blogs are writing that Neuralstem Is A Big Winner and boasting about how much Neuralstem stock they were savvy enough to hold on to!

What are these new results? Can we believe them?

I’m still trying to figure out exactly what’s going on; the results themselves were presented at a conference and aren’t directly available. But from what I can gather from the press release, this isn’t a new trial. It’s a set of new secondary endpoints from the first trial, which Neuralstem thinks cast a new light on the results.

What are secondary endpoints? Often during a drug trial, people want to measure whether the drug works in multiple different ways. For depression, these are usually rating scales that ask about depressive symptoms – things like “On a scale of 1 to 5, how sad are you?” or “How many times in the past month have you considered suicide?”. You could give the MADRS, a scale that focuses on emotional symptoms. Or you could give the HAM-D, a scale that focuses more on psychosomatic symptoms. Or since depression makes people think less clearly, you could give them a cognitive battery. Depending on what you want to do, all of these are potentially good choices.

But once you let people start giving a lot of tests, there’s a risk that they’ll just keep giving more and more tests until they find one that gives results they like. Remember, one out of every twenty statistical analyses you do will be positive at the 0.05 level by pure coincidence. So if you give people ten tests, you’ve got a pretty good chance of getting one positive result – at which point, you trumpet that one to the world.
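
[ed. The arithmetic is worth spelling out. Assuming the tests are independent (real endpoints are correlated, so this overstates the inflation somewhat), the chance of at least one spurious "significant" result grows fast with the number of tests:]

    # Chance that at least one of k independent tests comes up positive
    # at p < 0.05 when the drug does nothing at all: 1 - 0.95^k.
    for k in (1, 5, 10, 20):
        print(f"{k:>2} tests: {1 - 0.95 ** k:.0%}")
    #  1 tests: 5%
    #  5 tests: 23%
    # 10 tests: 40%
    # 20 tests: 64%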

Statisticians try to close this loophole by demanding researchers pre-identify a primary endpoint. That is, you have to say beforehand which test you want to count. You can do however many tests you want, but the other ones (“secondary endpoints”) are for your own amusement and edification. The primary endpoint is the one that the magical “p = 0.05 means it works” criterion gets applied to.
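
[ed. In code terms the convention is simple (a sketch of the general rule, not any particular trial's analysis plan; the p-values below are invented). If you insist on letting secondary endpoints count anyway, the standard penance is a Bonferroni-style correction, which a lone lucky p = 0.03 would still fail:]

    ALPHA = 0.05

    def trial_positive(p_values, primary):
        # Pre-registration: only the endpoint declared before the trial counts.
        return p_values[primary] < ALPHA

    def bonferroni_positive(p_values):
        # If every endpoint may count, each must clear a proportionally
        # stricter bar, keeping the family-wise error rate near 5%.
        return any(p < ALPHA / len(p_values) for p in p_values)

    # Ten endpoints, one of which squeaked under 0.05 by luck:
    ps = [0.44, 0.71, 0.03, 0.58, 0.91, 0.12, 0.67, 0.25, 0.83, 0.49]
    print(trial_positive(ps, primary=0))  # False (the primary came up null)
    print(bonferroni_positive(ps))        # False (0.03 > 0.05/10 = 0.005)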

Neuralstem chose the MADRS scale as their primary endpoint and got a null result. This is what they released in July that had everybody so disappointed. The recently-released data are a bunch of secondary endpoints, some of which are positive. This is the new result that has everybody so excited.

You might be asking “Wait, I thought the whole point of having primary versus secondary endpoints was so people wouldn’t do that?” Well…yes. I’m trying to figure out if there’s any angle here besides “Company does thing that you’re not supposed to do because it can always give you positive results, gets positive results, publishes a press release”. I am not an expert here. But I can’t find one. (...)

Except…why did their stock jump 67%? We just got done talking about the efficient market hypothesis and the theory that the stock market is never wrong in a way detectable by ordinary humans.

First of all, maybe that’s wrong. My dad is a doctor, and he swears that he keeps making a lot of money from medical investments. He just sees some new medical product, says “Yeah, that sounds like the sort of thing that will work and become pretty popular”, and buys it. I keep telling him this cannot possibly work, and he keeps coming to me a year later telling me he made a killing and now has a new car. Maybe all financial theory is a total lie, and if you get a lucky feeling when looking at a company’s logo you should invest in them right away and you will always make a fortune.

Or maybe it’s that it’s not investors’ job to answer “Does this drug work?” but rather “Will investing in this stock make me money?”. Neuralstem has mentioned that they’ll be bringing these new results in front of the FDA, presumably in the hopes of getting a Phase III trial. FDA standards seem to have gotten looser lately, and maybe a fig leaf of positive results is all they need to give the go-ahead for a bigger trial anyway – after all, they wouldn’t be approving the drug, just saying more research is appropriate. Then maybe that trial would come out better. Or it would be big enough that they would discover some alternate use (remember, Viagra was originally developed to lower blood pressure, and only got switched to erectile dysfunction after Phase I trials). Or maybe Neuralstem will join the 21st century and hire a competent Obfuscation Department.

I don’t know. I’m beyond caring. The sign of a really deep depression is abandoning hope, and I’ve abandoned hope in NSI-189…

by Scott Alexander, Slate Star Codex |  Read more:
Image: via
[ed. See also: NSI-189: A Nootropic Antidepressant That Promotes Neurogenesis]

Why Corrupt Bankers Avoid Jail

Prosecution of white-collar crime is at a twenty-year low.

In the summer of 2012, a subcommittee of the U.S. Senate released a report so brimming with international intrigue that it read like an airport paperback. Senate investigators had spent a year looking into the London-based banking group HSBC, and discovered that it was awash in skulduggery. According to the three-hundred-and-thirty-four-page report, the bank had laundered billions of dollars for Mexican drug cartels, and violated sanctions by covertly doing business with pariah states. HSBC had helped a Saudi bank with links to Al Qaeda transfer money into the United States. Mexico’s Sinaloa cartel, which is responsible for tens of thousands of murders, deposited so much drug money in the bank that the cartel designed special cash boxes to fit HSBC’s teller windows. On a law-enforcement wiretap, one drug lord extolled the bank as “the place to launder money.”

With four thousand offices in seventy countries and some forty million customers, HSBC is a sprawling organization. But, in the judgment of the Senate investigators, all this wrongdoing was too systemic to be a matter of mere negligence. Senator Carl Levin, who headed the investigation, declared, “This is something that people knew was going on at that bank.” Half a dozen HSBC executives were summoned to Capitol Hill for a ritual display of chastisement. Stuart Gulliver, the bank’s C.E.O., said that he was “profoundly sorry.” Another executive, who had been in charge of compliance, announced during his testimony that he would resign. Few observers would have described the banking sector as a hotbed of ethical compunction, but even by the jaundiced standards of the industry HSBC’s transgressions were extreme. Lanny Breuer, a senior official at the Department of Justice, promised that HSBC would be “held accountable.”

What Breuer delivered, however, was the sort of velvet accountability to which large banks have grown accustomed: no criminal charges were filed, and no executives or employees were prosecuted for trafficking in dirty money. Instead, HSBC pledged to clean up its institutional culture, and to pay a fine of nearly two billion dollars: a penalty that sounded hefty but was only the equivalent of four weeks’ profit for the bank. The U.S. criminal-justice system might be famously unyielding in its prosecution of retail drug crimes and terrorism, but a bank that facilitated such activity could get away with a rap on the knuckles. A headline in the Guardian tartly distilled the absurdity: “HSBC ‘Sorry’ for Aiding Mexican Drug Lords, Rogue States and Terrorists.”

In the years since the mortgage crisis of 2008, it has become common to observe that certain financial institutions and other large corporations may be “too big to jail.” The Financial Crisis Inquiry Commission, which investigated the causes of the meltdown, concluded that the mortgage-lending industry was rife with “predatory and fraudulent practices.” In 2011, Ray Brescia, a professor at Albany Law School who had studied foreclosure procedures, told Reuters, “I think it’s difficult to find a fraud of this size . . . in U.S. history.” Yet federal prosecutors filed no criminal indictments against major banks or senior bankers related to the mortgage crisis. Even when the authorities uncovered less esoteric, easier-to-prosecute crimes—such as those committed by HSBC—they routinely declined to press charges.

This regime, in which corporate executives have essentially been granted immunity, is relatively new. After the savings-and-loan crisis of the nineteen-eighties, prosecutors convicted nearly nine hundred people, and the chief executives of several banks went to jail. When Rudy Giuliani was the top federal prosecutor in the Southern District of New York, he liked to march financiers off the trading floor in handcuffs. If the rules applied to mobsters like Fat Tony Salerno, Giuliani once observed, they should apply “to big shots at Goldman Sachs, too.” As recently as 2006, in the aftermath of Enron’s implosion, such titans as Jeffrey Skilling and Kenneth Lay were convicted of conspiracy and fraud.

Something has changed in the past decade, however, and federal prosecutions of white-collar crime are now at a twenty-year low. As Jesse Eisinger, a reporter for ProPublica, explains in a new book, “The Chickenshit Club: Why the Justice Department Fails to Prosecute Executives” (Simon & Schuster), a financial crisis has traditionally been followed by a legal crackdown, because a market contraction reveals all the wishful accounting and outright fraud that were hidden when the going was good. In Warren Buffett’s memorable formulation, “You only find out who is swimming naked when the tide goes out.” After the mortgage crisis, people in Washington and on Wall Street expected prosecutions. Eisinger reels off a list of potential candidates for criminal charges: Countrywide, Washington Mutual, Lehman Brothers, Citigroup, A.I.G., Bank of America, Merrill Lynch, Morgan Stanley. Although fines were paid, and the Financial Crisis Inquiry Commission referred dozens of cases to prosecutors, there were no indictments, no trials, no jail time. As Eisinger writes, “Passing on one investigation is understandable; passing on every single one starts to speak to something else.” (...)

The very conception of the modern corporation is that it limits individual liability. Yet, in the decades after the United Brands case, prosecutors often pursued both errant executives and the companies they worked for. When the investment firm Drexel Burnham Lambert was suspected of engaging in stock manipulation and insider trading, in the nineteen-eighties, prosecutors levelled charges not just against financiers at the firm, including Michael Milken, but also against the firm itself. (Drexel Burnham pleaded guilty, and eventually shut down.) After the immense fraud at Enron was exposed, federal authorities pursued its accounting company, Arthur Andersen, for helping to cook the books. Arthur Andersen executives, desperate to cover their tracks, deleted tens of thousands of e-mails and shredded documents by the ton. In 2002, Arthur Andersen was convicted of obstruction of justice, and lost its accounting license. The corporation, which had tens of thousands of employees, was effectively put out of business.

Eisinger describes the demise of Arthur Andersen as a turning point. Many lawyers, particularly in the well-financed realm of white-collar criminal defense, regarded the case as a flagrant instance of government overreach: the problem with convicting a company was that it could have “collateral consequences” that would be borne by employees, shareholders, and other innocent parties. “The Andersen case ushered in an era of prosecutorial timidity,” Eisinger writes. “Andersen had to die so that all other big corporations might live.”

With plenty of encouragement from high-end lobbyists, a new orthodoxy soon took hold that some corporations were so colossal—and so instrumental to the national economy—that even filing criminal charges against them would be reckless. In 2013, Eric Holder, then the Attorney General, acknowledged that decades of deregulation and mergers had left the U.S. economy heavily consolidated. It was therefore “difficult to prosecute” the major banks, because indictments could “have a negative impact on the national economy, perhaps even the world economy.”

Prosecutors came to rely instead on a type of deal, known as a deferred-prosecution agreement, in which the company would acknowledge wrongdoing, pay a fine, and pledge to improve its corporate culture. From 2002 to 2016, the Department of Justice entered into more than four hundred of these arrangements. Having spent a trillion dollars to bail out the banks in 2008 and 2009, the federal government may have been loath to jeopardize the fortunes of those banks by prosecuting them just a few years later. (...)

Numerous explanations have been offered for the failure of the Obama Justice Department to hold the big banks accountable: corporate lobbying in Washington, appeals-court rulings that tightened the definitions of certain types of corporate crime, the redirecting of investigative resources after 9/11. But Eisinger homes in on a subtler factor: the professional psychology of élite federal prosecutors. “The Chickenshit Club” is about a specific vocational temperament. When James Comey took over as the U.S. Attorney for the Southern District of New York, in 2002, Eisinger tells us, he summoned his young prosecutors for a pep talk. For graduates of top law schools, a job as a federal prosecutor is a brass ring, and the Southern District of New York, which has jurisdiction over Wall Street, is the most selective office of them all. Addressing this ferociously competitive cohort, Comey asked, “Who here has never had an acquittal or a hung jury?” Several go-getters, proud of their unblemished records, raised their hands.

But Comey, with his trademark altar-boy probity, had a surprise for them. “You are members of what we like to call the Chickenshit Club,” he said.

Most people who go to law school are risk-averse types. With their unalloyed drive to excel, the élite young attorneys who ascend to the Southern District have a lifetime of good grades to show for it. Once they become prosecutors, they are invested with extraordinary powers. In a world of limited public resources and unlimited wrongdoing, prosecutors make decisions every day about who should be charged and tried, who should be allowed to plead, and who should be let go. This is the front line of criminal justice, and decisions are made unilaterally, with no review by a judge. Even in the American system of checks and balances, there are few fetters on a prosecutor’s discretion. A perfect record of convictions and guilty pleas might signal simply that you’re a crackerjack attorney. But, as Comey implied, it could also mean that you’re taking only those cases you’re sure you’ll win—the lawyerly equivalent of enrolling in a gut class for the easy A.

You might suppose that the glory of convicting a blue-chip C.E.O. would be irresistible. But taking such a case to trial entails serious risk. In contemporary corporations, the decision-making process is so diffuse that it can be difficult to establish criminal culpability beyond a reasonable doubt. In the United Brands case, Eli Black directly authorized the bribe, but these days the precise author of corporate wrongdoing is seldom so clear. Even after a provision in the Sarbanes-Oxley Act, of 2002, began requiring C.E.O.s and C.F.O.s to certify the accuracy of corporate financial reports, few executives were charged with violating the law, because the companies threw up a thicket of subcertifications to buffer accountability.

As Samuel Buell, who helped prosecute the Enron and Andersen cases and is now a law professor at Duke, points out in his recent book, “Capital Offenses: Business Crime and Punishment in America’s Corporate Age,” an executive’s claim that he believed he was following the rules often poses “a severe, even disabling, obstacle to prosecution.” That is doubly so in instances where the alleged crime is abstruse. Even the professionals who bought and sold the dodgy mortgage-backed instruments that led to the financial crisis often didn’t understand exactly how they worked. How do you explicate such transactions—and prove criminal intent—to a jury?

Even with an airtight case, going to trial is always a gamble. Lose a white-collar criminal trial and you become a symbol of prosecutorial overreach. You might even set back the cause of corporate accountability. Plus, you’ll have a ding on your record. Eisinger quotes one of Lanny Breuer’s deputies in Washington telling a prosecutor, “If you lose this case, Lanny will have egg on his face.” Such fears can deter the most ambitious and scrupulous of young attorneys.

The deferred-prosecution agreement, by contrast, is a sure thing. Companies will happily enter into such an agreement, and even pay an enormous fine, if it means avoiding prosecution. “That rewards laziness,” David Ogden, a Deputy Attorney General in the Obama Administration, tells Eisinger. “The department gets publicity, stats, and big money. But the enormous settlements may or may not reflect that they could actually prove the case.” When companies agree to pay fines for misconduct, the agreements they sign are often conspicuously stinting in details about what they did wrong. Many agreements acknowledge criminal conduct by the corporation but do not name a single executive or officer who was responsible. “The Justice Department argued that the large fines signaled just how tough it had been,” Eisinger writes. “But since these settlements lacked transparency, the public didn’t receive basic information about why the agreement had been reached, how the fine had been determined, what the scale of the wrongdoing was and which cases prosecutors never took up.” These pas de deux between prosecutors and corporate chieftains came to feel “stage-managed, rather than punitive.”

by Patrick Radden Keefe, New Yorker | Read more:
Image: Eiko Ojala