Thursday, April 26, 2018

The Operating Theatre: Contemporary Fiction & Las Vegas

Las Vegas is known for a multiplicity of things. Gambling. Transience. Heat. Solitude and isolation. Freakery. Global tourism. Indulgence. The triumph of spectacle, celebrating the celebratory. Somewhat recently, in the last five to ten years, prior to the November 2016 election of President Donald Trump and the October 2017 tragedy of the Mandalay Bay mass shooting, in the realm of literature the Nevadan metropolis of Las Vegas has become an examination room and a battleground, a kenoma in which writers have elucidated an elegiac donnée, one that refracts the soul (or soullessness) of an America enamored with the effervescence of the eternal present.

The city’s literary history is rarely insular, often regionally inscribed by its proximity to Los Angeles, an hour by plane or four hours by car, calling to mind Joan Didion’s Play It As It Lays (1970). It is the consummate desert city, defined by long, scorching summers and unremitting drought, essentially adjacent to the Southern Californian high desert, Death Valley, the Hoover Dam, and the Colorado River. It is a one-of-a-kind oasis, but it is also part of the American southwest. Book-wise, it is most readily associated with Hunter S. Thompson’s much-loved Fear and Loathing in Las Vegas (1972), novel-ish in structure, avant-garde in execution, a cult-classic exercise in self-invention. The genesis of gonzo journalism, a freakier version of the non-fiction novel in the aftermath of Capote and Mailer, but drug-doused, thus kin to Kesey, Thompson’s opus can also be categorized as Social Criticism. The nineties saw the success of John O’Brien’s 1990 novel Leaving Las Vegas, followed by a film adaptation five years later that brought the author’s distinctively bleak semi-autobiographical stylings to the screen. In recent years, three major works employ Las Vegas as a setting for literary fiction: Chris Abani’s The Secret History of Las Vegas (2014), Donna Tartt’s The Goldfinch (2013), and the tentpole of the New Vegas novel, Charles Bock’s Beautiful Children (2008). (...)

Uniforms of all types, not just military ones, proliferate more in Las Vegas than in most cities. Remembering Thoreau’s dictum, “Beware of all enterprises that require new clothes,” Abani takes pleasure in blurring, puncturing, and deconstructing clearly identified roles. The casino/hotel/resort employees are decked out in the trimmings of their trade as bellhop or desk clerk, pit boss or baccarat dealer, maid or gift shop employee. The female self-objectification-for-money hierarchy runs from cocktail waitress to stripper to hooker, each with her own particular costume. The tourists also love to play dress-up in the Southern Nevadan citadel, women donning a mini-skirt or a cleavagy black dress, men sporting a blazer and jeans at the steakhouse with the fellas, or visor and fanny pack at the pool with a little age hampering their gait but never dimming their glad-to-be-there grins. As one can imagine, Halloween on the Strip is a singular and revelatory forum for revelry. And of course, the other type of raiment that is every bit as prevalent as that of the scantily clad over-the-top get-ups or the soldier boys in their finery, is that of the brides and grooms, wedding parties trailing behind, whether lavish and quasi-celebrity at a ballroom inside The Wynn, or cheap and fast—like the subjects of Didion’s Saturday Evening Post essay “Marrying Absurd” from Slouching Towards Bethlehem (1968)—at one of the old downtown chapels all wicker and white paint, kitschy and pseudo-quaint. (...)

Las Vegas is a sunny place, hardly ever even cloudy or overcast, but its landscape and its paradigms are rarely clear. It is hard to find reality. The superficial is as rampant as the heat, and the constant commerce is as much of a glut as the slow-moving pedestrians and the always waiting lines of cabs and limos, and even the Strip itself, the literal sidewalk, is often buried beneath discarded handouts, fliers, and baseball card–sized advertisements for escorts. Las Vegas as a setting for the literature of social comment has only been lightly studied; it remains under-utilized, far from exhausted.

In an August 2014 issue of LA Weekly, Henry Rollins writes, “In many ways, Las Vegas is the ultimate statement of Homo sapiens. Not Coltrane, not NASA or literacy. This assault on nature is one of the most obscene attempts to tame the wild. It is a massive concrete, steel and pavement tantrum.” It is as if everybody knows that Washington D.C. isn’t really the capital anymore. Sure, there’s still a plentitude of power suits, the agglomeration of mid-Atlantic wealth, the Machiavellian allure of politics typified by Kevin Spacey’s Frank Underwood in House of Cards, but it’s not the most American of cities, it’s not the most popular or the best-known. It’s a relic, a studio lot for CNN, talking heads with landmarks hovering in the background like something out of a much-reused sixth-grade textbook. The modern-day United States is condensed into instantly recognizable symbols, as portrayed through the media, social or traditional, old or new, in all their hyper-linked and instantly gratifying versions. And though the Washington Monument and the Capitol Building and the White House are well-known edifices, they don’t quite encapsulate or semiotically signify, not in the way that the Manhattan skyline or the Hollywood sign serve as American archetypes.

For the latter half of the twentieth century, those two port cities, N period Y period and L period A period, were America’s benchmarks and bookends, but the twenty-first century may more appropriately be compared to Las Vegas, Nevada and all its cartoonish glory. Where the booms and busts hit first and hardest (chronicled in Michael Lewis’s The Big Short (2010) and its 2015 film adaptation). Where the American empire sends you on vacation or to a conference. William Chalmers’s book America’s Vacation Deficit Disorder (2013) stipulates, via a citation of the MMGY Global/Harrison Group’s 2012 Portrait of American Travelers, that only 9 percent of U.S. residents are even interested in international destinations, so Las Vegas provides a Cliff’s Notes version of the world, a more comfortable brand of tourism, without the massive time shifts, language barriers, and xenophobic fears inherent to international travel. “L.V.” says you don’t have to go to Paris and put up with those snooty French people to see the Eiffel Tower or to order a baguette from an ersatz boulangerie. You can experience the thrill of a tropical isle without the interference of child beggars and that depressing drive through the dilapidated slums on the way from the airport. The photogenic amenities of Venice are available without the stink of the canals, and the gondoliers backgrounded in selfies kick back for a Krispy Kreme donut or a Coors Light after work.

Las Vegas has acknowledged America’s fame obsession and is vending it. The illusion of being a somebody. You get to be whatever you want and, better yet, you get to leave it there. Hide from your job and your boss and your kids, shed your skin and let out that inner slouching beast. Yeats got the city wrong, but he was right about the desert. Las Vegas is a place where the temporal and the ephemeral are morphed into insta-culture, a palace of assimilation that’s open twenty-four/seven. There is no past or future, no need for perspective or long-range thought, there is only the unending now, the new promised land, a most codified place, a domain of myth and ritual.

by Sean Hooks, 3:AM Magazine |  Read more:
Image: uncredited

New Spotify Free Version

Spotify, long the leader in streaming music, has found itself in a precarious position: Analysts project it is about to lose the top spot. In February, a report by the Wall Street Journal revealed that while Spotify had 70 million subscribers to Apple Music’s 30 million (a stat last updated in September), Apple Music’s growth rate far surpassed Spotify’s—so much so that Apple Music was on pace to become the top music streaming service by summer’s end. It would seem that having its devices in the hands of consumers has been a huge boon to the adoption of Apple’s streaming service. But Spotify has been working on its own strategy for remaining on top. While Apple offers a free trial of Apple Music, its app is largely inaccessible without a paid subscription. With a free app, Spotify can give users a taste of the full premium app experience, a strategy that has been one of its primary means of acquiring new subscribers.

On Tuesday, Spotify introduced a new look and new features for the free version of its app. While the paid version offers a wide range of capabilities—playlist creation and curation, the ability to build and listen to artist or song radio stations, and the option to follow public playlists—the free app was more limited. Non-paying customers could do little more than listen to a selection of the app’s playlists on permanent shuffle. With Tuesday’s update, they gain access to 15 customized playlists. Before, they could only listen to whatever song Spotify’s algorithm happened to churn out next; now, they can listen to any song on those playlists whenever they like. These 15 playlists, which include the popular Discover Weekly, Release Radar, and Daily Mix playlists, are curated by Spotify based on your listening habits. They jointly contain more than 750 songs and are typically updated with new selections daily.

Spotify is also updating the onboarding experience for new free users, allowing them to select artists that they like so the app can start customizing playlists immediately. And for those who want to ensure they don’t bust through their monthly data plan in a day of frenzied streaming, there’s now a “data saver” option that minimizes the app’s impact on your usage. While the app’s premium version has seen numerous updates over the years, this is the free app’s first major overhaul since 2014, and it could prove an important update: Spotify’s free app is, for many, the gateway into a paid subscription. The free app currently has 90 million listeners, and according to the company, 60 percent of the company’s paying subscribers originated as users of the free version. By giving free users a bigger taste of what the premium Spotify experience offers—as well as a more prominently placed button to subscribe to the service in the bottom right of its navigation menu—Spotify likely hopes to boost its growth numbers and derail Apple’s march towards streaming music domination.

by Christina Bonnington, Slate |  Read more:
Image: Photo illustration by Slate. Photos by Spotify and Apple

Rooting for Elon

Over the past year, in pursuit of his ambitious goals to transform U.S. auto and energy markets, Elon Musk has met critics from all directions: customers, stockholders, and workers. After Tesla recently missed its Model 3 production quotas for the third time in three quarters, the South African playboy entrepreneur offered a rare glimpse of contrition: the “car biz is hell,” he tweeted, adding that he was sleeping in the Tesla factory to overcome production shortfalls.

The week before, a Delaware court allowed a class action suit against the company to move ahead. Shareholders are alleging Tesla management engaged in a “self-dealing” breach of fiduciary duty. From a historic peak of $360, Tesla share prices fell by nearly a third to $250 in April. The New York Times ran the headline “Tesla Looked Like the Future. Now Some Ask if It Has One,” while The Economist warned that “Tesla is heading for a cash crunch.”

This is a hard pill for many to swallow. For years now, Musk has come to stand in for something more than each of his three manufacturing companies: Tesla, the plug-in electric car manufacturer; SolarCity, the solar-panel and battery manufacturer that merged with Tesla in 2016; and SpaceX, the federally financed private rocketry firm. In his heroic gleam and boyish daring, many Americans see something as close to a leader as they are likely to have experienced in recent memory. (...)

But what is the substance of this vision? With Musk entering a new phase in his manufacturing career, it is a question worth considering. While the headlines, stock prices and investor ratings (Moody’s just downgraded Tesla) follow the production numbers and profit margins, the rest of us should examine just what it is we expect Musk to do.

What would success look like? Can it be done within the constraints of a private business firm? And, if so, as the debts come due, who is willing to sacrifice to help Musk achieve it?

What we often mean when we root for Musk is that we want to hasten the coming business-directed energy transition of our industrial system away from fossil fuels. In this, he embodies both a widely popular yearning for social transformation and the businessman’s stolidity restraining it.

For those condemned to life on Earth, what Musk represents above all is the possibility of a “green” or “renewable”—and therefore “sustainable”—capitalism.

Species survival is one way of putting it, but this elides all the details relevant to our political lives. That ambiguity is precisely why the vision is so appealing: it can be both revolutionary and ostensibly consensual. For the past fifty years, after all, among the easiest and most widely accepted formulas for people to work together to change their futures has been through patterns of personal consumption. We invest our savings, purchase private equipment, place our bets in the enthralling spectator sport that is the clash of powerful personalities and organizations—and then we wait.

It is this sleek, efficient temple of opportunity and security that Musk has cultivated and we have bought into that justifies the massive government spending behind his projects. Indeed, the most potent collective action behind Musk’s success has come from the state. In today’s political climate, projects such as his, which promise a return on investment and private-management practices, are the only ones deemed worthy of public investment. The states of California, Nevada, New York, and Oregon have all joined the federal government in offering direct grants and loans to Musk’s companies, and hundreds of millions of dollars in consumer rebates ultimately flow to Tesla through its customers (the federal government, for example, pays a $7,500 tax credit to purchasers of electric cars; California pays a further $2,500). As early as 2015, the Los Angeles Times attempted to sum the total public aid to Musk’s operations and arrived at $4.9 billion.

Yet Musk’s profitability—his success by the conventional standard—still hasn’t materialized. Tesla ran at a $671 million loss in the third quarter of 2017, with $117 million paid in interest on the company’s debts alone. In fiscal year 2017, losses summed to $2.2 billion, about three times what the company lost in 2016.

Moreover, Tesla has repeatedly missed every deadline promised to its customers and is currently under investigation by the National Labor Relations Board for denying employees the right to collective bargaining. The company faces numerous shareholder lawsuits alleging managerial violations of fiduciary duty, with a raft of class-action suits following a recent investigation by the Securities and Exchange Commission.

Musk’s career thus illustrates the central challenge of U.S. industrial planning. Because of taboos against government ownership and income-tax financed public services, the public must find ways of persuading businessmen to manage private property to meet public objectives. Often this leaves us choking at an ideological and political impasse. Rather than have government authorities spend billions to own and operate their own plant under public oversight and administration, we are trapped debating which private profit-making groups the government should support in pursuit of its public-interest goals.

If there is a coherent strategy, it is to underwrite the financing of uncertain companies that operate largely to generate capital gains for insiders, while unloading risk to savers on the outside. But when these companies threaten savers, the vainglory of businessmen loses much of its utility as an instrument of public policy.

Meanwhile, the effect of this style of industrial policy in the labor market is palpably unpleasant. To avoid becoming Ponzi schemes, companies such as Tesla and SolarCity must compete in product markets by undermining existing, middle-class jobs. The brazen fact here is that the assemblage of jobs and green-energy programs behind Tesla uses public expenditure but guarantees little employment income and no production targets.

by Andrew Elrod, Boston Review |  Read more:
Image: SpaceX

Wednesday, April 25, 2018

What Is a DJ's Role in Today's Dance Music Festivals?

It's no secret that festivals have a crucial place in today's dance music culture. These super-sized events aren't just an entry-level gateway for new fans; they're also a powerful platform for the spread of new ideas and sounds, and a glimpse into where the culture is heading. Plus, they are a reliable stream of revenue for both independent promoters and major corporations.

But as festivals continue to grow in scale and importance, their most central attractions—celebrity DJs—are experiencing an existential crisis. Specifically, about what the hell they should be doing when they're up on stage. In recent interviews with the New York Times, MTV, and other outlets, several top DJs seem to be in disagreement, or at least to hold vastly different views, about what their role at a dance festival should be.

In a New York Times feature last week, Swedish House Mafia alums Axwell and Sebastian Ingrosso provided a treasure trove of fun facts about their post-SHM career as duo Axwell /\ Ingrosso. (My personal fave: that Axwell likes to mutter "turnt up, turnt up" to himself before he gets on stage.) While the dance music media homed in on Axwell's comment that "underground dance music [is] amateur," what the interview really focused on was the Swedish duo's live show.

"The most important thing is not what we play, but the personality and how we interact with the crowd," said Ingrosso. In one fell swoop, he summed up the mentality of DJs like Avicii and Steve Aoki, who have been criticized for playing predictable or possibly pre-recorded sets, to coordinate with the deployment of pyrotechnics (or baked goods).

As Axwell and Ingrosso explain, their coveted 90 minutes on a main stage surrounded by a blur of fireworks, lasers, and LEDs is like a "victory lap" after years of grunt work in the studio. So what if the extent of their effort is doing Jesus hands and twiddling a few knobs? "They don't know what we do before the shows," said Axwell, "A guy with a guitar might know how to play the guitar, but does he know how to produce a whole song?"

This is, perhaps, the official recasting notice for the role of the DJ from skilled track selector to adulated player of big hits, downplaying the importance of improvisation and surprise in sets in favor of familiarity and spectacle.

It's easy to imagine seasoned DJs like Paul van Dyk and John Digweed gnashing their teeth over these comments. Last week, they both spoke out against the current crop of top DJs playing the same tired hits over and over again at festivals.

"If you're the biggest DJ in the world, you're in a position where you can play stuff that people don't know and blow people's minds," Digweed said to MTV. "But if you just chose to play stuff they know just to get a reaction, that's just being lazy."

He proudly confessed that his set at Ultra, where he played on Carl Cox's stage, was based on tracks he'd downloaded that same afternoon. Playing a record no one knows and hearing them go crazy is a "better buzz," he added.

Similarly, Van Dyk told MTV: "I think it is our responsibility as DJs to dig through all those thousands and thousands of tracks that come out each week and pick out the ones that actually mean something."

Both Digweed and Van Dyk are half a generation older than Axwell and Ingrosso, having first found success in the 90s and reached the mainstream zenith of their careers during the first electronic wave of the early 00s, a decade before the EDM craze washed over America and ebbed on shores abroad. In the last five years, Van Dyk has largely stayed out of the festival circuit, while Digweed has maintained a low-key presence on side stages only. In other words, both of them have effectively opted out of the EDM festival bonanza that Axwell and Ingrosso are leading the charge on.

Digweed and Van Dyk's comments are therefore emblematic of an older school of DJing—one that treats the dynamics of the dancefloor as the utmost priority. The explosion of dance music into the mainstream has changed the nature of its performance. Festivals now blithely take on the characteristics of pop and rock shows. While Digweed and Van Dyk used to play sets that stretched for hours, Axwell and Ingrosso's sets usually hit about an hour and a half. Dance music academic and cultural critic Luis-Manuel Garcia called this process the "concertization" of electronic music.

"A newer breed of EDM musicians have mostly abandoned the performance practices of the DJ booth to adopt those of a pop or rock stage artist: short, high-intensity musical sets that are paced like a rock concert, larger-than-life stage personae and a seemingly endless investment in visual spectacle to accompany the sensory overload of 'brain-melting' sound," Garcia wrote in a feature for Resident Advisor. He could have easily been talking about Axwell /\ Ingrosso's main stage finale at Ultra.

by Michelle Lhooq, Thump | Read more:
Image: via
[ed. Yes, but what does a DJ actually do? See here:]

Art Deco Lawnmower
via:

How My Nobel Dream Bit the Dust

“You may speculate from the day that days were created,
but you may not speculate on what was before that.”
—Talmud, Tractate Hagigah 11b, 450 A.D.

To go back to the beginning, if there was a beginning, means testing the dominant theory of cosmogenesis, the model known as inflation. Inflation, first proposed in the early 1980s, was a bandage applied to treat the seemingly grave wounds cosmologists had found in the Big Bang model as originally conceived. To call inflation bold is an understatement; it implied that our universe began by expanding at the incomprehensible speed of light ... or even faster! Luckily, the bandage of inflation was only needed for an astonishingly minuscule fraction of a second. In that most microscopic flash of time, the very die of the cosmos was cast. All that was and ever would be, on a cosmic scale at least—vast assemblies of galaxies, and the geometry of the space between them—was forged.

For more than 30 years, inflation remained frustratingly unproven. Some said it couldn’t be proven. But everyone agreed on one thing: If cosmologists could detect a unique pattern in the cosmos’s earliest light, light known as the cosmic microwave background (CMB), a ticket to Stockholm was inevitable.

Suddenly, in March 2014, humanity’s vision of the cosmos was shaken. The team of which I had been a founding member had answered the eternal question in the affirmative: Time did have a single beginning. We had proof. It was an amazing time indeed.

For weeks I had known it was coming. Our entire team was furiously working to finalize the results we would soon make public. We had relentlessly reviewed the data, diligently debating the strength of the findings, discussing what could be one of the greatest scientific discoveries in history. In the intensely competitive world of modern cosmology, the stakes couldn’t have been higher. If we were right, our detection would lift the veil on the birth of the universe. Careers would skyrocket, and we would be forever immortalized in the scientific canon. Detecting inflation equaled Nobel gold, plain and simple.

But what if we were wrong? It would be a disaster, not only for us as individual scientists but for science itself. Funding for our work would evaporate, tenure tracks would be derailed, professional reputations ruined. Once gleaming Nobel gold would be tarnished. Glory would be replaced by disappointment, embarrassment, perhaps even humiliation.

The juggernaut rolled on. The team’s leaders, confident in the quality of our results, held a press conference at Harvard University on March 17, 2014, and announced that our experiment, BICEP2, had detected the first direct evidence of inflation—evidence, albeit indirect, of the very birth pangs of the universe. (...)

For years BICEP2 looked for a swirling, twisting pattern (called a B-mode polarization pattern) in the CMB that cosmologists believed could only have been caused by gravitational waves squeezing and stretching space-time as they rippled through the infant universe. What could have caused these waves? Inflation and inflation alone. BICEP2’s detection of this pattern would be evidence of primordial gravitational waves generated during inflation, all but proving that inflation happened.

Then we saw it. There was no going back.

The broadcast from Harvard’s Center for Astrophysics captivated media around the world. Nearly 10 million people watched the press conference online that day. Every major news outlet, from The New York Times to the Economist to obscure gazettes deep within the Indian subcontinent, covered the announcement “above the fold.” My kids’ teachers had heard about it. My mother’s mahjong partners were kvelling about it.

Watching the live video, I could see MIT cosmologist Max Tegmark reporting the event. He wrote, “I’m writing this from the Harvard press conference announcing what I consider to be one of the most important scientific discoveries of all time. Within the hour, it will be all over the web, and before long, it will lead to at least one Nobel Prize.”

Finally, we’d seen what we, and the whole world apparently, had wanted to see. The BICEP2 team’s announcement was that we had read the very prologue of the universe—which, after all, is the only story that doesn’t begin in medias res.

Still, doubts plagued me. It sure seemed to be a discovery for the ages. But was it? No one is immune from confirmation bias. And scientists, despite what you may think, are rarely mere gatherers of facts, dispassionately following data wherever it may lead. Scientists are human, often all too human. When desire and data are in collision, evidence sometimes loses out to emotion. It was impossible to rule out every possible contaminant. Had we fretted enough?

The most worrisome aspect of BICEP2’s signal was how huge it was. It was shockingly big, more like finding a crowbar in a haystack than a needle, as one team member phrased it. At the time of our announcement, we were worried about being beaten by our chief competitor, a $1 billion space telescope called the Planck satellite with the perfect heavenly perch from which to scoop us. Prior to BICEP2’s press conference, Planck had already ruled out a B-mode signal half as big as the one we claimed to have observed. Cosmologists were expecting a whisper. We claimed BICEP2 had heard a roar. (...)
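
For readers who want to attach numbers to “half as big”: cosmologists quote the strength of the primordial B-mode signal as the tensor-to-scalar ratio r. Neither value appears in this excerpt, but the widely reported published figures at the time were roughly as follows (added here only for context):

    % Published figures, added for context; they are not taken from the excerpt above.
    r \equiv \frac{A_{\text{tensor (gravitational waves)}}}{A_{\text{scalar (density fluctuations)}}},
    \qquad r_{\text{BICEP2 claim}} \approx 0.20,
    \qquad r_{\text{prior Planck limit}} \lesssim 0.11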
***
Within three weeks of the press conference, 250 scientific papers had been written about our results. That was astonishing; a paper is considered “famous” if it has 250 citations over the course of decades! Then, in early April, I got an email from the physicist Matias Zaldarriaga. How many times can he be congratulating me, I wondered?

“When the dust is low, but spread over a wide area, it betokens the approach of infantry.”
—Sun Tzu, The Art of War

Matias’s April email was no “attaboy.” He was disturbed. He wanted to talk details. What did I know and when did I know it? It was the beginning of a trial I had long feared. Rumors were swirling at Princeton about the way we had used the infamous Planck slide. “People here in Princeton are very concerned about dust,” he said, ominously adding, “In fact they have managed to convince me that there is not a very good reason for me to believe it is not just dust. Have you looked into the foregrounds yourself?” Of course I had looked at the foregrounds—potential sources of contamination such as polarized emission from the Milky Way’s dust. The whole team had been worried about our galaxy producing spurious B-mode polarization that would masquerade as primordial gravitational wave B-modes. But data at low frequencies from BICEP1 and at high frequencies from Planck’s scrubbed PowerPoint slide convinced us we were okay.

A few days later, I got wind of a colloquium that Princeton University’s David Spergel had given just after the Harvard press conference. David said he had spotted a blunder in our results, that our data were contaminated by dust within the Milky Way galaxy. Soon, I found out there were others at Princeton laser-focused on the way we modeled dust. The BICEP2 leadership had anticipated an onslaught, perhaps even a backlash, from the Princeton folks, who were working on several competing B-mode experiments. Maybe they were just frustrated after being scooped on another major CMB discovery.

I asked Matias if it was David Spergel alone causing his concerns. Ominously, Matias said, “I think there is nothing else people here talk about.” My heart stopped. Princeton’s cosmology program is the top-ranked in the country—cosmology’s own Holy See, comprised of the world’s best experimentalists and theorists, among them multiple members of the National Academies of Sciences. It felt like an inflationary Inquisition, one that could put the BICEP2 results on a modern-day Index of banned pre-prints.

Imagine finding out the entire IRS is obsessed with your tax return. Not just one rogue auditor, but everyone, from the Secretary of the Treasury on down, fixated on your Form 1040! It was petrifying.

by Brian Keating, Nautilus |  Read more:
Image: Amble / Wikipedia
[ed. How science works.]

Understanding the Cinematography of Christopher Doyle

Interactive Nuclear Blast

Tuesday, April 24, 2018

Who is Watching Wall Street?

Since the ink dried on the GOP tax plan, officially known as the Tax Cuts and Jobs Act, back in December 2017, companies have spent over $218 billion repurchasing their own stock at the going price on the open market. The point of the tax law, according to Republicans, was to free up corporate cash so that companies could create jobs. Instead, it seems, companies are using the cash windfall to reward shareholders. Daily spending on buybacks has doubled from just a year ago and could reach a record high of $800 billion this year.

Why do companies buy back their own shares? Because buying back shares raises the price of the remaining shares—each share is now a slightly bigger slice of the corporate pie. Share buybacks push up share prices easily and instantly without the hard work of companies making improvements in how they attract customers or create their products.
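
To make that arithmetic concrete, here is a minimal sketch in Python, with made-up numbers rather than figures from the article: retiring shares leaves earnings untouched but raises earnings per share, the metric much executive pay is tied to.

    # Hypothetical figures, chosen only to illustrate the mechanism.
    earnings = 1_000_000            # annual earnings, in dollars
    shares_outstanding = 1_000_000  # shares before the buyback
    shares_repurchased = 100_000    # shares bought back on the open market

    eps_before = earnings / shares_outstanding
    eps_after = earnings / (shares_outstanding - shares_repurchased)

    print(f"EPS before buyback: ${eps_before:.2f}")  # $1.00
    print(f"EPS after buyback:  ${eps_after:.2f}")   # $1.11

    # The business earns exactly what it did before; each remaining share is
    # simply a bigger slice of the same pie, which is what nudges the price up.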

They are also a perfectly legal way for corporate executives, who hold huge chunks of stock, to juice their own pay. Executives decide which days to buy back shares and can then sell their own shares at the newly bumped up price. Top executives generally make the majority of their compensation through performance-based pay, which is either directly or indirectly tied to stock prices. Even though the rules of performance-based pay changed under tax reform, it is likely that executives will remain large shareholders.

But the problem with stock buybacks isn’t just frustration with the 1 percent getting even richer. Nor is it just the hypocrisy of how the tax bill was sold by the Republican Party—though there is plenty of that. While Republicans promised the bill would raise worker wages, all of the analyses about the ratio of spending on buybacks to spending on workers tell the same story: massive amounts of money are moving out to shareholders while very little is trickling down to workers. Moreover, Republicans promised improved innovation, but it should surprise no one that corporate investment relative to profits has declined from historical levels—hurting corporate potential in the long run—just as stock buybacks are on the rise.

Ending stock buybacks could be straightforward. Congress could amend the Securities and Exchange Act to simply make open-market share repurchases illegal. Or it could impose limits on buybacks for companies that aren’t investing in their employees or funding their pension commitments, or it could only allow buybacks when workers also receive a dividend. The Securities and Exchange Commission (SEC) could also repeal the “safe-harbor” rule, which lets companies spend massively on buybacks, or at the very least make companies justify why buybacks are a good use of corporate cash.

But the current surge of stock buybacks is a symptom of a much larger problem: how deeply corporate leaders are able to manipulate our economy for their own gain, without oversight from those who are supposed to hold them accountable. We’re in the grip of a shareholder primacy ideology, which posits that the purpose of corporate tax reform is to benefit shareholders because shareholders have the only right to the spoils.

To find our way out of this mess, we must first understand how we got here.

Shareholder primacy as a framework for corporate behavior only became entrenched in the 1980s. The postwar era was dominated by “managerial capitalism,” in which the management of big corporations focused on sales growth and, in some cases, labor peace to ensure growing productivity. A white male worker could get a steady job that paid the bills, promised a pension, and was all but guaranteed for life. Shareholders were an afterthought.

In the 1960s, the big firms grew into conglomerates—highly diversified companies that by the 1970s ended up being worth less than the sum of their parts. Shareholders grew restless in the 1970s as the economy slowed and interest rates rose, but they were stymied from mounting takeovers by prohibitive state corporate law and by federal anti-trust regulation that still held back some industry consolidation. As the 1970s came to a close, prominent economists reframed the responsibility of executives from ensuring rising sales to maximizing shareholder value. Further, they claimed that the ideal executive compensation package should include large chunks of shares to align executives’ interests with shareholders’.

This has—not surprisingly—led to an obsessive focus by corporate leaders on short-term share prices and cost-cutting, with the workforce as the first cost cut. There has been a significant shift—in power and in material rewards—away from workers and towards shareholders since shareholder primacy rose to dominance in the 1970s. But it wasn’t a gradual or cultural shift—key policy interventions under Ronald Reagan broke the back of managerial capitalism and ushered in shareholder primacy.

In 1982, four key policy changes occurred that allowed shareholders to take over, or threaten to take over, companies, and pushed executives to focus on the share price or get out of the way. The first was an overhaul of the Department of Justice’s antitrust merger review guidelines so that industry consolidation was welcomed, not forbidden. The second was a Supreme Court case, Edgar v. MITE, which made state antitakeover statutes unconstitutional and allowed for the rise of the hostile takeover. Third, Reagan’s wholesale attack on unions ended an era of fragile labor peace.

The fourth policy change was Rule 10b-18, a Securities and Exchange Commission rule that ushered in the era of stock buybacks. Back in 1968, under the Williams Act amendments, Congress gave the SEC the authority to prohibit buybacks if it so chose. The SEC never prohibited them, but throughout the 1970s, it proposed a rule that would have limited buybacks to 15 percent of the volume of a company’s shares that were trading on the open market, and, more importantly, presumed that any buybacks over this limit were stock price manipulation and therefore likely illegal.

But the rule never passed. And in a turnaround that is familiar today, the Reagan Administration came in and promulgated a new rule that allowed companies to do whatever level of buybacks they liked. In 1982, under the leadership of John S. R. Shad—the first SEC Chair from Wall Street since the Great Depression—the Commission passed Rule 10b-18, the “safe harbor rule,” which limits companies to buying back, on any given day, no more than an amount of shares equal to 25 percent of what’s trading on the open market. But the rule is superficial—companies do not have to disclose how many shares they buy back each day, only per quarter—and even if they exceed that limit, there is no presumption that the purchases are market manipulation.
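
As a rough sketch of the ceiling the safe harbor describes (the trading volume below is hypothetical, and the actual rule measures a trailing average of daily volume), the daily limit works out like this:

    # Hypothetical illustration of the Rule 10b-18 daily ceiling.
    average_daily_volume = 4_000_000   # shares traded per day (made up)
    safe_harbor_fraction = 0.25        # the 25 percent limit described above

    daily_buyback_ceiling = average_daily_volume * safe_harbor_fraction
    print(f"Daily buyback ceiling: {daily_buyback_ceiling:,.0f} shares")  # 1,000,000

    # Because buybacks are disclosed only quarterly, no outside observer can
    # check whether any given day's purchases actually stayed under this line.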

The safe harbor rule is akin to driving rules, in that you have to stay within a certain speed. But imagine if no one ever sat by the side of the road to see how fast you drove. What would you do?

The answer is to buy back a huge volume of stock in order to keep those share prices rising. Economist William Lazonick, who has done the most to bring attention to the harm of stock buybacks, calculated that from 2003 to 2012, public companies in the S&P 500 index spent over 90 percent of their earnings on buybacks and dividends. You might think the Obama Administration’s SEC would have given more attention to this practice, but you’d be wrong. In 2015, former SEC Chair Mary Jo White admitted that the agency does not collect the data to know if companies are staying within the daily volume and timing limits. All of this precedes the avalanche of buybacks we’re seeing now.

This practice is the heart of “shareholder primacy”—executives claim that they’re helpless to raise wages, slow down the scale of stock buybacks, or stop the fissuring of the workplace because they have to meet the insatiable demand for an ever-rising share price. The dollar amount that companies spend on buybacks shows just how easily corporations could afford to pay decent wages. Walmart’s base wage, for instance, rose from $10 to $11 this year, a raise announced to great fanfare after tax reform. But for a starting worker, it still means that he or she earns $19,448 a year. Meanwhile, Walmart authorized a buyback program of $20 billion in 2017.

by Lenore Palladino, Boston Review |  Read more:
Image: Sam Valadi

Bolivia’s Quest to Spread the Gospel of Coca

One thing about chewing coca leaves that is weird to the neophyte is their specific, sylvan kind of taste. Unlike the chemical stain that cocaine burns on the back of the throat, coca can seem like a hippie cleanse for the mouth. To start, there is the inescapable fibrousness; even with some dexterous tongue and tooth work, little twig-like stems end up pressed against the inside of the cheek or stabbing at the gums. Then there is the flavor, a musty piquancy of autumn leaves suffused with a tannic tang. The effect is slightly astringent. Chewing is generally a misnomer, since coca is piled up into a wad on one side of the mouth and sucked on, but some people gnash at the lanceolate leaves until tiny green specks garnish the teeth like dried parsley.

When a person chews coca, a cocktail of compounds is secreted from the leaves and absorbed into the body. This contains dozens of alkaloids that include the cocaine compound, and it has mild psychotropic effects in its unprocessed form. Its processed form, obviously, is a different matter. People from Andean countries like to say that coca’s relationship to cocaine is like the grape to wine. The equivalence isn’t totally precise, but coca is a centerpiece in traditional ceremonies and has the status of a sacred substance and so it enjoys, like the Holy Eucharist, a certain factual leniency.

Of course, neither its natural consumption nor its spiritual status has saved the coca plant from becoming a harbinger of bloodshed. Coca garnered its peculiar status when a German graduate student isolated a pure form of its electrifying alkaloid from a fresh shipment of leaves in 1859 using alcohol, sulfuric acid, sodium carbonate and ether. Cocaine’s global market is now worth around 80 billion dollars per year. It is also illicit. An untold number of people have been killed for having some connection, tenuous or not, to the trade. Drug-related violence has made parts of Latin America among the most dangerous places on the planet.

Nowhere has coca been more important than in Bolivia, South America’s poorest country. Though its governments have traditionally toed the line of U.S. foreign policy on drugs since at least the 1980s, Bolivia’s current president, Evo Morales, threw out the Drug Enforcement Agency (DEA) nearly a decade ago while vowing to resuscitate coca’s sullied reputation. “Coca,” Morales has said so often that the phrase could be printed on the currency, “is not cocaine.” After decades of sweaty counter-narcotics operations, during which U.S.-trained soldiers scoured the jungle uprooting coca bushes and Americans and Europeans snorted cocaine anyway, Morales called a stop to eradication campaigns in his country. Instead, the cocaleros of Bolivia have cultivated the conviction that they can spread the gospel of coca. “Our philosophy is clear,” the country’s leading anti-drug official, Sabino Mendoza, told me. “Coca should be consumed, in its natural state.” To that end, the Bolivian government has spent millions of dollars and put forward a law to support its coca market. It has shunned the War on Drugs and sought instead to create alternate markets for coca leaf by supporting industrialization. Teas, shampoos, wines, cakes, liquors, flour, toothpastes, energy drinks and candies that feature the leaf have been produced, some in government-backed factories.

It sometimes seems like Bolivians will market anything that contains their quasi-magical plant. Anything that could lure investors. Anything that could trade internationally. Anything, anything but cocaine. (...)

Coca, especially in the highlands, enjoys near panacea status. It has deep ties to indigenous culture, and the 30 percent of Bolivians who chew it regularly believe that it can alleviate most ills. In the new and growing coca product market, this tonic-like reputation is its most marketable aspect. “With Coca Real, it’s just the same,” one of Bolivia’s rising coca entrepreneurs, Juan Manuel Rivero, told me, referring to his flagship product, a carbonated energy drink containing coca extract. “A healthy beverage that will effectively combat sorojchi, alleviate exhaustion, and eliminate physical or mental fatigue.” Rivero is one of a dozen or so entrepreneurs who have obtained permission from the government to purchase coca for industrial development. While it’s not illegal to have coca in Bolivia, there is a limit on the amount that can be transported without a permit, and the movement of leaves is closely monitored. His Coca Real drink is one of the products that have entered the market seeking to capitalize on a sympathetic regime and shifting global attitudes about regulating certain kinds of substances.

At Rivero’s factory, where he produces soda concentrate, he offered me some of the finished, neon-green liquid product in a glass to try. It tasted like coca’s distant cousin, just arrived from Miami smacking bubble gum and raving about party yachts. Sweet, bubbly; the unmistakable descendant of Red Bull. I drank it quickly, and recognized an afternote redolent of coca’s tang. “Coca has one bad alkaloid, which is cocaine, and the rest of its alkaloids are good,” Rivero said. (The white powder cocaine is usually the cocaine alkaloid isolated in hydrochloride salt form, occasionally cut with other substances.) “We are sure that our product does not contain a single bad alkaloid. We want to show Bolivia and the world that it’s possible to make appealing derivatives that can be consumed and don’t cause addiction.” (...)

In July 2017, I travelled to the Chapare, a tropical province north of Cochabamba and one of Bolivia’s two major coca-growing regions, to meet Rivero’s outreach team. The road from the highlands down to the rainforest river basin traces its way along mountain saddles overhung with clouds and neon panicles of lobster claw flowers. It is also punctuated by checkpoints. Just a few decades ago, growing coca in the Chapare was prohibited. The area became ground zero in the U.S. War on Drugs. Interdiction forces conducted merciless campaigns against coca growers, who still bitterly resent the authors of their suffering.

I was going to the annual coca fair, where Coca Real was making a pitch, held just up the road from a mirrored glass-plated factory that was built to produce coca products. Flanked on all sides by the hyper-green rainforest, the fair stalls created haphazard corridors where revelers wandered, their cheeks bulging with coca. One vendor, selling frosting-smeared cupcakes topped with decorative coca leaf, told me that she had experimented for months to get the flavor right–there can’t be too much coca, she said, or the cake turns bitter. A man hawked coca shampoo as a cure for hair loss.

Nowhere in Bolivia has the impact of President Evo Morales’s 2005 election been felt more dramatically than the Chapare, where his activism leading one of the major coca unions thrust him into the national political spotlight and ultimately carried him to electoral victory thirteen years ago. Morales, who is the country’s first indigenous president and who was raised in poverty in the highlands before moving to the Chapare as a young man, has remained loyal to his base. Duly, he had promised to make an appearance at the fair. On the day of his scheduled arrival, farmers stood in their mud-splattered shoes and Sunday shirts with eyes turned skyward waiting for a sign of his helicopter.

Morales has increasingly become a subject of controversy in Bolivia, ever more with his recent efforts to massage the constitution to extend his long tenure in the presidential palace. But in the Chapare, support for him is unflagging. Asterio Romero, Morales’s friend and union colleague and currently the mayor of one of the region’s largest cities, told me he believed Morales was sent by God. That, he said, was the only explanation for Morales’s famous work ethic–the president sleeps little, and has been known to call ministers to the palace for meetings at 5 o’clock in the morning. To the people of the Chapare, he also represents someone who understands the pain of the drug war years.

For Morales, the piecemeal documentation of atrocities committed in the 1980s and 1990s in the name of eradicating coca plants is not jarring. He was there for clashes that produced albums filled with grainy photos of men and women with lash-like bruises and gaping bullet wounds, undergoing emergency outdoor surgeries or building barricades to block police trucks; the medical certificates of hematomas, contusions, puncture wounds and edemas; the autopsy reports documenting bullet trajectories. One report from 2008, published by Bolivian government agencies, in which Morales says he was tortured during his many detentions by anti-narcotics squads, includes photographs of the president himself. In them, he has the same mop-top haircut, but his face has the sheen of youth, and he is propped on a medical examining table with purple lesions crisscrossing his back and snaking over his shoulder.

By the time the report was published, Morales had been elected to his first presidential term, and he would with short shrift expel the U.S. Ambassador and the DEA from the country. Although many of the boots-on-the-ground anti-narcotics campaigns were carried out by special Bolivia police and military forces like UMOPAR, the Chapare was one of the first places where the DEA began its foreign War on Drugs operations, and many Bolivians still hold the U.S. responsible for the squads’ violence and corruption. “From the U.S., they made the DEA pressure us at gunpoint and with gas,” a coca union leader named Isidora Coronado told me. “It was a difficult time. A lot of women, especially, were traumatized; there were assaults, and the men in uniform could do whatever they wanted. But from the moment [Morales] became president we haven’t had those kinds of clashes anymore.”

If anything, the coca fair was a celebration of the victories won, and by extension, each artisanal coca product on offer seemed a small tribute to the struggle (Coca Real’s stand, with its flashy cardboard cutout of a life-sized bottle, was nearly alone in its unabashedly commercial design). Wherever Morales goes, he is greeted by garlands of flowers; shortly after his helicopter landed on the afternoon of the fair and he emerged from a black SUV among a flock of bodyguards, Morales was garlanded with coca leaves and presented with a shamrock-colored cake made from ground leaves and varnished with white icing. Farmers with pleated skirts and long braids presented him with baskets of guava and sweet potatoes. He spoke for 15 minutes, praising the new coca policies and promising more industrialization. To finish his speech, he chanted a famous slogan in Quechua, joined by hundreds of voices: “Kawsachun coca! Huañuchun Yanquis!” Long live coca. Down with the Yankees.

About 17 million people around the world used cocaine at some point in 2015, according to the latest data from the United Nations Office on Drugs and Crime (UNODC). A third of those people were in North America. While the DEA estimates that cocaine use is increasing in the U.S., most of its field divisions don’t consider the drug to be as urgent a threat as other controlled substances. Cocaine-related deaths have spiked, but this is largely due to a fad of speedballing it with fentanyl. In any case, the agency’s laboratory analyses conclude that 92 percent of cocaine in the U.S. market originated in Colombia and six percent in Peru–two countries where American interdiction programs are still robustly in place.

Bolivia, however, has been singled out by the U.S. government as being a special pain in the ass. Its truculence has earned it repeat mention on the White House’s annual presidential memorandum on illicit drug producing countries, where it is rebuked for having “failed demonstrably” to adequately enact counternarcotics policies. Since it’s an illegal market, drug production can only be measured by proxy, and so the UNODC calculates the number of hectares of coca cultivated using satellite and aerial imagery to guess at the amount of cocaine produced (it also looks at police seizures of finished cocaine and of the intermediary butter-like paste product). Its most recent data for the three major coca producing countries put cultivation at 146,000 hectares in Colombia, 43,900 in Peru, and 23,100 in Bolivia. The U.S. Department of State disagrees with the methodology and says there are more hectares in cultivation, though still less than in Colombia or Peru. But in September’s memo, the White House exempted Colombia, reasoning that its police and army are close security allies.

Bolivia is something else entirely.

by Jessica Camille Aguirre, Guernica | Read more:
Image: Ansellia Kulikku

Sonny Boy Williamson

Oregon Grew More Cannabis Than Customers Can Smoke


A recent Sunday afternoon at the Bridge City Collective cannabis shop in North Portland saw a steady flow of customers.

Little wonder: A gram of weed was selling for less than the price of a glass of wine.

The $4 and $5 grams enticed Scotty Saunders, a 24-year-old sporting a gray hoodie, to spend $88 picking out new products to try with a friend. "We've definitely seen a huge drop in prices," he says.

Across the wood-and-glass counter, Bridge City owner David Alport was less delighted. He says he's never sold marijuana this cheap before.

"We have standard grams on the shelf at $4," Alport says. "Before, we didn't see a gram below $8."

The scene at Bridge City Collective is playing out across the city and state. Three years into Oregon's era of recreational cannabis, the state is inundated with legal weed.

It turns out Oregonians are good at growing cannabis—too good.

In February, state officials announced that 1.1 million pounds of cannabis flower were logged in the state's database.

If a million pounds sounds like a lot of pot, that's because it is: Last year, Oregonians smoked, vaped or otherwise consumed just under 340,000 pounds of legal bud.

That means Oregon farmers have grown three times what their clientele can smoke in a year.

Yet state documents show the number of Oregon weed farmers is poised to double this summer—without much regard to whether there's demand to fill.

The result? Prices are dropping to unprecedented lows in auction houses and on dispensary counters across the state.

Wholesale sun-grown weed fell from $1,500 a pound last summer to as low as $700 by mid-October. On store shelves, that means the price of sun-grown flower has been sliced in half to those four-buck grams.

For Oregon customers, this is a bonanza. A gram of the beloved Girl Scout Cookies strain now sells for little more than two boxes of actual Girl Scout cookies.

by Matt Stangel and Katie Shepherd, Willamette Week |  Read more:
Image: East Fork Cultivars

Your Sea Wall Won’t Save You

In 2011, a catastrophic flood washed through greater Bangkok. Hydrologically, this was not so unusual; Bangkok occupies the Chao Phraya River Delta, and although the rainfall that year was higher than normal, the waters didn’t reach the hundred-year flood level. But the landscape was more vulnerable than in past cycles. Factory development in the flood plain, subsidence caused by groundwater extraction, and mismanagement of dams upriver led to severe flooding that killed more than 800 people and affected some 13 million lives. Protected by the King’s dike, which encircles the Bangkok Metropolitan Area, the capital city was largely spared, but displaced floodwaters made conditions worse in outlying districts. The sacrifice zones were inundated for weeks, and then months. As angry “flood mobs” descended on the protected areas, opening flood gates and tearing holes in the sandbag walls, the prime minister counseled them to think of the national good. If the city center flooded, she said, it would cause “foreigners to lose confidence in us and wonder why we cannot save our own capital.”

And here is a dark truth of planning for “climate resilience.” Decisions about which areas will be protected are not only about whose safety will be guaranteed; they also involve transnational concerns like reassuring global investors and preserving manufacturing supply chains. In Thailand, thousands of soldiers were dispatched to patrol the floodwalls. They were enforcing resilience. This is both a rational decision and a disturbing vision of our climate-changed future. We are heading toward a world in which the unequal distribution of environmental risks is administered by state violence. How did we get here?

This article looks at four large cities in Southeast Asia facing major climate risks: Jakarta, Manila, Ho Chi Minh City, and Bangkok. Each is home to at least 8 million people living in a low-lying delta threatened by rapid urbanization, sinking ground, and rising seas. As officials seek to make their cities more resilient, they bring in outside planning experts who push “climate-proofing” models developed in Japan and Europe, especially in the Netherlands, which has a long history of advanced water management strategies. Highly engineered, technocratic programs come with readymade slogans, like “making room for the river,” a concept which works well along the banks of the Rhine but can mean mass evictions in the Global South. When “slums” (often a slur for urbanized villages with deep histories) are represented as a blight, to be scraped away with little if any recompense, and their people resettled in untenable locations far from the city center, we must ask: “Whose resilience” is really being promoted? Too often, the rhetoric of climate adaptation is doublespeak for the displacement of poor, informal communities, and an alibi for unsustainable growth. (...)

Jakarta

Let’s start in Jakarta, where the “Great Garuda” is the charismatic megafauna of resilience infrastructures. About 40 percent of the city is below sea level, and regular flooding along the highly polluted rivers and colonial canals is a fact of life. In 2007, floods forced 300,000 evacuations and spurred new plans to fortify the city against rising waters. An international team led by the Dutch engineering firms Witteveen+Bos and Grontmij proposed to build the National Capital Integrated Coastal Development, which envisions artificial islands in the Jakarta Bay anchored by the world’s largest sea wall. The scheme, which resembles a garuda, the mythical bird that is a national symbol of Indonesia, is financed largely by private development on the islands, including a new Central Business District housing 1.5 million people.

Victor Coenen, the project manager for Witteveen+Bos, describes the NCICD as “one big polder,” referencing the Dutch strategy for enclosing land within dikes to artificially control its hydrology. Essentially, Jakarta Bay will be a bathtub, completely separated from the Java Sea; the city’s rivers will drain here and then be pumped out to the ocean. Critics argue that disrupting the hydrology will harm local fisheries, trap polluted waters within the city, and exacerbate flooding outside the wall. In response to these concerns, as well as allegations of corruption, the sea wall was redesigned to be a mere(!) 30 km long. Now nearing completion, it will be Jakarta’s iron lung, requiring a whole secondary life-support system of pumps and drainage systems. To prevent retention areas from becoming polluted “black lagoons,” the city will need major sanitation upgrades, which are being led by German and Japanese partners. Getting the various projects to play well together, on time and within scope, is an immense challenge.

by Lizzie Yarina, Places Journal |  Read more:
Image: KuiperCompagnons

What Went Wrong With the Internet

Over the last few months, Select All has interviewed more than a dozen prominent technology figures about what has gone wrong with the contemporary internet for a project called “The Internet Apologizes.” We’re now publishing lengthier transcripts of each individual interview. This interview features Jaron Lanier, a pioneer in the field of virtual reality and the founder of the first company to sell VR goggles. Lanier currently works at Microsoft Research as an interdisciplinary scientist. He is the author of the forthcoming book Ten Arguments for Deleting Your Social Media Accounts Right Now.

You can find other interviews from this series here.


Jaron Lanier: Can I just say one thing now, just to be very clear? Professionally, I’m at Microsoft, but when I speak to you, I’m not representing Microsoft at all. There’s not even the slightest hint that this represents any official Microsoft thing. I have an agreement within which I’m able to be an independent public intellectual, even if it means criticizing them. I just want to be very clear that this isn’t a Microsoft position.

Noah Kulwin: Understood.

Yeah, sorry. I really just wanted to get that down. So now please go ahead, I’m so sorry to interrupt you.

In November, you told Maureen Dowd that it’s scary and awful how out of touch Silicon Valley people have become. It’s a pretty forward remark. I’m kind of curious what you mean by that.

To me, one of the patterns we see that makes the world go wrong is when somebody acts as if they aren’t powerful when they actually are powerful. So if you’re still reacting against whatever you used to struggle for, but actually you’re in control, then you end up creating great damage in the world. Like, oh, I don’t know, I could give you many examples. But let’s say like Russia’s still acting as if it’s being destroyed when it isn’t, and it’s creating great damage in the world. And Silicon Valley’s kind of like that.

We used to be kind of rebels, like, if you go back to the origins of Silicon Valley culture, there were these big traditional companies like IBM that seemed to be impenetrable fortresses. And we had to create our own world. To us, we were the underdogs and we had to struggle. And we’ve won. I mean, we have just totally won. We run everything. We are the conduit of everything else happening in the world. We’ve disrupted absolutely everything. Politics, finance, education, media, relationships — family relationships, romantic relationships — we’ve put ourselves in the middle of everything, we’ve absolutely won. But we don’t act like it.

We have no sense of balance or modesty or graciousness having won. We’re still acting as if we’re in trouble and we have to defend ourselves, which is preposterous. And so in doing that we really kind of turn into assholes, you know?

How do you think that siege mentality has fed into the ongoing crisis with the tech backlash?

One of the problems is that we’ve isolated ourselves through extreme wealth and success. Before, we might’ve been isolated because we were nerdy insurgents. But now we’ve found a new method to isolate ourselves, where we’re just so successful and so different from so many other people that our circumstances are different. And we have less in common with all the people whose lives we’ve disrupted. I’m just really struck by that. I’m struck with just how much better off we are financially, and I don’t like the feeling of it.

Personally, I would give up a lot of the wealth and elite status that we have in order to just live in a friendly, more connected world where it would be easier to move about and not feel like everything else is insecure and falling apart. People in the tech world, they’re all doing great, they all feel secure. I mean they might worry about a nuclear attack or something, but their personal lives are really secure.

And then when you move out of the tech world, everybody’s struggling. It’s a very strange thing. The numbers show an economy that’s doing well, but the reality is that the way it’s doing well doesn’t give many people a feeling of security or confidence in their futures. It’s like everybody’s working for Uber in one way or another. Everything’s become the gig economy. And we routed it that way, that’s our doing. There’s this strange feeling when you just look outside of the tight circle of Silicon Valley, almost like entering another country, where people are less secure. It’s not a good feeling. I don’t think it’s worth it, I think we’re wrong to want that feeling.

It’s not so much that they’re doing badly, but they have only labor and no capital. Or the way I used to put it is, they have to sing for their supper, for every single meal. It’s making everyone else take on all the risk. It’s like we’re the people running the casino and everybody else takes the risks and we don’t. That’s how it feels to me. It’s not so much that everyone else is doing badly as that they’ve lost economic capital and standing, and momentum and plannability. It’s a subtle difference.

There’s still this rhetoric of being the underdog in the tech industry. The attitude within the Valley is “Are you kidding? You think we’re resting on our laurels? No! We have to fight for every yard.”

There’s this question of whether what you’re fighting for is something that’s really new and a benefit for humanity, or if you’re only engaged in a sort of contest with other people that’s fundamentally not meaningful to anyone else. The theory of markets and capitalism is that when we compete, what we’re competing for is to get better at something that’s actually a benefit to people, so that everybody wins. So if you’re building a better mousetrap, or a better machine-learning algorithm, then that competition should generate improvement for everybody.

But if it’s a purely abstract competition set up between insiders to the exclusion of outsiders, it might feel like a competition, it might feel very challenging and stressful and hard to the people doing it, but it doesn’t actually do anything for anybody else. It’s no longer genuinely productive for anybody, it’s a fake. And I’m a little concerned that a lot of what we’ve been doing in Silicon Valley has started to take on that quality. I think that’s been a problem in Wall Street for a while, but the way it’s been a problem in Wall Street has been aided by Silicon Valley. Everything becomes a little more abstract and a little more computer-based. You have this very complex style of competition that might not actually have much substance to it.

You look at the big platforms, and it’s not like there’s this bountiful ecosystem of start-ups. The rate of small-business creation is at its lowest in decades, and instead you have a certain number of start-ups competing to be acquired by a handful of companies. There are not that many varying powers, there’s just a few.

That’s something I’ve been complaining about and I’ve written about for a while, that Silicon Valley used to be this place where people could do a start-up and the start-up might become a big company on its own, or it might be acquired, or it might merge into things. But lately it kind of feels like both at the start and at the end of the life of a start-up, things are a little bit more constrained. It used to be that you didn’t have to know the right people, but now you do. You have to get in with the right angel investors or incubator or whatever at the start. And they’re just a small number, it’s like a social order, you have to get into them. And then the output on the other side is usually being acquired by one of a very small number of top companies.

There are a few exceptions, you can see Dropbox’s IPO. But they’re rarer and rarer. And I suspect Dropbox in the future might very well be acquired by one of the giants. It’s not clear that it’ll survive as its own thing in the long term. I mean, we don’t know. I have no inside information about that, I’m just saying that the much more typical scenario now, as you described, is that the companies go to one of the biggies.

I’m kind of curious what you think needs to happen to prevent future platforms, like VR, from going the way of social media and reaching this really profitable crisis state.

A lot of the rhetoric of Silicon Valley that has the utopian ring about creating meaningful communities where everybody’s creative and people collaborate and all this stuff — I don’t wanna make too much of my own contribution, but I was kind of the first author of some of that rhetoric a long time ago. So it kind of stings for me to see it misused. Like, I used to talk about how virtual reality could be a tool for empathy, and then I see Mark Zuckerberg talking about how VR could be a tool for empathy while being profoundly nonempathic, using VR to tour Puerto Rico after the storm, after Maria. One has this feeling of having contributed to something that’s gone very wrong.

So I guess the overall way I think of it is, first, we might remember ourselves as having been lucky that some of these problems started to come to a head during the social-media era, before tools like virtual reality become more prominent, because the technology is still not as intense as it probably will be in the future. So as bad as it’s been, as bad as the election interference and the fomenting of ethnic warfare, and the empowering of neo-Nazis, and the bullying — as bad as all of that has been, we might remember ourselves as having been fortunate that it happened when the technology was really just little slabs we carried around in our pockets that we could look at and that could talk to us, or little speakers we could talk to. It wasn’t yet a whole simulated reality that we could inhabit.

Because that will be so much more intense, and that has so much more potential for behavior modification, and fooling people, and controlling people. So things potentially could get a lot worse, and hopefully they’ll get better as a result of our experiences during this era.

As far as what to do differently, I’ve had a particular take on this for a long time that not everybody agrees with. I think the fundamental mistake we made is that we set up the wrong financial incentives, and that’s caused us to turn into jerks and screw around with people too much. Way back in the ’80s, we wanted everything to be free because we were hippie socialists. But we also loved entrepreneurs because we loved Steve Jobs. So you wanna be both a socialist and a libertarian at the same time, and it’s absurd. But that’s the kind of absurdity that Silicon Valley culture has to grapple with.

And there’s only one way to merge the two things, which is what we call the advertising model, where everything’s free but you pay for it by selling ads. But then because the technology gets better and better, the computers get bigger and cheaper, there’s more and more data — what started out as advertising morphed into continuous behavior modification on a mass basis, with everyone under surveillance by their devices and receiving calculated stimulus to modify them. So you end up with this mass behavior-modification empire, which is straight out of Philip K. Dick, or from earlier generations, from 1984.

It’s this thing that we were warned about. It’s this thing that we knew could happen. Norbert Wiener, who coined the term cybernetics, warned about it as a possibility. And despite all the warnings, and despite all of the cautions, we just walked right into it, and we created mass behavior-modification regimes out of our digital networks. We did it out of this desire to be both cool socialists and cool libertarians at the same time.

by Noah Kulwin, Select All |  Read more:
Image: Brian Ach/Getty Images

Monday, April 23, 2018


Mark Anderson
via: Andertoon.com

Where Have All the Pilots Gone?


— well, the instructor who made that first takeoff seem easy told me, later that same day, that most people who begin pilot training never finish it. There are plenty of good reasons for that. It is, as my friend Dillo put it, more expensive than a crack habit. People hit plateaus and get frustrated and give up. But I think the main reason is that it’s complicated, and difficult, and stressful, and when the lessons stop being novel, people stop forcing themselves to do the hard thing, despite the ultimate rewards.

Where Have All the Pilots Gone? (TechCrunch)
Image: FAA


Sunday, April 22, 2018

Is the Internet Complete?

In 2013, a debate was held between friends Peter Thiel and Marc Andreessen, the thrust of which was to determine whether we are living through an innovation golden age, or whether innovation was in fact stalling. Thiel, of course, played the innovation sceptic, and it is interesting now, at five years’ remove, to look back on the debate and see how history has vindicated his position. In short, all of those things that were ‘just around the corner’ in 2013 are, sure enough, still ‘just around the corner.’

One strand of Thiel’s argument at the time (and since) was that the ostentatious progress made in computing in the last 15 years has blinded us to the lack of technological progress made elsewhere. We can hardly have failed to notice the internet revolution, and thus we map that progress onto everything, assuming that innovation is a cosmic force rather than something which happens on a piecemeal basis.

Certainly, this argument has gained more traction since 2013. However, in this piece I’d like to add an extra layer to it. Is it possible that innovation is not only stalling in non-tech areas, but in tech itself? Could we make an argument to say that the internet itself is, in fact, complete?

The driving logic for this argument is easy to dismiss—namely that all of the big ‘possible’ ideas associated with the internet have been taken. One might say that companies like Google, Facebook, and Amazon were all inevitabilities from the moment computers around the world started to link up, and that once these roles were filled, innovation started to dry up as there was fundamentally ‘nothing left to do.’

The first counter to this is that it’s easy to say in hindsight. Sure, Amazon—or a company like it—seems like an inevitability now, but there was once a time when people were highly sceptical of the idea that anyone would want to conduct any type of financial transaction over the internet. The second counter, proved by the first, is to say that we can’t possibly know what might be coming over the horizon at any given time. The next Google might be just about to break, and if it were to do so then it would make a mockery of such defeatism.

Both of these arguments are fair and true. However they simply refute the idea that the internet is finished at this moment, rather than the more fundamental idea that it’s possible for the internet to be finished at all. It is this second idea—or at least the theoretical possibility of it—that I want to illustrate here.

Let’s compare the internet to another world-changing innovation—the car. The car started as a ‘base concept’: a motorised chassis to transport you from point A to point B. That was the car on ‘day one,’ and this underlying concept has remained true up to the present day. However, that does not mean that the idea was complete on ‘day one.’ Over time, the car was innovated upon and developed. We added passenger seats so you could take people with you. We added a roof, so it wasn’t only suitable for fair weather. We added air conditioning to keep us comfortable, and a radio to keep us entertained. And, of course, we dramatically improved its performance and reliability. All in all, it probably took about 60 years for the car to go from ‘base concept’ to ‘finished article,’ from which point all cars have remained, on the whole, the same. Sure, a car from 2018 is far more advanced than a car from 1965, but it isn’t fundamentally different. It’s just a more polished version of the same thing. The 1965 car is, however, quite a lot different from an 1895 car, because that was the period of true innovation that fleshed out the idea.

We can say, therefore, that the car—as a concept—is ‘finished.’ Now, that isn’t to say, of course, that there has been no innovation since 1965, and that there won’t be any innovation in the future. Far from it. But it is to say that this innovation has been, on the whole, mere improvement on a static idea. Cars are cars, TVs are TVs, washing machines are washing machines. Once the idea is complete, we merely fiddle at the edges.

In spite of this precedent, we don’t see the internet in the same way. We don’t see the internet as a ‘base concept’ (i.e. a vast directory of information), which is gradually being shaped and polished into a finished article, from which point it will just tick along. Why not? I would suggest it’s because of the business structure. With the car, you had competing businesses each turning out their own version of the idea. Ford versus Mercedes versus Nissan. However, with the internet, you don’t have different ‘competing internets,’ you just have one—and business’s role within it is to look after the component pieces.

It’s a bit like there had only ever been one car, and different brands had each brought a new addition to the table to create the final useful thing. Facebook came along and put in the seats, Google the driving interface, YouTube the radio, and so on until the car was finished.

Seeing the internet this way, we might speculate that we have come to the end of the initial shaping of the idea, and that from this point on we shall merely be optimising it. We have on our hands the internet equivalent of a ’58 Chevy—there’s a long way to go, but fundamentally it does what we want it to do.

by Alex Smith, Quillette |  Read more:
Image: uncredited
[ed. See also: The Comments section.]

Peer Pressure

As I was writing this review, two friends called to ask me about "that book that says parents don't matter." Well, that's not what it says. What "The Nurture Assumption" does say about parents and children, however, warrants the lively controversy it began generating even before publication.

Judith Rich Harris was chucked out of graduate school at Harvard 38 years ago, on the grounds that she was unlikely to become a proper experimental psychologist. She never became an academic and instead turned her hand to writing textbooks in developmental psychology. From this bird's-eye vantage point, she began to question widespread belief in the "nurture assumption -- the notion that parents are the most important part of a child's environment and can determine, to a large extent, how the child turns out." She believes that parents must share credit (or blame) with the child's own temperament and, most of all, with the child's peers. "The world that children share with their peers is what shapes their behavior and modifies the characteristics they were born with," Harris writes, "and hence determines the sort of people they will be when they grow up."

The public may be forgiven for saying, "Here we go again." One year we're told bonding is the key, the next that it's birth order. Wait, what really matters is stimulation. The first five years of life are the most important; no, the first three years; no, it's all over by the first year. Forget that: It's all genetics! Cancel those baby massage sessions!

What makes Harris's book important is that it puts all these theories into larger perspective, showing what each contributes and where it's flawed. Some critics may pounce on her for not having a Ph.D. or an academic position, and others will quarrel with the importance she places on peers and genes, but they cannot fault her scholarship. Harris is not generalizing from a single study that can be attacked on statistical grounds, or even from a single field; she draws on research from behavior genetics (the study of genetic contributions to personality), social psychology, child development, ethology, evolution and culture. Lively anecdotes about real children suffuse this book, but Harris never confuses anecdotes with data. The originality of "The Nurture Assumption" lies not in the studies she cites, but in the way she has reconfigured them to explain findings that have puzzled psychologists for years.

First, researchers have been unable to find any child-rearing practice that predicts children's personalities, achievements or problems outside the home. Parents don't have a single child-rearing style anyway, because how they treat their children depends largely on what the children are like. They are more permissive with easy children and more punitive with defiant ones.

Second, even when parents do treat their children the same way, the children turn out differently. The majority of children of troubled and even abusive parents are resilient and do not suffer lasting psychological damage. Conversely, many children of the kindest and most nurturing parents succumb to drugs, mental illness or gangs.

Third, there is no correlation -- zero -- between the personality traits of adopted children and their adoptive parents or other children in the home, as there should be if "home environment" had a strong influence.

Fourth, how children are raised -- in day care or at home, with one parent or two, with gay parents or straight ones, with an employed mom or one who stays home -- has little or no influence on children's personalities.

Finally, what parents do with and for their children affects children mainly when they are with their parents. For instance, mothers influence their children's play only while the children are playing with them; when the child is playing alone or with a playmate, it makes no difference what games were played with mom.

Most psychologists have done what anyone would do when faced with this astonishing, counterintuitive evidence -- they've tried to dismiss it. Yet eventually the most unlikely idea wins if it has the evidence to back it up. As Carole Wade, a behavioral scientist, puts it, trying to squeeze existing facts into an outdated theory is like trying to fit a double-sized sheet onto a queen-sized bed. One corner fits, but another pops out. You need a new sheet or a new bed.

"The Nurture Assumption" is a new sheet, one that covers the discrepant facts. I don't agree with all the author's claims and interpretations; often she reaches too far to make her case -- throwing the parent out with the bath water, as it were. But such criticisms should not detract from her accomplishment, which is to give us a richer, more accurate portrait of how children develop than we've had from outdated Freudianism or piecemeal research.

The first problem with the nurture assumption is nature. The findings of behavior genetics show, incontrovertibly, that many personality traits and abilities have a genetic component. No news here; many others have reported this research, notably the psychologist Jerome Kagan in "The Nature of the Child." But genes explain only about half of the variation in people's personalities and abilities. What's the other half?

Harris's brilliant stroke was to change the discussion from nature (genes) and nurture (parents) to its older version: heredity and environment. "Environment" is broader than nurture. Children, like adults, have two environments: their homes and their world outside the home; their behavior, like ours, changes depending on the situation they are in. Many parents know the eerie experience of having their child's teacher describe their child in terms they barely recognize ("my kid did what?"). Children who fight with their siblings may be placid with friends. They can be honest at home and deceitful at school, or vice versa. At home children learn how their parents want them to behave and what they can get away with; but, Harris shows, "These patterns of behavior are not like albatrosses that we have to drag along with us wherever we go, all through our lives. We don't even drag them to nursery school."

Harris has taken a factor, peers, that everyone acknowledges is important, but instead of treating it as a nuisance in children's socialization, she makes it a major player. Children are merciless in persecuting a kid who is different -- one who says "Warshington" instead of "Washington," one who has a foreign accent or wears the wrong clothes. (Remember?) Parents have long lamented the apparent cruelty of children and the obsessive conformity of teen-agers, but, Harris argues, they have missed the point: children's attachment to their peer groups is not irrational, it's essential. It is evolution's way of seeing to it that kids bond with each other, fit in and survive. Identification with the peer group, not identification with the parent, is the key to human survival. That is why children have their own traditions, words, rules, games; their culture operates in opposition to adult rules. Their goal is not to become successful adults but successful children. Teen-agers want to excel as teen-agers, which means being unlike adults.

It has been difficult to tease apart the effects of parents and peers, Harris observes, because children's environments often duplicate parental values, language and customs. (Indeed, many parents see to it that they do.) To see what factors are strongest, therefore, we must look at situations in which these environments clash. For example, when parents value academic achievement and a student's peers do not, who wins? Typically, peers. Differences between black and white teen-agers in achievement have variously been attributed to genes or single mothers, but differences vanish when researchers control for the peer group: whether its members value achievement and expect to go to college, or regard academic success as a hopeless dream or sellout to "white" values.

Are there exceptions? Of course, and Harris anticipates them. Some children in anti-intellectual peer groups choose the lonely path of nerdy devotion to schoolwork. And some have the resources, from genes or parents, to resist peer pressure. But exceptions should not detract from the rule: that children, like adults, are oriented to their peers. Do you dress, think and behave more like others of your generation, your parents or the current crop of adolescents?

by Carol Tavris, NY Times (1998) | Read more:
Image: Goodreads
[ed. See also: The Nurture Assumption: First Chapter (Judith Rich Harris, NY Times).]