Monday, December 16, 2019
Capitalism's Grave Diggers: Why Private Equity Firms Should Be Abolished
In his latest BIG newsletter, Matt Stoller (previously) relates the key moments in the history of private equity, from its roots in the notorious "leveraged buyouts" of the 1980s, and explains exactly how the PE con works: successful, productive businesses are acquired through debt financing, drained of their cash and assets, and then killed, leaving workers unemployed and with their pension funds looted, and with the business's creditors out in the cold. (...)
[ed. Link to Stoller's essay, Why Private Equity Should Not Exist: here]
The darlings of this movement -- Henry Manne, Milton Friedman, Michael Jensen -- promoted the idea of "shareholder capitalism" and the notion that managers have a single duty: to put as much money in the pockets of investors, even at the expense of the business's sustainability or the well-being of its workers. They joined forces with Robert Bork, who had set about discrediting antitrust law, arguing (successfully) that the only time laws against monopolies should be enforced was when monopolists raised prices immediately after attaining their monopolies -- everything else was fair game (Bork is a major reason that every industry in the economy is now super-concentrated, with only a handful of major firms).
Simon's policy prescriptions -- massive reductions in capital gains taxes, deregulation of trucking, finance and transport, and a move from guaranteed pensions to 401(k)s that only provide in old age if you make the right bets in the stock market -- were adopted by Carter and the Democrats, flooding the market with huge amounts of cash to be invested.
That's when the leveraged buyout industry was born. In 1982, Simon convinced Barclays and General Electric to loan him $80m to buy Gibson Greeting Cards from its parent company RCA. Once the company was theirs, they looted its bank account to pay themselves a $900k "special dividend," sold off its real-estate holdings for $4m, and took the company public for $270m, with Simon cashing out $70m from the transaction (Simon's total investment was $330k).
This was the starter pistol for future leveraged buyouts, through which companies like Bain Capital and the Carlyle Group buy multiple companies in the same sector and transmit "winning strategies" between them: new ways to dodge taxes, raise prices, and avoid regulation. PE owners suck any financial cushion out of companies -- funds that firms set aside for downturns or R&D -- and replace it with "brutal debt schedules." The PE owners benefit massively when this drives up share prices, but take no downsides when the companies fail.
Under PE, companies have emphasized firing workers and replacing them with overseas subcontractors, and amassing "brands, patents and tax loopholes" as their primary assets. PE firms specialize in self-dealing, cutting in the banks and brokers who set up the deals for a share of the upside. A company bought by a private equity firm is ten times more likely to go bankrupt than one with a traditional capital/management structure.
Elizabeth Warren has proposed some commonsense reforms to private equity: making PE investors liable for the debts they load their companies up with (including an obligation to fund workers' pensions); ending special fees and dividends; and reforming bankruptcy and tax laws to force PE companies to operate on the same terms as other businesses. Stoller calls this "reunifying ownership and responsibility": making the people who assume ownership of these productive companies take responsibility for their liabilities, not just their profits.
As Stoller points out, critics of Warren's plan say that this would end private equity investing as we know it ("Unfortunately, Warren’s fixes for these problems... would pretty much guarantee that nobody invests in or lends to private equity firms" -- Steven Pearlstein, Washington Post), but of course, that's the whole point.
by Cory Doctorow, Boing Boing | Read more:
***
[ed. Excerpt below:]
"So what is private equity? In one sense, it’s a simple question to answer. A private equity fund is a large unregulated pool of money run by financiers who use that money to invest in and/or buy companies and restructure them. They seek to recoup gains through dividend pay-outs or later sales of the companies to strategic acquirers or back to the public markets through initial public offerings. But that doesn’t capture the scale of the model. There are also private equity-like businesses who scour the landscape for companies, buy them, and then use extractive techniques such as price gouging or legalized forms of complex fraud to generate cash by moving debt and assets like real estate among shell companies. PE funds also lend money and act as brokers, and are morphing into investment bank-like institutions. Some of them are public companies.
While the movement is couched in the language of business, using terms like strategy, business models, returns on equity, innovation, and so forth, and proponents refer to it as an industry, private equity is not business. On a deeper level, private equity is the ultimate example of the collapse of the enlightenment concept of what ownership means. Ownership used to mean dominion over a resource, and responsibility for caretaking that resource. PE is a political movement whose goal is to extend deep managerial controls from a small group of financiers over the producers in the economy. Private equity transforms corporations from institutions that house people and capital for the purpose of production into extractive institutions designed solely to shift cash to owners and leave the rest behind as trash. Like much of our political economy, the ideas behind it were developed in the 1970s and the actual implementation was operationalized during the Reagan era. (...)
The takeover of Toys “R” Us is a good example of what private equity really does. Bain Capital, KKR, and Vornado Realty Trust bought the public company in 2005, loading it up with debt. By 2007, though Toys “R” Us was still an immensely popular toy store, the company was spending 97% of its operating profit on debt service. Bain, KKR, and Vornado were technically the ‘owners’ of Toys “R” Us, but they were not liable for any of the debts of the company, or the pensions. Periodically, Toys “R” Us would pay fees to Bain and company, roughly $500 million in total. The toy store stopped innovating, stopped taking care of its stores, and cut costs as aggressively as possible so it could continue the payout. In 2017, the company finally went under, liquidating its stores and firing all of its workers without severance. A lot of people assume Amazon or Walmart killed Toys “R” Us, but it was selling massive numbers of toys until the very end (and toy suppliers are going to suffer as the market concentrates). What destroyed the company were financiers, and public policies that allowed the divorcing of ownership from responsibility."
by Matt Stoller, BIG | Read more:
[ed. Must read. Private equity is neoliberalism's Godzilla, profiting from the destruction of a wide range of American businesses (and insinuating itself into more industries every day). See also: What are private equity firms? (Wikipedia).]
Sunday, December 15, 2019
Having Kids
Before I had kids, I was afraid of having kids. Up to that point I felt about kids the way the young Augustine felt about living virtuously. I'd have been sad to think I'd never have children. But did I want them now? No.
If I had kids, I'd become a parent, and parents, as I'd known since I was a kid, were uncool. They were dull and responsible and had no fun. And while it's not surprising that kids would believe that, to be honest I hadn't seen much as an adult to change my mind. Whenever I'd noticed parents with kids, the kids seemed to be terrors, and the parents pathetic harried creatures, even when they prevailed.
When people had babies, I congratulated them enthusiastically, because that seemed to be what one did. But I didn't feel it at all. "Better you than me," I was thinking.
Now when people have babies I congratulate them enthusiastically and I mean it. Especially the first one. I feel like they just got the best gift in the world.
What changed, of course, is that I had kids. Something I dreaded turned out to be wonderful.
Partly, and I won't deny it, this is because of serious chemical changes that happened almost instantly when our first child was born. It was like someone flipped a switch. I suddenly felt protective not just toward our child, but toward all children. As I was driving my wife and new son home from the hospital, I approached a crosswalk full of pedestrians, and I found myself thinking "I have to be really careful of all these people. Every one of them is someone's child!"
So to some extent you can't trust me when I say having kids is great. To some extent I'm like a religious cultist telling you that you'll be happy if you join the cult too — but only because joining the cult will alter your mind in a way that will make you happy to be a cult member. But not entirely. There were some things about having kids that I clearly got wrong before I had them.
For example, there was a huge amount of selection bias in my observations of parents and children. Some parents may have noticed that I wrote "Whenever I'd noticed parents with kids." Of course the times I noticed kids were when things were going wrong. I only noticed them when they made noise. And where was I when I noticed them? Ordinarily I never went to places with kids, so the only times I encountered them were in shared bottlenecks like airplanes. Which is not exactly a representative sample. Flying with a toddler is something very few parents enjoy.
What I didn't notice, because they tend to be much quieter, were all the great moments parents had with kids. People don't talk about these much — the magic is hard to put into words, and all other parents know about them anyway — but one of the great things about having kids is that there are so many times when you feel there is nowhere else you'd rather be, and nothing else you'd rather be doing. You don't have to be doing anything special. You could just be going somewhere together, or putting them to bed, or pushing them on the swings at the park. But you wouldn't trade these moments for anything. One doesn't tend to associate kids with peace, but that's what you feel. You don't need to look any further than where you are right now.
Before I had kids, I had moments of this kind of peace, but they were rarer. With kids it can happen several times a day.
by Paul Graham | Read more:
The Strange Death of Social-Democratic England
The immediate, clear consequence of the UK election of December 12, 2019, is that Boris Johnson’s Conservative Party has succeeded where Theresa May’s failed in the last general election, in 2017—by winning an emphatic parliamentary majority that can pass the legislation necessary to facilitate Britain’s departure from the European Union. The faint irony of that two-year hiatus and the handover of party leadership from May to Johnson is that the latter’s deal is rather worse—from the Brexiteers’ point of view—than the one May repeatedly failed to get past Parliament. Nevertheless, the 2019 general election will go down as the moment British voters in effect voted a resounding “yes” in a de facto second referendum on Brexit and gave Boris Johnson a mandate to make his deal law and attempt to meet the latest Brexit deadline (January 31, 2020).

Brexit’s compromise over the status of Northern Ireland, half-in and half-out of Europe, is an unstable constitutional non-settlement that risks the fragile peace that’s held there since the 1998 Good Friday Agreement, while accelerating the hopes of some for a United Ireland. But the future of the Union faces a still more pressing challenge from renewed calls for a referendum on independence for Scotland, where a large majority of voters favor continued membership in Europe. The specter of “the breakup of Britain” that has long haunted the United Kingdom may materialize at last—just at the moment when English nationalists are celebrating their Brexit victory.
So much for the political landscape; what of the social fabric? A fourth successive defeat for the Labour Party, with its most ambitious anti-austerity program yet, and an outright win for a Conservative Party that has purged its moderates have sharpened dividing lines, squeezed the liberal center, and broken consensus into polarity. A minority of Britons—roughly a third, who will now see themselves as effectively disenfranchised—voted for a radical expansion of the public sector, a great leap forward toward a socialist Britain. But the plurality chose a party that, while promising more spending, has actually recomposed itself around a reanimated Thatcherite vision of exclusionary, anti-egalitarian, moralizing social Darwinism. Some part of the Tory electoral coalition might have more welfare-chauvinist reflexes, but the greater part of it distrusts the state, resents the taxation that pays for it, and would like to shrink both.
What is at stake after this election, then—in a Britain that might soon mean, to all intents and purposes, England & Wales—is the future of what has made it a reasonably civilized country since 1945: social democracy.
by Matt Seaton, NYRB | Read more:
Image: Christopher Furlong/Getty Images
Labels:
Culture,
Economics,
Government,
history,
Politics
The Drums of Cyberwar
That Vermont water treatment plant’s industrial control system is just one of 26,000 ICS’s across the United States, identified and mapped by the Dutch researcher, whose Internet configurations leave them susceptible to hacking. Health care, transportation, agriculture, defense—no system is exempt. Indeed, all the critical infrastructure that undergirds much of our lives, from the water we drink to the electricity that keeps the lights on, is at risk of being held hostage or decimated by hackers working on their own or at the behest of an adversarial nation. According to a study of the United States by the insurance company Lloyd’s of London and the University of Cambridge’s Centre for Risk Studies, if hackers were to take down the electric grid in just fifteen states and Washington, D.C., 93 million people would be without power, quickly leading to a “rise in mortality rates as health and safety systems fail; a decline in trade as ports shut down; disruption to water supplies as electric pumps fail and chaos to transport networks as infrastructure collapses.” The cost to the economy, the study reported, would be astronomical: anywhere from $243 billion to $1 trillion. Sabotaging critical infrastructure may not be as great an existential threat as climate change or nuclear war, but it has imperiled entire populations already and remains a persistent probability.

If it hadn’t worked, a more powerful cyberweapon, Nitro Zeus, was being held in reserve, apparently by the US, primed to shut down parts of the Iranian power grid, as well as its communications systems and air defenses. Had it been deployed, Nitro Zeus could have crippled the entire country with a cascading series of catastrophes: hospitals would not have been able to function, banks could have been shuttered and ATMs could have ceased to work, transportation could have come to a standstill. Without money, people might not have been able to buy food. Without a functioning supply chain, there would have been no food to buy. The many disaster scenarios that could have followed are not hard to imagine and can be summed up in just a few words: people would have died.
When government officials like the director of the US Defense Intelligence Agency, Lieutenant General Robert Ashley, say they are kept up at night by the prospect of cyberwarfare, the vulnerability of industrial control systems is likely not far from mind. In 2017 Russian hackers found their way into the systems of one hundred American nuclear and other power plants. According to sources at the Department of Homeland Security, as reported in The New York Times, Russia’s military intelligence agency, in theory, is now in a position “to take control of parts of the grid by remote control.”
More recently, it was discovered that the same hacking group that disabled the safety controls at a Saudi Arabian oil refinery in 2017 was searching for ways to infiltrate the US power grid. Dragos, the critical infrastructure security firm that has been tracking the group, calls it “easily the most dangerous threat publicly known.” Even so, a new review of the US electrical grid by the Government Accountability Office (GAO) found that the Department of Energy has so far failed to “fully analyze grid cybersecurity risks.” China and Russia, the GAO report states, pose the greatest threat, though terrorist groups, cybercriminals, and rogue hackers “can potentially cause harm through destruction, disclosure, modification of data, or denial of service.” Russia alone is spending around $300 million a year on its cybersecurity and, in the estimation of scholars affiliated with the New America think tank, has the capacity to “go from benign to malicious rapidly, and…rapidly escalate its actions to cyber warfare.”
It’s not just Russia. North Korea, Iran, and China all have sophisticated cyberwarfare units. So, too, the United States, which by one account spends $7 billion a year on cyber offense and defense. That the United States has not advocated for a ban on cyberattacks on critical infrastructure, the Obama administration’s top cybersecurity official, J. Michael Daniel, tells Greenberg in Sandworm, may be because it wants to reserve that option itself. In June David Sanger and Nicole Perlroth reported in The New York Times that the United States had increased its incursions into the Russian power grid.
There are no rules of engagement in cyberspace. Like cyberspace itself, cyberwarfare is a relatively new concept, and one that is ill-defined. Greenberg appears to interpret it liberally, suggesting that it is a state-sponsored attack using malware or other malicious software, even if there is no direct retaliation, escalation, or loss of life. It may seem like a small semantic distinction, but cyberwarfare is not the same as cyberwar. The first is a tactic, the second is either a consequence of that tactic, or an accessory to conventional armed conflicts. (The military calls these kinetic combat.) This past June, when the United States launched a cyberattack on Iran after it shot down an American drone patrolling the Strait of Hormuz, the goal was to forestall or prevent an all-out kinetic war. Responding to a physical attack with a cyberattack was a risk because, as Amy Zegart of Stanford’s Hoover Institution told me shortly afterward, we don’t yet understand escalation in cyberspace.
Absent rules of engagement, nation-states have a tremendous amount of leeway in how they use cyberweapons. In the case of Russia, cyberwarfare has enabled an economically weak country to pursue its ambitious geopolitical agenda with impunity. It has used cyberattacks on industrial control systems to cripple independent states that had been part of the Soviet Union in an effort to get them back into the fold, while sending a message to established Western democracies to stay out of its way.
As Russia has attacked, Greenberg has not been far behind, reporting on these incursions in Wired while searching for their perpetrators. Like the best true-crime writing, his narrative is both perversely entertaining and terrifying.
by Sue Halpern, NYRB | Read more:
Image: Vladimir Putin; drawing by Tom Bachtell
Labels:
Crime,
Government,
Military,
Politics,
Security,
Technology
Bob Dylan & Johnny Cash
[ed. I like that they went with the imperfect version.]
Chrome 79 Will Continuously Scan Your Passwords Against Public Data Breaches
Google's password checking feature has slowly been spreading across the Google ecosystem this past year. It started as the "Password Checkup" extension for desktop versions of Chrome, which would audit individual passwords when you entered them, and several months later it was integrated into every Google account as an on-demand audit you can run on all your saved passwords. Now, instead of a Chrome extension, Password Checkup is being integrated into the desktop and mobile versions of Chrome 79.

The whole point of this is security, so Google is doing all of this by comparing your encrypted credentials with an encrypted list of compromised credentials. Chrome first sends an encrypted, 3-byte hash of your username to Google, where it is compared to Google's list of compromised usernames. If there's a match, your local computer is sent a database of every potentially matching username and password in the bad credentials list, encrypted with a key from Google. You then get a copy of your passwords encrypted with two keys—one is your usual private key, and the other is the same key used for Google's bad credentials list. On your local computer, Password Checkup removes the only key it is able to decrypt, your private key, leaving your Google-key-encrypted username and password, which can be compared to the Google-key-encrypted database of bad credentials. Google says this technique, called "private set intersection," means you don't get to see Google's list of bad credentials, and Google doesn't get to learn your credentials, but the two can be compared for matches.
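The "private set intersection" idea described above can be sketched in a few lines of code. This is a toy illustration, not Google's actual protocol: the prime, the hashing, and the set sizes are all stand-ins chosen for readability, and a production system would use a proper elliptic-curve group and hash prefixes. The core trick, though, is the same: each side blinds values with a secret key, and because the blinding operations commute, matches survive double blinding even though neither side ever sees the other's raw data.

```python
import hashlib
import secrets

# Toy "private set intersection" via commutative Diffie-Hellman-style
# blinding. NOT Google's real protocol or parameters -- just a sketch
# of how two parties can find common items without revealing their sets.

P = 2**127 - 1  # a Mersenne prime; toy-sized, not production-safe


def h(item: str) -> int:
    """Hash an item to a nonzero element of the group mod P."""
    digest = hashlib.sha256(item.encode()).digest()
    return int.from_bytes(digest, "big") % P or 1


def blind(value: int, key: int) -> int:
    """Blind a group element by exponentiation with a secret key."""
    return pow(value, key, P)


# Each party picks a secret exponent it never shares.
server_key = secrets.randbelow(P - 2) + 1  # breach-list holder
client_key = secrets.randbelow(P - 2) + 1  # browser

breached = {"hunter2", "password123"}               # server's breach list
my_passwords = {"hunter2", "correct horse battery"}  # client's passwords

# Server sends its list blinded with its own key; the client blinds
# those values again with its key, producing a double-blinded set.
server_blinded = {blind(h(p), server_key) for p in breached}
server_double = {blind(v, client_key) for v in server_blinded}

# Client sends its passwords blinded with its key; the server applies
# its key on top and returns the result.
client_blinded = [blind(h(p), client_key) for p in my_passwords]
client_double = {blind(v, server_key) for v in client_blinded}

# Exponentiation commutes (g^a)^b == (g^b)^a, so shared items collide
# in double-blinded form, while everything else stays opaque.
matches = client_double & server_double
print(len(matches))  # -> 1: only "hunter2" appears in both sets
```

Because each side strips or never learns the other's key, the client cannot enumerate the breach list and the server never learns the client's passwords; only the count of collisions is revealed.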
Building Password Checkup into Chrome should make password auditing more mainstream. Only the most security-conscious people would seek out and install the Chrome extension or perform the full password audit at passwords.google.com, and these people probably have better password hygiene to begin with. Building the feature into Chrome will put it in front of more mainstream users who don't usually consider password security, who are exactly the kind of people who need this sort of thing. This is also the first time Password Checkup has been available on mobile, since mobile Chrome still doesn't support extensions (Google plz).
by Ron Amadeo, Ars Technica | Read more:
Image: Google
[ed. I believe Firefox has this feature too.]
Fox News Is Now a Threat to National Security
It’s worse than lunacy, though. Fox’s bubble reality creates a situation where it’s impossible to have the conversations and debate necessary to function as a democracy. Facts that are inconvenient to President Trump simply disappear down Fox News’ “memory hole,” as thoroughly as George Orwell could have imagined in 1984.
The idea that Fox News represents a literal threat to our national security, on par with Russia’s Internet Research Agency or China’s Ministry of State Security, may seem like a dramatic overstatement of its own—and I, a paid contributor to its competitor CNN, may appear a biased voice anyway—but this week has made clear that, as we get deeper into the impeachment process and as the 2020 election approaches, Fox News is prepared to destroy America’s democratic traditions if it will help its most important and most dedicated daily viewer.

In the impeachment hearings, former National Security Council official Fiona Hill and other witnesses made clear how those who, like Fox News hosts and the president, advance the false narrative that Ukraine meddled in the US election are serving the Kremlin’s interests. Russia is playing a weak hand geopolitically—its economy is sputtering along and its population shrinking—and so its greatest hope is to stoke internal discord in the West. Robert Mueller warned of this; James Clapper has warned of it; and now Fiona Hill has done the same. “Our nation is being torn apart,” she said. “Truth is questioned.” Yet Fox, and the GOP more broadly, has warmly embraced almost every twist of Kremlin propaganda, up to and including the idea that Russia never meddled in the 2016 election to begin with.
Fox’s clear willingness to parrot the wingnuttiest ideas in service of the president, long-term implications to the United States be damned, should worry all concerned about the state of the United States. The Ukraine myth is hardly the only example; for years, it has repeated false conspiracies about the murder of Democratic staffer Seth Rich, a conspiracy literally cooked up by Russian intelligence and fed into the US media. (To say nothing of Fox’s long-term commitment to undermining and questioning climate science, leaving the US both behind in mitigating the worst effects of climate change and also ill-equipped to face the myriad security consequences of a warming planet.)
It’s possible to paint Fox with too broad of a brush—Chris Wallace remains one of the toughest and best interviewers on television and has repeatedly stood up to vapid GOP talking points, and Bret Baier is a talented journalist and historian—but it’s clear from this year that something fundamental and meaningful has tipped inside the network.
While propagandizing has long been a key facet of Fox’s business (Stephen Colbert debuted his own Fox News host alter ego, in dedicated pursuit of “truthiness,” all the way back in 2005), the situation is clearly getting worse: the lies deeper, its always-tenuous commitment to “Fair and Balanced” unraveling further. Whatever loose adherence to a reality-based world the Fox worldview once possessed, whatever guardrails on truth the network might have once installed, are now gone. Shep Smith, long one of the network’s biggest names and best reporters, literally walked out of the Fox building this fall, departing abruptly after apparently deciding that he couldn’t in good conscience be part of a “news” operation that treated facts so fungibly.
Indeed, as the year has unfolded, Fox’s evening talk shows and its presidentially endorsed morning show have proven to be a particularly egregious and odious swamp of fetid, metastasizing lies and bad faith feedback loops that leave its viewers—and, notably, its Presidential Audience of One—foaming at the mouth with outrage and bile. (...)
More than simply embarrassing themselves by spouting obvious falsehoods, though, Fox News’ incendiary, fanatical rants serve to delegitimize to its viewers the very idea of a political opposition. Every Democrat is evil. Every person who disagrees with President Trump is an enemy of the state. Every career federal employee is a member of a deep state opposition.
As writer Gabe Sherman, who authored a history of Fox News, tweeted over the weekend, “Been thinking a lot about why Trump will survive impeachment when Nixon didn’t. For 20+ years Fox News (and rightwing talk radio) has told GOP voters that Democrats are evil. As lawless as Trump is, Republicans believe Dems are worse. That’s the power of propaganda.”
by Garrett M. Graff, Wired | Read more:
Image: Drew Angerer
[ed. Nothing new, but it makes you wonder why people accept being lied to politically and not personally.]
Saturday, December 14, 2019
Evil is Baked into Big Tech’s Business Plan
Google co-founders Larry Page and Sergey Brin famously launched their search engine with the mantra: “Don’t Be Evil.” Yet soon enough, their ambitions would lead them to talk publicly about doing good while stealthily refining business models based on exploiting, deceiving, and spying on their fellow humans.
Maybe they harbored an adolescent notion of being good. As if nice intentions were enough – like their early aim to steer clear of ad-based business models — without the responsibility, wisdom, and judgment that guide them to reality. Or maybe they had soaked up too much of the Ayn Rand-ethos of the ‘80s. In any case, Page and Brin tossed their youthful idealism overboard to make superyacht-loads of money grabbing your personal data to sell to the highest advertising bidder. Together with other once-idealistic Silicon Valley entrepreneurs, they created gigantic, Gilded Age-style companies that can bully or buy off anyone who stands in their way, sucking up public resources and shredding the democratic fabric as they go. Page and Brin, who recently stepped down from Google, are now the sixth and seventh richest people in the world.
Amid a rising public outcry against Big Tech’s alarming surveillance practices, monopolistic business practices, and growing threats to democracy, America needs guidance on how to rein in these behemoths. Enter the clear-eyed Financial Times columnist Rana Foroohar, who investigates how we ended up in this mess and proposes some practical ways to get back on track. In the following conversation, she discusses her penetrating new book, Don’t Be Evil: How Big Tech Betrayed Its Founding Principles — and All of Us, with the Institute for New Economic Thinking.
Lynn Parramore: As global business correspondent, you write on a wide array of topics for the Financial Times. Why a book-length dive on this particular subject at this moment?
Rana Foroohar: Two things. First, I was trolling for some focus in terms of where to throw my reporting energy, and I came across an amazing McKinsey Global Institute stat showing that 80 percent of corporate value was being held by just 10 percent of firms – those with the most IP [intellectual property] and data. Most were big tech companies, like the FAANGs [Facebook, Apple, Amazon, Netflix, and Google].
At the same time, I came home one day and opened a credit card bill and started scrolling down it and noticed all these tiny charges in increments of $1.99, 3 bucks, etc. I noticed they were all from the App store. At first, I thought I’d been hacked, but it turns out that my 10-year-old son had become addicted to an online soccer game and racked up all these charges in “in app” purchases. I was horrified as a mother, but fascinated as a journalist, and felt I needed to learn all about this insidious business model.
LP: You mention that Silicon Valley was heavily influenced by 60s counterculture and that many people involved in its rise started out wanting to make the world a better place for everyone. How did the tune go from “We are the World” to “Dirty Deeds Done Dirt Cheap”?
RF: I think that over the decades, the “connect the world” ethos morphed from something beautiful into something that was really just about supra-national companies offshoring capital, data, and profits to wherever they could. Many of the people running the platform tech firms today are very young; they came of age in the 80s when you had a neoliberal ethos, a greed-is-good culture, and a sense that the only thing that the government can do is cut taxes. They don’t think about the largest public/private ecosystem, or citizens – only consumers. That’s a larger shift in the economy and society of course, but Big Tech represents its apex – if capital can fly 35,000 feet over the problems of the nation state, data has been able to go even further and faster.
LP: We’re used to throwing around the word ‘big’ in association with certain industries: Big Ag, Big Pharma, etc. But the bigness of Big Tech feels like something new. You mention, for example, that the combined market capitalization of the FAANG companies is bigger than the economy of France. What does this bigness mean to the rest of the economy?
RF: Platform companies enjoy a super-star effect to the nth degree. Their model has been to move fast, break things, ring fence as much data as possible and then use that position to establish monopoly power. That was always the aim, and the possibility with such firms – you can go back to economists like Hal Varian and look at their early writings and see that data economists understood that potential. So, that’s one reason that I don’t buy it when tech titans come to Capitol Hill to testify and say “oh, we had no idea.” Of course you did. That was the plan – to become the operating system for people’s lives, in every aspect of their lives. (...)
LP: How might we realign the interests of tech firms and interests of customers and citizens? Where do you stand on breaking these companies up v. regulating them more effectively?
RF: I would like to see four things. First, public data banks in which democratically elected governments can decide which companies and which parts of the public sector get access to consumer data, and under what terms (rather like what Toronto is doing with the Google Sidewalk project). Second, data should be considered a resource of value, like labor, with an appropriate portion of the corporate value extracted from it going back to the individual (this might be done via a sovereign wealth fund for data, or a digital dividend program – both are being considered in California). Third, I think network and commerce should be separated, a la the 19th century railroad model or bank holding company model – a firm like Amazon, for example, shouldn’t be able to own the entire ecommerce network and compete against its own clients in commerce (particularly with no algorithmic transparency). And finally, we need a digital bill of rights that enshrines basic civil liberties in the digital space, and an FDA of technology to study the cognitive effects and regulate digital tech properly.
by Lynn Parramore, INET | Read more:
Companion Planting
In this article we’ll provide you with some of the ‘need to know’ details that you should follow in order to become an expert companion planting gardener. We’ll look at the plants that you should plant together, and those that you shouldn’t. There are also several benefits that come with companion planting, some of which we’ll carefully take you through. But first, what is companion planting?
What is companion planting?
Companion planting is a bit more than just the general notion that some specific plants can benefit others if they are planted close to each other. It has been defined as the planting of two or more crop species together in order to achieve benefits such as higher yields and pest control.
Companion planting has a long history, but the methods of planting crops together for their beneficial interactions are not always well documented in texts. In many cases, they come from oral tradition, front-porch musings and family recommendations. Despite these historical traditions and the science of horticultural farming, we often practice companion planting simply because it’s a practical planting method!
It allows you to grow herbs, veggies and exotic crops to their full potential. The process also helps to keep insects away, as well as helping you to maintain healthy soil. Eventually, you’ll note that the food you grow even tastes better. To kick-start your gardening adventure, here are some important reminders:
Why is companion planting significant?
- You should know that beans can grow with almost everything. You can plant them next to spinach and tomatoes for great results.
- To increase their resistance to diseases, you should plant your horseradish next to your potatoes.
- Summer cornfields are easily converted into fields of pumpkins in the autumn. In the past, the First Nations people of North America planted pumpkins together with corn and pole beans in a method called the ‘Three Sisters.’ The corn offers a sufficient ‘pole’ for the growth of beans, while the beans trap nitrogen in the soil, which is then greatly beneficial for the pumpkins. The pumpkins create a dense ground cover to stop the spread of weeds and to also keep away harmful pests.
- Pumpkins also function best as a row type of crop when planted together with sunflowers.
- It’s a good idea to plant some healthy nasturtium next to your squash, as it helps in keeping away those lousy squash vine borers.
- Consider using sweet marjoram in your gardens and beds to make your herbs and vegetables sweeter!
There are many benefits to companion planting. For instance, tomatoes taste better when planted together with basil. Similarly, harvesting them to make a lovely salad is easy, because they are located next to each other.
What are some of the other additional benefits?
by First Tunnels | Read more:
Image: via
[ed. See also: Companion planting (Wikipedia)]
Do Journalists Know Less Than They Used To?
Of the myriad crises threatening journalism — and therefore democracy — one challenge is almost invisible. For a host of reasons, journalists today understand less of the truth about the people they’re covering.
No doubt some will vehemently disagree. But it’s a subtle and important shift that alters the public’s knowledge of events. A little history will help me explain.
In 1989, while a press critic at the Los Angeles Times, I got a call from our Miami correspondent, Barry Bearak. One of the paper’s great talents, Bearak was often the writer of choice for complex and tragic breaking news events, such as plane crashes, the kind of stories that required an analytical mind and a poet’s touch.
“We’ve crossed some kind of border,” Bearak said. He was at the scene of a plane crash, and there were so many news outlets there, he said, he could no longer get near the story he was covering. Reporters were being kept behind ropes. They were forbidden from talking to anyone without a press aide present. Most communication occurred in press conferences. “Up till now,” he said, “I used to be able to walk the crash site with NTSB investigators.” But here even people who knew and trusted him wouldn’t speak to him. The result, he worried, was stories that were shallower and less accurate.
Bearak was witnessing a tipping point in the relationship between journalists and sources that would accelerate in the digital age. TV news had recently acquired epochal new technology — mobile satellite trucks and light video cameras — which allowed TV to be “live” from anywhere, technology that helped usher in Reagan’s TV presidency. Now, amid slipping ratings, local news stations were discovering they could “parachute” into any story anywhere and make it live, local and late-breaking. Gulf War One, featuring local news standups from Kuwait, was a year away. The O.J. trial was six.
Yet more outlets and reporters on the same story, it turned out, didn’t equal more public knowledge or understanding. Part of the controls being imposed by newsmakers to keep reporters at a distance was a logistical necessity. Officials worried that the influx of hundreds of new journalists and camera crews could literally trample the scene of a crime or an accident.
And some unwitting journalists also brought the controls on themselves. Many parachuting in had little grounding in the kinds of stories they were covering. When more experienced reporters with better sources broke stories, some of the parachutists complained to officials about uneven access, worried their bosses would be angry when they were scooped. It became easier for public relations people to regress to the mean — to make sure all reporters got the same information, banning government officials from talking even to those reporters they knew and trusted. All information was increasingly controlled.
More outlets covering the news had the ironic effect of shifting power away from journalists toward newsmakers. It was simple economic theory at work: More outlets competing for stories made it “a sellers’ market” for information. Sources, rather than journalists, were more able to dictate the terms of the sale, cherry-picking friendly outlets and angles (Trump and his Fox & Friends).
A second epochal change compounded the growing number of outlets: speed. No longer just the province of cable and local TV news, live and late-breaking became part of everyone’s business model. And that further shifted the relationship between journalists and sources. Reporters, working in diminished newsrooms, trying to produce across multiple platforms in real time, had less time to carefully develop sources. Technology also fed that, encouraging less face-to-face time. Journalists could assemble stories from official quotes that arrived digitally into their in-baskets. Email sources questions. File without picking up the phone or leaving the office.
The literature of press management talks about the Constituent vs. Conduit Model of media. The Constituent Model involves news sources convincing reporters of the merits of their arguments. The Conduit Model involves news sources treating journalists as technology or conduit through which messages are delivered. Good communications strategists do a little of both.
But the technology that has democratized media has also pushed us toward the Conduit Model, in which reporters are managed and controlled, not persuaded. All this plays out in ways we now take for granted. Sports leagues control the video and produce their own content. Athletes restrict their comments to scrums and press conferences and learn to say as little as possible. So do politicians and corporations.
The same web tools that democratized communication also gave newsmakers more ways to deliver messages without journalists involved much at all. Barack Obama had his YouTube crew. Donald Trump has done away with the White House even holding press briefings to explain what he’s doing. Twitter is his primary form of official communication.
The press is no longer a gatekeeper over what the public knows — the classic definition of the media. It is now instead often an annotator of what the public has already heard.
by Tom Rosenstiel, Poynter | Read more:
Friday, December 13, 2019
A 5,000-Year-Old Plan to Erase Debts Is Now a Hot Topic
In ancient Babylon, a newly enthroned king would declare a jubilee, wiping out the population’s debts. In modern America, a faint echo of that idea -- call it jubilee-lite -- is catching on.
Support for write-offs has been driven by Democratic presidential candidates. Elizabeth Warren says she’d cancel most of the $1.6 trillion in U.S. student loans. Bernie Sanders would go further -– erasing the whole lot, as well as $81 billion in medical debt.
But it’s coming from other directions too. In October, one of the Trump administration’s senior student-loan officials resigned, calling for wholesale write-offs and describing the American way of paying for higher education as “nuts.’’
Real-estate firm Zillow cites medical and college liabilities as major hurdles for would-be renters and home buyers. Moody’s Investors Service listed the headwinds from student debt -– less consumption and investment, more inequality -- and said forgiveness would boost the economy like a tax cut.
While the current debate centers on college costs, long-run numbers show how debt has spread through the economy. The U.S. relies on consumer spending for growth -– but it hasn’t been delivering significantly higher wages. Household borrowing has filled the gap, with low interest rates making it affordable.
And that’s not unique to America. Steadily growing debts of one kind or another are weighing on economies all over the world.
The idea that debt can grow faster than the ability to repay, until it unbalances a society, was well understood thousands of years ago, according to Michael Hudson, an economist and historian.
Last year Hudson published “And Forgive Them Their Debts,’’ a study of the ancient Near East where the tradition known as a “jubilee” -- wiping the debt-slate clean -- has its roots. He describes how the practice spread through civilizations including Sumer and Babylon, and came to play an important role in the Bible and Jewish law.
Rulers weren’t motivated by charity, Hudson says. They were being pragmatic -- trying to make sure that citizens could meet their own needs and contribute to public projects, instead of just laboring to pay creditors. And it worked, he says. “Societies that canceled the debts enjoyed stable growth for thousands of years.’’
But it’s coming from other directions too. In October, one of the Trump administration’s senior student-loan officials resigned, calling for wholesale write-offs and describing the American way of paying for higher education as “nuts.’’

While the current debate centers on college costs, long-run numbers show how debt has spread through the economy. The U.S. relies on consumer spending for growth -– but it hasn’t been delivering significantly higher wages. Household borrowing has filled the gap, with low interest rates making it affordable.
And that’s not unique to America. Steadily growing debts of one kind or another are weighing on economies all over the world.
The idea that debt can grow faster than the ability to repay, until it unbalances a society, was well understood thousands of years ago, according to Michael Hudson, an economist and historian.
Last year Hudson published “And Forgive Them Their Debts,” a study of the ancient Near East where the tradition known as a “jubilee” -- wiping the debt-slate clean -- has its roots. He describes how the practice spread through civilizations including Sumer and Babylon, and came to play an important role in the Bible and Jewish law.
Rulers weren’t motivated by charity, Hudson says. They were being pragmatic -- trying to make sure that citizens could meet their own needs and contribute to public projects, instead of just laboring to pay creditors. And it worked, he says. “Societies that canceled the debts enjoyed stable growth for thousands of years.”
by Ben Holland, Bloomberg | Read more:
Image: NY Fed Consumer Credit Panel / Equifax
[ed. Instead of government continuing to pass tax cuts for corporations and wealthy shareholders (in hopes of encouraging trickle-down benefits, which never materialize), money that's now being used to service debt payments and bankruptcies could be used instead to stimulate consumer spending (which corporations and shareholders would benefit from as well). Basically, trickle-up economics. See also: The historical case for abolishing billionaires (The Guardian).]
Congress Learns Pentagon Wasted $1 Trillion, Promptly Gives It Bigger Budget
Here’s a fun little thought experiment: Imagine a “big government” bureaucracy embarked on a wildly ambitious project of social engineering — only to discover, almost immediately, that it had little hope of meeting its stated objectives. Reluctant to admit defeat, or to jeopardize funding for its endeavors, this federal agency proceeded to deliberately mislead the public about how badly its project was going, and the likelihood of its ultimate success. Over an 18-year period, these pointy-headed bureaucrats and their allied elected officials conspired to shovel roughly $1 trillion of taxpayer money into an initiative that exacerbated the very problems it purported to solve — and got 2,300 Americans killed in the process!
Now imagine that a major newspaper published a bombshell report meticulously documenting this bureaucracy’s conscious efforts to mislead the American people whom it claimed to serve, so as to ensure that it could carry on squandering our blood and treasure with impunity.
Would Congress reward that bureaucracy with a $22 billion budget increase hours later, with self-identified “small government” conservatives leading the call?
This week, we learned that the answer is “of course.”
On Monday, the Washington Post published “The Afghanistan Papers,” thousands of pages of war documents that our government did not want us to see, and which the paper only secured after a protracted legal battle. Those documents include nearly 2,000 pages of notes from interviews with generals, diplomats, and other officials who played a central role in waging America’s longest war. (...)
This campaign of deceit facilitated mindless misuses of public funds. The Defense Department was not directly responsible for all of this waste. And America’s civilian leadership bears primary responsibility for the war itself. But in routinely misrepresenting the state of the conflict, and lobbying for higher levels of funding for both military and aid operations in Afghanistan, the Pentagon is complicit in boondoggles like these:
During the peak of the fighting, from 2009 to 2012, U.S. lawmakers and military commanders believed the more they spent on schools, bridges, canals and other civil-works projects, the faster security would improve. Aid workers told government interviewers it was a colossal misjudgment, akin to pumping kerosene on a dying campfire just to keep the flame alive.
One unnamed executive with the U.S. Agency for International Development (USAID) guessed that 90 percent of what they spent was overkill: “We lost objectivity. We were given money, told to spend it and we did, without reason.” (...)

But no detail from our misadventure in Afghanistan may do more to validate the conservative critique of “big government” excess than this one: Before the U.S. invasion, the Taliban had almost completely eradicated the opium trade in Afghanistan. After 18 years of war — and $9 billion in U.S. funding for anti-opium programs in the country — the Taliban remains in power, only now, it presides over a country that supplies 80 percent of the world’s illicit opium.
The Washington Post and New York Times aired all this dirty laundry on Monday morning. Hours later, Congress’s Armed Services Committee released a bipartisan draft of the 2020 National Defense Authorization Act (NDAA) that would give the Pentagon an additional $22 billion to play with next year, bringing its annual budget to $738 billion. Before Donald Trump took office, the U.S. was already spending more on our military than China, Russia, Saudi Arabia, India, France, the United Kingdom, and Japan spend on theirs, combined. The Defense Department’s budget is now $130 billion larger than it was the day Trump was sworn in. Meanwhile, nearly 2 million Americans are still living in places that do not have running water.
by Eric Levitz, The Cut | Read more:
Image: Joe Raedle/Getty Images
[ed. See also: Afghanistan papers detail US dysfunction: 'We did not know what we were doing'; and Afghanistan agony is a product of political self-delusion – and public indifference (The Guardian).]
Thursday, December 12, 2019
Music for World, No Strings Attached
Zheng'an, a small county in Guizhou province in Southwest China, is the world's largest guitar manufacturing center, from which locals export the instrument globally.
In all, 56 producers of guitar and related spare parts are based out of the guitar industrial park in Zheng'an. The county has developed 34 independent brands, according to the local government.
Zunyi Shenqu Musical Instruments Manufacturing Co Ltd, a major guitar retailer in the county, has developed three independent brands. It also does processing work for some top global music instrument brands such as Japan's Ibanez and US' Fender. So far, it has exported its products to the United States, Brazil, Spain, Germany, and Japan.
"The main consumer groups who buy guitars are schools, art training centers and guitar lovers. Compared with overseas markets, we have relatively lower costs and prices. The prices of average products range from 1,500 yuan ($214) to 5,000 yuan each. Higher-end guitars can cost over 10,000 yuan each, depending on their quality, features and craftsmanship," said Zheng Chuanjiu, general manager of Zunyi Shenqu. (...)
Last year, Zheng'an produced more than 6 million guitars, and the output value reached 6 billion yuan. More than 3.6 million guitars were exported to more than 30 countries and regions globally, including the US and Brazil, and the export value accounted for nearly one-third of the export value of guitars made in China.
With nearly 15,000 employees working there, the industrial park in Zheng'an expects sales to exceed 7 million units by the end of this year, with an output value of more than 7 billion yuan, according to the local government.
by Zhu Wenqian in Beijing and Yang Jun in Guiyang, China Daily | Read more:
Image: Zhao Yongzhang/For China Daily
The Age of Instagram Face

“I think ninety-five per cent of the most-followed people on Instagram use FaceTune, easily,” Smith told me. “And I would say that ninety-five per cent of these people have also had some sort of cosmetic procedure. You can see things getting trendy—like, everyone’s getting brow lifts via Botox now. Kylie Jenner didn’t used to have that sort of space around her eyelids, but now she does.”
Twenty years ago, plastic surgery was a fairly dramatic intervention: expensive, invasive, permanent, and, often, risky. But, in 2002, the Food and Drug Administration approved Botox for use in preventing wrinkles; a few years later, it approved hyaluronic-acid fillers, such as JuvĂ©derm and Restylane, which at first filled in fine lines and wrinkles and now can be used to restructure jawlines, noses, and cheeks. These procedures last for six months to a year and aren’t nearly as expensive as surgery. (The average price per syringe of filler is six hundred and eighty-three dollars.) You can go get Botox and then head right back to the office.
A class of celebrity plastic surgeons has emerged on Instagram, posting time-lapse videos of injection procedures and before-and-after photos, which receive hundreds of thousands of views and likes. According to the American Society of Plastic Surgeons, Americans received more than seven million neurotoxin injections in 2018, and more than two and a half million filler injections. That year, Americans spent $16.5 billion on cosmetic surgery; ninety-two per cent of these procedures were performed on women. Thanks to injectables, cosmetic procedures are no longer just for people who want huge changes, or who are deep in battle with the aging process—they’re for millennials, or even, in rarefied cases, members of Gen Z. Kylie Jenner, who was born in 1997, spoke on her reality-TV show “Life of Kylie” about wanting to get lip fillers after a boy commented on her small lips when she was fifteen.
Ideals of female beauty that can only be met through painful processes of physical manipulation have always been with us, from tiny feet in imperial China to wasp waists in nineteenth-century Europe. But contemporary systems of continual visual self-broadcasting—reality TV, social media—have created new disciplines of continual visual self-improvement. Social media has supercharged the propensity to regard one’s personal identity as a potential source of profit—and, especially for young women, to regard one’s body this way, too. In October, Instagram announced that it would be removing “all effects associated with plastic surgery” from its filter arsenal, but this appears to mean all effects explicitly associated with plastic surgery, such as the ones called “Plastica” and “Fix Me.” Filters that give you Instagram Face will remain. For those born with assets—natural assets, capital assets, or both—it can seem sensible, even automatic, to think of your body the way that a McKinsey consultant would think about a corporation: identify underperforming sectors and remake them, discard whatever doesn’t increase profits and reorient the business toward whatever does. (...)
There was something strange, I said, about the racial aspect of Instagram Face—it was as if the algorithmic tendency to flatten everything into a composite of greatest hits had resulted in a beauty ideal that favored white women capable of manufacturing a look of rootless exoticism. “Absolutely,” Smith said. “We’re talking an overly tan skin tone, a South Asian influence with the brows and eye shape, an African-American influence with the lips, a Caucasian influence with the nose, a cheek structure that is predominantly Native American and Middle Eastern.” Did Smith think that Instagram Face was actually making people look better? He did. “People are absolutely getting prettier,” he said. “The world is so visual right now, and it’s only getting more visual, and people want to upgrade the way they relate to it.”
This was an optimistic way of looking at the situation. I told Smith that I couldn’t shake the feeling that technology is rewriting our bodies to correspond to its own interests—rearranging our faces according to whatever increases engagement and likes. “Don’t you think it’s scary to imagine people doing this forever?” I asked.
“Well, yeah, it’s obviously terrifying,” he said.
by Jia Tolentino, New Yorker | Read more:
Image: via