Sunday, July 14, 2019

In Defence of Antidepressants

I was first prescribed antidepressants in 2000. Ever since, I have been on and off these drugs, mostly because the idea of taking them made me uncomfortable. It was a mixture of guilt, probably not unlike the guilt some athletes must feel for taking a prohibited doping substance; shame for needing a pill that had such a profound impact on my behaviour; and frustration with the recurrent episodes of depression that would bring me back to the antidepressants I would then quickly abandon.

I broke this cycle when my daughters were born and I realised that it would be irresponsible to stop treatment because being a good father meant having a stable mood. It was a purely pragmatic decision, made without resolving the existential issues that antidepressants had raised for me before. That being the case, I do not write with the fervour of the newly converted, although sometimes I speculate about how much smoother my life would have been had I decided much sooner to stick to the antidepressants.

Depression is widespread. According to the World Health Organization, in 2015 depression affected more than 300 million people, or 5.1 per cent of females and 3.6 per cent of males, worldwide. It was the single largest contributor to global disability, and the major cause of the nearly 800,000 deaths by suicide recorded every year – suicide being the second leading cause of death among 15- to 29-year-olds.

Despite these statistics, depression remains misunderstood by the public at large and is, it seems, best described by those who have lived it. The novelist William Styron wrote in his memoir Darkness Visible (1990) that: ‘For those who have dwelt in depression’s dark wood, and known its inexplicable agony, their return from the abyss is not unlike the ascent of the poet, trudging upward and upward out of hell’s black depths.’ Andrew Solomon’s memoir The Noonday Demon (2001) is a useful tome and the book on depression for the general reader. ‘It is the aloneness within us made manifest,’ he writes of the state, ‘and it destroys not only connection to others but also the ability to be peacefully alone with oneself.’

For those outside the experience, part of the confusion comes from the association of the disease with melancholia and sadness, feelings we all have experienced. Malignant sadness, or depression, is something else entirely, and it takes a leap of faith to accept that too much of something can become something completely other. (...)

It is obvious that the discomfort I once felt over taking antidepressants echoed a lingering, deeply ideological societal mistrust. Articles in the consumer press continue to feed that mistrust. The benefit is ‘mostly modest’, a flawed analysis in The New York Times told us in 2018. A widely shared YouTube video asked whether the meds work at all. And even an essay on Aeon this year claims: ‘Depression is a very complex disorder and we simply have no good evidence that antidepressants help sufferers to improve.’

The message is amplified by an abundance of poor information circulating online about antidepressants in an age of echo chambers and rising irrationality. Although hard to measure, the end result is probably tragic since the ideology against antidepressants keeps those in pain from seeking and sticking to the best available treatment, as once happened to me. Although I am a research scientist, I work on topics unrelated to brain diseases, and my research is not funded by the ‘pharma industry’ – the disclaimer feels silly but, trust me, it is needed. I write here mainly as a citizen interested in this topic. I take for granted that a world without depression would be a better place, and that finding a cure for this disease is a noble pursuit. Without a cure, the best treatment available is better than none at all. (...)

One reason for the recent surge of skepticism is a gigantic meta-analysis by the psychiatrist Andrea Cipriani at the University of Oxford and colleagues, published in The Lancet in 2018. While the earlier study by Kirsch had included 5,133 participants, Fournier’s had 718, and another study, by Janus Christian Jakobsen in Denmark in 2017, had 27,422, Cipriani and colleagues analysed data from 116,477 people – or 3.5 times more participants than in the three previous studies combined.

Sample size alone is not sufficient to ensure quality, but the authors were careful to select only double-blind trials and did their best to include unpublished information from drug manufacturers to minimise publication bias. They found no evidence of bias due to funding by the pharma industry, and also included head-to-head comparisons between drugs (which minimised blind-breaking). They concluded that ‘all antidepressants included in the meta-analysis were more efficacious than placebo in adults with MDD, and the summary effect sizes were mostly modest’. The results are summarised by a statistic, the odds ratio (OR), which quantifies the association between health improvement and the action of the antidepressant. If the OR is 1, then antidepressants are irrelevant; for ORs above 1, a positive effect is detected. For 18 of the 21 antidepressants, the ORs they found ranged from 1.51 to 2.13. These results have been widely mischaracterised in the press and described as weak.

Odds ratios are not intuitive to interpret, but they can be converted to percentages reflecting the increase in the odds of health improvement attributable to the antidepressant, which in this study ranged from 51 per cent to 113 per cent. These percentage increases are relevant, particularly taking into account the incidence of the disease (20 per cent of people are likely to be affected by depression at some stage of their lives).
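
[ed. To make the conversion concrete, here is a minimal sketch of the arithmetic, assuming the reported percentages refer to the increase in the odds of improvement relative to placebo:

```python
# Convert an odds ratio (OR) into the percentage increase in the odds of
# improvement relative to placebo. An OR of exactly 1 means no detectable effect.
def or_to_percent_increase(odds_ratio: float) -> float:
    return (odds_ratio - 1.0) * 100.0

# The OR range reported for 18 of the 21 antidepressants in Cipriani et al.
for or_value in (1.51, 2.13):
    print(f"OR {or_value} -> {or_to_percent_increase(or_value):.0f}% increase in odds")
# OR 1.51 -> 51% increase in odds
# OR 2.13 -> 113% increase in odds
```

Bear in mind that an increase in odds is not identical to an increase in the probability of improvement; the two diverge as improvement becomes common, which is one reason ORs are so easily misread in press coverage.]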

For comparison, please note the uncontroversial finding that taking aspirin reduces the risk of stroke – its associated OR is ‘only’ 1.4, but no one describes it as weak or has raised doubts about this intervention. It would be unscientific to describe the work of Cipriani and colleagues as the definitive word on the topic, but it’s the best study we have so far. The message is clear: antidepressants are better than placebo; they do work, although the effects are mostly modest, and some work better than others. This paper was an important confirmation at a time of reproducibility crises in so many scientific fields. We don’t have to look too far: a major study published this spring failed to confirm, for any of the 18 genes it reanalysed, the associations with MDD that earlier work had proposed. (...)

The human body contains at least 12,000 metabolites. On the day of his final exam, a biochemistry major might know a few hundred, but most of us will be able to name only a few dozen, with a clear bias for the metabolites known to influence behaviour. We will immediately associate adrenalin, cortisol, testosterone, oestrogen, oxytocin and dopamine with stereotypical behaviours and personality types, but what about serotonin? The molecule is certainly no obscure metabolite. The French novelist Michel Houellebecq named his latest novel Sérotonine (2019). But would you associate the ‘happy hormone’, as serotonin is often described, with the formation and maintenance of social hierarchies and the impetus to fight observed across the animal kingdom, from lobsters to primates? Indeed, since SSRIs have been found to influence our moral decision making, naming serotonin the ‘happy hormone’ appears to be a mistake. Apart from its role in mood balance, this neurotransmitter is involved in appetite, emotions, sleep-wake cycles, and motor, cognitive and autonomic functions. In fact, most of the body’s serotonin is produced not in the brain but in the gut.

We simply do not have a consensus explanation for how SSRIs/SNRIs work in depression, or for how to link these neurotransmitters to the environmental stressors, genetic factors, and immunologic and endocrine responses proposed to contribute to the disease. It is also clear that restoring the chemical balance of monoamines in the brain with a pill, which takes only minutes or hours, is insufficient to immediately produce therapeutic effects, which take several weeks. Indeed, without a complete picture of the mechanism of depression, it is not surprising that the available drug treatments are not fully effective. In one study, thousands of MDD patients were encouraged to move to a different treatment whenever the previous one failed to bring remission; even after four consecutive treatments, only about 67 per cent of those taking antidepressants had achieved clinical remission. Thus, there is a large group of patients who do not respond to SSRIs/SNRIs, which raises doubts about whether the monoamine hypothesis can explain depression in full.

Other ideas have emerged. One line of thought focuses on the neurotransmitters glutamate (involved in cognition and emotion) and GABA (involved in inhibition), among others. One of the most exciting findings in the field is the clinical efficacy of ketamine, which targets glutamate neurotransmission and produces immediate effects in patients refractory to SSRI/SNRI treatments. Along with the monoamine hypothesis, most of these newer approaches are somehow related to the notion of neuronal plasticity, the ability of the nervous system to change, both functionally and structurally, in response to experience and injury, which can take some time to occur. Thus, it could be that decreased levels of monoamines are not the real cause of depression, perhaps not even a necessary condition for it. The data certainly suggest that there might be better targets to be found, and that the pharmacological approach has to become progressively more tailored.

That said, the temptation to dismiss the monoamine hypothesis to score points against antidepressants shows a lack of understanding of how medicine has worked for most of its history; imperfect but useful therapies have been the rule, even as we refine our understanding of disease.

by Vasco M Barreto, Aeon |  Read more:
Image: Gabriele Diwald/Unsplash
[ed. See also: Cipriani on Antidepressants; and What to Make of New Positive NSI-189 Results? (Duck Soup/SSC).]

Saturday, July 13, 2019

What the Measles Epidemic Really Says About America

In two essays, “Illness as Metaphor” in 1978 and “AIDS and Its Metaphors” in 1988, the critic Susan Sontag observed that you can learn a lot about a society from the metaphors it uses to describe disease. She also suggested that disease itself can serve as a metaphor—a reflection of the society through which it travels. In other words, the way certain illnesses spread reveals something not just about a nation’s physiological health but also about its cultural and political health. For instance, AIDS would not have ravaged America as fully as it did without institutionalized homophobia, which inclined many Americans to see the disease as retribution for gay sex.

Now another virus is offering insights into the country’s psychic and civic condition. Two decades ago, measles was declared eliminated in the U.S. Yet in the first five months of this year, the Centers for Disease Control and Prevention recorded 1,000 cases—more than occurred from 2000 to 2010.

The straightforward explanation for measles’ return is that fewer Americans are receiving vaccines. Since the turn of the century, the share of American children under the age of 2 who go unvaccinated has quadrupled. But why are a growing number of American parents refusing vaccines—in the process welcoming back a disease that decades ago killed hundreds of people a year and hospitalized close to 50,000?

One answer is that contemporary America suffers from a dangerous lack of historical memory. Most of the parents who are today skipping or delaying their children’s combined measles, mumps, and rubella (MMR) vaccine don’t remember life with measles, much less that it used to kill more children than drowning does today. Nor do they recall how other diseases stamped out by vaccines—most prominently smallpox and polio—took lives and disfigured bodies.

Our amnesia about vaccines is part of a broader forgetting. Prior generations of Americans understood the danger of zero-sum economic nationalism, for instance, because its results remained visible in their lifetimes. When Al Gore debated Ross Perot about NAFTA in 1993, he reminded the Texan businessman of the 1930 Smoot-Hawley Tariff Act, which raised tariffs on 20,000 foreign products—prompting other countries to retaliate, deepening the Great Depression, and helping to elect Adolf Hitler. But fewer and fewer people remember the last global trade war. Similarly, as memories of Nazism fade across Europe and the United States, anti-Semitism is rising. Technology may improve; science may advance. But the fading of lessons that once seemed obvious should give pause to those who believe history naturally bends toward progress.

Declining vaccination rates not only reflect a great forgetting; they also reveal a population that suffers from overconfidence in its own amateur knowledge. In her book Calling the Shots: Why Parents Reject Vaccines, the University of Colorado at Denver’s Jennifer Reich notes that starting in the 1970s, alternative-health movements “repositioned expertise as residing within the individual.” This ethos has grown dramatically in the internet age, so much so that “in arenas as diverse as medicine, mental health, law, education, business, and food, self-help or do-it-yourself movements encourage individuals to reject expert advice or follow it selectively.” Autodidacticism can be valuable. But it’s one thing to Google a food to see whether it’s healthy. It’s quite another to dismiss decades of studies on the benefits of vaccines because you’ve watched a couple of YouTube videos. In an interview, Reich told me that some anti-vaccine activists describe themselves as “researchers,” thus equating their scouring of the internet on behalf of their families with the work of scientists who publish in peer-reviewed journals.

In many ways, the post-1960s emphasis on autonomy and personal choice has been liberating. But it can threaten public health. Considered solely in terms of the benefits to one’s own child, the case for vaccinating against measles may not be obvious. Yes, the vaccine poses little risk to healthy children, but measles isn’t necessarily that dangerous to them either. The problem is that for others in society—such as children with a compromised immune system—measles may be deadly. By vaccinating their own children, and thus ensuring that they don’t spread the disease, parents contribute to the “herd immunity” that protects the vulnerable. But this requires thinking more about the collective and less about one’s own child. And this mentality is growing rarer in an era of what Reich calls “individualist parenting,” in which well-off parents spend “immense time and energy strategizing how to keep their children healthy while often ignoring the larger, harder-to-solve questions around them.”

Historical amnesia and individualism have contributed to a third cultural condition, one that is more obvious but also, perhaps, more central to measles’ return and at least as worrying for society overall: diminished trust in government. For earlier generations of Americans, faith in mass vaccines derived in large part from the campaign to eradicate polio, in the 1950s—a time when the country’s victory in World War II and the subsequent postwar boom had boosted the public’s belief in its leaders. This faith made it easy to convince Americans to accept the polio vaccine, and the vaccine’s success in turn boosted confidence in the officials who protected public health. So popular was the vaccine’s inventor, Jonas Salk, that in 1955 officials in New York offered to throw him a ticker-tape parade. (...)

Yet it’s not only conservatives who translate their suspicion of government into suspicion of vaccines. Many liberals distrust the large drug companies that both produce vaccines and help fund the Food and Drug Administration, which is supposed to regulate them. The former Green Party presidential candidate Jill Stein has suggested that “widespread distrust” of what she describes as the medical-industrial complex is understandable because “regulatory agencies are routinely packed with corporate lobbyists and CEOs.” The environmental activist Robert F. Kennedy Jr. claims that thimerosal, a preservative formerly used in some vaccines, harms children. Bright-blue counties in Northern California, Washington State, and Oregon have some of the lowest vaccination rates in the country.

Although polls suggest that conservatives are slightly less accepting of vaccines than liberals are, a 2014 study found that distrust of government was correlated with distrust of vaccines among both Republicans and Democrats. Indeed, the best predictor of someone’s view of vaccines is not their political ideology, but their trust in government and their openness to conspiracy theories.

It’s not surprising, therefore, that a plunge in the percentage of Americans who trust Washington to do the right thing most or all of the time—which hovered around 40 percent at the turn of the century and since the 2008 financial crisis has regularly dipped below 20 percent—has coincided with a decline in vaccination rates. In 2001, 0.3 percent of American toddlers had received no vaccinations. By 2017, that figure had jumped more than fourfold. Studies also show a marked uptick in families requesting philosophical exemptions from vaccines, which are permitted in 16 states.

by Peter Beinart, The Atlantic |  Read more:
Image: Edmon De Haro

Where Are All the Bob Ross Paintings?


Bob Ross painted more than 1,000 landscapes for his television show — so why are they so hard to find? Solving one of the internet’s favorite little mysteries. (NY Times)

Thursday, July 11, 2019

Oscar Peterson Piano Lesson

The Fed Satisfies Fewer and Fewer These Days

Give financial markets what they wish for, and they will ask for more. Give them more than they wish for and they’ll still ask for more. Just ask Fed chairman Jerome Powell.

His remarks to Congress on Wednesday were more dovish than expected, but rather than strengthening the Federal Reserve’s effectiveness and easing the political and social pressures on it, this could aggravate the risk of a lose-lose outcome for the world’s most powerful central bank.

Powell’s remarks did more than solidify expectations that the Fed would cut interest rates by 25 basis points when the Federal Open Market Committee meets later this month. They also empowered a growing number of market participants to call for, and expect, a 50-basis-point cut. And all this at a time when the unemployment rate is at a five-decade low, financial conditions are the loosest in over two decades, market interest rates are at historically low levels, stock prices are elevated and, to use Powell’s own words, the U.S. economy is in a good place.

Given the Fed’s systemically important role, the institution’s more dovish policy guidance is seen, correctly, as also opening the way for looser monetary policies by many other central banks around the world. With that, investors’ initial reaction was to trigger what Bloomberg labeled a “buy everything” rally.

There are downsides to this so-called Fed put: The more it is used, the higher the risk of unfortunate spillover effects for economic, financial, political, social and institutional conditions.

What is liked by financial markets is unlikely to translate into a material improvement in either of the two variables that the Fed pursues under its mandated objectives – low unemployment and manageable inflation – and the further decoupling of elevated asset prices from fundamentals increases the risk of financial instability down the road.

All this is intimately related to how perceptions of the put have evolved over the last 10 years. The initial 2008-09 “Bernanke put” under the former chairman was seen as aimed at normalizing what were, at that time, highly dysfunctional markets that threatened severe damage to the economy. The Yellen put that followed was much more about buying time for the economy in the hope that the deeply polarized politics would improve and enable a more comprehensive policy response involving a lot more than exceptional monetary measures.

This has now morphed into the current Powell put, which increasingly fuels perceptions that the Fed is now held captive by markets, having to tilt ever more dovish regardless of financial conditions that are already ultra loose, a solid domestic economy and limited evidence that the Fed is able to materially improve economic conditions at this stage of the economic cycle.

by Mohamed A. El-Erian, Bloomberg | Read more:
Image: Brendan Smialowski/AFP, via Getty Images

Kimbra

Water Is For Fighting

Water history in the west is a full-service crash-course in all of 19th- and 20th-century American capitalism’s very worst tendencies. We encountered a scarce and necessary resource, understood exactly how scarce and necessary it was, then proceeded to kill each other for the chance to turn as much of it as possible into profit before all of it was gone. Now, in 2019, it’s mostly all gone and the west is burning.

The prophet of water in the west, the man who foresaw nearly every issue we’ve struggled to deal with in the last 150 years, is John Wesley Powell. What’s the best way to describe Powell? Well, to start, he was an absolute madman. Imagine a one-armed Civil War veteran turned college professor who, in 1869, decides to take four boats down the Green River, into the Colorado, and through the Grand Canyon. At the time, no one had even come close to achieving this feat. Some white settlers had seen or boated various parts of the river—a few had witnessed the Grand Canyon from the rim—and native communities had lived along some parts of Powell’s route. But, to the best of anyone’s knowledge at the time, not even the natives had attempted this entire trip by boat. The rivers were truly unknown and unknowable to Powell. He could have encountered a waterfall as extreme as Niagara Falls at any point along the way, and died instantly. And, to compound this uncertainty, Powell undertook the journey in four wooden boats, never having run a rapid before, with an entire crew of men who had also never run a rapid before.

Somehow, Powell and most of his crew survived the three-month journey. Those who didn’t survive couldn’t exactly blame Powell for their fate. At the top of a truly massive rapid in the Grand Canyon, three men abandoned the expedition, thinking they’d have better survival odds hiking than trying to run these rapids in their tender wooden boats. Only two days later, Powell and the remaining crew finished their journey, having run the intimidating rapids without issue. The three who’d abandoned them had already harassed a band of natives near the rim of the Grand Canyon and been killed.

The journey is what Powell is famous for, but his later reports are what make him a prophet of water in the west. (...)

But by the 1870s, westward expansion was in full swing. Powell saw the gradual changes in the names on maps—the area labeled “Great American Desert” moving farther west as speculators and railroad companies encouraged people to make a go of farming west of the 100th meridian. Areas that were once labeled “Great Desert” were increasingly re-dubbed “Great Plains,” and throughout the 1870s, (paid) scientists were pushing some absolutely insane theories to encourage people to move west. The most absurd of these may have been the “rain follows the plow” theory. Think Field of Dreams, but for farming. The idea was that once people started farming in dry regions, rain would naturally come. When you plowed soil, even very bad and seemingly un-arable soil, the plowing would just automatically release trapped moisture into the atmosphere. This was, of course, nonsense. At heart, the “great plains” and “rain follows the plow” were marketing slogans, an early instance of real estate developers rebranding a previously undesirable area to turn a profit.

Powell’s reports attempted to show just how hopeless much of this project was. The Homestead Acts were granting western lands (to people but also to corporations, and with very high rates of fraud) in 160-acre tracts. This tract size made sense in the east where irrigation wasn’t necessary. But 160 acres was too large a tract to productively irrigate, and too small a tract to use as unirrigated land in the west. And, Powell calculated, even if you put every ounce of freshwater in the western U.S. to work irrigating farmland, you would still only be able to produce crops from 1-3 percent of the available land. There just wasn’t enough water.

The problem was complex, and so was Powell’s solution. He pushed for a slow, orderly, well-researched expansion of irrigated agriculture using publicly-constructed dams to collect and store water. And, rather than the eastern U.S. method of granting water rights only to those who owned land that touched the river or stream that the water came from, Powell suggested employing a use-permit system to regulate and trade water rights. (The land-adjoining water rights system is called “riparianism,” while the use-based system in the west is called “prior appropriation.” Read the Wikipedia articles for these and you will have learned a good deal of what they’d teach you in a water law class at a top law school.) Finally, Powell recommended organizing our political boundaries to facilitate the communal use and cooperative regulation of water resources. That is, he thought state boundaries in the west should conform to watersheds. This would mean that state governments and residents would have purview over their entire water system and wouldn’t have to fight with other states over upstream or downstream uses.

Two of Powell’s recommendations were implemented, though likely not in the way he would have preferred. In the early- and mid-20th century, the U.S. embarked on an epic dam-building spree. But much of this building was driven less by need for water storage and electrical power than by competition for funding between two giant federal bureaucracies. Marc Reisner’s exhaustive and surprisingly thrilling retelling of the battle between the Bureau of Reclamation and the Army Corps of Engineers in Cadillac Desert is the authoritative history on this point. Reisner documents decades through which the agencies battled to outmaneuver each other, spending billions to build thousands of dams just to make sure the other agency didn’t build them first.

The beneficiaries of the dam-building extravaganza were largely agricultural interests in the surrounding areas. More dams than could possibly be justified meant relatively large water stores. A combination of the mythology around small American farmers and the reality of agribusiness lobbying power meant that agricultural interests could reap direct benefits from these federal projects, mostly through heavily subsidized water and power prices, and flood control. Far from Powell’s vision of watershed-based communities managing their water resources according to their local best interests, government and industry joined forces to turn Powell’s somewhat communitarian or at least localist vision into a capitalist suicide pact. This is not to say that all dams are bad or unnecessary—they are a valuable water management and power-generating tool. But a lot of dams that exist right now are both bad and unnecessary.

The western U.S. also adopted Powell’s recommendation for a water rights system. Most of the west now uses a prior appropriation system for allocating water rights rather than a riparian system. This means that water rights are divorced from land ownership: Anyone who can access water can make a claim to it, and those claims operate on a first come, first served basis. So if water sources dry up, the senior rights holders are the most protected, while newer users are left out to dry, both figuratively and literally.
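
[ed. A minimal sketch, with invented claim holders and numbers, of how a prior appropriation system allocates water in a shortage; senior claims are filled in full before junior claims see anything:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    holder: str
    year: int      # year the right was first put to beneficial use
    amount: float  # acre-feet claimed per season

def allocate(claims: list[Claim], supply: float) -> dict[str, float]:
    """Fill claims strictly in order of seniority until supply runs out."""
    allocations = {}
    for claim in sorted(claims, key=lambda c: c.year):  # most senior first
        granted = min(claim.amount, supply)
        allocations[claim.holder] = granted
        supply -= granted
    return allocations

claims = [Claim("rancher", 1875, 50.0),
          Claim("city", 1905, 80.0),
          Claim("new farm", 1962, 40.0)]
print(allocate(claims, supply=100.0))
# {'rancher': 50.0, 'city': 50.0, 'new farm': 0.0}
```

In a dry year, the shortfall lands entirely on the most junior users, which is why "first in time, first in right" leaves newcomers out to dry.]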

Like dams, this is not an unreasonable system on its own. But it has combined with crafty capital and legal maneuvering to disastrous effect. First, to hold your water right, you have to use it. This means that nearly all water is going to get used, because there’s no formal protection (in the system as first conceived—things are slightly better now) for letting water run its natural course. Throughout the west, water systems have seen the complete loss of native wildlife populations and increasingly compromised water supplies, as water is taken out of rivers to irrigate land and returned to the water system in lower quantities and with higher salt content.

Second, water rights can be purchased and the water diverted far away. This is what allowed someone like William Mulholland in Los Angeles to buy up the entire Owens River and divert it to the San Fernando Valley, turning the Owens River Valley from “the Switzerland of California” into a desert, drying up Owens Lake, and massively increasing the value of Mulholland’s own properties in San Fernando. The battles over the Owens River are also covered in Reisner’s book. They involve spies within the Bureau of Reclamation, lies and grift and outright theft to obtain water rights, locals dynamiting the L.A. aqueduct, and Mulholland mustering an army of 600 gun-toting LAPD officers and transporting them hundreds of miles inland to protect his waterways. In case you’re wondering how all that turned out, the Owens River Valley is still a desert, nearly the entire river goes west via aqueduct, and Mulholland Drive is one of the fanciest streets in L.A.

It should come as no surprise that Powell’s vision of cautious and sustainable water use in the west was never fully realized. An additional part of the puzzle that Powell failed to foresee was the use of groundwater. Massive underground aquifers underlie much of the U.S., and the western states have managed to produce agriculture in an area marginally larger than Powell’s 1-3 percent estimate thanks to the use of that groundwater. The problem, of course, is that groundwater is largely a non-renewable resource. Depending on the type of aquifer, it might recharge slowly or not at all. And the coastal aquifers, like the one underlying Santa Barbara, are at risk of seawater intrusion if they’re drawn down too low.

And have we managed this non-renewable resource in a reasonable way to preserve it for as long as possible? Spoiler: We have not. In fact, officials in charge of regulating groundwater have always been well aware that it was a non-renewable resource, and decided to exhaust it anyway. The former head of the Colorado Water Conservation Board described their decision to use the state’s groundwater supply over the course of 25 to 50 years like this: “What are you going to do with all that water? Are you going to leave it in the ground? . . . Well, when we use it up, we’ll just have to get more water from somewhere else.” The New Mexico State Engineer openly admitted the same: “We made a conscious decision to mine out our share of the Ogallala (aquifer) in a period of twenty-five to forty years.” It has been more than 40 years since these programs started.

by Sparky Abraham, Current Affairs |  Read more:
Image: uncredited

Arturo Pacheco Lugo
via:

Mad: The Hilariously Sly Magazine Hated By the Stiff Set

The first reaction to the news that Mad magazine was ceasing publication of new content was, I confidently suspect, one of collective epiphany that Mad magazine was still publishing at all.

Though the magazine had existed for just shy of seven decades, its presence in a typical reader’s life is haltingly brief: from the end of elementary school to no later than the end of middle school. It pads the bridge between childhood and adolescence, and then it’s gone.

The magazine was always dependent on that cyclical arrangement; readers mature and the younger siblings inherit the subscriptions. But with digital media shaping more and more how humor is expressed and consumed, Mad came to be seen as something of a relic. Passing by grocery store magazine racks over the past decade, I don’t ever remember actually seeing issues displayed.

Mad has resigned itself to this reality, pivoting to something resembling an encyclopedia: publishing issues with archival material and saving contemporary material for year-end special editions. It’s a bit anticlimactic but nothing entirely new. A big part of Mad’s output has always been reprints and anthologies of its older work, of which there is an unfathomable amount and much of it is still valuable.

Mad’s role as juvenile ephemera has often caused its status as a legacy publication to be overlooked. It appeared only a year before Playboy and coincided with Esquire’s and New Yorker’s peaking influence. And like those titles, Mad‘s fingerprints are all over popular culture of the late 20th and early 21st century. It might even have more traces than any of them. (...)

The magazine was a creatively fertile platform for its artists, who innovated the comic book medium by simultaneously sending up its formulae and testing their limitations, then adapted it to the more sophisticated magazine form. Among Mad’s best early artists was Kurtzman’s high school classmate Will Elder. Elder’s precise style could imitate any comic to an uncanny degree and could pack a single panel with copious sight gags. The latter skill was on full display when he drew an entire story around the text of “The Raven” in issue nine.

As a magazine, Mad’s focus broadened beyond lampooning comic books. It poked fun at do-it-yourself assemblage guides, dating customs, sports, the Cold War, and social pretensions. One feature was “How to Be a Mad Non-Conformist.” “Ordinary conformists,” the piece goes, “waste their time reading banal best-sellers” and “sensational daily newspapers. Ordinary non-conformists go for childish science fiction” and “boring literary journals,” while “Mad non-conformists read The Roller Derby News, the pre-Civil War Congressional Record, old Tom Swift books, and back copies of Classified Telephone Directories.”

More impressive were the magazine’s parody advertisements, with faux-Rockwell paintings and earnest copy that could, if only for a few seconds, fool the inattentive reader into thinking “Sailem Floating Cigarettes” and “Crust Gum Paste” were genuine products.

Though Mad was more visual than verbal, its Jewish humor was an unmistakable element. Kurtzman filled its text with Yiddish-based wordplay—potrzebie, ganef, furshlugginer—that became a secret language for devoted readers. In addition, outside contributors included comedians Ernie Kovacs and Sid Caesar, the comedy duo Bob Elliott and Ray Goulding, and musical satirist Stan Freberg. It experimented with verse, advertising copy, and even CB radio jargon. “Mad was a puzzle of comedy,” Phil Proctor, cofounder of The Firesign Theatre, said. “You couldn’t take it all in in one reading, so you’d delve back in.”

by Chris R. Morgan, The American Conservative |  Read more:
Image: K. Vlahos

The Loss of Longing in the Age of Curated Reality

During the 2018 Christmas shopping season, a revealing blip appeared on the consumer radar when Payless ShoeSource surreptitiously opened a Beverly Hills boutique under the Italianized label Palessi. They invited a group of sixty select fashion “influencers” to attend the launch and give on-camera testimonials about the new line of designer shoes. (An influencer, if you are new to the term, is like the social media version of the cool kids in high school—the ones who taught us to listen to Depeche Mode and trade in our Velcro sneaks for Doc Martens; both groups have followers, but influencers can get paid for product name-dropping by advertisers, who stick like ticks on their posts.) The twist with Palessi was that the shoes were nothing more than Payless’s latest line of low-budget products. The social media sophisticates bestowed their enthusiastic blessings on what were, as the shoe company soon revealed, thirty-dollar faux-leather poseurs listed at a gargantuan markup. It was an egg-on-the-face prank that won a nod of approval from the broader media audience. Having already filed for Chapter 11 bankruptcy, Payless had nothing to lose.

To lick their wounds, some of the influencers went to New York City in February to attend Fashion Week, an event at which the fetish for designer wares is annually consecrated into a cult of the brand. It’s true that the spectacle isn’t for normal people per se. It’s for fashion culture itself. But as it fortifies its own image, Fashion Week grossly aestheticizes the fantasyland of desire in our social imaginary. And in come the influencers to set the trends a-trending with real-time Tweet-storming, Instagramming, and emoji-winking commentary on all the gaudy swank of live-streamed runway shows parading outfits that often climb north of $100,000. No one pretends we are going to buy this ridiculous stuff. It’s a strategic calculation to stoke consumer desire by provoking our sense of alienation from stylized satisfaction. Lend us your screens and the fantasy will be yours.

But all the hoopla was itself indirectly pranked by an off-runway product coming out of New York at the same time. A different sort of influencer, the Pushcart Prize–winning writer Melissa Broder, published an unusual personal reflection in the New York Times, “Life without Longing.” In the article, Broder relates how she came to realize that her adventures in search of stylized romantic love were at root a “yearning for yearning itself.” What had been driving her was the hope of “making meaning in this life” and sustaining “the sensation of a forward motion…a reason for being.” But when the “illusion” of finding erotic “completion” gave out, she found herself with a “spiritual longing…for some kind of eternal beauty or ineffable truth” that was “more nebulous, always just out of reach.”

Broder’s testimony reveals more than she may have realized. Although they seem synonymous, longing wants something different from what desire wants—and not just in the sphere of fashion or romantic love. Desire is the particularizing and possessive agenda of self-creation—the self in the mode of a performance aesthetic. Longing is the self’s yearning to be grounded in something irreducible to the object in front of it or the designs within it—the self in the mode of a storied aesthetic in which it is not the primary author and satisfaction is not its ultimate endgame. But the trouble today is that longing must vie with a state of affairs in which desire is shaped by those influences of commercial finery and technologically mediated fantasies that supervene on the very ways we sort out who and how we are in the world. Although desire appears to be that which is most our own, it tends to be cultivated in us and places us at a distance from the true experience of longing. Desire has become longing’s counterfeit.

It’s time to pull a Palessi and call desire’s bluff. To do that, we need to work our way through a formative paradox: The nature of desire is expansive and the nature of longing is restrictive, but longing is the better influencer in our authentication of identity and truth.

Anxiety of Influence

Influencer is the perfect word for what our advertising and marketing cultures have wanted to devise all along, and in an obsessively technological age their strategies are all too effective. The term owns up to the larger paradigm of commodification that shapes our relationships to commercial objects, ideas, and even ourselves. I needn’t rehearse the well-documented perils attendant upon our penchant for materialism and greed, digital dwelling, or device addiction, and all the spine-bending and psychological debts these accrue. Historian William Leach named all this the “culture of desire.” Political theorist Sheldon Wolin called it a “whirl” in which the world is “continuously redefined by contemporary science, technology, corporate capitalism, and its media.” One does not have to be glued to digital marketing or fashion trends or Internet porn to come under influencer sway. When a click-baiting signal, message notification, or neatly packaged podcast courses through the wires and pumps a little dopamine into our brains, or when our minds spin with the estimated 5,000 ads we take in daily (to the tune of nearly 200 billion marketing dollars in the United States), these are just the latest pointillist strokes of a deeper figuration of who we are and how we perform the “reality” that is curated for us.

Before terms like branding, targeting, and influencer entered our parlance, the keyword was propaganda. In the 1920s, Edward Bernays published a book by that name that famously began with this psychosocial observation:
The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country. We are governed, our minds are molded, our tastes formed, and our ideas suggested, largely by men we have never heard of.… It is they who pull the wires that control the public mind, who harness old social forces and contrive new ways to bind and guide the world.
As a descriptive account, Bernays’s point is not wrong. But one expects a discussion of ethics to follow. It doesn’t. What does, rather, is an earnest proto–Mad Men case for how advertising could “bind and guide” the demos in a helpful way by shouldering the burden of complicated life decisions. Unsurprisingly, the case was a slippery slope, and before long Bernays was envisioning how what he called “the engineering of consent” could infuse our (often unconscious) desire function with a faith in vigorous acquisition. Advertising could effectively become the “invisible government” and profitably relieve us of the duty of seeking Aristotle’s “good life” on the feeble basis of what our minds, tastes, and ideas might sort out on their own. Among his clients were General Electric, Procter & Gamble, the American Tobacco Company, CBS, and President Calvin Coolidge.

David Ogilvy solidified this vision in a cut-to-the-chase way when he built what would by 1964 be the Ogilvy & Mather advertising agency. You know their work if you’ve ever felt your heartstrings pulled by the likes of Dove, American Express, Merrill Lynch, or IBM. Today the firm is in eighty-three countries, with 132 offices, and Ogilvy’s strategies for engineering consent are lauded as “timeless in marketing” and well suited to “the new challenges of the era of Social Media.” Advertising, he declared, “is a message for a single purpose: to sell.” How to do this? Make the product irresistibly interesting by using customers’ language, “the language in which they think.” That is another way of saying that advertising’s goal is to win over the inner grammar of our minds, tastes, and ideas. Apple, for example, as blogger and Ogilvy fan Camila Villafañe puts it, “knows how to whisper their beliefs into the ears of their audience.… Apple’s positioning strategy focuses primarily on emotions and the consumer’s lifestyle, their imagination, passions, dreams, hopes, aspirations.”

Business school students today learn how to whisper on the basis of the “integrative marketing” model of consumption outlined by George Belch and Michael Belch in their textbook Advertising and Promotion (1997). The science identifies a sequence of psychological stages in the consumer’s makeup that advertising can appeal to and catalyze on its own terms: motivation, perception, attitude formation, integration, and learning. Integration is the moment of a “purchase decision,” and presumably “learning” involves realizing that I will have a more integrated life if I purchase more and more.

The integrative marketing strategy has a sincerity about it, almost like a vocational calling. Brian Martin, CEO of Brand Connections, has implored his sector’s leaders to invest more wisely by serving consumers’ aspirations to be cared for and connected with others, their desires to “feel that they matter” and “believe there is a higher purpose.” Martin lists American Express, Lexus, Rolex, Starbucks, Twitter, and Facebook as brands that help us integrate our personhood. Bernays would be impressed. Integrative marketing plays into our core existential project and what has become our late-modern inclination to, according to Buddhist scholar David Loy, “make ourselves feel more real by reorganizing the whole world until we can see our own image everywhere, reflected in the ‘resources’ with which we try to manipulate and secure the material conditions of our existence.”

Wanting to feel more real, I think Loy would agree, is not the problem. Rather, it’s the illusion of “making” this be so—securing it on our terms and giving it the bottom-line sheen of goods and services. We are just so good at making things. Why not the self? Why not the world? Why not the fundamental truth of both, wrought in the self’s aspirational image? It’s almost irresistible. But making assumes, among other things, an outcome-based calculus and narrowly utilitarian means. What if the real conditions of reality exceeded the reach of human production, and the real life of purpose could not be contained on a grid? What if the inconclusive, ever-unfolding scope of meaning sounded in our ears or flashed its own faint figure beyond the territory of self-imaging? What then would become of desire?

by Christopher Yates, Hedgehog Review |  Read more:
Image: Plato’s Cave, by Cveto Vidovic

Wednesday, July 10, 2019

It Ain't Innovation if No One Wants To Buy What You're Selling

In case you missed it, last month Gibson, the famed guitar company, filed for bankruptcy. Matt LeMay has a fascinating Medium post up, well worth reading, claiming that Gibson's failure is a "cautionary tale about innovation." He compares what Gibson's management did over the past few years to another big name in guitars: Fender. And finds quite a telling story in the contrast.

Specifically, he notes that Gibson doubled down on "innovation" and trying to come up with something new -- almost none of which really seemed to catch on, while more or less ignoring the core product. Meanwhile, Fender took a step back and looked at what the data showed concerning what its existing customers wanted, and realized that it wasn't serving the customer as well as it could. LeMay points to a Forbes interview with Fender CEO, Andy Mooney, where he explains:
“About two years ago we did a lot of research about new guitar buyers. We were hungry for data and there wasn’t much available. We found that 45% of all the guitars we sell every year go to first-time players. That was much higher than we imagined. Ninety percent of those first-time players abandoned the instrument in the first 12 months — if not the first 90 days — but the 10% that didn’t tended to commit to the instrument for life and own multiple guitars and multiple amps. 
We also found that 50% of new guitar buyers were women and that their tendency was to buy online rather than in a brick and mortar store because the intimidation factor in a brick and mortar store was rather high. 
The last thing we found was that new buyers spend four times as much on lessons as they do on equipment. So that shaped a number of things. It shaped the commitment we made to Fender Play because we felt there was an independent business opportunity available to us that we’d never considered before because the trend in learning was moving online. We also found we needed to communicate more to the female audience in terms of the artists we connect with, in terms of using women in our imagery and thinking generally about the web.”
The end result is two very different approaches to innovation. LeMay points out that this is perfectly demonstrated in what you see when you go to each company's website:
A cursory glance at Fender’s website tells you a lot about how the company has implemented their findings: pictures of women playing their instruments dominate, and the “Fender Play” platform for learning how to play guitar is given equal billing with the guitars themselves. (Gibson’s website, on the other hand, features a picture of Slash with the headline “global brand ambassador” — a noxious and deeply company-centric piece of marketing jargon if ever there was one.)
It's a really good point, though I think it's slightly misplaced to argue that the problem was Gibson's focus on "innovation." The problem is Gibson's focus on something new and shiny without paying enough attention to what people actually wanted. If you've done anything in product development ever, you've probably heard the famous (and probably apocryphal) Henry Ford quote:
“If I had asked people what they wanted, they would have said faster horses.”
This is often deeply embedded in the minds of people who are quite sure they're coming up with the next great thing. And it's rarely actually true. There are exceptions, of course, but they are really few and far between. True innovation tends to come from better understanding what people actually want to accomplish and then helping them better do that. Sometimes it's coming up with something new. Sometimes it's coming up with a new way to sell. Or a more convenient way to use something. Or a better business model. Or a better way to educate. There are all sorts of innovations.

Indeed, digging deep into the Techdirt archives, I'm reminded of the debates we used to have about the difference between invention and innovation. Invention is coming up with something new. Innovation is successfully bringing something to a market that wants it. Sometimes the processes overlap, but not always. But, as we've pointed out (in the context of debates over patents), it's usually the innovation (successfully bringing something to market in a way that people want) that's much more important in the grand scheme of things than invention (just making something new).

It seems clear from looking at the approaches that Gibson and Fender each took that one focused on true innovation: figuring out a better way to solve the needs of customers. The other used the falsely promoted definition of innovation -- the one that is more synonymous with just "coming up with something completely new."

by Mike Masnick, TechDirt |  Read more:
Image: Mark Rogan
[ed. From a year ago, still relevant.]

Tuesday, July 9, 2019

Nossa Alma Canta Trio

Robespierre’s Kitchen

Editor’s note: This post was originally written both to earnestly respond to a Wired article and to make fun of a much-derided Bret Stephens New York Times column. But the pages that New York Times column were printed on are now crumpled up in the bottom of a trash can and eventually that trash can and the things contained within it, like the street on which it resides and the era in which it exists, will all become one-thousandth an inch of sediment for future alien archaeologists to discover. But the central point of this post will still be true: no one should force anyone else to eat mayonnaise.

I was walking through a train station reading Bret Stephens’ latest column when I spotted a very famous celebrity whose work I admire. He greeted me with condolences: “Sorry to hear about the mayonnaise.”

Had my experience already become part of the public conversation? Earlier that day I had ordered a sandwich from a fine and respected NY eatery whose food I admire. I asked the waiter to “hold the mayo,” but when my club sandwich arrived the turkey was blanketed with the appalling stuff. I sent it back. The waiter apologized and a few minutes later brought me a pristine sandwich sans mayo.

I had barely time to swallow my first bite before I heard my fellow diners describing me as “a fucking idiot,” “the mayor of clowntown,” and “a total fart factory.” Their reactions were corroborated by my own sister, writing her hômage to mayo in Wired, who suggested I was a “hypocrite and a coward.” As the insults piled up into the hundreds, I couldn’t help but feel like I’d been cast in the role of Giles in some sort of gastronomic version of Arthur Miller’s The Crucible.

It’s upsetting to be in the center of this type of maelstrom, however meaningless and inconsequential, simply because I had the temerity to voice an anti-mayo opinion. It could not simply be that I do not like mayo and wanted a sandwich without mayo. I had to be a “delusional circus freak who actually loves mayo but thinks he doesn’t.” Nobody likes to be slandered by so-called “friends” at a restaurant. Nobody wants to be the next Sebastian, a former friend of ours whose social life was nearly destroyed in 2015 because of his single, injudicious complaint about aioli.

The result has been a self-silencing of much of America. According to data from Quartz, mayonnaise is the most popular condiment in the country. In 2013, people spent $2 billion on mayo, which translates to $6 of mayo per person. But numbers can be misleading. For instance, I purchased no mayo in 2013. That means someone else must have spent more than $6 on mayo. Who was it? I don’t know! I don’t need to know. I don’t think they should be sent to prison. But similarly, I and the millions of people like me should not be sent to the mayo prison.

The data confirms what everyone with eyes and ears and a brain knows from their gut: In the proverbial land of the free, people who order something and ask the server to hold the mayo live in mortal fear that it will still have mayo on it. In the ivory towers of the foodie intelligentsia, it is inconceivable that someone would not like mayo.

If you’re of a certain persuasion, you might think this isn’t such a bad thing. Mayonnaise is but one tool in a chef’s toolbox, one arrow in the chef’s quiver, one color on the chef’s palette, or taste on the chef’s palate. Chefs should not be burdened with odious restrictions that would curtail their creativity. Up to a point, you aren’t wrong. Everyone has felt sympathy for the chef who has to accommodate the large group that comes in just before closing time and has 15 different insane food restrictions. Thinking before you order is always good practice. I accept this.

America has long since passed the point of “up to a point.” Six years ago, I was in a restaurant on Manhattan’s Upper West Side, and I ordered a BLT with no mayo. It arrived with mayo. I sent it back and when it returned it again had mayo on it. I couldn’t help but laugh! The waiter, mortified at first but warmed by my amusement, confided, “The chef really likes mayo.” When the third BLT finally had no mayonnaise, he whispered, “I hate mayo too.” I wonder now if in this current climate that friendly server whose candor I admired would be comfortable to make such an admission!

Reader, mayo wasn’t even listed as an ingredient on the menu.

by Ben Dreyfuss, Mother Jones |  Read more:
Image: Mother Jones illustration; Hippolyte Lecomte
[ed. Not a fan of mayo or aioli.]

The Restaurant of Order Mistakes


Worldwide, dementia affects 47.5 million people with 9.9 million new cases each year. Recently, a pop-up restaurant in Tokyo spent 3 days in operation, changing the public’s perception of those suffering from dementia and Alzheimer’s. The Restaurant of Order Mistakes, which was open in early June, was staffed by sufferers of these disorders.

Six smiling waitresses took orders and served food to customers, who came in knowing they may not get what they asked for. Each waitress suffers either from dementia or Alzheimer’s, hence the name of the restaurant. One waitress, who used to work in a school, decided to participate since she was used to cooking for children and thought she could do it. But, of course, the day was not without mistakes.

[ed. See: ‘The Restaurant of Order Mistakes’ Only Staffs Waiters with Dementia, So Every Order is a Surprise (My Modern Met).]

Monday, July 8, 2019

Julian Lage, Kenny Wollesen, and Scott Colley

Americans Shocked to Find Their Rights Literally Vanish at U.S. Airports

If you’re traveling outside the United States this summer you might want to rethink taking your electronics along. Government agents have been detaining American citizens without arrest, searching their electronic devices, and in some cases downloading the entire contents of phones, tablets, laptops, and other devices. And all of this happens without a warrant or access to an attorney.

“The border has become a rights-free zone for Americans who have to travel,” Senator Ron Wyden said in a statement to TAC. “The founders never could have imagined that the government would be able to sift through your entire digital life, from pictures to emails and even where you’ve been, just because you decide to take a vacation or travel for work.”

Border searches of electronic devices have exploded in recent years: in 2018, U.S. Customs and Border Protection (CBP) searched 33,295 smartphones, laptops, and other electronic devices, up nine percent from fiscal year 2017 and more than six times the number searched in 2012. And those are just CBP’s figures; Immigration and Customs Enforcement (ICE) does not keep records of the number of electronic device searches it conducts.
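[ed. Those year-over-year claims imply rough totals for the earlier years, even though only the 2018 count is given directly. A minimal back-of-the-envelope sketch, with the 2017 and 2012 figures derived from the stated ratios rather than taken from CBP’s own reports:]

```python
# Estimates implied by the figures cited above. Only the FY2018
# total is given directly; the rest follow from "up nine percent
# from fiscal year 2017" and "more than six times the number
# searched in 2012".

searches_2018 = 33_295

# If 2018 is 9% above 2017, then 2017 was about 2018 / 1.09.
est_2017 = searches_2018 / 1.09   # ~30,546 devices

# If 2018 is more than six times 2012, 2012 was below 2018 / 6.
max_2012 = searches_2018 / 6      # ~5,549 devices

print(f"Implied FY2017 searches: ~{est_2017:,.0f}")
print(f"Implied FY2012 searches: fewer than {max_2012:,.0f}")
```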

“The government is accessing all your private data,” Sophia Cope, senior staff attorney with the Electronic Frontier Foundation (EFF), told TAC. These “deeply intrusive” searches of electronic devices “reveal a lot about you: your emails, contacts, bank history, internet searches, medical history, social media usage, and political beliefs.”

The “border is not a Constitution-free zone,” said Cope. But right now it is essentially functioning as one, as agents at the border run roughshod over the laws that protect Americans’ privacy.

In a unanimous 2014 decision, Riley v. California, the Supreme Court ruled that when a person has been arrested, law enforcement needs a warrant to search their electronic devices.

But government agents at the border assert that they can search anyone’s device, at any time, for any reason, or for no reason at all. CBP has largely been operating under its own rules; the agency says it does not need a warrant, or even probable cause, to conduct this digital invasion because of the “border search exception” to the Fourth Amendment’s requirement for probable cause or a warrant.

A lawsuit brought by EFF and the American Civil Liberties Union (ACLU) argues that these searches are in violation of the First and Fourth Amendments of the U.S. Constitution.

For travelers whose professions require they maintain the privacy of sensitive information, like journalists, attorneys, clergy, and doctors, the effect of these searches can be quite chilling. We have laws that preserve the privacy of patients and attorneys’ clients—even journalists are protected by shield laws in most states—but there’s no such protection when CBP seizes electronic devices.

by Barbara Boland, The American Conservative |  Read more:
Image: Michael Ball/Wikimedia Commons

Blindsided: Alaska’s University System Pleads for a Lifeline

More than a month after Alaska lawmakers settled on a plan to cut $5 million in support for the state’s universities, Gov. Mike J. Dunleavy shocked the state by using a veto to cut much deeper, taking away $130 million more from the system that gave him his master’s degree.

Mr. Dunleavy, a Republican in his first year as governor, has seized on a hawkish approach to budgeting in order to fulfill a campaign promise to increase the oil-revenue dividend the state pays each Alaska resident to about $3,000 a year.

The governor’s slashing of state funding left university leaders blindsided and in turmoil. The university’s supporters have embarked on a desperate scramble to persuade lawmakers to override the governor’s line-item veto, which would reduce the operating funds the university system gets from the state by 41 percent.

With a special legislative session convening on Monday, they have just five days to do so before the cuts become official.

“I think people are actually frightened,” said Maria Williams, a professor who chairs the University of Alaska’s Faculty Alliance. “I’m frightened because I feel that what is happening is a drastic reshaping of the state of Alaska.”

The showdown in the Legislature this week comes at a time of economic trouble in the state. While much of the United States has benefited from robust economic growth in recent years, Alaska’s fortunes have been largely tied to those of the state’s declining oil and gas industry. Falling oil revenues have brought on a persistent recession that has forced the state to confront lingering questions about how to best revive its economy.

Faced with looming deficits, political leaders avoided imposing a state sales tax or personal income tax, and chose to reduce payouts from the oil dividend fund instead. But the reductions were unpopular with some voters, and Mr. Dunleavy won election last year promising not only to restore the full dividend payments for the future but to fight for catch-up payments to make up for past reductions.

Leaders of the University of Alaska system, which serves more than 26,000 students from Juneau to Fairbanks, expect the governor’s budget cut to result in the shuttering of some satellite campuses, the elimination of hundreds of staff and faculty positions and an unprecedented reduction in the number of students the system is able to serve.

Mr. Dunleavy, a former teacher who got a master’s in education from the University of Alaska system, said his cuts to the state budget, including those for the university system, were necessary to lay a better foundation for private-sector job growth. But Jim Johnsen, the president of the university, said a strong university system was necessary to develop innovators and to train a work force that is increasingly dependent on postsecondary education.

“There is really no strong state without a strong university,” Mr. Johnsen said. “It just doesn’t exist.”

Paying the dividend

During his campaign last year, Mr. Dunleavy vowed to balance the state’s budget while avoiding new taxes, and to provide Alaskans with bigger payouts from the Alaska Permanent Fund, which holds oil revenue for later distribution. Lawmakers gave each Alaskan a $1,600 dividend last year; an old formula for the payouts that the governor hopes to revive would yield payments of about $3,000 a person this year, according to Bryce Edgmon, the speaker of the State House.

To raise the dividend while lowering the state’s deficit, Mr. Dunleavy proposed a large budget cut for the university system earlier this year. But after the university system worked closely with lawmakers through the budget-writing process, the Alaska Legislature settled on a reduction of just $5 million in the $327 million of operating-budget support the state provides.
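[ed. The article’s figures hang together arithmetically: the governor’s $130 million veto on top of the Legislature’s $5 million trim totals $135 million, about 41 percent of the $327 million in state operating support. A minimal sketch of that arithmetic, figures in millions, reading the 41 percent as the combined reduction:]

```python
# Sanity check: do the Alaska budget figures cited above line up?
state_support = 327.0     # state operating-budget support, $M
legislative_cut = 5.0     # reduction settled on by the Legislature, $M
veto_cut = 130.0          # additional cut from the governor's veto, $M

total_cut = legislative_cut + veto_cut            # 135.0
pct_reduction = total_cut / state_support * 100   # ~41.3

print(f"Total reduction: ${total_cut:.0f}M "
      f"({pct_reduction:.0f}% of state support)")
# Consistent with the article's "41 percent" figure.
```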

Mr. Johnsen, fearing that the governor might not stomach the Legislature’s plan, met with Mr. Dunleavy in late May and quietly provided him with a written plan that he regarded as a drastic alternative: a $49 million reduction spread over several years, with significant cuts to personnel and “a reduced capacity to serve our students and our state.” Mr. Johnsen said he thought such a reduction would force some difficult choices, but that the university could handle it.

“It was an interesting discussion,” Mr. Johnsen said in an interview about his talk with the governor. “He nodded his head. He stood up. He shook my hand. He said, ‘We’ll talk.’”


The next time they spoke was the morning of Mr. Dunleavy’s veto announcement, he said. Legislative leaders had also been left in the dark. (...)

Along with the cuts to the university, Mr. Dunleavy also used his veto to push deep spending reductions elsewhere in the state budget. He eliminated funding for the Alaska State Council on the Arts, for public television and radio, and for a benefits program for older people.

He also cut $334,700 from the state’s appellate court system, writing in a veto document that the amount reflected the cost of government-funded abortion services. Mr. Dunleavy disliked a state Supreme Court ruling earlier this year that struck down state regulations that would have curtailed abortion coverage under Medicaid.

Despite all the cuts, the governor did not manage to completely close the state’s budget gap, which has grown in recent years as oil prices and revenues have declined and the state’s economy has been in recession. With a gap of hundreds of millions of dollars still remaining, Mr. Dunleavy suggested in announcing the veto that more cuts could be coming.

“Next year, it’s our goal to complete this process,” Mr. Dunleavy said.

An uncertain vote

Overriding the governor’s veto would require a three-quarters majority of the state’s 60 representatives and senators. More than half of them are Republicans. [ed. Who are so dysfunctional they can't even decide where to meet.]

Republican Party officials have celebrated Mr. Dunleavy’s actions in recent days, among them the state chairman, Glenn Clary, who said last week that the governor understood that the state must live within its means.

“Alaska’s economic future is in good hands with our governor and his staff,” Mr. Clary said.

by Mike Baker, NY Times |  Read more:
Image: Joshua Corbett for The New York Times
[ed. I lived in Alaska for nearly 40 years and am just sick at what the state has become: short-sighted, greedy, entitled, and mean. Rather than institute a state income tax, sales tax, or any other kind of tax, the governor and majority Republican legislature continue to cut state services and programs, just to pass out (MORE) free money to every resident. It's the worst kind of political pandering. See also: A partial list of Dunleavy’s line-item budget vetoes (ADN); and, because this seems to be the direction they're heading: the Kansas Experiment (Wikipedia). God even seems to be sending them a message.]