Friday, April 16, 2021

Seeing on the Far Side of the Moon

Instead of relying on one very large dish to collect radio waves, astronomers can use computers to stitch data from a number of radio telescopes (called an array) into a single coherent observation. These telescopes can be located at a single site, or they can be separated by oceans. The Event Horizon Telescope (EHT), the instrument that Bouman and colleagues used to image the black hole, is actually a network of telescopes in Europe, North America, South America, Antarctica, and Hawaii. The resolution of the array is proportional not to the diameter of any one instrument, but rather to the distance between those instruments that are farthest apart. The EHT’s black hole measurement was made at a stunning resolution of 25 microarcseconds, roughly the resolution needed to distinguish a golf ball on the moon from Earth. (...)
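[ed. A back-of-the-envelope check on that 25-microarcsecond figure, sketched in Python. The ~1.3 mm observing wavelength and the Earth-diameter baseline are my assumptions, not stated in the excerpt:]

wavelength_m = 1.3e-3      # EHT observes at roughly 1.3 mm (assumed)
baseline_m = 1.2742e7      # longest baseline is roughly Earth's diameter (assumed)

# Diffraction-limited resolution: theta ~ wavelength / baseline, in radians
theta_rad = wavelength_m / baseline_m

# Convert radians to microarcseconds (1 radian ~ 206,265 arcseconds)
theta_uas = theta_rad * 206265 * 1e6
print(round(theta_uas))    # ~21 microarcseconds, in line with the ~25 quoted above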

Space telescopes are incredible instruments. NASA’s most famous, the Hubble Space Telescope, has made numerous significant discoveries since it entered service in 1990, most famously estimating the age of the universe at 13.7 billion years, two orders of magnitude more precisely than the previous scientific estimate of 10 to 20 billion years. But Hubble operates mainly in the optical band, something that is mostly accessible from Earth. NASA’s less famous infrared instrument, the Spitzer Space Telescope, which was deactivated this year after tripling its planned design life, studied bands not observable from the ground. Its replacement, the powerful James Webb Space Telescope, is due to launch next year. It should produce even more stunning observations than Hubble when it comes online, as its sensitivity to infrared light is perfect for capturing optical waves, redshifted by the expansion of the cosmos, from some of the most distant objects in the observable universe.

But the biggest problem with these orbiting telescopes is that they cannot avail themselves of the solution used by terrestrial arrays to increase resolution—adding more telescopes and stitching the data together using computation. James Webb’s aperture is 6.5 meters in diameter, while the Event Horizon Telescope has an effective aperture the size of Earth. Space telescopes simply cannot match the resolving power that arrays on the ground can achieve.

Astronomy, then, faces a Catch-22. Terrestrial telescopes can be built with excellent resolution thanks to aperture synthesis, but they have to cope with atmospheric interference that limits access to certain bands, as well as radio interference from human activity. Space telescopes don’t experience atmospheric interference, but they cannot benefit from aperture synthesis to boost resolution. What we need is to develop a telescope array that can marry the benefits of both: a large synthetic aperture like Earth-based arrays that is free from atmospheric and human radio interference like space telescopes.

A telescope array on the surface of the moon is the only solution. The moon has no atmosphere. Its far side is shielded from light and radio chatter coming from Earth. The far side’s ground is stable, with little tectonic activity, an important consideration for the ultra-precise positioning needed for some wavelengths. Turning the moon into a gigantic astronomical observatory would open a floodgate of scientific discoveries. There are small telescopes on the moon today, left behind from Apollo 16 and China’s Chang’e 3 mission. A full-on terrestrial-style far-side telescope array, however, is in a different class of instrument. Putting one (or more) on the moon would have cost exorbitant sums only a few years ago, but thanks to recent advances in launch capabilities and cost-reducing competition in the new commercial space industry, it is now well worth doing—particularly if NASA leverages private-sector innovation.

by Eli Dourado, Works in Progress | Read more:
Image: Antennas of the Atacama Large Millimeter/submillimeter Array (ALMA), on the Chajnantor Plateau. Credit: ESO/C. Malin

Thursday, April 15, 2021


via:
[ed. ...and about a million other songs, just rearrange as needed.]

via:

What the U.S. Got for $2 Trillion in Afghanistan

All told, the cost of nearly 18 years of war in Afghanistan will amount to more than $2 trillion. Was the money well spent?

There is little to show for it. The Taliban control much of the country. Afghanistan remains one of the world’s largest sources of refugees and migrants. More than 2,400 American soldiers and more than 38,000 Afghan civilians have died.

Still, life has improved, particularly in the country’s cities, where opportunities for education have grown. Many more girls are now in school. And democratic institutions have been built — although they are shaky at best.

Drawing on estimates from Brown University’s Costs of War Project, we assessed how much the United States spent on different aspects of the war and whether that spending achieved its aims.

$1.5 trillion waging war

When President George W. Bush announced the first military action in Afghanistan in the wake of terrorist attacks by Al Qaeda in 2001, he said the goal was to disrupt terrorist operations and attack the Taliban.

Eighteen years later, the Taliban are steadily getting stronger. They kill Afghan security force members — sometimes hundreds in a week — and defeat government forces in almost every major engagement, except when significant American air support is used against them.

Al Qaeda’s senior leadership moved to Pakistan, but the group has maintained a presence in Afghanistan and expanded to branches in Yemen, northern Africa, Somalia and Syria.

The $1.5 trillion in war spending remains opaque, but the Defense Department has declassified breakdowns of some of the spending for the three most recent years.

Most of the money detailed in those breakdowns — about 60 percent each year — went to things like training, fuel, armored vehicles and facilities. Transportation, such as air and sea lifts, took up about 8 percent, or $3 billion to $4 billion a year.

$10 billion on counternarcotics

Afghanistan supplies 80 percent of the world’s heroin.

In a report last year, the Special Inspector General for Afghanistan Reconstruction described counternarcotics efforts as a “failure.” Despite billions of dollars to fight opium poppy cultivation, Afghanistan is the source of 80 percent of global illicit opium production.

Before the war, Afghanistan had almost completely eradicated opium, according to United Nations data from 1996 to 2001, when the Taliban were in power.

Today, opium cultivation is a major source of income and jobs, as well as revenue for the Taliban. Other than war expenditures, it is Afghanistan’s biggest economic activity.

$87 billion to train Afghan military and police forces

Afghan forces can’t support themselves.

One of the major goals of the American effort has been to train thousands of Afghan troops. Most of American spending on reconstruction has gone to a fund that supports the Afghan Army and police forces through equipment, training and funding.

But nobody in Afghanistan — not the American military, and not President Ashraf Ghani’s top advisers — thinks Afghan military forces could support themselves.

The Afghan Army in particular suffers from increasing casualty rates and desertion, which means it has to train new recruits totaling at least a third of its entire force every year.

President Barack Obama had planned to hand over total responsibility for security to the Afghans by the end of 2014 and to draw down all American forces by 2016. That plan faltered when the Taliban took quick advantage and gained ground.

The American military had to persuade first President Obama, and then President Trump, to ramp up forces. Some 14,000 U.S. troops remained in the country as of this month.

$24 billion on economic development

Most Afghans still live in poverty.

War-related spending has roughly doubled the size of Afghanistan’s economy since 2007. But it has not translated into a healthy economy.

A quarter or more of Afghans are unemployed, and the economic gains have trailed off since 2015, when the international military presence began to draw down.

Overseas investors still balk at Afghanistan’s corruption — among the worst in the world, according to Transparency International, an anticorruption group — and even Afghan companies look for cheaper labor from India and Pakistan.

Hopes of self-sufficiency in the mineral sector, which the Pentagon boasted could be worth $1 trillion, have been dashed. A few companies from China and elsewhere began investing in mining, but poor security and infrastructure have prevented any significant payout.

$30 billion on other reconstruction programs

Much of that money was lost to corruption and failed projects.

American taxpayers have supported reconstruction efforts that include peacekeeping, refugee assistance and aid for chronic flooding, avalanches and earthquakes.

Much of that money, the inspector general found, was wasted on programs that were poorly conceived or riddled with corruption.

American dollars went to hospitals that treated no patients, to schools that taught no students (and sometimes never existed at all), and to military bases the Afghans found useless and later shuttered.

The inspector general documented $15.5 billion in waste, fraud and abuse in reconstruction efforts from 2008 through 2017.

Thanks to American spending, Afghanistan has seen improvements in health and education — but they are scant compared with international norms.

Afghan maternal mortality remains among the highest in the world, while life expectancy is among the lowest. Most girls still receive little or no schooling, and education for boys is generally poor.

$500 billion on interest

The war has been funded with borrowed money.

To finance war spending, the United States borrowed heavily and will pay more than $600 billion in interest on those loans through 2023. The rest of the debt will take years to repay.

In addition to the more than $2 trillion the American government has already spent on the war, debt and medical costs will continue long into the future.

$1.4 trillion on veterans who have fought in post-9/11 wars, by 2059

Medical and disability costs will continue for decades.

More than $350 billion has already gone to medical and disability care for veterans of the wars in Iraq and Afghanistan combined. Experts say that more than half of that spending belongs to the Afghanistan effort.

The final total is unknown, but experts project another trillion dollars in costs over the next 40 years as wounded and disabled veterans age and need more services.

by Sarah Almukhtar and Rod Nordland, NY Times | Read more:
Image: Johannes Eisele / AFP / Getty via
[ed. Reproduced nearly in full (...hope the NYT doesn't make me take it down). Don't forget the other "forever war" we're currently engaged in that's equally insane and costly. See also: Leaving Afghanistan, and the Lessons of America’s Longest War (New Yorker).]

There Shouldn’t Be Vaccine Patents in a Health Crisis

The extremity of the Covid-19 vaccine apartheid cannot be overstated. As of mid-February, the United States had acquired enough vaccines for three times its total population, while in 130 countries, not a single vaccine shot had been administered. This is no accident, but the direct and long-predicted result of a vaccine production and access model tied to privatized intellectual property and entrenched medicine monopolies.

The majority of Americans want President Joe Biden to act to end this intolerable vaccine inequality. Sixty percent of U.S. voters said they wanted Biden to endorse a motion at the World Trade Organization that would waive patent barriers and other crucial intellectual property protections on Covid-19 vaccines, according to a new poll from Data for Progress and the Progressive International. This would enable a significant expansion of global production and rollout, while disrupting the extraordinary profiteering of pharmaceutical leviathans in a death-dealing pandemic.

The refusal on the part of major pharmaceutical companies and Western powers to ensure the sharing of vaccine patent and production information has been an immeasurable moral failure, not to mention a most foolish approach to a pandemic in need of a global response. The new poll also makes clear that, for Biden, blocking vaccine sharing is not even a popular position. Seventy-two percent of registered Democrats want the president to remove patent barriers to speed vaccine rollout and reduce costs for less affluent nations.

At present, WTO rules over intellectual property mean that most countries are barred from producing the leading vaccines that have been approved, including those by Pfizer, Moderna, and Johnson & Johnson, which are U.S.-produced. Last October, South Africa and India brought a proposal to the WTO for a temporary waiver that would apply to certain intellectual property on Covid-19 medical tools and technologies until global herd immunity is reached.

It garnered majority support from member states: A hundred countries support the proposal overall, and 58 governments now co-sponsor it; 375 civil society organizations, including Doctors Without Borders, Oxfam, and Amnesty International, have signed a letter in support.

The waiver was blocked, however, by a small number of wealthy nations and blocs, including the U.S., the U.K., and the EU, that chose instead to leave vaccine production in the hands of only a few pharmaceutical companies, which, through public-private partnerships, have ensured priority access to the rich countries in turn.

There are no legitimate grounds for maintaining patent barriers in this health crisis unless you’re a pharmaceutical giant making billions or, of course, a Western power invested in maintaining global power through neoliberalization, market monopolies, and racialized capitalism. The strongest advocates of intellectual property protections in medicine, Bill Gates chief among them, have offered no ethical basis for the current status quo beyond vague gestures to protecting “innovation.”

Even a self-interested approach, one that sees the devastating economic possibilities of a mutating virus turning the pandemic into something endemic, should make the necessity of a patent waiver clear. The commitment to monopoly medicine is, in this sense, ideological.

The WTO proposal needs backing by a consensus of the organization’s 164 members to pass. It was under President Donald Trump that the U.S. blocked the patent waiver: a move that came as no surprise for an administration of white nationalists, which proudly left the World Health Organization. A change of tack by the Biden administration, which rejoined the WHO on Day One, could go a long way in pushing other wealthy countries to follow suit. (...)

Sen. Bernie Sanders, I-Vt., chair of the Senate Budget Committee, responded to the poll saying the U.S. should be “leading the global effort to end the coronavirus pandemic.” According to Sanders, “a temporary WTO waiver, which would enable the transfer of vaccine technologies to poorer countries, is a good way to do that.” More than 60 lawmakers have added their signature to a letter pushing Biden to save lives through a global vaccination drive.

by Natasha Lennard, The Intercept | Read more:
Image: Jessica Rinaldi/The Boston Globe via Getty Images
[ed. See also: Let Other Countries Copy the Covid Vaccines; and How Bill Gates Impeded Global Access to Covid Vaccines (TNR).]

US Congress: A Coin-Operated Stalemate Machine (and Whither AOC?)

Yves here. Tom Neuburger gives a hard look at AOC’s recent donations to corporate Democrats and tries to ferret out what she intended to accomplish.

Tom is at a loss to understand why AOC chose the party members she did. I am at a loss to understand why she thought $5,000 donations would have made any difference to the recipients even if they had been on board with taking funds from her. As I am sure readers know, there’s a dark art as to how heavyweight bundlers and donors work around formal contribution limits.

And on top of that, Congressional Democrats run a pay-to-play operation. Kicking in enough money to the DCCC is the cost of entry for getting House committee leadership positions. We explained this back in 2011, via the work of Tom Ferguson, in Congress is a “Coin Operated Stalemate Machine.” I strongly urge you to read the entire post. Key section:
A new article by Ferguson in the Washington Spectator sheds more light on this corrupt and defective system. Partisanship and deadlocks are a direct result of the increased power of a centralized funding apparatus. It’s easy to raise money for grandstanding on issues that appeal to well-heeled special interests, so dysfunctional behavior is reinforced.

Let’s first look at how crassly explicit the pricing is. Ferguson cites the work of Marian Currander on how it works for the Democrats in the House of Representatives:
Under the new rules for the 2008 election cycle, the DCCC [Democratic Congressional Campaign Committee] asked rank-and-file members to contribute $125,000 in dues and to raise an additional $75,000 for the party. Subcommittee chairpersons must contribute $150,000 in dues and raise an additional $100,000. Members who sit on the most powerful committees … must contribute $200,000 and raise an additional $250,000. Subcommittee chairs on power committees and committee chairs of non-power committees must contribute $250,000 and raise $250,000. The five chairs of the power committees must contribute $500,000 and raise an additional $1 million. House Majority Leader Steny Hoyer, Majority Whip James Clyburn, and Democratic Caucus Chair Rahm Emanuel must contribute $800,000 and raise $2.5 million. The four Democrats who serve as part of the extended leadership must contribute $450,000 and raise $500,000, and the nine Chief Deputy Whips must contribute $300,000 and raise $500,000. House Speaker Nancy Pelosi must contribute a staggering $800,000 and raise an additional $25 million.
Ferguson teases out the implications:
Uniquely among legislatures in the developed world, our Congressional parties now post prices for key slots on committees. You want it — you buy it, runs the challenge. They even sell on the installment plan: You want to chair an important committee? That’ll be $200,000 down and the same amount later, through fundraising…..

The whole adds up to something far more sinister than the parts. Big interest groups (think finance or oil or utilities or health care) can control the membership of the committees that write the legislation that regulates them. Outside investors and interest groups also become decisive in resolving leadership struggles within the parties in Congress. You want your man or woman in the leadership? Just send money. Lots of it….

The Congressional party leadership controls the swelling coffers of the national campaign committees, and the huge fixed investments in polling, research, and media capabilities that these committees maintain — resources the leaders use to bribe, cajole, or threaten candidates to toe the party line… Candidates rely on the national campaign committees not only for money, but for message, consultants, and polling they need to be competitive but can rarely afford on their own…

This concentration of power also allows party leaders to shift tactics to serve their own ends….They push hot-button legislative issues that have no chance of passage, just to win plaudits and money from donor blocs and special-interest supporters. When they are in the minority, they obstruct legislation, playing to the gallery and hoping to make an impression in the media…

The system …ensures that national party campaigns rest heavily on slogan-filled, fabulously expensive lowest-common-denominator appeals to collections of affluent special interests. The Congress of our New Gilded Age is far from the best Congress money can buy; it may well be the worst. It is a coin-operated stalemate machine that is now so dysfunctional that it threatens the good name of representative democracy itself.
If that isn’t sobering enough, a discussion after the Ferguson article describes the mind-numbing amount of money raised by the members of the deficit-cutting super committee. In addition, immediately after being named to the committee, several members launched fundraising efforts that were unabashed bribe-seeking. But since the elites in this country keep themselves considerably removed from ordinary people, and what used to be considered corruption in their cohort is now business as usual, nary an ugly word is said about these destructive practices.

So as much as AOC has seemed disappointing of late, the overwhelming majority of voters have no clue as to what she is up against.

by Yves Smith and Thomas Neuburger, Naked Capitalism |  Read more:
Image: Seth Wenig/AP Photo via Politico
[ed. A bit of inside baseball here for political junkies. Apparently AOC gave $5000 to various Democratic members of Congress to help with their campaigns, a few of them DINOs (Dems in name only), who see any association with her as radioactive in their conservative-leaning districts. So they've decided to reject or return the funds. The question is: why did AOC do this (and with such meager amounts)? Is she gravitating toward the middle, and becoming more of an establishment player? Trying to mend fences? Or, as one commenter suggested, playing "eleventy-dimensional chess" and using the money to shine a light on people who've never been exposed in this way to this kind of scrutiny before? Who knows? But as this post indicates, funding is a sensitive and intricate process. By the way, the numbers above are for Democrats. I'd bet the ones for Republicans are equally stunning, if not significantly worse (I'm not going to check). Also, this is from 2011. Citizens United undoubtedly made the process (and money involved) even more obscene.]

The Decay of Cinema

This deep into the coronavirus pandemic, how many cinephiles haven’t yet got word of the bankruptcy or shuttering of a favorite movie theater? Though the coronavirus hasn’t quite killed filmgoing dead — at least not everywhere in the world — the culture of cinema itself had been showing signs of ill health long before any of us had heard the words “social distancing.” The previous plague, in the view of Martin Scorsese, was the Hollywood superhero-franchise blockbuster. “That’s not cinema,” the auteur-cinephile told Empire magazine in 2019. “Honestly, the closest I can think of them, as well made as they are, with actors doing the best they can under the circumstances, is theme parks.”

This past March, Scorsese published an essay in Harper‘s called “Il Maestro.” Ostensibly a reflection on the work of Federico Fellini, it also pays tribute to Fellini’s heyday, when on any given night in New York a young movie fan could find himself torn between screenings of the likes of La Dolce Vita, François Truffaut’s Shoot the Piano Player, Andrzej Wajda’s Ashes and Diamonds, John Cassavetes’ Shadows, and the work of other masters besides. This was early in the time when, as New Yorker critic Anthony Lane puts it, “adventurous moviegoing was part of the agreed cultural duty, when the duty itself was more of a trip than a drag, and when a reviewer could, in the interests of cross-reference, mention the names ‘Dreyer’ or ‘Vigo’ without being accused of simply dropping them for show.”

Alas, writes Scorsese, the art of cinema today is “systematically devalued, sidelined, demeaned, and reduced to its lowest common denominator, ‘content.'” Video essayist Daniel Simpson of Eyebrow Cinema calls this lament “more than an artist railing against a businessman’s terminology, but a yearning for a time when movies used to be special in and of themselves, not just as an extension of a streaming service.” In “The Decay of Cinema,” Simpson connects this cri de cinephilic coeur by the man who directed Taxi Driver and GoodFellas to a 25-year-old New York Times opinion piece by Susan Sontag. A midcentury-style film devotee if ever there was one, Sontag mourns “the conviction that cinema was an art unlike any other: quintessentially modern; distinctively accessible; poetic and mysterious and erotic and moral — all at the same time.”

Some may object to Sontag’s claim that truly great films had become “violations of the norms and practices that now govern movie making everywhere.” Just two weeks after her piece ran, Simpson points out, the Coen brothers’ Fargo opened; soon to come were acclaimed pictures by Mike Leigh and Lars von Trier, and the next few years would see the emergence of Wes Anderson and Paul Thomas Anderson both. But what of today’s masterpieces, like Chung Mong-hong’s A Sun? Though released before the havoc of COVID-19, it has nevertheless — “without a franchise, rock-star celebrities, or an elevator-pitch high concept” — languished on Netflix. And as for an event of such seemingly enormous cinematic import as the completion of Orson Welles’ The Other Side of the Wind three decades after his death, the result wound up “simply dumped on the platform with everything else.”

In a time like this, when the many stuck at home have few options besides streaming services, one hesitates to accuse Netflix of killing either cinema or cinephilia. And yet Simpson sees a considerable difference between being a cinephile and being a “user,” a label that suggests “a customer to be satiated” (if not an addict to be granted a fix of his habit-forming commodity). “There’s only one problem with home cinema,” writes Lane. “It doesn’t exist.” Choice “pretty much defines our status as consumers, and has long been an unquestioned tenet of the capitalist feast, but in fact carte blanche is no way to run a cultural life (or any kind of life, for that matter).” If we continue to do our viewing in algorithm-padded isolation, we surrender what Simpson describes as “the human connection to the film experience” — one of the things that, when all the social distancing ends, even formerly casual moviegoers may find themselves desperately craving.

by Colin Marshall, Open Culture | Read more:
Image: The Decay of Cinema

Mick Jagger & Dave Grohl


[ed. Everybody's hit the wall.]

Wednesday, April 14, 2021

via:


Weeds: dandelion
via:

Two Paths to the Future

The world of 2120 is going to be radically different. In exactly what way I cannot say, any more than a peasant in 1500 could predict the specifics of the industrial revolution. But it almost certainly involves unprecedented levels of growth as the constraints of the old paradigm are dissolved under the new one. One corollary to this view is that our long-term concerns (global warming, dysgenics, aging societies) are only relevant to the extent that they affect the arrival of the next paradigm.

There are two paths to the future: silicon, and DNA. Whichever comes first will determine how things play out. The response to the coronavirus pandemic has shown that current structures are doomed to fail against a serious adversary: if we want to have a chance against silicon, we need better people. That is why I think any AI "control" strategy not predicated on transhumanism is unserious.

Our neolithic forefathers could not have divined the metallurgical destiny of their descendants, but today, perhaps for the first time in universal history, we can catch a glimpse of the next paradigm before it arrives. If you point your telescope in exactly the right direction and squint really hard, you can just make out the letters: "YOU'RE FUCKED".

Artificial Intelligence
Nothing human makes it out of the near-future.
There are two components to forecasting the emergence of superhuman AI. One is easy to predict: how much computational power we will have. The other is very difficult to predict: how much computational power will be required. Good forecasts are either based on past data or on generalization from theories constructed from past data. Because of their novelty, paradigm shifts are difficult to predict. We're in uncharted waters here. But there are two sources of information we can use: biological intelligence (brains, human or otherwise), and progress in the limited forms of artificial intelligence we have created thus far.

ML progress

GPT-3 forced me to start taking AI concerns seriously. Two features make GPT-3 a scary sign of what's to come: scaling, and meta-learning. Scaling refers to gains in performance from increasing the number of parameters in a model. Here's a chart from the GPT-3 paper:


Meta-learning refers to the ability of a model to learn how to solve novel problems. GPT-3 was trained purely on next-word prediction, but developed a wide array of surprising problem-solving abilities, including translation, programming, arithmetic, literary style transfer, and SAT analogies. Here's another GPT-3 chart:


Put these two together and extrapolate, and it seems like a sufficiently large model trained on a diversity of tasks will eventually be capable of superhuman general reasoning abilities. As gwern puts it:
More concerningly, GPT-3’s scaling curves, unpredicted meta-learning, and success on various anti-AI challenges suggests that in terms of futurology, AI researchers’ forecasts are an emperor sans garments: they have no coherent model of how AI progress happens or why GPT-3 was possible or what specific achievements should cause alarm, where intelligence comes from, and do not learn from any falsified predictions.

GPT-3 is scary because it’s a magnificently obsolete architecture from early 2018 (used mostly for software engineering convenience as the infrastructure has been debugged), which is small & shallow compared to what’s possible, on tiny data (fits on a laptop), sampled in a dumb way⁠, its benchmark performance sabotaged by bad prompts & data encoding problems (especially arithmetic & commonsense reasoning), and yet, the first version already manifests crazy runtime meta-learning—and the scaling curves still are not bending
Still, extrapolating ML performance is problematic because it's inevitably an extrapolation of performance on a particular set of benchmarks. Lukas Finnveden, for example, argues that a model similar to GPT-3 but 100x larger could reach "optimal" performance on the relevant benchmarks. But would optimal performance correspond to an agentic, superhuman, general intelligence? What we're really interested in is surprising performances in hard-to-measure domains, long-term planning, etc. So while these benchmarks might be suggestive (especially compared to human performance on the same benchmark), and may offer some useful clues in terms of scaling performance, I don't think we can rely too much on them—the error bars are wide in both directions. (...)

How much power will we have?

Compute use has increased by about 10 orders of magnitude in the last 20 years, and that growth has accelerated lately, currently doubling approximately every 3.5 months. A big lesson from the pandemic is that people are bad at reasoning about exponential curves, so let's put it in a different way: training GPT-3 cost approximately 0.000005% of world GDP. Go on, count the zeroes. Count the orders of magnitude. Do the math! There is plenty of room for scaling, if it works.
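[ed. The "count the zeroes" claim checks out against rough public figures. A toy calculation in Python; the ~$5M training cost and ~$85 trillion world GDP below are my assumptions, not figures from the essay:]

gpt3_training_cost_usd = 5e6    # rough public estimate of GPT-3's training cost (assumed)
world_gdp_usd = 8.5e13          # roughly $85 trillion (assumed)

fraction = gpt3_training_cost_usd / world_gdp_usd
print(f"{fraction:.7%}")        # ~0.0000059% of world GDP, consistent with the figure above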

The main constraint is government willingness to fund AI projects. If they take it seriously, we can probably get 6 orders of magnitude just by spending more money. GPT-3 took 3.14e23 FLOPs to train, so if strong AGI can be had for less than 1e30 FLOPs it might happen soon. Realistically any such project would have to start by building fabs to make the chips needed, so even if we started today we're talking 5+ years at the earliest.
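[ed. A minimal sketch of that gap in Python, using only the numbers in the paragraph above; the assumption that the 3.5-month doubling simply continues is doing all the work:]

import math

gpt3_flops = 3.14e23       # compute used to train GPT-3 (from the text)
target_flops = 1e30        # hypothetical threshold for strong AGI (from the text)
doubling_months = 3.5      # current doubling time for training compute (from the text)

gap_orders = math.log10(target_flops / gpt3_flops)    # ~6.5 orders of magnitude
doublings = math.log2(target_flops / gpt3_flops)      # ~21.6 doublings
years = doublings * doubling_months / 12

print(f"{gap_orders:.1f} orders of magnitude, ~{years:.0f} years if the trend held")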

Looking into the near future, I'd predict that by 2040 we could squeeze another 1-2 orders of magnitude out of hardware improvements. Beyond that, growth in available compute would slow down to the level of economic growth plus hardware improvements.

Putting it all together

The best attempt at AGI forecasting I know of is Ajeya Cotra's heroic 4-part 168-page Forecasting TAI with biological anchors. She breaks down the problem into a number of different approaches, then combines the resulting distributions into a single forecast. The resulting distribution is appropriately wide: we're not talking about ±15% but ±15 orders of magnitude. (...)

Metaculus has a couple of questions on AGI, and the answers are quite similar to Cotra's projections. This question is about "human-machine intelligence parity" as judged by three graduate students; the community gives a 54% chance of it happening by 2040. This one is based on the Turing test, the SAT, and a couple of ML benchmarks, and the median prediction is 2038, with an 83% chance of it coming before 2100. (...)

Both extremes should be taken into account: we must prepare for the possibility that AI will arrive very soon, while also tending to our long-term problems in case it takes more than a century.

Human Enhancement
All things change in a dynamic environment. Your effort to remain what you are is what limits you.
The second path to the future involves making better humans. Ignoring the AI control question for a moment, better humans would be incredibly valuable to the rest of us purely for the positive externalities of their intelligence: smart people produce benefits for everyone else in the form of greater innovation, faster growth, and better governance. The main constraint to growth is intelligence, and small differences cause large effects: a standard deviation in national averages is the difference between a cutting-edge technological economy and not having reliable water and power. While capitalism has ruthlessly optimized the productivity of everything around us, the single most important input—human labor—has remained stagnant. Unlocking this potential would create unprecedented levels of growth.

Above all, transhumanism might give us a fighting chance against AI. How likely are they to win that fight? I have no idea, but their odds must be better than ours. The pessimistic scenario is that enhanced humans are still limited by numbers and meat, while artificial intelligences are only limited by energy and efficiency, both of which could potentially scale quickly.

The most important thing to understand about the race between DNA and silicon is that there's a long lag to human enhancement. Imagine the best-case scenario in which we start producing enhanced humans today: how long until they start seriously contributing? 20, 25 years? They would not be competing against the AI of today, but against the AI from 20-25 years in the future. Regardless of the method we choose, if superhuman AGI arrives in 2040, it's already too late. If it arrives in 2050, we have a tiny bit of wiggle room.

Let's take a look at our options.

Normal Breeding with Selection for Intelligence (...)
Gene Editing (...)
Cyborgs (...)
Iterated Embryo Selection (...)
Cloning (...)

A Kind of Solution
I visualise a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.
Let's revisit the AI timelines and compare them to transhumanist timelines.
  • If strong AGI can be had for less than 1e30 FLOPs, it's almost certainly happening before 2040—the race is already over.
  • If strong AGI requires more than 1e40 FLOPs, people alive today probably won't live to see it, and there's ample time for preparation and human enhancement.
  • If it falls within that 1e30-1e40 range (and our forecasts, crude as they are, indicate that's probable) then the race is on.
Even if you think there's only a small probability of this being right, it's worth preparing for. Even if AGI is a fantasy, transhumanism is easily worth it purely on its own merits. And if it helps us avoid extinction at the hand of the machines, all the better!

So how is it actually going to play out? Expecting septuagenarian politicians to anticipate wild technological changes and do something incredibly expensive and unpopular today for a hypothetical benefit that may or may not materialize decades down the line—is simply not realistic. Right now from a government perspective these questions might as well not exist; politicians live in the current paradigm and expect it to continue indefinitely. On the other hand, the Manhattan Project shows us that immediate existential threats have the power to get things moving very quickly. In 1939, Fermi estimated a 10% probability that a nuclear bomb could be built; 6 years later it was being dropped on Japan.

by Alvaro de Menard, Fantastic Anachronism | Read more:
Image: via

[ed. Not a very encouraging prospect. Reminds me of the old Woody Allen quote: “More than any other time in history, mankind faces a crossroads. One path leads to despair and utter hopelessness. The other, to total extinction. Let us pray we have the wisdom to choose correctly.” For more scary predictions, see: Book Review: The Precipice (SSC).]

Terms: GPT-3: an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory. AGI: Artificial General Intelligence: the hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI, full AI, or general intelligent action. FLOPs: floating-point operations, a count of the arithmetic operations performed (the sense used above for training compute); the related FLOPS (flops or flop/s) is floating-point operations per second, a measure of computer performance useful in fields of scientific computation that require floating-point calculations, where it is a more accurate measure than instructions per second. Transhumanism: a philosophical movement, the proponents of which advocate and predict the enhancement of the human condition by developing and making widely available sophisticated technologies able to greatly enhance longevity, mood and cognitive abilities (Wikipedia).

Afghanistan: An End to America's "Forever War"

Joe Biden has decided that 20 years is enough for America’s longest war, and has ordered the remaining troops out no matter what happens between now and September.

Biden’s withdrawal is one area of continuity with his predecessor, although unlike Donald Trump, this administration consulted the Afghans, US allies and its own agencies before announcing the decision. But both presidents were responding to a national weariness of “forever wars”.

To the surprise of no one, the Republican party that acquiesced in Trump’s order to get the troops out by May is now launching attacks on Biden’s “reckless” decision. The political attacks will mount if, as many expect, the current peace initiative fails and the Taliban step up their offensive.

In Afghanistan, any US president is damned if you do and damned if you don’t. Biden has plainly decided that, in that case, “don’t” is the better option.

In the Obama administration, Biden was a consistent voice of scepticism over the utility of military force in foreign policy, sometimes in opposition to advocates of humanitarian intervention.

He bluntly told a television interviewer on the campaign trail that he would feel “zero responsibility” if the status of Afghan women and other human rights suffered as a consequence of a US withdrawal.

“Are you telling me that we should go into China, go to war with China because what they’re doing to the Uyghurs?” he asked his CBS interviewer.

Safeguarding Afghan women and civil society has never been an official aim of the vestigial US military presence, but in the absence of a clearly defined goal, it became part of the de facto rationale.

“There are things that American officials have said over time to encourage that kind of thinking,” said Laurel Miller, who served as US special representative for Afghanistan and Pakistan, and now runs the Asia programme of the International Crisis Group.

“I’ll admit to – when I was in government – not feeling comfortable with some of those statements of enduring commitment, because I didn’t think it was believable.”

In making this decision, Biden has made clear he is setting aside Colin Powell’s famous “Pottery Barn rule”: if you break it, you own it. The quote comes from 2002 when the then secretary of state cited the fictional rule (which is not the policy of that furniture store) to warn George W Bush of the implications of invading Iraq. In Afghanistan, the US has part-owned the store for two decades now, and in reality, people and their livelihoods are still getting smashed.

by Julian Borger, The Guardian |  Read more:
Image: Kim Jae-Hwan/AFP/Getty Images
[ed. Finally. The problem being there was never a Plan B to start with. Just making stuff up as we went along (a textbook example of mission creep). See also: What Did the U.S. Get for $2 Trillion in Afghanistan? (NYT)]

The Daily Grind


Quite how long it takes a woman to grind for a family, apart from the time husking and shucking the maize, collecting the cooking water, and shaping and cooking the tortillas, depends on her skill and strength, the age and number of family members, the type of masa, and the quality of the metate. My estimate is that it takes about five hours a day to make enough masa for a family of five. This may seem incredible but it is in line with other estimates for contemporary Mexico and Guatemala collected by Michael Searcy, with Arnold Bauer’s estimate for Mexico, and experimental estimates for Europe collected in David Peacock’s The Stone of Life (2013), 127. Since five hours is about as much as anyone can grind, the labor of one in five adults has to be devoted to making the staple bread.
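[ed. The one-in-five figure follows directly from the estimates in the passage; a trivial check using only the numbers given:]

grinding_hours_per_family = 5    # hours of grinding per day for a family of five (from the text)
family_size = 5
max_hours_per_grinder = 5        # about as much as anyone can grind in a day (from the text)

full_time_grinders = grinding_hours_per_family / max_hours_per_grinder   # 1.0 grinder per family
people_fed_per_grinder = family_size / full_time_grinders
print(people_fed_per_grinder)    # 5.0: one full-time grinder for every five people fed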

The Daily Grind (Works in Progress)
Image: Magnus Ingvar Agustsson

How Trader Joe’s $2 "Two-Buck Chuck" Became a Best-Seller

Walk into almost any Trader Joe’s store and you’ll spot a behemoth display of Charles Shaw wine — or, as it’s more affectionately known, “Two Buck Chuck.”

Priced at a mere $1.99 to $3.79 per bottle, this magical ether is cheaper than most bottled water. It’s been knighted as the “darling of the discount wine world” by critics, and boasts a cult following among price-minded consumers.

For Trader Joe’s, the wine is also a gold mine.

The grocery chain has sold 1B+ bottles of Two Buck Chuck since debuting the beverage in 2002. Today, some locations sell as many as 6k bottles/day — or ~16% of the average store’s daily sales.

How is a supposedly decent wine sold at such a low price point? Where does it come from? And how did it rise to prominence?

This is the tale of one wine brand, two vintners, and the unlikely democratization of a historically snobby industry. (...)

The box wine baron

Fred Franzia did not share Shaw’s air d’élégance.

He was unrefined and heavyset, with a body shape the New Yorker likened to a “gourmet marshmallow.” Reclusive and gruff, he shied away from public appearances. He referred to winemakers as “bozos” and didn’t care for France.

Nonetheless, Franzia came from a long lineage of winemakers: His great-grandfather, Giuseppe, had immigrated to California’s Central Valley in 1893 and set up Franzia Brothers Winery (later sold to Coca-Cola); his uncle, Ernest Gallo, had built the largest wine exporter in California.

In 1973, Franzia launched his own wine company, Bronco Wine Co.

In a rickety wood-paneled trailer held together with duct tape, he set out to produce extremely cheap, high-quality “super-value” wines — wines that rejected the pretentiousness of Napa Valley.

Initially, Bronco operated as a wholesaler, buying bulk wine and selling it to larger wineries at a profit.

But soon, Franzia saw an opportunity to produce his own cheap wines — wines, as he later put it, that “yuppies would feel comfortable drinking.”

Through a legal loophole, he could say his wines were “Cellared and Bottled in Napa” if the brand was founded prior to 1986. So, he developed a strategy of buying out distressed wineries with distinguished-sounding names — Napa Ridge, Napa Creek, Domaine Napa — and using them to sell his stock of less-desirable Central Valley wines.

On a summer day in 1995, a few years after Charles F. Shaw Winery went bust, Franzia purchased the winery’s brand, label, and name for a mere $27k.

“We buy wineries from guys from Stanford who go bankrupt,” he later boasted. “Some real dumb-asses from there.”

Unbeknownst to the real Charles Shaw, Franzia was about to transform the once-fancy wine brand into an impossibly cheap everyman’s juice.

And in the process, he’d change the wine industry forever.

by Zachary Crockett, The Hustle |  Read more:
Image: Stephen Osman/Los Angeles Times via Getty Images

Tuesday, April 13, 2021

Charlie Musselwhite


[ed. See also: Ben Harper, Charlie Musselwhite - I'm In I'm Out And I'm Gone.]

What Is Kafkaesque? - The 'Philosophy' of Franz Kafka

Safety is Fatal

Humans need closeness and belonging but any society that closes its gates is doomed to atrophy. How do we stay open?

Many of us will recall Petri dishes from our first biology class – those shallow glass vessels containing a nutrient gel into which a microbe sample is injected. In this sea of nutrients, the cells grow and multiply, allowing the colony to flourish, its cells dividing again and again. But just as interesting is how these cells die. Cell death in a colony occurs in two ways, essentially. One is through an active process of programmed elimination; in this so-called ‘apoptotic’ death, cells die across the colony, ‘sacrificing’ themselves in an apparent attempt to keep the colony going. Though the mechanisms underlying apoptotic death are not well understood, it’s clear that some cells benefit from the local nutrient deposits of dying cells in their midst, while others seek nutrition at the colony’s edges. The other kind of colony cell death is the result of nutrient depletion – a death induced by the impact of decreased resources on the structure of the waning colony.

Both kinds of cell death have social parallels in the human world, but the second type is less often studied, because any colony’s focus is on sustainable development; and because a colony is disarmed in a crisis by suddenly having to focus on hoarding resources. At such times, the cells in a colony huddle together at the centre to preserve energy (they even develop protective spores to conserve heat). While individual cells at the centre slow down, become less mobile and eventually die – not from any outside threat, but from their own dynamic decline – life at the edges of such colonies remains, by contrast, dynamic. Are such peripheral cells seeking nourishment, or perhaps, in desperation, an alternative means to live?

But how far can we really push this metaphor: are human societies the same? As they age under confinement, do they become less resilient? Do they slow down as resources dwindle, and develop their own kinds of protective ‘spores’? And do these patterns of dying occur because we’ve built our social networks – like cells growing together with sufficient nutrients – on the naive notion that resources are guaranteed and infinite? Finally, do human colonies on the wane also become increasingly less capable of differentiation? We know that, when human societies feel threatened, they protect themselves: they zero in on short-term gains, even at the cost of their long-term futures. And they scale up their ‘inclusion criteria’. They value sameness over difference; stasis over change; and they privilege selfish advantage over civic sacrifice.

Viewed this way, the comparison seems compelling. In crisis, the colony introverts, collapsing inwards as inequalities escalate and there’s not enough to go around. In a crisis, as we’ve seen during the COVID-19 pandemic, people define ‘culture’ more aggressively, looking for alliances in the very places where they can invest their threatened social trust; for the centre is threatened and perhaps ‘cannot hold’.

Human cultures, like cell cultures, are not steady states. They can have split purposes as their expanding and contracting concepts of insiders and outsiders shift, depending on levels of trust, and on the relationship between available resources and how many people need them. Trust, in other words, is not only related to moral engagement, or the health of a moral economy. It’s also dependent on the dynamics of sharing, and the relationship of sharing practices to group size – this last being a subject that fascinates anthropologists.

In recent years, there’s been growing attention to what drives group size – and what the implications are for how we build alliances, how we see ourselves and others, and who ‘belongs’ and who doesn’t. Of course, with the advent of social media, our understanding of what a group is has fundamentally changed.

The British anthropologist Robin Dunbar popularised the question of group size in his book How Many Friends Does One Person Need? (2010). In that study, he took on the challenge of relating the question of group size to our understanding of social relationships. His interest was based on his early studies of group behaviour in animal primates, and his comparison of group sizes among tribal clans. Dunbar realised that, in groups of more than 150 people, clans tend to split. Averaging sizes of some 20 clan groups, he arrived at 153 members as their generalised limit.

However, as we all know, ‘sympathy groups’ (those built on meaningful relationships and emotional connections) are much smaller. Studies of grieving, for example, show that our number of deep relationships (as measured by extended grieving following the death of a sympathy group member) reach their upward limit at around 15 people, though others see that number as even smaller at 10, while others, still, focus on close support groups that average around five people.

For Dunbar, 150 is the optimal size of a personal network (even if Facebook thinks we have more like 500 ‘friends’), while management specialists think that this number represents the higher limits of cooperation. In tribal contexts, where agrarian or hunting skills might be distributed across a small population, the limiting number is taken to indicate the point after which hierarchy and specialisation emerge. Indeed, military units, small egalitarian companies and innovative think-tanks seem to top out somewhere between 150 and 200 people, depending on the strength of shared conventional understandings.

Though it’s tempting to think that 150 represents both the limits of what our brains can accommodate in assuring common purpose, and the place where complexity emerges, the truth is different; for the actual size of a group successfully working together is, it turns out, less important than our being aware of what those around us are doing. In other words, 150 might be an artefact of social agreement and trust, rather than a biologically determined structural management goal, as Dunbar and so many others think. We know this because it’s the limit after which hierarchy develops in already well-ordered contexts. But we also know this because of the way that group size shrinks radically in the absence of social trust. When people aren’t confident about what proximate others are mutually engaged in, the relevant question quickly turns from numbers of people in a functioning network to numbers of potential relationships in a group. So, while 153 people might constitute a maximum ideal clan size, based on brain capacity, 153 relationships exist in a much smaller group – in fact, 153 relationships exist exactly among only 18 people.
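[ed. The jump from 153 people to 18 people is just the combinatorics of pairwise ties; a quick check in Python:]

def pairwise_relationships(n):
    # distinct pairs in a group of n people: n choose 2
    return n * (n - 1) // 2

print(pairwise_relationships(18))    # 153, the same number as the averaged clan-size limit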

Dunbar’s number should actually be 18, since, under stress, the quality of your relationships matters much more than the number of people in your network. The real question is not how many friends a person can have, but how many people with unknown ideas can be put together and manage themselves in creating a common purpose, bolstered by social rules or cultures of practice (such as the need to live or work together). Once considered this way, anyone can understand why certain small elite groups devoted to creative thinking are sized so similarly.

Take small North American colleges. Increasingly, they vie with big-name universities such as Harvard and Stanford not only because they’re considered safer environments by worried parents, but because their smaller size facilitates growing trust among strangers, making for better educational experiences. Their smaller size matters. Plus, it’s no accident that the best of these colleges on average have about 150 teaching staff (Dunbar’s number) and that (as any teacher will know) a seminar in which you expect everyone to talk tops out at around 18 people.

But what do we learn from these facts? Well, we can learn quite a bit. While charismatic speakers can wow a crowd, even the most gifted seminar leader will tell you that his or her ability to involve everyone starts to come undone as you approach 20 people. And if any of those people require special attention (or can’t tolerate ideological uncertainty) that number will quickly shrink.

In the end, therefore, what matters much more than group size is social integration and social trust. As for Facebook’s or Dunbar’s question of how many ‘friends’ we can manage, the real question ought to be: how healthy is the Petri dish? To determine this, we need to assess not how strong are the dish’s bastions (an indicator of what it fears) but its ability, as with the small North American college, to engage productively and creatively in extroverted risk. And that’s a question that some other cultures have embraced much better than even North American colleges.

by David Napier, Aeon |  Read more:
Image: Fiddlesticks Country Club, a gated community in Fort Myers, Florida. Photo by Michael Siluk/UIG/Getty

Novel HIV Vaccine Approach Shows Promise in “Landmark” First-in-Human Trial

A novel vaccine approach for the prevention of HIV has shown promise in Phase I trials, reported IAVI and Scripps Research. According to the organisations, the vaccine successfully stimulated the production of the rare immune cells needed to generate antibodies against HIV in 97 percent of participants.

The vaccine is being developed to act as an immune primer, to trigger the activation of naïve B cells via a process called germline-targeting, as the first stage in a multi-step vaccine regimen to elicit the production of many different types of broadly neutralizing antibodies (bnAbs). Stimulating the production of bnAbs has been pursued as a holy grail in HIV for decades. It is hoped that these specialised blood proteins could attach to HIV surface proteins called spikes, which allow the virus to enter human cells, and disable them via a difficult-to-access region that does not vary much from strain to strain.

“We and others postulated many years ago that in order to induce bnAbs, you must start the process by triggering the right B cells – cells that have special properties giving them potential to develop into bnAb-secreting cells,” explained Dr William Schief, a professor and immunologist at Scripps Research and executive director of vaccine design at IAVI’s Neutralizing Antibody Center, whose laboratory developed the vaccine. “In this trial, the targeted cells were only about one in a million of all naïve B cells. To get the right antibody response, we first need to prime the right B cells. The data from this trial affirms the ability of the vaccine immunogen to do this.” (...)

One of the lead investigators on the trial, Dr Julie McElrath, senior vice president and director of Fred Hutch’s Vaccine and Infectious Disease Division, said the trial was “a landmark study in the HIV vaccine field,” adding that they had demonstrated “success in the first step of a pathway to induce broad neutralising antibodies against HIV-1.”

HIV affects more than 38 million people globally and is among the most difficult viruses to target with a vaccine, in large part because of its unusually fast mutation rate which allows it to constantly evolve and evade the immune system.

Dr Schief commented: “This study demonstrates proof of principle for a new vaccine concept for HIV, a concept that could be applied to other pathogens as well. With our many collaborators on the study team, we showed that vaccines can be designed to stimulate rare immune cells with specific properties and this targeted stimulation can be very efficient in humans. We believe this approach will be key to making an HIV vaccine and possibly important for making vaccines against other pathogens.”

The organisations said this study sets the stage for additional clinical trials that will seek to refine and extend the approach, with the long-term goal of creating a safe and effective HIV vaccine. As a next step, the collaborators are partnering with the biotechnology company Moderna to develop and test an mRNA-based vaccine that harnesses the approach to produce the same beneficial immune cells. According to the team, using mRNA technology could significantly accelerate the pace of HIV vaccine development, as it did with vaccines for COVID-19.

by Hannah Balfour, European Pharmaceutical Review | Read more:
Image: uncredited
[ed. The holy grail.]

Awful but Lawful

The courtroom is a stage. The rules of who says what and when are carefully determined, sometimes after prolonged legal contention. The truth that emerges is viewed as legitimate because it is the product of process. The prosecution and defense present two competing versions of the truth, and the jury is set the task of selecting one. Punishment is allotted, or the defendant acquitted, based on which version is selected.

One assumption behind the courtroom theater is that both versions of the truth, the one presented by the prosecution and the one suggested by the defense, are equal, in that either one could be selected by the jury as the winning version of what happened. What is left unanswered in this system of procedural justice is whether both sides are equally worthy if one version of these truths depends on its connection to racial prejudices that a jury of ordinary people may have.

Derek Chauvin’s trial for the killing of George Floyd is an example of this. The lawyers for former Minneapolis police officer Chauvin are basing their defense on racist notions of Black men as angry and uncontrollable, and Black communities as inherently threatening and menacing. Beliefs that would be considered overtly racist in other contexts are thus drawn into the courtroom without any scrutiny of their foundations. In the opening argument, the defense counsel specifically notes that “Mr. Chauvin stands five foot nine, 140 pounds. George Floyd is six three and 223 pounds.” Not only that, but in these first two weeks of the trial, the defense has resorted to another stereotype—that of the drug-addicted Black man. In the defense’s opening statement, Chauvin’s lawyer described the initial call to police. Floyd, it was reported, “was under the influence of something. . . He’s not acting right. He’s six to six-and-a-half feet tall.”

The racist seed of the uncontrollable Black man, planted early, has been nourished ever since. Defense counsel Eric Nelson insisted that video preceding Floyd’s death shows “the police squad car rocking back and forth” to highlight just how strong and wild Floyd was. So significant was the danger posed by the large and out-of-control Floyd, the defense wants the jury (and everyone else watching) to believe, that it justified the use of “maximum restraint technique,” what used to be called the “hobble or the hog tie.” It is in the justified and appropriate process of hog tying George Floyd that the defendant Derek Chauvin used “[one] knee to pin Mr. Floyd’s left shoulder blade and back to the ground and his right knee to pin Mr. Floyd’s left arm to the ground.”

As these details are tossed about in the courtroom, the damage to the larger discourse about race is already done. The Black man has been framed as an animal, a “hog” that must be tied up so that it won’t thrash and flail at having been apprehended. In the choreographed environment of the courtroom, where the focus is upon discerning a truth in the specific case at hand, there is no room to draw connections to America’s larger racist history. (...)

Having stated that a Black man can be hog tied and held to the ground, the defense also set out to prove that the Black crowd at Cup Foods was belligerent, hostile, and menacing. In his opening statement, Nelson told the jury that “as the crowd grew in size, similarly so too did their anger.” In the defense narrative, there is no effort to individuate the members of the crowd, to note that the “crowd” included a seventeen-year-old and her nine-year-old cousin out to get a snack at Cup Foods, an off-duty EMT who begged police to let her help Floyd, an MMA fighter who had trained alongside Minneapolis police officers, and an old man who could not help but cry when he took the stand. The defense’s deployment of the “angry Black man” became literal in the cross-examination of Donald Wynn Williams II, the mixed martial artist. “It’s fair to say you got angrier and angrier?” Nelson needled Williams, until he replied, “I grew professional and professional. I stayed in my body. You can’t paint me out to be angry.” (...)

There is no doubt that the murder of George Floyd has provoked a racial reckoning in the United States. In the anti-racist work and conversations that have taken place since, efforts have been made to expose surreptitious and systemic racism. But if the Derek Chauvin trial is any illustration, this work has not yet reached the courtroom. Regardless of the outcome of the trial itself and whether there is any justice for George Floyd, the direct appeals to racist prejudice, featuring the uncontrollable and angry Black man, the rough and lawless world of the inner city, and the mostly Black crowd as menacing, have all been normalized by the defense as plausible explanations of what happened. Having these ideas form such an integral and overt centerpiece of the trial in a courtroom, where they are not critically examined, suggests to those watching that such a narrative is permissible and possibly true.

by Rafia Zakaria, The Baffler | Read more:
Image: Defense attorney Eric Nelson delivers his opening statement at the Derek Chauvin trial. | AJC

Monday, April 12, 2021

Florida GOP Introduces Ballotless Voting In Disenfranchised Communities

TALLAHASSEE, FL—In an effort to streamline the state’s electoral process, Florida Republicans introduced a new bill to the legislature Thursday that would establish ballotless voting in disenfranchised communities. “We’ve eliminated the complex and insecure process of casting a ballot so that voters from underserved communities don’t have to worry about going to the polls or mailing anything in,” said co-sponsor Rep. Chris Sprowls of the popular proposal, which had already garnered unanimous support among Republicans in the House and Senate. “Come voting day, voters will be able to walk right up to the doors of their polling place, then turn around. No lines, no worry. We’ve listened to your concerns, and are confident that ballotless voting will address them.” At press time, Sprowls added that the bill would also help fight voter fraud by eliminating the likelihood of votes being erroneously counted.

by The Onion |  Read more:
Image: uncredited
[ed. See also: Georgia Lawmakers Warn Stricter Gun Regulation Could Cause Mass Shooters To Move To Other States; Man Opposes Taxing Rich Because He Knows One Day He Could Find $20 Bill On Ground; and, Report: Today Not One You Will Remember (Onion). And, more seriously: Republicans Are Making 4 Key Mistakes (The Atlantic): 

"Arizona Republicans propose to reduce the number of days for early voting. They want to purge voter rolls of people who missed the previous election. They want to cut off mail-in balloting five days before Election Day. And they want to require that affidavits of identity accompany any ballot that is mailed in.

Texas Republicans are pushing a bill to limit early voting, prohibit drive-through voting, limit the number of ballot drop-off locations, and restrict local officials’ ability to publicize voting by mail."]