I’m not sure if I really believe in the AI doomerism stuff, but one thing I find discouraging is how often we screw up problems analogous to AI alignment (predicting how a complex system will behave given a set of rules) that are much easier.
For example, a law was recently passed with the goal of protecting people with sesame allergies, and the result is that sesame is now being put in many more products. Seriously, I’m not kidding. The Food Allergy Safety, Treatment, Education and Research (FASTER) Act, which was passed with bipartisan support, basically said:
- If you’re selling a food that contains sesame, it has to be labeled.
- If you’re selling a food that doesn’t contain sesame, you need to follow a bunch of burdensome rules, including careful cleaning of manufacturing equipment, to be absolutely 100% sure there is no contamination.
The result is that a lot of companies are now choosing to add small amounts of sesame to products that were previously sesame-free, and labeling it. According to the article, this is causing some pretty serious hardship for people who are allergic to sesame:
The article also quotes some consumer protection advocates and politicians who had lobbied for the passage of the law. None of them apologized or took responsibility for the situation. (...)
This is a general problem in public policy. Sometimes political conflicts happen because different groups have competing interests (as public choice theory describes) and advocate for policies accordingly. But other times there are alignment problems: we simply fail to correctly predict what the results of a policy will be, so the outcome doesn’t match what we wanted even when we get the policy we were advocating for.
by Mike Saint-Santoine, Mike's Blog