[ed. A lot to unpack here re: all things AI, from the micro to the macro, with broad insights and predictions concerning the technology, the industry, and the issues, debates, politics, and other "theses" surrounding AI now and going forward (with many good links as well). Highly recommended. Excerpts:]
Most proposed “AI regulations” are ill-conceived or premature
1. There is a substantial premium on discretion and autonomy in government policymaking whenever events are fast moving and uncertain, as with AI.
2. It is unwise to craft comprehensive statutory regulation at a technological inflection point, as the basic ontology of what is being regulated is in flux.
3. The optimal policy response to AI likely combines targeted regulation with comprehensive deregulation across most sectors.
4. Regulations codify rules, standards and processes fit for a particular mode of production and industry structure, and are liable to obsolesce in periods of rapid technological change.
5. The benefits of deregulation come less from static efficiency gains than from the greater capacity of markets and governments to adapt to innovation.
6. The main regulatory barriers to the commercial adoption of AI are within legacy laws and regulations, mostly not prospective AI-specific laws.
7. The shorter the timeline to AGI, the sooner policymakers and organizations should switch focus to “bracing for impact.”
8. The most robust forms of AI governance will involve the infrastructure and hardware layers.
9. Existing laws and regulations are calibrated with the expectation of imperfect enforcement.
10. To the extent AI greatly reduces monitoring and enforcement costs, the de facto stringency of all existing laws and regulations will greatly increase absent a broader liberalization.
11. States should focus on public sector modernization and regulatory sandboxes and avoid creating an incompatible patchwork of AI safety regulations.
AI progress is accelerating, not plateauing
1. The last 12 months of AI progress were the slowest they’ll be for the foreseeable future.
2. Scaling LLMs still has a long way to go, but will not result in superintelligence on its own, as minimizing cross-entropy loss over human-generated data converges to human-level intelligence (see the first sketch after this list).
3. Exceeding human-level reasoning will require training methods beyond next-token prediction, such as reinforcement learning and self-play, that (once working) will reap immediate benefits from scale (see the self-play sketch after this list).
4. RL-based threat models have been discounted prematurely.
5. Future AI breakthroughs could be fairly discontinuous, particularly with respect to agents.
6. AGI may cause a speed-up in R&D and quickly go superhuman, but is unlikely to “foom” into a god-like ASI given compute bottlenecks and the irreducibility of high-dimensional vector spaces, i.e., Ray Kurzweil is underrated.
7. Recursive self-improvement and meta-learning may nonetheless give rise to dangerously powerful AI systems within the bounds of existing hardware.
8. Slow take-offs eventually become hard.
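Thesis 2 leans on a compression argument worth making concrete. Here is a minimal sketch (a toy bigram model of my own, not anything from the essay; the corpus and names are illustrative assumptions) of why next-token cross-entropy over human text bottoms out at the entropy of that text: a model that exactly matches the human distribution hits the floor, and no further scaling pushes the loss below it.

```python
# Toy illustration (not from the essay): cross-entropy over human-generated
# text is bounded below by the conditional entropy of that text, so a model
# that perfectly matches the data distribution cannot improve further.
import math
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()  # stand-in "human data"

# Empirical next-word distribution p(next | prev), estimated from the corpus.
pair_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    pair_counts[prev][nxt] += 1

def cross_entropy(model_prob):
    """Average -log2 of the model's probability for each observed next word."""
    pairs = list(zip(corpus, corpus[1:]))
    return -sum(math.log2(model_prob(p, n)) for p, n in pairs) / len(pairs)

# Model A: a uniform guess over the vocabulary (a weak predictor).
vocab = sorted(set(corpus))
def uniform(prev, nxt):
    return 1.0 / len(vocab)

# Model B: the empirical distribution itself -- the best any predictor can
# do on this data; its cross-entropy equals the data's conditional entropy.
def matched(prev, nxt):
    counts = pair_counts[prev]
    return counts[nxt] / sum(counts.values())

print(f"uniform model: {cross_entropy(uniform):.3f} bits/token")
print(f"matched model: {cross_entropy(matched):.3f} bits/token  (the floor)")
```

The gap between the two numbers is what scaling buys; once the model matches the distribution of the human corpus, the floor is the entropy of human output itself, which is the sense in which pure next-token prediction converges to, rather than past, human level.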
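Thesis 3's self-play point can be illustrated the same way. Below is a toy sketch (again my own, not the essay's; the game, hyperparameters, and function names are all assumptions): tabular Monte Carlo self-play on the subtraction game Nim. Because the agent manufactures its own training data by playing against itself, its policy can surpass any fixed corpus of games, which is the property the thesis points to.

```python
# Toy self-play sketch (not from the essay): Monte Carlo control on Nim.
# Players alternately take 1-3 stones; whoever takes the last stone wins.
# Optimal play leaves the opponent a multiple of 4 stones.
import random
from collections import defaultdict

Q = defaultdict(float)          # Q[(stones, move)] for the player about to move
ALPHA, EPS, EPISODES, START = 0.5, 0.1, 20_000, 21

def moves(n):
    return range(1, min(3, n) + 1)

def choose(n, greedy=False):
    if not greedy and random.random() < EPS:
        return random.choice(list(moves(n)))       # explore
    return max(moves(n), key=lambda m: Q[(n, m)])  # exploit

for _ in range(EPISODES):
    n, history = START, []
    while n > 0:                # both sides of the game use the same policy
        m = choose(n)
        history.append((n, m))
        n -= m
    # The player who took the last stone wins (+1). Walk back through the
    # game, flipping the sign each step since the players alternate.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

for n in (5, 6, 7, 9):          # optimal moves are 1, 2, 3, 1
    print(f"{n} stones -> take {choose(n, greedy=True)}")
```

No human games appear anywhere in this loop; the training distribution improves as the policy does, which is exactly the feedback next-token prediction on a fixed corpus lacks.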