EA - Future Matters #7: AI timelines, AI skepticism, and lock-in by Pablo
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #7: AI timelines, AI skepticism, and lock-in, published by Pablo on February 3, 2023 on The Effective Altruism Forum.

That man is born merely for a few, who thinks only of the people of his own generation. Many thousands of years and many thousands of peoples will come after you; it is to these that you should have regard.
Lucius Annaeus Seneca

Future Matters is a newsletter about longtermism and existential risk. Each month we collect and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform, and follow on Twitter. Future Matters is also available in Spanish.

Research

Ajeya Cotra’s biological anchors model for forecasting AGI timelines consists of three parts: an estimate of the compute required to train AGI with 2020 algorithms, a projection of how these compute requirements decrease over time due to algorithmic progress, and a forecast of how the size of training runs will increase over time due to declining hardware costs and increased investment in AI training. Tom Davidson’s What a compute-centric framework says about AI takeoff speeds extends Cotra’s framework to incorporate a more sophisticated model of how R&D investment translates into algorithmic and hardware progress, and also to capture the “virtuous circle” whereby AI progress leads to more automation in AI R&D and in turn to faster AI progress. This results in a model of AI takeoff speed, defined here as the time from AI being able to automate 20% of cognitive tasks to AI being able to automate 100% of cognitive tasks. Davidson’s median estimate for AI takeoff is approximately three years. This is an impressive and significant piece of research, which we cannot summarize adequately here; we hope to feature a conversation with the author in a future issue to explore it in more depth. The full report is available here. Readers are encouraged to play around with the neat interactive model.
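To make the structure of this kind of forecast concrete, here is a minimal toy sketch in Python. It is not Cotra’s or Davidson’s actual model: every parameter name and numerical value (the 2020 training-compute requirement, the rate at which algorithmic progress lowers that requirement, and the growth rate of available training compute) is an illustrative assumption, chosen only to show how the three components combine into a crossover year.

```python
# Toy illustration of a compute-based AGI-timeline forecast, in the spirit of
# the framework described above. All parameter values are hypothetical
# placeholders, not figures from Cotra's or Davidson's reports.

REQUIRED_FLOP_2020 = 1e36        # assumed compute to train AGI with 2020 algorithms
ALGORITHMIC_HALVING_YEARS = 2.5  # assumed years for algorithmic progress to halve that requirement
AVAILABLE_FLOP_2020 = 1e24       # assumed compute in the largest 2020 training run
COMPUTE_GROWTH_PER_YEAR = 3.0    # assumed annual growth factor in training compute


def required_flop(year: int) -> float:
    """Compute needed to train AGI in `year`, falling as algorithms improve."""
    return REQUIRED_FLOP_2020 * 0.5 ** ((year - 2020) / ALGORITHMIC_HALVING_YEARS)


def available_flop(year: int) -> float:
    """Compute available for the largest training run in `year`."""
    return AVAILABLE_FLOP_2020 * COMPUTE_GROWTH_PER_YEAR ** (year - 2020)


def forecast_agi_year(start: int = 2020, end: int = 2100):
    """Return the first year in which available compute meets the declining requirement."""
    for year in range(start, end + 1):
        if available_flop(year) >= required_flop(year):
            return year
    return None  # requirement never met within the horizon


if __name__ == "__main__":
    print("Toy forecast of AGI year:", forecast_agi_year())
```

In this toy setup, shortening the algorithmic halving time or raising the compute growth rate pulls the crossover year earlier, which is the basic mechanism by which different parameter choices yield shorter or longer timelines.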
Zac Hatfield-Dodds shares some Concrete reasons for hope about AI safety. A researcher at Anthropic (writing in a personal capacity), he takes existential risks from AI seriously, but pushes back on recent pronouncements that AI catastrophe is pretty much inevitable. Hatfield-Dodds highlights some of the promising results from the nascent efforts to align and interpret large language models. The piece is intended to “rebalance the emotional scales” in the AI safety community, which he feels have recently tipped too far towards a despair that he considers both unwarranted and unconstructive.

Holden Karnofsky's Transformative AI issues (not just misalignment) surveys some of the high-stakes issues raised by transformative AI, particularly those we should be thinking about ahead of time in order to make a lasting difference to the long-term future. These include not just existential risk from misalignment, but also power imbalances, early AI applications, new life forms, and persistent policies and norms. Karnofsky is inclined to prioritize the first two issues, since he feels very uncertain about the sign of interventions focused on the remaining ones.

Lizka Vaintrob argues that we should Beware safety-washing by AI companies. The term is coined by analogy with greenwashing, where companies misrepresent themselves as more environmentally conscious than they actually are rather than taking costly actions to reduce their environmental impact; safety-washing could likewise mislead not just consumers but also investors, employees, regulators, and others about whether an AI project took safety concerns seriously. One promising way to address this would be to develop common standards for safety, along with trustworthy methods for auditing and evaluating companies against those standards.

In How we could ...
