{"version":"1.1.0","chapters":[{"startTime":18.0,"title":"Introduction"},{"startTime":60.0,"title":"Footnote 1"},{"startTime":99.0,"title":"(Text resumes)"},{"startTime":180.0,"title":"Summary"},{"startTime":238.0,"title":"Footnote 2"},{"startTime":260.0,"title":"(Text resumes)"},{"startTime":286.0,"title":"Footnote 3"},{"startTime":330.0,"title":"(Text resumes)"},{"startTime":381.0,"title":"Our overall view"},{"startTime":466.0,"title":"Footnote 4"},{"startTime":577.0,"title":"(Text resumes)"},{"startTime":678.0,"title":"Note from the author:"},{"startTime":701.0,"title":"Footnote 5"},{"startTime":802.0,"title":"(Text resumes)"},{"startTime":850.0,"title":"Main Article Text Begins"},{"startTime":935.0,"title":"Footnote 6"},{"startTime":1024.0,"title":"(Text resumes)"},{"startTime":1028.0,"title":"1. Many AI experts think there’s a non-negligible chance AI will lead to outcomes as bad as extinction"},{"startTime":1075.0,"title":"Footnote 7"},{"startTime":1161.0,"title":"(Text resumes)"},{"startTime":1188.0,"title":"Footnote 8"},{"startTime":1252.0,"title":"(Text resumes)"},{"startTime":1325.0,"title":"Footnote 10"},{"startTime":1349.0,"title":"Footnote 11"},{"startTime":1553.0,"title":"(Text resumes)"},{"startTime":1569.0,"title":"Footnote 12"},{"startTime":1682.0,"title":"(Text resumes)"},{"startTime":1714.0,"title":"Footnote 13"},{"startTime":1842.0,"title":"(Text resumes)"},{"startTime":1855.0,"title":"Footnote 14"},{"startTime":1877.0,"title":"(Text resumes)"},{"startTime":1935.0,"title":"2. We’re making advances in AI extremely quickly"},{"startTime":1941.0,"title":"Image: cats dressed as programmers"},{"startTime":1992.0,"title":"Footnote 15"},{"startTime":2019.0,"title":"(Text resumes)"},{"startTime":2191.0,"title":"Sidebar: What is deep learning?"},{"startTime":2243.0,"title":"Diagram: A simple neural network"},{"startTime":2368.0,"title":"(Text resumes)"},{"startTime":2397.0,"title":"Footnote 16"},{"startTime":2764.0,"title":"(Text resumes)"},{"startTime":2804.0,"title":"Current trends show rapid progress in the capabilities of ML systems"},{"startTime":2869.0,"title":"Graph: Computation used to train notable AI systems"},{"startTime":2935.0,"title":"(Text resumes)"},{"startTime":2967.0,"title":"Footnote 19"},{"startTime":2994.0,"title":"(Text resumes)"},{"startTime":3020.0,"title":"Sidebar: Let's take a look at what GPT-3 is capable of doing"},{"startTime":3063.0,"title":"Footnote 17"},{"startTime":3073.0,"title":"(Text resumes)"},{"startTime":3117.0,"title":"Footnote 18"},{"startTime":3161.0,"title":"(Text resumes)"},{"startTime":3211.0,"title":"(Text resumes)"},{"startTime":3256.0,"title":"When can we expect transformative AI?"},{"startTime":3279.0,"title":"Footnote 20"},{"startTime":3406.0,"title":"(Text resumes)"},{"startTime":3430.0,"title":"Footnote 21"},{"startTime":3453.0,"title":"(Text resumes)"},{"startTime":3542.0,"title":"Footnote 22"},{"startTime":3575.0,"title":"(Text resumes)"},{"startTime":3676.0,"title":"Table: chance of transformative AI"},{"startTime":3744.0,"title":"(Text resumes)"},{"startTime":3771.0,"title":"3. Power-seeking AI could pose an existential threat to humanity"},{"startTime":3871.0,"title":"Footnote 23"},{"startTime":3894.0,"title":"It’s likely we’ll build advanced planning systems"},{"startTime":3907.0,"title":"Footnote 24"},{"startTime":4053.0,"title":"These systems seem technically possible and we’ll have strong incentives to build them"},{"startTime":4078.0,"title":"Footnote 25"},{"startTime":4146.0,"title":"(Text resumes)"},{"startTime":4174.0,"title":"Footnote 26"},{"startTime":4208.0,"title":"(Text resumes)"},{"startTime":4280.0,"title":"Footnote 27"},{"startTime":4310.0,"title":"(Text resumes)"},{"startTime":4345.0,"title":"Footnote 28"},{"startTime":4411.0,"title":"(Text resumes)"},{"startTime":4417.0,"title":"Advanced planning systems could easily be dangerously ‘misaligned’"},{"startTime":4433.0,"title":"Footnote 29"},{"startTime":4563.0,"title":"(Text resumes)"},{"startTime":4583.0,"title":"Footnote 30"},{"startTime":4637.0,"title":"(Text resumes)"},{"startTime":4652.0,"title":"Footnote 31"},{"startTime":4695.0,"title":"(Text resumes)"},{"startTime":4717.0,"title":"Three examples of “misalignment” in a variety of systems"},{"startTime":4751.0,"title":"Footnote 32"},{"startTime":4838.0,"title":"(Text resumes)"},{"startTime":4971.0,"title":"Gif: specification gaming"},{"startTime":4995.0,"title":"Footnote 33"},{"startTime":5014.0,"title":"(Text resumes)"},{"startTime":5026.0,"title":"Why these systems could (by default) be dangerously misaligned"},{"startTime":5286.0,"title":"It might be hard to find ways to prevent this sort of misalignment"},{"startTime":5323.0,"title":"Footnote 34"},{"startTime":5430.0,"title":"Footnote 35"},{"startTime":5463.0,"title":"Footnote 36"},{"startTime":5746.0,"title":"At this point, you may have questions like:"},{"startTime":5794.0,"title":"Disempowerment by AI systems would be an existential catastrophe"},{"startTime":5869.0,"title":"Footnote 37"},{"startTime":5913.0,"title":"People might deploy misaligned AI systems despite the risk"},{"startTime":5942.0,"title":"Footnote 38"},{"startTime":6158.0,"title":"This all sounds very abstract. What could an existential catastrophe caused by AI actually look like?"},{"startTime":6208.0,"title":"4. Even if we find a way to avoid power-seeking, there are still risks"},{"startTime":6243.0,"title":"AI could worsen war"},{"startTime":6262.0,"title":"Footnote 39"},{"startTime":6291.0,"title":"Footnote 40"},{"startTime":6324.0,"title":"(Text resumes)"},{"startTime":6344.0,"title":"Footnote 41"},{"startTime":6469.0,"title":"(Text resumes)"},{"startTime":6498.0,"title":"AI could be used to develop dangerous new technology"},{"startTime":6509.0,"title":"Footnote 42"},{"startTime":6558.0,"title":"Footnote 43"},{"startTime":6598.0,"title":"Footnote 44"},{"startTime":6613.0,"title":"(Text resumes)"},{"startTime":6616.0,"title":"AI could empower totalitarian governments"},{"startTime":6635.0,"title":"Footnote 45"},{"startTime":6678.0,"title":"(Text resumes)"},{"startTime":6693.0,"title":"Other risks from AI"},{"startTime":6766.0,"title":"So, how likely is an AI-related catastrophe?"},{"startTime":6929.0,"title":"Footnote 46"},{"startTime":6969.0,"title":"(Text resumes)"},{"startTime":7033.0,"title":"Footnote 47"},{"startTime":7081.0,"title":"(Text resumes)"},{"startTime":7148.0,"title":"5. We can tackle these risks"},{"startTime":7196.0,"title":"Technical AI safety research"},{"startTime":7254.0,"title":"AI governance research and implementation"},{"startTime":7313.0,"title":"Here are some more questions you might have:"},{"startTime":7341.0,"title":"6. This work is extremely neglected"},{"startTime":7444.0,"title":"What do we think are the best arguments we’re wrong?"},{"startTime":7520.0,"title":"We might have a lot of time to work on this problem"},{"startTime":7642.0,"title":"AI might improve gradually over time"},{"startTime":7740.0,"title":"We might need to solve alignment anyway to make AI useful"},{"startTime":7899.0,"title":"The problem could be extremely difficult to solve"},{"startTime":7982.0,"title":"We could be wrong that strategic AI systems are likely to seek power"},{"startTime":8005.0,"title":"Footnote 48"},{"startTime":8169.0,"title":"Footnote 49"},{"startTime":8200.0,"title":"(Text resumes)"},{"startTime":8338.0,"title":"Arguments against working on AI risk to which we think there are strong responses"},{"startTime":8439.0,"title":"Is it even possible to produce artificial general intelligence?"},{"startTime":8581.0,"title":"Why can't we just unplug a dangerous AI?"},{"startTime":8678.0,"title":"Couldn't we just 'sandbox' any potentially dangerous AI system until we know it's safe?"},{"startTime":8764.0,"title":"Surely a truly intelligent AI system would know not to disempower everyone?"},{"startTime":8853.0,"title":"Can't you just not give an AI system bad goals?"},{"startTime":9010.0,"title":"Isn't the real danger from actual current AI — not some sort of futuristic superintelligence?"},{"startTime":9142.0,"title":"But can't AI also do a lot of good?"},{"startTime":9206.0,"title":"You'd have to be really stupid to build or use a system that could genuinely kill everyone, right?"},{"startTime":9328.0,"title":"Footnote 50"},{"startTime":9371.0,"title":"(Text resumes)"},{"startTime":9381.0,"title":"Why shouldn't I dismiss this as motivated reasoning by a group of people who just like playing with computers and want to think that's important?"},{"startTime":9461.0,"title":"This all reads, and feels, like science fiction"},{"startTime":9615.0,"title":"Can it make sense to dedicate my career to solving an issue based on a speculative story about a technology that may or may not ever exist?"},{"startTime":9711.0,"title":"Is this a form of Pascal's mugging — taking a big bet on tiny probabilities?"},{"startTime":9960.0,"title":"What you can do concretely to help"},{"startTime":10024.0,"title":"Technical AI safety"},{"startTime":10027.0,"title":"Approaches"},{"startTime":10171.0,"title":"Key organisations"},{"startTime":10307.0,"title":"Conceptual AI safety labs:"},{"startTime":10373.0,"title":"AI Safety in Academia"},{"startTime":10497.0,"title":"AI governance and strategy"},{"startTime":10501.0,"title":"Approaches"},{"startTime":10562.0,"title":"Footnote 51"},{"startTime":10630.0,"title":"Key organisations"},{"startTime":10822.0,"title":"Complementary (yet crucial) roles"},{"startTime":10881.0,"title":"Other ways to help"},{"startTime":10983.0,"title":"Want one-on-one advice on pursuing this path?"},{"startTime":11018.0,"title":"Find vacancies on our job board"},{"startTime":11063.0,"title":"Top resources to learn more"}]}