
The /acc Cast
The /acc Cast is a thoughtful exploration of accelerationism, AI development, and emerging technologies through a defensive accelerationist (d/acc) lens. Host Hunter Horsfall unpacks influential writings from leading voices in AI and tech, examining the promises and perils of rapid technological advancement. Each episode dives deep into the philosophical and practical implications of building toward artificial general intelligence, while advocating for individual agency, data sovereignty, and safety considerations in our race toward the future. Whether analyzing blog posts from industry leaders or exploring cutting-edge tools and trends, The /acc Cast bridges the gap between techno-optimism and necessary caution in our rapidly evolving digital landscape.
The /acc Cast Episode 1: Unpacking Sam Altman's "The Gentle Singularity" Post
Hunter examines Sam Altman's recent blog post "The Gentle Singularity," offering a critical defensive accelerationist perspective on the OpenAI CEO's vision of humanity's path to superintelligence. While Altman paints a largely utopian picture, Hunter questions whether we're racing ahead without adequate safety measures and individual protections.
This episode explores the massive AI investment landscape approaching hundreds of billions of dollars globally, the complexities of the alignment problem, and why casually mentioning "plugging in" to brain-computer interfaces should concern us all. Hunter also dives into the exciting democratization of software development through AI tools, sharing insights from pop-up cities and emerging tech communities.
"We are past the event horizon, but takeoff has started. Humanity is close to building digital superintelligence, and at least so far, it's much less weird than it seems like it should be. m These are the opening words of Sam Altman's post, The Gentle Singularity. If you guys know me, I love to explore new writings about AI. I love to sort of unpack and unfold these writings and sort of examine them from like a defensive accelerationist lens. And I've read a lot of different things, like predominantly by Vitalik, read "D/acc one year later" I read "My Techno Optimism." um I read Machines of Love and Grace by Dario Amadei. I've read little bits and pieces of Superintelligence by Nick Bostrom. ah But I have not actually read a ton of Sam Altman's blog posts. And this one caught my eye. The title of it right off the bat. to me just seems a bit like an alarming place to start. Maybe not just alarming, but decidedly like a little bit naive, know, "The Gentle Singularity, you know, the start of this, even the start of this, ah yeah, we're past the event horizon, the takeoff has started, right? Sort of would imply that we are through the thick of the societal struggles in adopting AI or... um The big struggles that we thought might occur are maybe a thing of the past. I kind of feel like, you know, this just isn't, it's just not true. um Anyone who's read, for example, AI 2027, which was written by um four different authors, some of them were former folks from OpenAI, some folks who work in AI safety and policy, um some folks who've some really strong. some strong predictions about the future. know, AI 2027 is definitely more on the side of, look, you know, we're about to enter a period of rapid, rapid development AI and that it's likely going to be to resemble something like a Cold War between the US and China in terms of how we're building and how fast we're building and how much money we're investing. So it's interesting to read. 
this post by Sam Altman that almost feels like a victory lap, right? Like, we're on the path to superintelligence. He talks about the dark areas receding and us being in an increasingly well-lit space. I think this is of course true in many ways, but the undercurrent is that AI investments are poised to approach $200 billion globally this year, and I think that number is only going to continue increasing. We're going to see more and more data centers, some of them probably just creating synthetic data to train models, and other super-large gigafactories, probably built just to train AI. So I would say, right off the bat with "The Gentle Singularity," I don't like the fact that I feel like I'm being pacified when I know the reality of how much money is being spent on developing AI. And personally, I feel that maybe we lack some of the necessary structure to even begin this whole process. Maybe we're missing some fundamental technologies for individuals, so that we can do this in a safe way, so that people have self-determination and autonomy with their personal data, with their digital footprint. This is something I've spent the last year thinking about, something I continue to think about on a daily basis. But Sam goes on to talk about the massive gains AI will produce for scientific progress. And this seems obvious and is echoed by almost everyone else who is competing to develop the most powerful AI tooling. Of course, all of these guys present the utopic viewpoint of what the world could look like if all the best predictions about AI come true, without focusing too much on the many problems of alignment, which I think is extremely complicated. Alignment with whom? Alignment with what?
I mean, of course, as you align with a broader audience, it gets increasingly complicated in terms of the number of paradoxes it introduces. And of course, increased intelligence comes with massive benefits and a tailwind of faster scientific development. I'm all for that. I think that's great, and I think we are moving generally in this direction. But in creating something increasingly autonomous with a higher degree of intelligence than we possess, shouldn't we maybe be considering the safety portion a bit more deeply? This is just a focus for me. Nick Bostrom does a really great job in Superintelligence telling the parable of the sparrows. These sparrows are a little bit helpless, a little bit weak. They're in a tree, and in their moment of weakness they're all discussing with one another: wouldn't it be great if we had some more powerful species to protect us? Maybe we could find a baby weasel, or maybe we could find an owl egg. If we could find an owl egg, then we could raise an owl, and that owl would ultimately be our benefactor, our protector. And so off the sparrows go to search for a baby weasel or an owl egg. Meanwhile, there's one sparrow who says, wait, wait, wait: shouldn't we figure out whether we know how to raise an owl before we go searching for an owl egg? An owl is a lot more capable than we are; it's also sort of our natural predator. Shouldn't we be a little more self-assured that we can raise an owl to be our benefactor before we bring it into our home and our community? And Bostrom goes on to describe how this whole story is really a parable for superintelligence, right? Superintelligence is, in this case, the owl egg.
Shouldn't we as human beings be a little more assured that we can maintain control over some increasingly autonomous form of intelligence that will eventually become superintelligent, before we just put our foot on the gas pedal and only accelerate? This is just something I think about a lot. So when I'm reading "The Gentle Singularity," I feel it's maybe deliberately skipping over this safety and alignment part, which makes me a little nervous. Sam does acknowledge that "...a small misalignment multiplied by hundreds of millions of people can cause a great deal of negative impact." I think this is just obviously true. But he also says that we'll see massive gains in terms of what one person can accomplish in these coming years. And I think that is by far one of the coolest things about AI. Before this last year, I'd never had any experience recording a podcast. I'd never edited a podcast, I'd never done anything like creating short-form content; I'd done some social media stuff, but over the course of this last year I learned that with AI tools you can do all of this pretty effortlessly, which I think is really fascinating and really cool. So I do think there are huge benefits here: what one person will be able to accomplish is rapidly changing, and it's one of the most exciting things about AI. He also talks a little bit about the cost of intelligence, which I think is really interesting. He says that the cost of intelligence will decrease and eventually look more like the cost of electricity.
It's interesting to think about intelligence as a utility. He goes on to say that right now a ChatGPT query uses about 0.34 watt-hours of energy, the equivalent of running an oven for about one second or a high-efficiency light bulb for a few minutes, and about one fifteenth of a teaspoon of water. So that's one query; obviously deep research and some of the more advanced features are going to look a little bit different. But it is interesting to think about the cost of intelligence. He goes on to describe how we'll distribute intelligence, how to get superintelligence to the masses as it emerges, which again, I think is jumping the gun just a little bit. We've got to pump the brakes. But there's this idea that as we approach the asymptote, the steepest part of exponential growth, what's ahead always looks vertical while what's behind looks flat. And these are roughly his words, not mine. I think that's a really apt description. The speed of technological progress is just going to continue to increase. It's not slowing down. And increasingly, the future is going to look far, far different from the past. We're going to be making leaps and bounds of progress, and I think progress is great. There are massive benefits to humanity across fields like neuroscience and biology. In "Machines of Loving Grace," Dario Amodei writes that he thinks we'll see the same progress in the five to ten years that follow the advent of something like superintelligence that we saw in the last 50 to 100 years. So it does get really different when you consider the progress we used to make in ten years now happening in a year, or maybe a month. But I would say it's still a pretty utopic vision.
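As a sanity check on those per-query energy figures, the comparisons are easy to reproduce with back-of-the-envelope arithmetic. The 0.34 watt-hour number is Altman's; the oven and bulb power ratings below are my own assumptions, picked as typical values:

```python
# Sanity-check the energy comparison quoted from "The Gentle Singularity":
# ~0.34 Wh per average query, said to equal running an oven for about one
# second or a high-efficiency bulb for a few minutes.
# Assumed appliance ratings (not from the post): oven ~1.2 kW, LED bulb ~10 W.

QUERY_WH = 0.34                       # watt-hours per query (Altman's figure)
QUERY_J = QUERY_WH * 3600             # 1 Wh = 3600 J, so ~1224 J

OVEN_W = 1200                         # assumed oven draw in watts
BULB_W = 10                           # assumed LED bulb draw in watts

oven_seconds = QUERY_J / OVEN_W       # energy / power = time
bulb_minutes = QUERY_J / BULB_W / 60

print(f"{QUERY_J:.0f} J: oven for {oven_seconds:.1f} s, "
      f"LED bulb for {bulb_minutes:.1f} min")
```

Under those assumed ratings, the query comes out to roughly one oven-second and about two bulb-minutes, so the comparisons in the post are internally consistent.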
And I think there's an important piece here that's hard to consider. Maybe it's just too unknown, maybe it's too much for people to think about. This utopic view is great, and I think a lot of these utopic things are coming true and will continue to come true. But I really do think that the safety part of this, the alignment part, the defensive part, is just really important to consider, and that fundamentally maybe we're missing some necessary protections for individuals that are only going to become more important as AI gets more and more intelligent. One of the things Sam says a little later in "The Gentle Singularity" is that he thinks one of our advantages as human beings is that we care for each other and care about what other people think, but we don't care very much about machines. But might this same concept become true of the very machines and intelligences that we are building? Maybe we build machines that mimic this behavior, and as they become increasingly autonomous, they have a preference for, I don't know, their own kind, but don't necessarily care so much for humans. Maybe their spec is more aligned with scientific progress and solving complex physics problems than it is with preserving human life. I think as we accelerate, these things are really important to consider. Especially as this becomes a global race to have the most intelligent AI, it's extremely important to consider what the implications of that are, and what it looks like to live in a world with a global competitive element around having the most powerful AI. You know, almost like an arms race, right? I think it really changes what we will consider as a collective when an element of competition is introduced.
And this is the thing that's most worrisome to me: amid the excitement of competition, the desire to be the front-runner, the power that comes with being the front-runner, how do we simultaneously consider safety concerns? How do we look at this from an intelligible standpoint where we don't find ourselves down a rabbit hole, in the trap of irreversible human disempowerment? So yeah, I think that ultimately "The Gentle Singularity" is really worth a read. It presents a view that's largely utopic. It feels like he glosses over an idea he presents later of people "plugging in," right? How does he describe it? Using high-bandwidth brain-computer interfaces. This is just sort of thrown in towards the end of the post. He's like, yeah, some people are going to plug in using these BCIs. This is something I've really spent a lot of time thinking about, and I find it really alarming that someone would just gloss over this concept. You think about someone being able to connect their consciousness to a machine using a BCI, and I guess the question is: is this BCI something that is sovereign? Does it represent a sovereign extension of myself? Or do I have to pay some AI SaaS company to be an intermediary between me, my thoughts, and what I'm interacting with? And beyond that, he talks a little bit about the alignment problem: as a collective, we've got to try and align AI with what we want. He doesn't really go into who "we" is. He doesn't go too far down this rabbit hole. He's saying we'll just align it with what we want as a collective. But globally, and really even just in the U.S., people have vastly different opinions about things. The larger that collective becomes, the harder it is to align with everyone's desired outcomes.
And I think that this is just an obvious paradox in what he's saying. He does do a good job of describing social media algorithms as behaving like misaligned AI, exploiting the reward center in your brain to trade your long-term vision for short-term gain, kind of like sugar. You know, there's a reason they call it brain rot. This is the thing I'm most concerned about: as AI becomes more powerful, it has more and more context for who you are and who I am. As we feed our personal context into an AI, I think it's really important that we have some control over it and some ability to revoke access to our data. Because what could happen is that you essentially give AI, or whoever holds that data, the keys to program your attention, which I think is obviously really dangerous and really problematic. So he has his rough two-part plan. First, solve the alignment problem. Which, okay, yeah, we'll just solve the alignment problem. I think this is easier said than done, obviously, and I think pretty much everyone else in the AI space would say the same. And then he says that after that, we focus on making superintelligence widely distributed and cheaply available. So he's saying, as soon as we get to the point that AI is aligned, we get superintelligence to as many people as possible and give them lots of freedom within boundaries that society has to decide on. And again: which society, right? We're distributing superintelligence, but to whom, and under whose rules? Maybe this is just more of a utopic visionary thing, but to me it feels like we're skipping a lot of important steps here. Sure, it would be great for people to have access to intelligence that enables them to accomplish tasks and to have a generally really high quality of life. But I do think that we can't reduce this problem of alignment
to a "we'll cross that bridge when we get there" mentality, right? It's just like the owl egg in Nick Bostrom's parable. Don't we want to decide how we're going to try and align an AI before we hatch the owl egg? Because we've kind of already hatched the owl egg. This feels like kicking the can down the road. We're saying we'll deal with the alignment problem eventually; we're just going to keep developing, and eventually we'll deal with it. To me, this isn't a compelling argument, and there's not much I feel like I can do about it. That's part of the reason I'm filming this, just to share my thoughts. I think there are better ways of developing this technology, ways that aren't pure accelerationism, that aren't purely focused on hitting the gas. I still personally believe there are some fundamental strategies we can adopt to protect individuals as superintelligence becomes more powerful, to protect their agency and autonomy. Agency and autonomy are things I think about a lot, things I talk about a lot. We already have enough things in life that try to hijack our reward center, from social media like Instagram, to the food we consume or the beverages we choose to drink. There are tons of things already competing for our attention. And I think that if something like AGI or superintelligence is in charge of, for example, constructing your social media algorithm, that's really dangerous. I want the ability, personally, to reset that and say: we are wiping the slate clean and starting fresh, because you're serving me content that is counterproductive, that is basically short-circuiting my brain. And I want to be able to redirect this in a way that is positive for me.
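The kind of user-held control described here, where you can grant an AI scoped access to your personal context and later wipe the slate clean, can be sketched as a toy revocable-grant store. This is purely illustrative: all names and the design are my own assumptions, not how any real data-sovereignty system is built.

```python
# Toy sketch of "holding the keys" to your own data: consumers (an AI app,
# a recommender) read fields only through tokens the owner can revoke.
# Hypothetical design for illustration only.

import secrets

class PersonalDataVault:
    """Owner-controlled store: data is read only via revocable, scoped tokens."""

    def __init__(self):
        self._data = {}    # field name -> value
        self._grants = {}  # token -> set of fields that token may read

    def put(self, field, value):
        self._data[field] = value

    def grant(self, fields):
        """Issue a fresh token scoped to the named fields."""
        token = secrets.token_hex(8)
        self._grants[token] = set(fields)
        return token

    def revoke(self, token):
        """Wipe the slate clean for this consumer; its token stops working."""
        self._grants.pop(token, None)

    def read(self, token, field):
        if field not in self._grants.get(token, set()):
            raise PermissionError(f"no access to {field!r}")
        return self._data[field]

# The owner shares watch history with a recommender, then resets the relationship.
vault = PersonalDataVault()
vault.put("watch_history", ["clip_a", "clip_b"])
token = vault.grant(["watch_history"])
print(vault.read(token, "watch_history"))  # works while the grant is live
vault.revoke(token)                        # after this, reads raise PermissionError
```

The design choice worth noticing is that access lives with the owner's vault, not with the consumer: revocation is a local operation for the user, rather than a request the AI provider may or may not honor.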
I want to be able to hold the keys to program my own attention. And I think there are versions of this technology being developed. I spent the better part of the last year working with a dear friend of mine who had a former residency at OpenAI and also worked in the DID space at SpruceID. We spent a lot of time discussing this problem together, and he's working on a project called TinyCloud, which I think is still exceptionally important. It's a technology designed to give users self-determination and autonomy with their personal data. I think this sort of technology is really, really important. I think zero-knowledge technologies and things like zkTLS are really interesting for preserving people's privacy, for giving them some protection while still sharing some of their personal data. It's going to be really important, in the future we're moving towards, that people have protections that let them mediate what information they're sharing with AI and how, and that give them the ability to reset the way they're interacting with AI. I think it's really important to be able to wipe the slate clean. Otherwise we're creating a whole new type of echo chamber, one that I think is really scary and fundamentally different from the ones we've created in the past. Oh, there's one last thing that Sam talks about in "The Gentle Singularity" that I think is really interesting. I'll read you a quote here. He says, "For a long time, technical people in the startup industry have made fun of the idea guys; people who had an idea and were looking for a team to build it. It looks to me like they are about to have their day in the sun." I'd say to some degree I'm an idea guy, and to me, this is one of the coolest parts of AI.
We really are entering uncharted territory, where anyone with a great idea and a passion for building can apply that drive and ambition and use tools like Replit, Lovable, and GitHub Copilot, these sorts of no-code and AI-assisted coding tools, to turn their ideas into applications. This is super exciting. As this gets easier, the moat around developing software is sort of drying up, and individuals with great ideas who formerly couldn't operate in this domain now have an opening to make their contribution. So I'm really excited about this. I have some ideas for ways I want to apply AI, and what's cool is that with tools like Lovable and Replit, I'm beginning to see the opportunities where I can just talk with AI and it can code something for me. And, you know, a lot of technical guys will say something like, well, it's going to write shitty code. This is probably true to some degree, but it's getting better and better every day. Even just in the last year, working on podcast stuff, it was amazing to me using tools like OpusClip, Riverside, and Buzzsprout. When I first started using them, they were maybe missing some of the really important features, but they had met the baseline in a way that was really compelling, and I would say they were shipping features daily. I'd think of some feature and say, man, it would be great if I could do this when I'm generating short-form content, and lo and behold, a few days later the feature would be added. So yeah, again, this is one of the coolest parts of AI: the way it enables people with good ideas and a vision to execute that vision. We're getting closer and closer to just imagining an application and being able to use it, which is really cool.
It's something I encountered a lot traveling to different pop-up cities over this last year, from Aleph, to Edge City Lanna in Thailand (there's another Edge City happening in California right now), to Network School. I was recently at Balaji's Network School, and that was a place with a whole wide array of really sharp, super intelligent operators and founders and entrepreneurs and yogis and meditators, all investigating tools like Replit and Lovable and just shipping their own applications. It's really cool; I really love it. In a few days, actually, I'm heading to Cannes for Oz City, a pop-up city at the intersection of AI and crypto, so there will be a lot of people developing with these tools there as well. I'm super excited to go participate and be a part of that. So yeah, I just thought I'd share my thoughts about this post because I found it really compelling. I'll probably plan on doing more of this in my downtime, because I love reading about AI and I like sharing it with my friends. And if you listen and think it's interesting, I hope you get something out of this. Please feel free to share with me any d/acc-related content or articles or things you're reading that you think might be interesting to talk about. I think I'll probably just keep doing these, and I'll try to find some pals to chat with as well, because I think it's really important that we have an open discussion about this stuff, and that collectively, as a society, we are hopefully shaping the direction some of these things take. Because ultimately, if someone builds something like AGI or superintelligence, it's going to impact all of us. It's going to impact every single one of our lives. It already is, right?
The beginning stages of this already are. So yeah, I'd love to continue the discussion. Thanks for taking the time to watch my little unpacking of "The Gentle Singularity." If any of y'all want to talk about it in greater detail, please feel free to reach out to me. I would love to chat with you and continue the conversation. All right, everyone, have a great day.