Conor: I mean, I sent you some topics, like a rabbit hole that I kind of...
Ben: Yeah, you sent me some cryptic topics, I have to say.
Conor: Algebra laws, BMF, Haskell deforestation. Are you familiar with Haskell deforestation? Does that mean anything?
Ben: I'm not familiar with any of these things, really. BMF I looked up. Is that, well, I'm assuming, is that the Bird-Meertens formalization?
Conor: Yes.
Conor: I think it's formalism, actually. Sure.
Ben: And so, let me open... this is the process of algorithmic transformation of programs, typically inside the compiler, in a functional language compiler. Similar to deforestation, so it ties in with that. Which is the same idea. I've heard of it. I know almost nothing about it, but having read half a Wikipedia page, I can string together a few sentences.
Conor: Yeah, let me see. I'm trying to find the paper.
Ben: So I gather that it's related to things like fold fusion. And it's called deforestation because of the process of removing trees, simplifying tree structures.
Conor: I don't know why it's called that, because honestly, I haven't actually gone and read the paper yet.
Ben: Well, I have to think. Philip Wadler used that expression in a paper: deforestation.
Conor: Yeah, because that's the thing: I have too many now of these deep dive papers that I've had these research engines go and build.
Ben: Yeah. Well, this touches on... I'll let you search.
Conor: Well, I'm just looking, because I have these three different papers, and all of them have to do with Richard Bird's algebra laws. He's the author of The Algebra of Programming, which I have not read yet, but I know that you've put it on my radar, so I assume that you've read it, or at least perused it, at some point in time.
Ben: I have. Now, if you're going to put me on the spot, I can't remember what's in it, but I have read it, yes. Richard Bird also did a couple of other books, I'm thinking. Thinking Functionally with Haskell, is that Richard Bird? The tiger book?
Conor: Is the tiger book Algorithm Design with Haskell? That was written by Bird and co-authored by Jeremy Gibbons. Give me two seconds here.
Ben: Okay, so here are the books I was thinking of. The tiger book is Thinking Functionally with Haskell. Okay, so there's two tiger books. Well, the other one is a lion book. Oh, it's a lion book: Algorithm Design with Haskell. That's what you just said, Richard Bird and Jeremy Gibbons. I actually have, not The Algebra of Programming, but The Fun of Programming, by more of the usual suspects: Jeremy Gibbons and Oege de Moor. Probably mispronouncing that, sorry; it's probably Dutch. And this book is also one I was thinking of: Pearls of Functional Algorithm Design. This is a fantastic book. Really good.
Conor: The Pearls. I'm going to put that on my list, my ever-growing list. That's a risk of talking to Ben, folks. Don't think he's going to help you get through your reading list; he's only going to add to it. I literally have books sitting on my shelf that I purchased years ago. I still have The Annotated Alice, and I tell myself every Christmas, because I don't know why, but it seems like Christmas is the right time, you've got a few days off to read the book. Still have not read it; it's sitting on my shelf. It's a beautiful book.
Ben: Oh, you should read it. It's great.
Conor: I know, every time I mention it you say the same thing: you've got to put that at the top of your list. And I'm actually listening to an audiobook that Phineas Porter, a past guest on ADSP, recommended: a book called The Fund, which is a book about the myth of Ray Dalio. I don't actually know much about Ray Dalio, other than that he's got Bridgewater, et cetera. Anyways, he told me to read that book like a year and a half ago, and I immediately purchased it, and I'm only just getting around to reading it. Too little time in the day. We need Doctor Strange's ability to astral project and read stuff while you sleep.
Ben: Well, I recently reread The Annotated Alice, and I'm probably going to read it again in the next month, because the talk I'm giving at C++Now is the Plato, Magritte, Sartre, and Carroll talk.
Conor: So you're doing research for your upcoming talk.
Ben: I am.
Conor: It's one of the best ways, if you really want to force yourself to do something: sign up to give a talk. You've already read the book, so, all right. Pearls of Functional Algorithm Design by Richard Bird is now on my list again. Oh, there we go, Google knew what I was looking for. So first of all, we've got to figure out what year it is: copyright 2010. Which means it's probably the Haskell phase.
Ben: Yeah, looks like it.
Ben: Yes, it looks like the code in here is Haskell.
Conor: Yeah, I've got a little Google preview, and I'm looking at zip, filter, repeat. And what's curious, how did I end up going down this rabbit hole? I was doing some work-related stuff, and I stumbled across something that made me think of a paper that my boss put on my radar two or three years ago, in 2023. It was a 1989 paper called Algebraic Identities for Program Calculation. It's only a seven or eight page paper, I want to say. Who's the author? Richard Bird. And I didn't actually realize, when my boss put the paper on my radar, that it was written by the same individual who had written The Algebra of Programming, because that's what I most closely associated Bird with at the time. So I haven't read any of Bird's textbooks, but The Algebra of Programming is at the top of my list of books to read. And I go back to this paper, and it's really, really fascinating. I would probably put it in my top five papers that I've read, along with Phrasal Forms by Ken Iverson and a couple of others. It walks through Kadane's algorithm, maximum contiguous subarray sum. He shows a program transformation from the max of the sum of segments, I think it's max of mapping the sum over segments, where the segments are every single possible contiguous subarray. So it's a cubic algorithm, not very efficient. And using these algebraic laws, he transforms that program into the linear Kadane's algorithm solution. And in the midst of that paper, he mentions something called the fold-scan fusion law, which is that if you have a scan succeeded by a fold, you can essentially merge those two and end up with a single fold, and therefore you don't have to materialize the result of the scan. Which is great, because it's still linear in time complexity, but now it's constant in space complexity, which is a fantastic improvement.
Ben: Very nice.
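The derivation Conor describes can be sketched in Python (Bird's paper works in a Haskell-like notation; this is a loose translation, not the paper's code). Three versions of maximum contiguous subarray sum: the cubic specification, the scan-then-fold form, and the fused single fold, which is Kadane's algorithm.

```python
from itertools import accumulate

def mss_spec(xs):
    # Specification: max over sums of every contiguous segment. O(n^3).
    return max(sum(xs[i:j])
               for i in range(len(xs))
               for j in range(i + 1, len(xs) + 1))

def mss_scan(xs):
    # After Bird's transformation: a scan materializes the best sum
    # ending at each position, then a fold takes the max.
    # O(n) time, but O(n) space for the scanned list.
    ends_here = accumulate(xs, lambda acc, x: max(x, acc + x))
    return max(ends_here)

def mss_fused(xs):
    # Fold-scan fusion: the scan and the fold merged into one pass,
    # so nothing is materialized. O(n) time, O(1) space: Kadane's algorithm.
    best = ends_here = xs[0]
    for x in xs[1:]:
        ends_here = max(x, ends_here + x)
        best = max(best, ends_here)
    return best

xs = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
assert mss_spec(xs) == mss_scan(xs) == mss_fused(xs) == 6
```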
Conor: And I thought it was a cool paper, but didn't really think too much more of it. But somewhere in the paper he says, this is just one of many laws. And at the time I was like, well, I don't have time to go on some deep dive. But now, thanks to these little research engines we have, this time looking at it, I went and ran one, and it came up with a bunch of stuff: the history of this kind of program transformation. So that's the BMF, the Bird-Meertens formalism, and then later on the notation nicknamed Squiggol. It's all about stating these algebraic laws, some as simple as: the last element of a scan is equal to the equivalent reduction. And some of them are even simpler, like the first element of some kind of mapping operation can just be the function applied to the first element, or something like that. So some of the algebraic laws almost seem obvious.
Ben: Yeah.
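The two "almost obvious" laws Conor mentions can be checked directly; a small Python sketch:

```python
from functools import reduce
from itertools import accumulate
import operator

xs = [3, 1, 4, 1, 5, 9]

# Law: the last element of a scan equals the corresponding reduction,
# so a program that only consumes the last element of a scan can drop
# the scan (and its intermediate list) entirely.
assert list(accumulate(xs, operator.add))[-1] == reduce(operator.add, xs)

# Law: the first element of a map needs only the first input,
# i.e. `first . map f` rewrites to `f . first`. No list is built.
f = lambda x: x * x
assert list(map(f, xs))[0] == f(xs[0])
```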
Ben: But that's often the case, of course.
Conor: Yeah, but you need those obvious laws to be part of your algebra in order to do the full transformations and whatnot. But I am less interested in program transformations per se than I am in the idea of building libraries, or languages in the case of Haskell, where you can automatically avoid these materializations, which is essentially what ranges in C++ is, right? You're smirking, okay.
Ben: No, what confused me was the way you just said that. You said: I'm less interested in program transformations than in making these libraries that do program transformations. So I sort of think that what you just said is the same thing, right?
Conor: Is it the same thing, though? That's a great question. Maybe it is. Because is the implementation of a library like ranges in C++, or iterators in Rust, or streams in Java, the same thing as programmatically transforming one program into a more efficient program?
Ben: Well, I think in the best case it is the same. There are a few examples of that in ranges, I think. And I don't think they are at all formalized; I don't think they really use program transformation formalisms or laws. I think they are just sort of ad hoc empirical observations in most cases. Things like: if you take a reverse view of a reverse view, then what you get back is the original view, unwrapped. Right? There's no need to wrap it twice, because that double wrapping is the same as if you hadn't wrapped it at all.
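The reverse-of-reverse rewrite Ben describes can be modeled with a toy lazy view in Python (a sketch loosely inspired by what range adaptors do; `ReverseView` and `reverse_view` are hypothetical names, not a real library API):

```python
class ReverseView:
    # A minimal lazy view: iteration walks the underlying sequence backward.
    def __init__(self, base):
        self.base = base
    def __iter__(self):
        return reversed(self.base)

def reverse_view(r):
    # The ad hoc rewrite: reversing a reverse view just unwraps it,
    # instead of stacking a second adaptor layer.
    if isinstance(r, ReverseView):
        return r.base
    return ReverseView(r)

data = [1, 2, 3]
assert list(reverse_view(data)) == [3, 2, 1]
# Double reversal hands back the original object, not a nested view.
assert reverse_view(reverse_view(data)) is data
```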
Conor: So wait, what did you say there? You said that it was an ad hoc observation. What was the deal?
Ben: My feeling is that these things are just ad hoc, as they're empirically observed. I could be wrong, but I don't think that implementers or API designers in C++ oftentimes sit down with the background of these formalisms in their mind. They put together useful algorithms, algorithms that have proved to be useful in day-to-day life and that people have asked for, and they see opportunities for optimizations. Right? A lot of the time when we program C++ in particular, we're always thinking about how we can make this fast. And so we tend to see opportunities for optimization empirically. In my experience, we don't often get to that point coming from formalisms. That's what I'm saying.
Conor: And that's exactly what we're talking about, folks. The gears are turning in my head, and I guess that is what I've become very curious about. And we kind of skipped a couple, or I don't know if we skipped or just went on tangents and now we're coming back. But we mentioned Haskell deforestation, which is a term I had never heard of. And the paper that was put on my radar by the research engines is A Short Cut to Deforestation, 1993, by Gill, Launchbury, and Peyton Jones. The little summary of it is basically that it introduces this foldr, which consumes, destructs a list, and build, which is a generator or a producer; there are a bunch of different names for these things over time. But in this 1993 paper, they're calling it the foldr/build rule. Where if you chain these things together, you can basically... it's like if you're summing up an iota sequence, you don't actually have to build...
Ben: Okay, okay.
Ben: This is fusing together an anamorphism and a catamorphism, if we were to go to that terminology.
Conor: Exactly.
Conor: Yes. And it's so crazy sometimes how this stuff works. Is that called a hylomorphism?
Ben: Yes, yes, hylomorphism.
Conor: You actually put this on my radar at one point, and I recall you explaining it to me, and I maybe nodded my head and said, ah, that makes sense. But it's one of those times where you're kind of squinting and thinking: in theory it makes sense, but let's hope you don't expect me to explain it to anybody else, because I didn't really understand it that well. But yes, a hylomorphism is the combination of an anamorphism and a catamorphism.
Ben: Right. So the anamorphism is, what's the word, the opposite of a fold, where you're going from a seed value; it's an unfold.
Conor: Yeah, an unfold.
Ben: Going from a seed value to a sequence, let's say. Yeah. And a catamorphism is the generalization of a fold, going from a sequence to a summary value. And the hylomorphism is where you fuse them both together: you don't need to generate the sequence in the middle, as it were, and you go to O(1) space, like you were just saying.
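The three terms can be made concrete with a small Python sketch: an unfold that grows a list from a seed, a fold that collapses it, and the fused hylomorphism that never builds the list at all.

```python
def unfold(step, seed):
    # Anamorphism: grow a list from a seed.
    # step returns (value, next_seed), or None to stop.
    out = []
    while (r := step(seed)) is not None:
        value, seed = r
        out.append(value)
    return out

def fold(op, init, xs):
    # Catamorphism: collapse a sequence to a summary value.
    acc = init
    for x in xs:
        acc = op(acc, x)
    return acc

def hylo(op, init, step, seed):
    # Hylomorphism: fold-after-unfold with the intermediate list fused away.
    acc = init
    while (r := step(seed)) is not None:
        value, seed = r
        acc = op(acc, value)
    return acc

# Sum of 1..5: first as fold(+) over unfold (materializes [1..5]),
# then as the fused hylo (constant space).
count_to_5 = lambda i: (i, i + 1) if i <= 5 else None
add = lambda a, b: a + b
assert fold(add, 0, unfold(count_to_5, 1)) == 15
assert hylo(add, 0, count_to_5, 1) == 15
```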
Conor: Yeah. And this is what's kind of interesting: it's 2026 right now, and this paper was written in the 90s, but it's crazy to think that if we go back 50 years, people didn't have terminology for talking about this stuff. I remember perusing Alexander Stepanov's papers on his papers.html page. I think it's in his higher-order notes on Scheme, or something like that. On page 86 or so, he's got an implementation in Scheme of reduce, and he's got a little comment that says reduce comes from APL. Just the idea that this reduction operation is a thing we need to name, that it's a higher-order function... it seems like, what are we talking about? That's obvious. Every language I've ever used has a reduce function in one form or another. But Alexander Stepanov was naming this and pointing at APL.
Ben: You're 15 to 20 years younger than me, Conor, right? It's not like there's a reduce in BASIC.
Conor: That's true, I guess.
Ben: There's no reduce in Pascal; there's no reduce in the languages that I learned as a child, as a teenager. Is there a reduce in Fortran? I don't know. Fortran is pretty bare bones, at least Fortran 77 as I learned it back in the day.
Conor: So yeah, I guess it depends on when you started your programming journey. But anyways, this is just a side note: I'm reading this history of papers that started in the 90s, of this idea that, call it what you will, anamorphism plus catamorphism equals hylomorphism. This paper's calling it the foldr/build rule.
Ben: In my everyday life, I don't go around throwing these highfalutin Greek-derived terms around. Hylomorphism is the one I remember, because of the mnemonic of sort of going high and then going low: expanding out and then contracting back.
Conor: Yeah. But the thing is, I agree these are very highfalutin words, but we need a way to talk about this stuff, right?
Ben: Well, exactly. Like Dijkstra said, the purpose of abstraction is to create a new level on which we can be absolutely precise. It's not about being fuzzy about these things. If you are working in that space, you need the terms to work with and be precise.
Conor: Yeah. And so we have the foldr/build rule from this paper, but then there's another paper, published in 2007, called Stream Fusion: From Lists to Streams to Nothing at All, that builds on the foldr/build framework but adds other things to the fusion, quote-unquote, rules: zips, concatMaps, take and drop, et cetera. So it seems there's a history of papers showing that you can do more than just avoid materialization and reduce your memory footprint with build and fold; maps you can fuse together. My favorite is the zip-tail trick that many people have shown, including Michael Caisse, I think, Odin Holmes, and Ivan Čukić, whose name I always mispronounce, the author of Functional Programming in C++. You can avoid materialization by doing these kinds of quote-unquote lazy operations. Anyways, I've just been trying to sort all this stuff out in my head, because you're saying that a lot of the people who implement this stuff aren't going to some set of academic papers and saying: here is an algebra that I can implement, and here are all the lazy operations.
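The zip-tail trick Conor mentions, pairing each element with its successor without materializing a shifted copy, can be sketched lazily with Python's itertools:

```python
from itertools import islice, tee

def adjacent_pairs(xs):
    # Zip a stream with its own tail: tee duplicates the iterator cheaply
    # and islice skips one element lazily, so no shifted copy of the
    # data is ever materialized.
    a, b = tee(xs)
    return zip(a, islice(b, 1, None))

# Adjacent differences of a sequence, in one lazy pass.
deltas = [y - x for x, y in adjacent_pairs([3, 1, 4, 1, 5])]
assert deltas == [-2, 3, -3, 4]
```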
Ben: Well, I think in some ways we might get better outcomes if people did that more. But my point is, they're not mathematicians, right? They're not working mathematicians; they're working programmers, working library writers. Their first job is to produce something that's usable, performs well, and fulfills use cases, right? So their focus is not necessarily on using formalisms to achieve that. Now, I think it behooves us to recognize what we're doing. This is what I always think: the process of writing code, producing nice code, beautiful code, is in many ways recognizing what it is you're actually doing. And sometimes the mathematics helps with that. It's like we said before: you can be a good baker without studying chemistry. But if you are a good baker, odds are you kind of know chemistry, at least as far as it relates to baking. And studying chemistry can make you a better baker. I think that same analogy applies to programming and this kind of mathematics.
Conor: Yeah. I feel like I'm a bad baker and a bad chemist. But I've got a lot of thoughts on the chemistry.
Ben: I live at altitude, so baking is usually a little bit trickier. I've had some bakes come out poorly.
Conor: Really? Altitude makes a difference?
Ben: It certainly does.
Conor: Oh well, that's good to know. I live at roughly sea level, so I guess I have it as easy as it gets.
Ben: Well, yes, most recipes you find are designed for sea level. Baking at altitude requires some adjustments sometimes.
Conor: Really? I had no idea. Anyways, all of this is to say that I feel like there's a through line here from these academic papers and formalisms and algebras and calculi, whatever you want to call them. Which, like I said, I'm not a trained mathematician, and I'm doing my best to read the academic literature and parse this stuff. But then, the other day I was actually trying to figure out who the original implementer of Boost Ranges v1 was. I know there's a Boost Ranges 2.0.
Ben: Oh, because v3 is the one we all know. So did Eric implement v1 and v2 before v3? I don't know.
Conor: No, I don't think he did. If you Google Boost Ranges, it comes up with 2.0, and it says copyright Thorsten Ottosen and Neil Groves. Those names are not familiar to me. But then also, is 2.0 an evolution of 1.0, which they wrote, or is it the same thing that happened with v3? It has me very curious at what point the... and you can tell me, you've been programming in C++ much longer, and you've also been doing high-performance stuff, because you've worked at gaming companies and high-frequency firms before. Is the idea of operating on what Boost Ranges called fancy iterators, which then evolved into views in range-v3, a modern thing? Because I know SICP had the idea of streams, and those existed in Lisp land and Scheme land decades ago.
Ben: Because Lisp had closures, and if you have closures, you can make lazy streams.
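Ben's point, closures are enough to build lazy streams, is essentially SICP's cons-stream idea; a minimal Python rendering (the helper names here are illustrative, not SICP's):

```python
def stream(head, tail_thunk):
    # A lazy stream: a value now, plus a closure that produces the rest
    # only when forced.
    return (head, tail_thunk)

def integers_from(n):
    # An infinite stream of integers; nothing past the head exists yet.
    return stream(n, lambda: integers_from(n + 1))

def take(s, k):
    # Force only the first k elements of the stream.
    out = []
    while k > 0:
        head, tail_thunk = s
        out.append(head)
        s = tail_thunk()
        k -= 1
    return out

assert take(integers_from(1), 5) == [1, 2, 3, 4, 5]
```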
Conor: Yeah. And so is that the genesis? Because there definitely seems to be an academic paper trail of people writing about this stuff and discovering this stuff. And we live in 2026 today, where all modern languages have versions of these stream-like, fusion-like libraries. Anyways, I'm just very curious where it all started, and at what point people started realizing that you can do these kinds of lazy things. Did it come from the functional programming influences that languages were adopting, or does nobody know? I'm just doing, what do you call it, programming language archaeology.
Ben: Well, programming language archaeology is a lot of fun, of course. Does nobody know? Maybe the question as stated isn't worth answering, but we can observe what's happened, right? Which is the history of programming languages, in particular over the last twenty years. Depending on which language you use in your day job, you will have noticed the influence sooner or later. But functional languages have for sure been influencing mainstream languages for at least twenty years, and probably a little bit longer. Back in the earlier days, we had Fortran and we had Lisp. Those were two very different takes on computation: Fortran embodying the von Neumann machine architecture, Lisp embodying the lambda calculus. If we squint, we can think of those as the pragmatic camp and the mathematical camp, or the camp concerned with what hardware does and the camp concerned with more mathematical ideas. That's probably a very unfair generalization, because it's not like the people who make programming languages aren't in general well-versed in mathematics. The folks who made Fortran absolutely were mathematicians. But then we move through the 70s and we see C and we see ML, 1973. And as we move into the 80s and beyond, we see a proliferation of the mainstream, sort of C-derived languages. And then we see the web, and we see things coming in from the functional side. JavaScript at its core has a lot in common with a Lisp. It sort of pulls from both camps, if you like.
And then, in the last 10, 15, 20 years, it's become very in vogue for the mainstream. Until 2000 maybe, 2005 even, we would have put C++ squarely in the machine-sympathetic camp. But now C++, certainly for the last decade, fifteen years even, has had a lot of influence from the functional side. You go to any C++ conference and you'll hear people talking about monads. And the folks who work in functional languages are like: it's nice that the kids are learning about these elementary concepts now. C++ only got lambdas in 2011, in C++11. Mind you, C++11 is now further away than C++98 was from C++11; C++11 was over half a standard lifetime ago. But the languages that I work in, that I have worked in for my entire career, I'm happy to see getting more influence from the functional side of things. And generic programming is very functional in nature as well, as a paradigm. Because that seems to be the best way we've found so far to achieve actual useful composition, actual useful reasonability of the code. And it's the declarative functional style that gives us that.
Conor: So I guess, in that story, it comes from the functional influences.
Ben: I guess. I mean, I probably cut tons out of that story. There are lots of languages along the way; in a way, all languages influence each other. And there are some very influential languages I didn't mention. There are Smalltalkers out there going: you didn't mention Smalltalk. There are languages like Self and CLU, and there are languages like Ada, that pioneered what we might think of as aspects of object orientation or aspects of generic programming.
Conor: Yeah, I've got to continue doing my archaeology, because I really feel like there's something here, you know? What is my goal at the end of the day? I work for a company that runs programs on a different architecture: GPUs. And a lot of this literature that's been written about algebraic laws and program transformation is implicitly designed for CPUs. The quote-unquote fusion laws that are possible on a CPU are not the same as the laws that are possible on a GPU. The fold-scan fusion law, there's a version of it that can work, but scans are not parallelizable the same way that folds are. A fold can be parallelized essentially in the same kind of way that a CPU...
Ben: Yeah, a GPU would do it. But the scan has a data dependency. Whereas with the fold, if you just have an associative operation, you can execute it in any order.
Conor: Exactly. And that's a small difference if you're computing, quote-unquote, left to right, sequentially, but it's a massive difference if you're trying to parallelize things. And the, what did you call them, ad hoc empirical observations become more difficult. Observing that you can chain maps together and avoid materializing stuff is pretty straightforward and obvious, maybe not for everybody, but across different hardware, these implicit algebraic laws are actually less obvious, and there's way more opportunity to leave performance on the floor, especially if you're dispatching to different types of hardware. So I have this grand vision in the back of my head that there is some truth out there, whether it's a formalism or an algebra or whatever, or maybe none of that, and it's just a thoughtful implementation of a library that hides that stuff from the user, which is kind of what ranges is, right? You compose ranges together, and you get an efficient, low-memory program. Anyways, you were about to say something.
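The fold-versus-scan distinction can be made concrete: because a fold only needs an associative operation, it can be regrouped into a balanced tree, the shape a GPU reduction uses, where every combine at a level is independent. A scan's result at position i depends on position i-1, so this regrouping is not free. A sequential Python sketch of the tree shape:

```python
def tree_reduce(op, xs):
    # Pairwise tree reduction: each level combines disjoint pairs, so all
    # ops at a level could run in parallel. Only associativity of `op`
    # is needed for this regrouping to give the same answer as a
    # left-to-right fold.
    while len(xs) > 1:
        xs = [op(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

# Same result as the sequential fold, by associativity of +.
assert tree_reduce(lambda a, b: a + b, list(range(10))) == sum(range(10))
```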
Ben: I was going to say, I agree a hundred percent. When you write a library, if you are making these observations, writing the library to cover certain use cases, and you notice you can fold things together, that's great, right? But I agree: at some point, unless you're a super genius, you're going to hit a wall where you just don't see that these things can be folded together, because it's really, really not obvious. It's very difficult to recognize what we have written in code, in that sense, which is what we were talking about earlier: the recognition of what we're actually doing. And to do that, you do need to study a bit of the chemistry, a bit of the mathematics. And then that helps you unlock something in the code. And then frequently, the experience I've often had is that sometimes the code just melts away. When you have decomposed it and expressed the mathematics properly, the thing you thought was some kernel of complexity you couldn't crack has now been decomposed, and parts of it have not just become simpler, they've actually disappeared.
Conor: And are you saying that happens behind the scenes with a well-designed library, or that it's something that, once you've studied the quote-unquote chemistry and math, you as a programmer can tease apart?
Ben: I'm saying, in my experience, studying the mathematics is the only way I get to that point. I'm not clever enough to write a library that does these things and simplifies enough internally without using the mathematics.
Conor: Right, right. So yeah, it requires taking the time, whether it's mathematics or chemistry or whatever the metaphor.
Ben: Whatever the analogy is, yeah.
Conor: Yeah. This is what's been dominating my free cycles, because of my affinity for array languages as well. There are different flavors of this that have come up over the years. At one point I had the realization that what happens with iterator tag dispatch is basically a form of this, right? You pass a collection to a well-designed, thoughtfully designed algorithm, and depending on the iterator category tags, it dispatches to the algorithm that performs best for the iterator types that were passed. And array languages do this kind of thing by attaching metadata to the containers. So, if it's only Boolean data, ones and zeros, or if it's a sorted container. That's an example: if you have sorted data and you take the first element, that's an O(1) operation. And you might say, why would you be writing code that takes the first of something you just sorted? Well, actually, there are several operations that end up with sorted data that you can easily forget about. If you do a max-scan, that results in sorted data. You might not immediately think about that, but there are multiple operations that result in these kinds of metadata properties, and if you store them and check them up front before you do some operation, it could lead not just to, say, a quadratic-to-linear improvement; it could be linearithmic, n log n, to O(1), which is absurd. And you tell people this and they go, oh no, that would be obvious, obviously no one would ever... you would be surprised. Nothing is that obvious.
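The max-scan observation Conor makes is easy to verify: a running-max scan is always nondecreasing, so a container tagged "sorted" can answer certain queries without rescanning. A small Python check:

```python
from itertools import accumulate

xs = [2, 7, 1, 8, 2, 8]

# A max-scan always yields nondecreasing (i.e. sorted) output,
# because each element is the max of a growing prefix.
ms = list(accumulate(xs, max))
assert ms == sorted(ms)

# So a library that records "sorted" as metadata on the result can
# answer last-element / maximum queries in O(1) instead of rescanning:
assert ms[-1] == max(xs)
```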
Ben: Yeah, I agree.
Conor: And one of my little side projects from a couple of years ago was this Jello program that lets you type keywords and converts them into Jelly, which is a code golfing language. Even for me, it's a very hard language to remember all the different symbols, because it uses an extended character set, I think it's Spanish or something, with over-dots and under-dots and accents. And there's a file called hints or something, where any time it recognizes a composition or some idiom, it says: oh, you can use this instead. Because it's so hard to keep track of all the different little idioms and patterns and things like that. And I would love a language... it's too complicated with a language like C++, but one where you'd get these little syntactic hints: oh hey, by the way, you're doing this, you could be doing that, and it'll be better for reasons X, Y, and Z. Or a language or a library that just hides all of that from you, where whatever you write is the naive way of expressing your solution. But then I guess maybe we're back to program transformations. What actually is a program transformation? Does a library that does all that stuff count? Does it matter? I don't know.
Ben: I think it counts, whether it happens inside the compiler or in the library. I mean, in C++, expression templates are often touted as this kind of thing, right?
Conor: Expression templates count as this thing?
Ben: They're often cited, I would say.
Conor: Interesting.
Ben: I mean, think about a linear algebra library using expression templates. The reason it uses those is to take what you write and transform it into a machine-sympathetic thing. Fused multiply-add would be a trivial example, right?
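The expression-template idea Ben describes can be sketched outside C++ too; a toy Python version (the `Vec`/`Expr` classes are illustrative, not a real library) where `*` and `+` build a deferred expression tree, and evaluation walks the whole tree once per element, fusing `a*b + c` into a single loop with no temporaries:

```python
class Expr:
    # A deferred elementwise expression: an index->value function plus a length.
    def __init__(self, fn, n):
        self.fn, self.n = fn, n
    def at(self, i):
        return self.fn(i)
    def __add__(self, other):
        return Expr(lambda i: self.at(i) + other.at(i), self.n)
    def __mul__(self, other):
        return Expr(lambda i: self.at(i) * other.at(i), self.n)
    def evaluate(self):
        # One pass over the indices: each element performs the whole
        # fused computation, and no intermediate vector is allocated.
        return [self.at(i) for i in range(self.n)]

class Vec(Expr):
    # A leaf of the expression tree: actual storage.
    def __init__(self, data):
        super().__init__(data.__getitem__, len(data))

a, b, c = Vec([1, 2, 3]), Vec([4, 5, 6]), Vec([10, 10, 10])
# Builds a tree, computes nothing yet; evaluate() fuses multiply and add.
assert (a * b + c).evaluate() == [14, 20, 28]
```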
Conor: Yeah, I never thought about that. This makes my brain hurt: all of this stuff, it's all the same stuff at the end of the day. And it makes me think that we're too early in computer science. We don't actually have a well-formed vocabulary to talk about all of this, because you keep finding the same patterns. This always makes me think that Bartosz Milewski is right. He's working on his second book, right, about how everything's category theory, but then there's some other higher-level category theory where music, programming, everything is all the same at the end of the day, just a different spelling.
Ben: Yeah, maybe. I mean, we can keep generalizing and generalizing until we get to a point where everything's the same because it's ultimately generalized.
Conor: Be sure to check the show notes, either in your podcast app or at adspthepodcast.com, for links to anything we mentioned in today's episode, as well as a link to a GitHub discussion where you can leave thoughts, comments, and questions. Thanks for listening. We hope you enjoyed, and have a great day. I am the anti-Bryce.