MEDIASCAPE: Insights From Digital Changemakers

AI, Ethics, and Transparent Markets: Insights from Noah Healy

Hosted by Joseph Itaya & Anika Jackson
Episode 45

Curious about the unexpected intersections between academia and tech? Join us as we welcome Noah Healy, a market designer and game theorist, who shares his fascinating journey from the University of Virginia's engineering program to the forefront of digital media. Noah's unique path, which includes studies in nuclear engineering and a serendipitous connection through a games club, exemplifies how diverse experiences can lead to remarkable opportunities in the tech industry. Discover how everything from neutron activation studies to gaming statistics can inform a successful career in data and technology.

Ever wondered how automation could impact high-level decision-making tasks? We navigate the intriguing world of interpreters, compilers, and AI to uncover how these tools transform human-written programs into machine-executable tasks. Noah offers fascinating insights into the potential benefits and dangers of generative AI, probing the challenge of eliminating unnecessary tasks to achieve true efficiency in an automated world. This engaging discussion challenges conventional thinking about automation and stresses the importance of refining our approach to digital work.

How are ethical considerations shaping the future of AI and digital currencies? Our conversation with Noah Healy tackles the pressing ethical challenges in fields like engineering, medicine, and law, and considers the complexities of integrating AI into professional sectors. As we explore the evolving role of technology in financial systems and currency markets, Noah's insights on transparent markets and ethical trading practices offer a thought-provoking perspective on the future. Learn why sticking to principles and prioritizing meaningful contributions over fleeting trends is essential in today's rapidly advancing technological landscape.

This podcast is proudly sponsored by USC Annenberg’s Master of Science in Digital Media Management (MSDMM) program. An online master’s designed to prepare practitioners to understand the evolving media landscape, make data-driven and ethical decisions, and build a more equitable future by leading diverse teams with the technical, artistic, analytical, and production skills needed to create engaging content and technologies for the global marketplace. Learn more or apply today at https://dmm.usc.edu.

Speaker 1:

Welcome to Mediascape: Insights From Digital Changemakers, a speaker series and podcast brought to you by USC Annenberg's Digital Media Management Program. Join us as we unlock the secrets to success in an increasingly digital world.

Speaker 2:

Welcome to the Mediascape podcast. I'm one of your hosts, Anika Jackson, and I am here with Noah Healy today. Noah is a market designer and game theorist working on better economic systems, and what particularly interested me, besides all the knowledge he has and all the things he's so interested and engaged in, was talking about algorithmic bias and ethics, which, of course, is a big part of what we talk about in the Digital Media Management Program. So, Noah, thank you for giving us some time today.

Speaker 2:

Thanks for having me. Of course, let's talk a little bit about your backstory. You have always been in the data field, but in different iterations. Did you know that was always where you wanted to go with your career?

Speaker 3:

No, I never really had career aspirations at any point in my life, actually.

Speaker 2:

Yeah, but you have a BS in engineering science and nuclear engineering.

Speaker 3:

Well, so that also just evolved on its own. I was in a program that pretty much allowed me to wander around the engineering college at UVA and take anything that looked good. UVA has a famous scholarship program in their College of Arts and Sciences where you can actually just take 120 hours of classes and graduate. It's called the Echols Program, and they have a kind of sister program that's a lot smaller in the engineering school, and they quite explicitly say that you're not allowed to do that. But they have an engineering science major for sort of dedicated, ambitious go-getters that want to form some sort of path to graduate school.

Speaker 3:

And I was in the Rodman program, and I very much impressed the head of the engineering science program, and he more or less decided that I was smart enough to make my own decisions, which is probably unwise for anybody in the guidance field dealing with a child or even a young adult. So that's what I did. I would take pretty much an introductory, usually 300-level, class in a subject. If I liked the look of that, I'd take a 500-level class. If I liked the look of that, I'd take a 700-level class.

Speaker 3:

And I just was sort of looking for good professors. One of the best professors I ran into turned out to be the Dean of Undergraduate Students. He had been a chemical engineer, but he was teaching in the nuclear engineering department, teaching a class on reactor safety, and he said something provocative. I had an idea. He said that idea might be able to be turned into a thesis, and he introduced me to another member of the nuclear faculty who basically guided me right through the whole thing. He was a good teacher too, and I got to take some very interesting classes, like neutron activation studies as an assay technique, where you expose chemical samples to radiation to make them radioactive, and then, by detecting the radioactivity of the activated sample, you can determine what the sample was made of before you irradiated it.

Speaker 2:

Wow, it sounds very interesting, engaging and compelling. My daughter is taking AP Chem right now, and the things she talks about, putting things together and then taking them apart to their core essence and figuring out what went into that process, have been really fascinating to her, so I love the fact that you were able to do this. It sounds a lot more like going to university overseas. You think of colleges like Cambridge or Oxford that have you basically specialize, and you have this real hands-on learning where you get to explore the topics that are most interesting to you. I know at Oxford they call them tutors; at Cambridge they call them something else. But it sounds like that was really your experience, to some extent.

Speaker 3:

Yeah, my thesis advisor. I call him the perfect thesis advisor, because they were actually shutting the program down, and I'm not much of a writer, so it took me a pretty long time to get my thesis together. When I turned it in to him, he was like, so what do you want me to do with this? Just proofread it? Okay. So it was really quite nice. I had a lot of other cool experiences. I actually learned Python from Randy Pausch, who later became famous for the Last Lecture. And I didn't have to go overseas; UVA is basically across town for me.

Speaker 2:

Right, fantastic. And then, what did you do with this knowledge and this education?

Speaker 3:

Well, I needed a job once I got out of school, and the CTO of a local tech startup was also in the UVA Games Club with me. I'd been the president for a few years, and that place was one of the earliest places that the game Settlers of Catan was imported to in the United States, and I was the best player of that game in that group. Our group produced something like three or four of the first four or five national champions. So anyway, Chris was like, oh, you're really smart, we're hiring warm bodies, come on down and interview.

Speaker 3:

So I went on down and interviewed, and that's when I started getting serious about computers. I had a little bit of algorithmic math. I took a lot of math; all in, starting in high school, I covered about 17 semesters of math at UVA.

Speaker 2:

Yeah, wow. So this speaks to what we were talking about: you might not have figured out what you wanted to do with your degree, but the connections that you made through the university, that networking through your passion for gameplay, helped create this through line for you.

Speaker 3:

Yeah, well, also, it turns out that the statistics of radiation and the statistics of websites are identical.

Speaker 2:

Really.

Speaker 3:

Yeah, yeah. So there's this thing called Poisson processes. Poisson was a French mathematician, a contemporary of Napoleon. Napoleon was actually fairly famous for funding a lot of pure and basic research, along with a lot of practical research. Canning is attributable to Napoleon's efforts, for example.

Speaker 3:

And in that day and age, one of the serious problems with large-scale logistics was people getting kicked by horses and donkeys, and Napoleon wanted to be able to predict when that would happen, to prevent or ameliorate the issue. So he puts Poisson on it. And Poisson discovers that, just like there's a sort of limit form of continuous processes that we call the Gaussian or the normal curve, there's also a limit form of discrete processes, which don't go from negative infinity to positive infinity but just go from zero to however many. And if you're inside that limit form, there's a single-parameter curve that will describe whatever situation you're in. So Poisson basically said, I can't tell you when or where a donkey will kick somebody, but I can give you the distribution of how many times people get kicked by donkeys, if you can give me the sample size: how long the campaign is and how many people and donkeys are going to be on it.

Speaker 3:

We can plug a formula in and get the answer. So it turns out that communication queues, how often people come to your website, all follow Poisson curves. The frequency at which a radioactively decaying sample produces single particles follows Poisson curves too. And also, with the activation thing, modern detectors are way more sensitive than Geiger counters, so they actually tell you a lot about the specific kind of radiation that's coming in, just like modern web servers can read practically your entire life history off your browser. So it's exactly the same thing: you're getting discrete counts of things that are Poisson processes, you can gain a lot of individual knowledge about them, and all the same techniques apply.
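
The single-parameter curve Noah describes is the Poisson distribution: given only an average count, it yields the probability of seeing exactly k discrete events. A minimal sketch in Python (the four-visits-per-minute rate is just an illustrative number, not a figure from the episode):

```python
import math

def poisson_pmf(k, lam):
    """Probability of observing exactly k events when the average count is lam."""
    return (lam ** k) * math.exp(-lam) / math.factorial(k)

# Illustrative only: a site that averages 4 visits per minute.
for k in range(8):
    print(f"P({k} visits) = {poisson_pmf(k, 4.0):.4f}")
```

The same formula applies whether the counts are website hits or particle detections, which is the equivalence Noah is drawing.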

Speaker 2:

That's really amazing. I have to say, this is going to be one of the most fascinating interviews I've done, because I don't think we've ever gone this far into thinking about equations, statistics and probability, and I love it, so I'm totally here for it. Now, you parlayed your role as a programmer into working as a software developer in your favorite industry, the video game industry.

Speaker 3:

Again, I've just kind of bounced around town. So I've actually spent the plurality of my professional career working for companies that made and leased slot machines to Indian casinos, and that was pretty much just a pure coincidence again. So in the first place there was a company that was headquartered out of Tennessee, but that's just because that's where the owner was from. At the time, slot machines were so illegal in the state of Tennessee that if you had one in the trunk of your car and were driving through the state, that was illegal. You'd have to have it disassembled to be able to transport it through the state. So he couldn't hire anybody in his own state to work for his company, because anything that they had would basically be against the laws of his state. He, however, was able to eventually get contracts for something like 40% of all the slot machines in Oklahoma, and this led to him becoming an extraordinarily wealthy man and getting the rules changed in his state so that he could bring things back.

Speaker 3:

And I parted ways at almost the same time that that transition was happening, because the job for which I was actually hired I had completely automated, and he had a policy that there were no career opportunities at the company. So that was that. However, because of that, several of the people who had been associated with the company formed their own company here in town, because they also didn't want to move but liked the industry and had gained a lot of skill and so on. They've made a reasonably successful successor company, and I signed on with them and set up and built out their data science group a little while back.

Speaker 2:

Nice. Before we get into your current vision and what you're putting forth and propagating to make the world a better place, I'd like to learn a little bit more about the phrase that you used, that you had automated your job completely. And this was in, you know, the early 2000s, compared to where we are now. There are a lot of people listening who are just learning about automations and maybe haven't played with them yet. How did you figure out how to use automation and essentially take yourself out of a job? Because I know that's one of the big AI conversations right now: is AI going to take our jobs?

Speaker 3:

Well, so I had already figured that out quite some time before. I'll try to do this at a fairly high level. When people write programs, computers in general do not actually run those programs. There are these other programs that we can call either interpreters or compilers. Most people use web browsers basically constantly. If you right-click on a page, a little menu will pop up, and one of the things on that menu says "view source." If you click on that, what will show up is an enormous string of text, which is the HTML that actually is the page you're looking at. And it's not all the HTML, and there are other programs involved, but the web browser actually is this thing called an interpreter.

Speaker 3:

And what an interpreter is is a machine that, when you give it instructions, acts like the machine that you gave instructions for. So imagine if you had some machine in your garage that you could feed the blueprints of a car into, and it would act like it was the car whose blueprints you fed in. That's what interpreters are. Compilers are perhaps even more impressive. Compilers take a program written in one language and write that program over again in a different language. And your computer, the physical chip in your computer, is an interpreter. What your computer does is take a set of instructions and act like it's the machine that that set of instructions describes. So if you've got a video game console in your house, it doesn't have first-person shooters and horror games and mom's farming game and all that stuff in it.

Speaker 3:

You stick in a cartridge, and it says: oh, I'm going to turn myself into a Halo game. Oh, I'm going to turn myself into Super Smash Brothers. Oh, I'm going to turn myself into Call of Duty 5 million or whatever they're up to. So what compilers do is write your program in the programming language that the interpreter, that is, your actual machine, reads. And what this means is that the only difference between a task a computer can carry out and a task that humans carry out is whether or not an explicit enough description exists, and compilers slash interpreters for that explicit description exist to allow the transition from human action to the chip that's in the machine. And it's a pretty long slog to learn the various bits and pieces.
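
The interpreter idea can be made concrete with a toy example (purely illustrative, not anything from the episode): a few lines of Python that behave like whatever little stack program you feed them, the way a console becomes whatever cartridge you insert.

```python
def run(program):
    """Interpret a tiny stack language: numbers push themselves; '+' and '*' pop two values and combine them."""
    stack = []
    for token in program:
        if token == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif token == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            stack.append(token)  # a number: push it
    return stack.pop()

print(run([2, 3, "+", 4, "*"]))  # (2 + 3) * 4 = 20
```

Feed `run` a different token list and the same machine "becomes" a different program, which is exactly the property Noah attributes to interpreters.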

Speaker 3:

By the time I got the job at VGT, I had been obsessively reading the work in compiler design, interpreting different programming paradigms and so on, for about seven years. There are literally thousands of papers and books on this subject. There are even children's books on this subject, and I've read some of those as well. But yeah, that becomes the basic challenge. The question around AI taking over work essentially comes down to whether or not those systems would be able to create those sets of explicit directions, explicitly enough to bridge the gap for any particular task.

Speaker 3:

And what I have been saying for decades now is that the easiest jobs to automate are the ones at the top of our society. If you think of the job of CEO of a multinational conglomerate, or president of the United States, or Supreme Court justice, these people are taking in sets of information that are mostly written word and then providing usually fairly small amounts of decision guidance: yes to this, no to that. Typically they are not providing large discursive output. Even Supreme Court justices don't get into the nitty-gritty of describing how their judgments are going to individually affect every citizen in the United States. They're doing the high-level stuff. And because it's much, much easier to get computers to read and write text than it is to get them to flip physical switches in the real world, it is in fact much, much easier to write a computer program that does what the CEO of a Fortune 500 company does than it is to write a computer program that does what an assembly-line worker at a Fortune 500 company does.

Speaker 2:

That is a completely different construct than I was thinking of, because you hear that assembly lines can be very automated. I've also heard of the new job, at least new to a lot of us who have played in this field, of data interpreter: a human job that takes what the CEO and the C-suite executives want, understands the language of the data scientists as well, and can put those together so that each party can understand what needs to get done.

Speaker 3:

Well, yeah, and that's one of the, to me, potential horror stories: what I'm seeing generative AI being used for more is turbocharging Parkinson's law. C. Northcote Parkinson wrote a book called Parkinson's Law. He's got some other ones, but he was writing in, I think, the first half of the 20th century about the human structure, the sociological structure, of bureaucracies, and he points out that, if you look at it, bureaucracies actually tend to grow regardless of how much work they're doing. So his law is that work expands to fill the time allotted to it. So when you have a bureaucratic structure, if you make stuff easier, what you actually do is just increase the amount of busy work that's available within the bureaucracy.

Speaker 3:

And so the real trick, and the reason I was able to actually successfully and fully automate a task, is that you have to, as a person, become extremely ruthless about identifying that which is superfluous nonsense, the fluffy stuff that humans would want to do to basically justify their own laziness. I mean, I certainly have plenty of laziness that I would like to justify, but as a professional you have to say, no, we're not doing that. We're going to stay on task, stay on target. And it still takes a while to get it all put together, but once it's all put together, you've got a machine that's doing work, and people can now do something else. And the alternative is, what if, instead of having to fill out a handful of blanks on the form at the DMV, you have to write 600-page essays for each of those blanks, because ChatGPT is going to do it for you?

Speaker 2:

Wow, wow.

Speaker 3:

Right, and then ChatGPT is going to read it for them and get it all put together, and so we all wind up drowning in this system, where if you don't have access to the tools, then they exclude you in that fashion.

Speaker 3:

So these are, to me, real dangers, and we've already seen them. Going back to the Supreme Court again: the length of Supreme Court decisions apparently correlates pretty well with how many clerks they have. If you look at Supreme Court decisions from the 19th and early 20th centuries, in those times a justice would have usually zero, maybe one clerk, and the court's decisions are a lot more pithy, whereas the last 30 or 40 years have these very set-up career tracks, with very long-form decisions. I think one of the recent Supreme Court justices was actually a clerk for the court herself earlier in her career. Yeah, these sort of long-form things. They've got half a dozen or more assistants, and so suddenly you've got these 20- or 30-page decisions, and everybody wants to have their own version of it.

Speaker 2:

Yeah. As a professor, I run across this as well. I'm thinking about your DMV example, because I just took my daughter to get her permit last week, and there's already enough bureaucracy at the DMV. But I'm also thinking about when people turn in papers and we're encouraging them to learn how to use AI tools, I mean gen AI tools, right. I can always tell the difference between somebody who is using their own thoughts and somebody just putting the questions into their AI tool of choice and, you know, spitting out the answer. So I do agree with you: we have to be mindful and thoughtful, and this is where the thought about ethics comes in. On the human side, we also have these discussions around algorithmic ethics. I'd love to talk to you a little bit about that. And then, how does that play into Coordisc? It's not a new company, but tell us about the functionality and what you're trying to achieve with it.

Speaker 3:

Well, the ethics question goes right to what we were talking about before. The way that you actually manage to completely automate a job is by ruthlessly not allowing the task to expand to things that are useless to the job, that would simply make work for yourself or somebody else. And that same kind of attitude needs to prevail in other realms. One of the results of my engineering training, which got me into reading the deeper stuff before I discovered that it was fascinating and profound and started doing it on my own, is that a key component of professionalism is that you're supposed to know what you're talking about. If you go to the doctor and they don't actually know anything and they just throw an opinion out there, you can't tell the difference between that and an actual informed opinion. And furthermore, that creates this necessity for professional ethics: they know what they're talking about, and the thing that they're throwing out there is in your interests, not in their interests. And that becomes very challenging. We've seen, I'd say, a pretty wide-scale reversion in professional ethics across all of the real professions, engineering, medicine and law, where all of these groups seem to be acting much more in their own interests, or in other interests, than in their subjects' interests.

Speaker 3:

And we're seeing that in decaying infrastructure, projects that aren't being kept up. We're seeing that with things like the opioid crisis, where there was a mass misdiagnosis based on under-examined sales pitches by the industry. And of course, lawyer ethics has been a subject of comedy, or black comedy, for millennia. So I don't know if that's going down or not, but it certainly hasn't suddenly become a paragon.

Speaker 3:

This gets even more serious, I think, once you have machines performing in a professional capacity, and we do. I automated a machine to do a job that was pretty decently paying, and we have more and more of that happening, and we'll soon have even more. An example I like to trot out a lot, medical care again, is that we know for a fact that we can train machines to read medical scans like X-rays radically better than any human being is capable of reading those things. They can extract information from a chest X-ray that no human being could ever extract, and they're so good at it that you can actually blur out the X-ray.

Speaker 3:

You can pixelate it the way 60 Minutes used to pixelate mafia informants, so that no human being would even be able to tell what part of your body was being X-rayed, and the computer would still be able to correctly diagnose you, extract information about your lifestyle, extract your race, your sex, probably your age, and so on. That's how good we can make those things. But the way that those training programs are carried out produces a non-examinable output that works strictly backwards, essentially. We know that it works on the examples that we fed it better than anything else possibly could. And while interpolation can have serious problems, it usually works pretty well. But extrapolation, which is what we actually care about (we would like these super-radiologists to make us healthier and give better diagnoses), is a radically different proposition than interpolation, and the challenge of getting an extrapolating machine that we could trust enough to offload diagnostics onto is an ethical problem, not a technical one.
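
The interpolation-versus-extrapolation gap Noah describes is easy to demonstrate with a hypothetical sketch: fit a quadratic exactly through three samples of an unknown process (here, a sine wave standing in for "training data"), then evaluate it inside and outside the sampled range.

```python
import math

def quad_through(xs, ys):
    """Return the unique quadratic through three (x, y) points, in Lagrange form."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    def p(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return p

# "Train" on three samples of an unknown process (here, sin on [0, 2]).
p = quad_through((0.0, 1.0, 2.0), (math.sin(0.0), math.sin(1.0), math.sin(2.0)))

inside = abs(p(1.5) - math.sin(1.5))    # interpolation: stays close
outside = abs(p(6.0) - math.sin(6.0))   # extrapolation: error blows up
print(f"interpolation error: {inside:.3f}, extrapolation error: {outside:.3f}")
```

Inside the sampled range the fit tracks the true function closely; a few units outside it, the error is orders of magnitude larger, even though the model is "perfect" on its training points.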

Speaker 2:

Yeah, but I will mention that one of the great benefits of having computer vision in the medical field is that there are not enough people becoming radiologists. We do see that, and so this is an example of where the technology could replace people, but it's actually very needed, because there aren't enough people. And of course you still have to have a doctor sign off on everything and speak to the human whose scans they are, to give them whatever their care plan might be, and also, at least for now, provide that human empathy.

Speaker 3:

Yeah, well, that again gets back to the point. Our social structures are all built around the concept that basically all communication and decisions are made by, and will be run through, human beings. But we're actually a couple of centuries into telecom at this point, and we are now very rapidly transitioning to a point where larger and larger fractions of the content being produced and the decisions being made are not. And to link to what I'm working on with Coordisc: marketplaces, which are the most economically significant institutions that we have on earth, have been majority gen AI for decades now.

Speaker 2:

Let's get right into that. Yeah, tell us more, because this is a big surprise.

Speaker 3:

So BlackRock has a program they call Aladdin, which I think they launched in like 2007, 2008. And it's not really connected to the global financial crisis, in spite of the timing; that falling apart was also due to AI, but not gen AI. Aladdin is a gen AI system that supposedly sets their strategy for investment, and they own something like 20-ish percent, I think; between them and Vanguard it's something like 40% of all of the equity on the planet or something. And they sell that. So you can buy advice time from Aladdin, and you can get advice from their advice maker to go do what you're doing, which strikes me as insane. But people do that.

Speaker 3:

And even prior to that, computerization in these fields goes back to the seventies and AI doesn't have to be apparently human to still outperform us in one dimension or another.

Speaker 3:

So back in the sort of '70s and '80s, at the last gasp of actual human activity in marketplaces, in the commodity markets specifically, the ratio between paper trades and physical trades, that is, trades that were pretty much just being made to go back and forth versus trades that eventually were delivered, was, on average, about eight to one. Today, that average is between 40-to-one and 50-to-one and is continuing to increase, because the machines produce paper trades, and they produce a lot of them. So we had a market that was roughly 87% managed by human beings. We now have a market that's 97.5% to 98% managed by machines, and so our economy can no longer be laid at our feet. We're doing what the machines are telling us, and, for the most part, the people that are writing these programs have absolutely no possibility of understanding what the meaning of the outputs of these programs is.
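
As a sanity check on those figures: if there are r paper trades for every physical trade, the paper share of all trades is r/(r+1), which lines up with the percentages quoted.

```python
def paper_share(ratio):
    """Fraction of all trades that are paper, given paper:physical = ratio:1."""
    return ratio / (ratio + 1)

for r in (8, 40, 50):
    print(f"{r}:1 -> {paper_share(r):.1%} paper")
```

An 8:1 ratio works out to roughly 89% paper, and 40:1 to 50:1 works out to about 97.6% to 98%, matching the range Noah cites.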

Speaker 2:

Okay, scary.

Speaker 3:

I always think so, but I can't get anybody else excited about this stuff.

Speaker 2:

I think this is something a lot of people just are unaware of or they're just thinking they're trusting the experts, who are trusting the machines.

Speaker 3:

Yes, which the experts programmed and decided to trust. There's a very famous case, Renaissance Technologies; their founder died recently. They were a hedge fund that apparently was producing 60-plus percent annualized returns, and they actually wouldn't accept anybody's money. They were like, we only invest our own money; we don't want any of yours. And nobody's 100% sure of what they were doing, because it was totally private. But what they claimed they were doing was writing AIs, having them invest, turning all the cash over to them, and doing whatever they were told by the robots, regardless of what it was, because they didn't understand anything and the robots were figuring stuff out.

Speaker 2:

All right. So where does Coordisc come in?

Speaker 3:

Well, Coordisc creates an entirely new market mechanism that effectively creates an ethical incentive structure intrinsic to its operation. So the key thing about computers and markets is that they're both information machines, and what markets do right now in the financial space is use an emergent system. Emergence is a property of systems that comes into existence at very large scale. Things like the distribution of predators and prey in national parks are an emergent phenomenon. You can't really figure out what it's going to be ahead of time; it just happens as a result: this bear catches that rabbit, and that fox gets away, and whatever, and all of that sort of adds up to what happens. And the same thing happens in our marketplaces. Literally billions of orders come in over the course of a day. Every microsecond there's an opportunity for new orders to come in or for orders that are already in to be canceled. Some very tiny fraction of those orders actually cross, and trades occur. Every trade that actually occurs is published, and as a result, everybody gets to see the trades that get published, and people that pay for the premium stuff get to see the orders coming in. So everyone's making decisions based on what everybody else is doing all of the time, and it's a little bit like hot and cold, with everybody playing around the same thing, except that there's a real disconnect. There's a big difference between getting to see tomorrow what the price was at the end of the day and getting to see, a million times a second, what everybody's doing in real time.
And so the professional traders buy computers that are in the same room with the computers that are operating the marketplace, with communication links that are actually measured out to the millionth of a meter, because if your link were 100 meters long and my link were 99 meters long, then, fiber-optically, my signal would reach my computers about a 400-millionth of a second faster. That would give me a few hundred extra cycles of trading where I would get to basically just take money out of your account and put it into mine, and obviously that's what I would do. So everybody gets to have exactly the same length, and so on.
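
The arithmetic behind the cable-length obsession: a signal in optical fiber travels at roughly two-thirds the speed of light, about 2 x 10^8 m/s, so a one-meter difference in cable length is worth about five nanoseconds. A quick sketch (the fiber speed is the standard figure; the framing is ours):

```python
C_FIBER = 2.0e8  # approximate signal speed in optical fiber, ~2/3 of c (m/s)

def head_start_ns(extra_meters):
    """Nanoseconds gained by a cable that is extra_meters shorter."""
    return extra_meters / C_FIBER * 1e9

# 100 m link vs 99 m link: a 1 m difference
print(f"{head_start_ns(1.0):.1f} ns head start")
```

A few nanoseconds is many clock cycles on a modern machine, which is why exchanges equalize co-location cable lengths so precisely.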

Speaker 3:

And, of course, people that aren't in the rarefied position of having hedge funds worth hundreds of millions to hundreds of billions of dollars don't have access to this technology, the technology that can actually read terabytes of data every day and make somewhat sensible decisions on it. You know, they're the suckers. You come in: is this where I drop my money? Oh, it is? Great. Can I have anything? Oh, it's over there? I guess I'll go over there now.

Speaker 3:

Yeah, and so what I did is I effectively separated the information system into its own piece. Instead of having a two-sided market between buyers, who have some information and money, and sellers, who have some information and product, there's a three-sided market between producers with product, consumers with money, and negotiators (speculators, the informed) who have information. So the buyer can go in the negotiator door to negotiate and then in the producer door to buy, or vice versa, and the seller can likewise go in the negotiating door to negotiate and then in the consumer door to sell. But the paper traders, the 98% of everything that's mostly silicon at this point, can just go live in the negotiation room, where they're no longer picking people off. Insider trading is illegal because the insider trader makes money because you don't know what they know, and you accept a bad deal that you wouldn't have accepted if you did know what they know. What the negotiation market in my system allows is for you to make money by making everybody else get better deals.

Speaker 2:

Okay, right.

Speaker 3:

And so insider trading doesn't have to be illegal in my system, because instead of finding somebody ignorant enough to accept an incredibly bad deal, the insider can let everybody know that there's something very good or very bad about to happen, improve the outcomes of everybody across the entire market with that knowledge, and then take a share of all of that improvement for themselves. And that can be a bigger number than you can get out of having the good fortune to find one ignorant sucker that you happen to be able to really take advantage of.
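The comparison being drawn here, one big extraction from a single counterparty versus a cut of many small market-wide improvements, can be illustrated with toy numbers (all invented for illustration; none come from the episode):

```python
# Insider-trading style: extract the full mispricing from one
# counterparty who doesn't know what you know.
mispricing_per_trade = 10_000.0
insider_profit = mispricing_per_trade * 1  # one "ignorant sucker"

# Negotiator style: publish the information, improve every deal in
# the market a little, and take a cut of the aggregate improvement.
traders_helped = 10_000          # hypothetical market size
improvement_per_trade = 50.0     # hypothetical gain per deal
negotiator_cut = 0.25            # hypothetical revenue share
negotiator_profit = traders_helped * improvement_per_trade * negotiator_cut

# Even a modest cut of a broad improvement can dwarf one-off extraction.
assert negotiator_profit > insider_profit
```

The point is structural, not numeric: aggregate surplus scales with the whole market, while picking someone off scales only with a single victim's ignorance.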

Speaker 2:

Can anybody join your system?

Speaker 3:

In operation, the requirements are that you would have to have product in order to operate as a producer, and you'd have to have money to operate as a consumer. To operate as a negotiator, you'd have to have the self-confidence to believe that you can put useful information into the system, and you'd have to have at least a little bit of money to back up that self-confidence. But because the rates of return are so much higher than in existing markets, if you have more information than you have money, that's the kind of problem that's going to solve itself very, very quickly.

Speaker 3:

Somebody that's doing a really good job investing in modern markets might get a 20% return, so if you're starting with $1,000, you're going to double it every four-ish years. Sixteen years later you've doubled four times, and now you've got $16,000. In my system, you may well be able to double every year. So you start with $1,000, you double every year, and 16 years later, if you can actually stay good that long, you'd have $65 million. So if it turned out that you have a lot more information than money, you're going to be able to start exploiting that very, very rapidly.
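The doubling arithmetic above checks out; a quick sketch:

```python
def grow(principal: float, doubling_period_years: float, years: float) -> float:
    """Compound growth expressed in number of doubling periods."""
    return principal * 2 ** (years / doubling_period_years)

# A 20% annual return doubles money roughly every 4 years
# (the rule of 72 gives 72 / 20 = 3.6, so "four-ish" is fair):
print(grow(1_000, 4, 16))  # 16000.0, i.e. four doublings
# Doubling every year instead:
print(grow(1_000, 1, 16))  # 65536000.0, i.e. the "$65 million"
```

Sixteen doublings turn $1,000 into $65,536,000, which is where the rounded "$65 million" comes from.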

Speaker 2:

Yeah. Is it easy for somebody to understand how it works? Say you have some money and you're not happy with what you have for retirement, or for whatever pie-in-the-sky idea you have, so you put your thousand dollars in. Do you then have to learn the system? Because I've talked to other people who have figured out different ways (not like this, of course) where they have a lot of information that they'll sell you to figure out how to game the market and get better returns. But it's all still on you as the person who's investing.

Speaker 3:

This really isn't a passive investment environment; there's actually not a lot of value in that. The proper sort of ethical response for passive investment would be governmental intervention to radically raise the lower limits on savings accounts. Quite frankly, savings accounts probably should be the default passive investment, and for the average consumer, your savings account should probably outperform the stock markets. The fact that that doesn't happen is almost certainly entirely due to the financial system behaving in ways that are systematically unethical, effectively using its existing grip on information to weight things so that it's gaining returns at the general expense of the economy. And it's not exactly like that's unheard of.

Speaker 3:

In the Victorian era, and on into the Edwardian, there was a concept in England called the funds, and that was more or less what it was. These were sort of halfway hedge funds, but really more savings-account-type objects, that the gentry and the middle classes put their money into to be safe, and those things advanced at more or less the rate of the economy. The financial system saw maintaining them as essentially its duty. This broke down when the various global financial crises associated with the World Wars basically meant they couldn't hold it together anymore, and the new Keynesian idea that government intervention was supposed to just print money through these crises took over, which has been pretty much an unmitigated disaster ever since, and we're still seeing that. I mean, we just saw China's economy decide to take a little excursion, and China, which has been the largest money printer on earth over the last 16 years, decided to respond with the largest governmental liquidity intervention that's ever been announced.

Speaker 3:

So their stock market is doing great. Whether or not their actual economy is, or will do, great is not something that I could have any knowledge of. But yeah, one of the really serious issues when you dig into the economics and ethics of the financial system is that you have to reverse a lot of your core assumptions. One of the central tenets of competitive capitalism is that winning is good and good is what we pay for, and so making money is winning.

Speaker 3:

But in a service environment, your income is other people's costs. So imagine (doctors are the most common form of service professional that people interact with) that every doctor had a second doctor's office right next to theirs that guaranteed identical outcomes but charged half as much. Some people would stick with their family doctors, obviously. Some people would not, and in general people would start using the cheaper alternative. And if, as a result of that cheaper alternative, the family doctor went away and a new door opened up next door that said "guaranteed equal results, half what they're charging," and that kept happening, then a society with a continually declining cost of services, but no decline (or even an advance) in the quality of those services, would be massively better off. You could get a skilled neurosurgeon and cosmetic surgeon to fix your hangnail for you, rather than trying to figure out how to cut it off yourself because it's at an awkward angle. Seriously, rather than doing a DIY job putting a deck on your back porch, you could have the greatest civil engineering firm of the age build every little minor extension on your home for you, if it were cheap enough.

Speaker 3:

And again, going back to what computers are for and what they're actually good at: computers can move around information and process it at rates that human brains simply cannot match. So what if we start properly regarding the various people with billion-dollar or hundred-million-dollar incomes in the financial system not as sort of the winners of life, but rather as cost centers, such that if we can eliminate them, the entirety of society becomes better off? What if every financial decision that you made about your personal family life could be made by a confluence of, like, Jamie Dimon and, you know, the... fine, I'm not great with names, but you know who we're talking about here. What if all of those people were effectively managing your savings account for you, and everybody else's as well? We would all be a lot richer than we currently are, and they wouldn't be richer than they currently are. But at the end of the day there's, you know, 8 billion of us and five of them, so we're not going to care very much about that.

Speaker 2:

Amazing. We have to make this a "to be continued," because I know there's a lot more that I want to uncover with you, so I'm going to ask you one question that could be a whole other conversation, and then we'll close out the interview. I wanted to ask about currency, and whether in the future we're going to keep the ability to print money and have coins, or if it's all going to be digital, if we're really going to be operating through crypto and blockchain.

Speaker 3:

Well, I can say with some degree of certainty that we will not be going to crypto blockchains with current technologies as a dominant currency, because the existing crypto blockchains all have a thing called hash dependency: they have a fixed algorithm that describes how their blockchains are expanded, and that algorithm depends on a currently unsolved cryptography problem. But the nature of cryptography problems is that we don't know how to make unsolvable ones; we just know how to make ones that haven't been publicly solved. So once one of them gets publicly solved, then it's over. Edgar Allan Poe actually wrote about this, about why you had to publish your crypto algorithms, pointing out that the kinds of ludicrously simple cryptography used during the Revolutionary War, and the decoding wheel, which is the little thing that they put in cereal boxes for kids, are not really up to snuff anymore either. So, anyhow, that won't happen. As for digital currencies...
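The "hash dependency" mentioned here can be sketched as a minimal hash chain: each block commits to the previous block's digest, so the chain's integrity rests entirely on the hash function remaining unbroken (a generic illustration, not any specific cryptocurrency's format):

```python
import hashlib

def block_hash(prev_hash: str, payload: str) -> str:
    """Each block's identity depends on its predecessor's hash."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

chain = ["0" * 64]  # genesis placeholder
for payload in ["tx-batch-1", "tx-batch-2", "tx-batch-3"]:
    chain.append(block_hash(chain[-1], payload))

# Tampering with any earlier payload changes every later hash,
# unless an attacker can find SHA-256 collisions, which is exactly
# the "currently unsolved cryptography problem" the design rests on.
forged = block_hash(chain[0], "tx-batch-1-FORGED")
assert forged != chain[1]
```

If that underlying problem (here, collision resistance of the hash) were ever publicly solved, forged histories would become indistinguishable from real ones, which is the failure mode being described.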

Speaker 3:

Digital currencies would be exceptionally dangerous to take into general adoption, because they simply exacerbate all of the existing issues that we have with the financial system's inability to self-regulate. However, it's worth noting that virtually all of the existing currency is already digital, because the overwhelming majority of circulating currency is actually credit money. People pay for things with their credit cards or, more rarely, with their debit cards. People very rarely pay for things with physical cash.

Speaker 3:

I have a very hard time trying to work out what the actual day-to-day structure of a functioning civilization looks like, because the only thing I know for certain is that we're not living in one right now and nobody has ever lived in one before. So I can eliminate what I'm seeing, and I can eliminate anything I read about in a history book, but that doesn't tell me what would work. That becomes quite difficult, and again, a whole other conversation. But the issue that the hardcore crypto guys, and also the gold people, go on about is that the problem is in fact this inflation of the currency, mostly through the credit system, and that hardening our currency solves all our problems. And my analysis of the situation is that unfortunately that's not even remotely true. The markets themselves, because of those computers with the hundred-meter strings and the rest of it, are actually opaque enough in their own right that it would be possible to run the entire thing as a Ponzi scheme even with hard currency in this day and age.

Speaker 3:

Now, it's a lot easier to do with this Swiss-cheese tofu that we use, but it doesn't make a difference. So I believe that if we come up with markets that provide clear and accurate information across the entire public, then that would also reverse the incentives that presently exist to make up new money to let rich people pretend like they actually did things right, to cover up when they did things wrong. That goes back to events like Long-Term Capital Management in '97, and the response to all of the major financial crises we've seen since then has followed that same pattern, and that's not a pattern we can keep up for much longer.

Speaker 2:

Yeah, wow, thank you for that, and of course we'll have your website in the show notes. Noah, what is one piece of life advice that you would like to impart to our audience?

Speaker 3:

Well, I usually go with: stick to your principles, but also stick to your people. At the end of the day, even if we do manage to figure out how to get all professional advice and use out of machines, people are still going to psychologically and biologically require one another to keep making more society happen. So try to gauge things in terms of whether or not they are advancing that goal, and less by whether or not they seem to comport with the current fad, because the fads go away a lot faster than you'd expect.

Speaker 2:

Yeah, awesome. Thank you. This has been a really engaging, enlightening conversation. We'll talk offline and figure out when we can bring you back on to continue this discussion. Noah Healy, thank you so much for being here on Mediascape: Insights from Digital Changemakers.

Speaker 1:

To learn more about the Master of Science in Digital Media Management program, visit us on the web at dmm.usc.edu.
