
Fed to Fed
Connecting government and industry to promote innovation through collaboration. The Fed to Fed podcast highlights the latest in innovation and technology modernization in the US government and industry. Join us for inspiring conversations with leaders and change-makers in the government technology space.
Fed to Fed Podcast - A Conversation with Clement Chen and Ethan Tu
Artificial intelligence (AI) is a hot topic in government and industry. How can we use this powerful tool responsibly? What challenges does this technology present in the government and private sectors?
Join us for today's exciting episode of the Fed to Fed podcast, where we sit down with Clement Chen of DSFederal and Ethan Tu of Taiwan AI Labs to talk about the real applications of AI in government. We discuss ethical considerations, security, case studies, and more as we just scratch the surface of how AI technology impacts our world!
About today's guests:
Clement Chen
Clement Chen is the chief executive officer of DSFederal, a woman-owned small business focused on data science, data services, and digital solutions for the Federal government. He is responsible for achieving the company’s strategic, financial, and operational goals as the firm embarks on its next phase of transformation and growth.
Ethan Tu
Ethan Tu founded Taiwan AI Labs, Asia's first open AI research organization specializing in next-generation AI solutions. AI Labs has shown significant results in medical AI/imaging applications, which contributed to the detection of COVID-19 from chest X-rays during the pandemic. He also started the Taiwan AI Federated Learning Alliance to encourage the pooling of data samples for better AI advancement while protecting privacy rights. Ethan also leads teams developing AI solutions for drones, smart cities, speech and facial recognition, and fighting fake news.
Thanks for listening! Would you like to know more about us? Follow our LinkedIn page!
Welcome to the Fed to Fed podcast, where we dive into the dynamic world of government technology. In this podcast series, we'll be joined by current and former federal leaders and industry trailblazers who are at the forefront of innovation. Here we speak openly and honestly about the challenges and opportunities facing the federal government, the Department of Defense, and their partners in the modern age, driving innovation and the incredible capabilities of technology along the way. Whether you're a federal leader, a tech industry professional, or simply fascinated by IT modernization, just like us, this podcast is for you, and we're so happy to have you tuning in.

Today we have two truly exceptional guests joining us: Clement Chen, CEO of DSFederal, and Ethan Tu, founder and president of Taiwan AI Labs and president of the Taiwan AI Federated Learning Alliance. Today, we focus on what government customers want from AI, the hype versus the reality of AI, use cases that are more promising than others, and the risks and challenges people should be paying attention to. Clement and Ethan, thank you so much for joining us today.

Thank you, Susan.

Well, why don't we go ahead and get started, and I'll ask the first question: what do government customers really want from AI?

I'll start off with that, and Ethan, feel free to pile on. I guess what they don't want might be a good place to start. They certainly don't want a Cadillac when a bicycle will do, so to speak. There's also, I think, an allergic reaction to bright, shiny objects, and I don't think many customers are necessarily interested in being the guinea pig for your innovation. There have to be some tangible, material proof points before you come in to work with a customer. Now, that said, there are many mission and operational needs that government customers are seeking, and they're looking for something that is practical and useful now, not some sort of field-of-dreams capability that might be there at some point in the distant future. There is some concern about FOMO, the fear of missing out, but I think they tend to want to experiment in ways that have, for lack of a better term, low risk and medium return, as opposed to a high-risk, high-return sort of setup. That's generally what we're seeing in the U.S. government market. Ethan, I'm not sure how you're seeing it in terms of the Taiwan government, as a proxy for how the international community might view what they want from AI.

Okay, so Taiwan has always been a good global citizen, which means, when it comes to AI, more than 50% of AI chips are manufactured in Taiwan. And not only the chips; the high-end computing power is also manufactured in Taiwan, and we have a lot of AIoT devices as well. So of course Taiwan's government embraces AI and encourages all industries to use it. However, when we talk about whether the government can use AI the way industry does: when the government thinks about using AI, it is more concerned about trustworthiness, responsibility, and data governance. For example, in Taiwan we care about human data. We want to avoid a situation where, when we apply an AI algorithm or an AI solution, in the end one giant tech company or a foreign government may own our citizens' data. So that part, data justice and data governance, is one critical part we think about. And in Taiwan we also care about people's privacy.
We want to enjoy the convenience of AI, but at the same time we don't want to hurt people's privacy, so federated approaches have taken hold in Taiwan. Of course, a lot of innovation is happening in AI, and it can also benefit the budget, saving effort and costs in government spending and making systems more efficient. That's how governments see the interest. But they also worry about, for example, job loss, or the change of jobs and the migration different people may face, so there are also a lot of education and digital literacy programs at work in Taiwan. Also, when we are using artificial intelligence, we care about how we can make sure this artificial intelligence is responsible before we apply an algorithm to make a decision we rely on. If you rely on an algorithm and it has bias, you may make a wrong decision; maybe that decision is more China-favored or Russia-favored, instead of following the United States' and Taiwan's standards. So this also includes transparency, AI bias evaluation and validation, and regulatory steps. I think the Taiwan government right now is very actively looking for AI solutions, but at the same time is pretty cautious about the trustworthiness and responsibility as well as the data governance part.

Yeah, Ethan, I'd like to add on to that. You know, in the U.S., it's interesting that there is a convergence in terms of overarching strategic priorities relative to AI. The Biden administration's White House executive order reflects many of the themes Ethan just talked about: safe, secure, privacy-preserving, trustworthy, responsible AI. And the White House executive order is rather sweeping. It addresses not only standards for AI security, but also how AI might impact privacy, and how it might impact consumers, patients, students, and workers, as well as the international collaboration arena. The public sector is a little bit different from the commercial sector in its use of AI, because the stakes involve a public good and the impact is extremely far-reaching, as opposed to the many subsectors of commercial industry that are optimizing profits and stakeholder value, so to speak. The public sector has a unique set of responsibilities, and the sweeping nature of its impact reaches all of its citizens.

Excellent. That was very helpful. So how does the hype versus the reality of AI differ?

There's a lot of hype. I think, Susan, you and I were both at HIMSS, the big health IT conference in Orlando this week, where we gathered together with 30,000 of our closest friends, and it was AI everywhere, everything everywhere all at once. What's interesting is, if we take HIMSS as sort of a proxy for the hype machine, every year there's a theme at HIMSS. In the past, the theme of the day may have been blockchain, mobility, patient engagement, population health management, or big data. This week it was certainly AI. And what I'll say about AI is, one, it's a fascinating thing, but what we ought not to do is foist onto it mystical powers, as if it's some sort of oracle emerging from the ether that is its own being. At its root, it's essentially a social collaboration tool, and it has to be viewed as such if we're going to take advantage of its promise, as well as avoid all of its potential dangers.
So certainly, from a hype standpoint, there are some concerns that it'll eradicate people's jobs, or that it may do all these wonderful, amazing things that make human involvement less pronounced. I think that's a lot of arm-waving, a lot of hype. But the reality of the situation is that it is very effective as a force multiplier for human activity. So much of our technology today is somewhat brittle, in the sense that it forces the human to act more like the machine in order to get the value out of the machine. The promise of AI reverses that a little bit: the machine comes more within the flow and the mojo of the human, and therefore the interactions may become a little more natural. So instead of going from a graphical user interface to a mobility app to a chatbot, now language becomes the user interface of words. Ethan, any thoughts on this?

I founded Taiwan AI Labs in 2017, and before that I was a principal development manager at Microsoft while they were developing Microsoft Cortana. I would say that new technology always comes with hype, which means over-expectation, and at the same time there's always a disappointment point; then there's reality, and a press through after that. So I would say that in the years 2013 to 2017, people saw artificial intelligence, like the Internet, as going to change everything in the future: if you have big data, if you have computing power, if you have the algorithms, you can, one by one, reach human parity in all the difficult fields. And of course there was a lot of marketing around artificial intelligence a lot of the time, including at my former company, Microsoft; there was always some over-expectation.

I would say 2023 was a very different year, because people had been thinking about artificial intelligence, and we knew it was coming, but we didn't know what it meant to us, what it meant to people. Then OpenAI, with Microsoft behind it, released ChatGPT, and I think suddenly people could feel that this was in their day-to-day work now. So I would say that after 2023, large language models and generative AI made a lot of people think the hype had become more realistic: okay, now I see artificial intelligence is near me, so how can I use it? So there is a lot of discussion and a lot of innovation. I would say last year there were thousands of new AI startup companies around the world, and a lot of them, of course, are based on generative AI, from text to images to music, all kinds of applications in different fields.

So I would say that every time there's a breakthrough, it comes with another hype. Like today, we see a lot of companies and a lot of government sectors thinking, "Oh, we can use artificial intelligence to do this, to do that." Yes, that's good. But of course there are also some realities we need to understand. There are always some regulations, especially in the public sector, and some questions of how we can make trustworthy, human-centric technology that serves the public good, instead of sending the general public into another evil empire or another government-controlled surveillance program. So that is another topic. At AI Labs we have worked closely with the government to determine on which paths we can go faster and on which paths we need to be very careful. We also work closely with the European Union on its AI Act, and we work closely with the government to assess the risks of AI.
After we know the risks of AI, we can know how to do the evaluation, how to assess the impact, and how to make sure, before we release to the general public, that there is a good process for what we call responsible AI: AI with a transparent, principled, traceable, verifiable, and auditable approach. So we have different programs we are working on right now in Taiwan.

You know, Ethan, I want to riff on something you mentioned. We're kind of in the shock-and-awe phase of generative AI, and I think what has really made it take off in the last year is the ease of use and the almost democratized access to some of its initial features. It makes startlingly great narrative text that's pretty well informed. It's almost like a parlor trick, almost like magic, but it's actually becoming very helpful. But what oftentimes happens with new technologies when they show up is that we try to shove them into the most demanding use cases, and that's where a lot of the disappointment happens. I think the IBM big data work with Watson and MD Anderson from ten years ago was a great example of this. MD Anderson ended up spending tens of millions trying to build an oncology expert advisor application of big data and the beginnings of AI. But as for data access, MD Anderson was in the midst of an Epic implementation, and so many practical realities of the operational exigencies of how health systems actually operate made that end-use application a failure. There's a tendency to take a new technology, whether it's generative AI or whatever, and shove it into the most demanding circumstances, which is not necessarily a great way to go. You don't want it to be doing everything for everybody everywhere; you want it to do something for somebody somewhere.

What's really great about what's happening right now is that there's a lot of experimentation going on, both in the U.S. government and internationally, to see what the universe of possibilities is. And as some of these use cases become more pronounced, and this is why Taiwan AI Labs is so interesting to us, and why DSFederal wanted to form a strategic alliance with them, much of what they're doing has extended beyond just pilot experimentation; they're dealing with material application of the technology in a way that scales. There's a continuum, a crawl-walk-run phenomenon, that is going to be in play, and what we found particularly compelling about Taiwan AI Labs is that they're moving along that continuum at a pace probably a little further than most. It's very exciting.

So you mentioned use cases. What are some use cases for AI that are potentially more promising than others?

Let's start with HHS, Health and Human Services, since that's a focus for DSFederal and, frankly, right, wrong, or indifferent, health affects everyone without exception. I think it's instructive to see how it has unfolded at HHS over a period of just two years. Back in FY22, if you look at the use case inventory at HHS, you'll see there were at least 50 or so use cases; the real numbers are probably a lot higher, but most of the experimentation was happening at NIH. And when you look at the various features that might be of interest, image exploitation, search, process automation, prediction, de-duplication, and so on, the categorization function actually became one of the most prevalent across all the use cases.
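As a rough, purely illustrative sketch of that kind of auto-categorization (this is not the actual NIH pipeline), unstructured grant text can be scored against category descriptions by text similarity; the categories, descriptions, and threshold below are all invented for illustration.

```python
# A minimal, hypothetical sketch of auto-categorizing unstructured grant
# text against research condition/disease categories. Categories and
# threshold are invented; the real NIH pipeline is far more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical category definitions (the real inventory has ~350).
CATEGORIES = {
    "Diabetes": "insulin glucose metabolic disorder type 2 diabetes",
    "Oncology": "tumor cancer chemotherapy metastasis carcinoma",
    "Infectious Disease": "virus infection vaccine pathogen outbreak",
}

def categorize(grant_text: str, threshold: float = 0.05):
    """Tag a grant abstract with every category whose description it resembles."""
    vectorizer = TfidfVectorizer()
    # Fit on category descriptions plus the grant text so they share a vocabulary.
    matrix = vectorizer.fit_transform(list(CATEGORIES.values()) + [grant_text])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [(name, round(s, 3)) for name, s in zip(CATEGORIES, scores) if s > threshold]

print(categorize("A study of glucose regulation and insulin resistance in type 2 diabetes"))
```

A production system would use far richer models and curated taxonomies, but the shape of the task, unstructured text in, multiple category tags out, is the same.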
That emphasis on categorization makes a lot of sense, because it's sort of the near-term crawl phase of the use of the technology, and it became extremely effective. So if you look at the 50 or so use cases that were in play a couple of years ago at HHS, categorization was key, and a lot of it was linked to grants data. DSFederal specifically was involved in some of this, particularly in the research condition and disease categorization function of the Office of Extramural Research. NIH processes $30-40 billion in grants a year, but it has to deal with unstructured grant applications that then have to be categorized, taxonomized, auto-classified, auto-tagged, or whatever, to track them across the 350 research condition and disease categories for reporting purposes, accountability purposes, and things of that nature. And those are just the public-facing categories; behind the green door, there are thousands of other categories being assessed for relevancy as the next official category, or being experimented with for some other end purposes in mind. So the practical use of AI, doing AI for real, is actually happening there.

Fast forward to this past year. Ethan mentioned that there was a leap-ahead step function not only in the capability embodied by the generative AI movement, but also in overall experimentation and adoption. I think the use case inventory at HHS right now is more than triple what it was literally the year before, and I think FDA is at the forefront of a lot of that. So now they're looking not only at categorization but also at more aspects of prediction, as it applies to anything from the opioid problem to drug interaction problems, et cetera. What you see is a focus on mission data analysis more so than operations, and I think this is instructive for us, because that's our natural approach: we take the new technology and shove it into the mission app. But I'll tell you that some of the more interesting, or more practical, use cases are actually in operational effectiveness and efficiency, and that's not where a lot of the use cases are happening at HHS. That may change over time, because I think the foothold of proving its value may ultimately express itself there first, before mission data analysis, only because of the level of complexity required the more mission-oriented you become. I know Ethan and Taiwan AI Labs have actually addressed not only operational effectiveness but also mission suitability, and they're doing it in very demanding circumstances. I'd love to hear Ethan elaborate on some of that.

Yes, definitely. I actually worked for the NIH Human Genome Research Institute back in 2003 to 2006; I was a senior programmer there. Similar to the United States, health care is a big issue for all countries, because government spending keeps increasing a lot; health is actually a pretty complicated problem. Taiwan's government spends more than 800 to 900 billion Taiwanese dollars on health care every year. Therefore: how can we make health care more efficient? How can we predict human disease more efficiently? Before you try to cure patients, how can we support the doctors so that their diagnoses can be more precise, so you don't need to give people unnecessary drugs or tests? So now a lot of precision health and precision medicine research has been initiated in Taiwan.
So at Taiwan AI Labs, we work closely with Taiwan's medical centers; right now we cover the health care data of 99.7% of the 23 million population. With this health data, we work with the best doctors in all our medical centers to understand disease, to support doctors in diagnosing disease with AI, and to do genome-wide association studies that bring together local genomic data and health condition data. That's one nationwide project we are working on in Taiwan.

During the pandemic, for example, we also worked closely with Taiwan's hospitals on developing artificial intelligence to support doctors in treatment, to tell from a chest X-ray whether a patient has COVID or not, and to support contact tracing with the latest approaches, without the government collecting users' data. That's also one research project we did.

We also worked closely with DSFederal, for example on the CDC idea we discussed with the US CDC. There's also an ongoing project on how, without centralizing data, the CDC can offer data science as a service deployed to state governments with a federated analytics approach, so that AI Labs data scientists, DSFederal, and the US CDC can work together to support the country in disease prevention and diagnosis without collecting the data. That's also a very interesting project we are working on.

In Taiwan now, because of ChatGPT, we also have some very interesting projects. For example, we are the first company in the world to use ChatGPT along with medical image diagnosis technology: we trained a large language model to identify brain tumors, so we can identify whether a patient has a brain metastasis tumor and then use ChatGPT to support the doctor in writing a report. In this report we follow the Taiwan national insurance agency's guidance: if there is an observation, there is evidence, and the findings are then used to write the diagnosis report. This can again help doctors reduce their time and effort, and also reduce misdiagnosis. So there are different projects we're working on; overall, right now in Taiwan, we cover 80% of medical centers, with more than 122 projects under way. Hospital-based doctors and AI Labs teams across every field in Taiwan work together for health care innovation. So I would say, yes, health care is an important area we are focusing on right now. I also echo what Clement said: it is so important to do this at scale and try to land it in the hospitals.

Yeah, Ethan, I want to jump in on something. You know, Susan, up until now we've been talking with a laser-like focus on health applications, because obviously that's existential for everyone. But what's interesting is that AI has the ability to cross-pollinate insights across multiple markets. Look at something like the epidemiological characteristics of disease spread, whether it's COVID or flu or H5N1. If you look at the disease propagation patterns, where a localized disease infects a number of hosts and then converts into a raging epidemic, that sort of outbreak analytics is actually not unlike what you see in the willful or otherwise spreading of disinformation, related to anything from political elections to anything else.
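To picture the federated analytics approach Ethan mentioned for the CDC project, here is a minimal sketch of the core idea: each site computes a local summary, and only the summaries, never the raw patient records, leave the site. The sites, record format, and statistic below are all invented for illustration.

```python
# A minimal sketch of federated analytics: each site computes a local
# summary statistic and only the summaries (never patient records) are
# shared with the coordinator. Sites and case data here are invented.
from dataclasses import dataclass

@dataclass
class LocalSummary:
    site: str
    cases: int    # positive cases observed locally
    tested: int   # total tests run locally

def local_compute(site, records):
    """Runs inside each state or hospital; raw records never leave the site."""
    return LocalSummary(site, cases=sum(records), tested=len(records))

def aggregate(summaries):
    """The coordinator sees only the aggregates."""
    cases = sum(s.cases for s in summaries)
    tested = sum(s.tested for s in summaries)
    return cases / tested

# 1 = positive test, 0 = negative; each list stays at its own site.
sites = {"Site A": [1, 0, 0, 1], "Site B": [0, 0, 1, 0, 0]}
summaries = [local_compute(name, recs) for name, recs in sites.items()]
print(f"Pooled positivity rate: {aggregate(summaries):.2%}")
```

Real deployments add protections such as secure aggregation and differential privacy on top of this basic pattern, but the design choice is the same: move the computation to the data rather than the data to the computation.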
And what's kind of interesting is that most of the use cases we've been discussing have been health-centric. However, Taiwan AI Labs actually leverages the same technologies and problem patterns to address, for instance, election disinformation in another realm, with their Infodemic platform. This has captured the attention of quite a number of folks in the U.S. government market, and Ethan has been invited by Hillary Clinton to do a panel with the Aspen Institute on March 28 in New York City to talk about AI, and specifically the risks of AI's impact on the political realm. Ethan, perhaps you want to say a few words about Infodemic as another use case founded on the same set of technologies.

That is actually a very interesting topic. A lot of the time when we think about technology, the fundamental technology is actually the same. The way we understand how humans talk, like ChatGPT as a large language model: if you look into a genome sequence, it is similar to a language model. We learn how genomes "talk," and therefore we can understand which genomic differences a disease is related to. The same is true for information propagation, as with COVID-19.

Taiwan is also facing a big challenge, because we are number one in terms of foreign governments disseminating false information to us. A lot of the misinformation that happens in Taiwan will also occur in other countries in the same pattern; for example, running out of toilet paper. That kind of information manipulation happened in Taiwan first, then in Japan and Australia, and then went to the United States. We see one wave of information manipulation after another happening in Taiwan first. Therefore, we also use large language models like ChatGPT to analyze all the information on news media and social media, and we train our artificial intelligence programs to identify authentic and inauthentic users, which means we can distinguish a real user from a fake account, tell whether an account is being controlled by a campaign company, and tell whether that company is controlled by China or Russia. So we can identify the different accounts.

And interestingly, we find that if you look into social media nowadays, a very large percentage of misinformation is actually dispersed by those troll accounts, a very large percentage on almost every public issue. You can imagine: when we got email, after a couple of years spam increased to more than the email sent by human beings. The same thing is happening in social media now. According to our studies during the pandemic and during the Taiwanese election, a considerable percentage of the information spread on social media is actually not from real users; it's from troll account groups.

Take Taiwan's presidential election, for example. We monitored billions of activities last year, and over the year we identified tens of thousands of troll activities. With that troll activity, we can identify which topics the trolls use to try to mislead people, and which topics they use to try to discredit the United States, for example. In April 2023, when Joe Biden said the United States would support Taiwan if there were a military invasion, we saw a lot of trolls come onto social media to help discredit that, saying, for example, "Oh, the United States is supporting Taiwan to develop bio-weapons in the South China Sea."
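One behavioral signal that a troll-detection system of the kind Ethan describes might use, among many others, is a burst of near-identical posts from many distinct accounts within a short time window. The toy sketch below illustrates only that single signal; the accounts and posts are invented.

```python
# A toy illustration of one coordination signal: clusters of accounts
# posting near-identical text within a short time window. Real systems
# combine many such behavioral signals; this is not AI Labs' method.
from collections import defaultdict

posts = [  # (account, minute_posted, text) -- all invented
    ("user_a", 0, "candidate X is a foreign puppet"),
    ("bot_1", 1, "candidate x is a foreign puppet!"),
    ("bot_2", 2, "Candidate X is a foreign puppet"),
    ("user_b", 55, "great turnout at the rally today"),
]

def normalize(text):
    """Lowercase and strip punctuation so trivial variants match."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch == " ").strip()

def coordinated_clusters(posts, window_minutes=10, min_accounts=3):
    """Group posts by normalized text; flag bursts from many distinct accounts."""
    by_text = defaultdict(list)
    for account, minute, text in posts:
        by_text[normalize(text)].append((minute, account))
    flagged = []
    for text, items in by_text.items():
        items.sort()
        accounts = {account for _, account in items}
        if len(accounts) >= min_accounts and items[-1][0] - items[0][0] <= window_minutes:
            flagged.append((text, sorted(accounts)))
    return flagged

print(coordinated_clusters(posts))
```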
So we discovered this kind of activity. During the election last year, the major thing we did was identify the impact of information manipulation. We identified the information created by troll accounts, a lot of it created by generative AI, including text, images, and short videos. And I would say that fact-checking or debunking is useless today, now that we have generative AI and all these troll accounts. So the strategy was, working closely with a group of Taiwanese social scientists, to identify and disclose what is happening on social media. In this way, we encourage people's resilience and lessen the information manipulation. Yeah, that's what we did in Taiwan.

That was terrific. Thank you so much for that overview. So now we need to talk about some of the risks and challenges associated with AI. Clement, from your perspective, what are you seeing?

Most certainly the integrity of the training data: some of it is willfully bad, and some of it codifies biases that already existed in the data a priori. That's probably one of the biggest risks. Second is the willful misuse of it, whether in deepfakes or what have you. There are also third-party risks creating exposures: IP infringement, risk of data exposure, data loss. And then there's always a new threat vector, with generative AI enabling a new breed of malware, et cetera. Lots of risks, that's for sure. Ethan, any other thoughts on that?

Yes, I echo what Clement just said. There's always bias in AI. A lot of people are saying, "Okay, now I use artificial intelligence." Facebook, for example, will say, "I use AI to censor the misinformation." But who audits the AI? The bias here comes from the quality of the data, for example from the bias of the labelers. That's also the reason why, when we talk about a content censor reading Mandarin, we always ask, "Did you train this content censor in Taiwan or in China?" That's very different, because the data and the social context are very different. How you collect the data and how you label it can lead to very different results.

The third part is: what is the risk if we misuse artificial intelligence? For example, you cannot ask ChatGPT for health care advice or a diagnosis; ChatGPT is not designed for health care situations, nor for financial situations. A lot of people try to rely on ChatGPT to understand the world, but a lot of information from ChatGPT is actually not real; it contains misinformation. Then there is the misuse of AI, or even using AI to create misinformation: you can train ChatGPT to generate news that never happened, and people think it's real, because ChatGPT is so convincing. So you can convince everyone through some prompt engineering, and the same goes for deepfake videos and deepfake images. In the past, we said, "To see is to believe." In the future, we say, "Even what we see, we cannot believe; we still need to verify." That is a good point.

So I would say there are a lot of new issues and new risks happening now that we have already identified, especially for democratic countries. As we just mentioned, information manipulation is a trending topic, which means you can now leverage artificial intelligence to create a lot of accounts, spread a lot of information, and make people believe something that doesn't exist. You can have troll accounts repeatedly say "look," and that becomes the misleading information and makes a real impact.
The toilet paper case was a very good example of that. So I would say, because at Taiwan AI Labs we dedicate our effort to trustworthy and responsible technology, the major challenge for us is how we can put humans back at the center of artificial intelligence, so that we control AI instead of being controlled by AI. On the other side, maybe a government, from Russia or China, or a big tech company wants to make a profit. So how can we make sure the technology is being used in a trustworthy way? How can we evaluate it, and how can we work together with the regulators in the future? A lot of people, for example in the European Union, are saying that when you use AI for the general public, it should be like an FDA-approved drug: before you can say "this AI is good at diagnosis," you need clinical-trial-level validation with local data and local people. So there's a lot of discussion going on right now, and I see a lot of effort in Taiwan and a lot of experts with good results. Of course, since DSFederal works closely with the US federal government, maybe we can share our experience and bring the solutions we built in Taiwan to the United States. And of course, all the big tech companies are in the US, so we can also imagine that if we work closely with DSFederal on, say, health care, together with a big tech company, then with the transparency methodology and the scale and strategy we use in Taiwan, we can make this more impactful and more practical.

Excellent. Thank you so much for that. So what hot takes do you have on AI?

I guess the first builds a little on what Ethan touched on: keeping the human in the central position with AI. The human always has to be the noun. The AI can only be an adjective, and possibly a verb, but the human being, as opposed to the human doing, has to be central to it. And if we don't view AI as a tool, then ultimately we're going to be the tool. That would be the first thing I would say.

The second thing I would say is that new technology plus old process always translates into expensive old process. When AI is on the scene, whether people are thinking about use cases or whatever, I can't tell you how many times people ask me, "What should my AI strategy be?" When you word the question as "What should my AI strategy be?", I'm 99.999% certain the answer is going to involve AI, and that's the problem. Again, AI is the adjective or the verb; it's not the noun. Ultimately, if you're going to employ AI in a way that's meaningful, productive, and efficient, you're probably going to have to co-evolve the processes in concert, not just the process but the organizational construct and everything else, in concert with the implementation of the technology. Otherwise you're just going to end up with a very expensive old process. Ethan, do you have other thoughts?

Similar to what Clement mentioned: right now we talk about AI everywhere, every day, but I think after a couple of years maybe we won't need to mention AI anymore, because AI will already be everywhere. So in the future, I would say, artificial intelligence will become part of our day-to-day life. Therefore, how can we live with AI in a trustworthy way? There will be a lot of codes of ethics we need to put into practice. So you cannot view AI only as a machine or a technology to control a disease or to control human beings.
Otherwise, these technologies may go in the wrong direction, and we will not be able to change them back. I think that is also the reason for the discussion happening on March 28. There's a global worry about artificial intelligence in the future, because this year is actually a super election year: I think over 60 democratic countries will hold elections. At the same time, a lot of technical people are already telling the public that, as in Taiwan, we know information manipulation using AI is already happening, and a lot of smart people are saying the democracy index around the world is essentially declining, because artificial intelligence and the power of using technology to control human beings' mindsets is so powerful. So you can see it controls not only how people think but also everyday decisions. I know that when a lot of people want to go to a restaurant, for example, they find they all pick the same restaurant, because they all look at Google Maps. And if your intelligence is all the same, they will all ask ChatGPT, or when they make a decision there will be another AI making it, so in the end all the decision-making will reside in one superintelligence. Then how do we audit this superintelligence? Does it work for the government, work for profit, or work for the good of human beings? That's a very interesting topic we can discuss when we talk about government technology.

I'll tell you, Ethan, it would be really calamitous if it turns out that the machine intelligence becomes the real intelligence and the human becomes the artificial intelligence.

Well, I'll say this has been such an incredible discussion. I have learned so much, and I'm so excited to have had the opportunity to meet you, Ethan, and certainly to collaborate with you as well, Clement: to have these conversations, to provide awareness for people, to make those connections, and to see what's happening not only here in America but also internationally, and how all of that needs to be connected so we can ensure we are driving toward a trustworthy state. So thank you both so much for your time and your perspective. And Ethan, I so look forward to the session you're doing with the Aspen Institute on March 28 as well.

Thank you, Susan. Thank you for having me today.

Thanks, Susan. Appreciate the time.

And thanks for joining us, Ethan. This has been great. Thank you.

Thank you.

This concludes today's episode of the Fed to Fed podcast. If you enjoyed this episode, please don't forget to subscribe, rate, and leave a review. Your feedback helps us continue bringing you thought-provoking conversations with the brightest minds in government technology. Stay tuned for our next episode, where we will continue to explore opportunities to harness the power of technology and explore what's next in developing a more innovative and efficient government. Until then, this has been the Fed to Fed podcast by Govtech Connects. Thank you for joining us.