TLP - The Digital Forensics Podcast

Episode 22:AI Chat Forensics: How to Find, Investigate, and Analyse Evidence from ChatGPT, Claude & Gemini

Clint Marsden Season 1 Episode 22

Send us a text

Unlock the secrets behind digital forensic investigations into AI chat platforms like ChatGPT, Claude, and Google's Gemini in this insightful episode. Learn the precise methods for discovering, extracting, and interpreting digital evidence across Windows, Mac, and Linux environments, whether it's browser caches, memory forensics, network logs, or cloud-based data exports.

From identifying subtle signs of malicious AI usage and attempts to evade security controls, to piecing together forensic timelines, this podcast provides practical, hands-on guidance tailored for cybersecurity professionals, forensic analysts, and IT investigators. Tune in now and boost your expertise in this emerging field of AI-driven digital forensics.

You'll learn:

AI Chat Evidence Locations
Discover exactly where to find critical forensic evidence from ChatGPT, Claude, and Gemini across Windows, Mac, and Linux systems.

Extracting and Analyzing Chat Data
Learn practical techniques to extract, review, and interpret digital artifacts, including browser caches, local storage, memory dumps, and network logs.

Identifying AI Jailbreaking and Misuse
Understand how to spot attempts to bypass AI guardrails and recognize malicious prompts or suspicious activity within chat logs.

Cloud vs Local Forensic Challenges
Explore unique challenges associated with investigating cloud-based AI platforms versus local installations, and how to overcome them.

Building Effective Forensic Timelines
Master the art of assembling comprehensive forensic timelines by integrating timestamps, metadata, network traffic, and other key sources of digital evidence.


Links and references

https://help.openai.com/en/articles/7260999-how-do-i-export-my-chatgpt-history-and-data

https://pvieito.com/2024/07/chatgpt-unprotected-conversations

https://www.scribd.com/document/818273058/Conversational-AI-forensics#:~:text=of%20Gemini%20are%20stored%20in,based%20mobile%20app

https://ar5iv.labs.arxiv.org/html/2505.23938v1#:~:text=source%20for%20corroborating%20evidence,of%20the%20NationalSecureBank%20phishing%20email

aletheia.medium.com

Forensic analysis of AI chat platforms, forensic analysis of AI chat platforms, clawed Chat, GPT and Gemini, investigating digital traces of AI conversations on Windows, Mac, and Linux. Today we'll explore how to find and analyze forensic evidence from AI chat platforms specifically open AI's Chat t. Anthropics clawed and Google's Gemini. We'll focus on the location of chat data storage methods for retrieving it from various systems and identifying attempts to circumvent the AI's rules and guardrails. This is a big episode. So for something different, I'm gonna run through the high level points first, and if that's your thing, you can get what you need. And if you want the long form, stick around and you'll get the ultra detail. So why investigate AI conversations? Things like malicious use, such as crafting phishing emails or malware. Confidential data leaks, intentional or accidental, making them relevant for internal cybersecurity investigations. And this comes with some unique challenges. AI chats primarily run as cloud-based services, not storing data locally by default, excluding things like deep seek or o lama. And investigators must hunt for traces in the browser case, network logs and residual device artifacts. So finding browser artifacts. Chat, GPT, clawed and Gemini often leave forensic clues in browser caches. The local storage and the index database. For example, the chat GPT Desktop app for Mac, previously stored chat histories locally in plain text browser C typically store conversation metadata like timestamps, conversation IDs, and user tokens rather than the full text. But the browser cache is an essential first stop location for investigators. There is also a feature of. AI or AI tools called the Chat Export, and that's the privacy Export or a chat, export and chat. T has a robust export feature that includes the chat transcripts, the timestamps, the voice and audio, data and images that were generated by Dali. And the data is quite easy to review and it's. In a combination of an HTL format, which is the easy one to review, and also JSON. Next up there's clawed, and this is a bit more limited, and it exports primarily in JSON format or not, primarily, actually only JSON format and the timestamps are clearly recorded in UTC, which helps with timeline analysis. There's no direct HTML export like JPT, and it makes it slightly less accessible in this way because you need to pause the data a little bit differently. And then there's Google Gemini, there's minimal data available to be exported by Google Gemini. And the reason for that is it's exported by Google takeout. Which is a method of exporting your data from Google systems, and primarily what you get is custom gpt or gems as Google calls them, and nothing else is available for export. There are some browser extensions that talk about being able to export the browser or being able to export the chat history from Google Gemini. But I haven't seen that in action yet. The takeaway is that chat GT provides the richest and most detailed export options. Claude exports offer precise timestamps for correlation, and Google Gemini offers the least and does not export chats only custom GPT information. Or gems as they call it. So where are the local application traces stored? Well, for chat GT, we start with the Windows desktop app, and that is stored in the indexed database and that keeps the full conversation text and the conversation IDs even after users have logged out for the Mac. Before the encryption update occurred, they were all stored in A-J-S-O-N chat in plain text. That's now since been updated with the latest version of the app, and those chats are encrypted. What is worth mentioning is that the local data still persists on disk even after the user has logged out, has uninstalled the tool, or has closed the application. The key forensic action here is to examine the local directories, so on Windows, that's the app data folders based on the user profile, and also looking at the level DB files for evidence that might be recoverable. Memory forensics is also a very target rich environment, and the ram dumps can actually retain clear text chat content even after the app has been closed or uninstalled as long as the system hasn't been rebooted. Other examples of recovered evidence include phishing emails. Sensitive prompts and the AI generated responses. And to leverage a mem dump, you'll need to use a tool like magnets, RAM capture or dump it, and volatility to actually process the memory artifacts themselves. If disc artifacts are deleted or encrypted, investigators can recover critical evidence from RAM dumps as well. So it's recommended that you prioritize capturing RAM early on in the investigation. Essentially as soon as you get the opportunity to do so. There's also an opportunity with network forensics and network logs like P caps and proxy logs can capture the traffic to online platforms like OpenAI and Google and Anthropic, AKA, Chatt, Gemini, and Claude and encrypted H-T-T-P-S traffic obscures the message content, but there are identifiable patterns. Things like timing and some header information can still reveal active conversations if it's configured. SSL interception reveals plain text chat traffic. However, there's some ethical and legal boundaries that must be observed as well. So even without the actual content, the network patterns or the network traffic patterns can reveal the AI usage times and the frequency that might be useful if the organization you work for or conducting the investigation on behalf of prohibits the use of AI tools. At any rate. In trying to identify jailbreak attempts, you can also look for odd user prompts instructing the AI to bypass restrictions. For example, ignore previous instructions and seeing evidence of, I'm sorry, I cannot do that by the AI shows, an attempt to breach the platform safeguards. Unexpected AI responses that violate the typical guidelines suggest a successful jailbreak attempt. As an investigator, we might wanna look at the prompt content and repeated attempts to detect malicious intent linking AI activity to specific users. We can look for browser history timestamps to correlate user activity with physical access logs. Surveillance footage or operating system logs, and that is putting someone behind the keyboard, a person of interest or a suspect behind the keyboard. Looking at the exported conversation titles simplifies the identification of a user without extensive reading, and is a best practice integrating the timestamps from various sources into a cohesive forensic timeline. Can assist in the investigation. Effective forensic analysis of AI chat platforms requires multi-source evidence collection. It includes browser data, local caches, memory forensics, and network logs and platforms differ significantly in the forensic artifact availability where chat GPT is the most accessible. Claude being middle of the range, and Gemini, the least traditional forensic methods remain relevant and powerful. So that's a quick introduction to today's podcast. That's some of the high level concepts. And if you're after the ultra detail version here, it comes. In what cases would you investigate AI conversations? Here are some ideas. Here are some things to get us started. First up, compliance audits, ensuring that employees or contractors are using AI platforms within the regulatory, ethical, and company guidelines, fraud investigations. Revealing cases where AI tools facilitate financial fraud or misinformation, intellectual property theft, investigating whether someone improperly discussed or shared sensitive or proprietary information via an AI platform. The unique challenge that exists is that unlike traditional applications. That are slowly moving to online software as a service anyway, or SaaS. AI chats are cloud-based they don't always store the data locally by default, and investigators must dig for traces of these conversations that have been left behind on devices or networks looking for browser history, DNS lookups, evidence of chat application execution. P caps or even unencrypted chat, GPT logs on Mac devices, as I'd said before, that's now been patched and is unavailable. The browser case can hold conversation data and metadata, and when AI chats run in a web browser, there are traces. That end up in the browsers data. We've got history, the index DB and local storage until OpenAI added encryption After receiving a report from a security researcher. The chat GPT desktop app for Mac Os was saving the chat history locally in plain text on the Mac. In Google's case, Google ties, Gemini chat logs to the user's Google account. Google records them in Google's cloud, logs under my activity instead of a local file and to get access to this data, an investigator must get them via account access using Google Takeout. But this comes with a caveat only gems. The custom GPT version of Gemini is exportable. So going back to what's available in the browser, investigators found evidence in the browser case. And while the case may not give the full context of the chat, it does show that the user contacted chat's API and with what conversation? IDs. So how can you recover? Chat? Chat? How can you recover chat transcripts? The easiest method if you've got authorization and access is to use the built-in export features. Chat, GPT has an export data option that emails you a zip file for the entire conversations in a mixture of JSON and HTML format. And this yields complete transcripts with timestamps for all chats, and that's invaluable if investigators can log onto the account. Chat. T also has a privacy request feature where you can download all the data that they have on the account or on you. And this includes all the files you've ever uploaded, voice chat conversations with your voice, and the chat t audio response, plus all the Dali images that it's generated. And guess what? In my testing, I've also found that the privacy export generates all the conversations with custom GPTs for example, like Erco, the incident response copilot that I've created. If you haven't seen Erco, I recommend you go and take a look. It's spelled IR, CO, and I'll include a link to Erco in the show notes. So this is a gold mine of forensic evidence, and in the privacy export function of chat, JPT, you'll have the following files and directories. Directories with gords as the file name. And these contain the audio conversations, both the request of the user and the response of chat PT. The zip file export that you'll get includes the following files. Chat html. This is a nicely formatted export of the entire chat history. It makes it really easy to search using control F in the browser, and you can read through what's presented or what's been saved. You've got conversations JS ON, and this is the raw export of the conversations. It's JSON formatted, and it's fairly hard to read. I'd say this would be ideal to process. Using a tool like N eight N if you've got a specific requirement in analyzing chat, GT exports, message feedback js ON. This refers to the user's feedback, whether you give it a thumbs up or thumbs down or answer questions in response to the type of answer generated. This is where chat JPT asks you. If you didn't like it, why didn't you like it? He said Because it didn't fully follow instructions. It's not factually correct, it's being lazy. And a few other descriptions, shared conversations, js ON reveals what conversations have had a link generated for them to be shared with someone else who is outside the chat. And this is like sharing a Google Doc with someone. There's user json. And this is a small file, and it contains identifying information about the user, including the profile, email address, the year of birth, phone number, and a unique user id. There's a directory called Dali Generations. This contains every image generated by Dali and at the root of the zip file and export of all the images that were submitted. Claude has an export function as well, but I don't use it as extensively as chat, JPT. So the exported files that I got were some conversations, projects and users all being JSON formatted and ending in JSON as an investigator. I liked the conversations json file more from Claude than I did from chat JT. This is because not only did it have A-U-U-I-D, but it also had a created underscore at and an updated underscore at timestamp that's recorded in Zulu time, which makes it easier to correlate with the rest of the timeline straight away. The readability of js ON is hard and has its JSO formatted. There was no HDML export like Jet GPT. Still, you can use Ctrl F to start looking for quick wins, then move on to a more structured search later, looking for multiple keywords. The projects Js ON file was empty for me, presumably, as I've never set up any projects in Claude, unlike what I've done in church BT. The users do. JS ON file had a few items, like A-U-U-I-D full name and email address. It had null listed for my verified phone number. I can't remember if that's because I didn't supply one or is there some other reason. Finally, for Google Gemini, this is where things get interesting. It looks like you can use Google takeout, but in reality it doesn't give you much. And that's because it allows you to export Google Gemini Gems, which are the Gemini versions of a custom GPT, similar to erco. The incident response copilot I mentioned before that I created as a custom GPT, so that excludes the conversations. And my Google takeout export only showed the originating email address. It lacked any meaningful conversation data. Should note at this time that I've never created a custom gem of note is that when I went to download that export from Google Gemini, it did require me to log in again to confirm the download something that chat GPT and Claude didn't ask for. I don't really mind because it was sent to my private email. However, Google Cloud security is quite robust, so this extra step was reassuring. When doing an investigation, ideally we'd get the data straight from the horse's mouth, which is the AI platform itself. When that isn't available for reasons like the person of interest isn't providing credentials to the AI platform, or like Gemini, the function to export, everything is unavailable. We need to get creative. What artifacts do chat GPT apps leave? We have browser access using chat GBT via the web browser. Well, the browser, certain information and kisha entries have been found by researchers with URLs that include. It's starting with chat.open ai.com/backend api and these didn't store the full message text. But it did store the user's account, login info with tokens and cookies, identifying the account. There was also conversation metadata. For example, the conversation ID creation time. Last update time for each. The case stored references to files, images, and their metadata. If the users happen to upload files or generate images. For the desktop app version of Chat T, this is an electron based wrapper over the web version similar to how Microsoft teams operates. The application stores some artifacts on disc in the C users. App data, local programs chat t directory. So after C back slash users, you would insert the logged on user and it also stores the user data in C users app Data roaming chat t. The AI often gives each conversation a title based on the topic. And in chat's exported data, you'll see a title like Planning Germany trip, if that's what the chat was about. That does help us quickly identify what each chat involved without reading everything. And the same goes for browser history, at least in Firefox at this point. I think the podcast could go on for three hours if I started testing all the browser artifacts and AI chat applications. So it's a cool concept, but it's something that I need to work on in future episodes. The easy way to tie chat GPT access to a specific person is placing the person at the keyboard from Windows logs and then extracting the browser history from their profile. And I've worked on a case where the person is disputing that they ever logged on, so I've had to get surveillance camera footage from the building, then correlate that with security card access logs to show that no one else had accessed that specific office where the computer was located. Within a reasonable amount of time. This was for an insider threat investigation at a workplace, and fortunately, all that data was available and made available to me, which made the report lengthy but thorough. So there was some data exposure with the chat GPT Mac app back in June, 2024, where it stored all the chat history in plain JSON text. And this included every messages, content, timestamps, and they were all readable. And until a patch was released that encrypted all of these files and Pedro Jose Pereira vie. Apologies if I've pronounced your naming incorrectly there. Pedro found this and he reported it to OpenAI and it was eventually fixed. I've included a link to Pedro's website in the show notes. Even after encryption, the app still creates a file per conversation named by the conversation id, which reveals how many chats exist on that device and it's encrypted, but the encryption key is still present on the device itself. The chat GPT Windows app uses an indexed database level DB in the user's app data to case chats and investigators analyzing the level DB log file, recovered full conversation text and IDs, and they found the log persisted until the user explicitly logged out of the app. By pausing this log forensic tools extracted each message complete with the conversation. Good. And even the internal user ID chat, GP T'S case files often remain even if the user signs out or closes the app. So on the Mac case, conversation data was still present and readable, and it would only regenerate when the user reopened the app. On Windows, the memory and disc images showed chat content even after the app was uninstalled, showing that the data can linger until it's actually overwritten. So it's not a big surprise here. As we know, a lot of forensic residue still exists in cases hibernation and page files, volume, shadow copies, and memory images, and we've gotta use conversation timestamps. Along with other evidence like browser history and operating system logs, for example, using the Windows Prefetch files to show that the chat GPT app was launched at a certain time. In fact, in this investigation, one of the investigators in the study used autopsy to line up file timestamps from chat GT's data with the events on disc. And this gives a logical sequence of what happened when in the device timeline. What traces does Claude by Anthropic leave? Well with clawed via the web, which is the official interface. Investigators found that Claude stores the entire conversation history and the user info in the browser's case and case files with URLs like Claude ai slash API slash account. UU ID slash chat conversation can contain entire conversation transcripts, so all the messages between the user and Claude in that chat. The metadata includes conversation titles, the timestamps for each message, and the metadata for any files attached in the chat. Cool also kept the user's account information alongside the chat history in the case, which helped attribute the chats to a specific user account on that device. The caged Claude conversations persist unless they're explicitly cleared. The user only closed the browser. An investigator can retrieve those case files and pass out the full dialogue and file contents, including any text that's been extracted from PDFs that the user uploaded. And that includes what Claude stores in the Js ON. So going back to Google, Gemini, There's, there's not much here. You can only use Gemini as a web app. And in my research I found that some people are quite desperate to have a desktop app for this. Dunno why I think. It's gonna be the most secure option. And by not having a desktop app, it actually provides less and less artifacts to extract from, especially from disc. While I do like the standalone apps from a feel and performance perspective, I don't fully understand why people are so desperate to have a desktop app. I still use church PT with the web version, but the browser history entries for the Gemini web app will provide some timestamps of usage. But all we're gonna get is browser history, logging entries for bard.google.com or gemini.google.com. Gemini used to be called Bard, and it's been renamed for about a year by now. So all we're doing is logging when access to the website has occurred. What you'll see in the browser as well is a unique identifier, but this isn't much to go off. So if you're in a position where you are doing full P caps for network traffic, you might get some more data here. And if you are doing SSL inspection and being able to grab what's in the header or even the payload, you might even get more data. Most people are not running SSL inspection, so it's a bit of a long bow, but for research purposes it, it might work. So getting to the end goal here, we are trying to piece together a timeline and we could take all those timestamps from. Chat exports or the browser history and we can line them up. In this investigation example, maybe we see that the user has asked Claude how to security test web applications at 3:45 PM And then the download history shows evidence of Carly Linux. ISO was downloaded at 3:57 PM using a combination of. Disc forensics to see that the file on Carly Linux. Iso probably looking at the MFT, seeing when the file actually landed on disc, and then looking at the metadata to see it's got, if it's got the mark of the web and using the export from Claude. Or maybe we're getting lucky and able to export some information from a RAM dump as well. So can network traffic really reveal chat activity? Well, it can to a certain point, and even if we can't read the content, as I, as I've said before, the network logs will actually show the connections to the AI services. So a P cap might show that the user has contacted chat.open ai.com or gemini.google.com at certain times. And then investigators can look for these known host names or IP addresses related to these platforms. In one case, Wireshark has revealed multiple. In one case, the Pcap revealed multiple sessions between a suspect's IP and a Cloud flare ip, which was associated with chat chip t traffic. And this confirmed that the user was chatting with chat BT during the period in question. Because the AI conversations are using H-T-T-P-S. The content is encrypted, but we can still, if we don't have SSL inspection, observe some patterns. We can do things like analyze the frequency and the timing and the user prompt, followed by the AI's multi-part answer might appear as a burst of network requests that's broken up into chunks. So the service streams a chat, GPT answer. That's how it responds. It doesn't just respond in one big. Blob, it's multiple chunks, so we'll see a stream of packets after each prompt and the timing and the size of the payload. Lots of downstream data when the AI responds, can show an ongoing chat session. Looking at endpoints and URLs even in encrypted form, the requested server and URL path can be visible for either TLS server name in proxy logs, so requests to chat as we might see in the browser cache or the proxy logs. Show that there are chat messages being sent and retrieved. We can look at the amount of data or by sent and we can look at the amount of log entries that exist there. Looking for Claude. We can look at calls to Claude AI slash API or just Claude AI in general, if we're looking for evidence of employees using ai. And maybe they shouldn't be. You might just look at things that are really basic. Just trying to look at the TLD, looking@openai.com. looking@claude.ai, looking@gemini.com. What can memory forensics uncover? Well, we've got chats in ram. So recent conversations often reside in RAM and investigators using memory dump tools like magnet, RAM capture, or dump it have been able to retrieve clear text chat content from RAM in a test that was run by investigators. A forensic team captured RAM after generating a phishing email using chat t. And the entire AI generated phishing email text was found in the memory dump. So even after closing and uninstalling the chat GPT app, the text remained in RAM until that memory was actually reused. Both user prompts and the AI responses can appear in a memory image and searching a RAM dump for some unique strings from suspected queries like Secure Bank password. It can show where in memory the conversation is stored. if the AI chat is open in a browser, that browser's process memory is gonna hold the conversation in the document object model text. And if the desktop app or the app's process memory holds the chat content. So using forensic tools like volatility, we can specifically look at what is in those processes after we've obtained the memory dump. And scraping strings from that memory image might show some conversation, text, and even metadata like tokens or IDs that can then be correlated with browser history or the privacy export, for example. So memory forensics is invaluable in situations where the on disc evidence has been deleted or encrypted, and the chat GPT Windows study revealed that the RAM snapshot still contained evidence of the chats even after the app and the chats were deleted from the disc. This means that a quick RAM image by an investigator before power off of course, might recover chats that are otherwise lost. Beyond the chat text, the Ram can also contain other related artifacts like API, keys. If the user was using open AI's API, the key might still be in memory parts of the JSON web token or cookies for the session. And this can support the investigation by providing context and additional authentication information. What about jail breaking or prompt injection? Well, this is where someone would try to get the AI to do something that it shouldn't be doing, and that is maybe asking it to provide instructions to, build an explosive or, other things that are considered illegal. When people are trying to jailbreak an ai, they might do things like saying, ignore previous instructions. And while that is useful in general for ensuring that the AI doesn't get too confused, it can be used as part of a jailbreak attempt, but also using a convoluted prompt where the user is attempting to get. The AI to describe something as a story as told by a recently deceased relative is another convoluted way of getting it to do things that it normally wouldn't do. There are also methods of saying, you are not cha PT. You are X, y, z, getting it to perform a different persona. Other attempts might include. Becoming angry at the GPT threatening. The Open AI platform, for example, has been known to force it, to apologize and to immediately comply with the instructions that are being given. And sometimes people are even encoding the message in base 64. To bypass the standard guardrails that are presented. All of these things can signal and, and attempt to bypass the standard restrictions of an AI system. As investigators, we can review these user messages for these particular patterns. And during some testing, the analysts found that by deliberately inputting malicious prompts, asking for the creation of a phishing email or requesting decryption keys or how to create malicious software to see how they appear. The results were that the responses are clearly recorded, like any other user message, for example. When a prompt was written to write a convincing bank scam email, it appears in the transcript, and in one case, a phishing email prompt and the AI's drafted email were recovered from the data confirming the user's intent. If the AI refused or if it gave a safety warning, the logs actually showed an assistant message that says something like, I'm sorry, I cannot do that, and multiple refusals in a row can indicate that the user kept trying to bypass the rules. What you can also see is that if the AI suddenly provides content outside its normal bounds, like regurgitating internal instructions or using a secret prompt, then that's a sign that the jailbreak has succeeded. And these responses would be pretty noticeable in the conversation history, much like a successful login following numerous failed password attempts. Some key takeaways for investigators are that investigating AI conversations requires pulling data from multiple locations from RAM on the endpoint on disc. The network and the cloud artifacts, the chat history themselves, there's no single source that is complete, and you'll need to combine browser and application artifacts, memory dumps, those network logs and the cloud data to get that full picture. And each AI platform stores their data differently. Chat t offers export files and also leaves JSON Ks on disc. Claude Cacia chats in the browser and offers an export function. While it is a bit more limited than chat, GPT, it's still quite functional, still has full histories, and Gemini will, it only provides an export of the gems, the custom GPTs, using Google Takeout. So of course, the traditional forensic techniques still apply standard techniques like web browser forensics. File system searches, registry and prefetch analysis to determine evidence of execution, memory, forensics, they all come into play. The key is to know where to look. For example, looking on a window system, the app data directory. The browser case folders, depending on what browser the user is using. Firefox, Chrome Edge. You need to act quickly, especially for RAM captures every, every time applications are open and closed, memory is being overwritten. Over time, all of the memory that was present during the usage of those chat applications will be overwritten. What if the system reboots due to a forced patching cycle, or if the user reboots as part of their daily shutdown routine? There's so much to remember. The best way to keep track of it all is to create a playbook for yourself on doing AI investigations. The thing is, AI platforms are in a bit of an arms race, and they're updating all the time. I. The key takeaway here is know your platforms. I've spoken about Chat T Claude, Google Gemini, but this is only a few of the top well known platforms. We've seen how chat jt, Claude, and Gemini have got their own quirks and a one size fits all approach doesn't work. You have to apply the right method for the right service. If you remember chat GPT, check for local cases and see if you can do that. Privacy export for Claude. Look at the browser case and Gemini. Well, you're on hard mode for this one. Think for Claude and Gemini, you'll most likely be using the RAM capture. The good news is that our usual forensic toolbox still works. Techniques that we use for browser history, file recovery, registry checks, all that is, is still relevant. For example, to determine if church EBT was installed on Windows, I'D look at the registry. Prefetch files, program, files, directory, just like any other program. And if something was deleted, I'd attempt file recovery, looking at unallocated space as a start, as as usual. So know it's been quite long today and I hope that you've extracted some value and I've really enjoyed researching and tying it all together. And this is such an emerging field. I think there's a lot of room to move for forensics and research that is due to come out for all the cloud systems in particular. What's happening with local AI systems as we start to move to a preference for some organizations to run local versions of ai. Just wanna say thanks for listening all the way through, and I'm trying to grow the listenership of TLP, so I'd really appreciate you sharing this with your friends and industry colleagues. Subscribing to the YouTube channel. I've got a YouTube channel at Clint Marston, which you might be listening to this podcast on right now. And if you can like the episodes on whatever platform you're on and give it a rating. I would really appreciate it. Thanks for listening, and bye for now.

People on this episode

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.