
AI Everyday
Matt Wallace, Tech CTO, covers innovation in AI with an eye toward interesting takes for executives, entrepreneurs, and software engineers.
Episodes
27 episodes
AI Everyday #27: Special Guests
For this AI Everyday, I'm turning the reins over to the dynamic duo at the Deep Dive podcast, as they discuss Kamiwaza, the Enterprise Generative AI platform from the company I co-founded. Please enjoy! (Note: slightly improved episode ...
Season 1 • Episode 27 • 9:21

AI Everyday - #26 - GPT4-o, faster, cheaper, and more cool OpenAI Updates
Matt Wallace talks about the latest OpenAI announcements regarding the release of GPT-4o and its improved speed, cost, and performance compared to other AI models. He explores features like Elo ratings, real-time interactions, and potential applica...
Season 1 • Episode 26 • 8:17

AI Everyday - #25 - Can you believe THIS?! Gemini 1m token context & OpenAI Sora
Matt talks about the big things this week: Gemini 1.5 Pro with a 1M-token context, and OpenAI's text-to-video model Sora, which generates incredibly high-quality videos up to one minute long.
Season 1 • Episode 25 • 8:18

AI Everyday #24 - Faster and Faster!
7 wild updates from this week in about 8 minutes. #AI moving at crazy speed!
8:22

AI Everyday #23 - Hands on & discussion on vLLM - high speed inference engine
Hands-on demo and discussion of vLLM, a high-performance inference engine supporting continuous batching and paged attention.
Season 1 • Episode 23 • 6:03
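For readers who want to try vLLM themselves, here is a minimal sketch of its offline inference API; the model name and sampling settings are illustrative, not from the episode.

```python
# Minimal vLLM offline-inference sketch (model choice is an example, not from the episode).
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")   # loads weights from Hugging Face
params = SamplingParams(temperature=0.7, max_tokens=256)

prompts = [
    "Explain continuous batching in one paragraph.",
    "What is paged attention and why does it help throughput?",
]
# vLLM schedules these requests together, continuously batching new sequences
# as earlier ones finish, with KV-cache memory managed via paged attention.
for out in llm.generate(prompts, params):
    print(out.outputs[0].text)
```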

AI Everyday #22 - BootPIG: DreamBooth-worthy image modification without fine-tuning
Matt reviews BootPIG, a paper/model that provides DreamBooth-style modification of images without fine-tuning.
Season 1 • Episode 22 • 5:11

AI Everyday #21 - Phind, a CodeLlama fine-tune, beating GPT4 at HumanEval
Matt discusses his results from testing Phind-CodeLlama-34B-v2, which beat GPT-4's zero-shot HumanEval score by a good margin.
10:30

AI Everyday - #20 - Llama2, GPTQ Quantization, and Text-Generation-Webui
Matt discusses Llama2 with GPTQ quantization, which is much more powerful than previous methods of quantizing model weights, and demos text-generation-webui.
9:06
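The episode demos text-generation-webui; as a rough programmatic counterpart, here is a hedged sketch of loading a community GPTQ quantization of Llama 2 with the transformers library. It requires the optimum/auto-gptq backends, and the repo name is an example, not the one used on the show.

```python
# Hedged sketch: load a GPTQ-quantized Llama 2 checkpoint via transformers
# (needs `pip install transformers optimum auto-gptq`; repo name is illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Llama-2-13B-chat-GPTQ"        # example community GPTQ build
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Why quantize model weights at all?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```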

AI Everyday - #19 - Giving ChatGPT root on Linux
Matt gives ChatGPT root access on a Linux box. It can read and write files and run commands. He tells it to give itself long-term memory, and it installs Weaviate as a vector DB. What's next?!
Season 1 • Episode 19 • 7:30
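To make the pattern concrete, here is a deliberately simplified, hypothetical sketch of the loop described: the model proposes shell commands and a human confirms each one before it runs. This is not Matt's actual setup (it uses the pre-1.0 openai library style from that era), and running anything like it as root is obviously risky.

```python
# Hypothetical sketch of "let the model drive the shell" with a human confirmation step.
import subprocess
import openai

history = [{"role": "system",
            "content": "You are operating a Linux box. Reply with exactly one shell command."}]

def step(goal: str) -> str:
    history.append({"role": "user", "content": goal})
    resp = openai.ChatCompletion.create(model="gpt-4", messages=history)
    cmd = resp.choices[0].message["content"].strip()
    if input(f"Run `{cmd}`? [y/N] ").lower() == "y":
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        history.append({"role": "assistant", "content": cmd})
        history.append({"role": "user", "content": f"Output:\n{result.stdout}{result.stderr}"})
        return result.stdout
    return "(skipped)"

step("Check how much disk space is free.")
```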

AI Everyday - #18 - Deceptive headlines for interesting studies
Matt discusses the paper "How Is ChatGPT’s Behavior Changing over Time?" and the crazy headlines that followed its publication, which are misleading a lot of folks.
8:34

AI Everyday #17 - Llama 2
Discussing Llama 2 - how does it stack up, and what does it mean for open LLMs?
6:30

AI Everyday - #16 - ChatGPT Code Sandbox w/ Many Examples
On this episode, Matt reviews the ChatGPT Code Sandbox, which just rolled out to ChatGPT Plus subscribers. He goes through a half-dozen example use cases, from bulk text analytics of PDFs to rapidly adding to a Flask app.
Season 1 • Episode 16 • 29:12

AI Everyday - #15 - Databricks Acquires MosaicML, new LLM atop the HF leaderboard, and resources
Matt discusses Databricks' acquisition of MosaicML, the new Falcon-40B model atop the HF OSS LLM leaderboard, and some resources.
9:06

AI Everyday #14 - Hands on With a Simple Langchain Map-Reduce Example
In this episode of AI Everyday, Matt reviews some code and output from a simple Langchain application that uses 2 prompts in a map-reduce chain to digest 3+ hours of podcast transcripts into pages of high-level notes, then digest those notes to...
12:28
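As a rough illustration of the approach described, here is a hedged sketch of a LangChain map-reduce summarization chain; the prompts, file name, and model are stand-ins rather than the code shown in the episode.

```python
# Hedged LangChain map-reduce sketch: summarize chunks, then combine the summaries.
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.docstore.document import Document

map_prompt = PromptTemplate.from_template(
    "Summarize this transcript chunk as bullet-point notes:\n\n{text}")
combine_prompt = PromptTemplate.from_template(
    "Combine these notes into a concise, high-level outline:\n\n{text}")

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
chain = load_summarize_chain(llm, chain_type="map_reduce",
                             map_prompt=map_prompt, combine_prompt=combine_prompt)

transcript = open("podcast_transcript.txt").read()       # placeholder file name
chunks = RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=200).split_text(transcript)
docs = [Document(page_content=c) for c in chunks]
print(chain.run(docs))                                    # the final, digested notes
```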

AI Everyday #13 - ChatGPT Functions, Plug-ins, and the AI-ification of things
Matt discusses ChatGPT Functions, their relationship to plugins on the ChatGPT web service, and how functions in the API can be used as a hack to constrain the GPT API's output to a tightly fixed format.
Season 1 • Episode 13 • 9:00
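For context, here is a hedged sketch of that "fixed format" trick: define a function schema and force the model to call it, so the arguments come back as structured JSON. It uses the pre-1.0 openai Python library style from that era, and the schema itself is just an example.

```python
# Hedged sketch: force structured output by making the model "call" a function.
import json
import openai

functions = [{
    "name": "record_sentiment",
    "description": "Record the sentiment of a review",
    "parameters": {
        "type": "object",
        "properties": {
            "sentiment": {"type": "string", "enum": ["positive", "neutral", "negative"]},
            "confidence": {"type": "number"},
        },
        "required": ["sentiment", "confidence"],
    },
}]

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "Review: 'The battery died in a day.'"}],
    functions=functions,
    function_call={"name": "record_sentiment"},   # force the call, so output is always JSON args
)
args = json.loads(resp.choices[0].message["function_call"]["arguments"])
print(args["sentiment"], args["confidence"])
```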

AI Everyday #12 - NVDA Blowout and the Future of AI
Matt discusses NVIDIA blowing up after dramatically raising expectations, and the future of AI in general. It's a longer episode, but it's a must-hear.
Season 1 • Episode 12 • 16:18

AI Everyday #11 - Emergent Behavior and Multi-Step Problem Solving in AI Consuming Apps
Matt discusses a paper quantifying emergent behaviors in LLMs, and looks at Microsoft CodeT and how it applies to general use of LLMs at the app layer.
Season 1 • Episode 11 • 6:56

AI Everyday - #10 - AutoGPT and Quick Updates
AutoGPT, Dolly v2, AWS Bedrock, Vicuna
Season 1 • Episode 10 • 5:02

AI Everyday #9 - We just invented an 8-bit Matrix v0.1!
In this podcast, Matt discusses an incredible paper from Stanford that introduces a tiny virtual world with 25 generative agents that can come up with realistic behavior. The agents use GPT as a brain and have short-term and long-term memory. T...
Season 1 • Episode 9 • 7:25

AI Everyday #8 - Meta's Segment Anything Model
Matt plays around with and demonstrates Meta's new Segment Anything model, which can zero-shot segment just about anything in an image.
4:52
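For anyone who wants to reproduce the flavor of the demo, here is a minimal sketch using Meta's segment-anything package, assuming the ViT-H checkpoint has been downloaded; the image path is a placeholder.

```python
# Minimal Segment Anything sketch (checkpoint path and image are placeholders).
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

masks = mask_generator.generate(image)   # zero-shot: no prompts, no fine-tuning
print(f"Found {len(masks)} segments; largest covers {max(m['area'] for m in masks)} pixels")
```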

AI Everyday #7 - Prompt Chaining vs ChatGPT Plugins - what/why?
We look at prompt chaining, with an introduction to langchain.ai, and why you might choose it over a plugin.
Season 1 • Episode 7 • 5:26
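As a tiny taste of what prompt chaining looks like in code, here is a hedged two-step LangChain sketch; the prompts are illustrative and not taken from the episode.

```python
# Hedged two-step prompt chain: the first prompt's output feeds the second prompt.
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

outline = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Write a three-bullet outline for a blog post about {topic}."))
draft = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Expand this outline into a 150-word intro paragraph:\n\n{outline}"))

chain = SimpleSequentialChain(chains=[outline, draft])
print(chain.run("prompt chaining vs. ChatGPT plugins"))
```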

AI Everyday #6 - 5 LLMs in 5 minutes
We review 5 LLMs in 5 minutes, plus an LLM Directory. For the visuals, as we look at representative websites, check out the YouTube version: https://www.youtube.com/watch?v=1WN47vFWI4Q&list=PL3y876l-nINjAq1kFO7DOZd1AiKDWhaWT&index...
5:56

AI Everyday #5 - Distilling tokens
We cover token distillation, using ChatGPT 3.5 to reduce the tokens of a previous response, which should be extremely useful when paying for more expensive APIs like ChatGPT-4. To see the screen, check the YouTube version: https://www.you...
8:30
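Here is a hedged sketch of the idea: ask GPT-3.5 to compress a long prior response so it can be re-used as context with a pricier model. The prompt wording, file name, and pre-1.0 openai library style are mine, not from the episode.

```python
# Hedged "token distillation" sketch: compress a prior response before reusing it as context.
import openai

def distill(text: str, target_words: int = 150) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Compress the following to under {target_words} words, "
                              f"keeping every key fact:\n\n{text}"}],
        temperature=0,
    )
    return resp.choices[0].message["content"]

long_answer = open("previous_response.txt").read()   # placeholder file name
compressed = distill(long_answer)
# The compressed version can now be sent to GPT-4 as context at a fraction of the token cost.
```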

AI Everyday - #4 - Is this the AI Renaissance, or AI Early Days?
In this episode, I look at whether these are the early days of AI or a significant plateau. For evidence, we look at both a technical data point on training performance and a recent announcement from Databricks to try to answer the quest...
Season 1 • Episode 4 • 4:47
