What's Up with Tech?
Tech Transformation with Evan Kirstel: A podcast exploring the latest trends and innovations in the tech industry, and how businesses can leverage them for growth, diving into the world of B2B, discussing strategies, trends, and sharing insights from industry leaders!
With over three decades in telecom and IT, I've mastered the art of transforming social media into a dynamic platform for audience engagement, community building, and establishing thought leadership. My approach isn't about personal brand promotion but about delivering educational and informative content to cultivate a sustainable, long-term business presence. I am the leading content creator in areas like Enterprise AI, UCaaS, CPaaS, CCaaS, Cloud, Telecom, 5G and more!
Storage Becomes The AI Bottleneck
Interested in being a guest? Email us at admin@evankirstel.com
AI feels fast until memory and storage slow everything down. We sit with Michael Wu to unpack a blunt truth: inference is where value happens, and storage now sits on the critical path. Instead of treating SSDs as cold capacity, Phison’s adaptive middleware turns them into a live cache that expands usable memory and keeps models, embeddings, and long context windows close to compute. The payoff is practical and immediate—lean AIPCs and mini workstations run bigger workloads with steadier latency, and teams can scale inference without waiting for DRAM supply to catch up.
We trace the story from CES announcements to real-world deployment. Michael breaks down how OEM integrations and consumer upgrade kits bring adaptive caching to both new and existing machines, why developer and education communities are the first winners, and how this bottom-up momentum seeds better software and on-device AI experiences. For enterprise leaders, we map the route from local experiments to global rollouts: consistent performance across distributed teams, lower cloud egress, and a storage layer tuned for retrieval-augmented generation, fine-tuning, and high-concurrency serving.
Zooming out, we explore where we are in the AI cycle—early, hungry, and building—and how edge devices and “physical AI” will broaden demand for fast, cache-aware storage. Michael also shares Phison’s fabless strategy, the new Pascari enterprise lineup, and the push toward Gen 6 performance that aligns with next-gen model serving. If you care about real-world AI velocity, this conversation shows how to turn a bottleneck into an advantage by rethinking the memory hierarchy from the ground up.
If this helped you think differently about scaling AI, follow the show, share it with a teammate, and leave a quick review so more builders can find it. What’s the first AI workflow you’d speed up with adaptive caching?
More at https://linktr.ee/EvanKirstel
Phison's Mission And Storage Legacy
SPEAKER_00Hey everybody. Fascinating discussion today as we talk about turbo-boosting and streamlining enterprise AI adoption with a real innovator in this space at Phison. Michael, how are you? I'm doing good. How are you? I'm doing well. Thanks so much for joining. I've been familiar with you guys for many years, at CES in particular, but today we're really going to dive into your on-premise AI work in the enterprise, which sounds fascinating. Before that, maybe introduce yourself. What does Phison do today? How do you describe the company and the mission you're on?
SPEAKER_01Great, yeah, a little introduction for Phison. Phison has been around for 25 years; this is our 25th anniversary. We have been focused on bringing the best storage all over the world: removable storage, smartphone storage, industrial storage, and now server storage and even space storage. I'm the president of Phison US, and we're bringing out new, faster storage. Something exciting to share here is how we make storage add to the AI experience, so we're excited to share more of that.
Why Storage Now Drives AI
SPEAKER_00Fantastic. Up until the AI boom of a few years ago, storage was a pretty quiet, back-office topic, but it has really risen to the forefront. Why is that? Why do enterprise teams care so much more now about storage, efficiency, and scaling?
SPEAKER_01Great question. I think one simple sentence from Jensen Huang set it up: memory and storage have now become the AI bottleneck. If you think about it, AI is scaling so fast, and when the industry leader is committing to a refreshed platform every single year, it changes everything. It changes energy and how fast people build out power; it changes how fast the innovation around heat dissipation and liquid cooling has to go. But it's moving so fast that people forget: you have HBM memory, you have DDR memory, but when companies are spending tons of money building large models, do they make money training large models? No. What makes money is inference, where you take that large model and run it everywhere. Inference generates a lot of data, and data needs to reside on storage. So demand has shifted to storage because it impacts the inference experience: all the questions, the context length, the context memory have to be stored somewhere, and that creates big demand for AI storage today.
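To make the "context memory has to be stored somewhere" point concrete, here is a back-of-the-envelope estimate of the key/value cache a transformer keeps per long-context request. All model parameters below are illustrative assumptions, not figures from the episode:

```python
# Rough KV-cache footprint for a hypothetical large model serving one request.
# Every parameter here is an assumption chosen for illustration only.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # 2x accounts for the separate key and value tensors stored per layer;
    # bytes_per_elem=2 assumes fp16/bf16 cache entries.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical 70B-class model with grouped-query attention, 128k-token context:
gib = kv_cache_bytes(n_layers=80, n_kv_heads=8, head_dim=128, seq_len=128_000) / 2**30
print(f"{gib:.1f} GiB per 128k-token request")  # prints "39.1 GiB per 128k-token request"
```

Even under these conservative assumptions, a single long-context request outgrows the DRAM of a typical laptop, which is why spilling that cache to fast SSD storage matters.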
SPEAKER_00For sure. And you dropped some pretty big news at the beginning of the year, right at CES. It's hard to believe it was only a few weeks ago. What's the headline there? What do people need to understand about the news and the architecture you announced?
CES News: Adaptive Middleware
SSDs As Memory Cache For AIPC
SPEAKER_01Oh yeah, something exciting. Coincidentally, two years ago we announced that the SSD really needs to augment memory. At the beginning, it was for fine-tuning and training: using the SSD to add to the total memory pool so you could train or fine-tune a large model even at the laptop or workstation level, instead of spending millions building out GPUs. Fast forward two years, and the whole industry has shifted to inference. Like I said, NVIDIA announced how storage is actually going to help the inference experience, and we are bringing that to consumer laptops; you see a lot of consumer applications. What we announced is that we are using storage to expand the memory of very memory-limited AIPC devices. Our vision is that every single PC can be an AIPC with the help of storage used as a cache. And the timing is perfect, because we are in probably one of the worst shortage super cycles for both DRAM and NAND. Because of the DRAM shortage, the PC OEMs are struggling to put enough DRAM onto every laptop. A different way to think about this, and where we come into the puzzle, is: why can't we use storage, take a portion of it as a cache, and reduce the DDR memory requirement? That concept has been very well accepted; it unlocks the shortage issues blocking this year's AIPC build-out. People ask how long the shortage is going to last. From a lot of analyst data, at least two years; the next two years will still be very tight.
And our CEO has also shared that this whole decade is going to be tight in general. So the fact that inference requires the SSD to augment memory is a validation point, and we'll bring it out to every single device in the world.
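The SSD-as-cache idea described above can be pictured as a tiered memory hierarchy: a small, fast DRAM-like tier backed by a much larger SSD-like tier that cold data spills into and is transparently fetched back from. The toy sketch below is a mental model only, not Phison's actual middleware, which operates at the driver/firmware level:

```python
import os
import pickle
import tempfile
from collections import OrderedDict

class TieredCache:
    """Toy two-tier cache: a capacity-limited in-memory (DRAM-like) LRU tier
    that evicts cold entries to files on disk (SSD-like tier)."""

    def __init__(self, mem_capacity):
        self.mem_capacity = mem_capacity
        self.mem = OrderedDict()            # hot tier, LRU order
        self.disk_dir = tempfile.mkdtemp()  # cold tier

    def _disk_path(self, key):
        return os.path.join(self.disk_dir, f"{key}.pkl")

    def put(self, key, value):
        self.mem[key] = value
        self.mem.move_to_end(key)
        while len(self.mem) > self.mem_capacity:
            # Evict the least-recently-used entry to the disk tier.
            cold_key, cold_val = self.mem.popitem(last=False)
            with open(self._disk_path(cold_key), "wb") as f:
                pickle.dump(cold_val, f)

    def get(self, key):
        if key in self.mem:                 # DRAM hit: fast path
            self.mem.move_to_end(key)
            return self.mem[key]
        path = self._disk_path(key)
        if os.path.exists(path):            # SSD hit: promote back to memory
            with open(path, "rb") as f:
                value = pickle.load(f)
            os.remove(path)
            self.put(key, value)
            return value
        raise KeyError(key)

cache = TieredCache(mem_capacity=2)
cache.put("weights", [1, 2, 3])
cache.put("embeddings", [4, 5])
cache.put("kv_cache", [6])                 # evicts "weights" to the disk tier
assert "weights" not in cache.mem
assert cache.get("weights") == [1, 2, 3]   # transparently fetched from disk
```

The caller never sees which tier served the data; it just sees a larger effective memory pool, which is the same transparency the adaptive middleware aims for between DRAM and the SSD.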
Ideal Users And Early Markets
SPEAKER_00Fantastic. This sounds really powerful; it sounds like I might be a customer. I've had to upgrade my PC, and I'm running some things locally and in the cloud, with all kinds of things going on now. I imagine that's happening by the millions, but who's the ideal customer? Is it enterprise-wide? Is it developers themselves, or other kinds of teams?
SPEAKER_01Yeah. As you know, AIPCs haven't been selling well, and part of it is the applications. So at the beginning, it's very important to train the trainers. We are targeting people who want to learn about AI, and AI developers: higher-education researchers who want to invest in themselves and build a developer machine that can train larger models and do a lot of inference. That's the first persona we're targeting, because they know they need affordable equipment to do what they need to do. But in the process, if you think about it, when we enable a market of affordable devices for training AI developers, the whole industry starts to have more demand for AI, not just on the cloud but for on-prem implementations in the corporate world. So one part is building up the AI developer community. As that proliferates, there will be more software and apps that make the AIPC actually an AI PC and not just a hardware spec. That's our vision. And when you have a lot of genuinely AI-driven software that's useful on AI laptops, the local storage needs to provide more memory, and that's where our hybrid SSD, which has a cache to augment the memory, will be very beneficial.
How To Get It: OEMs And Kits
SPEAKER_00Fantastic. So if I'm a CIO, or maybe a CPO with teams of developers around the world, how do I get access to this technology? Is it through certain OEMs? Do I work with you directly? Is this something in production soon?
SPEAKER_01Yeah, so our first target is to enable the OEMs. And even before that, we worked with all the key GPU and CPU chipset companies. The magic behind our SSD that allows it to expand memory is something called adaptive middleware; the technology is called adaptive. People ask me why we call it adaptive. The original idea was that we're taking storage and adapting it to the memory to make it bigger. So we have a middleware that allows the host to orchestrate a lot of traffic and use the storage as a cache. We are designing our hybrid solution directly with the OEMs so they can offer a product. We are also enabling an upgrade kit for adaptive: you can go to a retailer like Newegg and find an adaptive kit that you plug into your PC, install the middleware, and all of a sudden it's like installing five more DIMMs in your laptop; it expands your memory for AI use. We're going to have the upgrade kit for laptops and workstations. And you've seen NVIDIA and other companies coming up with those mini workstation PCs; we are targeting the upgrade market for those as well.
Where We Are In The AI Cycle
SPEAKER_00Fantastic. You've been on this journey at Phison for a long time, almost 20 years. You've seen all the ups and downs and twists and turns of the industry. Where do you think we are in terms of moving from this mode we're in right now, with a lot of experimentation, testing, and early proofs of concept, to mainstream deployment? Where are we in this cycle?
Supply Chain Strategy And Brand
SPEAKER_01Yeah. If you compare this to the internet era, there was a period of building infrastructure, and then, about seven years later in 2007, the iPhone happened. It opened the floodgates for all the smartphones, which triggered more demand on the cloud. So when did AI start? I think ChatGPT got real end-user adoption in '23, and from '23 all the way to last year, we were building that architecture. People are actually still spending this year and next year. How about the edge devices? How about the laptops? Everything will be part of the puzzle. So I would say we are very early right now. We are investing, investing, investing. Companies are starting to make money now, but not at a rate that balances the spending; they're looking at the long-term vision, and that long-term vision is probably three to five years out. We're at the beginning of the cycle. And like I said before, we don't think it will happen unless AI is at every edge, on edge servers and edge laptops, and also physical AI. AI can't just live on the computer; it has to live in a robot or a device, because we interact with the physical world. So we're at the very beginning, and as AI spreads everywhere, including physical AI, the memory that allows AI to run smoothly will continue to be a bottleneck. We think adaptive has really good potential to unlock all those opportunities.
SPEAKER_00Fantastic. So you're an integral part of this whole supply chain. Are you a fabless company, or do you manufacture your own silicon solutions? And how is the supply chain looking for the next two to three years from your unique point of view?
SPEAKER_01Great. So at its core, we are a NAND solutions company, and one of the critical components of a NAND solution is the controller. We are a fabless company: we design the controllers and fab them at TSMC, UMC, or other fab houses. But we do have the capability for vertical integration, from designing a controller that is fabbed at TSMC or UMC, to designing it onto a PCB, validating it, and putting it into a finished product, and then we have brand partners to bring the whole solution to market. Recently we launched our Pascari enterprise lineup, where we are even going to the channel with our own brand. So that is our go-to-market strategy, taking our design all the way through the supply chain you mentioned to the channel.
This Year's Priorities And Gen 6 Focus
SPEAKER_00Great, exciting. What are you excited about this year? I mean, you have a couple of big events ahead, but what's on your radar for the next few months? What are you focused on?
SPEAKER_01Yeah, well, one of the main things is that this is definitely another challenging year for the memory shortage. Under the shortage, we have to step back and think about what our role is in this situation and what the best path is for the company. So we're really focused on, given the amount of storage we have, and obviously we work really hard to secure NAND flash memory so we can make devices, how we add value on every single piece of NAND: not just to maximize profit, but also to show our NAND suppliers why they should allocate NAND to us. When you can put a piece of NAND storage on the lunar surface, or actually use it to replace a portion of the DRAM or add to the DRAM, that gives them an additional, long-term story: even when the AI investment super cycle subsides, there will still be continued demand for the NAND industry. So that's our focus. Pushing adaptive out to the edge is one very big thing. The other is having our Pascari enterprise lineup be part of the very important AI infrastructure build-out by customizing the storage. Gen 6 is exciting: how do we combine that performance with the adaptation of adaptive onto the cloud build-out? That will be our focus this year.
Closing And Where To Watch
SPEAKER_00Well, that'll certainly keep your hands full. Thanks so much for joining and sharing a peek behind the curtain of an industry that very few people really understand. Much appreciated insight.
SPEAKER_01I enjoyed the conversation a lot. Thanks for inviting me.
SPEAKER_00Thank you. And thanks everyone for listening, watching, checking out our TV show as well at TechImpact.tv. You can find it on Bloomberg TV and Fox Business. Thanks, Michael. Thanks, everyone. Thank you.