
The neXt Curve reThink Podcast
The official podcast channel of neXt Curve, a research and advisory firm based in San Diego, founded by Leonard Lee, and focused on the frontier markets and business opportunities forming at the intersection of transformative technologies and industry trends. This podcast channel features audio programming from our reThink podcast, bringing our listeners the tech and industry insights that matter across the greater technology, media, and telecommunications (TMT) sector.
Topics we cover include:
-> Artificial Intelligence
-> Cloud & Edge Computing
-> Semiconductor Tech & Industry Trends
-> Digital Transformation
-> Consumer Electronics
-> New Media & Communications
-> Consumer & Industrial IoT
-> Telecommunications (5G, Open RAN, 6G)
-> Security, Privacy & Trust
-> Immersive Reality & XR
-> Emerging & Advanced ICT Technologies
Check out our research at www.next-curve.com.
Cloud and Neocloud FinOps and Economics (with Hyoun Park)
FinOps has become a thing over the past four to five years as enterprises began to get a grip on their cloud spending across the board, from infrastructure to SaaS. FinOps was a long time coming, building on pioneering work from a decade ago to bring cloud economics into view.
Now we face a new era in FinOps as enterprises ramp up their spending on AI infrastructure services and applications. What is the state of FinOps in an era of AI?
Hyoun Park, CEO and Principal Analyst at Amalgam Insights, joins Leonard Lee of neXt Curve to discuss his insights and key findings from FinOps X and the Databricks Data & AI Summit 2025.
Hyoun and Leonard hit on the following topics:
➡️ FinOpsX and Databricks Data & AI Summit 2025 (3:07)
➡️ The State of FinOps (4:39)
➡️ Cloud Migration Behaviors Today (8:29)
➡️ NeoCloud FinOps and Economics (12:07)
➡️ Databricks Data & AI Summit 2025 (17:30)
➡️ Shift from AI, AI, AI to Data and Good Ol' Analytics (17:43)
➡️ Rolling Your Own Model is Passe (22:45)
➡️ Where Do We Land With FinOps and AI Next Year? (24:35)
Connect with Hyoun on LinkedIn at www.linkedin.com/in/hyounpark. You can also follow his work at www.amalgaminsights.com. Also, check out his podcast, "This Week in Enterprise Tech."
Please subscribe to our podcast, which will be featured on the neXt Curve YouTube Channel. Check out the audio version on Buzzsprout - https://nextcurvepodcast.buzzsprout.com - or find us on your favorite podcast platform.
Also, subscribe to the neXt Curve research portal at www.next-curve.com for the tech and industry insights that matter.
Leonard Lee: Hey everybody. Welcome to neXt Curve's reThink podcast, where we break down the latest tech and industry events and happenings into the insights that matter. I'm Leonard Lee, Executive Analyst at neXt Curve, and today I have a very special guest, Hyoun Park of Amalgam Insights. Hey man. How's it going? Going good. Yeah. A busy week. Welcome, right? Yeah, it's always a busy week, especially this year. It's been really crazy, right? Mm-hmm. So it's good to have you on. We've been following each other on LinkedIn for ages now, right? Oh yeah. Years.
Hyoun Park: Yeah. Always going back and forth on the latest thing.
Leonard Lee: Yeah. And so for all of you out there in neXt Curve audience land, if you don't know, he's brilliant, and he shares some great insights on what's happening in AI and all the data analytics stuff. I've always appreciated the work that you do, as well as the insights that you share with pretty much everyone.
Hyoun Park: I try to keep it social. I try to talk to everyone.
Leonard Lee: Yeah. Yeah. And you are everywhere. It's pretty amazing, you know? You almost keep up with me. Oh, no, no, not quite. I didn't make it out to Barcelona this year or anything. So, did you miss out on anything? Yeah, you missed out.
Hyoun Park: The architecture,
Leonard Lee: if nothing else.
Hyoun Park: The
Leonard Lee: food, the company. Yes. There you go. Yeah. So, before we get started, please remember to like, share, react, and comment on this episode, and also subscribe here on YouTube and Buzzsprout. Listen to us on your favorite podcast platform. Opinions and statements by my guests are their own and don't reflect mine or those of neXt Curve. We're doing this to provide an open forum for discussion and debate on all things tech. And with that, Hyoun, why don't you introduce yourself to the neXt Curve audience?
Hyoun Park: Yeah, yeah. So I'm Hyoun Park. I'm the founder and chief analyst here at Amalgam Insights. It's a firm I've run for eight years, and I've been an industry analyst for 17 years total. I tend to have two major practice areas. One is around IT management, where I look a lot at the costs of IT, as well as some of the downstream observability and inventory management challenges. And then on the data side, I look a lot at data, analytics, AI, whatever we're calling this now. But the interesting thing to me has always been that AI side. Right now, what we're doing is actually a very small subset of the full world of AI. We just found one really great use case for this generative AI thing, and it's just exploded and taken off.
Leonard Lee: Yeah. And so, what we wanted to talk about... and by the way, thanks for that introduction. Later on in this episode, we'll give him a chance to share his contact information, and we'll provide you with some links to where you can find his research. There are two things that we wanted to talk about: FinOps, and you were at the FinOps... help me again with the name of the event. Oh, it was just FinOps X. Yeah, FinOps X, as well as the Databricks conference. Right. And so I just wanna tell you, I have a huge interest in FinOps. Early on, back in my Gartner days, I was working on some early work around FinOps. At that time, people weren't really conscious of cloud costs, right? They didn't really have a way of benchmarking the different types of services, or of determining, region by region or even locality by locality, what the cost was. There was some early work being done back then. When you told me that you were at FinOps X, I was thinking, man, I gotta have him on, because he probably has some great insights, and I know you've been tracking that area pretty persistently. I haven't, to be honest. Mm-hmm. But I'd love for you to share with the audience what some of your key takes are, and were, from that conference, and then also Databricks. You started rattling off some really interesting stuff, and I cut you off because I wanna make sure we get that organic presentation of your insights. So I'm really interested to hear what you learned at Databricks, but let's start off with FinOps X. Yeah, what's the state of FinOps, my friend?
Hyoun Park: So for those of you who don't know, FinOps is short for financial operations. It gets used to talk about the financial operations of cloud cost and cloud computing pretty specifically. Yeah. And I've been covering this for a decade, but honestly, until about four or five years ago, it didn't really take off. Five or six years ago, the logic around cloud computing was just: commit to one cloud, get really good at it, and build fast. 'Cause zero interest rates, just go fast, get more funding, and rocket ship. Cloud computing at all costs, right? That hockey stick, right? Yeah. And now people are realizing you need to control cloud costs, which makes a whole lot of sense when these get to 50 or a hundred million dollars. It finally starts meaning something, and people are thinking about multi-cloud and hybrid cloud use cases to a greater extent as well. So you'll use Amazon as your main cloud, but you might use Google for a little bit of AI, or Oracle because your connectivity is really good. So that has become a much bigger play, and pretty much the biggest theme this year is that FinOps professionals have been successful in cutting costs. They've found, call it 20 to 30% in wasted cost out of this $50 million bucket, and they've done such a good job that they're starting to be asked to manage other areas as well. So they're being asked to look at the data center; they're being asked to look at SaaS portfolios, which can often be a thousand apps within an enterprise. Yeah. More, right? And now, honestly, I think they're struggling to figure out how to take their successes in cloud computing and start expanding them to the rest of the IT ecosystem.
Leonard Lee: Oh yeah. Well, yeah. And that makes sense, because a growing portion of the IT ecosystem, or portfolio, is quote-unquote cloud-based and outsourced. And SaaS is a huge component of that, or at least an increasing component, right? Actually, platform services all the way down. Yeah. So yeah, that makes a lot of sense. And one of the things that we used to look at very early on... I mean, it looks like we started looking at this stuff around the same time, or maybe you were looking at it earlier than me. But one of the challenges that a lot of IT organizations had was benchmarking cloud cost versus their traditional cost of operating certain workloads and applications. That was fundamentally one of the big breakthroughs when basic FinOps capabilities started to come into play and data sets started coming into play: IT organizations could benchmark cloud versus their own IT service operations, right? And the cost to deliver. Is that also part of the thinking, looking at the entire portfolio and still making that decision between, should we continue to host these applications, workloads, and data in our traditional operations and infrastructure, versus cloud? Because there was a lot of talk about repatriation over the last couple of years, and a lot of that entailed, hey, let's bring some of these workloads and applications back. And that may or may not entail bringing things back on-prem or into colo data centers that are quote-unquote cloud native, right? So what are some of the trends around that type of application of FinOps capabilities and practices?
Hyoun Park: So I feel like with FinOps, there's had to be a lot more focus on subscription costs as well as usage-specific costs, where you have massive transactional volumes that you have to track at, say, a fraction of a cent per thing, per loan, whatever it is. Yeah. Okay. And it ends up growing to millions of dollars. So I feel like that granular control has led companies to be able to think about their IT more strategically. But at the same time, they're realizing that there's some opportunity for repatriation. I think that it's a little overrated, in that I don't think anybody who is cloud native is suddenly going to throw 90% of their stuff back into on-prem. It's too late for that. But you can definitely find steady workflows, stable workflows that do not change much month to month, and when that's the case, you are definitely better off moving those back to a dedicated server. That cost structure is pretty straightforward, which is why there have been use cases like Basecamp recently talking about how they've achieved millions of dollars of savings by going back to an on-prem Pure Storage implementation instead of AWS, for instance. But for every story like that, there's also a story like Zynga, which figured out: we're a gaming company, we're so dynamic that it does not make sense for us to repatriate, even though we thought we could. Our cloud costs were through the roof, but we just don't have the right business model to actually repatriate all that much. It really depends on how dynamic the business is.
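The granular tracking Hyoun describes, fractions of a cent per transaction compounding into millions of dollars, is easy to make concrete with a toy roll-up. All service names, rates, and volumes below are invented for illustration; they are not from any real price list:

```python
# Toy FinOps roll-up: tiny per-unit rates multiplied by large volumes.
# All rates and volumes are hypothetical, purely for illustration.
usage = {
    # service: (unit_rate_usd, monthly_units)
    "api_gateway_requests": (0.000001, 9_000_000_000),
    "loan_scoring_calls":   (0.001,      120_000_000),
    "object_storage_gb":    (0.023,        4_000_000),
}

def monthly_cost(usage):
    """Sum unit_rate * volume across all metered services."""
    return sum(rate * units for rate, units in usage.values())

total = monthly_cost(usage)
print(f"Monthly metered spend: ${total:,.0f}")   # tiny rates, big bill
print(f"Annualized:            ${12 * total:,.0f}")
```

Even with the largest unit rate at just over two cents, the annualized figure lands in the millions, which is why per-unit tracking became a FinOps discipline in the first place.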
Leonard Lee: Yeah. And so what's your sense of the decision criteria, or the high-level decision criteria, that organizations are applying in terms of whether they keep things with their cloud service providers or SaaS providers versus bringing things back? I'm just curious, because the repatriation topic was such a big deal last year. Mm-hmm. And quite honestly, I don't know whether that's bleeding into this year. It seems to have tapered off a bit. But do you see a shift in how decision makers are thinking about that migration line, if you will, or that transition line?
Hyoun Park: Yeah. So the most honest truth is, I feel like a lot of the talk about repatriation started going away, as well as the cost talk, simply because of AI. Everybody is so overwhelmed with these new AI budgets and projects that it sucked all the air out of the room, out of being rational about anything.
Leonard Lee:yeah.
Hyoun Park:Oh really?
Leonard Lee:That's interesting.
Hyoun Park: I feel like that actually took up some of the talk around rationalization and repatriation that would normally be happening right now. Oh, really?
Leonard Lee: Yeah. So, is there an emerging topic around FinOps related to these neoclouds? You know, some people call 'em Cloud 2.0. I think they're entirely different animals, actually. They're more like supercomputing as a service, right? Oh, yeah. Like the CoreWeaves of the world, the GPUs and stuff. Yeah. Yeah. Was that a topic this year? Those costs can be extremely high. Yes. And they can really get out of control, right? Yeah. And cost significant dollars. Right. So, any talk there?
Hyoun Park: There's definitely a lot of talk around FinOps for AI, which sounds extremely vague, but I would say the fundamental challenge around generative AI is that the token cost itself is fairly straightforward. Everybody knows how much an API call costs. But once you start typing a prompt into a window, there's no way to really predict how long that prompt is gonna be. Is it a two-sentence prompt, or is it gonna be 3,000 words? And then, what level of information are you asking back? Are you asking for something that can be answered in a couple of kilobytes, or are you gonna ask for something that gives you a three-minute movie and 12 million pictures and a novel of information back? Because of that, you can't really predict whether a prompt is gonna get a kilobyte or a megabyte or a gigabyte of response, right? It adds a whole lot of complication to just understanding the unit economics of one prompt call, one AI call, when so much could happen. And that is another thing that people are struggling with, because we don't have great governance to support...
Leonard Lee:Yeah.
Hyoun Park: ...managing and limiting the compute and storage associated with a basic AI call right now. And I think part of that is just fundamental tech that needs to be put into place, those guardrails. Yeah.
Leonard Lee: Was there any talk about the dynamics of pricing as well as model efficiency? Because those two things really shape the top-line economics for any end user, right? I mean, the pricing is very dynamic; it's plummeting, right? But then, like you were saying, although a lot of folks talk about cost per token, the compute required to generate that token can be very different based on the model as well as the infrastructure. Right. So, how baked would you say the thinking is around this neocloud type of FinOps topic? Is it doughy, or is it pretty well baked and ready to pull out of the oven?
Hyoun Park: I would say it's pretty doughy. Uh, I think we're trying to figure out how much yeast to put in. Like, yeah, yeah.
Leonard Lee: Or even know the ingredients
Hyoun Park: yet. Yeah. We don't even know what all the ingredients are. Yeah. There was an interesting startup I saw there, called IPAY, that was looking at that full value chain and being able to calculate: okay, here's the token, here's the compute, here's the storage, here's everything that went into your unit use of AI. And they look like they have an interesting tool there. But honestly, when I was talking to the IBMs and Broadcoms and the market leaders in the FinOps space about this, they were still talking about the cost of AI in a conceptual fashion, much as we have been doing for the past few minutes: understanding that there are multiple units involved in the unit of an AI call, but not really being able to calculate it yet. This is actually a great time to get into the game in that regard, talking about the cost of AI.
Leonard Lee: And you know, it's interesting, 'cause this feels a lot like the early days of FinOps with the cloud. Mm-hmm. Because that's exactly what people were trying to do: how do we get the market intelligence, but also get insight into performance, cost performance?
Hyoun Park:Yeah.
Leonard Lee: It's exactly the same stuff, right? Except I think with cloud FinOps in the early days, it was just a matter of being able to source the information, right? And then set up telemetry to help you get that marketplace dashboard going, right? So that you can do comparisons. But then also, there's always this question of, do we actually move things to the cloud, or do we keep things within our traditional data centers and operations, our hosting? Let's call it neocloud FinOps; it will probably have to go through the same process. The velocity is gonna be a hell of a lot faster, just because you hear Jensen and Lisa Su talk about a one-year cadence on the back end and all that stuff. So yeah, it's gonna be really tricky. Yeah. Yeah. But yeah, I agree with you.
Hyoun Park: Doughy. It's doughy. Another weird problem is that in, say, the network world, we have network performance benchmarks, and we can talk about storage and compute and database and data warehousing benchmarks. In AI, we don't have quite the same benchmarks. We have a few benchmarks that talk about how well AI can do college-level math or something, but that's not really what most of us use AI for. We're trying to solve specific business problems that don't necessarily have well-documented benchmarks yet. So when we're trying to talk about cost and performance trade-offs, we don't even know how to measure performance in a lot of cases, 'cause we haven't done the custom benchmarking needed that aligns to our own business processes.
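One pragmatic way to build the business-aligned benchmark Hyoun is calling for is to score candidate models on your own labeled examples and rank them by cost per correct answer rather than raw accuracy. A minimal sketch, with invented model names, accuracies, and per-call prices:

```python
# Rank candidate models by cost per correct answer on a private eval set.
# Model names, accuracies, and per-call costs are all invented.
candidates = [
    # (name, accuracy_on_our_eval, usd_per_call)
    ("big-400b", 0.91, 0.0400),
    ("mid-70b",  0.88, 0.0060),
    ("small-7b", 0.83, 0.0008),
]

def cost_per_correct(accuracy, usd_per_call):
    """Expected dollars spent to obtain one correct answer."""
    return usd_per_call / accuracy

ranked = sorted(candidates, key=lambda m: cost_per_correct(m[1], m[2]))
for name, acc, price in ranked:
    print(f"{name:9s} acc={acc:.2f} $/correct={cost_per_correct(acc, price):.5f}")
```

With these made-up numbers the small model wins despite the lowest accuracy, which mirrors the conversation's point: without a benchmark tied to your own business process, "best model" and "best economics" are two different rankings.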
Leonard Lee: Yeah, exactly. No, that's great. So hey, why don't we move on to Databricks? Yeah, you made some interesting comments before we hit the record button. What were some of the highlights for you at that conference? What were some of the dominating themes?
Hyoun Park: Databricks has been talking AI, AI, AI forever. Everybody has, yes. But this time around, they're actually spending a lot more time talking about business intelligence and data integration, pulling back and taking on, say, the Tableaus of the world, the Power BIs of the world, companies that have been doing straightforward analytics. Databricks is very straightforward about it. They're saying: we want to own everything around data, and we realize that we can't get there unless we also have this analytics piece of the puzzle to go along with our data catalogs and our data lakes. And yeah, rivers.
Leonard Lee: Right, right. Yeah. That's interesting. And so, one of the things that I'm noticing, and it's interesting that you're observing this, in that Databricks is maybe pivoting slightly away from the AI-only conversation, is that there's more, especially with the agentic stuff, and even previously with RAG... I wouldn't say that it's a broad recognition, but a growing one. Mm-hmm. And the reality is that many of the AI applications we're looking at, whether it's agentic, or single-shot generative AI chatbot-type stuff, or anything in between, there's gonna be a heavy dependence on conventional, what they call tools now, right? Mm-hmm. Especially with the agentic stuff. And a lot of those tools, whether interfaced through proprietary interfaces or all this talk about MCP... the MCP itself is not the big deal, right? The fact is, MCP is gonna be tapping into traditional analytics capabilities that are deterministic. They're the old-school stuff that is not gonna go away, you know? And so this is something that I'm seeing in my own research, and it's largely yet to be recognized amongst the vendors that I talk to. But it's interesting to see that shift at Databricks, this recognition that we can't just talk about AI all the time. If we want to quote-unquote own the enterprise data, or at least help our customers with AI, we need to focus on the foundational stuff underneath it, right? Because the AI stuff is not going to operate with any value in isolation. Is that kind of the way you saw it, and the way they were representing it? Or is it something different?
Hyoun Park:Yeah, I would say that,
Leonard Lee: You can say you're full of it.
Hyoun Park: No, I actually felt a lot of the conversations were about being common sense. Even when Databricks talked about things like Agent Bricks, their new platform for creating agents, the value proposition came down to being able to work with data lineage and to create testing environments for your agent, going back to the standard software development lifecycle. Yeah. Like pushing that to agents, and that's really where everything was shining. Databricks was talking about its data warehousing capabilities, its analytics capabilities, data pipelines, all this stuff that is honestly not new, but it's stuff that Databricks has to really emphasize to go along with the data lake and Delta Lake discussions they're much better known for.
Leonard Lee: Yeah, yeah. Well, it seems like it makes sense that they would try to create those bridges, right? Going back to what I was saying before, you have enterprise data, and a lot of it's not gonna move. It's too costly to train any model persistently enough to be relevant and accurate on that data. So you're going to have these compound AI applications, right? Mm-hmm. Whether you like it or not, right? And I think that's the slow learning that we've had over the last three years. 'Cause if you remember, with generative AI initially, the approach was: everyone's going to create their own foundation model. Yes. Every company is gonna make their own foundation model, and that's going to be the first order of, quote-unquote, these scaling theories. Mm-hmm. I refuse to call them laws, by the way. Then it pivoted to RAG and fine-tuning, and now it's pivoted to the, you know, the recent... oh, context. Yeah, yeah. But this dependence on the real-time systems that we have that are more economical, that seems to be starting to come into the discussions more. What did you see at Databricks in your chats? Not just what gets presented up on stage, but some of the conversations you had with Databricks folks, their partners, as well as customers?
Hyoun Park: Yeah. There's not that much interest in building your own model right now. I think that, after exploring it last year, companies are realizing this is something we might wanna push down the road a year or two, simply because we can get more value in the here and now just by putting some guardrails around existing models. And we don't necessarily even need the biggest model to do that. Like, companies that were testing out the biggest 400-billion, you know, 405-billion-parameter models find they can often get just as good results by using a 7-billion-parameter model. At some point, there are only so many words you can shove into a model to provide more context from a grammatical and semantic and contextual perspective. For the majority of what we do in the business world, you don't need the additional 300 or 400 billion whatevers.
Leonard Lee: Yeah, yeah, yeah. And that's where they sort of landed with the MoEs, when the MoEs started to come into play. Yeah. But even with the MoEs, it's like, do you need everything, or can you fine-tune to a more application- or domain-specific model using various techniques like LoRA, et cetera, et cetera?
Hyoun Park: But it ends up that having those 12 experts within your model is often better than just having a big undifferentiated model of everything, one that includes, you know, every single time you posted a cat video, or yeah, whatever.
Leonard Lee: Yeah. Yeah. It's a mixture of confusion at that point. Mm-hmm. Yeah. Okay, cool. So hey, wrapping up here, where do you think things are going? Where do you think we're gonna land next year with FinOps as well as, you know, AI?
Hyoun Park: I think with FinOps, everybody is getting pushed to be more boring. FinOps is having to deal with data centers. Databricks' big announcement was their partnership with SAP. Everybody's boring. Sorry. I mean, SAP is super exciting. Cisco, Oracle, I love all you guys. But there's a lot of real-world context happening right now. I think where the rubber hits the road is that you have to realize all these innovative technologies don't do any good if they don't talk to the 90% of your data and traditional environments that have been running your company for the past 20 or 30 years.
Leonard Lee:Yeah,
Hyoun Park: Absolutely. Like a back-to-basics approach.
Leonard Lee: Yeah. Isn't that funny? That always happens, or at least it seems to happen every single time. Yeah. But, oh, just really quick, your thoughts on MCP? Because everyone's talking about it and going bonkers about it, and I just wanted to bring it up because you're in a lot of conversations related to data and AI, right? And there's all this squishy stuff in between. So I'd love your thoughts on MCP. Where do you think it's going? What kind of role is it gonna play, in your mind?
Hyoun Park: Yeah, I'm pretty excited about MCP. I find it to be a relatively easy way to augment a model with the contextual information that you need, because often the jargon and foundational papers and documents you need to ground a model actually don't take up that much data and information overall. MCP is an easier way to add that context, rather than having to retrain a model from scratch, which would be far more expensive. Even with the advancements we've seen so far this year in 2025, like DeepSeek, it's still easier to use MCP. I expect that to be a very common use case as all of these niche vendors try to figure out how to improve some sort of manufacturing technique, or how to improve a specific type of B2B sales. It's a lot easier to use MCP and a standard off-the-shelf model than to try to do anything custom by yourself.
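The trade-off Hyoun sketches, paying for extra context tokens on every call instead of a large one-time training bill, can be framed as a simple break-even calculation. All dollar figures and token counts here are invented for illustration:

```python
# Break-even between a one-time custom-training bill and the recurring
# cost of shipping extra grounding context with every call.
# All figures are hypothetical, chosen only to illustrate the arithmetic.
TRAIN_COST_USD = 250_000  # one-time fine-tune/retrain bill (invented)
CONTEXT_TOKENS = 3_000    # extra grounding tokens attached per call (invented)
PRICE_IN_PER_M = 3.00     # hypothetical dollars per million input tokens

def extra_cost_per_call():
    """Marginal cost of attaching the grounding context to one call."""
    return CONTEXT_TOKENS / 1e6 * PRICE_IN_PER_M

def breakeven_calls():
    """Calls needed before the one-time training bill beats paying for context."""
    return TRAIN_COST_USD / extra_cost_per_call()

print(f"extra context cost per call: ${extra_cost_per_call():.4f}")
print(f"break-even volume: {breakeven_calls():,.0f} calls")
```

At these made-up rates the break-even sits in the tens of millions of calls, which is why, for niche use cases with modest volumes, grounding an off-the-shelf model tends to beat custom training.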
Leonard Lee: Yeah. Yeah. I mean, it'll be interesting to see how standardization, or this idea of interoperability with MCP, actually works out. It reminds me of the early days of B2B exchanges, you know? Old school. I don't know if you remember Commerce One. Oh yeah. There was this whole idea that we were gonna have federations, 'cause there's a lot of talk about MCP servers and registries and stuff like that. We had that same conversation, like, oh my gosh, it's like 25 years ago. And at the end of the day, all that stuff ended up becoming SAP's Ariba, right? So everyone went point to point; there was no federation. But yeah, there are standards. So I'm really interested to see how all this stuff plays out. But hey, thank you so much for jumping on. I'm really glad to have had this chance to chat with you. This is the first time we've actually spoken to each other, right? We've been at the same events for a few years now, and I guess I just had to pull you onto the podcast.
Hyoun Park:my pleasure.
Leonard Lee: Yeah, so why don't you take a moment to share with the audience how they can get in touch with you, tap into your research, and also just simply follow you, because you're a brilliant guy.
Hyoun Park: So I post prolifically on LinkedIn; feel free to follow me there. There aren't that many Hyoun Parks out there, so you should be able to find the one who's working in tech. My company is called Amalgam Insights, and we have our website, amalgaminsights.com. I also do a weekly podcast of my own called This Week in Enterprise Tech, where I often work with my colleague Charlie Raho, who's the head of strategy over at SymphonyAI. He's a former analyst as well. We talk about some of the big news items of the week and provide some analyst perspectives.
Leonard Lee: Yeah, and I'll provide a link below. Thank you so much again, Hyoun, for jumping on. And hey everybody, thanks for hanging out and making it this far. Hope you enjoyed the conversation. Please subscribe to our podcast, which will be featured on the neXt Curve YouTube channel. Check out the audio version on Buzzsprout, or find us on your favorite podcast platform. And also subscribe to the neXt Curve research portal at www.next-curve.com for the tech and industry insights that matter. We will see you next time. Thank you so much, Hyoun. All right. Thanks.