What's Up with Tech?

Orchestrating the Future: How Zebra Technologies is Revolutionizing Edge Computing

Evan Kirstel

Interested in being a guest? Email us at admin@evankirstel.com

The convergence of artificial intelligence and edge computing is reshaping how frontline workers perform their jobs across industries. In this thought-provoking conversation with Tom Bianculli, Chief Technology Officer at Zebra Technologies, we explore the revolutionary concept of the "far edge" – bringing computational intelligence directly to mobile devices and fixed infrastructure at the point of activity.

Tom explains how Zebra has evolved from basic barcode scanning to creating comprehensive solutions that provide visibility, mobility, and intelligent orchestration for retail associates, warehouse workers, nurses, and manufacturing personnel worldwide. Their mission centers on empowering these essential workers with technology that drives productivity exactly where the work happens.

The heart of our discussion focuses on agentic AI, which goes beyond simply informing workers what to do—it actively implements workflow steps by autonomously interacting with other systems. Tom shares compelling examples of how this technology returns valuable time to workers, reduces training requirements, and minimizes errors. In retail environments, these capabilities can produce astonishing efficiency gains of 10-20x for tasks like shelf analysis.

Particularly fascinating is Zebra's development of specialized AI agents for different domains: knowledge agents for operating procedures, sales agents for customer interactions, merchandising agents for retail execution, and device agents for hardware management. This domain specificity delivers significantly more value than generalized AI tools by addressing the detailed requirements of specific workflows.

Looking toward the future, Tom reveals Zebra's vision for wearable AI companions that can see what workers are doing, hear what's happening around them, and provide guidance through audio or haptic feedback. These innovations dramatically compress the "time to competency" for new workers—a critical advantage in industries facing labor shortages and high turnover.

Ready to see how AI and edge computing can transform your frontline operations? Discover practical applications that deliver real ROI today while positioning your organization for the next generation of workplace technology.


More at https://linktr.ee/EvanKirstel

Speaker 1:

Hey everybody, super exciting conversation today as we dive into the world of AI at the edge with a true innovator in the space from Zebra Technologies. Tom, how are you?

Speaker 2:

Excellent, Evan. Thanks for having me. Really looking forward to the conversation.

Speaker 1:

Well, excited to chat. There's so much going on in the industry and at Zebra these days. Maybe start with an introduction to yourself, and how do you describe Zebra Technologies these days?

Speaker 2:

Yeah, sure. So, well, first off, Tom Bianculli, as you said, Chief Technology Officer here at Zebra. I've been in the industry for a couple of decades and have really watched it evolve and grow from basic barcode scanning to perpetual asset visibility and everything happening with RFID (we'll probably talk a little bit about that), into mobile computing and wireless connectivity. At Zebra we think about serving the frontline, the frontline workers getting the job done day in and day out across retail, manufacturing, transportation, logistics, and healthcare. So think about the nurse administering medication at the bedside for the patient, all the way to the retail worker at the front of the store doing the restocking.

Speaker 2:

And the way we like to think about our solutions is that they drive productivity at the point of activity. Where real work is getting done, we're providing visibility, mobility, and intelligent orchestration and automation to get the job done. The exciting thing about that, for me, is that of course it benefits the customers we serve in those enterprises, but it's also really empowering and brings a lot of fulfillment to the frontline workers themselves. It helps them get their job done better, so that the nurse who went to school to care for patients can spend their time doing that, and not some of the more mundane tasks like documenting patient care, because we can handle that in the background with the technology. So exciting stuff.

Speaker 1:

Exciting and important mission. Well, let's start with the basics. You're really pushing the boundaries of the edge, pun intended. You talk a lot about something called the far edge, which is new to me. What exactly does that mean, and why is it such a big deal for your customers' workflows?

Speaker 2:

Yeah, great question. So there's so much talk about cloud, and we are a big user of the cloud for a lot of our software-as-a-service offerings and for model training for AI and machine learning as well. But the edge, which I think people are probably familiar with, is the idea of on-prem servers. So think about that server closet in a particular location, in an environment, providing compute locally for various types of processing, various types of workloads.

Speaker 2:

The far edge we define as either the mobile edge or fixed infrastructure appliances that bring compute intelligence all the way to that point of activity, as I was saying earlier. The way we think about it is a couple of things. One is that the more compute we can do right at the point where the data is being collected, the more secure the systems are, the less latency you have, the less traffic has to traverse the network, and the more cost-effective it is overall, because we can consume the data right there at the point of activity without shuttling it back and forth. So we've been doing a lot of work with Qualcomm and actually with Google as well; of course we're partnered with Google on the cloud side of things, but orchestrating workloads among the cloud, the edge, and the far edge is something we've worked on with both Google and Qualcomm. We're putting the latest chipsets we've partnered with them on into our mobile devices so they can handle more and more of these workloads on that far edge: literally on the mobile computer in the hand of the worker, mounted and inspecting product as it moves through a conveyance line, or positioned over parcels as they move through a distribution center or warehouse, to provide that real-time visibility and processing all the way down at that far edge.

Speaker 2:

And there's a lot happening there, and a lot of demand for it. The reason we're talking about it more and more recently, as you said, is a function of what's happening with AI: the voracious appetite AI has for data, and the need to distill down what that data means so we can dispatch the right actions and automate parts of workflows, which is extremely important for efficiency, consistency, and reducing errors. But also, the labor crunch we're seeing globally really requires this kind of technology, because we're asking these frontline workers to do more and more while there are fewer of them, there's less availability, there's wage inflation, and there's high attrition in these positions. So embracing technology to solve even those demographic and macroeconomic challenges associated with labor becomes really critical for our customers.

Speaker 1:

Indeed, and you've been investing a ton around the area of agentic AI, which can mean a lot of different things to different people. What does it mean for your ecosystem and your customers and partners?

Speaker 2:

Yeah, so when I'm talking about agentic AI, it's not really too different from the way it's defined in other domains, except our focus is, again, on that frontline where physical work is actually getting done. We think about agentic AI as being able to ingest contextual information, what's happening in the environment around me, and then not just inform me of what I should maybe do, but actually implement some of those steps, some of those tasks in the workflow, by interacting with other systems in an autonomous way to help me get my job done. A really simple example: you come into a retail store as a customer, you have the receipt for something you bought but you don't have the credit card you bought it with, and I'm a new associate. I could ask the agentic AI, the companion we announced earlier in the year, "Hey, how do I process a return for a customer who has a receipt but no credit card?" A basic agent would be able to say, here are the seven steps. But an agentic AI, which is where we're going, actually says, here are the seven steps, and oh, by the way, if you take a picture of the receipt, I'll take care of three or four of those steps for you. By taking a picture of that receipt, the AI sees the receipt number, it sees the amount the receipt is for, it reads what the product is, and it interfaces with the back-end point-of-sale systems to actually execute the workflow that processes the return. So that's returning time to the worker, and it's also reducing the amount of training required for that worker.
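To make the distinction concrete, here is a minimal Python sketch of the difference between an agent that only answers ("here are the seven steps") and an agentic workflow that executes some of them. This is an illustration only, not Zebra's Companion: the OCR call, the POS client, and every function name are hypothetical stand-ins.

    # Illustrative sketch only: extract_receipt_fields, PosClient and its
    # methods are hypothetical stand-ins, not a real Zebra or POS API.
    from dataclasses import dataclass

    @dataclass
    class Receipt:
        receipt_number: str
        amount: float
        product: str

    def extract_receipt_fields(image_bytes: bytes) -> Receipt:
        """Run OCR / document AI on the receipt photo (stubbed here)."""
        raise NotImplementedError("Plug in your own OCR or vision model")

    class PosClient:
        """Hypothetical back-end point-of-sale client."""
        def lookup_transaction(self, receipt_number: str) -> dict: ...
        def issue_refund(self, transaction_id: str, amount: float, method: str) -> str: ...

    def informational_agent() -> list[str]:
        # A basic agent only tells the associate what to do.
        return [f"Step {i}: ..." for i in range(1, 8)]

    def agentic_return(image_bytes: bytes, pos: PosClient) -> str:
        # An agentic workflow executes several of those steps itself.
        receipt = extract_receipt_fields(image_bytes)          # read number, amount, product
        txn = pos.lookup_transaction(receipt.receipt_number)   # find the original sale
        refund_id = pos.issue_refund(txn["id"], receipt.amount, method="store_credit")
        return f"Refund {refund_id} issued for {receipt.product}; associate confirms with customer."

The point of the sketch is the shape of the workflow, not the specific calls: the agent does the lookup and refund steps on the associate's behalf instead of just listing them.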

Speaker 2:

So you can imagine, I'm just giving that basic example, but similarly, I snap a picture of a shelf and the AI determines whether there are misplaced items, mislabeled items, or out-of-stocks, and then it starts firing off actions and integrating with various systems to go resolve those issues. Or I snap a picture of a pallet, and it automatically counts the items, verifies them against the manifest, and gives the green or red light as to whether you can proceed. This is something many of our more forward-leaning customers are really banking on with agentic AI, because the ROI for these solutions is in returning time, reducing training, and reducing errors. If we simply use a basic agent to say, hey, this is what you should do, that's beneficial, but it's very borderline from a return-on-investment point of view. But if we can actually automate steps in the workflow, so something that took eight minutes takes six minutes and so on, especially in these highly repetitive frontline tasks, that's really, really powerful for our customers.
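The shelf example follows the same pattern: a vision pass produces findings, and the agent turns each finding into an action in a downstream system. A rough sketch, with a hypothetical detector and task system standing in for whatever a real deployment would use:

    # Rough sketch of a shelf-audit step; detect_shelf_issues and TaskSystem
    # are hypothetical placeholders, not a Zebra merchandising-agent API.
    from typing import Literal, NamedTuple

    class ShelfIssue(NamedTuple):
        sku: str
        kind: Literal["out_of_stock", "misplaced", "mislabeled"]
        location: str   # e.g. "aisle 7, bay 3, shelf 2"

    def detect_shelf_issues(shelf_photo: bytes) -> list[ShelfIssue]:
        """Vision model comparing the photo against the planogram (stubbed)."""
        raise NotImplementedError

    class TaskSystem:
        def create_task(self, assignee_role: str, description: str) -> None: ...

    def audit_shelf(shelf_photo: bytes, tasks: TaskSystem) -> None:
        for issue in detect_shelf_issues(shelf_photo):
            if issue.kind == "out_of_stock":
                tasks.create_task("stocking", f"Restock {issue.sku} at {issue.location}")
            elif issue.kind == "mislabeled":
                tasks.create_task("pricing", f"Reprint label for {issue.sku} at {issue.location}")
            else:
                tasks.create_task("merchandising", f"Move {issue.sku} back to {issue.location}")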

Speaker 1:

Sounds extraordinary. You also use the term knowledge agent pretty often, so what's the difference between a knowledge agent and other sort of more traditional AI tools out there?

Speaker 2:

Yeah, so we have a number of specialized agents, one of which is the knowledge agent you described. You can think about the knowledge agent as being everything operating-procedure oriented. So, like the example I gave with processing a return: what's everything I should know about the steps in the workflow to get my job done? I can query that, get the answers, and then also, again, move into the agentic domain where it starts to automate those steps by interfacing with the digital systems. Some of the other agents we have are a sales agent, as an example. The sales agent is more tuned for cross-sell and up-sell, answering customer questions, looking at inventory position, and those sorts of things: everything you would normally engage in a consultative sales workflow on a store floor, as an example. A merchandising agent is a third one, which is the idea I referenced where you can snap a picture of a shelf. We showcased that together with Target stores, actually, at NRF in January; they were in the booth with us, and in seconds we determine the state of that shelf and then fire off the appropriate actions. And then we have something called the device agent, which is really about interfacing with, operating, optimizing, and configuring our own devices. We want to use this technology to help our customers configure, operate, and manage our devices as well, and that's what the device agent is all about.

Speaker 2:

But I'll say one other thing before I turn it back to you. We see this opportunity across every vertical: think about getting the job done in manufacturing on the plant floor, or in transportation and logistics in a cross-dock facility, where this is really well received. And what we're talking about doing is building on a platform we already have. We have something we call Sync, which is a communication and collaboration platform: think voice, push-to-talk, and messaging running on our devices. Lowe's Home Improvement uses it, as an example, for communicating across associates from the back of the store to the front of the store, and even for routing calls from outside the store down to associates on the store floor. That's a very horizontal capability; it doesn't matter what vertical you're in, you need to communicate, you need to message, and you probably want some level of push-to-talk. Then we plug Companion into that, which is the agent framework, and we can slot in these agents, like the ones I mentioned in retail.

Speaker 2:

But we also want to bring third-party agents in. So imagine almost an agent marketplace, if you will, for enterprise frontline workers. We're going to be partnering with other companies, independent software vendors that are experts in manufacturing. In our ecosystem of partners we have independent software vendors where all they do is write software for manufacturing; they know manufacturing workflows, and that's probably our second-largest vertical at this point. So from this agent marketplace we'll be able to plug in third-party agents that bring that domain-specific capability to those workflows. And that's really critical.
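The "slot in agents" idea maps naturally onto a small plug-in registry: a common interface that first-party and third-party agents implement, and a router that picks the right specialist for a request. The sketch below shows that generic pattern under those assumptions; it is not the actual Companion or Sync framework.

    # Generic plug-in pattern for domain agents; the interface and names are
    # assumptions for illustration, not Zebra's Companion framework.
    from typing import Protocol

    class FrontlineAgent(Protocol):
        domain: str                      # e.g. "knowledge", "sales", "merchandising", "device"
        def handle(self, request: str, context: dict) -> str: ...

    class AgentRegistry:
        def __init__(self) -> None:
            self._agents: dict[str, FrontlineAgent] = {}

        def register(self, agent: FrontlineAgent) -> None:
            # First-party and third-party ("marketplace") agents register the same way.
            self._agents[agent.domain] = agent

        def route(self, domain: str, request: str, context: dict) -> str:
            agent = self._agents.get(domain)
            if agent is None:
                return "No specialist agent installed for this domain."
            return agent.handle(request, context)

    class KnowledgeAgent:
        domain = "knowledge"
        def handle(self, request: str, context: dict) -> str:
            return "Steps for this procedure: ..."   # would query SOP content in practice

    registry = AgentRegistry()
    registry.register(KnowledgeAgent())
    print(registry.route("knowledge", "How do I process a return without a card?", {"store": "1234"}))

The design point is the narrow, shared interface: a manufacturing ISV's agent and Zebra's own retail agents would plug into the same registry, which is what makes a marketplace of domain specialists workable.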

Speaker 2:

I would say, if people are thinking about how to implement agents in their own operations, I'd really be thinking not about generalized ChatGPT, which is extraordinarily powerful, but about how you work either with partners or with your own team to customize that capability for domain-specific applications. Because when you get into the details of what it takes to execute a manufacturing plant-floor workflow, as an example, or a retail workflow, that domain specificity really matters, and it's the difference between massive increases in productivity and just interesting examples of what you could do. The real game changer is that domain specificity. So we see that coming together between partners and ourselves to bring this to market, which we think is really exciting, and there are definitely unique capabilities required for that frontline workforce.

Speaker 1:

Oh, what a great opportunity. So, as you know, the C-suite is increasingly looking for practical business value and ROI on new projects and implementations. Can you talk a little bit about the business challenges you're seeing solved by AI and edge computing coming together?

Speaker 2:

Yeah, so we're talking so much about what's in the hand of the worker, the mobile computer, and interfacing with that, and multimodal is another big word flying around out there. We don't mean multimodal in the consumer fashion, with AI as sort of voice and text and maybe taking an image; we're bringing things like three-dimensional vision sensing, RFID, camera, video, and voice together into multimodal input that can be used to steer these workflows. We talk a lot about that in the hands of the worker, driving the human workflow. But there's another side of this: there's a whole asset-visibility side of things. When we talk about the edge, I mentioned mobile, which is obviously about the worker and the device they're carrying, but there's also smart or intelligent infrastructure. So think about RFID readers in the ceiling, or machine vision cameras mounted on an inspection line or over a conveyor, or 3D cameras. We just did an acquisition of a company called Photoneo that has 3D cameras that can capture, just as the name implies, information about a 3D scene in order to guide, as an example, a robotic arm for picking.

Speaker 2:

So it's really going to be this bringing together of digitization, instrumenting environments so you can understand where people are, where capital assets are, and where inventory is in real time, and then using that together with the device and the AI in the hand of the worker to completely orchestrate the environment.

Speaker 2:

So think again about a cross-dock facility, where you have dock doors on either side of the facility and pallets in the middle being broken down and rebuilt for shipping to specific locations. You've got fork trucks, you have people. If I know where every pallet is, I know what truck is at what dock door, and I know where every person is, through this digitization using RFID, location, and vision technology, and then I marry that up with the connected frontline worker and the AI capability they have, you can really orchestrate that entire building moment to moment in the most optimized way. I think that's really where we're headed, and the word we've been latching onto is the idea of orchestrating workflow: using the combination of digitized environments and connected frontline workers to automate and orchestrate the frontline.
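As a thought experiment, the cross-dock orchestration described here reduces to keeping a live registry of where pallets, trucks, and people are (fed by RFID, location, and vision events) and assigning the nearest available worker to the next pallet. The toy model below illustrates that idea; the event shapes and the nearest-worker rule are assumptions, not a Zebra product API.

    # Toy model of cross-dock orchestration; event shapes and the assignment
    # rule are illustrative assumptions only.
    import math

    class FacilityState:
        def __init__(self) -> None:
            self.pallets: dict[str, tuple[float, float]] = {}   # pallet_id -> (x, y)
            self.workers: dict[str, tuple[float, float]] = {}   # worker_id -> (x, y)
            self.dock_doors: dict[str, str] = {}                # door_id -> truck_id

        def on_event(self, event: dict) -> None:
            # Events come from RFID reads, RTLS tags, or vision detections.
            kind = event["kind"]
            if kind == "pallet_seen":
                self.pallets[event["pallet_id"]] = event["position"]
            elif kind == "worker_seen":
                self.workers[event["worker_id"]] = event["position"]
            elif kind == "truck_docked":
                self.dock_doors[event["door_id"]] = event["truck_id"]

    def assign_nearest_worker(state: FacilityState, pallet_id: str) -> str | None:
        """Pick the worker currently closest to the pallet that needs handling."""
        if pallet_id not in state.pallets or not state.workers:
            return None
        px, py = state.pallets[pallet_id]
        return min(state.workers, key=lambda w: math.dist((px, py), state.workers[w]))

A real system would of course weigh truck departure times, worker tasks in progress, and equipment availability, but the core loop is the same: sensed events update a shared model of the building, and the orchestrator dispatches work against that model.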

Speaker 1:

Wow, you're making such a big impact right now and it's exciting. It's not a future story, it's not roadmap. You work in so many industries: manufacturing, healthcare, retail, and others. Where do you see far-edge and agentic AI making the biggest impact in 2025?

Speaker 2:

Yeah, that's a great question. So, first and foremost, there's a little bit of a Zebra lens on this, because where we have the most right to play initially, we think, is in retail. It's our largest vertical. We have a whole host of software assets installed, we have a huge install base of devices and customers, and we have lots of customers dealing with the pressures I mentioned earlier from a labor cost, attrition, and availability point of view. So we think that's going to be one of the first.

Speaker 2:

The other thing about retail that makes it ripe to be the first bowling pin, if you will, for some of this agentic AI is that it's a very stochastic environment; there's not a lot that's deterministic. If you think about a retail store, you have the public coming in and out. Unfortunately, there's loss and theft, which creates a bit of chaos on the shelf; what you think is there may not be there. There's essentially a mini warehouse in the back of the store, so goods are constantly moving in, and you have a very dynamic workforce moving between the front end of the store for checkout, stocking, and also selling and interfacing with customers. So that environment is pretty ripe for an orchestrating and coordinating agentic AI capability. We also see, and we can talk about where things go beyond the near term, lots of opportunity in places like warehouse and picking applications. I'll give a little hint of what that might look like. We've been talking so much about the device in the hand of the worker, because that's what we're used to with mobile phones and that's, frankly, what our customers use today. But we see the opportunity with AI, and particularly generative AI, for wearable computing to become a really interesting modality inside a retail environment.

Speaker 2:

It's got audio, it's got video, it can see what I'm doing and hear what's happening in the environment, and it can understand and interpret that in order to direct the actual workflow. An example might be a picking application: I'm picking an e-commerce order, and I reach for the non-gluten-free version of a product when I was supposed to pick the one next to it, which is the gluten-free version. That camera is going to know what should be picked, it's going to see me gesture to do the pick, and it will say, hey, you're about to pick the wrong one, either through haptic feedback, audio, or voice. It literally becomes, quite honestly, a companion I can have a very conversational discussion with. It's watching what's happening, and if a mistake is about to be made or a nudge is required, it can do that.
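At its core, the pick-verification behaviour described here is a compare-and-nudge loop: the device knows the expected item, an on-device vision model reports what the worker is reaching for, and a mismatch triggers haptic or audio feedback. A simplified sketch, where the detection and feedback calls are hypothetical stand-ins:

    # Simplified pick-check loop; identify_sku_in_hand and the feedback calls
    # are hypothetical stand-ins for on-device vision and wearable output.
    def identify_sku_in_hand(camera_frame: bytes) -> str | None:
        """On-device model that recognizes the product being reached for (stubbed)."""
        raise NotImplementedError

    def haptic_buzz() -> None: ...
    def speak(message: str) -> None: ...

    def verify_pick(expected_sku: str, camera_frame: bytes) -> bool:
        seen_sku = identify_sku_in_hand(camera_frame)
        if seen_sku is None:
            return False                      # nothing detected yet; keep watching
        if seen_sku != expected_sku:
            haptic_buzz()
            speak(f"That's {seen_sku}; the order needs {expected_sku}, one bin over.")
            return False
        speak("Good pick.")
        return True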

Speaker 2:

So even in those highly repetitive, less stochastic environments like a warehouse, we're going to see those kinds of workflows become completely reinvented with generative AI. But it's going to take a little while for the hardware to get there and for those new modalities to become available. We think this is very congruent and analogous with the recent announcement of Jony Ive and Sam Altman coming together to deliver what they call a companion, which includes hardware and generative AI software. They've said some similar things: it won't have a screen, it'll have a camera, it'll be perpetually aware of what's happening, and it'll likely be wearable. We're excited by that, because there will be consumer applications for it. But again, just as with standard mobile computing, making that suitable for the enterprise is a tremendous opportunity for us and for our customers. So we're at the sneak-peek stage there, we're in early dialogue, but we see that frontier as really the next generation of computing.

Speaker 1:

It is exciting, and you're like a kid in a candy store in the CTO office looking at all this tech. But at Zebra you're also focused on outcomes. Tell me about some of the compelling stories or anecdotes around ROI, operational improvements, or efficiencies that come from adopting these edge AI solutions. It's a process, but you must be seeing some initial results.

Speaker 2:

Yes, yeah. So, particularly in the retail space, it's about returning time to that worker, and I would say it's very dependent on the workflow. The example I gave earlier, where you're capturing the shelf, can literally be an order of magnitude or more, 10x to 20x in speed. The conventional way of doing it: think about a section of shelf that's four feet by six feet; you could have at least 100 different SKUs, if not more, on that shelf, 100 different types of individual products. If you're scanning through manually, you're looking for what's missing, what's misplaced, and whether the price label is correct. With basically a snap of a picture, we can do all of that almost in an instant. So that kind of use case massively changes what can be done, the return on it, and the kind of impact it has.

Speaker 2:

The other big one we're hearing a lot of customers talk about relates to the attrition challenge I mentioned, losing workers. What's really not spoken about very much is the cost of getting a new worker up to speed, and that's where we're seeing a huge return. Gartner, the industry analyst, has a term for it: time to competency. How much time does it take you to become competent in your job? AI, with these agentic workflow-execution capabilities, is compressing that time to competency and allowing onboarding to become much, much quicker, which is also extremely important for things like peak times of year, around the holidays, and for the change-out due to attrition in the workforce.

Speaker 1:

Amazing. Well, you had me at 10 or 20x; that was incredible. You know, many companies, particularly in manufacturing and healthcare, are a little behind the curve in catching up with some of these latest technologies; many are just jumping on the cloud now. How do you think about balancing the centralized cloud-intelligence approach versus more edge autonomy, and what's the ideal balancing act for them?

Speaker 2:

Yeah, so it's very much an "and", A-N-D, right? It's going to be all of the above. The way we're doing this, and the way we've architected it, particularly as it relates to the Companion generative AI capability, is by leveling the workload across all three of those tiers. There's enough intelligence on the device to understand whether a workload can be run locally, and we also cache on the device the most frequently asked questions and most frequently executed workflows. We'll run those down on the device, and then, if we run out of processing capability on the device, we reach back in a very elastic way to on-prem, and then from on-prem back to the cloud. So there's no rigid "this runs here and that runs there." There's an orchestrator, if you will, that looks at those workloads dynamically, adjusts, and also caches, so the most frequently used workflows and questions are right there at the ready on the device.
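Mechanically, that "no rigid split" can be modeled as a router that answers from an on-device cache when it can, runs locally when the device has capacity, and otherwise falls back to on-prem and then cloud. The sketch below is one way to express that leveling; the tier back ends and the capacity check are stubs and assumptions, not Zebra's actual orchestrator.

    # Sketch of elastic device -> on-prem -> cloud workload leveling; the tier
    # clients and capacity check are stubs for illustration only.
    from collections import OrderedDict

    class DeviceCache:
        """Small LRU cache of the most frequent questions/workflows, kept on-device."""
        def __init__(self, capacity: int = 128) -> None:
            self.capacity = capacity
            self._items: OrderedDict[str, str] = OrderedDict()

        def get(self, key: str) -> str | None:
            if key in self._items:
                self._items.move_to_end(key)     # mark as recently used
                return self._items[key]
            return None

        def put(self, key: str, value: str) -> None:
            self._items[key] = value
            self._items.move_to_end(key)
            if len(self._items) > self.capacity:
                self._items.popitem(last=False)  # evict least recently used

    def device_has_capacity(request: str) -> bool: ...   # e.g. model size vs. NPU/memory budget
    def run_on_device(request: str) -> str: ...
    def run_on_prem(request: str) -> str: ...
    def run_in_cloud(request: str) -> str: ...

    def route_workload(request: str, cache: DeviceCache) -> str:
        if (cached := cache.get(request)) is not None:
            return cached                          # answered instantly at the far edge
        if device_has_capacity(request):
            result = run_on_device(request)
        else:
            try:
                result = run_on_prem(request)      # elastic reach-back to the edge
            except ConnectionError:
                result = run_in_cloud(request)     # then to the cloud
        cache.put(request, result)
        return result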

Speaker 1:

Wow, sounds incredible. So, looking ahead, just to wrap up: how far out do you look as CTO at Zebra Technologies? You have so much exciting technology at hand and available. What's most exciting for you as you look three, four, five years out?

Speaker 2:

Yeah, great question. So, to answer the first part of that, we look really closely, in a very detailed way, at the next three years. It gets a little fuzzier beyond that, so we have another horizon that goes out to five years, and then five-plus years is more of a thesis about what we think is going to happen. We spend a good amount of time monitoring and observing what's happening in the environment relative to that thesis. I hadn't mentioned it earlier, but we have a venture capital investment arm, and we make investments in these areas; we have a number of those companies in the portfolio today. As an example, a little while ago we made investments in companies called Third Wave Robotics and Fox Robotics. These are companies that make autonomous fork trucks, the completely unmanned fork truck for some of these warehouse-type environments. Those are bets that give us a front-row seat to what's happening in technologies we think are a little further out, and that's how we handle the five-plus years. On the earlier horizons, we're laying out roadmaps and really focusing on how we start to operationalize and implement them. In terms of excitement, I would say scaling this agentic frontline companion AI, and having that be a set of value we can attach to our hardware, is something we're really, really excited about. That's now through the next 18 months.

Speaker 2:

And then there's the concept I mentioned earlier, which is reinventing computing. It's been a long time since the shift from desktop to mobile; we really haven't had a paradigm shift like that in, depending on how you measure it, certainly 20 years at the very least, if not longer, depending on whether you consider the smartphone that first shift or earlier instantiations. We think we're very much looking, in the next two to three years, at distributed computing where the laptop and the mobile phone aren't going away, but you're going to have other computing that works with you in the much more personal way I was describing earlier and is able to understand and see what's happening around you. We also have autonomous mobile robots used in picking applications in warehouses that work collaboratively with frontline workers to get the picking job done, and we think that kind of automation is going to keep advancing, which makes knowing where things are and what's in motion even more important. So we expect to see more pull-through of vision systems and RFID infrastructure as a result of that continued trajectory on the automation front.

Speaker 1:

Amazing. Well, one of my favorite things to do is watch robot videos on YouTube, so I can't wait to see it in action. I think robot videos are the new cat videos, right? Well, it must be very gratifying, and satisfying, for you and your team to see such disruptive, positive change being implemented with customers.

Speaker 2:

It absolutely is, I have to say. The curiosity at Zebra around these things is awesome. It's the reason I've been here as long as I have, along with being able to work with people who are smarter than I am and who inspire me every day. It's a great team, it's a great industry, and we're privileged to be at the forefront of it.

Speaker 1:

Fantastic. Well, thanks for joining and sharing the mission and vision, and thanks everyone for listening and watching. A really intriguing and insightful update. Thanks, Tom.

Speaker 2:

Thanks so much, Evan. Look forward to the next time. Thank you.