SOS! Can sensor technology improve worker safety? With Kevin Durkee and Zachary Kiehl

October 13, 2020 Daniel Serfaty Season 1 Episode 4

The science of human performance measurement allows us to quite literally get under our skin and into our brains to capture measures of human performance in workers, soldiers, medical personnel, and others. In the Age of AI, with both increased miniaturization of sensor technologies and intelligent data analytics, how do we make sense of physiological indicators to improve worker safety and effectiveness? Join MINDWORKS host Daniel Serfaty as he speaks with Aptima's Kevin Durkee and Zachary Kiehl, who share the journey they undertook as researchers and scientists: taking an abstract research idea, turning it into a useful product, and then forming a brand-new start-up company around it to reach new markets with their solution.


Kevin Durkee: Instead of having 10 humans watching 10 other humans going into confined spaces to make sure they're safe, can you use sensors and use the technology to enable just maybe one or two people watching those 10? And then you save eight or nine. Those are eight or nine more bodies back into the workforce.

Zach Kiehl: We actually did the math and looked through the confined space entry logs and how many hours are spent with a person watching another person work, and we got to the number of 41,000 hours per year just for one air logistics complex. And that's just one facility within the Air Force that does confined space operations.

Daniel Serfaty: Welcome to MINDWORKS. This is your host, Daniel Serfaty. Today, I have two guests who happen to be my friends and colleagues at Aptima. They are going to take us through the journey they undertook as engineers and scientists as they took an abstract research idea and turned it into a useful product. And then they took that product and formed a brand new startup company around it to reach new markets with their solution. That's pretty much the American dream. Kevin Durkee is a lead scientist at Aptima, where he develops innovative solutions to provide human operators with enhanced capabilities that improve their cognitive, motor, and perceptual performance.

And Zach Kiehl, a research engineer at Aptima, whose background is in the fields of biomedical engineering, medical imaging, and physiological signals. And, shh, don't tell anyone yet, he's the CEO of that new company. Kevin and Zach, welcome to MINDWORKS. Kevin, let's start with you. Why don't you tell us specifically what made you choose this domain on which we're going to focus today, which is human performance measurement, our ability to literally get under our skin into our brain to capture measures of human performance?

Kevin Durkee: It's been a really long journey. It started out way back in probably undergrad, where I had this altruistic tendency to want to find ways that would help people live better lives, more productive lives, safer lives. I think when I first went down that road, I looked at a couple of different areas to study, and started first as a biology major, which came full circle back to studying physiological sensor data today. Then I moved on to sociology, even looking into the social services sector and social work. But eventually, I just found those things weren't where my talents were best suited.

So I started looking at, how do humans behave? How do humans think? And how can you use the study of humans to improve people's lives? Is that through training them? Is that through making things work better for them? Designing better processes, better organizations? Those are the things that really clicked. And being able to study that and build solutions around it, that's really how I got into studying human factors and applied cognition today.

Daniel Serfaty: That's a good answer. Thank you. So it came out of basically an altruistic impulse that you wanted to help, and you could help through social work, you could help through medicine, but you chose human factors and what you called applied cognition, which is the application of cognitive science to better human performance. Maybe Zach, you can tell us what made you choose this domain, which is human performance measurement.

Zach Kiehl: There's a bit of a story, actually. Like a lot of seniors in high school, I had to write a career paper, where you talk about what you're going to do when you're older or what college degree you're going to pursue. And I actually chose biomedical engineering as a junior in high school. The path leading to that decision is very similar to Kevin's, where he mentioned altruism. I always had an interest in math, in science, in physiology and anatomy. I think that the human body is just a marvelous creation. And despite all of our technological advancements, we're still not able to replicate a lot of the functionality that the human body has had for thousands of years.

And I was very drawn to that, and I was trying to find a discipline or a domain that fused all those; there's medicine, there's all sorts of things. And then I finally realized that there's this new domain called biomedical engineering. I'm not very old, but biomedical engineering was still a new concept. It was really a fusion of electrical engineering, mechanical engineering, anatomy, and physiology, all into one. And I really started to get interested in it. And I just had a fundamental desire to really see if we can use innovative technologies to improve the lives of humans.

And I wonder, what can we do? All the advancements we have, how can we make our lives better as humans for each other? That naturally drew me to biomedical engineering. And then one thing led to another, and here I am at Aptima trying to use the best technology to solve these types of problems.

Daniel Serfaty: It's fascinating, both Kevin and Zach, because all the people I engage with in our field, no matter where they end up, whether they are at the end of their career or at the very beginning, whether they are more on the psychology side of things or on the engineering side of things, there is a little bit of an idealistic, some would say naive, but in a very good way, impulse that brought them to this field. And that's the one that both of you are mentioning: the deep interest in the human, and the need to find a way to help, to support that. And I like very much that you got at it from very different angles, but you ended up in the same place. Talking about a place, Kevin, you are now the lead scientist of a major division at Aptima. What is it that you do at Aptima?

Kevin Durkee: Yeah. I'm really a Jack of all trades in a way. I spend a good chunk of time interfacing with customers and talking about the future, and what are the future needs? What are the needs of, in many cases, the war fighters, since that's a large part of the customer base that we deal with? What are those future missions they need to accomplish? And what are the limitations of technology today that aren't addressed yet? And so we're really trying to build thought partnerships with our customers to help paint a picture, a vision of what that's going to look like in five years, 10 years, 20 years, and start to seed ways to make that a reality.

It's a progression of steps. We work with the technology we have today to meet some of those short-term requirements, but we also try to get on the leading edge of new technologies that are really going to let people do their jobs better, faster, and more efficiently. There's a lot that goes into that. We have to start with planting the idea, and there are obviously logistical considerations of getting funding. And once you actually successfully start up a program, being able to see that through and make sure it's executed, and then transitioned when it's all said and done.

Daniel Serfaty: That makes sense. So you basically are the embodiment of what we call applied research and development, in the sense that the motivation to develop a technology or a scientific approach comes from a need in the field, and you're trying to forecast those needs years in advance in order to be able to make those developments now. Let me give you a follow-up question here: at the end of the day, as you say, you want to train people to perform better, or to give them technology so that they can perform better. So why is it so important to be able to, and I use the term in quotes, "measure" humans? Why that emphasis on measurement?

Kevin Durkee: Yeah. I often like to say that you can't improve what you can't first measure. Obviously you could try, but it's much more productive to first measure and understand and analyze not just what is happening, but why is it happening? What's the root cause? So first, you build up this understanding through the collection of data and the generation of, not just any data, but good quality data. And that's an important tool to really unpack a solution, figure out the direction that you have to go. That's how you get your data: through good measurement, knowing what to measure, when to measure, and how to measure it.

Daniel Serfaty: So Zach, are there limits about what we can measure or perhaps even what we should measure? At the end of the day, we can measure all kinds of things, but we have to extract meaning from it.

Zach Kiehl: That's a great question. And honestly, the what-can-we-measure continues to change at a very rapid pace, obviously, with all the advancements in technology and the continued miniaturization of sensors, and even standoff technologies, where you're starting to be able to sense someone's heart rate from a camera, or even their position or their physiology through a wall. It really does beg the question of, can we do it? But then, should we do it? I feel like that's something you have to grapple with, getting into legality versus ethicality. And that's really a line that we've been trying to walk. With everyone carrying a smartphone nowadays, or wearing a physiological ring, people are willingly taking sensors on their body that can be used to compute some powerful measures of human health, safety, and performance.

And I would say that we are starting to push the envelope on what can be done. And there's a lot of great data that can be pulled from humans, but stopping to ask the question of, should we do this? And how it pertains to an individual's privacy is something that we're constantly evaluating.

Daniel Serfaty: Yeah. I think this is going to be a big theme, this notion of the enormous amount of data that we leave in the cloud, in cyberspace, whether it's physiological data, as you know, we can measure people's blood pressure at a distance. Or our behavior, the way we use our credit cards, our patterns of movement through GPS, etc. When I ask a question about what we should and shouldn't do: we pass through a metal detector in airports that measures whether or not there is a piece of metal on us. What if that metal detector could detect that we have a deadly disease? Should we tell the person that they have such a disease? What do you think?

Zach Kiehl: I think that's a very interesting question, and one that Congress and legislatures will be continually challenged to keep up with. The can-we-do-it continues to advance at such a pace that it's very difficult to legislate around. And I know that currently there's a big challenge with that. I don't want to name names, of course, but there are a lot of companies that are getting criticized for using data nefariously, or at least for personal profit, and it does beg the question. So I would generally default to how it pertains to the overall human race. If that individual, for instance, were patient zero of COVID-19, then it probably would make sense to put them in quarantine.

But if they're carrying something that's detected to be not a concern, I guess, or benign, then push them through. So it's definitely a consideration that hasn't historically been a problem, but now is starting to become one, and it's a very interesting use case for sure.

Daniel Serfaty: Yes. I think, Kevin, again, you've been around long enough to know that even for detection of things such as stress or workload, when I was a graduate student, the way to do that was usually to give a person a questionnaire and then eventually infer their workload. Today, with new instrumentation, by measuring some brain signal or some oxygen signal or some heart rate, you can actually infer workload to an incredible degree of accuracy. Are there limits to that, to what we can do versus what we shouldn't do?

Kevin Durkee: The technology has come a long way. I think it still has a ways to go, both in terms of its reliability, but also, where's the right place to use it and when is the right time to use it? Just building off Zach's comments, I think that's a discussion that has to be had on a case-by-case basis: who are your end users and who are your stakeholders? Just to throw out a brief example, some of the applications we've been working on for health and safety monitoring have dealt with unions and mechanics and people maintaining aircraft and ships. These are union workers, and they take their rights and their data privacy very seriously, understandably so.

So when we talk about instrumenting them, even if it is for an altruistic reason like ensuring the health and safety, they've raised that as a concern. So we have to make sure we have a good discussion early upfront before starting on that journey of building out that product, and we want to make sure it's positioned for success. So that includes not just building something that functions, but also something that has the buy-in and the support of the end user so that everybody wins.

Daniel Serfaty: Thank you for sharing those stories. I think as engineers, as innovators, we increasingly have to walk that line between innovation prowess and ethical considerations. The more data we can collect, and we are collecting in our society, the more we're going to be confronted with this dilemma. So Zach, can you think of an instance during your professional life over the last few years when you had an aha moment, a discovery, something that really surprised you? Can you share that with us?

Zach Kiehl: I think it's with respect to my own career and career growth. I've been fortunate to be placed in positions where, there's a saying, "If you're the smartest person in the room, find another room." And the joke I like to make is that I've never had to leave the room. Starting with an internship at a company known as Leidos, then working at the Air Force Research Laboratory for a little bit, then pursuing some graduate work at Wright State University, and then coming to Aptima, I've been blessed with very brilliant coworkers that are always very collaborative.

And the aha moment for me was that there are a lot of brilliant people out there, but there's not a lot of people that can understand the brilliant people but also speak to the business people. So I mentioned earlier I was a biomedical engineer, and that's what my technical training was in, but I also recently completed an MBA. And the reason I did that is I feel like there's not enough people in the world that can really straddle the technical domain while still talking to the lawyers or the executives that maybe speak numbers. And that was an aha moment for me in life in that I really found passion in that, and I personally think that that's a skillset of mine, is having the ability to understand the technical concept and being able to relay it.

As Warren Buffett likes to say, "It's not a good idea if you can't draw it out in crayon so that a five-year-old can understand it." That's what I'm trying to get at: take a great technical concept and sell it to people that are willing to buy into it. The aha moment for me is that I really think that I could excel in that area, and that's currently the route that I've been choosing to pursue.

Daniel Serfaty: Good. Well, you're blessed if you can straddle that line. Let's go back to a major achievement I know about in your professional life, Kevin: The HUMAN Lab, which you ran in collaboration with the Air Force Research Laboratory. It's H-U-M-A-N for the audience. It's an acronym, like many things in the military, but this one is a beautiful one because you remember it: it's a HUMAN lab. And Kevin, you were one of the architects of that lab that truly transformed, I think, not just your professional path, but also that of many of our colleagues, those within the company, but also outside the company. It's one of those seminal innovations, or innovation centers, that eventually led to many ideas. Tell us a little bit about it. What was unique about it? What did you learn there?

Kevin Durkee: Yeah. HUMAN stands for Human Universal Measurement and Assessment Network. What really made this lab unique, looking in hindsight, was that it served as a bridge from decades of laboratory research, mostly in the university setting, on physiological indicators. That is to say, if someone's brain waves change as they're doing a task or engaging in a certain stream of thought, or if their heart activity is changing in a certain way, or if their breathing or their movement is changing in a certain way, what kind of associations and predictive power does that have to tell you about their state of being? And specifically for us, as more applied practitioners of human performance, how does that affect their performance?

If you see a certain band power of an EEG sensor changing in response to a certain action they're doing while, say, flying a military drone, does that give you something of an indicator or a clue as to their stress level? Maybe they're overly stressed or overworked, or maybe they're underworked, maybe it's a lack of engagement. There have been decades of research on this, many, many published journal papers that find some of these correlations. But being university research, pretty much they published the paper, there was a finding, and that's the end of it. It's there for reference, and that's great. And there's value to that.

But really what The HUMAN Lab was trying to do was not only find those indicators, but be able to tie them together, integrate them into an assessment of that person that makes sense and that you can actually take action with. And so what we did was set up a simulation environment. The task environment didn't really matter that much, but we chose to use military drones, Air Force RPAs, remotely piloted aircraft. Can you take a simulated RPA pilot, measure many types of physiological data and behavioral data in real time as they're doing the task, and can you make those assessments of not only their performance, but what is their state of being, what is their workload?

We looked a lot at workload, a little into other states like engagement and fatigue. If you have that measurement in real time or near real time, what can you change about their work environment about the system behavior to better improve their ability to successfully perform the task? So we went through a couple of phases of that project where we first built out the system to do that. We designed models to make those predictions. And then lastly, to close the loop, we tied that into a couple of different types of adaptive system behavior such as offloading the tasks through automation, offloading tasks to a teammate, or giving adaptive visual cues or auditory cues that might help them that wouldn't otherwise be necessary.

Daniel Serfaty: So that's a very rich set of activities. And I know that you like to describe that, and that was, in a sense, the mantra of the lab: sense, assess, augment. And you described that the sensing is basically collecting all these millions of bytes of data, but the assess had to do with how you apply science to make sense out of these data. And eventually, the augment was, "Okay, now that I know I did the right diagnostic, can I help that performer?" It's a pretty wide range for a single lab to do, and I think that's influenced a lot of our thinking. Perhaps we can end this particular segment by asking both of you briefly to tell me, what did you learn as scientists and engineers from your years in the lab? I know, Kevin, you spent significantly more years there, so I'll start with Zach. What did you learn?

Zach Kiehl: Well, as Kevin mentioned, I learned a lot about what can be done with regards to human assessment and the type of information you can glean from putting sensors on individuals. And it's truly pretty phenomenal nowadays. You mentioned it earlier, Daniel, how intimately you can get to know a person by collecting these data. And it opened my eyes to what could be done. And as Kevin mentioned, it really opened my eyes to the possibility of transition, going back to the original discussion about how we can help people. A lot of these studies ended in a publication, which is great for the scientific community, but I wanted more, really focusing on, how can you bridge the gap between a research study and an end user who thinks that the sensor you're giving them is going to be used to track their bathroom time?

It's a challenge, but that's where I really got laser focused on applied research: how we can take these great laboratory findings and transfer them to an operational domain. Some of them are more applicable than others. With EEG, or electroencephalography, we make a joke: like the saying "when pigs fly," we talk about "when EEG gets into a cockpit for the Air Force or into an operational domain," as some of these technologies are still very cumbersome and not really suited for operational use. I naturally was drawn to the technologies that are a little more unobtrusive and could be used for the really challenging jobs that are out there and need to be done.

Daniel Serfaty: And Kevin, if there is a single thing you remember from all these glorious years spent at the lab, think of things that you learned and that eventually you used later, what is it?

Kevin Durkee: I absolutely would have to say the biggest thing I learned, and it has stuck with me through the many follow-on projects we've had that used physiological sensors in particular, is just how different human beings are. The amount of, we use the term variance, human variance you see is remarkable. A couple of basic examples we've run into: we might have person A, John Smith; when he gets stressed, he blinks hardly at all, eyes wide open, and he holds his breath, he doesn't breathe. But then you have Joe Johnson, the complete opposite. Maybe he starts hyperventilating and he has the tendency to shut his eyes.

And we see examples just like that all the time when you're using these types of sensors. So what does that mean? Well, it means that there is no one-size-fits-all way to do this type of work. You have to have good, efficient ways to individualize these technologies to each person who's using them. And that's actually a good segue when we start to talk about artificial intelligence and applications of that. You need good, efficient ways to learn the person and decide what you do with that data. And it's going to vary. It's noisy data, it's hard to work with, and it's going to change a lot from one person to another.

Daniel Serfaty: Kevin, I think this is an excellent point, this notion that, in a sense, the data, together with the machine learning and artificial intelligence techniques that come with processing the data, enable us, maybe for the first time in a long time from a technical perspective, to account for this extraordinary richness of individual differences in human beings. I think it's a blessing, and it's a great challenge for the engineering community. So Kevin, now that we've shared with our audience the story of The HUMAN Lab and what we learned in it, how did we take this breakthrough technology and turn it into a fielded solution? What were some of the secret ingredients, maybe the enabling agents, that enabled us to transition that successfully into the field?

It's a real story for our audience, that we can go from research and development to fielded solution in a relatively short time. So there are some lessons I would like to unpack with follow-up questions. Kevin, tell us the story.

Kevin Durkee: I think the biggest thing we did the right way was to learn all those hard lessons in a pretty safe environment. And that's really what the laboratory offers you. And it's everything from, what are the biggest software challenges or hardware challenges? How far does the wireless signal go? To, what's uncomfortable for humans to wear, and what don't they mind wearing? And also, under what circumstances does the data really shine? We learned all those lessons in a safe environment where you can measure everything. From there, it just becomes really obvious.

The more you use it in that safe environment, you see, "Okay, this is a really good sensor. This is comfortable. It gets good data. And it meets the type of parameters you would need to take it into the real world." So that's how we went at it. We found a couple of sensors that would work, so we would take that to the next level. In our case, it was an aircraft mechanic crawling into a really tight confined space environment where there could be low oxygen or explosive hazards, and we used a crawl, walk, run approach. So first you simulate it and you have them try it out. And then once that starts to work, you move on to the next thing.

Daniel Serfaty: How did you find this application? Out of all the applications you could have found, you found this one. Did you get lucky? Did somebody enable that?

Kevin Durkee: Yeah. So that was something I forgot to mention earlier about The HUMAN Lab: one of the really nice things about it was that it wasn't just a laboratory, it was also a way to showcase different example applications. So part of the business model of The HUMAN Lab was to bring in lots of people across industry, academia, and different government organizations. It was located at Wright-Patterson Air Force Base, the research and development hub of the Air Force. So you get a lot of people doing a lot of really interesting, diverse things coming through. And The HUMAN Lab was the key stop on that tour for anybody coming in.

So just through the four or five years that we were involved with that lab work, lots of different parties coming in, and eventually, you had a big industry partner come in and they saw the potential and they were able to extrapolate that into what they thought was an important work environment that could use those sorts of technologies.

Daniel Serfaty: I see. And the director of the lab should get credit, Dr. Scott Galster. Still, it's a case in which there was an incentive in a sense, an intellectual incentive to get those technologies out of the lab and into the real world or the field. Is that right?

Kevin Durkee: Yeah, that was absolutely the goal from the beginning. The laboratory was really a means to an end, and the Air Force Research Lab, who's the sponsor of that lab, has different types of research funding that they allocate. Some of it is for very basic lab studies, and for others, the single marker of success is, does it transition outside of the lab to a war fighter or to an Air Force operational mission? In this case, that happened to be the application toward confined spaces, health and safety within the air logistics complexes. Those are big aircraft maintenance depots where a lot of hazardous work takes place, and mechanics have to crawl into areas that really aren't designed for human entry.

And it takes a lot of manpower to keep tabs on them and make sure that safe practices are being done. And that was really what spurred the transition of this technology out of the lab, was, instead of having 10 humans watching 10 other humans going into confined spaces to make sure they're safe, can you use sensors and use technology to enable just maybe one or two people watching those 10 and then you save eight or nine? Those are eight or nine more bodies back into the workforce.

Daniel Serfaty: I'm trying to list all these necessary ingredients for success. So you had, primarily, a military service, in this case the Air Force, that has both a very prestigious lab, AFRL, the Air Force Research Lab, that wants to push those things out to the field, and an Air Force component in the field that is willing to try. We have a large industry partner who is willing to partner with us to take some of the technology to the field. And we have also a bunch of very brave scientists and engineers who don't mind making the jump. Zach, what new developments and technologies did we implement with this confined space monitoring system that Kevin just described, that enabled us to take something that can be anywhere, a lab with dozens and dozens of sensors, into something that can sit on a maintenance worker, that he or she can wear and go into the field? What are the key technologies that we were able to achieve there?

Zach Kiehl: What I really liked about this effort that Kevin's describing is that we really didn't produce a completely new sensor or a new wireless communication protocol. It was more what I call innovation by integration, in that we took some of the latest and greatest technologies out there, leveraging cloud computing, leveraging the latest wearable physiological sensors and environmental sensors, and really put them together to explore this unique use case. We found a lot of success in that. It's something that previously just hadn't really been explored. I made the previous anecdote about trying to get technologies out of a laboratory setting into an operational use case.

And I think the reason that doesn't happen very much is, as you said, Daniel, there are so many ingredients to success. There are the people that sign the checks; there are the scientists and engineers that trained in their laboratories in graduate school, who weren't always trained on how to put sensors on a guy that basically has a fur coat. We had tested our system on probably 50 people and it worked flawlessly, and then we met someone out in the wild who had a different body type with a lot of body hair, and then our sensors weren't operating as we expected.

You encounter some of these challenges and that's not really an innovation or new technology, but just being able to take your existing technology, modify it in such a way that it can really get the job done.

Daniel Serfaty: How was it received? How did the operators, whether they have body hair or not, receive the system? Can you share with us what they said once we asked them to wear a belt or a t-shirt that has all the sensors in it, and then go into those dangerous confined spaces that Kevin was describing? Kevin, what are they saying?

Kevin Durkee: Well, fortunately in our case, they had a lot of good things to say, but it goes back to what we said earlier about shaking out those bugs, so to speak, in the safe lab setting. We really did ourselves a big favor because, frankly, you can lose your end users pretty quickly in terms of their buy-in, their being able to use it and wanting to use it. You lose them pretty fast if you show them something subpar; that first impression is just critical. So we were fortunate, we were able to shake out a lot of those issues internally by bringing lots of people into the lab space.

Then what we show for that first impression with the actual end users is something we're pretty confident in. Maybe they're not 100% bought in, but we're pretty confident it's an 80 to 90% solution that most of them are going to like.

Daniel Serfaty: Can you think of one example, from those professional maintenance operators or maintenance experts wearing our system, of one piece of feedback they gave us that drove us to modify or improve the system in a certain way?

Kevin Durkee: It's kind of funny. One area where we were off the mark, very early in the confined spaces work, was that we were convinced they would want more of a wristwatch. Everybody's wearing Apple Watches and Samsung Galaxy watches; the smartwatch is becoming very popular, and we were pretty convinced early on that it would gain steam pretty quickly just due to how commonplace they are, and the data quality is getting very good on them. But all of a sudden we start using it, and these are guys who work with their hands a lot, these are blue-collar jobs.

And not only does it get a little uncomfortable when you're turning a wrench and wearing a smartwatch, but they also were going to break about five or 10 of these a week just bumping them on the side of the metal. That was a little bit of a surprise, but fortunately, we worked with them. They actually were the ones who said, "Well, what if you put it up on the arm? Take the straps off the watch, build it into an armband. It's still a smartwatch, you just wear it a little higher on the arm." That was their idea, we just implemented it. And that problem was solved.

And that's really just a simple example of one thing we did: we thought something was going to work, it didn't go quite as we envisioned, but we came to a great solution.

Daniel Serfaty: That's a beautiful example, actually. General Patton said, "No plan survives contact with the enemy." And I think here, no design survives contact with a user. There is a lot of wisdom out there, born out of the experience of these people doing these jobs, and that's gold, truly, for the design engineer. What's on the CSM's résumé so far? And again, I'm using an acronym here: CSM, the Confined Space Monitoring system. What's on its résumé so far, Zach?

Zach Kiehl: Well, thus far, I think we've told the story of successfully transitioning outside of the lab. So first and foremost, that's something that doesn't always happen; there are a lot of great technologies that maybe end up with a patent or a publication, and for whatever reason there's not an operational need, or the technology is not mature enough. So I think the first real bullet point on that résumé would be that we took a technology out of a lab and put it to an operational use case that had the potential to positively impact someone's health and safety.

Also, increased compliance with OSHA regulations, and then the cost savings of being able to use the system as a workforce multiplier. Another point on the résumé would be the acquisition of funds to mature the system. While we started in a lab, we certainly didn't have a system that could be easily transferred to this use case of industrial health and safety. So being able to attract the necessary funding, stitching together multiple contract vehicles and contract dollars to fund the needed development, was certainly something as well.

And then we actually did the math and looked through the confined space entry logs at how many hours are spent with a person watching another person work, and we got the number of 41,000 hours per year, just for one Air Logistics Complex. And that's just one facility within the Air Force that does confined space operations. As you can imagine, there are multiple ALCs, there are multiple bases and facilities that do confined space operations, and that's just the Air Force. So we've recently started extending it to a Navy application and the commercial sector as well.
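Zach's workforce-multiplier math can be sketched in a few lines. The 41,000 attendant-hours figure comes straight from the conversation; the one-monitor-to-five-workers ratio below is a hypothetical assumption for illustration, not SafeGuard's actual staffing model.

```python
# Back-of-the-envelope workforce-multiplier estimate for confined-space
# monitoring. The 41,000 attendant-hours/year figure is from the episode;
# the monitoring ratio is an illustrative assumption.

def attendant_hours_saved(total_hours: float, workers_per_monitor: int) -> float:
    """Hours freed when one monitor can watch several sensor-equipped workers
    instead of the traditional one-to-one attendant arrangement."""
    remaining = total_hours / workers_per_monitor
    return total_hours - remaining

baseline_hours = 41_000  # one-to-one attendant hours at a single complex
saved = attendant_hours_saved(baseline_hours, workers_per_monitor=5)
print(f"Hours returned to the workforce per year: {saved:,.0f}")  # 32,800
```

Even at a modest one-to-five ratio, tens of thousands of hours a year at a single complex go back into productive work, which is the workforce-multiplier effect Kevin describes in the opening quote.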

So really, the story is just getting started, and I think that there's a lot of opportunity for advancement into new domains and for the system to continue to scale.

Daniel Serfaty: And so if you look at the CSM, which is just being born, as we said earlier, it takes a village, and it takes many stars to align to enable that kind of success. And one person who also deserves credit is Dr. James Christensen from the Air Force, who really took this from the applied research stage and, as literally a thought leader and a team member, helped us shepherd it into the field of application.

Kevin, if you look a little bit into the future at this notion of tracking and measuring workers to ensure their health and their safety, maybe even beyond maintenance workers, how do you see this general idea of monitoring people? Let's set aside for a second the ethical component, we'll get back to that later, but what are the promises of the future here?

Kevin Durkee: I think the most promising trend is just how wearable the hardware has become. If you think of where this technology was even just five years ago, let alone 10 or 20, there was very little that existed that you could take out of a lab environment and actually do your work while wearing it. And now you're getting it down to the size of something on the wrist or something on the chest. There are some that sit exclusively on the forehead or the ears. So with a lot of different options, that potential is really being unlocked.

And it's really just going to continue to get less and less obtrusive and lower cost. And there are things people don't always think about, like cybersecurity; what you're able to do on just the protection of the data is getting a lot more efficient and miniaturized. In summary, it's the ubiquity of it, it's everywhere. But by the same token, it also can be scary in a lot of ways. Like a hammer, it can be swung productively, but it can also be used dangerously. So that gets to some of those ethical questions.
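To make the monitoring idea concrete, here is a minimal sketch of the kind of threshold rule such a system might apply to wearable and environmental readings. The field names and the heart-rate and temperature limits are hypothetical examples, not the CSM's actual logic; the 19.5% figure is OSHA's definition of an oxygen-deficient atmosphere.

```python
# Minimal sketch of threshold-based safety alerting on wearable sensor data.
# Field names and physiological limits are hypothetical illustrations,
# not the actual Confined Space Monitoring system's logic.

from dataclasses import dataclass

@dataclass
class Reading:
    heart_rate_bpm: float
    core_temp_c: float
    oxygen_pct: float  # ambient O2 in the confined space

def check_alerts(r: Reading) -> list[str]:
    alerts = []
    if r.heart_rate_bpm > 160:
        alerts.append("heart rate elevated")
    if r.core_temp_c > 38.5:
        alerts.append("heat stress risk")
    if r.oxygen_pct < 19.5:  # OSHA's oxygen-deficient threshold
        alerts.append("oxygen deficient atmosphere")
    return alerts

print(check_alerts(Reading(172, 37.2, 20.9)))  # ['heart rate elevated']
```

A real system would layer trend analysis and per-person baselines on top of static limits like these.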

Daniel Serfaty: Okay. Well, let me ask both of you a question, and I want a yes or no answer, and I didn't prepare you for that question. Suppose that the company you work for, Aptima, decides that they have a t-shirt or shirt that has sensors in it, and they ask you to wear it at work, maybe under your work shirt, where it can measure on a 24/7 basis different signals that come out of your body: whether you're engaged, whether you're overloaded, whether you're stressed, whether you have a high temperature and you're sick. And based on that, it makes decisions. Would you be wearing such a shirt? Kevin, one word.

Kevin Durkee: Yes. I'm biased.

Daniel Serfaty: And Zach?

Zach Kiehl: Yes. I'll eat my own dog food.

Daniel Serfaty: That's pretty brave on your part because, obviously, we start talking about what people will actually do with the data. But the fact that you're confident that these technologies are going to advance to a point where they will reflect your state as a worker accurately, that's very interesting. What we do with the data is another story, but the fact that you feel that those things won't send unnecessary alarms or alert people needlessly is interesting to me. And finally, one quick question here: what do you think is the role of artificial intelligence and machine learning in those futuristic systems?

I go back to something very important you said earlier, Kevin, about the importance, for acceptance, for accuracy, for honesty, of taking into account very different responses by human beings, those inherent individual differences that have to do with whether it's Jennifer or John wearing the sensor, and having the system infer their state based on those signals. What do you think is the role of AI and machine learning in the future for those systems? Zach, you want to take that on?

Zach Kiehl: Sure. Honestly, I think that it is a very powerful tool that needs to be used judiciously. And what I mean by that is asking the question of: can we, versus should we? And I really think that as technology continues to progress, as Kevin mentioned, with the miniaturization of sensor technologies, and heck, we're starting to even see implantables that can sense the biomarkers associated with certain states, this is not science fiction anymore; it's a very real domain. And it really opens up a wealth of opportunities.

And the example I like to give is cognitive assistants, where yes, they can respond to your voice and play your favorite song or set up an appointment for you, but what if they could start to get insight into your health and read into the data they're receiving: "Hey, it looks like your body temperature is elevated today, would you like me to schedule a doctor's appointment?" And you really start to see where that could extrapolate and go further. Imagine if your doctor is an AI entity, and maybe there's a human providing oversight or some decision support, but maybe there's not.

And what that really looks like, and the implications associated with it, are truly exciting, but also a bit terrifying. Everyone has visions of Skynet when you start talking about this, and if it's not done appropriately and judiciously, there are some justifiable concerns there. So personally, I'm very excited, but I also say that it needs to be used for good, and that there needs to be legislation in place to mandate that.
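One common way such systems handle the individual differences Daniel raises, whether it's Jennifer or John wearing the sensor, is to score each reading against that person's own baseline rather than a single population-wide limit. This is a hedged sketch: the names, heart rates, and z-score threshold are all invented for illustration.

```python
# Sketch of per-person baselining: flag a reading only when it deviates
# from that individual's own history, not a one-size-fits-all limit.
# All numbers are invented for illustration.

import statistics

def is_anomalous(history: list[float], reading: float, z_limit: float = 3.0) -> bool:
    """Flag a reading more than z_limit standard deviations from this
    person's own historical mean."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return abs(reading - mean) / sd > z_limit

jennifer_hr = [58, 60, 57, 61, 59, 60]  # resting heart rates over prior shifts
john_hr = [82, 85, 80, 84, 83, 81]

# The same 88 bpm reading is within John's normal range
# but a strong outlier for Jennifer.
print(is_anomalous(jennifer_hr, 88))  # True
print(is_anomalous(john_hr, 88))      # False
```

The same reading is routine for one wearer and a strong anomaly for the other, which is exactly why one-size-fits-all alarm thresholds generate the nuisance alerts that erode user trust.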

Daniel Serfaty: Kevin, you want to chime in on the same question?

Kevin Durkee: Absolutely. One of my historical heroes is Paul Fitts. He was active duty Air Force decades ago, but he's really one of the big pioneers in human factors, which is my background. And he had the Fitts list; the general principle of the Fitts list was, you let humans do what they do best, and you let machines do what they do best. I think that's a timeless principle, and it's really quite brilliant. I find myself coming back to it constantly, especially as AI has hit pop culture and continues to be discussed. I think that's just an important rule to keep in mind as we raise the issue of where we best apply AI.

What does AI do really well? Well, it can do a lot of things beyond what a machine typically would do, which is just following a static set of rules. Traditional automation just does the same thing over and over again, fairly fast, really efficiently, more than a human can. AI can do that and a little more; it's quite human-like in the sense that it can make judgments that would more traditionally be human judgments. But what does AI do better than humans? Well, I think it can take a more objective view with less bias. It can really look at the data, let the data tell a story, let the data make the judgment call.

And that's a skill I think a lot of humans don't have. But by the same token, there are things where you're never going to be able to take the human completely out of the loop. Is the value of a human life something you want judged by an AI? No, I wouldn't say so. I think you're going to want it hard-coded into the AI that the value of a human life is priceless. So you have to keep going back to those sorts of heuristics as you think about the application of AI.

Daniel Serfaty: Thank you both. So we are at that stage in our adventure where we had a crazy idea that we developed over years, with the help of the Air Force, in a lab. Then we took those technologies together with an idea, and we went to the field with real users, and they told us that our system needed to be modified. And we did, and we achieved success with both the Air Force and the Navy. But then it was time for a new, crazier idea. It's called SOS, Sentinel Occupational Safety. What is that? It's a new company. And so I will ask Zach to tell us: why did we decide to take the jump from our very comfortable research, development, and engineering environment and launch a new startup called SOS? Are we crazy?

Zach Kiehl: Probably a little bit. We're all a bunch of scientists and engineers, and we definitely have a bit of craziness in us, or we wouldn't have pursued the degrees we did. We really started thinking about our technology and realized that there are a lot of people that still have very dangerous jobs throughout the world. About 5% of total mortalities are actually directly related to one's occupation, so about 3 million people every year die from work-related issues. And I believe the economic impact of injuries, fatalities, and diseases from occupational exposure is $2.5 to $3 trillion a year.

So although we started with this very small idea of, can we monitor the health and safety of individuals working in aircraft fuel tanks, it started to naturally gravitate towards other people that have dangerous jobs, whether it's nurses on the front line of COVID, police officers facing the ongoing riots, or utility repairmen in the middle of a hurricane trying to restore power. There are a lot of people that have to go day in and day out and perform these critical jobs for our nation's infrastructure and for the health and safety of our nation.

And we really took that idea and said, "I think there might be a commercial use case here, and Aptima isn't particularly suited to sell, sustain, and maintain a solution." We are a bunch of scientists and engineers who, at least we think, have a lot of great ideas, but executing and sustaining those ideas is a whole other challenge. And we thought the only way to really capitalize on that potential was to start a new organization, which is just what we did.

Daniel Serfaty: That's a great answer. And so we identified a need there that we are continuing to explore, and we said, "Okay, let's make a jump." We'll come back to that jump in a second, but in the meantime, Kevin, you're a senior lead scientist at Aptima, and you have a lot of other projects and more junior engineers and scientists who report to you and whom you have to coach. What should be the relationship between, on the one hand, the mothership, Aptima, which is already 25 years old and producing great quality research and development in the human measurement domain, and a baby startup like SOS?

Kevin Durkee: Yeah. So I'm going to have to extend my earlier analogy, when I mentioned the Fitts list; I think it applies here too: you let Aptima do what Aptima does best, and you let Sentinel do what Sentinel does best. Aptima is excellent at what it does, which is research and development work: innovating new ideas, figuring out how to bring technologies together in a novel way and use them to solve very difficult problems. There's a very different mindset that goes into selling a commercial product, and that's a very different type of business.

Zach and I, and everybody on our team, are very passionate about wanting to get the Confined Space Monitoring system technology into the hands of end users and get it used. That's really what makes us all tick more than anything; we want to be out there solving problems and helping people in that way. But is Aptima set up to sell that product and distribute that product in the most efficient way? I would argue no. So that was a really big reason why we felt the need to set up a dedicated entity just for that, focused on getting out there into these new industries, many of which we've never worked in before: mining, oil and gas.

There are a lot of industries out there that can use this technology, so that's an essential role of Sentinel: to get out there into these commercial environments, get the word out, and help sell and distribute it so that it can reach a broad user base.

Daniel Serfaty: For our audience, imagine now that on the one hand there is Aptima, a late-teen, early-20s company doing research and development, and on the other, little toddler Sentinel, or SOS, a platform to launch those ideas into the big world. If you see an arrow going from Aptima to SOS, and another arrow going from SOS to Aptima, what's riding on those arrows? What goes from Aptima to that startup? Many people in our audience are interested in the secret to launching a successful startup. What should Aptima give to SOS? What should SOS give to Aptima?

Kevin Durkee: Aptima is always going to have a role in this product. Aptima has the very talented designers and engineers that brought this technology together. This is not a static problem, and it's not a static technology. Think of the very common technology platforms out there that everybody uses: the Apple Watch, Microsoft Office. All these popular products have evolved over the years, and they're very different now than they were two, three, four years ago. These are living products, and I think SafeGuard is very much heading down that road.

We need Aptima to continue innovating, to continue bringing the latest and greatest components into the SafeGuard solution so that it stays relevant and addresses the needs that are out there. By the same token, Sentinel is needed to push it out there and collect those requirements. So I think it's very cyclical and symbiotic between the two entities.

Daniel Serfaty: For our audience, SafeGuard is actually the product that the little startup Sentinel, or SOS, is launching, producing, and marketing. But let's go back to you, Zach. Now, maybe we're going to violate the Fitts list, which by the way is sometimes called the HABA-MABA principle: humans-are-best-at and machines-are-best-at. But now, here is Zach, the biomedical engineer, technical leader, who is also, by the way, the Chief Executive Officer, the CEO, of that new startup. So should Zach, and I'm asking you, Zach, should you be a CEO? Should you be an engineer? Can you do both? Or do we have a separation of powers, as Fitts tells us we should?

Zach Kiehl: That's a great question, Daniel, and I guess the jury is still out, as I'm very much still learning the role of a CEO. But I will say that I think the world could use more technical CEOs. Every CEO is unique in their thinking and their approach, but you see some of these revolutionary CEOs out there, like Elon Musk, who do have a technical background, and being able to combine that technical background with a business mindset can really usher in some new concepts and push things forward.

So I'm still definitely learning, and I think there's a lot of work to be done, but I think that, at least for now, I can hopefully fulfill both of those roles. And I certainly don't want to let my technical skills atrophy, so it will very much be a learning process for me, and one that I'm looking forward to tremendously.

Daniel Serfaty: Okay. So if Sentinel is your baby, and Sentinel has several uncles and godfathers and godmothers, but if it's your baby, what are your hopes and fears for the new company?

Zach Kiehl: I'll start with the fear, and that is that it doesn't get used. At Aptima we have a lot of great technology we've developed that sometimes doesn't transition; it gets set on the proverbial shelf, and it's really a shame. Ultimately, there's learning in all things. I think there are hundreds of great innovations coming out of the defense sector that really struggle to transition into a commercial use case, and it is a challenge for a number of reasons. So that's really my hope: that we can do it successfully and maybe be a model for other would-be entrepreneurs.

Daniel Serfaty: Yes, I like that very much. I found out, during my career at Aptima and on the boards of other young companies that I serve on, that there is one ingredient that is more important than venture capital, sometimes more important than innovation or the product or the market. And that ingredient is a dedicated champion, or a champion team, that makes up for all the other fluctuations. I learned that myself when I started Aptima, and I keep relearning that lesson again and again.

I wish you the best, and you're surrounded by a very good team, like Kevin and others, who are going to support you. Just as a way to add a little more about Sentinel: envision commercial success for the SafeGuard product or product line at SOS. Close your eyes, open them a year from now, and then open them again three years from now. What does it look like?

Zach Kiehl: I love to envision that future, and quite frankly, I'm an optimist, so that's the only future I try to imagine. But I hark back to an anecdote we had from a user of the system. He told a story about being stuck in a confined space and how it was the scariest five minutes of his life. He had been in a space that got shut up for the day; he was banging with a wrench on the side of an aircraft fuel cell, and nobody could hear him, and how terrible that was. And honestly, that's really where I want to go. I know there are dangerous jobs out there, and I quoted some statistics earlier about how big an issue this is, but with the advances in technology that we've seen, a lot of these incidents are preventable.

That's where I want to see Sentinel and SafeGuard moving forward: actual use in applied settings. And certainly there will be some challenges associated with that, especially as we learn what it's like to sustain a commercial product. But I think in one year's time, it would be fantastic to see a number of users using the system and giving us great feedback to improve it, and to see that we can improve their organizations both from a health and safety standpoint and from an efficiency and cost-savings standpoint. And in three years, continued expansion.

I'd love to see us move into additional markets that maybe aren't specific to the DOD. There are a lot of potential applications for folks doing potentially dangerous jobs that I think could benefit from personalized health and safety monitoring.

Daniel Serfaty: Thank you both for having shared this adventure, this entire arc of activities that very often is born of some out-of-the-way idea in a proposal and ends up as something concrete that, we all hope, through the science of human performance measurement, will eventually help people perform better, be safer, and enjoy their work more.

Thank you for listening, this is Daniel Serfaty. Please join me again next week for the MINDWORKS Podcast, and tweet us @mindworkspodcast, or email us at [email protected]. MINDWORKS is a production of Aptima Incorporated. My executive producer is Ms. Debra McNeely, and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during these episodes, please visit aptima.com/mindworks. Thank you.