
Fire Science Show
214 - Thermal Imagers with Martin Veit
The world looks entirely different through a thermal camera lens, especially in a fire scenario. These devices reveal harsh temperature gradients between hot and cold surfaces, adding another dimension to how fire safety professionals understand and navigate dangerous environments.
Thermal cameras have transformed firefighting operations with astonishing effectiveness. Studies show that in smoke-filled buildings, thermal cameras significantly improve the chances of identifying victims. This technology dramatically reduces search times and increases survival chances, making it an essential tool for modern fire services around the world.
Martin Veit, who recently completed research for the Fire Protection Research Foundation, takes us deep into the science behind these life-saving devices. He explains how thermal cameras detect long-wave infrared radiation (7-14 micrometres) emitted by objects based on their temperature, creating images that reveal what smoke would otherwise conceal. The technology works because many combustion gases are relatively transparent in this part of the spectrum, giving firefighters a crucial advantage in zero-visibility conditions.
We explore the fascinating distinction between "measuring" precise temperatures (which requires understanding factors like surface emissivity and a bit of physics) and simply "observing" temperature differences (which can be sufficient for navigation and victim location). This distinction proves crucial when evaluating how thermal cameras should be tested and certified for firefighting applications.
The conversation delves into the challenges of current testing methods under NFPA standards, which sometimes yield inconsistent results that don't align with human perception of image quality. Martin's research investigates alternative approaches from the field of image processing that could provide more reliable and relevant evaluations, potentially improving both camera certification and opening doors to AI-assisted applications in firefighting.
Read Martin's report here: https://www.nfpa.org/education-and-research/research/fire-protection-research-foundation/projects-and-reports/measuring-thermal-image-quality-for-fire-service-applications
----
The Fire Science Show is produced by Fire Science Media in collaboration with OFR Consultants. Thank you to the podcast sponsor for their continuous support towards our mission.
Hello everybody, welcome to the Fire Science Show. In today's episode we will be talking about thermal cameras, and that's probably one of my most favourite devices that exist in the world. I would say that if you have a fire safety engineering friend and you'd like to give them an amazing gift that they will enjoy a lot, and the budget is on the higher end, a thermal camera is your way to go. It's just fascinating to look at the world through the lens of thermal imaging, especially as soon as you start applying it to anything fire related and you see those harsh gradients between cold and hot surfaces, how stuff quickly cools or quickly heats up. It really gives another dimension to the eye of a fire safety engineer. But this episode is not just about fascination with thermal imaging. It's more about its practical side, and the practical side in life safety. Thermal cameras are today a vital piece of equipment used by firefighters around the world to look further into the realm of fire when fighting fires, and we all understand how big an advantage they can give to firefighters, but also that a faulty thermal camera can be potentially life-threatening. So in this episode I've invited Martin Veit from FRISSBE at ZAG in Slovenia, and Martin has just finished his report for the Fire Protection Research Foundation, in which he was looking into different metrics of quality for thermal cameras that are actually being used by firefighters in the United States. In this episode we will dive a little bit deeper into the mathematical part of how to test different images and what it means to compare image quality. But before we reach that point, we have a long discussion about thermal cameras: how they work, why measuring temperature with cameras is actually quite difficult and why just observing the temperature gradient or differences in a fixed location is not that hard, what you can do with them, how we test them and how we can improve it for the future. So let's not prolong this unnecessarily and jump into the episode.
Wojciech Wegrzynski:Welcome to the Fire Science Show. My name is Wojciech Węgrzyński and I will be your host. The Fire Science Show is into its third year of continued support from its sponsor, OFR Consultants, who are an independent, multi-award-winning fire engineering consultancy with a reputation for delivering innovative, safety-driven solutions. As the UK's leading independent fire risk consultancy, OFR's globally established team have developed a reputation for preeminent fire engineering expertise, with colleagues working across the world to help protect people, property and the planet. Established in the UK in 2016 as a start-up business by two highly experienced fire engineering consultants, the business continues to grow at a phenomenal rate, with offices across the country in eight locations, from Edinburgh to Bath, and plans for future expansion. If you're keen to find out more or join OFR Consultants during this exciting period of growth, visit their website at ofrconsultants.com.
Wojciech Wegrzynski:And now back to the episode. Hello everybody, I am joined today by Martin Veit from FRISSBE at ZAG in Slovenia. Hey, Martin, good to have you on the podcast. Hello, thanks for having me and thanks for joining me. We have an interesting topic here to discuss, and that is your very recent report for the Fire Protection Research Foundation at the NFPA, which is on measuring thermal image quality for fire service applications. I have not had a thermal imaging episode yet, but I think it is quite an interesting topic that actually connects engineers, firefighters, practitioners. Come on, everyone likes thermal cameras in fire safety. So perhaps let's start with your background and how you ended up in this project, right?
Martin Veit:So for my background, I'm actually not a fire safety engineer, I'm a civil engineer, and I just stumbled into the field. I guess the first entry into the field was really when I was, I guess, cold called. A guy from my university, when he went into industry, recommended me to a partner in a company called Vibraxenernen, Kenneth Håkvar Jensen, and he asked if I wanted to work in the company as a student helper. So I worked there for half a year, which was my first entry really into fire protection engineering, and I worked on multiple different things, so fire strategy, but also a bit on documentation and programming, and I was supposed to stay. But then I was offered a PhD at Aalborg University that didn't start immediately, and I was supposed to start on that one eventually. It took a bit too long for me, it didn't go as planned. But then Grunde Jomaas, my current supervisor, reached out to me on LinkedIn. I guess I liked his LinkedIn posts.
Martin Veit:And then he wrote to me in Danish, I believe, on LinkedIn, so I was very surprised, and he offered me a PhD in Slovenia. So I thought, let's go back to fire protection engineering and take on an adventure.
Wojciech Wegrzynski:Go to Slovenia, see what it brings. So there are actually positive things coming from Grunde's LinkedIn activity. Good to know.
Martin Veit:Absolutely Exactly.
Wojciech Wegrzynski:Yeah. And how did you end up studying thermal cameras?
Martin Veit:Right. So I didn't have a background in fire protection engineering, so more or less the first half a year to a year was me figuring out what I am going to do in the PhD. And I think it ended up, after a meeting with Andrea Lucherini, another one of my supervisors, that at some point we discussed, okay, maybe it could be interesting to look into gas phase measurements. So I started reading a lot, found some papers, and then I found a paper that used thermal imaging cameras to characterize the flow field of a pool fire, and so I looked into that a bit.
Martin Veit:But then I stumbled into the fact that thermal imaging cameras normally have pretty low resolution, both spatially and also in the frame rate. So I thought, okay, how can we accommodate this issue? So I looked into how to improve the resolution of videos, both the spatial and temporal resolution, and I recently presented a paper on that in Greece, using some machine learning to enhance video footage, specifically thermal videos. And then at some point during this whole process, I think he received an email about something called student project initiatives from the FPRF, the Fire Protection Research Foundation, and we looked into the projects that they had, and one of them was specifically on measuring thermal image quality. So I think from the beginning to the end, from my background to where I am now, ending up in this project has been a big coincidence, more or less.
Wojciech Wegrzynski:Welcome to the world of fire science. I guess it's a common story; I think nine out of ten people will have a similar life story about how they ended up in the current location where they are working or the topics that they are dealing with. Good, so a nice coincidence. I remember talking with you some time ago, I think it was at the conference in Slovenia, about your ideas on measuring gas flow fields and using observation.
Wojciech Wegrzynski:It's kind of a holy grail in fire science to be able to incorporate more optical measurements. They are fast, they are clean, they are easy (air quotes were shown while saying easy, because they're not very easy, but they look easy), and it is my strong belief that you can get much more from recording fires and observing fires than you think you could get. But the technology that you're investigating here is not specifically meant to support researchers in their experimental endeavors. It's a technology used every day by firefighters, I presume. Can you tell me more about the types of cameras and the technology and how it is currently overwhelmingly used in the firefighting profession?
Martin Veit:Yes, absolutely. So firefighters use this technology, the thermal imaging camera or thermal imager, also called a TIC for short, for a lot of their different duties when doing structural firefighting. Going into a building and trying to navigate it, because with a fire you have a lot of smoke and it can be very difficult to see with your eyes, much less identify a person lying on the floor if you can't see anything. There was this nationwide study in the US, I believe, where they looked into the efficacy of thermal imaging cameras and the way firefighters use them. What they found was that if a firefighter goes into a building with a lot of smoke, I think it was 60% of the time they couldn't identify a person without the thermal imager, but with the thermal imager they could identify the person 99% of the time. It also reduces a lot of the time it takes to locate a person and navigate your way out of the building.
Wojciech Wegrzynski:So is it something that you could call, now, a standard piece of equipment? I wonder how it is around the world. I think in Poland the fire brigades are very strongly equipped with thermal imaging cameras.
Martin Veit:Right. So that is also my belief. Maybe in the 2000s, I think, is when they started to roll them out to firefighters, and before that not so much, because they were big, they were bulky. But then, when the technology advanced, you could have a smaller thermal imaging camera, it's easier for the firefighter to use, and so on. So for now, at least in the US, which was the main focus of the project because it was an American project, it seems to be very, very common in the fire service to use thermal imaging, and I also talked to some people in Europe, where, as you also say, it is very common to use because it's such an effective piece of equipment.
Wojciech Wegrzynski:I wonder when it's going to become a part of, you know, the helmet and the body kit itself. I can imagine this kind of device being integrated with the helmet and the visor in the helmet to actually present some augmented reality, thermal imaging, over what the firefighter directly sees, perhaps with some clever eye tracking. I think every single component of technology that would be necessary to do such a thing already exists. It's just about, you know, scale and being able to deliver that. So I'm absolutely convinced this is an element of infrastructure that will be a part of the future.
Wojciech Wegrzynski:And I remember when I rejoined ITB 15 years ago, we had this massive, bulky thermal camera. It was like a beamer size, you know, you had to have a whole bag for it, and now a thermal camera that's significantly, significantly better in technology is the size of my iPhone. That's the progress in this technology. You mentioned they have low resolution, low frame rate. So how does that compare to devices that people are used to, like the cameras in their phones?
Martin Veit:If you have a standard iPhone, not necessarily the newest one, you'll have full HD images, right, so you can have very high resolution. If you take an image, you can easily discern details, you see a lot of colors, it's very clear, very visible. But the thermal imaging camera has a resolution much lower than that. Very expensive cameras will have a high resolution and also a high frame rate, but the ones that are more affordable, also to the fire service, will typically have a resolution of 240 by 320.
Wojciech Wegrzynski:That's like the common one. I think my first phone had a camera of 240 by 320, and it's a very low resolution actually. What about frame rates? Are we talking about a single image per second, or are we talking something like a movie, 24 or 30 hertz? How do they operate in terms of frame rate?
Martin Veit:To a large degree, it's actually pretty good for what you need to do if you want to operate a thermal imaging camera and go into a building, because you don't want it to lag too much, or to have too much time in between individual frames, because that might make it very difficult to navigate. So the ones that are NFPA approved, the ones that are very common in the US, typically have a frame rate of 30 hertz or even as high as 60 hertz, which is pretty good for a thermal imaging camera.
Wojciech Wegrzynski:Is there a standard they refer to? Is there something that defines what they should do? Perhaps that's something we should refer the listeners to.
Martin Veit:Right, so the standard that goes into certifying these thermal imaging cameras in the US is the NFPA standard, NFPA 1801, which is currently being consolidated into NFPA 1930, I believe, which also collects a few other standards into one big standard. But that's the standard that goes into testing, what this thermal imaging camera should be able to withstand and also what it should do.
Wojciech Wegrzynski:And in terms of what they see, I know in my thermal cameras there's this annoying parameter, the temperature range, which I have to set. So those devices used by firefighters, do they operate in the whole spectrum of fire temperatures, up to, I don't know, 1200 degrees, or are they narrowed down to a few hundred degrees? What's the target temperature range on those devices?
Martin Veit:So actually the temperature ranges sort of depend from camera to camera, also for the NFPA-approved ones, but I believe there must be a minimum range in the NFPA standard. For example, some of them go from minus 20 up to 550, some from minus 40 up to 550, and some from minus 40 up to 1100 degrees Celsius. So there's a big range that they can visualize and observe.
Wojciech Wegrzynski:But it's not like they cap at 100 degrees, right.
Martin Veit:No, no, no.
Wojciech Wegrzynski:And also, to get utility, you don't need it to go beyond the fire range, right?
Martin Veit:No, right. I mean, you just need it to be able to operate within the expected conditions that you want to go into.
Wojciech Wegrzynski:And, if you think about it, I guess the expected condition is not necessarily a fully flashed-over fire. What kind of information does a firefighter get from a camera? If you have a fully flashed-over fire in front of your face, you don't need a thermal camera to confirm that. Exactly. Okay, let's move on to how thermal cameras work, so perhaps there's a bit of interesting technical knowledge. Could you briefly explain to me why the camera in my phone does not capture temperatures and why this specific piece of equipment does, so we can go into some of the basics?
Martin Veit:I think if you want to go into the details and stuff.
Martin Veit:It might be like a full episode in itself. But I guess the idea is that you have different regions of the electromagnetic spectrum that you want to observe. You have the visible spectrum, which is 380 up to 780 nanometers, but the infrared is also partitioned into different regions, and for long-wave infrared cameras the range is approximately 7 micrometers up to 14. So it's a very different range that they observe. We have larger wavelengths, so the individual detectors also need to be bigger. If you have a standard digital camera, for example, it has a much higher resolution, also because it's a different technology, and there has been a lot of research in that specific field compared to thermal cameras, which are normally used by the military, for building inspections, for the fire service, but don't have as much utility for the common person compared to a regular camera. So the main thing is the technology used, which is why you have lower frame rates and lower resolution, and that's due to some of the physics involved.
Wojciech Wegrzynski:How can the thermal camera actually distinguish between temperatures? How does it know that a thing is at a low temperature? How does it know that a thing is at a high temperature?
Martin Veit:So thermal imaging cameras work on the principle that any object in front of you will emit a certain amount of radiation, and if you have an object that's at a higher temperature, it's going to emit more radiation, which is then detected by the thermal imaging camera. That means you can discern two objects at two different temperatures, because they emit different amounts of radiation that are detected by the detector in the thermal imaging camera.
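To put a rough number on that principle, here is a minimal Python sketch (my addition, not from the episode) using the Stefan-Boltzmann law, where total emitted power scales with the fourth power of absolute temperature. It is a simplification: a real long-wave camera only sees the 7 to 14 micrometre slice of that radiation, and the emissivity value is just an assumed example.

```python
# Minimal sketch: why a hotter surface stands out to a thermal detector.
# Total emitted power per unit area scales with T^4 (Stefan-Boltzmann law),
# so modest temperature differences produce large differences in radiation.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def emitted_power(temp_celsius: float, emissivity: float = 0.95) -> float:
    """Radiated power per unit area of a grey surface, in W/m^2."""
    temp_kelvin = temp_celsius + 273.15
    return emissivity * SIGMA * temp_kelvin ** 4

for label, t in [("wall at room temperature", 20.0),
                 ("person", 37.0),
                 ("hot doorway", 300.0)]:
    print(f"{label:25s} {t:6.1f} degC -> {emitted_power(t):9.1f} W/m^2")
```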
Wojciech Wegrzynski:Okay, I can understand how we can observe walls and solid objects with thermal cameras. What about gases, like in observing flames and smoke, etc.? Because I know it's not a trivial thing. So how much smoke do you see on a thermal camera? How much flame do you see on a thermal camera? What exactly do you see?
Martin Veit:You're right, it's not trivial. It's a lot easier to detect a wall, an object, something solid; especially if you want accurate gas phase temperatures of a flame, that's very difficult to do. But I guess the main principle, and why the thermal imaging camera is of utility to a firefighter, is that they operate in the range of 7 to 14 micrometers, because if you look at some of the combustion gases that are present during a fire and you look at their absorbance, they have a large part of that specific range where they have very low or no absorbance, meaning that they're not going to obscure the image that you're trying to see.
Martin Veit:So you can see through smoke because of the gases' low absorbance. For example, I used a thermal camera to make some videos of a small pool fire. I can see the flame because it still has radiation, blackbody radiation. But if I want to visualize the combustion gases, which don't have a lot of absorbance in the specific region of the spectrum of the long-wave infrared camera, then I can go down and use a mid-wave infrared camera, so I can visualize the combustion gases more clearly, meaning that I can actually see them fairly clearly even though they're not necessarily very visible to my eye. So also flames: for example, I used a mid-wave infrared camera to make videos of a pool fire. It was a visible flame, it was heptane, so it also has some soot, but you can also very clearly see the flame and different flame structures, and then also some of the combustion gases even above the flame.
Wojciech Wegrzynski:Can you define mid-wave? What spectrum would that be? Is it below seven?
Martin Veit:It is. I guess it also depends on who you ask; I've seen different definitions, one to five or three to five. The one that I would choose is probably three to five micrometers.
Wojciech Wegrzynski:So in essence, for the ones that would be used by the fire service, let's say the 7 to 14 default range, basically the stuff that the fire produces in abundance, CO, CO2, is transparent to that radiation. Therefore you can see better through them, because you're not obscured by a large amount of radiation produced by those hot gases, and at the same time it makes it difficult to observe those structures, because basically they're transparent as well. So it's kind of a blessing and an issue, depending on how you look at the problem. I had an interesting episode on observing through flames with Matt Hayler. We've been talking about blue light technology, so they've also found a way to go to very near-UV radiation, where the flame is almost transparent to that particular blue-light wavelength and you can see through flames, and yes, it actually works like that. So, in essence, a very similar principle here.
Wojciech Wegrzynski:So how much do you need to know? Let's imagine you're taking a picture with a thermal camera. It gives you some output. It's a simple device: it looks at photons, accumulates them, gives you an outcome. But what exactly do you have to know about the object you're observing to understand the values that you are seeing on your screen? For example, the emissivity of the surface that you're observing with the thermal camera. Do you need to know the emissivity, the reflectance of the surface that you're measuring?
Martin Veit:As you mentioned, there are a few different things that you need to know, but I guess this also depends on the purpose of what you want to do. Do you want to have accurate gas-phase measurements? Do you want to have accurate measurements of the temperature of some solid object, which might be very valuable? But if you don't need that, if you just want to, let's say, identify hotspots, identify heat losses of a building, for example, which is also a common use of thermal cameras, you can do that without getting all of the different specifications correct, for example the emissivity, the distance to the object and so on, and also the ambient temperature conditions. If you need accurate measurements of some phenomena, let's say gas phase or solid phase, then you need a bit more information to do it correctly, for example the emissivity. But as far as I know, getting accurate gas-phase measurements of a flame and fully resolving it is very difficult. It's not so straightforward.
Wojciech Wegrzynski:Well, technically, if you take a camera and point it towards the flame, it will tell you something like 600 degrees. It's not the flame temperature in any way, so it must be incorrect. But then, a long time before COVID, we were doing some facades within pre-rule and we were observing a steel facade, one that you actually saw, the next generation of which was shown at the summer school. We had a prototype of that a long, long time ago, and we had this steel plate.
Wojciech Wegrzynski:There was a fire in the cavity in the steel plate, and basically we were taking pictures of this facade. It was giving us very weird numbers. We had a black spray paint which was meant for chimneys, literally a very dark, very matte spray paint, and we were painting it on that steel facade, and in the locations where we painted it, where it's very black, the temperature measurements were accurate to within almost one degree of the thermocouple measurement in the same location. That's because you can very well approximate the emissivity of a matte black paint, which is very, very high, whereas the steel surface that is undergoing heating and a lot of crystal formation transitions changes colors, it bends a little bit, it reflects less; in those locations it was a mess. So that was a moment when I realized I really need good control over what I'm measuring. I think the word measure here is the key, because if you want to measure, you have to put in the effort to measure. If you're just observing, here you go, you're welcome to see the range.
Martin Veit:I think it's a great distinction, to either measure or observe, because for a lot of different things that you want to do with a thermal imaging camera, observations are just fine, right? You don't need very accurate temperatures, you just need to discern some difference: maybe it's hot, maybe it's cold, if it's heat losses or whatever for a building. If you want to measure, you have to take a lot more care in what you're doing, control the emissivities and do all of the steps to get accurate temperatures, if that's what you want to do. But I think both observation and measurement have a place in fire safety science and what we use it for, especially if you're trying to observe smoke and fire phenomena.
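To illustrate the measuring side of that distinction, here is a hedged sketch (my addition, not from the report) of the standard grey-body correction: the camera's apparent, blackbody-equivalent reading is combined with an assumed emissivity and a reflected ambient temperature to back out the surface temperature. It uses a total-radiation Stefan-Boltzmann approximation and ignores the atmosphere and the camera's limited spectral band, so treat it as illustrative only.

```python
# Sketch of a simple emissivity correction for an opaque grey surface.
# Apparent radiance ~ eps*T_obj^4 + (1 - eps)*T_refl^4, solved for T_obj.

def corrected_temperature(t_apparent_c: float,
                          emissivity: float,
                          t_reflected_c: float) -> float:
    """Estimate the true surface temperature (deg C) from an apparent reading."""
    t_app = t_apparent_c + 273.15
    t_ref = t_reflected_c + 273.15
    t_obj4 = (t_app ** 4 - (1.0 - emissivity) * t_ref ** 4) / emissivity
    return t_obj4 ** 0.25 - 273.15

# Matte black paint (emissivity ~0.95): the correction is small.
print(corrected_temperature(200.0, 0.95, 20.0))
# Bare, shiny steel (emissivity ~0.3): the same apparent reading implies a much hotter surface.
print(corrected_temperature(200.0, 0.30, 20.0))
```

With the matte black paint the corrected value stays close to the reading, which is consistent with the thermocouple agreement described above; with a low-emissivity steel surface the same reading corresponds to a far hotter object, which is why the uncoated areas looked like a mess.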
Wojciech Wegrzynski:There is this thing called optical thickness. Basically, the emissivity of a smoke layer is not just a physical function of the soot inside. It's about how deep the layer is, until you reach the optical thickness and then it looks black. And if you don't know how thick the layer is, you cannot even tell how much smoke is in the layer, because you don't know if it's a very thin but very dense layer, or just a little smoke spread over 10 meters, which will look similar to you from the emissivity-of-a-layer standpoint. So there are a lot of caveats.
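A quick numerical sketch of that ambiguity, assuming a simple grey-gas description where layer emissivity is 1 - exp(-kL); the extinction coefficient k and the depths are made-up values. A thin, dense layer and a deep, dilute layer with the same k*L product present exactly the same emissivity to the camera.

```python
# Sketch of the optical-thickness point: the apparent emissivity of a smoke
# layer depends on the product of extinction coefficient and path length.
import math

def layer_emissivity(kappa: float, depth_m: float) -> float:
    """Grey-gas style effective emissivity of a smoke layer: 1 - exp(-kappa * L)."""
    return 1.0 - math.exp(-kappa * depth_m)

print(layer_emissivity(2.0, 0.5))   # dense smoke, 0.5 m deep  -> ~0.63
print(layer_emissivity(0.1, 10.0))  # dilute smoke, 10 m deep  -> ~0.63
```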
Wojciech Wegrzynski:So in the end, in fires, I don't think the firefighters should trust the number they see on the camera, but I think they are very well advised to trust whether something is hot or not in relation to the surroundings. That would be my guidance to my firefighting colleagues, who definitely are professionals in using those devices and train with them. But my observation as a researcher using tools like that is: wow, it is so hard to get an accurate measurement, but it's super easy to just, you know, get an overview.
Martin Veit:Absolutely, yeah. And I think that's also valuable to know as a firefighter, what to do with this, should I trust the temperature or not? Because I also heard from a firefighter, it was in Europe, but he said that they didn't have so much money to get thermal imaging cameras. So they had, I think, one good thermal imaging camera, which was for the chief, I believe, and then they had cheaper thermal imaging cameras, still very useful, because, as you say, you can still observe and get a lot of valuable information even though you don't have the highest resolution or the highest frame rate. They are still useful, even though you don't have the newest of the new or the best of the best.
Wojciech Wegrzynski:Are there big differences between particular cameras? How sensitive are they to the radiation of hot objects or gases in fires? Are there any practical differences, or can you qualify all of them as, let's say, useful for fire and that's it, no big differences among them?
Martin Veit:Right. Different cameras, of course, made by different manufacturers, will also have differences in quality. The ones that are certified to the NFPA standard primarily use two different detector technologies, one called vanadium oxide and one called amorphous silicon, and I believe at some point VOx, vanadium oxide, was more prevalent on the market, but amorphous silicon as a detector technology has caught up and now makes up the majority of the NFPA-certified thermal imaging camera list. So there are definitely different materials used, and they also have different material properties that make some slight changes for the thermal imaging camera. But to a large degree, thermal imaging cameras that are certified by the NFPA have to go through the same tests, so even though there's a difference in the technology, they should still be very much of utility to the fire service.
Wojciech Wegrzynski:And now, I guess, we get to the point of the research grant and the research question that you were involved in: how do you assess the quality of those cameras for fire applications, in order to verify them when a new sensor comes to the market, or a new product is introduced to the market and wants to get the certification? So let's perhaps start with how those devices would be tested in accordance with NFPA or for the purpose of certification. What exactly is being assessed when you certify a thermal camera? What are you looking into?
Martin Veit:So currently the NFPA standard that I mentioned before, the 1801, or the new one it's been consolidated into, puts the camera through a bunch of tests, for durability, for example, and then the thermal image quality is assessed with something called the image recognition test. This is based on some work by Dr Francine Amon back in the early 2000s, where they tried to look into different ways to quantify thermal image quality.
Martin Veit:And they looked into a bunch of different metrics, for example effective temperature range, non-uniformity, spatial frequency response, and the one incorporated into the standard now, the main test, is this spatial resolution test, or image recognition test. Essentially you have a bit of math to do: they use something called a contrast transfer function to estimate the spatial resolution. For this you need a blackbody target, and the test is actually a bit cumbersome, because you have this thermal imager that you want to certify, and then, to assess the spatial resolution, you take an image of the display, extract some region of interest and do some math to see, okay, how well did it respond to the test. Speaking with the technical committee, what I was told was that, first off, the test is a bit tedious to perform. It also has some consistency issues, so if you repeat it you might not get exactly the same value, also because you have this extra element of a camera taking an image of the display. And then there's also the image quality itself: if you put a human and the test side by side, they might not agree on the image quality.
Martin Veit:So one might say, okay, this left picture is better than the right one, but the test might say otherwise. They could still both pass, but there are some consistency issues at the moment. So the whole idea with the project was to look into how we can accommodate or perhaps improve the current framework used to assess thermal image quality, to make it more consistent and also to have it better aligned with how a human might score the image.
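For readers unfamiliar with the idea, here is an illustrative sketch only of the contrast-transfer-function concept behind such a bar-target test. The actual NFPA 1801 procedure (target geometry, photographing the TIC display, the pass/fail math) is more involved than this; the profiles below are synthetic.

```python
# Illustrative only: modulation of a bar target, CTF = (Imax - Imin)/(Imax + Imin).
import numpy as np

def contrast_transfer(profile: np.ndarray) -> float:
    """Contrast transfer of a line profile taken across the bar pattern."""
    i_max, i_min = float(profile.max()), float(profile.min())
    return (i_max - i_min) / (i_max + i_min)

# Synthetic example: a crisp image keeps most of the target contrast,
# a heavily smoothed one does not.
x = np.linspace(0.0, 4.0 * np.pi, 200)
crisp = 100 + 80 * np.sign(np.sin(x))        # sharp bars
blurred = 100 + 80 * 0.3 * np.sin(x)         # smoothed bars
print(contrast_transfer(crisp))    # ~0.8
print(contrast_transfer(blurred))  # much lower
```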
Wojciech Wegrzynski:Yeah, but it's not like a relative comparison between the cameras that are certified. It's more about meeting a threshold, right? Exactly, yeah. Okay, and the thing that the camera is shooting, do I understand you called it the display? But it must be a hot object, right, and it's controlled by the NFPA. What are you taking pictures of during the test?
Martin Veit:So what I meant before was that the thermal imaging camera takes an image, a thermal image, of something called a stencil pattern, which is essentially just a target with a specific pattern and a specific emissivity. But the image that you take from the thermal imaging camera is not the one that is being assessed. You have an extra step, meaning that you have a camera pointed at the display of the thermal imaging camera, which takes an image of that display, and from that you extract a region of interest. So you can imagine there are also some things with reflections that you have to take into account, because you have the display and you don't want extra reflections that could change the score of the test.
Wojciech Wegrzynski:Ah, I understand. So it's not just assessing the quality of the image taken by the camera, it's about assessing how you can view an object through the camera. So it also includes the assessment of the viewfinder, of the screen of the camera, in a way.
Martin Veit:Exactly, and I believe the argument for this was that you have a person also looking at the display, so they want to take it from the very image that you're trying to capture, all the way up to the display, treat that as a black box, and then assess the quality of this black box that you have.
Wojciech Wegrzynski:Yeah, that makes sense. I have not been thinking about this in such a way, but it is like testing a system in end-use conditions, something we need to do more of in fire science, which you, working at FRISSBE, are very, very well aware of, because that's something that me and Grunde are battling for all the time. Good, very, very interesting. And the assessment is based on this image quality, the one that the test captures, and then you apply some mathematical filters to extract some sort of outcomes from that, and, based on that, it gives you a value, a number that indicates the performance of the device. Do I get it correct?
Martin Veit:So you more or less have this image, and then you extract some region of interest, apply some math, we don't need to go into that, and then in the end what you get is a single score, this spatial response, or spatial resolution.
Wojciech Wegrzynski:So what's the main challenge with this approach, other than, well, I guess, it being cumbersome and to some extent inconsistent, which is something that is very difficult for a testing method? Perhaps let's step over that. What is your view on how this can be improved and how we could improve the consistency of this assessment?
Martin Veit:Right, so that gets into the topic of the project, the models.
Martin Veit:So I guess the main question was what do other people, other industries, currently do in order to assess image quality?
Martin Veit:And then there's a huge field within image processing where you try to assess image quality, which is more or less the field that I then looked at, to see, okay, how can we take some of the terminology, models and methods that they're currently using, apply that to thermal images, and then try to use it for an image quality assessment. So I looked into multiple different approaches, but before that I think we need some common terminology. When you want to make an assessment of an image, you would do either something called full reference image quality assessment, where I have an image that might have some distortions, let's say noise or blur or something, and I want to compare that to an image of the same object but with a very high quality. That's full reference image quality assessment.
Wojciech Wegrzynski:So I have an image that I then compare to something else.
Martin Veit:We can't use that for thermal imaging cameras, because if you take an image, you don't necessarily have a reference to compare it with. It's more used if you have, for example, different algorithms to improve resolution: you apply some distortions and then see how well you can improve the resolution of this poor image. Then you can compare.
Wojciech Wegrzynski:Unless you had a super-high-quality, like an order of magnitude better, thermal camera with which you could shoot the super cool reference picture, then you could probably do it. But I guess it could be challenging, right?
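As a concrete, hedged example of what full-reference metrics look like in practice (my illustration, not part of Martin's report), PSNR and SSIM from scikit-image compare a degraded frame against a pristine reference of the same scene; the synthetic "thermal frame" below is just a gradient stand-in.

```python
# Full-reference image quality assessment: compare a degraded image against
# a clean reference of the same scene using PSNR and SSIM.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = np.tile(np.linspace(0.0, 1.0, 320), (240, 1))   # stand-in for a clean 240x320 frame
degraded = np.clip(reference + rng.normal(0.0, 0.05, reference.shape), 0.0, 1.0)

print("PSNR:", peak_signal_noise_ratio(reference, degraded, data_range=1.0))
print("SSIM:", structural_similarity(reference, degraded, data_range=1.0))
```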
Martin Veit:So one of the other approaches is something called no-reference image quality assessment, meaning that you take an image, let's just say for now you take an image with your phone, and we want to assess its quality compared to how a human might score it, so we can use some model that tries to make a prediction of this. Those are the two main approaches, and I looked into both, also because at the end of the project I had sort of a case study where I wanted to use the full-reference metrics to then compare with the no-reference metrics. But let's skip ahead to the no-reference image quality assessment. I looked into three main approaches: one of them is called natural scene statistics, then you have machine-learning-based models, and then you have something called saliency-based models. So the first one, natural scene statistics.
Martin Veit:Essentially what it does is look at statistical distributions of the image that you're trying to assess. Let's say you have an image; you compute something called wavelet coefficients of the image and then you get some sort of distribution. The exact math is not so important now, but what people noticed was that if you have different distortions, for example blur or noise, it changes this distribution, and that's essentially what the models are built on.
Martin Veit:So you can extract different features to then make a model that computes some sort of image quality. And the whole backbone of this is that you have datasets where people have scored individual images with different distortion levels. It could be JPEG compression, you could have noise, you could have blur, you could have a very wide variety of distortions applied to images. And then you have individual humans scoring the quality of these images, and we can talk a bit about that later. But essentially you could have, let's just say, 100 people scoring different images with different levels of severity of the different distortions, and then you have an image dataset. In the end you can train models on this.
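A hedged sketch of the natural-scene-statistics idea. Real models of this family (for example BRISQUE or DIIVINE) fit parametric distributions to such coefficients and feed the parameters to a regressor trained on human scores; here I only show that distortions shift simple statistics of the wavelet detail coefficients, and the "thermal frame" is a random stand-in.

```python
# Distortions change the distribution of wavelet detail coefficients.
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter
from scipy.stats import kurtosis

def detail_stats(img: np.ndarray) -> tuple[float, float]:
    """Std and kurtosis of the first-level wavelet detail coefficients."""
    _, (ch, cv, cd) = pywt.dwt2(img, "db2")
    details = np.concatenate([ch.ravel(), cv.ravel(), cd.ravel()])
    return float(details.std()), float(kurtosis(details))

rng = np.random.default_rng(1)
image = rng.uniform(0.0, 1.0, (240, 320))                       # stand-in frame
blurred = gaussian_filter(image, sigma=2.0)                     # blur distortion
noisy = np.clip(image + rng.normal(0.0, 0.1, image.shape), 0.0, 1.0)  # noise distortion

for name, im in [("original", image), ("blurred", blurred), ("noisy", noisy)]:
    std, kurt = detail_stats(im)
    print(f"{name:9s} std={std:.4f} kurtosis={kurt:.2f}")
```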
Wojciech Wegrzynski:But it's like an ambiguous scale, like everyone comes up with their own. For one person the distortion could be a nine, for another it could be a three, right? But you solve that by a large sample?
Martin Veit:You have a large sample, and also what people normally do when they want to create one of these datasets is they have sort of a calibration round, just to anchor people to the scale they can expect to be in.
Martin Veit:So this image is this bad and this image is this good, approximately these scores. You sort of calibrate the participants, and then they go through a lot of images, they score them, and in the end you'll have a dataset where you have different images, different distortions and different severities of the distortions. And then, once you have a lot of good data, you can use that to train models on.
Wojciech Wegrzynski:By train models, you mean machine learning?
Martin Veit:Also machine learning, but some of the natural scene statistics models also use support vector machines, for example, to, I guess, map these features to the scores. But that leads us to the next one, machine learning, where the most common approach would be convolutional neural networks. You have a lot of different machine learning architectures where people try to improve upon previous models, and then once again you see, okay, if I have a score from this specific model, how does it compare with the human score of this image, and what's the correlation if I run this through the dataset? What you want is a high correlation between what the model predicts and what a human would give this image, because then you directly have something where I can say a human would score the image this specific value, but we also have a model that can do this. So now you actually have something that can predict the image quality of an arbitrary image. That's the whole idea, and the plan would be to do this also on thermal images, because if you have a very high correlation between this model and the human scores, what you can say is that the model should score images pretty consistently, and also, if you have a higher quality image, its score should be consistently higher than that of the lower quality image. So those are the first two approaches.
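Here is a minimal, hedged illustration of that evaluation step, with made-up numbers: a model's predictions are correlated against the human mean opinion scores, typically with Pearson (linear) and Spearman (rank-order) correlation.

```python
# How no-reference IQA models are typically evaluated: correlation between
# model predictions and human mean opinion scores (MOS). Values are made up.
from scipy.stats import pearsonr, spearmanr

human_mos = [4.5, 3.8, 2.1, 1.5, 3.2, 4.9, 2.8]   # hypothetical human scores
model_out = [4.2, 3.5, 2.4, 1.8, 3.6, 4.7, 2.5]   # hypothetical model predictions

plcc, _ = pearsonr(human_mos, model_out)     # linear correlation
srocc, _ = spearmanr(human_mos, model_out)   # rank-order correlation
print(f"PLCC={plcc:.3f}  SROCC={srocc:.3f}")
```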
Martin Veit:The last one is something called saliency, and this is a bit different, because essentially what saliency models do is predict where people look in images. Say you have an image with some people and some faces. Based on eye-tracker data, there are large datasets where people have participated, and if they look at a specific point of the image, that point gets recorded. So the idea with these saliency models is to predict where people look in images, but once again without the participants: you train a model and then you can say, okay, based on this model, trained on this eye-tracker data, we predict that a person will look at this point in the image. And the way people use this for image quality assessment is not necessarily immediately trivial to see.
Martin Veit:But what people saw was that if you have this saliency model and it creates a saliency map, which is essentially just the probability of where people look in the image, then they can use the saliency map with another image quality assessment model to boost the performance of that model by a few points. So in the end what you get is a higher correlation between the model combined with the saliency map and the human scores from the dataset. So those are some of the models that I looked into, these three main approaches.
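A hedged sketch of how a saliency map can be combined with a quality model: here a local SSIM map from scikit-image is pooled with saliency weights instead of a plain average. The Gaussian-blob saliency map is made up; a real one would come from a model trained on eye-tracking data.

```python
# Saliency-weighted pooling of a local quality map.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(2)
reference = np.tile(np.linspace(0.0, 1.0, 320), (240, 1))
degraded = np.clip(reference + rng.normal(0.0, 0.05, reference.shape), 0.0, 1.0)

# full=True returns the per-pixel SSIM map alongside the mean score.
_, ssim_map = structural_similarity(reference, degraded, data_range=1.0, full=True)

# Made-up saliency map: a Gaussian blob in the middle of the frame,
# normalised so it behaves like a probability map.
yy, xx = np.mgrid[0:240, 0:320]
saliency = np.exp(-(((yy - 120) ** 2) + ((xx - 160) ** 2)) / (2 * 40.0 ** 2))
saliency /= saliency.sum()

plain_score = float(ssim_map.mean())           # uniform pooling
weighted_score = float((ssim_map * saliency).sum())  # saliency-weighted pooling
print(plain_score, weighted_score)
```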
Wojciech Wegrzynski:I have so many more questions about thermal imaging, but let's close out the report, because I think we're nearing the conclusions. Given those three techniques, what were your concluding recommendations for improving the quality of the assessment of those cameras?
Martin Veit:Right. So what I saw when I looked into the different models and also the datasets was that there's a lot of work in the visible spectrum, but I only saw maybe one study that created its own dataset and then trained a model on that dataset to predict thermal image quality. So there is some work on it, and it can be done even with some of the earlier models, the natural scene statistics ones, but at the moment there's a gap in the research, because we don't actually have a thermal image quality assessment dataset. It could be for firefighting, but also just in general, we don't have a dataset with thermal images where people have scored the quality of each specific image.
Wojciech Wegrzynski:What would constitute a dataset? What's a dataset?
Martin Veit:So imagine you put a bunch of people into a room and you have some images in the thermal spectrum, a lot of different thermal images. Then you degrade them to some degree: let's say we add blur, we add noise, we add some JPEG compression. We ask people to give a specific score to all of these images and in the end we aggregate that, so you have one big dataset: all of the different images, and then also the corresponding scores from the humans. That would constitute the dataset.
Wojciech Wegrzynski:And what metrics make it a good dataset versus a bad dataset?
Martin Veit:I mean, you need to have a certain number of people, but it's also a difficult question, because a lot of the work was in the visible spectrum. You need a certain amount of representative distortions that you expect to see on thermal images in general. You also need, especially if you want to apply this to the fire service and it should be relevant for firefighters, some conditions that are representative of the scenarios that a firefighter will expect. In the visible spectrum, people were sitting in front of monitors, those were calibrated, you had an evenly illuminated environment, but that environment is not necessarily the same if you want a dataset that's also fitting for firefighters and thermal image quality in the fire service. Perhaps you need to use monitors that are representative of the thermal imaging cameras, maybe three, three and a half inches. So there are a lot of different things, and also the distortions that you might expect are different for infrared images, right, so you might have some radiation reflected off whatever surfaces and so on. But in the report we try to give some recommendations on how to create a dataset, what we should think about if it should be relevant for thermal images, and also some of the challenges that you might run into in the end.
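To make the structure of such a dataset concrete, here is a small, hedged sketch (made-up ratings, hypothetical image names) of how individual ratings are typically aggregated into mean opinion scores per image and distortion:

```python
# Aggregating individual ratings into mean opinion scores (MOS).
import pandas as pd

ratings = pd.DataFrame([
    {"image": "frame_01", "distortion": "blur",  "rater": "A", "score": 3},
    {"image": "frame_01", "distortion": "blur",  "rater": "B", "score": 4},
    {"image": "frame_01", "distortion": "blur",  "rater": "C", "score": 3},
    {"image": "frame_01", "distortion": "noise", "rater": "A", "score": 2},
    {"image": "frame_01", "distortion": "noise", "rater": "B", "score": 2},
    {"image": "frame_01", "distortion": "noise", "rater": "C", "score": 1},
])

mos = (ratings.groupby(["image", "distortion"])["score"]
              .agg(MOS="mean", std="std", n="count")
              .reset_index())
print(mos)
```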
Wojciech Wegrzynski:A follow-up question to that: okay, you're talking here a lot about image quality in relation to firefighting, which is absolutely understandable if we're talking NFPA 1801 thermal cameras, but that's not the entire use of thermal cameras in fire safety, and I would argue there are others, for example detecting a fire. I think that is a very interesting use of thermal cameras which is currently not your mainstream way of detecting fires. I had an episode on detection a few episodes ago, and a very interesting episode on waste fires with Ryan Fogelman, who is the globe's number one refurbisher of FLIR cameras, and the reason for that is that Ryan is using them to detect fires with his devices, the Fire Rover towers, so they are using thermal cameras to detect fires in waste facilities. If you've missed that episode, I recommend going there, because we talk a lot about the practical use of this technology. And I think, you know, having the ability to, in a standardized way, in a controlled manner, define the minimum characteristics of a thermal camera is also one of the barricades on the road towards using them as a reliable detection device.
Wojciech Wegrzynski:And, what's interesting, you said that the resolution of those cameras is 320 by 240. You've classified this as low, and I agree if I want to look at an image. But if I'm trying to cover a field of view that spans a building or something, and I just want to know, is there a fire in the room or is there not a fire in the room, perhaps I just need a simple piece of information: is any pixel heated above my fire temperature threshold? Maybe I can go with a camera with a resolution of, I don't know, 50 by 50, 20 by 20. I don't know how low you can go; I know people who go extremely low with the resolution and yet are capable of getting some sort of information through their system. So do you think those metrics and those testing approaches, perhaps minus the people assessing the data, because that's very much an "I observe the image" type of assessment, but in terms of gathering data, do you think we'll also need to come up with tests like that to check the robustness of future infrared fire detectors?
Martin Veit:And I completely agree. I mean you don't need a full HD image in the infrared spectrum in order to do a lot of the tasks that we use. Thermal imaging cameras for, absolutely, as you say, go lower, also depending on what you need to do with the camera. So the current resolution is more than enough, I believe, to go into burning buildings. I think that the question is not so much should we improve the quality of the cameras for this specific purpose, but more do we have a way that's actually robust, predictable, consistent to test the quality, because you don't want to have a mishap, for example, where you have to fail a camera because of the tests, because of the inherent inconsistencies in the test. You just want to have a robust method.
Martin Veit:I think what this report also opens up, and perhaps the more interesting question, is: what else could we use such a dataset for?
Martin Veit:Because there was a recently published report by NIST that also talks about AI and machine learning in the fire service, and perhaps it's not so much the quality that should be the focus from this report, but maybe the fact that we don't have a dataset of thermal images of firefighting scenarios, because if you had that, you could apply it to a lot of different things. You could apply machine learning directly on the thermal imaging camera, you could try to detect humans, you could improve split-second decisions. If you have a dataset, that will not only open up this image quality assessment, which I believe is valuable in itself for research purposes and these consistency issues, but also open the door to a lot of different other opportunities, applying machine learning to who knows what, right? People are inventive and there's a lot of research on interesting things. But in order to do research, we need good data.
Wojciech Wegrzynski:Let's try to close with some final recommendations, just your final take for the reader of the report and for the NFPA committees: where they should head with this and what actions should be taken to improve the reliability of those testing methods.
Martin Veit:I think the main takeaway from the report, and there are a lot of different recommendations and we look into different models to perform image quality assessment, is that we need good data. We need a dataset on which we can actually train the different models, see if this can be used, see if we can resolve the consistency issues, see if we can improve the current framework for testing image quality. And then I think it will not only open up image quality assessment, but also a lot of other doors and interesting research that could, in the end, improve and help firefighters and the fire service, who put their lives at risk every single day in order to protect us as a redundancy in the fire safety strategy.
Wojciech Wegrzynski:Perfect: better, more robust, useful, open datasets. I cannot ask for anything else in fire science. Martin, thank you very much for coming to the Fire Science Show and discussing the testing methods for thermal cameras, an important piece of technology. I hope it was interesting for many of our firefighting friends, and I'm sure it was interesting for fire engineers as well. Cheers, mate, thank you. Thank you for having me. And that's it, thank you for listening.
Wojciech Wegrzynski:I must say, after the episode I was still a little confused about how thermal cameras actually work. Perhaps it's just me and the way I use cameras. Personally, I'm also a hobbyist astrophotographer, and every time I image something, I assign the colour to some sort of wavelength in the spectrum. That's how colour photography works, that's how astrophotography works: you have different elements that emit different peaks of wavelengths, and by measuring where the wavelengths hit, that's your colour. In thermal cameras, it didn't make sense to me how the hell the camera can see the different temperatures, because, you know, the peak wavelength will come from different bodies at different temperatures; it's not an easy value to measure. And then, listening to this episode, I finally understood: it really is about the intensity of radiation. The optical pathway focuses the signal on the chip, whatever it is, and then different pixels of that chip respond to different intensities of radiation. Because the chip's thermal properties are very well known, different radiation intensities will heat up the pixels to different temperatures, and therefore it can respond to the differences in observed temperatures. And because infrared is basically an electromagnetic wave, it gets transported and focused through the lenses almost like normal light. So I finally got it. It really responds to the changes in intensity of radiation, not some specific wavelength. Perhaps that's why the earlier cameras had it very difficult to have high frame rates, because you need to heat up those pixels and they have to be very responsive. So, yeah, it kind of makes sense. I finally got it. You can congratulate me in the emails. It took me so long of fascination with thermal cameras to finally, finally understand how they work. Thank you, Martin, for helping me capture that.
Wojciech Wegrzynski:In this episode we've talked about some very fundamental things about the use of thermal cameras and some things that are highly complicated, the ones that Martin was looking into, the quality of image measurements, and I think they perhaps are not the most useful for everyday engineers, but for those who work in this field, optical imaging, who work with thermal cameras, who work with certification, those things are fundamental. Those things make or break your entire certification scheme around approving thermal cameras for use. If you want to have detection, those things make or break your capability to detect and filter out false alarms. So indeed, I think there will be a lot of people who will be using the contents of this episode in their everyday work. I actually got some ideas for my own experiments with visibility in smoke after what Martin said. I'm changing the schedule and scope of my experiments a little bit to use some of the clever ideas he put forward in this episode, so I am very happy about that.
Wojciech Wegrzynski:I hope you also found something nice for yourself in this podcast episode, and I'm simply happy that I was able to give the Fire Science Show to a PhD student, to someone who just finished their Fire Protection Research Foundation grant. It's amazing that those foundations exist and fund research like that, and I really want to highlight that there are many ways people can do research in science in the world of fire protection. So that's great, and I am just happy to talk about fire science with you. And, well, next week I will be doing the same thing, so see you here again next Wednesday. Cheers, bye.