
Fire Science Show
182 - Bias in fire research
Fire is a highly contextualized problem; therefore, there is no such thing as an unbiased or "objective" fire experiment. Many researchers understand this, but it is very rarely pointed out. While it is not a problem for fire science itself (more of a 'feature'), it may become one when the results of scientific experiments are directly applied to real-world engineering cases.
In this episode, I cover biases in research, from general ones to highly specific fire safety engineering biases. The list is long; we cover:
- selection bias
- confirmation bias
- measurement/instrumentation bias
- publication bias
- observer bias
- sampling/data analysis bias
- conflicts of interest
We also discuss the contextual nature of fire and fire science related to architecture, fuel, ignition, and environmental conditions. We cover experimental design and measurement techniques. While showcasing all those possible sources of uncertainty and error, it is important to highlight that the science is generally very reliable—you just need to know how to use it.
This is the final episode of 2024, so thank you very much for being here with the Fire Science Show, and see you back on 8 January 2025! Merry Christmas and a Happy New Year to all of you!
Thank you to the SFPE for recognizing me with the 2025 SFPE Fire Safety Engineering Award! Huge thanks to YOU for being a part of this, and big thanks to the OFR for supporting me over the years.
----
The Fire Science Show is produced by the Fire Science Media in collaboration with OFR Consultants. Thank you to the podcast sponsor for their continuous support towards our mission.
Hello everybody, welcome to the Fire Science Show. Merry Christmas and happy New Year. It's the last episode of this season of the Fire Science Show, so it's a good time to wish you all the best for 2025. I'm going on a hopefully rightfully earned holiday break, so in today's episode we're going to summarize this year a little bit, but that's going to happen at the end of the episode; first I'm going to give you some technical content that you came here for in the first place. In today's episode I am going to talk about research bias and the challenges in carrying out high-level research in fire science overall.
Wojciech: My mission in the Fire Science Show is to bridge engineers and scientists, and I believe the podcast is actually quite successful at bridging those communities together. I see more engineers using science-based methods, and I truly believe that science-backed engineering is the only engineering that makes sense: we need to be deeply embedded within scientific principles, experiments, confirmed theories and so on if we want to do good engineering. But I also understand, as a researcher and someone who practices engineering, that research is very rarely straightforward. Research is very rarely easy. It's not something that gives you immediate answers. There are things you have to understand if you want to benefit the most from the findings in the scientific literature, and many people would not consider them, but I think they are very important. Those biases, the things that influence how research is done, what they mean for the research itself and for its applicability — all of this will significantly influence the usefulness of the studies that you use in your engineering, and that's what this podcast episode is all about. I'm going to talk about some general research biases that are true for any discipline, and then about fire safety engineering and fire science biases that are highly specific to our discipline. So yeah, it's a good one, and it's the last one this year, so you don't want to miss it. Let's spin the intro and jump into the episode.
Wojciech: Welcome to the Fire Science Show. My name is Wojciech Węgrzyński and I will be your host. This podcast is brought to you in collaboration with OFR Consultants. OFR is the UK's leading fire risk consultancy. Its globally established team has developed a reputation for preeminent fire engineering expertise, with colleagues working across the world to help protect people, property and the environment. Established in the UK in 2016 as a start-up business of two highly experienced fire engineering consultants, the business has grown phenomenally in just seven years, with offices across the country in seven locations, from Edinburgh to Bath, and now employing more than a hundred professionals. Colleagues are on a mission to continually explore the challenges that fire creates for clients and society, applying the best research, experience and diligence for effective, tailored fire safety solutions. In 2024, OFR will grow its team once more and is always keen to hear from industry professionals who would like to collaborate on fire safety futures. Get in touch at ofrconsultants.com. Okay, it seems that in 2024 there are going to be very limited opportunities to join OFR, if you would like to choose that as your career path. But worry not: in 2025, OFR is also going to recruit for a ton of positions, so keep your eyes open. Thanks, OFR, for supporting the Fire Science Show throughout 2024, and I am very happy that this will continue in 2025.
Wojciech: Now for today's episode on research biases, and where the inspiration came from. A year ago, I published a podcast episode here about design fires for car parks, where we talked about how we need to change the paradigm of the design fire for car parks. When I was doing that podcast episode, we were finishing a research paper about our literature review of experiments on car park fires. The paper was eventually written, we submitted it to Fire Technology, and some time ago — like two weeks ago — we received an acceptance of that paper with some minor corrections, which were implemented, and I hope the paper will soon be published.
Wojciech: I've realized that there are a lot of resources out there in the scientific literature that look like something you could directly use in your engineering, but when you go deep into them it becomes very, very challenging — it's very difficult to actually use them in your everyday practice. Where do those challenges come from? Why would research not be directly useful? If you're looking for something very simple, like, let's say, the heat of combustion of a specific material, we have very good, very well established methods to estimate the heat of combustion. You can choose your sample very clearly and very obviously — say, okay, I am looking for this type of plastic material, or this type of timber at this density — put it into a bomb calorimeter and get your heat of combustion. That's not something that would carry a lot of bias. But when you think about applying that knowledge in practice, what does the heat of combustion number mean for a real-world problem? It's the potential heat of combustion; it's the maximum that can be released. The fire cannot go beyond that number. But will you reach that number? That's another question.
Wojciech: Now you get into combustion efficiencies and chemical reactions, vitiated or well-ventilated conditions, the temperatures; materials can burn in different ways. So even with such a simple thing as heat of combustion — even though you get some unbiased number as the result of an experiment, of a test — it's not a guarantee that you nail it 100% in the real world. Now if you think about more complicated stuff — like, you wonder, how does a train burn in a tunnel? That's quite a challenge. There have been people who've burned trains, and yes, we do have some heat release rate curves from that. We have some emissions data from that. We know what the radiation is. You can approximate some sort of fire spread in the train. Yes, we have data points like that — very few, but we have them. But how generalizable are those numbers to the real world? Can you say that all trains in the world will burn in the same way? Of course not. Every fire is going to be different, because there are so many drivers of a fire that there are countless ways a thing can burn. And yet we have this little tiny nugget of information — the only information you have to allow you to choose something to design with. When you work with such a scarce data set, on a very complicated problem, in the highly specific context of a building, this is when an engineer really has to understand the research biases and understand how the research was performed, to adapt and apply the outcome of the research in their engineering, and, when processing the outputs, to fully understand the complexities and the uncertainties in the results.
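The gap between the calorimetric potential and what a real fire actually releases can be sketched in a few lines. This is a minimal illustration of the common relation Q = χ · m · ΔHc, where χ is the combustion efficiency; the specific numbers (fuel mass, heat of combustion, efficiency values) are hypothetical, chosen only to show the effect.

```python
# Effective heat released vs. the bomb-calorimeter potential.
# Q = chi * m * delta_Hc, where chi is the combustion efficiency:
# close to 1.0 in a bomb calorimeter, often considerably lower in a
# real, possibly vitiated fire. Values below are illustrative only.

def heat_released(mass_kg: float, delta_hc_mj_per_kg: float, chi: float) -> float:
    """Total heat released [MJ] for a given combustion efficiency chi."""
    return chi * mass_kg * delta_hc_mj_per_kg

fuel_mass = 10.0   # kg of a plastic (hypothetical sample)
delta_hc = 40.0    # MJ/kg, bomb-calorimeter value (typical order for plastics)

potential = heat_released(fuel_mass, delta_hc, chi=1.0)  # the upper bound
realistic = heat_released(fuel_mass, delta_hc, chi=0.7)  # an assumed real fire

print(potential, realistic)  # the real fire releases only a fraction
```

The potential (400 MJ here) is a hard ceiling, but the same fuel under-ventilated or burning incompletely releases substantially less, which is exactly why the calorimeter number alone does not transfer to the building.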
Wojciech: In the paper I've mentioned, we were looking into design fires for car parks, obviously, and looking into the literature, we noticed a lot of electric vehicle experiments in the last 10 years. A lot of them. We really have a lot of data on electric vehicles and fires. Now, what's interesting is that we also noticed that most of those tests were ignited from underneath the battery. The fire would attack the battery, and you have almost no results from fires in passenger compartments, nothing on engine fires, nothing on electrical cable fires in the car — just batteries.
Wojciech: Now, if you think about it: if I get a vehicle to burn which costs $100,000 or whatever, and I have this one shot, one opportunity to really get it done — to get my data point and get it published — I want to have a fire that's meaningful, right? And if you think about electric vehicle fires, there's the battery fire, right? That's what we're interested in, not just, you know, internal compartment fires. But in the end, because all of the researchers are doing it like that, we have no clue how fire spreads from the passenger compartment into the battery. We have no clue, no idea, how much the battery is attacked when the fire is in the vehicle and what it takes to ignite the battery. And if you talk to the researchers, you know, it takes a lot to ignite the battery. We've done our own research on electric vehicles and, yes, it takes a lot to ignite the battery. So the viewpoint in the literature is kind of skewed towards very specific publishable results, publishable science, which is not really the true, objective image of the reality out there. And these are exactly the biases that I want to show you in this episode.
Wojciech: Another one: ventilation in tunnels. Ah, that's a good one. Whenever you see a study on smoke control in tunnels, magically there's no flow in the tunnel — they always happen in quiescent conditions. Especially if we're talking about CFD or about modelling, small-scale modelling, you always have this perfect axisymmetric buoyant smoke plume: vertical, touching the ceiling, splitting in half, smoke going in both directions. And this is how they are studied. But in a real tunnel? I've done hundreds of experiments in real tunnels when we've commissioned them. Maybe two or three times in my life have I seen quiescent conditions in a tunnel. They just don't happen. A tunnel is a place where you have immense flow — always, always there's a flow — and especially if you consider a tunnel under traffic, the traffic will create flow whether you like it or not. So actually, the probability that you will start your fire in quiescent conditions, zero meters per second, in my opinion, is zero. The science is to some extent irrelevant, because the bias in the initial conditions prevents you from comparing it to real-world conditions. So I'm really triggered by this — and yes, I do my CFD for projects like this, because that's the expectation of the market, but I understand that it's not the real image of the world. I say that especially about model studies, because if you do full-scale tunnel research, you have initial velocities because you cannot escape them. That's the real world.
Wojciech: Anyway, in this episode I wanted to highlight some general biases — true for any science, true for any experiment — and I'll start with those, and then I'll try to nail down why we have some fire-specific biases: what's so special about fire safety, fire science and fire safety engineering that introduces specific things to our discipline that perhaps are not present in other disciplines, and why you should pay attention to them. For the general research biases, I have a list of about 10 of them. Let's go through the first five. On my list are selection bias, confirmation bias, measurement bias, instrumental bias and publication bias. So, first, selection bias. This means there's a bias in the way the sample we chose for our research — or, in general, the experiment that we carry out — is not representative of the broader population of something. Imagine you're doing studies on batteries, you're burning one specific type of battery chemistry, and you claim that all lithium-ion batteries burn the same, when we know that NMC will burn differently from iron phosphate batteries. The reason for this bias is sometimes the availability of the material: sometimes it's very tough to get the material that you would really want to work with or that would be really representative. It can be the broadness of materials in the world: there could be hundreds of types of a specific material, and it's very, very difficult to choose a representative sample. And you can just have your own preferences — well, that's a different bias. But in general, if you select a specific material and it's not representative of generalized real-world conditions, it's a selection bias, and you have to look into the papers to see how the materials were chosen.
Wojciech: Another one is called confirmation bias. That's an interesting one, and it comes down to the way we pursue science as scientists. We're humans; we love what we are doing, but this is our life, this is our job, and as with every job, you have to do it well to be a scientist. And because we're scientists, we're also quite attached to the ideas we come up with in science.
Wojciech: So sometimes you have some sort of hypothesis or belief going into the science, going into the experiment. Confirmation bias means that you look at the experiment, you look at the data that you have, through the hypothesis that you formulated beforehand. It's easier to confirm a pre-existing belief than to find something new, and that can lead to overlooking critical findings. This is quite interesting, because many times we look at experiments done by someone else and we start seeing things that those people have not seen, that those people have missed because of this confirmation bias. Scientists tend to be very favorable towards their own ideas, and sometimes you find that reflected in the experiments. It's always best to look at raw data. It's always best to try and come up with your own conclusions from the data that you see. Of course it's challenging because of the context of the study, so you need to understand that as well. But yeah, confirmation bias is a real thing.
Wojciech: The next two are measurement bias and instrumental bias. I would say those are similar, and they come from how technology is used to measure. Measurement bias would be some sort of systematic error in the measurement tools — let's say the uncertainty of your measurements — and instrumental bias would be what kind of technology was available to you when you pursued an experiment. Both are very important in fire science as well, because we have a very limited toolset to measure fires, and I'll come back to those when we get to the fire-specific part of the episode.
Wojciech: The next one I have is publication bias. Publication bias is something that we observe in the scientific literature as a whole, and this problem is actually quite deep and very, very challenging. The thing is that in modern science, as I said, we're scientists; we have to live our lives, earn our living, and we do that by publishing, right? As a scientist, there's this expectation that you will publish papers, and that shapes what you write and how you write your papers, because creating a paper is a lot of work — it's quite challenging. You want to create a paper that's going to get published, right? Realistically, you don't want to write a paper that will never be published, and it is actually very difficult to publish experiments that failed, for example. So publication bias means that mostly positive or successful experimental results get published in the literature; very few inconclusive results are published, and very, very few negative results — especially if you had a hypothesis, did an experiment, and it gave you no conclusive answer. That's very challenging to publish. And I would add to that the reproducibility crisis. In medicine, when someone publishes a study in a peer-reviewed journal, that's great, but the study is not really used by the community until someone repeats that experiment and publishes a confirmation study: yes, we did the same, and it actually worked the same. In our science we don't have that. We have almost no reproduced research; there are very, very few attempts at repeating other people's experiments just to check whether they hold or not. I would call that a crisis, because this is something very fundamental and very much needed for modern science.
But more than that, not having negative outcomes, not having null outcomes in science, is something extremely challenging.
Wojciech: And you know, I love the story of how the cosmic background radiation was discovered. I'm not sure if you know that story, but there were some guys who were trying to build a really precise radio telescope, and they wanted to reduce any kind of noise around that telescope, you know, to have the cleanest, sharpest radio images of the universe. They systematically cleaned everything; there was no source of interference with their telescope. And they were taking pictures of the sky and getting this noisy, noisy data. It was super annoying to them, so you could call that a failed experiment, a negative result. Right? They wanted to have a perfect telescope, but they had a lot of noise in their measurements. And then someone pointed them to some other scientists, and they found out that the noise is actually the cosmic background radiation, the remnant of the Big Bang, and they went and got the Nobel Prize for that.
Wojciech: So a negative result turned into a Nobel Prize. You never know where your science can go, and because of the biases related to your hypothesis — the confirmation bias, for example — you cannot really figure that out in advance. So even if your result looks negative at first, it could actually be a positive result for someone else. I would highly urge everyone to publish everything you get, even if you cannot publish it in peer-reviewed journals. I would highly appreciate it if peer-reviewed journals published this type of paper, and I'm confident that if you write a good, convincing editorial letter, it would be considered for publication. But even if not, then perhaps using open access to publish those results could be good, and this is something I truly believe we need more of in fire science. Wow, this was a longer one, but it's a big problem in science that people do not really get, and it's something that really slows us down.
Wojciech: Another one is observer bias, which is related to the confirmation bias discussed previously. It means that the observer can unconsciously influence or interpret the results in a way that aligns with their expectations. It's basically the systematic error in how the researcher interprets visual data, their observations — and yes, in fire we have a lot of observations. Even a simple thing like "did flashover occur in this experiment?" — I've been in those discussions. So that's observer bias: someone introducing errors into the study based on their experience.
Wojciech: The next would be things related to data analysis: sampling bias, data analysis bias. This means that the way you process data can give you different outcomes. There were great papers from the University of Edinburgh on how to clean up noisy data. The data in fire is always noisy, because fires are turbulent by nature. Whatever you measure, it's probably going to be a noisy measurement, and you want those nice clean lines to use in your engineering. So you have to clean that data, and the way you sample it — the choice of methodology used to clean up the data — will highly influence what outcomes you get. Another one is environmental bias. We're going to come back to that in the fire part as well; this is how the environment affects the experiment, and it is critical for fire science, so I'll allow myself to talk about it later.
Wojciech: One more is repetition bias: people focus on specific results of single experiments, ignoring inconsistent or divergent results. This is very interesting, and it goes back to the study by David Morrisset, in which he was burning chairs. He burned 25 chairs; they roughly aligned, but he was interested in why they diverged, and while doing that he found some really, really interesting stuff. There's a podcast episode about that. Really looking into why there is scatter in your results — where it comes from and what it represents — is critical for science, and sometimes scientists just skip over that and don't analyze it; they flatten it out, average the results, and just publish them. It is very important to understand that this can happen.
Wojciech: And the final one on my list is conflict of interest, or funding bias, and yeah, it's a real thing. If you are funded by someone, there is unfortunately some possibility that the research has been influenced by those people. Usually it's not, actually. Having worked with a lot of funders — a lot of companies, private companies, private entities — I've never been in a situation where they would try to silence me or use the power of their money to exert some influence over my work. But I'm not sure how generalizable that is; maybe it's just our case. Science knows this problem of funding bias. Especially in medicine there have been a lot of conflicts of interest; the tobacco industry was a massive one, and the oil industry and climate change — silencing research, pushing different narratives. Humanity knows those stories. So it's also something to look for when you are dealing with scientific research.
Wojciech: Funding sources and sponsors should always be named in the papers, so you should know about those potential biases. One thing we do to avoid this is that we try to go with governmental funding, and whenever we have materials to be used in experiments, we purchase them. We very rarely take donations of materials for experiments; I prefer to purchase them, to have a real market sample and be kind of independent from the sponsors of the study. But I know not everyone has this luxury of being financially independent in their research. So yeah, look for that bias. Those were 10 biases that are general to research, general to science: every single branch of science, every single experiment could be burdened with those biases, and I've tried to hint at how they relate to fire science. I leave it to you to figure out how relatable they are. And there are some biases that are highly specific to fire science, and we'll go into those.
Wojciech: But first let's set the stage. What would an objective experiment in fire science look like? Is it even possible to have an objective study in fire science? If you think about it deeply, a fire problem is an extremely contextualized problem. In general, fire behavior and fires are extremely contextualized. This context comes from the architecture or the environment in which the fire is occurring. The place, the physical constraints around your fire, the airflow introduced by the geometry in which you're studying the fire, will strongly affect the way the fire behaves. If you have a fire in an open space, in quiescent conditions — imagine an aircraft hangar with stable air, nothing interfering with your fire — and you burn a pile of material in there, you get this beautiful axisymmetric buoyant smoke plume. Yeah, that is something that's perhaps unaffected by architecture, but is it representative of the real world? In every single building, that fire would be affected by the architecture in some way. You could have wall interactions, you could have interactions with the ceilings, you will have heat feedbacks, you will have flow feedbacks, you will have changes in oxygen concentration, and so on and so forth. The architecture will always influence our fires, and it's not a problem — it's a feature, a characteristic of the space of engineering that we're in. Fires are extremely contextualized, and it's difficult to present fire in an objective context. Every experiment has its own context, and the difference between that context and the context of your problem must be very well understood by the fire engineer.
Wojciech: Another thing that goes into this is the fuel and the ignition location. Again going back to Morrisset's work, the way you ignite will highly influence how the fire spreads over your item. The location of the ignition, the energy of the ignition, the material properties — all of this will introduce some sort of bias into the study and will set the fire on a very specific course. If you ignite it in a different way, you'll get a different outcome. Hell, if you ignite it in the same place, in the same way, you may still get a different outcome, of course — but if you ignite it in a different way, the outcomes will be different. Now let's think: is there an objective way to ignite an item in research? Imagine I have a car to burn down. Is there an objective choice of the location where I put my source of ignition in that car, so that I can say, yes, cars burn like this? Of course not. You can't do it with one sample. I would probably have to burn hundreds of them to have some kind of distribution of the causes of fires and then understand how different ignition locations behave. But I don't have $100 million to pursue such a study. No one has.
Wojciech: This is why the engineers and researchers who carry out those tests in research laboratories have to pick something, and what they pick will highly influence the outcomes of their studies. You have to pay attention to that. That's the case with electric vehicle fires: you will only find experiments in which the battery is directly attacked, whereas in the real world this might not be the main representative scenario. Even if the fire starts in the battery, it's still a different cause of fire than when you put a two-megawatt heptane fire underneath your battery, right? So the fuel and ignition will play an immense role in fire experiments.
Wojciech: The last one: the conditions in which the burning is happening. You have a whole spectrum, from extremely well-controlled fire laboratory conditions — where you control temperature, humidity, airflow, everything, and have the perfect setting — up to real-world studies where we burn buildings outdoors in whatever weather we can find. It even matters whether it rained the day before. The humidity of the air will be a factor in the experiment. Wind will be a massive factor. I used to think that if you have wind of less than, let's say, two meters per second, there's no direct impact, but that's not true. Even a very low wind velocity, even one meter per second, can already change the flow paths in your experiment and, because of that, indirectly influence the experimental outcomes. We've seen that happen in full-scale experiments, and it's something that you need to understand when you look through a study.
Wojciech: The thing is, I don't want to suggest that it's impossible to carry out good studies, because it is possible. It's about reporting those things, reporting the context in which the study happened. The more of that context is reported in the research paper, the easier it is for an engineer, or anyone who wants to use the study, to compare the context in which the study was pursued with the context of the real-world problem they're trying to solve with that data. It is critical that we have this connection between how the experiment was carried out and where you want to use it. It's not easy to transfer that knowledge, and if you're not a scientist, you probably should not try to fiddle with the data. You cannot say, oh yeah, let me add one megawatt because of this and that — that's going to be very difficult to justify to your authorities. But understanding the context, being able to say, okay, this represents a very unlikely fire of a battery, which is perhaps the largest fire I can have from this source, then comparing this with an interior car fire from the closest combustion-engine-vehicle example and saying, okay, this is what a course of fire inside the passenger compartment looks like, and providing another research point — that's an approach to pursue. Also, going back to the car park problem and the paper we wrote, it also matters how the experiments were carried out. Was it open-air calorimetry? Was it something that mimicked a car park? There are very few studies where the vehicles were burned down in something that resembles a car park, and yet the feedbacks will influence the course of the fire. So now let's go deeper into fire experiments and the things that will influence the outcomes.
Wojciech: Experimental design itself is very challenging in fire experiments. Laying out the measurement system in your fire experiment is actually quite tough, to be honest, for multiple reasons. First, the measurement systems are usually quite expensive, and they're usually rather scarce, so you have to be very picky about where to put your thermocouples and other instruments in your compartment fire. There were some crazy attempts where people used thousands of thermocouples to map everything inside compartments, but then again, it took them years to process that data, and it was very challenging to actually work with that amount of data. So it brings new challenges.
Wojciech: Using hundreds or thousands of data points is not a simple solution. Sometimes we even say that you have to burn an experiment once to see where to put your measurement systems, and then you burn it a second time, hoping that it's going to go the same way, because now you know where to put your thermocouples. Countless times I've had this problem where I misplaced a thermocouple, but by maybe 20 centimeters. We had that with one of the green wall experiments, where I had a beautiful line of plate thermometers through the geometric middle of my sample, from the bottom of the facade to the top. It was like five continuous meters of measurements, really beautifully laid out in the perfect location — or so it seemed. And when we had ignition, the fire burned up like 10 centimeters to the left of this line of thermocouples, and I don't have a good outcome from those measurements, because the fire did not go over the thermocouples where I thought it would. So that's a measurement bias — or instrumentation bias — that I accidentally introduced into my study. I report that, and I am very open about it, but stuff like this happens all the time. We had experiments in which the plume going out from a compartment would not be vertical; it would be skewed, and in the same way it would not hit the thermocouples.
Wojciech: In general, locating the thermocouples is quite challenging for experiments. In fact, the most repeatable one that we have is the Polish test method for facades, where we introduce wind. It's counterintuitive, because if you introduce wind the fire is very dynamic, it moves around the facade, but because it's a controlled wind and it's always the same, the movement is very repeatable between experiments. So in most of them you get a similar scatter of temperatures. There are also some biases in reporting and investigating the outcomes of fires. I've mentioned the confirmation bias. I've mentioned the publication bias. Those are very real. The fire science community is not a very large one. We have to work, we have to publish, and there are just a few fire journals out there, so we fight for the spots in those journals. So yes, it's possible that there are biases in research related to what counts as publishable science and how you need to write about your experiment to get it published. But I've talked about that previously, so let's move on.
Wojciech: There are some challenges with the measurement techniques that we use. Oxygen calorimetry is very difficult. You need to understand there's a lag involved in calorimetry, because the air has to reach the oxygen measuring point. So that needs some very sensitive calibration and requires you to understand the calorimeter and the flow path very well to really get a good measurement. You also rely heavily on capturing all the gases. If you don't get all the combustion gases through your oxygen calorimeter, you don't know what the burning rate of your fuel was. You also need to account for combustion efficiency to some extent. So there are correction factors that you include in the analysis, which are well known, but they still need to be applied. It's not a simple, easy measurement. You also have to capture the average flow, temperature and mass flow rate very well to apply oxygen calorimetry, and this means you have to measure at a specific point, you have to allow the flow to stabilize before you measure the averages, and so on. So the way the calorimeter is constructed will also influence the ability to measure O2 and the flow correctly. There are a ton of sources of uncertainty that you have to control for; it's not a simple measurement by any means.
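To make the oxygen-consumption principle concrete, here is a minimal sketch, assuming complete gas capture and Huggett's generic constant of roughly 13.1 MJ per kilogram of oxygen consumed. The function name and the one-term formula are my own illustration; a real calorimeter calculation also corrects for CO/CO2 production, water vapour and flow expansion.

```python
# Sketch of oxygen-consumption calorimetry (Huggett's principle).
# Illustrative only: assumes all combustion gases are captured and
# neglects CO/CO2 and expansion corrections used in real standards.

E_O2 = 13.1e3  # kJ released per kg of O2 consumed (generic, ~+/-5%)

def heat_release_rate(m_dot_exhaust, x_o2_ambient, x_o2_measured):
    """Simplified HRR estimate [kW] from O2 depletion in the exhaust duct.

    m_dot_exhaust -- exhaust mass flow rate [kg/s]
    x_o2_ambient  -- ambient O2 mass fraction (~0.232)
    x_o2_measured -- measured O2 mass fraction in the exhaust
    """
    m_dot_o2_consumed = m_dot_exhaust * (x_o2_ambient - x_o2_measured)
    return E_O2 * m_dot_o2_consumed

# Example: 2 kg/s exhaust flow, O2 mass fraction drops from 0.232 to 0.21
q = heat_release_rate(2.0, 0.232, 0.21)  # roughly 576 kW
```

The sketch shows why gas capture matters so much: any oxygen-depleted gas that bypasses the duct simply vanishes from the O2 deficit, and the computed heat release rate drops with it.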
Wojciech: Temperatures are difficult too. I actually had an episode of the podcast on this, "Why is temperature so easy to measure but so hard to interpret?", where I went on a very long rant about how hard it is to measure temperatures. It's one of the early episodes of the podcast. It could be quite fun to revisit that one. That was a long time ago, but I think it's pretty good; people enjoyed it, so I can refer you to that episode.
Wojciech: In general, the way we measure temperatures in fires is difficult, especially if we talk about fully developed fires or flames. You're talking about temperatures of around 1000 degrees Celsius, which means that radiation will dominate the heat transfer modes, and not all thermocouples will respond to that equally. Thermocouples also have their own bulk, so a very thin thermocouple will respond to a heat flux differently than a bulky one, and that also influences your outcomes. If you're looking at more transient phenomena, you have to pick your thermocouples correctly. Also, when you embed thermocouples in your samples, it matters whether you drilled a hole and put your thermocouple through that hole, or perhaps placed the thermocouple inside your material and then poured the concrete or put your plank of timber over it. It matters whether it's perpendicular or parallel to the isotherms. It matters if you glue it, and how much glue you've used. So there are a lot of biases in that.
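The radiation bias of a bare-bead thermocouple can be sketched with a steady-state energy balance on the bead: convection from the gas balances the net radiation exchanged with cooler surroundings. This is a simplified illustration under assumed numbers; real corrections also depend on bead geometry, velocity and conduction along the wires, and the convective coefficient grows as the bead shrinks, which is why thin thermocouples read closer to the gas temperature.

```python
# Illustrative radiation correction for a bare thermocouple bead.
# Steady state: h * (T_gas - T_tc) = eps * sigma * (T_tc^4 - T_surr^4)

SIGMA = 5.67e-8  # Stefan-Boltzmann constant [W/(m^2 K^4)]

def radiation_corrected_gas_temp(t_tc, t_surr, h, emissivity=0.9):
    """Estimate the true gas temperature [K] from a thermocouple reading.

    t_tc   -- thermocouple reading [K]
    t_surr -- effective surrounding (wall) temperature [K]
    h      -- convective coefficient [W/(m^2 K)]; larger for thin beads
    """
    q_rad = emissivity * SIGMA * (t_tc**4 - t_surr**4)  # net radiative loss
    return t_tc + q_rad / h

# A bead reading 1000 K near 600 K walls with h = 100 W/(m^2 K)
# actually sits in gas several hundred kelvin hotter than it reports.
t_gas = radiation_corrected_gas_temp(1000.0, 600.0, 100.0)
```

Doubling h (a thinner bead) halves the correction term, which is the quantitative version of "a very thin thermocouple responds differently than a bulky one".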
Wojciech: When you measure temperatures with infrared cameras, the emissivity of the surface will also matter, and you usually don't know it very well; it also changes in time. For example, if you're measuring the temperature of steel coated with chromium, the chromium will change color at some point and change the emissivity tremendously, and you have to understand that this happens, because otherwise your measurement will show a quick peak in temperature that you will not be able to interpret. The same goes for materials that have water in them: you're going to see a flattening of the temperature at 100 degrees. It doesn't mean that heat transfer has ceased in that sample; it means that water is currently evaporating, and until it evaporates you're going to see 100 degrees on your thermocouple. And you still have to be able to understand and interpret that. So there are a lot of challenges in temperature measurement in fires. An entire episode of the podcast is about that, so I would refer you to that episode.
Wojciech: I've listed a lot of problems, but that does not mean, I believe, that the science is done wrong or the experiments are done incorrectly. They're done right, and most of the people I know who do fire science make a lot of careful decisions in planning to get the most out of their experiments. There will be limitations and biases in those, but they are usually quite clearly stated in the scope of the study. What I mean in this episode is not to tell you that science is unreliable or that you cannot trust it. You absolutely can trust those experiments. What I mean is that you should make the effort to look at how the science was done before you apply it directly in your calculations or your design. If you take a random paper on EV fires, just take the heat release rate curve from an experiment, drop it into your car park and say this is a real car burning in my car park, that is not a true statement. You've applied a study done in a different context to a new context without even trying to comprehend what the differences would be. Don't do that. But if you read the study, if you read multiple studies, if you understand how they were pursued, and you are sure that this is the data point that will work for you in your context, or perhaps you don't have any better data point, this is the only thing you can work with, and it comes with these limitations. If you understand that and can clearly demonstrate that those limitations were accounted for, you're in a good position. So please read the studies, not just the results. Read the objectives, read the scope, read the materials section and how the materials were chosen, read the methods section and how things were measured. Read the conclusions very carefully, because people will tell you how they interpret the data. From those exercises you will be able to get a lot of success in applying real-world fire science in your engineering. So that would be it about the biases.
Actually, I could go a little longer on them, but I'll stop the rant and let you enjoy your Christmas and just say a few words about the podcast in 2024. So thank you all for being here with me in the year 2024.
Wojciech: It was an exciting and interesting year for the show. The show has matured. It's my real job now, and I am very happy to do it; I probably have the best job in the world. We've incorporated the podcast this year, so the legal structure has changed a bit. That was a big jump and took some work, but it's done. It's not a hobby project anymore; it's a real job, and I'm very happy to do it. With this episode, we've published 50 episodes of the podcast, which means we're at like 0.95 episodes per week. That's because I'm skipping the episode next week, and I think I've missed one over the year; I'm not really sure right now. Anyway, I think the regularity of the podcast was pretty good, and we've nailed it almost every single week this year. So that's my promise to you, that you're going to find your fire science here on Wednesdays, and I try to deliver on that.
Wojciech: The podcast was listened to in 115 countries. That's kind of amazing if you think about it. It reaches almost all ends of the world, some in a better way, some in a little more challenging way. I'm really happy about the listeners from Africa. The country with the most downloads was the United Kingdom, so there's no change here; the UK is still my number one market, and high five to all the listeners from the UK. But it's 115 countries, and many of them are chasing the UK and are probably higher per capita. We had this funny conversation in New Zealand that perhaps New Zealand is the highest per capita in terms of listenership. I also had one listener in the Vatican. I really hope that's the Pope. I'll never be able to confirm that, but with my biases, I choose to interpret it that way.
Wojciech: This year we had something like 57,000 downloads of individual episodes overall. That's quite a lot, to be honest. The most downloaded episode was the new guideline for PV fire safety with Grunde, episode 155. Grunde made a lot of noise on social media, as you'd expect, and it was just a good episode. The next ones from this year were: 143, Fire Fundamentals part 7, CFD simulations; 134, Fire Fundamentals part 5, the evacuation equation with David Purser, ah, that was a good one; and 137, e-mobility fires with Adam Barowy. I think we started this year with that one, and perhaps next year we'll also start with experiments on electric vehicles from UL; we'll see. The fifth one on the list: 154, Fire Fundamentals part 8, compartment fires. You guys seem to like the Fire Fundamentals a lot.
Wojciech: I need to do more episodes like that, because they seem to be highly regarded. Overall, the podcast is somewhere in like the top 10% of podcasts worldwide, which for my ego is very reassuring. We've even reached the top spot in physics in some countries for some episodes, so that's a nice thing. But I'm not doing this for rankings. I'm not doing this for money. I'm doing this to really have an impact on how fire science is shared and how fire safety engineering is done worldwide, and I hope this impact has been achieved in the year 2024 and hopefully will be achieved in the year 2025.
Wojciech: So, finishing this year with this podcast episode, I would like to once again thank you for being here with me. I am looking forward to coming back with a new episode on January 8th. January 8th is a Wednesday, yep, so on January 8th you shall have your next Fire Science Show episode. The next Wednesday is Christmas, the Wednesday after that is New Year's, so you'll have plenty of opportunities to celebrate and have some fun with your families. Have some rest, recharge for the next year, come back here and let's do more fire science in 2025. Thank you for being here with me, and see you next year. Bye, thank you.