AASHTO re:source Q & A Podcast

September PSP Insights

October 11, 2022 | Season 3, Episode 23
Guests: John Malusky, Proficiency Sample Program Manager at AASHTO re:source; Pete Holter, Senior Quality Analyst at AASHTO re:source; Joe Williams, Senior Quality Analyst at AASHTO re:source

We take a look at the results of the new Aggregate Gradation & Gravity samples, as well as what's in the works for the Proficiency Sample Program.

Related information: 


AASHTO re:source Q&A Podcast Transcript

Season 3, Episode 23: September PSP Insights

Recorded: September 22, 2022

Released: October 11, 2022

Host(s): Brian Johnson, AASHTO Accreditation Program Manager and Kim Swanson, Communications Manager, AASHTO re:source

Guest(s): John Malusky, Proficiency Sample Program Manager at AASHTO re:source; Pete Holter, Senior Quality Analyst at AASHTO re:source; Joe Williams, Senior Quality Analyst at AASHTO re:source

Note: Please reference AASHTO re:source and AASHTO Accreditation Program policies and procedures online for official guidance on this, and other topics. 

Transcribed by Kim Swanson and MS Teams. 

[Theme music fades in.]

[00:00:02] Announcer: Welcome to AASHTO re:source Q&A. We're taking time to discuss construction materials testing and inspection with people in the know. From exploring testing problems and solutions to laboratory best practices and quality management, we're covering topics important to you. 

[00:00:20] Brian: Welcome to AASHTO re:source Q&A. I'm Brian Johnson.

[00:00:23] Kim: And I'm Kim Swanson and we are joined by three other people. Who else is joining us today, Brian?

[00:00:30] Brian: Well, today we're going to be talking about the latest Aggregate Gradation and Gravity proficiency sample, the results of which just came out recently. So, I've invited John Malusky, the Manager of the Proficiency Sample Program, to talk with us about it. John, welcome.

[00:00:50] John: Many thanks, Brian.

[00:00:52] Brian: And then from the accreditation program side, I brought Joe Williams, who is kind of our intermediary between the Accreditation Program and the Proficiency Sample Program. He manages our rules for participation and handles a lot of the monitoring activities that go on when a proficiency sample report is issued. Joe, welcome to the podcast today.

[00:01:15] Joe: Thanks, Brian. Glad to be here.

[00:01:17] Brian: And last but certainly not least is our resident expert on all things, and that is Pete Holter. Welcome, Pete.

[00:01:25] Pete: Hey, thanks. You keep bringing me back. I don't know why but thank you.

[00:01:28] Brian: You're a fan favorite, for sure, Pete. So, I'm glad you could make some time to be here with us today. Let's get into this. So, this is the Aggregate Gradation and Gravity sample 1 and 2, we don't get to see too many of these one and two samples these days. John, what do you want people to know about this sample and why is it a new sample?

[00:01:47] John: Well, this one's three years in the making, and I'm super happy that in the next quarter I can finally close out a change management item. This one's been hanging around forever, but I'm finally glad that we got it switched. You know, the premise of the whole change, and of starting a new program, was to try to streamline the process for both laboratories and the proficiency sample crew when it comes to packaging and processing. For years we had the program split up as a Coarse Aggregate and a Fine Aggregate sample, but after so long it didn't seem to make a lot of sense to do that. So, we decided to break it up as degradation versus gradation and gravity. So that's kind of the premise and the background of the change. It seemed like it made a lot of sense for us when we did the background investigative work on it. It saved some laboratories from purchasing an extra sample that they really didn't need. So, we felt like it was a pretty good benefit to the participants.

[00:02:43] Brian: OK, so for a proficiency sample participant, what does this look like that’s different now? So, they're used to getting two rounds to perform the gradations and the specific gravity tests and a few other tests, but what do they get now, and what tests are included?

[00:03:00] John: This round is basically a combination of your full set of sieves for C136 or AASHTO T 27. So, we're looking at material from possibly the one inch all the way down to the minus 200 material, which includes the washing portion. The laboratory will also get a separate baggie of product that's all fine aggregate in moist condition, which will be used for fine aggregate specific gravity, uncompacted void content, and sand equivalent. We sent out a second bag that was specifically for the gradation sample, which I kind of touched on a little bit. All of the plus 4 material in that bag was to be used for the coarse aggregate specific gravity portion. So, we batched the sieve analysis samples and the other bag of fine aggregate in proportions so we were certain that laboratories would have enough to perform all the tests adequately.

[00:03:54] Brian: OK, so there's a little difference there that I think people probably noticed. You know, usually in the old samples for fine aggregate gradation and specific gravity, they used to get a small bag that was for the fine aggregate gradation. In these new samples you mentioned a small bag of fine aggregate too, but is that used for the same purpose?

[00:04:16] John: No. And it wasn't really, I wouldn't say, a small bag; it was about 2,500 to 2,600 grams wet, and that was to be used for specific gravity, uncompacted void content, and sand equivalent.

[00:04:29] Brian: OK. So, if you are used to the old sample, you notice right away when you open these boxes up that it's something different, right? [John: Oh yeah, completely.] And if you are looking at that and you say, what do I do with this? Where would you go to find the instructions if you were a participant for this one?

[00:04:45] John: So, the instructions are always posted on our website. They are posted for each specific sample round and made available on the shipping date, under a link that says data sheets and instructions.

[00:04:58] Kim: Yeah, John, we will include a link to the data sheets and instructions page of our website in the show notes of this episode, if people are curious about where to find the instructions and data sheets for each sample. And I remember for this particular sample there was some confusion about what the bags in this box were for, so I know we also tweeted out a photo with an explanation of which bags were used for which tests. So that's another reason for our listeners to follow us @aashtoresource on Twitter. You can get some helpful hints there too.

[00:05:35] Brian: OK. Let's talk about how the results went this time. So, it was different; people had to get used to a little bit of a different process in the way they handle, prepare, and test this sample. What did it look like in the end? Were the results comparable to the previous rounds, or was there confusion?

[00:05:54] John: Well, I'm going to say that it wasn't completely perfect. There was definitely a little bit of confusion by some participants. We noticed a few laboratories who used the fine aggregate bag that was in moist condition as part of their sieve analysis sample. We're not exactly sure what they did, but we noticed a higher percent passing on some values that were passing the 200. So that indicated that these laboratories used that fine agg bag, which had more minus 200 in it compared to the bulk sample, which was a mixture of coarse and fine aggregates. So that was one thing that we noticed. But that being said, the standard deviations were very similar to what we saw with the previous rounds. So, when we compared our old coarse agg data for sieve analysis and our old fine aggregate data for sieve analysis, the standard deviations and coefficients of variation were very similar. So that was one thing that was really good. We saw consistent results even though we made a wholesale change. The majority of laboratories understood what we did and made it work, which is really good and reassuring.

[00:07:03] Brian: Yeah, that's great. Next, I want to talk about how we're dealing with the ratings after the report is out. Now, Joe Williams, let me ask you about that. Because this is a new round, I assume you had to update our policy to make sure this new sample is covered. What changes could people find in the Proficiency Sample Rules and the AASHTO Accreditation Program documents now?

[00:07:24] Joe: Well, the big change was obviously just getting the correct tests into the correct samples. Like John said, it used to be broken up into fine and coarse aggregate, but now it's broken up to be test-specific. But really, the biggest issue that we are having with monitoring is just going to be with this first set of sample rounds for degradation and gradation and gravity, because to monitor any low ratings, we have to go and look at a laboratory's previous round of fine aggregate and coarse aggregate in order to correctly monitor its ratings and pull them all together like that. So now, just for this first set of sample rounds, we're sort of dealing with three different samples at the same time instead of just the regular two.

[00:08:11] Brian: OK, well, let's look at the complication there. Pete, people may not know this about you, but one of the things that you do in your role, which is certainly not defined by your position description, is a bunch of coding to help us identify the laboratories that have repeat low ratings. How complicated is it for you to be able to do that when you're switching between the old fine agg and coarse agg samples and this new aggregate gradation and gravity sample?

[00:08:41] Pete: The switch itself is simple. But I've been building these PHP scripts from scratch, and there's a lot involved in that to make sure we're grabbing the data correctly and evaluating which labs need to be suspended, and that takes a lot of time. But this switch from the fine and coarse agg to now the gradation versus degradation samples, that's a small little thing.
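[Editor's note: Pete's production scripts are written in PHP, but the core rule he describes — flag a lab whose two most recent rounds were both low or missing — can be sketched briefly. The Python below is a hypothetical illustration only; the 0-to-5 rating scale, the low-rating threshold of 2, and the data shapes are assumptions for the example, not AASHTO's actual code or policy values.]

```python
# Hypothetical sketch of a consecutive-low-rating check.
# Assumptions (not AASHTO's actual code): ratings run 0-5, a "low"
# rating is 2 or below, and a missing result (None) also counts
# against the lab, per the "low ratings or no data" rule.

LOW_THRESHOLD = 2

def is_low(rating):
    """A low rating or no submitted data both count as unsatisfactory."""
    return rating is None or rating <= LOW_THRESHOLD

def labs_to_flag(history):
    """history maps lab ID -> ratings in round order (oldest first).
    Flag labs whose two most recent rounds were both unsatisfactory."""
    flagged = []
    for lab, ratings in history.items():
        if len(ratings) >= 2 and is_low(ratings[-1]) and is_low(ratings[-2]):
            flagged.append(lab)
    return flagged

history = {
    "lab_A": [4, 5, 1, 2],   # two low rounds in a row -> flagged
    "lab_B": [3, 1, 4],      # recovered on the latest round -> not flagged
    "lab_C": [5, None, 0],   # no data, then a low rating -> flagged
}
print(labs_to_flag(history))  # ['lab_A', 'lab_C']
```

The real scripts also have to stitch the new combined sample's line items to the equivalent line items from last year's separate fine and coarse agg rounds, which is the transition work Joe mentioned.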

[00:09:09] Kim: I want to go back and touch on what I think Joe covered, but I feel like it might be buried in there, so I just want to make sure that I'm understanding this correctly. So, even though this is a new sample, accredited laboratories' data from the previous coarse and fine aggregate samples will still count. So, if they had a low rating on a test now, and they had a low rating on the equivalent test last year, then that equals two low ratings in a row. Am I understanding that correctly?

[00:09:38] Joe: Yeah, that's correct, Kim.

[00:09:40] Pete: Yeah, and I guess the only little nuance to that is for the number 4 sieve. The result on this year's gradation sample is only being compared with the number 4 sieve result on last year's coarse aggregate sample. The fine aggregate program included a number 4 sieve result, but it's always suppressed, so, just so labs understand that. Oh, and I guess there's the other one, the wash result. So, in the gradation sample the wash result is being compared with last year's fine aggregate result, not the coarse aggregate result, which we do not evaluate, well, had not in the past evaluated.

[00:10:23] Brian: Yeah. So just for background, the reason why we weren't including the number 200 wash result for accreditation purposes is because that material was shipped clean, and any dust or any minus 200 material that was produced out of that sample could be variable based on the damage that occurred through shipping. So, we didn't really want to use that as a mode of suspension of accreditation, or an indication that a laboratory is not performing well, because maybe their aggregate didn't get roughed up like some of the aggregate that went farther or was handled more roughly. So, that is why we didn't include it before. But it is being included now, because we've got that mix of fine and coarse aggregate. So, Joe, let's talk about what happens if somebody is facing suspension after this round of aggregate. What can they do? You know, this is the first round. Presumably we would not want to send them a blind sample of the only samples that we have, because they would know what the results are right away, right? So, what do we do in this case if somebody gets low ratings and needs to get a blind sample to resolve this issue?

[00:11:43] Joe: Yeah, Brian, since there's only sample one and two, there's a 50/50 shot of knowing which sample you were sent. So, we're not sending gradation and gravity samples as blind samples to temporarily reinstate labs suspended from PSP testing. What we do is look at the reason for the suspension. For example, if it came from one of the fine sieves on the sieve analysis, we will send you a previous fine aggregate sample. If it came from one of the coarse sieves, we'll send you one of the previous coarse aggregate samples. If the suspension came from both a coarse and a fine sieve, then the laboratory would actually have to get both a coarse and a fine aggregate sample from a previous sample round. The other reason for that is just inventory as well. We want to kind of build up our inventory of gradation and gravity samples and also sort of use up some of the inventory of the current samples we already have.
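[Editor's note: the decision rule Joe outlines can be sketched as follows, assuming a simple split of sieve sizes into fine and coarse groups. The groupings, names, and data shapes here are illustrative assumptions for the example, not program policy.]

```python
# Hypothetical sketch of the replacement (blind/XPS) sample rule Joe
# describes for labs suspended off the gradation-and-gravity round.
# The sieve groupings below are assumptions for illustration only.

FINE_SIEVES = {"No. 8", "No. 16", "No. 30", "No. 50", "No. 100", "No. 200"}
COARSE_SIEVES = {"1 in.", "3/4 in.", "1/2 in.", "3/8 in.", "No. 4"}

def blind_samples_needed(low_rated_sieves):
    """Per Joe's rule: fine-sieve low ratings -> a previous fine
    aggregate sample, coarse-sieve low ratings -> a previous coarse
    aggregate sample, and low ratings on both -> both samples."""
    needed = set()
    if low_rated_sieves & FINE_SIEVES:
        needed.add("fine aggregate")
    if low_rated_sieves & COARSE_SIEVES:
        needed.add("coarse aggregate")
    return needed

print(sorted(blind_samples_needed({"No. 200"})))
# ['fine aggregate']
print(sorted(blind_samples_needed({"1/2 in.", "No. 30"})))
# ['coarse aggregate', 'fine aggregate']
```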

[00:12:40] Kim: So, when can customers expect to be able to see extra proficiency samples, or XPS, for this sample? Are we talking a couple of years, or is it next year? When are we looking to start sending out aggregate gradation and gravity samples as an XPS for accreditation issues?

[00:12:50] John: Yeah, I would anticipate not until at least year three. As Joe mentioned, we have a backlog of previous fine agg and coarse agg samples, and we try to move those samples out for XPS, extra proficiency samples, within a five-year time period. If not, we discard them after five years. So, we'll probably be cycling through as many of those previous rounds as we can before we dip into the new gradation and gravity samples.

[00:13:17] Kim: Thank you. I did not know that we only keep inventory for five years. I learned something new today.

[00:13:23] John: I will say this. We keep them longer than five years, but we'll only send them for AAP XPS purposes for five years. Outside of that, if a laboratory wants to use them for training or you know, we get some random requests for an oddball test and they want our material, then we'll send it to them. But as it pertains to accreditation purposes, we won't send them anything past five years.

[00:13:44] Brian: So, Pete, there's another complication that comes with the combined sample that we have been talking about recently. Do you want to let our listeners know about what that is?

[00:13:54] Pete: Yeah. So, we've been offering accreditation for just a portion of T 27 and C 136 for labs that were only testing fine aggregate by those methods. And we've decided, in conjunction with the change to the Proficiency Sample Program, to no longer provide these partial accreditations and to update all of those labs, and there are only eight of them in our program, to the full standards, so that they'll be required to get satisfactory ratings on the coarse agg portion of this gradation sample moving forward.

[00:14:42] Joe: Hey, Pete and Brian, we offer a similar accreditation distinction for C 88 as well, for fine aggregate or coarse aggregate testing. Will we be doing the same thing with those from the degradation program, or are we going to keep doing that as we have been?

[00:15:02] Pete: Yeah. The plan is to get rid of all of these splits where we're now offering accreditation for either just the coarse agg or just the fine agg. So, this applies to, I believe this is all of them, the wash and the gradation, so T 11/C 117 for the wash and T 27/C 136 for gradation. It's also going to apply, like you said, to the soundness test, T 104 and C 88. And finally, although not covered by a proficiency sample program, lightweight particles, T 113 and C 123. So, labs holding partial accreditations for those standards will all be getting updated to the full standards.

[00:15:43] Brian: Yeah. And I want to mention something about that too: that designation isn't terribly meaningful. I know that there are some people who will say, well, we only normally do this. The issue is you don't necessarily know what you're going to get. If your client wants you to test different size material, you may find out that you need to do it, and then you should be able to accommodate that kind of request. And really, what it comes down to is, for lightweight pieces, we're talking about a different strainer, right? And for sieve analysis, we're talking about an additional sieve in the stack, but you've already shown competency for the lower sieve size and for the process of performing a sieve analysis. So, it really shouldn't be much of a stretch. Now, when we looked at the data for these rounds for that small number of laboratories, Pete, did you find that they were actually performing the test on this combined sample, or did you see that they just weren't submitting the data?

[00:16:42] Pete: Yeah. Well, for the labs accredited for just the fine aggregate portion of the gradation test, we found a few occurrences where we missed getting them enrolled in the gradation sample altogether. Other than that, we've noticed, well, some labs that are accredited for just the fine agg I believe were submitting data for the coarse agg portion. And in looking back at assessments for those labs accredited for only the fine agg gradation, they had sieves evaluated during the assessment, in many cases all the sieves they would need to be able to participate, and so they're, for the most part, already equipped with the sieves they need. If we run into a hurdle where a lab, uh, doesn't have a mechanical shaker for coarse agg testing, we'll have to take it as it comes and see how we can accommodate a lab like that.

[00:17:41] Brian: Yeah, I'm glad you mentioned that, because I was thinking about how some laboratories think that the act of performing a gradation on coarse aggregate means that they have to use one of those giant shakers with a rectangular box sieve. But that's not really an implication of this. You know, a lot of laboratories will just have a stack that includes the coarse aggregate sieves in it, and that's what we're looking for in this kind of gradation, because we aren't sending so much material that you need to use the big box shaker. Right, John?

[00:18:14] John: You know, that was actually one of the other reasons why we made this change. We had a lot of laboratories who were providing feedback indicating that the old coarse aggregate sample was about 12.5 kilograms, and it was so much material that laboratories would have to split and quarter multiple times and then recombine after sieving. So, there were way more opportunities to make transposition errors, mathematical errors, and to lose material. So, one of the major drivers for this change was also to cut that sample size down to about 2,800 grams of a mixed gradation. It allowed a lot of laboratories to either use a single set of 12-inch diameter sieves in a rotational shaker, or, if they had to use 8-inch diameter sieves, they only had to do two or three splits instead of having to do 10 or 15 because of the size. So, one other major benefit to adjusting the program the way that we did.

[00:19:09] Kim: So, I have a question for all of you. Because this is the first year of the sample, was anything unexpected or surprising to you? 

[00:19:17] John: So, one of the things that really surprised us from the production standpoint was the number of comments that we received about material being outside of the bags and in the boxes. We probably heard from 40 to 50 laboratories who indicated this, and we reshipped, I am going to say, around 35 to 40 packages, sometimes multiple packages to the same facility. We were a little bit surprised that we had some of these issues with material coming out of the bags. This is the same packaging process that we used for coarse aggregate samples for the last four decades. So, we were a little bit shocked that we had comments from laboratories saying that there was, you know, #4 material, half-inch material, you know, inside the box, and they were concerned about the sieve analysis portion itself. Now, we did have a tolerance built in around the product, and any laboratory whose sample was outside that tolerance was able to contact us and have a free replacement sent. And we did that for, as I said, several participants. 

[00:20:27] John: But given this feedback that we've been provided with, we're going to go ahead and make some pretty substantial changes next year to the bags and make sure that we don't run into this issue again. We're probably going to do some slide-seal bags and gusset-style bags, so they're a little bit larger and hopefully more durable. Given that feedback, it appears that some of it was a manufacturing defect. There were several comments about the bags being busted along the seams where they're heat treated, which is something that we don't have control over. We check a lot or two to make sure that they're OK, but it could be part of a box of 1,000 bags that we just, you know, didn't catch. So, one shock, one surprise. But we've got corrective action going on, and hopefully we make things better for the 2023 round.

[00:21:14] Brian: Right. Yeah. Kim, that continues to be the golden question in all interviews, so thanks for throwing that in there. I think we've covered this pretty well today, though. Thank you all for coming on to talk about it. And I thought I was done, but it looks like Pete has something else he wants to interject.

[00:21:30] Pete: I saved the longest for last, in case we didn't have time to get to my question, or observation, here. And this is not about, you know, aggregate testing in particular, but all the proficiency sample programs; we see this from CCRL and from AASHTO. And that's when we're looking at what labs are getting suspended. So, for this proficiency sample, we have more than 90 labs getting suspended, based on two rounds of low ratings or no data. And one thing we always see is that there are some tests that have a lot more labs getting suspended compared with other tests where hardly anyone ever gets suspended. So, for example, for the fine agg specific gravity test, we have 36 labs flagged for suspension.  

[00:22:32] Pete: We have more than 1,000 labs accredited for this. But 36 labs flagged for suspension? That's like two and a half percent of those labs. But then we have a test like fine aggregate angularity, T 304/C1252. We have almost 500 labs accredited for that test, but we only have one lab flagged for suspension for it. So, two and a half percent are getting suspended for specific gravity, and only 0.2% of the labs accredited for fine aggregate angularity are getting suspended. So, you have more than 10 times as many getting suspended for fine agg specific gravity. And the curious thing to me is, why is there such a disparity between different tests? For some tests, labs seem to get suspended a lot more often than for other tests.

[00:23:33] Brian: I want to clarify that Pete is not saying that 10 times more labs are getting suspended for the same test. He's comparing two different tests.  

[00:23:46] Pete: And again, the same thing happens with the CCRL PSPs. There are some tests labs seem to be getting suspended for low ratings on a lot more often than for other tests, and it seems like a mystery to me why that's the case. Why isn't it roughly the same percentage of the labs accredited for this or that test getting suspended?

[00:24:08] Brian: I have a theory on this one, but I'd like to hear John's first.

[00:24:11] John: Part of it, Pete, is that, you know, the one thing that you may not be able to compare when you're talking test methods, right, is that it's not apples to apples and oranges to oranges when you look at the parameters that fall under each test property or each test. So, for example, for uncompacted void content, the accreditation program is taking action on one test property; you're taking accreditation action on the average of the two runs. Fine aggregate specific gravity has four test properties associated with it. So, just inherently, there's more of a chance for your laboratory to see a resulting suspension, because you have more parameters available for low ratings. But when it comes to the analysis, just looking at that data set, I mean, you know, when we do our verification and we look at the data, the amount of paired data that is used in the analysis and the number of laboratories who are considered to be outlying and invalid is a consistent percentage, and we have flags built into the data analysis to evaluate that. And if we see numbers that fall too far outside of those limits, we'll suppress the line item. But, as it relates, we have, you know, like I said, specific parameters that define those bounds, and the percentage shouldn't vary that much.

[00:25:32] Brian: I'm going to say my theory, then. So, not only do I agree with that, that there's four more line items, so you automatically have a bigger chance of a suspension compared to the one, but the other thing is that the fine aggregate specific gravity result is so subjective, because you have to determine that moist conditioning, and that weighs really heavily into your test result. And just from being an assessor, you would see a wide range of subjective determinations on the SSD condition of the material. So, I think that probably leads to some of the variability as well.

[00:26:06] John: Yeah, and one quick reference, too, is the coefficient of variation that you see on the website for the fine aggregate and coarse aggregate gravities. You're looking at less than 1% variability; some are even below half a percent. The bulk specific gravity, SSD value is 0.245% for coefficient of variation, whereas for uncompacted void content you're looking at a percent and a half. So, there's less variability with that sample, you know, just because people have been testing it. You know, I feel like they're pretty proficient at it. So, I think it's just one of those things. It's a pretty good test method. It does what it needs to do, and it's really easy to make a mistake that can cause you a low rating. It's just the nature of the beast, I guess.
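[Editor's note: the coefficient of variation John is quoting is the standard deviation expressed as a percentage of the mean. A minimal sketch, using made-up replicate results rather than actual PSP data:]

```python
import statistics

def coefficient_of_variation(results):
    """CV (%) = sample standard deviation / mean * 100."""
    return statistics.stdev(results) / statistics.mean(results) * 100

# Hypothetical lab results for a bulk specific gravity (SSD) round.
# They cluster tightly, so the CV lands well under 1%, in line with
# the figures John cites for the gravity tests.
ssd_results = [2.652, 2.648, 2.655, 2.650, 2.647, 2.653]
print(f"CV = {coefficient_of_variation(ssd_results):.3f}%")
```

A lower CV means the between-lab scatter is a smaller fraction of the typical result, which is why a rating scheme based on deviation from the grand average can still trip up labs even on a "tight" test.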

[00:26:53] Brian: Thanks, Pete. I appreciate you asking that question. I think that was something that was missing from our conversation earlier, so I appreciate the inclusion of that in this episode. 

[00:27:02] Kim: So, before we go, John, is there anything else happening or coming down the pipeline for the proficiency sample program that you want our listeners to know about?

[00:27:10] John: Yeah, Kim. So, one of the things that this sample has prompted us to do, given the quantity of questions and comments and feedback that we received about, you know, the new design and new scheme of the samples, is we are actually going to work on producing video-based sample announcements. So instead of everyone receiving the bulk e-mail that just gives you the generic information, we will be trying to embed a link in there which will take you to some sort of media platform, whether it be YouTube or whatever we decide. And we will be shooting essentially an unboxing video of what is in your proficiency sample box, hopefully giving people a better idea of what's in there and how to prepare the product and material and get it ready for testing.

[00:27:54] Kim: As the communication manager, I am excited about that. Is there anything else coming down the pipeline that anyone else wants to talk about?

[00:28:02] Pete: Yeah, something else I want to ask about, John. What about the potential for any new tests being included in next year's sample? We have a lot of labs accredited for, like, flat and elongated or fractured faces. Any thoughts about the potential for any new tests?

[00:28:21] John: I haven't really seen too many requests, Pete. When we add new tests to the proficiency sample scope, it's usually on a request basis, and we need to have a high enough number of laboratories who are interested. You know, the biggest change that we probably have coming down the line next year is the addition of the polymer-modified emulsified asphalt sample. We've received a lot of requests for that new material type. The tests won't be very different from what's in the current emulsion round, but we're going to add that program just due to the requests for the different material type. That being said, if anyone out there is interested in having a new test method added and, you know, sees a bump in the industry, we can always do our best to try to get it into one of the programs. You just need to let us know.

[00:29:08] Kim: And how would people let us know?

[00:29:09] John: Well, they could use data sheet comments, feedback, or emailing any one of us, our quality manager Tracy Barnhart, myself, the general e-mail accounts, just any way that you can get a hold of us. Tracy keeps a log of it all, and every year we go over that information in our management reviews and try to see what we can add to the program.

[00:29:30] Brian: I think we're actually ready to sign off, though. Everybody agree? Joe, does that sound good to you?

[00:29:35] Joe: Yeah, that sounds good to me.

[00:29:37] Brian: We have definitely covered it all now, so thanks so much for your time today and we hope you enjoyed this episode.  

[Theme music fades in.]

[00:29:44] Announcer: Thanks for listening to AASHTO re:source Q&A. If you'd like to be a guest or just submit a question, send us an email at podcast@aashtoresource.org or call Brian at 240-436-4820. For other news and related content, check out AASHTO re:source's Twitter feed or go to aashtoresource.org.