
GovCon Bid and Proposal Insights
Civil Engineering Services Pacific (CESPAC)
In this episode, we explore the upcoming Civil Engineering Services Pacific (CESPAC) MA-IDIQ opportunity released by the Department of the Air Force – Pacific Air Forces. With an estimated value of $400 million and a set-aside for 8(a) firms, this contract offers a powerful entry point into major infrastructure projects across the Pacific region.
We break down the presolicitation details, estimated timeline, number of awards, and what your business should be doing now to prepare for the Q3 2025 solicitation release.
Get ahead of the competition, listen now
Contact ProposalHelper at sales@proposalhelper.com to find similar opportunities and help you build a realistic and winning pipeline.
Welcome back to the Deep Dive. We're stepping into the pretty complex world of government contracting today.
Speaker 2:Yeah, it's a space where the details really really matter.
Speaker 1:Absolutely. I mean, the fine print isn't just legalese right, it's basically the map to accessing billions of dollars. Understanding these evaluation criteria is just critical.
Speaker 2:That's spot on. These aren't, you know, simple off-the-shelf buys. They're very specific processes. They're designed to make sure the government gets exactly what it needs.
Speaker 1:And that idea of best value. Well, it can be a lot more specific than people might think. Definitely, it's often very granular. Exactly. So today we're sort of pulling back the curtain on one of these processes. Our deep dive is focusing on the evaluation criteria for a big US Air Force contract opportunity: the Civil Engineering Services Pacific, CESPAC.
Speaker 2:Right, and the number for that, if you're looking it up, is RFP TradTech 552 and 525 R0003.
Speaker 1:So to really get a handle on this, we've been digging into the core documents.
Speaker 2:Yeah, we looked at the key stuff, specifically the really crucial Section M from the request for proposal. That part spells out exactly how the Air Force is going to evaluate proposals, how they'll pick the winners.
Speaker 1:And also that statement of facts form which sounds like it plays a pretty big role in proving your experience.
Speaker 2:It does. Yeah, Very important for substantiation.
Speaker 1:Okay, so our mission here is to take these sources and give you a clear kind of actionable understanding Like what's the Air Force really prioritizing? How exactly will they score potential contractors?
Speaker 2:And what boxes do you absolutely need to check? What proof do you need to show to actually win a piece of this? It's a pretty significant effort.
Speaker 1:Yeah, think of this as maybe your shortcut to understanding the nitty-gritty mechanics of a major government buy.
Speaker 2:Okay, so let's unpack how this whole thing works. First off, it's for an indefinite delivery, indefinite quantity contract, an IDIQ.
Speaker 1:Okay.
Speaker 2:And the government says right up front they plan to make multiple awards. Specifically, they're aiming to award to three different contractors.
Speaker 1:Three winners, okay, out of maybe quite a few bidders. I imagine yeah. And the scale here?
Speaker 2:Huge the total value across all the task orders they might issue under this IDIQ. Well, it can reach up to $400 million.
Speaker 1:Wow, $400 million. That's a really substantial amount of potential work.
Speaker 2:It absolutely is. So the big question is how do they pick those three?
Speaker 1:Right.
Speaker 2:And it's not just about finding the lowest price tag. This is what they call a best value source selection. But, and this is key, with a very specific definition for this contract. They're using a method called highest technically rated offerors, or HTRO. It follows the rule book, the Federal Acquisition Regulation, FAR Part 15.3. The goal is basically to find the three companies whose proposals are the highest technically rated and whose prices are also OK, you know, complete, balanced and reasonable.
Speaker 1:Ah, okay. So best value here isn't the typical situation where they might, you know, weigh things, like maybe accepting a slightly lower technical score if the price is really good, or vice versa?
Speaker 2:Exactly. That's a really critical distinction. Based on these documents. Best value here is strictly defined. You have to be one of the top three technically rated offerors and have a technical score the government actually validates and have a price that passes those checks reasonableness, completeness, balance.
Speaker 1:So no wiggle room on trading price for tech score.
Speaker 2:Nope, the RFP explicitly says there will not be a subjective trade-off process between technical and price. Technical rank is first, price only gets looked at for those top technical folks.
Speaker 1:Got it. That structure really sets the path, then. Is there any kind of minimum guarantee for the winners?
Speaker 2:Yes, there is. A small one. There's a minimum guaranteed task order for $3,000.
Speaker 1:Okay.
Speaker 2:And that's specifically tied to attending the post-award conference. But it's important the source points this out. This is not a requirements contract.
Speaker 1:Right.
Speaker 2:So winning the main IDIQ contract doesn't actually guarantee you'll get any more task orders beyond that first $3,000, although they do mention a mock project might get awarded.
Speaker 1:Okay, so the absolute, critical first step, if you want a shot at any of that $400 million, is proving you're one of those top three highest technically rated offerors.
Speaker 2:Exactly.
Speaker 1:So let's dive into that single evaluation factor Technical experience.
Speaker 2:Right. Technical experience is the only factor they use to rank proposals initially and, interestingly, offerors actually self-score their own experience.
Speaker 1:They score themselves?
Speaker 2:Yeah, based on criteria laid out elsewhere in the RFP, in Section L. Section M refers back to it.
Speaker 1:Okay, but here's the catch, I assume: the government validation process. Your self-score isn't just accepted?
Speaker 2:Not at all. This is where it gets rigorous. Like you said, the government evaluation team goes through your proposal, checks your evidence and validates those scores you claimed.
Speaker 1:And I bet that validation can lower your score from what you submitted.
Speaker 2:Oh, absolutely, it definitely can, if the proof isn't there. And here's how the ranking and validation actually works. The government lists proposals based on those self-scores, highest down, and they start validating from the top score downwards. Now, if during validation they check someone's proof and have to lower their score, and that drop kicks them out of the top three validated spots? Then they stop evaluating that one and move to the offeror with the next highest self-score on the original list, and they keep validating down that list. They continue this process until they've successfully validated scores for at least three different offerors.
Speaker 1:Wow, okay. So that puts a massive emphasis on making sure your self-score is not just, you know, optimistic, but perfectly backed up. You need the proof.
Speaker 2:Exactly. You could think you're number one based on your self-assessment, but if you can't prove every single point with verifiable evidence the government can check you could easily fall out of the running.
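That top-down validation flow can be sketched in a few lines of Python. This is a minimal illustration of the process as described in this episode, not language from the RFP; the function names and data structures are our own, and we assume (as the sources describe) that validation can only adjust a score downward, never upward.

```python
def htro_validate(self_scores, validate, awards=3):
    """Sketch of the top-down HTRO validation flow.

    self_scores: dict of offeror name -> self-reported score.
    validate: function returning the government-validated score
    for an offeror (never higher than the self-score).
    """
    ranked = sorted(self_scores, key=self_scores.get, reverse=True)
    validated = {}
    for name in ranked:
        # Validate from the top of the self-score list downwards
        validated[name] = validate(name)
        top = sorted(validated, key=validated.get, reverse=True)[:awards]
        remaining = ranked[len(validated):]
        # Stop once `awards` validated scores exist that no remaining
        # self-score could displace (validation never raises a score)
        if len(top) == awards and all(
            self_scores[r] <= min(validated[n] for n in top)
            for r in remaining
        ):
            return top
    return sorted(validated, key=validated.get, reverse=True)[:awards]
```

Note how an offeror whose claimed score doesn't survive validation simply falls out, and the evaluators keep walking down the original self-score list.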
Speaker 1:It's really about proving the experience, not just having it.
Speaker 2:Precisely so. Section M breaks down this technical experience factor into two main buckets: program management, or PM, and staffing and retention, SR.
Speaker 1:Okay, let's dig into those program management areas first. What kind of things are they looking for there?
Speaker 2:Well, they're really focused on experience that's relevant to managing these kinds of complex civil engineering services, especially in specific, sometimes difficult environments.
Speaker 1:Makes sense.
Speaker 2:So under PM the criteria lean heavily on geographic experience and being able to manage scale.
Speaker 1:Geographic experience? Yeah, especially in the Pacific, I'm guessing that seems to be the theme here.
Speaker 2:It really is. There's significant weight on having experience managing broad labor categories. The document says 20 or more different job types, specifically within the Pacific theater.
Speaker 1:Why is that so important?
Speaker 2:Well, the source suggests it shows you really understand the complexities, the nuances of finding and managing a really diverse skill set in that particular region Sourcing, managing. It's different there. They also look for general experience managing projects and staff on site but in non-US locations or territories more broadly, and then specifically experience outside the continental US but not in the Pacific.
Speaker 1:So OCONUS experience generally, and then non-Pacific OCONUS specifically? Right.
Speaker 2:It shows you can handle the unique labor markets and logistics of operating overseas in different places. That's valuable for hiring people outside the US.
Speaker 1:And you mentioned managing scale as well.
Speaker 2:Yes, exactly. They want proof you can handle large teams. There are specific points for managing large numbers of employees, both in non-US locations generally, and then another point specifically for managing large teams within the Pacific Theater itself.
Speaker 1:So they can gauge if you can handle potentially big task orders.
Speaker 2:Pretty much. It gives insight into your capacity.
Speaker 1:Now there was also something specific about A&AS, Advisory and Assistance Services, in the Pacific under PM.
Speaker 2:Ah, yes, that's PM5, Experience Providing A&AS Within the Pacific Theater. The document notes it's relevant because it relates to providing quality staffing, especially in remote parts of the region.
Speaker 1:Okay, seems straightforward.
Speaker 2:But here's what's really interesting and a key takeaway from Section M that specific point PM5, is the designated first tiebreaker.
Speaker 1:The first tiebreaker.
Speaker 2:So if companies end up with the exact same validated technical score, the one with proven A&AS experience in the Pacific under PM5 wins the tie. It seems the Air Force really sees that specific experience as a critical difference maker if all other technical points are equal. Wow.
Speaker 1:That really highlights how much they value that specific capability in that location.
Speaker 2:Yeah.
Speaker 1:Okay, let's shift to the second main technical category.
Speaker 2:Yeah.
Speaker 1:Staffing and retention SR. What are they looking at here? Metrics practices.
Speaker 2:Yeah, this bucket is all about your ability to find the right people, hire them and, importantly, keep them, especially again in the challenging Pacific theater. The evaluation looks closely at key metrics.
Speaker 1:Like retention rate, is that one?
Speaker 2:Exactly. SR1 is retention rate, and they're specific: it's calculated over at least two years using a formula, retained full-time equivalents divided by required full-time equivalents, times 100. And the source emphasizes why. A high retention rate shows you can keep qualified people on board. That reduces risk for the government, ensures mission continuity.
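As a quick sanity check, the SR1 formula works out like this. A trivial sketch; the figures are invented, not from the RFP:

```python
def retention_rate(retained_ftes, required_ftes):
    # SR1: retained full-time equivalents divided by required
    # full-time equivalents, times 100, over at least two years
    return retained_ftes / required_ftes * 100
```

So an offeror who kept 45 of 50 required FTEs over the period would report a 90% retention rate.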
Speaker 1:It makes sense.
Speaker 2:And significantly SR1, that retention rate. That's the second tie breaker.
Speaker 1:Ah, so if PM5, the A&AS Pacific experience, doesn't break a tie, then your documented retention rate does. Yeah.
Speaker 2:It's the next deciding factor.
Speaker 1:Interesting. Okay. What other metrics under SR?
Speaker 2:Well, there's SR2, which is fill rate, Again a formula FTEs you have on board divided by the total FTEs required times 100. This gives them insight into how strong and efficient your recruiting process is.
Speaker 1:How quickly you can get positions filled.
Speaker 2:Sort of. But SR4 is more directly about speed. That one looks at your average time to fill a Pacific theater vacancy, measured in days. Specifically in the Pacific again, yes, and the document explicitly calls this out as crucial. It shows your efficiency, your expertise in filling jobs quickly in what they call a top priority region, the dynamic Pacific theater.
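The SR2 and SR4 metrics follow the same pattern as SR1. Again an illustrative sketch with made-up numbers, just to make the arithmetic concrete:

```python
def fill_rate(ftes_on_board, ftes_required):
    # SR2: FTEs on board divided by total FTEs required, times 100
    return ftes_on_board / ftes_required * 100

def avg_time_to_fill(days_per_vacancy):
    # SR4: average days taken to fill each Pacific theater vacancy
    return sum(days_per_vacancy) / len(days_per_vacancy)
```

For example, 48 of 50 required FTEs on board is a 96% fill rate, and vacancies filled in 30, 45 and 60 days average out to 45 days.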
Speaker 1:Got it. And wasn't there also a point about bonuses?
Speaker 2:Yep SR3. It's a simple yes-no question Do you use retention or relocation bonuses?
Speaker 1:Okay.
Speaker 2:It's a straightforward question about whether you employ that common strategy to you know, incentivize effective hiring and keep people, especially in tough locations. Just another piece of the puzzle about your staffing approach.
Speaker 1:Okay, so summing up the technical side, it's a very detailed, highly validated assessment. They're looking at specific provable experience points, with a heavy, heavy emphasis on working effectively in the Pacific and managing a stable, qualified workforce there.
Speaker 2:That's a good summary.
Speaker 1:All right. So after all that technical scoring and validation, then comes factor two price.
Speaker 2:Exactly. Price evaluation only happens after the technical evaluation is complete. And again, this is where that HTRO method is so important. Only the offerors who actually achieve one of the top three highest government-validated technical scores will even have their price looked at.
Speaker 1:Seriously so. If your technical score after validation doesn't land you in that absolute top tier, they don't even open the price envelope, metaphorically speaking.
Speaker 2:That's how it reads. Your price volume isn't considered if you're not in that top validated group. Technical score is the absolute gatekeeper.
Speaker 1:Okay, wow. And they're not evaluating the price for the whole potential $400 million ceiling, are they?
Speaker 2:No good point. The source is clear on that. The price they evaluate is for a mock task order, a representative sample project.
Speaker 1:Oh, okay.
Speaker 2:The document actually identifies it specifically as DET2, and it notes the government reserves the right to actually award this mock task order to the winners later.
Speaker 1:Okay, so they look at the price for this mock project. What are they checking for? Just the lowest number?
Speaker 2:No, not just lowest. They evaluate it for three things, completeness, balance and reasonableness, and they use standard government techniques for this, the ones found in FAR 15.404.
Speaker 1:Completeness, balance, reasonableness yeah, what would make a price fail those checks Like what's incomplete or unacceptable?
Speaker 2:Well, the source gives some pretty clear examples. Bidding $0 for a labor category that obviously costs money, that's a fail.
Speaker 1:Right.
Speaker 2:Or just completely omitting a rate you were supposed to provide or other missing pricing info. Any of those can make your proposal incomplete, unacceptable.
Speaker 1:OK, what about unbalanced or unreasonable?
Speaker 2:Unbalanced might be. You know, pricing some things way too high and others way too low, maybe to game the system and unreasonable is just well too high overall compared to what they expect or what others propose. Both of those are grounds for rejection too.
Speaker 1:So let's play this out. Say you're one of the top three technically validated offerors, but your price for the mock task order gets flagged as, say, incomplete or unreasonable. What happens then? Do they just pick the offeror with the next best price among those top three?
Speaker 2:No, it doesn't work like that. This goes back to the HTRO structure. If your proposal is technically top ranked but your price is deemed unacceptable, your entire proposal is removed from consideration at that point.
Speaker 1:The whole thing out.
Speaker 2:Yep. They don't then look at your second best price or anything. Instead, they go back to the list of offerors ranked by their validated technical score, and they evaluate the price of the next highest technically scored offeror who wasn't in the initial top three. And they keep doing that, validating the technical score and checking the price of the next one down the technical list, until they find three offerors who are both in that highest technically rated group and have a price that passes the complete, balanced and reasonable checks.
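Put together, that sequential flow looks something like this. Our sketch, not language from the RFP: technical rank is the gatekeeper, and price is a pass/fail check applied down the ranked list until three offerors pass.

```python
def award(ranked_offerors, price_ok, awards=3):
    """Sketch of the sequential HTRO award flow.

    ranked_offerors: list ordered by validated technical score.
    price_ok: pass/fail check for completeness, balance and
    reasonableness of the mock task order price.
    """
    winners = []
    for offeror in ranked_offerors:
        if price_ok(offeror):
            winners.append(offeror)
        # A failed price check removes the whole proposal; keep
        # walking down the technical ranking, never re-pricing
        if len(winners) == awards:
            break
    return winners
```

Notice there is no trade-off anywhere: a top-ranked offeror with an unacceptable price is simply skipped and the next offeror down the technical list gets evaluated.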
Speaker 1:That is. That's a really interesting process. It seriously emphasizes that technical capability is the first absolute hurdle and price is more like a secondary pass fail check, just for that elite group.
Speaker 2:Exactly. It fundamentally shapes how companies need to approach bidding on this Technical excellence. Provable excellence comes first.
Speaker 1:Okay, so let's try to summarize the winning combination here, based on everything in these sources.
Speaker 2:All right, so award goes to the three offerors who have the highest government-validated technical scores and, critically, whose proposed price for that mock task order is judged to be complete, balanced and reasonable.
Speaker 1:That's not all, though, right.
Speaker 2:No, there are other standard checks. Their proposal also has to conform to all the other requirements in the solicitation. They have to be determined responsible. That means meeting general business standards found in FAR 9.104-1. And any potential organizational conflicts of interest, OCI, those have to be dealt with: avoided, mitigated or neutralized somehow.
Speaker 1:And tying this whole technical scoring piece together is that burden of proof thing you mentioned earlier.
Speaker 2:It is, absolutely critical. Cannot stress this enough. The source document explicitly states the responsibility, the burden, is entirely on the offeror to substantiate every single point they claim in their self-score.
Speaker 1:Every point.
Speaker 2:Every point. This is where that statement of facts form is vital. It has to have the details and, importantly, government point of contact information so the evaluators can actually verify your claims.
Speaker 1:And the consequences if you can't substantiate something.
Speaker 2:They're severe. If you fail to substantiate even one single claim, the score for that claim can be adjusted downward, potentially the source says even to zero for that specific element. Wow, so you can't just say, yeah, we did that. You have to provide clear, verifiable evidence that the government can independently confirm.
Speaker 1:That really demands incredibly diligent record keeping beforehand and then super clear proposal writing that explicitly links every claim to the specific proof.
Speaker 2:It's absolutely non-negotiable if you want to succeed in this kind of evaluation.
Speaker 1:Okay, and just to quickly recap those tiebreakers again.
Speaker 2:Right. If there's a tie among the validated technical scores, the first tiebreaker is PM5, that advisory and assistance services experience in the Pacific. If they're still tied after that, it goes to SR1, the retention rate.
Speaker 1:And if somehow they're still tied after both of those?
Speaker 2:Then the document says it's a random selection among those still tied.
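That tiebreaker ordering maps naturally onto a sort key. A sketch only; the tuple layout and function names are our own, not from Section M:

```python
import random

def rank_offerors(offerors, seed=None):
    """Rank by validated score, then PM5 (Pacific A&AS experience,
    yes/no), then SR1 retention rate, then a random draw.

    offerors: dict of name -> (score, has_pm5, retention_rate).
    """
    rng = random.Random(seed)
    # The trailing rng.random() models the final random selection
    # among offerors still tied after PM5 and SR1
    return sorted(
        offerors,
        key=lambda n: (*offerors[n], rng.random()),
        reverse=True,
    )
```

With equal scores, an offeror with proven Pacific A&AS experience outranks one without it, and among those with it, the higher retention rate wins.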
Speaker 1:A random draw, wow, okay. Now what strikes me is, after we've talked through all these objective points, these formulas, the validation process, section M still apparently includes a line acknowledging subjectivity.
Speaker 2:Yeah, it does. It says something like the source selection process by its nature is subjective. Therefore, professional judgment is implicit throughout the entire process.
Speaker 1:So, even with this very structured score based approach, there's still a layer of human judgment involved.
Speaker 2:There is. Interpreting the evidence provided, validating those claims, making the final call on responsibility and overall conformance: it's not purely a mechanical calculation. Professional judgment is still part of it.
Speaker 1:Right, that adds an interesting layer. Ok, so beyond the technical score, the price check, responsibility, OCI. Any other final points from the source?
Speaker 2:Well, it does mention the government's intention is to award, based just on the initial proposals, without holding discussions.
Speaker 1:So submit your best shot right out of the gate.
Speaker 2:Basically. However, they do reserve the right to conduct discussions if they feel it's necessary, but typically that would only be with offerors they've already determined are in the competitive range.
Speaker 1:Okay, so let's bring this all back. What does this incredibly detailed look mean for you, the listener? Why is it important to get this deep into these specific criteria?
Speaker 2:Well, I think for anyone who works in or around government contracting, or even if you're just curious about how huge amounts of public money get allocated for vital things like global civil engineering services, this deep dive really shows the precise logic at play.
Speaker 1:Yeah, it really highlights that winning isn't just about saying you have general experience, or even about being the cheapest.
Speaker 2:Not at all. It's about proving you have specific, verifiable experience in the exact areas the Air Force has flagged as critical, like managing diverse labor categories, handling large teams, especially in distinct regions like the Pacific, and showing you have solid staffing and retention practices.
Speaker 1:And, crucially, meticulously documenting and proving every single one of those claims so the government can check it off, validate it.
Speaker 2:Right Plus, having a price that passes that detailed scrutiny for a representative project, but only after your technical merit has already put you in the top group. It's not some weighted average, it's a sequential evaluation technical first, then price validation for the leaders.
Speaker 1:So yeah, this detailed look really shows how the Air Force is blending validated technical skill with price reasonableness for CESPAC. They're putting a huge premium on provable experience in areas vital to Pacific operations and making contractors carry that heavy burden of proof.
Speaker 2:Which you know leads us to maybe a final provocative thought for you to mull over after hearing all this.
Speaker 1:Okay.
Speaker 2:Given this intense focus on verifiable past performance in very specific locations and on detailed staffing metrics things like retention rates, average time to fill vacancies how might companies proactively think about structuring their internal data tracking? How might they manage project execution now and collect those government references now specifically to position themselves better for these kinds of detailed, best value government contracts years down the road?
Speaker 1:That's a great point. The operational data you capture and how you manage it today could literally be the key to winning major contracts tomorrow.
Speaker 2:Exactly. It's about preparing long before the RFP even drops.