GovCon Bid and Proposal Insights
Civil Engineering Services Pacific (CESPAC)
In this episode, we explore the upcoming Civil Engineering Services Pacific (CESPAC) MA-IDIQ opportunity released by the Department of the Air Force – Pacific Air Forces. With an estimated value of $400 million and a set-aside for 8(a) firms, this contract offers a powerful entry point into major infrastructure projects across the Pacific region.
We break down the presolicitation details, estimated timeline, number of awards, and what your business should be doing now to prepare for the Q3 2025 solicitation release.
Get ahead of the competition. Listen now.
Contact ProposalHelper at sales@proposalhelper.com to find similar opportunities and help you build a realistic and winning pipeline.
Introduction to CE-SPAC Contract
Speaker 1Welcome back to the Deep Dive. We're stepping into the pretty complex world of government contracting today.
Speaker 2Yeah, it's a space where the details really really matter.
Speaker 1Absolutely. I mean, the fine print isn't just legalese right, it's basically the map to accessing billions of dollars. Understanding these evaluation criteria is just critical.
Speaker 2That's spot on. These aren't, you know, simple off-the-shelf buys. They're very specific processes. They're designed to make sure the government gets exactly what it needs.
Speaker 1And that idea of best value, well, it can be a lot more specific than people might think. It's often very granular. So today we're sort of pulling back the curtain on one of these processes. Our deep dive is focusing on the evaluation criteria for a big US Air Force contract opportunity. It's the Civil Engineering Services Pacific, or CE-SPAC.
Speaker 2Right, and the number for that, if you're looking it up, is RFP TradTech 552 and 525 R0003.
Speaker 1So to really get a handle on this, we've been digging into the core documents.
Speaker 2Yeah, we looked at the key stuff, specifically the really crucial Section M from the request for proposal. That part spells out exactly how the Air Force is going to evaluate proposals, how they'll pick the winners.
Speaker 1And also that statement of facts form which sounds like it plays a pretty big role in proving your experience.
Speaker 2It does. Yeah, Very important for substantiation.
Speaker 1Okay, so our mission here is to take these sources and give you a clear kind of actionable understanding Like what's the Air Force really prioritizing? How exactly will they score potential contractors?
Speaker 2And what boxes do you absolutely need to check? What proof do you need to show to actually win a piece of this? It's a pretty significant effort.
Speaker 1Yeah, think of this as maybe your shortcut to understanding the nitty-gritty mechanics of a major government buy.
Speaker 2Okay, so let's unpack how this whole thing works. First off, it's for an indefinite delivery, indefinite quantity contract, an IDIQ.
Speaker 1Okay.
Speaker 2And the government says right up front they plan to make multiple awards. Specifically, they're aiming to award to three different contractors.
Speaker 1Three winners, okay, out of maybe quite a few bidders. I imagine yeah. And the scale here?
Speaker 2Huge the total value across all the task orders they might issue under this IDIQ. Well, it can reach up to $400 million.
Speaker 1Wow, $400 million. That's a really substantial amount of potential work.
Speaker 2It absolutely is. So the big question is how do they pick those three?
Speaker 1Right.
Speaker 2And it's not just about finding the lowest price tag. This is what they call a best value source selection. But, and this is key, with a very specific definition for this contract. They're using a method called highest technically rated offerors, or HTRO. It follows the rule book, Federal Acquisition Regulation (FAR) Part 15.3. The goal is basically to find the three companies whose proposals are the highest technically rated and whose prices are also okay, you know, complete, balanced and reasonable.
Speaker 1Ah, ok, so best value here Isn't that typical situation where they might, you know, weigh things like maybe accept a slightly lower technical score if the price is really good, or vice versa.
Speaker 2Exactly. That's a really critical distinction. Based on these documents, best value here is strictly defined. You have to be one of the top three technically rated offerors and have a technical score the government actually validates and have a price that passes those checks: reasonableness, completeness, balance.
Speaker 1So no wiggle room on trading price for tech score.
The HTRO Evaluation Process
Speaker 2Nope, the RFP explicitly says there will not be a subjective trade-off process between technical and price. Technical rank is first, price only gets looked at for those top technical folks.
Speaker 1Got it. That structure really sets the pace, then. Is there any kind of minimum guarantee for the winners?
Speaker 2Yes, there is. A small one. There's a minimum guarantee task order for $3,000.
Speaker 1Okay.
Speaker 2And that's specifically tied to attending the post-award conference. But it's important the source points this out. This is not a requirements contract.
Speaker 1Right.
Speaker 2So winning the main IDIQ contract doesn't actually guarantee you'll get any more task orders beyond that. First $3,000, although they do mention a mock project might get awarded.
Speaker 1Okay, so the absolute, critical first step if you want a shot at any of that $400 million is proving you're one of those top three highest technically rated offers.
Speaker 2Exactly.
Speaker 1So let's dive into that single evaluation factor Technical experience.
Speaker 2Right. Technical experience is the only factor they use to rank proposals initially and, interestingly, offerors actually self-score their own experience.
Speaker 1They score themselves?
Speaker 2Yeah, based on criteria laid out elsewhere in the RFP, in Section L. Section M refers back to it.
Speaker 1Okay, but here's the catch, I assume: the government validation process. Your self-score isn't just accepted?
Speaker 2Not at all. This is where it gets rigorous. Like you said, the government evaluation team goes through your proposal, checks your evidence and validates those scores you claimed.
Speaker 1And I bet that validation can lower your score from what you submitted.
Speaker 2Oh, absolutely, it definitely can if the proof isn't there. And here's how the ranking and validation actually works. The government lists proposals by self-score, highest first, and starts validating from the top score downwards. Now, if during validation they check someone's proof and have to lower their score, and that drop kicks them out of the top three validated spots, then they stop evaluating that one and move to the offeror with the next highest self-score on the original list, and they keep validating down that list. They continue this process until they've successfully validated scores for at least three different offerors.
Speaker 1Wow, okay. So that puts a massive emphasis on making sure your self-score is not just, you know, optimistic, but perfectly backed up. You need the proof.
Speaker 2Exactly. You could think you're number one based on your self-assessment, but if you can't prove every single point with verifiable evidence the government can check you could easily fall out of the running.
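As a rough model of that validation sequence, the sketch below checks evidence in descending self-score order and stops once the top group can no longer change, on the assumption that validation can lower a claimed score but never raise it. All offeror names, scores, and the validate() stand-in are invented for illustration; the real evaluation applies the Section L criteria to submitted evidence.

```python
def top_validated(offerors, validate, needed=3):
    """Validate claimed scores from highest self-score down; stop once
    `needed` validated scores can no longer be beaten by any unvalidated
    self-score further down the list."""
    ranked = sorted(offerors, key=lambda o: o["self_score"], reverse=True)
    validated = []  # (validated_score, name), kept sorted high to low
    for i, offeror in enumerate(ranked):
        # Validation may return a score lower than what was claimed.
        validated.append((validate(offeror), offeror["name"]))
        validated.sort(reverse=True)
        rest = ranked[i + 1:]
        best_remaining = rest[0]["self_score"] if rest else None
        if len(validated) >= needed and (
            best_remaining is None
            or validated[needed - 1][0] >= best_remaining
        ):
            break
    return validated[:needed]

# Hypothetical field of four: C overclaimed and drops out of the top
# three after validation, so evaluation continues down to D.
offerors = [{"name": n, "self_score": s}
            for n, s in [("A", 95), ("B", 90), ("C", 88), ("D", 80)]]
checked = {"A": 95, "B": 90, "C": 58, "D": 80}  # scores after validation
winners = top_validated(offerors, lambda o: checked[o["name"]])
# winners -> [(95, 'A'), (90, 'B'), (80, 'D')]
```

The design point the sketch captures is the one the sources stress: a self-score only earns a ranking slot after the government has verified it, and an unsubstantiated claim can push an offeror out of the top group entirely.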
Speaker 1It's really about proving the experience, not just having it.
Speaker 2Precisely so. Section M breaks down this technical experience factor into two main buckets program management, or PM, and staffing and retention SR.
Technical Experience Factor Breakdown
Speaker 1Okay, let's dig into those program management areas first. What kind of things are they looking for there?
Speaker 2Well, they're really focused on experience that's relevant to managing these kinds of complex civil engineering services, especially in specific, sometimes difficult environments.
Speaker 1Makes sense.
Speaker 2So under PM the criteria lean heavily on geographic experience and being able to manage scale.
Speaker 1Geographic experience? Yeah, especially in the Pacific, I'm guessing that seems to be the theme here.
Speaker 2It really is. There's significant weight on having experience managing broad labor categories. The document says 20 or more different job types, specifically within the Pacific theater.
Speaker 1Why is that so important?
Speaker 2Well, the source suggests it shows you really understand the complexities, the nuances of finding and managing a really diverse skill set in that particular region Sourcing, managing. It's different there. They also look for general experience managing projects and staff on site but in non-US locations or territories more broadly, and then specifically experience outside the continental US but not in the Pacific.
Speaker 1So OCONUS experience generally, and then non-Pacific OCONUS specifically. Right.
Speaker 2It shows you can handle the unique labor markets and logistics of operating overseas in different places. That's valuable for hiring people outside the US.
Speaker 1And you mentioned managing scale as well.
Speaker 2Yes, exactly. They want proof you can handle large teams. There are specific points for managing large numbers of employees, both in non-US locations generally, and then another point specifically for managing large teams within the Pacific Theater itself.
Speaker 1So they can gauge if you can handle potentially big task orders.
Speaker 2Pretty much. It gives insight into your capacity.
Speaker 1Now there was also something specific about A&AS, Advisory and Assistance Services, in the Pacific under PM.
Speaker 2Ah, yes, that's PM5, Experience Providing A&AS Within the Pacific Theater. The document notes it's relevant because it relates to providing quality staffing, especially in remote parts of the region.
Speaker 1Okay, seems straightforward.
Speaker 2But here's what's really interesting and a key takeaway from Section M that specific point PM5, is the designated first tiebreaker.
Speaker 1The first tiebreaker.
Speaker 2So if companies end up with the exact same validated technical score, the one with proven A&AS experience in the Pacific under PM5 wins the tie. It seems the Air Force really sees that specific experience as a critical difference maker if all other technical points are equal.
Speaker 1That really highlights how much they value that specific capability in that location.
Speaker 2Yeah.
Speaker 1Okay, let's shift to the second main technical category.
Speaker 2Yeah.
Speaker 1Staffing and retention SR. What are they looking at here? Metrics practices.
Speaker 2Yeah, this bucket is all about your ability to find the right people, hire them and, importantly, keep them, especially again in the challenging Pacific theater. The evaluation looks closely at key metrics.
Speaker 1Like retention rate, is that one?
Speaker 2Exactly. SR1 is retention rate, and they're specific. It's calculated over at least two years using a formula: retained full-time equivalents divided by required full-time equivalents, times 100. And the source emphasizes why. A high retention rate shows you can keep qualified people on board. That reduces risk for the government and ensures mission continuity.
Speaker 1It makes sense.
Speaker 2And significantly SR1, that retention rate. That's the second tie breaker.
Speaker 1Ah, so if PM5, the A&AS Pacific experience, doesn't break a tie, then your documented retention rate does.
Speaker 2It's the next deciding factor.
Speaker 1Interesting. Okay. What other metrics under SR?
Speaker 2Well, there's SR2, which is fill rate, Again a formula FTEs you have on board divided by the total FTEs required times 100. This gives them insight into how strong and efficient your recruiting process is.
Speaker 1How quickly you can get positions filled.
Speaker 2Sort of. But SR4 is more directly about speed. That one looks at your average time to fill a Pacific theater vacancy, measured in days. Specifically in the Pacific again, yes, and the document explicitly calls this out as crucial. It shows your efficiency, your expertise in filling jobs quickly in what they call a top priority region, the dynamic Pacific theater.
Speaker 1Got it. And wasn't there also a point about bonuses?
Speaker 2Yep SR3. It's a simple yes-no question Do you use retention or relocation bonuses?
Speaker 1Okay.
Speaker 2It's a straightforward question about whether you employ that common strategy to you know, incentivize effective hiring and keep people, especially in tough locations. Just another piece of the puzzle about your staffing approach.
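The staffing metrics described above reduce to simple formulas. Here is a toy Python sketch of SR1, SR2 and SR4; the function names and sample figures are invented for illustration, and the RFP's own definitions govern the actual calculations.

```python
def retention_rate(retained_ftes, required_ftes):
    """SR1: retained FTEs / required FTEs x 100, over at least two years."""
    return retained_ftes / required_ftes * 100

def fill_rate(onboard_ftes, required_ftes):
    """SR2: FTEs on board / total FTEs required x 100."""
    return onboard_ftes / required_ftes * 100

def avg_time_to_fill(days_per_vacancy):
    """SR4: average days to fill a Pacific theater vacancy."""
    return sum(days_per_vacancy) / len(days_per_vacancy)

# Hypothetical workforce: 45 of 50 required FTEs retained, 48 on board,
# and three recent vacancies filled in 30, 45 and 60 days.
print(retention_rate(45, 50))          # 90.0
print(fill_rate(48, 50))               # 96.0
print(avg_time_to_fill([30, 45, 60]))  # 45.0
```

SR3, the bonus question, is just a yes/no data point, so it needs no formula.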
Speaker 1Okay, so summing up the technical side, it's a very detailed, highly validated assessment. They're looking at specific provable experience points, with a heavy, heavy emphasis on working effectively in the Pacific and managing a stable, qualified workforce there.
Speaker 2That's a good summary.
Speaker 1All right. So after all that technical scoring and validation, then comes factor two price.
Speaker 2Exactly. Price evaluation only happens after the technical evaluation is complete. And again, this is where that HTRO method is so important. Only the offerors who actually achieve one of the top three highest government-validated technical scores will even have their price looked at.
Speaker 1Seriously so. If your technical score after validation doesn't land you in that absolute top tier, they don't even open the price envelope, metaphorically speaking.
Speaker 2That's how it reads. Your price volume isn't considered if you're not in that top validated group. Technical score is the absolute gatekeeper.
Speaker 1Okay, wow. And they're not evaluating the price for the whole potential $400 million ceiling, are they?
Speaker 2No good point. The source is clear on that. The price they evaluate is for a mock task order, a representative sample project.
Speaker 1Oh, okay.
Speaker 2The document actually identifies it specifically as DET2, and it notes the government reserves the right to actually award this mock task order to the winners later.
Speaker 1Okay, so they look at the price for this mock project. What are they checking for? Just the lowest number?
Speaker 2No, not just lowest. They evaluate it for three things: completeness, balance and reasonableness. And they use standard government techniques for this, the ones found in FAR 15.404.
Speaker 1Completeness, balance, reasonableness yeah, what would make a price fail those checks Like what's incomplete or unacceptable?
Speaker 2Well, the source gives some pretty clear examples. Bidding $0 for a labor category that obviously costs money, that's a fail.
Speaker 1Right.
Speaker 2Or just completely omitting a rate you were supposed to provide or other missing pricing info. Any of those can make your proposal incomplete, unacceptable.
Speaker 1OK, what about unbalanced or unreasonable?
Speaker 2Unbalanced might be. You know, pricing some things way too high and others way too low, maybe to game the system and unreasonable is just well too high overall compared to what they expect or what others propose. Both of those are grounds for rejection too.
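As a rough illustration of those three screens, here is a hedged Python sketch. The thresholds, the shape of the rates data, and the check_price name are assumptions made up for this example; in the real evaluation, FAR 15.404 techniques and professional judgment apply, not fixed cutoffs.

```python
def check_price(rates, benchmark):
    """Flag the kinds of problems the sources describe: $0 or missing
    rates (incomplete), rates skewed both far high and far low
    (unbalanced), or an overall price well above expectations
    (unreasonable). Returns a list of findings; empty means pass."""
    findings = []
    for category, rate in rates.items():
        if rate is None:
            findings.append(f"incomplete: no rate for {category}")
        elif rate == 0:
            findings.append(f"incomplete: $0 rate for {category}")
    priced = {c: r for c, r in rates.items() if r}
    # Unbalanced: some rates far above benchmark while others sit far
    # below it (illustrative 2x / 0.5x cutoffs).
    ratios = [priced[c] / benchmark[c] for c in priced]
    if ratios and max(ratios) > 2.0 and min(ratios) < 0.5:
        findings.append("unbalanced: rates skewed high and low")
    # Unreasonable: total well above the expected total (25% assumed).
    if priced and sum(priced.values()) > 1.25 * sum(benchmark[c] for c in priced):
        findings.append("unreasonable: total price too high")
    return findings

# Hypothetical mock task order: a $0 engineer rate and an omitted
# admin rate both make the proposal incomplete.
rates = {"engineer": 0, "planner": 120.0, "admin": None}
benchmark = {"engineer": 95.0, "planner": 100.0, "admin": 40.0}
problems = check_price(rates, benchmark)
```

The usage example mirrors the transcript's own failure cases: bidding $0 for a labor category that obviously costs money, and omitting a required rate altogether.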
Speaker 1So let's play this out. Say you're one of the top three technically validated offerors, but your price for the mock task order gets flagged as, say, incomplete or unreasonable. What happens then? Do they just pick the offeror with the next best price among those top three?
Speaker 2No, it doesn't work like that. This goes back to the HTRO structure. If your proposal is technically top ranked but your price is deemed unacceptable, your entire proposal is removed from consideration at that point.
Speaker 1The whole thing out.
Speaker 2Yep. They don't then look at your second-best price or anything. Instead, they go back to the list of offerors ranked by their validated technical score and evaluate the price of the next highest technically scored offeror who wasn't in the initial top three. And they keep doing that, checking the price of the next one down the technical list, until they find three offerors who are both in that highest technically rated group and have a price that passes the complete, balanced and reasonable checks.
Speaker 1That is. That's a really interesting process. It seriously emphasizes that technical capability is the first absolute hurdle and price is more like a secondary pass fail check, just for that elite group.
Speaker 2Exactly. It fundamentally shapes how companies need to approach bidding on this Technical excellence. Provable excellence comes first.
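That sequential logic, technical rank as the gate and price as a pass/fail screen applied down the list, can be sketched roughly like this. The offeror names and the price_acceptable stand-in are hypothetical; in practice that predicate is the complete, balanced and reasonable screen.

```python
def select_awardees(ranked_offerors, price_acceptable, awards=3):
    """ranked_offerors: list ordered by validated technical score,
    highest first. Walk down the list, keeping only offerors whose
    price passes, until `awards` winners are found."""
    winners = []
    for offeror in ranked_offerors:
        # No trade-off: a failed price removes the whole proposal,
        # and evaluation moves to the next offeror down the list.
        if price_acceptable(offeror):
            winners.append(offeror)
        if len(winners) == awards:
            break
    return winners

# Hypothetical: the #2 technical offeror submits an unacceptable price,
# so the #4 offeror moves into the award group.
ranked = ["Alpha", "Bravo", "Charlie", "Delta"]
winners = select_awardees(ranked, lambda o: o != "Bravo")
# winners -> ["Alpha", "Charlie", "Delta"]
```

Notice there is no comparison between Bravo's price and anyone else's: a failed screen simply removes the proposal, exactly the no-trade-off behavior the transcript describes.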
Speaker 1Okay, so let's try to summarize the winning combination here, based on everything in these sources.
Substantiation Requirements and Tiebreakers
Speaker 2All right, so award goes to the three offerors who have the highest government-validated technical scores and, critically, whose proposed price for that mock task order is judged to be complete, balanced and reasonable.
Speaker 1That's not all, though, right.
Speaker 2No, there are other standard checks. Their proposal also has to conform to all the other requirements in the solicitation. They have to be determined responsible. That means meeting general business standards found in FAR 9.104-1. And any potential organizational conflicts of interest, OCI, those have to be dealt with, avoided, mitigated or neutralized somehow.
Speaker 1And tying this whole technical scoring piece together is that burden of proof thing you mentioned earlier.
Speaker 2It is huge. Absolutely critical, cannot stress this enough. The source document explicitly states that the responsibility, the burden, is entirely on the offeror to substantiate every single point they claim in their self-score.
Speaker 1Every point.
Speaker 2Every point. This is where that statement of facts form is vital. It has to have the details and, importantly, government point of contact information so the evaluators can actually verify your claims.
Speaker 1And the consequences if you can't substantiate something.
Speaker 2They're severe. If you fail to substantiate even one single claim, the score for that claim can be adjusted downward, potentially, the source says, even to zero for that specific element. So you can't just say, yeah, we did that. You have to provide clear, verifiable evidence that the government can independently confirm.
Speaker 1That really demands incredibly diligent record keeping beforehand and then super clear proposal writing that explicitly links every claim to the specific proof.
Speaker 2It's absolutely non-negotiable if you want to succeed in this kind of evaluation.
Speaker 1Okay, and just to quickly recap those tiebreakers again.
Speaker 2Right. If there's a tie among the validated technical scores, first tiebreaker is PM5, that advisory and assistance services experience in the Pacific. If they're still tied after that, it goes to SR1, the retention rate.
Speaker 1And if somehow they're still tied after both of those?
Speaker 2Then the document says it's a random selection among those still tied.
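That three-level tiebreaker order maps naturally onto a sort key. A minimal sketch follows, with hypothetical dictionary fields standing in for the evaluated data; the random draw is the last resort, just as the document describes.

```python
import random

def ranking_key(offeror):
    """Rank by validated technical score, then the tiebreakers
    in order: PM5 (A&AS in the Pacific), SR1 (retention rate),
    and finally a random draw."""
    return (
        offeror["validated_score"],  # primary: validated technical score
        offeror["pm5_points"],       # first tiebreaker: PM5
        offeror["sr1_retention"],    # second tiebreaker: SR1
        random.random(),             # final tiebreaker: random selection
    )

# Hypothetical tie at 90 points: A wins because of PM5 experience,
# even though B has the better retention rate.
offerors = [
    {"name": "A", "validated_score": 90, "pm5_points": 1, "sr1_retention": 88.0},
    {"name": "B", "validated_score": 90, "pm5_points": 0, "sr1_retention": 95.0},
]
ranked = sorted(offerors, key=ranking_key, reverse=True)
# ranked[0]["name"] -> "A"
```

The tuple ordering makes the hierarchy explicit: SR1 is only ever consulted when both the score and PM5 are equal, and the random component only when all three are.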
Speaker 1A random draw, wow, okay. Now what strikes me is, after we've talked through all these objective points, these formulas, the validation process, section M still apparently includes a line acknowledging subjectivity.
Speaker 2Yeah, it does. It says something like the source selection process by its nature is subjective. Therefore, professional judgment is implicit throughout the entire process.
Speaker 1So, even with this very structured score based approach, there's still a layer of human judgment involved.
Speaker 2There is. Interpreting the evidence provided, validating those claims, making the final call on responsibility and overall conformance. It's not purely a mechanical calculation. Professional judgment is still part of it.
Speaker 1Right, that adds an interesting layer. Ok, so beyond the technical score, the price check, responsibility, OCI. Any other final points from the source?
Speaker 2Well, it does mention the government's intention is to award, based just on the initial proposals, without holding discussions.
Speaker 1So submit your best shot right out of the gate.
Speaker 2Basically. However, they do reserve the right to conduct discussions if they feel it's necessary, but typically that would only be with offerors they've already determined are in the competitive range.
Speaker 1Okay, so let's bring this all back. What does this incredibly detailed look mean for you, the listener? Why is it important to get this deep into these specific criteria?
Speaker 2Well, I think for anyone who works in or around government contracting, or even if you're just curious about how huge amounts of public money get allocated for vital things like global civil engineering services, this deep dive really shows the precise logic at play.
Speaker 1Yeah, it really highlights that winning isn't just about saying you have general experience, or even about being the cheapest.
Speaker 2Not at all. It's about proving you have specific, verifiable experience in the exact areas the Air Force has flagged as critical, like managing diverse labor categories, handling large teams, especially in distinct regions like the Pacific, and showing you have solid staffing and retention practices.
Speaker 1And, crucially, meticulously documenting and proving every single one of those claims so the government can check it off, validate it.
Speaker 2Right Plus, having a price that passes that detailed scrutiny for a representative project, but only after your technical merit has already put you in the top group. It's not some weighted average, it's a sequential evaluation technical first, then price validation for the leaders.
Strategic Implications for Contractors
Speaker 1So yeah, this detailed look really shows how the Air Force is blending validated technical skill with price reasonableness for CE-SPAC. They're putting a huge premium on provable experience in areas vital to Pacific operations and making contractors carry that heavy burden of proof.
Speaker 2Which you know leads us to maybe a final provocative thought for you to mull over after hearing all this.
Speaker 1Okay.
Speaker 2Given this intense focus on verifiable past performance in very specific locations and on detailed staffing metrics things like retention rates, average time to fill vacancies how might companies proactively think about structuring their internal data tracking? How might they manage project execution now and collect those government references now specifically to position themselves better for these kinds of detailed, best value government contracts years down the road?
Speaker 1That's a great point. The operational data you capture and how you manage it today could literally be the key to winning major contracts tomorrow.
Speaker 2Exactly. It's about preparing long before the RFP even drops.