The Digital Transformation Playbook

Human and Machine: How the Air Force Is Navigating the AI Era

• Kieran Gilmurray

What happens when artificial intelligence meets national security? The United States Air Force is navigating this crucial intersection with a comprehensive strategy outlined in their Air Force Doctrine Note 25-1. This document provides a roadmap for harnessing AI's tremendous potential while acknowledging its inherent limitations and risks.

TLDR:

  • AI represents the latest chapter in warfare's technological evolution, following WWII computing advances and precision capabilities demonstrated in Operation Desert Storm
  • The Air Force distinguishes between narrow AI (specialized tools), artificial general intelligence (which doesn't currently exist), and expert systems that complement human expertise
  • Human-machine teaming places humans in various oversight roles: in-the-loop (approving machine recommendations), on-the-loop (allowing machine action unless vetoed), or up-the-loop (machine decides independently)

At its core, the Air Force approach centers on human-machine teaming rather than replacement. AI serves as an amplifier for human capabilities, combining machine strengths (rapid data processing, pattern recognition) with uniquely human qualities (judgment, intuition, ethical reasoning). This partnership manifests across all core functions – from enhancing early warning systems and targeting in air superiority missions to optimizing logistics and maintenance for global mobility operations.

The strategy makes crucial distinctions between different AI applications. It differentiates narrow AI (specialized tools) from artificial general intelligence (which doesn't currently exist), and carefully delineates human oversight roles in various scenarios. Perhaps most importantly, it recognizes that while AI excels at complicated problems with consistent patterns (like logistics optimization), it struggles with complex "wicked problems" that require deep human understanding of context and consequences.

Building this AI-enabled force isn't simply about acquiring technology. It demands creating an AI-fluent workforce, establishing robust data management practices, securing sufficient computing power, and recruiting diverse talent across technical and ethical domains. All while adhering to strict ethical principles: responsible, equitable, traceable, reliable, and governable.

The document acknowledges a profound tension: the United States commits to ethical AI development while potential adversaries like China and Russia aggressively pursue military AI applications without similar constraints. This raises fundamental questions about balancing technological advantage with core values – questions that will shape not just military strategy but our broader relationship with increasingly autonomous systems.

Support the show


๐—–๐—ผ๐—ป๐˜๐—ฎ๐—ฐ๐˜ my team and I to get business results, not excuses.

โ˜Ž๏ธ https://calendly.com/kierangilmurray/results-not-excuses
โœ‰๏ธ kieran@gilmurray.co.uk
๐ŸŒ www.KieranGilmurray.com
๐Ÿ“˜ Kieran Gilmurray | LinkedIn
๐Ÿฆ‰ X / Twitter: https://twitter.com/KieranGilmurray
๐Ÿ“ฝ YouTube: https://www.youtube.com/@KieranGilmurray

Speaker 1:

Ever think about how quickly AI is changing, well, everything around us?

Speaker 2:

It's pretty staggering.

Speaker 1:

Yeah, from your phone figuring out what song you might like, to huge companies predicting market shifts. AI is just, it's in so many corners of our lives now. But what happens when that same really powerful technology starts reshaping something as critical as, say, national security?

Speaker 2:

That's the big question, isn't it?

Speaker 1:

Welcome to the Deep Dive. Today we're taking, well, a deep dive into the United States Air Force's Strategic Playbook for Artificial Intelligence, that's right. Our source material is Air Force Doctrine Note 25-1, specifically focused on AI.

Speaker 2:

You can think of it as their official guide. Really, it's a foundational document.

Speaker 1:

Yeah, and it's built from joint policies, public law, lots of academic research and expert consultations too.

Speaker 2:

A pretty comprehensive look.

Speaker 1:

So our mission today is basically to unpack this vital document for you.

Speaker 2:

Yeah, break it down.

Speaker 1:

We're going to pull out the most important insights about AI's, you know, immense promise, but also its inherent risks.

Speaker 2:

The upsides and the downsides.

Speaker 1:

Exactly, and its practical applications and all the crucial considerations for how the Air Force actually plans to use it.

Speaker 2:

How it works in the real world or how they want it to work.

Speaker 1:

Right, so by the end you'll have a real shortcut, hopefully, to understanding the nuances of the Air Force's whole approach to AI. Sounds good, let's get into it. Okay, you know, when we talk about AI transforming warfare, it's really just the latest chapter in this long, pretty fascinating history of technological leaps, isn't it?

Speaker 2:

Absolutely. It builds on what came before.

Speaker 1:

Just look back to World War II. You had the British Bombe machine, basically an early computer, cracking the German Enigma cipher.

Speaker 2:

Which was huge game-changing for the Battle of the Atlantic.

Speaker 1:

Totally turned the tide. Gave the Allies this, like, exquisite intelligence. Then a bit later you had those huge electronic supercomputers, all vacuum tubes, crunching numbers for missile programs, space stuff. Massive calculations. And then, of course, silicon-based semiconductors came along, miniaturizing everything. That really drove innovation, didn't it? Cold War systems, advanced weapons, putting computing power directly onto platforms, and that shifted the Air Force from just relying on sheer mass. You know, quantity.

Speaker 2:

To achieving things like those unprecedented precision attacks we saw in Operation Desert Storm.

Speaker 1:

Exactly, that precision was key, and now we're squarely in what this document calls a data-driven era.

Speaker 2:

Yeah, that term comes up a lot.

Speaker 1:

Academic and commercial innovations are making these cutting edge AI capabilities more affordable, more available than ever before.

Speaker 2:

It's not just military labs anymore.

Speaker 1:

Not at all. Our world is just saturated with smart devices, right, connecting us through this huge digital domain. Unprecedented amounts of information flowing around, and the Air Force sees incredible promise in harnessing this. They expect AI to supercharge intelligence, surveillance and reconnaissance.

Speaker 2:

Which they call ISR.

Speaker 1:

Right, ISR. Imagine sifting through mountains of data to find those hidden needles in a haystack almost instantly.

Speaker 2:

That's the goal finding the signal in the noise.

Speaker 1:

And it's about fueling robotics advancements, enabling intelligent swarms of autonomous agents, maybe doing tasks that were impossible without direct human control.

Speaker 2:

Swarms are definitely a big area of interest.

Speaker 1:

We're also talking about accelerating training, gaining a significant information advantage, strengthening readiness, boosting efficiency across the board, and even generating synthetic experiences, like virtual environments, for both machines and humans to learn from.

Speaker 2:

Practice scenarios testing.

Speaker 1:

Yeah, and beyond that, AI can give planners these advanced tools for really complex tasks like optimizing supply chains, running sophisticated war games with real-time analysis.

Speaker 2:

Even recommending new scenarios, potentially finding strategic advantages in international partnerships. It touches so many areas because, at its core, AI promises to help commanders make more informed decisions, better decisions, hopefully, across every level of warfare: strategic, operational, tactical.

Speaker 1:

From the big picture down to the details.

Speaker 2:

Exactly, and from a national security standpoint, it's seen as absolutely critical for the US to lead in developing safe, secure and trustworthy AI.

Speaker 1:

Trustworthy is a big one there.

Speaker 2:

Huge and also fostering responsible international governance around it. It's about getting the benefits while carefully managing the global risks.

Speaker 1:

Okay, so that brings us neatly to the flip side.

Speaker 2:

Right, the risks. Because, while AI is this powerful force multiplier, the document also notes it could be a cost-effective way for countries with fewer resources to potentially erode the US strategic advantage.

Speaker 1:

Level the playing field in some ways maybe.

Speaker 2:

Potentially, and this really plays into what the document calls great power competition. Okay, take China, for instance. They've publicly stated their goal: be the world leader in AI development by 2030.

Speaker 1:

That's ambitious.

Speaker 2:

Very. Right, and they're already using AI extensively for, well, monitoring and repression internally, but also focusing on becoming an intelligentized force.

Speaker 1:

Intelligentized.

Speaker 2:

Yeah, applying AI to command decision making, logistics, cyber ops, swarms, missile guidance, you name it. Similarly, Russia is making big strides in unmanned aerial vehicles, UAVs, and autonomous ground vehicles, even underwater systems.

Speaker 1:

So it's not just China.

Speaker 2:

Not at all. Both nations are aggressively integrating AI for more effective command and control, targeting reconnaissance, electronic warfare, the whole spectrum.

Speaker 1:

And here's a key pitfall the document really highlights, something that jumped out at me: delegating decision making to AI could potentially lower the threshold for conflict.

Speaker 2:

That's a major concern.

Speaker 1:

Think about it AI can accelerate actions and reactions far beyond human speed.

Speaker 2:

Much faster.

Speaker 1:

So it raises this really important question about where the human hand, or maybe the human mind, still needs to be firmly in control.

Speaker 2:

Precisely. And then you have the very real concerns about data access. AI development is often collaborative, right?

Speaker 1:

Right, you mentioned commercial, academic, military partners.

Speaker 2:

Exactly, all with potentially different information security standards. So carefully figuring out what data AI systems can access, and who can access it, becomes paramount.

Speaker 1:

Because the risk is adversaries getting it.

Speaker 2:

That's one risk. Another significant one they mention is called data poisoning. Poisoning? Yeah, adversaries intentionally feeding bad data into an AI system during its training or operation.

Speaker 1:

So it learns the wrong things or makes bad calls.

Speaker 2:

Exactly. It can lead to entirely erroneous decisions. It creates pathways for deception, surprise and ultimately it really challenges our trust in the AI.
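
To make that concrete, here's a toy Python sketch, purely illustrative and not from the doctrine note, assuming numpy is available: a tiny nearest-neighbor classifier trained on made-up sensor readings, scored before and after an adversary flips 40% of its training labels.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two classes of simulated "sensor readings": well-separated clusters.
    train_x = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
    train_y = np.array([0] * 200 + [1] * 200)
    test_x = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
    test_y = np.array([0] * 100 + [1] * 100)

    def nearest_neighbor_accuracy(labels):
        # 1-nearest-neighbor: each test point takes the label of the closest
        # training point, so poisoned labels feed straight into predictions.
        dists = ((test_x[:, None, :] - train_x[None, :, :]) ** 2).sum(axis=2)
        preds = labels[dists.argmin(axis=1)]
        return (preds == test_y).mean()

    print("accuracy, clean labels:   ", nearest_neighbor_accuracy(train_y))

    # The "attack": an adversary silently flips 40% of the training labels.
    poisoned = train_y.copy()
    flip = rng.choice(len(poisoned), size=int(0.4 * len(poisoned)), replace=False)
    poisoned[flip] = 1 - poisoned[flip]
    print("accuracy, poisoned labels:", nearest_neighbor_accuracy(poisoned))

Same model, same test data; only the training labels were tampered with, and accuracy drops toward 60%. That's the erroneous-decision pathway in miniature.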

Speaker 1:

Which could slow down adoption, even if the tech works perfectly otherwise.

Speaker 2:

Right. If people don't trust it, they won't use it effectively.

Speaker 1:

That's why the document really hammers home the need for responsible AI through understanding and experimentation.

Speaker 2:

You have to build it right, test it thoroughly and understand its limits.

Speaker 1:

It's not just about building it, it's making sure it's trustworthy, resilient. Now, when we talk about AI, you hear so many different terms thrown around. It can get confusing.

Speaker 2:

Oh yeah, Alphabet soup sometimes.

Speaker 1:

The Air Force Doctrine Note actually steps in here to clarify some of these nuanced distinctions that exist across academia and industry.

Speaker 2:

Which is helpful.

Speaker 1:

While the Department of Defense, the DOD, has its official definitions, this AFDN gives clear descriptions, just to make sure everyone within the Air Force is kind of speaking the same language about AI's potential.

Speaker 2:

On the same page. So, generally speaking, ai is technology that approximates human cognition. That might be through rule-based systems or, more commonly now, machine learning. But there's a crucial limitation, the document points out. Unlike humans, current AI does not critically examine the validity of its system's inputs or the broader consequences of its outputs.

Speaker 1:

It doesn't have that like critical thinking or judgment.

Speaker 2:

Exactly. It follows its programming and data, but doesn't step back and ask does this make sense in the bigger picture?

Speaker 1:

That distinction feels really key. So the document dives into some essential terminology. It defines narrow AI, which is sometimes called weak AI.

Speaker 2:

Right.

Speaker 1:

That's AI limited to performing a very specific task, like a specialist tool, right? Incredibly good at one thing, maybe recommending movies or playing chess.

Speaker 2:

But it can't generalize. It won't suddenly learn to bake a cake.

Speaker 1:

Right. Then there's the big one artificial general intelligence, or AGI. This is the sci-fi dream, basically.

Speaker 2:

Human-like understanding, learning across any task.

Speaker 1:

And the document is crystal clear here AGI does not currently exist.

Speaker 2:

Nope, and the consensus is it is not possible with today's capabilities. We're not close.

Speaker 1:

Good to clarify. And finally, they mention expert systems.

Speaker 2:

Yeah, these are designed to complement human experts for narrowly defined problems. Think of a medical diagnosis consultant system.

Speaker 1:

Helping a doctor. Okay, so that's narrow, general and expert.

Speaker 2:

But most of what we think of as modern AI what's really driving things today is built on machine learning or ML.

Speaker 1:

Right ML.

Speaker 2:

These are statistical algorithms. They learn from data and generalize to unseen data, without being explicitly programmed for every single scenario.

Speaker 1:

So it's like teaching by example, not by rules.

Speaker 2:

Kind of yeah, like showing a child thousands of pictures of cats to teach it what a cat looks like, rather than giving it a precise definition. They extract patterns from huge data sets.

Speaker 1:

But it's important to remember.

Speaker 2:

Right, that ML does not possess the ability to understand or utilize these disciplines independently. It's just finding statistical patterns based on how it was programmed and the data it saw.
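
A minimal sketch of that learning-from-examples idea, with invented numbers and no connection to any Air Force system: fit a statistical pattern from example data, then predict an input the model never saw.

    import numpy as np

    rng = np.random.default_rng(1)

    # Training examples: flight hours vs. maintenance cost (made-up numbers).
    hours = rng.uniform(0, 100, 50)
    cost = 3.0 * hours + 20 + rng.normal(0, 5, 50)   # hidden pattern plus noise

    # "Learning" here is just least squares: the line that best fits the data.
    slope, intercept = np.polyfit(hours, cost, 1)

    # Generalization: predict an unseen input. The model found a statistical
    # pattern; it has no concept of aircraft, hours, or money.
    print(f"predicted cost at 120 hours: {slope * 120 + intercept:.1f}")

Teaching by example, not by rules: nobody told the model the formula, it recovered it from data, and it still understands nothing about maintenance.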

Speaker 1:

Which brings us to training data and test data.

Speaker 2:

Absolutely crucial. The quality, the quantity, the diversity of this data, it's everything.

Speaker 1:

Yeah.

Speaker 2:

Because any biases or inconsistencies in the data.

Speaker 1:

Will inevitably lead to incomplete or erratic results from the AI. Garbage in, garbage out, pretty much.

Speaker 2:

Building on that, you have neural networks, NNs. These are inspired by the human brain, basically.

Speaker 1:

Oh so.

Speaker 2:

They're composed of interconnected units called neurons, arranged in multiple layers. It's a computational model. And deep learning, DL, is simply a multilayered neural network, usually many layers, that can learn from incredibly complex data sets. It can reveal patterns not recognizable by people.
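
As a toy illustration, and nothing more, here's a tiny two-layer network in numpy learning XOR, a pattern no single straight line can separate; the layers of "neurons" and the backpropagation step are the point:

    import numpy as np

    rng = np.random.default_rng(42)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])     # XOR: not linearly separable

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros((1, 8))   # hidden layer, 8 neurons
    W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros((1, 1))   # output neuron
    lr = 1.5

    for _ in range(20000):
        hidden = sigmoid(X @ W1 + b1)          # forward pass, layer by layer
        out = sigmoid(hidden @ W2 + b2)
        # Backpropagation: push the output error back through each layer.
        d_out = (out - y) * out * (1 - out)
        d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
        W2 -= lr * hidden.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_hid;      b1 -= lr * d_hid.sum(axis=0, keepdims=True)

    print(out.ravel().round(3))   # typically converges to about [0, 1, 1, 0]

"Deep" learning is essentially this with many more layers and vastly more data, which is what lets it surface patterns people can't see.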

Speaker 1:

Because it can process so much more complexity.

Speaker 2:

Exactly.

Speaker 1:

So, okay, those are kind of the building blocks, but what does that mean for the real-world applications the Air Force might use? The document highlights several. There's natural language processing, NLP, which uses deep learning to help computers comprehend, interpret and generate human language.

Speaker 2:

Like your voice assistant, Siri or Alexa, or machine translation tools.

Speaker 1:

Right. Then we have generative AI, which is huge right now. It analyzes vast amounts of data to find patterns and then creates new content.

Speaker 2:

Totally new stuff: words, audio, images, even video.

Speaker 1:

And when these systems can handle multiple types of inputs at once, like images and text.

Speaker 2:

They're called multimodal AI, processing different kinds of data together.

Speaker 1:

And, of course, the incredibly popular large language model or LLM.

Speaker 2:

Like ChatGPT and others.

Speaker 1:

These use probabilistic models analyzing language patterns to understand and generate human language in a remarkably fluid way.

Speaker 2:

They predict the next word, essentially based on massive data sets.
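
A real LLM is vastly larger, but that predict-the-next-word idea can be sketched in a few lines. This toy bigram model just counts which word follows which in a made-up corpus:

    from collections import Counter, defaultdict

    corpus = ("the aircraft returned to base the aircraft refueled at base "
              "the aircraft returned to the ramp").split()

    follows = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        follows[word][nxt] += 1          # count which words follow which

    def next_word(word):
        # Probabilistic in spirit: pick the statistically likeliest successor.
        return follows[word].most_common(1)[0][0]

    print(next_word("aircraft"))   # 'returned' (seen twice, vs 'refueled' once)
    print(next_word("the"))        # 'aircraft' (its most common successor)

Scale the counting up to billions of parameters and trillions of words and you get the remarkably fluid language generation we see today, still pattern, not understanding.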

Speaker 1:

And finally, computer vision or CV. This is the AI field that extracts meaningful information from visual inputs.

Speaker 2:

Using techniques like image processing, pattern recognition, think surveillance, target recognition, medical imaging.

Speaker 1:

Seeing and interpreting the visual world.

Speaker 2:

Precisely. Now, related to all this, it's really critical to clarify two terms, especially for military applications: automation versus autonomy.

Speaker 1:

Okay, what's the difference there?

Speaker 2:

Automation is about a system performing narrow and constrained tasks with low levels of complexity. Think of a car factory assembly line.

Speaker 1:

Repetitive, predictable.

Speaker 2:

Exactly. It's precise, but it doesn't make independent choices based on changing conditions. Autonomy, on the other hand, involves rules-based processing to undertake a variety of tasks of varying levels of complexity.

Speaker 1:

More dynamic.

Speaker 2:

Yes. Think of a self-driving car. It has to dynamically react to unknown variables on the road: other cars, pedestrians, unexpected obstacles.
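
One way to see the distinction in code, my framing rather than the document's: automation replays a fixed script, while autonomy applies rules to whatever the environment happens to present.

    import random

    def automated_assembly(step_count=3):
        # Automation: narrow, constrained, the same actions every time.
        for step in range(step_count):
            print(f"station {step}: weld panel")   # no choices, no surprises

    def autonomous_drive(road):
        # Autonomy: rules-based processing applied to changing conditions.
        for obstacle in road:
            if obstacle == "pedestrian":
                print("brake")
            elif obstacle == "slow car":
                print("change lane")
            else:
                print("continue")

    automated_assembly()
    autonomous_drive(random.choices(["clear", "pedestrian", "slow car"], k=5))

The assembly line never surprises you; the driving loop makes a different sequence of choices every run, which is exactly why it needs more trust.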

Speaker 1:

So the Air Force uses a mix of both automation and autonomy.

Speaker 2:

Yeah, to augment their airmen's performance. And the document points out something interesting: with each system's process repetition, its competence builds.

Speaker 1:

It gets better with practice.

Speaker 2:

Right and that in turn really boosts the airman's confidence in using that system for decision making. It builds that necessary trust.

Speaker 1:

That makes sense. You trust what works reliably. So, with this foundation laid, how does the Air Force actually plan to put AI into action? Their core approach, you mentioned, is human-machine teaming, HMT.

Speaker 2:

That's the central concept.

Speaker 1:

The idea here, as the document puts it, is that AI will augment the performance of airmen. It functions as an amplifier for their capabilities.

Speaker 2:

Not replacing humans, but empowering them. That's the key message.

Speaker 1:

So it's about synergy.

Speaker 2:

Exactly, that's where the synergy comes in. It's about strategically combining human strengths, our intuition, our reasoning, our judgment, with machine strengths like that lightning-fast data processing.

Speaker 1:

To maximize capability.

Speaker 2:

Right, and the document stresses that military discretion lies with airmen. Humans retain the ultimate decision making authority.

Speaker 1:

Always a person responsible.

Speaker 2:

That's the principle. Now, the dynamic relationship between the human and the machine, how much autonomy the machine gets, really depends on the situation and the risk tolerance. For instance, if an occasional false prediction from an AI is tolerable, maybe not critical, the machine might be given more leeway. But if miscalculations could have significantly detrimental effects, then human judgment needs to be much higher, much closer supervision.

Speaker 1:

Makes sense Adjust the oversight based on the stakes.

Speaker 2:

Exactly.

Speaker 1:

And to give people a bit more context, the document mentions some common ways this HMT plays out, terms you might hear.

Speaker 2:

Right these constructs.

Speaker 1:

There's human in the loop. That's where the machine recommends something, but a person makes the final decision. Click yes or no.

Speaker 2:

Person approves or denies.

Speaker 1:

Then human on the loop. Here the machine's recommendation is implemented, unless a person actively steps in and vetoes it.

Speaker 2:

Acts by default, unless overridden.

Speaker 1:

And finally, human up the loop. The machine decides, a person cannot override.

Speaker 2:

Full autonomy in that specific action, which obviously requires immense trust and careful design. And a huge part of getting to that trust, as you can imagine, is transparency and understanding. The document really highlights that transparency, explainability, responsivity, predictability and even directability of AI systems are crucial for understanding system behavior and building confidence in the HMT.
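
Those three oversight constructs can be sketched as plain control flow. The mode names come from the document; the function, its arguments, and the convoy example are hypothetical:

    def act(recommendation, mode, human_approves=None, human_vetoes=None):
        if mode == "in-the-loop":
            # Machine recommends; a person must actively approve before action.
            return recommendation if human_approves else "no action"
        if mode == "on-the-loop":
            # Machine acts by default; a person may step in and veto.
            return "no action" if human_vetoes else recommendation
        if mode == "up-the-loop":
            # Machine decides independently; no override path exists.
            return recommendation
        raise ValueError(f"unknown mode: {mode}")

    rec = "reroute convoy"
    print(act(rec, "in-the-loop", human_approves=True))    # reroute convoy
    print(act(rec, "on-the-loop", human_vetoes=True))      # no action
    print(act(rec, "up-the-loop"))                         # reroute convoy

Notice how the human's role shrinks from gatekeeper, to safety brake, to absent, which is why the risk tolerance of the mission should drive the choice of mode.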

Speaker 1:

You need to know how it's working, why it's making the recommendations it is.

Speaker 2:

If it's just a black box, it's very hard to trust it, especially when the stakes are high.

Speaker 1:

Okay, so how does this HMT philosophy translate into specific Air Force core functions? How will AI actually impact their key missions day to day?

Speaker 2:

Right, let's look at the applications.

Speaker 1:

Let's start with air superiority gaining and maintaining control of the air.

Speaker 2:

Okay, AI is set to facilitate improved execution, especially in contested environments. Think enhancing early warning systems, detection systems, ISR targeting.

Speaker 1:

Helping aircraft support missions like combat air patrol, CAP, or suppression of enemy air defenses.

Speaker 2:

Exactly. And the document points to things like those autonomous swarms we mentioned, and also semi-autonomous collaborative combat aircraft, CCAs.

Speaker 1:

The robot wingman concept.

Speaker 2:

Sort of, yeah. These could be used across many air superiority missions. And, you know, in a pretty striking demonstration of this growing confidence, the Secretary of the Air Force was actually a passenger back in May 2024 in a modified F-16 that had an AI-controlled dogfighting module.

Speaker 1:

Wow, the AI was flying the plane in a dogfight scenario, with the Secretary on board.

Speaker 2:

Apparently so. That shows a huge leap in trust and capability demonstration.

Speaker 1:

Definitely makes a statement. Okay, moving to global precision attack, striking targets accurately.

Speaker 2:

Here AI is expected to improve aircraft and munition targeting. AI computer vision, CV, combined with better target recognition could help minimize civilian risk, for instance.

Speaker 1:

Better identification, less collateral damage. Hopefully.

Speaker 2:

That's the aim. AI modeling is also expected to enhance stealth capabilities, finding ways to reduce signatures, and AI will accelerate that critical decision cycle in sensor-to-shooter integration.

Speaker 1:

Getting information from a sensor to a weapon faster.

Speaker 2:

Much faster, allowing for more precise effects delivered more quickly.

Speaker 1:

Okay, how about rapid global mobility, moving personnel and equipment?

Speaker 2:

AI is seen as vital here for something called agile combat employment, or ACE.

Speaker 1:

ACE, right. Adaptive basing, moving quickly.

Speaker 2:

Yeah, AI could help prioritize operating locations and use predictive analysis for maintenance, creating predictive maintenance processes, so you fix things before they break.
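
A toy version of "fix it before it breaks", with invented sensor numbers: extrapolate a wear trend and schedule maintenance before it crosses a failure threshold.

    import numpy as np

    flight_hours = np.array([0, 50, 100, 150, 200], dtype=float)
    vibration = np.array([1.0, 1.3, 1.7, 2.1, 2.4])    # made-up sensor trend
    FAILURE_LEVEL = 3.0

    # Fit the wear trend, then extrapolate to the failure threshold.
    slope, intercept = np.polyfit(flight_hours, vibration, 1)
    predicted_failure = (FAILURE_LEVEL - intercept) / slope

    print(f"schedule maintenance before ~{predicted_failure:.0f} flight hours")

Here that comes out to roughly 280 hours, so the part gets swapped on a scheduled sortie instead of failing on an unscheduled one. Real predictive maintenance uses far richer models, but the logic is the same.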

Speaker 1:

Optimizing logistics, which is always a huge challenge.

Speaker 2:

Immense. And there's a great example here, an AFWERX collaboration with an industry partner. They developed a semi-autonomous airlift capability. Picture this: during the Bamboo Eagle and Agile Flag exercises in August 2024, this AI-enabled asset successfully delivered urgently needed mission-capable parts orders just in time to multiple geographically separated locations.

Speaker 1:

So like an autonomous cargo drone. Basically.

Speaker 2:

Something along those lines, yeah.

Speaker 1:

Yeah.

Speaker 2:

And it really relieved pressure on traditional human crewed airlift assets. Delivered parts where they were needed when they were needed.

Speaker 1:

That's a very practical application.

Speaker 2:

Very. Now, in global intelligence, surveillance and reconnaissance, ISR, again multimodal AI shows enormous promise for things like real-time pop-up threat detection and identification.

Speaker 1:

Seeing unexpected threats as they appear.

Speaker 2:

Yeah, and fusing data from different sensors for better multi-domain situational awareness, trying to eliminate those frustrating disconnects between different intelligence platforms.

Speaker 1:

So everyone sees the same picture faster.

Speaker 2:

Hopefully. And autonomous ISR platforms also open up the potential for persistent collection in previously denied areas, where it might be too risky to send crewed aircraft.

Speaker 1:

Staying on station longer, seeing more.

Speaker 2:

And finally, command and control C2, the brain of the operation.

Speaker 1:

How does AI fit in there?

Speaker 2:

It will assist with targeting, resource allocation, planning, scheduling, complex decision support tasks. And imagine AI-enabled communication networks providing incredible resiliency and survivability. How so? They could potentially redirect communication pathways to instantaneously restore connection if one link gets severed, or re-vector data feeds automatically if a higher C2 echelon gets degraded or knocked out.
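
That rerouting idea can be sketched with a toy network graph and a breadth-first search for an alternate path once a link is severed; the topology here is invented, not any real C2 architecture:

    from collections import deque

    links = {"HQ": {"relayA", "relayB"}, "relayA": {"HQ", "unit"},
             "relayB": {"HQ", "unit"}, "unit": {"relayA", "relayB"}}

    def find_path(start, goal, severed=frozenset()):
        # Breadth-first search over whatever links remain usable.
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in links[path[-1]]:
                edge = frozenset((path[-1], nxt))
                if nxt not in seen and edge not in severed:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(find_path("HQ", "unit"))                                       # a primary route
    print(find_path("HQ", "unit", severed={frozenset(("HQ", "relayA"))}))  # reroute via relayB

Cut the HQ-to-relayA link and the search immediately returns the surviving route, the "self-healing" behavior in miniature.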

Speaker 1:

Self-healing networks, essentially.

Speaker 2:

That's the idea A huge leap in continuity and robustness.

Speaker 1:

And all these applications. They kind of tie into this broader thing they call the autonomous collaborative platforms or ACP ecosystem.

Speaker 2:

Right, it's not just one system, but a network of systems working together. This involves developing various semi-autonomous aircraft variants; the YFQ-42A and YFQ-44A are mentioned.

Speaker 1:

Experimental designations.

Speaker 2:

Yeah, likely testbeds. The aim is to provide low-cost, relevant combat capability that works in concert with the more expensive traditional crewed platforms.

Speaker 1:

A mix of human-piloted and autonomous systems.

Speaker 2:

That's the vision.

Speaker 1:

So OK, after covering all this potential, what does it truly take to successfully employ AI within the Air Force? What are the prerequisites?

Speaker 2:

Well, the document highlights a core requirement: building an AI-ready force.

Speaker 1:

AI ready. What does that mean exactly?

Speaker 2:

It means Air Force personnel need to be AI fluent, and that's a proficiency beyond just basic literacy, beyond just knowing the buzzwords.

Speaker 1:

It's deeper than that.

Speaker 2:

Yeah, it's about really comprehending the application, the interpretation and how to effectively navigate these AI systems, knowing what they can do, what they can't do, how to use them properly.

Speaker 1:

And why is this fluency so critical?

Speaker 2:

Because it underpins everything else. It enables effective collaboration between humans and machines that HMT we talked about. It allows for truly informed decision making. It's essential for risk mitigation.

Speaker 1:

Understanding the limitations and potential pitfalls.

Speaker 2:

Exactly. Ultimately, it keeps personnel competitive, gives them that strategic edge for integrating even more technologies down the line, and it lays the very foundation for the entire AI ecosystem the Air Force envisions.

Speaker 1:

You can't build it without people who understand it.

Speaker 2:

Precisely, but beyond the human capital, there are other critical enablers and also significant challenges.

Speaker 1:

Okay, like what?

Speaker 2:

First, data. We touched on this, but the document really stresses that data must be treated as a valuable enterprise-level asset.

Speaker 1:

Like fuel for the AI.

Speaker 2:

Exactly like fuel. The accuracy and reliability of AI models are heavily dependent on the quality of the data used for training and testing, and this poses huge challenges in how data is collected, managed, curated, labeled and properly conditioned.

Speaker 1:

Getting the right data in the right format, clean. It's a massive undertaking.

Speaker 2:

Second, compute. Training and running these sophisticated AI models requires a massive amount of compute power, GPUs, specialized hardware. Which isn't cheap. Not at all, and it drives a recurring demand signal for increased computer and technology acquisitions. It's an ongoing need. Third, talent.

Speaker 1:

Back to people.

Speaker 2:

Back to people. The Air Force needs to find, recruit, develop and, importantly, incentivize individuals across a whole range of specialties.

Speaker 1:

Not just coders.

Speaker 2:

No. Developers, operators who use the systems, scientists, ethicists, data managers, policy people, the whole spectrum, to build this AI-fluent force.

Speaker 1:

And that requires outreach right. They can't just find these people internally.

Speaker 2:

Right, outreach to strategic partners, academia, industry, bringing expertise in.

Speaker 1:

So, given all those challenges, data, compute, talent, it makes you wonder where AI is best suited in military applications. What problems can it truly solve? And, maybe even more importantly, what problems can it not solve?

Speaker 2:

This is a really crucial distinction the document makes and it's worth pausing on. It explains that AI really excels at complicated problems.

Speaker 1:

Complicated. How's that defined?

Speaker 2:

These are problems that possess consistent structures and activity patterns over time, things that can often be solved or at least modeled with clear mathematical formulas or algorithms.

Speaker 1:

Predictable, even if complex.

Speaker 2:

Generally, yes. AI is perfect here because it can use probabilistic mathematics to detect patterns at scale and speed. Think logistics optimization, inventory management, predictive maintenance, things with underlying regularities.
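
Here's what "complicated but pattern-rich" looks like in miniature: a made-up cargo-loading problem solved by brute force. Real logistics optimizers are far more sophisticated, but the clean mathematical structure is the point:

    from itertools import combinations

    # (item, weight_tons, mission_value): illustrative numbers only.
    cargo = [("ammo", 4, 10), ("fuel", 5, 12), ("medical", 2, 7), ("parts", 3, 6)]
    CAPACITY = 9

    # The rules never change, so every option can be enumerated and scored.
    best_value, best_load = 0, ()
    for r in range(len(cargo) + 1):
        for load in combinations(cargo, r):
            weight = sum(item[1] for item in load)
            value = sum(item[2] for item in load)
            if weight <= CAPACITY and value > best_value:
                best_value, best_load = value, load

    print(best_value, [item[0] for item in best_load])   # 23, ammo/medical/parts

Because the constraints and payoffs are fixed and numeric, a machine can search the whole space. A "wicked" problem offers no such stable rules to enumerate, which is exactly where this approach breaks down.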

Speaker 1:

Okay, so that's where AI shines. What about the other kind?

Speaker 2:

It struggles with complex problems, sometimes called wicked problems. These are unpredictable, with frequently changing rule sets and patterns of interaction.

Speaker 1:

Much fuzzier, less defined.

Speaker 2:

Exactly. They're incredibly difficult, maybe even impossible, for current AI to adequately model because the underlying dynamics are constantly shifting or poorly understood. Think about broad social issues, political instability or even something like nuclear deterrence strategy.

Speaker 1:

Things with deep human factors.

Speaker 2:

Fundamentally, yes. These require human characteristics like an understanding of context, judgment, wisdom and ethical considerations. AI just isn't equipped for that kind of reasoning.

Speaker 1:

So AI can help with tasks within those complex problems.

Speaker 2:

It can handle complicated subtasks, analyze data related to the complex problem, but it cannot solve the overarching complex, wicked problem itself. That still requires human leadership and judgment.

Speaker 1:

That distinction feels incredibly important for setting realistic expectations.

Speaker 2:

It really is.

Speaker 1:

And it also brings us back to some key concerns around deploying AI reliably and crucially ethically.

Speaker 2:

Absolutely, critical concerns. One is good data and bias awareness. We keep coming back to data, but it's that important.

Speaker 1:

Quality results from AI depend on good data management.

Speaker 2:

Right, and the challenge is obtaining accurate, properly labeled and conditioned data without introducing unintended or harmful biases. Because if the data itself is biased, the AI will likely perpetuate or even amplify that bias. Think about it If historical data reflects past discrimination, an AI trained on it might make discriminatory predictions or recommendations without anyone intending it to.

Speaker 1:

Which can erode trust and have really harmful outcomes.

Speaker 2:

Absolutely. So the document emphasizes needing diverse data sets, transparent and explainable algorithms, and regular audits to try and catch and mitigate these biases.
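
A minimal sketch of what one such audit might check, with invented records: compare the model's positive-decision rate across two groups and flag a large gap.

    # Each record: (group, model decision). Made-up data for illustration.
    records = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
               ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

    rates = {}
    for group in {g for g, _ in records}:
        decisions = [d for g, d in records if g == group]
        rates[group] = sum(decisions) / len(decisions)

    print(rates)   # here: A gets 0.75, B gets 0.25
    if max(rates.values()) - min(rates.values()) > 0.2:
        print("warning: decision rates diverge across groups -- check the data")

A real audit would go much deeper, into error rates, data provenance, and causes, but even this simple disparity check catches the "historical data reflecting past discrimination" failure mode.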

Speaker 1:

It's an ongoing effort. What else?

Speaker 2:

Cyber defense of blue AI, protecting our own AI systems. Robust cyber defense is required.

Speaker 1:

Because adversaries will try to attack them.

Speaker 2:

Definitely. We talked about data poisoning, manipulating the input data, but adversaries might also try to gain access to the AI models themselves.

Speaker 1:

Steal the algorithms.

Speaker 2:

Or worse, maybe reverse engineer them. For example, if they could figure out exactly how an ISR targeting model works, they might develop ways to fool it, like creating digital camouflage.

Speaker 1:

Hiding in plain sight from the AI.

Speaker 2:

Potentially so. This requires multi-layer protection: strong access controls, encryption, intrusion detection systems, the works.

Speaker 1:

And finally the big one AI and ethics.

Speaker 2:

This is flagged as a significant concern, and rightly so. The DoD has mandated that all AI capabilities must adhere to ethical principles. They must be responsible, equitable, traceable, reliable and governable. Traceable, so you know why it made a decision. Governable, so you can control it. Reliable, so it works as intended. Equitable, avoiding bias. Responsible, covering its overall use.

Speaker 1:

That's a high bar.

Speaker 2:

It is, and the document acknowledges a critical challenge here. Adversaries are not always beholden to the same ethical guidelines.

Speaker 1:

We might constrain ourselves ethically, while they don't.

Speaker 2:

That's the dilemma how do you maintain an edge while upholding your values? It's a profound challenge.

Speaker 1:

Wow, okay, what a journey through the Air Force's strategic thinking on AI. We've really covered a lot from the promise and the peril through the terminology.

Speaker 2:

Human machine teaming.

Speaker 1:

The core applications, the enablers, the challenges. It all comes back to that point in AFDN 25-1, doesn't it, that true technological innovation is unlocked not by the technology itself, but by how we're able to conceptualize and apply it?

Speaker 2:

It's about the human element, the strategy, the integration.

Speaker 1:

So, as you, our listener, consider the rapid advancement of AI, whether it's in national security, like we discussed, or, you know, other fields impacting your life, maybe ask yourself this question: is pursuing the efficiency of AI always justified if it means we potentially lose some of that deep knowledge, and maybe even the serendipitous learning, that comes from humans wrestling directly with complex problems? You know, the insights you get from the struggle itself.

Speaker 2:

That's a really interesting thought the value of the human process, not just the outcome.

Speaker 1:

Exactly and where really is that critical balance between machine speed and human wisdom best found?

Speaker 2:

A question we'll likely be debating for a long time.

Speaker 1:

Probably Something to keep exploring.
