The OODA Loop And AI: How Orientation Affects Fairness

Artificial intelligence is rapidly becoming an integral and permanent part of our lives. At this critical inflection point, it is vital to ensure that human rights and agency remain the guiding force for this powerful technology.

To understand what this means for fairness, we can turn to the OODA Loop, a decision cycle developed by military strategist John Boyd, representing four iterative phases:

  • Observe
  • Orient
  • Decide
  • Act

Beyond The Battlefield: The Versatility Of The OODA Loop

While originally designed for combat scenarios, the OODA Loop has found applications far beyond the battlefield in business, cybersecurity, law enforcement, and even litigation.

In fact, Jamie Dimon, CEO of JPMorgan Chase, credits the OODA Loop with helping him evaluate complex scenarios under uncertainty.

So, what exactly are these phases?

  • Observe: This is about gathering information about unfolding circumstances and external factors.
  • Orient: In this crucial phase, you analyze observations through the filters of culture, heritage, experience, and new data.
  • Decide: Based on your orientation, you form a hypothesis or a plan of action.
  • Act: Finally, you implement your decision, effectively testing your hypothesis in the real world.

Together, these create the acronym OODA. Importantly, feedback loops connect all phases, enabling continuous adaptation.
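To make the cycle concrete, here is a minimal sketch in Python. The class, method names, and threshold are illustrative inventions of ours, since Boyd's model is conceptual rather than computational:

```python
from dataclasses import dataclass, field

@dataclass
class OODALoop:
    # Accumulated context acts as the loop's orientation state.
    context: dict = field(default_factory=dict)

    def observe(self, environment: dict) -> dict:
        # Gather raw information about unfolding circumstances.
        return dict(environment)

    def orient(self, observations: dict) -> dict:
        # Filter observations through accumulated context
        # (culture, experience, prior feedback, new data).
        self.context.update(observations)
        return self.context

    def decide(self, orientation: dict) -> str:
        # Form a hypothesis or plan from the current orientation.
        return "engage" if orientation.get("threat_level", 0.0) > 0.5 else "hold"

    def act(self, decision: str) -> dict:
        # Execute the decision; the outcome becomes the next observation.
        return {"last_action": decision}

loop = OODALoop()
feedback = {"threat_level": 0.7}
for _ in range(3):  # feedback turns a decision line into a cycle
    observations = loop.observe(feedback)
    orientation = loop.orient(observations)
    decision = loop.decide(orientation)
    feedback = loop.act(decision)
print(loop.context)
```

Note that the output of act() feeds the next observe() call: without that feedback edge, the loop would degenerate into a one-shot decision pipeline.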

The Pivotal Role Of Orientation

Among the four phases, Orientation is the most crucial and complex, and it is often the least understood. To effectively orient a decision, you need to interpret the observed data through five key contextual lenses:

  • Cultural traditions: Our upbringing’s values, norms, and beliefs act as a filter, shaping what we consider “normal” and influencing how we judge situations and what decisions seem appropriate.
  • Genetic heritage: Our inherent traits and instincts, like fight-or-flight responses or risk tolerance, influence how we react to stimuli and process information, subtly guiding our decision-making.
  • Past experiences: Previous successes, failures, and lessons learned form a mental backdrop, helping us quickly interpret new situations but also potentially introducing bias if we assume new scenarios are identical to old ones.
  • Analytical synthesis: This is our ability to reason, break down information, and recombine it into a meaningful understanding. Strong analytical skills help us filter noise and connect facts, leading to clearer orientation and sounder decisions.
  • New information: The continuous influx of fresh data and changing circumstances constantly reshapes our understanding. Effectively integrating this ensures our decisions reflect the latest reality, preventing us from acting on outdated perceptions.
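As a purely illustrative sketch (the lens functions and their wording are hypothetical), orientation can be pictured as the same raw observation passed through each of these lenses in turn:

```python
# Hypothetical sketch: each lens reframes the same raw observation,
# and orientation is the combined result. Names and wording are ours.

def cultural_lens(obs: str) -> str:
    return f"{obs}, judged against familiar norms"

def heritage_lens(obs: str) -> str:
    return f"{obs}, colored by instinctive risk tolerance"

def experience_lens(obs: str) -> str:
    return f"{obs}, compared with past successes and failures"

def analysis_lens(obs: str) -> str:
    return f"{obs}, decomposed and recombined into meaning"

def new_information_lens(obs: str) -> str:
    return f"{obs}, updated with the latest data"

LENSES = [cultural_lens, heritage_lens, experience_lens,
          analysis_lens, new_information_lens]

def orient(observation: str) -> list:
    # The same observation yields a different framing per lens; two
    # observers with different lenses therefore orient differently.
    return [lens(observation) for lens in LENSES]

for framing in orient("a sudden spike in login failures"):
    print(framing)
```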

Each of these lenses influences how we orient to our observations, filtering and framing the raw data. Together, they shape our perception of situations and directly impact the quality and speed of our decisions within the OODA Loop.

Each person or organization may orient differently based on these factors. Recognizing this highlights why refining our orientation, by broadening experiences, challenging biases, and staying open to new information, leads to better decision-making over time.

How Orientation Manifests In AI

In AI systems, orientation parallels the data, algorithms, and design principles that shape model behavior.

Think of it this way:

Just as our human orientation is built from our experiences and beliefs, an AI’s “orientation” is constructed from the information it’s trained on and the rules it’s given. Because AI inherently reflects the contexts embedded in its training data and the choices made during its design, this “orientation” naturally carries biases.

These aren’t just technical biases, like imbalances in the data, but also fairness and cultural biases that were present in the human-generated information it learned from, whether intentional or not.

Because of the massive volumes of data that feed AI training, it is extremely challenging to evaluate the neutrality or bias of everything a model learns from. This is why it is important to audit for bias after training, but before bringing these tools to market.
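One plausible shape for such a post-training check, sketched here with invented records and an invented 5% tolerance, is a subgroup error audit run before release:

```python
from collections import defaultdict

# Sketch of a post-training subgroup audit. The records, group labels,
# and 5% tolerance are invented for illustration, not a standard.

predictions = [  # (subgroup, predicted_label, true_label)
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 1),
]

errors = defaultdict(list)
for group, predicted, actual in predictions:
    errors[group].append(int(predicted != actual))

rates = {group: sum(e) / len(e) for group, e in errors.items()}
print(rates)  # {'group_a': 0.25, 'group_b': 0.75}

# Gate the release on the gap between best- and worst-served groups.
if max(rates.values()) - min(rates.values()) > 0.05:
    print("Error-rate gap exceeds tolerance; review before release.")
```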

The Complexity Of Fairness Across Different Orientations

Fairness isn’t a fixed concept; its perception often varies significantly depending on one’s orientation. For example:

  • Cultural Orientation: Facial recognition systems trained primarily on Western faces often perform poorly on individuals from non-Western ethnicities, leading to higher misidentification rates. This reflects a cultural orientation bias in the training data and assumptions about “normal” features. What may be considered accurate and fair in one cultural context ends up discriminatory in another.
  • Historical Orientation: In credit scoring, AI models that rely on historical financial data may unintentionally perpetuate systemic disadvantages faced by marginalized communities. To address this, some fairness approaches involve adjusting scores or granting “credit boosts” to historically underserved groups to counteract prior exclusion and discrimination.
  • Socio-Economic Orientation: Online education platforms using AI to personalize learning may assume students have reliable internet access and modern devices. This assumption biases the system against learners in rural or low-income areas with limited connectivity, resulting in unequal learning opportunities. The AI’s orientation, shaped by a resource-rich environment, misses this digital divide.

Therefore, true fairness in AI is inseparable from whose orientation the system reflects, and whose it overlooks.
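To see how the same system can look fair under one orientation and unfair under another, consider this sketch with invented numbers, comparing two common fairness definitions:

```python
# Invented confusion counts for one classifier on two groups. Demographic
# parity compares positive-prediction rates; equal opportunity compares
# true-positive rates. The same system can satisfy one and fail the other.

groups = {
    # group: (true_pos, false_pos, false_neg, true_neg)
    "group_a": (40, 10, 10, 40),
    "group_b": (20, 30, 20, 30),
}

for name, (tp, fp, fn, tn) in groups.items():
    total = tp + fp + fn + tn
    positive_rate = (tp + fp) / total  # demographic-parity view
    tpr = tp / (tp + fn)               # equal-opportunity view
    print(f"{name}: positive rate={positive_rate:.2f}, TPR={tpr:.2f}")

# group_a: positive rate=0.50, TPR=0.80
# group_b: positive rate=0.50, TPR=0.50
# Parity holds (equal positive rates), yet qualified members of
# group_b are approved far less often: "fair" depends on the lens.
```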

Critiques And Considerations On The OODA Loop’s Usefulness

Not everyone agrees on the OODA Loop’s uniqueness or depth.

For example, aviation historian Michael Hankins cautioned that the loop’s flexibility can dilute its meaning, making it a generalized model rather than a profound insight. It’s one of many ways to describe intuitive decision-making.

While the OODA Loop effectively highlights the criticality of “orientation,” other well-known frameworks also contribute to understanding decision-making and continuous improvement.

These include the PDCA (Plan-Do-Check-Act) cycle, often used for quality management, and Design Thinking, which emphasizes empathetic problem-solving and iterative prototyping.

More recently, comprehensive AI ethics frameworks like the NIST AI Risk Management Framework or principles outlined in the EU AI Act provide detailed guidelines for responsible AI development.

Yet, it is the OODA Loop’s simplicity and adaptability that make it a particularly valuable tool for framing AI fairness challenges in an era where cultures and new information are constantly factoring into the decision loop.

Ethical Blind Spots In AI and Orientation

AI’s rapid emergence as the fastest-growing market in history is pushing us into uncharted territory, confronting us not only with technical advancements but also with unprecedented ethical, cultural, and anthropological challenges.

Given the central role of orientation, we must constantly discuss and ask ourselves crucial questions about its impact on fairness.

Some pressing ethical questions remain underexplored:

  • Whose cultural and historical contexts are encoded in AI systems, and who is excluded?
  • How might AI perpetuate “epistemic injustice” by privileging certain worldviews?
  • Are current fairness metrics too narrow, overlooking nuanced social impacts?
  • How transparent are AI decisions to users from diverse backgrounds?
  • What long-term societal consequences might arise from deploying AI without fully understanding these orientations?

Toward Fairer AI: Embracing Orientation

To create AI that is genuinely fair, practitioners must:

  • Engage Diverse Communities: Implement participatory design workshops, user co-creation sessions, and ongoing feedback loops with individuals from varied backgrounds to deeply understand their notions and expectations of fairness.
  • Incorporate Interdisciplinary Insights: Build diverse development teams that include ethicists, social scientists (e.g., anthropologists, sociologists), and humanities scholars alongside engineers to bring a holistic perspective to AI design.
  • Develop Fairness Metrics Sensitive To Contexts: Move beyond simple aggregated metrics by employing subgroup analysis, intersectional fairness assessments, and context-specific impact evaluations that account for the unique experiences of different populations.
  • Prioritize Transparency And Explainability: Create clear documentation of training data sources and characteristics (e.g., “model cards”; see the sketch after this list), provide user-friendly explanations for AI decisions, and develop mechanisms for users to challenge or provide feedback on outputs.
  • Treat Orientation As A Foundational Design Principle: Integrate ethical considerations and fairness assessments from the very inception of AI projects, ensuring they are core to the system’s architecture and development lifecycle, rather than a separate audit or afterthought.
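As a minimal illustration of the documentation point above (the fields and values below are hypothetical, and real model cards cover far more ground), a model card can start as structured metadata shipped with the model:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a minimal model card record. The field names
# are our own subset; real model cards typically document much more.

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str
    known_limitations: list = field(default_factory=list)
    subgroup_metrics: dict = field(default_factory=dict)

card = ModelCard(
    name="loan-approval-v2",  # hypothetical model
    intended_use="Pre-screening consumer credit applications",
    training_data="Internal applications, 2015-2023, US only",
    known_limitations=["Sparse data for applicants under 21"],
    subgroup_metrics={"group_a_tpr": 0.80, "group_b_tpr": 0.50},
)
print(card)
```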

Conclusion

Understanding how to properly orient your observations is a critical piece of ensuring that your decision-making is based on accurate and fair reasoning, allowing you to identify potential pitfalls and gaps in the information you possess.

The OODA Loop serves as a powerful reminder that fairness is deeply rooted in how we orient ourselves within a complex and ever-changing world. AI systems inherit these orientations, thereby both reflecting and shaping societal values.

The framework succeeds in demonstrating that AI fairness isn’t just about technical metrics but about whose worldview gets encoded into systems.

Recognizing this profound connection and committing to treat orientation as a foundational design principle rather than a mere technical afterthought can guide the development of AI that is more inclusive, just, and truly aware of its ethical responsibilities, ensuring human rights remain the steering wheel for this transformative technology.

Remember, feedback is key: it is the continuous feedback that transforms the loop from a simple decision line into a dynamic, adaptive cycle.

Clearly, we have much to consider as this technology not only surpasses our own capabilities, but gains more and more control over daily events in our lives.

Joshua Selvidge
Joshua is a cybersecurity professional with over a decade of industry experience, previously working for the Department of Defense. He currently serves as the CTO at PurpleSec.
