“I don’t care what that report says, the threat actor has to be utilizing vector #1 because that is what our threat model says they’ll use….”

Although the comment above is offered in jest, cybersecurity professionals are routinely at risk of making analytical errors very similar to the one implied. These errors are rarely the result of poor intent, lack of effort, or insufficient technical skill. Instead, they stem from cognitive vulnerabilities inherent in human decision-making and analysis. These vulnerabilities, commonly referred to as cognitive biases, manifest daily in security operations centers (SOCs), incident response teams, and cyber threat intelligence units, where analysts must interpret incomplete or ambiguous indicators, often under intense time pressure.

Cyber threat analysis, and the decision-making that flows from it, by its very nature requires individuals to make judgments about adversary intent, system behavior, and appropriate response actions. Despite massive advances in SIEMs, SOAR platforms, endpoint detection, and automated analytics, the primary integrator of information remains the human brain. While humans are remarkably capable of processing complex information, decades of research in cognitive science and decision-making consistently show that people rely on mental shortcuts to make assessments and decisions, especially in periods of uncertainty and stress. While these shortcuts greatly aid speed and reduce mental workload, they introduce potential biases and leave cyber threat analysis and decision-making vulnerable to serious errors.

In cybersecurity environments, where defenders must rapidly correlate alerts, assess adversary behavior, and recommend actions with incomplete data, the conditions that amplify cognitive bias are often unavoidable. Among the many biases identified in psychology and decision science, two are especially relevant to cyber operations: anchoring and confirmation bias. This article examines how these biases affect cybersecurity decision-making and outlines practical approaches organizations can use to mitigate their impact.

Anchoring

Anchoring occurs when an individual places disproportionate weight on the first or earliest information encountered and subsequently undervalues, discounts, or ignores later evidence. Once an initial interpretation or hypothesis is formed, subsequent analysis tends to revolve around that initial anchor, even when new information suggests an alternative explanation. Research suggests this effect may be influenced by both memory-related factors (such as primacy effects) and a natural human tendency to reduce complexity by stabilizing early interpretations.

History offers numerous examples of anchoring bias influencing high-stakes decision-making. In late 1944, Allied commanders in Europe were anchored to the belief that German troops were too exhausted to mount a major offensive. Even when highly reliable intercepts, aerial reconnaissance, frontline patrols, and local resistance networks reported first-hand sightings of significant German troop movements, leaders dismissed these warnings and remained anchored to their initial assessment. This left Allied forces thinly spread and unprepared when the German assault erupted. The ensuing Battle of the Bulge inflicted approximately 90,000 Allied casualties, making it one of the deadliest battles of World War II.

A cybersecurity analogue occurs during incident response. Early alerts may classify suspicious activity as commodity malware or a low-level intrusion. As additional telemetry emerges that indicates persistence, lateral movement, or advanced tradecraft, teams may nevertheless remain anchored to the original assessment of commodity malware. The resulting delay in reframing the threat can slow containment efforts and allow adversaries additional time to expand access or exfiltrate data.
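
To make this failure mode concrete, the following minimal Python sketch shows the difference between keeping the first label and recomputing an assessment from all accumulated evidence. The signal names, weights, and thresholds are hypothetical illustrations, not drawn from any specific product or methodology:

```python
from dataclasses import dataclass, field

# Hypothetical weights for signals of more advanced tradecraft.
ESCALATION_SIGNALS = {
    "persistence_mechanism": 2,
    "lateral_movement": 3,
    "credential_dumping": 3,
    "data_staging": 4,
}

@dataclass
class Incident:
    initial_label: str                  # the anchor, e.g. the first alert's verdict
    observed: list[str] = field(default_factory=list)

    def reassess(self) -> str:
        """Recompute the label from ALL evidence, not just the first alert."""
        score = sum(ESCALATION_SIGNALS.get(s, 0) for s in self.observed)
        if score >= 6:
            return "targeted_intrusion"  # override the initial anchor
        if score >= 3:
            return "elevated_review"     # flag for explicit human reassessment
        return self.initial_label

incident = Incident(initial_label="commodity_malware")
incident.observed += ["persistence_mechanism", "lateral_movement", "credential_dumping"]
print(incident.reassess())  # -> targeted_intrusion, despite the initial label
```

The point is not the scoring scheme itself but that reassessment becomes an explicit, repeatable step rather than something left to whoever happened to see the first alert.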

Confirmation Bias

Confirmation bias refers to the cognitive tendency to seek out, interpret, and recall information that confirms one’s pre-existing beliefs. Essentially, once a particular explanation feels “right,” humans unconsciously prioritize evidence that reinforces that explanation, even when alternative interpretations are equally or more plausible. Confirmation bias is similar to anchoring but differs in emphasis: anchoring involves holding to an initial or preformed explanation or belief, whereas confirmation bias involves seeking out, emphasizing, and more readily recalling information that fits one’s existing belief, explanation, or expectation.

In cyber operations, confirmation bias often manifests when SOC teams continue to hunt almost exclusively for indicators that support an assumed intrusion vector (e.g., phishing) while overlooking identity logs, cloud access anomalies, or internal lateral movement that point to a different compromise path. In more subtle cases, confirmation bias leads analysts or decision-makers to stop actively searching for additional evidence altogether, believing the matter already settled. This effect is magnified when anchoring and confirmation bias operate together, locking one into a narrow interpretive mental frame.
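
One lightweight countermeasure is to make hypothesis coverage explicit. The sketch below (the hypotheses and data sources are hypothetical) tallies how many relevant sources have actually been examined for each candidate intrusion path, making a phishing-only hunt visibly lopsided:

```python
# Hypothetical intrusion hypotheses mapped to the data sources that could
# confirm or refute each one.
hypotheses = {
    "phishing":            ["email_gateway", "proxy_logs"],
    "credential_stuffing": ["identity_logs", "mfa_events"],
    "cloud_misconfig":     ["cloud_audit_logs"],
    "insider_movement":    ["netflow", "endpoint_telemetry"],
}

# Sources the team has actually queried so far (also hypothetical).
queried = {"email_gateway", "proxy_logs"}

for name, sources in hypotheses.items():
    covered = sum(s in queried for s in sources)
    print(f"{name:20s} {covered}/{len(sources)} relevant sources examined")

# Only the phishing hypothesis is fully covered; the other compromise
# paths have not been checked at all. The gap is now explicit.
```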

The Risks of Anchoring and Confirmation Bias in Cybersecurity

Anchoring and confirmation bias pose significant operational risks because they distort threat perception at precisely the moment when adaptability and sound judgment are most critical. When threat analysts, decision-makers, or leaders anchor on early indicators or selectively confirm pre-existing hypotheses, organizations risk misclassifying incidents, underestimating adversary sophistication, and delaying appropriate response actions. While numerous examples exist, during the now-infamous 2020 SolarWinds incident many affected organizations’ leaders seemingly clung to outdated threat models and placed too much trust in a compromised vendor, both of which suggest the presence of anchoring and confirmation biases.

Over time, these biases can also normalize flawed analytical practices. Repeated exposure to biased decision-making can reinforce incomplete threat models, discourage dissenting views, and erode organizational learning. In high-consequence environments, these cognitive errors can transform manageable security events into prolonged, costly breaches. Compounding the risk, malicious actors can deliberately attempt to exploit cognitive biases through their techniques and methods. One such example was the 2018 Olympic Destroyer incident, in which Russian actors deliberately forged code previously associated with North Korean hackers to obscure attribution while damaging systems prior to the opening ceremony of the Olympic Games.

Overcoming Anchoring and Confirmation Biases

Research suggests that while these and other cognitive biases are essentially inherent to human thinking, they are not inevitable, and their impact can be meaningfully mitigated through deliberate practices. Three practical approaches relevant to cybersecurity are effective use of decision-support tools, targeted training, and structured anti-bias work processes.

Decision-Support Tools

When employed effectively, decision-support tools can reduce overreliance on early alerts or dominant narratives by externalizing information and presenting it in structured, easily comparable ways. Visual timelines, kill-chain representations, and integrated dashboards that correlate endpoint, network, and identity data can help analysts reassess evolving evidence rather than fixating on initial reports. Put simply, it is harder to remain anchored or to discount evidence when that evidence is presented visibly, clearly, and publicly.
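
As a simple illustration of externalizing evidence, the following sketch (timestamps and event descriptions are invented) merges endpoint, network, and identity events into a single chronological timeline, so that out-of-band evidence sits directly alongside the early alerts that formed the anchor:

```python
from datetime import datetime

# Invented events from three separate tools.
endpoint = [("2024-05-01T09:02", "endpoint", "suspicious binary executed")]
network  = [("2024-05-01T09:05", "network",  "beacon to rare external host")]
identity = [("2024-05-01T09:01", "identity", "impossible-travel login")]

# Merge everything into one chronological view.
timeline = sorted(endpoint + network + identity,
                  key=lambda e: datetime.fromisoformat(e[0]))

for ts, source, event in timeline:
    print(f"{ts}  [{source:8}] {event}")

# The identity anomaly now appears first, which can prompt reassessment
# of an anchor formed from the endpoint alert alone.
```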

Further, the growing adoption of AI and generative AI tools offers potential benefits when used thoughtfully. Querying AI systems to summarize evidence, generate alternative hypotheses, or challenge existing assessments can encourage threat analysts and decision-makers to compare competing interpretations rather than defaulting to the first conclusion reached. This offers an opportunity to slow one’s thinking and bring more deliberate, rational reasoning to a situation.
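
A hedged sketch of this idea appears below. The `ask_model` call is a placeholder for whatever LLM client a team already uses, not a real API; the value lies in the structure of the prompt, which explicitly asks for alternatives and contradicting evidence rather than validation:

```python
def build_challenge_prompt(current_assessment: str, evidence: list[str]) -> str:
    """Build a prompt that asks the model to challenge, not confirm."""
    joined = "\n".join(f"- {e}" for e in evidence)
    return (
        "You are reviewing a cyber incident assessment.\n"
        f"Current working hypothesis: {current_assessment}\n"
        f"Evidence collected so far:\n{joined}\n\n"
        "1. List three alternative hypotheses consistent with this evidence.\n"
        "2. For each, state what additional evidence would confirm or refute it.\n"
        "3. Identify which evidence above contradicts the current hypothesis."
    )

prompt = build_challenge_prompt(
    "commodity malware delivered via phishing",
    ["macro-enabled document opened",
     "new scheduled task created",
     "service-account logins from an unusual network"],
)
# response = ask_model(prompt)  # placeholder for the team's own LLM client
print(prompt)
```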

Training and Awareness

Training and awareness programs focused on cognitive bias can also mitigate risk. Research in decision-making suggests that simply educating professionals about common biases increases awareness of their own thought processes and reduces the likelihood of biased judgment. Importantly, such training is arguably most effective when it is presented in a hands-on manner with other team members and when contextualized to the actual environment.

For cyber teams, this means combining training on these biases with realistic, scenario-based exercises. Tabletop simulations, red-team/blue-team engagements, and incident response drills that explicitly require participants to articulate why they are making certain assessments can surface and mitigate bias, helping teams develop corrective habits under operational conditions.

Anti-Bias Work Processes

Another effective strategy is the deliberate introduction of work processes designed to slow and challenge thinking at critical decision points. Cognitive psychology consistently shows that reflective, deliberate thinking reduces reliance on the mental shortcuts that foster biases. In cybersecurity, however, this must be done prudently, so that the delay introduced by more deliberate thinking does not create a situation in which the cure is as bad as, or worse than, the disease.

One practical approach is assigning a respected and experienced individual to serve as a real-time “devil’s advocate.” This person’s role is not to obstruct decision-making, but to actively question prevailing assumptions, propose alternative hypotheses, and highlight evidence that contradicts the dominant narrative. A respected expert can fill this role efficiently and effectively, adding value without imposing a significant time cost. Relatedly, research indicates that objective observers are often better at detecting cognitive errors than those directly responsible for assessments or decisions.

A second technique involves structured idea generation, which we humorously refer to as “brainstorming without an umbrella.” In this approach, stakeholders are asked, in advance, to prepare a small number of ideas or hypotheses related to a threat or a decision. Every participant is then required to share their ideas without any interruption, commentary, or debate. This process reduces the influence of hierarchy and dominant personalities while ensuring that a variety of alternative interpretations are surfaced.

Conclusion

Anchoring and confirmation bias represent persistent and often underestimated risks in cybersecurity threat analysis and decision-making. While these cognitive vulnerabilities cannot be eliminated entirely, organizations can meaningfully reduce their impact through thoughtful use of decision-support tools, targeted training, and deliberate analytical processes.

As cyber operations increasingly resemble contested, high-tempo intelligence environments, the ability to recognize and mitigate cognitive bias is not merely an academic or theoretical concern; it is a practical necessity. By addressing how analysts and leaders think, not just what technologies they deploy, organizations can improve detection, response, and resilience in the face of ever-evolving cyber threats.

Notes and References (Suggested)

Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. Foundational work on cognitive bias, dual-process thinking, and decision-making limitations.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. Seminal research establishing anchoring and other biases.

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. Comprehensive review of confirmation bias.

Klein, G. (1998). Sources of Power: How People Make Decisions. MIT Press. Research on decision-making under time pressure and uncertainty.

Furnham, A., & Boo, H. C. (2011). A literature review of the anchoring effect. The Journal of Socio-Economics, 40(1), 35–42. Review of the anchoring effect in judgment and decision-making.

Beevor, A. (2015). Ardennes 1944: The Battle of the Bulge. Viking. Discussion of the battle suggesting the cognitive biases discussed herein.

Kaspersky Lab. (2018). The Olympic false flag: How infamous OlympicDestroyer malware was designed to confuse the cybersecurity community. Article discussing the role of a false flag in obscuring attribution in a cyber incident.

Shabad, V. (2025). Cognitive blind spots in security frameworks: From cybersecurity to AI governance. University of Liverpool. Article providing additional details and amplification of the ideas presented herein, including the impacts of AI.

Christopher J. Tatarka
Adjunct Professor at Metro State University

Chris Tatarka, PhD, is an adjunct professor and leadership trainer whose work spans decision-making, critical thinking, and analysis. With a BA and MA in psychology, an MPA in public administration, and a PhD in business administration, he draws on decades of experience in military and public-sector leadership to help both organizations and students strengthen their decision-making and leadership effectiveness.

Brian J. Morgan
Cyber Coordination Cell Director at Minnesota National Guard

Brian Morgan is a cybersecurity leader with more than two decades of experience building cyber defense programs and leading enterprise security initiatives in the private and public sectors. He is an Army Lieutenant Colonel in the National Guard with over 23 years of military experience in the fields of Intelligence, Communications, and Cyber Operations. He holds an MBA and various professional certifications, including CISSP, CISM, and CEH.
