Paris – June 12, 2019 – Forcepoint X-Labs, the world’s first dedicated research division to combine deep security expertise with behavioral science research, has released a whitepaper: “Thinking about Thinking – Exploring Bias in Cybersecurity with Insights from Cognitive Science”. Authored by psychologist Dr Margaret Cunningham, the whitepaper examines six universal unconscious human biases and explores how a deeper understanding of cognitive science plus the application of advanced analytics can improve decision making in cybersecurity – for both the end user and the industry.
Global cybersecurity leader Forcepoint launched the X-Labs division in March 2019 with the remit of using data insights from the entire Forcepoint product portfolio, together with external research, to drive innovation in modern, risk-adaptive security solutions. Forcepoint examines a wide range of biases in both humans and data-driven analytics, with the goal of creating more flexible and effective cloud-first cybersecurity solutions suited to today's complex landscape.
Six Human Biases Skewing Security Strategies
The whitepaper, part of Forcepoint’s series on cognitive science in cybersecurity, covers six analytical biases in-depth, exploring aggregate bias, anchoring bias, availability bias, confirmation bias, the framing effect and the fundamental attribution error.
“We are all subject to cognitive bias and reasoning errors, which can impact decisions and business outcomes in cybersecurity,” said Dr Cunningham, Principal Research Scientist at Forcepoint. “However, an exceptional human trait is that we are able to think about thinking, and thus can recognise and address these biases. By taking a different approach and avoiding those instances where automatic thinking does damage, we can improve decision making.”
“We regularly see business leaders influenced by external factors,” adds Nicolas Fischbach, global CTO at Forcepoint. “For example, if the news headlines are full of the latest privacy breach executed by foreign hackers, with dire warnings about outside attacks, people leading security programs tend to skew cybersecurity strategy and activity toward external threats.”
This is availability bias in action, where a single high-profile breach can cause enterprises to ignore or downplay the threats posed by malware, poor patching processes or the data-handling behavior of employees. Relying on what’s top of mind is a common human decision-making shortcut, but it can lead to faulty conclusions.
Confirmation bias also unconsciously plagues security professionals. When individuals are exploring a theory for a particular problem, they are highly susceptible to confirming their beliefs by searching only for evidence that supports their hunch. For example, an experienced security analyst may “decide” what happened before investigating a data breach, assuming it was a malicious employee because of previous incidents. Expertise and experience, while valuable, can become a weakness if people regularly investigate incidents in a way that only supports their existing beliefs.