Why a 'Near-Miss' Database Is Key to Improving Information Sharing

After a breach, organizations disclose attack details, however limited. But what if they did the same with close calls?


RSAC 2026 CONFERENCE — San Francisco — When people talk about transparency in cybersecurity, they are usually referring to organizations disclosing breaches and incidents. At RSAC Conference this week, two security experts made the case for why focusing on near misses can strengthen security defenses. Wendy Nather, senior research initiatives director at 1Password, and Bob Lord, head of the consumer working group at hacklore.org, emphasized how the industry needs to prioritize transparency and outlined ways to do so — starting with sharing near misses.

Information sharing, which encompasses threat intelligence, indicators of compromise, and reports of vulnerability exploitation, is an essential way to help combat and stay ahead of cyber threats. The victim blame game, shame, finger-pointing, and regulatory punishments contribute to a lack of transparency, particularly when it comes to ransomware. But that needs to change if organizations want to be proactive.

Getting Down to the Root Cause

Exposure without exploitation or an identity compromise attempt stopped by architecture are two examples of a near miss, Nather explained. The former is something she frequently observes because many companies struggle to implement sufficient logging capabilities.

"A near miss is anything that almost happened, that makes you say, 'Wow, if it wasn't for that thing, it would have been really bad,'" she said.

Companies celebrate moments of heroics or good luck. They recognize that a threat or attempt was a close call, but then everyone, including managers, simply returns to work, explained Lord, noting that mindset leads to a lack of conversations around near misses in the wild.

"Not trying to use a near miss as an opportunity to run through the full incident response plan is a big waste of time," he said.

To promote transparency around near misses, the industry needs to eliminate the blame game, he urged, particularly because human error relates to the proximate cause of an issue and not the root cause.

Finding the root cause presents enough challenges as it is.

"The idea that the term 'root cause' is not an actual conclusion — it's the label we place on our decision to stop looking further — this blew me away," Lord revealed. "If you've ever had an argument with coworkers over what the root cause of an accident was, probably one of the major reasons is you defined when you were going to stop looking differently than they did."

Stop Before Blaming the Human

The speakers agreed it's a problem that humans take the brunt of the blame rather than the systems and technologies companies use. Employees may, of course, contribute to an incident, such as by clicking on a phishing link or falling for a vishing attack in which a threat actor impersonates the IT help desk. In other cases, an employee or company fails to do something securely, like implementing multifactor authentication, or reuses passwords.

"I hate the saying, 'Humans are the weakest link,'" Nather said. "How about we build systems so that humans don't have to be responsible?"

Human error is brought up constantly as attackers deploy increasingly sophisticated social engineering tactics to gain access to a victim organization. But human error should signal the start of the investigation, not the conclusion, said Lord, who described it as a "social judgment."

Blaming humans for a problem shifts responsibility away from the systems and onto people, who are inevitably going to fail, he added. Systems naturally drift toward higher risk under pressure, efficiency demands, and competing goals, Nather and Lord explained. Humans will try to bypass security to operate faster.

"Anytime you're tempted to blame someone for a near miss, it's a signal that you should look deeper at the system, not the person," Nather said.

Developing a Near-Miss Database

Eliminating the blame game around human error could promote better information-sharing practices by relieving fear or embarrassment. For example, feeling safe enough to elevate near misses to company executives could provide a "gold mine of information," Nather said.

To level that up across the industry, Nather suggested aggregating data around near misses so as not to single any one company out. That way, more people may be willing to share, which could "help regulators and the industry," she said.

Trust happens between individuals, not organizations, Nather added. Hearing real-life stories from individuals is more valuable than a set of standardized information.

One way to achieve this is a voluntary near-miss reporting channel, similar to how the government handles breach reporting. Submissions could be confidential or anonymized, with an explicit safe harbor from regulatory and contractual requirements, replacing compliance burden with ways to drive improvement.

The data set could detail what almost happened, what stopped it, which control mattered, and what assumptions were proved wrong. From there, trends and lessons could be published without naming organizations. In this way, Nather and Lord hope to reframe near misses as "evidence of confidence, not weakness."
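To make the idea concrete, here is a minimal sketch of what one record in such a shared data set might look like. The field names, the `NearMissReport` class, and the `anonymize` helper are all hypothetical illustrations of the speakers' criteria (what almost happened, what stopped it, which control mattered, which assumption failed), not a published schema; the hashing step stands in for whatever anonymization a real reporting channel would use.

```python
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass
class NearMissReport:
    """One hypothetical record in a shared near-miss data set."""
    what_almost_happened: str     # the close call itself
    what_stopped_it: str          # the circumstance that intervened
    control_that_mattered: str    # the defense that made the difference
    assumption_proved_wrong: str  # belief the near miss invalidated
    sector: str                   # coarse industry bucket, never a company name

def anonymize(report: NearMissReport, org_id: str) -> dict:
    """Replace the submitter's identity with a one-way token so trends
    can be tracked across submissions without naming any organization."""
    record = asdict(report)
    record["submitter_token"] = hashlib.sha256(org_id.encode()).hexdigest()[:12]
    return record

report = NearMissReport(
    what_almost_happened="Credential-stuffing attempt against a VPN portal",
    what_stopped_it="Attacker could not satisfy the MFA challenge",
    control_that_mattered="Phishing-resistant hardware-key MFA",
    assumption_proved_wrong="Password rotation alone was sufficient",
    sector="healthcare",
)
print(json.dumps(anonymize(report, "example-org-identifier"), indent=2))
```

Aggregating records shaped like this would let an industry body publish trends (which controls keep stopping attacks, which assumptions keep failing) without any single company being identifiable.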