
The Cassandra Trap: Why the Person Who Sees the Risk Gets Blamed for It
We were eighteen months into a platform migration at a large multi-site enterprise. I was sitting in a steering committee, reviewing the go-live timeline. Leaders were walking through their teams' progress, highlighting wins, setting up the next quarter and communicating their intentions. I had a single slide showing three risks I'd identified in our integration testing data, each one capable of delaying launch by weeks if left unaddressed. Mitigating them was going to take cross-team collaboration; it wasn't something any single team could do alone.
When it became my turn to present, I never got past the second risk.
The CIO, who had been nodding along, leaned forward. "Jon, I appreciate the thoroughness, but I think you're complicating what should be a straightforward discussion." The atmosphere in the room shifted. Not dramatically; no one stormed out or raised their voice, but something tangible changed. People looked down. They opened their laptops. Phones suddenly became the focus as people feverishly swiped at their screens.
Three weeks later, two of those three risks materialized. The launch slipped. And in the post-mortem, nobody referenced my slide, as if it had never happened.
The Cassandra Effect
In Greek mythology, Cassandra was cursed with the ability to see the future but condemned never to be believed. A similar phenomenon occurs in organizations. The technical leader who correctly identifies a risk, who maps and communicates the failure modes in advance, and then watches, powerless, as it unfolds exactly as predicted: that is the Cassandra Effect. The organization reacts to the failure with surprise, as if it had materialized out of thin air.
The Cassandra Effect has been documented across domains from intelligence analysis to aviation safety to organizational change. The pattern is consistent. The warning comes from someone whose expertise is real but whose authority is conditional, and the organization's confidence in its existing trajectory overrides the signal. By the time the prediction proves correct, the messenger has usually been discredited, reassigned, or is gone. The Cassandra Effect isn't a communication problem. It's a structural one. The discrediting can take many forms, but often the leader's concerns are downplayed behind closed doors by others who cannot see the impending disaster. This in itself is a reason to have leaders who are qualified in their domain, not just good managers.
Chris Argyris described this as "defensive routines": organizational behaviors that protect individuals and groups from embarrassment or threat while simultaneously preventing the organization from learning. Argyris' concept of single-loop learning describes organizations that adjust their actions without ever questioning the governing assumptions behind those actions. Double-loop learning, by contrast, forces organizations to question their own assumptions. The thermostat analogy is useful. Single-loop learning adjusts the temperature based on a signal; it doesn't ask why the signal occurred. Double-loop learning asks whether the thermostat is measuring the right thing, in the right room, and applying the right rules. This is where 'culture fit' can be weaponized: seeking leaders who match the existing narrative and communication style rather than 'culture add' through diverse thinking and practices.
Reflecting on those steering committee meetings, I can see a defensive routine operating exactly as Argyris described. The migration timeline everyone was blindly endorsing was not just a schedule. It was an expression of executive commitment. A promise made to the board. A reflection of the CIO's judgment. It was the wider team demonstrating 'strength' by reinforcing each other's viewpoints, holding hands in agreement. My integration data didn't just threaten the timeline. It threatened the narrative. It threatened the board's impression and expectations of the CIO. It threatened the group's consensus, and with it their credibility. I became an outlier and was immediately no longer one of the 'team'. Defensive routines exist precisely to protect narratives from data that might disrupt, and often expose, them.
The phrases and communications that signal this defensive mechanism are remarkably consistent. "They complicate the discussion." "Not a team player." "Doesn't read the room." "Too negative." "Doesn't want to partner." These are not assessments of the information being presented. They are assessments of the presenter's willingness to maintain the group's governing assumptions. The information becomes invisible and what remains is the friction.
Executive Momentum Bias and the Scapegoating Function
Executive commitments and decisions have a gravity of their own. A senior leader publicly endorsing a direction. Publicly communicating a timeline. Signing off on a strategic vendor agreement. Excitedly revealing the future and the necessary reorganization. That gravity is then translated into organization-wide investment of funding, people, and prioritization. For an executive to then request resources to slow down, to evaluate, or to pause the initiative takes organizational energy and political capital. More and more energy is required to reverse the commitment, and the cost increases with every meeting. With every status update. With every board presentation. At some point there is only forward. I call this "executive momentum bias". It is one of the key reasons Michael Canic gives for the 70% failure rate of organizational change projects in his book 'Ruthless Consistency'. His diagnosis is that most organizations fail because leaders focus on what they do rather than on what their people experience. That 70% failure rate has been so consistent since 1970, resulting in an estimated $3 trillion in waste globally per year, that many leaders simply don't believe change or success will ever happen, so why have people rocking the boat on a doomed journey?
Warren Boeker's 1992 study showed that when organizational performance declines, the likelihood of leadership dismissal increases. Surprisingly, not in proportion to the leader's actual contribution to the problem, but in proportion to the organization's need to signal that something has changed or needs to change. The scapegoat serves a symbolic function. Removing or limiting the person who identified the risk is organizationally cheaper than addressing the risk itself, and it simultaneously creates a mechanism to manage the message. The removal provides cover and an excuse to redirect resources, pause the initiative, or bring in external consultants, all while attaching blame without ever naming it.
The BBC's handling of its 2008 Digital Media Initiative is a well-documented example of the Cassandra Effect. The project consumed over £100m before being abandoned in 2013. Internal stakeholders who had raised concerns 24 months earlier were consistently marginalized, even as the platform was serving fewer than 200 users. John Linwood, the Chief Technology Officer, was eventually dismissed. Not for creating the problem, but for inheriting it and being closest to it when its failure became undeniable. The organization needed a narrative of accountability, and the person closest to the technical reality was the most convenient character to cast in that role. During Linwood's unfair dismissal tribunal, internal emails revealed that, despite grave concerns having been expressed about the project for 12 months, the BBC Executive Board had treated his dismissal as a foregone conclusion before any disciplinary procedures had even started.
What makes the pattern so persistent is that it is learned behavior, the way organizations behave under pressure. The organization's desire to maintain consensus. The organization avoiding the discomfort of revisiting settled decisions. The organization's instinct to associate the message with the messenger. Amy Edmondson's research suggests that 85% of employees have, at some point, felt unable to raise a concern with their manager or leader. That number doesn't describe a failure of individual courage. It describes a system that reliably punishes certain kinds of information.
Three Failure Modes
In my career I have observed three distinct ways this mechanism materializes, each with its own organizational signature.
The first is absorption without action
The risk signal is acknowledged by the wider team. Sometimes even praised by leaders. However, no resources are allocated to address it. Instead, you get a hearty "Great catch, we'll keep an eye on it." The information enters the organizational record but never enters the organizational posture. It's the most common mode, and the easiest to miss. It allows everyone to feel that due diligence was performed while changing nothing. It often resurfaces three months later with surprise and complaints of "Well, if you thought it was that serious, you should have done something!"
The second is re-framing as temperament
The risk signal is converted from a data point into a personality characteristic. The person raising concerns becomes "the one who always sees problems." This is where the "complicates discussions" label begins its work. Once the signal is re-categorized as a personality trait, it no longer requires a substantive response, and the meeting can seamlessly move on knowing that a future coaching session with your boss is probably on the cards. Organizations don't address personalities; they manage around them, or manage them out.
The third is symbolic removal
When the risk materializes and can no longer be ignored, the person who identified it becomes the most available explanation for why it wasn't addressed. "If they had communicated it differently, we would have done something." "If they had been more collaborative, we could have partnered together." The organizational logic here is circular but effective. The risk was not addressed because the person who raised it was difficult, and we know they were difficult because the risk was not addressed.
Each of these modes serves the same function. They protect the organization's governing assumptions from introspection. And they do so by converting a signal problem, "we have a risk", into a people problem, "we have a difficult person."
Engineering a Different Gravitational Field
The natural conclusion of this analysis is advice such as "create psychological safety" or "reward dissent." That advice is correct but insufficient. It is correct because Edmondson's work clearly demonstrates that teams with higher psychological safety surface problems earlier and perform better. It is insufficient because it treats the symptom, people not speaking up, without addressing the mechanism: the organizational economics that make speaking up costly.
What has worked better for me is changing how those signals move, not just asking people to be more willing to raise them.
In practice, this is about creating a better structure, not just a better culture. Meaningful data shows up without needing a person to defend it in the room. Test failures, integration gaps, anything that matters is surfaced through channels that are already visible. Not because anyone loves dashboards, but because information that arrives through a system is harder to ignore than information tied to a person, their personality, or the perceptions people have of them.
Reviews are set up so that risk is expected, embraced, and surfaced regularly, not suddenly introduced into a room of leaders who have already agreed the project will succeed. Risk has a place in the conversation before anyone has to volunteer it and disturb the ambience.
Lastly, separating what is true from what we are going to do about it matters more than it sounds. When the two happen in the same conversation, people defend decisions instead of examining the signal. This is particularly difficult in highly technical environments, where the tendency is to start solutioning (single-loop learning) rather than exploring why it happened in the first place (double-loop learning).
None of this eliminates the underlying dynamics; organizations will always find it easier to remove awareness and manage the message than to remove risk. But you can change the cost structure. You can make it more expensive to ignore a signal than to process it.
I think about that steering committee. Not with resentment; the CIO was doing what organizational gravity demanded. I have been in rooms where I failed to hear signals because they threatened commitments I had already made. The pattern is not about good people and bad people. It is about a system that reliably converts accurate perception into organizational friction, and the leaders who survive are the ones who learn to route their signals through channels the system cannot easily dismiss.
The slide with the third risk is still on my laptop somewhere. The risk it described was a data synchronization issue between the payment interface system and the new inventory platform. It took eleven weeks to remediate once it surfaced in production. The cost was significant. We missed our delivery date and absorbed millions of dollars in extended software costs. The steering committee eventually addressed the risk with the same urgency I had requested months earlier.
Nothing changed except this: the risk became undeniable. And by then, the question was no longer whether to address it. It was who to blame for the delay, and often, who saw it first. Sadly, the organization still didn't learn. It set about dissecting why the message about the problem wasn't clear enough or communicated with more urgency, doomed forever to relive failure after failure.
If any of this feels uncomfortably familiar, the question isn’t whether your organization has risk signals. It’s whether those signals can survive the system long enough to influence a decision.
At Nocelion, I built the Signal Assessment to make that visible. It traces how risk information actually moves through your organization. Where it gets acknowledged, where it gets re-framed, and where it disappears entirely.
Most leadership teams assume they are operating on the best available information. The reality is usually more complicated than that.
You can explore it here: https://nocelion.com/assessments/signal
References
- Argyris, C. (1990). Overcoming Organizational Defenses: Facilitating Organizational Learning. Allyn and Bacon.
- Boeker, W. (1992). Power and managerial dismissal: Scapegoating at the top. Administrative Science Quarterly, 37(3), 400-421.
- BBC / National Audit Office. (2013). The BBC's Digital Media Initiative. National Audit Office.
- Edmondson, A.C. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350-383.
Is your organization removing its own early warning systems?
If the patterns in this article feel familiar, a structured assessment of your technology leadership dynamics can surface what the executive team isn't hearing.