
When Architecture Becomes Territory: How Siloed Loyalty Chains Turn Engineering Evidence into Political Challenges
The room was quiet in the wrong way.
I was sitting in what was supposed to be an architecture review at a mid-market logistics company — one of those meetings where engineers present technical findings and the group decides what to do about them. A platform engineer had just walked the room through a dependency analysis showing that three services owned by different teams shared an undocumented database connection pool. Under load, any one of them could starve the others. The data was clean. The diagrams were clear. The failure mode was obvious to anyone who'd operated distributed systems at scale.
What happened next was not an engineering conversation.
The director whose team owned one of the services leaned back, crossed his arms, and said, "I think we need to be careful about teams reaching into other teams' domains." A VP nodded. The platform engineer, who had been presenting with the quiet confidence of someone who'd done the work, went still. Within five minutes, the conversation had shifted from "how do we fix this shared failure mode" to "whose job is it to look at this." The finding didn't get rejected on technical merit. It got reclassified as a boundary violation.
I've watched this scene play out at multiple organizations over my career. Different companies, different stacks, different org charts. The script barely changes. And every time, the cost lands the same way: in production, at two in the morning, when the failure mode nobody was allowed to talk about finally arrives.
The Problem That Isn't Communication
The conventional diagnosis when cross-team technical findings get dismissed is a "communication problem." Leaders invest in better documentation, more readable diagrams, clearer presentations. They send engineers to storytelling workshops. They create Confluence spaces specifically for cross-team observations.
None of it works. Because the problem was never communication.
What I've seen across multiple organizations — from growth-stage SaaS to Fortune 500 enterprises — is something more structural. When an engineer surfaces a concern that crosses team boundaries, the organization doesn't process it as engineering evidence. It processes it as a territorial challenge. The messenger's competence isn't questioned. Their standing is. The implicit question isn't "is this technically accurate?" It's "who authorized you to look at our system?"
This is not a failure of individual managers or directors. It is a predictable outcome of how organizations build loyalty hierarchies. Henri Tajfel and John Turner described this mechanism in 1979 with Social Identity Theory (SIT) — the finding that humans automatically categorize themselves into in-groups and out-groups, and that group membership shapes how information is received. An observation from inside the group gets processed as insight. The same observation from outside the group gets processed as intrusion.
SIT wasn't developed to explain software organizations. But it explains software organizations with uncomfortable precision. When teams form strong internal identities — their codebase, their on-call rotation, their deployment pipeline — cross-boundary observations trigger the same in-group/out-group processing that Tajfel and Turner documented in minimal group experiments. The engineering content of the message doesn't change. The social processing of the message does.
This is why your best principal engineers keep raising the same architectural concerns, getting the same deflections, and eventually going quiet. They haven't stopped seeing the problems. They've learned that seeing the problems across boundaries carries a social cost that no amount of "psychological safety" posters will offset.
The Defensive Lexicon
Here is a diagnostic you can run on your own organization without commissioning a single study. Listen to the language your leaders use when a cross-team technical concern surfaces. Not the content of their response — the framing.
- "We need to respect team boundaries."
- "That's really a question for their team to investigate."
- "I'd want to make sure we're not stepping on toes."
- "Let's have them take a look at it from their side."
I call this the defensive lexicon, and I've heard every one of these phrases at organizations that genuinely believed they valued collaboration. The phrases sound reasonable. They sound mature. They sound like the kind of boundary-respecting, empowering language that a healthy organization would use. That is what makes them dangerous. They provide a socially acceptable mechanism for converting engineering evidence into a jurisdictional question — without anyone having to say "I don't want to hear this."
The tell is what happens after these phrases land. Does the technical concern get investigated on its merits? Or does it enter a social negotiation about who is allowed to investigate it, who needs to be "brought along," and whose feelings need to be managed before the data can be discussed? In every organization where I've tracked this pattern, the answer has been the second. The concern doesn't die. It goes underground. Engineers stop raising cross-boundary issues in formal channels and start raising them in DMs, in hallway conversations, in exit interviews.
And then the incident happens. And the postmortem reveals what everyone already knew.
Architecture Is Not Territory
The fundamental error is treating architecture as territory. When organizations assign team ownership of services — which is correct and necessary for operational clarity — they often make an unintended second assignment: they grant teams intellectual ownership of the architectural implications of their services. "My team owns this service, therefore my team owns the truth about how this service interacts with everything else."
This is architecturally illiterate. The entire point of distributed systems is that failure modes emerge from interactions between components, not from components in isolation. No single team can own the understanding of how their service behaves in the context of services they didn't build. That understanding belongs to the system. It requires cross-boundary observation by definition.
Conway's law tells us that organizations produce system designs that mirror their communication structures. The under-examined reality is this: when communication structures are gated by loyalty hierarchies, the system's actual architecture — the runtime behavior, the failure modes, the load patterns — diverges from the architecture any single team can see. The gap between what the system does and what any team believes the system does grows in direct proportion to how effectively cross-boundary observations are suppressed.
I've seen this gap widen until it becomes the primary source of production incidents. Not bad code. Not insufficient testing. Not inadequate monitoring. The monitoring was fine — within each team's boundary. The failures lived in the spaces between.
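The shared-pool failure mode from that architecture review is worth making concrete, because it shows why no single team's dashboard can catch it. Here is a minimal sketch, assuming a fixed-size connection pool shared by several services; the service names, pool size, and timings are hypothetical, not details from the incident described above:

```python
import threading
import time
from queue import Queue, Empty

# Hypothetical sketch: multiple services share one fixed-size pool.
POOL_SIZE = 5
pool = Queue(maxsize=POOL_SIZE)
for i in range(POOL_SIZE):
    pool.put(f"conn-{i}")

def service_call(name, hold_seconds, timeout=0.2):
    """Acquire a shared connection; return False if starved out."""
    try:
        conn = pool.get(timeout=timeout)
    except Empty:
        return False  # starved by a neighbor team's traffic
    try:
        time.sleep(hold_seconds)  # stand-in for real query work
        return True
    finally:
        pool.put(conn)

# A load spike in "billing" holds every connection long enough that
# "shipping" times out waiting -- a failure that looks healthy on
# billing's dashboard and inexplicable on shipping's.
spike = [threading.Thread(target=service_call, args=("billing", 1.0))
         for _ in range(POOL_SIZE)]
for t in spike:
    t.start()
time.sleep(0.05)  # let the spike drain the pool
shipping_starved = not service_call("shipping", 0.01)
for t in spike:
    t.join()
print("shipping starved:", shipping_starved)
```

Each team's own service behaves correctly in isolation; the starvation only exists in the interaction, which is exactly the space the platform engineer was told not to look at.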
What Actually Works: Governance as Social Translation
The answer is not cultural messaging. I've sat through the all-hands meetings. I've watched executives implore their organizations to "break down silos" and "think like one team." These messages have roughly the same half-life as New Year's resolutions. They make people feel good for a week. They change nothing structurally. SIT-driven behavior doesn't respond to aspirational messaging because it isn't conscious. You cannot tell people to stop processing cross-boundary observations as threats any more than you can tell them to stop flinching.
What works is structural intervention. Specifically, governance mechanisms that translate cross-boundary engineering observations out of the social frame and into an evidence frame before group identity processing can reclassify them.
Architecture Review Boards (ARBs) are the most common mechanism, but most ARBs I've encountered fail at exactly this function. They become presentation venues where teams showcase their own work for approval rather than forums where cross-boundary observations get investigated. The ARBs that work — and I've built several that did — share three structural properties.
First, they separate the observer from the observation. Findings get submitted as evidence artifacts — Architecture Decision Records (ADRs), dependency analyses, load models — not as presentations by the person who found them. The social identity of the messenger is decoupled from the engineering content of the message. This is not anonymity for its own sake. It is a deliberate structural choice to prevent SIT-driven processing from determining the outcome.
Second, they operate on dual trust systems. Engineering credibility (is this technically sound?) and executive credibility (does this matter to the business?) are evaluated on separate tracks. In loyalty-hierarchy organizations, these two forms of credibility are fused — you earn engineering credibility by having executive backing, and executive backing flows through loyalty chains. Effective ARBs break this fusion. A technically sound finding gets investigated regardless of whether the person who raised it reports to the right VP.
Third, they produce binding architectural guidance that applies across team boundaries, with explicit escalation paths when teams resist. The guidance doesn't ask teams to voluntarily coordinate. It establishes shared architectural constraints that exist above the team level, the way building codes exist above individual contractors. No one asks a contractor to voluntarily coordinate with the electrician. The code tells both of them where the load-bearing walls are.
These are not radical interventions. They are basic governance. The reason they feel radical is that most organizations have a governance gap at exactly the layer where cross-boundary architectural concerns live. They have team-level ownership (good) and executive-level strategy (also good) and nothing in between that can process engineering evidence that doesn't belong to any single team.
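The "building code" idea can even be enforced mechanically rather than socially. Here is a minimal policy-as-code sketch, assuming each service publishes a manifest of the infrastructure it touches; the manifests, resource names, and ownership registry are all hypothetical:

```python
from collections import defaultdict

# Hypothetical manifests -- in practice parsed from per-repo files.
manifests = {
    "orders":   {"uses": ["orders-db", "shared-conn-pool"]},
    "billing":  {"uses": ["billing-db", "shared-conn-pool"]},
    "shipping": {"uses": ["shipping-db", "shared-conn-pool"]},
}

# Resources with an owner registered above the team level.
governed_resources = {
    "orders-db": "orders",
    "billing-db": "billing",
    "shipping-db": "shipping",
}

def find_violations(manifests, governed):
    """Flag resources used by two or more teams with no cross-team owner."""
    users = defaultdict(list)
    for service, manifest in manifests.items():
        for resource in manifest["uses"]:
            users[resource].append(service)
    return {resource: teams for resource, teams in users.items()
            if len(teams) > 1 and resource not in governed}

violations = find_violations(manifests, governed_resources)
for resource, teams in sorted(violations.items()):
    print(f"UNGOVERNED SHARED RESOURCE: {resource} used by {sorted(teams)}")
```

A check like this runs in CI, so the question "who authorized you to look at our system?" never arises: nobody looked, the constraint did. The finding arrives as evidence from a governed mechanism, not as a cross-boundary observation from a person.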
The Cost You Are Already Paying
I want to return to that quiet room. The platform engineer who presented the dependency analysis was one of the strongest systems thinkers I'd worked with across several engagements. Within six months, she left. Not because anyone was cruel to her. Not because the culture was toxic in any dramatic way. Because she got tired of watching technically sound findings get converted into jurisdictional negotiations. She didn't file a complaint. She didn't make a speech. She just stopped raising things, and then she stopped showing up.
The database connection pool issue she'd flagged caused a cascading failure four months after she left. The postmortem was thorough. It identified the shared dependency. It recommended cross-team ownership of shared connection infrastructure. It used phrases like "improved communication" and "better visibility." Nobody mentioned that an engineer had surfaced the exact finding months earlier and been told, politely, to stay in her lane.
This is the cost. Not the incident — incidents happen. The cost is that your best people, the ones who see across boundaries, learn that seeing across boundaries is socially penalized. They don't become worse engineers. They become quieter ones. And then they leave. And you are left with a system that has removed its own awareness — the people who could see the failure modes — while keeping the fragility that produces them.
If your architecture reviews feel like territory negotiations, you don't have a communication problem. You have a governance gap where engineering evidence should be. The fix is not to tell people to collaborate harder. The fix is to build the structural mechanisms that make cross-boundary observation a governed engineering activity instead of a social risk.
The room should never be quiet in that particular way. Not if you want to keep the people who can hear what the silence means.
References
- Tajfel, H., & Turner, J.C. (1979). An integrative theory of intergroup conflict. In W.G. Austin & S. Worchel (Eds.), The Social Psychology of Intergroup Relations (pp. 33-47). Brooks/Cole.
- Conway, M. (1968). How do committees invent? Datamation, 14(4), 28-31.
Is your architecture governance converting evidence into territory?
If cross-boundary architectural concerns die in your organization before reaching the people who can act on them, the fix is structural — not cultural. I help technical leaders build governance mechanisms that make cross-team observation institutional rather than personal.