Nocelion uses artificial intelligence tools in our practice. We're publishing this page because we believe that describing AI usage honestly — with specificity, not boilerplate — is the minimum standard of integrity we expect from the organizations we advise. Leading by example starts here.
What AI Does in Our Work
Content development. AI assists with research synthesis, structural drafts, and editing of articles, frameworks, and published guides. All published content reflects Jon Hathaway's analysis, judgment, and voice. AI does not determine positions, conclusions, or recommendations — it assists in expressing and structuring them.
Research and analysis. AI tools accelerate literature review, data synthesis, and summarization across technical, regulatory, and business domains. All significant claims in published content are sourced and reviewed by a human before publication.
Client communication. AI may assist in drafting initial versions of written communications — proposals, follow-ups, reports. Every client-facing deliverable is reviewed, edited, and approved by Jon Hathaway before delivery. That human judgment is not optional.
Assessment tooling. Nocelion's Meridian AI Readiness platform uses AI to analyze SDLC artifacts and surface structured signals about delivery coherence. Meridian's agent architecture is on-premise by design — your source code and organizational data do not leave your network. The interpretation and recommendations from any Meridian engagement are delivered by Jon Hathaway, not generated autonomously by the platform.
What AI Does Not Do in Our Work
AI does not issue strategic recommendations without human review. No Nocelion recommendation — on vendor selection, technology strategy, organizational structure, or AI adoption — is produced or delivered by an AI system acting autonomously. All recommendations carry Jon Hathaway's name and judgment behind them.
AI does not generate client-facing analysis without human validation. Where AI tools assist in analysis, outputs are reviewed against source material, validated for accuracy, and edited for context before they reach a client.
AI does not replace the expertise you're engaging. Nocelion's value proposition is 25+ years of hands-on enterprise technology leadership, including Fortune 500-scale execution, crisis incident response, and vendor negotiation. AI tools make that expertise more efficient to apply. They don't substitute for it.
Our Governance Standard
We hold our AI usage to the same standard we recommend in client engagements: human judgment governs every material output.
If a piece of analysis, a strategic recommendation, or a client deliverable would change based on human review — and it often does — then human review is not optional. The moment AI output is treated as final without human validation, the accountability chain breaks. We don't break it.
This means our AI-assisted work is sometimes slower than it could be if we simply accepted first-pass outputs. That's intentional.
The AI Providers We Use
Nocelion uses AI services from three providers: Anthropic (Claude), OpenAI (GPT models), and Google (Gemini, via Vertex AI). We use each through its commercial API or enterprise tier, not through consumer accounts.
This distinction matters for data handling. All three providers explicitly commit, under their commercial API and enterprise terms, that customer data processed through those tiers is not used to train or improve their foundation models by default. To be precise: when we use these services, the content of our requests is transmitted to the provider's infrastructure to fulfill each API call. What does not happen is retention of that data for use in model-training pipelines. Anthropic retains commercial API logs for 7 days before deletion. OpenAI and Google maintain similar commitments under their business terms.
In practical terms: no Nocelion work product and no client data processed through our AI tools is used to train frontier models.
Nocelion does not use consumer-tier accounts (ChatGPT Free/Plus, Claude Free/Pro, Gemini consumer) for any client-related work. Those tiers carry different data handling terms and are not appropriate for professional advisory work involving client context.
Internal and Engagement-Specific Models
Beyond third-party AI services, Nocelion develops and trains models internally — both general-purpose models built for recurring use cases across our practice, and purpose-built models trained specifically for individual client engagements.
Data isolation is absolute. Client data is never mixed across engagements. When a model is built for a specific engagement, it is trained exclusively on data from that engagement. It is not used to train models deployed for other clients, and it is not incorporated into Nocelion's general-purpose model library.
At engagement close, all client data is deleted — including the source data and any models derived from it. No residual copy is retained in any form. This applies to raw data, processed artifacts, and any model weights trained on client-specific inputs.
What Nocelion does retain is our own expertise. Every engagement sharpens our analytical approaches, solution patterns, and advisory judgment. The responses we develop, the frameworks we apply, and the solutions we construct become part of how we practice — and they inform the ongoing development of our internal consulting models over time.
This is exactly how every consulting practice operates. McKinsey gets better at turnarounds by doing turnarounds. A restructuring attorney gets sharper with every chapter filing. An experienced CTO advisor gets better at diagnosing organizational dysfunction by having diagnosed it many times before. The practitioner learns; the client's confidential information stays confidential. Nocelion operates under the same principle — with the added specificity that the data boundary is enforced at the model level, not just as a matter of professional discretion.
To be explicit: if Nocelion develops a solution framework for your engagement, the methodology sharpens our practice. The data, organizational context, and specifics that came from your organization are deleted at close and play no further role.
Why We're Publishing This
The organizations that use AI without disclosing it — or disclose it only in legal fine print — are making a bet that opacity is safer than transparency. We think that bet is wrong, both ethically and strategically.
If a client later discovers that AI contributed to work we represented as fully human-generated, that's a trust problem we created. If we use AI and tell clients, we create an auditable standard we're accountable to. We prefer the accountability.
We also work with mid-market leaders who are actively evaluating AI adoption decisions. If we can't demonstrate what responsible AI governance looks like in our own practice, we have no credibility advising it in yours.
Questions
If you want to understand specifically how AI was used in any Nocelion engagement or deliverable, ask. We'll tell you.