The wrong question every safety team is asking about AI
"Which AI is best?" is the wrong place to start. Strategy comes first: what problem are we actually solving? Only then come data, workflows, and transformation.
👋 Hello, Lucas Domingues here and welcome to the 6th edition of Safety 4.0 Insights in 2026. As AI in EHS content floods the internet, true signal is becoming rare. This newsletter is where we keep it real: human insight, SafetyTech clarity, and practical AI for EHS leaders. Thanks for helping make it a leading publication.
🎉 Special congratulations to the Safety 4.0 Accelerator cohort from May. Global leaders from ABB, Bard Pharmaceutical, Bureau Veritas, Marsh McLennan and more. Special thanks to Mark Taylor and Matthew Hart.

🎓 Now is a great time to learn and upskill in AI, safetytech and digital transformation. Here's how I can help you:
- Applications are OPEN for the June Cohort (starting on June 2nd) of the Safety 4.0 Accelerator (IOSH approved and CPD certified). A global 5-week online interactive programme designed by and for senior EHS Leaders to build digital fluency and readiness in the digital age. [Special discount for groups or in-company.]
Let's go 🚀
Walk into any EHS team in 2026 and you will find at least one of three tools open on someone's screen: ChatGPT, Claude, or Microsoft 365 Copilot. Often all three. What you will rarely find is a clear strategy and policy explaining when to use which or whether the data being typed in should be there at all.
This edition is not a feature comparison. It is a practitioner's decision framework built on four sequential questions: the strategy, the data, the workflows, and how safely the model itself is built.
Three different products, three different jobs
Despite being lumped together as "AI tools," the three products are built for different jobs, and that difference is exactly what should drive your choice.
- ChatGPT is OpenAI's chat product, powered by the GPT-5 family. A generalist with web search, image generation, and code execution.
- Claude is Anthropic's assistant, with Opus 4.7 as its most capable model. A one-million-token context window holds several long technical documents in working memory at once. Like ChatGPT, the consumer version is a public service unless deployed through an enterprise channel.
- Microsoft Copilot is productivity software with AI bolted in. It runs inside your Microsoft 365 tenant, respects existing permissions and sensitivity labels, applies enterprise data protection by default, and does not train foundation models on your prompts.
The first two are general-purpose intelligences. The third is an enterprise-grounded assistant. Treating them as interchangeable is a mistake.
AI-native EHS platforms
Generalist AI is not the only path. A growing category of AI-native EHS platforms embeds models directly into incident reporting, observation capture, audit workflows, and leading-indicator analytics. This path is worth considering, with several vendors already shipping strong solutions into the market.
Before "which tool," ask "what problem"
The most expensive mistake in EHS AI adoption is not picking the wrong tool. It is picking any tool before agreeing on what problem you are hoping to solve.
Strategy is the essential part most teams skip because they are not prepared to develop one. The hype is real, and it only creates noise.
EHS leaders often jump from "AI is here" to "let's get licences" without defining the outcome the technology is meant to produce. The result is predictable: pilots that go nowhere and a quiet retreat to old habits six months later.
A problem-fit AI strategy in EHS answers four questions, in order:
- what outcome are we trying to change (not "use AI"; that is an input);
- what is the friction in the current process (AI magnifies systems that work and exposes systems that don't);
- who owns the outcome (without a named owner with authority to change process, it is a demo); and
- what does "good" look like in measurable terms (tied to indicators the business already tracks).
"AI is a leverage tool. It magnifies a system that already works, and exposes a system that doesn't."
Only once those four are settled does the question of which tool become useful. Data, workflow fit, and model safety are filters applied to a defined problem. Applied to an undefined one, they just produce a more rigorous form of confusion.
➡️ Try it: Complete this sentence first: "We are using AI to [outcome] by [changing what in the process], owned by [name], measured by [indicator]." If you cannot complete it, the problem is not yet defined.
Once strategy is set, ask "which data"
Most poor AI choices in EHS start the same way: someone copies an incident report or RIDDOR draft into a public chat window. The output may look good. The data exposure is not. ICO guidance is clear: organisations remain accountable for personal data processed through generative AI, and putting identifiable information into a public consumer tool can constitute an unauthorised disclosure.
Microsoft 365 Copilot runs inside the Microsoft 365 service boundary, so prompts inherit your tenant's contractual protections. ChatGPT and Claude offer enterprise tiers with similar protections, but those tiers are a procurement decision, not the default.
Once data is safe, fit becomes the lever
Each tool has genuine strengths. Claude is currently my tool of choice. Its reasoning is next-level, I love that I can build "skills" into it, and it is the tool I trust most for nuanced safety writing, where tone and accuracy both matter. That preference is not a ranking. ChatGPT has its merits too, and Copilot is a strong fit for enterprises on the Microsoft suite because data stays inside your tenant by default. From a risk and governance perspective, it has earned the trust of most enterprises.
How safely is the model itself built?
The last filter most EHS teams skip: how seriously the vendor takes the safety of the model itself. The Future of Life Institute's AI Safety Index grades developers across six domains: risk assessment, current harms, safety frameworks, existential safety, governance, and information sharing [6]. In the Winter 2025 edition, Anthropic and OpenAI tied at C+, with Google DeepMind close behind. Every other developer scored D or below; no company scored above D on existential safety.

Strategy first, then the roadmap
The teams getting real value from AI in EHS are not the ones with the fanciest licences. They are the ones who answered "what problem are we solving" before "which tool should we buy and roll out" and then built a roadmap with clear implementation milestones, ownership and project management.
🙏 Thank you for reading!
Have a great day, stay safe, stay ahead.
Lucas
References
1. OpenAI. Enterprise privacy at OpenAI.
2. Anthropic. What's new in Claude Opus 4.7. Claude API Documentation, 2026.
3. Microsoft. Microsoft 365 Copilot architecture and how it works. Microsoft Learn, 2026.
4. Microsoft. Enterprise data protection in Microsoft 365 Copilot and Microsoft 365 Copilot Chat. Microsoft Learn, 2026.
5. Information Commissioner's Office (ICO). Guidance on AI and data protection.
6. Future of Life Institute. AI Safety Index — Winter 2025. Released December 2025.
7. Microsoft. Data, Privacy, and Security for Microsoft 365 Copilot. Microsoft Learn, 2026.
Transparency Note: AI (Claude Opus 4.7) was used whilst curating parts of this edition. All opinions are my own.