
AI, ChatGPT and Secure Working: What Organisations Need to Understand

  • Dec 12, 2025
  • 3 min read

Artificial intelligence tools such as ChatGPT, Copilot and other large language models (LLMs) are now widely used in the workplace. They are fast, accessible, and extremely effective at drafting text, summarising information and supporting decision-making.

However, there is growing confusion between AI tools and secure working environments - particularly in sectors where information sensitivity, professional judgement and liability matter.


Understanding the difference is critical.


AI tools are not secure workspaces


Tools like ChatGPT are best understood as thinking and drafting aids, not secure systems of record.


They are excellent for:

  • structuring ideas

  • drafting generic text

  • summarising non-sensitive information

  • improving clarity and consistency


They are not designed to be:

  • controlled evidence repositories

  • auditable professional systems

  • client-segregated workspaces

  • environments for handling sensitive operational detail


This distinction is often overlooked.


An AI chat interface may feel private, but it does not provide the access controls, audit trails, retention policies or evidential defensibility required for professional work.


AI is now embedded in everyday capture tools 


A growing category of AI-enabled productivity tools now combines hardware recording with automated transcription and summarisation. Devices and applications designed to capture meetings or conversations can be extremely useful for personal productivity, but they also illustrate how easily sensitive information can leave controlled environments.


When recordings or summaries are processed through external AI platforms, organisations may lose visibility over where data is stored, who can access it, how long it is retained, and whether it is reused or shared. In security, resilience and risk-management contexts, this is particularly important: conversations often contain operational detail, assumptions, vulnerabilities or decisions that were never intended to exist outside a secure system.


The issue is not the technology itself, but the absence of clear rules around when such tools are appropriate, and when they are not.


Where the real risks arise


The primary risks do not come from the AI models themselves - they come from how the models are used.


Common failure points include:

  • copying sensitive client data into AI tools

  • pasting site layouts, vulnerabilities or incident details into chat interfaces

  • treating AI outputs as professional judgement

  • blurring the line between drafting support and decision-making


“Industry security guidance emphasises that organisational culture, process and communication are as crucial as technical controls in securing AI use.” (NCSC guidance: https://www.ncsc.gov.uk/guidance/ai-and-cyber-security-what-you-need-to-know)


In security risk management and resilience, these risks matter. Once sensitive information leaves a controlled environment, it cannot be “un-shared”.
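

As a rough illustration of the kind of guardrail that helps, the sketch below shows a simple pre-submission check that flags obviously sensitive text before it is pasted into an AI tool. The patterns, category names and example client ID format are illustrative assumptions only - they are not a complete policy, and not a description of any particular product or of the checks we run internally.

import re

# Illustrative patterns only - a real rule set would be defined and owned by
# the organisation's security or data-governance function, not hard-coded.
SENSITIVE_PATTERNS = {
    "client_reference": re.compile(r"\bCLIENT-\d{4,}\b"),  # hypothetical client ID format
    "site_detail": re.compile(r"\b(site plan|access route|cctv blind spot)\b", re.IGNORECASE),
    "incident_detail": re.compile(r"\b(incident report|near miss|security breach)\b", re.IGNORECASE),
}

def check_before_sending(prompt_text: str) -> list[str]:
    """Return the policy categories matched in a draft prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt_text)]

draft = "Summarise the incident report for CLIENT-20417, including the CCTV blind spot at Gate 3."
matches = check_before_sending(draft)
if matches:
    # Do not send - route the work into a secure, governed system instead.
    print("Blocked before sending. Matched categories:", matches)

A check like this is deliberately blunt. Its value is that the decision to withhold information happens before anything reaches an external service, not after.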


Secure tooling is about control, not convenience


A secure working environment is defined by controls, not cleverness.


Proper secure tooling provides:

  • identity-based access

  • client segregation

  • audit logs (who accessed what, and when)

  • retention and deletion controls

  • enforceable governance


This is why sensitive information should always be handled in secure document environments, not AI chat tools.


AI can sit around secure tooling - but should not sit inside it.
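

To make that distinction concrete, here is a minimal sketch in Python of what "controls, not cleverness" can look like: every read of a client document is tied to an identity, checked against client segregation rules, and written to an audit trail. The class and field names are assumptions for illustration - real secure tooling delivers these controls within a governed platform, not an ad-hoc script.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    user: str
    client: str
    document: str
    action: str
    timestamp: str

@dataclass
class SecureDocumentStore:
    """Sketch of a store with identity-based access, client segregation and an audit trail."""
    permissions: dict[str, set[str]]  # user -> clients that user is cleared for
    audit_log: list[AuditEvent] = field(default_factory=list)

    def read(self, user: str, client: str, document: str) -> None:
        if client not in self.permissions.get(user, set()):
            self._record(user, client, document, "DENIED")
            raise PermissionError(f"{user} is not cleared for {client}")
        self._record(user, client, document, "READ")
        # ...the document would be returned from controlled storage here...

    def _record(self, user: str, client: str, document: str, action: str) -> None:
        self.audit_log.append(AuditEvent(user, client, document, action,
                                         datetime.now(timezone.utc).isoformat()))

store = SecureDocumentStore(permissions={"assessor.a": {"client-alpha"}})
store.read("assessor.a", "client-alpha", "site-risk-register.docx")
print(store.audit_log[-1])  # who accessed what, and when

The detail matters less than the principle: access, segregation and audit are properties the system enforces, not behaviours it hopes for - and none of them exist in a consumer AI chat window.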


A sensible, professional approach to AI


The safest and most effective organisations are not banning AI - they are using it deliberately.


A sensible approach includes:

  • clearly defining what data must never be used with AI

  • limiting AI use to drafting and efficiency, not judgement

  • separating evidence handling from AI-assisted writing

  • ensuring all outputs are reviewed and approved by professionals - keeping the human in the loop

  • retaining clear accountability for decisions


In practice, this means using AI to support thinking and productivity, while keeping sensitive information within secure, governed systems.
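

One practical step is to write these rules down somewhere both people and tooling can refer to them. The sketch below is a hedged illustration, assuming a small in-house policy object in Python - the category names, field names and review rule are assumptions made for the example, not a prescribed standard.

from dataclasses import dataclass

@dataclass(frozen=True)
class AIUsagePolicy:
    """Illustrative only - the categories and rules here are example values, not a standard."""
    never_share: tuple[str, ...] = ("client evidence", "site-specific vulnerabilities", "incident detail")
    permitted_uses: tuple[str, ...] = ("structuring ideas", "drafting generic text", "improving clarity")
    human_review_required: bool = True  # keep the human in the loop

def release_output(draft: str, reviewed_by: str | None, policy: AIUsagePolicy) -> str:
    """An AI-assisted draft is only released once a named professional has approved it."""
    if policy.human_review_required and not reviewed_by:
        raise ValueError("AI-assisted draft cannot be released without professional review")
    return f"{draft}\n\nReviewed and approved by: {reviewed_by}"

policy = AIUsagePolicy()
print(release_output("Draft resilience briefing...", reviewed_by="J. Smith", policy=policy))

The useful part is the gate: nothing AI-assisted leaves the organisation without a named reviewer attached to it.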


How State2 Security approaches AI


At State2 Security, we recognise the value AI can bring when used responsibly.


We use AI tools selectively to:

  • improve drafting efficiency

  • enhance clarity and consistency

  • support internal planning and communication


AI is never permitted to:

  • access client evidence

  • analyse site-specific vulnerabilities

  • replace professional judgement


All sensitive information is handled within secure, access-controlled systems, and all outputs are reviewed and owned by qualified security professionals.


The bottom line


AI is a powerful tool, but it is not a secure workspace.


GPTs are best understood as thinking and drafting aids, not secure systems of record.


Secure work tooling is an engineered system with controls, assurance, and defensibility.


Organisations that understand this distinction are far better placed to gain the benefits of AI without introducing unnecessary risk.


In security, resilience and risk management, the goal is not to be the most technologically loud; it is to be the most reliable.


FAQs


1. Is using ChatGPT inherently unsafe?

No - AI tools add efficiency, but they are not secure workspaces for sensitive information.


2. How should organisations govern AI?

Through clear policies, access controls and professional review processes.


3. What’s the difference between AI tools and secure systems?

AI tools are drafting aids; secure systems provide controlled evidence and auditability.




