
Director of cybersecurity agency uploaded sensitive documents to ChatGPT

Last summer, the acting director of the US cybersecurity agency CISA entered sensitive government documents into a public version of ChatGPT. The uploads triggered several internal security alerts and prompted an investigation into the damage within the Department of Homeland Security (DHS).

This was reported by Politico, based on sources within DHS. The documents in question were CISA contract documents marked "for official use only". Although the files were not classified, applicable rules do not allow them to be shared publicly. Security systems within CISA detected the uploads in August and repeatedly raised alerts to prevent possible data leaks or the unintentional dissemination of sensitive information.

It is noteworthy that the acting director had received explicit permission in advance to use ChatGPT. That permission was an exception, as the AI service is blocked by default for other DHS employees. This is precisely what makes the incident sensitive: the leadership of a cybersecurity organization is expected to handle internal guidelines with particular care.

AI important for government modernization

After the detection, DHS launched an internal investigation to determine whether the uploads affected the security of federal systems. The results of this investigation have not been made public. CISA responded that the use of ChatGPT was limited and temporary, and that it occurred under established conditions. The agency emphasized the importance of AI for government modernization, in line with the Trump administration’s policy to accelerate its use across federal organizations.

Ars Technica reports that the timing of the publication coincides with increasing political and administrative pressure on CISA’s leadership. The acting director was recently questioned by the US Congress about previous staff reductions and the organization’s overall internal preparedness. In that context, the ChatGPT incident takes on added significance, as it fits into a broader picture of concerns about governance and risk management.

According to Ars Technica, the internal investigation into the uploads began last summer, and CISA declined to confirm whether it has been completed. This lack of clarity raises questions about transparency and may explain why the incident became public only months later, through media reports.