The EC has issued a directive to staffers not to use Generative AI services for “critical” work.

This week the European Commission issued a set of internal guidelines for its staff regarding the use of Generative AI tools such as ChatGPT and Bard.

The document, titled “Guidelines for staff on the use of online available generative Artificial Intelligences tools”, and its accompanying note were seen by POLITICO. Both were made available via the Commission’s internal information system.

According to the document’s introduction, its purpose is “to help staff members assess the risks and limitations of online available generative Artificial Intelligence (AI) tools and set conditions for their safe use in working activities of the Commission”.

The accompanying note reads: “The guidelines cover third-party tools publicly available online, such as ChatGPT. They aim at assisting European Commission staff in understanding the risks and limitations that online available tools can bring and support in appropriate usage of these tools”.

“Assessing the risks and limitations” of AI

The first risk outlined in the document is the disclosure of sensitive information or personal data to the public. The guidelines note that any input provided to an online generative AI model is transmitted to the AI provider, which may then use that information to shape future outputs made available to the public.

Naturally, the EU wants to prevent that from becoming an issue. EC staff are therefore forbidden from sharing “any information that is not already in the public domain, nor personal data, with an online available generative AI model.”

Staff must also be aware that the AI’s responses might be inaccurate or biased. In addition, they should consider whether the AI might be violating intellectual property rights.

Most importantly, workers should never “cut and paste” AI-generated output into official documents.

Finally, staff are told to avoid using AI tools when working on “critical and time-sensitive processes.”

The guidelines also make an exception for the Commission’s own AI services: “Discussed risks and limitations are not necessarily relevant for internally developed generative AI tools from the Commission. Internal tools developed and/or controlled by the Commission will be assessed case by case under the existing corporate governance for IT systems.”