
Google has developed a new module for its TensorFlow Privacy toolkit. With this module, machine learning models built with TensorFlow can be tested more effectively for privacy risks.

The privacy of AI models is still a major topic of discussion among developers, according to Google. There are no clear guidelines yet on how to build a fully private AI model. As a result, AI models can still leak information from their training datasets and therefore pose a threat to the privacy of that data.

Adding ‘noise’

To address this, Google previously developed TensorFlow Privacy, a tool based on 'differential privacy' that aims to prevent data leaks during the training of AI models as much as possible. By adding 'noise' during training, individual training examples are 'hidden', which makes it difficult to 'eavesdrop' on the data the model was trained on.
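For illustration, the sketch below shows roughly what differentially private training looks like with TensorFlow Privacy's DP Keras optimizer. The model, dataset and hyperparameter values are assumptions for the sake of the example, and import paths can differ between library versions.

```python
# Minimal sketch of differentially private training with TensorFlow Privacy.
# Model shape and hyperparameter values are illustrative assumptions.
import tensorflow as tf
import tensorflow_privacy

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

# The DP optimizer clips each example's gradient and adds Gaussian noise;
# this is the 'noise' that hides individual training examples.
optimizer = tensorflow_privacy.DPKerasSGDOptimizer(
    l2_norm_clip=1.0,        # maximum gradient norm per example
    noise_multiplier=1.1,    # noise added, relative to the clip norm
    num_microbatches=256,    # must evenly divide the batch size
    learning_rate=0.15,
)

# Per-example losses are needed so gradients can be clipped per example.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=..., batch_size=256)
```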

The disadvantage of this technique is that it is designed for assumed worst-case scenarios and can significantly reduce the accuracy of AI models. In addition, it has turned out that attackers can mount attacks on AI training data far more easily than expected.

Membership inference attacks

So-called classifiers, a common type of machine learning model, can also be targeted with membership inference attacks. These attacks are cheap, easy to carry out and require little knowledge of the AI model being attacked.

With these types of attacks, hackers can predict whether a piece of data was used during training. If they make these predictions accurately, they are likely to be able to identify data used in the training set, which in turn constitutes an invasion of the privacy of that data.
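To make the idea concrete, the sketch below shows the simplest form of such an attack: a threshold on the model's confidence. The function names and the threshold value are hypothetical; this is not Google's implementation.

```python
# Illustrative sketch of a threshold-based membership inference attack.
# Names and the threshold value are hypothetical.
import numpy as np

def confidence(model, x):
    """Highest predicted class probability per sample.
    Assumes the model outputs class probabilities (softmax)."""
    probs = model.predict(x)          # shape: (n_samples, n_classes)
    return probs.max(axis=1)

def guess_membership(model, x, threshold=0.9):
    """Guess 'was in the training set' when the model is very confident.
    Overfitted models tend to be more confident on their training
    examples, which is exactly the signal this attack exploits."""
    return confidence(model, x) >= threshold
```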

Vulnerability score module

The new TensorFlow Privacy module should help developers test their machine learning models for vulnerabilities and potential leaks of privacy-sensitive training data. The module uses only the output of the AI model, not its internals (weights) or input samples. The test produces a vulnerability score that indicates whether the model in question is leaking data from the training set.
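The sketch below shows roughly how such a test can be run with the library's membership inference attack API. The import paths and field names are assumptions and have changed between versions of TensorFlow Privacy, and the logits and labels arrays are placeholders collected from a trained model.

```python
# Sketch of running TensorFlow Privacy's membership inference test on model
# outputs only. Import paths may differ between library versions; the
# logits_* and labels_* arrays are placeholders.
from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack import (
    membership_inference_attack as mia)
from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack.data_structures import (
    AttackInputData, AttackType, SlicingSpec)

# Model outputs on training and held-out data (no weights or inputs needed).
attack_input = AttackInputData(
    logits_train=logits_train, logits_test=logits_test,
    labels_train=labels_train, labels_test=labels_test)

results = mia.run_attacks(
    attack_input,
    SlicingSpec(entire_dataset=True, by_class=True),
    attack_types=[AttackType.THRESHOLD_ATTACK,
                  AttackType.LOGISTIC_REGRESSION])

# Summarises how vulnerable the model is to membership inference.
print(results.summary(by_slices=True))
```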

The results should help developers build more private AI models, improve the search for the right model architecture, and apply mitigation techniques such as early stopping, dropout, weight decay and input augmentation, or decide to collect more data.
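As a rough illustration, the sketch below applies these mitigation techniques using standard Keras features; the layer sizes and parameter values are illustrative assumptions.

```python
# Sketch of the mitigation techniques mentioned above, using standard Keras
# features. Layer sizes and parameter values are illustrative assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    # Input augmentation: random transformations of the training images.
    tf.keras.layers.RandomFlip("horizontal", input_shape=(32, 32, 3)),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.Flatten(),
    # Weight decay via an L2 penalty on the layer weights.
    tf.keras.layers.Dense(256, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    # Dropout randomly disables units during training.
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10),
])

# Early stopping halts training once validation loss stops improving,
# which limits overfitting and thus memorisation of training examples.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)

# model.fit(x_train, y_train, validation_split=0.1, callbacks=[early_stopping])
```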

Better architectures and privacy-by-design

Ultimately, according to Google, this should lead to better architectures based on privacy-by-design and to better choices about how data is processed. In the near future, the tech giant wants to develop the new TensorFlow Privacy module further to protect against membership inference attacks beyond the training sets.