The U.S. Department of Commerce requires makers of the most advanced AI models to share critical details about those models with the government. Companies that provide the infrastructure to train such models must disclose key details as well. The government wants insight into capabilities, security measures and related issues to get a better handle on the safe development of AI.
The Biden administration’s new rules focus on very large AI models, which require huge amounts of computing power. For example, it is mandatory to report the existence of any model trained using more than 10²⁶ integer or floating-point operations, or any model trained on biological data using more than 10²³ operations.
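For a sense of scale, here is a minimal back-of-the-envelope sketch in Python. It assumes the widely used "compute ≈ 6 × parameters × training tokens" rule of thumb for dense transformer training, and an entirely hypothetical model size; neither figure comes from the rules themselves.

```python
# Back-of-the-envelope check against the 10^26-operation reporting threshold.
# Uses the common "ops ~= 6 * parameters * tokens" rule of thumb for training
# a dense transformer -- an approximation, not part of the regulation.

REPORTING_THRESHOLD_OPS = 1e26

def estimated_training_ops(n_parameters: float, n_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6 * n_parameters * n_tokens

# Hypothetical frontier-scale run: 2 trillion parameters, 20 trillion tokens.
ops = estimated_training_ops(2e12, 20e12)
print(f"Estimated training compute: {ops:.1e} operations")
print("Reportable:", ops > REPORTING_THRESHOLD_OPS)  # 2.4e26 > 1e26 -> True
```

Only a handful of today's largest training runs approach that order of magnitude, which is exactly the point: the threshold is meant to catch frontier models, not everyday ones.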
In addition, companies providing AI infrastructure must hand over information when their systems can reach network speeds of more than 300 gigabits per second or perform more than 10²⁰ operations per second. In practice, that means an infrastructure of many tens of thousands of GPUs, if not a hundred thousand.
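To see where that "hundred thousand GPUs" figure comes from, here is a minimal sketch assuming roughly 10¹⁵ operations per second of peak throughput per accelerator, on the order of a current data-center GPU. The per-chip number is an illustrative assumption, not something the rules specify.

```python
# What the 10^20 ops/s cluster threshold implies about hardware scale.
# Assumes ~1e15 ops/s of peak throughput per accelerator (roughly a
# modern data-center GPU) -- an illustrative figure only.

CLUSTER_THRESHOLD_OPS_PER_S = 1e20
PER_GPU_OPS_PER_S = 1e15  # assumed peak throughput of one accelerator

gpus_needed = CLUSTER_THRESHOLD_OPS_PER_S / PER_GPU_OPS_PER_S
print(f"Accelerators needed to reach the threshold: {gpus_needed:,.0f}")
# -> 100,000, consistent with "many tens of thousands of GPUs"
```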
Harmful purposes
Among other things, developers must provide information about the capabilities of their models, the security measures protecting them, and what security tests (such as red-teaming) have been performed to probe for vulnerabilities to hacking or misuse.
One of the biggest concerns is that malicious parties could misuse advanced AI for harmful purposes, such as cybercrime, or even to help develop biological, chemical, or nuclear weapons.
These rules build on temporary guidelines that took effect last year after an executive order on AI security. Because AI development is happening at breakneck speed, the government is trying to keep up with the big tech companies by instituting rules intended to rein in the industry’s biggest players, at least somewhat.