OpenAI has announced Safety Gym, a suite of environments and tools for training AI models through reinforcement learning. Reinforcement learning trains AI models by means of rewards and punishments.
A number of companies, including Intel's Mobileye and Nvidia, have proposed frameworks to ensure safe and logical decision making by AI models. The U.S. company OpenAI has now devised Safety Gym, a collection of tools for developing AI models under certain safety constraints during training. It also makes it possible to compare how safe different algorithms are and to what extent they avoid errors while learning, writes VentureBeat.
OpenAI has devised a new variant of this learning by punishment and reward. It imposes constraints that limit the AI, but at the same time give it a greater degree of safety. Models for self-driving vehicles, for example, can be made significantly safer this way. OpenAI calls the approach 'constrained reinforcement learning', and according to the company it is a step towards much safer artificial intelligence.
The company explains the approach in a blog post, using a model for autonomous vehicles as an example: “In normal reinforcement learning, you would pick the collision fine at the beginning of training and keep it fixed forever. The problem here is that if the pay-per-trip is high enough, the agent may not care whether it gets in lots of collisions (as long as it can still complete its trips). [With] constrained reinforcement learning, you would pick the acceptable collision rate at the beginning of training, and adjust the collision fine until the agent is meeting that requirement.”
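The adjustment loop described in the quote can be sketched in a few lines of Python. This is a toy illustration, not OpenAI's actual implementation: `collision_rate` is a hypothetical stand-in for measuring a trained agent's collision rate at a given fine, and the fine is raised or lowered in the style of a Lagrange multiplier update until the target rate is met.

```python
def collision_rate(fine):
    # Hypothetical stand-in for the measured collision rate of an agent
    # trained with this fine: higher fines make the agent more cautious.
    return 1.0 / (1.0 + fine)

def tune_fine(target_rate, lr=5.0, steps=2000):
    """Adjust the collision fine until the agent meets the target rate."""
    fine = 0.0
    for _ in range(steps):
        rate = collision_rate(fine)
        # Raise the fine while the constraint is violated, lower it
        # otherwise, keeping the fine non-negative (a Lagrange-multiplier
        # style update).
        fine = max(0.0, fine + lr * (rate - target_rate))
    return fine

fine = tune_fine(target_rate=0.05)
```

In a real training run, each evaluation of `collision_rate` would involve training or rolling out the agent under the current fine, so the update is interleaved with learning rather than run as a standalone loop.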