
Google introduces principles for responsible AI

Google has proposed a set of principles for the more responsible development of AI solutions. With them, the tech giant aims to contribute to the ethical debate about what AI makes possible.

With the rise of AI tools such as ChatGPT, GPT-4 and Bard, the discussion about their ethical use has intensified considerably, including warnings about the potential risks these services may pose. Calls for laws and regulations governing the new AI capabilities are therefore growing.

Google and ethics around AI

In a blog post, Google argues that this discussion cuts both ways. On the one hand, the dangers and risks of the new services must be addressed. On the other, it should be kept in mind that these services also bring many benefits, for example in predicting disasters or developing highly effective medicines.

Google also notes that AI and the tools built on it are still an emerging technology. This means that mapping risks and opportunities extends “beyond mechanical programming rules to training models and determining outcomes.”

Google acknowledges that the ethical development and use of AI do require regulation. To that end, the tech giant says it is already working with several organizations.

Tip: The jobs most at risk from generative AI like ChatGPT

Principles for risk mitigation

The tech giant is now presenting principles that regulation should follow in order to minimize the risks surrounding AI and to ensure proper accountability for the technology.

First, new rules for AI should build on existing laws and regulations. Google notes that a great deal of regulation already applies in this area, including rules for privacy, security and other issues surrounding the use of AI applications.

Second, there should be a proportionate, risk-based framework focused on specific applications. AI is a general-purpose technology used for many different tasks, which, according to Google, calls for more customization and for responsibility distributed across developers, deployers and end users.

Third, interoperable practices for AI standards and governance should be encouraged, within an international framework.

Expectations, interoperability and transparency

As a fourth principle, Google states that the same expectations should apply to AI-based and non-AI-based systems alike. Even imperfect AI systems can still improve on existing processes.

As a fifth and final principle, Google states that AI developers should be transparent about their work and the capabilities of their solutions. This builds trust and enables end users to get the most out of these developments.

As examples of AI frameworks that already embrace many of these principles, Google cites the AI Risk Management Framework of the U.S. National Institute of Standards and Technology (NIST) and the AI Principles and AI Policy Observatory of the OECD, the intergovernmental organization for economic cooperation and development.

Also read: “AI model development should be paused for security measures”