The Future of Responsible AI: Striking a Balance between Regulation and Innovation

Jun 21, 2023

Richard Bakker

ChatGPT has quickly become a household name, but other remarkable AI-powered applications have emerged in recent months with far less attention. As a photography and editing enthusiast, I was amazed by Adobe Firefly, which generates background images based on user input, and DragGAN, which enables outfit changes, facial expression alterations, and more. These advancements made me wonder what the future holds for AI technology. As with any innovation, however, the need for regulation is becoming increasingly apparent.

Advancements in AI Regulation

Significant strides have recently been made in AI regulation. In a previous blog post, I discussed the initial requirements outlined in the EU AI Act. Since then, the draft version of the act was approved last month, a substantial step toward a more regulated playing field. The act prohibits certain AI systems, such as real-time biometric identification, and is expected to receive full approval in 2024, followed by a two-year implementation period.

In addition to the EU's efforts, the US Senate recently held its inaugural AI hearing to discuss the regulation of AI. Prominent figures such as Sam Altman (OpenAI), Gary Marcus (Geometric Intelligence), and Christina Montgomery (IBM) testified. While attempts were made to draw analogies between AI and previous transformative technologies like the printing press, the internet, and social media, it was encouraging to witness a sense of urgency regarding control and risk mitigation. OpenAI co-founders Greg Brockman and Ilya Sutskever have emphasized the need for regulation, proposing an international regulatory body, similar to the International Atomic Energy Agency, to oversee AI development. In addition, AI experts and public figures expressed their concern about AI risk in a public statement.

FRISS's Commitment to Responsible AI

At FRISS, we have long advocated for responsible AI practices. Ensuring transparent, fair, and non-discriminatory AI models is ingrained in our core values. We firmly believe that AI should be used as a supportive tool within existing processes, promoting efficiency and accuracy while safeguarding against bias. We therefore apply the seven principles for trustworthy AI in our platform, based on the "Ethics Guidelines for Trustworthy AI" developed by the European Commission's High-Level Expert Group on AI:

1. Human agency and oversight
2. Technical robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination, and fairness
6. Societal and environmental well-being
7. Accountability
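To make a principle like "diversity, non-discrimination, and fairness" concrete, here is a minimal, hypothetical sketch of one widely used fairness check: the demographic parity difference, the gap in positive-prediction rates between two groups. The data, names, and threshold below are illustrative assumptions, not FRISS's actual tooling.

```python
# Illustrative sketch only: hypothetical data and names, not FRISS platform code.
# Demographic parity difference measures the gap in positive-prediction rates
# between two groups for a binary classifier (e.g., a fraud-flagging model).

def positive_rate(predictions, in_group):
    """Share of cases predicted positive within one group."""
    flagged = [p for p, g in zip(predictions, in_group) if g]
    return sum(flagged) / len(flagged)

def demographic_parity_diff(predictions, in_group):
    """Positive rate of group A (in_group=True) minus group B (in_group=False)."""
    rate_a = positive_rate(predictions, in_group)
    rate_b = positive_rate(predictions, [not g for g in in_group])
    return rate_a - rate_b

# Hypothetical model outputs (1 = flagged for review) and group membership.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
group = [True, True, True, True, False, False, False, False]

gap = demographic_parity_diff(preds, group)
print(f"Demographic parity difference: {gap:+.2f}")
# A value near 0.00 suggests both groups are flagged at similar rates;
# here the gap is +0.50, which would warrant further investigation.
```

A metric like this is only a starting point: in practice, fairness auditing also weighs base rates, error types, and the business context in which a flag is acted upon.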

In the coming months, we will delve deeper into some of these crucial aspects of responsible AI, offering our insights and perspectives.

The Future of Responsible AI

The future of responsible AI lies in striking a balance between innovation and regulation. As AI continues to evolve at a rapid pace, it is essential to monitor and regulate its implementation to prevent potential harm.

The approval of the draft EU AI Act and the US Senate AI hearings represent significant milestones in the journey toward responsible AI. By embracing responsible AI, we can unleash the full potential of this transformative technology while mitigating risks and creating a brighter future for all.

Download our eBook “Insurance Digital Transformation: Maximizing Opportunities Through Fraud Mitigation” to learn more about AI’s role in digitizing the insurance industry.