The Future of Responsible AI: Striking a Balance between Regulation and Innovation

As we stand today, ChatGPT has quickly become a household name, but other remarkable AI-powered applications have emerged in recent months, often with less attention. As a photography and editing enthusiast, I was amazed by what I saw in Adobe Firefly, which generates background images based on user input, and DragGAN, which enables outfit changes, facial expression alterations, and more. These advancements made me wonder what the future holds for AI technology. However, as with any innovation, the need for regulation is becoming increasingly apparent.

Advancements in AI Regulation 

Significant strides have recently been made in AI regulation. In a previous blog post, I discussed the initial requirements outlined in the EU AI Act. Notably, the draft version of the act was approved last month, a substantial step toward a more regulated playing field. The act prohibits certain AI systems, such as real-time biometric identification, and is expected to receive full approval in 2024, followed by a two-year implementation period.

In addition to the EU’s efforts, the US Senate recently held its inaugural AI hearing, aiming to discuss the regulation of AI. Prominent figures such as Sam Altman (OpenAI), Gary Marcus (Geometric Intelligence), and Christina Montgomery (IBM) testified during the hearing. While attempts were made to draw analogies between AI and previous transformative technologies like the printing press, the internet, and social media, it was encouraging to witness a sense of urgency regarding control and risk mitigation. OpenAI co-founders Greg Brockman and Ilya Sutskever emphasized the need for regulation, proposing the establishment of an international regulatory body, similar to the International Atomic Energy Agency, to oversee AI development. In addition, AI experts and public figures expressed their concern about AI risk in a public statement.

FRISS’s Commitment to Responsible AI 

At FRISS, we have long advocated for responsible AI practices. It is ingrained in our core values to ensure transparency, fairness, and non-discriminatory AI models. We firmly believe that AI should be used as a supportive tool within existing processes, promoting efficiency and accuracy while safeguarding against biases. We therefore apply the seven principles for trustworthy AI in our platform, based on the “Ethics Guidelines for Trustworthy AI” developed by the European Commission’s High-Level Expert Group on AI:

  • Human agency and oversight

  • Technical robustness and safety

  • Privacy and data governance

  • Transparency

  • Diversity, non-discrimination, and fairness

  • Societal and environmental well-being

  • Accountability

In the coming months, we will delve deeper into some of these crucial aspects of responsible AI, offering our insights and perspectives. 

The Future of Responsible AI  

The future of responsible AI lies in striking a balance between innovation and regulation. As AI continues to evolve at a rapid pace, it is essential to monitor and regulate its implementation to prevent potential harm.  

The approval of the draft EU AI Act and the US Senate AI hearings represent significant milestones in the journey toward responsible AI. By embracing responsible AI, we can unleash the full potential of this transformative technology while mitigating risks and creating a brighter future for all. 

About the author

Richard Bakker is Head of Data Science at FRISS. He is an all-round data manager with over 15 years of experience in data analytics, data science, and insights. His focus has always been on finance, risk management, and fraud management.

His analytical skills are highly developed, and he is accustomed to applying them in dynamic business environments: he quickly understands complex businesses, translates data into strategic business information, and translates strategy into data solutions.

About FRISS

What would your processes look like if you could instantly trust your customers? Knowing when to trust keeps you in control of your processes and lets you automate as much as possible. FRISS is the leading provider of Trust Automation Solutions for P&C insurers. Its real-time, data-driven scores and insights give instant confidence and understanding of the inherent risks of all customers and interactions.

Based on next-generation technology, FRISS allows you to confidently manage trust throughout the insurance value chain, from the first quote all the way through claims and investigations when needed.

Because speed and convenience have altogether redefined what it means to serve consumers, it is time to start building the relationships your customers demand and deserve. www.friss.com 
