As rapidly advancing generative AI technologies unleash a new wave of innovation, they also bring an equally sharp rise in ethical challenges for businesses. With the EU AI Act due to be implemented by 2025, ethical considerations are becoming increasingly important for technology leaders – especially those in highly regulated industries such as insurance.

Why It Matters

The scope of generative AI is staggering: it has the potential to transform a wide range of industries in the coming years. From developing customised insurance policies to automating the claims process, the technology can increase efficiency and improve customer experiences. But this power also brings serious risks. Left unchecked, it can reinforce biases, be abused for nefarious purposes, or act as a “black box”, making decisions without transparency or accountability.

In anticipation of the imminent adoption of the EU AI Act, the world’s first comprehensive artificial intelligence legislation, addressing these ethical issues is particularly important. The regulation takes a tiered approach, classifying AI systems by risk, with additional requirements for high-risk applications (a category highly relevant to many parts of the financial services industry)[1].

What It Means for Tech Companies

Compliance will be mandatory for tech companies operating in Europe. But ethical AI practices go beyond regulatory compliance: they support long-term business by keeping stakeholder trust intact. The major areas to focus on are:

Bias Reduction: Implicit and explicit biases in AI training data tend to remain embedded in generative AI systems. In insurance this is problematic, as it can lead to unfair pricing or discriminatory outcomes. Companies must take proactive measures to detect and neutralise such biases.

Transparency and Explainability: Many AI systems operate as “black boxes” that are difficult to scrutinise, which is a significant hurdle in sectors where decisions must be justified. Can insurtech companies explain AI-driven decisions to their customers or to a regulator? Making models transparent is essential[3].
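One common way to make a scoring decision explainable is to decompose it into per-feature contributions. The sketch below does this for a simple linear risk score; the feature names, weights, and figures are hypothetical illustrations, not a real underwriting model.

```python
# Minimal sketch: decomposing a linear risk score into per-feature
# contributions so a decision can be explained to a customer or regulator.
# Feature names and weights are hypothetical.

WEIGHTS = {"vehicle_age": 0.8, "annual_mileage": 0.5, "claims_history": 2.0}
BASELINE = 1.0  # intercept: the score assigned before any features apply

def explain_score(applicant: dict) -> dict:
    """Return the total score plus each feature's contribution to it."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature] for feature in WEIGHTS
    }
    total = BASELINE + sum(contributions.values())
    return {"score": total, "baseline": BASELINE, "contributions": contributions}

report = explain_score(
    {"vehicle_age": 10, "annual_mileage": 1.2, "claims_history": 2}
)
# The breakdown makes it possible to say, e.g., "your claims history added
# 4.0 points to your risk score" instead of citing an opaque model output.
```

For non-linear or generative models, the same idea is typically approximated with attribution techniques rather than read off directly, but the principle of reporting a per-factor breakdown is the same.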

Data Privacy and Security: Because generative AI processes large amounts of data, it carries a risk of information leakage that can damage brand reputation. Adequate security measures to ensure the confidentiality and integrity of sensitive information are therefore essential, particularly in the insurance sector, where personal and financial information is routinely processed.

Prevention of Misuse: Generative AI’s ability to create persuasive text, images and video brings new risks, including fabricated claims and deepfake evidence. Organizations need to protect themselves against such misuse.

How Tech Companies Can Act Responsibly

1. Adopt Ethical AI Frameworks: Implement comprehensive ethical AI frameworks aligned with the EU AI Act and industry best practices, and make regular ethical audits of AI systems part of them.

2. Invest in Diverse AI Teams: Create diverse teams to build and manage AI systems. Involving diverse groups helps detect and counter potential biases that might escape a largely homogeneous team.

3. Increase Transparency: Reveal more about how AI models reach their outputs. In insurance underwriting, maintain well-reasoned justifications for policy decisions based on AI outputs.

4. Work with Regulators and Industry Peers: Engage with regulators and industry alliances to shape responsible AI practices. Such collaborations could build on the European Commission’s guidelines for the responsible use of generative AI in research[1].

5. Inform Stakeholders: Raise awareness among stakeholders, including employees, partners and customers, of what generative AI can and cannot currently do, and provide regular updates. This helps manage expectations and prevent misuse.

6. Conduct Robust Testing and Monitoring: Implement rigorous pre-deployment testing procedures for AI systems, and ongoing post-deployment monitoring processes to identify and address any ethical issues that arise.

Real-World Examples

Consider a hypothetical scenario where an insurtech company uses generative AI to automate policy underwriting. The AI system might inadvertently discriminate against certain demographics due to historical biases in the training data. To address this, the company could:

– Implement fairness constraints in the AI model to ensure equal treatment of different demographic groups.

– Use techniques such as adversarial debiasing to actively remove discriminatory patterns from the model’s decisions.

– Conduct regular audits of the AI system’s output to identify and correct any emerging biases.
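A regular audit of this kind can start very simply: compare approval rates across demographic groups and flag any group falling below a fairness threshold such as the widely used “four-fifths” heuristic. The sketch below illustrates this under assumed data; the group labels, decisions, and threshold are hypothetical.

```python
# A sketch of a periodic fairness audit: compare approval rates across
# demographic groups and flag violations of the "four-fifths" heuristic.
# Group labels and decision data are hypothetical.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def four_fifths_check(rates, threshold=0.8):
    """Pass a group only if its approval rate is >= 80% of the best group's."""
    best = max(rates.values())
    return {g: r / best >= threshold for g, r in rates.items()}

decisions = ([("A", True)] * 90 + [("A", False)] * 10 +
             [("B", True)] * 60 + [("B", False)] * 40)
rates = approval_rates(decisions)   # A: 0.90, B: 0.60
flags = four_fifths_check(rates)    # B fails: 0.60 / 0.90 < 0.8
```

A production audit would add statistical significance testing and intersectional group definitions, but even this minimal check makes disparities visible before a regulator does.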

Another example is claims processing, where a generative AI system is used to detect fraudulent claims. To ensure transparency and fairness:

– Develop an explainable AI model that provides a clear rationale for flagging a claim as potentially fraudulent.

– Implement a human-in-the-loop approach where AI flags suspicious claims, but final decisions are made by human experts.

– Regularly update the AI model with new data to adapt to evolving fraud patterns while ensuring that the updates don’t introduce new biases.

As we navigate the exciting but challenging landscape of generative AI, it’s critical for technology leaders to prioritize ethical considerations. By doing so, we not only comply with upcoming regulations such as the EU AI Act but also build trust with our customers and contribute to the responsible development of AI technology. The future of AI in Europe is not just about innovation; it’s about ethical, transparent and responsible innovation that benefits society as a whole.

Citations:

[1] https://research-and-innovation.ec.europa.eu/news/all-research-and-innovation-news/guidelines-responsible-use-generative-ai-research-developed-european-research-area-forum-2024-03-20_en

[2] https://www.silicon.eu/brandvoice/ethical-considerations-regarding-the-use-of-generative-ai

[3] https://www.pwc.de/en/digitale-transformation/generative-ai-artificial-intelligence/the-genai-building-blocks/the-ethical-imperative-in-ai-development.html

[4] https://urheber.info/media/pages/diskurs/call-for-safeguards-around-generative-ai/c93a5ab197-1681904353/final-version_authors-and-performers-call-for-safeguards-around-generative-ai_19.4.2023_12-50.pdf

[5] https://www.linkedin.com/pulse/european-union-ai-act-ethical-guidelines-merely-stasinopoulos-phd–iwx7f

[6] https://www.eitdigital.eu/newsroom/news/2024/the-latest-eit-digital-report-delves-into-generative-ai-and-europes-quest-for-regulation-and-industry-leadership/