Interview: How do organizations need to address the new risks that Generative AI brings?
Heather Gentile, Executive Director of Product for AI Governance and Regtech within IBM’s Data and AI software division, joined the team at The AI Summit New York to discuss AI governance and the new risks that generative AI brings.
The following is a transcript of our interview with Heather and forms part of our AI in Cybersecurity series. Explore the 2024 cybersecurity agenda here or secure your pass now to the most informative and exciting AI event of 2024!
Introduction: AI in Cybersecurity
In this interview, Heather shares her insights into the risks and governance challenges associated with generative AI, particularly large language models (LLMs). The discussion focuses on how organizations can address these risks through AI governance frameworks, ethical considerations, and compliance measures. She highlights the importance of establishing an AI ethics board, aligning AI use with organizational standards and values, and implementing risk management and compliance plans, stressing that explainability, transparency, and an understanding of the technology's inner workings are paramount. She also discusses the guidance emerging from government legislation, such as the EU AI Act, and the iterative nature of these regulations. Finally, Heather touches on the opportunities and potential return on investment (ROI) of generative AI in areas like customer experience, engineering, and digital client experiences.
Susie Harrison:
Hi and welcome to day one of the AI Summit New York. I'm delighted to be joined by Heather Gentile, Director of Product Management at IBM Data and AI Software.
Can you tell us, how do organizations need to address the new risks that generative AI brings?
Heather Gentile:
There are many ways to think about that. At IBM, we’ve been in the AI governance technology space for many years. With generative AI, some existing risks have been heightened and new risks have emerged. For example, large language models (LLMs) can hallucinate, and there are concerns around handling sensitive personal data and filtering out hate speech, abuse, and profanity. We’ve been working closely with our research team to build new guardrails to address these challenges.
New Governance Challenges in Generative AI
Heather Gentile:
We’ve always tested predictive machine learning models for bias, fairness, drift, and performance. With generative AI, we’re integrating new guardrails to address these emerging categories of risk.
AI Legislation and Governance
Susie Harrison:
AI governance has been a big topic recently. For example, the UK had Rishi Sunak's AI summit at Bletchley Park, and the EU is expected to introduce new legislation next year (2024).
How do you think governments can balance legislating AI to protect against its negative impacts while keeping up with its rapid evolution?
Heather Gentile:
Governments are not trying to stifle innovation, but as with any new technology, there need to be requirements and controls so that AI is adopted responsibly. The key is to involve the right stakeholders, such as by establishing AI ethics boards, and to ensure alignment between AI use and the organization’s standards and values. Legislation will help provide guidance on transparency, explainability, and accountability, and it will evolve iteratively. Organizations that prepare early by reviewing frameworks such as the US Executive Order on AI or the EU AI Act will be better equipped when the requirements become official.
Top Priorities for AI Governance in Enterprises
Susie Harrison:
What should enterprises prioritize when setting up governance frameworks for AI?
Heather Gentile:
Many organizations are making AI governance a strategic priority across the whole organization, rather than having it driven only by data science or IT. With the investment and innovation opportunities generative AI brings, more stakeholders are involved, including risk management, compliance, and even marketing teams. The rise of tools like ChatGPT has also led organizations to focus on HR, training employees and monitoring behavior to ensure accountability in how AI is used.
Key Use Cases for Responsible AI
Susie Harrison:
At AI Business, we surveyed our audience, and they identified predictive analytics, customer experience, and chatbots as the top applications for responsible AI.
Does that align with IBM’s perspective?
Heather Gentile:
Yes, those use cases align closely with what we see. Employee productivity, particularly in customer assistance, is a big focus; ensuring employees have access to governed, reliable data helps streamline their work. We’re also seeing opportunities in engineering, such as using watsonx Code Assistant to convert code efficiently with minimal human intervention. These internal use cases, where organizations have confidence in their data, offer early wins and ROI before AI usage is expanded.
Exciting Developments in AI for the Coming Year
Susie Harrison:
What excites you most about the future of AI? What are you looking forward to in the next year?
Heather Gentile:
There’s a lot to be excited about. IBM has had great success with AI for Business, improving digital client experiences at events like Wimbledon and the US Open. These low-risk use cases allow us to innovate without dealing with sensitive data. We’re also working on methodologies to evaluate AI’s ROI, balancing ethical considerations with financial benefits. I see this focus on ROI and ethical AI becoming more prominent in the next year.
Find Out More
Stay at the forefront of AI in Cybersecurity trends, innovations, and solutions by registering for our 2024 show.
Or, see more highlights from the 2023 New York AI Summit.