Leading with Purpose: A Conversation on Responsible AI with Kumar Subramanyan
In the rapidly evolving landscape of artificial intelligence, responsible leadership has become paramount for organizations seeking to harness AI's potential while mitigating its risks. We recently sat down with Dr. Kumar Subramanyan, Ambassador for The AI Summit New York and Head of Modelling & Analytics at Unilever R&D, to discuss responsible AI leadership.
Dr. Subramanyan leads a global team of data scientists at Unilever, delivering analytics capabilities and AI solutions across multiple product categories. With over 25 years of R&D experience, he brings extensive expertise in Data Science, Big Data Analytics, Predictive Modelling, and Digital Transformation. Previously, he served as Director of Clinicals at Unilever R&D in Shanghai and has been leading data science initiatives at Unilever for the past 11 years.
How do you define "Responsible AI Leadership," and why is it essential for shaping the future of industry and society?
KS: Responsible AI leadership means taking a broad, holistic view of AI's impact across domains and on society at large, backed by a set of checks and balances ensuring that every aspect of AI solutioning and implementation consistently accounts for these factors: data quality, integrity, and transparency; model choice and performance; human-centricity; and business value creation (RoI). This is essential for building trust and widespread adoption of AI among consumers and within enterprises, and for allaying fears of uncontrolled or malicious use of AI. It is also critical for managing society's transition to a highly automated, AI-enabled workforce.
Can you share an example of how your organization has implemented responsible AI practices to drive positive change?
KS: In my previous organization (I am in a job transition currently), this was taken very seriously, and we established an AI Assurance team (later renamed the AI Collective) that was responsible not only for developing an enterprise-wide AI strategy (identifying priority business areas for AI solutions and harmonizing solution infrastructure), but also for considering business integrity, data privacy, and consumer experience and impact at the outset of every AI initiative.
What steps can leaders take to ensure AI systems are ethical, transparent, and free from bias?
KS: There are several things to consider in developing AI systems that are free from bias, ethical, and transparent, and most of them reside in the data layer used to train the models. The training data must be inclusive for the model to produce bias-free outcomes, but this is very hard to achieve in most cases (note that human decisions carry bias too). Transparency is the remedy here: it is very important that AI solutions and models declare the scope of the training data used, so that users can judge potential biases and apply the model outputs appropriately. A similar approach applies to the ethical considerations of AI solutions: there must be clarity on the intended use and users, and on the impact the outputs will have on users, the organization, and society at large.
How can collaboration between industries, governments, and academia advance responsible AI adoption?
KS: This collaboration is critical. Most cutting-edge AI development is happening within industry in collaboration with academia, and the key point is that it is happening at breakneck speed. Governments in most countries (including the US) are now waking up to the medium- and long-term societal impact of AI and are establishing regulatory frameworks to manage it appropriately. Industry and government are often on opposite sides of this debate, and given the current geopolitical climate, governments in different countries are also not aligned on how to develop and regulate AI innovations.
What advice would you give to emerging leaders looking to champion responsible AI in their organizations?
KS: Organizations must take a holistic, longer-term view of the impact of any AI system or solution before fully scaling and implementing it, and they must set up governance that looks beyond the short-term data privacy, user-adoption, and RoI metrics of specific solutions.
As AI continues to transform industries and societies worldwide, leaders like Dr. Kumar Subramanyan remind us that responsible implementation isn't just about mitigating risks; it's about creating sustainable value and building trust. By embracing a holistic approach to AI development and deployment, organizations can harness the transformative potential of these technologies while ensuring they benefit humanity at large.
Join Dr. Kumar Subramanyan this December 10-11, when he will be speaking at The AI Summit New York as part of his role as an Ambassador for the event.
To stay on top of the latest news in AI for business, subscribe to The AI Summit Series newsletter, Beyond The Summit.
