The AI Summit New York

News

Oct 09, 2023

AI Governance Unveiled: Pioneering Ethical AI Transformation
Amid the rapid evolution of AI, an IBM expert delves into ethical governance in an era of technological transformation.

AI governance encompasses the rules, practices, and processes crucial for ensuring the responsible and ethical use of artificial intelligence. It is the key to surmounting the challenges discussed in this session, and ultimately to unlocking the potential hidden within AI.

“When you think about AI governance, it's actually designed to help you get value from AI faster, with guardrails around it.” 

In her riveting presentation, Priya Krishnan, Director of Product Management for Data and AI at IBM, illuminates the path to AI's potential. Explore transformative trends such as confident AI deployment, ethical excellence, dynamic regulatory landscapes, and collaborative stakeholder engagement, and delve into an AI governance solution built on comprehensive, open, and automated capabilities.

This 20-minute keynote will help you answer:

  • How does my organization operationalize AI with confidence?  

  • How do we better manage AI risk to avoid brand degradation?  

  • How does my organization scale while complying with a growing body of AI regulations?

For The AI Summit New York 2023, discover more about AI governance and its pivotal role in helping you attain your business objectives on our Headliners Stage.

The transcript: 

00:00 

Good morning. How's everyone? Awesome. Thank you for being here. As Josh said, I'm Priya Krishnan, and I lead product management for data science and AI governance in IBM's Data and AI portfolio. Today's topic is all about AI governance, and I'll be talking through it in this session. A few logistics first: if you have questions, I will stay back after this and you're welcome to come talk to me, and you're also welcome to grab us at the IBM booth. We do want to hear from you, so please come and talk to us.

I'm going to start with a story. Anyone recognize this car? That's right, the DeLorean, made famous by the Back to the Future movies. I have a friend on my team who actually owns this car, and he calls it a labor of love. From him, I learned a few things and an interesting story about this car and its maker, John DeLorean. John DeLorean was a rising star in the automotive industry. However, he found certain practices in the industry to be unethical. So his vision was to do something ethical: he wanted to design a car that was safe, fuel efficient, and inexpensive. He did a few things that we'd consider ahead of his time. For instance, he put a large rear taillight on the car so that it would be visible at night.

That wasn't required at the time, but he really wanted to think ahead, and he did it. There were a few more things he wanted to do as well, like airbags. They were not required; the regulations didn't call for them. But he thought they were important, and he wanted to put them in the car. He ran into production challenges, though, so he said, okay, let me push this out and do it a little later. That was acceptable then. The same went for the third brake light. So he had all these things he wanted to do; they were considered important, even visionary, but they were not required. Now think about the present: there's a distinct change in the automotive industry from then to now. The regulations have changed, and so have all the stipulations around what goes into a car. Today it's a no-brainer; every car needs to have these things. I'm telling you this story because I want you to think about a parallel in the AI industry. Just like the automotive industry, there has been a shift in the AI industry, with a clear before-and-after picture. Let's talk about the trends that have changed in the industry.

02:57 

We all know that AI is here to stay; this conference is proof of that, and it's the reason we're all here. AI is here to stay, and everybody wants to get value from it. So if that is obvious, then what is really stopping us? There are four key trends that we see over and over again as we work with clients. The first is operationalizing AI with confidence: moving from experiments to production, and being able to do so with confidence, is the first challenge and the first trend we're seeing. The second is really important: the responsible use of AI to manage risk and reputation. We'll talk about each of these in detail. The third is something happening around us in the industry, just like the car regulations: a huge number of AI regulations are coming at us, and at every industry, from the outside. The last one is that the playing field itself has changed. Many more stakeholders are participating; everybody is involved, and everybody has a stake in making AI successful in the business. Let's talk about each one of them.

04:23 

The first one is operationalizing AI with confidence. Why is this a challenge? I'll give you an example. One of the clients we worked with had 700 models that they had built, and they had no idea how they were built. They had no idea which stage those models were in, and they had no automated way to even see what was going on. The landscape was fragmented: everybody had built their models using the tool of their choice, but there was no way to know anything about these models across the entire business landscape. Because they didn't have this visibility, they didn't have a way to monitor or catalog or even know what was going on with these models. They just could not make decisions fast enough, and they could not move these models into production.

Even if you have that in place, there is the lack of transparency and explainability, and this is really important. More often than not, when we think about model explainability and transparency, we think about models that are already in production. But that's not enough; you have to think about the entire lifecycle. Even before something gets built, are we able to ask: am I using the right data for this? Is this the right kind of model? Do I have bias in my data? Do I have bias in the models I'm testing? It runs throughout the lifecycle: lack of transparency and explainability, and lack of end-to-end tracking.

And finally, automation. Without this last step, there is no scaling. Think about the example I gave you of 700 distributed models. Say I'm able to put some monitoring in place and track them, but I do it in a manual way. I can do it once, I can do it twice, but I am never going to be able to scale as I get more data, more models, and more applications. So this is the first challenge we're seeing quite a bit of: lack of transparency, lack of end-to-end lifecycle monitoring and cataloging, and lack of automation.

The second one is around risk and reputation. This should be familiar to us even as consumers: you and I want to give our money to, and place our trust in, a company that has ethical AI practices. And once that trust is lost, it's really hard to get it back. Nobody wants to be in the press for the wrong reasons. So companies want to manage the risk and the reputation around this. Some companies are going a step further: they are proactively making ethical and transparent AI part of their strategic imperative, and we should all be thinking about that more explicitly. We don't want to catch it after the shoe drops; we want to treat this proactively as a first principle of design.
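
To make the cataloging gap concrete, here is a minimal sketch, in Python, of the kind of model inventory being described. The record fields, names, and example entries are hypothetical illustrations, not IBM's schema or product.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

# Hypothetical inventory record; the fields are illustrative assumptions.
@dataclass
class ModelRecord:
    name: str
    owner: str
    stage: str              # e.g. "experiment", "validation", "production"
    build_tool: str         # teams build with many tools, so record which one
    training_data: str      # pointer to the dataset actually used
    last_reviewed: Optional[date] = None

def needs_attention(catalog: List[ModelRecord]) -> List[ModelRecord]:
    """Flag models nobody can answer for: no review on record, unknown data."""
    return [m for m in catalog
            if m.last_reviewed is None or m.training_data == "unknown"]

catalog = [
    ModelRecord("churn-v3", "alice", "production", "scikit-learn",
                "s3://data/churn/2023-q2", date(2023, 6, 1)),
    ModelRecord("fraud-poc", "bob", "experiment", "xgboost", "unknown"),
]
for m in needs_attention(catalog):
    print(f"review needed: {m.name} (stage={m.stage}, data={m.training_data})")
```

Even a record this thin answers the questions the 700-model client could not: who owns each model, what stage it is in, and what data it was trained on.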

07:36 

The third one we talked about is the external force: the regulations. In the past, these were strategies being formulated. Think about how the data governance industry changed: there were principles and strategies, and then they moved into actual regulations. AI is going through the same shift. These were strategies in the past, but now they are being translated into real policies that companies have to follow. Today these regulations are coming throughout the globe, in every industry. And if you look at the timeline, you can see how fast they're coming. It used to be one regulation a year, maybe one every two years; today there are two a year, and that's just the start. Just recently you've seen the New York City hiring law, and the United States government has also released new rules for hiring. It's coming rapidly, it's coming across the globe, and it's coming to every industry; it's not confined to one set of industries. So how do you proactively and effectively manage these regulations, not just for today, but so that you're set up for tomorrow as well?

08:54 

Finally, stakeholders. We at IBM always say that data science is a team sport, because all of you need to be able to play together. Now the playing field has expanded even beyond data scientists, even beyond what we call model validators, because everybody has a stake here. Think about AI today: if there is a risk to profitability, that's a problem. Actually, there is one more thing I wanted to say about regulations: not only are they coming at a rapid pace, but non-compliance results in fines and affects the bottom line. The EU AI Act, for instance, can fine up to 6% of your company's global revenue. Nobody wants that. So the CFO gets involved, because there is a risk to profitability. There is a brand and reputation risk, so obviously the CMO gets involved. There are multiple stakeholders in an enterprise coming together to make sure that your AI is ethical, it's governed, it's effective, and it does what it's supposed to do.

10:09 

Alright, so we talked about these four challenges, and that is where the idea, the need, and the promise of AI governance comes in. You can read through all of this, but when I think of guardrails, I think of a car, or even a Formula One car. There are safety checks inside the car; there are so many people putting so many things in place to make sure the car is safe. But none of that is there to slow the car down; it's actually there to help the car go faster, safely. So when you think about AI governance, I would urge you to think about it that way. Very often people hear the word governance and associate it negatively, but that's really not the case. AI governance is actually designed to help you get value from AI faster, with guardrails around it. That's the whole idea: you want to manage this proactively, so you're not caught in a bad state later.

So what is a good AI governance solution? At IBM, when we think about an AI governance solution, we think in terms of three key capabilities, and these capabilities are designed based on the problems and the trends we've seen working with many, many clients. The first one is lifecycle governance. This is where you're able to monitor, catalog, and understand what is actually going on with your models: What data was used? What kind of model experimentation was done? What kind of models are there? Can I automatically know what is happening as a model moves through the lifecycle?

I'll give you an example. One of the clients we worked with had a data science team that was asked to build a model, and they took about a month or two to build it. Then a model validator team, a very small team, took these models and had to validate them, which took another two months. As the validators went through these models, they had questions: What data was used? Did you try a different variation of the model? All these questions went back to a data scientist who had completely forgotten what they built two months ago, or who had maybe even left the company. So they had to go back, manually look at what they'd built, and send it to the validator team. All of this was done back and forth through Excel. Months and months of effort were wasted just going back and forth between these people. But if there were a way to automatically capture what is going on, nobody would have to do this manually; everything is visible to all the stakeholders, and they know exactly what is going on. By the way, by doing this with a lifecycle governance AI solution, we were able to bring the time down from months to just weeks.

So that's one example. The next capability is around managing risk. I know where my models are, I know the metadata, I know everything about them. But can I take that and create customized dashboards and workflows, so that all the stakeholders can see what is going on, and everybody sees what they need to see? Remember, we talked about how the playing field has expanded. This really helps us manage it through a consistent and automated workflow.
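
As a rough illustration of the automatic capture described here (not IBM's actual factsheet tooling), a sketch like the following accumulates facts at each lifecycle stage so a validator reads one record instead of trading spreadsheets; every name and field is a hypothetical assumption.

```python
import json
import time

class ModelFactsheet:
    """Accumulate lifecycle facts as they happen, instead of reconstructing
    them from memory and Excel months later."""
    def __init__(self, model_name):
        self.facts = {"model": model_name, "events": []}

    def record(self, stage, **details):
        # Capture the stage, a date, and whatever facts the stage produces.
        self.facts["events"].append(
            {"stage": stage, "at": time.strftime("%Y-%m-%d"), **details})

    def export(self, path):
        # Hand the validator a single self-describing document.
        with open(path, "w") as f:
            json.dump(self.facts, f, indent=2)

fs = ModelFactsheet("credit-risk-v2")
fs.record("data", source="s3://data/loans/2023", bias_checked=True)
fs.record("training", algorithm="gradient boosting", auc=0.87)
fs.record("validation", validator="risk-team", outcome="approved")
fs.export("credit-risk-v2.factsheet.json")
```
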
One more example of this: one of our clients asked their data scientists to build a model for a use case. What the data science team did was go and pick a model that was already running in production, because it was a similar use case, and start from that. What they didn't realize was that the original model was built for the United States, so the business controls were completely different; the new model was for customers in the EU, where the business controls differ as well. So there is a real risk in reusing models this way. If you don't have a good workflow with business controls that are visible throughout, it's really hard for people to catch this: they'll spend time building, then somebody validates it and pushes it back, and there's just this back and forth multiple times. So again, proper automation and workflows to manage risk and compliance are also critical.
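
A hedged sketch of the kind of promotion gate that would have caught the US-to-EU reuse above: each region requires a set of business controls, and a model cannot move until its record shows all of them. The control names and regions are illustrative assumptions, not a real regulatory mapping.

```python
# Illustrative control sets per region; not an actual regulatory mapping.
BUSINESS_CONTROLS = {
    "US": {"fair_lending_review", "model_risk_sign_off"},
    "EU": {"gdpr_dpia", "eu_ai_act_classification", "model_risk_sign_off"},
}

def missing_controls(model_controls, target_region):
    """Return the controls still missing before the model may be reused in
    the target region; an empty list means the gate passes."""
    return sorted(BUSINESS_CONTROLS[target_region] - model_controls)

# A model validated for the United States is not automatically fit for the EU:
us_model = {"fair_lending_review", "model_risk_sign_off"}
print(missing_controls(us_model, "EU"))
# -> ['eu_ai_act_classification', 'gdpr_dpia']
```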

14:59 

The third one, once again following the trend we saw in regulatory compliance: how do I take the regulations that are critical and translate them into business controls that I can use within my organization? I'll give you two examples. One client came to us and said, look, I know these regulations are coming and I want to proactively abide by them, but I do not want my data scientists to have to think about how to translate them into what they need to do to build a model. Can you take care of it? Can you put it into the workflow so that everybody knows what to do with it? That's becoming more and more prevalent every day: clients want to adhere to regulations, but they do not want to put that burden on the teams that are building or validating models. They want it handled automatically through the process.

Another example: there was a client that really wanted to abide by fair hiring practices. So when they were looking at resumes for hiring, they removed attributes like gender. They thought, okay, my model is fair, it's free from bias. That's what you'd think. But there was an attribute for working the night shift in the data, and if you think about it, when women self-select out of working the night shift, a bias is introduced indirectly into the model, and you wouldn't catch it. You need tools that can catch those kinds of indirect biases as well. If you don't, again, you're going to be out of compliance with regulations. These are some critical elements to think about in an AI governance solution. And all of this is just the technology part; it goes beyond that as well. But in terms of technology, when you think about a good governance solution, there are three aspects to keep in mind.
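
One simple screen for this kind of indirect bias is to compare selection rates across groups regardless of whether the protected attribute is in the model; a common rule of thumb is the four-fifths (80%) rule. A minimal sketch with made-up numbers:

```python
def selection_rate(outcomes):
    # Fraction of candidates selected (1 = hired, 0 = not hired).
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher; under the
    four-fifths rule, a value below 0.8 is a red flag."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hiring outcomes split by the protected attribute that was removed from
# the model; a proxy feature can still drive a gap like this one.
women = [1, 0, 0, 0, 1, 0, 0, 0]   # selection rate 0.25
men   = [1, 1, 0, 1, 1, 0, 1, 0]   # selection rate 0.625
print(f"disparate impact ratio: {disparate_impact(women, men):.2f}")  # 0.40
```

Note that the check looks at outcomes, not inputs, which is why it can catch a bias that simply removing the gender column does not.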

17:06 

The first one is comprehensive. I cannot stress this enough: it is not enough to put some model metrics and monitoring into production and call it done. The orchestration has to run throughout the lifecycle. Right from the time somebody touches data, can you ensure the data is free from bias? That's a great place to start. From the data, to the development of the models, to testing, validation, and production, make it consistent throughout the lifecycle.

The second one is open. When we talk to clients, like the one I mentioned with 700 models, they didn't use just our technology to build those models; they used a variety of technologies, because their data scientists want to use whatever they're comfortable with. A good solution should be able to augment whatever technology and whatever processes are already in place. So open is really important: being able to augment what is already there.

The last one is automation. Again, none of this works until it is automated, so that you can operationalize it at scale.
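
One way to read the three tenets together, sketched under assumed names: checks run at every lifecycle stage (comprehensive), accept plain metadata dictionaries so any tool's output fits (open), and are invoked by the pipeline rather than by hand (automated).

```python
# Illustrative checks per lifecycle stage; names and rules are assumptions.
CHECKS = {
    "data":       [lambda m: m.get("bias_checked") is True],
    "training":   [lambda m: "training_data" in m],
    "validation": [lambda m: m.get("validator") is not None],
    "production": [lambda m: m.get("monitoring") == "enabled"],
}

def governance_gate(stage, metadata):
    """Called by the pipeline at each stage; an unknown stage raises,
    so the gate fails closed rather than silently passing."""
    return all(check(metadata) for check in CHECKS[stage])

# Wired into a pipeline step rather than run by hand:
meta = {"bias_checked": True, "training_data": "s3://data/loans/2023"}
assert governance_gate("data", meta)
assert governance_gate("training", meta)
```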

18:30 

So I've talked about the AI governance problems, and I've talked about the solution and its three pillars: it's important to be comprehensive, open, and automated. I do want to stress the importance of this: it's not technology alone that's going to carry you all the way. A good AI governance solution has the trifecta of people, process, and technology together. More often than not, that's where we start when we go in. The first thing we do is a workshop with clients, where we talk about: Who are the stakeholders? Who is invested in making this successful? Can everybody who needs to be committed actually commit? Think about how the stakeholder map has grown, and make sure the right people are involved in the process. The second element is the process itself: What processes need to be augmented? What processes need to be eliminated? What processes need to be created new? It doesn't always have to be something new; you don't have to throw out what you have, you can augment it. And finally, the technology. Pick a partner who can guide you along the way, and think about the tenets I talked about: comprehensive, open, automated, and able to manage the risks and help you with reputation and regulations as well.

20:01 

Finally, I want to leave you with how we at IBM think about trust and transparency. We have three main principles. The first is that it's AI, but it's AI with humans. We believe this will be the case for quite some time; I cannot predict 20 years out, but definitely for the foreseeable future. The second, and very important to us, is that data and insights always belong to their creator. We do not want to store your data, and we do not want to do anything with it; it is your data and your insight, and it stays with you. The last one is that new technology, including AI systems, must be transparent and explainable. When we build our tools and technology, we abide by these principles; these are the core ethical AI principles at IBM.

I want to bring it back to the car: really think about AI governance as giving you the guardrails that help you move faster. It is not a bad thing; it's something you can get started on today to help you for the future. And you don't have to boil the ocean; there are many small steps you can start taking today, so that as you scale your business, you can scale with confidence. Join us for the session by my colleague John Thomas: where I talked about the concepts, he's going to talk about industry case studies and the practical applications we've delivered, at 11:55 on the Industry Stage. With that, that's the end of my presentation. Put on your seatbelts, the airbags are there. Let's go. Thank you.
