The AI Summit New York

News

Nov 21, 2023

Bridging the Gap Between IT and AI - A Vital Discussion That Could Shape Your AI Journey

Success with AI is all about the interplay between IT infrastructure, AI practitioners, and strategic visionaries. 

The AI Summit New York 2022 was a melting pot of innovation and strategic thinking, offering a glimpse into the future of AI in business. It became a pivotal moment for reshaping our understanding of AI success. 

One particular discussion that stole the limelight was the quest to bridge the gap between IT and AI. The discourse carried the potential to chart a path toward a more integrated and impactful AI journey, thanks to our speakers and luminaries: 

  • Rich Nanda, Principal – US Strategy & Analytics Offerings Leader, Deloitte

  • Mike Segala, Principal, Deloitte

This conversation transcended the conventional markers of AI success. It wasn't just about financial gains; it was a fertile ground for transformative ideas poised to redefine the narrative. Aligning IT infrastructure with AI initiatives took center stage, becoming a transformative concept that unlocks AI's full potential. It empowers organizations to navigate the shifting AI landscape with confidence. 

For the first time, you're invited to catch up on the full talk, free, below. 

As The AI Summit New York 2023 approaches this December, join us in exploring the critical insights that await.  

The Transcript: 

00:00 Mike 

Well, thank you everybody for showing up. We have about 15-20 minutes, give or take. And then we will have some time for questions. So hopefully, some of you get inspired by some of the things we'll talk about and want to share some of the things that you're going through, and we could just have an open dialogue. So let's get into it. 

00:16 Rich 

Well, it's great to be here. My name is Rich Nanda. I'm a partner at Deloitte, where I run our strategy and analytics business. I'm also a practitioner at the intersection of strategy and analytics. I've worked for many global CPG clients over the years, almost always helping them with issues around growth, and the efficiency of that growth, using digital technologies like AI and data. 

00:44 Mike 

Awesome. And my name is Mike Segala. I used to be the CEO of a company called SFL Scientific, which was recently acquired by Deloitte, and we'll talk about that a little later. Now I'm the principal that leads the SFL business within Deloitte. As a bit of background, I came out of the academic world. I was a particle physicist, and back in about 2010-2012 I started my AI career, really at the beginning of the true commercialization of AI coming out of academia and into the industry space. At SFL, we were an AI consulting company. So what we were working on, and what we want to talk about today, comes from the experience of working with organizations of all shapes and sizes: literally from one or two people sitting in their basement, watching them develop products and services they eventually brought to an IPO, to Series A, B, C, D organizations, to large Fortune 500s. And in every single one of them, regardless of shape, size, or industry, the biggest barrier we have seen repeatedly across hundreds of these companies is really this: how do you bridge the gap in overall understanding and communication between folks like myself, a foundational technologist running AI projects and thinking about innovation, and the business? How do I go talk to Rich and say, hey, Rich, I've got this great idea, let's go do something innovative, and Rich says, whoa, do we really want to do that? We want to drive revenue, we want to think about outcomes. So this barrier that we see in the industry is probably, in my mind, the largest gap we really have to face. It isn't the technology, it isn't the algorithms, it isn't the data, it's none of that. It is the relationship gap between folks like myself and folks like Rich that we need to bridge, so we start having dialogues that move the needle toward actually getting to outcomes. So that's what we want to talk about today. 
Very quickly: you could spend a lot of time on these topics, and we're just going to give you a little bit of what we see with our customers and how we think about it. Hopefully it'll spark some interest with you all. 

02:50 Rich 

Right, well, look, this idea that business and technology need to come together to create significant outcomes isn't a new challenge or opportunity in the economy. What's new and different now, though, is the kinds of technologies we're talking about. Particularly and especially with AI, they aren't just enabling the business or making the back office more efficient. Sure, they can do those things. But this is fundamentally changing how companies compete, how they connect with their customers, what kind of relationships they can create with their customers, how their products and services differentiate. And when that happens, the domain of technology, the domain of the scientists, comes right into the middle of what the C-suite is trying to do. And the stakes are higher. That's what's different: the stakes are higher. So the connection between the two sides has to be hand in glove. As a business-centered leader, you take a look at this show, you take a look at the logos on this page, and the whole thing can be pretty darn intimidating to figure out. I'm hearing a lot about the real-world applications of AI from my teams. I'm seeing peer companies and competitors do really interesting things around intelligence and how they're enriching how they compete. But I see this landscape, and I don't know where to start. I don't know how to get my team organized around it. And that can be very intimidating. But what it can't be is stunting, right? You can't let it stop you. And so you need guys like Mike to help you navigate and talk to you in the right terms so you don't become The Odd Couple. 

04:51 Mike 

So, for me, and I don't know about a lot of you, I'm assuming a lot of you are practitioners like me, this is very exciting. But it's a blessing and a curse. Let's be honest with ourselves about the problems we want to solve from an AI perspective. Today, it's most likely that only a couple of months ago you might not have been able to solve those problems, let alone a year or two ago. And you see that, right? Who saw ChatGPT come out last week? Is anybody else as amazed as I am? I mean, if you look at the results there, I think it's a foundational shift in the way we'll start thinking about using algorithms like that to completely embed in, and make obsolete, many of the services we have today. To me, that's almost the equivalent of what Google did from a search perspective for the early-day internet, right? We have a long way to go, you need to make money on it, I get all the caveats, but it's foundationally changing the way a lot of businesses will think in the next year or two. So that happened last week. Now let's imagine I'm a product manager, and I've been building a product I've been working on for the last year or two. Am I going to go to Rich and say, hey man, you've just sunk a couple million bucks into this thing, OpenAI put out some really cool stuff, I want to change our entire product landscape? That's not appropriate, right? So the pace of innovation we have here, access to data sources, infrastructure (CPU, GPU, FPGA), access to algorithms that have now proliferated in the open-source community, that is a blessing to us on the innovation side. But to folks like Rich, it really is just nothing but nonstop heartache and risk. Because what does Rich need to do? Rich needs to build products and bring services to customers that they are going to pay him for, not call him up because they're mad about it. Right? 
His job is to deliver excellence to his customer. My job is to think and be scientifically minded and innovate. When the pace is moving very, very quickly, those two things are absolutely opposing. So you have to find a fundamental middle ground such that you are developing, but keeping the lens out here. So how do you think about doing that? That's what we'll talk about next. What we're going to get into is the four things we think about, really high-level things about how we start bridging the gap between the expectations and the needs. Needs could be tangible needs (I need GPUs, I need CPUs) or emotional needs (I need to work on cool projects, I need to show revenue), between folks like myself and you guys, mostly, right, the practitioners, the folks deep in the weeds, and Rich and other folks like yourselves, who are positioned to think about the fundamental problem of building a business and building revenue. Because we normally don't talk to each other, we don't speak the same language, we speak very, very differently to each other. And that's going to be the biggest gap we need to think about filling. 

07:43 Rich 

I think the one add I would have, as we transition to the four places we'd suggest you start, is that we're also at a peculiar moment in time, where not only do our business leaders have to have a practicality around growth, but around the efficiency of growth: do more with less, be more careful with risk. We're going to be in a period of at least a year or two where we're just going to have to be more efficient about how we grow. And so the degrees of freedom are tighter, which means the coordination between the technologists and scientists and the business has to be even better. All right, so what we're going to talk about for the next little bit, and give you some examples about, are the four places where we'd suggest every one of these cross-functional teams has alignment and focus. The first is: where do you start, and what practical framing do you have for why you choose where you start, one that balances building capability with creating impact? Related to that, what's the right way to measure and create horizons around success? And when we're achieving success, how do we talk about that progress in a way that keeps the alignment tight, keeps the investment going, and keeps the impact accelerating? And then lastly, you can have business managers, product owners, you can have data scientists, but what role does the technology organization have to play in making sure things scale and sustain over time? And how do you bring all three legs of the stool together on the journey? 

09:31 Mike 

So where do we start? I've worked on probably well over 1,000 AI use cases across different clients. And regardless, at the end of the day, everybody asks that same fundamental question: where do we start? How do we choose what to work on? So we've thought about this in a way that, again, is abstract and high level, but if you are in the practice of thinking about how to build an AI practice, or how to manage one, these are the ways I think about selecting and building a portfolio of offerings from a use-case perspective. So if you look at this graph, right, I think we're all probably familiar with AI stuff, so a graph shouldn't be too bad. On the y-axis, we have near-term ROI: I need money now. Higher up, I need money quicker; lower down, I've got a little more tolerance. On the x-axis, we have transformation. There are a lot of things we see in AI that really aren't transformation; they're standard business process and things like that, and that's still good. But as we get farther and farther out the curve, we get into really, really novel stuff; ChatGPT would be a great example of that. So what I want to advocate for you, and I'm going to quickly go through the four of these, is the way I think about each one of these quadrants if you're thinking about building your own, and then we're going to think about these quadrants as we move forward. The first is the plug and play. Be very wary of this. If a vendor calls you up and says, listen, I hear you're doing really cool molecular dynamics, I've got your solution, you press this button and I solve cancer for you: be very skeptical. There's a lot that can be solved with plug and play. But if anybody is coming to you with that, it's going to be a highly commoditized play. So if you're thinking about building a unique offering, or something bespoke, remember that if it's that easy, every single person in the world has access to it. 

11:21 Rich 

No business leader would ever fall prey to a vendor with an easy button. Right? Okay, never happens. 

11:27 Mike 

It happens all the time. But be very careful here, right? There's absolutely space for it, but if you're trying to build something unique in the offering, just be wary of that. Another thing it doesn't allow you to do: if we want to build a real AI offering, that means we have to give our data scientists and our practitioners the opportunity to experiment, the opportunity to fail. AI has a huge failure risk, as it should; it is a scientific process. And if you're only doing things that, by definition, work out of the box, you're never going to get to the place where you're building capabilities internally, and your practitioners never actually own the offering and the ability to go out and learn and do something themselves. So that's the huge trade-off here. If Rich comes to me and says I need something short term, I say okay, but I still need to think about the longer tail. The next space, the snake oil: the snake oil is bad, right? "I can give you near-term results, and I can transform your whole business." Don't buy that one, just run for the hills. We get a lot of snake oil pitches today; it's okay, we don't worry about them. Of these top two, I would say maybe 10-15% of your total offerings should probably sit in that top left; there are a lot of cool things you can do in there, but again, it needs to be scaled. The bottom left is where you want to spend probably a good chunk of your time. These are use cases where you need to use your data assets. If somebody's selling you something, or offering you something, where you don't actually need to use your data, it's probably not going to be really relevant to your exact business problem. You should be able to take some code and leverage some of the vendors here, right, the DataRobots, the SageMakers, the IBMs of the world that have really good accelerators, but then use that to accelerate your workflow and still build something unique on top of it. 
Okay, that's where your balanced portfolio is. You'll most likely have to do something in the IT space, meaning this thing shouldn't just automatically exist within your infrastructure already; that doesn't make sense, how could it? So you need to be thinking: there's a data problem, there's an algorithm problem, there's an integration problem, and all of that should sit in your balanced portfolio. These projects are four weeks, eight weeks, 12 weeks, give or take. In that time, I should be able to tell you I have technically proven feasibility, and I have a path back to ROI. If you can't tell me that in eight weeks, you should not be doing the project yet. So that should probably make up about 80% of your portfolio. This quadrant here, the transformation: I would advocate strongly that unless you've been in the space for probably two to three years and have solved several use cases in that balanced portfolio, you're not yet ready to ask for the investment to do a transformational play. If I go to Rich and say, Rich, I need 3 million bucks, I want to go solve a really interesting space, I want to do sensor fusion on geospatial imaging, map it with ad tech data, and predict Walmart stock prices, Rich will say, you're out of your mind, completely out of your mind, because you've never actually done something down here that's proven out ROI to me. I haven't earned the right to do this yet. Now, if I delivered Rich 15 projects here and I crushed every single one of them, he should say, you know what, go for it. You've proven a methodology, you've proven that landscape to me, go do this. Right? So you have to earn the right to move into that regime, and it usually takes organizations, even sophisticated ones, several years to build up that right. Okay. Hopefully that's a little bit of framing for you as you start thinking about the portfolio of use cases to go after. 
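As a rough illustration, the 2x2 framework Mike describes could be written down like this. This sketch is ours, not from the talk: the quadrant names are Mike's, but the encoding, the function name, and the portfolio-mix comments are paraphrased assumptions.

```python
# Hypothetical sketch of the talk's 2x2 use-case framework:
# y-axis = near-term ROI pressure, x-axis = degree of transformation.
# The quadrant names come from the talk; the mapping logic is our paraphrase.

def quadrant(high_near_term_roi: bool, highly_transformational: bool) -> str:
    """Classify a candidate AI use case into one of the four quadrants."""
    if high_near_term_roi and highly_transformational:
        return "snake oil"            # promises both: run for the hills
    if high_near_term_roi:
        return "plug and play"        # commoditized; keep to ~10-15% of the portfolio
    if highly_transformational:
        return "transformation"       # earn the right first, after years of delivered wins
    return "balanced portfolio"       # uses your data; ~80% of the portfolio,
                                      # 4-12 week projects that must prove
                                      # feasibility and a path back to ROI

print(quadrant(True, True))    # snake oil
print(quadrant(False, False))  # balanced portfolio
```

The point of encoding it at all is the asymmetry: three of the four quadrants come with a warning, and only the balanced portfolio is where most of the work should sit.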

14:52 Rich 

I think, as you're transitioning back, the only thing I'd say is, we're joking a little bit about falling prey to the vendor, but it's easy on both the practitioner-and-scientist side and on the business side to chase the hype. So there are some mutual conversations and grounding that have to happen to build the momentum and the capability to go push the transformation agenda. It's very easy to want to push into the transformation agenda too quickly. 

15:22 Mike 

So as I go to the next slide, I want us to remember this: balanced and transformation, right, quadrant three and quadrant four. So question two, which Rich mentioned two slides back, is how do we actually think about measuring success? We fall prey as technologists: I can say a whole bunch of random words, talk about R-squared and MAPE scores and all this craziness, get people excited about it and say, no, no, I'm going to solve this problem, you're going to make a whole bunch of money. I think we as technologists do a really bad job of over-promising short-term ROI because we want our projects funded. That might be really nice in the short term, but in the long term it's really going to come back to haunt us. So I want us to stop doing that. However, what I want us to really think about is that not all ROI needs to come from a financial return on the actual project. There are other ways to ascribe value to an AI project that, in their own right, are as important as being able to say, hey, I've reduced cost, I've increased revenue, right? That's a financial ROI. Every time we do a project, we learn something. Every time we learn something, we grow capabilities; our people get smarter, and they're able to think about the longer-term vision of what they're doing. So what I really want us to start thinking about from a value perspective is the difference between perceived value and delivered value. Perceived value is the standard stuff: hey, Rich, I just shaved us $3 million because we automated half of our workforce. That's not really realistic. Derived value, or delivered value, is all the other intangible and qualitative things we learned along the way: I was able to get five of my people spun up learning an entirely new programming language, I understand now how to write code for neural networks. 
I have three new eminence papers, I've published things on LinkedIn, I've attended conferences. All of these intangibles downstream will make your business more and more valuable. So when you're equating the value back up the stack, and I'm talking from my side up to Rich, all of that needs to be put in your foundational thesis statement. Otherwise, you will most likely never be able to produce this golden financial ROI metric; it really just doesn't exist today. So be very, very wary of that, and think from the larger vision statement. Good? Yeah. All right. Okay, the next thing I want to talk about, following up on that, is different ways to think about ROI, a bridge from that last point. I think there are three real ROIs we can talk about. One is measurable ROI, meaning, like we talked about, I made more money, I reduced labor, I did something that's a proxy for doing good in my business. Most of the time, I would say for 99.9% of all businesses and use cases, that's the only thing they really talk about, this measurable ROI. Unfortunately, there are only really a few use cases, maybe in the operations space, that cleanly hit that. How do you ascribe ROI to a really heavy R&D workload? It's incredibly hard, because usually AI sits in an entire value stream; it is not just this one little nugget you can measure and say, oh, because I wrote some really cool XGBoost algorithm, or YOLOv7, I was able to do this, such that it did this, such that I saved this amount of money. It usually doesn't happen in that linear fashion. 
So we have to think about the other types of ROI that we have, right? Strategic ROI, which I just talked about as well, is where we're thinking about the three-to-five-year vision we want to go after, right, from that quad three to quad four. If I want to prove innovation, if I want to get there, I surely need to be able to align toward my three-to-five-year roadmap and prove that I've gained capabilities. From the capability standpoint, it's the same thing, right? I need a team of people that I know will be able to anchor me such that we're actually performing and delivering real results. So every time you're thinking about ascribing ROI back up the stack, you should not be talking about R-squared, or MAPE scores, or RMSE. That is not what Rich and his peers need to hear. They need to understand these three things, how we are meaningfully equating them in a quantitative and qualitative way, why one is important in different circumstances, and how we measure it. That's going to be my biggest advice to you, because if you only talk about the first one, or if you talk about R-squared, you might as well just go home, right? You're never going to get that buy-in for that larger-scale investment. 
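As a concrete (and entirely illustrative) sketch of this idea, here is one way a project pitch could be forced to speak to all three ROI dimensions. The class name, field names, and example entries are our own hypotheticals, not anything from the talk:

```python
from dataclasses import dataclass, field

@dataclass
class ROIStory:
    """A project's value story framed as the three ROI types, not model metrics."""
    measurable: list[str] = field(default_factory=list)  # e.g. cost reduced, revenue gained
    strategic: list[str] = field(default_factory=list)   # progress toward the 3-5 year roadmap
    capability: list[str] = field(default_factory=list)  # people, skills, and eminence gained

    def is_complete(self) -> bool:
        # A pitch to leadership should cover all three dimensions, not just R-squared.
        return bool(self.measurable and self.strategic and self.capability)

pitch = ROIStory(
    measurable=["reduced manual review hours in one workflow"],
    strategic=["first step toward the multi-year automation roadmap"],
    capability=["five engineers trained on the new modeling stack"],
)
assert pitch.is_complete()
```

The design choice mirrors the advice: a pitch with only a `measurable` entry, or only model metrics, would fail the completeness check.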

19:45 Rich 

And my advice to the group is, look, for a really good CEO or general manager or C-suite, this idea of a system of ROI is not going to be a foreign concept, or one they're going to reject; it's going to resonate quite well. But it's incumbent upon the practitioners initiating these projects to present them with the story of that system of ROI. And that means going around this wheel, from the hard ROI to the soft ROI and how those two have to work together, as well as the prior concept of the short term and the long term, and showing how the build comes. Perhaps in the short term there is going to be more strategic and capability ROI, but then what are the indicators that show those things are happening, such that we know in two or three quarters the financial, measurable ROI comes into play? So be very forthright and proactive in talking about this system of ROI that has all three dimensions, and how they should deliver over time in concert. 

20:54 Mike 

All right, last leg here. I think we as a community, and by "we" I mean AI practitioners, and I put myself squarely in here, haven't paid enough attention to not just asking the line of business for permission, but to how, from day zero, we start really thinking about this. Again, I'll use the phrase "valley of death": moving from POC to production to actual ROI. Most of that hinges on IT, and it doesn't hinge on IT in a negative way; it hinges on IT because IT is foundationally set up to make sure the business runs and risks are reduced. The issue with IT, in comparison to AI, is that it is highly deterministic, right? You have some servers, you have your security protocols, you have your cybersecurity; there is a very fixed way that all of that needs to run to make sure your business, and I'm sure a lot of you are from Fortune 500 businesses, doesn't implode, and we don't have a leak and all sorts of other things like that. AI is the exact opposite: it's probabilistic. I think a lot of you understand that and know what that means. The implications of a probabilistic model, or probabilistic data, in a highly deterministic IT stack: I'm sure a lot of you have experienced why that becomes very, very hard. If you just finish your cool little POC and throw it over the wall to your IT friends and say, go run this thing, it'll never work, right? So 80-90% of POCs fail. Why do they fail? They don't fail because the data is not good, or the algorithms aren't good, or the people aren't good, or the business isn't good. They fail because we do not, early enough, bring in every stakeholder that needs to be there to make it successful. So if Rich and I aren't also bringing in our third person, which we should have up here, the IT representation, we will never actually cross this chasm of moving from a POC to a real, ROI-driven AI workload. That's my last big piece of advice for everybody. 

22:51 Rich 

Yeah, and the other thing we all need to recognize is that IT, and IT executives and teams, have been on this 30-year journey of building relevance and credibility with the business. And it's pretty good now; it's as healthy as it's ever been. So how can that team and that capability be an ally in getting great AI done in an organization, at scale, in a sustainable way? Bringing the three legs of the stool together early and often really is essential to make that happen. All right. So look, we've been on our own journey. Mike described that he joined the Deloitte team, along with our other SFL Scientific colleagues, about two months ago now. Deloitte, you could think of as having been in that lower-left quadrant: 90,000 consultants doing a lot of really interesting things, helping our clients build capability around data, analytics, and AI. Bringing in a firm like SFL Scientific allows us to push the envelope of what we do, to bring more of that transformational, science-based innovation into the team. And over the pre-close months we were together, and since we've closed, we've been taking our own advice. So we thought maybe we'd close with how we've dealt with those same four pieces of advice we gave you. 

24:26 Mike 

Awesome. Yeah. So this should look very similar. How did we think about getting started? Because I have to hold on to my own morals and say, hey, Rich, I can't just go do really cool innovation work; that'll never, you know, support the business. So we had to think very closely about this. Deloitte is already doing a tremendous amount of AI across all industries, so we know we have a really, really strong quad-three business. I need to be able to support probably 80% of the workloads in quad three, which then opens me up to spend about 20% in quad four, my innovation space. So how do we think about prioritizing those? We want to focus on markets that are prone to pushing the envelope on innovation. You can think about those spaces; pharma is probably one of the best examples of that, right? They constantly need to evolve their space, because science is incredibly hard, people will always be sick, and we need to think about more and more kinds of bespoke medicines to make them better. So if you think about that, there are a few industries and a few spots you would tend to pick to actually go and build the business on. So, very similar to what we've described: Rich and I, and a lot of others, even here in this room, spent a lot of time thinking, as we go to business together, where are we going to start? What markets do we focus on? What use cases do we want to attack? And then we made sure we have a very structured business where 80% could be home runs and wins, and 20% should be long shots where we take a more systematic approach to doing something novel. But that's okay, because we've committed to those ratios, and we know that the 80% will support the 20% innovation space. 

26:04 Rich 

Yeah. And then, as we've been thinking about the horizons of success and how we measure it: as one of the people who had to go to our management team and our board to get the acquisition approved, we've been very clear from day one that, yes, we'll make financial progress in the near term and along the way, but we're going to measure whether this deal is successful over multiple years. So, are we building a capability? Are we pushing the broader Deloitte organization more into the transformational space of work? And how is that bigger capability we're growing on top of SFL Scientific feeding the strategic ambition we have for the kind of work we want to do? So we were very clear, even at the outset of the project, the deal in our case, about what those metrics were and how they were going to play out over time, and we got everyone from the board on down aligned to them. And quite frankly, the hardest thing now is to keep reminding the practitioners that success isn't going to be determined in the first quarter or the second quarter; we are playing the long game. But once in a while, Mike and team, you know, they're results-oriented, so they like to put points on the board more quickly than we necessarily need them to. 

27:28 Mike 

And the last thing I'll call out before we break for any questions is that we spent a lot of time up front thinking about what it means, from an IT infrastructure perspective, to move those novel workloads into production. We talked about this very early on, and Deloitte has committed, over the last couple of years and moving forward, to that scaled hybrid environment, right? Bringing in various DGXs from an on-prem and hybrid perspective, thinking about new ways of working from an MLOps, a DevSecOps, and a DataOps perspective, being at conferences like this, seeing what the innovation space is pushing. Because not only do we need to deliver that to our customers and clients, I want to deliver that internally, right? So we've spent a lot of time and thought with our IT fellows thinking through, as we get more and more sophisticated, how can IT not slow us down, but actually empower us to go faster? That's been a huge focus for us moving forward. So with that, I think we're probably out of time, and maybe a question or so, unless there were any closing remarks. 

28:27 Rich 

All good. Thank you for your time, I think maybe a question or two. And then we'll get off the stage and certainly be available for chats afterwards as well.  

28:39 Mike 

Yes, sir. 

 

Q&A session 

28:42 Audience 

So, I understand the focus on revenue and dollars, right? Large corporations move slowly and deliberately, very often by design; more often than not, we're spending other people's money in return for making them more money. In that space, in the make-versus-buy equation when it comes to capabilities like this, how have you found that playing out? You made mention of ChatGPT just this week. If I made, say, a $20 million investment three years ago in technology that is now obsolete, do I carry that technical debt forward? Do I deprecate it? Do I therefore just choose to buy technology consulting and services externally, instead of burdening my P&L with it? Where does the make-versus-buy equation land here? 

29:33 Mike 

I can give my perspective first. I strongly believe that if it's something bespoke to your business, something you are uniquely going to market with, you should be making it. If you are buying the technology that is the one core, unique thing you're bringing to your customers, well, there's really no reason they necessarily need to come to you every single time; you haven't really cornered that space. So the most successful organizations we have seen are always open to buying, when the buy is incremental to the business, and they always make, when the make is innovative to what they're going to foundationally bring to the market, what they are going to be known for being best in class in. That's the way we've seen it done very successfully.  

30:21 Rich 

I think the only thing I'd add is that even if you choose to make, it doesn't mean you can't ask for help, right? There are lots of partners out there that would be willing to help build, but then not necessarily own, you know, what's built. So you've got to be thoughtful about when you want to go fully alone versus seek support. I think with that, we probably have to get off. 

30:43 Mike 

I think we have one more question, if anyone wants to ask. All right, come on, and then we'll go quick; we're probably about done. 

30:50 Audience 

So the structure that you guys outlined was really helpful to see. One thing I was curious about, especially thinking about the data scientist side: of course, they're kind of playing around with whatever data they get, while the other side is growing whatever business and growing whatever audience. And so the question I have is really, who's responsible for avoiding, I guess, bias? And where does that kind of plug in here? Because I can see how naturally it would just fall somewhere in between. So whose responsibility is it to make sure bias is not a part of whatever product? And yeah, 

31:35 Mike 

It's both of our responsibilities. And to be fair, it's more than just our responsibility, right? Really sophisticated organizations are thinking about this through governance and oversight, a steering committee, whatever you want to call it. It is my responsibility, as the technology expert, to discuss explicitly why there may be bias in anything I'm doing: algorithmically, in the data inputs, and so forth. It is Rich's opportunity and responsibility to discuss bias from the end user's, the customer's, the business's point of view. And then there's IT, and it just goes on and on; it's cascading. It's everybody's responsibility collectively. And this goes back to my earlier point about getting everybody in the room. If we're not sitting and talking about that foundationally, we will never do it, and it'll become too late, because we'll already have a really cool result and say, yeah, forget it. That's not the way to do it. So you need to address it up front, understand those biases across the stack, and ask: do I want to take that on? Is that risk worth it? And if it is, how do we negate that bias such that it's not actually there, foundationally? But that's everybody. Right? 

32:39 Rich 

Well said. All right, thank you for your time. 
