Charting the Future: Which Industries Will be Disrupted Most by AI?
The long-term benefits of AI will soon be witnessed. Here, specialists predict which sectors face the most disruption from this technological revolution.
From camera-first shopping experiences to personalized recommendations, experts at The AI Summit New York 2022 envision a future where technology enhances our lives.
Listen in for guidance on how to navigate the ethical considerations, the rise of avatars, and the steps to unlocking a new era of innovation.
Thanks to our panelists:
Trevor Gormley, Co-founder of Liquid Intelligence and Advisor, AI Journal
Eric Petajan, Principal Systems Engineer - Video, CV, AI SME, AT&T Mobility
Kamayini Kaul, Digital Data and Analytics Executive, CSL Behring
Frank Schneider, AI Evangelist, Verint
Ankit Mangal, Director, Wayfair
Be part of the visionary journey! Witness the evolution of AI and digital twins, December 6-7, on our Industries Stage. Embrace the future of technology alongside industry elites and tech aficionados. Reserve your spot now.
Panel: Future of AI - Which Industries Will be Disrupted Most by 2023?
Awesome. Well, my brain might be mush for this one. We've had some great content throughout the day and yesterday, and I'm really excited for a plethora of reasons, but specifically because some of the resumes on this panel are pretty significant. Normally I would do the introductions, but they're so long that it might actually take me longer to elaborate on their resumes than we have. So we'll go down the line, and if you could each give a one-to-two-minute briefing of who you are and what you've done, that'd be a great start.
Awesome. Alright. Hi everyone. I'm Ankit Mangal, currently working as head of analytics for our supply division at Wayfair. Wayfair is an online retailer that sells furniture online. Before working in e-commerce slash retail, I worked in banking, and before that I was in healthcare. Throughout, I've been working in statistics, data analytics, machine learning, it keeps changing names, and right now it's AI, but that's where I've been busy.
Eric Petajan, I work for AT&T Mobility; I'm repeating my bio from the last talk a few minutes ago. I measure quality of experience for visual applications. It's my second time with AT&T. The first time, I was a researcher at AT&T Bell Labs from 1984 to 2000, then Alcatel-Lucent, and then startup companies for 14 years. The last one was LiveClips, clipping NFL plays for NFL Sunday Ticket on DirecTV. DirecTV bought us in 2014, and in 2015 AT&T bought DirecTV, so I ended up back where I started. I also serve as the resident expert in AT&T Mobility for video, XR, and any visual application. And we're in the process of transforming how we build out our network to be application-QoE based instead of peak-bitrate based.
Kamayini Kaul, I am representing two organizations here today. From an industry background perspective, I've dabbled in both healthcare and financial services, significantly more in healthcare: a decade in medical devices, as a medical device engineer by trade working on imaging and reconstruction in the early days of computer vision, and the second decade predominantly in pharma, at multiple small and large-sized pharmas, with a short stint, like Ankit here, in financial services. The second organization I represent is a gender-focused, female-oriented leadership group for women in data and AI that we call WLDA. Our chairperson for that organization, Asha, was covering and moderating a couple of panels here today, so I'm going to try to do my best rendition of making a plug for WLDA as well. We're talking about trying to bring back the mass exodus of women that we saw over the course of the last two and a half years in the pandemic, so I'm going to make a plug for that. And when I share trends in the industries that I've played in, the gender divide is a major issue. With that, I'm going to hand the shared mic over to Frank.
Hi everyone, my name is Frank Schneider, and my title is AI Evangelist at Verint. Verint is a contact center company that has essentially listened to or monitored 40% to 60% of all of the calls and chats with contact centers that you've probably had in the last 25 years. I came to Verint, like the other gentlemen here, via an acquisition. I was the co-founder of an AI startup called Speakeasy AI; that was my second AI startup in a row. I've been in the conversational AI space for probably about 10 years, and I've been really lucky and fortunate to consult with a lot of large brands on contact center AI and conversational AI. My job is to share those learnings with the market.
Awesome. Yeah, so a lot of good knowledge is going to come out of this group. I want to start by letting the cat out of the bag: I'm very curious what the perception of AI disruption looks like in pharma. I think that could be extremely broad and complex, so if you could distill it down into terms that I could probably understand a couple of drinks in, that would be awesome.
Sure. And so maybe let me take the lens of both, right? Since I have the med devices as well as the pharma background. Pharma and life sciences, I'll generalize those within niches of healthcare. Generally speaking, all of us are consumers of healthcare. You don't want unfettered, unvalidated medicinal products or therapeutics being ingested in humans or used around humans. And so, generally speaking, the industry is never going to be cutting edge or bleeding edge; it's a fast follower, generally speaking, always the case. You want to be innovative with medicines and with bringing novel therapies to patients, just looking at population health burdens. Look at what we did with turning around vaccines in the face of the pandemic, and there's going to be more emphasis in that space. So as an industry, I would say med devices is probably farther along, just looking at two decades of evolution and maturity of graphics and computer vision applications. For life sciences and pharma, it's a very different answer depending on which functional area you're talking about.
There are some very highly regulated spaces that pharma plays in when it comes to R&D of new medicinal products, or when it comes to a supply chain where you have to heavily monitor product quality and manufacturing practices. So it's been generally slower; I would say we're at the early cusp of that. If I talk in S-curves of diffusion and adoption, we're at the very, very early tip of the S going up. But in certain other areas, like corporate functions, where you have legal, procurement, HR and such, and commercialization quite frankly, especially for ML applications, I think we're very much in the middle of that S going up. So let me pause there and see if there are other perspectives we want to gather as well.
I would love to hear about the IVR and contact center space. Obviously, there's continual improvement of chat systems; we've seen conversational interactions through Siri, Alexa, et cetera, become a little more robust in terms of their applications. What are you seeing in a space where, for a long period of time, there was a lot of tension, where people were not as friendly about certain chat systems, and it's obviously improved? Can you talk about some of the advancements that you've seen and what you're excited about for 2023?
Yeah, it's interesting. Some people would argue that there's a marketing veneer around the terminology AI. I don't like to have those debates, but I'll indulge for a second and say automation is the theme, right? And if automation is the theme, the contact center world has always been at the cusp of innovation on that, for the simple fact, and this is kind of crass, that it's always been seen as a cost center, right? Your contact center is potentially low-paid labor handling high-stakes situations, and you want to cut those costs. Smart brands that are into customer experience and are looking to differentiate, get brand loyalty, and increase revenue and maintain dollars without churn are thinking of things differently. So, all along the way, contact centers have always been at the front of automation, and then it gets a bad reputation because you were always trying to live in this world where automation means containment, dirty words like containment and deflection: hey, we're either trying to prevent you from getting to a human, or we're trying to displace a human.
And because of that sort of ethos, you run into AI malpractice, where you're not worried about experience- and outcome-oriented solutions; you're worried about just being that barrier and cutting those costs. I think the pendulum is swinging drastically, because of the device in everyone's kitchen that we all use to play music, ask for the weather, and get news alerts, and because pressing a button on your phone and dictating a text while you're driving now has the accuracy of, as I always say, a court stenographer just typing it out. People are getting comfortable, getting used to having basic elements of automation in their lives, and their expectations are increasing drastically. We talk about S-curves there: they don't want to spend more money, but they have a different level of expectation that even just getting to a human in a certain amount of time might not meet.
They want to be able to self-serve; they want to be able to get things done. There's a certain appetite for that. So to me, the contact center pendulum has now swung to: how can we look at things that are actually business-based KPIs around revenue generation, around conversational design? And if you think about design thinking, it always starts with empathy for the user. That's the way people are starting to look at AI now: it's design thinking applied to an NLU system, whereas maybe three or four years ago it wasn't design thinking, it was, what's your accuracy level? Oh, you're 98% accurate, and that's it. But now we're moving into that empathy for the user and trying to get things done.
And I'm excited to touch on this; you led into it perfectly, we didn't even stage that. The economic value, I think, is a big question, and we have a computer vision expert here, so I think it would be great to understand your perspective. Obviously, there's been an array of improvements across the video processing segment of the space. Where are you seeing businesses go? And obviously we'll get into industry-specific facets and examples, but where are you seeing the most excitement for the next year or so in the computer vision space, in video processing, et cetera?
Well, within the next year, the XR domain is full of computer vision problems that are still actively being solved while we wait for the glasses to finally appear for AR. But I think in general, the most exciting, disruptive development is going to be avatars: our personal representatives that could be automated. And this dovetails with the call center application. Natural language processing is kind of the core of AI, the Turing test, passing the Turing test, and we're making a lot of progress. A lot of people are comfortable having the voice agents in their home, Alexa and Siri and so forth, and have come to rely on them; I know we do in our household. When those agents start to take on a visual persona, and it's two-way, and people start to trust having cameras in their home, that it's secure, that it's not going to send inappropriate video outside the household.
I think we're going to see people use these intelligent agents as extensions of themselves, as proxies for themselves, to get actual work done. Maybe your agent will be talking to another agent: my people will talk to your people, let's do lunch; the agents will do lunch and get the work done. You don't have to worry about updating your health insurance program or whatever; all that annoying work will be done for you in a trusted way. In order to do that, we've got to overcome the uncanny valley. The agent can't be distracting in its appearance; it's got to work really well. And then ideally, if we have an immersive technology like glasses, that agent could be in your house helping you set up your Wi-Fi router, plug the ethernet in there, and really provide the equivalent of a physical person helping you in your home.
And with you at Wayfair, I want to touch on two points. The elephant in the room has obviously been supply chain the last two or three years with Covid. I'm curious, for the first point: what is the most exciting technology that you think Covid bred in the supply chain realm? And then secondarily, to your point, Eric, where are we looking consumer-experience-wise, in your opinion? What is Wayfair talking about internally to make that consumer experience more palatable for people when you can't have those tangible things? You can't touch the sofa; obviously, you can think about it and you can see it, but how are you overcoming those two obstacles?
That's a great question. So at Wayfair, we have two different sides of the spectrum that we are dealing with in AI. The first one is the customer experience side, so let me talk a little bit about that. In customer experience, a lot of the work that we are doing is in the AR/VR space; it's almost horizon three. Wayfair is launching physical stores now, and these are not typical physical stores. When you go in there, you may be able to visualize what's not in the store. So if you're seeing this particular black chair and you want to see how it would look in your house in an orange color, you can visualize that in AR/VR mode in the physical store. You get a hybrid omnichannel experience where you're not only seeing, touching, and feeling the chair, but you can also see how it would look in your room through an Oculus, in a different option, in a different size, in a different color.
That takes customer experience to a different scale, and it increases our conversion rate and increases customers' loyalty to us. So that's one example. Another example is that we are creating more and more 3D models. When we have AR/VR and you want to hold up your phone and see how something would look, you may want to rotate it, and for that we need 3D models. Now, 3D models are really, really expensive: it costs us around 50 bucks to create a 3D model, versus now we have technology from Nvidia where we can stitch together a couple of images and create a 3D model, and it costs us less than 20 cents. That increases adoption, which leads to a much better customer experience. The other side of the spectrum where we are using AI is more toward some of the stuff that Frank talked about, which is reducing the work that doesn't need humans.
For example, any retailer, including Wayfair, Target, Walmart, has millions of SKUs for sale on a website, and these SKUs are provided by thousands of suppliers. There's a lot of duplication that happens. The impact is that a customer may see 50 different products that all look slightly different, and that is a poor customer experience. In the past, we had offshore agents looking through all those individual products, seeing that these look very similar, eliminating one of them, or grouping them. Now we're using algorithms that create scores by looking at visual images of the product and the product's dimensions, and then dedupe the products by themselves. So catalog management is where we have seen a lot of cost savings from AI.
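To make the de-duplication idea concrete, here is a minimal illustrative sketch; it is not Wayfair's actual pipeline, and the SKU ids, embedding vectors, and thresholds are invented for the example. It scores pairs of catalog items by visual-embedding similarity plus closeness of physical dimensions, and flags pairs above a threshold as likely duplicates:

```python
# Illustrative catalog de-duplication sketch (hypothetical data throughout).
# In practice the embedding for each product image would come from a trained
# vision model; here the vectors are simply made up.
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def find_duplicate_pairs(skus, sim_threshold=0.95, dim_tolerance=0.05):
    """Return pairs of SKU ids that look like duplicates.

    Duplicates must be both visually similar (embedding cosine similarity
    above `sim_threshold`) and physically similar (each dimension within
    `dim_tolerance` relative difference).
    """
    pairs = []
    items = list(skus.items())
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            (id_a, a), (id_b, b) = items[i], items[j]
            visual = cosine_similarity(a["embedding"], b["embedding"])
            dims_close = all(
                abs(da - db) / max(da, db) <= dim_tolerance
                for da, db in zip(a["dims"], b["dims"])
            )
            if visual >= sim_threshold and dims_close:
                pairs.append((id_a, id_b))
    return pairs

catalog = {
    # dims are (width, depth, height) in cm -- invented numbers
    "SKU-1": {"embedding": [0.9, 0.1, 0.4], "dims": (80, 35, 45)},
    "SKU-2": {"embedding": [0.89, 0.11, 0.41], "dims": (80, 34, 45)},  # near-twin
    "SKU-3": {"embedding": [0.1, 0.9, 0.2], "dims": (200, 90, 75)},    # different
}
```

At real catalog scale the pairwise loop would be replaced with approximate nearest-neighbor search, since millions of SKUs make an O(n²) comparison impractical.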
So I haven't gotten through one of these panels yet where the first 15 minutes didn't touch on ethical impact, and I think we're at the time mark where we can talk a little bit about it. There's been a lot of skepticism and concern, obviously, about automation's takeover, so to speak, of people's jobs, training, et cetera. I'd love to open this discussion up a little further: how is each of your industries impacted by natural language processing, say, where an improvement has actually been more beneficial because it opens the envelope for people to have more time for creative functions instead of having to do the mundane, automatable tasks? Can anybody touch on the opportunities it's actually created to reduce that redundant, mundane work?
Yeah, maybe I can start. And this is where, while pharma, within healthcare, is a long-cycle business, you want to preserve the innovation time of bench scientists and of people deeply steeped in the sensitive process manufacturing of drugs; you want to preserve that time for the really highly specialized bench to focus. So I think NLP and NLU have definitely taken off in a big way, even in pharma and life sciences, even though we might not see the economic value materialize within the next, well, definitely not 2023. I mean, there might be some, because some of the pharmas did start quite early on those NLP and NLU investments. There are three areas where we're going to categorically see it, along with the ethical side and how we manage bias and fairness principles around it.
One is going to be cohort identification for clinical trial conduct. Using NLP and NLU to identify the right cohorts for inclusion and exclusion in clinical trials, across structured and unstructured data sets, has been something that has been eluding us for a while. And so there have been substantial investments in this space, and quite a few pharmas have started to use NLP and NLU to shrink that clinical inclusion work and compress the clinical conduct time. The second area where we're seeing this is scientific industry literature being combed, whether by medical directors or medical science liaisons, to identify that next frontier, that next applicability candidate or disease group that we want to go after. So there's definitely that. And then there's the third, where NLP and NLU, we've been talking about this for a while.
Where it gets a little tricky is medical NLP and NLU. Yes, I tried to run ChatGPT yesterday on some of the medical NLP and NLU that we trip up over all of the time; it's still an imperfectly trained set of models. I mean, human language gets complicated enough; then you throw in medical language, which doesn't always follow the form and syntax that natural human language does, and that makes it significantly harder. That's been the biggest impediment to applying it to speed through all the unstructured notes that are captured in EMRs, your electronic health records, as patients. So that's taken off in a big way as well. Be on the lookout: that's going to be an area that's going to come up, and we are doing a lot of AI governance around those types of use cases to ensure that all aspects of bias, fairness, and equity are run through all of the internal governance groups, including legal, including compliance, and including model development and management practices as well. Let me pause there and see if others might want to chime in as well.
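As a toy illustration of the cohort-identification idea, the sketch below screens free-text patient notes against inclusion and exclusion criteria expressed as plain keyword lists. A real system would use trained clinical NER/NLU models over both structured fields and unstructured notes; the criteria, notes, and patient ids here are entirely made up:

```python
# Toy sketch of trial cohort screening -- NOT a medical NLP system.
# Inclusion/exclusion criteria are simple lowercase keyword lists matched
# against the text of each note; real pipelines use clinical NER/NLU models.

def screen_patient(note, inclusion_terms, exclusion_terms):
    """Return (eligible, matched_inclusions) for a single free-text note."""
    text = note.lower()
    matched = [t for t in inclusion_terms if t in text]
    excluded = any(t in text for t in exclusion_terms)
    # Eligible only if every inclusion criterion matched and nothing excluded
    return (len(matched) == len(inclusion_terms) and not excluded, matched)

# Hypothetical criteria and de-identified example notes
inclusion = ["type 2 diabetes", "age 40"]
exclusion = ["pregnant", "renal failure"]

notes = {
    "patient-a": "Age 40, history of type 2 diabetes, otherwise healthy.",
    "patient-b": "Age 40, type 2 diabetes, chronic renal failure stage 4.",
}

cohort = [pid for pid, n in notes.items()
          if screen_patient(n, inclusion, exclusion)[0]]
```

The point of the sketch is the structure, not the matching: the hard part the panelist describes is exactly that medical language rarely matches criteria this literally, which is why keyword lookup fails and trained clinical models are needed.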
Yeah, I was just going to say, I think the contextual variability in healthcare has a much grander impact. There have been a couple of examples thrown out: I know Target had a mishap with a model, and then Amazon, several years ago, had an issue with the hiring algorithm they were using for recruitment. The high-level example was essentially that they were trying to have a model read resumes, and they had 30 years of historical engineering resume data, and the majority of it was skewed toward men. So obviously, even if inputs from 30 years ago are a lot of data, data quality matters, and the period that data is extracted from has a big impact. So I think what becomes even more interesting is the contextual variability of why people are doing something and the data point that's created. How are we going to get over the barrier of allowing consumers, especially in your business, to get a little more excited about the digital health aspect, right? Because there's this barrier of: I'm willing to let the data be used if it's for buying razors or deodorant or what have you, but when you start talking about very specific health elements, I think it becomes a bit more opaque. So how are you envisioning where exactly the ethical line gets drawn? Obviously, you can't apply the same rules to each industry, so where does the blanket start, ethically, in your opinion, that would make the most sense moving forward?
I can take a quick crack at it. With AI, a couple of dimensions matter in this particular question. One is the risk appetite in a particular industry. When we think about healthcare or financial services, the risk of being wrong has much greater implications than at a typical consumer goods company like a retailer. In the worst case, if Wayfair does a bad job with 3D, AR, and VR, a customer will bounce off the website and may not come back. Versus in healthcare, if a surgeon gets it wrong, the risk appetite is a lot lower. So I think that is one dimension that will determine how far AI goes in each industry. The industries that are heavily regulated are where we expect less adoption, or they'll be fast followers, versus fast-moving consumer goods industries like retail, which will be at the forefront and will try a lot of things because the implications are smaller.
And just to build on that point, right: it depends on the industry and the floor of what can happen if the risk materializes. When it comes to human lives, for example, you want every airplane you board to come out of a Six Sigma manufacturing engine shop; otherwise, you don't want to get on that airplane. There's a similar analogue in healthcare, right? I think the governance around AI adoption is going to get regulated and standardized, in many cases by health agencies the globe over. The FDA has started to put something out there in terms of: here's the minimum AI explainability plan; here are the minimum AI bias, model management, and model development practices that you have to adhere to. A lot of health agencies across the globe are doing that. So there's going to be a minimum standard established by regulators in this space, at least in healthcare.
And by the way, this keeps evolving, for medical devices versus in-human therapeutics or diagnostics that might touch in vivo human components. This is still an emerging space where regulators are putting that governance minimum upfront, and so you'll see more evolution here. There are other checks and balances that happen quite a bit in healthcare, again because of the lives at stake. You have medical institutional review boards; for us in life sciences, bringing a new drug to market means clinical trials that we want to conduct at provider sites and sometimes in ambulatory or remote settings. Even today, with the advent of Covid, when everybody's talking about trying to do siteless trials, there is still an institutional review board that's going to be the minimum hurdle you have to clear in terms of ethics, bias, and peer review of the protocol: are the inclusion and exclusion criteria you're setting up representative enough for population outcomes to be scalable across ethnicities, across minorities, et cetera? So there are checks and balances in place, at least in our industry. I'm sure this is an evolving space on the consumer side, where, again, to Ankit's point, the less material the loss of life or the catastrophic dollar magnitude, the more latitude you have in terms of how quickly you launch and how governed you have to be about it.
Yeah, so it'd be interesting in the contact center space, when you get something wrong. Obviously, it'd be great if we got chat systems up to a hundred percent accuracy, but that's very difficult to do. Can you elaborate on the economic value of where we are with chat systems and the improvements, and where's the ceiling? What is the next evolution of chat systems, in your opinion? Are we talking about nuanced behavior changes based on dynamic dialogue? If I'm somebody from New York, I obviously have a different accent than somebody from Southern California. Can you dig in a little bit to the nuance of the improvements you've seen and what you're excited about moving forward?
It's funny, from a modality perspective, you touched on both voice and digital. When I hear chat, I think of typing to things, and when you mentioned accent, I think, tangibly, you started with accuracy. If someone saw my speech or presentation earlier, you're going to hear this again: customers want to do five things. They want to buy, sign up, or enroll, put that in one bucket. They want to use or enjoy the service or the product. They want to pay for it (they actually do want to pay for it). They want to report a problem and get it fixed. And then they want to cancel or end the usage of it. And that's pretty much life. So what I'm excited about is not some dance or calculus around whether we can move the accuracy on the input for that intent from 97 to 98. What I'm excited about is this, and it's why I was talking about Wayfair before I came up here.
Wayfair has incredible live human agent support. So the risk for them, a little different from the ethics calculus, is: we can't disrupt that brand, and now we're going to put AI in front of customers. Inherently there's a stigma around that, whether it's because you yelled "agent" at IVRs for 12 years, or because you messed with, I'm not going to mention which, it wasn't this telco brand, but another telco brand's chatbot like 15 years ago, and you're thinking, this is not for me, I'm not even going to deal with it. But I had an experience with Wayfair where I wanted to return a piece of furniture, and here I am loving their customer support, so I'm thinking, let me hurry up and get to live chat, or let me get the phone number and call and handle this.
Instead, an AI algorithm did a lookup and saw: hey, this TV console is a couple hundred bucks, and it might cost us 125 to ship it back; let's just tell them to donate it to charity if it's busted, and we'll send them a new one. I never spoke to a human. Then later on, I called Wayfair for a different issue, and you get one of their lovely agents in North Carolina (they always say where they are), and they know who you are. So when I get excited about the future, it's really AI augmentation. It's what Verint calls the engagement capacity gap: it's really hard right now for contact centers in the industries I'm consulting with to meet the needs of customers with their 1,500 or 3,000, whatever the case may be, agents. So it's not a matter of let's lop off 20% of our headcount because we threw a bot in front of our customers.
It's: how do we make this whole engine run, and how do we put the customer at the center of things? Even when we were talking about the pharma use case and the ethics, I tend to think of the user side. Assuming all the boxes are checked for governance and regulatory compliance and all the acronyms I dare not say that we need to adhere to, ultimately the user is going to define, if you take a transparent approach: what do I opt into? What do I want to share for this use case? What do I feel comfortable dealing with? And giving yourself a chance to learn from or listen to your actual customers or users will help inform both the contact center approach and some of those more delicate, high-risk areas.
Yeah, that's a great answer. I do want to talk about avatars, which you brought up; I think this is a good segue to moonshot the discussion. We're talking about disruption in 2023, and there's obviously a ton of discussion around synthetic data, digital twins, et cetera. Can you speak to where your vision is going with the digital twin notion, or the avatar being your liaison? I think people generally tend to use Siri and Alexa, as you mentioned, for the weather, the menial tasks, and we've talked about the risk there being low: if Alexa tells me it's going to be 52 outside and it's 70, that's not that big of a deal. Can you discuss a little where you envision avatars going? Because I think each individual industry is going to benefit from some sort of synthetic replica or liaison that can basically be your middleman, if you would, or middlewoman, that allows you to improve decisions or not have to deal with them, and to make them pretty accurately.
So I think this will be rolled out in phases, and tell me if my projections are wrong, but I think maybe the first step is to have, not the paperclip, but an actual avatar representing the user, with their facial gestures, with their voice, but with augmentation: maybe the voice is modified in some way to protect the call center agent's privacy, and their appearance is modified, maybe completely, maybe augmented in some way, but mainly with the facial expressions and a pleasant appearance there in front of the user. And then you build out from that: not just the face but the torso, maybe arms, and maybe now you're in an immersive environment, so they can actually be in your home to help you point to where to plug in your ethernet connection and so forth. Beyond that, if the agents are actually autonomous, then you have a serious security concern.
And this is where blockchain technology may come in: chain of custody for the credentials. Currently, we're still using passwords, which nobody likes. Your phone is your most secure interface to the internet; it's Face ID on the iPhone, I use it all the time, and that's getting better and better. The security is greatest on your phone compared to your desktop. So I think we need some standards, maybe some regulations, maybe industry forums, to provide a trusted way of multimodal biometric access. Continuous identity verification of the user on both ends is going to be super important as we start to engage more and more proxies to do our work for us.

Yeah, I think the AR/VR and digital twin of your own home, with floor plans, could be a very interesting thing. And it's exciting when the tech frontier really gets to that level. I've seen a lot of hack demonstrations where certain things look really cool, and then you pop the balloon with a pin and the thing kind of implodes a little bit. What's your grand vision of the implications that some sort of digital representation of somebody's home could provide moving forward?
It is very much possible, and we are actually working on it. What we are working toward is a camera-first shopping experience. Typically, if you're trying to shop at Wayfair, you come to our website and start shopping for what you think is going to look good. What we are building toward instead is a camera-first experience: you pull up your camera, you take a 360 of your room, and that's how you start shopping, because then, out of the 20 million SKUs that we sell, we start showing you the 20 SKUs that will actually fit your room. And when we say fit, it's not just the visual appeal but also the right size. Should that be a two-seater? Should that be a three-seater? Should that be a five-seater sectional?
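The size-fit part of that camera-first idea can be sketched very simply: given a wall width recovered from the room scan, keep only the sofas that physically fit with some clearance. The catalog entries, dimensions, and clearance value below are invented for illustration, not Wayfair's actual data or logic:

```python
# Hypothetical sketch of the camera-first "does it fit?" filter.
# All SKUs, widths, and the clearance rule are made-up example values.

def fits_room(sku, wall_width_cm, clearance_cm=30):
    """A piece fits if its width leaves at least `clearance_cm` on the wall."""
    return sku["width_cm"] + clearance_cm <= wall_width_cm

catalog = [
    {"sku": "loveseat-2",  "seats": 2, "width_cm": 150},
    {"sku": "sofa-3",      "seats": 3, "width_cm": 210},
    {"sku": "sectional-5", "seats": 5, "width_cm": 320},
]

room_wall_cm = 250  # width of the target wall, as measured from the room scan
recommended = [s["sku"] for s in catalog if fits_room(s, room_wall_cm)]
```

In a real system the scan would yield a full 3D room model rather than a single wall width, and the filter would sit upstream of the style-based recommendations described next, narrowing millions of SKUs down to the handful that physically work.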
So now you're giving recommendation engines as well, based on that profile and the house?
Exactly. And it's not only that we're looking at the dimensions of the room; we're also looking at: what do you have on the wall? What do you have as the table lamp? What's your style? Is it nautical, is it traditional, is it contemporary? All of that will go into our recommendations. That's happening to an extent right now, but in 2023 we'll definitely be looking to make more progress on that.

That's great. I think we're coming up close on time; we've got four minutes, so we'll open it up for Q&A. If you want to, just come to the mics in the back of the room, give us your name, and shoot away.
Hi, Lucas FinCo from Strata Gain. So we're talking about disruption, right? Can I get a prediction from the panelists: what industry or company do you think might meet its demise due to AI?
An industry or company that might meet its demise? Yeah, because of AI. I am not going to be the grim reaper.
That's a morbid question.
Well, let me perhaps preface it with this, right? The more heavily regulated the industry, the less likely you're going to see a demise in the near term, for the good reasons and good measures we all talked about just now. But I think, generally speaking, in the minimally agented or virtual-agent contact centers, which every industry has, there might be some demise, and there may be significant automation. And that's true even in healthcare; this is one of the areas that could look very different from today, right? Our IVR systems, whether it's patient services, where you're trying to call and find out about a new drug, or whether it's an IVR for reporting a safety incident on a drug. So there, I think, there will definitely be some demise, so to speak. But let me see if others have perspectives.
I think less of a demise and more of a reduction, because I think AI augmentation is the right strategy, even for retail, which is a somewhat less risky avenue. Even though we have a large offshore team doing a lot of duplication detection, we are not going to eliminate those jobs; we are not even able to get to all the things that we need to get to. So it's not so much, let's cut the resources and put everything toward AI; we are using AI to augment our workforce. So in 2023, at least, I see some reduction, not so much a demise.
I was just going to say, I think the interesting bit, at least from my perspective, has been seeing the improvement and aggressive change in checkout counters. The improvements there seem to be getting better and better, and if you go to a grocery store, or you're shopping for something and you know what you want, you get in, you can scan it, and you've got contactless payment. I'd be very surprised if in 10 or 15 years we still have people checking people out. I think that's an industry that has radically changed and will continue to do so. Awesome. Thank you. Yeah, thank you, everybody. Thank you. Have a good rest of the day.