The AI Summit New York


Nov 15, 2023

Unlocking Cross-Functional Synergy in AI Development: A Deep Dive into Fostering Collaboration


Bridging the culture and knowledge gap between design teams and technical ML skills.

In a riveting panel discussion featuring luminaries from Airbnb, Coinbase, Meta, and YouTube, the keys to fostering harmonious collaboration between design and technical teams in the AI and ML domain were unveiled.  

Listen in as our expert panelists share their captivating journeys, highlighting the crucial role played by curiosity, determination, and humility in their experiences. 

Distinguished panelists include: 

  • Claire Lebarz, Head of Data Science, Airbnb 

  • Vik Scoggins, Product Management Lead, Coinbase 

  • Carly Burton, Director of Design & Research, AI Infra, Meta 

  • Tobe Okeke, Product Growth Science and Analytics Lead, YouTube 

Learn how data fluency and data-driven design are reshaping the AI landscape. Explore their insights into fostering cross-functional synergy that leads to remarkable product development and user experiences. 

As we navigate the AI landscape, one thing is clear: the collaboration between design and technical teams is the driving force behind innovation. Make sure to watch this discussion to uncover the keys to successful cross-functional synergy in AI development. 

Join us for our next exclusive panel discussion to gain a deeper understanding of how design and technical teams collaborate to shape the future of AI.


The Transcript: 

00:00 Introduction 

Great. So we are getting into our second panel discussion today. I'm very excited about this one, actually. We are going to have Claire Lebarz from Airbnb, Vik Scoggins from Coinbase, Carly Burton from Meta, and Tobe Okeke from YouTube. And the discussion is going to be about standardizing AI access for all: bridging the culture and knowledge gap between design teams and technical ML skills. I look forward to it. Thank you.

00:32 Claire 

Great. Thank you for the introduction. Can you hear me okay? Great. Well, I'm Claire. Yes, I worked at Airbnb until September, but I just joined a new rocket ship called Malt, which you may not have heard about, but that's not the topic of today. I'm very excited about what we're going to talk about: how do we make machine learning teams and design teams work together? How do we bridge the cultural gap? And we have a great group to tell us about their own experience and what they've seen work in this space. So Carly, who is a design director for AI Infra at Meta, and has really a kind of diverse background. You've done a lot of things: ethnography, product management, applied AI, really a lot of interesting things that we're going to learn about. Tobe, who is the Product Growth Science and Analytics Lead at YouTube. And I've just learned, before we got into the room, that she also has her own company and is doing a lot of interesting things on the side. And we have Vic, who is the product lead for the machine learning team at Coinbase, really the ML expert there, and it was fascinating to hear from you yesterday that you're actually working, almost like a consultant, with many different teams at Coinbase on machine learning applications: recommender systems, risk, personalization, all the things. Anything you'd like to add, maybe for the audience to get to know you a little bit better? Did I cover it? Great. Awesome. Well, on the topic of machine learning and design teams working together, I thought we could maybe start with our panelists' own journeys into the space. Obviously, you know, some people have more or less straight paths into machine learning. But yeah, how did you all get into it? Maybe Carly, can you tell us? How did you get into machine learning and learn about the field?

02:35 Carly 

Yeah. Hi, everyone. So my love affair with AI, I think, started when I did an ethnographic study around the world. I got to meet with around 400 digital natives and understand from them what they expected out of technology. I wasn't expecting at that moment to then go on a 10-year journey of learning everything I could about AI. But through that ethnography I found that they really had a deep connection, and were almost expecting AI to have empathy. And at the time, I was actually building data lakes and looking at these datasets that were very unstructured and inconsistent and incomplete, and I was thinking to myself, how are we going to meet these needs? And then I was also looking around the table, and it was mostly data scientists and engineers. At the time I was doing product management, and I was like, all right, this is a space for design, for research, for product managers; how do we all come together and think about building AI based on what the end user experience would need to be? So in that moment I sort of made a plan, and it was threefold. One, pick a space where I could go really deep. I know that the field is vast; it wouldn't make sense to try to learn everything. And for me, that space was time series, going from zero to one building an ensemble model that we ended up patenting. Great learning experience. The second thing was, I wasn't going to go back to school. I think that you can learn a lot when you get to solve real-world problems. So I just surrounded myself with incredibly bright people who all had a different background and perspective from my own. And then the third part, and this one's a bit personal, was to have a lot of self-compassion. I knew that it was going to be very difficult, and I was going to have to get comfortable with being incredibly uncomfortable. I think AI is esoteric.
It's, you know, a field where people like yourself go to get PhDs and spend their whole lives working on a very specific problem space. And so I needed to humble myself on this journey. And now, 10 years later, I have a couple of data products behind me and I'm working on some really complex problems that matter, which hopefully we can talk about today. But that was my journey.

04:49 Claire 

Very inspiring. How about you, Tobe? 

04:51 Tobe 

Hi, everyone. I am Tobe, and I think my journey started maybe after I completed my Bachelor's, because I have a bachelor's in Computer Science and a Master's in Computer and Information Systems. So I pretty much worked in various roles and domains in tech, and I decided to pivot into data science and analytics after my master's program in 2015. I had always worked on the infrastructure and networking side at the early stage of my career, but I saw the importance of data during my master's program, when I was exposed to data modeling, data warehouses, and all of that stuff. So I started as a data analyst in the e-commerce industry, where I was doing descriptive and diagnostic analysis. But over time I pivoted into the financial sector, and also worked in three different healthcare organizations, before I got the job at Google, where I work on the YouTube team. Throughout my career across all these industries, one thing was common: I was always analyzing marketing and product data. I have always led the analytics around tracking the effectiveness of marketing campaigns, working with industries that spent anywhere from $5,000 on marketing to millions of dollars, and being that person who was relied on to actually build models and predict the effectiveness of marketing campaigns. So over time I started building models, mostly for marketing, centered around regression models and classification models, as well as clustering for marketing segmentation and customer segmentation. One thing that's different at Google is that the way Google does AI is completely different from everywhere else I have worked. They have a lot of in-house tools at Google; they have something called Vertex AI, whereby you can automate all these models and build those things.
And on the side, just like she said, I'm also the founder of a data tech company, which is a data analytics consulting and AI company, where I also promote data literacy, especially among underrepresented communities. So that was how I got into AI.

07:18 Claire 

Another very, very interesting path. All right, Vic, how about you? 

07:23 Vic 

Yeah, so I definitely didn't start my career thinking I was going to get involved in this space. I started my career as a mechanical engineer, designing buildings for new construction, renovations, things like that. I wanted to do something a little more fast-paced, so I joined a startup that was adjacent to where I was working in mechanical engineering and energy. And it was there that I worked a lot with data scientists, and we were doing a lot of time series forecasting, trying to predict, you know, how buildings are operating, where they might reduce energy consumption, things like that, with the proliferation of smart meters, which are now pretty ubiquitous, but didn't always used to be. And so that was fun, and really something I wanted to stay in. I thought, even though I wasn't a trained data scientist or, you know, AI practitioner, it was something I was interested in and could get better at. And so, you know, I had a short stint at a bank, where we were looking at who we might target for loan offers. I worked at WeWork, on the growth team there, trying to grow our market share within the enterprise segment; I worked on things like lead scoring and predicting what spaces people might be interested in based on their needs, things like that. And then I joined Coinbase, about two and a half years ago. I am the product lead on the machine learning team there, where we build models for very specific use cases related to risk, personalization, search, things like that. And we also build an in-house ML platform. So it's not just building the models, but how do you actually train them, serve them, make sure that you have data processing, all that other stuff.
And a lot of what I really enjoy about this particular role is that it's not just about building models for offline analysis; it's more about how do you actually build a production-ready model that can be served in real time at low latency, and then maybe lead to other cascading business impact, whether that's reducing fraud, increasing revenue, things like that. So it's very close to, ultimately, what the company cares about. And I would say one thing that I really enjoy is that there are so many different places where machine learning applies. So if you're a very curious person like myself, with a lot of industries that you might be interested in, when people ask, "Oh, where would you want to work?", I don't always have a great answer to that question, but I feel good knowing that my skill set in data can translate across different industries. And so that's good. But yeah, I never thought I would be working in this.

10:08 Claire 

It sounds like data was a thread for a lot of us. Obviously, AI and ML is nothing without the data. But definitely very, very interesting. So you can see all the different paths, and learning along the way, right, which I think is going to be a theme for us. Do any of you have maybe a horror story to share when you're thinking about, you know, the gap between designers on one side and machine learning teams on the other? Have you seen examples of things going wrong, or antipatterns you've noticed? Any horror stories that come to mind?

10:42 Tobe 

I can share. Yeah, so I can share a scenario, or a story, given that my role sits at the intersection of data science, product management, and growth marketing, right? And over the years (I used to be a web dev, so I was building product myself) I've always noticed there is this gap, or this bridge needed, between the designers and the engineers. In this context, the engineers would be the machine learning engineers. I think one thing that strikes me across all the industries I have worked in is always, I would say, a communication issue. The engineers don't understand how to translate engineering specifications into business language, and business stakeholders don't know how to translate in the other direction. So in this context, the designers understand everything they want to design, like the mockup or the prototype and all, but it's always hard in terms of codifying it. We're not saying the designers need to understand how to write code, but understanding the basics will always help get them far in terms of understanding what the end product will look like from the engineering standpoint.

12:09 Carly 

I might want to talk about a framework for what creates horror stories, if that's helpful, because I think there are a lot of examples of when models go wrong, right? When you expect them to perform a certain way and they don't: maybe it was overfit, maybe it wasn't trained on the right data. But think about the people involved in crafting that. Traditionally in product, we all know you look at the intersection of what's desirable, what's feasible, what's viable, and that's where you get your requirements. And then when you introduce a model into that product, things can kind of go anywhere, right? You're introducing a new agent of change. And so one of the frameworks that I'd like people to think about is wrapping responsibility as another concentric circle into that, and for product teams to come together and ask: Is this even something that we should be introducing? What's the application here? What problem are we solving? Do we actually have adequate data to train the model to the performance we need it to be at? When someone makes a decision off its output, is it going to inform them to take a better action? Is it a multiplier? Are we removing human agency? I think that without that responsibility circle, we can end up in these horror stories of, you know, "Well, the business problem is this, and yes, we see a market need for it, and we can do it, we can physically do it." But you have to bring together all of these different disciplines, and I think maybe this is your question about where design goes wrong. As designers, I think we'll naturally think about that experience and, through that training, the connection across those different dimensions. And PMs are often thinking about, all right, what is the business context?
A data scientist asks, what is the data telling me? And then an engineer wants to, you know, build something. And so you really need to pull all of these disciplines together, anchor them on that problem, and then have this kind of framework so that you avoid the horror stories. But, you know, I know people in horror stories that didn't mean to get there, because of unintended consequences: incomplete datasets, or we set the wrong reward function and the person who set the reward function didn't realize what the implication was going to be, and so on. So that's how I think about it, more generally.

14:31 Claire 

That's a very interesting framework. Maybe a question back to you, and then, Vic, we can hear what you're thinking about that. Today at Meta, do you implement that framework? Who in the organization is implementing it: the product manager, design? How do you ensure that the cross-functional group is working in the way you described?

14:54 Carly 

Yeah, so we do, and it's something from when I joined: collaboration is actually one of the key values I have for my team, along with adaptability and curiosity. I feel like if you don't have those traits, it's really hard for teams to, you know, look at problems and flip them around. I also felt that we have a lot of tensions that we have to trade off. We're not just serving one customer; we're obviously an internal team, but we have customers that are building content understanding models, ranking and recommendation, natural language processing, No Language Left Behind. All of these different use cases have different needs; they require different tools. So how do you take all of these different desires? And then viability: I'm sure you all watch our earnings reports; there are some specific things that we need to do that are very clear. And then feasibility: the field is changing all the time. Do you use a declarative or an imperative framework? What are you using for pipelines? All of these trade-offs. And then the responsibility factor, right? We don't want to be reactive to regulations. We have to reduce the anarchy in development, we have to look at privacy and governance, and that has to be integrated; it cannot be an afterthought where you're just like, oh yeah, I'm going to do my data and feature prep and authoring, I'm going to deploy this model, and everything's great. These things have to come together. And so I think in any type of development process, whether you're building the tools to produce these models or you're taking applied AI and bringing it into a product, stretching the practice to include responsibility is table stakes at this point. I mean, I think the industry has matured enough to change that three-circle Venn diagram into four.

16:44 Claire 

As a product lead, does that resonate with you?

16:48 Vic 

Yeah, it definitely does. You know, I think, on that last point, part of being a good product manager is trying to anticipate what sort of obstacles might be out there. And so certainly on the responsibility side, people are going to ask you how the model works, what happens in this case, just general things like that, and you need to be able to answer not just in a quantitative, ML-friendly way, but in a way that's going to resonate with your audience. Another thing I've come to find out is that there are not really a lot of hard-and-fast rules, where legal and regulatory teams, compliance teams, and the like will say, it's written, this is what you have to do. It's more about trying to negotiate, trying to explain why something should be permissible, and then working pragmatically with them. But certainly, along the theme of responsibility, you have to be kind of empathetic to what sort of arguments you might hear, or perspectives that exist, and try to just work with people on that.

17:57 Claire 

Great, thank you. So whose responsibility is it?

17:58 Carly 

Everybody's. You know, it's interesting, because I know this talk is about these different disciplines, but I really think you've got to put the disciplines almost aside and ask: what's the problem, and how do we come together and bring our strengths to solve that problem in the best way? And, you know, design has just changed so much over the last 15 to 20 years. I think if you were to ask every person in this room, "Tell me what a designer is and what do they do?", one person might describe an industrial designer, another person might define a graphic designer; you have service designers, you have product designers. And it actually might be interesting to talk about how, with the evolution of design as a practice, that has started to converge and call for new collaboration models with product managers, with data scientists, with engineers, and how hybrid the talent coming out of the market is. You know, I have a designer who's also a comp sci major, and she programs her own stuff. So it's very much an evolution.

19:20 Claire 

Great. Well, since you mention the market, I have a question for you on hiring. When you're shaping these teams with different functions, whether you're hiring designers, machine learning engineers, or data scientists, how has it changed over the years as you're building and working on more and more ML products? Do you think about that collaboration factor when you're hiring people? Is it something you look for? How do you approach hiring in the context of having all of these functions work together?

19:54 Tobe 

I think for me, having been in the position where I have hired designers as well as data scientists, it has changed over time, especially in the tech industry. At this point, even if you're just a graphic designer, or just designing marketing creatives, they expect you to be kind of data-driven, because they believe everything starts and ends with data, especially when the industry is more on the digital product side of things. So starting from the UX designers, the web designers, the graphic designers, I think at this point everyone is, in one way or another, trying to learn about data, even if it's just the basics, not necessarily coding or building the models themselves. They realize this will actually help them stand out and gain a competitive edge over other applicants.

20:57 Carly 

I think that data fluency piece, though, is not lost on design. The actual core way of coming at a problem is often bottoms-up, where you go and pull thick data, right? It's qualitative in nature, and it's smaller, whereas data scientists are working with big data. We both have a deep appreciation for finding sentiment in that sensemaking process, and for where those patterns are. So I think there's that commonality. I also think designers and engineers are both highly iterative in their process. It's not about landing on the right thing right away; you have hypotheses, and then you build through them. When I've approached hiring, it's very much around these kinds of active characteristics that I hope a candidate has. I mentioned curiosity as a value; that's critical, right? Because it's about how they're going to approach the problem and how they're going to continue to grow. A second thing is grit. I want to see stories of how they were in a really deep problem and were able to get out of it, because there needs to be resilience in this space as you work through it. And I think the third one is maybe a little softer, and that's humility, because we don't all have the answers. Every day there is something to learn, and you have to humble yourself to that, and to the fact that we can be a part of it, right? We came out of an AI winter, there's a lot of momentum, there are interesting things happening at OpenAI.
And when you look at your candidates, it's not just about assessing their product thinking, or their, you know, linear regression skills, or what model or paper they wrote; it's sometimes about these characteristics beyond the craft. And that's how you can create these super teams that run really fast and that are collaborative versus competitive with one another.

22:51 Vic 

Yeah, big plus one on curiosity, and on grit and resilience. I would say another thing that I look at, in particular with ML engineers, is the ability to understand how a model will ultimately be used and work within a production system. What I've often seen is that it's one thing to build a model for, again, offline use, maybe for research purposes, where it has kind of one finding. But if a product is meant to use a decision from a model in some way, how do you actually integrate it within the whole holistic experience? That will often translate into system design to an extent, and so you have to have some ability to understand the ramifications and implications of model design decisions on a broader system design, and then try to get out ahead of that, anticipate it, think through it, and things like that. So that's one challenge I've seen that even machine learning engineers who are strong on paper have, on occasion, when they're actually in the thick of things.

24:05 Claire 

Once we've hired these great people, do you have any recommendations, or have you seen ways of organizing teams that work more or less well for collaboration? I'm actually thinking about your current role at Coinbase, where you are the machine learning product lead and you have your own team, but you are also consulting with different product teams across the company. And so your role is by definition a bridge, right, with people maybe less knowledgeable about machine learning. Do you have any perspective on what types of roles you need for the sauce to really work well, or ways of organizing teams that you would recommend to people?

24:52 Vic 

Yeah, so I would say communication is obviously very important, but when you're collaborating with other teams and you're representing your team, the PM can't be in every single room and be the only person who fields requests or questions around how to use ML. Sometimes it's an engineer who's taking a question from somebody else. And so I would just say the ability to empathize and be impact-driven matters: think about what impact you're ultimately having, as opposed to the idea that the ML itself is what's important. The other thing I would say is that there are unicorns, people who can seemingly do everything, but that's not practical. And so you have to acknowledge the strengths and weaknesses that people have. There are people I know who are great at zero-to-one applications. They know that the first instance of a system that leverages ML may have very little ML in it; it may be heuristics-driven. But they know how to build the foundation and the pipes to support a more advanced ML use case later down the road, and so they can see that zero to one to two and beyond. That's a great person to have when you are starting from scratch and you're saying, well, here's kind of where I want to go, but I don't necessarily know exactly how to get there. And there are other people who are really deep ML practitioners. They may not even care that much about the system, but they really want to try this new technique they read about, right? And so these skill sets that people have can be relevant in different ways. Part of it is just understanding that and trying to work with it and find opportunities for everybody.

26:40 Carly 

I sort of feel like your question depends, though, on whether you're working on applied AI, or you're actually in the kitchen building the tools or the infrastructure to craft these models. And so the answer would depend on that. But in the absence of that, since everyone's coming from a different place, I try to break up the problem in three ways. I look at the people: what skills have I got, what levels, what chemistry exists. I look at the problem. And then I look at the domain, and I try to marry teams based on those different elements. And in some cases (I'm glad this isn't being recorded) you don't actually have the pod model, because you don't need it. And that's, I think, also about today: we have to do a lot more with less. Because, like you said, there are these unicorns; talent is not everywhere, right? You have to grow people, and you have to give people the opportunity to reach in and do product requirements definition, or do some coding, or whatever that may be. So I think it's about what the case is, and then breaking it up that way. That's how I've thought about it, at least.

28:05 Tobe 

For me, plus one to what Vic said, and what Carly said as well. Just to add to her point: some roles are individual contributor roles, and at that point you might be the only person overseeing the project from start to finish. My role is an IC role; I have TVCs who do the data pipeline stuff, but in terms of building the model itself, it's more like, hey, you talk about the business problem and all of that stuff. So one thing that has actually worked so well for us is, like you said, we all need to learn from each other, and we're all responsible at that point. It's something called project documentation. There's no project executed by the machine learning engineers, or product management, or any analytics team at Google that doesn't require project documentation. And I think that's a very great way for everybody to learn about what is going on. You always start with the background, the problem, the methodology, as well as the breakdown of the work streams. Even when I joined, it was kind of a struggle hearing all the different terms, but you have something to refer back to and learn from, and you understand where you are at every stage of the project. So I think that is also a great way to actually foster that.

29:36 Claire 

Very, very interesting. I like the question of organization. I'm just going to share one thing before I ask my next question, and then hopefully we can open up for questions from you all. For me, the moment that really shifted my career and my relationship to design was the day I reported to the Chief Design Officer at Airbnb. I was the head of data for the team, reporting to the Chief Design Officer, and I was like, wow. It opened my mind: this is what you're thinking about, these are the questions you're asking yourself, and this is how we approach the business. And it really made a huge difference for my entire team to get that empathy, that opening of the box: this is how you think about problems. So I love it when you say marrying people; getting that perspective definitely made a difference. All right. You're all on your own journey, your own learning experience through machine learning. And you know, we hire people, we make them work together, but as leaders and people leaders, how do you think about your people's journey into machine learning, their learning? How do you upskill people into the domain? What have you seen work well, or not well? I know you are teaching a class on design and AI, so I'm very interested to hear more about that. And I also know you mentioned that you're teaching daily in your work, in some ways, right? So what do you teach people? And do you have any recommendations for people, you know, classes, and maybe other resources that you found really helpful to bring people to the next level of ML?

31:25 Carly 

I'll jump in. There are a couple of approaches I take. One is reverse mentoring, because I think everybody has something to offer; it's about looking at your talent pool, finding out what that is, and making the right connections. The second is always having a learning program. That's always my expectation with the teams I manage, and I like when that program is developed from the internal group, not just bringing people in from the outside and dropping them in. When you empower a person to share, it helps them internalize more of what they know and what they're doing. It builds better culture, it builds connections, so it's complementary. The third part is helping people break it down. If you just say, "I want you to be an expert in AI and ML," that's really scary. Instead, pick an area, go really deep, and try to do it yourself; the best way to learn is through doing. You're also talking to somebody who went to Columbia, here in New York; that's what brought me here. I always said I'd go back to school one day, but once I saw how many learning resources are out there at your fingertips, I decided I'm never going back to school. I learn through experience, and I encourage others to do that. But I give them structure. I don't just say "figure it out"; I find out what their passion is. I have one designer who really wants to be the data SME, so I said, all right: let's look at data and feature prep as a process in development, let's look at these tools, let's look at just these model types, let's talk about data labeling, and we break it down. That gives them confidence.
Then they start to show up with the ability to demonstrate value. So I don't have any designers or researchers on my team saying, "Hey, I want a seat at the table." Every single one of them is at the seat, they have respect, and they're working on the strategy. It's really great. So I guess my approach is much more to find it organically than to get it programmatically, but make your organic process programmatic, if that makes sense.

33:44 Tobe 

All right. For me, the first thing is structured learning; it's very important when you start. We've all heard the buzzwords, and since the pandemic the rate at which people are pivoting into data science, machine learning, and AI has really skyrocketed. But some people are still lost in the journey because they're just hopping from one free training to the next, and at the end of the day they never complete a program, because they don't have a roadmap or a career trajectory for how they want to evolve over time. What I always say is: AI is broad, so decide which aspect or subcategory you want to focus on, be it machine learning, computer vision, robotics, or deep learning, and then decide on a specific domain as well. In my case, I've always specialized in the marketing and product domains. There are types of models that general data scientists build, and types that data scientists focused on marketing build, things like MMM (marketing mix modeling), causal impact, and causal inference, to measure the effectiveness of marketing campaigns and drive ROI. If you pick a specific domain, you'll learn better and become an expert in it, rather than staying a generalist; you can be a data scientist, and when I say "MMM" you might not even know what I'm talking about, because that's peculiar to marketing teams. And I've always had the mission of promoting data literacy. I'm super excited and proud, because since I started my company in 2020, I've helped more than 200 people of color break into the data science and analytics field successfully, all of them with zero technical background and zero tech experience.
So I would say I have a strength in breaking things down for non-technical people.

35:52 Vik 

Yeah, I like the "make the organic programmatic" approach. When you don't really have a playbook for something, you're applying different patterns and doing the best you can, but over time you see what works and what doesn't, and then you try to create a process around it. We do the same thing. I'm not as hands-on with the ML engineers as the engineering manager is, so they have a far more intimate understanding of how to really level people up. But from what I've seen: you've got some people who are early in their career, fresh out of school, very bright. For those folks, put them in an area where they're in a position to succeed, and give them ownership of some piece. It can be very small, but that sense of ownership gives them motivation and recognition over time to keep going. Then you've got people who are more senior, who may have been focusing on a very specific area and want to do something different. That's a little more challenging, because they're so deep in one area; maybe they don't like where you've put them, and you just have to figure out where they can thrive outside their area. And if they want to go back to what they were doing before, that's okay too. So I don't have a great answer for that; I would just say, do the best you can. I will say, though, as a product manager I'm always coming back to understanding how a model fits in with the business you're in. 
And having an open mind about where we might have opportunities to apply ML. So once you can do that, think outside the box, and be a little creative. I'm always encouraging engineers to do that, because as a product manager, again, I'm talking about the problems, the people we have, and the strategy of the company, and I'm always relying on others to help generate ideas and do things like that. 


38:01 Claire 

So, sorry, I just got the sign that we're out of time. But we'll be outside for a few minutes if you have questions for the panelists. Thank you so much. 

