The AI Summit New York

News

Jan 17, 2023

Reflections on Responsibility

Stratyfy Data Science Engineer Bogdan Loukanov attended the 2022 edition of The AI Summit New York, where practitioners, policy makers, and business leaders met to discuss practical implications of AI for enterprise organizations.

Here are his reflections…

Responsible AI (RAI) took center stage last month at The AI Summit New York, where it was clear that fairness, transparency, explainability, security, privacy, and governance are no longer just “nice to have” when it comes to ML and AI solutions. 

From Google and IBM to the World Bank and the United Nations, organizations from around the world highlighted their efforts as the industry evolves. Many also spoke to growing government support of RAI, pointing to the White House’s recently released Blueprint for an AI Bill of Rights and New York City’s new law regulating Automated Employment Decision Tools.

So, what does this mean for AI practitioners in industry? 

It means that both the public interest and current and forthcoming government regulation require us to incorporate RAI principles into our decisioning systems. It means that either we will choose on our own to build trust into our models from the outset, or the decision will be made for us, whether by the market or by regulators.

As we know, AI has the potential to do enormous good, as well as serious harm. It is being used to discover new medicines, to fight climate change, to make loans, and to determine hiring and school acceptance outcomes. AI affects more and more people every day, and at Stratyfy, we know that if we’re not extremely thoughtful about every stage of the AI development lifecycle, these benefits will fall narrowly to a privileged few, at the expense of others. Moreover, the more we can implement truly responsible AI, the faster we can use it to address major world challenges safely and efficiently. 

So, let’s review the tenets of building responsible AI systems, drawing upon learnings from last month’s Summit.

1) AI Must Make Fair Decisions

As we stand at this technological precipice, we have a choice. Do we want to drag the biases and discrimination of yesterday and today into tomorrow? We have an opportunity to use AI to overcome these obstacles rather than perpetuate them.

At the AI Summit, Renée Cummings, a Professor of Data Science at the University of Virginia, reminded us that we need to be “designing futures.” Will these futures be for everybody, or only for a select few? 

To start, a decision system’s fairness metrics must be chosen before its development begins. This is not simple: there are a multitude of candidate metrics, and many are mutually exclusive (for example, a model generally cannot satisfy both demographic parity and equalized odds unless the groups’ base rates are equal). Once the system is deployed into production, these metrics need to be monitored over time, so their selection and design are critical to successful monitoring. Having diverse teams working at every step of these processes is also crucial so that we don’t perpetuate the biases of the past and present.
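As an illustration (not drawn from any specific talk or from Stratyfy's systems), here is a minimal sketch of how two common group-fairness metrics might be computed for ongoing monitoring, assuming binary predictions and a binary protected attribute; the function names and toy data are our own:

```python
# Minimal sketch: two common group-fairness metrics for a binary classifier.
# Assumes binary predictions (0/1), binary labels, and a binary group attribute.

def demographic_parity_diff(preds, groups):
    """Absolute gap in positive-prediction rates between groups 0 and 1."""
    def rate(g):
        grp_preds = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(grp_preds) / len(grp_preds)
    return abs(rate(0) - rate(1))

def equal_opportunity_diff(preds, labels, groups):
    """Absolute gap in true-positive rates (recall) between groups 0 and 1."""
    def tpr(g):
        pos = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
        return sum(pos) / len(pos)
    return abs(tpr(0) - tpr(1))

# Toy monitoring batch: both metrics would be tracked over time in production.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 1, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

dp = demographic_parity_diff(preds, groups)
eo = equal_opportunity_diff(preds, labels, groups)
print(f"demographic parity gap: {dp:.2f}")
print(f"equal opportunity gap:  {eo:.2f}")
```

Even this toy example shows why metric selection matters: the two gaps measure different things, and driving one to zero can move the other.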

2) AI Must be Transparent and Explainable

Cummings underscored what happens when decisions are made for us, without us: our autonomy, agency, self-actualization, self-realization, and ultimately identity are undermined. Therefore, at Stratyfy we believe in “human-in-the-loop,” or prioritizing the roles of humans in AI systems.

Pamela Gupta of the IEEE Standards Association noted that this does not mean, however, that humans merely “push the button” to initiate a drone strike. Human input must be a critical component of any autonomous process from end to end. Google’s Aishwarya Srinivasan, for example, described the feedback loops she employs in model training and monitoring, where human feedback is requested and directed back through the AI system. 
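To make the pattern concrete, here is an illustrative sketch (not Google's or Stratyfy's actual pipeline) of one simple human-in-the-loop arrangement: low-confidence model decisions are routed to a human reviewer, and the corrected labels are logged so they can be fed back into training. The threshold and stand-in functions are assumptions for the example:

```python
# Illustrative human-in-the-loop sketch: route low-confidence predictions
# to a human reviewer and log the corrections for future retraining.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; tuned per application in practice

def model_predict(x):
    """Stand-in for a real model: returns (label, confidence)."""
    score = x / 100.0
    return (1 if score >= 0.5 else 0, max(score, 1 - score))

def human_review(x):
    """Stand-in for a human reviewer's decision."""
    return 1 if x >= 50 else 0

def decide(x, feedback_log):
    label, conf = model_predict(x)
    if conf < CONFIDENCE_THRESHOLD:
        label = human_review(x)           # the human makes the final call
        feedback_log.append((x, label))   # correction feeds future training
    return label

feedback_log = []
decisions = [decide(x, feedback_log) for x in [10, 55, 48, 95]]
print(decisions, f"({len(feedback_log)} cases sent for human review)")
```

The key design point is that the human is inside the decision path, not merely notified afterward, and that their judgments flow back into the system rather than being discarded.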

Human-in-the-loop is an effective way to ensure that we understand why a model makes the decisions that it does. If we do not understand its decisions, we cannot possibly be a part of them. And sometimes we need to ask, is a more complex model even necessary for the problem at hand? 

3) AI Must Not Impinge upon Our Privacy and Security

As we move through this technological revolution, public safety must of course remain a top priority, which is why at Stratyfy we see policy and regulation as a key component in advancing RAI. The White House’s AI Bill of Rights, for example, foreshadows increasing action to protect consumers, and this year, several private and public organizations including Stratyfy joined forces to form MoreThanFair, a coalition dedicated to improving access to affordable and inclusive credit.

No progress is worth eroding the principles of our social contract. We have already witnessed myriad large-scale privacy and security violations accompanying the new technologies of the last decade. As with fairness, we should view AI as an opportunity to improve on the status quo rather than perpetuate, or even degrade, it.

4) AI Must be Appropriately Governed

All of these principles sound great, but without proper accountability there is of course no guarantee they will be followed by everyone, all the time. An impassioned audience member at one of Cummings’s presentations drew an apt analogy: just as a driver’s license is revoked for driving under the influence, failure to adhere to the principles of RAI must carry consequences, or else the “driver” will repeat the offense. Cummings agreed.

Another avenue to promote governance, as explained by IBM’s Priya Krishnan, is for external regulatory guidelines to be integrated directly into organizations’ internal processes. In this way, accountability can be distributed throughout the entire AI development lifecycle. Governments and private organizations both share responsibility in holding everyone accountable.

It's Clear: Responsible AI is Here to Stay

If one thing was clear from last month’s Summit, it’s that transparent, responsible AI is quickly becoming the new standard, one that requires ongoing dialogue and collaboration across the ecosystem. At Stratyfy, we’ve been committed to providing responsible, transparent AI solutions since our inception, and we’re working with mission-aligned partners and customers to do so. By designing, building, and implementing AI responsibly and collaboratively, we can maximize its enormous benefits and shape a more inclusive, fairer financial system for all.

This piece was originally published on LinkedIn by Stratyfy and is republished here with their permission.
