The AI Summit New York

News

Oct 10, 2024

Navigating the Next Frontier: AI and Robotics Policy and Regulation in the USA

These emerging technologies offer extraordinary potential, but they also raise complex ethical and legal questions that require proactive policymaking. In the U.S., and globally, lawmakers and industry leaders are grappling with how to balance innovation with safety and accountability.

At the upcoming AI Summit New York 2024, Coran Darling, Associate at DLA Piper LLP (US), will address these critical issues as part of a panel discussion titled "The Next AI Frontier: Robots, Brain Chips, and AGI." In an exclusive interview with the AI Summit New York team, Coran shared insights on the latest developments in AI and robotics policy, as well as how governance frameworks are evolving to meet the challenges posed by these transformative technologies.

What are the latest advancements in governance and policy related to AI and robotics? How are governments and international bodies addressing the complexities of regulating AI-driven robotic systems? 

Governance and policy continue to develop strongly internationally, with many governments and international bodies each playing their part. 

Unsurprisingly, recent attention has focused heavily on regulatory and policy movements in the EU, as the EU AI Act and its corresponding initiatives begin to take effect. The AI Act does not function in isolation; it relies on complementary resources, including harmonized technical standards (e.g., those under development by CEN and ISO) and codes of conduct (e.g., the code for General-Purpose AI currently under development). This is by no means the only EU development. Focus has also recently returned to initiatives designed to modernize existing policy and legislative regimes, including product liability. The developing AI Liability Directive is a prime example: it aims to bring the previous, outdated rules into a position where they can confidently meet the challenges of new technology, including AI and physical robotics.

This push toward new, effective governance and policy measures is also felt across the Atlantic. The US has made several strides toward effectively governing AI and robotics, including the development of the Blueprint for an AI Bill of Rights and the Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI. These documents have formed the foundation of many legislative initiatives and have directed the thinking of legislators and policy drafters as they seek to develop harmonized approaches to technology that encourage innovation while protecting citizens. A major advance in this respect came recently with the announcement that federal agencies had completed all 270-day actions outlined under the Executive Order, including establishing guidance and protocols for safe internal use of AI and empowering NIST to develop key resources that will form the basis for effective governance of robotics and AI at both the governmental and private-organizational level. These developments have been accompanied by several (occasionally controversial) legislative efforts at the federal and state levels that intend to ensure the US is ready for the increasing development and use of AI within its borders.

Similar advances are developing across other major regions, including Latin America, Asia, and Africa. Many countries are currently in the process of developing their own governance regimes to ensure that they do not fall behind their international peers. However, many of these countries have indicated that they intend to take a similar approach to the EU's, demonstrating a "Brussels Effect". It remains unclear whether this is the right decision, as the EU functions as a very distinct jurisdiction and the rules currently set to be implemented have not yet been tested by regulatory interpretation or enforcement. The next several months therefore promise to be an interesting period for governance and policy from a regulatory perspective.

Alongside these regulatory developments, international organizations continue to push forward our understanding of AI and robotics as they interact with society. Earlier this year, NIST launched guidance on risk management for generative AI that aligns its previous Risk Management Framework with the challenges of today's technology. MIT also recently published a live database of over 700 identified and categorized risks that can be used by organizations and governments to understand and mitigate many of the concerns that they are uncovering as AI becomes more involved in operational management. Similarly, the OECD continues to push forward many of its influential initiatives that inform the understanding of some of the biggest challenges and opportunities faced when seeking to govern AI and robotics.

The next few years are likely to be a critical juncture for the governance and policy of AI at both the national and international level. It is therefore necessary that regulators, governments, and international organizations take a calculated, well-reasoned approach that provides society with the rules necessary for safe and responsible development of technology while leaving room for innovation to flourish.

As AI continues to advance within robotics, how do you see the balance between innovation and ethical considerations being managed? What role do governance, policy, and industry standards play in ensuring that AI robotics serve the greater good? 

The balance between innovation and ethical considerations, particularly safety, remains a difficult tightrope to walk, and one that continues to shift as the technology develops.

Previous technology trends indicate that self-governance is unlikely to suffice in protecting society and individuals from harm, and that governance and policy with some degree of "bite" are likely to be the necessary motivators for responsible development and deployment. Effective, easy-to-navigate systems that set out rules for development and deployment, with clear processes for regulatory investigation and redress where harm has occurred, are key components in building societal trust in the technology.

However, governance, policy, and standards should not act as a barrier to innovation. Instead, rules should be put in place that protect individuals while also allowing organizations to clearly understand what they can and cannot do. Where possible, complementary initiatives, such as regulatory sandboxes, should also be considered, allowing organizations to navigate these rules with the confidence that they will have support from government and international organizations as they tackle new and developing challenges.

Join us at the AI Summit New York 2024

Join the conversation on the future of AI governance and regulation at the AI Summit New York 2024, where Coran Darling and other industry leaders will explore the next frontier of AI, robotics, and advanced technologies in the panel session 'Next AI Frontier – Robots, Brain Chips, AGI'. Don’t miss your chance to learn from the experts, network with industry peers, and gain valuable insights into how governance frameworks are shaping the future of AI.

Secure your spot today and be part of the discussion that will define the future of AI and robotics!
