Machine Learning News Hubb
  • Home
  • Machine Learning
  • Artificial Intelligence
  • Big Data
  • Deep Learning
  • Edge AI
  • Neural Network
  • Contact Us

Can the U.S. and China collaborate on AI safety? | by Jeremie Harris | Sep, 2022

by admin
September 8, 2022
in Artificial Intelligence


Ryan Fedasiuk on the art of the possible when it comes to China AI policy

APPLE | GOOGLE | SPOTIFY | OTHERS

Editor’s note: The TDS Podcast is hosted by Jeremie Harris, who is the co-founder of Gladstone AI. Every week, Jeremie chats with researchers and business leaders at the forefront of the field to unpack the most pressing questions around data science, machine learning, and AI.

It’s no secret that the US and China are geopolitical rivals. And it’s also no secret that that rivalry extends into AI — an area both countries consider to be strategically critical.

But in a context where potentially transformative AI capabilities are being unlocked every few weeks, many of which lend themselves to military applications with hugely destabilizing potential, you might hope that the US and China would have robust agreements in place to deal with things like runaway conflict escalation triggered by an AI-powered weapon that misfires. Even at the height of the Cold War, the US and the Soviet Union maintained robust lines of communication to de-escalate potential nuclear conflicts, so surely the US and China have something at least as good in place now… right?

Well, they don’t. To understand why — and what we should do about it — I’ll be speaking to Ryan Fedasiuk, a Research Analyst at Georgetown University’s Center for Security and Emerging Technology and Adjunct Fellow at the Center for a New American Security. Ryan recently wrote a fascinating article for Foreign Policy Magazine in which he outlines the challenges and importance of US-China collaboration on AI safety. He joined me to talk about the U.S. and China’s shared interest in building safe AI, how each side views the other, and what realistic China AI policy looks like, on this episode of the TDS Podcast.

Here are some of my favourite take-homes from the conversation:

  • Given the competitive pressures involved in the race for AI supremacy, a high degree of trust would be required between the U.S. and China in order for both sides to agree to norms and standards around safe AI development. That trust doesn’t currently exist.
  • The U.S. has taken several unilateral measures to impose AI safety and robustness standards on itself. These include things like DoD Directive 3000.09, which sets rigorous rules for the development and use of autonomous and semi-autonomous weapons. Unfortunately, China interprets these measures as intentional attempts to preempt the imposition of international laws on autonomous weapons at fora such as the U.N.
  • At the same time, China has some rules on safe AI development, but it’s unclear whether they apply to the Chinese military, or just to the private sector. In the absence of assurances as to who exactly is subject to those rules, and in light of China’s reluctance to offer clarity on this point, trust-building is difficult.
  • Ryan proposes a clear red line on autonomous control: in his view, AI-powered nuclear weapon launches should be off the table. He points out that prominent U.S. military figures like Lieutenant General Jack Shanahan have already publicly ruled out autonomous nukes, but no Chinese political or military figure has been willing to make the same assurance publicly.
  • It’s unclear who actually has the capacity to affect Chinese autonomous weapons policy. Individual researchers seem to be aware of some of the risks associated with using increasingly capable AI systems to automate more and more defence applications, but Chinese military leadership doesn’t appear to share that awareness.
  • China seems to do considerably more “safety-washing” than other countries. For example, the Chinese military will commit to not building autonomous weapons systems, but only under definitions of autonomy so narrow that they exclude any system either the U.S. or China is actually considering fielding. (Ryan cites China’s guarantee that it won’t build autonomous drones that lack the ability to be recalled — a largely meaningless but highly publicized concession that does nothing to address the kinds of systems being contemplated by either country.)
  • 0:00 Intro + disclaimer
  • 2:15 China as a core point of focus
  • 4:30 Chinese AI strategy
  • 10:00 Competition as risk
  • 17:20 Having constructive conversations
  • 22:20 Understanding China’s policies
  • 27:15 A shared interest in AI alignment
  • 32:45 Issues with regulating AI on an international level
  • 40:15 Is collaboration a good thing?
  • 44:15 Impact of the highly scaled transformer models trend
  • 47:15 Wrap-up





© 2023 Machine Learning News Hubb All rights reserved.
