Will the Pentagon adopt these five AI principles?

A military advisory committee on Oct. 31 endorsed a list of principles for the use of artificial intelligence by the Department of Defense, contributing to an ongoing discussion on the ethical use of AI and AI-enabled technology for both combat and non-combat purposes.
A Defense Innovation Board study recommends that the Department of Defense adopt five principles for the ethical use of artificial intelligence. (monsitj/Getty Images)
“We do need to provide clarity to people who will use these systems and we need to provide clarity to the public so they understand how we want the department to use AI in the world as we move forward,” said Michael McQuade, vice president for Research, Carnegie Mellon University, who sits on the Defense Innovation Board and led the discussion on AI ethics.
The Defense Innovation Board is an independent federal committee made up of members of academia and industry that gives policy advice and recommendations to DoD leadership. Recommendations made by the DIB are not automatically adopted by the Pentagon.
“When we’re all said and done, the adoption of any principles needs to be the responsibility of the secretary of the department,” said McQuade.
The report is the result of a 15-month study conducted by the board, which included collecting public commentary, holding listening sessions and facilitating roundtable discussions with AI experts. The DoD also formed a DoD Principles and Ethics Working Group to facilitate the DIB’s efforts.
Those principles were also pressure-tested in a classified environment, including a red team session, to see how they stood up against what the military perceives as the current applications of AI on the battlefield.
For the purpose of the report, AI was defined as “a variety of information processing techniques and technologies used to perform a goal-oriented task and the means to reason in the pursuit of that task,” which the DIB said is comparable to how the department has thought about AI over the last four decades.
Here are the five principles endorsed by the board:
1. Responsible. Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use and outcomes of AI systems.
2. Equitable. DoD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons.
3. Traceable. DoD’s AI engineering discipline should be sufficiently advanced such that technical experts possess an appropriate understanding of the technology, development processes and operational methods of its AI systems, including transparent and auditable methodologies, data sources, and design procedure and documentation.
4. Reliable. AI systems should have an explicit, well-defined domain of use, and the safety, security and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.
5. Governable. DoD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and for human or automatic disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.
The language of the final principle was amended at the last minute to emphasize the importance of having a way for humans to deactivate the system if it is causing unintended harm or other undesired behaviors.
According to McQuade, the AI ethics recommendations built upon other ethical standards the DoD has already adopted.
“We are not starting from an unfertile ground here,” said McQuade.
“It is very heartening to see a department … that has taken this as seriously as it has,” he continued. “It’s an opportunity to lead a global dialogue founded in the basics of who we are and how we operate as a country and as a department.”
The board recommended that the Pentagon’s Joint Artificial Intelligence Center work to formalize these principles within the DoD and that an AI Steering Committee be established to ensure that any military AI projects are held to that ethical standard.
Beyond those recommendations, the report also calls on the DoD to increase investment in AI research, training, ethics and evaluation.
AI ethics have become an increasingly hot topic within the military and the intelligence community over the past year. In June, the inspector general of the intelligence community emphasized in a report that there was not enough investment being put into AI accountability. And, at the Pentagon, the newly established JAIC has announced that it will be hiring an AI ethicist.
References
[1] https://www.c4isrnet.com/artificial-intelligence/2019/10/31/will-the-pentagon-adopt-these-five-ai-principles/
