Google has revised its AI Principles to permit the development of AI for weaponry, abandoning its earlier pledge to avoid military applications. The update, observed between late January and early February 2025, removes specific commitments, in place since 2018, to avoid applications likely to cause overall harm or to support weapons. While the company's leadership argues that democracies should guide AI's development, the removal of these commitments marks a profound shift in Google's stance on weaponized AI technology.
Google has updated its corporate rules to permit the development of artificial intelligence for weaponry, abandoning previous pledges to avoid military applications.
The changes to Google's AI Principles allow for 'responsible' development in alignment with international law, but drop the earlier explicit commitments to prevent harm.
Executives cited the pace of AI evolution as a reason for the shift, advocating that democracies should lead AI's development guided by values like freedom and equality.
The new rules erase explicit commitments to keep AI away from applications that could cause injury, a significant departure from the policy in place since 2018.