Google Now Scrubs Its Former "Do No Harm" Stance, If It Was Ever Real, By Brian Simpson
Back in 2018, Google established self-monitoring ethical principles committing it not to engage in work that harmed people, or could harm them, or in surveillance. If that principle was ever upheld at all, which the censorship of the Covid period suggests it was not, it is gone now. The pro-Russian site RT.com reports: "The original guidelines explicitly stated that Google would not design or deploy AI for use in weapons or technologies that cause or directly facilitate injury to people, or for surveillance that violates internationally accepted norms.
The latest version of Google's AI principles, however, has scrubbed these points. Instead, Google DeepMind CEO Demis Hassabis and senior executive for technology and society James Manyika have published a new list of the tech giant's "core tenets" regarding the use of AI. These include a focus on innovation and collaboration and a statement that "democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights."
Margaret Mitchell, who had previously co-led Google's ethical AI team, told Bloomberg the removal of the 'harm' clause may suggest that the company will now work on "deploying technology directly that can kill people."
According to The Washington Post, the tech giant has collaborated with the Israeli military since the early weeks of the Gaza war, competing with Amazon to provide artificial intelligence services. Shortly after the October 2023 Hamas attack on Israel, Google's cloud division worked to grant the Israel Defense Forces access to AI tools, despite the company's public assertions of limiting involvement to civilian government ministries, the paper reported last month, citing internal company documents."
Google's reversal of its policy comes amid continued concerns over the dangers AI poses to humanity. Geoffrey Hinton, a pioneering figure in AI and recipient of the 2024 Nobel Prize in Physics, warned late last year that the technology could lead to human extinction within the next three decades, a likelihood he puts as high as 20 percent.
Hinton has warned that AI systems could eventually surpass human intelligence, escape human control and potentially cause catastrophic harm to humanity. He has urged significant resources be allocated towards AI safety and ethical use of the technology and that proactive measures be developed."
It is predictable that Google would go the way of all the other IT/AI Big Tech firms; it is simply programmed into them!
https://www.rt.com/news/612242-google-removes-ai-weapon-ban/