The Ever-Present Dangers of AI By Brian Simpson
We have been covering the dangers of AI, something the good men who do nothing have, naturally enough, been doing nothing about. But this is a hard one to act on, because technology is harder to oppose than finance; it covers us like the atmosphere. Yet it can be deadly:
“The power of AI” is “too dangerous” to be held by “any one entity, any one government, any one company,” declared Dr. Robert Epstein, senior research psychologist at the American Institute for Behavioral Research and Technology, during a Thursday interview on SiriusXM’s Breitbart News Tonight with hosts Rebecca Mansour and Joel Pollak.

Mansour noted the unavoidable integration of programmers’ and developers’ biases into their algorithms, highlighting a Monday-published Financial Times column addressing the phenomenon of values embedded within programming code: Computer algorithms encoded with human values will increasingly determine the jobs we land, the romantic matches we make, the bank loans we receive and the people we kill, intentionally with military drones or accidentally with self-driving cars. How we embed those human values into code will be one of the most important forces shaping our century. Yet no one has agreed what those values should be. Still more unnerving is that this debate now risks becoming entangled in geo-technological rivalry between the US and China.

The fusion of political biases and financial interests with Internet search algorithms — and with AI — via technology companies and governments is a far-reaching matter, explained Epstein. Centralization of power related to internet search — and more broadly, the dissemination of information — is dangerous, cautioned Epstein.”
The good professor is not the only one concerned about Google, apart from us, who are concerned about almost everything. AI expert Meredith Whittaker has published an article expressing the same concern about the centralisation of power that has come with Google, and its dangers:
“In April, two of the organizers of the Google Walkout, Meredith Whittaker and Claire Stapleton, came forward with the stories of the retaliation they’ve faced as a result of speaking out at the company. Claire left Google in June—yesterday was Meredith’s last day. Here’s the note she shared internally: July 10th was my 13-year Google anniversary, and today is my last day. My experience at Google shaped who I am and the path I’m on. It’s hard to overstate how grateful I am for the teachers, mentors, and friends along the way, or how surreal this moment is. I still can’t imagine my badge not working. The reasons I’m leaving aren’t a mystery. I’m committed to the AI Now Institute, to my AI ethics work, and to organizing for an accountable tech industry — and it’s clear Google isn’t a place where I can continue this work. This has been hard to accept, since this work urgently needs doing. Google is one of the most powerful organizations on the planet; I’ve had the privilege to see it grow from a few thousand committed people to the behemoth it is today. The company has emerged as a global leader in AI (the result of some combination of strategy, luck, timing, and massive centralized data and compute resources). This has helped propel Google’s entry into “new markets” — healthcare, fossil fuels, city development and governance, transportation, and beyond.
The result is that Google, in the conventional pursuit of quarterly earnings, is gaining significant and largely unchecked power to impact our world (including in profoundly dangerous ways, such as accelerating the extraction of fossil fuels and the deployment of surveillance technology). I’m certain many in leadership — who learned what Google was and why it was great over a decade ago — don’t truly understand the direction in which Google is growing. Nor are they incentivized to. How this vast power is used — who benefits and who bears the risk — is one of the most urgent social and political (and yes, technical) questions of our time. And we have a lot of work to do. The AI field is overwhelmingly white and male, and as the Walkout highlighted, there are systems in place that are keeping it that way. This, while marginalized populations bear most of the risks of biased or harmful AI. The AI industry and the tools it creates are already widening inequality, enriching the powerful and disadvantaging those who are struggling. Addressing these problems, and making sure AI is just, accountable, and safe, will require serious structural change to how technology is developed and how tech corporations are run. Ethical principles and in-house ethical reviews are a positive step, but we need a lot more.
I’ve had an amazing time here. I climbed my way from an entry level role at Google in 2006 to an established position as a researcher and public voice on AI issues. I marshalled and presented evidence in the service of more accountable technology. I’m proud of what I did, and grateful to work with amazing colleagues. I have tried hard to offer evidence and pathways for positive structural change, but over time I realized that my presence “at the table” was more about the appearance of an inclusive debate, rather than seriously contending with the problems in the company. In the meantime, the issues of AI, bias and inequity grew more urgent, and I became increasingly worried. Part of my response was to co-found the AI Now Institute at NYU with Kate Crawford, establishing a home for rigorous research that could examine the social implications of AI, and communicate this to the public. This has been an unqualified success, and we’ve already had extraordinary impact across research and policy.
The other part was to begin organizing: history shows that centralized power rarely concedes without collective action. What began as an experiment — can we apply labor organizing to address tech’s ethical crisis? — became one of the most difficult and gratifying efforts I’ve ever been involved in. Organized tech workers — you! — have emerged as a force capable of making real change, pushing for public accountability, oversight, and meaningful equity. And this right when the world needs it most. Leaving Google is deeply emotional for me, and I don’t know all of the ways I’ll miss it. I’m lucky because I get to continue my work at AI Now. And I’d be much sadder if I didn’t see many hundreds of Googlers establishing themselves as leaders, contributing their brilliance to organizing, and refusing to stand silent in the face of leadership’s dangerous complicity. Please, keep going! The stakes are extremely high. The use of AI for social control and oppression is already emerging, even in the face of developers’ best of intentions. We have a short window in which to act, to build in real guardrails for these systems, before AI is built into our infrastructure and it’s too late. I offer my unwavering support and love to those of you who continue to do amazing work here, and who have taken risks to support others. In solidarity with all of you who will continue this essential work within Google, I’ll close by offering an incomplete map of where I see future tech organizing moving.
• Unionize — in a way that works
There are good unions and there are awful unions, but building structural power that will allow Google workers to hold leadership accountable is something worth doing. And generally, this is called a union. This doesn’t mean letting an outside union “organize” Google and dictate worker concerns (this would be a bad model, in my view). In many places it’s quite possible to DIY a union. It does mean continuing to build strong relationships with each other, and doing this in a way that recognizes both prior art and the significant, specific concerns plaguing the tech industry — including its outsized influence on all other sectors. And it means continuing to place equity concerns at the center of organizing, and including TVCs at the helm of decision-making — the company (and “the future of work”) is moving in a direction where soon everyone but upper management will be a TVC. In considering which structure best accomplishes these goals, I would advocate boldness, remembering that the labor protections we have were won through organizing and collective action, not the other way around.
• Protect conscientious objectors and whistleblowers
We’ve seen too many reports of retaliation and punishment against those who speak up about unethical projects and toxic workplace conditions. This serves to prevent necessary change and to make accountability impossible. Google needs worker-led structures that can ensure it’s safe to speak about the darker side of the company. These should include protections for whistleblowers who alert the public to dangerous or unethical projects that put them at risk. The public deserves to know how, and where, powerful technical systems are shaping their lives and opportunities.
• Demand to know what you’re working on, and how it’s used
Too often, those designing and developing technical systems don’t know how they’ll be used, or by whom (see: Maven, Dragonfly, etc). The right to know what you’re working on, and how it’s applied, should be recognized as fundamental. And to uphold this right, Google’s infrastructures and processes need to adapt, providing a “chain of title” from design through to application. This is also a structural requirement for meaningful accountability and compliance. Such a demand should be at the core of ethical organizing, and could be extended to ensure that the public is aware of where specific technologies that impact their lives and communities are being applied, and by whom.
• Build solidarity with those beyond the company
The application of Google’s tech goes well beyond the relatively homogeneous Google campuses (“billions of users or none,” I’ve heard many an exec opine). As such, people living in contexts well outside of Google are often in the best position to speak to the true impacts of Google’s tech — whether it be the click-workers training data for AI models, or the communities most impacted by YouTube’s engagement-driven algorithm. Holding Google accountable and ensuring a safe workplace will require that tech worker organizers form strong alliances with independent researchers, journalists, and communities on the front lines. This has the added benefit of building more powerful organizing structures.”
This is an optimist’s view, that change will come from within Google, but I doubt it, since people like her will simply be eliminated and replaced by more obedient migrant IT workers; she has already gone. Thus, the system rolls on, regardless. Little will really happen to produce lasting change unless, as always, the great inert mass of sheeple stop eating, or rather smoking, grass for a moment and do something. At least those still having a pulse!