Documentary Explores AI’s Expanding Impact and Ethical Challenges
Alongside sweeping insights into the scale of AI’s future impact, the documentary chronicles Mo Gawdat’s campaign to build empathy into AI systems.
Another day brings another warning about artificial intelligence; set against the reality we all experience, such alerts offer about as much reassurance as a plane fuselage detaching mid-flight. Beginning with familiar critiques, from widespread job displacement to the concentration of power among technology magnates, Alex Holmes and Lina Zilinskaite’s film delivers a relentless stream of AI-related anxieties across its 83-minute duration. By the time it turns to current attempts to engineer computers from human brain cells, potentially implantable within our own skulls, and suggests this could be beneficial, it becomes (ironically) challenging to fully process the breadth of these developments.
Mo Gawdat: The Cautionary Voice at the Film’s Core
The central figure, Mo Gawdat, now a traveling advocate warning the world about AI’s dangers, once led advanced projects at major tech companies. His most ambitious goal remains ahead: to embed a moral framework within a technology race increasingly resembling the frenzied climax of late capitalism. He expresses a sense of parental pride observing Google’s AI-powered robotic arms learning to grasp objects similarly to children. He believes humanity’s capacity for kindness and benevolence is precisely the training resource neural networks need to avoid ushering in disaster.
The parental perspective is deeply personal for Gawdat: he resigned from Google following the loss of his son to a botched appendix surgery. This tragedy fuels his urgent message about the all-too-human flaws of today’s AI: how it fosters a form of digital narcissism through hyper-optimized social media and pornography, enables mass surveillance and automated warfare, and evolves along an exponential growth trajectory that may soon surpass human control. The technology executives—unsurprisingly absent from the interviews—appear indifferent. The uncanny valley effect of figures like Mark Zuckerberg and Sam Altman suggests that alien superintelligence has been 3D-printing human avatars for some time.
Challenges in Defining Enlightened AI
Considering how rapidly AI has been harnessed to humanity’s basest impulses, Gawdat is notably less precise about what enlightened AI would entail. His proposal to saturate neural network training data with examples of human positivity and altruism may seem almost naively optimistic. Yet, perhaps it is not so fanciful; empathy might need to extend to digital entities that, for practical purposes, will be conscious and sentient. A prominent Bhutanese lama concurs with Gawdat that the prevailing agenda to “contain” AI and ensure it “serves” humanity perpetuates many outdated oppressive patterns. It is difficult to gauge how seriously to regard a solution reminiscent of Ghostbusters II—using positive vibes to dispel negative ectoplasm. However, times demanding blockbuster challenges require blockbuster thinking, and the interviewees provide ample material.