‘We have to worry about what human beings will choose to do with weapons that are no more intelligent than an iPhone, but which are ruthlessly competent at identifying, tracking and killing targets,’ says Martin Ford, author of Rule of the Robots
Virtually any job that is fundamentally routine or predictable, says Martin Ford, has the potential to be automated in full or in part. Photograph: iStock
Artificial intelligence (AI) is a force for good that could play a huge part in solving problems such as climate change. Left unchecked, however, it could undermine democracy, lead to massive social problems and be harnessed for chilling military or terrorist attacks.
That’s the view of Martin Ford, futurist and author of Rule of the Robots, his follow-up to Rise of the Robots, the 2015 New York Times bestseller and winner of the Financial Times/McKinsey Business Book of the Year, which focused on how AI would destroy jobs.
In the new book, Ford, a sci-fi fan, presents two broad movie-based scenarios.
The first is a world based on Star Trek values, where Earth’s problems have been solved. Technology has created material abundance, eliminated poverty, cured most disease and addressed environmental issues. The absence of traditional jobs has not led to idleness or lack of dignity as highly educated citizens pursue rewarding challenges.
The alternative dystopian future is more akin to The Matrix, where humanity is unknowingly trapped inside a simulated reality.
“The more dystopian outcome is the default, if we don’t intervene. I can see massive increases in inequality and various forms of entertainment and recreation such as video gaming, virtual reality and drugs becoming attractive to a part of the population that has been left behind,” he tells The Irish Times.
Ford’s extensive research for both books involved talking to a wide cross-section of those working on the frontiers of artificial intelligence. While the unforeseen Covid pandemic punctuated the intervening years, most of what he wrote in 2015 has been amplified, he feels. If anything, Covid has acted as an accelerant for AI and robotics, with enduring effects in areas such as remote working, social distancing and hygiene.
On the positive side, AI has led to huge medical advances, including the recent rapid development and deployment of Covid vaccines. With the pace of innovation slowing elsewhere, AI is potentially a game changer for challenges such as the climate crisis, he says.
Worries about the harmful effects of AI, however, permeate his thoughtful new volume on the subject.
Employment is one.
Virtually any job that is fundamentally routine or predictable – in other words, nearly any role where workers face similar challenges again and again – has the potential to be automated in full or in part.
Studies suggest that as much as half of the US workforce is engaged in such work and that tens of millions of jobs could evaporate in the US alone. This won’t just affect lower-skilled, low-wage workers, he warns. Predictable intellectual work is at especially high risk of automation because it can be performed by software, whereas manual labour, in contrast, requires a more expensive robot.
Ford is generally pessimistic that workers will be able to move up the value chain or move to areas less affected by the rise of AI. Some will, he acknowledges, but he wonders whether truck drivers, for example, will become robotics engineers or personal care assistants.
Moreover, many of the new opportunities being created are in the gig economy where workers typically have unpredictable hours and incomes, all of which points to rising inequality and dehumanising conditions for a large section of the workforce.
Surveillance is another issue of concern. He highlights the use of an app developed by the US firm Clearview AI.
In February 2019, the Indiana State Police were investigating a case where two men got into a fight in a park, one pulled a gun, shot the other man and fled the scene. A witness had filmed the incident on a mobile phone and the police uploaded the images to a new facial-recognition system they had been experimenting with.
It generated an immediate match. The shooter had appeared in a social media video with a description that included his name. It took just 20 minutes to solve the crime even though the suspect had not been previously arrested and did not hold a driver’s licence. When this was revealed along with other information about the firm, it ignited major data-privacy concerns.
Data privacy is one thing, but the capacity of AI to generate deepfakes takes this to another level. Ford offers a scenario in which a politician’s voice is imitated in the run-up to an election, planting comments that deliberately damage their reputation. Spread virally on social media, the damage might be hard to undo. How many people would hear the denial, or choose to believe the fake was not authentic?
A sufficiently credible deepfake could literally shape the arc of history, and the means to create such fabrications might soon be in the hands of political operatives, foreign governments or even mischievous teenagers, he says. In the age of viral videos, social media shaming and “cancel culture”, virtually anyone could be targeted and have their career and personal life destroyed.
Because of its history of racial injustice, the US may be especially vulnerable to orchestrated social and political disruption, he observes. “We’ve seen how viral videos depicting police brutality can almost instantly lead to widespread protests and social unrest. It is by no means inconceivable that, at some point in the future, a video so inflammatory that it threatens to rend the very social fabric could be synthesised – perhaps by a foreign intelligence agency.”
There are smart people working on solutions. Sensity, for example, markets software it claims can detect most deepfakes but inevitably there will be an arms race between the poachers and gamekeepers. He likens this to the race between computer virus creators and those who sell cybersecurity solutions, one in which malicious actors tend to maintain a continuous edge.
An example of the difficulties in this area was highlighted by an experiment that found that simply adding four small rectangular black and white stickers to a stop sign tricked an image recognition system of the type used in self-driving cars into believing it was instead a 45mph speed-limit sign. A human observer might not even notice and certainly wouldn’t be confused but AI’s error could have fatal consequences.
Ford paints an even more terrifying scenario of lethal autonomous weapons. Consider the possibility of hundreds of autonomous drones swarming the US Capitol building in a co-ordinated attack. Using facial recognition technology, they seek and locate specific politicians, carrying out multiple targeted assassinations. This chilling vision was sketched out in a 2017 short film called Slaughterbots, produced by a team working under the direction of Stuart Russell, professor of computer science at the University of California, Berkeley, who has focused much of his recent work on the risks of AI.
This disturbing vision is quite realistic, he believes.
“My own view is rather pessimistic. It seems to me that the competitive dynamic and lack of trust between major countries will probably make at least the development of fully autonomous weapons a near certainty. Every branch of the US military, as well as nations including Russia, China, the United Kingdom and South Korea, are actively developing drones with the ability to swarm.”
Low barriers to entry mean that even small, under-resourced groups could also gain access to this type of warfare. Commercial drones could be easily modified, he explains. “We have to worry about what human beings will choose to do with weapons that are no more intelligent than an iPhone, but which are ruthlessly competent at identifying, tracking and killing targets.”
This is a near-term rather than long-term worry, he adds, and the window to act is closing fast.
AI needs to be regulated, he maintains, not by politicians in Congress or elsewhere but by specialist authorities, in the same way that financial markets are regulated.
Ford also worries about China and devotes a large section of the book to what he views as an AI arms race between China and the West. As well as concerns about privacy for its citizens and human rights for oppressed minorities, he worries about the capacity of China to export, not alone its all-pervasive AI technology to other regions but also its world view, which is very much at odds with western values.
“It’s going to become more Orwellian. To live in China will be to have every aspect of your life tracked. Maybe it will be like boiling a frog and people will not notice or care, but we certainly don’t want that here [in the West].”
One possible silver lining for China’s citizens, he concedes, is that crime rates collapse under AI-based surveillance. That’s a trade-off that might just be worth considering.
Rule of the Robots: How artificial intelligence will transform everything, by Martin Ford, is published by Basic Books, New York.