Humans are ill-prepared for how much artificial intelligence will reshape our lives, says Toby Walsh, an internationally respected professor of AI at the University of NSW. But among the dire warnings, there is some good news.
By Greg Callaghan
“The Big Tech companies building artificial intelligence have immense wealth, which gives them immense power. And they aren’t governed in the same way as old-fashioned corporations.” Credit: iStock
Every time we ask Siri a question on our iPhones, we’re using artificial intelligence, of course, but in what other ways will it infiltrate our lives?
AI already goes beyond just Siri or Alexa. Every time you get directions from Google Maps, it’s AI that’s working out the shortest path from A to B. When you get a film recommendation on Netflix, it’s AI that knows about people’s preferences and a little too much about you. More than three-quarters of the movies watched on Netflix are those the algorithms choose for us. And we’re spending more and more time locked away in digital and virtual realities. If [Facebook founder] Mark Zuckerberg is right, we’ll all be enjoying the metaverse very soon. And there’s a distinct possibility we’ll find these artificial realities more attractive than the real physical world. That’s something to worry about right now.
Are the applications of AI virtually limitless?
It’s hard to imagine a part of our lives that won’t be touched by AI. Hal Varian, chief economist at Google, has a good way to predict the future: simply look at what rich people have today. Rich people have chauffeurs. And in the future, AI will give us autonomous cars that drive us everywhere. Rich people have personal bankers that manage their money. And soon, we’ll all have robo-bankers that manage our more modest assets.
Of course, there are also ways AI will infiltrate our lives that won’t be so good. AI might be used in our workplace to decide who gets promoted, in our courts to decide sentencing. We need to make some important choices about putting limits on AI.
That’s supposing we’re offered those choices.
Yes, it’s disturbing to think how authoritarian governments can misuse AI. Even in democracies, we’ve witnessed the abuse of AI by Cambridge Analytica [in 2016, the British data firm, working with Donald Trump’s election team, harvested 87 million Facebook profiles to predict and influence voting decisions]. According to The Economist’s Democracy Index, the number of democracies has declined in the past decade, and the global average score fell to its lowest level since the index began in 2006.
You spearheaded a petition signed by thousands of AI experts across the globe about the dangers of autonomous AI in warfare. But it’s not humanoid robots you’re losing sleep over.
When we talk about killer robots, it’s not Terminator but much simpler technologies like the semi-autonomous drones being used in Libya, Syria and, most recently, Ukraine. Humans are increasingly being removed from the decision-making in such drones, replaced by computers. This will take us to a dystopian future that will look like a bad episode of [the sci-fi series] Black Mirror. Like other technologies such as chemical weapons or blinding lasers, we must regulate against AI-driven autonomous weapons. I’ve spoken about the risks of killer robots at the United Nations half a dozen times. And thousands of my colleagues, other experts in AI, share these concerns. As do 26 Nobel laureates, the UN Secretary-General, the European Union Parliament, 61 per cent of the public in 26 countries, and civil society organisations like the International Committee of the Red Cross and Amnesty International.
Toby Walsh says George Orwell got one thing wrong in his book 1984: “It’s not Big Brother, people watching people; it’s computers watching people.” Credit: Grant Turner/UNSW
What alarms many of us is that the two authoritarian superpowers, China and Russia, already use AI to control their populations through surveillance and misinformation.
The use of AI by authoritarian countries like China and Russia worries me, too. George Orwell got one thing wrong in his book 1984. It’s not Big Brother, people watching people. That has limits. No, it’s computers watching people. You can surveil a whole nation in real time. And we see certain countries starting to go down that road. Look at the terrible things being done to the Uyghurs [in north-west China], enabled in part by AI technologies such as facial recognition.
You moved to Australia from the UK in 2005. How does Australia rate globally in its research into AI?
Australia punches above its weight in AI. We’re easily in the top 10, and by some measures even top five, in the world. This was one of the reasons I moved here, along with the better weather. Having said that, the federal government has been investing much less than other nations in AI. Whoever wins the next election needs to invest more to keep us ahead. Under the most recent governments, we saw a hostility to universities, and despite all the talk about investment in the technology sector, this hasn’t materialised. We are squandering our lead and we can’t afford to.
If we slip behind, will this affect our national security?
Most definitely. AI will change how we fight war: autonomous submarines are being developed by both the Chinese and the Russians. And China is increasingly using AI to develop its industrial base even further … economic strength, after all, leads to military strength.
You write that AI cannot match the intelligence of a two-year-old, but it can do narrow-focused tasks that have profound consequences. Can you give any examples?
I could tell you about AI that diagnoses COVID-19. Or about the AI that I worked on to route trucks more efficiently, saving a major Australian company from emitting thousands of tonnes of carbon dioxide. But let me give you instead some of the more unusual applications that have come out recently. How about the AI invented here in Australia by the music production company Uncanny Valley that composed a Eurovision song that won the very first AI song contest? Or the AI-designed craft beer on sale here in Australia courtesy of the folks at the Australian Institute for Machine Learning?
You’ve described how AI can augment human intelligence: make us better chess players, composers, medical specialists. Isn’t this just old-fashioned cheating?
We’ve always used tools to augment what we can do. We weren’t the fastest animal. Or the strongest. But we were the smartest, so we built tools to make us the fastest and strongest. And we’re now building tools to make us even smarter.
So far, AI is a collection of single-purpose, different technologies. Will we see the day when it can create a super-intelligent conscious being?
There are no laws of physics that we know about that would prevent us replicating human intelligence in silicon. And machines would have many advantages over human frailty. Computers will be faster, will have more memory and will never forget. They can look at data sets beyond human imagination, identifying patterns that our puny brains would never spot. Think what we could do with even more intelligence on the planet. Cure cancer? Generate limitless clean energy with nuclear fusion? Solve world hunger and poverty? Unlock the mystery of where all the missing socks go?
Walsh explains what we can do to prevent the harmful consequences of mutant algorithms in “10 Minute Genius | Mutant Algorithms” on UNSW’s YouTube channel. Credit: Yaya Stempler
You’ve said the AI revolution is being spearheaded by companies that don’t adhere to the usual corporate rules. How so?
The Big Tech companies building AI have immense wealth. That gives them immense power. And they aren’t governed in the same way as old-fashioned corporations. Take Snap Inc., the company behind Snapchat. When it listed on the New York Stock Exchange, it sold its shares to the public with no voting rights at all. Despite this, the IPO raised $5 billion, and the stock closed its first day up 44 per cent. What was the US Securities and Exchange Commission thinking? How did we go from executives of publicly listed companies being accountable to the shareholders, to them being accountable to no one but themselves?
The big bogeyman is the threat to privacy …
Our privacy has already been eroded over time, thanks to technologies like CCTV, and we haven’t noticed. If we aren’t careful, AI will wipe out our last bit of privacy.
Facial-recognition technology, powered by AI, has charged ahead in leaps and bounds. One study even claimed that algorithms could predict a person’s sexuality from one image of their face.
The claims that software could tell someone’s sexuality from a single image have been shown to be bogus. And it’s not just bogus, but dangerous. There are countries where homosexuality is still illegal, and even a few where it is punishable by death. We continue to struggle to get facial recognition software to recognise people of colour, to recognise women, and most especially to recognise women of colour.
On the flip side, you’ve described how facial-recognition software was used to reunite more than 10,000 children in Indian orphanages with their parents. Tell us some other good-news AI stories.
First up is Halicin, a new type of antibiotic recently discovered using AI. It’s named after HAL, the AI computer in 2001: A Space Odyssey. With drug-resistant bacteria on the rise, we desperately need to discover new antibiotics like this. Second is DeepMind’s AlphaFold software, which can accurately predict the shape of proteins. This promises to unlock the secrets of life. It was arguably one of the most important stories last year, not just in AI, but in the whole of science.
Will AI result in significant job losses?
Jobs are a concern. But the conversation here should also be about the jobs that technology creates, and about the skills people need for those new jobs. In the Industrial Revolution, we made some significant structural changes to society to support the disruption industrialisation brought to the workplace. We introduced unions, labour laws, a welfare state, pensions and other reforms to support people and ensure the benefits were fairly distributed. We need to think in similar, bold ways about this AI future.
Does AI have the potential to make us better human beings?
I hope so. I’m 57. We’re the first generation in history to be leaving our planet in a poorer condition than we found it. AI can remind us of what makes us special as humans. I’m a glass-half-empty guy for the short-term of humanity; a glass-half-full kind of guy for the longer term.
Toby Walsh’s Machines Behaving Badly: The Morality of AI (Black Inc., $33) is out next week.
To read more from Good Weekend magazine, visit our page at The Sydney Morning Herald, The Age and Brisbane Times.
Copyright © 2022