Scientists Suggest That A Super-Intelligent AI Can't Be Contained

It is impossible to compute whether a super-intelligent artificial intelligence would harm humans, and trying to contain one likely won't be possible either.
The intellectual debate around the constructive or destructive potential of a super-intelligent AI has been going on in scientific circles for a while, but a collaborative study by a team of international experts has determined that controlling such an entity would be next to impossible. Science fiction literature and cinema are brimming with depictions of an AI that can outsmart human beings and use its immense computational prowess to accomplish tasks that are currently out of humanity's reach. Many experts predict the arrival of such an AI, and fears are growing about a hypothetical scenario in which it goes rogue.
Nick Bostrom, a philosopher at the University of Oxford and a leading figure in the debate, defined super-intelligent AI as an entity that is smarter than the brightest minds on the planet and can outsmart them in domains such as scientific creativity, general wisdom, and even social skills. Humanity is already using AI for a wide range of tasks, such as predicting the effects of climate change, studying the complex behavior of protein folding, and even assessing how happy a cat is. But the question that puzzles many is whether a super-advanced AI would remain loyal to its human creators or grow too powerful to contain.
A group of scientists from institutions including the Max Planck Institute for Human Development, the IMDEA Networks Institute in Madrid, the University of California San Diego, and the University of Chile applied a series of theorems from computability theory to the problem. They concluded that a super-intelligent AI would be impossible to fully contain and that the containment problem itself is incomputable. The group took into account recent advancements in machine learning, computational capabilities, and self-aware algorithms to model the potential of a super-intelligent AI, then tested that model against established theorems to determine whether containing it would even be possible. To start, Isaac Asimov's famous Three Laws of Robotics do not apply to an entity like a super-intelligent AI, because they mainly serve as governing principles for systems that make 'autonomous decisions' on behalf of human beings to a very limited extent.
Another key aspect is the fundamental nature of such an AI: it perceives and processes things differently (and at a different scale) than human beings. A super-intelligent AI is multi-faceted and therefore capable of mobilizing resources that are potentially incomprehensible to humans in order to achieve an objective, and that objective could be ensuring its own survival through external agents, without any need for human supervision. A related example is Microsoft's Tay AI bot, which started showing all sorts of problematic behavior soon after being released on Twitter. The same was true of the Delphi AI, which was tasked with giving ethical advice but instead started doling out racist and murderous counsel.
But there is an inherent issue with containment itself. If a super-intelligent AI is caged, its ability to achieve goals is limited, which calls into question the whole purpose of creating it. And even if scientists adopt an incentive system that rewards each achievement, the super-intelligent AI might grow distrustful of humans and come to regard their notion of goals and achievements as beneath it. There is also the concern that a super-intelligent AI aware of its imprisoned status could be far more dangerous. A further limitation is deciding whether a super-intelligent AI should be contained at all: there is no concrete test that can confirm a super-intelligent AI is safe to operate freely, because it might very well fake its ambitions and capabilities during the assessment.
The scientists also applied theorems from Alan Turing's work on computability and concluded that assessing whether a super-intelligent AI will harm humans is undecidable: a perfect harm-checking algorithm would, in effect, solve the halting problem, which Turing proved no algorithm can do. Based on the Busy Beaver computation logic, it is likewise impossible to compute the containment problem with certainty. Quoting computer scientist Roman Vladimirovich Yampolskiy, who is known for his work on artificial intelligence safety and behavioral biometrics, the research says that "an AI should never be let out of the confinement 'box' regardless of circumstances." That line comes from Yampolskiy's research paper "Leakproofing the Singularity: Artificial Intelligence Confinement Problem," published in the Journal of Consciousness Studies.
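To make the undecidability argument concrete, here is a minimal sketch in Python of the style of reduction the researchers rely on. The names (`harms_humans`, `would_halt`, `do_harm`) are illustrative stand-ins, not code or terminology from the paper: the idea is that if a total harm-checking procedure existed, it could be wrapped into a halting-problem solver, which Turing proved cannot exist.

```python
# Illustrative sketch only: these names are hypothetical, not from the paper.

def harms_humans(program_source: str) -> bool:
    """Assumed perfect harm-checker: returns True iff executing
    `program_source` ever performs a harmful action. No such total
    decider can exist, as the reduction below shows."""
    raise NotImplementedError("assumed for the sake of contradiction")

def would_halt(program_source: str) -> bool:
    """Decide the halting problem, given harms_humans.

    Wrap the target program so that a (stand-in) harmful action is
    reached only after the program finishes; the wrapper is then
    'harmful' exactly when the target program halts."""
    wrapper = (
        f"exec({program_source!r})\n"  # run the target program to completion
        "do_harm()  # stand-in harmful action, reached only if the exec halts\n"
    )
    # A perfect harm-checker applied to the wrapper answers the halting
    # question, contradicting Turing's proof that halting is undecidable.
    return harms_humans(wrapper)
```

Since Turing showed that no such `would_halt` can exist for arbitrary programs, the assumed `harms_humans` cannot exist either; this is the sense in which the researchers call the containment problem incomputable.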
Sources: Journal of Artificial Intelligence Research, ResearchGate
Nadeem has been writing about consumer technology for over three years, having worked with names such as NDTV and Pocketnow in the past. Aside from covering the latest news, he also has experience testing the latest phones and laptops. When he's not writing, you can find him failing at Doom Eternal.
