Every country must decide its own definition of acceptable AI use – ZDNet

Citizens and governments will need to determine what they deem to be acceptable uses of artificial intelligence, including whether the use of facial recognition technology in public spaces should be outlawed or accepted, says Telenor Research’s AI and analytics head.
Every country including Singapore will need to decide what it deems to be acceptable uses of artificial intelligence (AI), including whether the use of facial recognition technology in public spaces should be accepted or outlawed. Discussions should seek to balance market opportunities with ensuring the ethical use of AI, so that such guidelines are usable and easily adopted. 
Above all, governments should seek to drive public debate and gather feedback so AI regulations remain relevant for their local population, said Ieva Martinkenaite, head of analytics and AI for Telenor Research. The Norwegian telecommunications company applies AI and machine learning models to deliver more personalised customer experiences and targeted sales campaigns, achieve better operational efficiencies, and optimise its network resources. 
For instance, the technology helps identify customer usage patterns in different locations and this data is tapped to reduce or power off antennas where usage is low. This not only helps lower energy consumption and, hence, power bills, but also enhances environmental sustainability, Martinkenaite said in an interview with ZDNet.
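As a rough illustration of that approach, the logic of flagging low-usage sites for power savings could be sketched as follows. This is a hypothetical example, not Telenor's actual system; the site names, traffic figures, threshold, and function are invented for illustration:

```python
# Hypothetical sketch: flag cell sites whose average traffic falls
# below a threshold as candidates for powering down antennas.

def low_usage_sites(hourly_traffic, threshold=10.0):
    """hourly_traffic maps a site ID to a list of hourly traffic
    readings (e.g. in GB). Returns the IDs of sites whose average
    traffic is below `threshold`."""
    return [
        site
        for site, readings in hourly_traffic.items()
        if readings and sum(readings) / len(readings) < threshold
    ]

traffic = {
    "cell-A": [2.1, 1.8, 0.9, 1.2],      # quiet site, average 1.5
    "cell-B": [55.0, 62.3, 48.7, 51.9],  # busy site, average ~54.5
}
print(low_usage_sites(traffic))  # ['cell-A']
```

A production system would of course weigh far more signals (time of day, coverage overlap, service-level commitments) before powering anything down; the sketch only shows the basic thresholding idea.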
The Telenor executive also chairs the AI task force at GSMA-European Telecommunications Network Operators’ Association, which drafts AI regulation for the industry in Europe, transitioning ethics guidelines into legal requirements. She also provides input on the Norwegian government’s position on proposed EU regulatory acts.
Asked what lessons she could offer Singapore, which last October released guidelines on the development of AI ethics, Martinkenaite pointed to the need for regulators to be practical and understand the business impact of legislation.
Frameworks on AI ethics and governance might look good on paper, but there should also be efforts to ensure these were usable in practice, she said. This underscored the need for constant dialogue, feedback, and continuous improvement, so any regulations remained relevant.
For one, such guidelines should be provided alongside AI strategies, including the types of business and operating models the country should pursue, and should highlight the industries that could best benefit from its deployment. 
In this aspect, she said the EU and Singapore had identified strategic industries in which they believed the use of data and AI could scale. These sectors should also be globally competitive and attract the country's largest investments.  
Singapore in 2019 unveiled its national AI strategy to identify and allocate resources to key focus areas, as well as pave the way for the country, by 2030, to be a leader in developing and deploying “scalable, impactful AI solutions” in key verticals. These included manufacturing, finance, and government. 
In driving meaningful adoption of AI, nations should strive to look for “balance” between tapping market opportunities and ensuring ethical use of the technology.
Noting that technology was constantly evolving, she said it also was not possible for regulations to always keep up. 
In drafting the region’s AI regulations, EU legislators also grappled with several challenges, including how laws governing the ethical use of AI could be introduced without impacting the flow of talent and innovation, she explained. This proved a significant obstacle, as there were worries regulations could result in excessive red tape that companies would find difficult to comply with.
There also were concerns about increasing dependence on IT infrastructures and machine learning frameworks that were developed by a handful of internet giants, including Amazon, Google, and Microsoft as well as others in China, Martinkenaite said.
She cited unease amongst EU policymakers over how the region could maintain its sovereignty and independence amidst this emerging landscape. Specifically, discussions revolved around the need to build key enabling AI technologies within the region, spanning data, compute power, storage, and machine learning architectures. 
With this focus on building greater AI technology independence, it then was critical for EU governments to create incentives and drive investments locally in the ecosystem, she noted. 
Rising concerns about the responsible use of AI also were driving much of the discussions in the region, as they were in other regions such as Asia, she added. 
While there still was uncertainty over what the best principles were, she stressed the need for nations to participate in discussions and efforts to establish ethical principles for AI. This would bring the global industry together to agree on what these principles should be and to adhere to them. 
United Nations human rights chief Michelle Bachelet recently called for the use of AI to be outlawed where it breached international human rights law. She underscored the urgency of assessing and addressing the risks AI could pose to human rights, noting that stricter legislation on its use should be implemented where it posed higher risks. 
Bachelet said: “AI can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights.”
The UN report urged governments to take stronger action to keep algorithms under control. Specifically, it recommended a moratorium on the use of biometric technologies, including facial recognition, in public spaces, at least until authorities could demonstrate there were no significant issues with accuracy or discriminatory impacts. 
These AI systems, which increasingly were used to identify people in real-time and from a distance and potentially enabled unlimited tracking of individuals, also should comply with privacy and data protection standards, the report noted. It added that more human rights guidance on the use of biometrics was “urgently needed”. 
Martinkenaite noted that governments and regulators worldwide would have to determine what ethical use of AI meant in their country, and how to track data to ensure its application was non-discriminatory. 
This would be pertinent especially in discussions on the risk AI could introduce in certain areas, such as facial recognition. While any technology in itself was not necessarily bad, its use could be deemed to be so, she said. 
AI, for instance, could be used to benefit society in detecting criminals or preventing accidents and crimes. There were, however, challenges in such usage amidst evidence of discriminatory results, including against certain races, economic classes, and gender. These could pose high security or political risks. 
Martinkenaite noted that every country and government then needed to decide the acceptable and preferred ways for AI to be applied to its citizens. These included questions on whether the use of AI-powered biometric recognition technology on videos and images of people’s faces for law enforcement purposes should be accepted or outlawed. 
She pointed to ongoing debate in the EU, for example, on whether the use of AI-powered facial recognition technology in public places should be completely banned or allowed only with exceptions, such as in preventing or fighting crime. 
The opinions of citizens also should be weighed on such issues, she said, adding that there were no right or wrong answers here. These were simply decisions countries would have to make for themselves, including multi-ethnic nations such as Singapore. 
“It’s a dialogue every country needs to have,” she said.
Martinkenaite noted, though, that until veracity issues related to the analysis of varying skin colours and facial features were properly resolved, such AI technology should not be deployed without human intervention, proper governance, and quality assurance in place.
She urged continual investment in machine learning research and skillsets, so the technology could improve and become more robust. 
She noted that adopting an ethical AI strategy also could present opportunities for businesses, since consumers would want to purchase products and services that were safe and secure, and from organisations that took adequate care of their personal data. 
Companies that understood such needs, and invested in the talent and resources necessary to build a sustainable AI environment, would differentiate themselves in the market, she added. 
A FICO report released in May revealed that nearly 70% of 100 AI-focused leaders in the financial services industry could not explain how specific AI model decisions or predictions were made. Some 78% said they were “poorly equipped to ensure the ethical implications of using new AI systems”, while only 35% said their organisation made efforts to use AI in a transparent and accountable way. 
Almost 80% said they faced difficulty getting their fellow senior executives to consider or prioritise ethical AI usage practices, and 65% said their organisation had “ineffective” processes in place to ensure AI projects complied with any regulations.  