How does the EU AI Act regulate artificial intelligence? – Dataconomy

The European Union is concerned about the lack of comprehensive regulation of artificial intelligence. The EU AI Act is an important step that will shape the future of AI, particularly where personal data protection is concerned.
Artificial intelligence operates in a largely lawless environment today: critical decisions about people's lives are increasingly made by AI systems without regulation or accountability. The European Union's proposed answer is the AI Act.
This lack of oversight can lead to the imprisonment of innocent people, unfair grading of students, and even financial crises. The AI Act is the first law in the world designed to regulate the sector as a whole and avert these harms. If the EU succeeds, it could set a new global standard for AI governance.
Here’s a brief summary of everything you need to know about the EU’s AI Act. Members of the European Parliament and EU member countries are currently amending the legislation.
The AI Act is ambitious in scope. It would require additional checks on "high-risk" applications of AI, those with the greatest potential to harm people. These might include systems for grading exams, recruiting workers, or assisting judges in legal and judicial decisions. The bill's first draft also restricts uses of AI deemed "unacceptable," such as scoring people's trustworthiness based on their reputation.
The proposed legislation would also ban law enforcement agencies from using facial recognition in public spaces. A vocal group, including members of the European Parliament and countries such as Germany, wants a total prohibition on its use by both government and corporate bodies, arguing that the technology enables mass surveillance.
If the EU passes this ban, it would be one of the most stringent restrictions on facial recognition technology yet. San Francisco and Virginia have imposed limits on facial recognition, but the EU's prohibition would apply across 27 countries with a combined population of over 447 million people.
By requiring that algorithms be subject to human review and approval, the bill aims to prevent people from being harmed by AI. According to Brando Benifei, an Italian member of the European Parliament and a key figure in preparing amendments to the bill, people can trust that they will be protected from the most harmful forms of AI.
The AI Act also requires people to be notified when they encounter deepfakes, biometric recognition systems, or AI applications that claim to read emotions. Lawmakers are also debating whether the legislation should include a mechanism for individuals to file complaints and seek compensation if they have been harmed by an AI system.
One of the EU bodies working on amending the bill is also calling for a prohibition on predictive policing technologies. Predictive policing systems use artificial intelligence to evaluate massive data sets, either to proactively deploy police to high-crime areas or to forecast whether someone will commit a crime. These algorithms are highly contentious; critics allege that they are often racially biased and lack transparency.
The GDPR, the European Union's data protection regulation, is one of the EU's best-known tech exports and has been emulated everywhere from California to India. The EU's risk-based approach to AI, which focuses scrutiny on the riskiest systems, is a model other advanced economies could embrace. If Europe can figure out how to regulate the technology effectively, its approach might serve as a template for other countries.
“US companies, in their compliance with the EU AI Act, will also end up raising their standards for American consumers with regard to transparency and accountability,” explained Marc Rotenberg, head of the Center for AI and Digital Policy.
The bill is also being watched closely by the Biden administration. The US is home to some of the world’s biggest AI labs, such as those at Google AI, Meta, and OpenAI, and leads multiple different global rankings in AI research, so the White House wants to know how any regulation might apply to these companies. For now, influential US government figures such as National Security Advisor Jake Sullivan, Secretary of Commerce Gina Raimondo, and Lynne Parker, who is leading the White House’s AI effort, have welcomed Europe’s effort to regulate AI. 
“This is a sharp contrast to how the US viewed the development of GDPR, which at the time people in the US said would end the internet, eclipse the sun, and end life on the planet as we know it,” said Rotenberg.
Despite some unavoidable wariness, the United States has compelling reasons to embrace the bill. It is extremely concerned about China’s rising tech influence. According to Raimondo, for America, maintaining a Western edge in technology is still a question of “democratic values” prevailing. It wants to keep close ties with the EU, a “like-minded ally,” and prevent it from drifting away, Fedscoop reports.
Some of the bill's requirements are technically impossible to meet today. The first draft stated that data sets should be free of errors and that humans should be able to fully understand how AI systems work. But manually verifying that a large data set is completely error-free would take hundreds of hours, and today's neural networks are so complex that even their creators do not fully understand how they reach their conclusions.
Regulators and external auditors are also unsure how to assess whether tech companies have implemented the mandates needed to comply with the legislation.
“The current drafting is creating a lot of discomfort because people feel that they actually can’t comply with the regulations as currently drafted,” says Miriam Vogel, CEO of the nonprofit EqualAI. She also heads the newly formed National AI Advisory Committee, which advises the White House on AI policy.
There is also a heated debate over whether the AI Act should ban facial recognition outright. It is a contentious issue because EU member states dislike Brussels telling them how to handle national security and law enforcement.
Some governments, such as France's, are considering special rules allowing facial recognition for national security purposes. By contrast, the new German government, another major European nation with an influential voice in EU decision-making, has said it supports a total ban on facial recognition in public places.
There will also be debate over which types of AI should be labeled "high risk." The AI Act covers a variety of applications, from lie detection tests to systems for allocating welfare payments. Two political factions are at odds: one fears that the legislation's broad scope will stifle innovation, while the other argues that the bill does not go far enough to protect people from serious harm.
A frequent complaint from Silicon Valley lobbyists is that the new rules will add to the burden on AI firms. The EU disagrees: the European Commission, the bloc's executive body, argues that the AI Act would cover only the riskiest category of AI applications, which it estimates at 5 to 15% of all AI applications.
“Tech companies should be reassured that we want to give them a stable, clear, legally sound set of rules so that they can develop most of AI with very limited regulation,” explained Benifei. 
Organizations that do not comply with the AI Act face fines of up to €30 million ($31 million) or 6% of worldwide annual turnover. Europe has shown a willingness to fine tech companies in the past: Amazon was fined €746 million ($775 million) in 2021 for failing to adhere to the GDPR, and Google was fined €4.3 billion ($4.5 billion) in 2018 for violating EU antitrust regulations.
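To see how the two-part penalty cap works in practice, here is a minimal sketch. It assumes, as in GDPR-style penalty clauses, that the higher of the two figures (the €30 million flat cap or 6% of worldwide annual turnover) applies; the function name and the example turnover figure are illustrative, not from the Act.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of an AI Act fine: EUR 30 million or 6% of
    worldwide annual turnover, assuming the higher figure applies."""
    FLAT_CAP_EUR = 30_000_000
    return max(FLAT_CAP_EUR, 0.06 * global_annual_turnover_eur)

# For a hypothetical firm with EUR 2 billion in annual turnover,
# 6% of turnover (EUR 120 million) exceeds the flat cap:
print(max_fine_eur(2_000_000_000))  # 120000000.0
# For a smaller firm, the EUR 30 million flat cap dominates:
print(max_fine_eur(100_000_000))  # 30000000
```

For large firms the turnover-based term dominates, which is why percentage-of-turnover caps are seen as giving the law real teeth against the biggest companies.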
It will be at least another year before a final text is agreed, and several years more before firms must comply. Hammering out the fine points of such a comprehensive bill, with so many contentious components, may take longer than expected: the GDPR took more than four years to negotiate and six years to come into force. Anything is possible in the world of EU lawmaking.
