Five lessons to fix the failures of AI

Engineers have yet to invent the perfect artificial intelligence system for forecasting public trust. If such a tool existed though, it would issue some pretty stark warnings about AI itself. The technology is accused of failing to protect users from harmful content (as in the case of Facebook), discriminating against ethnic minority patients being treated in hospitals and against women applying for credit cards. Its critics say it risks hardwiring bias into crucial choices about the services we receive.
In recent months, those concerns have entered the mainstream. White House science advisers now propose an AI Bill of Rights, spelling out what consumers should be told when automated decision-making touches their lives.
This much is clear: the digital industry cannot ignore these concerns. They will only become more pressing and prominent. But the debate is not a simple 'AI: right or wrong?'; the technology won't go away. It's too useful and too widely used. The challenge for any firm that deploys machine learning is simple: get it right before your customers lose faith.
The good news is that we can fix these problems. Yes, it's a complicated business, but the essential lessons about what needs to be done can be spelled out simply enough.
These systems rely on two things: a machine learning model that analyses data to teach itself how to make decisions, and the raw data itself. Businesses must get both right to avoid getting important decisions horribly wrong.
Lesson one – Make sure the data you use to train your AI isn't more likely to give a negative result for one group of people than for another. Imagine you run a loans firm and use this technology to work out who is likely to default. If, by chance, your historical data happens to show a greater number of women defaulting than men, your AI could unfairly discriminate against women forevermore.
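To make that concrete, here is a minimal sketch in Python of the kind of check that catches the problem before training begins. The column names and figures are invented for illustration; substitute your own schema.

```python
import pandas as pd

# Hypothetical loan history: "gender" and "defaulted" are illustrative
# column names, and the rows are made-up examples.
df = pd.DataFrame({
    "gender":    ["F", "M", "F", "M", "F", "M", "F", "M"],
    "defaulted": [1,   0,   1,   0,   0,   0,   1,   1],
})

# Compare the historical default rate per group before any model sees it.
rates = df.groupby("gender")["defaulted"].mean()
print(rates)

# A large gap here, even one that arose by sampling accident, is what a
# model can bake in as discrimination against the higher-rate group.
print("rate gap:", abs(rates["F"] - rates["M"]))
```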
Lesson two – If you have less data for women than men, or one ethnic group than another, make sure that’s reflected in your maths. Otherwise, you’ll reach unfair decisions because parts of society are under-represented.
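One common way to reflect that imbalance in the maths, sketched below on invented data, is to weight each record inversely to how often its group appears, so the under-represented group is not drowned out during training.

```python
import numpy as np

# Hypothetical per-row group labels: group "B" is under-represented.
groups = np.array(["A", "A", "A", "A", "A", "A", "B", "B"])

# Weight each row by the inverse of its group's frequency.
values, counts = np.unique(groups, return_counts=True)
freq = dict(zip(values, counts / len(groups)))
sample_weight = np.array([1.0 / freq[g] for g in groups])
print(sample_weight)  # rows from group "B" count three times as much

# Most scikit-learn estimators accept these weights directly, e.g.:
#   model.fit(X, y, sample_weight=sample_weight)
```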
Lesson three – Once the system is running, test it. Set performance targets and watch closely to make sure your AI doesn’t begin discriminating against individual groups in society. If you are doing the job right, this should be a never-ending process.
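What that monitoring might look like in code, as a minimal sketch: compare outcomes across groups on each batch of live decisions and raise an alert when the gap breaches a target set in advance. The function name, the sample data and the 5% tolerance are all illustrative assumptions.

```python
def check_group_gap(decisions, tolerance=0.05):
    """decisions: list of (group, approved) pairs from the live system."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    rates = {g: approved[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    if gap > tolerance:
        # In production this would page someone, not just print.
        print(f"ALERT: approval-rate gap {gap:.1%} exceeds {tolerance:.0%}: {rates}")
    return rates

# Run this on every fresh batch of live decisions, indefinitely.
check_group_gap([("A", True), ("A", True), ("A", False),
                 ("B", False), ("B", False), ("B", True)])
```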
Lesson four – AI systems aren't crystal balls; they don't know what will happen in the future. They just work out how likely something is to happen. Imagine that AI-powered loans firm again. There will be few more important decisions than working out when to refuse someone credit. Is it when your AI model says there is a 50% chance of an applicant defaulting on a loan, or a 75% chance, or 90%? The experts call this the 'probability threshold', and choosing where it falls is vital.
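A toy illustration of why the choice matters, with made-up probabilities: the same three applicants are treated very differently depending on where the threshold falls.

```python
# Hypothetical model outputs: each applicant's predicted probability
# of defaulting. The names and numbers are invented.
applicants = {"alice": 0.42, "bob": 0.58, "carol": 0.91}

for threshold in (0.50, 0.75, 0.90):
    refused = [name for name, p in applicants.items() if p >= threshold]
    print(f"threshold {threshold:.0%}: refuse {refused}")

# Raising the threshold refuses fewer people but accepts more likely
# defaulters; lowering it does the reverse. Picking the point is a
# business and ethics decision, not a purely technical one.
```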
The final lesson is perhaps the hardest: you have to be able to explain the decisions that your AI model makes. Artificial intelligence systems cannot be black boxes whose working is beyond reason or challenge. Companies should be as accountable for choices made by AI as for any others.
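One route to that accountability, sketched here on synthetic data, is to prefer inherently interpretable models whose inputs and weights can be read and challenged; opaque models need extra explanation tooling to give a comparable account.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic, invented data: three features of a loan applicant.
feature_names = ["income", "existing_debt", "years_at_address"]
X = np.array([[55, 10, 6], [22, 18, 1], [40, 5, 9],
              [18, 20, 0], [60, 2, 12], [25, 15, 2]], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = defaulted

model = LogisticRegression().fit(X, y)

# Each coefficient is a stated, auditable reason: it shows which way,
# and how strongly, each input pushes the decision.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```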
Errors are inevitable – no system is perfect – but bias is not. Most of us cannot see inside the algorithms that power this technology, but we can make sure they generate fair outcomes. AI has issues, but collectively we know how to fix them. Get it right and we can ensure that fairness is woven into our decision-making technologies. Get it wrong, and both companies and consumers will pay the price.
Written by Sray Agarwal and Shashin Mishra, data scientists at Publicis Sapient and authors of 'Responsible AI: Implementing Ethical and Unbiased Algorithms'.