Conversations around responsible artificial intelligence (AI) are heating up as the ethical implications of its use are increasingly felt in our daily lives and society. With AI influencing life-changing decisions around mortgage loans, healthcare, parole and more, an ethical approach to AI development isn’t just a nice-to-have – it’s a requirement.
In theory, companies want to produce AI that’s inclusive, responsible and ethical – both in service of their customers and to maintain their brand reputation; in practice, they often struggle with the specifics.
Creating AI that’s inclusive requires a shift in mindset throughout the development process, and it means weighing every crucial decision along the way. At a minimum, it demands a revamp of strategies around data, the AI model (the programme encoding the rules, numbers and any other algorithm-specific data structures required to make predictions for a specific task) and beyond.
It’s the responsibility of the people who build AI solutions to ensure that their AI is inclusive and provides a net-positive benefit to society. To accomplish this, there are several essential steps to take during the AI life cycle:
1. Data: At the data stage, organizations collect, clean, annotate and validate data for their machine learning models. This phase of the AI life cycle offers the greatest opportunity to incorporate an inclusive approach, as the data serves as the foundation of the model. Here is a key factor to consider:
Without representative data, you can’t hope to create an inclusive product. Spend the bulk of your project time making sure the data is right, or partner with an external data provider who can ensure the data represents the group your model is built to serve.
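One way to make this concrete is an automated representativeness check run before training. The sketch below is hypothetical (the group labels, population shares and tolerance are illustrative, not from the article): it compares each group’s share of the dataset against its share of the target population and flags any group that falls short.

```python
from collections import Counter

def representation_gaps(samples, population_shares, tolerance=0.05):
    """Compare each group's share of the dataset against its share of the
    target population; return groups under-represented by more than `tolerance`."""
    counts = Counter(samples)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if expected - observed > tolerance:
            gaps[group] = {"expected": expected, "observed": round(observed, 3)}
    return gaps

# Hypothetical example: speaker accents in a speech-recognition dataset
samples = ["US"] * 700 + ["UK"] * 200 + ["IN"] * 80 + ["NG"] * 20
population_shares = {"US": 0.40, "UK": 0.20, "IN": 0.25, "NG": 0.15}
print(representation_gaps(samples, population_shares))
# → {'IN': {'expected': 0.25, 'observed': 0.08}, 'NG': {'expected': 0.15, 'observed': 0.02}}
```

A check like this turns “make sure you’ve got the data right” into a concrete gate that can block training until the flagged groups are re-collected or re-sampled.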
2. Model: While perhaps less weighty than the data element, the model-building stage still offers critical opportunities to incorporate inclusive practices.
Strategizing and delivering on the right objectives (for instance, a KPI that measures bias) will take you a long way toward building a responsible end product.
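A KPI that measures bias can take many forms; one common choice (my assumption here, not a metric the article specifies) is the demographic parity difference, the largest gap in positive-outcome rates across groups. A minimal sketch:

```python
def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates across groups.
    0.0 means every group receives positive outcomes at the same rate."""
    by_group = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    positive_rates = {g: sum(p) / len(p) for g, p in by_group.items()}
    return max(positive_rates.values()) - min(positive_rates.values())

# Hypothetical loan-approval predictions (1 = approved) for two groups
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"parity gap: {demographic_parity_difference(preds, groups):.2f}")
# → parity gap: 0.50  (group A approved 75% of the time, group B 25%)
```

Tracked alongside accuracy, a metric like this makes bias a measurable objective the team can set targets against, rather than an abstract aspiration.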
3. Post-deployment: Some teams feel their work is mostly done after they deploy their model, but the opposite is true: this is only the beginning of the model’s life cycle. Models need significant maintenance and retraining to sustain their performance, and this can’t be an afterthought: letting performance dip could have serious ethical implications in certain use cases. Incorporate the following best practices as part of your post-deployment infrastructure:
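The monitoring described above can be sketched as a small service that tracks live accuracy over a sliding window of labelled predictions and raises a retraining flag when performance dips below a threshold. The window size and threshold here are illustrative assumptions, not values from the article:

```python
from collections import deque

class PerformanceMonitor:
    """Track accuracy over a sliding window of labelled predictions and
    flag the model for retraining when accuracy drops below a threshold."""

    def __init__(self, window=500, threshold=0.90):
        self.window = deque(maxlen=window)  # stores True/False per prediction
        self.threshold = threshold

    def record(self, prediction, actual):
        self.window.append(prediction == actual)

    @property
    def accuracy(self):
        return sum(self.window) / len(self.window) if self.window else 1.0

    def needs_retraining(self):
        # Only alert once the window is full, to avoid noisy early readings
        return len(self.window) == self.window.maxlen and self.accuracy < self.threshold

monitor = PerformanceMonitor(window=4, threshold=0.75)
for pred, actual in [(1, 1), (0, 0), (1, 0), (1, 0)]:
    monitor.record(pred, actual)
print(monitor.accuracy, monitor.needs_retraining())  # → 0.5 True
```

In practice the alert would feed into the team’s retraining pipeline, making performance maintenance a standing process rather than an afterthought.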
The above isn’t an exact blueprint, but offers a starting point for transitioning inclusive AI from a theoretical discussion to an action plan for your organization. If you approach AI creation with an inclusive lens, you’ll ideally find many additional steps to take throughout the development life cycle. It’s a mission-critical endeavour: for AI to work well, it needs to work well for everyone.
Mark Brayan , Chief Executive Officer, Appen
The views expressed in this article are those of the author alone and not the World Economic Forum.
© 2021 World Economic Forum