New guidance on Machine Learning – plenty for humans to learn too

A triumvirate of healthcare regulators has published ten guiding principles (the Principles) concerning the development of Good Machine Learning Practice (GMLP). The Principles shed light on the risks posed by artificial intelligence (AI) products that depend on machine learning. Humans should take note.
Machine learning has the potential to transform how patients are treated by deriving insights from the vast amount of data generated in healthcare settings every day. Machine learning models are designed to improve their performance by learning from these data.
The Principles, published last week by the US Food and Drug Administration (FDA), Health Canada and the UK’s Medicines and Healthcare products Regulatory Agency (MHRA), are intended to serve as the bedrock for developing GMLP and facilitate growth.
We have looked at the Principles to highlight litigation and regulatory risks that healthcare companies and their insurers should guard against.
A product could be placed on the market before "real world" considerations, grounded in doctors' day-to-day clinical experience, have been assessed. The product may function perfectly, but only in the mind of a software engineer who has no clinical experience. The Principles encourage obtaining multi-disciplinary clinical expertise throughout a product's lifecycle to ensure that the product is as safe as possible.
If a data set is drawn from too narrow a patient group, AI-assisted diagnosis could result in sub-optimal treatment for patients who fall outside that group. The Principles encourage ensuring that data adequately reflect the intended patient population's age, gender, sex, race and ethnicity.
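The Principles do not prescribe any particular tooling, but a short sketch illustrates the kind of check a development team might run before training begins. The column names and reference proportions below are hypothetical and would need to be replaced with figures for the actual intended patient population.

```python
import pandas as pd

# Hypothetical reference shares for the intended patient population
# (in practice these would come from epidemiological or registry data).
REFERENCE = {
    "sex": {"female": 0.50, "male": 0.50},
    "age_band": {"18-39": 0.30, "40-64": 0.40, "65+": 0.30},
}

def representativeness_report(df: pd.DataFrame, tolerance: float = 0.10) -> list:
    """Flag demographic groups whose share of the training data deviates
    from the intended population by more than the given tolerance."""
    warnings = []
    for column, expected in REFERENCE.items():
        observed = df[column].value_counts(normalize=True)
        for group, target in expected.items():
            share = float(observed.get(group, 0.0))
            if abs(share - target) > tolerance:
                warnings.append(
                    f"{column}={group}: {share:.0%} of training data vs {target:.0%} expected"
                )
    return warnings

# Example usage with a hypothetical training set:
# df = pd.read_csv("training_data.csv")
# for warning in representativeness_report(df):
#     print("WARNING:", warning)
```

A check of this kind does not make a data set representative by itself, but it creates an auditable record that the question was asked before the model was built.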
Recent history is littered with examples of products whose risks were not fully understood until many months or years after they had been prescribed to large numbers of patients. The Principles advocate using machine learning models that support the mitigation of risks at the outset, based on an understanding of the clinical risks and benefits.
Humans interact with technology but are fallible; they may misinterpret the results of AI analysis, or rely on AI where common sense would point to a different decision. The Principles recommend that manufacturers address how people will interpret a model's outputs when developing AI products.
Conditions in the testing phase of a product must be relevant to real-life conditions, or else the product will under-perform (or perform differently) when it is placed on the market. The Principles support making laboratory conditions as relevant as possible by predicting what the intended patient population and the clinical environment will be.
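As a purely illustrative sketch, and not a methodology the Principles mandate, a team might report the same performance metric on a curated laboratory test set and on a held-out sample assembled to approximate deployment conditions (different sites, devices and demographics). The cohort names and the choice of metric here are assumptions.

```python
from sklearn.metrics import roc_auc_score

def evaluate_on_cohorts(model, cohorts):
    """Report the same metric on each evaluation cohort so that any drop in
    performance outside laboratory conditions is visible before launch."""
    results = {}
    for name, (features, labels) in cohorts.items():
        scores = model.predict_proba(features)[:, 1]
        results[name] = roc_auc_score(labels, scores)
    return results

# Example usage with hypothetical cohorts:
# results = evaluate_on_cohorts(trained_model, {
#     "curated_lab_set": (X_lab, y_lab),
#     "deployment_like_sample": (X_field, y_field),
# })
# print(results)
```

A material gap between the two figures is exactly the kind of under-performance the Principles warn about, surfaced before the product reaches patients.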
Where a product is alleged to have caused an injury, the courts will assess the information and instructions that accompanied it. Clear warnings by a manufacturer, about a product’s intended use and limitations, can make the difference between a product being deemed "safe", as opposed to "defective." The Principles remind manufacturers to ensure that users are provided with clear information that is appropriate for the intended audience, whether providers or patients.
The full set of Principles can be found here. The MHRA states that the Principles will be used to inform areas where the International Medical Device Regulators Forum (IMDRF), international standards organisations and other collaborative bodies could work together to advance GMLP, including over setting regulatory policies. In the meantime, manufacturers, healthcare providers and their insurers can take note of the Principles in developing manufacturing practices intended to drive down the risks of AI products that are dependent on machine learning.