Ethical AI needs to thrive in SecOps: 3 key guidelines

Security operations centers (SOCs) increasingly rely on network data flows as they collect telemetry from devices and monitor user behaviors. To make these massive data flows manageable, SOCs turn to rules, machine learning, and artificial (or augmented) intelligence to triage, de-duplicate, and add context to alerts about potentially dangerous or malicious activity.
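For a sense of what that triage layer does, here is a minimal sketch in Python, with illustrative field names rather than any particular SIEM's schema, of collapsing duplicate alerts and attaching asset context before an analyst ever sees them:

```python
import hashlib
from collections import defaultdict

def dedupe_and_enrich(alerts, asset_inventory):
    """Collapse repeated alerts and attach asset context; field names are illustrative."""
    buckets = defaultdict(list)
    for alert in alerts:
        # Alerts with the same rule, source host, and destination are treated as one incident.
        key = hashlib.sha256(
            f"{alert['rule_id']}|{alert['src_host']}|{alert['dest']}".encode()
        ).hexdigest()
        buckets[key].append(alert)

    enriched = []
    for key, group in buckets.items():
        first = group[0]
        enriched.append({
            "dedupe_key": key,
            "rule_id": first["rule_id"],
            "count": len(group),                          # how many raw alerts collapsed here
            "first_seen": min(a["ts"] for a in group),
            "last_seen": max(a["ts"] for a in group),
            # Context comes from an asset inventory, not from the employee's identity.
            "asset_criticality": asset_inventory.get(first["src_host"], "unknown"),
        })
    return enriched
```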
Pushing the boundaries of what machine learning can deliver when fed massive datasets has already led to significant invasions of privacy, especially when the efforts are driven by business demands. More often than not, ethics has taken a back seat when applying machine learning and AI. Companies such as Clearview AI and Cambridge Analytica vastly overreached in their analysis of consumer data simply because they could, using that data without explicit permission and offering nothing in return.
The pushback against these abuses has fueled greater consideration of ethical issues in data collection, machine-learning models, and the pursuit of AI-augmented services. These issues are no less significant within the walled garden of an organization. Employees deserve the same consideration from the internal groups that collect data, most often security operations and human resources.
The ethical considerations of data collection, data analysis, machine-learning models, and AI analytics need to be a focus of every security operations team. With time pressures and stretched resources, security teams often pursue the simplest approach to an operations problem, but ethics should never be ignored.
Here are three key guidelines to help your company use data collection, machine learning, and AI responsibly in your security operations.
While today’s technology is far from the visions of humanlike AI put forth in popular media, the combination of vast datasets and better analytical techniques does deliver significant benefits that we did not have a decade ago. Machine-learning models and well-managed data can give security operations a significant advantage in detecting attackers.
However, the technology must be used responsibly. Companies should have strict policies in place to narrow the focus of any use of employee data for machine learning. Alerts should be based on behaviors: Did a user access systems at unusual times, from an unknown location, or in an anomalous way? But no employee should be identified until enough evidence has been uncovered to suggest either that their device or identity has been compromised or that they are taking risky actions.
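As a rough illustration of what behavior-first alerting can look like, the sketch below scores access events against a per-pseudonym baseline; the feature names, weights, and threshold are illustrative assumptions, not a standard:

```python
from datetime import datetime

def score_event(event, baseline):
    """Return an anomaly score for one access event, using behavior only."""
    score = 0.0
    hour = datetime.fromisoformat(event["timestamp"]).hour
    if hour not in baseline["usual_hours"]:                    # access at an unusual time
        score += 1.0
    if event["geo"] not in baseline["known_locations"]:        # access from an unknown location
        score += 1.5
    if event["resource"] not in baseline["usual_resources"]:   # access to an atypical system
        score += 1.0
    return score

def flag_anomalies(events, baselines, threshold=2.0):
    # Events carry only a pseudonym; identity resolution happens later, if ever.
    return [e for e in events
            if score_event(e, baselines[e["pseudonym"]]) >= threshold]
```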
Similarly, the principle of data minimization is an important one for responsible AI: Collect only the minimum amount of data required to support your use case, and no more. Data that you do not store is data that cannot be breached, stolen, or compromised.
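In practice, minimization can be enforced at ingestion time with something as simple as an allow-list of fields; the field names below are hypothetical:

```python
# Only the fields the detection use case actually needs survive ingestion.
ALLOWED_FIELDS = {"timestamp", "pseudonym", "src_host", "dest", "resource", "geo", "rule_id"}

def minimize(record: dict) -> dict:
    """Drop everything the use case does not require; unstored data cannot be breached."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
```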
Increasingly, this sort of data collection will be covered by regulations, such as the EU's recently proposed regulation for ethical AI, often described as GDPR for AI. Security operations should treat violations of such regulations as part of their threat model.
Companies need to understand the decision-making process of their AI algorithms in order to trust the resulting analysis. Do not make a model so complex that you cannot understand it. When purchasing technology from a vendor, make sure that its system can explain any alert it raises. Explainable AI is a trend that is critical to the development of ethical AI and to companies' ability to make informed decisions about whether an AI's recommendation is a good one.
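One way to keep explanations within reach is to favor models whose predictions decompose into per-feature contributions. The sketch below, assuming a simple logistic-regression detector and made-up feature names, attaches a ranked explanation to each alert:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["off_hours_logins", "new_locations", "failed_auths", "data_egress_mb"]

def explain_alert(model: LogisticRegression, x: np.ndarray) -> list[tuple[str, float]]:
    """Rank features by how much each one pushed this specific prediction toward 'alert'."""
    contributions = model.coef_[0] * x   # per-feature contribution in a linear model
    return sorted(zip(FEATURES, contributions), key=lambda kv: abs(kv[1]), reverse=True)

# Usage sketch (training data omitted):
# model = LogisticRegression().fit(X_train, y_train)
# if model.predict_proba(x.reshape(1, -1))[0, 1] > 0.9:
#     print("Alert raised because:", explain_alert(model, x))
```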
The corollary is that models that cannot be understood should either not be used at all or be used only with significant warnings and restrictions. Many systems, for example, let the user tune a sensitivity setting. Turning that dial until one or more results appear will likely produce unwarranted investigations, potentially alienating employees.
The need to understand a model's conclusions may also constrain the choice of machine-learning architecture. Deep neural networks often produce results that are hard to explain, especially when the network ingests massive datasets. If an analyst cannot understand the reason for an alert, the neural network will be difficult to trust.
Another part of understanding the model is recognizing how the training data can introduce bias and lead to poor conclusions. For example, if an insider-threat model is trained on another company's dataset in which the threats came entirely from the corporate finance department, the model may learn that bias and apply it, unfairly, to your company's finance employees. While this bias is obvious, others are far more subtle.
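A simple pre-training check can surface that kind of skew before the model learns it. The sketch below compares each department's share of "threat" labels with its share of the overall dataset; the field names are illustrative:

```python
from collections import Counter

def department_skew(records, label_key="is_threat", dept_key="department"):
    """Compare each department's share of threat labels with its share of the whole dataset."""
    all_counts = Counter(r[dept_key] for r in records)
    pos_counts = Counter(r[dept_key] for r in records if r[label_key])
    total = sum(all_counts.values())
    positives = sum(pos_counts.values()) or 1   # avoid division by zero if no positives yet
    return {
        dept: {
            "share_of_population": all_counts[dept] / total,
            "share_of_threat_labels": pos_counts.get(dept, 0) / positives,
        }
        for dept in all_counts
    }
```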
On many security teams, everyone seems to know about "that user," the one who keeps clicking on bad attachments and links. They may think that identifying the user is a good thing, but security operations can be effective without knowing users' identities. Instead, focus on behaviors, systems, and traffic flows, and re-identify the user only when there is sufficient evidence to warrant an investigation.
Companies now have systems that profile users and learn a great deal about them. Is that a good thing? The capability itself is neutral, but abusing such systems is easy and raises significant ethical issues. And while notifying employees may mitigate the legal exposure, it does not fully address the ethical concerns.
Companies should not identify individuals whose systems or credentials are flagged as possibly compromised until enough evidence has been collected to warrant uncloaking the individual's identity. Human resources should own the uncloaking process and control access to individuals' identities, to minimize abuse by security operations analysts who are not trained in the laws covering employees.
Anonymization needs to be strong enough to prevent casual abuse. Despite the known limitations of anonymization methods, you have an ethical obligation to perform due diligence and respect the data privacy of the people at the company you protect.
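A minimal sketch of how that can look in code, assuming keyed pseudonymization (HMAC) with the secret held by HR rather than the SOC, so reversing the mapping always requires the key holder and an approved investigation; the process steps are assumptions about workflow, not a product API:

```python
import hmac
import hashlib

def pseudonymize(user_id: str, hr_secret: bytes) -> str:
    """Deterministic keyed pseudonym; without the secret, the mapping cannot be reversed casually."""
    return hmac.new(hr_secret, user_id.encode(), hashlib.sha256).hexdigest()

def uncloak(pseudonym: str, directory: list[str], hr_secret: bytes, approved: bool) -> str | None:
    """Resolve a pseudonym only for an approved investigation; HR supplies the secret."""
    if not approved:
        raise PermissionError("Uncloaking requires a documented, approved investigation")
    for user_id in directory:
        if pseudonymize(user_id, hr_secret) == pseudonym:
            return user_id
    return None
```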
Just thinking about the ethics of AI is not enough. Companies need to practice due diligence and follow best practices, spell out their policies, establish a division of labor that reinforces those policies, and communicate them effectively across the organization.