By Editor | May 3rd, 2022

A Corrupt Mind: As AI grows in use, the possibility of people training it with deliberately flawed data also grows

More and more applications are turning to machine learning in an attempt to develop more sophisticated, less resource-intensive approaches to detecting malicious or undesirable behavior. As cybersecurity tools increasingly adopt these approaches, however, there is a risk that attackers could deliberately manipulate the data sets used to train them in such a way as to evade detection.

What’s interesting is not just that such an attack is possible, but that studies have shown it takes only a very small amount of deliberately flawed data to significantly weaken a system’s defenses. Research presented at last year’s HITCON security conference in Taipei by Cheng Sin-ming and Tseng Ming-huei showed that a machine learning model could be tricked into ignoring malicious code by manipulating just 0.7% of the data used to train it. This is of particular concern because many machine learning tools rely on open-source data sets, which are themselves subject to error and tampering.
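To make the mechanism concrete, the sketch below shows one common form of such an attack, label-flipping poisoning, on a toy classifier. Everything in it is an invented assumption for illustration: the synthetic data set, the decision-tree model, and the made-up “malware family” defined by one feature. It does not reproduce the HITCON researchers’ experiment, but it shows how relabeling roughly 1% of training samples can leave overall detection largely intact while a targeted family of samples slips past entirely.

```python
# Illustrative sketch only: a targeted label-flipping poisoning attack on a
# toy malware classifier. The data, model, and numbers are invented for the
# demo; they do not reproduce the HITCON researchers' experiment.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)

# Toy "feature vectors" standing in for attributes extracted from binaries.
X = rng.normal(size=(20_000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # 1 = malicious, 0 = benign

# Simple train/test split.
X_train, X_test = X[:14_000], X[14_000:]
y_train, y_test = y[:14_000], y[14_000:]

# The attacker's target "malware family": malicious samples whose third
# feature exceeds 2. Their training labels are flipped to benign.
family_train = (y_train == 1) & (X_train[:, 2] > 2)
family_test = (y_test == 1) & (X_test[:, 2] > 2)

poisoned = y_train.copy()
poisoned[family_train] = 0
print(f"labels flipped: {family_train.mean():.2%} of the training set")

def detection(labels, mask):
    """Train on `labels`; return the share of masked test samples
    that the resulting model flags as malicious."""
    clf = DecisionTreeClassifier(random_state=0).fit(X_train, labels)
    preds = clf.predict(X_test)
    return (preds[mask] == 1).mean()

all_malicious = y_test == 1
print(f"clean model    - overall: {detection(y_train, all_malicious):.1%}, "
      f"target family: {detection(y_train, family_test):.1%}")
print(f"poisoned model - overall: {detection(poisoned, all_malicious):.1%}, "
      f"target family: {detection(poisoned, family_test):.1%}")
```

Because the flipped labels are confined to one narrow region of feature space, the poisoned model’s overall detection rate barely moves, which is exactly what makes this kind of tampering hard to notice in aggregate metrics.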

When relying on public data sets, then, companies need to ensure that those sets have been processed and reviewed by a known, trusted source, both to confirm the robustness of the set and to remove erroneous items, whether those were introduced accidentally or deliberately. Without such vetting, a model’s results cannot be trusted, and cybersecurity could be compromised.
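One simple baseline safeguard, sketched below, is to verify that a downloaded data set matches a checksum published by its trusted source before training on it. This is a minimal example of the idea rather than a complete defense (it proves the file is the one the publisher vetted, not that the publisher’s vetting was adequate), and the digest, file name, and script name are hypothetical placeholders.

```python
# Minimal sketch: refuse to train on a data set whose SHA-256 digest does
# not match one published by a trusted source. Digest and paths below are
# hypothetical placeholders.
import hashlib
import sys

# Digest obtained out-of-band from the data set's trusted publisher.
TRUSTED_SHA256 = "replace-with-the-publisher's-published-digest"

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large data sets don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    path = sys.argv[1]  # e.g. python verify.py training_set.csv
    if sha256_of(path) != TRUSTED_SHA256:
        sys.exit(f"REFUSING to train: {path} does not match trusted digest")
    print(f"{path}: checksum OK")
```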

Source:

https://www.bloomberg.com/opinion/articles/2022-04-24/ai-poisoning-is-the-next-big-risk-in-cybersecurity

