October 4, 2017 - Anthony Aragues

Hacker Tactics - Part 3: Adversarial Machine Learning

<p><em><strong>Adversaries are constantly changing and improving how they attack us. In this six-part series we’ll explore new or advanced tactics used by threat actors to circumvent even the most cutting-edge defenses.</strong></em></p>
<p>The overwhelming trend right now is to take problems old, new, and large in scale and apply machine learning or artificial intelligence to them. It’s so ubiquitous that many consumers of machine learning results are unaware they are relying on it. This increased trust in and reliance on machine learning brings new threats and requires new thinking about how to secure it.</p>
<h3>What is adversarial machine learning? (And what’s machine learning?)</h3>
<p>Machine learning is a method of letting a system learn a complex model from data that has been labeled and curated by people. The system can then compare future, unlabeled data against that model and express how closely it fits as a score or a category. If a score or category is applied to more data than the consumer interactions or employees behind it could ever produce by hand, it is likely machine learning based. Mature machine learning systems have automation built around the labeling and collection of data to keep the models up to date and relevant. This makes a big difference in accuracy, but it is also the first area of concern.</p>
<p>Results are usually accurate when training data is hand-selected and the results are closely examined. The rest of the time, the process of tuning a model is much more automated in order to keep up. The system regularly takes labeled data from various sources to update the model, on the assumption that this data is accurate and should be used to improve it. If the data submitted is off, or intentionally wrong, the model is thrown off with it.</p>
<p>You’ve likely experienced this first hand. If someone else has logged into your Amazon or Netflix account as you, all of that account activity is falsely assumed to be yours, and your subsequent recommendations are different because of the selections they made. This is a pretty benign (if annoying) scenario, but the same concept can be applied to security and business decisions.</p>
<p>Malicious actors engage in adversarial machine learning when they deliberately manipulate the input data. Exploiting vulnerabilities of the learning algorithm in this way can compromise the security of the entire system.</p>
<p>Examples of adversarial machine learning include:</p>
<p><strong>Biometric recognition</strong><br/> Attackers may target biometric recognition, where they can then:</p>
<ul><li>Impersonate a legitimate user via fake biometric traits (biometric spoofing)</li><li>Compromise users’ template galleries that are adaptively updated over time</li></ul>
<p><strong>Computer security</strong><br/> Malicious actors can exploit machine learning in computer security by:</p>
<ul><li>Misleading signature detection</li><li>Poisoning the training set</li><li>Replacing the model itself</li></ul>
<p><strong>Spam filtering</strong><br/> Attackers may obfuscate spam messages by misspelling bad words or inserting good words.</p>
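<p>To make the spam filtering example concrete, here is a minimal sketch of a toy word-score filter and the kind of manipulation described above. The word list, weights, and threshold are invented for illustration and are not drawn from any real product.</p>
<pre><code># Toy spam filter: scores a message by summing per-word weights (hard-coded
# here; a real filter would learn them from labeled mail). All words, weights,
# and the threshold are invented purely for illustration.
SPAM_WEIGHTS = {
    "winner": 2.0, "free": 1.5, "wire": 1.5, "transfer": 1.0,
    "meeting": -1.0, "invoice": -0.5, "thanks": -1.0, "agenda": -1.0,
}
THRESHOLD = 2.0  # scores above this are flagged as spam


def spam_score(message: str) -> float:
    return sum(SPAM_WEIGHTS.get(word, 0.0) for word in message.lower().split())


original = "winner free wire transfer"
print(spam_score(original))       # 6.0 -> flagged as spam

# Adversarial versions of the same message:
# 1) misspell the "bad" words so they no longer match learned features
# 2) pad the message with "good" words to drag the total score down
misspelled = "w1nner fr3e wire transfer"
padded = original + " meeting agenda invoice thanks thanks"
print(spam_score(misspelled))     # 2.5 -> still flagged, but much closer to the threshold
print(spam_score(padded))         # 1.5 -> slips under the threshold
</code></pre>
<p>Real filters use far richer features, but the dynamic is the same: the attacker only has to mimic whatever the model has learned to treat as good.</p>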
<h3>Why is it used?</h3>
<p>We rarely question machine learning results, ask where they come from, or consider how they might change. The technology is relatively new, and defending it always lags behind adopting it. Its ability to adapt is the core reason it’s used, and also what makes it easier to exploit.</p>
<h3>How is it advanced?</h3>
<p>Adversarial machine learning is advanced largely due to the complexity of machine learning itself; malicious actors need a thorough understanding of how machine learning works.</p>
<p>No matter how confident someone may be in the accuracy of their training set, an attacker can simply replace the model directly if it is not protected. This doesn’t require anything specific to machine learning as a practice; the model is just not often listed as a critical asset.</p>
<p>Machine learning security products can also be exploited by adversaries. They are tuned to avoid false positives as much as possible, so if a model is supposed to find something bad and an attacker mimics something the model considers good, the model will treat it as good and raise no alarms. Detection evasion is therefore one of the oldest and most commonly used malicious activities.</p>
<h3>History</h3>
<p>In this now famous and simple example (<a href="https://arxiv.org/abs/1412.6572" target="_blank">https://arxiv.org/abs/1412.6572</a>), once a carefully crafted layer of noise that looks like random snow is added to the image, the model is far more confident that the picture is a gibbon than it ever was that it was a panda.</p>
<p><img alt="A panda image plus a small adversarial perturbation is classified as a gibbon with high confidence" src="https://cdn.filestackcontent.com/wInVFE9NQIWcpYNwLdM4"/></p>
<p>To keep pandas from being reclassified because of a few carefully chosen pixels, you need some sort of checks on data before it is trusted or folded into a training set. This can be difficult to get right: if you define the checks too narrowly, you limit the model’s flexibility to find unintuitive relationships.</p>
<p>There’s a more detailed exploration of these techniques here: <a href="https://blog.openai.com/adversarial-example-research/" target="_blank">https://blog.openai.com/adversarial-example-research/</a>. The same thinking needs to be translated to other contexts as well.</p>
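<p>As a rough sketch of what is happening in the panda example, the code below applies the same fast-gradient-sign idea to a tiny logistic regression trained on synthetic data. The data, model size, and epsilon are invented for illustration and are not taken from the paper.</p>
<pre><code>import numpy as np

# A rough sketch of the fast gradient sign method behind the panda example
# (https://arxiv.org/abs/1412.6572), applied to a tiny logistic regression
# instead of an image classifier. All sizes and values are invented.
rng = np.random.default_rng(0)

# Two classes that differ only slightly in each feature but clearly in
# aggregate, a bit like individual pixels each carrying a little evidence.
n, d = 200, 100
X = np.vstack([rng.normal(-0.3, 1.0, (n, d)), rng.normal(0.3, 1.0, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Train a logistic regression with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

x = np.full(d, -0.3)        # a typical class-0 input (the class-0 mean)
p_before = 1.0 / (1.0 + np.exp(-(x @ w + b)))

# FGSM step: move every feature a small amount in the direction that raises
# the loss for the true label. For this model that direction is sign(w) when
# the true label is 0. Each individual change is small, but across 100
# features the effect on the score is large and the prediction flips.
epsilon = 0.5
x_adv = x + epsilon * np.sign(w)
p_after = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))

print(f"P(class 1) before: {p_before:.3f}  after: {p_after:.3f}")
</code></pre>
<p>In an image classifier the same step is spread across thousands of pixels, which is why the added noise can be nearly invisible while still changing the label.</p>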
<h3>How do you defend against adversarial machine learning?</h3>
<p>1) Add security measures to the automated training of machine learning models</p>
<p>2) Protect access to machine learning models</p>
<p>3) Make the creation of results transparent</p>
<p>4) Notify when something falls outside the model</p>
<p>Something that is rarely mentioned is how machine learning results are presented; they are usually very opaque. Going back to Netflix as an example, when you see a recommendation that has you questioning your taste in media, you can see a brief “recommended because: … ” and then point to the family member who poisoned your training set, or recognize that you have some outliers in your taste.</p>
<p>This is rarely done in other products, especially in security solutions, yet it is a critical component in catching issues in the process. If you see an IP address and a risk score, you probably have no more information than what was used to create the score, so you either have to trust it blindly or understand how the model used that information to arrive at the score. Due to the nature of machine learning, it’s not as easy as showing an arithmetic equation. However, there are some things a machine learning system can provide that would help:</p>
<p>1. Machine learning model: this tells you the approximate technique used</p>
<p>2. Key training samples: what were the top matches?</p>
<p>3. Top factors with weight: hundreds or more data points can feed these models, but each result is driven by a handful of data points that made the biggest impact (a minimal sketch of this appears below)</p>
<p>With this information available, you could identify a number of things that need to be adjusted, or place more trust in the result.</p>
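<p>As a rough illustration of the “top factors with weight” idea, the sketch below scores an event with a simple linear model and reports the features that contributed most to that particular score. The feature names, weights, and values are invented for illustration; a real product would surface the equivalent information from whatever model it actually uses.</p>
<pre><code># Toy "risk score with explanations": a linear model whose per-feature
# contributions are reported next to the score. All names and numbers are
# invented for illustration.
WEIGHTS = {
    "failed_logins_24h": 0.40,
    "new_geolocation": 1.20,
    "tor_exit_node": 2.50,
    "account_age_days": -0.01,
    "requests_per_minute": 0.05,
}


def score_with_explanation(event: dict, top_n: int = 3):
    # Contribution of each feature = weight * observed value.
    contributions = {
        name: WEIGHTS[name] * value for name, value in event.items() if name in WEIGHTS
    }
    total = sum(contributions.values())
    top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
    return total, top


event = {
    "failed_logins_24h": 6,
    "new_geolocation": 1,
    "tor_exit_node": 0,
    "account_age_days": 400,
    "requests_per_minute": 30,
}
risk, top_factors = score_with_explanation(event)
print(f"risk score: {risk:.2f}")
for name, contribution in top_factors:
    print(f"  {name}: {contribution:+.2f}")
</code></pre>
<p>Even this crude breakdown makes it possible to spot a poisoned or mis-weighted factor, which is the same kind of visibility the Netflix “recommended because” note provides.</p>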
<p>One of the scarier realities about machine learning attacks is that they are not isolated to security products. They are everywhere and integrated into our lives. The more we trust them without being able to verify them, the more vulnerable we become.</p>
<p><em>Click here to check out the second part of this series, <a href="https://www.anomali.com/blog/hacker-tactics-part-2-supply-chain-attacks">Supply Chain Attacks</a>.</em></p>