Reducing the impact of AI-powered bot attacks
Bot attacks are drawing more and more headlines with tales of identity theft. The wealth of consumer data available on the dark web, harvested from breaches, social media and other sources, is sold to hackers, who compile it into online consumer profiles and use those profiles to take over accounts for money, products or services.
The question of who is real and can be trusted, and how companies should defend themselves, remains unanswered. For next-generation bot detection solutions to be effective, user behavioral analytics must be implemented with far greater precision than today.
Automated versus AI-powered bots
Most people are familiar with automated bots – chatbots and the like – software applications that interact with human users to accomplish a task (e.g. booking a hotel or answering customer service questions); some use AI, while others are simply rules-based. However, advances in deep learning and machine learning, natural language understanding, big data processing, reinforcement learning and computer vision are paving the way for AI-powered bots that are faster, better at understanding human interaction and even able to mimic human behavior.
Companies like Amazon have been investing in AI and machine learning for years, from fulfillment centers to the Alexa-powered Echo to the new Amazon Go stores. Amazon's AWS offers machine learning services and tools to developers and anyone else who uses the cloud platform. But malicious bots can now leverage the same capabilities for fraudulent purposes, making it difficult to tell the difference between bots and genuine human users.
Hackers and fraudsters are harnessing the latest tools available and constantly changing their techniques to make their attacks more effective, faster and more adaptable to safeguards. This makes it almost impossible for security teams to model attacks and attack behaviors. Mobile bot farms, where bots are deployed across thousands of devices to appear more human-like, are just one example. Malicious algorithms can be introduced into AI-powered bots with the aim of replicating real-world audiovisual signatures, impersonating true users and unlocking security systems. These bots find various ways to extract money from websites and accounts, including account takeover, where large bot collectives crack passwords and test stolen credentials (passwords, social security numbers, etc.) as quickly as possible in order to break into user accounts. Those same stolen credentials, harvested from various accounts, can also be pulled together to create entirely new, synthetic identities, opening the gates to an entirely new method of identity fraud.
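Account takeover of this kind typically surfaces as a burst of failed logins spread across many usernames from the same source. As a rough illustration (the event fields, thresholds and helper names here are hypothetical, not drawn from any particular product), a defender could flag that velocity with a simple sliding-window check:

```python
# Hypothetical sketch: flagging credential-stuffing bursts by login velocity.
# Assumes a stream of (timestamp, source_ip, username, success) login events;
# field names and thresholds are illustrative, not from any specific product.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
MAX_DISTINCT_USERNAMES = 20   # one IP probing many accounts
MAX_FAILURES = 50             # raw failure volume per window

def detect_stuffing(events):
    """Yield source IPs whose recent activity looks like credential stuffing."""
    recent = defaultdict(deque)  # source_ip -> deque of (timestamp, username, success)
    for ts, ip, user, success in events:
        q = recent[ip]
        q.append((ts, user, success))
        # Drop events that fell out of the sliding window.
        while q and ts - q[0][0] > WINDOW:
            q.popleft()
        distinct_users = {u for _, u, _ in q}
        failures = sum(1 for _, _, ok in q if not ok)
        if len(distinct_users) > MAX_DISTINCT_USERNAMES or failures > MAX_FAILURES:
            yield ip, len(distinct_users), failures

if __name__ == "__main__":
    base = datetime(2018, 1, 1, 12, 0, 0)
    # Simulated burst: one IP testing many stolen username/password pairs.
    events = [(base + timedelta(seconds=i), "203.0.113.7", f"user{i}", False)
              for i in range(60)]
    for ip, users, fails in detect_stuffing(events):
        print(f"suspicious: {ip} tried {users} accounts, {fails} failures")
        break
```

In practice, AI-powered bots distribute their attempts across thousands of devices and IPs precisely to stay under thresholds like these, which is part of why static rules alone fall short.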
These tools are the product of advances in AI techniques like deep learning – a subset of machine learning whose networks can learn, unsupervised, from unstructured or unlabeled data.
AI is also being used to scale up pre-existing attacks by augmenting the human labor required to carry them out: enhanced CAPTCHA-breaking systems, faster identification of vulnerabilities in existing defenses, faster creation of new malware that can evade detection, and more effective selection of phishing targets by collecting and processing information from a large number of public domain sources. With AI agents, these attacks become far more precise, targeted and amplified, creating a multiplier effect for a malicious campaign and expanding the pool of potential victims.
“Un”-predictive analytics will lead the charge in reducing the impact of attacks
While prevalent in the financial industry, these attacks have the potential to impact many more sectors. With online ticket sales, for instance, an AI-powered bot could perform checkout abuse by pretending to be a human customer and buying out all the tickets for an event within a minute. Similarly, the ad tech industry continues to suffer major losses from ad fraud: in 2016, it was estimated that nearly 20 percent of total digital ad spend was wasted, and that $16.4 billion would be lost in 2017. Click fraud is another problem, in which bots repeatedly click on an ad hosted on a website to generate revenue for the host site while draining the advertiser's budget.
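Simple click fraud can sometimes be spotted with basic outlier statistics on per-source click volumes, before any machine learning is involved. A minimal sketch (the data, names and threshold below are invented for illustration):

```python
# Hypothetical sketch: flagging likely click fraud by spotting traffic sources
# whose click volume is a statistical outlier. Data and threshold are illustrative.
from statistics import median

def flag_click_outliers(clicks_per_source, threshold=3.5):
    """Return sources whose click counts are outliers by the modified z-score."""
    counts = list(clicks_per_source.values())
    med = median(counts)
    mad = median(abs(c - med) for c in counts)  # median absolute deviation
    if mad == 0:
        return []
    return [src for src, c in clicks_per_source.items()
            if 0.6745 * (c - med) / mad > threshold]

if __name__ == "__main__":
    # Daily ad clicks per publisher site; "site_d" behaves like a click bot.
    daily_clicks = {"site_a": 130, "site_b": 118, "site_c": 142, "site_d": 4900,
                    "site_e": 125, "site_f": 137, "site_g": 121}
    print(flag_click_outliers(daily_clicks))  # ['site_d']
```

More sophisticated AI-powered bots spread their clicks across many sources and pace them to look normal, which is exactly the evasion that pushes detection toward behavioral modeling.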
If machines can be taught to behave like humans, how can we stop them? Traditional approaches to blocking these attacks, such as task-based authentication, where a “user” is asked to complete a challenge like a CAPTCHA, have proved ineffective. They focus only on known fraudulent behaviors rather than continuously adapting and learning as patterns change.
Similarly, current solutions have attempted to distinguish bot behavior from human behavior, a difference that is often too nuanced to decipher. Instead, solutions should focus on finding anomalies in real user behavior and on fighting AI with AI. Using machine learning and AI algorithms, it is possible to continuously learn patterns of user behavior based on the muscle memory people exhibit when they walk, sit, stand, type, swipe and tap; even the hand in which a person prefers to hold their device can be used to build personalized user models.
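A minimal sketch of what such a personalized user model could look like, assuming behavioral features like keystroke timing and swipe speed are already being collected (the feature set and values here are invented, and an isolation forest stands in for whatever anomaly-detection algorithm a production system would actually use):

```python
# Hypothetical sketch of a personalized behavioral model: one anomaly detector
# per user, trained only on that user's own interaction features. Feature names
# and values are invented; a real system would extract them from sensor and
# input telemetry (typing cadence, swipe speed, device tilt, and so on).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated history for one user: [keystroke interval (ms), swipe speed (px/s),
# device tilt (degrees)]. A real deployment would log thousands of such samples.
user_history = np.column_stack([
    rng.normal(180, 15, 500),   # fairly consistent typing rhythm
    rng.normal(900, 80, 500),   # habitual swipe speed
    rng.normal(30, 4, 500),     # how the phone is usually held
])

# Learn this user's "muscle memory" envelope from genuine behavior only.
model = IsolationForest(contamination=0.01, random_state=0).fit(user_history)

# A new session from the genuine user versus a scripted, bot-like session.
genuine_session = np.array([[185.0, 880.0, 29.0]])
bot_session = np.array([[20.0, 3000.0, 0.0]])   # inhumanly fast, perfectly flat device

print(model.predict(genuine_session))  # [ 1] -> consistent with the real user
print(model.predict(bot_session))      # [-1] -> flag as a potential takeover
```

Training on a single user's genuine behavior, rather than on known attack signatures, is what allows a model like this to flag novel, never-before-seen bot patterns.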
These traits enable solutions to pick up on the smallest deviation from “normal” user behavior and to flag it immediately as a potential fraud attempt. It is important to combine this with other environmental signals, including the links between device, network, social, location and biometric intelligence. Dynamic layers of sophisticated user models will need to be implemented in order to stay ahead of AI-powered bot attacks.
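As a rough illustration of how behavioral and environmental layers could be combined (every signal name, weight and threshold below is an assumption for the sketch, not any vendor's actual scoring model):

```python
# Hypothetical sketch of layering a behavioral anomaly score with environmental
# signals (device, network, location). Signal names, weights and thresholds are
# invented; a real system would calibrate them against confirmed fraud outcomes.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    behavior_anomaly: float   # 0 = matches the user's model, 1 = highly anomalous
    new_device: bool          # device fingerprint never seen for this account
    network_reputation: float # 0 = clean residential IP, 1 = known proxy/botnet
    impossible_travel: bool   # location jump inconsistent with the last session

def risk_score(s: SessionSignals) -> float:
    """Blend the layered signals into a single 0..1 fraud risk score."""
    score = 0.5 * s.behavior_anomaly + 0.3 * s.network_reputation
    score += 0.1 if s.new_device else 0.0
    score += 0.1 if s.impossible_travel else 0.0
    return min(score, 1.0)

def decide(s: SessionSignals, block_at: float = 0.7, challenge_at: float = 0.4) -> str:
    r = risk_score(s)
    if r >= block_at:
        return "block"
    if r >= challenge_at:
        return "step-up authentication"
    return "allow"

# A session whose behavior deviates from the user model, on a suspicious network.
suspect = SessionSignals(behavior_anomaly=0.9, new_device=True,
                         network_reputation=0.8, impossible_travel=False)
print(decide(suspect))  # block
```

The “dynamic” part would come from continuously re-weighting and retraining these layers as attack patterns shift, rather than freezing them as static rules.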