The long-awaited commercial promise of AI has begun to materialize in recent years, with AI-powered systems already in use in many areas of business operations. Even as these systems helpfully automate manual tasks and enhance decision making and other human activities, they also emerge as promising attack targets, as many AI systems are home to massive amounts of data.
In addition, researchers have grown increasingly concerned about the susceptibility of these systems to malicious input that can corrupt their logic and affect their operations. The fragility of some AI technologies will become a growing concern in 2019. In some ways, the emergence of critical AI systems as attack targets will start to mirror the sequence seen 20 years ago with the internet, which rapidly drew the attention of cyber criminals and hackers, especially following the explosion of internet-based e-Commerce.
Attackers won’t just target AI systems; they will enlist AI techniques themselves to supercharge their own criminal activities. Automated systems powered by AI could probe networks and systems searching for undiscovered vulnerabilities that could be exploited. AI could also be used to make phishing and other social engineering attacks even more sophisticated by creating extremely realistic video and audio or well-crafted emails designed to fool targeted individuals. AI could also be used to launch realistic disinformation campaigns. For example, imagine a fake but realistic AI-created video of a company CEO announcing a large financial loss, a major security breach, or other major news. Widespread release of such a fake video could have a significant impact on the company before the true facts are understood.
And just as we see attack toolkits available for sale online, making it relatively easy for attackers to generate new threats, we’re certain to eventually see AI-powered attack tools that can give even petty criminals the ability to launch sophisticated targeted attacks. By automating the creation of highly personalized attacks, which have been labour-intensive and costly in the past, such AI-powered toolkits could reduce the marginal cost of crafting each additional targeted attack to essentially zero.