Artificial Intelligence: 3 Potential Attacks

A recent Turing test claim sparks concerns about how AI could make attacks much easier for cybercriminals.

Joe Stanganelli

June 12, 2014


On June 7 -- the 60th anniversary of Alan Turing's death -- a computer program purportedly became the first ever to pass the Turing test with no restrictions placed on the questions asked of it. Though some have argued that the program did not truly pass the test, the claim nonetheless marks a significant milestone in the steady advance of artificial intelligence technology.

Enterprise IT pros should pay attention, because experts foresee a number of security bugaboos awaiting a world of AI. Here are three potential AI exploits.

Untraceable social engineering
Artificial intelligence -- with or without the capability to imitate a specific, trusted person -- could someday be used to make social engineering "risk free," according to Kyle Adams, chief software architect for Junos WebApp Secure at Juniper Networks.

"What might happen is an attacker could build a chat emulator application and install it on a small embedded Linux system," Adams told us in an email interview. "The embedded system would have an integrated cell chip [allowing] the Linux system to reach the Internet and place phone calls... At some scheduled time, it would [contact] the target and carry out the attack."

"This device could then be plugged in at a coffee shop and left unattended," he said. Such a chip could be integrated into power supply, in which case "it would likely go unnoticed for a long time (it would just look like a surge protector)."

The device Adams describes could be pre-loaded with an AI-enabled social engineering attack script (removing the need to receive information from the attacker), or it could download attack scripts from a hidden service via Tor (allowing the attacker to rinse and repeat untraceably for future customized attacks with the same device). Then the device would communicate information back to the attacker (encrypted, of course) either through Tor or a highly popular IRC channel on a server with hundreds of thousands of users.

"From an evidence trail perspective, it would not be possible to tell who, if any, of the 1,000 people in the channel were responsible for the message," Adams said. Alternatively, with a Tor implementation, even if authorities could identify the hidden service(s) the device used, they would not be able to figure out who built or communicated with it.

DDoS attacks on people
Imagine the inconvenience of a crank call multiplied by a bazillion.

As with the social engineering tactics he describes, Adams envisions AI-enabled botnets fooling -- and wasting the time and resources of -- organizations' customer support staff via online portals and phone support.

"This allows an attacker to completely outpace a company's ability to support its users, and thus no one gets support," he said. As opposed to a regular DDoS attack that exhausts unfiltered computer resources, with this type of attack, "you are exhausting human resources. Filters won't work, and it's far more expensive and time-consuming to increase the scale of your human resources."

What's more, even if such an attack could be caught and stopped, the brand damage would be enormous.

Optimized scamming
Various phishing efforts and 419 scams can take a lot of time and effort. Cons require a con artist.

"Software that can take the place of [scammers] and only pass on verified leads would allow them to scale their efforts to a previously unimagined level," Adams said. "The more people they can cast the net over, the less work they have to do to refine the results, the more effective the campaign and the more people are effectively compromised."

Moreover, if society can build technology that can mimic a human, it can build technology that can mimic a specific human, such as a boss, a friend, or a family member.

Kevin Warwick, a visiting professor at the University of Reading, foresees the havoc this could wreak. "Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime," he told The Independent.

About the Author

Joe Stanganelli

Attorney, Beacon Hill Law

Joe Stanganelli is founder and principal of Beacon Hill Law, a Boston-based general practice law firm. His expertise on legal topics has been sought for several major publications, including US News and World Report and Personal Real Estate Investor Magazine. Joe is also a professional communications consultant. He has been working with social media for many years, even in the days of local BBSs, well before the term "social media" was invented. From 2003 to 2005, Joe ran Grandpa George Productions, a New England entertainment and media production company. He has also worked as a professional actor, director, producer, and playwright.
