Artificial Intelligence: 3 Potential Attacks

By Joe Stanganelli

A recent Turing test claim sparks concerns about how AI could make attacks much easier for cybercriminals.

On June 7 -- the 60th anniversary of Alan Turing's death -- a computer program purportedly became the first ever to pass the Turing test with no restrictions placed on the questions asked of it. Though some have argued that the program did not truly pass the test, it is, at the very least, a significant development in the steady advance of artificial intelligence technology.

Enterprise IT pros should pay attention, because experts foresee a number of security bugaboos awaiting a world of AI. Here are three potential AI exploits.

Untraceable social engineering
Artificial intelligence -- with or without the capability to imitate a specific, trusted person -- could someday be used to make social engineering "risk free," according to Kyle Adams, chief software architect for Junos WebApp Secure at Juniper Networks.

"What might happen is an attacker could build a chat emulator application and install it on a small embedded Linux system," Adams told us in an email interview. "The embedded system would have an integrated cell chip [allowing] the Linux system to reach the Internet and place phone calls... At some scheduled time, it would [contact] the target and carry out the attack."

"This device could then be plugged in at a coffee shop and left unattended," he said. Such a chip could be integrated into power supply, in which case "it would likely go unnoticed for a long time (it would just look like a surge protector)."

The device Adams describes could be pre-loaded with an AI-enabled social engineering attack script (removing the need to receive information from the attacker), or it could download attack scripts from a hidden service via Tor (allowing the attacker to rinse and repeat untraceably for future customized attacks with the same device). Then the device would communicate information back to the attacker (encrypted, of course) either through Tor or a highly popular IRC channel on a server with hundreds of thousands of users.

"From an evidence trail perspective, it would not be possible to tell who, if any, of the 1,000 people in the channel were responsible for the message," Adams said. Alternatively, with a Tor implementation, even if authorities could identify the hidden service(s) the device used, they would not be able to figure out who built or communicated with it.

DDoS attacks on people
Imagine the inconvenience of a crank call multiplied by a bazillion.

As with the social engineering tactics he describes, Adams envisions AI-enabled botnets fooling -- and wasting the time and resources of -- organizations' customer support staff via online portals and phone support.

"This allows an attacker to completely outpace a company's ability to support its users, and thus no one gets support," he said. As opposed to a regular DDoS attack that exhausts unfiltered computer resources, with this type of attack, "you are exhausting human resources. Filters won't work, and it's far more expensive and time-consuming to increase the scale of your human resources."

What's more, even if such an attack could be caught and stopped, the brand damage would be enormous.

Optimized scamming
Various phishing efforts and 419 scams can take a lot of time and effort. Cons require a con artist.

"Software that can take the place of [scammers] and only pass on verified leads would allow them to scale their efforts to a previously unimagined level," Adams said. "The more people they can cast the net over, the less work they have to do to refine the results, the more effective the campaign and the more people are effectively compromised."

Moreover, if society can build technology that can mimic a human, it can build technology that can mimic a specific human, such as a boss, a friend, or a family member.

Kevin Warwick, a visiting professor at the University of Reading, foresees the havoc this could wreak. "Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime," he told The Independent.

Joe Stanganelli is founder and principal of Beacon Hill Law, a Boston-based general practice law firm. His expertise on legal topics has been sought for several major publications, including US News and World Report and Personal Real Estate Investor Magazine.

Comments

Brian.Dean (User Rank: Ninja), 6/18/2014 6:59:00 AM
Re: Colbert
You are right, some businesses will try to invest in the hype and ignore the technology and innovation part of the equation. I feel that in the short run these businesses might manage to capture a bit of the market share, but in the long run only the businesses that have created a solid value proposition will be able to sell their product as AI -- consumers will be asking deeper questions, such as, what's the IQ level of AI version 4.2, etc.

MarciaNWC (User Rank: Strategist), 6/17/2014 11:05:37 AM
Re: Colbert
Probably won't stop anyone from marketing it as AI; it's so much catchier than machine learning!

Brian.Dean (User Rank: Ninja), 6/17/2014 8:15:40 AM
Re: Colbert
Good point; with the current level of machine learning and user pattern recognition, it would be hard to classify these technologies as AI. If a system has been programmed to detect, for instance, a user login from a new web browser, resulting in an automated alert or a service shutdown after a pre-set count, then machine learning should not be sold to the consumer as AI.
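
For illustration, that sort of rule-based check is only a few lines of ordinary code, which is part of why it is a stretch to call it AI. The Python sketch below is purely hypothetical; every name and threshold is invented for the example rather than taken from any real product.

    # Hypothetical rule-based check: flag a login from a browser the user has
    # not been seen with before, and lock the account after a pre-set number
    # of such alerts. All names and values here are illustrative only.

    SEEN_BROWSERS = {"alice": {"Firefox/115"}}  # browsers already associated with each user
    ALERT_COUNTS = {}                           # new-browser alerts raised per user
    ALERT_LIMIT = 3                             # shut down service after this many alerts

    def check_login(user, browser):
        """Return 'ok', 'alert', or 'locked' for a single login attempt."""
        known = SEEN_BROWSERS.setdefault(user, set())
        if browser in known:
            return "ok"
        # Unfamiliar browser: remember it, raise an alert, and count it.
        known.add(browser)
        ALERT_COUNTS[user] = ALERT_COUNTS.get(user, 0) + 1
        if ALERT_COUNTS[user] >= ALERT_LIMIT:
            return "locked"  # pre-set count reached: service shut down for this user
        return "alert"

    print(check_login("alice", "Firefox/115"))  # ok: browser already known
    print(check_login("alice", "Chrome/126"))   # alert: first login from an unseen browser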

MarciaNWC (User Rank: Strategist), 6/16/2014 11:32:53 AM
Re: Colbert
Interesting concept of distributed AI for security, Brian. This isn't quite it, but apparently at least one vendor says it uses AI for cybersecurity: http://www.theinquirer.net/inquirer/news/2325001/android-app-claims-to-use-artificial-intelligence-to-fight-cyber-threats.

Brian.Dean (User Rank: Ninja), 6/16/2014 3:33:03 AM
Re: Colbert
It's amazing how quickly a word like "Android" can transition from meaning a robot that appears human, to an OS that has nothing to do with appearance, and next to software that appears human -- all this in the span of 4 years or so.

If attackers are going to use AI to inflict damage, then I guess better AIs will be needed to defend against such attacks. In a way it is better to have multiple AIs in a decentralized network, performing different activities, than to have a single super AI with lots of resources.

MarciaNWC (User Rank: Strategist), 6/13/2014 11:14:17 AM
Colbert
Stephen Colbert had some fun with the reported Turing test breakthrough this week. "The 'robolution' is upon us!" With "Bina the activist Android," it's already here, he joked.