As artificial intelligence (AI) breaks into the mainstream, there is a great deal of misinformation and confusion about what it’s capable of and the potential risks it poses. Our culture is rich with dystopian visions of human ruin at the feet of all-knowing machines. But many people also appreciate the potential good AI might do for us through the improvements and insights it could bring.
Computer systems that can learn, reason and act are still in their infancy. Machine learning requires huge data sets, and many real-world systems, like driverless cars, need a complex blend of computer vision sensors, real-time decision-making software and robotics. For businesses adopting AI, deployment is simpler, but giving AI access to information and allowing any measure of autonomy brings serious risks that must be considered.
What risks does AI pose?
Accidental bias is quite common in AI systems and can be introduced by programmers or baked into the data sets used for training. If that bias leads to poor decisions, and potentially even discrimination, legal consequences and reputational damage may follow. Flawed AI design can also lead to overfitting or underfitting, where a model's decisions are either too narrowly tied to its training data or too general to be useful.
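To make the overfitting problem concrete, here is a minimal sketch of the kind of check a design team might run, written in Python with scikit-learn on synthetic data purely for illustration: compare a model's accuracy on its training data with its accuracy on data it has never seen. A large gap suggests overfitting; low scores on both suggest underfitting.

```python
# A minimal, illustrative check (scikit-learn, synthetic data):
# compare accuracy on the training data with accuracy on held-out data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

# An unconstrained decision tree is prone to memorizing its training data.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

train_acc = model.score(X_train, y_train)
val_acc = model.score(X_val, y_val)

if train_acc - val_acc > 0.1:
    print(f"Possible overfitting: train={train_acc:.2f}, validation={val_acc:.2f}")
elif train_acc < 0.7:
    print(f"Possible underfitting: train={train_acc:.2f}")
else:
    print(f"Scores look balanced: train={train_acc:.2f}, validation={val_acc:.2f}")
```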
Both of these risks can be mitigated by establishing human oversight, by stringently testing AI systems during the design phase and by closely monitoring those systems once they are operational. Decisions must be measured and assessed so that any emerging bias or questionable behavior is addressed swiftly.
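Measuring decisions for bias can start very simply. The sketch below uses hypothetical predictions and group labels, invented for illustration only; it compares the rate of favorable outcomes across two groups, and a large gap is a prompt to investigate rather than proof of discrimination.

```python
# Hypothetical model predictions and group membership, purely for illustration.
import numpy as np

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])   # 1 = favorable decision
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # a protected attribute

rate_group_0 = preds[group == 0].mean()
rate_group_1 = preds[group == 1].mean()

print(f"Favorable-outcome rate, group 0: {rate_group_0:.2f}")
print(f"Favorable-outcome rate, group 1: {rate_group_1:.2f}")
# A large gap is a signal to investigate how the model is making its decisions.
print(f"Gap: {abs(rate_group_0 - rate_group_1):.2f}")
```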
These threats are based on unintentional errors and failures in design and implementation, but a different set of risks emerges when people deliberately try to subvert AI systems or wield them as weapons.
How might attackers manipulate AI?
Poisoning an AI system can be alarmingly easy. Attackers can manipulate the data sets used to train AI, making subtle changes to parameters or crafting scenarios that are carefully designed to avoid raising suspicion while gradually steering the AI in the desired direction. Where attackers lack access to the training data, they may employ evasion instead, tampering with inputs to force mistakes: by modifying input data just enough to make proper identification difficult, they can push an AI system into misclassifying what it sees.
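As a rough illustration of how evasion works, the sketch below (Python with scikit-learn on synthetic data, not an attack on any real system) nudges a single input in small steps until a deliberately simple classifier changes its answer. Attacks on production systems are more sophisticated, but the principle of making small, deliberate changes to inputs is the same.

```python
# An illustrative evasion attack on a simple linear model (scikit-learn,
# synthetic data): nudge one input in small steps until the prediction flips.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0].copy()
original_label = clf.predict([x])[0]

# For a linear model, the weight vector points toward the opposite class.
direction = clf.coef_[0] if original_label == 0 else -clf.coef_[0]
for _ in range(200):
    x += 0.05 * np.sign(direction)            # many small, unobtrusive changes
    if clf.predict([x])[0] != original_label:
        break

print("Label before:", original_label, "| label after perturbation:", clf.predict([x])[0])
print("Total change applied:", round(float(np.abs(x - X[0]).sum()), 2))
```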
Checking the accuracy of data and inputs may prove impossible, but every effort should be made to harvest data from reputable sources. Try to bake in the identification of anomalies, provide adversarial examples so the AI learns to recognize malicious inputs, and isolate AI systems behind safeguard mechanisms that make them easy to shut down if things start to go wrong.
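One way to provide adversarial examples is sketched below, again in Python with scikit-learn on synthetic data and intended as an assumption-laden illustration rather than a recipe: perturbed copies of the training data are generated using the model's own weights, and the model is then retrained on both the clean and perturbed versions.

```python
# An illustrative take on adversarial training (scikit-learn, synthetic data):
# generate perturbed copies of the training set and retrain on both versions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Push each point toward the opposite class using the model's own weights.
epsilon = 0.5
signs = np.where(y == 0, 1.0, -1.0)[:, None]
X_adv = X + epsilon * signs * np.sign(clf.coef_[0])

# Retrain on the combined clean and perturbed data, keeping the true labels.
robust_clf = LogisticRegression(max_iter=1000).fit(
    np.vstack([X, X_adv]), np.concatenate([y, y])
)

print("Accuracy on perturbed inputs, original model:", round(clf.score(X_adv, y), 2))
print("Accuracy on perturbed inputs, retrained model:", round(robust_clf.score(X_adv, y), 2))
```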
A tougher issue to tackle is inference, whereby attackers try to reverse engineer AI systems so they can work out what data was used to train them. This may give them access to sensitive data, pave the way for poisoning or enable them to replicate an AI system for themselves.
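The sketch below (synthetic data and scikit-learn, purely illustrative) shows why inference attacks are possible at all: many models are noticeably more confident about records they were trained on than about records they have never seen, and that gap is exactly what an attacker measures.

```python
# An illustrative membership-inference heuristic (scikit-learn, synthetic data):
# compare the model's confidence on records it was trained on with its
# confidence on records it has never seen.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=2)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=2)

model = RandomForestClassifier(n_estimators=100, random_state=2).fit(X_in, y_in)

conf_members = model.predict_proba(X_in).max(axis=1).mean()
conf_outsiders = model.predict_proba(X_out).max(axis=1).mean()

print("Mean confidence on training records:", round(conf_members, 3))
print("Mean confidence on unseen records:  ", round(conf_outsiders, 3))
# The gap between these numbers is what an attacker exploits to guess
# whether a specific record was part of the training set.
```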
How could AI be weaponized?
Cybercriminals can also employ AI to increase the scale and effectiveness of their social engineering attacks. AI can learn to spot patterns in behavior, understanding how to convince people that a video, phone call or email is legitimate and then persuading them to compromise networks and hand over sensitive data. All the social engineering techniques cybercriminals currently employ could be made far more effective with the help of AI.
There’s also scope to use AI to identify fresh vulnerabilities in networks, devices and applications as they emerge. When AI rapidly surfaces those openings for human attackers to exploit, the job of keeping information secure becomes much tougher. Real-time monitoring of all access and activity on networks, coupled with swift patching, is vital to combat these threats. The best policy in these cases may be to fight fire with fire.
How can you use AI to boost company security?
AI can be highly effective in network monitoring and analytics, establishing a baseline of normal behavior and flagging discrepancies in things like server access and data traffic immediately. Detecting intrusions early gives you the best chance of limiting the damage they can do. While it may initially be best to have AI systems flag abnormalities and alert IT departments so they can investigate, as AI learns and improves, it may be given the authority to nullify threats itself and block intrusions in real time.
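A minimal sketch of this baselining idea is shown below, in Python with scikit-learn; the traffic features and numbers are invented for illustration. An anomaly detector is fitted on normal activity and then asked to judge new events.

```python
# An illustrative baseline-and-flag setup (scikit-learn; the traffic features
# and values are invented): fit an anomaly detector on normal activity, then
# ask it to judge new events.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical baseline features: [bytes transferred, duration (s), failed logins]
normal_traffic = rng.normal(loc=[5000, 30, 0], scale=[1500, 10, 0.3], size=(1000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

new_events = np.array([
    [5200, 28, 0],      # routine connection
    [90000, 400, 12],   # huge transfer with a burst of failed logins
])
for event, label in zip(new_events, detector.predict(new_events)):
    status = "ANOMALY - alert the IT department" if label == -1 else "normal"
    print(event, "->", status)
```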
Just as AI can model normal behavior, learning how users interact with systems, how to recognize vulnerabilities and malware, and what constitutes an emerging threat, it can also learn which of its alerts prove useful. As the data set grows and the system receives more feedback on its decisions, it gains experience and gets better at the task of defending your network.
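That feedback loop can be sketched as follows, with invented features and labels and scikit-learn standing in for whatever tooling is actually in place: alerts the security team has triaged become new labelled examples, and the model is updated incrementally rather than retrained from scratch.

```python
# An illustrative feedback loop (scikit-learn; features and labels invented):
# analyst-triaged alerts become new labelled examples, and the model is
# updated incrementally instead of being retrained from scratch.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

# Initial alert data: 1 = confirmed threat, 0 = false positive.
X_initial = rng.normal(size=(500, 4))
y_initial = (X_initial[:, 0] + X_initial[:, 1] > 0).astype(int)

model = SGDClassifier(random_state=1)
model.partial_fit(X_initial, y_initial, classes=[0, 1])

# Later: a batch of alerts the security team has reviewed and labelled.
X_feedback = rng.normal(size=(50, 4))
y_feedback = (X_feedback[:, 0] + X_feedback[:, 1] > 0).astype(int)
model.partial_fit(X_feedback, y_feedback)

print("Model updated with analyst feedback on", len(X_feedback), "alerts")
```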
With a major skills shortage in information security, any AI system that can shoulder some of the burden and enable limited staff to focus on complex problems will be of benefit. As companies look to reduce costs, AI is fast becoming more attractive as a replacement for people. It will bring benefits and it will improve with experience, but forward-thinking companies must plan to mitigate the potential risks now.