Businesses can integrate AI-powered security solutions into their systems to protect against both online and offline security threats. Though AI is an effective tool for protecting organizations from cyberattacks, it also enables attackers to launch complex, automated attacks.

Another aspect of AI security is the security of the machine learning systems that power corporate decision-making and autonomous systems. Research has shown that small changes to their inputs can cause these systems to fail, giving attackers another attack surface. Therefore, companies need to consider security when implementing AI solutions.

What is AI Security?

AI is shaping multiple aspects of security, and we explain each of them here. However, the rest of the article focuses on AI in cybersecurity, as this is the most common AI application in the security field today.

AI in cybersecurity

AI presents opportunities for information/cybersecurity professionals to improve their cyber defenses, but it also creates new threats as cyber attackers leverage modern, publicly available machine learning algorithms.

Using AI to improve cybersecurity

Organizations leverage artificial intelligence to strengthen their defenses against cyberattacks such as malware, phishing, network anomalies, and unauthorized access to sensitive data. These tools use machine learning algorithms to learn from historical data and detect anomalies, enabling organizations to prevent and manage cyberattacks effectively and efficiently. For example, AI-powered deception technology helps delay and identify cyber attackers.
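As an illustration of the anomaly detection approach described above, here is a minimal sketch that trains an unsupervised model on historical traffic records and flags unusual new events. The feature names and values are hypothetical, and scikit-learn's IsolationForest stands in for whatever detector a commercial tool might use.

```python
# Minimal sketch of ML-based anomaly detection on historical security logs.
# Feature names and values are hypothetical; real systems use far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical historical records: [bytes_sent, failed_logins, requests_per_minute]
normal_traffic = rng.normal(loc=[500, 0.2, 30], scale=[100, 0.5, 5], size=(1000, 3))

# Train an unsupervised anomaly detector on historical (mostly benign) data
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# Score new events; -1 means the event looks anomalous and should be reviewed
new_events = np.array([
    [520, 0, 31],       # looks like normal traffic
    [50000, 12, 400],   # large transfer plus many failed logins: likely flagged
])
print(model.predict(new_events))  # likely [ 1 -1 ]
```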

Defending against AI-driven cyberattacks

More than 90% of cybersecurity professionals in the US and Japan anticipate malicious AI-powered attacks. This is because AI research is publicly available, and attackers can use it to build intelligent, continuously learning exploits.

Alejandro Correa Bahnsen, Cyxtera’s vice president of research, states:

An average phishing attacker will bypass an AI-based detection system 0.3% of the time, but by using AI this ‘attacker’ was able to bypass the system >15% of the time

For example, deepfakes are highly realistic videos, audio recordings, or photos generated by AI techniques. Some of their potential malicious uses include:

  • Overcoming biometric security systems
  • Infiltrating social networks
  • Using realistic video/audio/photos for manipulating users and gaining access to corporate networks/information

AI-powered physical security systems

Cameras record footage and transfer it to image recognition systems that identify threats (e.g., trespasser detection).

Securing AI systems against adversarial attacks

With AI technology, organizations adopt new processes such as data ingestion, preparation and labeling, model training, inference validation, and production deployment. These processes add new layers to the organization's technology stack that need to be protected from adversarial attacks. In adversarial attacks, attackers alter the inputs of machine learning models to cause the models to make mistakes.

Since few deep learning systems are currently in production, adversarial attacks are still a mostly theoretical threat. Once deep learning systems start making important decisions, the importance of these threats will increase significantly. For example,

  • autonomous driving systems can be manipulated with subtle changes to road signs or their surroundings
  • industrial automation systems can similarly be manipulated for industrial sabotage
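To make the threat concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one well-known adversarial attack: the input is nudged in the direction that most increases the model's loss. The toy linear classifier, random input, and epsilon value are stand-ins, not a real vision or driving system.

```python
# Minimal FGSM-style sketch: a small input perturbation can flip a model's prediction.
# The model, data, and epsilon below are toy stand-ins for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(10, 2))     # toy classifier with 2 classes
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)  # original input (e.g. sensor features)
y = model(x).argmax(dim=1)                  # the class the model currently predicts

# Fast Gradient Sign Method: move the input in the direction that increases the loss
loss = loss_fn(model(x), y)
loss.backward()
epsilon = 0.5                               # attack strength (hypothetical value)
x_adv = x + epsilon * x.grad.sign()

print("original prediction:   ", y.item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())  # often flips
```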

Why is it important now?

As an organization collects more data from different sources, the potential points of cyberattack increase. According to a survey by Capgemini Research Institute, 69% of enterprises believe AI is necessary for cybersecurity because the volume of threats is growing beyond what cybersecurity analysts can handle. Survey results show that 56% of firms say their cybersecurity analysts are overwhelmed, and 23% are not able to detect all breaches.

According to another survey by TD Ameritrade, Registered Investment Advisors (RIAs) are increasingly willing to invest in new artificial intelligence cybersecurity ventures. With all these investment opportunities, the AI security market is forecast to grow from USD 8 billion in 2019 to USD 38 billion by 2026, at a CAGR of 23.3%.

Figure: Biggest tech investments survey, in which cybersecurity is the most-invested category (Source: TD Ameritrade)

What are AI security use cases, and which companies lead in each of them?

E-mail monitoring: E-mail is a common target for cyber threats. AI-powered monitoring software improves the accuracy and speed of detecting threats such as phishing messages.

  • Tessian
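As a rough illustration of what such tools do under the hood, here is a minimal sketch that classifies message text as phishing or benign. The training examples and model choice are hypothetical; commercial products such as Tessian train on large volumes of real mail and use many more signals than the message text alone.

```python
# Minimal sketch of AI-assisted e-mail monitoring: classify messages as phishing or benign.
# Training examples and labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password at this link immediately",
    "Urgent: wire transfer needed today, reply with bank details",
    "Meeting moved to 3pm, see updated agenda attached",
    "Quarterly report draft attached for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, labels)

incoming = ["Please confirm your password to avoid account suspension"]
print(clf.predict(incoming))        # likely [1] -> flag for analyst review
print(clf.predict_proba(incoming))  # confidence scores for each class
```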

Network threat analysis and malware detection: Organizations use AI to identify malware and to distinguish real users from bots in order to prevent fraudulent access (a minimal sketch follows the vendor list below).

  • LogRhythm
  • SparkCognition
  • Cylance
  • White Ops
  • Versive
  • Cybereason
  • Anomali
  • Fortinet
  • Palo Alto Networks
  • Shape Security
  • Cujo AI
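Here is the minimal sketch referenced above for distinguishing real users from bots based on simple session features. The features, values, and labels are hypothetical stand-ins for the device, network, and behavioral signals real products rely on.

```python
# Minimal sketch of separating real users from bots with a supervised classifier.
# Feature names, values, and labels are hypothetical.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical labeled sessions: [requests_per_minute, avg_seconds_between_clicks, pages_visited]
sessions = [
    [5, 8.0, 12],     # human-like browsing
    [3, 12.5, 6],
    [300, 0.1, 900],  # scripted, high-rate traffic
    [450, 0.05, 1500],
]
labels = [0, 0, 1, 1]  # 0 = real user, 1 = bot

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(sessions, labels)

new_session = [[280, 0.2, 700]]
print(clf.predict(new_session))  # likely [1] -> block or challenge the session
```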

AI against AI-based threats: Hackers are using AI as well, so organizations need AI-driven defenses to protect themselves from AI-based threats.

  • Check Point

AI to automate repetitive security tasks: Organizations leverage AI to automate the repetitive tasks of security analysts so that they can shift their focus to higher-value work.

  • Vectra
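As a simple illustration of the kind of repetitive work that can be automated, the sketch below deduplicates security alerts and ranks them so analysts see the highest-risk items first. The alert fields and prioritization rule are hypothetical.

```python
# Minimal sketch of automating a repetitive analyst task: deduplicating and
# prioritizing security alerts. Alert fields and ranking logic are hypothetical.
from collections import defaultdict

alerts = [
    {"host": "web-01", "rule": "failed_login_burst", "severity": 3},
    {"host": "web-01", "rule": "failed_login_burst", "severity": 3},   # duplicate
    {"host": "db-02", "rule": "data_exfil_suspected", "severity": 9},
    {"host": "hr-07", "rule": "malware_signature", "severity": 7},
]

# Collapse duplicate (host, rule) pairs and count how often each alert fired
grouped = defaultdict(lambda: {"count": 0, "severity": 0})
for a in alerts:
    key = (a["host"], a["rule"])
    grouped[key]["count"] += 1
    grouped[key]["severity"] = a["severity"]

# Rank by severity first, then by occurrence count
triage_queue = sorted(grouped.items(),
                      key=lambda kv: (kv[1]["severity"], kv[1]["count"]),
                      reverse=True)
for (host, rule), info in triage_queue:
    print(f"{host:6} {rule:22} severity={info['severity']} count={info['count']}")
```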

Fortinet and Palo Alto Networks are the two leading AI security companies, generating USD 1.8 billion and USD 2.27 billion in revenue, respectively.


