Image source: Possessed Photography via Unsplash
05.03.2021

AI in Cyber Security: Attack Weapon and Protective Shield at the Same Time

Cybersecurity | IT Protection | Attack Vectors

Artificial intelligence – or “AI” for short – aims to replicate human thinking in as automated and mechanized a way as possible. AI is also playing an increasingly important role in cyber security: it is used both as an attack weapon and as a defense against cyberattacks. Which side will win in the end?

It is every cyber security expert’s nightmare: cyberattacks powered by artificial intelligence. Phishing emails that rely on social engineering and precisely analyze the target’s behavior would be many times more harmful with AI, which could generate email texts that are no longer distinguishable from those of real senders. Attacks would be intelligently automated, and malware campaigns would run faster and more effectively. Most threatening of all: with every failed attack, the attacker learns from their own mistakes and improves their techniques for the next attempt.

But what gives cyber security experts headaches as a serious threat also offers an opportunity: to strengthen their own protective shield against cyberattacks and to identify attackers more reliably. AI will therefore be both a curse and a blessing.

AI as an offensive weapon

Cybercriminals are increasingly using artificial intelligence as a weapon. Using penetration techniques, behavioral analysis and behavioral imitation, AI can carry out attacks far faster, in a more coordinated way and more efficiently – and against thousands of targets at the same time.

AI looks for vulnerabilities

Cyber attackers use AI to automatically scan a large number of interfaces in the victim’s IT for vulnerabilities. When it scores a “hit”, the AI can determine whether an attack on that vulnerability could paralyze the system or serve as a gateway for malicious code.

“AI-as-a-Service”

Hackers already offer AI-based systems on the darknet as “AI-as-a-Service”: ready-made solutions for criminals who have little knowledge of artificial intelligence themselves. This also lowers the barrier to entry for smaller hacker gangs.

Guess passwords
AI-based systems already exist that can automatically guess passwords through machine learning. In addition, new dangers arise for AI-protected IT networks:

AI-driven malware
Cybercriminals most often use AI in connection with malware distributed via email. With AI, the malware can imitate user behavior even more convincingly: intelligent assistants can generate texts of such high semantic quality that recipients find it very difficult to distinguish them from genuine emails.

Self-learning phishing attacks
Tailoring a phishing email to a sender’s writing style used to require insight into human nature and extensive background research. With the help of AI systems, information available online can be extracted in a targeted manner to tailor websites, links or emails to the victim of an attack. AI systems learn from past mistakes and successes and improve their tactics with each attack.

AI as a protective shield

AI will play a major role in cyber security in detecting threats and defending against cyberattacks. Learning algorithms are designed to recognize the behavior patterns of attackers and their programs and to take targeted action against them.

Time-saving pattern recognition

AI applications are particularly strong at recognizing and comparing patterns, quickly filtering the essentials out of large amounts of data. This pattern recognition makes it possible to uncover hidden channels that are exfiltrating data faster than any human analyst could.

Identify spam emails
Classic filtering methods that identify and classify spam emails on the basis of statistical models, blacklists or database solutions are reaching their limits. AI solutions can help by learning to identify the complex patterns and structures of spam emails.
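As a toy illustration of such learned filtering, here is a minimal naive Bayes spam scorer; the tiny corpus, whitespace tokenization and log-ratio scoring are illustrative assumptions, not a production filter.

```python
from collections import Counter
import math

# Illustrative toy corpus; a real filter would train on thousands of emails.
SPAM = ["win free prize now", "free money click now", "claim your free prize"]
HAM = ["meeting agenda attached", "lunch tomorrow at noon", "project status update"]

def train(docs):
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    return counts

spam_counts, ham_counts = train(SPAM), train(HAM)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def spam_score(text):
    # Log-probability ratio with Laplace smoothing; > 0 means "more spam-like".
    score = 0.0
    for w in text.split():
        p_spam = (spam_counts[w] + 1) / (spam_total + len(vocab))
        p_ham = (ham_counts[w] + 1) / (ham_total + len(vocab))
        score += math.log(p_spam / p_ham)
    return score
```

Unlike a fixed blacklist, the model’s scores shift automatically as new training examples arrive.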

Authenticate authorized users
Passive, continuous authentication is an emerging field for AI algorithms. Sensor data from accelerometers or gyroscopes is collected and evaluated while the device is in use. In this way, AI can detect and prevent unauthorized use of the device.
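A minimal sketch of the idea, assuming the owner’s accelerometer magnitudes cluster around a learnable baseline; the sample values and the three-sigma threshold are purely illustrative.

```python
import statistics

def learn_baseline(samples):
    # Learn the owner's typical motion profile (mean and spread).
    return statistics.mean(samples), statistics.stdev(samples)

def is_owner(samples, baseline, threshold=3.0):
    # Flag a session whose average motion drifts far from the baseline.
    mean, std = baseline
    session_mean = statistics.mean(samples)
    return abs(session_mean - mean) <= threshold * std

# Illustrative accelerometer magnitudes recorded while the owner walks.
owner_walk = [9.7, 9.9, 10.1, 10.0, 9.8, 10.2, 9.9, 10.0]
baseline = learn_baseline(owner_walk)
```

A real system would combine many such features (gait rhythm, typing cadence, grip angle) rather than a single mean, but the principle is the same: continuously compare live sensor data against a learned profile.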

Detect malware
Conventional malware detection is usually based on checking the signatures of files and programs. When a new form of malware appears, an AI can additionally compare it with known forms in its database and decide whether it should be fended off automatically. In the future, AI could evolve to detect ransomware, for example, before it encrypts any data.
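The two-stage idea can be sketched like this: an exact signature check first, then a crude byte-level similarity comparison against known samples to catch unseen variants. The 4-byte chunking, Jaccard similarity and 0.7 cutoff are illustrative choices, not how any particular product works.

```python
import hashlib

KNOWN_SIGNATURES = set()
KNOWN_SAMPLES = []

def register_malware(payload: bytes):
    KNOWN_SIGNATURES.add(hashlib.sha256(payload).hexdigest())
    KNOWN_SAMPLES.append(payload)

def looks_malicious(payload: bytes, cutoff=0.7) -> bool:
    # Stage 1: exact signature match, as in conventional scanners.
    if hashlib.sha256(payload).hexdigest() in KNOWN_SIGNATURES:
        return True
    # Stage 2: compare 4-byte chunks against known samples to spot variants.
    chunks = {payload[i:i + 4] for i in range(len(payload) - 3)}
    for sample in KNOWN_SAMPLES:
        sample_chunks = {sample[i:i + 4] for i in range(len(sample) - 3)}
        overlap = len(chunks & sample_chunks) / max(1, len(chunks | sample_chunks))
        if overlap >= cutoff:
            return True  # near-duplicate of a known sample
    return False

register_malware(b"evil_dropper_v1 downloads payload from c2")
```

The signature check catches only what is already in the database; the similarity stage is what lets the system flag a slightly modified variant that a pure hash lookup would miss.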

Spying on attackers using algorithms

Hackers almost always rely on infiltrated programs or commands. Artificial intelligence could learn, for example, which programs a malicious code opens, which files it overwrites or deletes, and which data it uploads or downloads. Using these patterns, the trained AI algorithm can then look for suspicious activity on users’ computers.
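As a rough sketch of that idea, one could assign weights to observed actions and flag a process whose combined score crosses a threshold; the action names, weights and threshold below are invented for illustration, a real system would learn them from data.

```python
# Illustrative weights: how suspicious each observed action is on its own.
SUSPICIOUS_WEIGHTS = {
    "spawn_shell": 3,
    "overwrite_file": 2,
    "delete_file": 2,
    "upload_data": 3,
    "read_file": 0,
}

def threat_score(actions):
    # Unknown actions get a small default weight of 1.
    return sum(SUSPICIOUS_WEIGHTS.get(a, 1) for a in actions)

def is_suspicious(actions, threshold=6):
    return threat_score(actions) >= threshold
```

A ransomware-like sequence of mass overwrites, deletions and an upload scores high, while ordinary reads stay below the threshold.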

Uncover the identity of attackers

The AI’s algorithms could also soon uncover the identity of attackers, because programmers leave individual traces in their program code. These can be found, among other things, in the style of the comments programmers write in their code. Learning algorithms can extract these traces and thus attribute the code to an author.
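A toy version of such comment stylometry might extract a few style features from code comments and match an unknown snippet to the closest known author profile; the three features and the mini author profiles here are illustrative assumptions.

```python
import re

def comment_features(code):
    # Extract '#' comments and measure three simple style habits:
    # leading space after '#', capitalized first word, trailing period.
    comments = re.findall(r"#(.*)", code)
    if not comments:
        return (0.0, 0.0, 0.0)
    n = len(comments)
    starts_space = sum(c.startswith(" ") for c in comments) / n
    capitalized = sum(c.strip()[:1].isupper() for c in comments) / n
    ends_period = sum(c.rstrip().endswith(".") for c in comments) / n
    return (starts_space, capitalized, ends_period)

def closest_author(snippet, profiles):
    # Nearest-profile match by squared distance in feature space.
    feats = comment_features(snippet)
    def dist(name):
        return sum((a - b) ** 2 for a, b in zip(feats, profiles[name]))
    return min(profiles, key=dist)

# Hypothetical author profiles built from known code samples.
profiles = {
    "alice": comment_features("# Load the config.\n# Parse the flags.\n"),
    "bob": comment_features("#quick hack\n#todo fix later\n"),
}
```

Research systems use far richer features (identifier naming, indentation, AST structure), but the matching principle is the same.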

Cyber security not possible without people

Cyber security should definitely not be left to artificial intelligence alone. Only humans and machines working as a team can succeed in the fight against cyberattacks, because the threat landscape changes almost daily. New attack methods, new vulnerabilities and recurring human error create a complex mix of contingencies for which a purely AI-based system can never be fully prepared.

So put your trust in our expertise! Contact us, and together we will develop a cyber security concept for your company.