
Cybercriminals Using Artificial Intelligence: How You Can Defend Against Offensive AI in 2020

When it comes to cybercriminals wielding offensive AI, it will be a long while before we see the rise of the robots that science fiction and popular imagination have conditioned us to expect. But attackers already have access to many open source AI tools that can supercharge the social engineering tactics they use to trick victims into handing over sensitive information.

In this article, we will discuss the role AI is likely to play in the email threats of tomorrow, looking at current applications of AI in legitimate business that are likely to be repurposed for malicious intent. Email security is the Achilles' heel of cybersecurity, with over 94% of cyber threats starting with an email. Broadly speaking, there are two major barriers that must be bypassed for an emailed attack to succeed.

Security Protocols and the Judgment of the Recipient

The first is that the security protocols of the targeted inbox need to be overcome; the second is the better judgment of the recipient. AI can be used to break down both of these barriers, and we will discuss two ways in which this can happen.

Ultimately, this shows that AI will not be used just to trick our technology; it will also be used to trick us, the human defenders, by making spoofed emails look ever more legitimate.

Let's Start at the Domain Level

Purchasing thousands of new domains and sending malicious email en masse is a tried and tested technique that cybercriminals have been leveraging for a very long time. Here's why it works: traditional security tools analyze emails in isolation, measuring them against a static list of known bads.

By way of analogy, it's a bit like a security guard standing at the perimeter of an organization's physical premises, asking each individual who enters, "Are you malicious?" If the domain is brand new, it has no reputation, and traditional tools have very limited ability to identify potentially harmful elements in any other way, so they have no choice but to let the email in.
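
To make that concrete, here is a minimal sketch of a static, reputation-based check. The blocklist and domain names are hypothetical; the point is simply that a never-before-seen domain passes by default.

```python
# A naive, static "known bads" filter (hypothetical domains for illustration).
KNOWN_BAD_DOMAINS = {"malware-host.example", "phish-login.example"}

def is_allowed(sender_domain: str) -> bool:
    """Block only domains already on the static list."""
    return sender_domain not in KNOWN_BAD_DOMAINS

print(is_allowed("phish-login.example"))            # False: already known bad
print(is_allowed("brand-new-domain-0421.example"))  # True: no reputation, waved through
```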

Now, with automation, attackers are generating new domains faster than ever before, and AI tools increase the ROI, speed, and scale of both domain generation and mass mailing. This is a clear example of how AI can be used to exploit vulnerabilities in the technology itself, but it can also be used to play on the vulnerabilities in our own brains when we receive these emails.
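
As an illustration of that speed, here is a hedged sketch of a simple seeded domain-generation routine of the kind attackers automate; the seed, label length, and .example TLD are all made-up assumptions, not any real campaign's tooling.

```python
import hashlib

def generate_domains(seed: str, count: int, tld: str = ".example") -> list[str]:
    """Deterministically mint fresh, never-before-seen domain labels."""
    domains, value = [], seed
    for _ in range(count):
        value = hashlib.sha256(value.encode()).hexdigest()
        domains.append(value[:12] + tld)  # 12 hex chars make a plausible label
    return domains

# Thousands of brand-new domains, none of which any static list has ever seen.
print(generate_domains("campaign-2020", 5))
```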

AI can make fake communication seem real, enhancing social engineering tactics by increasing the apparent veracity of a spoofed email. Consider how AI already suggests copy edits and layout improvements to make marketing campaigns more successful, whether for coupons, newsletters, or other electronic content.

This same technique is within reach of attackers, who can leverage email marketing principles and AI to supercharge subject lines, copy, and email bodies that respond quickly to whatever is trending in the news.
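
To see how low the barrier is, here is a minimal sketch using the open source Hugging Face transformers library with the small GPT-2 model; the prompt and generation parameters are illustrative assumptions, not anyone's actual tooling.

```python
from transformers import pipeline

# Off-the-shelf text generation: no custom training required.
generator = pipeline("text-generation", model="gpt2")

drafts = generator(
    "Subject: Your account statement is ready. Dear customer,",
    max_length=60,           # keep each draft short, like real email copy
    do_sample=True,          # sample so the variants differ
    num_return_sequences=3,  # produce several variants to pick from
)
for draft in drafts:
    print(draft["generated_text"], "\n---")
```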

Also, self-learning AI that scrapes social media sites will further reduce the time attackers need for research and reconnaissance before impersonating known individuals, by suggesting copy that mimics their style, tone, and voice. Indeed, AI text generators like GPT-3, which can already write basic poetry and prose quite well, will open endless opportunities for crafty attackers willing and able to capitalize on these new technologies.

Let's not forget about deepfakes, the notoriously realistic content that AI can produce using generative adversarial networks, and which can plausibly be used in a variety of creative ways, giving attackers yet another leg up. Attackers can use AI-enabled deepfake technology to create profile images, and even entire social media profiles, of non-existent but realistic-looking people, further enhancing the apparent authenticity of what is in reality a spoofed communication, or even to generate fake audio and video. In fact, Forrester estimates that AI-enabled deepfakes will cost businesses a quarter of a billion dollars in losses in 2020. As nation states and advanced cybercriminals alike gain access to, and actively develop, offensive AI, the defensive response becomes urgent.
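
For intuition, here is a toy PyTorch sketch of the generative adversarial setup behind deepfakes; real deepfake models are vastly larger, and the layer sizes here are assumptions chosen only to show the generator-versus-discriminator structure.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps random noise to a fake sample (here, a flat 64-value 'image')."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 128), nn.ReLU(),
                                 nn.Linear(128, 64), nn.Tanh())
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores how real a sample looks (closer to 1 = real, 0 = fake)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                                 nn.Linear(128, 1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
fake = G(torch.randn(8, 16))   # the generator forges a batch of samples
realism_scores = D(fake)       # the discriminator judges them
print(realism_scores.shape)    # torch.Size([8, 1])
# Training pits the two networks against each other until the fakes fool the judge.
```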

Cybersecurity teams need to implement defensive AI that detects and responds to threats in an immediate and autonomous fashion. The emergence of offensive AI is fast approaching, so it's not a matter of if these attacks will occur, but when organizations will be exposed to an AI-driven cyber attack. As criminals learn to supercharge their attacks, organizations must stay a step ahead of cybercriminals.
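
As a hedged illustration of what "defensive AI" can mean in practice, here is a minimal anomaly-detection sketch using scikit-learn's IsolationForest over simple email metadata; the features and numbers are invented for the example, and a real deployment would learn from far richer signals.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-email features: [domain_age_days, links_in_body, sent_hour_utc]
normal_traffic = np.array([
    [3650, 2, 14], [2900, 1, 9], [4100, 3, 16], [3300, 0, 11], [2800, 2, 15],
])

detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_traffic)

suspicious = np.array([[2, 9, 3]])  # 2-day-old domain, many links, 3 a.m. send
print(detector.predict(suspicious))  # -1 flags an anomaly worth investigating
```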

Because while offensive AI is still emerging on the global threat landscape, defensive AI is a tried and true reality available today. To learn more about how to defeat offensive AI with defensive AI, please visit our website at syedlearns.co, and check out our blog for more threat finds from the wild. Remember, you can always reach out to us through our website or through social media. Thank you so much, and see you next time!
