Machine vs Machine: the rise of Artificial Intelligence in Cybersecurity
The scourge of the Internet is the plethora of viruses and malware we are targeted with on a daily basis. Many of us probably don't even know we're being attacked, and many don't realise they've already been compromised.
AI is capable of learning from vast amounts of data much quicker than humans, and it can pick out patterns across many more data sources than even the most highly skilled humans could ingest, let alone comprehend. This makes AI a formidable tool for fighting cybercrime.
However, it can also be a formidable foe.
As the WEF mentioned, the growth of AI as an adversarial tool is already happening. Artificially intelligent viruses and malware are already fighting against their AI cybersecurity counterparts.
The initial application of AI in cyber threats tended to focus on parsing through mountains of seemingly unconnected data to find patterns and relationships that help an attacker find a weak spot. That weak spot could be a way into a corporation's network, a pattern of human behaviour, or simply a crackable password.
However, this is already evolving into more sophisticated methods, as Dark Reading mentions here.
As an example: in the fight against cybercrime, AI can build a picture of what a normal day looks like inside a company's network. By monitoring and building relationships between email traffic, web visits, chats, automated processes, physical access to buildings via key cards and much more, it creates a "fingerprint" of what that network's behaviour looks like. This is an activity that is almost impossible for a human to perform reliably and accurately.
AI cybersecurity defences would then look for a change in that fingerprint, an anomaly that would indicate someone or something is in the network doing something it shouldn't be doing.
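To make the fingerprinting idea concrete, here is a minimal sketch of baseline-and-anomaly detection. It is purely illustrative (the metric names and daily counts are invented, and real systems use far richer models than a simple standard-deviation check): each metric's "normal" range is summarised from historical data, and today's activity is flagged when it drifts too far from that baseline.

```python
from statistics import mean, stdev

def build_fingerprint(baseline):
    """Summarise 'normal' activity as (mean, spread) per metric."""
    return {metric: (mean(vals), stdev(vals)) for metric, vals in baseline.items()}

def is_anomalous(fingerprint, observed, threshold=3.0):
    """Flag any metric drifting more than `threshold` standard deviations."""
    flags = {}
    for metric, value in observed.items():
        mu, sigma = fingerprint[metric]
        flags[metric] = abs(value - mu) > threshold * sigma if sigma else value != mu
    return flags

# Hypothetical daily counts from a company network (illustrative only)
baseline = {
    "emails_sent": [1200, 1180, 1250, 1210, 1195],
    "badge_swipes": [340, 355, 330, 345, 350],
}
fp = build_fingerprint(baseline)

today = {"emails_sent": 1215, "badge_swipes": 900}  # badge-swipe spike
print(is_anomalous(fp, today))
# → {'emails_sent': False, 'badge_swipes': True}
```

A real defence would track hundreds of correlated signals and learn their joint distribution, but the principle is the same: model "normal", then alert on deviation.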
However, AI malware could now do the same fingerprinting to learn how to move around the network undetected.
The last example I'll leave you with is one of deception. As you may have seen in Google's AI art experiments, deepfakes and even every day when you use your Amazon Echo or Google Home, AI is getting very good at pretending to be human.
This is a very real and very scary use of deepfake voices to scam an employee out of €220,000 by pretending to be his boss.
Resistant AI is one of a growing number of tech companies and startups that are taking AI cybersecurity to the next level. Having just received an additional €2.5M in funding, they are ramping up their team and capability to provide even more protection for sensitive financial systems.
The fascinating thing about Resistant AI is that they protect AI systems from AI compromise, using AI.
Many organisations are now using artificial intelligence to make decisions such as credit scoring or payment approvals, aiming for more accurate decisions, reduced risk and lower costs. But many of these systems are considered "black boxes": the decisions happen inside the system, yet the reasons for those decisions are hard to verify.
This makes them very attractive targets for hackers: if you can't easily verify a system's decisions, how can you detect when something is wrong?
Resistant AI is working in this area, using AI to monitor the behaviour of other AI systems and reduce fraudulent activity.
For us as consumers, it should mean fewer false positives (e.g. when your perfectly valid transaction gets declined and you have to call the bank), and also lower overall costs, because the companies we do business with manage their risk better.
This is definitely a company I'll be watching, because as I said in my Medium article, machine vs machine is what got me here today.
Do you work for Resistant.ai?