First Line of Defense for Cybersecurity: AI

[Cross-post by the author]

This article is the first of two parts dealing with cybersecurity. Part 1: First Line of Defense for Cybersecurity: AI. Part 2: Second Line of Defense for Cybersecurity: Blockchain.

The year 2017 wasn't a great year for cybersecurity; we saw a large number of high-profile cyber-attacks, including Uber, Deloitte, Equifax and the now-infamous WannaCry ransomware attack, and 2018 started with a bang too with the hacking of the Winter Olympics. The frightening truth about the rising tide of cyber-attacks is that most businesses, and the cybersecurity industry itself, are not prepared. Despite the constant flow of security updates and patches, the number of attacks continues to rise.

Beyond the lack of preparedness at the business level, the cybersecurity workforce itself is also having an incredibly hard time keeping up with demand. By 2021, there are estimated to be an astounding 3.5 million unfilled cybersecurity positions worldwide, and the current staff is overworked, averaging 52 hours a week, which is not an ideal situation for keeping up with non-stop threats.

Given the state of cybersecurity today, the implementation of AI systems into the mix can serve as a real turning point. New generations of malware and cyber-attacks can be difficult to detect with conventional cybersecurity protocols; they evolve over time, so more dynamic approaches are necessary. New AI algorithms use Machine Learning (ML) to adapt over time, making it easier to respond to these evolving risks.

Another great benefit of AI systems in cybersecurity is that they will free up an enormous amount of time for tech employees. AI systems can also help by categorizing attacks based on threat level. While there is still a fair amount of work to be done here, when machine learning principles are incorporated into your systems, they can adapt over time, giving you a dynamic edge over cyber criminals.
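As a rough illustration of that triage idea, the sketch below trains a small supervised classifier on hypothetical, analyst-labeled events and uses it to assign a threat level to a new one; the features, data and labels are invented purely for illustration.

```python
# Minimal sketch: categorizing security events by threat level with a
# supervised classifier. Features and training data are hypothetical,
# chosen only to illustrate ML-based triage.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each event: [failed_logins, bytes_out_mb, unusual_port, off_hours]
X_train = np.array([
    [0,    5, 0, 0],   # routine activity
    [2,   10, 0, 1],   # mildly suspicious
    [15, 200, 1, 1],   # likely exfiltration attempt
    [40,   2, 0, 1],   # brute-force pattern
])
y_train = ["low", "medium", "high", "high"]  # analyst-assigned threat levels

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Triage a new event automatically instead of queueing it for manual review.
new_event = np.array([[12, 150, 1, 1]])
print(model.predict(new_event))  # e.g. ['high']
```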

Unfortunately, there will always be limits to AI, and human-machine teams will be the key to solving increasingly complex cybersecurity challenges. As our models become effective at detecting threats, bad actors will look for ways to confuse them. This is a field called adversarial machine learning, or adversarial AI. Bad actors will study how the underlying models work and either try to confuse them (what experts call poisoning the models, or machine learning poisoning) or focus on a wide range of evasion techniques, essentially looking for ways to circumvent the models.
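To make the evasion side of this concrete, here is a minimal, hypothetical sketch: a toy email classifier is trained on invented features, and a flagged sample is nudged along the model's weight vector just far enough to flip its decision. Real adversarial ML is far more sophisticated, but the underlying principle is the same.

```python
# Minimal sketch of the evasion idea behind adversarial ML: an attacker who
# can probe a model shifts an input's features just enough to flip the
# model's decision. Features and data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per email: [num_links, has_attachment, sender_reputation]
X = np.array([[1, 0, 0.90], [8, 1, 0.10], [0, 0, 0.95], [6, 1, 0.20]])
y = np.array([0, 1, 0, 1])       # 0 = benign, 1 = malicious
clf = LogisticRegression().fit(X, y)

sample = np.array([[7.0, 1.0, 0.15]])
print(clf.predict(sample))       # [1] -> flagged as malicious

# Evasion: move the sample along the negative weight vector until the
# decision score drops below zero (the payload itself stays unchanged).
w, b = clf.coef_[0], clf.intercept_[0]
score = sample @ w + b
evasive = sample - ((score + 0.5) / (w @ w)) * w
print(clf.predict(evasive))      # [0] -> the same payload now slips through
```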

Four Fundamental Security Practices

With all the hype surrounding AI, we tend to overlook a very important fact. The best defense against a potential AI cyber-attack is rooted in maintaining a fundamental security posture that incorporates continuous monitoring, user education, diligent patch management and basic configuration controls to address vulnerabilities. Each is explained below.

Identifying the Patterns

AI is all about patterns. Hackers, for example, look for patterns in server and firewall configurations, use of outdated operating systems, user actions and response tactics, and more. These patterns give them information about network vulnerabilities they can exploit.

Network administrators also look for patterns. In addition to scanning for patterns in the way hackers attempt intrusions, they are trying to identify potential anomalies like spikes in network traffic, irregular types of network traffic, unauthorized user logins and other red flags.

By collecting data and monitoring the state of their network under normal operating conditions, administrators can set up their systems to automatically detect when something out of the ordinary takes place -- a suspicious network login, for example, or access through a known bad IP. This fundamental security approach has worked extraordinarily well in preventing more traditional types of attacks, such as malware or phishing. It can also be used very effectively in deterring AI-enabled threats.
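A minimal sketch of that baseline-driven monitoring, assuming a hypothetical event feed and threat list: it flags access from known bad IPs and traffic volumes far above what was observed under normal operating conditions.

```python
# Minimal sketch of baseline monitoring with a hypothetical event feed:
# flag access from known bad IPs and traffic spikes well above the
# normal-operations baseline.
from statistics import mean, stdev

KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}   # e.g. from a threat feed

# Traffic volume (MB/min) collected under normal operating conditions.
baseline = [42, 40, 45, 43, 41, 44, 39, 46, 42, 43]
mu, sigma = mean(baseline), stdev(baseline)

def check_event(src_ip: str, mb_per_min: float) -> list[str]:
    """Return the red flags this event raises, if any."""
    alerts = []
    if src_ip in KNOWN_BAD_IPS:
        alerts.append(f"access from known bad IP {src_ip}")
    if mb_per_min > mu + 3 * sigma:        # 3-sigma spike over baseline
        alerts.append(f"traffic spike: {mb_per_min} MB/min vs ~{mu:.0f} normal")
    return alerts

print(check_event("203.0.113.7", 300))  # both red flags fire
print(check_event("192.0.2.10", 44))    # [] -> normal activity
```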

Educating the Users

An organization could have the best monitoring systems in the world, but the work they do can all be undermined by a single employee clicking on the wrong email. Social engineering continues to be a large security challenge for businesses because workers can easily be tricked into clicking on suspicious attachments, emails and links. Employees are considered by many to be the weakest link in the security chain, as evidenced by a recent survey that found that careless and untrained insiders represented the top source of security threats.

Educating users on what not to do is just as important as putting security safeguards in place. Experts agree that routine user testing reinforces training. Agencies must also develop plans that require all employees to understand their individual roles in the battle for better security. And don't forget a response and recovery plan, so everyone knows what to do and expect when a breach occurs. Test these plans for effectiveness. Don’t wait for an exploit to find a hole in the process.

Patching the Holes

Hackers know when a patch is released, and in addition to trying to find a way around that patch, they will not hesitate to test if an agency has implemented the fix. Not applying patches opens the door to potential attacks -- and if the hacker is using AI, those attacks can come much faster and be even more insidious.

Checking Off the Controls

The Center for Internet Security (CIS) has issued a set of controls designed to provide agencies with a checklist for better security implementations. While there are 20 actions in total, implementing at least the top five -- device inventories, software tracking, security configurations, vulnerability assessments and control of administrative privileges -- can eliminate roughly 85 percent of an organization’s vulnerabilities. All of these practices -- monitoring, user education, patch management and adherence to CIS controls -- can help agencies fortify themselves against even the most sophisticated AI attacks.
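As a simple illustration, the top five controls can be tracked as a basic audit checklist; the sketch below is hypothetical, with placeholder status values standing in for a real assessment.

```python
# Sketch: tracking the top five CIS controls as a simple audit checklist.
# The status values are hypothetical placeholders for a real assessment.
TOP_FIVE_CIS_CONTROLS = {
    "inventory_of_devices":     False,  # 1. inventory of authorized/unauthorized devices
    "inventory_of_software":    False,  # 2. inventory of authorized/unauthorized software
    "secure_configurations":    True,   # 3. secure configurations for hardware and software
    "vulnerability_assessment": True,   # 4. continuous vulnerability assessment and remediation
    "admin_privilege_control":  False,  # 5. controlled use of administrative privileges
}

gaps = [name for name, done in TOP_FIVE_CIS_CONTROLS.items() if not done]
if gaps:
    print("Controls still open:", ", ".join(gaps))
else:
    print("Top five CIS controls in place.")
```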

Challenges Facing AI in Cybersecurity

AI-Powered Attacks

AI/Machine Learning (ML) software has the ability to "learn" from the consequences of past events in order to help predict and identify cybersecurity threats. According to a report by Webroot, AI is used by approximately 87% of US cybersecurity professionals. However, AI may prove to be a double-edged sword as 91% of security professionals are concerned that hackers will use AI to launch even more sophisticated cyber-attacks.

For example, AI can be used to automate the collection of certain information — perhaps relating to a specific organization — which may be sourced from support forums, code repositories, social media platforms and more. Additionally, AI may be able to assist hackers when it comes to cracking passwords by narrowing down the number of probable passwords based on geography, demographics and other such factors. 
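A back-of-the-envelope illustration of that narrowing effect, using purely hypothetical numbers: a blind brute force over eight lowercase-and-digit characters faces trillions of candidates, while a wordlist seeded with demographic hints might cut the search to a few million.

```python
# Back-of-the-envelope illustration of why demographic hints shrink the
# password search space. All numbers are hypothetical.
import string

# Blind brute force over 8 characters drawn from lowercase letters and digits:
blind = len(string.ascii_lowercase + string.digits) ** 8
print(f"{blind:,} candidates")            # ~2.8 trillion

# With hints (pet names, local teams, birth years, common suffixes),
# an attacker can test a targeted wordlist instead:
base_words = 5_000          # scraped names, teams, hobbies
years = 60                  # plausible birth/anniversary years
suffixes = 20               # '!', '123', etc.
targeted = base_words * years * suffixes
print(f"{targeted:,} candidates")         # 6,000,000 -> millions, not trillions
```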

More Sandbox-Evading Malware

In recent years, sandboxing technology has become an increasingly popular method for detecting and preventing malware infections. However, cyber-criminals are finding more ways to evade this technology. For example, new strains of malware are able to recognize when they are inside a sandbox, and wait until they are outside the sandbox before executing the malicious code.

Ransomware and IoT

We should be very careful not to underestimate the potential damage IoT ransomware could cause. For example, hackers may choose to target critical systems such as power grids. Should the victim fail to pay the ransom within a short period of time, the attackers may choose to shut down the grid. Alternatively, they may choose to target factory lines, smart cars and home appliances such as smart fridges, smart ovens and more.

This fear was realized with the massive distributed denial-of-service attack that crippled the servers of services like Twitter, Netflix, The New York Times and PayPal across the U.S. on October 21, 2016. It was the result of an immense assault involving millions of Internet addresses and malicious software, according to Dyn, the prime victim of that attack; one source of the traffic for the attacks was devices infected by the Mirai botnet. The attack came amid heightened cybersecurity fears and a rising number of Internet security breaches. Preliminary indications suggested that countless Internet of Things (IoT) devices that power everyday technology like closed-circuit cameras and smart-home devices were hijacked by the malware and used against the servers.

A Rise of State-Sponsored Attacks

The rise of nation-state cyber-attacks is perhaps one of the most concerning areas of cybersecurity. Such attacks are usually politically motivated and go beyond financial gain. Instead, they are typically designed to acquire intelligence that can be used to obstruct the objectives of a given political entity. They may also be used to target electronic voting systems in order to manipulate public opinion in some way.

As you would expect, state-sponsored attacks are targeted, sophisticated, well-funded and have the potential to be incredibly disruptive. Of course, given the level of expertise and funding behind these attacks, they may prove very difficult to protect against. Governments must ensure that their internal networks are isolated from the internet and that extensive security checks are carried out on all staff members. Likewise, staff will need to be sufficiently trained to spot potential attacks.

Shortage of Skilled Staff

By practically every measure, cybersecurity threats are growing more numerous and sophisticated each passing day, a state of affairs that doesn't bode well for an IT industry struggling with a security skills shortage. With less security talent to go around, there's a growing concern that businesses will lack the expertise to thwart network attacks and prevent data breaches in the years ahead.

Too Many IT Systems

A modern enterprise has just too many IT systems, spread across geographies. Manual tracking of the health of these systems, even when they operate in a highly integrated manner, poses massive challenges. For most businesses, the only practical way to embrace advanced (and expensive) cybersecurity technologies is to prioritize their IT systems and cover those deemed critical for business continuity. Currently, cybersecurity is reactive: in most cases, it alerts IT staff about data breaches, identity theft, suspicious applications and suspicious activities. So cybersecurity is currently more of an enabler of disaster management and mitigation. This leaves a crucial question unanswered: what about not letting cybercrime happen at all?

The Future of Cybersecurity and AI

In the security world, AI has a very clear-cut potential for good. The industry is notoriously unbalanced: the bad actors get to pick from thousands of vulnerabilities to launch their attacks, and they deploy an ever-increasing arsenal of tools to evade detection once they have breached a system. While they only have to be successful once, the security experts tasked with defending a system have to stop every attack, every time.

Given the advanced resources, intelligence and motivation behind high-level attacks, and the sheer number of attacks happening every day, consistent victory eventually becomes impossible for the defenders.

The analytical speed and power of our dream security AI would be able to tip these scales at last, leveling the playing field for security practitioners who currently have to defend constantly, at scale, against attackers who can pick a weak spot at their leisure. With such an AI, even well-planned and well-concealed attacks could be quickly found and defeated.

Of course, such a perfect security AI is some way off. Not only would this AI need to be a bona fide simulated mind that can pass the Turing Test, it would also need to be a fully trained cybersecurity professional, capable of replicating the decisions made by the most experienced security engineer, but on a vast scale.

Before we reach the brilliant AI seen in science fiction, we need to go through some fairly testing stages, although these still have huge value in themselves. Some truly astounding breakthroughs are happening all the time. When AI matures as a technology, it will be one of the most astounding developments in history, changing the human condition in ways similar to, and bigger than, electricity, flight and the Internet, because we are entering the AI era.

Ahmed Banafa Named No. 1 Top Voice To Follow in Tech by LinkedIn in 2016

Read more articles at IoT Trends by Ahmed Banafa

