The false positive problem that has become prominent in the cybersecurity industry affects every piece of software designed to identify and warn against cyberattacks, security breaches, data leaks, and more. A false positive is simply an incorrect prediction: the software reports the positive outcome, which in cybersecurity programs typically means an alert indicating a threat, when no actual threat exists.
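To make the definition concrete, here is a minimal sketch of how an analyst might measure the problem from a day's alert counts. The numbers are purely illustrative, not drawn from any real deployment:

```python
# Hypothetical alert counts for one day of monitoring (illustrative only)
true_positives = 40    # alerts that flagged a real threat
false_positives = 60   # alerts raised when no threat existed
false_negatives = 5    # real threats the software missed entirely

# Precision: of all alerts raised, what fraction were genuine threats?
precision = true_positives / (true_positives + false_positives)

# With these numbers, more than half of all alerts are false alarms.
print(f"Precision: {precision:.0%}")  # Precision: 40%
```

In this illustration, 60 of the 100 alerts raised are false positives, so an analyst responding to every alert spends most of their time on non-events.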


While some may argue that it is better to receive alerts when there is no issue than to miss a real one, false positives have eroded responsiveness and problem management. When more than half of all alerts are false positives, it becomes difficult to respond appropriately to each one, because responders know there is a high chance any given alert is not genuine.


Not only do false positives impair the responsiveness of cybersecurity professionals, they also waste resources such as time, energy, and money. Every alert the software presents must be manually checked by an analyst, a task that becomes daunting, impractical, and ultimately unsustainable as alert volume grows.


At the heart of this problem are the complex, dynamic corporate networks that feature numerous IP addresses. Rather than attempt the futile task of individually monitoring data across a massive network, most cybersecurity programs integrate a universal, one-size-fits-all solution. By failing to account for nuanced differences between hosts and networks, these programs produce a high volume of false positives.


In order to streamline cybersecurity operations, limit wasted resources, and more accurately predict and address genuine threats, the most promising solution is the application of artificial intelligence (AI) and machine learning. According to Russell Gray, Director of Client Success, there are three waves of AI, and of these three, the third is best suited to tackle the problem of false positives.


Each AI wave has benefits and limitations. The first wave consists of static systems that implement fixed rules of operation and adhere to them without the ability to learn or develop further. Because first-wave systems require a significant amount of manual tuning and cannot adapt, they are poorly suited to cybersecurity; indeed, rule-based detection is largely responsible for many existing false positives.
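A toy sketch illustrates why static rules generate false alarms. The rule and threshold below are hypothetical, chosen only to show the shape of first-wave detection: a fixed rule fires identically in every context, so benign activity that happens to cross the threshold looks the same as an attack:

```python
# Minimal sketch of a first-wave, rule-based detector (hypothetical rule).
# The threshold is hand-tuned once and never changes, whatever the environment.
FAILED_LOGIN_THRESHOLD = 5

def first_wave_alert(failed_logins: int) -> bool:
    """Fires whenever the static rule is exceeded, with no notion of context."""
    return failed_logins > FAILED_LOGIN_THRESHOLD

# A shared terminal where users routinely mistype passwords trips the same
# rule as a brute-force attack -- a classic source of false positives.
print(first_wave_alert(8))  # True, even when the activity is benign
```

The only way to reduce false positives here is to manually retune the threshold, which is exactly the maintenance burden described above.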


Second-wave AI systems are more advanced in that they can learn from the data sets to which they have been granted access. While these systems are adept at evolving, they are limited by that data, meaning they can be misled by faulty inputs or manipulated by malicious agents who poison what they learn from.


The optimal wave of AI for cybersecurity purposes is the third. Also known as context-aware AI, these systems develop an understanding of their environment beyond limited data sets or static rules. Third-wave AI systems can adapt and grow as needed in response to their environment, which is ideal for cybersecurity. Integrating third-wave AI into cybersecurity programs can reduce the number of false positives over time and further optimize and enhance existing measures of detection and protection.
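The context-aware idea can be sketched with a simple per-host baseline. The statistics and threshold below are illustrative only, not any vendor's actual algorithm: instead of one universal rule, each host's normal activity is learned, and alerts fire only on deviations from that host's own baseline:

```python
# Toy sketch of context-aware detection: learn a per-host baseline and
# alert only on deviations from it. (Illustrative statistics only.)
from statistics import mean, stdev

def context_aware_alert(history: list[float], current: float, k: float = 3.0) -> bool:
    """Flag `current` only if it deviates sharply from this host's baseline."""
    baseline, spread = mean(history), stdev(history)
    return abs(current - baseline) > k * spread

# A build server that normally moves lots of data is judged against its own
# history, so heavy-but-normal traffic no longer raises a false alarm.
busy_host = [900, 950, 1000, 1050, 1100]   # MB/hour, typical for this host
quiet_host = [5, 6, 5, 7, 6]

print(context_aware_alert(busy_host, 1080))   # False: normal for this host
print(context_aware_alert(quiet_host, 1080))  # True: wildly abnormal here
```

The same observation (1080 MB/hour) is benign on one host and alarming on another, which is precisely the nuance a universal rule cannot capture.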


Reducing the number of faulty alerts through AI is an effective way to make the most of the skills of cybersecurity professionals and analysts. To better protect data, especially at the corporate level, third-wave AI should be incorporated into existing programs.