
Avoiding Risk Acceptance With Security Alerts


As the shortage of security professionals grows, most organizations struggle to attract and retain the talent needed to mitigate risk. And although automation has made investigating security alerts more efficient, organizations still face an overwhelming number of false positives: alerts generated by activity that is not malicious.

When managing false positives, there are three primary methods traditionally used:

    1. Resource-oriented: This approach adds headcount so there are more analysts to investigate alerts.
    2. Input-oriented: This approach disables inputs or alters correlation logic that generates alerts.
    3. Priority-oriented: This approach prioritizes security alerts into critical, high, medium and low. It targets the highest-priority alerts for triage and response until resources are exhausted.

The resource-oriented approach isn’t an option for most organizations due to the high cost and long implementation timelines. Those who have the budget will face the challenge of finding talented analysts and avoiding turnover.

Input-oriented and priority-oriented are both methods of controlling false positives by accepting unquantified risk. Modifying inputs and correlation logic to lessen false positives may prove effective but introduces the risk of missing malicious activity (false negative). Reducing the number of security alerts by ignoring lower-priority alerts or modifying a security product’s alert thresholds doesn’t reduce false positives enough to justify the risk of missing cybersecurity attacks. To address the shortcomings of resource-oriented and input-oriented false positive management, the priority-oriented approach remains prominent and is delivered as a feature by most security products.

Focusing resources on critical alerts at first seems intuitive. However, most security products lack the business context necessary to assign criticality. While some security products integrate with knowledge sources like asset lists and Active Directory to contextualize alert subjects, there is not a scalable way to provide context to the activity generating the alert. The priority-oriented approach accepts risk by ignoring lower-priority alerts that are never resolved. While this decision may have been made by the organization to reduce the number of alerts, it is unlikely diligence was performed to quantify the risk involved. As highlighted by the Target breach, even less “exciting” alerts determined to “not warrant immediate follow up” can lead to a significant breach.

Resolving alerts without accepting risk requires resolving every alert without crippling the effectiveness of security tools by changing alert thresholds or ignoring security events. Because none of the methods of managing false positives above will result in a no-accepted-risk outcome, three principles must be adopted:

    1. Priority is irrelevant until both the subject and action are reviewed by an analyst.
    2. Every false positive must be listed in a registry for trusted behavior.
    3. Every alert should be compared against this trusted behavioral repository to allow automated resolution of false positives (known-good events).
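As a minimal sketch of the second and third principles, a trusted-behavior registry might key each false positive on the alert's subject and action and auto-resolve future matches. (The field names, classes, and sample values here are illustrative assumptions, not any specific product's schema.)

```python
# Minimal sketch of a trusted-behavior registry (principles 2 and 3).
# Alert fields ("subject", "action") and the sample values are
# hypothetical, chosen only to illustrate the idea.

def make_key(alert):
    """Normalize an alert into a registry key: who did what."""
    return (alert["subject"], alert["action"])

class TrustedBehaviorRegistry:
    def __init__(self):
        self._known_good = set()

    def record_false_positive(self, alert):
        # Principle 2: every false positive is listed in the registry.
        self._known_good.add(make_key(alert))

    def auto_resolve(self, alert):
        # Principle 3: alerts matching known-good behavior resolve
        # automatically; everything else goes to an analyst.
        return make_key(alert) in self._known_good

registry = TrustedBehaviorRegistry()
# An analyst has already confirmed this backup job is benign.
registry.record_false_positive({"subject": "backup-svc", "action": "mass file read"})

print(registry.auto_resolve({"subject": "backup-svc", "action": "mass file read"}))    # True
print(registry.auto_resolve({"subject": "unknown-host", "action": "mass file read"}))  # False
```

Because the registry records the exact subject-action pair an analyst cleared, it never suppresses the same action from a different subject, which is what distinguishes it from simply lowering an alert threshold.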

The concept of “unprioritizing” is a unique challenge. Prioritization itself isn’t the problem; rather, it’s how prioritization is applied.

When every alert is aggregated at the same priority, alerts must be resolved in order of arrival. During triage, analysts with knowledge of the business and its processes supply the context required for proper prioritization.
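This "unprioritized" flow can be sketched as a first-in, first-out queue where priority exists only after an analyst reviews the alert with business context. (The queue contents, context mapping, and priority labels below are invented for illustration.)

```python
from collections import deque

# Illustrative sketch: every alert enters one FIFO queue with no
# preassigned priority; priority is assigned only during analyst triage.
queue = deque(["alert-001", "alert-002", "alert-003"])

# Hypothetical business context an analyst brings to triage.
context = {"alert-002": "crown-jewel asset"}

def triage(alert_id, business_context):
    """An analyst reviews subject and action, then assigns priority
    using business context -- not a product default."""
    if business_context.get(alert_id) == "crown-jewel asset":
        return (alert_id, "critical")
    return (alert_id, "low")

# Alerts are worked strictly in arrival order.
resolved = [triage(queue.popleft(), context) for _ in range(len(queue))]
print(resolved)
```

The point of the sketch is that "alert-002" only becomes critical after triage; nothing in the queue itself encodes priority.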

Until this context is added, the intent of the alert’s action is unknown. Machine learning (ML) and artificial intelligence (AI) are claimed to provide value at this step by detecting anomalous user activity, but anomalous does not mean malicious.

Additionally, ML and AI typically rely on cumulative risk scoring, requiring actions to accumulate a specified level of anomalous activity before triggering a detection. This adds the risk of missed detections when malicious behavior stays below that threshold.
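To make the threshold risk concrete, here is a toy cumulative-scoring model. (The per-action scores and the threshold of 100 are invented for this example and taken from no real product.)

```python
# Illustrative cumulative risk scoring: a detection fires only when the
# summed anomaly score crosses a fixed threshold. All numbers invented.
THRESHOLD = 100

def triggers_detection(action_scores):
    """Return True when accumulated anomaly scores reach the threshold."""
    return sum(action_scores) >= THRESHOLD

# A noisy but benign user can trip the detection (false positive)...
noisy_benign = [40, 35, 30]   # total 105 -> alert fires

# ...while a careful, low-and-slow attacker stays under it (missed detection).
low_and_slow = [20, 25, 30]   # total 75 -> no alert

print(triggers_detection(noisy_benign))   # True
print(triggers_detection(low_and_slow))   # False
```

The asymmetry is the point: the threshold filters noise by volume, not by intent, so an attacker who paces their actions never accumulates enough score to be seen.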

ML and AI may also exacerbate the problem of false positives with environment changes like new domain administrators or employees changing roles. ML and AI increase detection capabilities, but those detections also require triage by analysts.

Though an approach to resolve every alert regardless of priority requires a large initial investment, it does scale over time. Resolving every alert represents the only solution to manage security alerts without accepting unnecessary risk. While risk acceptance is a business decision, previous methods of false-positive reduction fail to present a reasonable alternative that detects attacks before a breach occurs. Resolving every alert provides an alternative to legacy approaches and moves the conversation to reasonable risk acceptance focused on stopping breaches versus controlling budgets.

 

By Randy Watkins | CTO, CRITICALSTART

Featured in Forbes | January 17, 2020
