Security threats such as distributed denial-of-service (DDoS) attacks disrupt businesses of all sizes, leading to outages and, worse, loss of user trust. These threats are a big reason why at Google we put a premium on service reliability that’s built on the foundation of a rugged network.

To help ensure reliability, we’ve devised some innovative ways to defend against advanced attacks. In this post, we’ll take a deep dive into DDoS threats, showing the trends we’re seeing and describing how we prepare for multi-terabit attacks, so your sites stay up and running.

Taxonomy of attacker capabilities

With a DDoS attack, an adversary hopes to disrupt their victim’s service with a flood of useless traffic. While this attack doesn’t expose user data and doesn’t lead to a compromise, it can result in an outage and loss of user trust if not quickly mitigated.

Attackers are constantly developing new techniques to disrupt systems. They give their attacks fanciful names, like Smurf, Tsunami, XMAS tree, HULK, Slowloris, cache bust, TCP amplification, JavaScript injection, and a dozen variants of reflected attacks. Meanwhile, the defender must consider every possible target of a DDoS attack, from the network layer (routers/switches and link capacity) to the application layer (web, DNS, and mail servers). Some attacks may not even focus on a specific target, but instead attack every IP in a network. Multiplying the dozens of attack types by the diversity of infrastructure that must be defended leads to endless possibilities.

So, how can we simplify the problem to make it manageable? Rather than focus on attack methods, Google groups volumetric attacks into a handful of key metrics:

  • bps (network bits per second) → attacks targeting network links
  • pps (network packets per second) → attacks targeting network equipment or DNS servers
  • rps (HTTP(S) requests per second) → attacks targeting application servers

This way, we can focus our efforts on ensuring each system has sufficient capacity to withstand attacks, as measured by the relevant metrics.
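This framing reduces capacity planning to a per-metric headroom check. As a minimal sketch (the metric names come from the taxonomy above, but the capacity figures are invented placeholders, not Google's real numbers):

```python
# Sketch: track provisioned capacity vs. an observed attack peak per metric.
# Capacity numbers below are illustrative placeholders only.

CAPACITY = {
    "bps": 5.0e12,   # network bits/second the links can absorb
    "pps": 1.0e9,    # packets/second network gear and DNS can handle
    "rps": 1.0e7,    # HTTP(S) requests/second app servers can serve
}

def headroom(metric: str, observed_peak: float) -> float:
    """Fraction of capacity remaining after absorbing the observed peak."""
    return 1.0 - observed_peak / CAPACITY[metric]

# A 2.5 Tbps attack against a hypothetical 5 Tbps of capacity leaves 50%.
print(f"{headroom('bps', 2.5e12):.0%}")  # -> 50%
```

The point of the abstraction is that each system only needs to be sized against the one metric that constrains it.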

Trends in DDoS attack volumes

Our next task is to determine the capacity needed to withstand the largest DDoS attacks for each key metric. Getting this right is a necessary step for efficiently operating a reliable network—overprovisioning wastes costly resources, while underprovisioning can result in an outage.

To do this, we analyzed hundreds of significant attacks we received across the listed metrics, and included credible reports shared by others. We then plotted the largest attacks seen over the past decade to identify trends. (Several years of data prior to this period informed our decision of what to use for the first data point of each metric.)

[Figure: largest DDoS attacks observed per year, by metric]

The exponential growth across all metrics is apparent, often generating alarmist headlines as attack volumes grow. But we need to factor in the exponential growth of the internet itself, which provides bandwidth and compute to defenders as well. After accounting for the expected growth, the results are less concerning, though still problematic.

Architecting defendable infrastructure

Given the data and observed trends, we can now extrapolate to determine the spare capacity needed to absorb the largest attacks likely to occur.
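The extrapolation amounts to fitting exponential growth to the yearly peaks and projecting forward. A rough sketch of that fit, using invented placeholder data points rather than our measurements:

```python
# Sketch: log-linear fit to yearly peak attack volumes, projected forward.
# The (year, Gbps) data points are illustrative placeholders only.
import math

peaks = [(2016, 623), (2017, 2500), (2018, 2500), (2019, 2500)]

xs = [year for year, _ in peaks]
ys = [math.log(gbps) for _, gbps in peaks]  # exponential growth -> linear in log space

# Ordinary least squares on log(volume) vs. year.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def projected_peak(year: int) -> float:
    """Projected peak attack volume (Gbps) for a future year."""
    return math.exp(intercept + slope * year)

print(f"{projected_peak(2021):.0f} Gbps")
```

Provisioning to the projected curve, plus margin for the unexpected, is what keeps overprovisioning and underprovisioning in balance.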

bps (network bits per second)
Our infrastructure absorbed a 2.5 Tbps DDoS in September 2017, the culmination of a six-month campaign that utilized multiple methods of attack. Despite simultaneously targeting thousands of our IPs, presumably in hopes of slipping past automated defenses, the attack had no impact. The attacker used several networks to spoof 167 Mpps (millions of packets per second) to 180,000 exposed CLDAP, DNS, and SMTP servers, which would then send large responses to us. This demonstrates the volumes a well-resourced attacker can achieve: it was four times larger than the record-breaking 623 Gbps attack from the Mirai botnet a year earlier. It remains the highest-bandwidth attack reported to date, leading to reduced confidence in the extrapolation.

pps (network packets per second) 
We’ve observed a consistent growth trend, with a 690 Mpps attack generated by an IoT botnet this year. A notable outlier was a 2015 attack on a customer VM, in which an IoT botnet ramped up to 445 Mpps in 40 seconds—a volume so large we initially thought it was a monitoring glitch!

rps (HTTP(S) requests per second)
In March 2014, malicious JavaScript injected into thousands of websites via a network man-in-the-middle attack caused hundreds of thousands of browsers to flood YouTube with requests, peaking at 2.7 Mrps (millions of requests per second). That was the largest attack known to us until recently, when a Google Cloud customer was attacked with 6 Mrps. The slow growth is unlike the other metrics, suggesting we may be underestimating the volume of future attacks.

While we can estimate the expected size of future attacks, we need to be prepared for the unexpected, and thus we over-provision our defenses accordingly. Additionally, we design our systems to degrade gracefully in the event of overload, and write playbooks to guide a manual response if needed. For example, our layered defense strategy allows us to block high-rps and high-pps attacks in the network layer before they reach the application servers. Graceful degradation applies at the network layer, too: Extensive peering and network ACLs designed to throttle attack traffic will mitigate potential collateral damage in the unlikely event links become saturated.
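The throttling behavior described above can be pictured as a token bucket applied per traffic class. This is a generic sketch of the technique, not Google's actual mechanism, and the rates are illustrative:

```python
# Sketch: token-bucket throttle for a traffic class, as a network ACL
# might apply when links near saturation. Rates are illustrative only.
class TokenBucket:
    def __init__(self, rate_pps: float, burst: float):
        self.rate = rate_pps     # sustained packets/second allowed
        self.capacity = burst    # maximum burst size in packets
        self.tokens = burst
        self.last = 0.0

    def allow(self, now: float) -> bool:
        """Refill tokens for elapsed time, then spend one if available."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # packet dropped: this class is over its rate

bucket = TokenBucket(rate_pps=2, burst=2)
# Two packets pass within the burst; a third at the same instant is dropped.
print([bucket.allow(0.0), bucket.allow(0.0), bucket.allow(0.0)])  # -> [True, True, False]
```

Because attack traffic in a saturated-link scenario is throttled per class rather than dropped wholesale, legitimate traffic sharing the link degrades gracefully instead of failing outright.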

For more detail on the layered approach we use to mitigate record-breaking DDoS attacks targeting our services, infrastructure, or customers, see Chapter 10 of our book, Building Secure and Reliable Systems.

Cloud-based defenses

We recognize the scale of potential DDoS attacks can be daunting. Fortunately, by deploying Google Cloud Armor integrated into our Cloud Load Balancing service—which can scale to absorb massive DDoS attacks—you can protect services deployed in Google Cloud, other clouds, or on-premises from attacks. We recently announced Cloud Armor Managed Protection, which enables users to further simplify their deployments, manage costs, and reduce overall DDoS and application security risk.

Having sufficient capacity to absorb the largest attacks is just one part of a comprehensive DDoS mitigation strategy. In addition to providing scalability, our load balancer terminates network connections on our global edge, only sending well-formed requests on to backend infrastructure. As a result, it can automatically filter many types of volumetric attacks. For example, UDP amplification attacks, SYN floods, and some application-layer attacks will be silently dropped. The next line of defense is the Cloud Armor WAF, which provides built-in rules for common attacks, plus the ability to deploy custom rules to drop abusive application layer requests using a broad set of HTTP semantics.
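A custom-rule engine of this kind can be sketched as deny predicates evaluated over request attributes. The rule names and fields below are invented for illustration; this is not Cloud Armor's actual rules language:

```python
# Sketch: evaluate deny rules over HTTP request attributes, loosely
# mirroring a WAF's custom rules. Rules and field names are illustrative.
from typing import Callable

Request = dict  # e.g. {"path": "/login", "headers": {"user-agent": "..."}}

# Each rule is (name, predicate); the first matching predicate denies.
RULES: list[tuple[str, Callable[[Request], bool]]] = [
    ("block-empty-ua", lambda r: not r["headers"].get("user-agent")),
    ("block-probe-paths", lambda r: r["path"].startswith("/admin")),
]

def decide(request: Request) -> str:
    for name, matches in RULES:
        if matches(request):
            return f"deny({name})"
    return "allow"

print(decide({"path": "/admin/config", "headers": {"user-agent": "x"}}))
# -> deny(block-probe-paths)
```

Evaluating such rules at the edge means abusive requests are rejected before they consume any backend capacity.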

Working together for collective security

Google works with others in the internet community to identify and dismantle infrastructure used to conduct attacks. As a specific example, even though the 2.5 Tbps attack in 2017 didn’t cause any impact, we reported thousands of vulnerable servers to their network providers, and also worked with network providers to trace the source of the spoofed packets so they could be filtered.

We encourage everyone to join us in this effort. Individual users should ensure their computers and IoT devices are patched and secured. Businesses should report criminal activity, ask their network providers to trace the sources of spoofed attack traffic, and share information on attacks with the internet community in a way that doesn’t provide timely feedback to the adversary. By working together, we can reduce the impact of DDoS attacks.


Source: Google Cloud Blog