DNS Load Balancing is DevOps' Secret Weapon
October 04, 2017

Steven Job
Tiggee

Load balancing at the DNS (Domain Name System) level has been around for decades, but it has only become crucial recently as technology moves to the cloud. DNS is well suited to managing cloud systems because it operates independently of hosting providers: DNS records can be configured through a third-party provider to control how much traffic, and what kinds, reach particular endpoints.

With the growth of cloud-based services, infrastructure is increasingly managed as code rather than as hardware in a data center. That means a change to a single DNS record can knock your application or website offline, and this has happened more than once.

Conversely, you can leverage DNS records to optimize traffic flowing to your domains or servers. GeoDNS and network monitoring can supercharge your traditional DNS management, paving the way for automated DNS management.

Automated Load Balancing

The latest craze in both SaaS and DevOps has been automation, from chatbots to task automation. The DNS industry has been offering basic automation for roughly a decade now in the form of DNS failover. This service automatically reroutes traffic away from non-responsive endpoints to healthy ones.
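The failover behavior described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: the provider's health checks mark each endpoint up or down, and queries are answered with the highest-priority healthy endpoint. The IP addresses are documentation placeholders.

```python
def failover_answer(endpoints):
    """Return the IP of the highest-priority healthy endpoint.

    endpoints: list of (ip, healthy) tuples, ordered by priority.
    Returns None if every endpoint is down (a real provider might
    serve a last-resort record instead).
    """
    for ip, healthy in endpoints:
        if healthy:
            return ip
    return None

# The primary has failed its health check, so queries are answered
# with the secondary instead.
pool = [("203.0.113.10", False), ("203.0.113.20", True)]
print(failover_answer(pool))  # → 203.0.113.20
```

In practice the health checks run continuously from multiple vantage points, so a flapping endpoint is pulled out of rotation within a check interval or two.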

DNS load balancing uses similar techniques to test the availability and performance of endpoints. But load balancing also allows you to send traffic to more than one endpoint simultaneously. You can even set different weights for each endpoint. Load balancing is commonly used by organizations that want to use more than one vendor, say for a multi-CDN implementation.
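Weighted answering is easy to picture as a sketch. The snippet below is illustrative only (the hostnames and the 70/30 split are made up): each incoming query is answered with one endpoint drawn from the pool in proportion to its weight.

```python
import random

def weighted_answer(pool, rng):
    """Answer one DNS query by picking an endpoint from a weighted pool.

    pool: list of (hostname, weight) pairs; weights need not sum to 100.
    rng: a random.Random instance, injected so results are reproducible.
    """
    hosts, weights = zip(*pool)
    return rng.choices(hosts, weights=weights, k=1)[0]

# A 70/30 split between two CDN endpoints (hostnames are illustrative).
pool = [("cdn-a.example.net", 70), ("cdn-b.example.net", 30)]
rng = random.Random(42)
answers = [weighted_answer(pool, rng) for _ in range(10_000)]
print(answers.count("cdn-a.example.net"))  # roughly 7,000 of 10,000 queries
```

Adjusting a weight in the pool shifts the traffic split without touching either vendor's configuration, which is what makes this useful for multi-CDN setups.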

This method offers the flexibility to use more than one provider and take advantage of different service offerings. For example, you may want a particular CDN for video streaming, but it may not perform well in some regions. You can use DNS load balancing to serve each vendor only where it performs strongest.

You can even use load balancing to cut costs! Most vendors charge drastically different prices depending on the region, but you can work around this by creating location-specific rules that favor lower-cost providers. Using more than one vendor also reduces the risk of a single-provider outage.
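A location-specific rule set like the one just described boils down to a lookup with a fallback. The sketch below is a simplified stand-in for a GeoDNS rule table; the region names and CDN hostnames are invented for illustration.

```python
def route_by_region(region, rules, default):
    """Return the CDN hostname to answer with for a user's region.

    rules: region -> hostname overrides for regions where a cheaper or
    stronger vendor wins; every other region falls back to the default.
    """
    return rules.get(region, default)

# Serve the lower-cost vendor only in the regions where it performs well.
rules = {"eu-west": "cdn-b.example.net", "apac": "cdn-b.example.net"}
print(route_by_region("apac", rules, "cdn-a.example.net"))     # cdn-b.example.net
print(route_by_region("us-east", rules, "cdn-a.example.net"))  # cdn-a.example.net
```

Real GeoDNS services resolve the user's region from the resolver's (or client subnet's) IP before applying rules like these, but the routing decision itself is this simple.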

Cloud Migration

Load balancing is a viable asset during migrations, whether you're moving to more cloud-based systems or rolling out something new.

A well-planned strategy can ensure you maintain availability and limit performance degradation during the migration. You can use record pools, which are groups of endpoints that are served to users, and slowly increase the traffic sent to your cloud endpoints. If something goes wrong, only a subset of your end-users will be affected, and you can easily roll back your changes to a previous version.
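The staged ramp-up can be expressed as a series of record pools whose weights shift from the legacy endpoint to the cloud one. A minimal sketch, with invented hostnames, assuming weights are simple percentages:

```python
def migration_pool(cloud_share):
    """Record pool weights for one stage of a migration.

    cloud_share: percentage of traffic (0-100) sent to the new cloud
    endpoint; the remainder stays on the legacy endpoint.
    """
    if not 0 <= cloud_share <= 100:
        raise ValueError("cloud_share must be between 0 and 100")
    return [
        ("legacy.example.com", 100 - cloud_share),
        ("cloud.example.com", cloud_share),
    ]

# Ramp up in stages; if a stage misbehaves, roll back to the previous one.
for stage in (5, 25, 50, 100):
    print(migration_pool(stage))
```

Because each stage only touches DNS weights, rolling back is a single record change rather than an infrastructure operation.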

Roll Out

You can use the same strategy we just mentioned but combined with GeoDNS features to slowly roll out an application or feature to new audiences. GeoDNS services like GeoProximity and IP Filters allow you to create unique rules that dictate how your end-users are answered based on their location, ASN, or IP address.

Let's say you have a new app you want to roll out to your US users first and then to your European users. You can create an IP Filter for US-based users that returns the server hosting the application. Just make sure you have a "world" rule applied to a record that sends everyone else to a different endpoint.
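The IP Filter plus "world" rule pattern works like an ordered match list with a catch-all. The sketch below is a hypothetical model of that behavior, using Python's standard `ipaddress` module; the CIDR range and record names are placeholders, not real US address space.

```python
import ipaddress

def answer_for(client_ip, ip_filters, world_record):
    """Return the record for the first IP Filter that matches the client.

    ip_filters: ordered list of (cidr, record) pairs; the "world" rule
    catches every client no filter matches.
    """
    addr = ipaddress.ip_address(client_ip)
    for cidr, record in ip_filters:
        if addr in ipaddress.ip_network(cidr):
            return record
    return world_record

# Clients in the (hypothetical) US range get the new app's endpoint;
# the world rule keeps everyone else on the existing one.
us_filter = [("198.51.100.0/24", "app-us.example.com")]
print(answer_for("198.51.100.25", us_filter, "app-legacy.example.com"))
print(answer_for("203.0.113.9", us_filter, "app-legacy.example.com"))
```

Expanding the rollout to Europe is then just a matter of adding a second filter ahead of the world rule.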

The Big Picture

As the internet grows, the world gets smaller and organizations need to maintain performance no matter where their end-users are. DNS load balancing offers easy scalability and unparalleled customization. Now is the best time for DevOps to begin implementation, before the demand catches up.

Steven Job is President and Founder of Tiggee, the parent company of DNS Made Easy and Constellix.
