Fail Forward, Fail Fast - Part 2
August 24, 2017

Mike Cuppett
Author of "DevOps, DBAs, and DBaaS"

Being able to deploy distinct code elements quickly, matched with the ability to deploy the next release version or the previous version, facilitates moving forward, even on failure. The small program unit minimizes the production impact upon failure — maybe only a few people experience the problem instead of a large set of application users when large code deployments go wrong.
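
To make that concrete, here is a minimal sketch (in Python) of a pipeline stepping one small release forward on success or one release back on failure. The release registry and the activate() hook are hypothetical illustrations, not anything from the book:

```python
# Minimal sketch of roll-forward / rollback over small, versioned releases.
# The release registry and the activate() hook are hypothetical illustrations.

RELEASES = ["1.4.0", "1.4.1", "1.4.2"]  # ordered, independently deployable units


def activate(version: str) -> None:
    """Point traffic at the given release (symlink flip, container tag, etc.)."""
    print(f"now serving release {version}")


def deploy(current: str, direction: str = "forward") -> str:
    """Step one release forward (fail forward) or back (rollback); bounds checks omitted."""
    i = RELEASES.index(current)
    target = RELEASES[i + 1] if direction == "forward" else RELEASES[i - 1]
    activate(target)
    return target


current = deploy("1.4.1")          # fail forward to 1.4.2
current = deploy(current, "back")  # or drop back to 1.4.1 if it misbehaves
```

Because each unit is small, either move is cheap, and the blast radius of a bad release stays correspondingly small.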

Start with Fail Forward, Fail Fast - Part 1

Continuous Integration, Continuous Testing

Besides implementing small code segments, fail forward has proven successful for two additional reasons: continuous integration and continuous testing.

For DBAs whom you mentor, that means shifting direction from isolated islands of specific tasks to direct inclusion in the code-producing effort. Code, schema changes, and even job-scheduling tasks have to assimilate into the software code process, including the way DBA code is built, tested, version controlled, and packaged for release. Server clones, each built from the same script, eliminate platform variability, making application systems more resilient. For this reason, all software has to be managed without variability from start to finish. The only exceptions are new or modified code requested by the business or customers.
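
As one illustration of schema changes riding the same pipeline as application code, here is a self-contained sketch using Python's built-in sqlite3 module. It is a sketch only; the hard-coded migration list stands in for versioned .sql files checked into the repository:

```python
# Sketch: schema changes as version-controlled migrations applied by the
# pipeline, not by hand. sqlite3 keeps the example self-contained.
import sqlite3

MIGRATIONS = [  # ordered, reviewed, and released exactly like application code
    ("001_create_customer", "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)"),
    ("002_add_email", "ALTER TABLE customer ADD COLUMN email TEXT"),
]


def migrate(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_version")}
    for name, ddl in MIGRATIONS:
        if name not in applied:  # idempotent: safe to run on every deploy
            conn.execute(ddl)
            conn.execute("INSERT INTO schema_version (name) VALUES (?)", (name,))
    conn.commit()


migrate(sqlite3.connect(":memory:"))
```

Because every environment runs the same ordered migrations, the database carries no more variability than the application code does.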

The continuous flow of code into production may initially disorient DBAs because the release and postrelease support model has been a brutalizing cultural norm for decades. It is patterned like this: deployment night = pull an all-nighter and then get a little sleep before being called back into the office because the business is about to implode on itself (a total distortion of reality) if the problem is not fixed promptly.

After hours of troubleshooting, someone discovers that the C++ library was not updated on the production system, causing the updated code to run incorrectly against the older library files. In this case, the production system obviously was a huge variable, requiring separate upgrade work that was missed as the release progressed. Variability burns you nearly every time.
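
A simple guard against that kind of drift is to compare each server's installed library versions against a baseline before the release proceeds. The sketch below is illustrative only; the hard-coded inventories stand in for whatever your configuration-management tooling actually reports:

```python
# Illustrative drift check: flag any server whose library versions differ
# from the release baseline. Inventories here are hypothetical stand-ins.

BASELINE = {"libstdc++": "6.0.28", "openssl": "1.1.1"}

SERVERS = {
    "prod-app-01": {"libstdc++": "6.0.21", "openssl": "1.1.1"},  # stale library
    "qa-app-01":   {"libstdc++": "6.0.28", "openssl": "1.1.1"},
}

for host, installed in SERVERS.items():
    drift = {lib: (ver, BASELINE[lib])
             for lib, ver in installed.items() if ver != BASELINE[lib]}
    if drift:
        print(f"{host}: DRIFT {drift}")  # block the release until resolved
    else:
        print(f"{host}: matches baseline")
```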

When the production system has to remain in place, the best move is to clone the nonprod environments from the production server. Once the first nonprod server is built, the process can be automated to manage additional server builds.

When something like an upgrade to the C++ libraries is needed, test for backward compatibility; if the tests pass, upgrade production, clone production, and start the nonprod builds. When older code fails (perhaps due to deprecated commands or libraries) and forces the upgrade to be bundled into a larger release of all the code needing modification for the new libraries, very stringent change management processes must be followed. This scenario is becoming rarer because agile development and database management tools have been built to overcome these legacy challenges.
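
That flow might be orchestrated along these lines; every function here is a hypothetical stub for your own test runner and provisioning tooling, not an API from the book:

```python
# Hypothetical orchestration of the flow above: gate the library upgrade on a
# backward-compatibility test run, then upgrade production and rebuild nonprod
# from a clone of the upgraded production image. All functions are stubs.

def backward_compatible() -> bool:
    """Stand-in for running the existing regression suite against the new libraries."""
    return True  # replace with a real test-suite invocation


def upgrade_production() -> None:
    print("upgrading C++ libraries on production")


def clone_production_image() -> str:
    print("capturing image of the upgraded production server")
    return "prod-golden-image"


def rebuild_nonprod(image: str) -> None:
    print(f"rebuilding nonprod servers from {image}")


if backward_compatible():
    upgrade_production()
    rebuild_nonprod(clone_production_image())
else:
    print("not backward compatible: fold the upgrade into a coordinated release")
```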

Tools of the Trade

Agile development and DevOps have not only changed how code is built, tested, released, and supported, and how teams collaborate to be successful; new suites of tools have also been built specifically to transform the SDLC. There is a movement away from waterfall project management and its serialized code progression, starting with development and then proceeding to testing, integration, quality assurance, and production.

New opportunities to create applications in weeks or even days have led to products being produced and then held for release until the company can be officially formed and readied for business operations. That reality did not seem possible a mere 10 years ago.

Powerful tools have enabled businesses to move from "scrape together a little money, spend most of the money forming the company, start coding, go hungry, sleep in the car, beg for more money from family and friends, visit Mom and Dad to get laundry done and consume real food, and release version 1 in desperation, hoping to generate enough revenue to fix numerous bugs to be released as version 2" to an early-capture revenue model in which the application is built and readied to release and generate revenue, possibly even while the paperwork to form the company is underway.

Imagine releasing an application on the day the company comes into existence, possibly even recognizing revenue on day 1. Today, if the product is even modestly successful, the continuously growing revenue stream allows focus to shift toward new products instead of figuring out where the next meal comes from. Tools empower possibilities.

Best time ever for software startups!

Working with tools comes easily for DBAs. Logically developing process flows to incorporate database administrative tasks accelerates the SDLC. The biggest challenge may be selecting which tools are needed from among the plethora of popular DevOps tools.

As DBAs progress through the stages necessary to transition (becoming educated and sharing knowledge, learning that small failures are part of the plan, morphing their tasks into the mainstream workflow, and becoming tool experts), DevOps teams become stronger through shared experiences, technical skills, improved collaboration, and, most importantly, trust.

This blog is an excerpt from Mike Cuppett's book: DevOps, DBAs, and DBaaS

Mike Cuppett is a Business Resiliency Architect for a Fortune 25 healthcare organization and the author of "DevOps, DBAs, and DBaaS: Managing Data Platforms to Support Continuous Integration."
