Experimentation Done Right: It's All About the Data
May 01, 2019

Dave Karow
Split Software

By now, the concept of experimentation in software development is fairly well known. Most development teams understand at a high level the benefits that can be achieved through experimentation. Perhaps the most important of those is the ability to identify positive or negative impacts of a feature — in terms of both app performance and customer experience — earlier in the development process.

But many companies simply are not getting the most out of their efforts around experimentation or are reluctant to fully embrace it. Why? For starters, there's risk. And the degree of risk a company is willing to tolerate varies depending on their business.

Earlier this year, for example, Instagram decided to make a widespread change to the swipe navigation in its interface. The company was immediately flooded with negative feedback from many of its app users. In truth, the risk was relatively inconsequential to the ubiquitous social app: despite the user backlash, the change was rolled back and barely registered a blip on the company's radar. In other words, it is unlikely that Instagram lost money over the hiccup, and in media interviews spokespeople were able to simply brush it off as a "bug."

For another, less established company, however, taking that same risk with a major feature of a product or app could prove far costlier and more damaging. Nearly every feature release is designed to improve the software itself, but the reality is that not every release delivers that improvement, and the consequences can be severe: lost users, lost revenue, or worse, both.

The goal is to run worthwhile experiments without disrupting the things you need to do to run your business. And that is where many companies attempting experimentation run into problems.

Experimenting the Hard Way

Companies understand that protecting the user experience and releasing faster are both important; that is why they have invested in monitoring tools, alerting systems and continuous delivery pipelines. When it comes to experimentation, though, they often find that it is hard to do at scale in a repeatable fashion. The lack of tooling designed specifically for the experimentation problem set means someone has to step up and do a lot of ad-hoc data science work to make sense of results. The need and interest are there, as most teams know intuitively that better data will help them learn what works and what does not, earlier in the development process.

Your business may be doing many of the things one would associate with experimentation, and even reaping some benefits. Maybe you are performing canary rollouts or A/B tests, which have allowed you to accelerate feature releases or measure the impact of features. The problem is, the operational cost of doing those things can be high. You may be able to run a few experiments, but you will not be able to run very many, because the process is simply too difficult to repeat ad hoc. As a result, features are actually being released more slowly because of the operational cost, or because of the dependence on scarce data science resources for one-off analysis of the results. The rate of innovation slows, reducing the value of experimentation.
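To make the canary-rollout idea concrete, here is a minimal sketch of percentage-based exposure using deterministic hashing. The function name, feature name and user IDs are hypothetical, not any particular vendor's SDK; the key property illustrated is that the same user stays in the same bucket as the rollout percentage ramps up.

```python
import hashlib

def in_rollout(user_id: str, feature: str, percentage: float) -> bool:
    """Deterministically assign a user to a percentage rollout.

    Hashing user_id together with the feature name yields a stable
    bucket in [0, 100), so a given user's assignment never flips as
    the rollout percentage is gradually increased.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 100.0  # stable bucket in [0, 100)
    return bucket < percentage

# Ramp a hypothetical "new-checkout" feature to 10% of users.
exposed = [u for u in ("user-1", "user-2", "user-3")
           if in_rollout(u, "new-checkout", 10.0)]
```

Because bucketing is derived from the user and feature alone, ramping from 10% to 20% only adds users; nobody who already saw the feature is silently switched back, which keeps the experiment's exposure data consistent.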

Data is the Key

If businesses instead approach experimentation in a way that controls risk and streamlines the ingestion and analysis of results data, they can experiment far more effectively. The key is access to data: how easily can you observe changes to key metrics when you conduct experiments? When data is siloed or must be manually curated for every experiment, it is less valuable and less actionable. You also run a greater risk of different teams drawing entirely different conclusions from the same data, because there is no common point of reference from which to make decisions. Lots of companies collect data today, but its relevance, in breadth and scope, is not what it needs to be to support actionable decisions.
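"Observing changes to key metrics" ultimately comes down to a statistical comparison between control and treatment. As one illustration, not a prescription of any particular platform's method, here is a stdlib-only two-proportion z-test on a conversion metric; the sample counts are made up for the example.

```python
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare conversion rates between control (a) and treatment (b).

    Returns (z, two_sided_p) using the pooled-proportion z-test,
    a normal approximation that is reasonable for large samples.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical experiment: 5.0% vs 5.8% conversion, 10,000 users per arm.
z, p = two_proportion_ztest(500, 10000, 580, 10000)
```

With a shared, automated computation like this feeding every experiment, two teams looking at the same exposure data cannot arrive at two different conversion numbers, which is exactly the common point of reference the paragraph above calls for.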

Companies must remove the roadblocks separating them from their data. After all, if you are going to make decisions about anything, you want to make them on the strength of relevant information. By making data ubiquitous rather than scarce, you can establish a common language for measurement, which is the first step toward meaningful experiments that benefit both the user and your business.

Purpose-built experimentation platforms marry actionable data to changes that multiple teams make, eliminating the overhead and inconsistency of ad-hoc data analysis. With reliable tooling in hand and a repeatable process for making contextual decisions, companies can more easily embrace experimentation at scale. As the cost of "turning the crank" and making sense of the data for each experiment goes down and the number of experiments goes up, these companies give themselves more opportunities to unlock innovation and course-correct quickly in the face of failing ideas.

Dave Karow is the Continuous Delivery Evangelist at Split Software
