This report contains the results of Redgate's latest annual survey of SQL Server database professionals, across a range of industries and company sizes. Over 700 organizations were asked whether they had adopted, or were planning to adopt, DevOps practices, and how many of them had applied the same principles to their databases. The report looks at how things have developed over the last 12 months, and at the key challenges and requirements driving DevOps adoption in 2018.
For over two decades now, software testing tool vendors have been tempting enterprises with the promise of test automation. In practice, however, most companies have never achieved the desired business results from their automation initiatives. Recent studies report that test automation rates average around 20% overall, and 26-30% among agile adopters. Read this paper to explore the six factors contributing to these dismal automation results, along with insights into the best path forward.
Although Artificial Intelligence (AI) is nothing new, applying AI techniques to software testing became feasible only in the past couple of years. Inevitably, AI will soon become part of our day-to-day quality engineering process. But before we get caught up in the exuberance of the technology, let's take a step back and assess how AI can help us achieve our quality objectives. It has been suggested that AI could be applied to tasks such as prioritizing testing and automation, generating and optimizing test cases, enhancing UI testing, reducing tedious analysis work, and helping to determine pass/fail outcomes for complex and subjective tests. But should AI be applied in these cases? And where else could it assist?
The Software Fail Watch is an analysis of software bugs found in a year’s worth of English language news articles. The result is an extraordinary reminder of the role software plays in our daily lives, the necessity of software testing in every industry, and the far-reaching impacts of its failure. The 5th Edition of the Software Fail Watch identified 606 recorded software fails, impacting half of the world’s population (3.7 billion people), $1.7 trillion in assets, and 314 companies. And this is just scratching the surface — there are far more software bugs in the world than we will likely ever know about. Download the report for a detailed analysis of 2017 software fails, including:
- The overall impact on businesses, users, time, and assets
- How the number and type of software fails compare to previous years
- Software fail trends within and across industries — finance, retail, consumer tech, services (e.g., internet, telecom), public services, healthcare, transportation, and entertainment
- The biggest stories, hacks, and glitches that made headlines or slipped under the radar
Read this paper to learn how AI can take software testing to the next level, including:
- Why AI is now more feasible — and critical — than ever
- What AI really is and how it’s best applied
- How AI can help us test smarter, not harder
- The role of smart testing technologies that aren’t technically “AI” (e.g., self-healing technologies)
The past few years have brought a sea change in the way that applications are architected, developed, and consumed — increasing both the complexity of testing and the business impact of software failures. Read this paper to learn:
- What Continuous Testing is
- Where traditional test automation falls short in modern development and delivery processes
- The 3 main differences between Continuous Testing and test automation
- How testers can address each of the 3 key elements of Continuous Testing