AI's Impact on Frontend Development: Feature Development, Accessibility, QA and Testing
September 24, 2024

Winston Hearn
Honeycomb

It seems that 2024 is the year AI is infiltrating the world. Everywhere you turn, companies are announcing AI features, rolling out new AI models for specific use cases, and promising that AI will revolutionize everything. GitHub led the way with its Copilot product, which helps developers automate writing code. A recent industry survey described Copilot as a tool that "is like pair programming with a junior programmer." Considering that frontend development has long been a maligned skillset in the industry, it's an open question how the domain will be affected as AI continues to mature.

Frontend developers are responsible for the code powering the interfaces that users directly interact with through mobile and web UIs. As such, there's enormous pressure on them to avoid bugs, consider all users, and ensure the user experience is reliable and usable. In the past decade, frontend devs have also seen a massive explosion in code complexity; the adoption of frameworks like Vue, Angular, and React means that devs are working in large codebases with many dependencies and abstractions. On top of this, their code executes on users' devices, which introduce an untold number of variables: different OS and browser versions, screen sizes, and CPU and memory availability.

With all these pressures at play, the question of whether AI is a useful tool for frontend development is critical; it's the difference between reducing toil and making a complex job even harder. Let's look at three responsibilities inherent in frontend engineering to see how AI could help or hinder the work.

Feature Development

For frontend developers working on web apps, adding or updating features is a core job responsibility. Every new feature is a mix of generic tasks (creating a component and setting up the boilerplate structure) and many more specific tasks (all of the business logic and UI). For most features, the generic tasks are at most 10% of the work; the bulk of the work is building the thing that doesn't exist yet.
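To make that split concrete, here's a minimal sketch in TypeScript/React; the component name, props, and feature are all hypothetical, invented for illustration. The shell below is the generic 10% an AI assistant can plausibly generate; the comments mark where the feature-specific 90% lives.

```typescript
import React, { useState } from "react";

// Hypothetical props for an invented feature; a real feature's
// contract is specific to your product and codebase.
interface UserSettingsPanelProps {
  userId: string;
  onSave: (settings: Record<string, unknown>) => void;
}

// The generic ~10%: a component shell with local state and a handler.
export function UserSettingsPanel({ userId, onSave }: UserSettingsPanelProps) {
  const [isSaving, setIsSaving] = useState(false);

  const handleSave = () => {
    setIsSaving(true);
    // The other ~90% lives here and below: loading this user's
    // settings, validation, business rules, and the actual UI.
    onSave({});
    setIsSaving(false);
  };

  return (
    <section aria-labelledby="settings-heading">
      <h2 id="settings-heading">Settings</h2>
      {/* Feature-specific UI goes here */}
      <button onClick={handleSave} disabled={isSaving}>
        Save
      </button>
    </section>
  );
}
```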

When considering whether AI is useful for a given context, the right question is: how much of this task can I assume is shaped like something in the corpus of training data the AI model was built on? The details matter, too; an AI is not necessarily generating code modeled on your codebase (some newer products are starting to do this, but not all offer it); it is generating code modeled on all the codebases in its training data.

Any developer who has worked at a few companies (or even on a few teams in the same company) knows, to steal a phrase from Tolstoy, that every service's codebase is unhappy in its own way. These two factors, the global nature of the model's training and the unique specifics of your codebase, mean that automating feature development will be sketchy at best. Expect a lot of hallucinations: function arguments that make no sense, references to variables that aren't there, instantiations of packages that aren't available. If you're using AI products for this type of work, pay close attention. Don't ask it to write large blocks of code; start with smaller tasks. Review the code and ensure it does what you want. As with any tool, you can't expect perfection out of the box; you need to learn what works well and what doesn't, and you always need to be willing to sign off on the output, since, ultimately, it has your name on it.
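To make those failure modes concrete, here's an invented illustration of what hallucinated output can look like. Every identifier and package below is fabricated, and the snippet intentionally won't compile:

```typescript
// 1. Instantiating a package that isn't installed (or doesn't exist on npm).
import { FormValidator } from "@acme/magic-forms"; // hallucinated dependency

export function submitForm() {
  // 2. Referencing a variable that was never declared anywhere.
  const validator = new FormValidator(formSchema); // `formSchema` doesn't exist

  // 3. Arguments that make no sense for a real API: JSON.stringify
  // takes (value, replacer?, space?), not a URL and a retry count.
  const payload = JSON.stringify("/api/form", 3);

  return validator.validate(payload);
}
```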

Accessibility

The issues above can have a compounding effect which is worth considering. A key responsibility for web and mobile developers is ensuring that every person who wants to use their UI can, which requires ensuring that accessibility standards are met. If you're automating your feature development code, you're going to run into two issues with AI.

The first is that accessibility is not yet a domain where we can be prescriptive about how to make a specific feature accessible. Accessibility experts use their knowledge of a variety of user needs (kinds of disability and the UX that works best for each) to evaluate a given feature and determine how to make it accessible. There are some foundational rules: images and icons should have text descriptions, interactive elements should be focusable and in a reasonable order, and so on. But often, it requires skilled reasoning about what a user's needs are and how to effectively meet them through code. The contextual nature of accessibility means AI will, at best, get you started, and at worst, accidentally introduce barriers to access through the simple mistakes that human developers regularly make.
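For a rough sketch of those foundational rules in practice, here's a hypothetical component (invented for illustration; it covers the mechanical rules only, not the contextual reasoning described above):

```typescript
import React from "react";

// Illustrative only: the mechanical accessibility basics, not a
// substitute for evaluating real user needs.
export function ProductCard() {
  return (
    <article>
      {/* Images get meaningful text descriptions */}
      <img src="/img/widget.png" alt="Blue widget, front view" />
      <h3>Blue Widget</h3>
      {/* Interactive elements are natively focusable (a button, not a
          clickable div) and appear in a sensible tab order */}
      <button type="button">Add to cart</button>
      <a href="/widgets/blue">Full details</a>
    </article>
  );
}
```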

The second issue stems from the nature of model training: current-generation AI products cannot solve a problem that isn't accurately modeled in their training data. The unfortunate reality of the web is that it is horribly inaccessible at large, and the state of mobile apps is not much better. This is terrible for the millions of disabled people trying to navigate the modern world, and it is terrifying when we imagine a world with much more AI-generated code.

Disabled people constantly encounter experiences that are inaccessible and prevent them from achieving their goals on a site or in an app. Those experiences will only be amplified in a future built on AI trained on the code that exists now. The resulting models will be great at generating code modeled on the current state of web and mobile apps, which means the inaccessibility that is widespread today will be perpetuated and expanded through AI generation.

To prevent that, devs will need to be vigilant in testing to ensure accessibility is preserved and no regressions are shipped.
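One sketch of what that vigilance can look like in code, assuming a Jest setup with React Testing Library and the open-source jest-axe matcher (which runs the axe-core engine); the ProductCard component is the hypothetical one from the earlier sketch. Automated checks catch only a subset of accessibility issues, but they keep known regressions from shipping:

```typescript
import React from "react";
import { render } from "@testing-library/react";
import { axe, toHaveNoViolations } from "jest-axe";
import { ProductCard } from "./ProductCard";

expect.extend(toHaveNoViolations);

// Fails the suite if axe-core detects a known violation, such as an
// image missing alt text or a button without an accessible name.
test("ProductCard has no detectable accessibility violations", async () => {
  const { container } = render(<ProductCard />);
  const results = await axe(container);
  expect(results).toHaveNoViolations();
});
```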

QA and Testing

Thankfully, developers have a robust set of practices available to them to ensure known issues aren't replicated. Once you have a defined set of requirements and functionality, you can write tests to ensure the requirements are met and the functionality is available.

Here, we find a genuinely promising area for AI to improve the work. Testing is one of the toils of frontend development: the repetitive job of writing code that validates that a given component or feature does what it's supposed to. This is a shape of problem AI could actually be useful for. Tests are often highly repetitive in structure, their assertions are based on explicit requirements or functionality, and they execute in a protected environment that either succeeds or fails, so it's easy to know whether the code is doing what you want.
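That repetitive shape is easy to see in a typical component test file. Here's a sketch, assuming Jest and React Testing Library, reusing the hypothetical ProductCard from earlier:

```typescript
import React from "react";
import { render, screen } from "@testing-library/react";
import "@testing-library/jest-dom";
import { ProductCard } from "./ProductCard";

// Each test maps one-to-one onto an explicit requirement, and the
// suite simply passes or fails: a repetitive, well-bounded shape.
describe("ProductCard", () => {
  it("shows the product name", () => {
    render(<ProductCard />);
    expect(screen.getByRole("heading", { name: "Blue Widget" })).toBeInTheDocument();
  });

  it("renders an add-to-cart button", () => {
    render(<ProductCard />);
    expect(screen.getByRole("button", { name: "Add to cart" })).toBeInTheDocument();
  });

  it("links to the product detail page", () => {
    render(<ProductCard />);
    expect(
      screen.getByRole("link", { name: "Full details" })
    ).toHaveAttribute("href", "/widgets/blue");
  });
});
```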

Except, of course, many devs who have written tests have learned that a passing test doesn't always prove something works. Here, the comment that Copilot is "like pair programming with a junior programmer" is helpful to keep in mind. The key to success with this type of tool is extra attention to detail. AI products could be immensely helpful for writing test suites and improving code coverage, but extra care will be needed to ensure that every test actually tests and asserts the things it claims to. One thing the current generation of AI products is great at is coming up with edge cases: all the unexpected ways a thing can break. Ensuring these cases are covered and don't regress is a key goal of software testing, so this is a promising way to leverage these products.
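Here's a classic illustration of that trap (the fetchUserName module is hypothetical). The first test passes no matter what, because Jest finishes before the promise resolves and the assertion never runs; the second actually proves the behavior:

```typescript
import { fetchUserName } from "./api"; // hypothetical module under test

// BAD: the test returns immediately; the assertion inside .then()
// never executes before Jest reports a pass.
test("looks like it checks the user name (it doesn't)", () => {
  fetchUserName("u-123").then((name) => {
    expect(name).toBe("Ada");
  });
});

// GOOD: awaiting the promise means a wrong value fails the test.
test("actually checks the user name", async () => {
  await expect(fetchUserName("u-123")).resolves.toBe("Ada");
});
```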

Conclusion

AI is booming in popularity, but as these three examples show, frontend developers who adopt it may find real value alongside real risks. An AI is only as good as its underlying model. Knowing what kinds of problems a model was trained on, and what data is likely to be in its training corpus, is immensely helpful for judging its usefulness for a given task.

Winston Hearn is Senior Product Manager, Honeycomb for Frontend Development