Security in Bytes: Decoding the Future of AI-Infused Web Applications
April 09, 2024

Ty Sbano
Vercel

As companies grapple with the rapid integration of AI into web applications, questions of risk mitigation and security are top of mind. AI-infused coding and secure defaults offer the potential for improved security, but organizations are still challenged to take practical steps beyond writing intent into policies and procedures. Further, consumer-facing models used outside of work present their own unique challenges and must be managed as part of the growing attack surface.

Standing before a new wave of change and technology, the established security fundamentals should stay the same. For organizations, this includes effective policy and guidelines that provide a paved path of trusted AI models, proper contractual language that keeps your data from training public models, and an understanding of how to utilize open-source projects. For consumers, it's essential to recognize your privacy rights based on your geographical location, along with the privacy models you opt in to online, since usage patterns are unique to each individual. As our understanding of these technologies expands, tailored rules and guidelines will follow suit, allowing us to safely harness AI benefits like faster iteration for organizations and enhanced user experiences for consumers.
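To make the paved path concrete, one option is to encode the list of trusted providers and their contractual guarantees directly in code that all outbound AI calls must pass through. The sketch below is a minimal TypeScript illustration; the provider name, endpoint, and fields are hypothetical assumptions, not a real vendor agreement or any particular company's implementation.

```typescript
// A minimal sketch of a "paved path" registry: approved AI providers and the
// contractual guarantees a security team expects before data may be sent to them.
// Provider names, endpoints, and fields are illustrative, not real vendor terms.
type DataClass = "public" | "internal" | "pii";

interface ApprovedProvider {
  name: string;
  endpoint: string;              // the only endpoint engineers should call
  trainsOnCustomerData: boolean; // must be false per contract before approval
  dataRetentionDays: number;     // negotiated retention window
  allowedDataClasses: DataClass[];
}

const approvedProviders: ApprovedProvider[] = [
  {
    name: "example-llm-vendor",  // placeholder vendor
    endpoint: "https://api.example-llm-vendor.invalid/v1",
    trainsOnCustomerData: false,
    dataRetentionDays: 30,
    allowedDataClasses: ["public", "internal"], // PII never leaves the boundary
  },
];

// Gate outbound calls: only approved providers carrying allowed data classes pass.
export function isCallAllowed(providerName: string, dataClass: DataClass): boolean {
  const provider = approvedProviders.find((p) => p.name === providerName);
  return (
    provider !== undefined &&
    !provider.trainsOnCustomerData &&
    provider.allowedDataClasses.includes(dataClass)
  );
}
```

A gate like this turns policy intent (no training on customer data, no PII leaving the boundary) into a default that engineers cannot silently bypass.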

Bridging Ethics, Privacy Policy, and Regulatory Gaps

The ethical and secure use of data within web applications is an ongoing issue that companies and government bodies alike are confronting, as evidenced by President Biden's first-ever executive order on AI technology last fall. The foundation of AI data security rests largely on individual company privacy policies, which dictate how data, including data in AI applications, is managed and safeguarded. These policies, written in adherence to data privacy laws and regulations, encompass user consent and the security measures that protect consumer data, such as encryption and access controls. Companies are held accountable for the data handling practices outlined in their privacy policies. That accountability is crucial in the context of AI data use and security because it reinforces a framework for ethical practices within emerging technology that may not have direct regulatory requirements yet. This way, consumers can rest assured that their data is protected regardless of which application — AI or not — they may be interacting with.

While it can be difficult for organizations to know where to begin when it comes to ensuring the integrity of data use, a good place to start is to determine the purpose of the AI or large language model (LLM) to be trained, and then to determine whether the model will be used internally or externally. Internal models often involve sensitive company data, including proprietary information, which underscores the need for robust security to safeguard against threats.

On the other hand, external models require a focus on user privacy and transparency to build and maintain consumer trust. These consumer-facing models may carry different ethical considerations, such as bias within the models and broader societal impacts, as citizens interact with these technologies in a public-facing way. By differentiating between these two kinds of models, organizations can better navigate the data protection regulations and ethical factors associated with each context and ensure the responsible and effective use of AI for themselves and their consumers. An external model may simply power faster indexing and customer-service automation within a chatbot, adding significant business value with limited risk because it never needs sensitive PII.
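One lightweight way to act on the internal/external distinction is to derive default controls from the deployment context. The TypeScript sketch below is illustrative only; the field names and defaults are assumptions about what a reasonable policy might include, not a prescribed standard.

```typescript
// A sketch of how deployment context might drive different default controls.
// Field names and defaults are illustrative assumptions, not a standard.
type ModelContext = "internal" | "external";

interface ModelPolicy {
  context: ModelContext;
  allowProprietaryData: boolean; // internal models may see company data
  redactPII: boolean;            // strip personal data before it reaches the model
  logPrompts: boolean;           // keep audit trails where data is sensitive
  requireBiasReview: boolean;    // consumer-facing models get an ethics/bias review
}

function defaultPolicy(context: ModelContext): ModelPolicy {
  if (context === "internal") {
    return {
      context,
      allowProprietaryData: true, // sensitive company data, so access must be tightly controlled
      redactPII: true,
      logPrompts: true,
      requireBiasReview: false,
    };
  }
  return {
    context,
    allowProprietaryData: false,  // a support chatbot needs no proprietary data
    redactPII: true,
    logPrompts: false,            // minimize retention of consumer input
    requireBiasReview: true,      // public-facing, so bias review is mandatory
  };
}
```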

Despite ongoing discussions on how this technology will be regulated, a holistic approach that combines privacy policy frameworks with technical facets and differentiations is currently one of the most effective ways to ensure the confidentiality, protection, and integrity of data within AI applications.

A New Era of AI and Cybersecurity

As technology rapidly evolves, so does the cyber threat landscape, with bad actors exploiting AI's capabilities to cause millions of dollars in damage. While the full extent of AI's impact on the cybersecurity landscape is yet to be determined, new guidance shows how adversaries can deliberately confuse or even "poison" AI systems to make them malfunction — and there's no foolproof defense that their developers can employ. Traditional cybersecurity measures often rely on predefined rules and signatures, inherently making them less adaptive to emerging threats and new technologies. AI-driven machine learning algorithms continuously learn from new data as it becomes available, allowing them to adapt and evolve as quickly as cyber threats become more sophisticated. Additionally, AI can process enormous amounts of data quickly, enabling it to detect patterns that traditional measures would miss. This gives organizations unique insight into dynamic attack patterns and allows them to respond proactively to potential threats in real time.
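To illustrate the difference between signature-based rules and adaptive detection, the sketch below contrasts a fixed pattern list with a simple learned baseline of request rates. The baseline uses a rolling mean and standard deviation (Welford's algorithm) as a stand-in for the far richer models real AI-driven detection would use; the patterns and thresholds are assumptions for illustration only.

```typescript
// Static, predefined signatures: effective only against attacks seen before.
const signatureRules = [/union\s+select/i, /<script>/i];

function matchesSignature(payload: string): boolean {
  return signatureRules.some((rule) => rule.test(payload));
}

// Adaptive baseline: flag a client whose request rate deviates sharply from
// what has been observed so far, even if no known signature matches.
class RateBaseline {
  private count = 0;
  private mean = 0;
  private m2 = 0; // running sum of squared deviations (Welford's algorithm)

  observe(requestsPerMinute: number): boolean {
    this.count += 1;
    const delta = requestsPerMinute - this.mean;
    this.mean += delta / this.count;
    this.m2 += delta * (requestsPerMinute - this.mean);
    const std = this.count > 1 ? Math.sqrt(this.m2 / (this.count - 1)) : 0;
    // Anomalous once enough history exists and the rate sits far above the learned mean.
    return this.count > 10 && std > 0 && requestsPerMinute > this.mean + 3 * std;
  }
}
```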

Open-source projects can be a powerful tool in unlocking AI's value for threat intelligence and detection. Both consumers and organizations have long benefitted from open-source projects because of the collaboration and transparency the community offers. That emphasis on teamwork and visibility has also produced well-kept documentation of the security challenges brought about by emerging tech like AI. Open-source learning models provide real-time analysis of attack patterns and of how data is shared: imagine, for example, a world where you could share information across web application firewalls (WAFs) for Remote Code Execution (RCE), Distributed Denial of Service (DDoS), or even zero-day attack patterns, and everyone could benefit by blocking and shutting down malicious traffic before damage is caused. We're on the verge of an evolution toward practical, opt-in intelligence in which teams can submit data for faster processing and indexing at a scale previously limited to shared threat feeds or something as rudimentary as email threads. By offering a more agile and responsive defense against the broadening landscape of cyber threats, AI has the potential to provide a cybersecurity advantage that we're on the precipice of unlocking. We've already begun to see great community-driven efforts from the Open Web Application Security Project (OWASP), including its Top 10 security considerations for deploying LLMs, a list that will continue to iterate as we uncover more about the breadth of AI's capabilities.
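The opt-in sharing described above could look something like the following sketch: a WAF hashes a blocked payload and publishes the fingerprint to a community feed that other participants subscribe to. The feed endpoint, payload shape, and categories are hypothetical; no existing standard or product API is implied.

```typescript
import { createHash } from "node:crypto";

// Hypothetical shape of a shared indicator. Only a hash of the payload is
// shared, not the raw traffic, to avoid leaking sensitive request data.
interface SharedIndicator {
  fingerprint: string;                  // SHA-256 of the blocked payload
  category: "rce" | "ddos" | "zero-day";
  firstSeen: string;                    // ISO timestamp
}

function toIndicator(rawPayload: string, category: SharedIndicator["category"]): SharedIndicator {
  return {
    fingerprint: createHash("sha256").update(rawPayload).digest("hex"),
    category,
    firstSeen: new Date().toISOString(),
  };
}

async function publishIndicator(indicator: SharedIndicator): Promise<void> {
  // Hypothetical community feed endpoint; participation is explicitly opt-in.
  await fetch("https://threat-feed.example.invalid/v1/indicators", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(indicator),
  });
}
```

Subscribers could pull the same feed and block matching fingerprints at their own edge, which is the "everyone benefits before damage is caused" effect described above.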

Securing the Future

The rapid integration of AI into web applications has propelled organizations into a complex landscape of opportunities and challenges, especially when it comes to security for themselves and their consumers. If an organization has decided to pursue AI in its web application, a recommended first step is for the security team to maintain a close partnership with the engineering teams to observe how code is shipped. It's crucial for these teams to align on the Software Development Lifecycle (SDLC), ensuring a clear understanding of the effective touchpoints at every stage. This alignment will guide your practices, helping determine where the security team should facilitate meaningful reviews so that security practices are properly implemented in AI applications. Recognizing the dual nature of AI models — internal and external — also guides organizations in tailoring security measures: protecting sensitive proprietary company data in internal models and prioritizing user safety and privacy in consumer-facing ones. AI introduces a paradigm shift in the dynamic cyber threat landscape: it amplifies the attacks threat actors can mount against web applications while also offering organizations adaptive, real-time threat detection capabilities. Open-source projects bring transparency and collaboration to consumers and organizations working with AI, but it's paramount to balance innovation and risk tolerance.
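As one example of an SDLC touchpoint, a simple pre-merge check could flag changes that touch AI or LLM code paths so the security team is pulled into the review. The path patterns and workflow below are assumptions about a hypothetical repository layout, intended only to show how such a review trigger might be wired in.

```typescript
// Hypothetical pre-merge check: detect AI/LLM-related changes and request a
// security review. Patterns reflect an assumed repository layout.
const aiCodePatterns = [/\/llm\//, /prompt/i, /model-config/i];

function needsSecurityReview(changedFiles: string[]): boolean {
  return changedFiles.some((file) => aiCodePatterns.some((pattern) => pattern.test(file)));
}

// Example usage: a CI step could call this and require a "security-review"
// approval before merge when it returns true.
const changed = ["app/llm/prompt-builder.ts", "README.md"];
if (needsSecurityReview(changed)) {
  console.log("AI-related change detected: request security review before merge.");
}
```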

The combination of established security fundamentals, evolving AI capabilities, and collaborative open-source initiatives provides a roadmap for organizations to begin safely integrating AI into their web applications. The careful navigation of these intersections will open the door to a future where innovation and security coexist, unlocking the full potential of AI for organizations and ensuring a secure digital landscape for consumers.

Ty Sbano is Chief Information Security Officer at Vercel