Reading Time: 11 minutes

Generative AI promises vast opportunities for efficiency, so naturally, businesses are rapidly looking for ways to incorporate AI into their operations. One prominent option is to use large language models (LLMs), and one of the more convenient ways to engage with LLMs is through APIs. 

Developers can gain access to popular LLMs, such as those from Google or OpenAI, through APIs with free tier offerings, allowing them to integrate natural language processing (NLP) into their applications. This drives the adoption of generative AI for a wide array of applications and platforms. 

latest report
Learn why we are the Leaders in API management and iPaaS

APIs facilitate the transfer of important data between vendor LLMs and the enterprises that use them. However, this presents critical security vulnerabilities that need attention. When poorly implemented, APIs and LLMs risk inadvertently granting unauthorized access, exposing information, and being open to other security attacks.

The rise of LLMs

Artificial intelligence (AI) has been a long-standing area of interest in computer science, albeit ones that weren’t seen as feasible outside of specific applications. Historically, AI was confined to niche tasks in tightly controlled environments, lacking the versatility to undertake unrelated tasks. The introduction of generative models changed this to pave the way for the sophisticated LLMs prevalent in today’s generative AI domain.

LLMs are continuously improved with the aid of pre-trained models and refined data repositories. These enhanced LLMs are now available to the public, making this transformative technology accessible to a broader audience. LLMs can generate diverse content formats, including text, designs, audio, and video, which can be embedded with other applications to execute specific tasks such as data analysis and information retrieval.  

As a result, businesses are rushing to harness the power of generative AI to automate processes, enhance customer service, optimize research and development, and improve overall operational efficiency. 

While the potential of LLMs is evident, their integration into business operations has concurrently exposed vulnerabilities in security frameworks and shifted the goalposts for digital transformation. To navigate the complex landscape of digital security challenges inherent to accessing LLMs, organizations need a robust API management and security framework to address and mitigate these concerns preemptively.  

The relationship between APIs and LLM security risks

LLMs that support generative AI output require access to data. LLMs linked to sensitive company data present added security risks, but these risks can be reduced by using APIs and API management. 

The Open Worldwide Application Security Project (OWASP) is an international group focused on software security. They recently outlined 10 significant security risks associated with LLMs, which include: 

  1. Prompt injection
  2. Insecure output handling
  3. Training data poisoning
  4. Model denial of service
  5. Supply chain vulnerabilities
  6. Sensitive information disclosure
  7. Insecure plugin design
  8. Excessive agency
  9. Overreliance
  10. Model theft

Some of these vulnerabilities can be solved or monitored with API management resources. A common security risk that can be prevented using API management is prompt injection. This security risk affects LLMs that learn from prompts, like Bard or Chat GPT. APIs that limit the LLM’s access and enhance its functionality within well-defined bounds fix this problem. Without them, malicious users can manipulate LLMs into performing unauthorized actions. These actions include exposing sensitive information or acting as an unwitting agent.

To preemptively safeguard against this and other exploitations, businesses can institute API tokens to restrict the LLM’s access to external commands or parts of databases. Doing so reduces the risk of manipulative inputs or the inadvertent exposure of sensitive information. Organizations can mitigate LLM security risks with API management and API security capabilities. 

The breadth of these security threats also suggests that relying on LLM vendors alone for security can result in undesired outcomes. Organizations need a robust API management and API security strategy to reduce the risk of connecting to and using LLMs.

Examples of LLM-associated API failures 

On December 4, 2023, the Lasso Security research team successfully uncovered and exposed over 1500 API tokens within the GitHub and Hugging Face repositories. This security lapse gave unauthorized access to 723 accounts belonging to major entities such as Google, Meta, and Microsoft.

The gravity of this incident becomes apparent when considering the potential ramifications of the researchers harboring malicious intent. With this type of access, malevolent actors could have manipulated training sets, compromising the outputs of LLMs for millions of users globally. 

A Lasso Security researcher, Bar Lanyado, said he was “extremely overwhelmed with the number of tokens we could expose, and the type of tokens. We were able to access nearly all of the top technology companies’ tokens and gain full control over some of them.” 

Another example of API security failure occurred in 2023 when Microsoft AI researchers inadvertently granted unrestricted permissions to a staggering 38 terabytes of sensitive information, encompassing internal messages, secret keys, and more. This lapse occurred during the development of an LLM training set, which was inadvertently published to GitHub with erroneous permissions.

These incidents serve as poignant examples of critical oversights in API-related security measures, raising pertinent questions about the mechanisms leading to such lapses, particularly within the ranks of prominent technology enterprises. How do events like this happen, and how do they happen to some of the world’s biggest and best tech companies?

How API management tools improve security

Some AI vendors provide comprehensive security solutions with LLMs. One example is Einstein by Salesforce, where its Trust Layer works as a safety net for users. Developers of secondary AI applications built directly off Google Cloud or Open AI with multifaceted functionalities made available to the public may inadvertently prioritize features over security considerations. Recognizing this, organizations looking to integrate new AI derivations quickly are presented with the problem of implementing AI without compromising security and managing new APIs. 

Thankfully, there are solutions that provide the proper API protection, governance, and management to help your organization solve this problem. With MuleSoft’s Anypoint API Manager and Anypoint Flex Gateway, IT can configure and apply security policies to monitor and manage what information flows in and out of LLMs, protecting sensitive company data and personal information. 

Empowering organizations further, MuleSoft’s API Management capabilities grant the capability to enforce tailored authentication policies, implement rate limits, and leverage the newly introduced Policy Development Kit (PDK) for Anypoint Flex Gateway to simplify the creation of adaptable security policies. This comprehensive toolset facilitates user authentication, sensitive information identification, discovery of previously undetected deployed APIs, and enhanced control over digital operations and LLM utilization.

Learn more about the capabilities offered by API Manager and Flex Gateway in our tutorial.