At Tines, we take pride in both the flexibility and security of our platform: it’s what allows us to do things like safely connect to any HTTP API in the world, and seamlessly deploy in fully air-gapped environments. Similarly, our AI capabilities have been designed from the ground up to be secure and private, with no logging, internet transit, or training on your data.
When it comes to bringing your own AI model to Tines, while you still retain all the flexibility of the platform, you also gain some new security responsibilities. That’s why we’re sharing our approach to threat modeling and protecting an AI integration with Tines - to ensure that all our customers can use AI with confidence, no matter how the underlying AI model is integrated.
Securing your foundation model
When talking about a Large Language Model (LLM), most people are generally referring to the foundation model that has been trained on massive amounts of data. OpenAI’s family of GPT models, or Anthropic’s family of Claude models, are all widely-used foundation models that power generative AI tools like ChatGPT and Tines Workbench.
If you use a pre-trained foundation model from OpenAI, Anthropic, Google, Meta, or a similar reputable provider, then the rest of this section isn’t necessary for you. You can (and should) expect that these foundation models are trained with AI threats in mind. From a privacy perspective, you should make sure that your model provider respects your data protection requirements around using your data for training purposes.
If you’re building your own foundation model, it’s important to start with a framework for identifying and mitigating threats. Data source poisoning, inadvertent sensitive data disclosure, system prompt leakage, and prompt injection are all common examples, but not the only things to consider. We recommend using a framework like the OWASP Top 10 for LLM Applications or Google’s Secure AI Framework to build out a comprehensive threat model and set of mitigating controls.
Key recommendations:
When using a pre-trained foundation model, ensure your model provider meets your data protection and privacy requirements
When building your own foundation model, apply threat frameworks like OWASP Top 10 for LLMs, Google’s Secure AI Framework
Securing your model serving provider
While there are many ways to integrate an AI model into an application, Tines uses standardized REST APIs, whether we’re serving the underlying foundation model or you are. Securing a model serving provider is no different than securing any other HTTP API. For example, you need to think about authentication, authorization, encryption in transit, denial of service (DOS) attacks, and so forth.
The risks outlined in the OWASP Top 10 for Web Applications provide a strong starting point for thinking about threat modeling for your model serving provider. Of course, if you’re using a vendor like AWS Bedrock or Google Vertex AI to serve your models, many of those defenses are built into the vendor platform by default - but you should take time to read their documentation and understand how to properly enable their authentication and authorization features. You should also make sure that you understand if and when your requests (and the corresponding responses from the provider) may be logged within the provider’s systems.
Key recommendations:
Use OWASP Top 10 for Web Applications to guide threat modeling
Understand vendor security features and enable proper authentication/authorization
Review if and how the provider logs requests and responses
Securing your AI integration with Tines
At a minimum, you should ensure that all communications between your model serving provider and Tines are encrypted via HTTPS, even inside your internal networks. For Tines cloud customers (and self-hosted customers using a third-party provider API), Tines will automatically choose the strongest modern encryption algorithms supported by your provider, to protect your data even during transit over the internet. If you self-host your Tines instance in AWS, we recommend using a PrivateLink or hosting your model serving endpoint in the same VPC.
When generating a provider API key (e.g. from Anthropic or OpenAI) for your custom provider integration within Tines, you should follow the same security best practices for secret storage and rotation that you do for other sensitive authentication keys. The number of humans who have access to the API key should be limited, and if you decide to retain it outside of Tines, it should be secured in a secrets manager or enterprise password vault with access logging enabled.
Key recommendations:
Use the strongest encryption algorithms supported by your provider (default for Tines cloud customers and self-hosted customers using a third-party API)
For a self-hosted Tines instance in AWS, use PrivateLink or host your model serving endpoint in the same VPC
Follow best practices for API key storage and rotation, including limiting access and secure storage
How Tines secures your AI usage
Tines’ commitment to securing your data remains the same whether you’re using our AI provider or your own. We don’t use your data to train our AI, and you maintain full control over how long data is retained in workflow history, whether that data is an AI output or an API output. Our robust audit logging, role-based access control, and workflow change control mechanisms ensure that you can uphold strong governance and full visibility over how AI is utilized in your Tines instance.
As AI becomes increasingly integrated into critical workflows, we’re excited to see how our customers unlock new capabilities in orchestration and automation, while still maintaining peace of mind over how those capabilities are secured.
Happy building!