Data Protection & Security
AI data protection and security at Benchling
At Benchling, we’ve put the integrity of your data first from day one. As we bring AI innovation to Benchling, our core principles of trust, choice, transparency, and reliability remain unchanged. We’ve chosen to work with leading model providers that sign contractual agreements to protect your data. The AI models we use are hosted by Amazon Bedrock and OpenAI. Amazon Bedrock is an AWS service that hosts a variety of models, such as Anthropic’s Claude.
No surprises with AI
At Benchling, we prioritize transparency and are committed to keeping our customers fully informed about when and how AI is being used. LLM-powered features are provided at each customer’s discretion, on an opt-in basis. We will clearly communicate any use of AI in our services, ensuring that you are aware of how your data is being processed.

Data retention and data training
Benchling uses third-party model providers for many of its AI features, and we always require that these providers store data only temporarily or in-memory. Once processing is complete, all data must be securely purged. None of the third parties that develop (Anthropic) or host (OpenAI, Amazon Bedrock) the AI models we use are permitted to train models on customer data. Additionally, for Benchling’s own models, Benchling never uses customer data for training without the customer’s permission.

Committed to keeping your data secure
At Benchling, trust is in our DNA. We recognize that we have a significant responsibility to keep your data protected and secure, and we consider this a core commitment to our customers.
