Data Protection & Security

AI data protection and security at Benchling

At Benchling, we’ve put the integrity of your data first from day one. Benchling AI is built on the same enterprise-grade security that protects your R&D data in Benchling today. We partner only with leading model providers that sign contractual data-protection agreements and treat data security as a top priority.

No surprises with AI

At Benchling, we prioritize transparency and are committed to keeping our customers fully informed about when and how AI is being used. LLM-powered features are provided at our customers’ discretion, on an opt-in basis.
Data retention and data training

Benchling uses third-party model providers. They store data temporarily or in-memory only, and securely delete it after processing in accordance with contractual requirements. No third parties that either develop (OpenAI, Anthropic) or host (Amazon Bedrock, Google Vertex AI) the AI models we use are permitted to train models on customer data. Benchling’s own models never use customer data for training without customers’ permission.
Your data stays yours, always

At Benchling, trust is in our DNA. We recognize that we have a significant responsibility to ensure your data is both protected and secure, and we consider this a core commitment to our customers.

Committed to keeping your data secure