Fluidfit AI is built with security and governance at its core, combining advanced data protection measures with enterprise-level controls to keep customer data safe and compliant.
This article explains how Fluidfit protects customer data across different AI model categories and provides governance controls to manage AI usage securely.
Fluidfit aggregates hundreds of AI models, each with its own data-treatment policy, but they fall into three broad categories.
Fluidfit AI leverages industry-leading foundational models provided by organizations such as OpenAI and Google. When accessed via their enterprise APIs, these providers state in their Terms of Service that customer data is not used to train or improve their models.
These services may temporarily retain limited data strictly for purposes such as abuse detection and system monitoring, in accordance with their policies. Fluidfit AI interacts with all foundational models exclusively through secure API integrations, ensuring that customer data remains protected and governed under enterprise-grade security standards.
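For illustration, the sketch below shows the general shape of such a server-side, API-only integration using the official OpenAI Python SDK. The gateway class, its method, and the environment-variable handling are hypothetical examples, not Fluidfit's actual implementation.

```python
# Illustrative sketch only: the shape of a server-side, API-only
# integration with a foundational model provider. The gateway class
# and environment-variable names are hypothetical, not Fluidfit's code.
import os

from openai import OpenAI  # official OpenAI Python SDK


class FoundationalModelGateway:
    """Routes requests to a provider's enterprise API over TLS.

    Customer data is sent only in the API request itself; this layer
    does not log or persist prompts or responses.
    """

    def __init__(self) -> None:
        # Enterprise API keys come from the environment, never hard-coded.
        self._client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    def complete(self, prompt: str) -> str:
        response = self._client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content


if __name__ == "__main__":
    gateway = FoundationalModelGateway()
    print(gateway.complete("Summarize our data-retention policy in one line."))
```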
Fluidfit AI integrates with third-party hosted models provided by a range of vendors, each governed by its own Terms of Service. Examples include Remove.bg, Topaz Upscaler, Higgsfield.ai, and Kling. These terms vary by provider, particularly with respect to data retention policies and the use of customer data for model training.
We reviewed our primary model providers’ policies and identified differences in how input data is handled. In particular, some providers may allow the use of uploaded images for model improvement, while others do not make explicit commitments regarding training practices.
Our two largest third-party model providers temporarily retain uploaded images solely for purposes such as abuse detection and system monitoring, and typically purge this data within one hour. However, they do not provide explicit guarantees that such data is excluded from model training.
We therefore strongly recommend carefully reviewing the Terms of Service of any third-party specialty model before using it.
For open-source models hosted directly by Fluidfit AI, we maintain full control over the data environment and enforce strict security and governance standards. All images are protected during transmission and storage.
For these models, Fluidfit AI ensures that no customer images are used for model training under any circumstances. Any data retained within our systems is accessible only to the customer who owns it and is never used for training or fine-tuning.
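As a simplified illustration of customer-scoped storage, the sketch below encrypts each customer's images with a per-customer symmetric key, so stored data is readable only in that customer's context. The class, method names, per-customer keying scheme, and in-memory storage are hypothetical assumptions for illustration, not Fluidfit's actual storage layer.

```python
# Minimal sketch of customer-scoped storage, assuming a per-customer
# symmetric key. All names here are hypothetical illustrations, not
# Fluidfit's actual storage implementation.
from cryptography.fernet import Fernet


class CustomerImageStore:
    """Encrypts each customer's images with that customer's own key,
    so a stored blob is only decryptable in that customer's context."""

    def __init__(self) -> None:
        self._keys: dict[str, bytes] = {}                  # customer_id -> key
        self._blobs: dict[tuple[str, str], bytes] = {}     # (customer, name) -> ciphertext

    def put(self, customer_id: str, name: str, image_bytes: bytes) -> None:
        key = self._keys.setdefault(customer_id, Fernet.generate_key())
        self._blobs[(customer_id, name)] = Fernet(key).encrypt(image_bytes)

    def get(self, customer_id: str, name: str) -> bytes:
        # A caller without the owning customer_id cannot decrypt the blob.
        key = self._keys[customer_id]
        return Fernet(key).decrypt(self._blobs[(customer_id, name)])


store = CustomerImageStore()
store.put("acme", "logo.png", b"\x89PNG...")
assert store.get("acme", "logo.png") == b"\x89PNG..."
```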
Fluidfit AI provides robust model governance capabilities, enabling organizations to maintain granular control over which AI models are available for use across their environment. Administrators can centrally manage model access, with the ability to easily enable or disable specific models through intuitive administrative controls.
This approach ensures that organizations can enforce internal policies, manage risk, and adapt quickly to evolving compliance or security requirements while maintaining full oversight of AI usage.
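Conceptually, this works like an allowlist that administrators toggle and that every request is checked against before a model is invoked. The sketch below shows that pattern under those assumptions; the class and method names are hypothetical, not Fluidfit's actual admin API.

```python
# Hypothetical sketch of model-level governance: an administrator-managed
# allowlist checked before any model is invoked. Names are illustrative,
# not Fluidfit's actual admin interface.
class ModelGovernancePolicy:
    def __init__(self, enabled_models: set[str]) -> None:
        self._enabled = set(enabled_models)

    def enable(self, model_id: str) -> None:
        self._enabled.add(model_id)

    def disable(self, model_id: str) -> None:
        self._enabled.discard(model_id)

    def ensure_allowed(self, model_id: str) -> None:
        # Every request passes through this check before reaching a model.
        if model_id not in self._enabled:
            raise PermissionError(
                f"Model '{model_id}' is disabled by your administrator"
            )


policy = ModelGovernancePolicy({"gpt-4o", "fluidfit/open-upscaler"})
policy.disable("gpt-4o")                          # admin turns a model off org-wide
policy.ensure_allowed("fluidfit/open-upscaler")   # passes
# policy.ensure_allowed("gpt-4o")                 # would raise PermissionError
```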
In summary, Fluidfit AI provides:

- Enterprise-grade API integrations with foundational models
- No customer data used for training in Fluidfit-hosted models
- Model-level governance and access control
- Administrative enable/disable of AI models
- Secure transmission and storage of customer data
- Transparency across third-party model policies