As organizations begin using generative AI, trust becomes the most important factor. At Salesforce, trust is the number one value. Einstein Generative AI is built to help businesses innovate while keeping their data secure and their AI responses reliable and safe.
Your Data Stays Secure
With Einstein Generative AI, your data remains private and protected. Salesforce has strong agreements with large language model providers like OpenAI. These agreements ensure that customer data is not stored, reused, or trained on by the AI models. This allows organizations to take advantage of generative AI without worrying about data security.
Einstein Trust Layer: Designed for Trust
As Salesforce adds more generative AI features, keeping customer data safe becomes very important. The Einstein Trust Layer is built to protect data privacy, improve AI accuracy, and make sure AI is used in a responsible way across Salesforce.
The Einstein Trust Layer is a set of features, rules, and processes that work in the background whenever Einstein Generative AI is used.
How Data Flows Through the Einstein Trust Layer?
When AI is used in Salesforce, data follows a secure path:
A prompt is sent from a CRM app (like Sales or Service Cloud).
The prompt passes through the Einstein Trust Layer.
The prompt is sent to the LLM (Large Language Model).
The LLM creates a response.
The response comes back through the Einstein Trust Layer.
The final response is shown in the CRM app.
This full process helps keep data secure at every step.
Prompt Journey
To get an AI response, Salesforce sends a prompt to the LLM.
Prompts can be created using Prompt Builder and can be called from Flow or Apex.
Before the prompt reaches the LLM, the Einstein Trust Layer checks and prepares the data.
The Einstein Trust Layer Features.
Secure Data Retrieval and Grounding
For better and more accurate answers, the AI needs context from Salesforce data. This is called grounding.
Grounding means adding CRM data to the prompt, such as:
Record fields
Related lists
Flow data
Apex data
Security is always respected:
Only data that the current user has access to is used
Salesforce role and field-level security rules are followed
Grounding happens at run time, based on user permissions
This ensures users see only what they are allowed to see.
Data Masking for the LLM
To protect sensitive information, the Einstein Trust Layer uses data masking.
Sensitive data is detected in two ways:
Pattern-based detection (like emails, phone numbers, names)
Field-based detection using Salesforce data classification and Shield encryption
Once detected:
Sensitive data is replaced with placeholder text
The real data is never sent to the LLM
After the response is generated, the data is safely restored
This prevents private data from being exposed to external AI models.
Prompt Defense
To help decrease the likelihood of the LLM generating something unintended or harmful, Prompt Builder and Prompt Template Connect API use system policies.
These policies:
Guide the LLM on how to behave
Prevent harmful or unwanted responses
Protect against prompt injection and jailbreaking attacks
Stop the AI from answering questions it does not have data for
No comments:
Post a Comment