1. Introduction

Over two decades ago, Salesforce pioneered the first multitenant cloud platform, setting a precedent in the industry. Since then, Salesforce has significantly expanded its footprint, serving hundreds of thousands of businesses and millions of users from various industries and regions. Salesforce has also enhanced its Customer 360 product suite through strategic acquisitions. However, shifts in the market and industry in recent years necessitated a reevaluation of the Salesforce Platform:

  1. The emergence of public cloud providers who invest heavily in infrastructure.
  2. Increasing data residency and regulatory demands across different sectors and countries.
  3. The need for handling real-time data and transactions at a much larger scale due to the rise of social and mobile technologies.
  4. Rapid advancements in machine learning and AI, particularly in Generative AI.
  5. Growing requirements for cybersecurity, system availability, performance, and resilience.
  6. Customer demand for an integrated suite built on an architecture that balances loose coupling with coherence.

In response to these challenges, Salesforce embarked on a mission four years ago to completely transform its platform from the ground up. This initiative aimed to address the aforementioned challenges and lay the groundwork for the next generation of applications and customer use cases, all while upholding our application availability goals.

The launch of Agentforce at Dreamforce 2024 and the diagram below represent the culmination of this extensive effort, involving thousands of Salesforce Technology and Product organization team members. Currently, more than 85% of our customers have transitioned to this new platform. The successful migration of a majority of our customers, including those with the most demanding workloads, underscores the ingenuity of our engineers and reaffirms Salesforce’s core values of Trust, Customer Success, and Innovation.

In this white paper, crafted in collaboration with our top engineers, we provide a detailed exploration for builders who appreciate the complexities behind major technological transformations. The paper delves into the essential architectural enhancements that keep the platform scalable, secure, and ready for future applications while meeting the changing needs of customers. We recommend beginning with the Architecture Overview chapter to understand the full picture. From there, readers can either continue in sequence or explore the chapters that interest them most.

Srini Tallapragada
President & Chief Engineering Officer of Salesforce

Platform Architecture Overview

2. Architecture Overview

The architectural principles of the Salesforce Platform have remained unchanged as they capture the foundation and differentiation for how we engineer features and capabilities:

The current Salesforce Platform represents the latest stage in the evolution of Salesforce’s capabilities since the 2008 debut of the Force.com Platform. Recent key transformations include:

These changes have expanded and refined the platform’s capabilities without significant disruption, thanks to robust abstractions that allow Salesforce engineers to advance our technologies with minimal customer impact. These abstractions also remain key to the Salesforce Platform’s value of simplifying the technical complexities of enterprise-grade software, such as security, availability, and technology conventions, so app developers can focus on solving their unique challenges. The Salesforce Platform’s capabilities are highlighted below:

Next Gen Platform Architecture Overview

The Salesforce Platform is shown as a set of layers that make up the system. Each layer represents a group of related features that are important to applications built on the platform. The sub-boxes within each layer provide illustrative examples of these capabilities. Each lower layer’s capabilities are integrated with all the layers above, ensuring a consistent and coherent experience across the entire Salesforce application suite.

The Salesforce Platform embodies extensive engineering transformations across all layers of a mature technology platform developed over the past 20 years. Driven by evolving customer demands and new technologies, these changes enable support for new app types and solutions. The transformations are interconnected, with changes in lower layers influencing the evolution of all subsequent layers above.

The Salesforce Platform is structured into several layers, each contributing to its comprehensive capabilities:

3. Hyperforce

Salesforce has been developing global data center infrastructure for nearly 25 years, predating many current Hyperscalers and IaaS vendors. Hyperforce, the current generation of Salesforce’s infrastructure evolution, is designed to operate across multiple public cloud providers worldwide.

It’s tailored to meet customer needs for elastic B2C scale, global data residency, enhanced availability, top-tier security, and regulatory compliance. Hyperforce standardizes infrastructure across all Salesforce products, facilitating rapid integration of new acquisitions.

Hyperforce underpins delivery of the Salesforce Platform, enabling swift deployment of new features and applications while meeting data residency and regulatory compliance requirements in 20 regions across the world.

3.1 Architectural Principles

During Salesforce’s transition to Hyperforce, significant differences in services, interfaces, and compliance levels among hyperscalers were identified. To build a robust and portable foundation for the Salesforce Platform, these architectural principles were adopted:

  1. Infrastructure as Code: Utilizing a domain-driven architecture, this principle involves declarative coding for infrastructure, creating immutable artifacts, and automating infrastructure on-demand using standards like Kubernetes and Service Mesh.
  2. Zero-Trust Security: Implementing a zero-trust security model with comprehensive defense strategies including identity management, authentication, authorization, network isolation, least privilege security policies, and encryption of data both in transit and at rest.
  3. Managed Services: Emphasizing the use of multitenant and multi-cloud services, this principle enhances portability across different infrastructures and environments such as commercial, government, and air-gapped systems.
  4. Built-in Resilience: Mission-critical services are spread across multiple Availability Zones to ensure high availability, and data is replicated across Availability Zones and, for business continuity, across regions. Services are also labeled with availability tiering to manage service level objectives and resilience planning.
  5. Fully Observable: Integration of all services into a standard observability platform for efficient monitoring, which includes log collection, metrics gathering, alerting, distributed tracing, and tracking of service operations like traffic volume, error rates, and resource utilization.
  6. Automated Operations: This includes automated management of the infrastructure lifecycle and predictive AIOps (AI for operations) for maintaining quality of service and for detecting and addressing service degradations and failures.
  7. Automated Scale: Focusing on scalability and cost-efficiency, this principle allows for operational flexibility across different scales without increasing operational risks, while abstracting away cloud provider-specific account limits.
  8. FinOps Aware: Public cloud brings infrastructure agility, but with the risk of elevated costs. We embrace an efficiency-driven engineering culture throughout the service lifecycle, without compromising on availability, security, and customer trust.

These principles guide the development and operation of Salesforce’s Hyperforce platform, ensuring it remains adaptable, secure, and efficient across various environments.

3.2 Infrastructure Concepts

The Salesforce Platform and its supporting services run on the Hyperforce Foundation, which comprises multiple Hyperforce Instances. These instances are strategically distributed across various countries to align with customer preferences for geography and availability. To meet stringent data residency and operational requirements, one or more Hyperforce Instances can be optionally grouped and designated as an Operating Zone. Each instance is regularly updated to ensure safety, scalability, and compliance with local laws and standards.

Hyperforce Instances are made up of several Hyperforce Functional Domain instances, which are clusters of services delivering specific functionalities. Foundational functional domains provide critical services like security, authentication, logging, and monitoring, all of which are essential for other Hyperforce services. Business functional domains support various Salesforce products such as Sales Cloud, Service Cloud, and others, facilitating their product functionality.

Services within a Functional Domain may be organized into Cells, which are scalable and repeatable units of service delivery. The Hyperforce Cell corresponds to what is traditionally known as a "Salesforce instance" wherein one or more Salesforce organizations (orgs) reside. A Cell is a scale unit as well as a strong blast radius boundary. Supercells provide a logical grouping of multiple Cells to demarcate a larger blast radius due to shared services across Cells. Multiple Supercells may be present in a Functional Domain. Cells and Supercells allow Hyperforce to scale horizontally within a Functional Domain while also maintaining strong control on the size of the blast radius.

Each Hyperforce Instance is mapped to one Availability Region, a concept found in all public cloud infrastructures, and is capable of operating independently of all other Hyperforce Instances. All mission-critical services and data in the Hyperforce Instance are distributed and replicated across at least three Availability Zones, to achieve fault tolerance and stability. Furthermore, data backups are copied to other suitable Hyperforce Instances for business continuity and regulatory compliance.

Hyperforce infrastructure is continually evolving, as new Hyperforce Instances and Cells are created or refreshed in place. Customers are insulated from changes in the physical details of Hyperforce. All externally visible customer endpoints are accessed via stable and secure Salesforce My Domains (for example, acme.my.salesforce.com) that securely route traffic to the current data and service location. Outbound traffic (e.g., mail and web callouts) is best secured using mechanisms like DomainKeys Identified Mail (DKIM) and mTLS, so that customers’ on-premises infrastructure doesn’t hardcode physical details of Salesforce infrastructure, such as IP addresses that can change over time.

Platform Infrastructure Concepts

3.3 Network Security

Hyperforce Functional Domains are designed with robust security measures. Each domain is secured at the perimeter and isolated, with services within a domain separated into dedicated accounts for added security. Communication between services is facilitated securely via Service Mesh or similar protocols. Traffic management is handled by ingress and egress gateways that inspect, route, and apply necessary controls like circuit breakers or rate limits to all incoming and outgoing traffic.

Services within a Hyperforce Functional Domain are grouped into Security Groups, with only those in the edge group exposed to the public internet. Runtime security policies enforce communication rules between different security groups, adhering to the principle of least privilege to ensure services have only the necessary access.

Each geographical region has a Hyperforce Edge Functional Domain that terminates transport layer security and employs programmable web application firewall policies to preemptively address threats. This ensures that only legitimate traffic reaches Hyperforce endpoints while maintaining a secure and efficient customer experience. Additionally, internal network links between Hyperforce Instances are tightly controlled, and all log data containing personally identifiable information is anonymized to comply with GDPR standards.

3.4 Hyperforce Grid Control Plane

A Hyperforce grid comprises multiple Hyperforce Instances sharing the same control plane, which is designed to isolate sensitive workloads where appropriate. It ensures zero leakage of any customer or system data, platform metadata, or monitoring data across grids. The Control Plane consists of redundant Hyperforce Instances that host essential services for creating, managing, and monitoring customer-facing Hyperforce Instances.

Service and infrastructure code for all Hyperforce services is securely developed within a dedicated control plane functional domain, utilizing source code management, continuous integration, testing, and artifact building services. The generated code is scanned for threats and vulnerabilities before it is packaged into standardized, digitally signed containers and stored in image registries. Code deployment is handled by authorized pipelines in the Hyperforce Continuous Delivery system, with deployment privileges restricted to authorized teams and operators. An Airgapped Control Plane handles additional safeguards necessary in such environments.

Identity and Access Management (IAM) services enforce just-in-time approval to limit access duration and actions, while audit trails monitor all activity, feeding into real-time detection systems to identify and alert on any suspicious activities.

3.5 Hyperforce Cost Management

As Salesforce transitions its services from first-party data centers to Hyperforce on public clouds, it’s crucial to revamp our budget creation, cost visualization, and resource optimization strategies.

Our cost management approach isn’t just about cutting costs; it’s a strategic process that differentiates between products aimed at growth and those that are stable. It plans for consumption-based pricing and margins that uphold product availability, aligning with our core value of Trust. Public cloud accounts are organized hierarchically and linked to specific products and executives. Detailed service-level resource tagging, enriched with organizational metadata, helps pinpoint costs for individual microservices. Tools like Tableau and Slack, along with advanced forecasting tools, are employed to provide executives and teams with real-time data on costs, forecasts, and budget analyses, instilling confidence in future financial planning.

To ensure optimal cost management, Salesforce employs a mix of Compute Savings Plans, Spot Capacity, and On-Demand Capacity Reservations (ODCR), guaranteeing the necessary capacity. These reservations are managed through advanced time-series forecasting and custom dashboards, allowing for human oversight and decision-making. Setting achievable goals on unit transactional cost reductions (the cost to process a defined volume of business transactions) is an effective strategy to drive improvements. The Hyperforce Unit Cost Explorer tool enables teams to analyze and manage unit cost trends, attributing costs to specific services, and identifying new improvement opportunities. The Salesforce Cloud Optimization Index, or “COIN” score, assesses services against a dynamic list of savings opportunities, motivating service teams to maintain optimal resource efficiency.

In our unwavering commitment to Sustainability, we actively pursue reductions in our carbon footprint, setting specific targets to decrease our unit Carbon to Serve, a measure of emissions relative to work performed.

4. Enterprise-Grade Trust

Security and availability are crucial foundational aspects of our enterprise-grade platform, essential for maintaining customer trust. At Salesforce, these controls are integral to the Salesforce Platform, automatically enforced through shared services and software frameworks. This built-in approach ensures that individual systems benefit without requiring additional effort.

Managing and continuously enhancing this extensive array of security and availability controls across thousands of services and hundreds of teams presents a significant challenge. However, it’s crucial, as overlooking even a minor detail can result in a security breach or system outage.

4.1 Security

Hyperforce is a secure and compliant infrastructure platform that supports the development and deployment of services with advanced security features. It offers strong access control, data encryption, and compliance with security standards. Salesforce adheres to over 40 security and compliance standards, such as PCI DSS, GDPR, HIPAA, FedRAMP, and more.

Key security principles include Zero Trust Architecture (ZTA) and end-to-end encryption, ensuring the protection of customer data across all processing stages. Salesforce adheres to security standards and best practices from the secure software development lifecycle to production operations, as well as robust application-level security practices to mitigate potential threats.

Zero Trust Architecture (ZTA)

The ZTA cybersecurity paradigm ensures that all users, devices, and service connections undergo authentication, authorization, and continuous validation, regardless of location. ZTA and Public Key Infrastructure (PKI) are essential for modern cybersecurity, establishing trust boundaries and secure communication without relying on perimeter security.

However, PKI deployments often overlook the importance of certificate revocation and governance over root certificate authorities. Salesforce’s implementation of certificate revocation is robust and scalable, supporting end-to-end PKI security.

Additionally, Hyperforce enforces ZTA through mutual transport layer security between services, using short-lived private keys and just-in-time access for users with role-based access control.
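
To make this concrete, below is a minimal sketch of a service endpoint that accepts only connections presenting a client certificate signed by a trusted internal certificate authority. The certificate paths and port are illustrative assumptions rather than Salesforce’s actual configuration, and in Hyperforce the Service Mesh handles issuance and rotation of short-lived keys automatically.

```python
# Minimal mutual-TLS (mTLS) server sketch; file names and port are illustrative, not Salesforce configuration.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.load_cert_chain(certfile="service.crt", keyfile="service.key")  # short-lived service certificate
context.load_verify_locations(cafile="internal-ca.pem")                 # trust only the internal CA
context.verify_mode = ssl.CERT_REQUIRED                                 # reject clients without a valid certificate

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()          # handshake fails unless the client presents a trusted cert
        print("authenticated peer:", conn.getpeercert().get("subject"))
        conn.close()
```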

End-to-End Encryption

Salesforce ensures the protection of data in transit by using TLS with perfect forward secrecy cipher suites, which secures data as it travels across the network between user devices and Salesforce services, as well as within the Salesforce infrastructure domains.

For data at rest, Salesforce employs a key management system supported by hardware security modules. In its multitenant platform, each tenant is assigned a unique encryption key, preventing any crossover of keys between tenants.

The security of communication and encryption is heavily dependent on entropy for generating keys or random data. Recognizing the vulnerability of cryptographic protocols to attacks due to predictable key generation, Salesforce mitigates this risk by sourcing entropy from multiple origins for all key generation processes.
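
As a simplified illustration of these two ideas, the sketch below combines per-tenant data keys with OS-sourced randomness using the Python cryptography package. The in-memory key registry and the associated-data layout are assumptions made for brevity; in production, tenant keys are managed by the HSM-backed key management system rather than held in application memory.

```python
# Per-tenant AES-GCM encryption sketch (in-memory key registry is an assumption; real keys live in an HSM-backed KMS).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

tenant_keys = {"tenant-001": AESGCM.generate_key(bit_length=256)}   # one data key per tenant

def encrypt_record(tenant_id: str, plaintext: bytes) -> bytes:
    key = tenant_keys[tenant_id]                    # keys never cross tenant boundaries
    nonce = os.urandom(12)                          # randomness drawn from the operating system's entropy pool
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, tenant_id.encode())  # tenant ID bound as associated data
    return nonce + ciphertext

def decrypt_record(tenant_id: str, blob: bytes) -> bytes:
    key = tenant_keys[tenant_id]
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, tenant_id.encode())

print(decrypt_record("tenant-001", encrypt_record("tenant-001", b"account record")))
```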

Secure Software Development Lifecycle

Salesforce has a customized JDK to meet many compliance standards, such as Federal Information Processing Standard (FIPS), simplifying the process for developers and operators by eliminating the need for them to undertake compliance work themselves. This customization not only helps prevent risks such as XML external entity injection (XXE) but also enhances Salesforce’s cryptography agility and ability to interchange cryptography strategies as needed. It allows the transformation of non-compliant code—whether developed internally or sourced from open repositories—into FIPS-compliant code without necessitating a complete rewrite, thus reducing the workload on development teams and maintaining adherence to secure-by-default design principles.

Additionally, Salesforce has incorporated frameworks to counter vulnerabilities like cross-site scripting (XSS), cross-site request forgery (CSRF), and SQL injection by integrating protective measures into the Secure Software Development Lifecycle (SSDL).

A centralized secrets management system, reinforced by role-based access controls (RBAC), is implemented to secure both services and user access. Furthermore, code scanning tools are employed to prevent the accidental exposure of secrets in production environments through source code management systems.

Hyperforce Operational Security Capabilities

Phishing remains a significant threat to organizations, leading Salesforce to implement phishing-resistant multi-factor authentication (MFA) in accordance with a number of industry best practices, including CISA (Cybersecurity and Infrastructure Security Agency) Zero Trust principles. This includes hardware-backed keys for employees with production access and a secure kernel for controlled access to cloud service provider accounts.

To maintain a robust security posture, Salesforce has standardized security controls and integrated cloud-native security services into Hyperforce, providing enhanced visibility, threat detection, and policy enforcement. A comprehensive security information and event management system is in place for real-time monitoring, alerting, and reporting, which is supported by a thorough vulnerability management program and cloud security posture management tools to continuously identify, assess, and remediate vulnerabilities.

Additionally, a web application firewall filters and monitors HTTP traffic to protect against various attacks, and a range of network security tools including firewalls, intrusion detection and prevention systems, virtual private networks, and endpoint detection and response agents are utilized to provide continuous monitoring and threat detection. Network segmentation and micro-segmentation are implemented to minimize the attack surface and contain potential breaches.

Salesforce has also developed and implemented a robust incident response plan tailored to the unique challenges of Hyperforce, featuring predefined procedures for identifying, containing, and mitigating security incidents, ensuring a rapid and effective response to potential security threats.

4.2 Availability

Salesforce manages mission-critical customer workloads that demand high availability. Our strategy for high availability includes various organizational facets such as our service ownership model, incident management, and operational reviews. Key technical elements of our strategy include our monitoring architecture, AI-driven operations automation, and automated safety mechanisms for production changes.

Availability Architecture Standards

To consistently achieve high availability across thousands of services, a three-step approach manages technical risks at scale.

First, availability architecture standards are established, defining best practices such as:

Second, a multi-layered inspection model ensures services meet these standards. This includes automated chaos testing, scanning, and linting for anti-patterns, and architecture reviews with senior architects to catch issues not addressed by automation.

Third, solutions are integrated into Hyperforce to ease adherence to these standards. This includes automatic telemetry collection, default redundancy, and failover mechanisms, and built-in protections like load shedding and DDoS defense, all activated by default for individual services.
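
As an illustration of one such built-in protection, the sketch below shows a simple concurrency-based load shedder that rejects work instead of queueing it once a budget is exhausted. The limit and error handling are assumptions for illustration, not the platform’s actual implementation.

```python
# Simple load-shedding sketch: reject requests beyond a concurrency budget (the limit is an assumed value).
import threading

class LoadShedder:
    def __init__(self, max_in_flight: int):
        self._slots = threading.BoundedSemaphore(max_in_flight)

    def run(self, handler, *args):
        if not self._slots.acquire(blocking=False):    # shed load rather than queue when saturated
            raise RuntimeError("503: service is shedding load")
        try:
            return handler(*args)
        finally:
            self._slots.release()

shedder = LoadShedder(max_in_flight=2)
print(shedder.run(lambda x: x * 2, 21))
```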

Monitoring and Observability

Salesforce handles an immense volume of telemetry data, including metrics, logs, events, and traces, which traditional monitoring solutions can’t always manage effectively.

To address this, Salesforce developed a comprehensive observability system that integrates with its software development lifecycle, operations, and support functions. This system provides a unified experience for engineering and customer support teams, while meeting scale needs and reducing licensing costs for third-party software.

The metrics infrastructure at Salesforce, built on OpenTSDB and HBase, supports large-scale collection, storage, and real-time querying of time-series data. Non-real-time use cases utilize Trino and Iceberg, handling over 2 billion metrics per minute to provide insights into CPU utilization, memory usage, and request rates. For log management, Salesforce uses Splunk for its powerful indexing and search capabilities. Apache Druid supports real-time ingestion and analysis of large-scale event data, crucial for understanding user interactions and system events. Distributed tracing across microservices is managed with OpenTelemetry and Elasticsearch, aiding in identifying specific latency and failure points.
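
The sketch below shows how a service might emit OpenTelemetry spans that downstream stores can correlate; the exporter, service name, and attributes are illustrative assumptions, with a console exporter standing in for the production telemetry pipeline.

```python
# Minimal OpenTelemetry tracing sketch (console exporter stands in for the production telemetry pipeline).
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")            # illustrative service name

with tracer.start_as_current_span("place-order") as span:
    span.set_attribute("tenant.id", "tenant-001")        # attributes allow per-tenant correlation later
    with tracer.start_as_current_span("charge-card"):
        pass                                             # a downstream call would be traced here
```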

Salesforce also implemented an Application Performance Monitoring (APM) infrastructure that integrates with its technology stacks for data collection and telemetry stores. This auto-instrumentation of applications simplifies data collection and ensures consistent telemetry across services. APM’s unified dashboard correlates various data types, enhancing the ability for engineers to monitor performance, diagnose issues, and optimize systems through a cohesive interface.

By standardizing observability tools, Salesforce links disparate telemetry types across services using distributed tracing. This creates a comprehensive service dependency graph, visualizing the entire service ecosystem and tracing requests with fine granularity. This capability is crucial for pinpointing issues, identifying bottlenecks, and supporting AI-driven features like anomaly detection, predictive analytics, and automated remediation.

AIOps Agent

To speed up incident resolution, we’ve developed an AI Operations (AIOps) Agent that automatically detects, triages, and remediates incidents on behalf of human operators, requiring human intervention only in a minority of cases. The AIOps Agent is a scalable, reactive multi-agent toolkit designed to facilitate the development of complex, reactive agent-based systems. It’s highly modular, can be enhanced with various tools to extend its functionality, and is designed to scale efficiently as the number of agents grows. Key features include a reactive architecture, enabling agents to dynamically react to changes in their environment; tool enhancement, allowing easy integration of tools to extend agent capabilities; and a pluggable planning module, which lets agents’ planning strategies be customized by plugging in different planning modules.

At the time of writing, swift proactive detection is accomplished for 82% of our core CRM product incidents using advanced machine learning models from Merlion, a publicly available open-source library developed by our AI research team. Merlion provides an ensemble of machine learning models, such as Isolation Forests, statistical methods, Random Forests, and long short-term memory (LSTM) neural networks, that process the extensive telemetry data generated by our systems in near real-time.
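
Merlion is available at github.com/salesforce/Merlion. The sketch below runs its default anomaly detector on synthetic telemetry; the data, training split, and thresholds are illustrative, and the exact API may vary between Merlion versions.

```python
# Merlion anomaly-detection sketch on synthetic telemetry (API per the Merlion docs; may differ across versions).
import numpy as np
import pandas as pd
from merlion.utils import TimeSeries
from merlion.models.defaults import DefaultDetector, DefaultDetectorConfig

idx = pd.date_range("2024-01-01", periods=500, freq="min")
values = np.random.normal(100, 5, size=500)
values[400:405] += 60                                     # injected latency spike
df = pd.DataFrame({"latency_ms": values}, index=idx)

train, test = TimeSeries.from_pd(df[:300]), TimeSeries.from_pd(df[300:])
model = DefaultDetector(DefaultDetectorConfig())
model.train(train_data=train)
scores = model.get_anomaly_label(time_series=test).to_pd()   # non-zero scores mark anomalous points
print(scores[scores.iloc[:, 0] != 0].head())
```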

At the time of writing, 61% of incidents are automatically resolved by the agent’s actions. Our AIOps Agent can process and triage data vectors such as logs, profiling, diagnostics, time series, and service-specific artifacts to recommend remediation actions. The AIOps Agent controller and planner choose an agent with specific skills to perform actions in production.

For the remaining incidents that require human involvement, the AIOps Agent efficiently triages 43% of unresolved issues (at the time of writing) to the appropriate service teams. It does so by intelligently understanding the nature and context of each incident using the in-house fine-tuned model XGenOps, which is trained on operational datasets like problem records, incidents, JFRs, and logs, ensuring that each issue is directed to the team with the necessary expertise.

Continuous Deployment

To manage the risk of outages from nearly 250,000 production changes made weekly, fully automated deployment systems are used to enforce safe change practices, eliminating human error. Off-the-shelf systems weren’t scalable or customizable enough, prompting the development of more tailored solutions.

The custom continuous deployment system ensures safety through multiple layers, following industry-standard blue/green deployment strategies:

  1. Mandatory testing evidence for each change.
  2. Initial canary testing of changes.
  3. Staggered deployment with controlled blast radius.
  4. Soaking and health checks between deployment stages.
  5. Mitigating conflict with existing moratoriums and incidents.

Additionally, continuous integration systems have been optimized to run millions of AI-selected tests, enabling rapid releases while minimizing regression risks.
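
The sketch below illustrates the staged-rollout idea in simplified form: a canary first, then progressively larger waves, with soaking and health checks between stages. The stage sizes, soak time, and health-check hook are assumptions, not the actual deployment system.

```python
# Simplified staged-rollout sketch (stage sizes, soak time, and health check are illustrative assumptions).
import time

def healthy(cells: list) -> bool:
    return True                                    # placeholder: would query monitoring for error/latency SLOs

def deploy(change: str, cells: list) -> None:
    print(f"deploying {change} to {cells}")        # placeholder: would invoke the real deployment pipeline

def staged_rollout(change: str, all_cells: list, soak_seconds: int = 5) -> None:
    stages = [all_cells[:1], all_cells[1:3], all_cells[3:]]   # canary, small wave, remainder
    for stage in (s for s in stages if s):
        deploy(change, stage)
        time.sleep(soak_seconds)                   # soak before widening the blast radius
        if not healthy(stage):
            raise RuntimeError(f"halting rollout of {change}: unhealthy cells {stage}")

staged_rollout("app-server-v2", ["cell-01", "cell-02", "cell-03", "cell-04", "cell-05"], soak_seconds=1)
```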

5. Metadata Framework

The core architectural principle of the Salesforce Platform is its metadata-driven design. Salesforce engineers create multitenant services and data stores. Each application on the platform is essentially a collection of metadata that tailors how these multitenant services are utilized by individual customers. That’s why a common marketing phrase for the Salesforce Platform is that “everything is accessed with metadata”.

The platform emphasizes structured and strongly-typed metadata. This metadata serves as an abstraction layer between customer experience and the underlying Salesforce infrastructure and implementations. This approach enhances both the usability and quality of applications. For instance, instead of using SQL schema definitions and queries, customers interact with structured metadata like entities, fields, and records via Salesforce Object (sObject) APIs. This design allows the platform to integrate new data storage technologies or modify existing ones without necessitating application rewrites, thereby supporting continuous development best practices.
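
To make the contrast with raw SQL concrete, the sketch below uses the community simple-salesforce client (not an official Salesforce library) to read and write records through the sObject APIs; the credentials and field values are placeholders.

```python
# Record access through sObject APIs using the community simple-salesforce client (placeholder credentials).
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="...", security_token="...")

# Queries are expressed against entities and fields (SOQL), not physical tables or SQL schemas.
accounts = sf.query("SELECT Id, Name, Industry FROM Account WHERE Industry = 'Technology' LIMIT 5")
for record in accounts["records"]:
    print(record["Id"], record["Name"])

# Writes go through the same metadata-described objects.
sf.Account.create({"Name": "Acme Corp", "Industry": "Technology"})
```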

Metadata-Driven Platform

The Salesforce Platform architecture features a “layered extension” approach that supports four key personas in building and extending apps:

  1. Salesforce Engineering: Teams develop native apps like Sales Cloud and Service Cloud, which are deployed across all services and runtimes through an extensive release process. These apps are made available to all tenants through licensing and provisioning mechanisms.
  2. External Partners: Independent Software Vendors (ISVs) and other partners can extend the metadata created by Salesforce to build value-added solutions, such as schema extensions on Sales Cloud data models or additional validation rules for Service Cloud case records. They can package these solutions for distribution to multiple customers.
  3. Organization-specific IT Admins and Developers: They can customize their applications beyond what ISVs offer, tailoring solutions to meet unique business challenges like proprietary or region-specific processes.
  4. Individual End Users: End users can personalize their app experience, such as changing the column order in a list view or setting a default tab.

Each persona can iterate independently on the same application because lower layers don’t depend on changes made by personas in higher layers, and strong versioning and backward-compatibility contracts are upheld.

One feature that highlights the “layered extension” concept is the Record Save Order of Execution, which ensures that business logic from all four layers is applied in a predictable sequence during record saves. This allows more specific, higher-layer business logic defined by an org admin or IT developer to appropriately override lower-layer logic provided by Salesforce or an external partner.

Additionally, the platform’s metadata frameworks utilize a “Core” runtime and a proprietary Object-Relational Mapper (ORM) with multitenancy built in, connected to a relational database. This Core runtime enables shared memory state, referential integrity validations, and transactional commits, which prioritize app stability and enhance the reliability of app deployments. The architecture has been continually evolving to support the growing scale of application complexity. For example, as of June 2024, there are over 85,000 entities defined by Salesforce, and over 300 million custom entities have been defined by our customers.

Historically, the Core runtime hosted the majority of platform and app functionality. The current architecture of the Salesforce Platform now includes hundreds of independent, metadata-driven services. The Core runtime remains the single system of record for application metadata, leveraging the unique benefits of a monolithic architecture for metadata management. The relevant metadata is synchronized to local caches in independent services, powering the diverse array of scalable services for application runtimes.

6. Data

Data is an essential asset for organizations, and Hyperforce provides a reliable foundation for its storage at Salesforce. The key challenge is to store data in a manner that optimizes its utility for applications. The Salesforce Platform has transformed the data layer by accommodating various storage and access requirements. It effectively balances costs, read/write speeds, storage capacity, and data types to meet diverse needs.

As AI and analytics increasingly shape enterprise applications, data has emerged as a pivotal element. Its importance lies in its ability to enable AI and analytics to learn, analyze, make decisions, and automate processes.

Data originates in System of Record (SOR) databases, fulfilling the operational requirements of businesses. It then transitions through various transformations to big data platforms, which are essential for powering AI and analytics-driven applications.

Effective management of data, from transactional information to analytical insights, is crucial for extracting value and supporting sophisticated applications. Salesforce Database (SalesforceDB) stands out as a premier transactional database for managing SOR data, while Data Cloud serves as a robust big data platform that enhances AI and analytics capabilities.

6.1 Transactional Database

Transactional data and metadata are essential to the Salesforce Platform. SalesforceDB is a modern, cloud-native relational database designed specifically for Salesforce’s multitenant workloads, similar to other cloud databases from major providers but with custom features for Salesforce’s architecture. It extends PostgreSQL, separates compute and storage, and leverages Kubernetes and cloud storage, enhancing operations with tenant-specific functionalities like encryption and sandboxes.

SalesforceDB handles all transactional CRM data, upwards of 700 billion transactions per month as of writing, as well as metadata for Data Cloud and related services. Its primary objectives are to ensure trust through durability, availability, performance, and security; scale for large customers; and facilitate simplified, reliable cloud operations. It achieves these goals with a design that separates compute and storage layers, an immutable, distributed storage system, and log-structured merge tree data access. This enables advanced features like per-tenant encryption of data in storage and efficient sandboxes and migrations.

Architectural Overview

A high-level diagram of the SalesforceDB architecture:

Transactional Database

The SalesforceDB service architecture runs across three availability zones, with compute and storage replicated across these zones to ensure the system remains available even if any node or entire zone is lost. All services run in Kubernetes to enable automated failure recovery and service deployments.

To provide high levels of durability and availability, the ultimate system of record for SalesforceDB is cloud storage like AWS’s S3. Operations such as archiving and cross-region replication are managed at this cloud storage level. Storage objects are immutable, enhancing data distribution and replication for high availability.

Due to high latency in cloud storage, SalesforceDB uses storage caches to access data. These caches are distributed storage systems that maintain temporary copies of storage objects in a cluster of nodes, ensuring replication and durability as needed by the database. Separate caches are used for transaction log storage and data file storage.

The SQL compute tier consists of a primary database cluster and two standby clusters in three different availability zones. The primary cluster handles all database modifications, while the standby clusters only handle query operations.

LSM Storage and Access

SalesforceDB utilizes a log-structured merge tree (LSM) data structure, where changes are initially recorded in a transaction log and accumulated in memory. The committed changes are then collectively written out into key-ordered data files, which are periodically merged and compacted to optimize storage efficiency.

This structure effectively eliminates concurrent-update issues that are common in databases that update storage directly. By using the LSM approach, SalesforceDB supports critical features such as immutable storage, making it a robust solution for managing Salesforce workloads.
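
The toy sketch below captures the LSM write path: changes land in an in-memory table and are periodically flushed into immutable, key-ordered runs, while reads consult the memtable first and then the runs from newest to oldest. A real engine adds a write-ahead log, bloom filters, and background compaction, which are omitted here.

```python
# Toy LSM sketch: in-memory writes flushed into immutable, key-ordered runs (lists stand in for data files).
class ToyLSM:
    def __init__(self, memtable_limit: int = 4):
        self.memtable = {}                           # recent committed changes held in memory
        self.runs = []                               # immutable, key-sorted runs, newest first
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value                   # a real engine also appends to the transaction log
        if len(self.memtable) >= self.memtable_limit:
            self.flush()

    def flush(self):
        self.runs.insert(0, sorted(self.memtable.items()))   # write an immutable, key-ordered data file
        self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for run in self.runs:                        # newest run wins; compaction would merge these over time
            for k, v in run:
                if k == key:
                    return v
        return None

db = ToyLSM()
for i in range(6):
    db.put(f"k{i}", i)
print(db.get("k1"), db.get("k5"))
```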

Immutable Storage

Data in storage is immutable; once data files are written and made visible, they don’t change. Transaction logs are append-only, simplifying data access patterns and enhancing reliability. This structure supports uncoordinated reads, simplifies backups, boosts scalability, and facilitates storage virtualization, making it well-suited for cloud environments.

Availability and Durability

Transactions in SalesforceDB are committed across multiple availability zones, which ensures that there is no data loss even if a node or zone fails. If a failure occurs, in-flight transactions are aborted, and committed transactions are successfully recovered. Since failures don’t lose committed data, failover to new nodes is automated.

Cluster management software automatically handles failovers by monitoring quorums and managing ownership transfers. This process isn’t only used in emergencies but also routinely during regular patching, enhancing the system’s reliability through constant use. Short database restarts are typically unnoticed by end users, maintaining a seamless user experience.

Salesforce does three major schema updates per year, with smaller schema updates weekly. SalesforceDB provides zero-downtime schema operations that enable these updates to be done with no customer impact.

System of Record

Our transactional database serves as the primary repository for customer data, which is cached across multiple availability zones and stored in the cloud. Each data block is secured with an immutable checksum, verified by both the storage layer and the database engine. The database performs lineage tracking to detect any out-of-order changes or missed versions and runs ongoing consistency checks between indexes and base tables.

For ransomware protection, databases are archived in separate storage under a different account, including both full and incremental transaction log backups. These backups are regularly validated through a restoration testing process. Additionally, cloud infrastructure is pre-configured but not activated, ready to manage data restoration requests as needed.

Scale

Each Salesforce org is housed in a Hyperforce cell, which includes the SalesforceDB service. This setup allows for rapid global scaling through the creation of new cells via the Hyperforce architecture, and traffic can be easily shifted between cells to manage load. However, as customer workloads and business demands increase, the capacity of a single database instance may be insufficient.

To address this, SalesforceDB employs a horizontal scaling architecture for both its storage and compute tiers. Cloud storage is virtually unlimited, and the cache layers automatically scale to meet demand. Additionally, the compute tier can expand by adding more database compute nodes, which efficiently read from shared immutable storage without needing coordination. This approach allows SalesforceDB to achieve scalability that matches or exceeds that of leading commercial cluster database architectures, without requiring special networking or hardware.

Multitenancy

Salesforce is a multitenant application where a single database hosts multiple tenants. Each table record includes a tenant ID to distinguish its ownership, and tenant isolation is maintained through automatic query predicates added by Salesforce’s application layer.

SalesforceDB is tailored to this model, supporting tenant-specific DDL, metadata, and runtime processes, enhancing reliability, performance, and security. It combines the low overhead of a tenant-per-row model with the efficiency of a tenant-per-database schema.

In SalesforceDB, tenant IDs are part of the primary key in multitenant tables, which cluster data by tenant in the LSM data structure, enhancing access efficiency. This setup not only facilitates efficient data access and per-tenant encryption but also simplifies tenant data management. Tenants can be easily copied or moved with minimal metadata adjustments due to the compact metadata structure.
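
The sketch below illustrates the tenant-per-row model: the tenant ID leads the primary key so data clusters by tenant, and the application layer injects the tenant predicate on every query. SQLite is used purely for illustration and is not SalesforceDB’s storage engine.

```python
# Multitenant table sketch: tenant ID leads the primary key and every query is tenant-scoped (SQLite for illustration).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE account (
        tenant_id  TEXT NOT NULL,
        account_id TEXT NOT NULL,
        name       TEXT,
        PRIMARY KEY (tenant_id, account_id)   -- clusters rows by tenant, mirroring the LSM key layout
    )""")
conn.executemany("INSERT INTO account VALUES (?, ?, ?)",
                 [("t1", "a1", "Acme"), ("t1", "a2", "Globex"), ("t2", "a1", "Initech")])

def query_accounts(tenant_id: str):
    # The application layer adds the tenant predicate automatically; tenants never see each other's rows.
    return conn.execute("SELECT account_id, name FROM account WHERE tenant_id = ?", (tenant_id,)).fetchall()

print(query_accounts("t1"))
```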

6.2 Data Cloud and Data Lake

AI and data capabilities are essential in modern enterprises, with a “single view of customer engagement” becoming a critical leadership focus. Salesforce leads in facilitating this engagement by integrating data, AI, and CRM into a virtuous circle, driven by AI insights and powered by data. Centralizing customer data into a single source of truth is crucial but challenging due to data fragmentation and the complexity of system management.

SalesforceDB is optimized for high-performance transactional workloads on structured data, whereas AI and analytics workloads require handling large volumes of unstructured data from various sources and performing complex queries and batch processing. To address these needs, Salesforce has developed Data Cloud, a platform designed to break down data silos, unify, store, and process data securely and efficiently, supporting AI and analytics demands, and enabling real-time enterprise operations.

Data Cloud and Data Lake

Data Cloud, built on Hyperforce, serves as the foundational platform for AI and Analytics, offering:

Data Cloud Architecture supports a number of components and capabilities, which are outlined below.

Connectivity, Transformation and Harmonization

Data Cloud supports efficient ingestion pipelines from various structured and unstructured data sources, for batch, near real-time, and real-time data processing. Data Cloud’s ingestion service operates on an Extract-Load-Transform (ELT) pattern, designed for low-latency and suitable for B2C scale. Real-time ingestion includes APIs and interactive streams, while near real-time sources cover detailed product usage. Once ingested, data is extensively transformed to prepare, harmonize (for example, unifying various contact types), and model it for effective querying, analytics, and AI applications. The platform also includes a wide range of ready-to-use harmonized data models.

Data Cloud integrates seamlessly with Salesforce applications such as Sales Cloud, Service Cloud, Marketing Cloud, and Commerce Cloud. Additionally, it offers hundreds of connectors for external data sources, ensuring smooth data integration.

Lakehouse for Big Data

Data Cloud features a native lakehouse architecture based on Iceberg/Parquet, designed to handle large-scale data management and processing for batch, streaming, and real-time scenarios. This architecture supports both structured and unstructured data, crucial for AI and analytics applications.

Lakehouse for Big Data

In cloud-based data lakes like Azure, AWS, or GCP, the fundamental storage unit is a file, typically organized into folders and hierarchies. Lakehouse enhances this structure by introducing higher-level structural and semantic abstractions to facilitate operations like querying and AI/ML processing. The primary abstraction is a table with metadata that defines its structure and semantics, incorporating elements from open-source projects like Iceberg or Delta Lake, with additional semantic layers added by Data Cloud.

Abstraction Layers in Lakehouse:

Data Cloud Lakehouse supports B2C scale, real-time ingestion, processing, schema enforcement and evolution, snapshots, and uses open storage formats.
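
For orientation, the sketch below reads such a table with the open-source PyIceberg client; the catalog name, table name, and filter are assumptions, and Data Cloud exposes its tables through its own services rather than direct client access.

```python
# Minimal PyIceberg read sketch (catalog and table names are assumptions; Data Cloud mediates real access).
from pyiceberg.catalog import load_catalog

catalog = load_catalog("demo")                            # catalog connection configured out of band
table = catalog.load_table("crm.engagement_events")       # hypothetical lakehouse table

print(table.schema())                                     # Iceberg schema with field IDs and types
scan = table.scan(row_filter="event_type = 'checkout'")   # predicate pushdown over Parquet data files
df = scan.to_pandas()                                     # snapshot-isolated read
print(len(df))
```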

At the core of data governance in Data Cloud is attribute-based access control (ABAC), which dynamically evaluates access based on attributes related to entities, operations, and the environment. This system supports both discretionary and mandatory access controls.

Complementing ABAC, a detailed data classification system categorizes data by sensitivity and purpose, enhancing compliance, risk management, and incident response.

Together, ABAC and this classification system provide comprehensive data governance, ensuring data within Data Cloud is managed securely and efficiently.

Metadata-driven

Data Cloud integrates deeply with the Salesforce Platform for metadata, packaging, extensibility, user experience, and application distribution via AppExchange. Customers can define and manage the metadata for Lakehouse streams and tables, just like other Salesforce metadata. Every data object (including federated or external tables) is represented as a Salesforce object and modeled as a virtual entity backed by data storage in Data Cloud. Developers can use these objects to build applications on the Salesforce Platform.

Zero Copy Federation and Extensibility

Data Cloud offers extensive support for zero-copy federation, allowing users to integrate with external data warehouses like Snowflake and Redshift, lakehouses such as Google BigQuery, Databricks, and Azure Fabric, as well as SQL databases and various file types including Excel. Data Cloud supports file and query-based federation, with live query and access acceleration as shown in the figure. Labels (1) and (2) illustrate Data Cloud’s query (including live query pushdowns) and file-based federation for accessing data from external data lakes/warehouses/data sources; and label (3) highlights acceleration of federated access from external data lakes/data sources. Labels (4) and (5) illustrate query and file-based sharing of data from Data Cloud with external data lakes/warehouses. Zero-Copy capability also extends to unstructured data sources like Slack and Google Drive, which can be accessed by Data Cloud’s unstructured processing pipelines. Additionally, Data Cloud facilitates the abstraction of Salesforce objects and data access for data federated from external sources, enabling access to such data across the Salesforce platform and applications.

Zero Copy Federation and Extensibility

Customer Data Platform (CDP)

Data Cloud integrates a CDP that features advanced identity resolution capabilities, creating unified individual identifiers and profiles along with comprehensive engagement histories. This platform is adept at handling both business-to-business (B2B) and business-to-consumer (B2C) frameworks by supporting identity graphs that utilize both exact and fuzzy matching rules. These identity graphs are enriched with engagement data from various channels, which helps in building detailed profile graphs with valuable analytical insights and segments.

Additionally, the CDP enables effective segmentation and activation across different platforms such as Salesforce’s Marketing Cloud, Facebook, and Google. It processes customer profiles in batch, near real-time, and real-time, which allows for immediate decision-making and personalization. This functionality enhances interactions in both B2C and B2B scenarios, ensuring that businesses can respond swiftly and accurately to customer needs and behaviors.

Data Graphs

Data Cloud offers an enterprise data graph in JSON format, which is a denormalized object derived from various Lakehouse tables and their interrelationships. This includes a "Profile" data graph created by CDP that encompasses a person’s purchase and browsing history, case history, product usage, and other calculated insights, and is extensible by customers and partners. These data graphs are tailored to specific applications and enhance generative AI prompt accuracy by providing relevant customer or user context.

Additionally, plans are in place to expand these data graphs to include knowledge graphs that capture and model derived knowledge, such as extracted entities and relationships from unstructured data. The real-time layer of Data Cloud utilizes the Profile graph for real-time personalization and segmentation.
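
As a shape illustration only, a Profile data graph might denormalize related records into a single JSON document like the one below; the field names are assumptions, not Data Cloud’s actual schema.

```python
# Illustrative shape of a denormalized Profile data graph (field names are assumptions, not the real schema).
import json

profile_graph = {
    "unifiedIndividualId": "UI-000123",
    "identities": [{"source": "Service Cloud", "contactId": "003XXXXXXXXXXXX"},
                   {"source": "Marketing Cloud", "subscriberKey": "abc-123"}],
    "purchaseHistory": [{"orderId": "801XXXXXXXXXXXX", "total": 129.99, "placedAt": "2024-05-01T12:00:00Z"}],
    "caseHistory": [{"caseId": "500XXXXXXXXXXXX", "status": "Closed", "subject": "Refund request"}],
    "calculatedInsights": {"lifetimeSpend": 4310.55, "churnRiskScore": 0.12},
}
print(json.dumps(profile_graph, indent=2))
```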

Real-Time Layer

Real-Time Layer

Data Cloud’s real-time layer is engineered to process events such as web and mobile clickstreams, visits, cart data, and checkouts at millisecond latencies, enhancing customer experience personalization. It continuously monitors customer engagement and updates the customer profile from CDP with real-time engagement data, segments, and calculations for immediate personalization.

For example, when a consumer purchases an item on a shopping website, the real-time layer quickly detects and ingests this event, identifies the consumer, and enriches their profile with updated lifetime spend information. This allows for the personalization of their experience on the site in sub-seconds. Additionally, this layer includes capabilities for real-time triggering and responses, enabling immediate actions based on customer interactions.

Data Actions

Data Cloud is an active platform that supports the activation of pipelines in response to data events. For instance, a significant event, such as a drop in a customer’s account balance, can trigger a Salesforce Flow to orchestrate a corresponding action. Similarly, updates to key metrics, such as lifetime spend, can be automatically propagated to relevant applications.

Data Compute Fabric

Data Cloud features elastic scaling compute clusters that efficiently handle processing tasks. It offers robust management for both multitenant and dedicated compute environments. Additionally, it provides managed support for Spark and SQL. BYOC (Bring Your Own Compute/Code) features support multiple programming languages, including Java, Python, and Spark, allowing for the integration of custom transforms, models (including LLMs), and functions, enhancing extensibility.

Unified Queries

Data Cloud’s Query Service provides advanced querying capabilities, featuring extensive SQL support for both structured and unstructured data via Trino and Hyper. It enhances functionality with operator extensibility through Table Functions, allowing for diverse search operations across text, image, spatial, and other unstructured data types. These capabilities are seamlessly integrated with relational operations, such as selecting customer records. This unified approach enables the generation of targeted and personalized results, facilitating more precise LLM responses using RAG.

Unstructured Data Processing

Data Cloud supports storage and management of structured (tables), semi-structured (JSON), and unstructured data seamlessly across data ingestion, processing, indexing, and query mechanisms. Data Cloud supports various unstructured data types beyond text, including audio, video, and images, broadening the scope of data handling and analysis. The figure below illustrates the two sides of grounding (ingestion and retrieval).

Unstructured Data Processing

Data Cloud manages unstructured data by storing it in columns as text or in files for larger datasets. It supports data federation for unstructured content, which allows for the integration and management of data from multiple sources.

The data is then prepared and chunked, embeddings are generated, and the content is processed for keyword indexing and vector indexing. Data Cloud hosts multiple out-of-the-box and pluggable models for chunking and embedding generation, and it supports automated, configurable transcription of audio and video content for subsequent processing and indexing. The search service is used for keyword indexing. For vector indexing, Data Cloud stores vector embeddings in a native vector database such as open-source Milvus.
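
A condensed sketch of that pipeline is shown below: chunk the text, generate embeddings, and index the vectors in Milvus. The embed() function is a stand-in for a pluggable embedding model, and the collection layout is an assumption rather than Data Cloud’s internal schema.

```python
# Condensed chunk -> embed -> vector-index sketch (embed() is a stand-in model; collection layout is illustrative).
from pymilvus import MilvusClient

def chunk(text: str, size: int = 200) -> list:
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(texts: list) -> list:
    return [[float(len(t)) / 100.0] * 8 for t in texts]    # placeholder for a real embedding model

client = MilvusClient("rag_demo.db")                       # embedded Milvus Lite database file
client.create_collection(collection_name="kb_chunks", dimension=8)

chunks = chunk("Knowledge article text about resetting a customer password ..." * 5)
client.insert(collection_name="kb_chunks",
              data=[{"id": i, "vector": v, "text": c}
                    for i, (c, v) in enumerate(zip(chunks, embed(chunks)))])

hits = client.search(collection_name="kb_chunks", data=embed(["reset password"]),
                     limit=3, output_fields=["text"])
print(hits[0])
```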

The Query Service facilitates ensemble queries across structured, keyword index, and vector indexes, maintaining strict visibility and permissions, thus enhancing RAG.

Semantic Layer

Data Cloud features a headless Semantic Layer with APIs designed to enhance business semantics and AI/ML-driven analytics, similar to Tableau Einstein. This layer includes a semantic data modeling service that enriches traditional analytical models with business taxonomy such as measures and metrics.

Its semantic query service utilizes a declarative language to interact with these models, translating queries into SQL for accessing data from both native and federated data sources within Data Cloud.

This integration facilitates scalable and interactive analytical explorations, reports, and dashboards, compatible with third-party visualization tools.

6.3 Data Caching

Caches are essential for fast access to frequently used data. Salesforce uses many caches across the Salesforce Platform, including in Core Application Servers, SalesforceDB, and at the Edge. The Salesforce Platform and applications need a scalable, tenant-aware caching solution with low latency and high throughput. This solution must allow Salesforce engineers to control what’s cached and for how long, ensuring that their data is not evicted by system noise or other customers’ data. Vegacache, a Salesforce-managed caching service based on Redis, is tailored for a polyglot, multitenant, and public cloud environment. It’s widely used by Salesforce services and accessible to platform developers via Apex programming language APIs. Operating at scale in Hyperforce, as of writing, Vegacache handles over 2 trillion requests daily with sub-millisecond response times.

Vegacache instances, running in Kubernetes containers accessed via Service Mesh, are deployed across multiple Availability Zones to balance data availability and latency. The service scales automatically based on system load, ensuring data availability and slot ordering preservation. Vegacache provides guaranteed cache size per customer and offers protection against noisy neighbors, with resilience against infrastructure failures through replicated data storage.

For Salesforce Platform developers, Vegacache makes it possible to cache Apex objects and SOQL database query results, reducing CPU usage and latency by eliminating unnecessary data fetches from SalesforceDB. It supports Put(), Get(), and Delete() operations, keeping frequently used objects readily accessible.
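
Because Vegacache is Redis-based, the sketch below shows an equivalent put/get/delete pattern against plain Redis with tenant-scoped keys and TTLs. The key layout is an assumed convention, and platform developers would use the Apex APIs mentioned above rather than Redis directly.

```python
# Redis cache sketch approximating Vegacache put/get/delete (tenant-scoped key layout is an assumed convention).
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def cache_key(tenant_id: str, name: str) -> str:
    return f"{tenant_id}:{name}"                      # namespace keys per tenant

def put(tenant_id: str, name: str, value: dict, ttl_seconds: int = 300) -> None:
    r.setex(cache_key(tenant_id, name), ttl_seconds, json.dumps(value))

def get(tenant_id: str, name: str):
    raw = r.get(cache_key(tenant_id, name))
    return json.loads(raw) if raw else None           # a miss falls back to SalesforceDB in the real platform

def delete(tenant_id: str, name: str) -> None:
    r.delete(cache_key(tenant_id, name))

put("tenant-001", "top_accounts", {"ids": ["a1", "a2"]})
print(get("tenant-001", "top_accounts"))
```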

6.4 Asynchronous Data Processing

Salesforce natively supports asynchronous data processes and architectures for enhanced workflow flexibility, resiliency, and scalability.

Salesforce engineers first leveraged message queues to decouple bulk and large data processes, as well as coordinate processes between independent systems. These message queues were abstracted from the external developer via Platform features, like Bulk API queries or Asynchronous Apex. The Salesforce Platform then introduced log-organized event streams built on a robust messaging infrastructure of internally managed Apache Kafka clusters. This enabled event-based architecture with a publish/subscribe interaction model, and was productized to external developers as Platform Events.

Both message queues and event streams remain heavily leveraged by apps and solutions built on the platform, especially as those apps span more features, clouds, and external systems hosted on independent runtimes. Communicating via versioned event schemas enables independent software development lifecycles of the different runtimes. The decoupling of systems via events also helps manage load spikes and the elasticity and scale of individual runtimes, supporting higher overall resiliency and availability of an app.
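
Internally, these event streams run on managed Kafka clusters; the sketch below shows a minimal publish/subscribe flow with the kafka-python client. The topic name and payload schema are illustrative assumptions, and external developers would use Platform Events rather than Kafka directly.

```python
# Minimal publish/subscribe sketch with kafka-python (topic name and payload schema are illustrative assumptions).
import json
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092",
                         value_serializer=lambda v: json.dumps(v).encode("utf-8"))
producer.send("order-events", {"schemaVersion": 1, "orderId": "801-XYZ", "status": "Shipped"})
producer.flush()

consumer = KafkaConsumer("order-events",
                         bootstrap_servers="localhost:9092",
                         group_id="fulfillment-service",     # independent runtimes consume at their own pace
                         auto_offset_reset="earliest",
                         value_deserializer=lambda v: json.loads(v.decode("utf-8")))
for message in consumer:
    print(message.value)                                     # versioned schema lets consumers evolve independently
    break
```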

6.5 Search

Search features at Salesforce, crucial for applications ranging from global search to generative AI, face unique challenges that shape our architectural approach:

  1. Scale: Supporting hundreds of thousands of customers and millions of tenants, our cloud-native search solution is designed for massive scale while remaining cost-effective.
  2. Customer Diversity: Salesforce’s diverse customer base across various industries has unique and complex search requirements due to extensive customization of the platform, involving numerous object types and fields.
  3. Operability: The search solution must be resilient and highly available, supporting data residency, tenant lifecycle operations like regional migrations, and sandboxing, while maintaining low indexing latency with fairness between tenants.
  4. Relevance at Scale: Enhancing the relevance of search results to meet diverse user queries is critical, especially as we scale relevance algorithms to accommodate various tenants, data types, and search scenarios.
  5. AI and Semantic Capabilities: Search supports Machine Learning and generative AI, particularly for Retrieval-Augmented Generation (RAG).
  6. Seamless Integration: To ensure a cohesive user experience, Salesforce’s search technology integrates deeply with the broader Salesforce Platform, including metadata models and AI/data services.

Salesforce’s cloud-native solution, SeaS (Search as a Service), is built on Solr, an open-source distributed search engine. Salesforce has significantly extended and optimized Solr to meet our unique challenges and has integrated it deeply with Salesforce applications and platform, incorporating semantic technologies to enhance AI applications and search relevance.

SeaS employs a compute/storage separation architecture, allowing for scalable distribution of indexes across nodes and rebalancing loads and availability across zones during failures. It features automatic sharding and resizing of shards, zero-downtime upgrades, and optimizations like replica lazy loading and archiving to cater to rarely used indexes.

The architecture also includes a low-level index implementation optimized for a large number of fields, auto-complete, spell-correction, and bring-your-own-key encryption. Managing around 6,000 Solr nodes globally, Hyperforce search infrastructure uses multiple independent clusters (Hyperforce cells) in each region to balance cost and control, automatically placing client indexes based on load, domain, and type.

Salesforce’s search relevance pipeline employs learning-to-rank techniques, adapting to the diverse needs of our customers and supporting features such as result ranking. It also includes entity predictions from user queries and past interactions.

The stack incorporates a vector search engine for semantic search and AI applications, integrated with Data Cloud for generative AI capabilities. This includes a comprehensive pipeline for data transformation, hybrid search support, and a catalog of configurable rankers, such as Fusion Ranker.

Generative AI leverages search backends to deliver advanced features like Search Answers for natural language queries and retrieval functions in Einstein Agent.

Salesforce’s search features span various contexts, including global search, lookups, search answers, community search, related lists, setup, mobile, machine learning, and generative AI applications. This broad functionality is achieved through tight integration of the Search stack with Salesforce’s metadata system and UI ecosystem, enabling seamless support for both standard and custom objects.
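As a small, hedged example of that integration, a single SOSL search issued from Apex spans standard and custom objects through the same metadata-aware stack; the custom object, field, and search term below are hypothetical.

    // Searches the indexed data across one standard and one hypothetical custom object.
    List<List<SObject>> results = [
        FIND 'acme solar panels'
        IN ALL FIELDS
        RETURNING Account(Id, Name), Knowledge_Article__c(Id, Title__c)
        LIMIT 20
    ];

    // Result lists come back in the same order as the RETURNING clause.
    List<Account> accounts = (List<Account>) results[0];
    List<Knowledge_Article__c> articles = (List<Knowledge_Article__c>) results[1];
    System.debug(accounts.size() + ' accounts, ' + articles.size() + ' articles matched.');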

On the relevance front, models are continuously refined by learning from user interactions and evaluated through A/B testing, enhancing search result accuracy. This process also supports bootstrapping models for AI applications via knowledge transfer.

Additionally, integration with Data Cloud enhances search capabilities on data objects through no-code configurations and allows for the composition of search functions within data pipelines, such as including search statements in SQL queries. The integration extends to the AI Platform, enabling search queries to be used as retrievers in Prompt Builder for grounding.

7. AI

AI has reshaped the technology landscape, and the Salesforce Platform, with its integrated and rich data layer, positions Salesforce to deliver impactful AI experiences to customers. Salesforce began its AI transformation nearly a decade ago and has led the field since 2013, focusing on research, ethics, and product development to empower businesses to solve complex problems and drive growth.

Leveraging the core value of Innovation, Salesforce introduced Einstein Predictive AI, enabling businesses to analyze data, automate processes, understand customers, and optimize operations with a comprehensive suite of AI-powered tools such as Einstein Prediction Builder and AI bots. With the rise of Generative AI, Salesforce launched Agentforce, a platform that merges predictive and generative models to offer advanced AI capabilities while prioritizing data privacy.

7.1 Core AI Principles

The Salesforce AI stack follows these core principles:

7.2 Architecture Overview

AI Architecture Overview

The AI stack consists of several key components:

  1. AI Platform: This platform layer is responsible for managing, training, and fine-tuning the AI models used in both predictive and generative applications. It offers out-of-the-box (OOTB) services, trust services, and foundational models for training, testing, and performing inference on models. It also supports the integration of customer-built predictive and generative models, allowing customers to bring their own models into the platform.
  2. AI Foundational Services: This includes the AI Gateway, Feedback Framework, RAG, Agentic Orchestration, Agent Evaluation and Reasoning services which facilitate the integration of business applications with the AI stack.
  3. AI Powered User and Agent Experiences: Salesforce delivers specialized AI-powered applications through its cloud services. Customers can also create custom experiences leveraging any components of the platform—such as Flow, Apex, or even Lightning Web Components (LWC)—to create AI-powered experiences seamlessly integrated into their workflows and business processes.
  4. Einstein Studio: This component features tools like Agent Builder, Prompt Builder, Testing Center, and Model Builder, designed for creating both generative and predictive AI experiences. It offers end-to-end support for developing, training, testing, and tuning AI models.

7.3 Foundational Components

Einstein Trust Layer


The Einstein Trust Layer is available in select use cases to help safeguard customer data in generative AI applications by offering robust features:

  1. Data Privacy: Strong masking and privacy controls protect sensitive information from being accessed by external AI models.
  2. Security: Ensures a secure data processing environment and prevents unauthorized access.
  3. Trust: Maintains customer control over data, with no third-party AI storage or usage.
  4. Accuracy: Enhances AI outputs by using relevant Salesforce data to ground prompts.
  5. Content Moderation: Offers both pre- and post-content moderation, customizable data masking for sensitive information (PII/PCI/PHI), and toxicity classification for large language model (LLM) responses.

AI Gateway

The AI Gateway provides a unified interface for accessing and managing various LLMs and predictive models. It acts as a bridge between Salesforce and the world of LLMs, abstracting the complexities of different LLM providers and customers’ own predictive AI models and offering a consistent way to interact with them. The Gateway integrates with multiple LLM providers, allowing customers to choose the best model for their needs, incorporates robust data security measures, and helps manage the costs associated with using different LLMs.
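The sketch below is purely illustrative and hypothetical (it is not a published Salesforce API); it conveys, in Apex form, the kind of provider-agnostic abstraction the Gateway supplies, where callers request a model by name and routing, security, and cost tracking sit behind one interface.

    // Hypothetical illustration only; not a published Salesforce API.
    public class AiGatewaySketch {
        public interface LlmProvider {
            GenerationResult generate(GenerationRequest request);
        }
        public class GenerationRequest {
            public String modelName;      // a provider-hosted or customer-registered model
            public String prompt;         // already masked/grounded by upstream trust services
            public Integer maxTokens;
        }
        public class GenerationResult {
            public String text;
            public String providerUsed;   // which backing provider served the request
            public Decimal estimatedCost; // supports cost management across providers
        }
        // Routes a request to whichever provider backs the requested model,
        // so callers never change code when providers change.
        public class Gateway {
            private final Map<String, LlmProvider> providersByModel;
            public Gateway(Map<String, LlmProvider> providersByModel) {
                this.providersByModel = providersByModel;
            }
            public GenerationResult generate(GenerationRequest request) {
                return providersByModel.get(request.modelName).generate(request);
            }
        }
    }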

Feedback Service

The Feedback Service collects, analyzes, measures, and utilizes user feedback to retrain and refine AI models. It plays a crucial role in the continuous improvement of AI-driven features and functionality within the Salesforce Platform.

Retrieval Augmented Generation (RAG)

RAG is a vital technique that enhances search capabilities with generative AI, leading to more informative and accurate responses. Utilizing the extensive Salesforce Data Cloud and the integrated Vector Database, the Einstein AI Platform quickly retrieves data relevant to a user’s query. This data is then used as grounding for LLMs to generate optimal responses.

Additionally, this method boosts response speed and user trust by including source data in responses. RAG is extensively employed in the Agentforce platform, particularly for applications like Einstein Service Agent and Einstein Sales Agent, highlighting how it surfaces relevant information for these use cases.
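A hypothetical Apex sketch of this retrieve-then-ground flow follows; the retriever and LLM client interfaces are assumptions introduced for illustration, not published APIs.

    // Hypothetical sketch of the RAG flow: retrieve, ground, then generate.
    public with sharing class RagAnswerService {
        // Assumed abstractions standing in for vector search and the AI Gateway.
        public interface VectorRetriever {
            List<String> retrieve(String query, Integer topK);
        }
        public interface LlmClient {
            String generate(String prompt);
        }

        private final VectorRetriever retriever;
        private final LlmClient llm;

        public RagAnswerService(VectorRetriever retriever, LlmClient llm) {
            this.retriever = retriever;
            this.llm = llm;
        }

        public String answer(String userQuestion) {
            // 1. Retrieve the most relevant passages from the indexed customer data.
            List<String> passages = retriever.retrieve(userQuestion, 5);

            // 2. Ground the prompt with the retrieved passages so the model answers from customer data.
            String prompt = 'Answer using only the context below. Cite the passages you used.\n'
                + 'Context:\n' + String.join(passages, '\n---\n')
                + '\nQuestion: ' + userQuestion;

            // 3. Generate the response; source passages can be returned alongside it to build trust.
            return llm.generate(prompt);
        }
    }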

7.4 Agentforce Platform

As AI models advance, the development of agents to automate tasks that require reasoning is the next step. These agents serve as intelligent assistants, capable of understanding and responding to queries in natural language, allowing users to design, test, and deploy them for various tasks. A crucial component of this system is the Planner Service, which functions as follows:

  1. Interprets User Request: It analyzes the user’s input to determine intent.
  2. Builds a Plan: It formulates a structured plan to address the user’s needs.
  3. Launches Actions: It executes the plan by initiating actions directly or through other services.

The Planner Service orchestrates the process, ensuring that the agent efficiently fulfills user requests by managing and executing the necessary steps.
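A hypothetical sketch of these three steps follows; the types are illustrative assumptions rather than the actual Planner Service contract.

    // Hypothetical sketch of the three planner steps; illustrative only, not a published API.
    public with sharing class PlannerSketch {
        public class PlanStep {
            public String actionName;              // e.g. an invocable action or flow to launch
            public Map<String, Object> inputs;
        }
        public interface IntentClassifier {
            String interpret(String userRequest);  // 1. Interprets the user request
        }
        public interface PlanBuilder {
            List<PlanStep> build(String intent);   // 2. Builds a structured plan
        }
        public interface ActionRunner {
            void run(PlanStep step);               // 3. Launches actions
        }

        private final IntentClassifier classifier;
        private final PlanBuilder builder;
        private final ActionRunner runner;

        public PlannerSketch(IntentClassifier c, PlanBuilder b, ActionRunner r) {
            this.classifier = c;
            this.builder = b;
            this.runner = r;
        }

        public void handle(String userRequest) {
            String intent = classifier.interpret(userRequest);
            for (PlanStep step : builder.build(intent)) {
                runner.run(step);  // steps may call flows, Apex, or other services
            }
        }
    }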

Agentforce

Agentforce represents a platform for building Agents, enabling customers and ISVs to create automated AI agents for applications like Service Agents and Sales Agents. These agents can process and respond to customer inquiries in a natural, human-like manner, handling a broad spectrum of business tasks, and delivering significant benefits to both businesses and their customers.

The workflow of an Agent includes:

  1. Activation: The agent is triggered by predefined criteria such as a customer’s request across various channels.
  2. Understanding and Responding: It employs Natural Language Processing (NLP) to grasp the customer’s query, intent, and sentiment, then consults Salesforce’s knowledge base or other data sources to craft an appropriate response.
  3. Handling Complexities: If faced with a complex issue or needing human oversight, the agent can smoothly hand over the interaction to a human agent.
  4. Continuous Learning: The agent learns from each interaction, continuously enhancing its responses and overall performance.

Einstein Studio

Salesforce Einstein Studio provides a low-code platform that enables customers to integrate AI into their Salesforce applications and workflows, making AI technology accessible beyond data scientists.

Key features of the studio include:

Salesforce Einstein combines predictive and generative AI, leveraging the unified metadata framework of the Salesforce Platform and Data Cloud to deliver intelligent, personalized, and effective business solutions.

8. Application Platform Services

The Salesforce Platform’s App Ecosystem is distinguished by its integration of capabilities across App Platform Services, API, User Experience, and Developer Experience layers. App Platform Services are common capabilities that are used to build and customize most apps on the Salesforce Platform, whereas business capabilities are generally more solution-specific. This app ecosystem is built on five key capabilities, which guide the app development process.

  1. Tenancy: This involves the logical separation of data and metadata within a multitenant service, allowing authenticated users to access specific data and functionalities. This is most visible to customers when they receive a Salesforce Org upon registration.
  2. Entities: Representing database tables, entities consist of fields similar to table columns. Entity and Field metadata includes attributes for data modeling, like data types and API names, as well as functional attributes, such as whether the entity is queryable or writeable (illustrated in the sketch after this list). This abstraction, rather than direct manipulation of the data store itself, allows Salesforce to seamlessly introduce and switch storage technologies without requiring updates from IT developers, ensuring continuous app functionality.
  3. Access Controls: These controls regulate user access to data and features, primarily based on user identity and specific policies. Policies are made up of rules and feature toggles, and govern the entities, fields, and features that can be accessed. The policies and permissions are captured in “permission sets”, and access is granted by assigning permission sets to user identities.
  4. Layered Extension: As previously discussed, this supports the independent development of metadata and apps by different roles including Salesforce engineers, external partners, IT admins, and end-users, facilitated by structured save orders and metadata namespaces.
  5. Packaging: This feature allows for the bundling and distribution of metadata across Salesforce tenants, streamlining the update and distribution process of apps without the need for rebuilding.
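To make items 2 and 3 concrete in Apex terms, the short sketch below reads an entity's functional attributes through describe calls and applies field-level access controls with Security.stripInaccessible before returning records; the custom field is hypothetical.

    public with sharing class EntityAccessExample {
        // Reads Entity and Field metadata attributes instead of touching the data store directly.
        public static void describeAccount() {
            Schema.DescribeSObjectResult entity = Account.SObjectType.getDescribe();
            System.debug('API name: ' + entity.getName()
                + ', queryable: ' + entity.isQueryable()
                + ', createable: ' + entity.isCreateable());

            Schema.DescribeFieldResult field = Account.Industry.getDescribe();
            System.debug('Field type: ' + field.getType() + ', updateable: ' + field.isUpdateable());
        }

        // Applies the running user's field-level access controls to query results.
        public static List<Account> readableAccounts() {
            // Annual_Revenue_Tier__c is a hypothetical custom field used for illustration.
            List<Account> records = [SELECT Id, Name, Annual_Revenue_Tier__c FROM Account LIMIT 50];
            SObjectAccessDecision decision =
                Security.stripInaccessible(AccessType.READABLE, records);
            return (List<Account>) decision.getRecords();
        }
    }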

Beyond these key capabilities, App Platform Services also includes:

9. Automation

Automation is what makes an app dynamic and is crucial for the digital transformation of essential business processes.

Salesforce Process Automation was created to address key challenges faced by customers, including the need for streamlined and efficient business processes as organizations scale. These challenges often involve workflows that require excessive manual effort, leading to inefficiencies and higher operational costs. Customers seek a solution that can automate these processes, minimize manual labor, and maintain consistency and accuracy.

A significant issue was the absence of a user-friendly tool that allowed non-technical users to design and implement business processes without extensive coding skills. Moreover, there was a need for a solution that could integrate securely, scalably and seamlessly with existing automated Salesforce tasks such as data entry, approvals, notifications, and complex multi-step processes.

Salesforce Process Automation meets these needs by offering a robust yet intuitive platform for creating automated workflows. It enables users to build and customize flows through a visual interface, accessible to both technical and non-technical users, thus automating repetitive tasks, enforcing business rules, and streamlining processes within the Salesforce ecosystem.

9.1 Architecture and Capabilities

Visual Logic Builder: Customers and ISVs use the Flow Builder, a drag-and-drop interface, to create process automation flows without coding. This visual tool is user-friendly for all technical levels, allowing business analysts and administrators to easily design complex automations.

Flow Builder enables customers to create versatile flows that operate in various contexts, supported by the Core Flow Engine:

The Offline Flow Engine can run without a connection to the Salesforce app server. Offline Flow powers automation for Field Service mobile use cases.

The High-Scale Flow Engine powers marketing flows. It offers B2C scale for processing a high volume of long-running flows simultaneously.

All use cases and environments are enhanced by a unified metadata model in Flow Builder, which supports a variety of powerful logic elements applicable across all Process Automation flows:

9.2 Automation Across the Salesforce Platform

Salesforce Process Automation offers seamless integration with other Salesforce products and third-party systems, ensuring smooth data flow between applications for a unified view of business processes and customer interactions. It supports various integration methods such as APIs, web callouts, and MuleSoft connectors.

External Services and MuleSoft connectivity within Salesforce enable connections to external APIs and the use of their data within Salesforce Process Automation. Registering an API schema allows for the creation of invocable actions that integrate seamlessly into flows, facilitating the automation of processes that span external data sources. MuleSoft’s robust integration capabilities ensure seamless data flow between Salesforce and other applications, eliminating data silos.
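As a hedged sketch of how custom logic plugs into this action framework, the Apex class below exposes an invocable action that Flow Builder can call, and the trailing comment shows the reverse direction, launching an autolaunched flow from Apex; the class, flow, and field names are hypothetical.

    public with sharing class ShipmentActions {
        public class ShipmentRequest {
            @InvocableVariable(required=true)
            public Id orderId;
        }

        // Appears in Flow Builder as a drag-and-drop action once deployed.
        @InvocableMethod(label='Create Shipment' description='Creates a shipment for an order')
        public static void createShipments(List<ShipmentRequest> requests) {
            for (ShipmentRequest req : requests) {
                System.debug('Creating shipment for order ' + req.orderId);
                // ... call an External Services action or write records here ...
            }
        }
    }

    // Conversely, Apex can launch an autolaunched flow by its API name (hypothetical here):
    // Map<String, Object> inputs = new Map<String, Object>{ 'recordId' => someOrderId };
    // new Flow.Interview.Notify_Fulfillment_Team(inputs).start();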

10. User Experiences

The User Experiences capabilities on the Salesforce Platform enable end-users to interact with applications through various deployment options: browser-based Lightning Applications, Experience Sites, mobile-native apps, AI-oriented and collaborative experiences, or embedded components using Lightning Out.

10.1 Lightning Design System

The Salesforce Lightning Design System (SLDS) is a comprehensive design framework that fosters the creation of consistent and accessible user interfaces with Salesforce’s design principles for a cohesive user experience across all products. It empowers Salesforce engineers, customers, and partners to build applications that feel native across the Salesforce ecosystem.

The key features of the design system include:

The SLDS framework continues to evolve to support richer styling hooks and deeper customization capabilities so that components can be reused while still being customized to meet unique branding and theming requirements. Our design system aspiration is to make Salesforce fast, easy, and compelling to use with AI.

10.2 Lightning Web Stack

Salesforce’s browser-based interface, known as Lightning, offers a consistent UI container and a metadata-driven UI framework: a collection of technologies that lets Salesforce engineers, IT admins, developers, and partners rapidly develop UIs with a consistent Salesforce aesthetic, along with extension points for complete control to re-style and re-brand. The Lightning Web Stack includes several technologies:

Salesforce engineering has incorporated lessons from previous UI technologies and contributed to web standards bodies, influencing the development of standards-based component implementations. For example, Salesforce continues to be a member of approximately 20 W3C working groups. The Lightning Web Components and Lightning Web Stack align with these industry standards, reducing complexity for developers.

10.3 Mobile

Mobile continues to be a growing and critical interface for users to interact with Salesforce apps. Salesforce provides a native mobile app so all browser-based Lightning apps can become mobile apps without having to write new code. Salesforce also offers a spectrum of tools, SDKs, and capabilities for creating fully custom native apps optimized for devices. These include:

Mobile Customization Framework (MCF) significantly enhances the development of native Salesforce mobile applications by offering ease of use and extensive customization options. Key advantages include:

Offline and low-connectivity scenarios are an increased concern when using apps on mobile devices. The mobile technology stack prioritizes building apps that can be offline-first. Key features include:

Nimbus is the Platform’s production-ready solution that simplifies the process of accessing device capabilities for hybrid app developers. Traditionally, bridging the gap between JavaScript and mobile native code was a complex task. However, with Nimbus, developers can now harness the full potential of mobile devices without delving into low-level coding. Key features include:

As AI continues to transform what’s possible with Salesforce apps, Salesforce also provides a differentiated user experience by leveraging on-device, task-specific AI models alongside cloud-based solutions:

10.4 AI and Collaboration

Non-modal UI for natural language and multi-turn interactions with our apps will continue to grow in prevalence. Future developments are expected to enhance integration between models, device capabilities, and applications, improving user interactions through more intuitive voice and text interfaces. On-device metric collection will also allow for personalized adjustments based on user preferences.

Collaboration is essential among all users, including both humans and agents, to harness the combined strengths of automation and human oversight. This collaboration is particularly crucial for complex business interactions involving an organization’s employees and its customers. Slack serves as a primary tool within the Salesforce Platform, facilitating this interaction through direct messaging and multi-user channels tailored to specific discussion topics. These discussions can range from spontaneous, user-created conversations to more structured dialogues centered around specific data within a user’s workflow, such as a detailed Slack message thread addressing a significant customer issue.

Looking forward, the Salesforce Platform plans to enhance the collaborative experience currently provided by Slack. This expansion will aim to fully utilize the extensive capabilities of the platform, enriching the way users interact and collaborate within the digital workspace.

11. Developer Experience

The Developer Experience capabilities on the Platform provide tools for building, customizing, testing, and deploying apps, focusing on the spectrum of low-code through pro-code approaches, ensuring equal opportunities for developers of all skill levels.

11.1 AI for Developers

AI and “Developer Assistants” are revolutionizing the developer experience by simplifying and accelerating the creation of efficient, high-quality applications. At Salesforce, our AI Research and Developer Experience teams are continuously iterating and exploring how predictive and generative AI, along with specialized internal code models, can be transformed into powerful developer assistants. These assistants are natively integrated with tools developers already use, like VS Code and Code Builder, making them more relevant and impactful.

In the spirit of our core value of Innovation, one key advancement was the introduction of AI-based code analysis to identify anti-patterns and hotspots in Apex code and provide targeted recommendations to improve the implementation. The identified issues typically waste computing resources and often lead to incidents at high scale. This was launched as ApexGuru Insights in January 2024.

In the first year following its launch, over 2,800 Salesforce orgs have used ApexGuru to analyze and improve their Salesforce implementation. More than 22,000 recommendations have been successfully implemented, leading to a savings of 28,000 CPU hours each week. This enhancement not only boosts performance but also contributes to environmental sustainability by reducing CO2 emissions by 135 kg weekly, aligning with our core value of Sustainability and commitment to lower carbon emissions.

Another key development is around pro-code solutions for AI-based development, productized as “Agentforce for Developers”. These generally available extensions are integrated with the Salesforce Extension Packs in Visual Studio Code and Code Builder and add new capabilities that include:

As of the time of writing, there are over 31,000 developers actively using this technology monthly, with 4.2 million lines of code accepted. This comprehensive suite ensures a flexible, integrated, and efficient development environment, catering to a wide spectrum of development needs within the Salesforce Platform.

12. Application Suite

Our application clouds, including Sales Cloud, Service Cloud, Marketing Cloud, and Commerce Cloud, are built on the Salesforce Platform, offering leading business capabilities and composing our Application Suite to drive Customer Success. Key features include:

Salesforce is committed to enhancing its applications to deliver a unified set of capabilities, utilizing all the foundational technology outlined in this white paper. Central to this transformation are key priorities that steer the design and development of Salesforce’s application suite.

12.1 Scale and Performance

Our application teams specialize in performance and scalability, utilizing advanced performance labs to create exact replicas of our production environments with synthetic data. This setup allows for extensive simulation of parallel user journeys to ensure each new feature is thoroughly performance tested and its impact assessed. When bottlenecks are identified, we implement rate limits and other measures to protect system health while also gathering data to drive resolution.

Our systems are designed for horizontal scaling to utilize the flexibility of the public cloud effectively. Automated checks ensure that updates or enhancements don’t adversely affect performance. We employ predictive autoscalers that proactively manage system load, not just reacting to increased demand but anticipating and adjusting beforehand.

Autoscaling is crucial for minimizing cost to serve by reducing unused capacity. We monitor system running costs closely, identifying and addressing any inefficiencies in auto-scaling or resource use. While cost efficiency is important, we prioritize reliable application delivery, opting for autoscalers that scale up quickly and down slowly to maintain customer trust, even if it incurs higher costs.

12.2 Seamless Integration

Common Data Model

Data models are fundamental to all business operations at Salesforce, influencing APIs, navigation, UI displays, and the reports that can be created. They’re integral to the platform’s functionality.

Our application suite shares a common data model across Sales Cloud, Service Cloud, Commerce Cloud, Marketing Cloud, and Industries Cloud. This contributes to our integrated suite, providing consistent behavior and interoperability, and clear paths for upgrades and extensions.

For example, the sharing of Account and Product entities across all Clouds allows users in both Marketing Cloud and Sales Cloud to exchange data, metadata, UI components, and business logic. This integration helps break down silos and fosters cross-functional collaboration.

Virtual Data Platform

A common data model across all Salesforce Clouds significantly enhances integration but may not meet all complex partner integration needs. The Data Cloud Common Data Model expands on this by extending the shared data model benefits beyond Salesforce’s typical data boundaries, accommodating more extensive integration scenarios.

Layered Extensibility

Salesforce’s Metadata Framework allows various groups like engineering teams, ISVs, partners, admins, and end-users to customize and expand their applications within distinct layers of extensibility without interfering with each other. This structure supports a scalable environment where modifications by one group don’t disrupt others, maintaining system integrity.

A prime example of the Framework in action is the Unified Knowledge product, which integrates all knowledge sources into a data lake. This setup includes a semantic layer and retrievers, enhancing predictive and generative AI capabilities across Sales Cloud, Service Cloud, Marketing Cloud, and Commerce Cloud. It incorporates a data model for unstructured and semi-structured knowledge linked to the existing structured knowledge model.

Additionally, the Framework uses metadata to define custom relationships between data types, facilitating advanced query generation. This allows application teams to create customizable applications that leverage this comprehensive knowledge base, while ISVs, partners, and customers can further enhance application capabilities by modifying metadata relationships or developing custom retrievers for specific business use cases.

Enhance the Common Data Platform

Customer data is securely stored across various platforms like SalesforceDB and Data Cloud, and is standardized and normalized regardless of its structured or unstructured format. This ensures consistent data handling through a unified format known as the sObject, which supports a cohesive data platform across all customer data.

This standardization enables a single API for all data operations, a unified interface for triggers in Apex, and custom workflow creation with Flow. It also supports Einstein Analytics, allowing for customized data views and integration with generative AI tools like Prompt Builder for intelligent response generation based on customer data.
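A minimal sketch of that unified trigger interface is shown below; the same sObject abstraction applies to standard and custom objects alike, and the audit field used here is hypothetical.

    // Fires on a standard object; the same trigger syntax applies to custom objects.
    trigger ContactNormalization on Contact (before insert, before update) {
        for (Contact c : Trigger.new) {
            // sObject fields are read and written through one uniform interface.
            if (c.Email != null) {
                c.Email = c.Email.trim().toLowerCase();
            }
            // Last_Normalized__c is a hypothetical custom field used for illustration.
            c.Last_Normalized__c = System.now();
        }
    }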

Additionally, Salesforce applications integrate with various data stores to enhance business process flexibility within products. For example, in Marketing Cloud, Flow is used to manage multi-touch customer experiences, with options to use pre-designed templates or build custom Flows that integrate marketing with other business processes, all based on underlying customer data.

12.3 Shared Business Capabilities

Applications leverage and enhance shared services such as identity resolution, content orchestration, personalization, analytics, LLM Gateway, and Reasoning Services, enabling rapid innovation and delivery. These services support real-time data processing, AI-driven insights, and enriched user experiences, providing a comprehensive 360-degree customer view.

Benefits include improved efficiency through intelligent automation and predictive analytics, scalability for increasing data and user interactions, and robust security and compliance. The platform’s customization capabilities allow organizations to quickly adapt to changing needs, fostering growth and operational excellence.

Innovation at the Application Tier is propelled by the Salesforce Platform and individual applications, enhancing the Salesforce ecosystem and establishing applications as industry leaders.

12.4 Multi-channel and Channel-optimized

Salesforce applications are designed to meet users across a variety of platforms, including web, mobile, email, SMS, WhatsApp, and other channels. They optimize each channel’s native capabilities to enhance user experience and efficiency.

Features include multi-month offline capabilities for Salesforce Field Service users; browser push notifications and wide-screen layouts for service agents in the Lightning Service Console; and high-performance storefronts and co-pilots for Commerce shoppers.

The metadata platform ensures that Salesforce, its partners, and customers can immediately benefit from these capabilities right out of the box.

12.5 Rapid Delivery of Innovation

Salesforce’s Foundation Services, Platform, and Shared Business Capabilities allow applications to quickly adapt to market shifts and technological trends, enabling rapid innovation delivery. For instance, with the advent of generative AI, Salesforce swiftly utilized existing AI services like the NLP Trust Layer and Intent Detection to incorporate prompt templates into the Universal Communications Platform. This integration enhances messaging and phone functionality across products, facilitating more personal client connections.

Following the trend towards autonomous AI, Salesforce launched Agentforce, a solution that capitalizes on these existing investments to automate business use cases with agents efficiently, without the need for building from scratch.

12.6 Path to the Application Suite

We’ve rebuilt Marketing Cloud and Commerce Cloud on the Salesforce Platform, enabling these Clouds to share the same infrastructure, platform, metadata, data, AI, UI components, and business logic while benefiting from the full power of the Salesforce Platform. This also enables seamless integration across all of our clouds, and the capabilities that Commerce Cloud and Marketing Cloud deliver become part of the shared business capabilities that other applications can leverage. This is our integrated application suite vision delivered.

The Salesforce Platform’s journey has led to the development of an integrated application suite that combines Sales Cloud, Service Cloud, Marketing Cloud, and Commerce Cloud into a unified solution. Available from the Salesforce Starter Edition onwards, this suite offers multi-channel outreach, customer relationship management, and business insights in one cohesive package. Regardless of the edition chosen, users can access the core capabilities of Sales Cloud, Service Cloud, Marketing Cloud, and Commerce Cloud, ensuring a consistent experience across all levels.

13. Industry-Specific Solutions

Salesforce Industries products for Financial Services, Health, Life Sciences, Media, Energy and Utilities, Manufacturing, Auto, Consumer Goods, Retail, Net Zero, Public Sector, Education, and Nonprofit, extend our application products and platform to provide tailored solutions that address industries’ unique challenges. They streamline operations and enhance productivity by incorporating industry-specific workflows, compliance measures, and data models.

13.1 Layered Architecture Approach

Our products utilize a layered architecture. At the base is the Salesforce Platform and horizontal applications like Sales Cloud and Service Cloud, serving as the foundation for all industry solutions. Above this layer, there’s a reusable business logic layer that encapsulates horizontal capabilities such as feedback management, CPQ (config, price, quote), and service management.

The top layer features domain-specific customizations tailored to meet specific industry requirements, leveraging the underlying platform for enhanced scalability and efficiency. For example, in the manufacturing vertical, this setup optimizes production planning through accurate forecasting. In the life sciences sector, it provides pharma sales teams with mobile offline solutions that efficiently manage workflows and sample handling while complying with various geographic regulatory requirements.

13.2 Industry-Specific Capabilities

Trusted AI Excellence: Our trusted generative AI solutions provide industry-specific AI capabilities. These include agents and prompt engineering, which facilitate low-code/no-code automation and digitization in sectors such as healthcare, life sciences, and financial services. Additionally, features like document/text mining and summarization cater to industries handling large volumes of data, aiding in information extraction and insights gathering.

Customized agents enable three-way communication among the customer, the human agent, and the AI agent, leading to quicker resolutions. The trust layer of the Salesforce Platform facilitates adherence to compliance and regulatory standards across industries.

Data, Insights, and Intelligence with Regulatory Compliance and Security: Salesforce Industries offers a comprehensive 360º view with stringent data privacy, sharing, and security measures tailored to specific industry regulations like GDPR, HIPAA, and FedRAMP. Salesforce integrates data from various sources while enabling compliance and security, and enhances these solutions with additional features like Shield Encryption BYOK (Bring Your Own Keys) for tenant data encryption.

Elevated User Experience: Salesforce Industries emphasizes a seamless user experience that is tailored to industry-specific needs to enhance the user journey. This includes tools like the Actionable Resource Center, Experience Cloud templates, and OmniStudio-based solutions.

Digitization, Integration, and Onboarding: Salesforce Industries provides digitization, integration, and onboarding through low-code to no-code solutions, leveraging tools like Flows and OmniStudio for new customers and offering migration solutions for existing CRM systems. Integration with external systems and data is streamlined via the connectors offered by MuleSoft. Salesforce also includes industry-specific service processes, such as dispute management for retail banking.

Mobile and Offline: Salesforce Industries provides robust domain-specific support for the Salesforce Mobile App and Field Service Mobile App. For highly specialized domains requiring advanced offline support, Industries provides bespoke Mobile Apps built on Salesforce Mobile SDKs.

Common Business Capabilities: Salesforce Industries builds on a foundation of common business capabilities, enabling consistency and productivity while tailoring solutions to unique industry needs, such as different appointment booking systems for banks and hospitals. Integrated with the broader Salesforce ecosystem, Salesforce provides a holistic Customer 360 view, making it a vital part of the Salesforce clouds and products.

14. Analytics

For years, the Analytics and Business Intelligence (BI) platform market has promoted visual self-service and AI-driven automated insights for end users to help them make quicker, more data-driven decisions. However, we know not everyone has seen this come to fruition due to several challenges:

  1. Disconnected Insights: Insights aren’t integrated into users' workflows, making it difficult to take action on the insights, despite their potential to inform decision-making.
  2. Data Overload and Silos: Data continues to grow rapidly and remains compartmentalized, leading to disorganization and security risks. Organizations face a dilemma between a chaotic, self-service data environment and a restrictive, well-governed data environment.
  3. Distrust in Data: The expansion and fragmentation of data has eroded users' trust in the insights derived from company data.
  4. Lack of Composability: There’s a significant absence of composability and reuse in work processes, forcing users to repeat tasks and leaving no clear avenues for monetization.

Tableau Einstein is designed to broaden the cycle of visual analytics by bringing together business users and data professionals, facilitated by AI. It provides timely, trusted metrics and insights via the Salesforce Platform, enabling ubiquitous access to actionable insights.

Analytics

Tableau Einstein addresses these challenges by:

14.1 Best of Class Analytics Experiences

Tableau Einstein builds on Tableau’s leadership in data analysis tools by offering an open platform that enhances capabilities and integrates experiences. Key features include:

14.2 AI-Powered Analytics

Tableau Einstein is built with AI as a leading design principle, enhancing its ability to deliver highly connected, trusted, and collaborative AI-powered data tools.

14.3 Integrated Solution-Driven Business User Experiences

Tableau Einstein enhances business user experiences across various platforms like Slack and Salesforce, and through new analytics features like Tableau Pulse, all accessible via Einstein Agent to simplify analytics engagement. Key aspects include:

14.4 Tableau Semantics

The Tableau Semantic Layer serves as a crucial bridge between raw data and user interpretation, simplifying data analysis, decision-making, and application development, and enhancing AI-driven context and retrieval. Key features include:

14.5 Integrated Workflows and Actionable Insights

Tableau Einstein offers integrated solutions that enhance data-driven decision-making and trusted automation, featuring simple actions, predefined Flows, scheduling, and API integrations. Key components include:

14.6 Composable Developer Platform

Tableau Einstein offers a composable developer platform with no-code, low-code, and pro-code options for application development, all utilizing Tableau Semantics on Data Cloud. Key offerings include:

15. Integration

While the Salesforce Platform offers a comprehensive suite of integration capabilities to address a broad range of digital challenges, many customers operate within enterprise architectures that have developed over time through the use of various vendors and technologies.

Modern enterprises face challenges with system integration and business process automation, often resulting in data silos and inefficiencies. The Salesforce Integration Platform, leveraging the power of MuleSoft, tackles these issues by facilitating the rapid development and enhancement of automated processes. It ensures seamless system connectivity, enhances information flow, and supports decision-making across different platforms, thereby reducing labor costs and automation expenses. This layer is crucial for creating, managing, and monitoring integrations between Salesforce services and other custom or third-party services.

Systems are defined through APIs, which serve to:

For effective communication, APIs are described using the OpenAPI Specification (OAS) for immediate synchronous exchanges, and AsyncAPI for asynchronous, event-driven communications. The Salesforce Integration Layer provides robust capabilities to integrate and manage any system, enhancing connectivity with Salesforce's data, AI, and app functionalities, regardless of whether the systems are native to Salesforce or from other providers.
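From the Salesforce side, a synchronous exchange against such an OAS-described API commonly surfaces as an HTTP callout. The sketch below assumes a Named Credential called Order_API has already been configured for the external system; the endpoint path is hypothetical.

    public with sharing class OrderApiClient {
        public class OrderApiException extends Exception {}

        // Calls an OAS-described REST endpoint through a pre-configured Named Credential,
        // which keeps authentication details out of the code.
        public static String getOrderStatus(String orderNumber) {
            HttpRequest req = new HttpRequest();
            req.setEndpoint('callout:Order_API/orders/' + orderNumber + '/status');
            req.setMethod('GET');
            req.setTimeout(10000); // milliseconds

            HttpResponse res = new Http().send(req);
            if (res.getStatusCode() != 200) {
                throw new OrderApiException('Order API returned ' + res.getStatusCode());
            }
            return res.getBody();
        }
    }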

Complex integrations require advanced transformations and robust tools, including universal connectivity, an integrated development environment (IDE) for building integration workloads, and a runtime platform to deploy, manage, and oversee these integrations.

To further expedite the integration process, we offer accelerators and industry-specific templates that encode common integration patterns and needs.

15.1 Universal Connectivity

Salesforce’s modern approach to universal connectivity is interpreted connectivity: a metadata-centric approach for developing connectors that can be executed against any engine for any use case without programming. The metadata models describe how to connect to remote services to:

For systems not using HTTP-based APIs, Salesforce offers hundreds of pre-built connectors and a complete SDK for building custom connectors. For systems without any API access, Salesforce offers Robotic Process Automation (RPA), which uses agents to automate repetitive, rule-based tasks that are typically performed by humans, such as data entry, transaction processing, and responding to simple customer service queries. To extract information from documents, Salesforce offers Intelligent Document Processing (IDP), which leverages AI to automatically extract, classify, and process data from various types of documents, such as invoices, contracts, and forms. However the information exists, Salesforce offers an automated way to retrieve and manipulate it.

15.2 Next Generation Integration Development Environment

MuleSoft Anypoint Code Builder (ACB) is our next-generation IDE designed for API and integration development, featuring a modern, unified experience built on VS Code. It offers several key capabilities:

15.3 Runtime Platform

To effectively run built connectors and workflows, a runtime platform is essential. Our MuleSoft Anypoint Platform manages the lifecycle of integration workloads that implement integration processes, including development, testing, deployment of workloads to production, and retirement.

The runtime platform offers a complete solution for the most complex integration needs.

16. Ecosystem and AppExchange

The Salesforce Ecosystem, a key effect of the Salesforce Platform, encompasses a comprehensive network of partners, including System Integrators (SIs) and consulting partners who assist customers in developing, configuring, and optimizing complex Salesforce solutions; and Independent Software Vendors (ISVs) who create applications and solutions on the platform. These ISV Apps are available on AppExchange, Salesforce's application store launched in 2006, which now features over 9000 applications with more than 13 million installs as of June 2024.

AppExchange ensures high-quality solutions through a rigorous review process involving code analyzers, security scanners, and reference implementation guides, all in close collaboration with Salesforce. This platform also provides ISVs with license management tools to tailor application licensing and monetization, supporting various pricing models including user-based and consumption-based options.

The "metadata-driven platform" principles enable ISVs to extend Salesforce’s native apps and metadata, easing the development of data models, business logic, and user interfaces. The Salesforce Platform supports a broad range of solutions, from industry-specific applications to highly customized, branded apps that utilize technologies like Lightning Web Components for UI and Apex Code for business logic.

The concept of "packaging" is crucial for the distribution of these apps across various Salesforce orgs. Packaging involves the serialization of metadata into an artifact that can be installed by any Salesforce customer, using underlying technologies designed for metadata management across various environments. A unique aspect of packaging is that it allows installations in environments unknown to the developer.

To enhance control and safety, “manageability” features within packaging enable ISVs to safely upgrade the parts of an application that customers cannot modify or depend on, while allowing customers to own and manage other parts. For instance, ISVs can set certain metadata, like custom settings, to “managed”, making them invisible and non-editable by the customer, thus preventing disruptions in the customer’s environment. Managed packages include these manageability controls, whereas unmanaged packages treat deployed metadata as customer-created, which can’t be upgraded post-deployment.

Since the inception of the AppExchange and the Platform, there’s been a notable increase in both the number and complexity of packages being created and installed. In response to these demands, the Platform introduced the Second-Generation Packaging Architecture in 2020. This new architecture enhances the modularity of managed packages, improves versioning flexibility, allows for namespace sharing, and supports declarative dependencies, among other advancements in the software development lifecycle.

A critical measure for the development of new products and features is their compatibility with packaging and readiness for ISV use. The platform emphasizes the rapid availability of its capabilities to partners, enabling the Salesforce Ecosystem to leverage the innovative potential of the Salesforce Platform effectively and beyond Salesforce’s out-of-the-box offerings. This, however, is an area of ongoing investment to ensure all capabilities described in this document that are available to Salesforce internal developers are also available to our ISV developers.

17. Salesforce on Salesforce

In the spirit of our core value, Customer Success, Salesforce acts as “Customer Zero” for all applications and services on the Salesforce Platform, leveraging customer-facing products internally where possible. This provides significant advantages:

Additionally, all software updates destined for production are initially deployed to a dedicated "Salesforce on Salesforce" Hyperforce instance as part of a staggered deployment process. Since August 2020, this instance has successfully hosted GUS, Salesforce’s org for engineering teams, as well as Salesforce’s CRM operations, showcasing Hyperforce’s robustness and readiness for any customer. This strategy allows internal teams to test and surface any issues well before production deployments to external customers.

18. Delivering Transformation at Scale

Since its founding in 1999, Salesforce has experienced multiple technology transformations. However, the transformation involving the Salesforce Platform was particularly significant due to its scale and the rapid pace at which changes were implemented. This transformation required a simultaneous evolution of all major architectural components to achieve an integrated platform. To ensure this transformation was iterative and minimally disruptive to stakeholders and trailblazers, the Salesforce Technology group also had to evolve its engineering and product delivery practices.

The Salesforce Technology group is a large and diverse team, comprising over 3000 teams located across 23 sites in 14 different countries. This group operates on a grand scale, delivering more than 200 product releases and implementing 250,000 system changes each week. In line with the broader company ethos, the Technology group is guided by five core values: Trust, Customer Success, Innovation, Equality, and Sustainability. These values are integral to shaping the group’s strategy, guiding its execution, and influencing daily decisions.

Adhering to our core values, the Salesforce Engineering 360 framework equips engineering teams with action-oriented dashboards and comprehensive insights into their operations, setting clear expectations for standards and best practices within the organization. This holistic view encompasses various critical areas, including availability, security, compliance, quality, accessibility, developer productivity, agile product development, and cost efficiency. To provide these insights, the framework processes billions of records from hundreds of internal engineering systems, such as security systems, production health logs, code repositories, development environments, CI/CD, and release/work planning and tracking systems, all built on the Salesforce Platform utilizing the latest innovations from Data Cloud, Tableau, AI, and Slack.

Thanks to our top value of Trust, service ownership is deeply rooted in our engineering culture. Each service and product is designed to not only meet but exceed its Service Level Objectives (SLOs) related to availability and incident management metrics like TimeToDetect and TimeToRestore. Our approach to change management, release readiness, and problem management adheres to high standards. Security is integrated into every phase of our Secure Development Lifecycle, adhering to the secure-by-default principle. Quality and performance are prioritized through the Agile Testing Methodology, which includes millions of automated tests across unit, functional, integration, and load/scale suites within our CI/CD pipelines.

Architecturally, we focus on developing shared capabilities to enhance leverage and efficiency, thereby improving quality. For instance, we’ve developed managed services within Hyperforce to meet diverse needs such as compute and data management, enabling product teams to focus on product innovation while central teams enhance these services in terms of security, availability, and cost-efficiency.

Our operations are agile, fostering innovation delivery to customers. Each of the over 3000 teams has autonomy in how it implements the agile framework, using either Scrum or Kanban. Product development planning across the organization is structured across several horizons: a 3-5-year long-range plan sets strategic direction, annual execution plans follow from it, and these are further broken down into 4-month product release plans that inform bi-weekly sprint plans. Products, features, and bug fixes are deployed through multiple release vehicles to cater to diverse customer needs, including three major annual releases, bi-weekly releases, and daily releases.

Productivity is critically important given our scale. We utilize the SPACE framework to measure productivity effectively, supported by a comprehensive set of metrics provided by the Engineering 360 system. To enhance engineering efficiency, significant investments are made in AI, which saves developers an average of 25 minutes per day. We also focus on improving tools and experiences for our internal developers to streamline the development lifecycle, with investments in workflow, build tools, development setups, safer releases, and security services yielding significant benefits.

19. Conclusion

In conclusion, the Salesforce Platform has undergone a remarkable transformation over the past four years, evolving from the pioneering multitenant cloud platform into a trusted, integrated, AI- and data-empowered platform that powers a suite of applications and services in each customer’s region of choice. This evolution was driven by the need to address emerging challenges such as the rise of public cloud providers, increasing regulatory demands, and advancements in AI and machine learning.

The introduction of Hyperforce, Data Cloud, and generative AI technologies has significantly enhanced the platform’s capabilities, ensuring it remains at the forefront of innovation while maintaining trust and reliability. The successful migration of the majority of our customers to this new platform underscores the ingenuity and dedication of our engineers.

As we continue to innovate and adapt to changing market demands, the Salesforce Platform is well-positioned to support the next generation of applications and customer use cases, reaffirming our commitment to customer success and technological excellence.