Our forward-looking statement applies to the roadmap projections in this guide, which reflect plans as of August 2024.

Guide Overview

Event-driven architectures support the efficient production and consumption of events, which represent state changes. They make it possible to build more flexible connections between systems, support processes that take place across systems, and enable a system to provide near real-time updates to one or more other systems. While the advantages of event-driven architectures are easy to see, implementation details are not always as clear. What capabilities do you need to consider in event-driven architectural patterns and what specific problems do these patterns solve? What are the special considerations or optimal patterns for implementing event-driven architectures that involve Salesforce?

This guide walks through the landscape of eventing tools available from Salesforce, covers our recommendations for the tools (or combinations of tools) that are most appropriate for various use cases, and addresses patterns for building optimal event-driven architectures with Salesforce technologies. For information about data-level integrations involving Salesforce, see our Data Integration Decision Guide.

Key Takeaways

Tools for Event-Driven Architecture with Salesforce

Salesforce offers multiple tools and patterns that you can use in your event-driven architecture. This table contains a high-level overview of tools that are available from Salesforce.

| Tool | Description | Required Skills |
| --- | --- | --- |
| MuleSoft Anypoint Platform | Platform that enables data integration using layers of APIs. | Pro-code |
| Composer | Declarative integration tool that enables users to build process automations for data without writing code. | Low-code |
| MuleSoft Anypoint JMS Connector | Connector that enables sending and receiving messages to queues and topics for any message service that implements the Java Message Service (JMS) specification. | Pro-code |
| MuleSoft Anypoint Apache Kafka Connector | Connector that moves data between Apache Kafka and enterprise applications and services. | Pro-code |
| MuleSoft Anypoint Solace Connector | Connector for Solace PubSub+ event brokers with native API integration using the JCSMP Java SDK. | Pro-code |
| MuleSoft Anypoint MQ Connector | Connector for Anypoint MQ, a multi-tenant cloud messaging service that enables customers to perform advanced asynchronous messaging among their applications. | Pro-code |
| MuleSoft Anypoint MQTT Connector | MuleSoft extension compliant with the MQTT (Message Queuing Telemetry Transport) v3.x protocol. | Pro-code |
| MuleSoft Anypoint AMQP Connector | Connector that enables your application to publish and consume messages using an AMQP 0.9.1-compliant broker. | Pro-code |
| MuleSoft Anypoint Event-Driven (ASync) API | Industry-agnostic language that supports the publication of event-driven APIs by separating them into event, channel, and transport layers. | Pro-code |
| MuleSoft Anypoint MQ | Multitenant cloud messaging service that enables customers to perform advanced asynchronous messaging between their applications. | Pro-code |
| MuleSoft Anypoint Data Streams | Framework available within MuleSoft Anypoint for publishing and subscribing to streaming data. | Pro-code |
| Apache Kafka on Heroku | Heroku add-on that provides Apache Kafka as a service with full integration into the Heroku platform. | Pro-code |
| Change Data Capture | Publishes change events, which represent changes to Salesforce records: creation of a new record, updates to an existing record, deletion of a record, and undeletion of a record. | Low-code to Pro-code |
| Outbound Messages* | Actions that send XML messages to external endpoints when field values are updated within Salesforce. | Low-code |
| Platform Events | Secure and scalable messages that contain custom event data. | Low-code to Pro-code |
| Pub/Sub API | API that enables subscriptions to platform events, Change Data Capture events, and Real-Time Event Monitoring events. | Pro-code |
| Event Relays** | Enable platform events and change data capture events to be sent from Salesforce to Amazon EventBridge. | Low-code |
| Generic Events (Legacy)*** | Custom events with arbitrary payloads that are not tied to Salesforce data changes. | Low-code to Pro-code |
| PushTopic Events (Legacy)*** | Events that provide a secure and scalable way to send and receive notifications of Salesforce data changes matching a user-defined SOQL query. | Hybrid |

*Salesforce will continue to support Outbound Messages within current functional capabilities, but does not plan to make further investments in this technology.
**Event Relays connect only to Amazon EventBridge.
***Salesforce will continue to support PushTopic and Generic Events within current functional capabilities, but does not plan to make further investments in this technology.

Patterns At-a-Glance

The table below compares the various attributes of the patterns outlined in this document. Use it as a quick reference when you need to identify potential patterns for a given use case.

Pattern Near Real-Time Unique Message Copy Guarantee Delivery Identify Message Recipients Reduce Message Size Transform Data
Publish / Subscribe (Unique Copy) X X X X
Fanout X X
Claim Check X X X X
Passed Messages X X X X X
Streaming X X X X
Queuing X X X X

Pattern Walkthroughs

There are a variety of event-driven architecture patterns. Some are general purpose patterns that can be used in scenarios that don’t have any special requirements outside of being event-driven. (See Well-Architected - Interoperability for more information.) Others are applicable to specific use cases, such as integrations involving large data volumes or use cases that call for longer message retention. This section covers these patterns, along with example use cases and implementation tools that are available from Salesforce for each pattern.

Publish / Subscribe

The diagram below depicts a typical publish / subscribe pattern with multiple publishers and subscribers sharing data through an event bus. This foundational pattern forms the basis for the more specific patterns that can be found throughout the rest of this guide. Some key characteristics of this pattern are:

This Level 2 diagram shows an example of the publish / subscribe pattern that includes multiple publishers, multiple subscribers, and multiple events being delivered through channels in an event bus. In this pattern, the same system can be both a publisher and a subscriber and a system can subscribe to multiple events.
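
To make the mechanics concrete, here is a minimal, self-contained sketch of the pattern in Python. It is an illustration only (the event bus, channel names, and services are hypothetical, not Salesforce or MuleSoft APIs), but it shows channels, multiple subscribers per channel, and a system acting as both publisher and subscriber:

```python
from collections import defaultdict

class EventBus:
    """Toy event bus: each channel maps to a list of subscriber callbacks."""
    def __init__(self):
        self.channels = defaultdict(list)

    def subscribe(self, channel, callback):
        self.channels[channel].append(callback)

    def publish(self, channel, event):
        # Deliver the event to every subscriber on the channel.
        for callback in self.channels[channel]:
            callback(event)

bus = EventBus()

# An order service that is both a subscriber (to payments) and a publisher.
def order_service(event):
    print(f"Order service received: {event}")
    bus.publish("orders/updated", {"order_id": event["order_id"], "status": "paid"})

bus.subscribe("payments/completed", order_service)
bus.subscribe("orders/updated", lambda e: print(f"Shipping service received: {e}"))
bus.subscribe("orders/updated", lambda e: print(f"Analytics service received: {e}"))

bus.publish("payments/completed", {"order_id": 42, "amount": 99.95})
```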


Tools Relevant to the Publish / Subscribe Pattern

| Available Tools | Required Skills | Publish Via | Subscribe Via | Replay Period | Payload Structure | Payload Limits |
| --- | --- | --- | --- | --- | --- | --- |
| MuleSoft Anypoint Platform | Pro-code | APIs | N/A | As Configured | User Defined | None |
| Composer | Low-code | APIs, Composer flows | Composer flows | As Configured | User Defined | None |
| MuleSoft Anypoint JMS Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint Apache Kafka Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint Solace Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint MQ Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint MQTT Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint AMQP Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint Event-Driven (ASync) API | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint MQ | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint Data Streams | Pro-code | APIs | APIs | As Configured | User Defined | None |
| Apache Kafka on Heroku | Pro-code | APIs, record changes in Heroku Postgres | N/A | 1-6 weeks | User Defined | User Defined |
| Change Data Capture | Low-code to Pro-code | Record changes | Apex, APIs, Lightning Web Components (LWC) | 3 days | Predefined | 1 MB |
| Outbound Messages* | Low-code | Flow and Workflow Rules | N/A | 24 hours | User Defined | 100 notifications per message |
| Platform Events | Low-code to Pro-code | APIs, Apex, Flow | Apex, APIs, Flow, LWC | 3 days** | User Defined | 1 MB |
| Pub/Sub API | Pro-code | Pub/Sub API or APIs, Apex, Flow | Pub/Sub API | 3 days | User Defined | 1 MB |
| Event Relays*** | Low-code | Platform Events, Change Data Capture | API | 3 days | User Defined | 1 MB |
| Generic Events (Legacy)**** | Low-code to Pro-code | APIs | APIs, LWC | 1 day | User Defined | 3,000 characters |
| PushTopic Events (Legacy)**** | Hybrid | Record changes | APIs, LWC | 1 day | User Defined | 1 MB |

*Salesforce will continue to support Outbound Messages within current functional capabilities, but does not plan to make further investments in this technology.
**Standard-volume platform events are retained for one day.
***Event Relays connect only to Amazon EventBridge.
****Salesforce will continue to support PushTopic and Generic Events within current functional capabilities, but does not plan to make further investments in this technology.

Additional Resources

Publish / Subscribe (Unique Copy)

With the publish / subscribe (unique copy) pattern, publishers and subscribers exchange messages via one or more channels in an event bus. Subscribers listen to the appropriate channels and receive new messages as they arrive. Unique copies of each message are sent to each subscriber, which makes it possible to guarantee delivery and identify which subscribers received which messages. Systems can also use replay functionality to recover past events, which helps to ensure resilience against system failures.
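
The sketch below illustrates the unique copy variant in plain Python (all names are hypothetical; this is not how any specific Salesforce tool is implemented). Each subscriber receives its own copy of every event, and a retained log keyed by replay ID lets a subscriber recover events it missed, loosely mirroring how replay IDs work for platform events:

```python
import copy

class UniqueCopyChannel:
    """Toy channel: keeps a log for replay and hands each subscriber its own copy."""
    def __init__(self):
        self.log = []          # (replay_id, event) pairs retained for replay
        self.subscribers = {}  # subscriber name -> callback

    def subscribe(self, name, callback):
        self.subscribers[name] = callback

    def publish(self, event):
        replay_id = len(self.log) + 1
        self.log.append((replay_id, event))
        for name, callback in self.subscribers.items():
            # Each subscriber gets its own copy, so delivery can be
            # tracked (and retried) per subscriber.
            callback(replay_id, copy.deepcopy(event))

    def replay(self, name, after_replay_id):
        """Redeliver events a subscriber missed, e.g. after an outage."""
        for replay_id, event in self.log:
            if replay_id > after_replay_id:
                self.subscribers[name](replay_id, copy.deepcopy(event))

channel = UniqueCopyChannel()
channel.subscribe("erp", lambda rid, e: print(f"ERP got {rid}: {e}"))
channel.publish({"record": "001xx0000001", "change": "update"})
channel.subscribe("warehouse", lambda rid, e: print(f"Warehouse got {rid}: {e}"))
channel.replay("warehouse", after_replay_id=0)  # catch up on missed events
```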

This Level 3 documentation and implementation diagram shows an example of the publish / subscribe pattern that depicts an event being published when a record is modified via a user interaction, a flow, or a batch job. Subscribers receive their own unique copies of events and make the appropriate updates to their own records. In this pattern, the message channel is aware of all the subscribers and creates event copies for each one, enabling subscribers to replay events if needed.


Tools Relevant to the Publish / Subscribe (Unique Copy) Pattern

| Available Tools | Required Skills | Publish Via | Subscribe Via | Replay Period | Payload Structure | Payload Limits |
| --- | --- | --- | --- | --- | --- | --- |
| MuleSoft Anypoint JMS Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint Apache Kafka Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint Solace Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint Event-Driven (ASync) API | Pro-code | APIs | APIs | As Configured | User Defined | None |
| Change Data Capture | Low-code to Pro-code | Record changes | Apex, APIs, Lightning Web Components (LWC) | 3 days | Predefined | 1 MB |
| Platform Events | Low-code to Pro-code | APIs, Apex, Flow | Apex, APIs, Flow, LWC | 3 days* | User Defined | 1 MB |
| Pub/Sub API | Pro-code | Pub/Sub API or APIs, Apex, Flow | Pub/Sub API | 3 days | User Defined | 1 MB |
| Event Relays** | Low-code | Platform Events, Change Data Capture | API | 3 days | User Defined | 1 MB |

*Standard-volume platform events are retained for one day.
**Event Relays send data only to Amazon EventBridge.

Business Use Case Examples

The publish / subscribe (unique copy) pattern is a good fit for scenarios in which a single source system needs to send the same message to multiple target systems. Here are some common examples:

Additional Resources

Fanout

With the fanout pattern, messages are delivered to one or more destinations (that is, listening clients or subscribers) through a single message queue. Subscribers retrieve the same message from the queue, rather than their own unique copies. While this can improve performance, it also makes it more difficult to verify whether a subscriber received a message.
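
Here is a minimal Python illustration of that tradeoff (hypothetical names; not a production broker): one shared queue, one dispatch per message, and no per-subscriber delivery tracking, so a service that attaches late never sees earlier messages:

```python
from collections import deque

class FanoutQueue:
    """Toy fanout: one queue, one shared message instance per event.

    The queue does not track per-subscriber delivery, so a service that
    is detached when a message is dispatched simply never sees it.
    """
    def __init__(self):
        self.queue = deque()
        self.services = []

    def attach(self, service):
        self.services.append(service)

    def publish(self, message):
        self.queue.append(message)

    def dispatch(self):
        # Drain the queue, handing the same message object to every
        # currently attached service, then discard it.
        while self.queue:
            message = self.queue.popleft()
            for service in self.services:
                service(message)

fanout = FanoutQueue()
fanout.attach(lambda m: print(f"Billing saw {m}"))
fanout.attach(lambda m: print(f"Audit saw {m}"))
fanout.publish({"record": "001xx0000001", "change": "update"})
fanout.dispatch()  # both services receive the same message instance
```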

This Level 3 documentation and implementation diagram shows an example of the fanout pattern. It depicts an event being published and written to a single queue when a record is modified via a user interaction, flow, or batch job. The subscriber system has multiple services that receive the same event from the message queue.


Tools Relevant to the Fanout Pattern

| Available Tools | Required Skills | Publish Via | Subscribe Via | Replay Period | Payload Structure | Payload Limits |
| --- | --- | --- | --- | --- | --- | --- |
| MuleSoft Anypoint JMS Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint Apache Kafka Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint Solace Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint MQ Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint MQTT Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint AMQP Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint MQ | Pro-code | APIs | APIs | As Configured | User Defined | None |
| Apache Kafka on Heroku | Pro-code | APIs, record changes in Heroku Postgres | N/A | 1-6 weeks | User Defined | User Defined |
| Change Data Capture | Low-code to Pro-code | Record changes | Apex, APIs, Lightning Web Components (LWC) | 3 days | Predefined | 1 MB |
| Platform Events | Low-code to Pro-code | APIs, Apex, Flow | Apex, APIs, Flow, LWC | 3 days* | User Defined | 1 MB |
| Pub/Sub API | Pro-code | Pub/Sub API or APIs, Apex, Flow | Pub/Sub API | 3 days | User Defined | 1 MB |
| Event Relays** | Low-code | Platform Events, Change Data Capture | API | 3 days | User Defined | 1 MB |

*Standard-volume platform events are retained for one day.
**Event Relays send data only to Amazon EventBridge.

Business Use Case Example

Additional Resources

Claim Check

With the claim check pattern, the complete representation of the data is not passed through the event bus. Instead, the message body is stored independently, and a message header containing a pointer to where the data is stored (the claim check) is sent to the subscribers. The main benefits of this pattern are lower data volumes moving through the event bus and an increased likelihood that messages will fit within the size limitations of the subscribing systems.
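
As a rough illustration, the Python sketch below (hypothetical names; the data store is a stand-in for object storage or an external database) stores a large body out of band, builds the small header that would travel through the event bus, and lets the subscriber redeem the claim check when it is ready to process the payload:

```python
import uuid

data_store = {}  # stand-in for object storage or an external database

def publish_with_claim_check(large_body):
    """Store the body out of band; return the lightweight event header."""
    claim_check = str(uuid.uuid4())
    data_store[claim_check] = large_body
    # Only this small header travels through the event bus.
    return {"claim_check": claim_check, "size": len(large_body)}

def handle_event(header):
    """Subscriber redeems the claim check when ready to process the payload."""
    body = data_store[header["claim_check"]]
    print(f"Fetched {header['size']}-byte body: {body[:24]}...")

header = publish_with_claim_check("x" * 5_000_000)  # far too big for the bus
handle_event(header)
```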

This Level 3 documentation and implementation diagram shows an example of the claim check pattern that depicts an event being published when a record is modified. The message body of the event is stored in a separate data store while the header, which contains a claim check, is passed to a subscriber. The subscriber then uses the claim check to retrieve the message body when it's ready to process the information.


Tools Relevant to the Claim Check Pattern

| Available Tools | Required Skills | Publish Via | Subscribe Via | Replay Period | Payload Structure | Payload Limits |
| --- | --- | --- | --- | --- | --- | --- |
| MuleSoft Anypoint JMS Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint Apache Kafka Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint Solace Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint MQ Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint MQTT Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint AMQP Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint MQ | Pro-code | APIs | APIs | As Configured | User Defined | None |
| Apache Kafka on Heroku | Pro-code | APIs, record changes in Heroku Postgres | N/A | 1-6 weeks | User Defined | User Defined |
| Change Data Capture | Low-code to Pro-code | Record changes | Apex, APIs, Lightning Web Components (LWC) | 3 days | Predefined | 1 MB |
| Platform Events | Low-code to Pro-code | APIs, Apex, Flow | Apex, APIs, Flow, LWC | 3 days* | User Defined | 1 MB |
| Pub/Sub API | Pro-code | Pub/Sub API or APIs, Apex, Flow | Pub/Sub API | 3 days | User Defined | 1 MB |
| Event Relays** | Low-code | Platform Events, Change Data Capture | API | 3 days | User Defined | 1 MB |

*Standard-volume platform events are retained for one day.
**Event Relays send data only to Amazon EventBridge.

Business Use Case Examples

Considerations for the Claim Check Pattern

Additional Resources

Passed Messages

The passed messages pattern incorporates a streaming message platform to address issues like spikes in volume and complex data transformations. It works by segmenting the message handling logic into multiple components:

This Level 3 documentation and implementation diagram shows an example of a process flow for the passed messages pattern that includes a publisher, a subscriber, and a message. The message is split into multiple parts, which are transformed individually and then reassembled prior to being sent to the subscriber.
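
The following Python sketch walks through that flow in a simplified, synchronous form (hypothetical names; on a real streaming platform the parts would typically be transformed in parallel across workers): the message is split, each part is transformed independently, and the parts are reassembled in order before delivery:

```python
def split(message, part_size):
    """Segment a large message into independently processable parts."""
    records = message["records"]
    return [records[i:i + part_size] for i in range(0, len(records), part_size)]

def transform(part):
    # Each part can be transformed in isolation (and, on a real
    # streaming platform, in parallel across workers).
    return [record.upper() for record in part]

def reassemble(parts):
    """Recombine transformed parts in order before final delivery."""
    return {"records": [record for part in parts for record in part]}

message = {"records": ["alpha", "bravo", "charlie", "delta", "echo"]}
parts = split(message, part_size=2)
transformed = [transform(p) for p in parts]
print(reassemble(transformed))  # delivered to the subscriber as one message
```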


Tools Relevant to the Passed Messages Pattern

| Available Tools | Required Skills | Publish Via | Subscribe Via | Replay Period | Payload Structure | Payload Limits |
| --- | --- | --- | --- | --- | --- | --- |
| MuleSoft Anypoint Apache Kafka Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| Apache Kafka on Heroku | Pro-code | APIs, record changes in Heroku Postgres | N/A | 1-6 weeks | User Defined | User Defined |
| Change Data Capture | Low-code to Pro-code | Record changes | Apex, APIs, Lightning Web Components (LWC) | 3 days | Predefined | 1 MB |
| Platform Events | Low-code to Pro-code | APIs, Apex, Flow | Apex, APIs, Flow, LWC | 3 days* | User Defined | 1 MB |
| Pub/Sub API | Pro-code | Pub/Sub API or APIs, Apex, Flow | Pub/Sub API | 3 days | User Defined | 1 MB |
| Event Relays** | Low-code | Platform Events, Change Data Capture | API | 3 days | User Defined | 1 MB |

*Standard-volume platform events are retained for one day.
**Event Relays send data only to Amazon EventBridge.

Business Use Case Examples

Additional Resources

Streaming

While the event-driven architecture patterns covered thus far involve publishing single-purpose events that are consumed by subscribers, event streaming services publish streams of events. Subscribers access each event stream and process the events in the exact order in which they were received. Unique copies of each message stream are sent to each subscriber, which makes it possible to guarantee delivery and identify which subscribers received which streams.
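
A simple way to picture this is an append-only log in which each subscriber tracks its own read offset, as in the hypothetical Python sketch below (loosely analogous to consumer offsets in Kafka-style systems; none of these names are real APIs). Every subscriber reads the full stream in publication order, at its own pace:

```python
class EventStream:
    """Toy append-only log; each subscriber reads it in order at its own pace."""
    def __init__(self):
        self.log = []
        self.offsets = {}  # subscriber name -> next index to read

    def append(self, event):
        self.log.append(event)

    def read(self, subscriber):
        """Return all events the subscriber has not yet seen, in order."""
        start = self.offsets.get(subscriber, 0)
        events = self.log[start:]
        self.offsets[subscriber] = len(self.log)
        return events

stream = EventStream()
for reading in (21.5, 22.1, 23.8):
    stream.append({"sensor": "s1", "temp_c": reading})

print(stream.read("dashboard"))   # full stream, in publication order
stream.append({"sensor": "s1", "temp_c": 24.0})
print(stream.read("dashboard"))   # only the new event
print(stream.read("archiver"))    # a new subscriber still sees everything
```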

This Level 3 documentation and implementation diagram shows an example of the streaming pattern that depicts a stream of events being published. Subscribers that are listening for the streams receive them and process them accordingly.


Tools Relevant to the Streaming Pattern

| Available Tools | Required Skills | Publish Via | Subscribe Via | Replay Period | Payload Structure | Payload Limits |
| --- | --- | --- | --- | --- | --- | --- |
| MuleSoft Anypoint Data Streams | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint Apache Kafka Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| Apache Kafka on Heroku | Pro-code | APIs, record changes in Heroku Postgres | N/A | 1-6 weeks | User Defined | User Defined |
| Pub/Sub API | Pro-code | Pub/Sub API or APIs, Apex, Flow | Pub/Sub API | 3 days | User Defined | 1 MB |
| Generic Events (Legacy)* | Low-code to Pro-code | APIs | APIs, LWC | 1 day | User Defined | 3,000 characters |

*Salesforce will continue to support Generic Events within current functional capabilities, but does not plan to make further investments in this technology.

Business Use Case Examples

Considerations for the Streaming Pattern

For a stream to make sense, all of its events and their associated messages need to be in the correct order. In some cases, you may want to source the data in a stream from different systems, which means that you’ll need to incorporate additional ordering logic as part of the design process.
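
For example, if the source systems stamp their events with a shared sequence number or timestamp, the ordering logic can be a buffered merge, as in this Python sketch (illustrative only; the event data is made up):

```python
import heapq
from operator import itemgetter

# Events from two source systems, each already ordered internally but
# interleaved in time relative to one another.
crm_events = [(1, "crm: lead created"), (4, "crm: lead converted")]
erp_events = [(2, "erp: invoice issued"), (3, "erp: payment received")]

# Merge on the shared sequence number (or timestamp) so the combined
# stream is processed in true event order.
for seq, event in heapq.merge(crm_events, erp_events, key=itemgetter(0)):
    print(seq, event)
```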

Additional Resources

Queuing

In this pattern, producers send messages to queues, which hold the messages until subscribers retrieve them. Most message queues follow first-in, first-out (FIFO) ordering and delete each message after it is retrieved. Each subscriber has a unique queue, which requires additional setup but makes it possible to guarantee delivery and identify which subscribers received which messages.
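
The hypothetical Python sketch below illustrates those mechanics (it is not a real broker): each registered subscriber gets its own FIFO queue, each queue holds its own copy of every message, and a message is deleted from a queue as soon as it is retrieved:

```python
from collections import deque

class QueueBroker:
    """Toy broker: one FIFO queue per subscriber; messages are deleted on read."""
    def __init__(self):
        self.queues = {}

    def register(self, subscriber):
        self.queues[subscriber] = deque()

    def publish(self, message):
        # Each subscriber's queue gets its own copy of the message.
        for queue in self.queues.values():
            queue.append(dict(message))

    def receive(self, subscriber):
        """Retrieve (and delete) the oldest message, in FIFO order."""
        queue = self.queues[subscriber]
        return queue.popleft() if queue else None

broker = QueueBroker()
broker.register("fulfillment")
broker.register("notifications")
broker.publish({"order_id": 42, "status": "placed"})

print(broker.receive("fulfillment"))    # {'order_id': 42, 'status': 'placed'}
print(broker.receive("fulfillment"))    # None; the message was deleted on read
print(broker.receive("notifications"))  # its copy is still intact
```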

This Level 3 documentation and implementation diagram shows an example of the queuing pattern that depicts an event being published and written to a queue when a record is modified. Subscribers receive copies of the event from their associated queues and make the appropriate updates to their own records.


Tools Relevant to the Queuing Pattern

| Available Tools | Required Skills | Publish Via | Subscribe Via | Replay Period | Payload Structure | Payload Limits |
| --- | --- | --- | --- | --- | --- | --- |
| MuleSoft Anypoint MQ | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint Apache Kafka Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint MQ Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint MQTT Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| MuleSoft Anypoint AMQP Connector | Pro-code | APIs | APIs | As Configured | User Defined | None |
| Apache Kafka on Heroku | Pro-code | APIs, record changes in Heroku Postgres | N/A | 1-6 weeks | User Defined | User Defined |
| Change Data Capture | Low-code to Pro-code | Record changes | Apex, APIs, Lightning Web Components (LWC) | 3 days | Predefined | 1 MB |
| Platform Events | Low-code to Pro-code | APIs, Apex, Flow | Apex, APIs, Flow, LWC | 3 days* | User Defined | 1 MB |
| Pub/Sub API | Pro-code | Pub/Sub API or APIs, Apex, Flow | Pub/Sub API | 3 days | User Defined | 1 MB |
| Event Relays** | Low-code | Platform Events, Change Data Capture | API | 3 days | User Defined | 1 MB |

*Standard-volume platform events are retained for one day.
**Event Relays send data only to Amazon EventBridge.

Business Use Case Examples

Considerations for the Queuing Pattern

Because the queuing pattern is asynchronous, there can be a lengthy delay between a message being added to a queue and that message being retrieved. Queues require memory or storage space to hold their messages, so they can't grow indefinitely; a subscriber that stays offline can cause a failure if enough messages build up in its queue. Message buffering can have the same effect if subscriber processing times become too long, causing high volumes of messages to build up. To mitigate these risks, perform a thorough analysis of the storage requirements for all message queues and, if necessary, design processes that purge and disable queues when messages aren't retrieved within a set amount of time or when they reach a predetermined volume.
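
One possible shape for those safeguards, sketched in Python with made-up thresholds, is a queue that enforces both a message time-to-live and a maximum depth, disabling itself rather than growing without bound:

```python
import time
from collections import deque

class BoundedQueue:
    """Queue with the safeguards described above: a size cap and a message TTL."""
    def __init__(self, max_size, ttl_seconds):
        self.items = deque()
        self.max_size = max_size
        self.ttl = ttl_seconds
        self.disabled = False

    def publish(self, message):
        self.purge_expired()
        if self.disabled:
            raise RuntimeError("queue disabled: subscriber stopped consuming")
        if len(self.items) >= self.max_size:
            # The subscriber has fallen too far behind; fail fast instead
            # of letting the backlog grow without bound.
            self.disabled = True
            raise RuntimeError(f"queue full at {self.max_size} messages")
        self.items.append((time.monotonic(), message))

    def purge_expired(self):
        """Drop messages that sat unretrieved past their time-to-live."""
        cutoff = time.monotonic() - self.ttl
        while self.items and self.items[0][0] < cutoff:
            self.items.popleft()

    def receive(self):
        self.purge_expired()
        return self.items.popleft()[1] if self.items else None

q = BoundedQueue(max_size=10_000, ttl_seconds=24 * 3600)
q.publish({"order_id": 42, "status": "placed"})
print(q.receive())  # {'order_id': 42, 'status': 'placed'}
```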

Additional Resources

Implementing Event-Driven Architectures

Before implementing an event-driven architecture, stop to consider whether you truly need one in the first place. The previous section describes common business scenarios that are good fits for each event-driven architecture pattern. You can also read more in Well-Architected - Interoperability. Review the Challenges to Consider when Implementing Event-Driven Architectures section below as well to determine whether the patterns you have in mind are the best fit for your specific use cases.

Note that while the majority of the scenarios covered in this guide involve integrations, event-driven architectures can also be used to send messages within a single Salesforce org through the use of platform events, for example. Make sure to keep any applicable event allocation limits in mind when designing processes that use platform events as an internal messaging system.

Additionally, make sure you avoid anti-patterns in your designs. See When Should You Not Use Event-Driven Architectures? below for more details.

Challenges to Consider when Implementing Event-Driven Architectures

As architects, we know that every architecture comes with tradeoffs, and event-driven architecture is no exception. While a landscape of loosely coupled systems is highly scalable and resilient, there are some drawbacks to consider as well:

When Should You Use Event-Driven Architectures?

Here are several common scenarios that are often a good fit for an event-driven architecture:

Most large organizations have complex IT landscapes that have a combination of systems with different capabilities. It’s possible, or perhaps likely, that your organization has some legacy systems that don’t support event-driven integrations. You might also have some use cases where event-driven integrations don’t make sense, even if the systems will support them (SFTP file transfers from third-parties, for example). If you take a step back and look at your organization’s IT landscape as a whole, chances are that — just as with other architectural solutions — you’ll employ a mixture of different patterns to support different scenarios. This is perfectly fine. Even if you decide to make event-driven your preferred approach to integrations, you should still think of it as another tool in your toolbox that can and should be used in the right scenarios, as opposed to an approach that needs to be imposed on every system even if it’s not a good fit. Developing a comprehensive integration strategy will help you determine when the patterns described in this guide may or may not be appropriate.

When Should You Not Use Event-Driven Architectures?

Many scenarios call for event-driven architectures. In other scenarios, event-driven architectures will work even if they are not the best fit. And in some scenarios, event-driven architectures simply shouldn’t be used. Here are some guiding questions that can help you identify these scenarios:

Anti-Patterns

Frequently, anti-patterns around event-driven architectures come from using events as a workaround for internal communications within a Salesforce org. Common anti-patterns include:

Designing Good Events

When implementing an event-driven architecture, one of the keys to success is to set standards for how the events themselves are designed. Specifics will vary depending on your organization’s use cases, but here are some general guidelines:

A common example is an endless loop: System A publishes an event that causes a change in System B, and that change publishes an event that causes another change in System A, and so on. You can fix this type of anti-pattern by adding logic to both systems that ensures that changes made as the result of a consumed event do not result in a new event being published. You should also document all of your events, their associated triggers, and the downstream systems that may be affected. Use this documentation as a reference during design sessions to help catch endless loops and similar scenarios as early as possible. (See Well-Architected - Process Design for more information.)
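
One way to implement that guard, sketched here in Python with hypothetical names (a design illustration, not a Salesforce API), is to tag each event with its source system and have consumers apply changes from other systems' events without republishing:

```python
class Bus:
    """Toy event bus that pushes each event to every connected system."""
    def __init__(self):
        self.systems = []

    def publish(self, event):
        for system in self.systems:
            system.consume(event)

class SyncedSystem:
    def __init__(self, name, bus):
        self.name = name
        self.bus = bus

    def apply_local_change(self, record, value):
        """A user- or process-initiated change: write it, then publish."""
        self._write(record, value)
        self.bus.publish({"source": self.name, "record": record, "value": value})

    def consume(self, event):
        if event["source"] == self.name:
            return  # ignore events we published ourselves
        # Apply the change WITHOUT publishing a new event, which is what
        # breaks the potential A -> B -> A -> ... loop.
        self._write(event["record"], event["value"])

    def _write(self, record, value):
        print(f"{self.name}: set {record} = {value}")

bus = Bus()
system_a = SyncedSystem("A", bus)
system_b = SyncedSystem("B", bus)
bus.systems = [system_a, system_b]

system_a.apply_local_change("Account-001", "Gold")  # B syncs silently; no loop
```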

Migrating from Point-to-Point Integrations

Even if you’re fully convinced that an event-driven architecture is right for your organization, you may be starting with a landscape that already has a large number of point-to-point integrations. Getting funding for a project to replace all of them at once can be difficult and it might not even be possible to use an event-driven architecture directly with some legacy systems. In such scenarios, you can take an incremental approach to migrating to a more loosely coupled architecture by converting the most business-critical applications first and then converting other systems as they get updated or replaced in future projects. This approach makes it easy to add new applications to the event bus, and enables your overall IT landscape to stay scalable and resilient as systems continue to get added over time.

Closing Remarks

Keep this guide in mind and refer to it when building or considering event-driven integrations involving Salesforce. Be sure to thoroughly assess your current landscape before making changes to any of your architectures, especially if your current solution is working well. If you’re planning to build a data integration, consult the Architect’s Guide to Data Integration.
