Content last updated December 2020. Roadmap corresponds to Spring ’21 projections.
Our forward-looking statement applies to roadmap projections.
With the release of before-save Flow triggers in Spring ’20 and after-save Flow triggers in Summer ’20, we officially recommend Flow and Apex as the preferred no-code and pro-code options for triggered automation on the platform.
This document provides recommendations and rationale for which tools we believe are the most appropriate for various triggered automation use cases. It also provides insight into how Flow automatically handles bulkification and recursion control on behalf of the customer, as well as some pointers on how we recommend thinking about performance.
If you leave this document with nothing else, please take these points away with you:
This doc focuses on record-triggered automation. For the same assessment on Salesforce’s form-building tools, check out Architect’s Guide to Building Forms on Salesforce.
*Low Code --------------------------------------> Pro Code*

| | Before-Save Flow Trigger | After-Save Flow Trigger | After-Save Flow Trigger + Apex | Apex Triggers |
|---|---|---|---|---|
| Same-Record Field Updates | Available | Not Ideal | Not Ideal | Available |
| High-Performance Batch Processing | Available | Not Ideal | Not Ideal | Available |
| Cross-Object CRUD | Not Available | Available | Available | Available |
| Complex List Processing | Not Available | Not Available | Available | Available |
| Fire & Forget Asynchronous Processing | Not Available | Not Available | Available | Available |
| Other Asynchronous Processing | Not Available | Not Available | Not Available | Available |
| Custom Validation Errors | Not Available | Not Available | Not Available | Available |
The table above enumerates the most common trigger use cases we see across our customer base, and the tools we believe are well-suited for each.
In a case where multiple tools are available for a use case, we recommend choosing the tool that will allow you to implement and maintain the use case with the lowest cost.
This will be highly dependent on the makeup of your team.
For example, if your team comprises Apex developers and already has a well-established CI/CD pipeline and a well-managed framework for handling Apex triggers, it will probably be cheaper to continue on that path. In this case, the cost of changing your organization’s operating models to adopt Flow development will be significant.
On the other hand, if your team doesn’t have consistent access to developer resources, or a strong institutionalized culture of code quality, there may be times when you’d be better served by triggered Flows that more people can maintain, than by several lines of code that very few people can maintain.
In an environment with mixed or admin-heavy skill sets, Flow triggers provide a very compelling option that is more performant and easier to debug, maintain, and extend than any no-code offering of the past. We propose seriously considering Flow triggers as a way to delegate the delivery of business process implementation, so that when developer resources are limited, you can focus them on the projects that will best leverage their skill sets.
Just kidding: we won’t be getting rid of Workflow Rules and Process Builder for a while.
But, we do believe that Flow is far better architected to meet the increasing functionality and extensibility requirements of our customers today.
For these reasons, moving forward we will be focusing our investments on Flow. We recommend building in Flow where possible, and resorting to Process Builder and/or Workflow only when necessary. We will continue supporting Process Builder & Workflow rules within their current functional capacities, but do not plan on making further investments.
Should you find yourself in a place where you must use Process Builder due to functional gaps, you may wish to consider implementing as much logic in an autolaunched Flow (which can be called by a process) as possible, so that this logic might be called by a Flow trigger rather than a process in the future. While this pattern spreads implementation and maintenance across two tools, it comes with notable benefits: autolaunched Flows have a far superior debug experience, are easier to manage, and can be unit tested with Apex. In some cases, autolaunched Flows also exhibit better performance than their functionally equivalent Process Builder implementations.
In a similar vein, it’s possible to implement trigger logic in Apex, business logic in Flow, and use the `Interview.start()` method in Apex to invoke the Flow. However, `Interview.start()` is not a bulk method, so its use in a trigger context should be approached with extreme caution.
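To make the caution concrete, here is a minimal sketch of that pattern, assuming a hypothetical autolaunched Flow named `Apply_Discount` with a `recordId` input variable:

```apex
// Sketch only: invokes a hypothetical autolaunched Flow ("Apply_Discount")
// from a trigger. Because Interview.start() runs one interview at a time,
// a 200-record batch pays the full Flow startup cost 200 times.
trigger OpportunityDiscount on Opportunity (after update) {
    for (Opportunity opp : Trigger.new) {
        Map<String, Object> inputs = new Map<String, Object>{ 'recordId' => opp.Id };
        Flow.Interview.Apply_Discount interview = new Flow.Interview.Apply_Discount(inputs);
        interview.start(); // not bulkified; use with extreme caution in triggers
    }
}
```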
When we first launched this guide in June 2020, there were a number of major functional gaps between Flow triggers & Process Builder/Workflow. However, we’re proud to note that we’ve been able to close most of the gaps in the last two releases:
| | Targeted Timeline for Availability in Flow Triggers |
|---|---|
| **Process Builder** | |
| Navigate to related records using merge fields | ✔ Delivered in Winter '21 |
| Access global variables without having to create formulas every time | ✔ Delivered in Winter '21 |
| Execute a decision only if the field values that the record had immediately before saving didn't meet the expression criteria, but now do | ✔ Delivered in Winter '21 |
| Scheduled Actions | ✔ Delivered in Spring '21 |
| PRIORVALUE() | ✔ Delivered in Spring '21 |
| ISCHANGED() | Soon |
| ISNEW() | Soon |
| Call an autolaunched Flow | 1 Year |
| Allow recursive executions | Never |
| **Workflow Rules** | |
| Time-Based Workflow Monitoring | ✔ Delivered in Spring '21 |
| Send an outbound message | Soon |
| Set field values without triggering validation rules | Never |
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| Same-Record Updates | Available | Not Ideal | Not Ideal | Available |
Of all the recommendations in this document, we most strongly recommend taking steps to minimize the number of same-record field updates that occur after the save. Or put more plainly, stop implementing same-record field update actions inside Workflow Rules or Process Builder processes! And don’t start implementing same-record field updates in after-save Flow triggers, either! Instead, do start implementing same-record field update actions in before-save Flow triggers or before-save Apex triggers.
Before-save same-record field updates are sensationally more performant than after-save same-record field updates, by design.
Well, that’s the theory anyways; what happens in practice?
Our tests (Performance Discussion: Same-Record Field Updates) provide some empirical flavor. In our experiments, bulk same-record updates performed anywhere from 10 to 20 times faster when implemented with before-save triggers than when implemented with Workflow Rules or Process Builder. For this reason, we do not believe performance should be considered a limitation of before-save Flow triggers, except perhaps in the most extreme scenarios.
The main limitation of before-save Flow triggers is that they are currently quite functionally sparse: you can query records, loop, evaluate formulas, assign variables, and perform decisions (aka Switch statements) for logic, and can only make updates to the underlying record. You cannot extend a before-save Flow trigger with Apex invocable actions or subflows. Meanwhile, you can do anything you want in a before-save Apex trigger (except explicit DML on the underlying record).
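For reference, here is what a same-record field update looks like in a before-save Apex trigger (a minimal sketch, mirroring the Opportunity example used in our experiments later in this document):

```apex
// Before-save same-record updates need no explicit DML: assigning to the
// records in Trigger.new is enough, and the pending save commits the change.
trigger SetNextStep on Opportunity (before insert, before update) {
    for (Opportunity opp : Trigger.new) {
        opp.NextStep = String.valueOf(opp.Amount); // field assignment only
    }
}
```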
We know that same-record field updates account for the lion’s share of Workflow Rule actions executed site-wide, and are also a large contributor to problematic Process Builder execution performance. By pulling any “recursive saves” out of the save order and implementing them before the save, there’s a lot of exciting perf savings waiting to be realized.
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| High-Performance Batch Processing | Available | Not Ideal | Not Ideal | Available |
If you desire the highly performant evaluation of complex logic in batch scenarios, then the vast configurability of Apex and its rich debug and tooling capabilities are for you.
Here are some examples of what we mean by “complex logic,” and why we recommend Apex.
While before-save Flow triggers are not quite as performant as before-save Apex triggers in barebones speed contests, the impact of the overhead is somewhat minimized when contextualized within the scope of the broader transaction. Before-save Flow triggers should still be amply fast for the vast majority of non-complex, same-record field update batch scenarios. And since they are consistently more than 10x faster than Workflow Rules in our tests, it’s safe to use them anywhere you currently use Workflow Rules.
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| Cross-Object CRUD | Not Available | Available | Available | Available |
Creating, updating, or deleting a different record requires a database operation, no matter what tool you use. The only tool that doesn’t currently support cross-object “crupdeletes” (a portmanteau of the Create, Update, and Delete operations) is the before-save Flow trigger.
Currently, Apex outperforms Flow in raw database operation speed. That is, it takes less time for the Apex runtime to prepare, perform, and process the result of any specific database call (e.g. Create a Case) than it takes the Flow runtime to do the same. In practice, however, we believe that if you are looking for major performance improvements, you will likely reap greater impact by identifying any possibly inefficient user implementations, and fixing them first, before looking into optimizing for lower level operations. The execution of actual user logic on the app server generally consumes far more time than the handling of database operations.
The most inefficient user implementations tend to issue multiple DML statements where fewer would suffice. For example, here is an implementation of a Flow trigger that updates two fields on a case’s parent account record with two Update Records elements.
This is a suboptimal implementation as it causes two DML operations (and two save orders) to be executed at runtime. Combining the two field updates into a single Update Records element will result in only one DML operation being executed at runtime.
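For readers who think in Apex, the same anti-pattern and its fix look like this (the record Id and field values are placeholders):

```apex
// A record reference for illustration; the Id is a placeholder.
Account parentAccount = new Account(Id = '001000000000001AAA');

// Suboptimal: two DML statements, and therefore two save orders, on the same record.
parentAccount.Rating = 'Hot';
update parentAccount;
parentAccount.Industry = 'Energy';
update parentAccount;

// Better: make both field assignments first, then issue a single DML statement.
parentAccount.Rating = 'Hot';
parentAccount.Industry = 'Energy';
update parentAccount;
```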
Workflow Rules has gained a reputation for being highly performant. Part of this can be attributed to how Workflow Rules constrains the amount of DML it performs during a save.
Thus, when it comes to cross-object DML scenarios, the name of the game is to minimize unnecessary DML in the first place.
Sometimes this is easier said than done — if you’re not actually experiencing performance issues, then you may just find that premature optimization is not worth the investment.
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| Complex List Processing | Not Available | Not Available | Available | Available |
There are a few major list-processing limitations in Flow today. For example, there is no way to reference an item at a specific index in a collection, which is trivial (`MyList[myIndexVariable]`) to do in Apex. The combination of these limitations makes some common list-processing tasks, such as in-place data transforms, sorts, and filters, overly cumbersome to achieve in Flow while being much more straightforward (and more performant) to achieve in Apex.
This is where extending Flows with invocable Apex can really shine. Apex developers can and have created efficient, modular, object-agnostic list processing methods in Apex. Since these methods are declared as invocable methods, they are automatically made available to Flow users. It’s a great way to keep business logic implementation in a tool that business-facing users can use, without forcing developers to implement functional logic in a tool that’s not as well-suited for functional logic implementation.
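As an illustration (a hypothetical utility class, not a standard library), here is an invocable method that exposes a simple list operation to Flow:

```apex
// Hypothetical example of modular, object-agnostic list processing exposed to Flow.
// Flow invokes this through the invocable action framework; inputs arrive in bulk.
public with sharing class IdListUtils {
    public class Request {
        @InvocableVariable(required=true)
        public List<Id> ids;
    }
    public class Result {
        @InvocableVariable
        public List<Id> ids;
    }

    @InvocableMethod(label='Remove Duplicate Ids')
    public static List<Result> removeDuplicates(List<Request> requests) {
        List<Result> results = new List<Result>();
        for (Request req : requests) {
            Result res = new Result();
            // A Set drops duplicate Ids; converting back to a List gives
            // Flow a collection variable it can loop over or assign.
            res.ids = new List<Id>(new Set<Id>(req.ids));
            results.add(res);
        }
        return results;
    }
}
```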
When building invocable Apex, please take into account these considerations.
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| Fire & Forget Asynchronous Processing | Not Available | Not Available | Available | Available |
| Other Asynchronous Processing | Not Available | Not Available | Not Available | Available |
For the purposes of this section, we will establish the following categorization-by-intent to disambiguate scheduled processing from asynchronous processing:
A canonical use case for asynchronous processing is web service callouts for a batch of records. Because web service integrations cannot be assumed to always be reliable and low latency, it can be difficult or impossible to fit all the necessary web service callouts for a 200-record batch (+ any associated error handling logic) within the Governor’s synchronous transaction limits.
Flow offers no native means of declaring some set of logic to execute asynchronously. We recommend implementing asynchronous processing inside a Queueable Apex class. If the use case is a fire-and-forget use case, such that no business requirements depend on the results of the asynchronous logic, then it may be achieved by calling `System.enqueueJob` against the Queueable Apex from within an invocable Apex method, invoking that method from Flow through the invocable action framework, and implementing the business logic inside Flow.
However, for all other asynchronous use cases, we recommend a full Apex solution.
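Here is a minimal sketch of the fire-and-forget pattern described above (class and method names are hypothetical):

```apex
// Invocable entry point: Flow passes the whole batch of record Ids in one call.
public with sharing class AsyncCalloutLauncher {
    @InvocableMethod(label='Enqueue Callout Job')
    public static void enqueue(List<Id> recordIds) {
        System.enqueueJob(new CalloutJob(recordIds)); // fire and forget
    }
}

// Separate class file: performs the callouts outside the synchronous transaction.
public with sharing class CalloutJob implements Queueable, Database.AllowsCallouts {
    private final List<Id> recordIds;
    public CalloutJob(List<Id> recordIds) {
        this.recordIds = recordIds;
    }
    public void execute(QueueableContext context) {
        // Web service callouts and their error handling go here. No business
        // requirement depends on the results, so nothing is reported back.
    }
}
```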
Note: the autolaunched Flow’s Pause element can be used to break processing into multiple transactions. While this can be used to force asynchronous processing, we do not recommend this pattern -- Flow’s Pause element is intended for scheduled processing; that is, implementation driven by business requirements. Once you start mixing business logic-driven implementation with technology-driven implementation in a single Flow, the Flow becomes more fragile, harder to parse, and harder to maintain over time. Scalability also becomes an issue with the zero-duration pause or platform event-based resume implementations.
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| Custom Validation Errors | Not Available | Not Available | Not Available | Available |
At this time, Flow provides no way to either prevent DML operations from committing or to throw custom errors; the `addError()` Apex method is not supported when executed from Flow via an Apex invocable method.
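In an Apex trigger, by contrast, a custom validation error is a one-liner (the object and message here are illustrative); calling `addError()` in a before-save trigger blocks the offending record from being saved:

```apex
// Blocks the save of any offending record with a custom, user-visible error.
trigger ValidateOpportunity on Opportunity (before insert, before update) {
    for (Opportunity opp : Trigger.new) {
        if (opp.Amount != null && opp.Amount < 0) {
            opp.addError('Amount cannot be negative.');
        }
    }
}
```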
The rest of this document describes technical details about the Flow runtime.
This round’s performance discussion focuses on the same-record field update use case. Of the ~150 billion actions that were executed by Workflow, Process Builder, and Flow in April this year — “actions” being things effected outside the runtime, such as record updates, email alerts, outbound messages, and invocable actions — we believe that around 100 billion of those actions were same-record field updates. Note that before-save Flow triggers had only been launched the release before, so we’re talking ~100 billion after-save same-record field updates — or equivalently, 100 billion recursive saves — in the month of April. Wow! How much time could have been saved by before-save Flow triggers?
Caveat: Claims about performance should always be taken with a bowl of salt, even when they come from Salesforce. Results in your org will likely be different than the results in our orgs.
We made this claim earlier: “while Workflow Rules have a reputation for being fast, they nevertheless cause a recursive save and will always be considerably slower and more resource-hungry than a single functionally equivalent before-save Flow trigger.”
The theoretical side to this argument was that, by design, before-save Flow triggers neither cause DML nor the ensuing recursive firing of the save order, while Workflow Rules do, because they happen after the save.
But what happens in practice? Here are a few experiments we tried.
How much longer does an end user have to wait for a record to save?
For each of the different automation tools that can be used to automate a same-record field update, we created a fresh org, plus one more fresh org to serve as a baseline.
Then, for each org, we:

1. Created a trigger on `Opportunity Create` (built with the org’s respective tool) that would set `Opportunity.NextStep = Opportunity.Amount`.
2. Set all debug log levels to `None`, except Workflow (set to `Info`) and Apex Code (set to `Debug`).
3. Saved records repeatedly, measured the resulting log durations, and compared them against the baseline org’s durations.

This gave us the average overhead that each trigger added to the log duration.
How about the other side of the spectrum: high-volume batch processing?
We borrowed some of our performance team’s internal environments to get a sense of how well the different trigger tools scale.
The configuration was one org per automation tool under test, each with equivalent triggers on `Account Create` which each update `Account.ShippingPostalCode`.
Then each Tuesday for the last 12 weeks, we uploaded 50,000 Accounts to each org through the Bulk API, with a 200-record batch size.
Fortunately, our internal environments can directly profile trigger execution time without requiring Apex debug logging or extrapolation from a baseline.
Unfortunately, our internal environments are so ill-representative of production that we’re only allowed to present the relative performance timings, and not the raw performance timings.
In both single-record and bulk use cases, the before-save Flow performs extremely well. As much as we’d like to take credit for the outcomes, however, most of the performance savings come simply due to the huge advantage of being before the save.
Go forth and stop implementing same-record field updates in Workflow Rules and Process Builder!
At this time, Flow CPU time consumption, as it is reported in the Apex debug logs, is inconsistent. We believe this is a reporting issue, and not a measurement issue.
For example, if a Flow consumes 8s actual CPU time during runtime, then the cumulative Governor CPU time limit will be incremented by 8s. However, the Flow element-level contributions to the limit, which are displayed in the Apex debug logs, are not always properly attributed. Sometimes, CPU time that should have been attributed to a Flow element is not attributed to any element in the Flow. As a result of this misattribution, the sum of the attributed element costs will not always equal 8s; instead, they may add up to a number that is less than 8s. In this case, the difference will be unintentionally rolled into the next CPU time consumption line in the debug logs. We are planning to fix this in Spring ’21.
In the meantime, a Flow’s CPU time consumption can be upper-bounded by its wall-clock time consumption, because Flows run on a single thread.
This section is intended to help you better understand how and why Flow accrues against Governor limits the way it does. It contains technical discussion of Flow’s runtime bulkification and recursion control behaviors.
We’ll mainly be focusing on how Flow affects these Governor limits.
We assume the reader possesses a prerequisite understanding of what these limits represent, and we recommend refreshing on the content and terminology used in How DML Works and Triggers and Order of Execution.
Before diving into the specifics of triggered Flow runtime behavior, it’s extremely important to make sure we use the same common mental model of the save order for the purpose of further discussion. We believe a tree model provides a reasonably accurate abstraction.
Since each node in a save order tree corresponds to a single processed DML record, and there is a limit of 10,000 on the number of processed DML records per transaction, there can be no more than 10,000 nodes total, across all of the save order trees in the transaction.
Additionally, there can be no more than 150 unique timestamped DML operations {DML0, DML1, ..., DML149} across all of the save order trees in the transaction.
Now, let’s revisit our earlier example of a suboptimal cross-object triggered Flow implementation:
Suppose that there are no other triggers in the org, and a user creates a single new Case, Case005, against parent Account Acme Corp. The corresponding save order tree is fairly simple:
Suppose that the user then creates two new cases, Case006 and Case007, in a single DML statement. You’d get two save order trees with three nodes each, for a total of six records processed by DML. However, thanks to Flow’s automatic cross-batch bulkification logic (Flow Bulkification), the six nodes would still be covered by a total of three issued DML statements:
Still not bad, right? In real life, though, you’d probably expect there to be a host of triggers on Account update, such that any single save order tree would end up looking like this (for the sake of discussion let’s say there are 3 triggers on Account):
And in a scenario where you’ve batch-inserted 200 Cases, there would be 200 respective save order trees sharing a 10,000 total node limit and a 150 total issued DML statements limit. Bad news bears.
However, by combining the Flow’s two original Update Records elements into a single Update Records element, the entire right subtree of Node0 can be eliminated.
This is an example of what we’ll call functional bulkification, one of two types of bulkification practices that can reduce the number of DML statements needed to process all the DML rows in a batch.
Functional bulkification attempts to minimize the number of unique DML statements that are needed to process all of the records in a single save order tree.
The example above achieves functional bulkification by effectively merging two functionally distinct DML nodes, and their respective save order subtrees, on Acme Corp. into a single, functionally equivalent, merged DML node and save order subtree. Not only does this reduce the number of DML statements issued, but it also saves CPU time. All the non-DML trigger logic is run once and not twice.
Cross-batch bulkification attempts to maximize the number of DML statements that can be shared across all save order trees in a batch.
An example of perfect cross-batch bulkification is an implementation where, if one record’s save order tree requires 5 DML statements to be issued, then a 200 record batch still requires only 5 DML statements to be issued.
In the above example, cross-batch bulkification is handled automatically by the Flow runtime.
Recursion control, on the other hand, increases processing efficiency by pruning functionally redundant subtrees.
The Flow runtime automatically performs cross-batch bulkification on behalf of the user. However, it does not perform any functional bulkification.
The following Flow elements can cause the consumption of DML and SOQL in a triggered Flow: Create Records, Update Records, and Delete Records each consume DML, while Get Records consumes SOQL queries.
As an example, consider the following triggered Flow implementation, which, when an Account is updated, automatically updates all of its related Contracts, and attaches a Contract Amendment Log child record to each of those updated Contracts.
Suppose now that 200 Accounts are bulk updated. Then during runtime, the Update Records and Create Records elements execute once per related Contract in each interview’s loop, so the number of DML statements issued grows with the number of loop iterations instead of staying fixed for the whole batch.
We strongly recommend against including DML & SOQL in loops for this reason. This is very similar to best practice #2 in Apex Code Best Practices. Users will be warned if they attempt to do so while building in Lightning Flow Builder.
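In Apex terms, the fix is to collect changes inside the loop and perform DML once afterward (a sketch; the field value is illustrative):

```apex
// Inside a hypothetical after-update Account trigger: query once, update once.
trigger AmendContracts on Account (after update) {
    List<Contract> contractsToUpdate = new List<Contract>();
    // One SOQL query outside the loop, filtered to the whole batch of Accounts.
    for (Contract con : [SELECT Id, Description FROM Contract
                         WHERE AccountId IN :Trigger.newMap.keySet()]) {
        con.Description = 'Amended'; // illustrative field update
        contractsToUpdate.add(con);
    }
    update contractsToUpdate; // one DML statement for the entire batch
}
```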
Triggered Flows are subject to the recursive save behavior outlined in the Apex Developer Guide’s Triggers and Order of Execution page.
What does this actually mean? Let’s go back to the tree model we established earlier, and revisit this specific property of the tree:
The guarantee, “During a recursive save, Salesforce skips ... ” adds an additional bit of magic:
This has a few important implications:
[Consideration #1] A Flow trigger can fire multiple times on the same record during a transaction.

For example, suppose that in addition to the suboptimal Flow trigger on `Case Create` from earlier, the org also has a Flow trigger on `Account Update`. For simplicity’s sake, let’s assume the triggered Flow on `Account Update` is a no-op, and suppose we create a new Case, Case #007, with parent Account “Bond Brothers.”
Then the save order tree would look like this:
1. The save order for `Case Create` on Case #007 is entered.
2. Step 17: the Flow trigger on `Case Create` fires.
3. The save order for `Account Update` on Bond Brothers is entered (by the first Update Records element).
4. The Flow trigger on `Account Update` fires. // First execution on Bond Brothers.
5. Since we assumed the Flow trigger on `Account Update` to be a no-op, nothing happens.
6. With no more triggers to fire on `Account Update`, Step 17 concludes.
7. The save order for `Account Update` on Bond Brothers is exited.
8. The save order for `Account Update` on Bond Brothers is entered (by the second Update Records element).
9. The Flow trigger on `Account Update` fires. // Second execution on Bond Brothers. // Not a recursive execution!
10. Since we assumed the Flow trigger on `Account Update` to be a no-op, nothing happens.
11. With no more triggers to fire on `Account Update`, Step 17 concludes.
12. The save order for `Account Update` on Bond Brothers is exited.
13. With no more triggers to fire on `Case Create`, Step 17 concludes.
14. The save order for `Case Create` on Case #007 concludes.

Had the two Update Records elements been merged into a single Update Records element, the resolved save order would have instead looked like this:
1. The save order for `Case Create` on Case #007 is entered.
2. Step 17: the Flow trigger on `Case Create` fires.
3. The save order for `Account Update` on Bond Brothers is entered (by the single, merged Update Records element).
4. The Flow trigger on `Account Update` fires. // First execution on Bond Brothers.
5. Since we assumed the Flow trigger on `Account Update` to be a no-op, nothing happens.
6. With no more triggers to fire on `Account Update`, Step 17 concludes.
7. The save order for `Account Update` on Bond Brothers is exited.
8. With no more triggers to fire on `Case Create`, Step 17 concludes.
9. The save order for `Case Create` on Case #007 concludes.

[Consideration #2] A Flow trigger will never cause itself to fire on the same record again.
[Consideration #3] Although Flow triggers (and all other triggers in the v48.0 save order steps 9-18) get this type of recursion control for free, Steps 1-8 and 19-21 do not. So, when an after-save Flow trigger performs a same-record update, a save order is entered, and Steps 1-8 and 19-21 all execute again. This behavior is why it’s so important to move same-record updates into before-save Flow triggers!
You’ve made it! Have a good day and thanks for the read. Hope you learned something you found valuable.
Help us make sure we're publishing what is most relevant to you: take our survey to provide feedback on this content and tell us what you’d like to see next.