Our forward-looking statement applies to roadmap projections. The roadmap reflects June 2024 projections.
This guide provides tool recommendations for various triggered automation use cases along with the rationale for those recommendations. It also provides insight into how Flow automatically handles bulkification and recursion control on behalf of the customer, as well as some pointers on performance and automation design.
Here are the most important takeaways:
This doc focuses on record-triggered automation. For the same assessment on Salesforce form-building tools, check out Architect’s Guide to Building Forms on Salesforce.
The tools range from low code on the left to pro code on the right.

| | Before-Save Flow Trigger | After-Save Flow Trigger | After-Save Flow Trigger + Apex | Apex Triggers |
|---|---|---|---|---|
| Same-Record Field Updates | Available | Not Ideal | Not Ideal | Available |
| High-Performance Batch Processing | Not Ideal | Not Ideal | Not Ideal | Available |
| Cross-Object CRUD | Not Available | Available | Available | Available |
| Asynchronous Processing | Not Available | Available | Available | Available |
| Complex List Processing | Not Available | Not Ideal | Available | Available |
| Custom Validation Errors | Not Available | Not Available | Not Available | Available |
The table above shows the most common trigger use cases, and the tools we believe are well-suited for each.
In a case where multiple tools are available for a use case, we recommend choosing the tool that will allow you to implement and maintain the use case with the lowest cost. This will be highly dependent on the makeup of your team.
For example, if your team includes Apex developers, and it already has a well-established CI/CD pipeline along with a well-managed framework for handling Apex triggers, it will probably be cheaper to continue on that path. In this case, the cost of changing your organization’s operating models to adopt Flow development would be significant. On the other hand, if your team doesn’t have consistent access to developer resources, or a strong institutionalized culture of code quality, you’d likely be better served by triggered flows that more people can maintain, rather than by code that only a few people can maintain.
For a team with mixed skill sets or admin-heavy skill sets, flow triggers provide a compelling option that is more performant and easier to debug, maintain, and extend than any no-code offering of the past. If you have limited developer resources, using flow triggers to delegate the delivery of business process implementation enables you to focus those resources on projects and tasks that will make the most of their skill sets.
While the road to retirement for Process Builder and Workflow Rules may be long, we recommend that you begin implementing all your go-forward low-code automation in Flow. Flow is better architected to meet the increasing functionality and extensibility requirements of Salesforce customers today.
For these reasons, Salesforce will be focusing its investments on Flow moving forward. We recommend building in Flow where possible, and resorting to Process Builder or Workflow only when necessary.
At this point, Flow has closed all the major functional gaps we had identified between it and Workflow Rules and Process Builder. We continue to invest in closing remaining minor gaps, including enhanced formulas and entry conditions, as well as usability improvements to streamline areas where Flow is more complex.
Flow has introduced a new concept to the low-code automation space by separating its record triggers into before and after save within the trigger order of execution. This aligns with the corresponding functionality available in Apex and allows for significantly better performance when it comes to same-record field updates. However, this introduces additional complexity to the Flow user experience, and users unfamiliar with triggers found the terminology confusing. So throughout this guide, we will continue to refer to these two options as “before save” and “after save,” but in Flow Builder, they have been renamed to “fast field update” and “actions and related records.”
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| Same-Record Updates | Available | Not Ideal | Not Ideal | Available |
Of all the recommendations in this guide, we most strongly recommend taking steps to minimize the number of same-record field updates that occur after the save. Or put more plainly, stop implementing same-record field update actions inside Workflow Rules or Process Builder processes! And don’t start implementing same-record field updates in after-save flow triggers, either! Instead, do start implementing same-record field update actions in before-save flow triggers or before-save Apex triggers. Before-save same-record field updates are significantly faster than after-save same-record field updates, by design. There are two primary reasons for this:

- The record’s field values are already loaded into memory before the save, so the update is simply applied in memory and persisted by the original save operation, with no additional DML operation.
- Because no additional DML operation is issued, no recursive save order is fired for the same record.
Well, that’s the theory anyways; what happens in practice?
Our tests (Performance Discussion: Same-Record Field Updates) provide some empirical flavor. In our experiments, bulk same-record updates performed anywhere from 10 to 20 times faster when implemented using before-save triggers than when implemented using Workflow Rules or Process Builder. For this reason, while there are still some theoretical limits relative to Apex, we do not believe performance should be considered a limitation of before-save flow triggers, except in perhaps the most extreme scenarios.
The main limitation of before-save flow triggers is that they are functionally sparse: you can query records, loop, evaluate formulas, assign variables, and perform decisions (for example, Switch statements) for logic, and can only make updates to the underlying record. You cannot extend a before-save flow trigger with Apex invocable actions or subflows. Meanwhile, you can do anything you want in a before-save Apex trigger (except explicit DML on the underlying record). We’ve scoped before-save flow triggers intentionally to support only those operations that will ensure the performance gains mentioned above.
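For comparison, here is a minimal sketch of the same pattern in a before-save Apex trigger, assuming a hypothetical requirement to copy Opportunity.Amount into Opportunity.NextStep (the same update used in the performance tests later in this guide). The field assignment happens in memory and is committed by the original save, so no explicit DML and no recursive save are needed:

```apex
// Hypothetical before-save same-record field update in Apex.
// Assigning fields on Trigger.new in a before context is persisted by the
// original save operation, so no additional DML statement is issued.
trigger OpportunityBeforeSave on Opportunity (before insert, before update) {
    for (Opportunity opp : Trigger.new) {
        if (opp.Amount != null) {
            opp.NextStep = String.valueOf(opp.Amount);
        }
    }
}
```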
We know that same-record field updates account for the lion’s share of Workflow Rule actions executed site-wide, and are also a large contributor to problematic Process Builder execution performance. Pulling any “recursive saves” out of the save order and implementing them before the save will lead to a lot of exciting performance improvements.
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| High-Performance Batch Processing | Not Ideal | Not Ideal | Not Ideal | Available |
If you’re looking for highly performant evaluation of complex logic in batch scenarios, then the configurability of Apex and its rich debug and tooling capabilities are for you. Here are some examples of what we mean by “complex logic,” and why we recommend Apex.
While before-save flow triggers are not quite as performant as before-save Apex triggers in barebones speed contests, the impact of the overhead is somewhat minimized when contextualized within the scope of the broader transaction. Before-save flow triggers should still be fast enough for the vast majority of non-complex (as enumerated above), same-record field update batch scenarios. As they are consistently more than 10x faster than Workflow Rules, it’s safe to use them anywhere you currently use Workflow Rules.
For batch processing that does not need to be triggered immediately during the initial transaction, Flow has some capabilities, though they continue to be more constrained and less feature-rich than Apex. Scheduled Flows can currently do a batch operation on up to 250,000 records per day and can be used for data sets that are unlikely to reach near that limit. Scheduled Paths in record-triggered flows also now support configurable batch sizes, so admins can change the batch size from the default (200) to a different amount, if needed. This can be used for scenarios like external callouts that cannot support the default batch size. (See Well-Architected - Data Handling for more information.)
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| Cross-Object CRUD | Not Available | Available | Available | Available |
Creating, updating, or deleting a different record (other than the original record that triggered the transaction) requires a database operation, no matter what tool you use. The only tool that doesn’t currently support cross-object “crupdeletes” (a portmanteau of the create, update, and delete operations) is the before-save flow trigger.
Currently, Apex outperforms Flow in raw database operation speed. That is, it takes less time for the Apex runtime to prepare, perform, and process the result of any specific database call (e.g. a call to create a case) than it takes the Flow runtime to do the same. In practice, however, if you are looking for major performance improvements, you will likely reap greater benefits by identifying inefficient user implementations, and fixing them first, before looking into optimizing for lower level operations. The execution of actual user logic on the app server generally consumes far more time than the handling of database operations.
The most inefficient user implementations tend to issue multiple DML statements where fewer would suffice. For example, here is an implementation of a flow trigger that updates two fields on a case’s parent account record with two Update Records elements.
This is a suboptimal implementation as it causes two DML operations (and two save orders) to be executed at runtime. Combining the two field updates into a single Update Records element will result in only one DML operation being executed at runtime.
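The same principle carries over to Apex. Here is a minimal anonymous Apex sketch of the contrast; the account and field choices are illustrative only:

```apex
// Fetch the parent account of the triggering case (illustrative only).
Account parent = [SELECT Id FROM Account LIMIT 1];

// Suboptimal: two DML statements, each of which starts its own save order.
update new Account(Id = parent.Id, Description = 'Escalated case received');
update new Account(Id = parent.Id, Rating = 'Hot');

// Better: apply both field changes with a single DML statement and a single save order.
update new Account(Id = parent.Id, Description = 'Escalated case received', Rating = 'Hot');
```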
Workflow Rules has gained a reputation for being highly performant. Part of this can be attributed to how Workflow Rules constrains the amount of DML it performs during a save.
Thus, when it comes to cross-object DML scenarios, the idea is to minimize unnecessary DML from the start.
Sometimes this is easier said than done — if you’re not actually experiencing performance issues, then you may just find that such optimization is not worth the investment.
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| Complex List Processing | Not Available | Not Ideal | Available | Available |
There are a few major list processing limitations in Flow today. For example, Flow has no way to reference a collection item by index, which is trivial to do in Apex (MyList[myIndexVariable]). The combination of these limitations makes some common list-processing tasks, such as in-place data transforms, sorts, and filters, overly cumbersome to achieve in Flow while being much more straightforward (and more performant) to achieve in Apex.
This is where extending flows with invocable Apex can really shine. Apex developers can and have created efficient, modular, object-agnostic list processing methods in Apex. Since these methods are declared as invocable methods, they are automatically made available to Flow users. It’s a great way to keep business logic implementation in a tool that business-facing users can use, without forcing developers to implement functional logic in a tool that’s not as well-suited for functional logic implementation.
When building invocable Apex for this purpose, keep it bulkified, modular, and object-agnostic so that it can be reused across flows.
Since this guide was originally written, Flow has added more list processing capabilities, including filtering and sorting. However, it still does not have all the list processing capabilities of Apex, so the advice around using Apex or modularizing individual components still applies for more complex use cases.
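As an illustration, here is a minimal sketch of a hypothetical invocable list-processing action that returns the unique values from a text collection; once deployed, it shows up to flow builders as a standard action:

```apex
// Hypothetical invocable action: deduplicate a collection of text values so a
// flow can pass in a list and get back only the unique entries.
public with sharing class DedupeTextList {
    @InvocableMethod(label='Deduplicate Text Values'
                     description='Returns the unique values from a text collection.')
    public static List<Output> dedupe(List<Input> inputs) {
        List<Output> results = new List<Output>();
        for (Input input : inputs) { // one Input per flow interview (bulkified)
            Output output = new Output();
            output.uniqueValues = new List<String>(new Set<String>(input.values));
            results.add(output);
        }
        return results;
    }

    public class Input {
        @InvocableVariable(required=true)
        public List<String> values;
    }

    public class Output {
        @InvocableVariable
        public List<String> uniqueValues;
    }
}
```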
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| Fire & Forget Asynchronous Processing | Not Available | Available | Available | Available |
| Other Asynchronous Processing | Not Available | Available | Available | Available |
Asynchronous processing has many meanings in the world of programming, but when it comes to record triggers, a couple of topics generally arise. Asynchronous execution is often requested in opposition to the default option, which is to make changes synchronously during the trigger order of execution. Let's explore why you would or would not want to take action synchronously.
With these considerations in mind, both Flow and Apex offer solutions for executing logic asynchronously to meet use cases that require separate transactions, external callouts, or will just simply take too long. For Apex, we recommend implementing asynchronous processing inside a Queueable Apex class. For Flow, we recommend using the Run Asynchronously path in after-save flows to achieve a similar result in a low-code manner. (See Well-Architected - Throughput for more information about synchronous and asynchronous processing.)
When deciding between low code and pro code, a key consideration is the amount of control Apex gives you around callouts. Flow offers a fixed number of retries and some basic error handling via its fault path, but Apex offers more direct control. For a mixed use case, you can call System.enqueueJob against Queueable Apex from within an invocable Apex method, then invoke the method from Flow through the invocable action framework.
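Here is a minimal sketch of that mixed pattern, using a hypothetical credit-check job as the asynchronous work:

```apex
// Hypothetical sketch: a flow calls the invocable method, which enqueues a
// Queueable job so the callout runs in its own asynchronous transaction.
public with sharing class EnqueueCreditCheck {
    @InvocableMethod(label='Request Credit Check')
    public static void request(List<Id> opportunityIds) {
        System.enqueueJob(new CreditCheckJob(opportunityIds));
    }

    public class CreditCheckJob implements Queueable, Database.AllowsCallouts {
        private List<Id> opportunityIds;
        public CreditCheckJob(List<Id> opportunityIds) {
            this.opportunityIds = opportunityIds;
        }
        public void execute(QueueableContext context) {
            // The callout and any follow-up DML would go here, isolated from
            // the original record-triggered transaction.
        }
    }
}
```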
When testing any solution, particularly one employing callouts, it’s important to think through the ramifications of what happens when any particular step errors, times out, or sends back malformed data. In general, asynchronous processing has more power, but requires the designer to be more thoughtful about such edge cases, especially if that process is part of a larger solution that may be relying on a specific value. As an example, if your quoting automation requires a callout to a credit check bureau, what state will the quote be in if that credit check system is down for maintenance? What if it returns an invalid value? What state will your Opportunity or Lead be in during that interim, and what downstream automation is waiting on that result? Apex has more complex error handling customization than Flow, including the ability to intentionally trigger a failure case, and that may be a deciding factor between the two.
Previously, low-code admins have used various approaches (or “hacks”) for achieving asynchronous processing. One was to create a time-based workflow (in Workflow Rules), a scheduled action (in Process Builder), or a Scheduled Path (in Flow) that ran 0 minutes after the trigger executed. This effectively did the same thing as the Run Asynchronously path does today, but the new dedicated path has some advantages, including how quickly it will run. A 0-minute scheduled action could take a minute or more to fully instantiate, whereas Run Asynchronously is optimized to ensure it is enqueued and run as quickly as possible. Run Asynchronously will also potentially allow for more stateful capabilities in the future, like the ability to access the prior value of the triggering record, though it can’t do this today. It does some specialized caching to improve performance.
The other “hack” that has been used was to add a Pause element using an autolaunched subflow that waited for zero minutes, and then call that flow from Process Builder. That “zero-wait pause” will effectively break the transaction and schedule the remaining automation to run in its own transaction, but the mechanisms it uses do not scale well, as they were not designed for this purpose. As a result, increased use will lead to performance problems and flow interview limits. Additionally, the flow becomes more brittle and difficult to debug. Customers who have used this approach have often had to abandon it after reaching scale. We do not recommend starting down that path (pun intended), which is why it’s not available for subflows called from record-triggered flows.
One of the appeals of the “zero-wait pause” is the perceived stateful relationship between the synchronous and asynchronous processing. A flow variable may persist before and after the pause in this particular hack, even if that pause waits for weeks or months. This can have an appeal from an initial design perspective, but it goes against the underlying programming principles that asynchronous processing is intended to model. Separating out processes to run asynchronously gives them more flexibility and control over performance, but the data they operate on generally needs to be self-contained. That data could change in the time between when two independent processes run, even if it’s only milliseconds from one to the next, and almost certainly if it’s longer. Flow variables, like the ones created via New Resource, are designed to last only as long as the individual process that is running. If that information is going to be needed by a separate process, even one set to run asynchronously as soon as it finishes, it should be saved into persistent storage. Most often this will take the form of a custom field on the object of the record that triggered the flow, as that will automatically be loaded as $Record in any path on a record-triggered flow. For example, if you use Get Records to get an associated name from a contact for a record, and you want to reuse that name in an asynchronous path, you will either need to invoke the Get Records again in the separate path, or save that associated name back to $Record. If you need sophisticated caching or alternative data stores beyond Salesforce objects and records, we recommend using Apex. (See Well-Architected - State Management for more information about state management.)
When it comes to asynchronous processing, it may take additional care and consideration to design your record-triggered automation, particularly if you require callouts to external systems or need to perpetuate state between processes. The Run Asynchronously path in Flow should meet many of your low-code needs, but some complex ones around custom errors or configurable retries will require Apex instead.
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| Custom Validation Errors | Not Available | Not Available | Not Available | Available |
At this time, Flow provides no way to either prevent DML operations from committing or to throw custom errors; the addError() Apex method is not supported when executed from Flow via an Apex invocable method. Support for calling the addError() method directly from Flow as a new low-code element is expected in an upcoming ‘24 release. In the meantime, Validation Rules can be used for simple use cases and Apex triggers for complex ones.
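For reference, here is a minimal sketch of a custom validation error in an Apex trigger, assuming a hypothetical rule that the amount of a closed opportunity cannot change:

```apex
// Blocks the save for any record that fails the check and surfaces the
// message to the user on the Amount field.
trigger OpportunityValidation on Opportunity (before update) {
    for (Opportunity opp : Trigger.new) {
        Opportunity oldOpp = Trigger.oldMap.get(opp.Id);
        if (oldOpp.IsClosed && opp.Amount != oldOpp.Amount) {
            opp.Amount.addError('Amount cannot be changed after the opportunity is closed.');
        }
    }
}
```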
There are countless debates in the community around best practices when it comes to designing record-triggered automation. You may have heard some of the following:
The fact is that there are kernels of truth in all of the advice, but that none of them address everyone’s challenges or specific needs. There will always be exceptions and rules that apply to some instances but not others. This section describes the specific problems that are addressed by various pieces of advice, to help you make your own determinations.
When it came to building automation in Process Builder, performance was a big reason to recommend building one process per object. Process Builder has a high initialization cost, so every time a process ran on a record edit it would incur a performance hit, and because Process Builder didn’t offer gating entry conditions, that hit was incurred on every edit. Flow functions differently from Process Builder: its initialization cost is not nearly as high, but it does have some. Raw speed tests between Flow and Apex for identical use cases will usually show Apex at least theoretically ahead, since Flow’s low-code benefits add at least one layer of abstraction, but from a performance perspective this small difference is not a major differentiator for most use cases.
Flow also provides entry conditions, which can help dramatically lower the performance impact if they are used to exclude a flow from a record-edit. The majority of changes to a record are unlikely to necessitate running automation that makes additional changes. So if a typo gets fixed in a description, for example, you don’t need to rerun your owner assignment automation. You can configure entry conditions so that automation runs only when a certain conditional state is achieved. Edits made on a record are tracked and the automation executes only when a defined change is made. So you can run an automation when an opportunity is closed or on the specific edit that changed its status from open to closed. Either of these options are more efficient than running an automation on every update to a closed opportunity.
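The Apex analogue of such an entry condition is a guard that compares old and new values. A minimal sketch:

```apex
// Act only on the specific edit that moved an Opportunity from open to closed.
trigger OpportunityClosedHandler on Opportunity (after update) {
    List<Opportunity> justClosed = new List<Opportunity>();
    for (Opportunity opp : Trigger.new) {
        Opportunity oldOpp = Trigger.oldMap.get(opp.Id);
        if (opp.IsClosed && !oldOpp.IsClosed) {
            justClosed.add(opp);
        }
    }
    // Downstream automation runs only for the records in justClosed.
}
```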
Making your record-triggered automation performant is a multidimensional problem, and no single design rule will encompass all the factors. For Flow, there are two important points to remember when it comes to your design: perform same-record field updates before the save, and use entry conditions so that a flow runs only on the edits that actually need it.
This guide covers a number of performance considerations and recommendations, including using before-save flows to make field updates and eliminating excess or repeat DML operations wherever possible. Those areas, which are often where we see performance problems materialize in real-world customer scenarios, should be addressed first.
As architects, we would love to never have to troubleshoot automation, but we do from time to time. While having your automation spread out among multiple tools can work during initial development, it often causes more headaches over time as changes are made in different places. This is where the advice to consolidate your automation on a single Object into either Apex or Flow comes from. There is currently no unified troubleshooting experience that spans all Salesforce tools, so depending on the complexity of your organization and your anticipated debugging and troubleshooting needs, you may want to make the decision to stick with just one tool for your automation. Some customers make this a hard-and-fast rule due to their environment or the skills of their admins and engineers. Others find it useful to split out their automation across Flow and Apex, for example, by using invocable actions for pieces of automation that are too complex or require careful handling and calling them from Flow for greater access among admins.
It may be prudent to consolidate an object’s automation in a single tool when maintenance, debugging, or conflicts (such as different people editing the same field) are likely to be a concern. Other approaches, like using invocable actions to encapsulate more complex, developer-built functionality, can also work.
For many years, the biggest reason to consolidate automation into a single process or flow was to ensure ordering. The only way to keep two pieces of automation separate, but have them execute in a guaranteed sequential order, was to put them together. This quickly led to scaling problems. As orgs became more dynamic and needed to adapt to business changes, these “mega flows” became unwieldy and difficult to update, even for small changes.
With flow trigger ordering, introduced in Spring '22, admins can now assign a priority value to their flows and guarantee their execution order. This priority value is not an absolute value, so the values need not be sequentially numbered as 1, 2, 3, and so on. Instead, flows execute in order of their priority values, with a tie-break applied to duplicate values (if there are two priority 1s, for example, they execute alphabetically) in order to minimize disruption from other automation, managed packages, or movement between orgs. All flows that don’t have a trigger order assigned (including existing active flows) run between priorities 1000 and 1001 to allow for backwards compatibility. If you’d like to leave your active flows alone, you can start your ordering at 1001 for any new flows you’d like to run after them. As a best practice, leave space between the flows that you number: use 10, 20, and 30 as values rather than 1, 2, and 3, for example. That way, if you add a flow in the future, you can number it 15 to put it between your first and second without having to deactivate and edit those flows that are already running.
For more advanced use cases, such as task lists for groups, multi-step processes that interact with multiple users and multiple systems, or when you need an audit of the execution of your process, consider Flow Orchestration. An orchestration is a sequence of stages, each composed of one or more steps that can run synchronously or asynchronously. Each step in an orchestration is a flow: an autolaunched flow for background processing, or a screen flow for user interaction. You can specify whether changes to or creation of records in a given object will trigger or wake up an orchestration. Use Flow Orchestration to automate long-running processes, and use Flow Trigger Explorer to order record-triggered flows.
In the past, the need for ordering has led to recommendations for consolidating all automation into a single flow. With flow trigger ordering, there is now no need to do that. (See Well-Architected - Data Handling for more data handling best practices.)
It is tempting to dissect the technical reasons underpinning various best practices, but it’s no less important to think about your organization and the people building and maintaining the automation. Some customers like to have their admins build all their automation in subflows, with only one key administrator tasked with consolidating all those into a single flow as a way of managing change control. Some only want to build in Apex because they have developers who can get it done faster that way. Others want more functionality to come in Flow entry conditions, so they can use record type, for example, to ensure multiple groups can build automation that won’t run into conflicts in production (we are working on that record type request!). We recommend that you organize around your business first and group flows functionally by what they are intended to automate and who is intended to own them, but that is going to look different for different orgs.
It can be incredibly challenging to understand an org that has years of automation built by admins who have since moved on. Best practices and documented design standards for your organization that are implemented up front can help with long-term maintenance. Salesforce continues to invest in this area, with new features like Flow Trigger Explorer to help you understand what triggered automation is already in place and running today. It’s always a good idea to consider what will benefit the long-term health and maintenance of any automation you build. If you’re still stuck, we recommend reaching out to your Trailblazer Community. Many Trailblazers have gone down this path, and they can advise on the human side of building automation as well as on the technical details. Best practices come from everyone!
It’s important to remember that documentation is just as important as automation! As you document your work, write clear, unique names for things, and use the Description field on every element across Flow to explain your intent. Comment your code. Every architect who has been around long enough has rushed through this step to meet some deadline. Likewise, experienced architects have eventually found themselves on the receiving end of this scenario, scratching their heads at some undocumented, rogue piece of automation.
Ultimately, the best approach is one that works well with your business and organization. If you feel a bit lost, the Trailblazer Community is full of advice on how to manage a complex organization, so dig in and ask questions as you learn how to better match your unique business and admin setup with the product. And remember: Write things down!
The rest of this document describes technical details about the Flow runtime.
Approximately 150 billion actions were executed by Workflow, Process Builder, and Flow in April 2020, including record updates, email alerts, outbound messages, and invocable actions. Around 100 billion of those 150 billion actions were same-record field updates. Note that before-save flow triggers had only been launched the release before, so that means 100 billion after-save same-record field updates — or equivalently, 100 billion recursive saves — were executed in just one month. Imagine how much time could have been saved by before-save flow triggers!
Caveat: Architects should view all performance claims with a critical eye, even when they come from Salesforce. Results in your org will likely be different than the results in our orgs.
Earlier in this guide, we noted that while Workflow Rules have a reputation for being fast, they will always be slower and more resource-hungry than a single, functionally equivalent before-save flow trigger. The theoretical side to this assertion is that, by design, before-save flow triggers neither cause DML operations nor the ensuing recursive firing of the save order, while Workflow Rules do (because they happen after the save).
But what happens in practice? We ran a few experiments to find out.
How much longer does an end user have to wait for a record to save?
For each of the different automation tools that can be used to automate a same-record field update, we created a fresh org, plus one more fresh org to serve as a baseline.

Then, for each org, we:

- Implemented that tool's trigger on Opportunity Create that would set Opportunity.NextStep = Opportunity.Amount (the baseline org received no trigger).
- Set all debug log levels to None, except Workflow (Info) and Apex Code (Debug).
- Saved records and compared the resulting debug log durations against the baseline org.

This gave us the average overhead that each trigger added to the log duration.
How about the other side of the spectrum: high-volume batch processing?
We borrowed some of our performance team’s internal environments to get a sense of how well the different trigger tools scale.
The configuration was one org per automation tool, each set up with triggers on Account Create which each update Account.ShippingPostalCode.
Then each Tuesday for the last 12 weeks, we uploaded 50,000 Accounts to each org through the Bulk API, with a 200-record batch size.
Fortunately, our internal environments can directly profile trigger execution time without requiring Apex debug logging or extrapolation from a baseline.
Because our internal environments are not representative of production, we’re sharing only the relative performance timings, and not the raw performance timings.
In both single-record and bulk use cases, the before-save Flow performs extremely well. As much as we’d like to take credit for the outcomes, however, most of the performance savings come simply due to the huge advantage of being before the save.
Go forth and stop implementing same-record field updates in Workflow Rules and Process Builder!
This section is intended to help you better understand how & why Flow accrues against Governor limits the way it does. It contains technical discussion about Flow’s runtime bulkification & recursion control behaviors.
We’ll mainly be focusing on how Flow affects these Governor limits: the per-transaction caps on issued DML statements (150), processed DML records (10,000), and SOQL queries (100).
We assume the reader possesses a prerequisite understanding of what these limits represent, and we recommend refreshing on the content and terminologies used in How DML Works and Triggers and Order of Execution.
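If you want to see how a given transaction is accruing against these limits, the Apex Limits class gives a quick readout in a debug log. A small sketch you can drop into a trigger or anonymous Apex:

```apex
// Current consumption versus the per-transaction governor limits.
System.debug('DML statements: ' + Limits.getDmlStatements() + ' / ' + Limits.getLimitDmlStatements());
System.debug('DML rows: ' + Limits.getDmlRows() + ' / ' + Limits.getLimitDmlRows());
System.debug('SOQL queries: ' + Limits.getQueries() + ' / ' + Limits.getLimitQueries());
```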
Before diving into the specifics of triggered Flow runtime behavior, it’s extremely important to make sure we use the same common mental model of the save order for the purpose of further discussion. We believe a tree model provides a reasonably accurate abstraction.
Since each node in a save order tree corresponds to a single processed DML record, and there is a limit of 10,000 on the number of processed DML records per transaction, there can be no more than 10,000 nodes total, across all of the save order trees in the transaction.
Additionally, there can be no more than 150 unique timestamped DML operations {DML0, DML1, ..., DML149} across all of the save order trees in the transaction.
Now, let’s revisit our earlier example of a suboptimal cross-object triggered flow implementation:
Suppose that there are no other triggers in the org, and a user creates a single new Case, Case005, against parent Account Acme Corp. The corresponding save order tree is fairly simple:
Suppose that the user then creates two new cases, Case006 and Case007, in a single DML statement. You’d get two save order trees with three nodes each, for a total of six records processed by DML. However, thanks to Flow’s automatic cross-batch bulkification logic (Flow Bulkification), the six nodes would still be covered by a total of three issued DML statements:
Still not bad, right? In real life, though, you’d probably expect there to be a host of triggers on Account update, such that any single save order tree would end up looking like this (for the sake of discussion let’s say there are 3 triggers on Account):
And in a scenario where you’ve batch-inserted 200 Cases, there would be 200 respective save order trees sharing a 10,000 total node limit and a 150 total issued DML statements limit. Bad news bears.
However, by combining the Flow’s two original Update Records elements into a single Update Records element, the entire right subtree of Node0 can be eliminated.
This is an example of what we’ll call functional bulkification, one of two types of bulkification practices that can reduce the number of DML statements needed to process all the DML rows in a batch.
Functional bulkification attempts to minimize the number of unique DML statements that are needed to process all of the records in a single save order tree.
The example above achieves functional bulkification by effectively merging two functionally distinct DML nodes, and their respective save order subtrees, on Acme Corp. into a single, functionally equivalent, merged DML node and save order subtree. Not only does this reduce the number of DML statements issued, but it also saves CPU time. All the non-DML trigger logic is run once and not twice.
Cross-batch bulkification attempts to maximize the number of DML statements that can be shared across all save order trees in a batch.
An example of perfect cross-batch bulkification is an implementation where, if one record’s save order tree requires 5 DML statements to be issued, then a 200 record batch still requires only 5 DML statements to be issued.
In the above example, cross-batch bulkification is handled automatically by the Flow runtime.
Recursion control, on the other hand, increases processing efficiency by pruning functionally redundant subtrees.
The Flow runtime automatically performs cross-batch bulkification on behalf of the user. However, it does not perform any functional bulkification.
The following Flow elements can cause the consumption of DML & SOQL in a triggered flow: most notably Create Records, Update Records, and Delete Records (DML), and Get Records (SOQL).
As an example, consider the following triggered Flow implementation, which, when an Account is updated, automatically updates all of its related Contracts, and attaches a Contract Amendment Log child record to each of those updated Contracts.
Suppose now that 200 Accounts are bulk updated. Then during runtime, the DML and SOQL elements inside the loop execute once per related Contract rather than once per batch, so the number of DML statements and SOQL queries consumed grows with the number of Contracts processed and can quickly approach the transaction limits.
We strongly recommend against including DML & SOQL in loops for this reason. This is very similar to best practice #2 in Apex Code Best Practices. Users will be warned if they attempt to do so while building in Lightning Flow Builder.
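The Apex equivalent of that best practice looks like this (a sketch with an illustrative field update):

```apex
// Collect the changes inside the loop, then issue one DML statement for the whole list.
Set<Id> accountIds = new Map<Id, Account>([SELECT Id FROM Account LIMIT 200]).keySet();
List<Contract> toUpdate = new List<Contract>();
for (Contract c : [SELECT Id FROM Contract WHERE AccountId IN :accountIds]) {
    c.Description = 'Amended by automation';
    // update c;  // anti-pattern: one DML statement per iteration
    toUpdate.add(c);
}
update toUpdate; // bulkified: a single DML statement for all records
```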
Triggered Flows are subject to the recursive save behavior outlined in the Apex Developer Guide’s Triggers and Order of Execution page.
What does this actually mean? Let’s go back to the tree model we established earlier, and revisit this specific property of the tree:
The guarantee, “During a recursive save, Salesforce skips ... ” adds an additional bit of magic:
This has a few important implications:
[Consideration #1] A Flow trigger can fire multiple times on the same record during a transaction.
For example, suppose that in addition to the suboptimal Flow trigger on Case Create described above, the org also has a Flow trigger on Account Update. For simplicity's sake, let's assume the triggered Flow on Account Update is a no-op. Suppose we create a new Case, Case #007, with parent Account "Bond Brothers."

Then the save order tree would look like this:
1. The save order for Case Create on Case #007 is entered.
2. The Flow trigger on Case Create fires.
   1. The first Update Records element updates Bond Brothers.
      1. The save order for Account Update on Bond Brothers is entered.
      2. The Flow trigger on Account Update fires. // First execution on Bond Brothers.
      3. Since we assumed the triggered Flow on Account Update to be a no-op, nothing happens.
      4. There are no more triggers on Account Update, so Step 17 concludes.
      5. The save order for Account Update on Bond Brothers is exited.
   2. The second Update Records element updates Bond Brothers.
      1. The save order for Account Update on Bond Brothers is entered.
      2. The Flow trigger on Account Update fires. // Second execution on Bond Brothers. // Not a recursive execution!
      3. Since we assumed the triggered Flow on Account Update to be a no-op, nothing happens.
      4. There are no more triggers on Account Update, so Step 17 concludes.
      5. The save order for Account Update on Bond Brothers is exited.
3. There are no more triggers on Case Create, so Step 17 concludes.
4. The save order for Case Create on Case #007 concludes.

Had the two Update Records elements been merged into a single Update Records element, the resolved save order would have instead looked like this:
1. The save order for Case Create on Case #007 is entered.
2. The Flow trigger on Case Create fires.
   1. The merged Update Records element updates Bond Brothers.
      1. The save order for Account Update on Bond Brothers is entered.
      2. The Flow trigger on Account Update fires. // First execution on Bond Brothers.
      3. Since we assumed the triggered Flow on Account Update to be a no-op, nothing happens.
      4. There are no more triggers on Account Update, so Step 17 concludes.
      5. The save order for Account Update on Bond Brothers is exited.
3. There are no more triggers on Case Create, so Step 17 concludes.
4. The save order for Case Create on Case #007 concludes.

[Consideration #2] A Flow trigger will never cause itself to fire on the same record again.
[Consideration #3] Although Flow triggers (and all other triggers in the v48.0 save order steps 9-18) get this type of recursion control for free, Steps 1-8 and 19-21 do not. So, when an after-save Flow trigger performs a same-record update, a save order is entered, and Steps 1-8 and 19-21 all execute again. This behavior is why it’s so important to move same-record updates into before-save Flow triggers!
You’ve made it! Have a good day and thanks for the read. Hope you learned something you found valuable.
Help us make sure we're publishing what is most relevant to you: take our survey to provide feedback on this content and tell us what you’d like to see next.