
Content last updated December 2020. Roadmap corresponds to Spring ’21 projections.
Our forward-looking statement applies to roadmap projections.

Guide Overview

With the release of before-save Flow triggers in Spring ’20 and after-save Flow triggers in Summer ’20, we officially recommend Flow and Apex as the preferred no-code and pro-code options for triggered automation on the platform.

This document provides recommendations and rationale for which tools we believe are the most appropriate for various triggered automation use cases. It also provides insight into how Flow automatically handles bulkification and recursion control on behalf of the customer, as well as some pointers on how we recommend thinking about performance.

If you leave this document with nothing else, please take away the recommendations summarized in the table below.

This doc focuses on record-triggered automation. For the same assessment on Salesforce’s form-building tools, check out Architect’s Guide to Building Forms on Salesforce.

Low Code --------------------------------------> Pro Code

| Use Case | Before-Save Flow Trigger | After-Save Flow Trigger | After-Save Flow Trigger + Apex | Apex Triggers |
| --- | --- | --- | --- | --- |
| Same-Record Field Updates | Available | Not Ideal | Not Ideal | Available |
| High-Performance Batch Processing | Available | Not Ideal | Not Ideal | Available |
| Cross-Object CRUD | Not Available | Available | Available | Available |
| Complex List Processing | Not Available | Not Available | Available | Available |
| Fire & Forget Asynchronous Processing | Not Available | Not Available | Available | Available |
| Other Asynchronous Processing | Not Available | Not Available | Not Available | Available |
| Custom Validation Errors | Not Available | Not Available | Not Available | Available |

The table above enumerates the most common trigger use cases we see across our customer base, and the tools we believe are well-suited for each.

In a case where multiple tools are available for a use case, we recommend choosing the tool that will allow you to implement and maintain the use case with the lowest cost.

This will be highly dependent on the makeup of your team.

For example, if your team comprises Apex developers and already has a well-established CI/CD pipeline and a well-managed framework for handling Apex triggers, it will probably be cheaper to continue on that path. In this case, the cost of changing your organization’s operating models to adopt Flow development will be significant.

On the other hand, if your team doesn’t have consistent access to developer resources, or a strong institutionalized culture of code quality, there may be times when you’d be better served by triggered Flows that more people can maintain, than by several lines of code that very few people can maintain.

In an environment where there are mixed skill sets or admin-heavy skill sets, Flow triggers provide a very compelling option that is more performant and easier to debug, maintain, and extend than any no-code offering of the past. We propose seriously considering Flow triggers as a way to delegate the delivery of business process implementation, so that in a case where you have limited developer resources, you can focus those resources on driving impact in projects that will most highly leverage their skillsets.

So Long, Process Builder & Workflow Rules

Just kidding, we won’t be getting rid of these for a while.

But, we do believe that Flow is far better architected to meet the increasing functionality and extensibility requirements of our customers today.

For these reasons, moving forward we will be focusing our investments on Flow. We recommend building in Flow where possible, and resorting to Process Builder and/or Workflow only when necessary. We will continue supporting Process Builder & Workflow rules within their current functional capacities, but do not plan on making further investments.

Should you find yourself in a place where you must use Process Builder due to functional gaps, you may wish to consider implementing as much logic in an autolaunched Flow (which can be called by a process) as possible, so that this logic might be called by a Flow trigger rather than a process in the future. While this pattern spreads implementation and maintenance across two tools, it comes with notable benefits: autolaunched Flows have a far superior debug experience, are easier to manage, and can be unit tested with Apex. In some cases, autolaunched Flows also exhibit better performance than their functionally equivalent Process Builder implementations.

In a similar vein, it’s possible to implement trigger logic in Apex, business logic in Flow, and use the Interview.start() method in Apex to invoke the Flow. However, Interview.start() is not a bulk method, so use of this method in a trigger context should be approached with extreme caution.
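
To make the caution concrete, here is a minimal sketch of the pattern (the Flow name and input variable are hypothetical). Because Interview.start() runs a single interview per call, invoking it per record in a trigger loop multiplies any SOQL or DML inside the Flow by the batch size:

```apex
// Hypothetical sketch: invoking an autolaunched Flow named Apply_Discount
// from a trigger. Interview.start() is NOT a bulk method: each call starts
// one interview, so a 200-record batch spawns 200 separate interviews.
trigger OpportunityAfterSave on Opportunity (after update) {
    for (Opportunity opp : Trigger.new) {
        Map<String, Object> inputs = new Map<String, Object>{
            'recordId' => opp.Id  // input variable defined in the Flow
        };
        Flow.Interview.Apply_Discount interview =
            new Flow.Interview.Apply_Discount(inputs);
        interview.start();  // runs serially; the Flow's SOQL/DML is not shared
    }
}
```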

When we first launched this guide in June 2020, there were a number of major functional gaps between Flow triggers & Process Builder/Workflow. However, we’re proud to note that we’ve been able to close most of the gaps in the last two releases:

Process Builder

| Feature | Targeted Timeline for Availability in Flow Triggers (as of June 2020) | Status |
| --- | --- | --- |
| Navigate to related records using merge fields | Soon | ✔ Delivered in Winter '21 |
| Access global variables without having to create formulas every time | 1 Year | ✔ Delivered in Winter '21 |
| Execute a decision only if the field values that the record had immediately before saving didn't meet the expression criteria, but now do | Soon | ✔ Delivered in Winter '21 |
| Scheduled Actions | 1 Year | ✔ Delivered in Spring '21 |
| PRIORVALUE() | 1 Year | ✔ Delivered in Spring '21 |
| ISCHANGED() | 1 Year | Soon |
| ISNEW() | 1 Year | Soon |
| Call an autolaunched Flow | 1 Year | |
| Allow recursive executions | Never | |

Workflow Rules

| Feature | Targeted Timeline for Availability in Flow Triggers (as of June 2020) | Status |
| --- | --- | --- |
| Time-Based Workflow Monitoring | 1 Year | ✔ Delivered in Spring '21 |
| Send an outbound message | 1 Year | Soon |
| Set field values without triggering validation rules | Never | |

Use Case Considerations

Same-Record Field Updates

| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
| --- | --- | --- | --- | --- |
| Same-Record Updates | Available | Not Ideal | Not Ideal | Available |

Of all the recommendations in this document, we most strongly recommend taking steps to minimize the number of same-record field updates that occur after the save. Or put more plainly, stop implementing same-record field update actions inside Workflow Rules or Process Builder processes! And don’t start implementing same-record field updates in after-save Flow triggers, either! Instead, do start implementing same-record field update actions in before-save Flow triggers or before-save Apex triggers.

Before-save same-record field updates are sensationally more performant than after-save same-record field updates, by design.

  1. The record’s field values are already loaded into memory and don’t need to be loaded again.
  2. The update is performed by changing the values of the record in memory, relying on the original underlying DML operation to save the changes to the database. This avoids both an expensive DML operation and the entire recursive save that would otherwise ensue (runaway recursive saves are how poorly implemented processes get out of control).
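
For a concrete picture, here is a minimal before-save Apex trigger for the same-record update we benchmark later in this document (setting Opportunity.NextStep from Opportunity.Amount); a before-save Flow trigger achieves the same result with an Assignment element:

```apex
// Minimal before-save trigger: change the in-memory record and let the
// original DML operation persist it. No explicit update statement, no
// extra DML, and no recursive save.
trigger SetNextStep on Opportunity (before insert, before update) {
    for (Opportunity opp : Trigger.new) {
        if (opp.Amount != null) {
            opp.NextStep = String.valueOf(opp.Amount);
        }
    }
}
```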

Well, that’s the theory anyways; what happens in practice?

Our tests (Performance Discussion: Same-Record Field Updates) provide some empirical flavor. In our experiments, bulk same-record updates performed anywhere between 10-20x faster when implemented with before-save triggers than with Workflow Rules or Process Builder. For this reason, we do not believe performance should be considered a limitation of before-save Flow triggers, except in perhaps the most extreme scenarios.

The main limitation of before-save Flow triggers is that they are currently quite functionally sparse: you can query records, loop, evaluate formulas, assign variables, and perform decisions (aka Switch statements) for logic, and can only make updates to the underlying record. You cannot extend a before-save Flow trigger with Apex invocable actions or subflows. Meanwhile, you can do anything you want in a before-save Apex trigger (except explicit DML on the underlying record).

We know that same-record field updates account for the lion’s share of Workflow Rule actions executed site-wide, and are also a large contributor to problematic Process Builder execution performance. By pulling any “recursive saves” out of the save order and implementing them before the save, there’s a lot of exciting perf savings waiting to be realized.

High-Performance Batch Processing

| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
| --- | --- | --- | --- | --- |
| High-Performance Batch Processing | Available | Not Ideal | Not Ideal | Available |

If you desire the highly performant evaluation of complex logic in batch scenarios, then the vast configurability of Apex and its rich debug and tooling capabilities are for you.

The “complex logic” we have in mind overlaps heavily with the scenarios covered later in this document, such as complex list processing and asynchronous orchestration; for those, Apex is the better fit.

While before-save Flow triggers are not quite as performant as before-save Apex triggers in barebones speed contests, the overhead is minor in the context of the broader transaction. Before-save Flow triggers should still be amply fast for the vast majority of non-complex, same-record field update batch scenarios. And since they are pretty consistently more than 10x faster than Workflow Rules, it’s safe to use them anywhere you currently use Workflow Rules.

Cross-Object CRUD

| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
| --- | --- | --- | --- | --- |
| Cross-Object CRUD | Not Available | Available | Available | Available |

Creating, updating, or deleting a different record requires a database operation, no matter what tool you use. The only tool that doesn’t currently support cross-object “crupdeletes” (a portmanteau of the Create, Update, and Delete operations) is the before-save Flow trigger.

Currently, Apex outperforms Flow in raw database operation speed. That is, it takes less time for the Apex runtime to prepare, perform, and process the result of any specific database call (e.g. Create a Case) than it takes the Flow runtime to do the same. In practice, however, we believe that if you are looking for major performance improvements, you will likely reap greater impact by identifying any possibly inefficient user implementations, and fixing them first, before looking into optimizing for lower level operations. The execution of actual user logic on the app server generally consumes far more time than the handling of database operations.

The most inefficient user implementations tend to issue multiple DML statements where fewer would suffice. For example, here is an implementation of a Flow trigger that updates two fields on a case’s parent account record with two Update Records elements.

Image of flow designer with an after save flow triggering duplicative DML elements.

This is a suboptimal implementation as it causes two DML operations (and two save orders) to be executed at runtime. Combining the two field updates into a single Update Records element will result in only one DML operation being executed at runtime.

Workflow Rules has gained a reputation for being highly performant. Part of this can be attributed to how Workflow Rules constrains the amount of DML it performs during a save.

  1. All of the immediate, same-record field update actions, across all Workflow rules on an object, are automatically consolidated into a single DML statement at runtime (so long as their criteria are met).
  2. A similar runtime consolidation occurs for the immediate, detail-to-master cross-object field update actions across all the Workflow rules on an object.
  3. Cross-object DML support is very constrained in Workflow Rules in the first place.

Thus, when it comes to cross-object DML scenarios, the name of the game is to minimize unnecessary DML in the first place.

  1. Before starting any optimization, it’s crucial to first know where all the DML is happening. This step is easier when you have logic spread across fewer triggers and have to look in fewer places (one reason for the commonly espoused one/two-trigger-per-object pattern), but you can also solve for this by institutionalizing strong documentation practices, by maintaining object-centric subflows, or by creating your own design standards that enable the efficient discovery of DML at design time. Note: Flow triggers cannot invoke subflows as of Summer ’20, but it’s on the roadmap.
  2. Once you know where all the DML is happening, try to consolidate any DML that targets the same record into the fewest number of Update Records elements necessary.
  3. When dealing with more complex use cases that require conditionally and/or sequentially editing multiple fields on a related record, consider creating a record variable to serve as a temporary, in-memory container for the data in the related record. Make updates to the temporary data in that variable during the Flow’s logical sequence, and only perform a single, explicit Update Records to persist the temporary data to the database at the very end of the Flow (an Apex analogue is sketched just below).

Sometimes this is easier said than done — if you’re not actually experiencing performance issues, then you may just find that premature optimization is not worth the investment.
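
For developers, a rough Apex analogue of steps 2 and 3 (the object and fields here are illustrative) accumulates all field changes on one in-memory record per parent, then persists everything with a single DML statement at the end:

```apex
// Sketch: consolidate multiple field updates on a parent Account into one
// DML statement, instead of issuing one update per field change.
trigger CaseAfterSave on Case (after insert) {
    Map<Id, Account> parentUpdates = new Map<Id, Account>();
    for (Case c : Trigger.new) {
        if (c.AccountId == null) continue;
        // One in-memory Account per parent serves as the temporary container.
        Account acct = parentUpdates.containsKey(c.AccountId)
            ? parentUpdates.get(c.AccountId)
            : new Account(Id = c.AccountId);
        acct.Description = 'Latest case: ' + c.Subject; // first field change
        acct.Rating = 'Hot';                            // second field change
        parentUpdates.put(c.AccountId, acct);
    }
    // A single DML statement, and a single save order per parent record.
    update parentUpdates.values();
}
```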

Complex List Processing

| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
| --- | --- | --- | --- | --- |
| Complex List Processing | Not Available | Not Available | Available | Available |

There are a few major list processing limitations in Flow today.

  1. Flow offers a limited set of basic list processing operations out of the box.
  2. There’s no way to reference an item in a Flow collection, either by index or by using Flow’s Loop functionality (during runtime, for each iteration through a given collection, the Loop element simply assigns the next value in the collection to the Loop variable; this assignment is performed by value, and not by reference). Thus, you can’t do in Flow anything that you’d use MyList[myIndexVariable] to do in Apex.
  3. Loops are executed serially during runtime, even during batch processing. For this reason, any SOQL or DML operations which are enclosed within a loop are not bulkified, and add risk that the corresponding transaction Governor limits will be exceeded.

The combination of these limitations makes some common list-processing tasks, such as in-place data transforms, sorts, and filters, overly cumbersome to achieve in Flow while being much more straightforward (and more performant) to achieve in Apex.

This is where extending Flows with invocable Apex can really shine. Apex developers can and have created efficient, modular, object-agnostic list processing methods in Apex. Since these methods are declared as invocable methods, they are automatically made available to Flow users. It’s a great way to keep business logic implementation in a tool that business-facing users can use, without forcing developers to implement functional logic in a tool that’s not as well-suited for functional logic implementation.

When building invocable Apex, please take into account these considerations.
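
As a hedged illustration of the pattern described above (the class, labels, and operation are ours, not a platform API), an object-agnostic sort action might look like this:

```apex
// Sketch of an invocable list-processing utility. Declaring the method as
// @InvocableMethod makes it appear as an action in Flow Builder.
public with sharing class ListUtilities {
    public class Request {
        @InvocableVariable(required=true)
        public List<String> texts;
    }

    public class Result {
        @InvocableVariable
        public List<String> sortedTexts;
    }

    @InvocableMethod(label='Sort Text Collection')
    public static List<Result> sortTexts(List<Request> requests) {
        // The Flow runtime passes one Request per Interview in the batch;
        // iterating over the whole list keeps the action bulk-safe.
        List<Result> results = new List<Result>();
        for (Request req : requests) {
            List<String> workingList = new List<String>(req.texts);
            workingList.sort();
            Result res = new Result();
            res.sortedTexts = workingList;
            results.add(res);
        }
        return results;
    }
}
```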

Asynchronous Processing

| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
| --- | --- | --- | --- | --- |
| Fire & Forget Asynchronous Processing | Not Available | Not Available | Available | Available |
| Other Asynchronous Processing | Not Available | Not Available | Not Available | Available |

For the purposes of this section, we will establish the following categorization-by-intent to disambiguate scheduled processing from asynchronous processing:

  1. Scheduled processing: logic that runs later because the business requires it to (for example, following up three days after a case closes). The delay is itself part of the business requirement.
  2. Asynchronous processing: logic that runs outside the originating transaction for technical reasons, such as working within synchronous Governor limits or waiting on external systems. Here the delay is a technology decision, not a business one.

A canonical use case for asynchronous processing is web service callouts for a batch of records. Because web service integrations cannot be assumed to always be reliable and low latency, it can be difficult or impossible to fit all the necessary web service callouts for a 200-record batch (+ any associated error handling logic) within the Governor’s synchronous transaction limits.

Flow offers no native means of declaring a set of logic to execute asynchronously. We recommend implementing asynchronous processing inside a Queueable Apex class.

If the use case is fire-and-forget, such that no business requirements depend on the results of the asynchronous logic, then it can be achieved by calling System.enqueueJob against Queueable Apex from within an invocable Apex method, invoking that method from Flow through the invocable action framework, and implementing the business logic inside the Flow.
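
Here is a minimal sketch of that fire-and-forget arrangement (class names and the callout body are illustrative):

```apex
// Flow calls the invocable action; the action enqueues a Queueable job and
// returns immediately. No results flow back to the originating transaction.
public with sharing class CalloutEnqueuer {
    @InvocableMethod(label='Send Records to External Service')
    public static void enqueue(List<Id> recordIds) {
        System.enqueueJob(new CalloutJob(recordIds));
    }

    public class CalloutJob implements Queueable, Database.AllowsCallouts {
        private List<Id> recordIds;

        public CalloutJob(List<Id> recordIds) {
            this.recordIds = recordIds;
        }

        public void execute(QueueableContext context) {
            // Perform the web service callouts here, in a separate
            // transaction with its own Governor limits.
        }
    }
}
```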

However, for all other asynchronous use cases, we recommend a full Apex solution.

Note: the autolaunched Flow’s Pause element can be used to break processing into multiple transactions. While this can be used to force asynchronous processing, we do not recommend the pattern: Flow’s Pause element is intended for scheduled processing, that is, implementation driven by business requirements. Once you start mixing business-driven implementation with technology-driven implementation in a single Flow, the Flow becomes more fragile, harder to parse, and harder to maintain over time. Scalability also becomes an issue with zero-duration pause or platform event-based resume implementations.

Custom Validation Errors

| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
| --- | --- | --- | --- | --- |
| Custom Validation Errors | Not Available | Not Available | Not Available | Available |

At this time, Flow provides no way to either prevent DML operations from committing, or to throw custom errors; the addError() Apex method is not supported when executed from Flow via Apex invocable method.
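
By contrast, a custom validation error takes only a few lines in an Apex trigger. A minimal sketch (object, field, and message are illustrative):

```apex
// addError blocks the offending record from being saved and surfaces the
// message to the user (or to the API caller) without aborting the batch.
trigger ValidateOpportunityAmount on Opportunity (before insert, before update) {
    for (Opportunity opp : Trigger.new) {
        if (opp.Amount != null && opp.Amount < 0) {
            opp.Amount.addError('Amount cannot be negative.');
        }
    }
}
```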

Triggered Flow Runtime Behavior

The rest of this document describes technical details about the Flow runtime.

Performance Discussion: Same-Record Field Updates

This round’s performance discussion is going to be focused on the same-record field update use case. Of the ~150 billion actions that were executed by Workflow, Process Builder, and Flow in April this year — “actions” being things effected outside the runtime, such as record updates, email alerts, outbound messages, invocable actions — we believe that around 100 billion of those actions were same-record field updates. Note that before-save Flow triggers had only been launched the release before, so we’re talking ~ 100 billion after-save same-record field updates — or equivalently, 100 billion recursive saves — in the month of April. Wow! How much time could have been saved by before-save Flow triggers?

Caveat: Claims about performance should always be taken with a bowl of salt, even when they come from Salesforce. Results in your org will likely be different than the results in our orgs.

We made this claim earlier: “while Workflow Rules have a reputation for being fast, they nevertheless cause a recursive save and will always be considerably slower and more resource-hungry than a single functionally equivalent before-save Flow trigger.”

The theoretical side to this argument was that, by design, before-save Flow triggers neither cause DML nor the ensuing recursive firing of the save order, while Workflow Rules do, because they happen after the save.

But what happens in practice? Here are a few experiments we tried.

[Experiment 1] Single trigger; single record created from the UI; Apex debug log duration

How much longer does an end user have to wait for a record to save?

For each of the different automation tools that can be used to automate a same-record field update, we created a fresh org, plus one more fresh org to serve as a baseline.

Then for each org, we:

  1. Except for the baseline org, implemented the simplest version of a trigger on Opportunity Create that would set Opportunity.NextStep = Opportunity.Amount.
  2. Enabled Apex debug logging, with all debug levels set to None except Workflow.Info and Apex Code.Debug
  3. Manually created a new Opportunity record with a populated Amount value through the UI, 25 times.
  4. Calculated the average duration of the log across the 25 transactions.
  5. Subtracted the baseline org’s average log duration from the average duration computed in step 4.

This gave us the average overhead that each trigger added to the log duration.

Bar chart showing average time added to single-record field updates from most to least efficient tool.

[Experiment 2] 50 triggers; 50,000 records inserted via Bulk API (200 record batches); internal tooling

How about the other side of the spectrum: high-volume batch processing?

We borrowed some of our performance team’s internal environments to get a sense of how well the different trigger tools scale.

The configuration, per the heading above, was one org per tool, each with 50 same-record field update triggers on Account.

Then each Tuesday for the last 12 weeks, we uploaded 50,000 Accounts to each org through the Bulk API, with a 200-record batch size.

Fortunately, our internal environments can directly profile trigger execution time without requiring Apex debug logging or extrapolation from a baseline.

Unfortunately, our internal environments are so ill-representative of production that we’re only allowed to present the relative performance timings, and not the raw performance timings.

Bar chart showing average time added to bulk record updates from most to least efficient tool.

In both single-record and bulk use cases, the before-save Flow performs extremely well. As much as we’d like to take credit for the outcomes, however, most of the performance savings come simply due to the huge advantage of being before the save.

Go forth and stop implementing same-record field updates in Workflow Rules and Process Builder!

CPU Time

At this time, Flow CPU time consumption, as it is reported in the Apex debug logs, is inconsistent. We believe this is a reporting issue, and not a measurement issue.

For example, if a Flow consumes 8s actual CPU time during runtime, then the cumulative Governor CPU time limit will be incremented by 8s. However, the Flow element-level contributions to the limit, which are displayed in the Apex debug logs, are not always properly attributed. Sometimes, CPU time that should have been attributed to a Flow element is not attributed to any element in the Flow. As a result of this misattribution, the sum of the attributed element costs will not always equal 8s; instead, they may add up to a number that is less than 8s. In this case, the difference will be unintentionally rolled into the next CPU time consumption line in the debug logs. We are planning to fix this in Spring ’21.

In the meantime, a Flow’s CPU time consumption is bounded above by its wall-clock time consumption, because Flows run on a single thread.

Bulkification & Recursion Control

This section is intended to help you better understand how & why Flow accrues against Governor limits the way it does. It contains technical discussion about Flow’s runtime bulkification & recursion control behaviors.

We’ll mainly be focusing on how Flow affects these Governor limits: the per-transaction caps on issued DML statements (150), processed DML rows (10,000), SOQL queries, and CPU time.

We assume the reader possesses a prerequisite understanding of what these limits represent, and we recommend refreshing on the content and terminologies used in How DML Works and Triggers and Order of Execution.

Before diving into the specifics of triggered Flow runtime behavior, it’s important that we share a common mental model of the save order. We believe a tree model provides a reasonably accurate abstraction: each tree is rooted in a record from the original DML batch, and each node represents a record processed by DML during the ensuing save order.

Since each node in a save order tree corresponds to a single processed DML record, and there is a limit of 10,000 on the number of processed DML records per transaction, there can be no more than 10,000 nodes total, across all of the save order trees in the transaction.

Additionally, there can be no more than 150 unique timestamped DML operations {DML0, DML1, ..., DML149} across all of the save order trees in the transaction.

Now, let’s revisit our earlier example of a suboptimal cross-object triggered Flow implementation:

Image of flow designer with an after save flow triggering duplicative DML elements.

Suppose that there are no other triggers in the org, and a user creates a single new Case, Case005, against parent Account Acme Corp. The corresponding save order tree is fairly simple:

Image with three circles representing three DML nodes.

Suppose that the user then creates two new cases, Case006 and Case007, in a single DML statement. You’d get two save order trees with three nodes each, for a total of six records processed by DML. However, thanks to Flow’s automatic cross-batch bulkification logic (Flow Bulkification), the six nodes would still be covered by a total of three issued DML statements:

Image of two save operations, showing DML statements as circular nodes.

Still not bad, right? In real life, though, you’d probably expect there to be a host of triggers on Account update, such that any single save order tree would end up looking like this (for the sake of discussion let’s say there are 3 triggers on Account):

Image of more complex trigger scenario with a tree of multiple DML nodes.

And in a scenario where you’ve batch-inserted 200 Cases, there would be 200 respective save order trees sharing a 10,000 total node limit and a 150 total issued DML statements limit. Bad news bears.

However, by combining the Flow’s two original Update Records elements into a single Update Records element, the entire right subtree of Node0 can be eliminated.

Image of flow designer with an after save flow triggering a single DML operation. Image of a tree of DML nodes for single record trigger operation.

This is an example of what we’ll call functional bulkification, one of two types of bulkification practices that can reduce the number of DML statements needed to process all the DML rows in a batch.

  1. Functional bulkification attempts to minimize the number of unique DML statements that are needed to process all of the records in a single save order tree.

    The example above achieves functional bulkification by effectively merging two functionally distinct DML nodes, and their respective save order subtrees, on Acme Corp. into a single, functionally equivalent, merged DML node and save order subtree. Not only does this reduce the number of DML statements issued, but it also saves CPU time. All the non-DML trigger logic is run once and not twice.

  2. Cross-batch bulkification attempts to maximize the number of DML statements that can be shared across all save order trees in a batch.

    An example of perfect cross-batch bulkification is an implementation where, if one record’s save order tree requires 5 DML statements to be issued, then a 200 record batch still requires only 5 DML statements to be issued.

    In the above example, cross-batch bulkification is handled automatically by the Flow runtime.

Recursion control, on the other hand, increases processing efficiency by pruning functionally redundant subtrees.

Flow Bulkification

The Flow runtime automatically performs cross-batch bulkification on behalf of the user. However, it does not perform any functional bulkification.

The following Flow elements can cause the consumption of DML & SOQL in a triggered Flow.

  1. Create / Update / Delete Records: Each element consumes 1 DML for the entire batch, not including any downstream DML caused by triggers on the target object.
  2. Get Records: Each element consumes 1 SOQL for the entire batch.
  3. Action Calls: Depends on how the action is implemented. During runtime, the Flow runtime compiles a list of the inputs across all of the relevant Flow Interviews in the batch, then passes that list into a bulk action call. From that point, it’s up to the action developer to ensure the action is properly bulkified.
  4. Loop: Doesn’t consume DML or SOQL directly, but instead overrides rules #1-3 above by executing each contained element in the loop serially, for each Flow Interview in the batch, one-by-one.
    1. This essentially “escapes” Flow’s automatic cross-batch bulkification: no DML or SOQL in a loop is shared across the save order trees, so the number of records in a batch has a multiplicative effect on the amount of DML & SOQL consumed.

As an example, consider the following triggered Flow implementation, which, when an Account is updated, automatically updates all of its related Contracts, and attaches a Contract Amendment Log child record to each of those updated Contracts.

Image of Account after save flow performing related record DML.

Suppose now that 200 Accounts are bulk updated. Then during runtime:

  1. The Get Related Contracts element will add + 1 SOQL for the entire batch of 200 Accounts.
  2. Then, for each Account in the 200 Accounts:
    1. For each Contract that is related to that Account:
      1. The Update Contract element will add + 1 DML to update the Contract, not including any downstream DML caused by triggers on Contract update.
      2. The Create Contract Amendment Log will add + 1 DML to create the corresponding Contract Amendment Log child record, not including any downstream DML caused by triggers on Contract Amendment Log create.
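
To put numbers on it: if each of those 200 Accounts has just 2 related Contracts, the loop issues 200 × 2 × 2 = 800 DML statements (before counting any downstream triggers), far exceeding the 150 DML statement limit, while the bulkified Get Related Contracts element cost only 1 SOQL for the entire batch.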

We strongly recommend against including DML & SOQL in loops for this reason. This is very similar to best practice #2 in Apex Code Best Practices. Users will be warned if they attempt to do so while building in Lightning Flow Builder.

Flow Recursion Control

Triggered Flows are subject to the recursive save behavior outlined in the Apex Developer Guide’s Triggers and Order of Execution page.

Image displaying text about rules skipped in recursive save.

What does this actually mean? Let’s go back to the tree model we established earlier. Recall that each node in a save order tree represents a record processed by DML, with that record’s save order executing beneath it.

The guarantee that “during a recursive save, Salesforce skips” steps 9-18 adds an additional bit of magic: when a trigger’s own DML would cause those steps to re-execute for the same record, they are skipped instead, pruning the functionally redundant subtree from that node.

This has a few important implications:

[Consideration #1] A Flow trigger can fire multiple times on the same record during a transaction.

Image of Case after save flow performing duplicative DML on related records.

For example, suppose that in addition to the suboptimal Flow trigger on Case Create shown above, the org also has a Flow trigger on Account Update.

For simplicity’s sake, let’s assume the triggered Flow on Account Update is a no-op. Suppose we create a new Case, Case #007, with parent Account “Bond Brothers.”

Then the save order tree would look like this:

  1. Case #007 is created.
  2. Save order for Case Create on Case #007 is entered.
    1. Steps 1-16 in the save order execute. Since there are no other triggers on Case aside from the Flow trigger above, nothing happens.
    2. Step 17 executes: our public doc hasn’t been updated yet, but after-save Flow triggers will be the new step #17; the current step #17, roll-ups, and everything below it, is shifted 1 step lower.
      1. The Flow trigger on Case Create fires.
        1. The Flow trigger updates the Bond Brothers Account rating.
          1. Save order for Account Update on Bond Brothers is entered.
          2. Steps 1-16 in the save order execute. No operations.
          3. Step 17 executes.
            1. The Flow trigger on Account Update fires. // First execution on Bond Brothers.
              1. Since we defined the Flow trigger on Account Update to be a no-op, nothing happens.
            2. Since there are no other Flow triggers on Account Update, Step 17 concludes.
          4. Steps 18-22 execute. No operations.
          5. Save order for Account Update on Bond Brothers is exited.
        2. The Flow trigger updates the Bond Brothers Account propensity to pay.
          1. Save order for Account Update on Bond Brothers is entered.
          2. Steps 1-16 in the save order execute. No operations.
          3. Step 17 executes.
            1. The Flow trigger on Account Update fires. // Second execution on Bond Brothers. // Not a recursive execution!
              1. Since we defined the Flow trigger on Account Update to be a no-op, nothing happens.
            2. Since there are no other Flow triggers on Account Update, Step 17 concludes.
          4. Steps 18-22 execute. No operations.
          5. Save order for Account Update on Bond Brothers is exited.
      2. Since there are no other Flow triggers on Case Create, Step 17 concludes.
    3. Steps 18-22 execute. No operations.
    4. Save order for Case Create on Case #007 concludes.
  3. Transaction closes.

Had the two Update Records elements been merged into a single Update Records element, the resolved save order would have instead looked like this:

  1. Case #007 is created.
  2. Save order for Case Create on Case #007 is entered.
    1. Steps 1-16 in the save order execute. Since there are no other triggers on Case aside from the Flow trigger above, nothing happens.
    2. Step 17 executes: our public doc hasn’t been updated yet, but after-save Flow triggers will be the new step #17; the current step #17, roll-ups, and everything below it, is shifted 1 step lower.
      1. The Flow trigger on Case Create fires.
        1. The Flow trigger updates the Bond Brothers Account rating and propensity to pay.
          1. Save order for Account Update on Bond Brothers is entered.
          2. Steps 1-16 in the save order execute. No operations.
          3. Step 17 executes.
            1. The Flow trigger on Account Update fires. // First execution on Bond Brothers.
              1. Since we defined the Flow trigger on Account Update to be a no-op, nothing happens.
            2. Since there are no other Flow triggers on Account Update, Step 17 concludes.
          4. Steps 18-22 execute. No operations.
          5. Save order for Account Update on Bond Brothers is exited.
      2. Since there are no other Flow triggers on Case Create, Step 17 concludes.
    3. Steps 18-22 execute. No operations.
    4. Save order for Case Create on Case #007 concludes.
  3. Transaction closes.

[Consideration #2] A Flow trigger will never cause itself to fire on the same record again.

[Consideration #3] Although Flow triggers (and all other triggers in the v48.0 save order steps 9-18) get this type of recursion control for free, Steps 1-8 and 19-21 do not. So, when an after-save Flow trigger performs a same-record update, a save order is entered, and Steps 1-8 and 19-21 all execute again. This behavior is why it’s so important to move same-record updates into before-save Flow triggers!

Closing Remarks

You’ve made it! Have a good day and thanks for the read. Hope you learned something you found valuable.

Tell us what you think

Help us make sure we're publishing what is most relevant to you: take our survey to provide feedback on this content and tell us what you’d like to see next.