Systems demonstrate automated behavior by enabling the business to meet key goals and objectives faster and at scale. Healthy automation enables users to focus on high-value work and reduces time spent on repetitive, manual tasks or complex data entry.
Most often, automation means translating business processes from one form to another: from paper-based form to digital form, from an old system to a new one. With every business process translation comes an opportunity for transformation.
Transformation is not about using new technologies to introduce disruptive and confusing changes for users. Transformation is about creating simpler ways for work to get done, enabling business to grow without friction, and empowering business users to focus more deeply on what really matters to their stakeholders. From an architectural point of view, this involves identifying tasks that can be eliminated altogether, or handled automatically. It requires a clear connection between how technology is used and its measurable impact on the business.
Something important to note about automation with Salesforce: it can be done with a variety of tools, using programmatic and declarative skill sets. Designing automations that are well-architected is not about choosing to build with just one automation tool. It is about using approaches that are consistent and predictable, and enabling teams to develop, test, deploy, and maintain the automations you design. Your automations should take the most maintainable and readable form possible.
This section covers how to design and refactor automations to enable businesses to meet key objectives faster and at scale. To learn more about choosing the right tool for your automation use case, see the Architect’s Decision Guide to Record-Triggered Automation.
You can improve the architecture of your automations in Salesforce by focusing on efficiency and data integrity.
Creating efficiency in your automations isn’t about dutifully re-creating business as usual with Salesforce technologies. It’s about deeply understanding the key metrics and business outcomes that teams will be accountable for meeting or tracking, and stepping back to see functional units within and across the work that you’re automating. It’s about identifying how you can create patterns with your automations that enable the business to operate more effectively and quickly, at scale.
Efficient automation logic makes your systems faster, more scalable, and easier to maintain.
You can improve efficiency in your automations through process design and operational logic.
Process design involves defining the ways work gets done. Building truly efficient and effective processes means your designs do not just replicate current ways of working. Identifying and removing ineffective or unclear steps is essential. Optimized processes should create measurable business value (see KPIs) without unnecessary steps. Unclear or unnecessary steps will likely create technical debt and result in unmaintainable automations.
Often, the responsibility for discovering and documenting business processes falls to a business analyst or even a system administrator. Architects are responsible for partnering with these members of your team to make sure your process designs are technically sound and well structured. Applying your knowledge of the Salesforce platform as early as possible will help your team identify processes to streamline through automation or processes that need to change to avoid costly customizations.
To build optimized processes for Salesforce, consider:
Define processes thoroughly. Processes with unclear purposes or ambiguous definitions are more likely to be misinterpreted at design-time. This will lead to flawed designs that are based on assumptions, which will result in incorrect or inefficient automations. Ensure the business processes you want to automate meet the following standards:
Make process steps clear. While it can sometimes be tempting to add additional steps that “might be helpful in the future”, this is never a good approach. Every step in an automation must be relevant to the outcome of the overall process. Ensure each process step has the following characteristics:
The list of patterns and anti-patterns below shows what proper (and poor) optimization looks like in a Salesforce org. You can use these to validate your automation designs before you build, or identify automations that need to be optimized further.
To learn more about process automation tools available from Salesforce, see Tools Relevant to Automated.
Operational logic deals with how effectively a process is translated from its design into an actual implementation. Automations with strong operational logic continue to perform well, regardless of spikes in transaction volumes or the number of concurrent instances that are running. Logically sound automations help businesses to more easily scale to operate at higher levels of demand. Building strong operational logic into your automations is directly related to the overall reliability of your system.
Automations that do not operate effectively provide poor user and customer experiences, leading to both potential revenue losses and loss of customer trust. They also have higher maintenance costs and can become bottlenecks that delay related processes, contributing to overall system performance issues.
To create effective operational logic in automations, consider:
Use selective `WHERE` statement logic in your queries. Your code should also have proper controls to ensure a query will never run in non-bulkified contexts or against null or blank filter criteria.

The list of patterns and anti-patterns below shows what proper (and poor) operational logic looks like in Salesforce automation. You can use these to validate your automation designs before you build, or identify automations that need to be optimized further.
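These considerations can be sketched in Apex. This is a minimal illustration (the `AccountService` class and its method are hypothetical); the key points are the guard against null or empty filter criteria and keeping the query outside any loop so it runs once per bulk transaction:

```apex
public with sharing class AccountService {
    // Hypothetical helper: returns accounts for a set of IDs collected
    // from a bulk context (e.g., Trigger.new), never queried inside a loop.
    public static Map<Id, Account> getAccountsById(Set<Id> accountIds) {
        // Guard: never run the query against null or empty filter criteria
        if (accountIds == null || accountIds.isEmpty()) {
            return new Map<Id, Account>();
        }
        // Selective filter: positive IN comparison against an indexed field
        return new Map<Id, Account>(
            [SELECT Id, Name, Industry FROM Account WHERE Id IN :accountIds]
        );
    }
}
```

Callers pass the full set of IDs from a bulk context in a single call, so the query executes once per transaction rather than once per record.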
To learn more about tools available from Salesforce that can help you plan for scale, see Tools Relevant to Automated.
Automation KPIs measure the impact of an automation over time. Without them, you’ll have no way to tell if an automation is truly adding business value or creating unintended complexity for your users. Every automation you build should be tied to a clear, measurable set of KPIs.
Good KPIs are defined by a measurable value along with an associated time frame, for example, reducing average case resolution time by a target percentage within a quarter, or increasing the number of leads converted per month.
Once you have clear, measurable KPIs, you have to also understand if and how an automation in Salesforce will generate data that is relevant to reporting against those KPIs.
The list of patterns and anti-patterns below shows what proper (and poor) KPIs look like when it comes to Salesforce automations. You can use these to validate your existing KPIs, or identify where you need to better identify KPIs before you build.
To learn more about tools available from Salesforce for help with KPIs, see Tools Relevant to Automated.
The following table shows a selection of patterns to look for (or build) in your org and anti-patterns to avoid or target for remediation.
✨ Discover more patterns for efficiency in the Pattern & Anti-Pattern Explorer.
Patterns | Anti-Patterns | |
---|---|---|
Process Design | In your org:
- Each flow serves a single, specific purpose - Each step performs a specific, granular task - Flows are organized in a hierarchical structure consisting of a main flow and supporting subflows - All user inputs have a clear purpose within the flow - Users are only asked to provide data when existing system data can’t be used |
In your org:
- Flows serve multiple purposes and require additional inputs to provide context - Flows require inputs whose data is not used - Groups of related steps contain functionality that overlaps with groups of steps in other flows - Flows ask for user inputs when stored data can be used instead |
In Apex:
- Each class serves a single, specific purpose - Each method performs a specific, granular task - All input variables have a clear purpose within the class - Code execution requires a minimal number of resources |
In Apex:
- Classes serve multiple purposes - Methods perform multiple tasks or methods perform tasks that don’t align to the stated purpose of the class they’re part of - Input variables aren’t actually used in methods - Methods unnecessarily retrieve data from the database or from external systems |
|
Operational Logic | In Flow:
- No variables refer to hard-coded values (for record types, users, etc.) - All autolaunched flows and processes use decision and/or pause elements to evaluate entry criteria and prevent infinite loops or executions against large data volumes - Flows (including processes) hand logic off to Apex in large data volume contexts - Subflows are used for the sections of a process that need to be reused across the business |
In Flow:
- Variables have hard-coded values - Flows (including processes) must be manually deactivated prior to bulk data loads - Flows (including processes) trigger "unhandled exception" notices - Even simple flows regularly cause errors related to governor limits - Portions of a flow are repeated across flows rather than using subflows |
In Apex:
- No variables refer to hard-coded values (for record types, users, etc.) - All wildcard criteria appear in SOSL - SOQL is wrapped in `try-catch`
- No SOQL appears within a loop - SOQL statements are selective, including: -- no usage of `LIKE` comparisons or partial text comparisons
-- comparison operators use positive logic (e.g. `INCLUDES`, `IN`) as primary or only logic
-- usage of `= NULL`, `!= NULL` is rare and/or always follows a positive comparison operator
-- no `LIMIT 1` statements appear
-- no usage of `ALL ROWS` keyword
| In Apex:
- Variables have hard-coded values - SOSL is rarely or not consistently used for wildcard selection criteria - SOQL is not wrapped in `try-catch`
- SOQL appears within loops - SOQL statements are non-selective, including: -- `LIKE` and wildcard filter criteria appear
-- comparisons using `NOT`, `NOT IN` criteria are used as the primary or only comparison operator
-- `= NULL`, `!= NULL` criteria are used as the primary or only comparison operator
-- `LIMIT 1` statements appear
-- `ALL ROWS` keyword is used
| |
In your design standards and documentation:
- Planned and potential execution paths for automations are outlined clearly - The use cases for synchronous and asynchronous operations within automations are outlined clearly as part of design standards |
In your design standards and documentation:
- Automation invocation is not documented - Use cases for synchronous and asynchronous operations are not addressed |
|
KPIs | Within your documentation:
- Outputs for every automation are measurable and timebound - Accountable stakeholders are listed for each KPI |
Within your documentation:
- KPIs do not exist for automations or have unclear time frames for measurements - KPIs exist without accountable stakeholders |
Within reports and dashboards:
- All metrics related to KPIs are included in at least one report or dashboard |
Within reports and dashboards:
- KPI reporting does not exist or reports are missing metrics related to some KPIs |
Data integrity is about how well a system maintains accurate and complete data. The Salesforce Platform maintains robust, built-in processing logic designed to protect the integrity of data stored in an individual org’s relational database. One of the fundamentals of building healthy automations is understanding the built-in data integrity behaviors of Salesforce, and making sure all your automation designs align with (and acknowledge) these behaviors.
The biggest anti-patterns in automation design arise from failing to recognize the powerful data integrity services already provided by Salesforce and failing to use standard functionality that takes advantage of these services. To design automations that protect and maintain data integrity you must be familiar with the fundamental order of operation behaviors of Salesforce.
Properly extending data integrity into your custom automations means your system can:
You can build better data integrity into your Salesforce automations through proper data handling and error handling.
The first step to designing for proper data handling in Salesforce is understanding how the multitenant platform handles transactions. This includes understanding the built-in order of execution behaviors that the Salesforce Platform uses to ensure data integrity during record-level data operations. For more on the impacts of this behavior, see Database Manipulation in Salesforce Architecture Basics.
Poor data handling in your automations can be some of the most difficult anti-patterns to identify and fully remediate. The recursive and overlapping nature of the platform’s order of execution can make it difficult to see where issues originate. The specific section of code or flow that throws a fatal error or exceeds governor limits may not be the root cause of an underlying data handling issue.
Transaction awareness is key to building automations that perform reliably and at scale with Salesforce. This means making sure that every step in an automation is designed with the knowledge of where it is in relation to the platform-controlled order of execution, can carry out its function correctly, and passes on information to the next step correctly.
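As a sketch of transaction awareness in Apex (the trigger and field values are illustrative): in a before context, the triggering records are still writable in memory, so a field can be set with no DML statement at all, whereas issuing DML against those same records in an after context would start another pass through the order of execution:

```apex
trigger AccountTrigger on Account (before insert, before update) {
    // Before context: Trigger.new records can be modified directly.
    // No DML is needed, so no extra save cycle is triggered.
    for (Account acct : Trigger.new) {
        if (String.isBlank(acct.Description)) {
            acct.Description = 'Created via automated process';
        }
    }
}
```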
Regardless of the automation tool you are using, proper transaction awareness follows similar patterns and requires common considerations:
Beyond transaction awareness, there is a second dimension to data handling: knowing when to carry out logic in different execution contexts. Common reasons to break automations up into different execution contexts include:
The list of patterns and anti-patterns below shows what proper (and poor) data handling looks like in Salesforce automations. You can use these to validate your automation designs before you build, or identify automations that need to be refactored to improve data handling.
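One common way to move work into a separate execution context is asynchronous Apex. A minimal Queueable chaining sketch follows (the class names and follow-on job are illustrative, not a prescribed design):

```apex
public class EnrichAccountsJob implements Queueable {
    private List<Id> accountIds;

    public EnrichAccountsJob(List<Id> accountIds) {
        this.accountIds = accountIds;
    }

    public void execute(QueueableContext context) {
        // This DML runs in its own asynchronous transaction, with its
        // own governor limits, separate from the invoking transaction.
        List<Account> accounts =
            [SELECT Id, Rating FROM Account WHERE Id IN :accountIds];
        for (Account acct : accounts) {
            acct.Rating = 'Warm';
        }
        update accounts;
        // Chain a follow-on job (hypothetical) so complex DML is
        // split cleanly across transactions.
        System.enqueueJob(new NotifyOwnersJob(accountIds));
    }
}
```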
To learn more about tools available from Salesforce for data handling in automation, see Tools Relevant to Automated.
Error handling is critical for data integrity. Strong error handling also helps your system scale and age with more resilience.
Improper error handling in automations can lead to:
Error handling in automations requires giving any running process the capability to parse an error for information, access logic about what the next steps should be based on error information, and then follow the correct path. These capabilities don’t need to be built over and over in every automation (that’s an optimization anti-pattern). Instead, every automation in the system should have the ability to connect to the relevant error handling components.
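A sketch of this approach in Apex, assuming a hypothetical shared `ErrorLogger` component that every automation connects to rather than rebuilding its own handling:

```apex
public with sharing class OrderActivationService {
    // Custom exception carries domain-specific error information
    public class OrderActivationException extends Exception {}

    public static void activate(List<Order> orders) {
        try {
            // Partial-success DML: allOrNone = false returns per-record results
            List<Database.SaveResult> results = Database.update(orders, false);
            for (Database.SaveResult sr : results) {
                if (!sr.isSuccess()) {
                    // Hand error details to the shared, reusable handler
                    ErrorLogger.log('OrderActivationService', sr.getErrors());
                }
            }
        } catch (DmlException e) {
            ErrorLogger.log('OrderActivationService', e);
            throw new OrderActivationException('Order activation failed', e);
        }
    }
}
```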
To build proper error handling controls into your automations, ask these questions:
Once you’ve decided how to handle these errors, you can start to build effective error handling into your automations. The list of patterns and anti-patterns below shows what proper (and poor) error handling looks like in a Salesforce automation. You can use these to validate your automation designs before you build, or identify automations that need to be refactored to improve error handling.
To learn more about tools available from Salesforce for error handling, see Tools Relevant to Automated.
The following table shows a selection of patterns to look for (or build) in your org and anti-patterns to avoid or target for remediation.
✨ Discover more patterns for data integrity in the Pattern & Anti-Pattern Explorer.
Patterns | Anti-Patterns | |
---|---|---|
Data Handling | In your data dictionary:
- Field-level data and prioritization logic for all data sources and data lake objects exists - Field mapping from data lake object to data model object exists |
In your data dictionary:
- Field-level data and prioritization logic for data sources and data lake objects are not included - Field mapping from data lake objects to data model objects is not included |
In your Apex:
- All synchronous DML statements or Database class methods are carried out in before trigger execution contexts - Async Apex invocations use queueables to 'chain' complex DML across transactions - Batch Apex is used exclusively for large data volumes - @future Apex is not used or used sparingly, for callouts or system object DML |
In your Apex:
- DML statements regularly appear in code that will be invoked in after trigger contexts - Async Apex is rarely used - Async Apex features are used arbitrarily, including: -- Future Methods and Queueable Apex are used inconsistently or interchangeably -- Database operations do not have clear, consistent logic for passing execution to Batch Apex when needed |
|
In Flow:
- All flows launched in user context abstract all system context transactions to subflows, which are consistently placed after a Pause element, to create a new transaction - Complex sequences of related data operations are created with Orchestrator (instead of invoking multiple subflows within a monolithic flow) - All record-triggered flows have trigger order values populated - Flows involving external system callouts or long-running processes use asynchronous paths |
In Flow:
- Large, monolithic flows attempt to coordinate complex sequences of related data operations (with or without subflows) - Record-triggered flows do not use trigger order attributes at all or do not use trigger order values consistently - Asynchronous paths are not used consistently or at all |
|
In your org:
- Identity Resolution Reconciliation Rules follow the prioritization logic in your data dictionary |
In your org:
- Identity Resolution Reconciliation Rules do not follow prioritization logic in the data dictionary |
|
Error Handling | In Apex:
- Code wraps all DML, SOQL, callouts, and other critical process steps in try-catch blocks
- Custom exceptions are used to create advanced error messaging and logic - In async and bulk contexts, Database class methods are used instead of DML - Database class methods may be used exclusively for all data operations (instead of DML) |
In Apex:
- DML, SOQL, callouts, or other critical process steps are not consistently wrapped in try-catch blocks
- System.debug statements appear in production code (and are not commented out)
- No Database class methods are used - Data operations are done exclusively with DML |
In Lightning Web Components (LWC):
- JavaScript wraps all data operations and critical process steps in `if()`/`else if()` blocks
- All `@wire` functions use `data` and `error` properties provided by the API
- All `if (error)`/`else if (error)` statements contain logic to process errors and provide informative messages |
In LWC:
- JavaScript does not consistently use `if()`/`else if()` blocks with data operations or critical process steps
- `@wire` functions do not use `data` and `error` properties provided by the API (or do not use them consistently)
- If used at all, `if (error)`/`else if (error)` statements do not actually contain logic to process errors and provide useful error messages |
|
In Aura:
- JavaScript wraps all data operations and critical process steps in try-catch blocks
- Within try-catch blocks, native JavaScript Error is used in throw statements (no usage of `$A.error()`)
- All recoverable error logic appears within catch statements, and provides clear user messages |
In Aura:
- JavaScript does not consistently wrap data operations and critical process steps in try-catch blocks
- Components use `$A.error()`
- Recoverable error logic does not consistently appear within catch statements, and error messages to users are not clear |
|
In Flow:
- Screen flows consistently use fault connectors to show errors to users - Custom error messages are configured for errors that will appear on screen - Flows with data operations, callouts, and other critical processing logic have fault paths for all key actions |
In Flow:
- Flows do not use fault paths consistently or at all - Custom error messages are not used, so users see the default "An unhandled fault has occurred in this flow" message |
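The LWC patterns in the table above depend on processing the `error` property that `@wire` provides into something users can act on. A small, framework-free sketch of that error-processing step (the function name and error shapes are illustrative, modeled on common LWC wire and Apex error formats):

```javascript
// Normalize common LWC error shapes into user-readable messages.
// Handles: a plain Error, a single wire/Apex error ({ body: { message } }),
// and a UI API array-of-errors ({ body: [{ message }, ...] }).
function reduceErrorMessages(error) {
  if (!error) {
    return [];
  }
  if (Array.isArray(error.body)) {
    return error.body.map((e) => e.message);
  }
  if (error.body && typeof error.body.message === 'string') {
    return [error.body.message];
  }
  if (typeof error.message === 'string') {
    return [error.message];
  }
  return ['Unknown error'];
}

module.exports = { reduceErrorMessages };
```

An `else if (error)` branch in a wired handler can pass the wire error through a helper like this and surface the resulting messages in a toast, rather than leaving the error unprocessed.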
The concept of business value, in the context of automation, is about how well processes create measurable, positive impact for business stakeholders. Ideally, process automation enables users to spend less time on repetitive, low-value tasks. It also helps boost data integrity by eliminating manual processing activities that could introduce errors. Much like Process Design, identifying and delivering automations that will drive real business value requires work beyond basic discovery and business analysis.
At times, it may seem like the best way to deliver value to the business is to simply automate every process requested by a business user, either in the order they appear in your backlog (or ticketing queue) or based on political factors in your organization. This can lead to two related problems: building automations in a suboptimal order and building the wrong automations altogether. The first problem, poor prioritization, prevents high-value processes from getting implemented when they should, potentially slowing growth. The second problem, building the wrong automations, not only delays the delivery of high-value automations, it also leads to misspent time, unnecessary costs, and increased frustration among delivery teams.
You can deliver greater business value by focusing on KPIs and prioritization.
Tool | Description | Efficiency | Data Integrity | Business Value |
---|---|---|---|---|
Apex Batching | Batch records together and process them in manageable chunks | X | X | |
Apex Future Methods | Asynchronously execute Apex methods in the background | X | X | |
Apex Queueing | Add Apex jobs to a queue and monitor them | X | X | |
Apex Scheduler | Asynchronously execute Apex classes at specified times | X | X | |
Approvals | Specify the required steps to approve records | X | X | |
Asynchronous Apex | Run Apex code asynchronously | X | X | |
Automated Actions | Perform field updates, email sends, and other actions in the background | X | X | |
Einstein Next Best Action | Display the right recommendations to the right people at the right time | X | X | |
Email Alert | Create and send automated emails | X | X | |
Escalation Actions | Specify automated actions to take for case escalations | X | X | |
Field Update | Update field values based on automation | X | X | |
Flow Builder | Build automations with a point-and-click interface | X | X | |
Flow Extensions | Access stored variables as component inputs in flows | X | ||
Flow Templates Library | Use templates to design industry specific flows | X | X | |
Flow Trigger | Automate complex business processes | X | ||
Invocable Actions | Add Apex functionality to flows | X | X | |
Orchestrator | Create and manage multi-step automations | X | X | |
Outbound Message | Send information from an automated process with receipts and retries | X | ||
Publish Platform Events with Flow | Publish events via user interactions and automations | X | ||
Query Optimizer | Use selectivity and indexes to improve query, report, and list view performance | X | X | |
Salesforce Flow | Create declarative process automations with Flow Builder | X | X | |
Send Notifications with Flows | Send messages over SMS, WhatsApp, or Facebook Messenger | X | X | |
Send Notifications with Processes | Send messages over SMS, WhatsApp, or Facebook Messenger | X | X | |
SOQL FOR UPDATE modifier | Lock records to prevent race conditions and thread safety issues | X | ||
Strategy Builder | Identify recommendations to surface on record pages | X | X | |
Subflows | Reduce flow complexity through reuse | X | ||
Subscribe to Platform Events with Flow | Receive messages published through automations | X | ||
Task Actions | Determine assignment details given to a user by an automation | X |
Resource | Description | Efficiency | Data Integrity | Business Value |
---|---|---|---|---|
Apex Execution Governors & Limits | Learn how the Apex runtime engine enforces limits | X | X | |
Architect's Guide to Record-Triggered Automation | Choose the right tool for record-triggered automations | X | X | |
Batch Management Resources | Create, manage, schedule, and monitor batch jobs | X | X | |
Best Practices for SOQL and SOSL | Improve query performance of applications with large data volumes | X | ||
Design Standards Template | Create design standards for your organization | X | X | X |
Flow Bulkification in Transactions | Design flows to operate against collections | X | X | |
Flow Data Considerations | Learn about schedule-triggered flows for batch data | X | X | |
Flow Debugging | Test and troubleshoot flows | X | ||
How Requests are Processed | Learn how Salesforce processes jobs quickly and minimizes failures | X | X | |
KPI Spreadsheet Template | Determine the business value of a particular metric | X | X | |
Making Callouts to External Systems from Invocable Actions | Call external systems from a Flow using Apex | X | ||
Mixed DML Operations | Know which sObjects can be used together for DML in the same transaction | X | X | |
Order of Execution | Understand the order of events for inserts, updates, and upserts | X | X | |
Query Plan FAQ | Optimize queries involving large data volumes | X | X | |
Schedule-Triggered Flow Considerations | Understand the special behaviors of schedule-triggered flows | X | ||
Transaction Control | Generate a savepoint that specifies the current database state | X | X | |
What Happens When a Flow Fails? | Understand error handling in flows | X | X | |
Workflow Automation Best Practice Guide | Get started with Salesforce automation | X | X | X |
Working with Very Large SOQL Queries | Write more efficient SOQL Queries | X |