Modern Salesforce architectures are increasingly powered by asynchronous processing: not as a convenience, but as a strategic requirement for scale. In recent years, more and more companies have been contending with surging data volumes, complex integrations spanning multiple touchpoints, and the rise of autonomous systems running 24/7/365. All of these pressures push architects toward designing systems that are asynchronous-first.
Asynchronous processing on Salesforce often means designing around governor limits and complexity. Those limits act as guardrails and architectural constraints that help produce bulk-safe, scalable systems. While no platform limits directly serve to manage complexity, design patterns can help mitigate risk on that front. Internally, Salesforce often pushes the platform’s boundaries to forward‑test new features and automate complex business processes. We built a Step-Based Asynchronous Processing Framework for running asynchronous jobs with an arbitrary number of steps. Each step can run, retry, and restart independently with shared governance controls and full operational visibility through centralized logging. This document outlines its key architectural components: Queueable Apex and Finalizers, Scheduled Flow, Apex Cursors, Invocable Actions, and integrations with Slack. Together, these components provide a modular, scalable, and observable architecture suited for evolving enterprise needs.
Key Takeaways
Modern Salesforce architectures should embrace an asynchronous-first approach to achieve scale, resiliency, and operational transparency.
Breaking complex work into independently executable steps enables predictable performance, safer retries, checkpointing, rollback, and modular evolution without re-engineering core workflows.
The framework provides a scalable alternative to monolithic and aging batch jobs, chained async calls, and deeply nested flows, and is built for high-volume workloads that must scale horizontally inside Salesforce without off-platform orchestration.
Deterministic and observable execution ensures progress tracking, SLA monitoring, failure diagnostics, and audit-level transparency through centralized logging and governance.
Designed for enterprise-grade rigor, including unified governance, compliance, and distributed state control across long-running business processes.
Platform Best Practices
Before reviewing the requirements, here are some dos and don’ts for when to use a framework like this. Above all, consider which system is the single source of truth. If your Salesforce org relies minimally on external data but needs to scale from hundreds to millions of records, consider a step‑based async framework.
Do use this framework if:
Most (or all) of the information to act upon already exists in your CRM.
The upfront or ongoing cost of maintaining an Extract Transform Load (ETL) job to harmonize external data is too high.
You need to defer processing a large number of Salesforce records on a set schedule.
You can break down the processing into discrete steps. For example, you can create a hierarchical or tree‑based set of records, particularly if data volume fans out down the hierarchy or tree.
Don’t use this framework if:
Creating or updating records requires immediate recalculation.
Integration is challenging because external systems host primary data for record updates. (Consider pushing updated data to Salesforce with the Bulk API.)
With those practices in mind, let's review our requirements and start building.
Breaking Down the Requirements
Consider the problem statement:
Given a job that needs to run daily, check if certain records meet pre-established criteria for further processing. If they do, kick off those processing jobs. Processing records might mean pulling data from multiple external systems to perform calculations. Steps in jobs should notify people via Slack that processed records are ready for review. Steps should also escalate notifications to managers and higher-ups in the role hierarchy based on a configurable delay after the first round of notifications.
This problem involves several different steps, some of which can happen independently of each other. There are many ways to split up the work. Here’s one grouping:
The scheduler.
The step interface and concrete implementations that process records (regardless of the type of processing).
There’s some complexity hidden in the phrase “configurable delay.” We'll review this complexity later on in this article.
Here’s an opinionated diagram for the built‑out framework:
Now, break down that diagram and start building the pieces.
Scheduling with Scheduled Flow
Scheduled Flow offers several advantages as a scheduling mechanism:
Scheduled Flows can be packaged and deployed as metadata. This isn't true for jobs scheduled via Apex (or via the Scheduled Jobs page).
The Wait element is critical for frameworks that require callouts. By using it in Flow, callouts aren’t necessary in the Invocable portion of the framework.
Scheduling granularity meets the requirements: the minimum interval for Scheduled Flows is daily. If you need a higher frequency (for example, hourly), reconsider Scheduled Flow for this requirement.
Another consideration when configuring the Scheduled Flow is environment gating. Before invoking the Apex action, add a Decision element that evaluates the {!$Api.Enterprise_Server_URL_100} variable. This ensures the job runs only in the intended environments, such as UAT and Production. This pattern is important because sandboxes are frequently refreshed or newly created during the SDLC, and without an explicit environment check, a Scheduled Flow could unintentionally execute in environments where the framework is not meant to run. Using the contains operator in the Decision element makes the setup resilient to future sandbox creations or URL changes.
Finally, consider how the framework should capture failures. Always add a fault path when Flow calls any Action; for example, you can wire faults to Nebula Logger’s "Add Log Entry" action. Nebula Logger writes logs to custom objects, so customers should be aware that log data consumes org storage — by default, logs are stored for 14 days within an org, and then cleaned up; this retention period is configurable. Nebula Logger also uses Platform Events to publish logs, so log entries are saved independently from the main data-processing transaction — this ensures failures are captured even if the primary Flow or Apex action rolls back. Customers should evaluate expected log volume and retention requirements when considering the addition of a logging framework.
Here's what the Flow looks like:
Let's move on to the first pieces of Apex code with the scheduling requirement now satisfied.
For this article, the step interface is shown as an outer class for clarity. The framework itself is flexible — teams can organize the interface and its implementations using any Apex packaging pattern they prefer, as long as all Step classes reference the same interface.
There are a few things to note about the methods defined within our interface:
execute, though argument‑less at the moment, improves when we pass a State class (or interface) to orchestrate data between steps when order matters.
getName could return a System.Type value instead of a String. The goal is to give the orchestration layer a way to log step names without exposing other properties.
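Based on the methods referenced throughout this article, a minimal version of the interface might look like this sketch (your version may add State-passing or other members):

```apex
// A minimal sketch of the Step interface described in this article.
// Method names match the framework discussion; adapt the shape as needed.
public interface Step {
    // Performs this step's unit of work
    void execute();
    // Runs in the Finalizer after execute(), whether it succeeded or not
    void finalize();
    // Gives the orchestration layer a way to log step names
    String getName();
    // Signals whether the processor should re-enqueue this step
    Boolean shouldRestart();
}
```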
Here’s the first concrete implementation to show how these pieces fit together. With one exception later, we recommend using Queueable Apex to implement asynchronous processing within Apex; Batch Apex is typically unnecessary (and @future methods are discouraged). Queueable Apex starts quickly and, with Apex Cursors, has many advantages over Batch Apex.
An Apex Cursor-Like Implementation
Apex Cursors offer a modern alternative to the traditional Batch Apex model. Similar to Batch processing, a Cursor implementation can fetch records in chunks (up to 2,000 per batch). However, Cursors allow multiple fetches within a single transaction, enabling significantly higher throughput for large-volume operations.
When adopting Cursors as part of this framework, teams should be aware of current testing and mockability limitations. Cursor behavior in tests may differ from production behavior, so it’s important to design test strategies that avoid relying on Cursor internals and instead validate orchestration logic at the boundaries. As the platform evolves, these areas will continue to improve, but the core guidance remains: Cursors provide higher performance and reduced orchestration overhead compared to Batch Apex for many use cases.
To define a clear boundary between the system-provided Cursor and your own code, we recommend creating a Cursor‑like representation when implementing the Step interface. Consider this code:
public inherited sharing abstract class CursorStep implements Step {
    private static final Integer MAX_CHUNK_SIZE = 2000;

    protected Cursor cursor;

    private Integer chunkSize = System.Limits.getLimitDMLRows();
    private Integer position = 0;

    protected abstract Cursor getCursor();
    protected abstract void innerExecute(List<SObject> records);

    public abstract String getName();

    public virtual CursorStep withChunkSize(Integer chunkSize) {
        this.chunkSize = chunkSize;
        return this;
    }

    public void execute() {
        this.cursor = this.cursor ?? this.getCursor();
        this.cursor.setFetchesPerTransaction(this.getFetchesPerTransaction());
        List<SObject> records = new List<SObject>();
        if (this.shouldAdvance()) {
            records = this.cursor.fetch(this.position, this.chunkSize);
            this.position += this.chunkSize;
        }
        this.innerExecute(records);
    }

    public virtual void finalize() {
        Logger.info('finished cursor step for ' + this.getName());
    }

    public virtual Boolean shouldRestart() {
        return this.position < this.cursor.getNumRecords();
    }

    protected virtual Integer getFetchesPerTransaction() {
        if (this.chunkSize < MAX_CHUNK_SIZE) {
            return this.chunkSize;
        }
        // Integer division rounds down,
        // which is perfect for our use case
        return this.chunkSize / MAX_CHUNK_SIZE;
    }

    protected virtual Boolean shouldAdvance() {
        return true;
    }
}
Notice the Cursor class. Apex cursors are instances of Database.Cursor, but our Cursor implementation gives us flexibility around the shortcomings of Cursors. Here’s the implementation:
public virtual without sharing class Cursor {
    private static final Integer MAX_FETCHES_PER_TRANSACTION = Limits.getLimitFetchCallsOnApexCursor();

    @TestVisible
    private static Integer maxRecordsPerFetchCall = 2000;

    private Integer cursorNumRecords;
    private Integer fetchesPerTransaction = MAX_FETCHES_PER_TRANSACTION;
    private final Database.Cursor cursor;

    public Cursor(
        String finalQuery,
        Map<String, Object> bindVars,
        System.AccessLevel accessLevel
    ) {
        try {
            this.cursor = Database.getCursorWithBinds(finalQuery, bindVars, accessLevel);
        } catch (FatalCursorException e) {
            Logger.newEntry(
                System.LoggingLevel.WARN,
                'Error creating cursor. This can happen if there' +
                ' are no records returned by the query: ' + e.getMessage()
            );
        }
    }

    public Cursor setFetchesPerTransaction(Integer possibleFetchesPerTransaction) {
        // Handle accidental round-downs from Integer division
        if (possibleFetchesPerTransaction == 0) {
            return this;
        }
        if (possibleFetchesPerTransaction > MAX_FETCHES_PER_TRANSACTION) {
            Logger.newEntry(
                System.LoggingLevel.DEBUG,
                'Fetches per transaction: ' + possibleFetchesPerTransaction +
                ' exceeded platform max fetches per transaction: ' + MAX_FETCHES_PER_TRANSACTION +
                ', defaulting to platform max'
            );
            possibleFetchesPerTransaction = MAX_FETCHES_PER_TRANSACTION;
        }
        this.fetchesPerTransaction = possibleFetchesPerTransaction;
        return this;
    }

    @SuppressWarnings('PMD.EmptyStatementBlock')
    protected Cursor() {
    }

    public virtual List<SObject> fetch(Integer start, Integer advanceBy) {
        if (this.getNumRecords() == 0) {
            Logger.newEntry(
                System.LoggingLevel.DEBUG,
                'Bypassing fetch call, no records to fetch'
            );
            return new List<SObject>();
        }
        Integer localStart = start;
        List<SObject> results = new List<SObject>();
        while (
            Limits.getFetchCallsOnApexCursor() < this.fetchesPerTransaction &&
            results.size() < this.getNumRecords() &&
            localStart < start + advanceBy
        ) {
            Integer actualAdvanceBy = this.getAdvanceBy(localStart, advanceBy);
            results.addAll(this.cursor?.fetch(localStart, actualAdvanceBy) ?? new List<SObject>());
            localStart += actualAdvanceBy;
        }
        return results;
    }

    public virtual Integer getNumRecords() {
        this.cursorNumRecords = this.cursorNumRecords ?? this.cursor?.getNumRecords() ?? 0;
        return this.cursorNumRecords;
    }

    protected Integer getAdvanceBy(Integer start, Integer advanceBy) {
        Integer possibleFetchSize = Math.min(advanceBy, this.getNumRecords() - start);
        if (possibleFetchSize > maxRecordsPerFetchCall) {
            Logger.newEntry(
                System.LoggingLevel.DEBUG,
                'Fetch size: ' + possibleFetchSize +
                ' exceeded platform max fetch size of ' + maxRecordsPerFetchCall +
                ', defaulting to max fetch size'
            );
            possibleFetchSize = maxRecordsPerFetchCall;
        } else if (possibleFetchSize < 0) {
            possibleFetchSize = 0;
        }
        return possibleFetchSize;
    }
}
For the rest of this article, we omit the sharing declarations when referring to Apex classes. In practice, ensure top‑level classes explicitly use with or without sharing to conform to your object model and permissions.
Also note that our Cursor implementation delegates to the platform Database.Cursor, with added benefits discussed next.
First, here are the corresponding tests:
@IsTest
private class CursorTest {
    @IsTest
    static void itCapsAdvanceByArgument() {
        String accountName = 'helloWorld!';
        insert new Account(Name = accountName);
        String query = 'SELECT Name FROM Account WHERE Name = :bindVar0';
        Map<String, Object> bindVars = new Map<String, Object>{ 'bindVar0' => accountName };

        Cursor instance = new Cursor(query, bindVars, System.AccessLevel.SYSTEM_MODE);

        Assert.areEqual(1, instance.getNumRecords());
        Assert.areEqual(accountName, instance.fetch(0, 1000).get(0).get('Name'));
        Assert.areEqual(1, System.Limits.getApexCursorRows());
    }

    @IsTest
    static void itCapsMaxRecordsPerFetchCall() {
        Cursor.maxRecordsPerFetchCall = 20;
        Integer oneMoreThanMaxFetch = Cursor.maxRecordsPerFetchCall + 1;

        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < oneMoreThanMaxFetch; i++) {
            accounts.add(new Account(Name = 'Fetch ' + i));
        }
        insert accounts;

        Exception ex;
        List<SObject> results;
        Cursor instance = new Cursor(
            'SELECT Id FROM Account',
            new Map<String, Object>(),
            System.AccessLevel.SYSTEM_MODE
        );
        try {
            results = instance.fetch(0, oneMoreThanMaxFetch);
        } catch (System.InvalidParameterValueException e) {
            ex = e;
        }

        Assert.areEqual(null, ex?.getMessage());
        Assert.areEqual(2, Limits.getFetchCallsOnApexCursor());
        Assert.areEqual(oneMoreThanMaxFetch, results.size());
    }

    @IsTest
    static void itFetchesMultipleTimesPerTransactionWhenMoreThanMaxFetch() {
        Cursor.maxRecordsPerFetchCall = 20;
        List<Account> accounts = new List<Account>();
        Set<String> expectedFetchNames = new Set<String>();
        for (Integer i = 0; i < Cursor.maxRecordsPerFetchCall + 1; i++) {
            String accountName = 'Fetch' + i;
            expectedFetchNames.add(accountName);
            accounts.add(new Account(Name = accountName));
        }
        insert accounts;

        Integer oneMoreThanMaxFetch = Cursor.maxRecordsPerFetchCall + 1;
        Cursor instance = new Cursor(
            'SELECT Name FROM Account',
            new Map<String, Object>(),
            System.AccessLevel.SYSTEM_MODE
        );
        List<SObject> results = instance.setFetchesPerTransaction(2).fetch(0, oneMoreThanMaxFetch);

        Assert.areEqual(Cursor.maxRecordsPerFetchCall + 1, results.size());
        Assert.areEqual(2, Limits.getFetchCallsOnApexCursor());
        Set<String> actuallyFetchedNames = new Set<String>();
        for (Account account : (List<Account>) results) {
            actuallyFetchedNames.add(account.Name);
        }
        Assert.areEqual(expectedFetchNames, actuallyFetchedNames);
    }

    @IsTest
    static void itFetchesMultipleTimesPerTransaction() {
        Cursor.maxRecordsPerFetchCall = 1;
        insert new List<Account>{ new Account(Name = 'One'), new Account(Name = 'Two') };

        Cursor instance = new Cursor(
            'SELECT Id FROM Account',
            new Map<String, Object>(),
            System.AccessLevel.SYSTEM_MODE
        )
            .setFetchesPerTransaction(2);
        List<SObject> results = instance.fetch(0, 2);

        Assert.areEqual(2, instance.getNumRecords());
        Assert.areEqual(2, results.size());
        results = instance.fetch(2, 1);
        Assert.areEqual(0, results.size());
    }

    @IsTest
    static void fetchesCorrectAmountOfRecords() {
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < 10; i++) {
            accounts.add(new Account(Name = 'Fetch ' + i));
        }
        insert accounts;

        Cursor instance = new Cursor(
            'SELECT Id FROM Account',
            new Map<String, Object>(),
            System.AccessLevel.SYSTEM_MODE
        )
            .setFetchesPerTransaction(10);
        List<SObject> results = instance.fetch(0, 2);

        Assert.areEqual(2, results.size(), '' + results);
        Assert.areEqual(1, Limits.getFetchCallsOnApexCursor());
    }

    @IsTest
    static void doesNotExceedPlatformMaxFetch() {
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < 101; i++) {
            accounts.add(new Account(Name = 'Fetch ' + i));
        }
        insert accounts;

        Test.startTest();
        Cursor instance = new Cursor(
            'SELECT Id FROM Account',
            new Map<String, Object>(),
            System.AccessLevel.SYSTEM_MODE
        )
            .setFetchesPerTransaction(100);
        Integer counter = 0;
        List<SObject> results;
        while (counter <= 100) {
            results = instance.fetch(counter, counter + 1);
            counter++;
        }
        Test.stopTest();

        Assert.areEqual(101, counter);
        Assert.areEqual(0, results.size());
    }
}
By making Cursor virtual, concrete CursorStep implementations can operate without a Database.Cursor when they don’t need to iterate a large record set — similar to returning a System.Iterable<T> instead of a Database.QueryLocator in Batch Apex. Here’s an example:
public abstract class CursorLikeImplementation extends CursorStep {
    private final Cursor cursorLike;

    public CursorLikeImplementation(List<SObject> previouslyRetrievedRecords) {
        this.cursorLike = new CursorLike(previouslyRetrievedRecords);
    }

    public override String getName() {
        return CursorLikeImplementation.class.getName();
    }

    public override Cursor getCursor() {
        return this.cursorLike;
    }

    private class CursorLike extends Cursor {
        private final List<SObject> records;

        public CursorLike(List<SObject> records) {
            super();
            this.records = records;
        }

        public override List<SObject> fetch(Integer position, Integer chunkSize) {
            // clone, to keep the underlying list type
            List<SObject> clonedRecords = this.records.clone();
            clonedRecords.clear();
            // getAdvanceBy returns a size, so offset it by the starting position
            Integer endIndex = position + this.getAdvanceBy(position, chunkSize);
            for (Integer i = position; i < endIndex; i++) {
                clonedRecords.add(this.records[i]);
            }
            return clonedRecords;
        }

        public override Integer getNumRecords() {
            return this.records.size();
        }
    }
}
Note that because this class is also abstract, it leaves the concrete implementation of innerExecute to subclasses.
There’s also an alternative to the CursorLike inner subclass. If you know concrete versions of a step like this won’t burn through other governor limits, you can return this.records from CursorLike.fetch and override the parent CursorStep.shouldRestart() to return false. That allows you to iterate over a list bounded only by the Apex heap limit of 12 MB per async transaction.
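As a sketch, that alternative might look like the following (SinglePassImplementation and EagerCursorLike are hypothetical names; only use this when the step's total record set comfortably fits within the heap and other limits):

```apex
// Hypothetical single-pass variant of the CursorLike idea: the whole
// list is returned in one fetch and shouldRestart() is overridden so
// the step never re-enqueues itself.
public abstract class SinglePassImplementation extends CursorStep {
    private final Cursor eagerCursor;

    public SinglePassImplementation(List<SObject> previouslyRetrievedRecords) {
        this.eagerCursor = new EagerCursorLike(previouslyRetrievedRecords);
    }

    protected override Cursor getCursor() {
        return this.eagerCursor;
    }

    public override Boolean shouldRestart() {
        // everything is processed in a single execute() call
        return false;
    }

    private class EagerCursorLike extends Cursor {
        private final List<SObject> records;

        public EagerCursorLike(List<SObject> records) {
            super();
            this.records = records;
        }

        public override List<SObject> fetch(Integer position, Integer chunkSize) {
            // bounded only by the 12 MB async heap limit
            return this.records;
        }

        public override Integer getNumRecords() {
            return this.records.size();
        }
    }
}
```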
Other Possible Step‑Based Implementations
Our Cursor-based implementation gives us plenty of flexibility when paginating over large quantities of data. The Step interface, meanwhile, gives us the flexibility to describe and encapsulate steps of all sorts.
Consider a Flow-based step:
public virtual class FlowStep implements Step {
    private final Invocable.Action specificFlow;

    private Boolean shouldRestart = false;

    public FlowStep(String specificFlowName, Map<String, Object> inputs) {
        this.specificFlow = Invocable.Action.createCustomAction('flow', specificFlowName);
        this.specificFlow.setInvocations(new List<Map<String, Object>>{ inputs });
    }

    public void execute() {
        List<Invocable.Action.Result> results = this.specificFlow.invoke();
        for (Invocable.Action.Result result : results) {
            if (result.isSuccess()) {
                Map<String, Object> outputParams = result.getOutputParameters();
                Object potentialShouldRestartValue = outputParams.get('shouldRestart');
                // Flow does not enforce Booleans being initialized,
                // so a null check is sadly necessary here
                if (potentialShouldRestartValue != null) {
                    this.shouldRestart = this.shouldRestart ||
                        Boolean.valueOf(potentialShouldRestartValue);
                }
            } else {
                List<String> errorMessages = new List<String>();
                for (Invocable.Action.Error error : result.getErrors()) {
                    errorMessages.add(
                        'Error code: ' + error.getCode() +
                        ', error message: ' + error.getMessage()
                    );
                }
                Logger.error(
                    'An error occurred within your auto-launched flow:\n' +
                    String.join(errorMessages, '\n\t')
                );
            }
        }
    }

    public virtual void finalize() {
        Logger.info(this.getName() + ' finished processing');
    }

    public String getName() {
        return FlowStep.class.getName() + ':' + this.specificFlow.getName();
    }

    public Boolean shouldRestart() {
        return this.shouldRestart;
    }
}
Because Flows can’t return output parameters that conform to an Apex-defined type, we check for a shouldRestart output parameter before using it.
Some steps might be feature‑flagged. You can implement logic to decide which steps to include, or use a no‑op step for a disabled feature. The Null Object pattern is a common way to reduce complexity within the orchestration layer:
@SuppressWarnings('PMD.EmptyStatementBlock')
public class NoOpStep implements Step {
    // The null object pattern is commonly implemented
    // as a singleton to reduce memory consumption
    public static NoOpStep SELF {
        get {
            SELF = SELF ?? new NoOpStep();
            return SELF;
        }
        private set;
    }

    public void execute() {
    }

    public void finalize() {
    }

    public String getName() {
        return NoOpStep.class.getName();
    }

    public Boolean shouldRestart() {
        return false;
    }
}
We now have quite a few building blocks to work with. Let's look at the orchestration layer responsible for iterating over steps.
Creating a Step Processor
The processor is an inflection point in the architecture. We must decide who defines which steps get initialized, and where. Options include:
Have the processor define which steps map to business logic. This option is simple, but it scales poorly for readability.
Define the mapping with Custom Metadata (CMDT). Metadata Relationship fields don’t support ApexClass, so class names end up as loosely coupled strings in your business process setup. You can reduce admin risk by making the field a picklist and validating that the type exists (via Type.forName() or by querying ApexClass), but because CMDT records don’t support triggers, that validation happens at run time. This route is testable, but admins can still create CMDT records directly in production, so proceed carefully.
Define the mapping with records. Non‑admins can configure steps, but deployments get harder and environments can drift. Proceed with caution.
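For the CMDT route, the run-time validation mentioned above can be a small resolver method. This is a sketch; StepResolver is a hypothetical class name, and the configured class is assumed to have a zero-argument constructor (a requirement of Type.newInstance()):

```apex
// Hypothetical resolver that turns a configured class name into a Step
// at run time, falling back to the Null Object pattern on bad config.
public class StepResolver {
    public static Step resolve(String stepClassName) {
        // Type.forName returns null when no such class exists
        Type stepType = Type.forName(stepClassName);
        // newInstance() requires a public zero-argument constructor
        Object candidate = stepType?.newInstance();
        if (candidate instanceof Step) {
            return (Step) candidate;
        }
        Logger.error('Invalid step class configured: ' + stepClassName);
        // fall back to a no-op step instead of throwing
        return NoOpStep.SELF;
    }
}
```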
There's a famous quote from Clean Code about how to handle this particular piece of complexity:
The solution to this problem is to bury the switch statement [for making objects] in the basement of an abstract factory, and never let anyone see it.
With that in mind, and because our current number of steps is well-defined and unlikely to grow too large, it's okay for the step processor to also be the factory for steps. This can use an enum to drive the switch statement:
public class StepProcessor implements System.Queueable, System.Finalizer, Database.AllowsCallouts {
    private final List<Step> steps = new List<Step>();

    private Step currentStep;

    public StepProcessor setSteps(List<StepType> stepTypes) {
        for (StepType type : stepTypes) {
            switch on type {
                when TYPE_ONE {
                    this.addTypeOneSteps();
                }
                when TYPE_TWO {
                    this.addTypeTwoSteps();
                }
                // ... etc
            }
        }
        this.cleanSteps();
        return this;
    }

    public void execute(System.QueueableContext context) {
        this.currentStep = this.currentStep ?? this.steps.remove(0);
        if (context != null) {
            System.attachFinalizer(this);
            Logger.setAsyncContext(context);
        }
        Logger.info('Executing step ' + this.currentStep.getName());
        try {
            this.currentStep.execute();
        } catch (Exception e) {
            Logger.exception('Unexpected exception', e);
        }
        Logger.info('Finished executing step ' + this.currentStep.getName());
        Logger.saveLog();
    }

    public void execute(System.FinalizerContext context) {
        Logger.info('Executing finalizer for step ' + this.currentStep.getName());
        Logger.setAsyncContext(context);
        switch on context?.getResult() {
            when UNHANDLED_EXCEPTION {
                // see the note below about this logging paradigm
                Logger.warn(
                    'Failed to run on step ' + this.currentStep,
                    context?.getException()
                );
            }
            when else {
                this.currentStep.finalize();
                if (this.currentStep.shouldRestart()) {
                    this.kickoff();
                } else if (this.steps.isEmpty() == false) {
                    this.currentStep = this.steps.remove(0);
                    this.kickoff();
                } else {
                    Logger.info('Finished executing steps');
                }
            }
        }
        Logger.info(
            'Finished executing finalizer for step ' +
            this.currentStep.getName()
        );
        Logger.saveLog();
    }

    public String kickoff() {
        // enqueue when there's a current step to (re)run or steps remaining;
        // checking only the backlog would silently drop the final step
        if (this.currentStep == null && this.steps.isEmpty()) {
            return null;
        }
        return System.enqueueJob(this);
    }

    private void cleanSteps() {
        for (Integer reverseIndex = this.steps.size() - 1; reverseIndex >= 0; reverseIndex--) {
            if (this.steps[reverseIndex] instanceof NoOpStep) {
                this.steps.remove(reverseIndex);
            }
        }
    }

    private void addTypeOneSteps() {
        this.steps.addAll(
            new List<Step>{
                new ExampleCursorStepOne(),
                new ExampleCursorStepTwo()
            }
        );
    }

    private void addTypeTwoSteps() {
        this.steps.addAll(
            new List<Step>{
                new FlowStep(
                    'ExampleInvocableName',
                    new Map<String, Object>{ 'exampleParameter' => true }
                ),
                new ExampleCursorStepThree()
            }
        );
    }
}
The factory methods shown, like addTypeOneSteps(), can delegate concerns like feature flagging; cleanSteps() performs a one‑time check on the gathered steps to ensure that there aren’t any “empty” steps before going truly async.
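A feature-flagged factory method, for example, might look like this sketch (FeatureFlags.isEnabled() is a hypothetical stand-in for whatever flagging mechanism your org uses, such as Custom Permissions or CMDT):

```apex
// Sketch of a feature-flagged factory method inside the processor.
// FeatureFlags.isEnabled() is a hypothetical stand-in for your org's
// flagging mechanism.
private void addTypeOneSteps() {
    Step stepOne = NoOpStep.SELF;
    if (FeatureFlags.isEnabled('ExampleCursorStepOne')) {
        stepOne = new ExampleCursorStepOne();
    }
    this.steps.add(stepOne);
    this.steps.add(new ExampleCursorStepTwo());
    // cleanSteps() strips any NoOpStep before the processor goes async
}
```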
We haven’t discussed error handling since mentioning Nebula Logger in the Scheduled Flow section. That’s because System.Finalizer lets us blanket‑cover logging for all error conditions without adding specific error handling in each step. Each Step focuses on running, while we log and rethrow any unhappy paths so they surface in unit tests. This supports safe iteration and production‑level alerting (using the Slack Logger plug-in for Nebula for all WARN and ERROR logs).
One note about error logging: passing the step instance into log messages assumes a level of trust in what becomes visible in logs. The default toString() for Apex classes includes all static and instance‑level properties in the message. That can be desirable — or it can leak sensitive information. While logging and security are not the focus here, note that for some systems, adherence to an interface like Step can also involve forcing an override for toString().
Such a method puts the onus on each object creator to decide what is permissible to print, which may be desirable.
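A sketch of what that override might look like in practice (ExampleNotifierStep and its field are hypothetical):

```apex
// Sketch: an implementation that controls its own log representation
// so sensitive state never reaches the logs.
public class ExampleNotifierStep implements Step {
    private String botToken; // sensitive, must never be printed

    public override String toString() {
        // expose only what is safe to log
        return ExampleNotifierStep.class.getName() + ' (credentials redacted)';
    }

    // ... Step methods elided for brevity
}
```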
On logging levels: at the StepProcessor level, we use INFO, the highest non‑error level. As you get more granular within the application, logging levels should decrease accordingly. Individual steps might use DEBUG for high‑level information, with FINE, FINER, and FINEST reserved for increasingly detailed output. Logging is as much an art as a science, but following these principles helps keep logs consistent and useful.
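Put together, the layering might look like this (assuming Nebula Logger's level-specific methods; step, records, and record are illustrative variables):

```apex
// Sketch: log levels decreasing as the code gets more granular
Logger.info('Executing step ' + step.getName());            // orchestration layer
Logger.debug('Evaluating ' + records.size() + ' records');  // inside a step
Logger.finest('Record snapshot: ' + record);                // deep diagnostics only
```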
Handling Additional Complexity Within the Step Processor
Before moving on, let's briefly reflect on the decision to have our step processor host the logic for which steps get used. In a large codebase, consider making StepProcessor virtual or abstract, and have subclasses identify specific steps to establish a proper separation of concerns.
The Apex Invocable Layer
The scheduler eventually invokes Apex. With the rest of the setup complete, the Invocable Apex section can decide which steps should run and pass the List<StepType> to the processor:
public class DailyJobExecutor {
    @InvocableMethod(label='Execute Daily Job')
    public static void executeJob() {
        Logger.info('Executing daily job');

        List<StepType> correspondingTypes = new List<StepType>();
        // based on [business logic], determine which step types
        // should be included for any daily invocation

        if (correspondingTypes.isEmpty() == false) {
            try {
                new StepProcessor().setSteps(correspondingTypes).kickoff();
            } catch (Exception ex) {
                Logger.exception('Error starting job', ex);
            }
        }
        Logger.saveLog();
    }
}
This is a simple part of the equation — using records, data, or logic to determine which step types to run. The Invocable Action is simple because we encapsulated complexity elsewhere. We’ve also protected against unexpected exceptions and made each piece easy to test in isolation.
Handling Delays Prior to Calling Slack
The Apex Slack SDK is beyond this article’s scope, but one potential snag from the requirements bears revisiting: notifying people upward in the role hierarchy after a configurable delay. On paper, this is simple; you might (correctly) consider System.enqueueJob(this) in the StepProcessor, and our initial inclination was to use the enqueueJob overload that accepts System.AsyncOptions to satisfy this requirement.
For now, however, the maximum delay via System.AsyncOptions.MinimumQueueableDelayInMinutes is 10 minutes. Because the requirement is 120 minutes, a few options remain. A naive approach might look like this:
public class ExampleDelayedNotifier implements Step {
    private final List<Slack.ChatPostMessageRequest> notifications = new List<Slack.ChatPostMessageRequest>();
    private final Slack.BotClient botClient = Slack.App
        .getAppByKey('some-slack-app-key')
        .getBotClientForTeam('slack team id');

    // account for the initial delay,
    // so 120 - 10 = 110
    private Integer delayInMinutes = 110;

    public void execute() {
        if (this.delayInMinutes > 0) {
            return;
        }

        Integer maximumAllowedCallouts = 100;
        while (this.notifications.isEmpty() == false && maximumAllowedCallouts > 0) {
            this.botClient.chatPostMessage(this.notifications.remove(0));
            maximumAllowedCallouts--;
        }
    }

    public void finalize() {
        this.delayInMinutes -= 10;
    }

    public String getName() {
        return ExampleDelayedNotifier.class.getName();
    }

    public Boolean shouldRestart() {
        return this.delayInMinutes > 0 || this.notifications.isEmpty() == false;
    }
}
In practice, the delay would be passed into this class because the delay is configuration‑driven.
We don’t recommend this approach unless you are certain there will only ever be one delayed notification type. It burns through 11 extra async jobs before starting (or more, if the delay increases). That cost might be fine for one job — not for many. You’d also need to add a method to the Step interface so each step can tell the processor how long to wait before restarting, which adds noise.
That leaves us with two interesting possibilities:
You can slot the delayed step into your existing job framework if you already have a polling job scheduled at an appropriate interval. You should also be OK with the specified delay hitting up to 15 minutes later (15 minutes is the minimum refresh interval for an Apex-scheduled CRON expression). This roughly matches the Invocable Apex example; the scheduling is performed via Apex instead. In other words, you could reuse the same Step‑based architecture to process records based on a “Start After” timestamp and decide which steps to use based on a picklist or multi‑select picklist mapping back to the StepType enum values shown previously.
Alternatively, if you're comfortable defining an extra outer Apex class (unlike Queueable Apex, which supports inner classes, Batch Apex classes must be outer classes), you can fall back to Batch Apex using System.scheduleBatch().
Consider the Batch Apex example. While we generally recommend Queueable Apex for flexibility and control, this is one case where Batch Apex still reigns supreme:
```apex
public class DelayedNotifier implements Database.Batchable<Object> {
  private final StepProcessor processor = new StepProcessor();

  public Iterable<Object> start(Database.BatchableContext bc) {
    return new List<Object>();
  }

  @SuppressWarnings('PMD.EmptyStatementBlock')
  public void execute(Database.BatchableContext bc, List<Object> scope) {
    // we don't need to actually do anything in execute,
    // we just need to start up the processor in finish
  }

  public void finish(Database.BatchableContext bc) {
    try {
      // you can imagine Notifier as an elided,
      // simpler version of the naive implementation
      // we showed above, now only focused on sending messages
      this.processor.setSteps(new List<Step>{ new Notifier() }).kickoff();
    } catch (Exception ex) {
      Logger.exception('Unexpected error', ex);
    } finally {
      Logger.saveLog();
    }
  }
}
```
And then, in the StepProcessor, imagine that the previously shown addTypeOneSteps() method is updated with this delayed step:
```apex
public class StepProcessor implements System.Queueable, System.Finalizer,
  Database.AllowsCallouts {
  // .... unchanged top of class elided

  private void addTypeOneSteps() {
    this.steps.addAll(
      new List<Step>{
        new ExampleCursorStepOne(),
        new ExampleCursorStepTwo(),
        new DelayedNotifierStep()
      }
    );
  }

  // ...

  private class DelayedNotifierStep implements Step {
    private final DelayedNotifier delayedNotifier = new DelayedNotifier();
    // again — in practice this value would also be passed in
    private final Integer delayInMinutes = 120;

    public void execute() {
      System.scheduleBatch(
        this.delayedNotifier,
        'Delayed notifier: ' + System.now().getTime(),
        this.delayInMinutes
      );
    }

    public void finalize() {
      Logger.debug('Nothing to finalize, batch scheduled');
    }

    public String getName() {
      return DelayedNotifierStep.class.getName();
    }

    public Boolean shouldRestart() {
      return false;
    }
  }
}
```
While we wouldn’t typically recommend this much hoop‑jumping, this step delay becomes another reusable building block. Until longer delays are allowed in Queueable Apex, it also represents the easiest way to produce this effect (without a polling mechanism, as discussed).
Conclusion
We’ve used object‑oriented design to fulfill the requirements and created a system that will scale while balancing the long‑term cost of building and maintenance. While step declaration and instantiation may ultimately outgrow their place in StepProcessor, there’s little additional technical debt here. With FlowStep, admins and developers can decide together when no‑code or pro‑code solutions make the most sense.
By using the System.Finalizer interface within Apex’s Queueable framework, together with Nebula Logger, we’ve built a robust, testable system that alerts us to unforeseen failures even if future steps lack explicit logging. For us, this system is happily crunching numbers and reducing cost and complexity. It has also given us valuable insights into Apex Cursors’ behavior under real workloads, helping us refine our approach while improving the feature itself.
By decomposing complex, high-volume workloads into modular execution steps, the Step-Based Asynchronous Processing Framework transforms platform constraints into engineered advantages, enabling predictable performance, observability, and governance at enterprise scale. Steps can be set up by both admins and developers, and in either case, step authors can safely focus on observing the basic platform governor limits (like DML rows and query rows retrieved) without having to worry about how to scale each step.
Path Forward
To operationalize and adopt this pattern across enterprise implementations, architects should:
Evaluate existing automations to identify areas where async orchestration can help improve performance and enhance observability.
Break down large processes into discrete, independently executable steps with clear processing goals and discrete authoring points (like Flow or Apex).
Define and group step types to accelerate step reuse and standardization across business units.
Pilot the approach with new processes or existing automations. You might be surprised how many edge cases you catch for free within steps, care of your built-in logging and observability!
About the Author
James Simone is a Principal Software Engineer at Salesforce, and has more than a decade's worth of experience working on the platform. He was a Salesforce customer — and product owner — before moving into development, and has been writing technical deep dives about Salesforce since 2019 within The Joys Of Apex. He's previously published articles on the Salesforce Developer blog, and the Salesforce Engineering blog as well.