Modern Salesforce architectures are increasingly powered by asynchronous processing: not as a convenience, but as a strategic requirement for scale. In recent years, we've seen more and more companies contending with surging data volumes, complex integrations that involve multiple touchpoints, and the rise of autonomous systems running 24/7/365. All of these forces push architects toward designing systems that are asynchronous-first.
Asynchronous processing on Salesforce often means designing around governor limits and complexity. Those limits act as guardrails: architectural constraints that help produce bulk-safe, scalable systems. While platform limits don't directly manage complexity, design patterns can help mitigate risk on that front. Internally, Salesforce often pushes the platform's boundaries to forward-test new features and automate complex business processes. We built a Step-Based Asynchronous Processing Framework for running asynchronous jobs with an arbitrary number of steps. Each step can run, retry, and restart independently, with shared governance controls and full operational visibility through centralized logging. This document outlines the framework's key architectural components: Queueable Apex and Finalizers, Scheduled Flow, Apex Cursors, Invocable Actions, and integrations with Slack. Together, these components provide a modular, scalable, and observable architecture suited for evolving enterprise needs.
Before reviewing the requirements, here are some dos and don’ts for when to use a framework like this. Above all, consider which system is the single source of truth. If your Salesforce org relies minimally on external data but needs to scale from hundreds to millions of records, consider a step‑based async framework.
Do use this framework if:
- Salesforce is the single source of truth for the records being processed
- Your processing needs to scale from hundreds of records to millions
Don’t use this framework if:
- Your org relies heavily on external systems as the source of truth for the data being processed
With those practices in mind, let's review our requirements and start building.
Consider the problem statement:
Given a job that needs to run daily, check if certain records meet pre-established criteria for further processing. If they do, kick off those processing jobs. Processing records might mean pulling data from multiple external systems to perform calculations. Steps in jobs should notify people via Slack that processed records are ready for review. Steps should also escalate notifications to managers and higher-ups in the role hierarchy based on a configurable delay after the first round of notifications.
This problem involves several different steps, some of which can happen independently of each other. There are many ways to split up the work; here's one grouping, shown in an opinionated diagram for the built-out framework:
Now, break down that diagram and start building the pieces.
Scheduled Flow offers several advantages as a scheduling mechanism: it's declarative, its schedule is visible to admins in Setup, and it can be rescheduled without a code deployment.
Another consideration when configuring the Scheduled Flow is environment gating. Before invoking the Apex action, add a Decision element that evaluates the {!$Api.Enterprise_Server_URL_100} variable. This ensures the job runs only in the intended environments, such as UAT and Production. This pattern is important because sandboxes are frequently refreshed or newly created during the SDLC, and without an explicit environment check, a Scheduled Flow could unintentionally execute in environments where the framework is not meant to run. Using the contains operator in the Decision element makes the setup resilient to future sandbox creations or URL changes.
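As a sketch, the Boolean formula resource backing that Decision might look like the following. The first condition matches production (sandbox My Domain URLs contain "--" while production URLs don't), and the "--uat" sandbox name is an assumption to adjust for your own org:

OR(
    NOT(CONTAINS({!$Api.Enterprise_Server_URL_100}, "--")),
    CONTAINS({!$Api.Enterprise_Server_URL_100}, "--uat")
)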
Finally, consider how the framework should capture failures. Always add a fault path when Flow calls any Action; for example, you can wire faults to Nebula Logger’s "Add Log Entry" action. Nebula Logger writes logs to custom objects, so customers should be aware that log data consumes org storage — by default, logs are stored for 14 days within an org, and then cleaned up; this retention period is configurable. Nebula Logger also uses Platform Events to publish logs, so log entries are saved independently from the main data-processing transaction — this ensures failures are captured even if the primary Flow or Apex action rolls back. Customers should evaluate expected log volume and retention requirements when considering the addition of a logging framework.
Here's what the Flow looks like:
With the scheduling requirement now satisfied, let's move on to the first pieces of Apex code.
Define a Step interface:
public interface Step {
    void execute();
    void finalize();
    String getName();
    Boolean shouldRestart();
}
For this article, the step interface is shown as an outer class for clarity. The framework itself is flexible — teams can organize the interface and its implementations using any Apex packaging pattern they prefer, as long as all Step classes reference the same interface.
There are a few things to note about the methods defined within our interface:
- execute, though argument-less at the moment, improves when we pass a State class (or interface) to orchestrate data between steps when order matters.
- getName could return a System.Type value instead of a String. The goal is to give the orchestration layer a way to log step names without exposing other properties.

Here's the first concrete implementation to show how these pieces fit together. With one exception later, we recommend using Queueable Apex to implement asynchronous processing within Apex; Batch Apex is typically unnecessary (and @future methods are discouraged). Queueable Apex starts quickly and, with Apex Cursors, has many advantages over Batch Apex.
Apex Cursors offer a modern alternative to the traditional Batch Apex model. Similar to Batch processing, a Cursor implementation can fetch records in chunks (up to 2,000 records per fetch call). However, Cursors allow multiple fetches within a single transaction, enabling significantly higher throughput for large-volume operations.
When adopting Cursors as part of this framework, teams should be aware of current testing and mockability limitations. Cursor behavior in tests may differ from production behavior, so it’s important to design test strategies that avoid relying on Cursor internals and instead validate orchestration logic at the boundaries. As the platform evolves, these areas will continue to improve, but the core guidance remains: Cursors provide higher performance and reduced orchestration overhead compared to Batch Apex for many use cases.
To define a clear boundary between the system-provided Cursor and your own code, we recommend creating a Cursor‑like representation when implementing the Step interface. Consider this code:
public inherited sharing abstract class CursorStep implements Step {
    private static final Integer MAX_CHUNK_SIZE = 2000;

    protected Cursor cursor;

    private Integer chunkSize = System.Limits.getLimitDmlRows();
    private Integer position = 0;

    protected abstract Cursor getCursor();
    protected abstract void innerExecute(List<SObject> records);
    public abstract String getName();

    public virtual CursorStep withChunkSize(Integer chunkSize) {
        this.chunkSize = chunkSize;
        return this;
    }

    public void execute() {
        this.cursor = this.cursor ?? this.getCursor();
        // Integer division rounds down, which is exactly what we want here.
        Integer fetchesPerTransaction = this.chunkSize / MAX_CHUNK_SIZE;
        this.cursor.setFetchesPerTransaction(fetchesPerTransaction);
        this.innerExecute(this.cursor.fetch(this.position, this.chunkSize));
        this.position += this.chunkSize;
    }

    public virtual void finalize() {
        Logger.info('finished cursor step for ' + this.getName());
    }

    public virtual Boolean shouldRestart() {
        return this.position < this.cursor.getNumRecords();
    }
}
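To make the contract concrete, here's a minimal sketch of a subclass (think of it as the ExampleCursorStepOne referenced later in the orchestration layer); the query and the per-chunk processing are illustrative assumptions:

public class ExampleCursorStepOne extends CursorStep {
    protected override Cursor getCursor() {
        // illustrative query; any selective filter works here
        return new Cursor(
            'SELECT Id, Description FROM Account WHERE LastModifiedDate = TODAY',
            new Map<String, Object>(),
            System.AccessLevel.SYSTEM_MODE
        );
    }

    protected override void innerExecute(List<SObject> records) {
        // per-chunk work goes here; this hypothetical example stamps a field
        for (SObject record : records) {
            record.put('Description', 'Processed on ' + System.now().format());
        }
        update records;
    }

    public override String getName() {
        return ExampleCursorStepOne.class.getName();
    }
}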
Notice the Cursor class. Apex cursors are instances of Database.Cursor, but our Cursor implementation gives us flexibility around the shortcomings of Cursors. Here’s the implementation:
public virtual inherited sharing class Cursor {
    private static final Integer MAX_FETCH_SIZE = 2000;
    private static final Integer MAX_FETCHES_PER_TRANSACTION =
        Limits.getLimitFetchCallsOnApexCursor();
    private Integer fetchesPerTransaction = 1;
    private final Database.Cursor cursor;

    public Cursor(
        String finalQuery,
        Map<String, Object> bindVars,
        System.AccessLevel accessLevel
    ) {
        try {
            this.cursor = Database.getCursorWithBinds(finalQuery, bindVars, accessLevel);
        } catch (FatalCursorException e) {
            Logger.newEntry(
                System.LoggingLevel.WARN,
                'Error creating cursor. This can happen if there' +
                ' are no records returned by the query: ' + e.getMessage()
            );
        }
    }

    protected Cursor() {
    }

    public Cursor setFetchesPerTransaction(Integer possibleFetchesPerTransaction) {
        // Handle accidental round downs from Integer division
        if (possibleFetchesPerTransaction == 0) {
            return this;
        }
        if (possibleFetchesPerTransaction > MAX_FETCHES_PER_TRANSACTION) {
            Logger.newEntry(
                System.LoggingLevel.DEBUG,
                'Fetches per transaction: ' +
                    possibleFetchesPerTransaction +
                    ' exceeded platform max fetches per transaction: ' +
                    MAX_FETCHES_PER_TRANSACTION +
                    ', defaulting to platform max'
            );
            possibleFetchesPerTransaction = MAX_FETCHES_PER_TRANSACTION;
        }
        this.fetchesPerTransaction = possibleFetchesPerTransaction;
        return this;
    }

    public virtual List<SObject> fetch(Integer start, Integer advanceBy) {
        if (this.getNumRecords() == 0) {
            Logger.newEntry(
                System.LoggingLevel.DEBUG,
                'Bypassing fetch call, no records to fetch'
            );
            return new List<SObject>();
        }
        Integer localFetchesMade = 0;
        Integer localStart = start;
        List<SObject> results = new List<SObject>();
        while (localFetchesMade < this.fetchesPerTransaction) {
            Integer advance = this.getAdvanceBy(localStart, advanceBy);
            if (advance == 0) {
                // nothing left to fetch; avoid out-of-range fetch calls
                break;
            }
            results.addAll(
                this.cursor?.fetch(localStart, advance) ?? new List<SObject>()
            );
            // advance by the number of records actually fetched, which is
            // capped per fetch by getAdvanceBy
            localStart += advance;
            localFetchesMade++;
        }
        return results;
    }

    public virtual Integer getNumRecords() {
        return this.cursor?.getNumRecords() ?? 0;
    }

    protected Integer getAdvanceBy(Integer start, Integer advanceBy) {
        Integer possibleFetchSize = Math.min(advanceBy, this.getNumRecords() - start);
        if (possibleFetchSize > MAX_FETCH_SIZE) {
            Logger.newEntry(
                System.LoggingLevel.DEBUG,
                'Fetch size: ' +
                    possibleFetchSize +
                    ' exceeded platform max fetch size of ' +
                    MAX_FETCH_SIZE +
                    ', defaulting to max fetch size'
            );
            possibleFetchSize = MAX_FETCH_SIZE;
        } else if (possibleFetchSize < 0) {
            possibleFetchSize = 0;
        }
        return possibleFetchSize;
    }
}
For the rest of this article, we omit the sharing declarations when referring to Apex classes. In practice, ensure top‑level classes explicitly use with or without sharing to conform to your object model and permissions.
Also note that our Cursor implementation delegates to the platform Database.Cursor, with added benefits discussed next.
First, here are the corresponding tests:
@IsTest
private class CursorTest {
    @IsTest
    static void itCapsAdvanceByArgument() {
        String query = 'SELECT Id FROM User WHERE Id = :bindVar0';
        Map<String, Object> bindVars = new Map<String, Object>{
            'bindVar0' => UserInfo.getUserId()
        };

        Cursor cursor = new Cursor(query, bindVars, System.AccessLevel.SYSTEM_MODE);

        Assert.areEqual(1, cursor.getNumRecords());
        Assert.areEqual(UserInfo.getUserId(), cursor.fetch(0, 1000).get(0).Id);
        Assert.areEqual(1, System.Limits.getApexCursorRows());
    }

    @IsTest
    static void itCapsMaxFetchSize() {
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < 2001; i++) {
            accounts.add(new Account(Name = 'Fetch ' + i));
        }
        insert accounts;
        Integer oneMoreThanMaxFetch = 2001;

        Exception ex;
        try {
            new Cursor(
                    'SELECT Id FROM Account',
                    new Map<String, Object>(),
                    System.AccessLevel.SYSTEM_MODE
                )
                .fetch(0, oneMoreThanMaxFetch);
        } catch (System.InvalidParameterValueException e) {
            ex = e;
        }

        Assert.areEqual(null, ex?.getMessage());
    }

    @IsTest
    static void itFetchesMultipleTimesPerTransaction() {
        insert new List<Account>{ new Account(Name = 'One'), new Account(Name = 'Two') };

        Cursor cursor = new Cursor(
                'SELECT Id FROM Account',
                new Map<String, Object>(),
                System.AccessLevel.SYSTEM_MODE
            )
            .setFetchesPerTransaction(2);
        List<SObject> results = cursor.fetch(0, 1);

        Assert.areEqual(2, results.size());
        results = cursor.fetch(2, 1);
        Assert.areEqual(0, results.size());
        Assert.areEqual(2, cursor.getNumRecords());
    }
}
By making Cursor virtual, concrete CursorStep implementations can operate without a Database.Cursor when they don’t need to iterate a large record set — similar to returning a System.Iterable<T> instead of a Database.QueryLocator in Batch Apex. Here’s an example:
public abstract class CursorLikeImplementation extends CursorStep {
    private final Cursor cursorLike;

    public CursorLikeImplementation(List<SObject> previouslyRetrievedRecords) {
        this.cursorLike = new CursorLike(previouslyRetrievedRecords);
    }

    public override String getName() {
        return CursorLikeImplementation.class.getName();
    }

    protected override Cursor getCursor() {
        return this.cursorLike;
    }

    private class CursorLike extends Cursor {
        private final List<SObject> records;

        public CursorLike(List<SObject> records) {
            super();
            this.records = records;
        }

        public override List<SObject> fetch(Integer position, Integer chunkSize) {
            // clone, to keep the underlying list type
            List<SObject> clonedRecords = this.records.clone();
            clonedRecords.clear();
            // getAdvanceBy returns a count, so add it to the starting position
            Integer endIndex = position + this.getAdvanceBy(position, chunkSize);
            for (Integer i = position; i < endIndex; i++) {
                clonedRecords.add(this.records[i]);
            }
            return clonedRecords;
        }

        public override Integer getNumRecords() {
            return this.records.size();
        }
    }
}
Note that because this class is also abstract, it leaves the concrete implementation of innerExecute to subclasses.
There’s also an alternative to the CursorLike inner subclass. If you know concrete versions of a step like this won’t burn through other governor limits, you can return this.records from CursorLike.fetch and override the parent CursorStep.shouldRestart() to return false. That allows you to iterate over a list bounded only by the Apex heap limit of 12 MB per async transaction.
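Here's a minimal sketch of that alternative:

// inside a CursorLike-style inner class: hand everything back in one pass
public override List<SObject> fetch(Integer position, Integer chunkSize) {
    return this.records;
}

// ...and in the corresponding concrete CursorStep subclass:
public override Boolean shouldRestart() {
    return false;
}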
Our Cursor-based implementation gives us plenty of flexibility when paginating over large quantities of data. The Step interface, meanwhile, gives us the flexibility to describe and encapsulate steps of all sorts.
Consider a Flow-based step:
public virtual class FlowStep implements Step {
    private final Invocable.Action specificFlow;
    private Boolean shouldRestart = false;

    public FlowStep(String specificFlowName, Map<String, Object> inputs) {
        this.specificFlow = Invocable.Action.createCustomAction('flow', specificFlowName);
        for (String inputName : inputs.keySet()) {
            this.specificFlow.setInvocationParameter(inputName, inputs.get(inputName));
        }
    }

    public void execute() {
        List<Invocable.Action.Result> results = this.specificFlow.invoke();
        for (Invocable.Action.Result result : results) {
            if (result.isSuccess()) {
                Map<String, Object> outputParams = result.getOutputParameters();
                Object potentialShouldRestartValue = outputParams.get('shouldRestart');
                // Flow does not enforce Booleans being initialized
                // so a null check is sadly necessary here
                if (potentialShouldRestartValue != null) {
                    this.shouldRestart = this.shouldRestart ||
                        Boolean.valueOf(potentialShouldRestartValue);
                }
            } else {
                List<String> errorMessages = new List<String>();
                for (Invocable.Action.Error error : result.getErrors()) {
                    errorMessages.add(
                        'Error code: ' + error.getCode() +
                        ', error message: ' + error.getMessage()
                    );
                }
                Logger.error(
                    'An error occurred within your auto-launched flow:\n' +
                    String.join(errorMessages, '\n\t')
                );
            }
        }
    }

    public virtual void finalize() {
        Logger.info(this.getName() + ' finished processing');
    }

    public String getName() {
        return FlowStep.class.getName() + ':' + this.specificFlow.getName();
    }

    public Boolean shouldRestart() {
        return this.shouldRestart;
    }
}
Because Flows can’t return output parameters that conform to an Apex-defined type, we check for a shouldRestart output parameter before using it.
Some steps might be feature‑flagged. You can implement logic to decide which steps to include, or use a no‑op step for a disabled feature. The Null Object pattern is a common way to reduce complexity within the orchestration layer:
@SuppressWarnings('PMD.EmptyStatementBlock')
public class NoOpStep implements Step {
    // The null object pattern is commonly implemented
    // as a singleton to reduce memory consumption
    public static NoOpStep SELF {
        get {
            SELF = SELF ?? new NoOpStep();
            return SELF;
        }
        private set;
    }

    public void execute() {
    }

    public void finalize() {
    }

    public String getName() {
        return NoOpStep.class.getName();
    }

    public Boolean shouldRestart() {
        return false;
    }
}
We now have quite a few building blocks to work with. Let's look at the orchestration layer responsible for iterating over steps.
The processor is an inflection point in the architecture. We must decide who defines which steps get initialized, and where. Options include:
- Keeping step construction in code, burying the step-to-class mapping in a factory.
- Custom Metadata Type (CMDT) records with a field that names each ApexClass, which loosely couples class name spelling into your business process setup. You can reduce admin risk by making the field a picklist and validating that the type exists (via Type.forName() or by querying ApexClass), but because CMDT records don't support triggers, validation happens at run time. This route is testable, but admins can still create CMDT records directly in production, so proceed carefully.

There's a famous quote from Clean Code about how to handle this particular piece of complexity:
The solution to this problem is to bury the switch statement [for making objects] in the basement of an abstract factory, and never let anyone see it.
With that in mind, and because our current number of steps is well-defined and unlikely to grow too large, it's okay for the step processor to also be the factory for steps. This can use an enum to drive the switch statement:
public enum StepType {
    TYPE_ONE,
    TYPE_TWO,
    TYPE_THREE,
    TYPE_FOUR
    // etc ...
}
And then for our StepProcessor:
public class StepProcessor implements System.Queueable, System.Finalizer,
    Database.AllowsCallouts {
    private final List<Step> steps = new List<Step>();
    private Step currentStep;

    public StepProcessor setSteps(List<StepType> stepTypes) {
        for (StepType type : stepTypes) {
            switch on type {
                when TYPE_ONE {
                    this.addTypeOneSteps();
                }
                when TYPE_TWO {
                    this.addTypeTwoSteps();
                }
                // ... etc
            }
        }
        this.cleanSteps();
        return this;
    }

    public void execute(System.QueueableContext context) {
        this.currentStep = this.currentStep ?? this.steps.remove(0);
        if (context != null) {
            System.attachFinalizer(this);
            Logger.setAsyncContext(context);
        }
        Logger.info('Executing step ' + this.currentStep.getName());
        try {
            this.currentStep.execute();
        } catch (Exception e) {
            Logger.exception('Unexpected exception', e);
        }
        Logger.info('Finished executing step ' + this.currentStep.getName());
        Logger.saveLog();
    }

    public void execute(System.FinalizerContext context) {
        Logger.info('Executing finalizer for step ' + this.currentStep.getName());
        Logger.setAsyncContext(context);
        switch on context?.getResult() {
            when UNHANDLED_EXCEPTION {
                // see the note below about this logging paradigm
                Logger.warn(
                    'Failed to run on step ' + this.currentStep,
                    context?.getException()
                );
            }
            when else {
                this.currentStep.finalize();
                if (this.currentStep.shouldRestart()) {
                    this.kickoff();
                } else if (this.steps.isEmpty() == false) {
                    this.currentStep = this.steps.remove(0);
                    this.kickoff();
                } else {
                    Logger.info('Finished executing steps');
                }
            }
        }
        Logger.info(
            'Finished executing finalizer for step ' +
            this.currentStep.getName()
        );
        Logger.saveLog();
    }

    public String kickoff() {
        // enqueue when there's a current step to (re)run or steps remaining
        return this.currentStep == null && this.steps.isEmpty()
            ? null
            : System.enqueueJob(this);
    }

    private void cleanSteps() {
        for (Integer reverseIndex = this.steps.size() - 1;
            reverseIndex >= 0; reverseIndex--) {
            if (this.steps[reverseIndex] instanceof NoOpStep) {
                this.steps.remove(reverseIndex);
            }
        }
    }

    private void addTypeOneSteps() {
        this.steps.addAll(
            new List<Step>{
                new ExampleCursorStepOne(),
                new ExampleCursorStepTwo()
            }
        );
    }

    private void addTypeTwoSteps() {
        this.steps.addAll(
            new List<Step>{
                new FlowStep(
                    'ExampleInvocableName',
                    new Map<String, Object>{ 'exampleParameter' => true }
                ),
                new ExampleCursorStepThree()
            }
        );
    }
}
The factory methods shown, like addTypeOneSteps(), can delegate concerns like feature flagging; cleanSteps() performs a one‑time check on the gathered steps to ensure that there aren’t any “empty” steps before going truly async. That might look like this:
private Step getStepOrDefault(String customPermissionName, Step defaultStep) {
    if (System.FeatureManagement.checkPermission(customPermissionName)) {
        return defaultStep;
    }
    return NoOpStep.SELF;
}
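Putting those pieces together, a factory method can hedge each step behind its own custom permission (the permission name here is hypothetical); cleanSteps() then prunes whatever comes back as NoOpStep.SELF before the processor goes async:

private void addTypeOneSteps() {
    this.steps.add(this.getStepOrDefault('Type_One_Enabled', new ExampleCursorStepOne()));
    this.steps.add(this.getStepOrDefault('Type_One_Enabled', new ExampleCursorStepTwo()));
}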
We haven’t discussed error handling since mentioning Nebula Logger in the Scheduled Flow section. That’s because System.Finalizer lets us blanket‑cover logging for all error conditions without adding specific error handling in each step. Each Step focuses on running, while we log and rethrow any unhappy paths so they surface in unit tests. This supports safe iteration and production‑level alerting (using the Slack Logger plug-in for Nebula for all WARN and ERROR logs).
One note about error logging: passing the step instance into log messages assumes a level of trust in what becomes visible in logs. The default toString() for Apex classes includes the class's instance-level properties in the message. That can be desirable, or it can leak sensitive information. While logging and security are not the focus here, note that for some systems, adherence to an interface like Step can also involve forcing an override for toString().
public interface Step {
    void execute();
    void finalize();
    String getName();
    Boolean shouldRestart();
    String toString();
}
Such a method puts the onus on each object creator to decide what is permissible to print, which may be desirable.
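In the Step implementations shown earlier, a conservative override might look like this (illustrative only):

public override String toString() {
    // expose only the step's name; never dump field values into logs
    return this.getName();
}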
On logging levels: at the StepProcessor level, we use INFO, the highest level below WARN and ERROR. As you get more granular within the application, logging levels should decrease accordingly. Individual steps might use DEBUG for high-level information, with FINE, FINER, and FINEST reserved for increasingly detailed output. Logging is as much an art as a science, but following these principles helps keep logs consistent and useful.
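As an illustration of that progression (these log calls are invented for the example):

// orchestration layer: the big picture, at INFO
Logger.info('Executing step ' + this.currentStep.getName());
// inside an individual step: high-level operational detail
Logger.debug('Processing ' + records.size() + ' records');
// increasingly granular diagnostics, typically disabled by default
Logger.finest(JSON.serializePretty(records));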
Before moving on, let's briefly reflect on the decision to have our step processor host the logic for which steps get used. In a large codebase, consider making StepProcessor virtual or abstract, and have subclasses identify specific steps to establish a proper separation of concerns.
The scheduler eventually invokes Apex. With the rest of the setup complete, the Invocable Apex action can decide which steps should run and pass the List<StepType> to the processor:
public class DailyJobExecutor {
    @InvocableMethod(label='Execute Daily Job')
    public static void executeJob() {
        Logger.info('Executing daily job');
        List<StepType> correspondingTypes = new List<StepType>();
        // based on [business logic], determine which step types
        // should be included for any daily invocation
        if (correspondingTypes.isEmpty() == false) {
            try {
                new StepProcessor().setSteps(correspondingTypes).kickoff();
            } catch (Exception ex) {
                Logger.exception('Error starting job', ex);
            }
        }
        Logger.saveLog();
    }
}
This is a simple part of the equation — using records, data, or logic to determine which step types to run. The Invocable Action is simple because we encapsulated complexity elsewhere. We’ve also protected against unexpected exceptions and made each piece easy to test in isolation.
The Apex Slack SDK is beyond this article’s scope, but one potential snag from the requirements bears revisiting: notifying people upward in the role hierarchy based on a configurable delay. On paper, this is simple, and you might (correctly) consider System.enqueueJob(this) in the StepProcessor. With System.AsyncOptions, our initial inclination was to use the enqueueJob overload to satisfy this requirement.
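For reference, here's a minimal sketch of that overload:

System.AsyncOptions options = new System.AsyncOptions();
options.MinimumQueueableDelayInMinutes = 10;
// delays this queueable's start by (at least) ten minutes
System.enqueueJob(new StepProcessor(), options);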
For now, however, the maximum delay via System.AsyncOptions.MinimumQueueableDelayInMinutes is 10 minutes. Because the requirement is 120 minutes, a few options remain. A naive approach might look like this:
public class ExampleDelayedNotifier implements Step {
    private final List<Slack.ChatPostMessageRequest> notifications = new List<Slack.ChatPostMessageRequest>();
    private final Slack.BotClient botClient = Slack.App
        .getAppByKey('some-slack-app-key')
        .getBotClientForTeam('slack team id');
    // account for the initial delay,
    // so 120 - 10 = 110
    private Integer delayInMinutes = 110;

    public void execute() {
        if (this.delayInMinutes > 0) {
            return;
        }
        Integer maximumAllowedCallouts = 100;
        while (this.notifications.isEmpty() == false && maximumAllowedCallouts > 0) {
            this.botClient.chatPostMessage(this.notifications.remove(0));
            maximumAllowedCallouts--;
        }
    }

    public void finalize() {
        this.delayInMinutes -= 10;
    }

    public String getName() {
        return ExampleDelayedNotifier.class.getName();
    }

    public Boolean shouldRestart() {
        return this.delayInMinutes > 0 || this.notifications.isEmpty() == false;
    }
}
In practice, the delay would be passed into this class because the delay is configuration‑driven.
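That might be as simple as a constructor like this sketch, which assumes the initial 10-minute AsyncOptions delay shown earlier:

public ExampleDelayedNotifier(Integer configuredDelayInMinutes) {
    // subtract the initial enqueue delay before the first execute() runs
    this.delayInMinutes = configuredDelayInMinutes - 10;
}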
We don’t recommend this approach unless you are certain there will only ever be one delayed notification type. It burns through 11 extra async jobs before starting (or more, if the delay increases). That cost might be fine for one job — not for many. You’d also need to add a method to the Step interface so each step can tell the processor how long to wait before restarting, which adds noise.
That leaves us with two interesting possibilities:
- Polling: use the step‑based architecture to process records based on a “Start After” timestamp, and decide which steps to use based on a picklist or multi‑select picklist mapping back to the StepType enum values shown previously.
- System.scheduleBatch().

Consider the Batch Apex example. While we generally recommend Queueable Apex for flexibility and control, this is one case where Batch Apex still reigns supreme:
public class DelayedNotifier implements Database.Batchable<Object> {
    private final StepProcessor processor = new StepProcessor();

    public Iterable<Object> start(Database.BatchableContext bc) {
        return new List<Object>();
    }

    @SuppressWarnings('PMD.EmptyStatementBlock')
    public void execute(Database.BatchableContext bc, List<Object> scope) {
        // we don't need to actually do anything in execute,
        // we just need to start up the processor in finish
    }

    public void finish(Database.BatchableContext bc) {
        try {
            // you can imagine Notifier as an elided,
            // simpler version of the naive implementation
            // we showed above, now only focused on sending messages;
            // this also assumes a setSteps overload that accepts
            // pre-built Step instances
            this.processor.setSteps(new List<Step>{ new Notifier() }).kickoff();
        } catch (Exception ex) {
            Logger.exception('Unexpected error', ex);
        } finally {
            Logger.saveLog();
        }
    }
}
And then, in the StepProcessor, imagine that the previously shown addTypeOneSteps() method is updated with this delayed step:
public class StepProcessor implements System.Queueable, System.Finalizer,
    Database.AllowsCallouts {
    // .... unchanged top of class elided

    private void addTypeOneSteps() {
        this.steps.addAll(
            new List<Step>{
                new ExampleCursorStepOne(),
                new ExampleCursorStepTwo(),
                new DelayedNotifierStep()
            }
        );
    }

    // ...

    private class DelayedNotifierStep implements Step {
        private final DelayedNotifier delayedNotifier = new DelayedNotifier();
        // again - in practice this value would also be passed in
        private final Integer delayInMinutes = 120;

        public void execute() {
            System.scheduleBatch(
                this.delayedNotifier,
                'Delayed notifier: ' + System.now().getTime(),
                this.delayInMinutes
            );
        }

        public void finalize() {
            Logger.debug('Nothing to finalize, batch scheduled');
        }

        public String getName() {
            return DelayedNotifierStep.class.getName();
        }

        public Boolean shouldRestart() {
            return false;
        }
    }
}
While we wouldn’t typically recommend this much hoop‑jumping, this step delay becomes another reusable building block. Until longer delays are allowed in Queueable Apex, it also represents the easiest way to produce this effect (without a polling mechanism, as discussed).
We’ve used object‑oriented design to fulfill the requirements and created a system that will scale while balancing the long‑term cost of building and maintenance. While step declaration and instantiation may ultimately outgrow their place in StepProcessor, there’s little additional technical debt here. With FlowStep, admins and developers can decide together when no‑code or pro‑code solutions make the most sense.
By using the System.Finalizer interface within Apex’s Queueable framework, together with Nebula Logger, we’ve built a robust, testable system that alerts us to unforeseen failures even if future steps lack explicit logging. For us, this system is happily crunching numbers and reducing cost and complexity. It has also given us valuable insights into Apex Cursors’ behavior under real workloads, helping us refine our approach while improving the feature itself.
By decomposing complex, high-volume workloads into modular execution steps, the Step-Based Asynchronous Processing Framework transforms platform constraints into engineered advantages, enabling predictable performance, observability, and governance at enterprise scale. Steps can be set up by both admins and developers, and in either case, step authors can safely focus on observing the basic platform governor limits (like DML rows and query rows retrieved) without having to worry about how to scale each step.
To operationalize and adopt this pattern across enterprise implementations, architects should:
James Simone is a Principal Software Engineer at Salesforce with more than a decade of experience working on the platform. He was a Salesforce customer, and a product owner, before moving into development, and has been writing technical deep dives about Salesforce since 2019 within The Joys Of Apex. He has previously published articles on the Salesforce Developer Blog and the Salesforce Engineering Blog.