
Content last updated December 2020. Roadmap corresponds to Spring ‘21 projections.
Our forward-looking statement applies to roadmap projections.

Guide Overview

You want to customize Salesforce. It may be something simple, like adding a field, or something much more complex, like a custom application with dozens of objects and thousands of lines of code.

How do you deploy into production? The answer, of course, is “it depends.” The method you choose for moving a change into production will depend on many factors, including the urgency of the change, its complexity, the size of your team, and the metadata involved.

This guide explores seven different deployment options, ranging from simple but not-so-scalable techniques to significantly more complex yet highly scalable approaches:

  1. Manual Changes in Production
  2. Change Sets
  3. Metadata API: Direct Deployments
  4. Metadata API: Deployment with Source Control + Continuous Integration
  5. Org Dependent Packages
  6. Unlocked Packages
  7. Managed Packages

For each option, we discuss the limitations of the approach, why you might choose it, why you might not, and (where appropriate) how to mitigate some of its potential downsides.

This guide also covers some hurdles you may encounter as you begin using the more complex techniques, deploying changes other than metadata, the various Salesforce environments involved in moving changes to production, an example deployment that combines multiple approaches, and third-party tooling you can use.

You can read this guide straight through, or just jump to the parts that you need. If you do decide to skip around, we’ve summarized the key points that you’ll want to take away with you no matter what:

This guide focuses on moving changes between environments with the goal of eventually deploying to a production environment you control. In other words, if you’re a customer or developing changes on behalf of a customer, then this guide is for you. If you are an ISV or AppExchange partner who is building assets for multicustomer or AppExchange distribution, then you should follow the documentation for second-generation managed packages.

Deployment Techniques, from Simplest to Most Scalable

First, let’s get familiar with the deployment options and the technology behind them.

| | Manual | Change Sets | Metadata: Direct | Metadata: Single Source | Packages: Org-Dependent¹ | Packages: Unlocked | Packages: Managed |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Deletion | Manual | Not Available | Manual, Scripted | Manual, Scripted | Built-In | Built-In | Built-In |
| Apex Changes | Not Available | Available | Available | Available | Available | Available | Available |
| Reversion | Manual | Not Available | Manual | Manual | Built-In | Built-In | Built-In |
| Scratch Org Support | Supported | Not Available | Supported | Supported | Supported | Supported² | Supported² |
| Sandbox Support | Supported | Required | Supported | Supported | Supported | Supported | Supported |
| Dependencies | Any | Any | Any | Any | Any | Packaged³ | Packaged³ |
| Repeatability | Low | Low | Medium | High | High | High | High |
| Delay | None | Medium | None | None | Medium | Long⁴ | Long⁴ |
1 Org-dependent packages are Beta in Winter '21 release.
2 Even if you don’t use a scratch org to create your package, it must be able to deploy to a scratch org or package creation will fail.
3 Every dependency must either be in the package or in another package.
4 You can skip validation on package version creation to reduce package build time via a flag on the command.

These options exist on a spectrum, with manual deployments at the simplest end and managed packages at the most complex.

As you move from left to right on the spectrum, you trade simplicity for repeatability: deletion and reversion become built in, deployments become more scriptable, and delays grow longer.

For the purposes of this document, scalability means support for larger teams, more frequent deployments, and more complex, interdependent changes.

Manual Changes in Production

| | Deletion | Apex Changes | Reversion | Scratch Org Support | Sandbox | Dependencies | Repeatability | Delay |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Manual Changes in Production | Manual | Not Available | Manual | -- | -- | Any | Low | None |

Image showing human actor with direct change in production org.

The simplest option for deploying to production is to make a manual change, directly in production.

Yes, some of you just had a panic attack. But hear us out.

Limitations of Manual Changes

You cannot modify Apex code in production. You must write Apex somewhere else and migrate it to production. If your project involves only Apex, then move along, this is not the option you’re looking for.

Why You Might Choose Manual Changes

  1. For some metadata types, it’s the only supported option.
  2. For some metadata types, it’s a reasonable, low-risk enabler of business agility; for example:
    1. Reports are a metadata type. You might even be deploying folders of reports that are used on a Lightning page or really important reports used on your operational dashboard. But you may also encourage users to modify reports in their own private folders without going through any sort of deployment process.
    2. Many orgs treat ListViews similarly.
  3. For small changes, it’s very fast:
    1. You may need to make an emergency change to turn off a validation rule, even if you deployed it via a more complex technique to the right on the spectrum.
    2. You need to give someone permission to do something temporarily, or figure out what’s wrong with the permissions. In this case, an admin can create a new permission set, assign it to a user, get the work handled, and then remove the assignment and delete the permission set.
  4. It’s convenient if you’re doing a first-time setup of Salesforce before go-live, when the exposure is lower and you have a serious testing plan in place before users get access to the system.
  5. You enjoy similar high-risk activities like free-climbing and fielding unexpected calls from executives.

Why You Wouldn't Choose Manual Changes

  1. Many types of changes can be extremely dangerous. That simple validation rule where you accidentally used > instead of < can bring your company to a halt. Even worse, your automation might accidentally email customers and create a PR mess.
  2. Production changes can be difficult to test. The data in your org is your real data, so an automated testing process can do a lot of damage that is hard to reverse. Even if you have a backup, you’re going to have a lot of work to clean it up. And even if everything goes well, you’re still finding and cleaning up records like “TestOpportunity15”.
  3. It’s difficult to scale for a team. People are working on top of each other. If you’ve got someone working on Service and someone else on Sales, but both of them are changing the Contact object, there’s potential for some conflicts.
  4. It’s harder to reverse or abandon changes. Imagine you’ve updated your application, and let your users try it out. They give you some feedback and you realize it’s completely the wrong approach. Before you can create your new version, you have to get rid of the old version, backing out each change you made in the correct order.
  5. It’s difficult to deploy large, complex changes while people are on the system. If your change includes adding three new fields, plus layout changes, plus validation rules, plus changes to several existing Flows based on the new fields, plus permissions, then there will be some point in time where your change is only partially deployed. Users might be entering data without validation, or without the proper processes running. You either have to lock them out or work unusual hours. You’re now working under a time crunch and trying not to make a mistake.

Mitigation for Risks You May Face with Manual Changes

If you have changes that can only be done manually, then you can verify them by first completing the steps to make the changes in a sandbox, testing the results, and then repeating the same series of steps in production.

For example, Prediction Builder currently (Winter ‘21) doesn’t support any sort of deployments other than manual. Prediction Builder lets you configure a model and look at its accuracy. You can tweak the model until you’re happy with it.

But eventually, you’ll have Einstein start writing predictions to fields, and you may start using those fields for process automation. Before that starts, you might want to test how it works in a sandbox. Once you’ve done that, you do have to create and enable the model again in production (for example, with production on one monitor, sandbox on the other, carefully making them match). This approach is not foolproof, but at least when you move to production, you’ll sleep more soundly.

Change Sets

| | Deletion | Apex Changes | Reversion | Scratch Org Support | Sandbox | Dependencies | Repeatability | Delay |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Change Sets | Manual | Supported | Not Available | Supported | Supported | Any | Low | Medium |

Image showing human actor making changes in a sandbox, moved by change set to production.

Change sets are a point-and-click way to move changes.

They let you choose which items to move, with a checkbox for each field, object, layout, class, and so on.

The org admins can decide which environments can send and receive change sets from other environments via deployment connections. For example, consider the following scenario:

  1. Three orgs exist: Dev (a developer sandbox), QA (a full sandbox), and Production
  2. Dev sends changes to QA and receives changes from Production
  3. QA can send changes to Production
  4. Production can send changes to Dev and can only receive changes from QA

Once you make your selections with all those checkboxes, you “upload” them to an allowed destination org. Some time later, the change set appears in Inbound Change Sets in the destination org and you can deploy or validate the change set.

Limitations of Change Sets

  1. Change sets can only work with sandboxes, and all the sandboxes have to be created from the production org.
  2. You can click the View/Add Dependencies button to find dependencies, but it may not catch everything. (For example, you may have an Apex Class that doesn’t have a formal dependency on its test class, but you generally do have to have test coverage so a certain change set will fail to deploy if you don’t include the tests.)
  3. You’re limited to 10,000 files (items represented by a checkbox).
  4. Sometimes, a sandbox is on a different release than the destination org. When that happens, some metadata types can’t be deployed because they’ve changed between releases and you have to either spin up a new sandbox on the correct version or wait until the orgs are on the same version.
  5. Change sets can’t remove any metadata or configuration.

Why You Might Choose Change Sets

  1. It has existed for a long time, so it’s well known to almost everyone.
  2. It is admin-friendly (it requires no local tools, code, or terminal commands).
  3. Unlike changes performed manually, all the changes hit production simultaneously.
  4. You can validate that a deployment would deploy while everyone is using the system on Friday, but actually deploy the change during off-hours.
  5. If your change fails to deploy or validate, you can clone the change set and add what you left out.
  6. If you’ve made one of those “emergency” changes in production, change sets can also send the change to sandboxes.
  7. Deployment connections provide good control over how changes move through environments.
  8. Permissions control who can create and deploy change sets.

Why You Wouldn't Choose Change Sets

  1. As you’re building, you have to track your changes. People who have been doing this for a long time have elaborate spreadsheet templates so that when it’s time to upload a change set, they know everything that needs to go in it.
  2. If you move a change set from Dev to QA and want to move it from QA to Production, you need to create another one — with all those checkboxes again.
  3. Not every metadata type is supported. See API Support for details.
  4. The delay between when you upload a change set and when it arrives and becomes deployable in production is indeterminate. Sometimes it’s a minute, sometimes it’s an hour. And there’s no real way to know when it’s going to arrive. Once it arrives, sometimes deploying will result in a message that it’s not ready yet.

Moving from Change Sets to Metadata Deployments

If your team has been using change sets, but is considering moving to source-based deployments, there is an option to retrieve change sets via the Salesforce CLI. This enables you to create a change set from your sandbox. Then a CLI user or a script can retrieve the change set by name using the CLI retrieve command and extract the source (give the command the change set name as the package name). In this case, you’re not really using change sets for deployment, but are using them to extract source to enable a deployment technique further to the right.
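For example, assuming a change set named “My Change Set” exists in an org you’ve authorized under the alias MySandbox (both names are hypothetical), the retrieval might be sketched like this:

```shell
# Retrieve the change set contents as a zip, using the change set name as the package name
sfdx force:mdapi:retrieve --retrievetargetdir ./retrieved --packagenames "My Change Set" \
  --targetusername MySandbox

# Unzip, then convert the metadata to source format for a source-based workflow
unzip ./retrieved/unpackaged.zip -d ./retrieved
sfdx force:mdapi:convert --rootdir "./retrieved/My Change Set" --outputdir force-app
```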

Metadata API: Direct Deployments

| | Deletion | Apex Changes | Reversion | Scratch Org Support | Sandbox | Dependencies | Repeatability | Delay |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Metadata: Direct Deployments | Manual, Scriptable | Supported | Manual | Supported | Supported | Any | Medium | None |

Human actor making changes in sandbox or scratch org and deploying via API.

The Salesforce Metadata API lets you migrate metadata. Few people use it directly — if your process has been around for a long time, you may have used Ant scripts. More recently, the Salesforce CLI, Salesforce Extensions for Visual Studio Code, and many other tools use the Metadata API to retrieve and deploy metadata.

Scenarios here include a developer retrieving metadata from a sandbox, making changes on their machine, and deploying it back to production.

You can also perform deployments that look like, “take the metadata described in this package.xml (the traditional Salesforce manifest file) and move it from QA to Production.”

In metadata-based deployments, you’re adding or modifying only what’s specified. The deployment doesn’t delete or change any files that are omitted.
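A minimal package.xml might look like the following (the object and field names are hypothetical; the API version corresponds to Winter ’21):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>Invoice__c</members>
        <name>CustomObject</name>
    </types>
    <types>
        <members>Invoice__c.Status__c</members>
        <name>CustomField</name>
    </types>
    <version>50.0</version>
</Package>
```

Anything not listed in the manifest is left untouched in the target org.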

Limitations of Direct Metadata Deploys

  1. Similar to change sets, only 10,000 files are allowed per transaction.
  2. The total unzipped size of the files cannot exceed 400MB.
  3. The Metadata API doesn’t support all metadata types (see API Support).

Why You Might Choose Direct Metadata Deploys

  1. The deployment is repeatable. Each deployment of the same set of files should result in the same state. It’s also easy to repeat the same deployment to multiple targets (e.g. QA, then production).
  2. You can specify deletions. The Metadata API includes support for destructive changes — you can specify metadata that should not exist in the target and remove components as the new metadata is deployed.
  3. You can deploy settings. Imagine you need to deploy something that you haven’t activated in production (for example, a chatbot or path). Within the Metadata API there are also Settings types that represent actions you would normally do manually in the Setup UI.
  4. Scriptability. Later, we’ll discuss items that need to deploy with your metadata. You can create repeatable deployment scripts to make sure these items are in the proper state before and/or after the deployment.
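As a sketch of destructive changes: a destructiveChanges.xml file included with a Metadata API deployment lists components to delete (the field name here is hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>Invoice__c.Legacy_Status__c</members>
        <name>CustomField</name>
    </types>
</Package>
```

Placed alongside your package.xml in a Metadata API deployment (for example, via `sfdx force:mdapi:deploy`), this removes the listed components from the target org.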

Why You Wouldn't Choose Direct Metadata Deploys

  1. It’s hard to trace. Metadata deployed from someone’s local filesystem looks just like metadata modified in production. And it’s difficult to trace back what was part of the deployment if you need to reverse something.
  2. It’s hard to control. If multiple developers can deploy changes to production, they may deploy over each other’s versions. Also, your policy may say they should test before deploying, but they may not.

Mitigations for Risks You Might Face with Direct Metadata Deploys

Companies tend to have a single person who has access to make these deployments (e.g. a “Release Manager”). While this helps ensure control, it can become a bottleneck.

Metadata API: Deployment with Source Control and Continuous Integration

| | Deletion | Apex Changes | Reversion | Scratch Org Support | Sandbox | Dependencies | Repeatability | Delay |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Metadata: With Source Control | Manual, Scriptable | Supported | Manual | Supported | Supported | Any | Medium | None |

Human actor using CLI tools to move changes to sandboxes or scratch orgs and source.

Most developers are comfortable working with a source control system (also referred to as a version control system or VCS), like git. Such systems have all sorts of useful features (like branches, pull requests, diffs, file history, and more). Source control is a solved problem with high-quality tooling outside of Salesforce, and we recommend using it.

Back to deployments — the idea here is to use source repository branches as the source for a deployment. Developers are prohibited from deploying source directly beyond their dev environment; only by merging into a branch does their code deploy anywhere else.

Typically, these branch merges are deployed by a system (e.g. CI) rather than a person. The steps are:

  1. Get the source from the repo
  2. Authenticate to the org
  3. Deploy
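In a CI job, those three steps might be sketched as shell commands (the repository URL, key file, client ID, and usernames are all placeholders; the JWT flow is a common choice for CI because it needs no interactive login):

```shell
# 1. Get the source from the repo
git clone https://github.com/example/my-salesforce-project.git
cd my-salesforce-project

# 2. Authenticate to the org using the JWT bearer flow
sfdx force:auth:jwt:grant --clientid "$SF_CLIENT_ID" --jwtkeyfile server.key \
  --username deploy-user@example.com --instanceurl https://login.salesforce.com \
  --setalias target

# 3. Deploy, running local tests as part of the deployment
sfdx force:source:deploy --sourcepath force-app --targetusername target \
  --testlevel RunLocalTests --wait 30
```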

Because the Metadata API and CLI are available, you’re free to use the tools you prefer. For example, GitHub acts as a source control system but also handles things like code review, and GitHub Actions support automation based on events (like merging a branch).

When you begin using this deployment option, you may find yourself facing some challenging questions. What do my repos look like? Am I always deploying the whole thing, or is there some subset of metadata? Do I have one repo with one directory, multiple package directories in a single repo, or multiple repos? Multiple projects within a larger monorepo? How do I express and control dependencies between them? This challenge is not exclusive to this method; you’ll be addressing the same questions when you begin moving to packages.

There’s some preliminary discussion about modularization on the Developer blog, but it’s a few years old. In particular, it doesn’t account for some newer options like org-dependent packages that help with this problem, and the CLI is now much better at working with multiple source directories within a single project.

Limitations of Metadata Deploys with Source Control

See Limitations of Direct Metadata Deploys.

Why You Might Choose Metadata Deploys with Source Control

  1. Source control is a solved problem. Many companies have built high-quality tooling for source control and CI.
  2. Developers know it. This is the default operating model for developers outside of Salesforce.
  3. Source control supports automation. Deploy from GitHub Actions or have your CI system subscribed to webhooks to take those actions. Besides just deployments, this allows testing automation, code analysis, and linter/styling to check pull requests.
  4. Scales better for larger teams. You can use multiple repos to break up the codebase. Developers are merging in source control and not into the org; they know when they are conflicting with changes they don’t have. Imagine you’ve got several internal teams plus various contractors and multiple SIs working on Salesforce-related projects. The reduced finger-pointing alone is a good reason to keep them from directly changing the org.
  5. Branches help multiple projects happen simultaneously. Even on a small team, you may have a simultaneous mix of small features, emergencies, release checks, bug fixes, experiments, and large projects. Keeping them organized helps your team work faster.
  6. Feature branches allow for partial deployments. Imagine two features merged into the final QA environment. Users are good with the first but don’t like the second. You can merge the first feature into main and deploy it to production while the second gets some more attention.

Why You Wouldn't Choose Metadata Deploys with Source Control

  1. Your team’s customizations largely consist of metadata types that don’t deploy well.
  2. Source control is unknown territory for your team. Perhaps you have mostly Salesforce Admins or “Adminelopers” (who write some code but don’t come from a traditional developer background), or you have seasoned Salesforce developers who’ve worked exclusively on the platform. This type of process may be difficult for them to adapt to quickly. See People and Skills for mitigations.
  3. Your company’s releases are usually large and infrequent.
  4. You’re not able to invest the time setting up this kind of tooling. Sometimes people see the value here but can’t have a team off of “primary tasks” during implementation of the new process.

Interlude on Packages

The next three deployment techniques dive into Second-Generation Packages.

Background on Packaging

Salesforce has used packages since the launch of AppExchange, and most admins are familiar with installing managed packages into their orgs. First-generation managed packages were designed primarily for this ISV use case. Managed packages are very restrictive; once you release a package, there are many changes that are no longer allowed because the developer can’t know what a customer org may have built dependencies on. There were also some customers using unmanaged packages, which were impossible to upgrade.

We recommend not using first-generation managed packages, also called Classic Packaging, at all. If you’re not an ISV currently using them, you have no reason to start now. Second-generation packages are where it’s at: managed packages primarily for ISVs and unlocked packages primarily for customers.

Second-generation packages are created from source, and not from the contents of an org. The naming of the various kinds of packages can be confusing and hasn’t been consistent over time. In this guide, unless we’re speaking of first-generation managed packages, we’ll drop the “generation” label and refer to packages as unlocked or managed.

Note: This follows the naming you'll see used in the Metadata Coverage Report.

Package Basics

The idea of a package is to have a subset of metadata that is versioned.


Packages also offer some deployment controls. When you create a package version, the version begins in Beta status. You can install the package in scratch orgs and sandboxes, but not in a production org.

To deploy in production, you must first promote a package to released status. By controlling that phase, packages enable easy distribution for testing but a formal, controlled release.
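With the CLI, promoting a validated version might be sketched like this (the package alias and Dev Hub alias are hypothetical):

```shell
# Promote a package version from Beta to Released so it can be installed in production
sfdx force:package:version:promote --package "MyPackage@1.2.0-1" \
  --targetdevhubusername DevHub
```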

How code becomes a package

  1. Specify a folder of source code that you want to become the package.
  2. Create a package using the CLI. This package is owned by a Dev Hub.
  3. Create a version of that package. This is a snapshot of source code at a point in time. The packaging process and the strictness of its requirements depend on which type of package you’re using.
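The steps above might be sketched with the CLI like this (the package name, path, and Dev Hub alias are placeholders; the source folder is declared as a package directory in sfdx-project.json):

```shell
# Create the package, owned by the Dev Hub you're authenticated to
sfdx force:package:create --name "MyPackage" --packagetype Unlocked \
  --path force-app --targetdevhubusername DevHub

# Create a version: a snapshot of the source at this point in time
sfdx force:package:version:create --package "MyPackage" --installationkeybypass \
  --wait 30 --targetdevhubusername DevHub
```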

The next three sections describe package types.

Org-Dependent Packages

| | Deletion | Apex Changes | Reversion | Scratch Org Support | Sandbox | Dependencies | Repeatability | Delay |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Org-Dependent Packages | Supported | Supported | Supported | Supported | Supported | Any | High | Medium |

Image showing development flow with org-dependent packages.

Org-dependent packages are technically unlocked packages created with a special flag (--skipvalidation). They allow dependencies outside of the package that aren’t in another package — in other words, they depend on something in your org.

For example, let’s say you’re building a package that includes a Flow, and that Flow refers to a custom notification type (NotificationTypeConfig). That metadata type is supported in the Metadata API as of Winter ’21, but it can’t be packaged. When you review the Metadata Coverage report, keep in mind that org-dependent packages are unlocked packages. The supported types will be the same.

An org-dependent package lets you package your Flow and optimistically assume that the Custom Notification Type will be present in the destination org. It’ll throw an error on installation if that Custom Notification Type is not present.
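Assuming the package has already been created as an unlocked package (the alias here is hypothetical), an org-dependent version is built by skipping validation:

```shell
# Skip validation so external, in-org dependencies don't fail the build
sfdx force:package:version:create --package "MyOrgDependentPkg" --skipvalidation \
  --wait 30 --targetdevhubusername DevHub
```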

You’ll want to use a sandbox that supports source tracking so that

  1. it contains all the metadata you might depend on that’s outside the package
  2. your changes are tracked, enabling you to pull them into source control

Limitations of Org-Dependent Packages

  1. Org-dependent packages are currently Beta, with plans to be GA Spring ‘21.
  2. Other packages cannot depend on an org-dependent package.
  3. Org-dependent packages can’t depend on other packages (to be more specific, Salesforce won’t check that dependency).

Why You Might Choose Org-Dependent Packages

  1. You want to create a package that depends on something without packaging support.
  2. You have some metadata in the org that isn’t ready to be packaged. For example, it has some tangled circular dependency that makes that process difficult.
  3. You want some of the benefits of packaging but don’t control the metadata you depend on (for example, it’s owned by another team at your company, or an AppExchange app).
  4. You want some of the benefits of packaging but you can’t modularize your existing metadata.
  5. You can deploy over existing unpackaged metadata. For example, let’s say your current org has Whatever__c deployed to it. If you deploy a package that includes Whatever__c, then that metadata will be recognized within the org as being part of the package from then on regardless of how it was originally deployed.
  6. You are unable to create a scratch org that supports the contents of your package, even if it has no external dependencies. Because org-dependent packages skip the step that validates packages in a scratch org, you can use them to work around this limitation.

Why You Wouldn't Choose Org-Dependent Packages

  1. If your package can include/declare all of its dependencies, prefer an unlocked package. You’ll avoid surprise deploy-time errors.
  2. You want to be able to deploy the package to a scratch org. For example, you have automatic CI testing using scratch orgs. The org-dependent package has to go to some type of sandbox where the dependency is met, which can take much longer to create and cannot be destroyed immediately.
  3. All packages take significant time to create, release, and install.

Unlocked Packages

| | Deletion | Apex Changes | Reversion | Scratch Org Support | Sandbox | Dependencies | Repeatability | Delay |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Unlocked Packages | Supported | Supported | Supported | Supported | Supported | Packaged | High | Long |

Image showing development flow with unlocked packages.

If you’re a customer using packages, unlocked packages should be your primary deployment option. Unlike org-dependent packages, unlocked packages have all dependencies either inside the package or in another package that is explicitly declared among the package’s dependencies.

Unlocked means simply, “allows changes not via the packaging process.” For example, imagine you’ve packaged a formula field that’s causing problems. An unlocked package allows you to modify the formula in production! You can put out the fire immediately.

But next time you deploy the package, whatever is in the source will deploy over any changes made in production; that is, whatever is in the package wins. The only way to make a permanent change is to remember to update the formula in the package so that subsequent deployments include the fix.

Limitations of Unlocked Packages

  1. You cannot have unpackaged external dependencies. Everything down the dependency graph must be packageable, packaged, and in the dependencies manifest.
  2. You must be able to configure a scratch org to support everything your package requires. Let’s say your package depends on PersonAccounts. That’s OK, because that’s a feature that can be configured in a scratch org. Underneath the covers, the packaging process deploys your source into a scratch org with a given configuration to build your package.
  3. 75% minimum Apex test coverage. If you worked with or read about unlocked packages prior to Winter ‘21, you know that Salesforce added code coverage requirements. Tests will run as part of the packaging process, so you can’t rely on the target environment.
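For example, a scratch org definition file that enables the PersonAccounts feature might look like this (the org name is a placeholder):

```json
{
  "orgName": "Packaging Scratch Org",
  "edition": "Developer",
  "features": ["PersonAccounts"]
}
```

The packaging process uses this configuration when it builds your package in a scratch org behind the scenes.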

Why You Would Choose Unlocked Packages

  1. It offers a known, good state of your metadata.
  2. You know the exact state of metadata at any point in time. The org has a record of package version deployments, and packages are linked to source control.
  3. The package can be deployed to a scratch org for testing.
  4. You can revert to a previous version.
  5. Similar to org-dependent unlocked packages, you can deploy over unpackaged metadata.

Why You Wouldn't Choose Unlocked Packages

  1. Production changes are overwritten by new package deployments. If you find yourself frequently making production fixes within a package, and not getting those back into the package source, you’ll be unhappy with your deployments re-breaking those patches. You’ll want to create some process for tracking in-production changes to make sure they work their way back through your normal packaging process.
  2. Packages have formal ancestry requirements, so large refactorings can lead to situations where you can’t upgrade. This may take developers some experience to get used to.
  3. All packages take significant time to create, release, and install.

Salesforce previously announced a concept of locked packages, which were less strict about changes than managed, but didn’t allow manual changes in the org. This has been deprioritized.

Mitigations for Risks You Might Face with Unlocked Packages

For packages intended for non-production environments, you can skip the package validation step. This speeds up the packaging process so you can deploy and get test results sooner. If you’re using automated tests and frequent builds, this can be useful.

You’ll still eventually need to validate the package and promote it before you deploy it to production.

Managed Packages

| | Deletion | Apex Changes | Reversion | Scratch Org Support | Sandbox | Dependencies | Repeatability | Delay |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Managed Packages | Supported | Supported | Supported | Supported | Supported | Packaged | High | Long |

The workflow for this option is the same as Unlocked Packages, so that diagram is omitted here.

Managed packages have more limitations than unlocked packages. They’re normally used by AppExchange partners who want to prevent customers from creating dependencies on code or components that aren’t designed to be depended on.

Limitations of Managed Packages

  1. Once you expose something, it’s difficult to delete it (packaging assumes there may be dependencies you don’t know about).
  2. You’ll need a namespace associated with your Dev Hub, and anything referring to the package’s code will need to use that namespace in the reference.

Why You Might Choose Managed Packages

  1. You're a partner looking to build and deliver a packaged solution on AppExchange.
  2. You’re working with multiple orgs and are creating a package to be used in them, and you have a compelling need to block changes in production that can't be accomplished through governance and permissions alone.
  3. You have a compelling need to formalize what a package exposes and better encapsulate some of the internals that cannot be met by the equivalent capability of Unlocked Packages.
  4. You have a compelling need to use namespaces to help keep code organized and modular that can't be accomplished through governance and development standards alone, and have adequate engineering expertise to design for the added complexity this will add to things like LWC cross-namespace operations.

Why You Wouldn't Choose Managed Packages

  1. You're not an AppExchange partner and have no compelling reason to use them.
  2. You’re not 100% sure of how metadata might be reused and don’t want to prevent reuse of everything.
  3. The package’s functionality changes frequently or might need to allow for major refactoring. (Stable custom Apex utilities, such as helpers for security or caching, are a better fit for managed packaging than fast-changing business logic.)
  4. Your team is not absolutely sure of how to design for additional namespace-related complexity for both developers and users of packages. This is especially true where dynamic code or configuration are used.
  5. All packages take significant time to create, release, and install.

Barriers to Moving Rightward with Migration Techniques

There are a few common constraints to moving to the right on the spectrum.

Spectrum with manual deployments as simplest choice and managed packages as most complex.


Metadata and packages require teams that have some familiarity with code and source control. Use of the Salesforce CLI is recommended, as is Visual Studio Code and the Salesforce Extensions. Even experienced Salesforce admins and developers may not be familiar with the latest tools. From an individual user’s perspective, the extra steps may feel like more work because it’s not always easy to see the eventual big-picture gains.

Scratch orgs and sandboxes that support source tracking make it easy to retrieve what you’ve changed in an org. The Salesforce Extensions for VS Code make it possible to easily retrieve those changes (via shortcuts), and VS Code’s built-in Git integration means you don’t have to use the CLI, either. Even developers who adore terminals benefit from how few keystrokes this requires.

The new Salesforce DevOps CenterOpen link in new window, currently in Developer Preview and planned for GA in 2021, will help admins work with source-controlled deployments. It offers a simpler way for non-developers to make changes in a developer environment, automatically track those changes (no spreadsheets for building change sets!), and commit them to source control (no IDE or CLI!). Join the Trailblazer Community Group for DevOps CenterOpen link in new window to stay informed.


Assuming people are willing to learn all these new tools, there’s the issue of tooling and access itself. For example, some companies restrict installation of developer tools on machines. You may have to get exceptions to software restrictions, ports unblocked, a public npm registry allowed, and so on.

And while we take cloud-based source control (e.g., GitHub and GitLab) for granted, it may not be an option for some companies. The setup described in this guide may involve several on-premises servers and be more complex than most of us experience. Or, your company may require a security review of the cloud services you choose.

API Support

Not everything can be deployed by every technique due to product gaps. The best resource for understanding these gaps is the Metadata Coverage ReportOpen link in new window. The following images and scenarios are as of Winter ’21 (API version 50.0) and subject to change.

Picture of Metadata Coverage report for LiveAgentSettings.

For example, let’s say you’ve set up and tested LiveAgent (Chat) in a sandbox. To move that to production, based on the chart above, you could recreate the configuration manually, deploy it with a change set, or deploy it via the Metadata API. You could not use a package to do the deployment.

Note: There are more objects related to deploying LiveAgent. We simplified this example so you don’t have to scroll all over the Metadata Coverage Report to look at all of them.
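For the Metadata API route, the deployment is driven by a package.xml manifest. The sketch below is illustrative only: the member names are hypothetical, and (per the note above) the full set of Chat-related types is larger than shown, so check the Metadata Coverage Report for your actual components:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hedged sketch of a package.xml for retrieving/deploying Chat components.
     Member names are placeholders; verify type coverage in the report. -->
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>Sales_Chat_Button</members>
        <name>LiveChatButton</name>
    </types>
    <types>
        <members>Sales_Chat_Config</members>
        <name>LiveChatAgentConfig</name>
    </types>
    <version>50.0</version>
</Package>
```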

If you can’t find what you’re looking for on the coverage report (for example, predictions built with Prediction Builder, or portability policies), then it isn’t supported by anything other than manual setup.

Why doesn’t everything support all the deployment options?

This is almost always a prioritization challenge. Products are built by teams and owned by a product manager who has to prioritize what their users want. Each team is responsible for adding support for deployment techniques to their product, and they balance this against new features, bug fixes, and other work.

If you are using a feature that doesn’t deploy well, reach out to the product manager to make sure they know that you value deployability.

Your process may dictate product adoption

Eventually, you may have an amazing deployment process you love that’s running like a well-oiled machine. Then, Salesforce creates some killer new feature that your users are really excited about, but it lacks support for your preferred deployment process.

At this point, you face an uncomfortable choice: Create a whole new process around deploying that one tool (adding complexity and reducing agility), or wait until the new feature has proper deployment support that lets you preserve your process.

Ultimately this becomes a “greater business value” choice. This is another scenario in which it’s important that the product team understands that their deployability support is preventing your adoption of their feature.

Downstream Effects

Continuing the previous example, imagine you have additional functionality that depends on your LiveAgentButton (think of an Experience, formerly known as a Community, where that button is embedded). Even though ExperienceBundle itself works with unlocked packages, you would not be able to use unlocked or managed packages for that community (ExperienceBundle), because the package wouldn’t contain all the source without that button.

Picture of Metadata Coverage report for ExperienceBundle.

You could, however, use an org-dependent package that assumes the button will exist in the target environment.

Platform Quality

As you look through the coverage report, you’ll see links to known issues. Second-generation packaging is relatively new, as are the SFDX tools and features like Code Builder and DevOps Center.

Additionally, Salesforce is making huge efforts to expand metadata coverage while also releasing new products and features. You are more likely to find blocking bugs and gaps in newer functionality.

You may even find that within a metadata type, certain features don’t behave as expected. For example, orgs with source tracking will register changes when adding a new field or renaming an object, but changes to an object’s field history tracking are ignored. You can edit the object metadata by hand to change the value, and then it deploys as expected.

Please report issues so that Salesforce gets them fixed and other customers are aware of them.

Deployments are More than Metadata

OK, so let’s say you’ve chosen one or more options described in this guide for your project. You’ll typically find “other stuff” that you need to deploy with your change, such as data records that your configuration depends on, default settings, or feature enablement done through Setup.

If you’re doing manual deployments, these are just additional manual steps.

If you’re using packages, you have the option to create Apex classes that run as a post install scriptOpen link in new window. They can verify the state of records in the org and modify where necessary. There is no option to do a pre-install as part of the package installation, so you’ll need to make those changes manually or from a script outside the package.
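For illustration, a minimal post install script implements the InstallHandler interface; the class name and the custom object it touches are hypothetical, and the class is specified when you create the package version:

```apex
// A minimal post install script sketch; PostInstall and App_Setting__c are
// hypothetical names. Salesforce runs onInstall after the package installs.
global class PostInstall implements InstallHandler {
    global void onInstall(InstallContext context) {
        // Seed a default record only on first install, not on upgrades
        if (context.previousVersion() == null) {
            insert new App_Setting__c(Name = 'Default');
        }
    }
}
```

Because the script runs after installation completes, it can only verify and repair state; anything that must happen before install still needs a manual step or an external script.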

If you’re doing deployments from metadata, you’ll need to complete these steps either manually or via a script that runs before and/or after your deployment.

Manual Change Sets Metadata API Deployments Packages
Pre-Deploy Manual Manual Manual, External Script Manual, External Script
Post-Deploy Manual Manual Manual, External Script Manual, External Script, Post Install Apex

The advantage of using deployment scripts is testability. You can run the script on a non-production environment and verify the result, and modify the script as necessary before using it in production.


Permissions are worth a special call-out. Historically, when people created new objects, fields, and so on, they often assigned them to one or more profiles. With change sets, it’s possible to deploy those profile changes.

The scenario gets a little more complicated with metadata deployments. You can modify a profile and retrieve it from a source-tracking org, but it’s going to be the entire, very large profile instead of just what you’ve added. The metadata API tries to handle these modifications for you and may be successful. For example, if you retrieve from a change set or package.xml, the API will try to return the portion of the profile that corresponds to the other metadata in the retrieval scope. Similarly, you can deploy a profile that contains only what’s in your package and the API will attempt to merge it with the existing profile that covers the rest of the org. You probably want finer-grained control than that.

Packages are even more interesting. On deployment, they assign (by default) permissions for everything in the package to the System Administrator profile. If you ever retrieve that profile, it’s going to have references to all the installed packages.

If you’re using an org with source tracking to build, adding any object or field to a profile will cause the profile to be marked as “modified.” When you retrieve the source, the entire profile downloads, not just the field you change. The profile source probably refers to something in the org beyond the scope of your project, so you’ll be manually cleaning those to keep from polluting the profile source files.

You likely see the point: Profiles are not the right tool. Salesforce recommends using permission sets to assign permissions. They’re more granular, aren’t 1:1 with users, and eliminate dependencies on a profile. If you’re not familiar, take a moment to read Migrating to Permission Sets for DX.Open link in new window You can also use permission set groupsOpen link in new window to reduce the work of managing common clusters of permission sets.
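As a sketch, a small permission set deployed alongside your metadata keeps permissions out of profiles entirely; the object and field names here are hypothetical:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal permission set sketch; Invoice__c and Status__c are placeholder
     names. Assign it to users (or a permission set group) after deployment. -->
<PermissionSet xmlns="http://soap.sforce.com/2006/04/metadata">
    <fieldPermissions>
        <editable>true</editable>
        <field>Invoice__c.Status__c</field>
        <readable>true</readable>
    </fieldPermissions>
    <label>Invoice Management</label>
    <objectPermissions>
        <allowCreate>true</allowCreate>
        <allowDelete>false</allowDelete>
        <allowEdit>true</allowEdit>
        <allowRead>true</allowRead>
        <modifyAllRecords>false</modifyAllRecords>
        <object>Invoice__c</object>
        <viewAllRecords>false</viewAllRecords>
    </objectPermissions>
</PermissionSet>
```

Because the file contains only what your project adds, it retrieves and deploys cleanly without dragging the rest of the org’s permissions along.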


Once you’ve moved beyond manual changes in production, you’re into the world of developer environments. This section covers the different types of environments available to you and how we recommend using them.


A sandbox is an org that’s created from your production org and remains connected to that org. Usually, the metadata matches the production org’s metadata at the time of sandbox copy. It is possible to copy a sandbox from another sandbox.

Your sandbox allocation is based on the edition of your production org. Salesforce will sell you more if you need them.

Sandboxes can be deleted or refreshed (a fresh copy from production), but only at a limited frequency. The minimum duration in the table below describes how long a sandbox must be active before it can be refreshed or deleted.

Sandboxes Scratch Org
Developer Developer Pro Partial Copy Full
Data Storage1 200MB 1GB 5GB Matches Production 200MB
File Storage 200MB 1GB Matches Production Matches Production 50MB
Minimum Duration (Days) 1 1 5 29 None2
Data Copy None None Per Template Per Template or All None
Metadata Copy Matches Production Matches Production Matches Production Matches Production None
Features/Licenses Matches Production Matches Production Matches Production Matches Production Matches Definition File3, Shape4

1 Salesforce has an unusual way of calculating storageOpen link in new window based on record count.
2 Scratch orgs have a maximum duration of 30 days.
3 Scratch org definition filesOpen link in new window allow for various feature enablements and license configurations.
4 Scratch org shape (Beta)Open link in new window allows scratch org feature and license enablements to match a source org.

Developer and Developer Pro Sandboxes

The main difference between Developer and Developer Pro sandboxes is their storage capacity. You should use them for creating most changes. Because they support source tracking, you can use the CLI or DevOps Center to capture your changes for you. Once you’re done with changes, you can migrate those changes as described in the Deployment Techniques, from Simplest to Most Scalable section.

The challenge with these sandbox types is that they contain no data when created. As a result, you need to create test data manually or load it with a script or data-loading tool.

Partial Copy Sandboxes

Partial Copy sandboxes allow for more storage and let you create a sandbox templateOpen link in new window to copy a subset of production data. This dramatically simplifies data setup and can be used for complex testing where a lot of test data is required.

The main limitation of templates is that they work per object. If you have a small amount of production data, you can pick the objects that you want. But if you need Person AccountOpen link in new window records and have 3 million of them in production, you can’t select which Accounts/Contacts the template should copy; it’s all-or-nothing at the object level. Because each Person Account is 4K of storage (one account at 2K plus one contact at 2K), 3 million of them will consume about 11GB of data storage.


Full Copy Sandboxes

A full copy can be exactly what it sounds like — just copy everything. Because sandbox copy time is related to the amount of data to copy, you can limit the copy to the items you need using a template. Besides data, you can also choose to include or omit things like field history and Chatter to speed up the copy process.

Full Copy sandboxes are best suited for staging, user acceptance testing, performance and load testing, and training on realistic data.

Sandbox versioning

During Salesforce release windows, you will want to carefully time your sandbox refreshesOpen link in new window to make sure sandboxes are on the version you want. The dates can change each release, as can which instances receive the release preview before or after production, so always check the release information.

You might want some developer sandboxes to remain on the same release as your production org for scenarios that involve hot-fixes or debugging.

At the same time, you may be working in Developer sandboxes on a project that should go live after the next Salesforce major release. There, you’ll want those orgs on preview so that you can test against the target release and use any features in that release that aren’t currently available.

There may be a period where you can’t promote certain changes from your preview sandboxes until destination orgs receive the release.

You may want to use another sandbox to prepare for releases with user training, release documentation, or similar activities.

Sandbox planning requires keeping up with the release dates for not only production but also specific sandbox instances and planning refreshes around release windows.

About those copy times

Sandbox copies on a large org can take several days. Large orgs will want to plan accordingly if they really need everything in their org in a Full Sandbox.

Salesforce has announced Quick-Create Sandboxes (both Developer and Full) that dramatically reduce the copy time—a large org might copy in less than 10 minutes instead of days. Not only does this reduce the wait time when users manually create sandboxes, but it also allows for more automation in CI processes. This will be especially useful when changes can’t be packaged or when using Org-Dependent packages.

A Developer Preview for Quick-Create Sandboxes may occur in the Spring ‘21 release.

A word on sandboxes and security

Sandboxes do copy your users from the production org, including their permissions. If your sandbox owner is an administrator in production, their sandbox is ready to go — they can do what they need to.

Some companies either limit developers’ privileges in production or don’t allow developers a production login at all. If that’s the case, someone with production admin permissions will need to create the sandbox, log in, and then elevate the permissions of the sandbox user to whatever is required for their changes (usually full permissions).

For sandboxes that copy from production data (Partial Copy and Full Copy types), or for organizations that are copying production data to Developer or Developer Pro Sandboxes, this opens another potential security problem. Specifically, there may be production data developers should not have access to. Besides access by the developers themselves, developers may make changes to security policies or open temporary security gaps.

This risk is mitigated by using a tool that masks production data like Salesforce Data MaskOpen link in new window. The Production Administrator can decide which data should be obfuscated or deleted as part of the sandbox creation process to prevent developer access or inadvertent exposure.

Scratch Orgs

Scratch orgs are very different from sandboxes. They are meant to be created quickly, destroyed quickly, and be more configurable.

An example of configurability: Let’s say you want to experiment with a new feature like Salesforce CPQ. Sandboxes are created with the licenses, configuration, and metadata from your production org. You’d have to get CPQ licenses added to production, then create your sandbox (or sync licenses of an existing sandbox).

But with scratch orgs, you can specify the org’s features in a configuration fileOpen link in new window. You can spin up the org and do your proof-of-concept.
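For example, a minimal scratch org definition file might look like this; the org name, feature, and setting shown are illustrative, not a recipe for CPQ specifically:

```json
{
  "orgName": "CPQ Proof of Concept",
  "edition": "Developer",
  "features": ["ServiceCloud"],
  "settings": {
    "lightningExperienceSettings": {
      "enableS1DesktopEnabled": true
    }
  }
}
```

Passing this file to scratch org creation produces an org with just the features and settings you listed, independent of what production has enabled.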

For developers building features, scratch orgs help with enforcing dependencies. Because scratch orgs don’t start with your production metadata, you’ll be able to capture in source control everything your changes require. If you forgot to include something, you won’t be able to deploy it to a scratch org.

Scratch orgs are the preferred option for creating non-org-dependent packages. And behind the scenes, they’re where Salesforce builds your package from your source.

Some forward-looking statements

For some customers, the complexity of configuring and scripting the setup of scratch orgs has been a barrier to their use. They really are empty unless you specify the configuration, settings, metadata, and data.

To make this process easier, scratch org shapesOpen link in new window (Beta in Winter ‘21) provide two options:

Additionally, build scripts for scratch orgs are another challenge. First, you have to create and maintain the scripts. This can be especially challenging for users without shell-scripting experience. Second, they can take a long time to run, especially if you’re installing a lot of packages (your own or from AppExchange).

To accelerate this experience, there’s a pilot for scratch org snapshotsOpen link in new window that let you get an org to a known state (for example, installing all the packages and doing data setup) and then store the snapshot of it. Then, future orgs can start from that snapshot.

Example Scenario: Blended Processes

Remember, most companies can’t (or shouldn’t) use a single technique for all deployments.

The following is a realistic scenario of a company trying to move to a packaging approach while running multiple techniques.

In this scenario, you want to use packages because you have a complex enterprise environment where several teams (both internal and SI) are working on multiple projects that eventually deploy into a single org. Occasionally, these separate teams come into contention on shared objects like Account and Contact.

Where you can’t use packages, you do metadata deployments from source (at least for now), and you have one project using its own process due to Salesforce limitations within specific features.

The Preferred Approach

Several parts of your org’s configuration are currently deployed using Unlocked Packages. This is your preferred option when possible. Any changes run through your CI process, create a new package version, and deploy. Each package contains a few permission sets that are sometimes modified, too.

Process for the Preferred Approach

  1. Each package is stored in its own repo.
  2. A developer creates a Git branch and uses a scratch org to make changes.
  3. The developer pulls changes from the scratch org, commits to the repo, and creates a pull request (PR).
  4. The PR initiates various automated tests in a scratch org.
  5. Once reviewed and merged, CI automatically builds a new package version and installs to a series of orgs before deploying to production.
    1. Depending on the nature of the change, users may manually review the changes in the QA sandbox (UAT) before the package deploys to production.
  6. If users have problems with a release, the previous version of the package is deployed.
Image of complex, mixed workflow with two development teams.
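Steps 4 and 5 above might be wired up roughly like this in a CI system. This is a hedged GitHub Actions-style sketch: the secret name, org alias, and wait times are placeholders, and the sfdx commands reflect the CLI of the Winter ’21 era:

```yaml
# Hypothetical PR-validation job; names and values are illustrative.
name: pr-validate
on: pull_request
jobs:
  scratch-org-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install Salesforce CLI
        run: npm install --global sfdx-cli
      - name: Authenticate the Dev Hub
        run: |
          echo "${{ secrets.DEVHUB_SFDX_URL }}" > devhub-auth.txt
          sfdx auth:sfdxurl:store --sfdxurlfile devhub-auth.txt --setdefaultdevhubusername
      - name: Create a scratch org and push the branch source
        run: |
          sfdx force:org:create --definitionfile config/project-scratch-def.json --setalias pr-org --setdefaultusername
          sfdx force:source:push
      - name: Run Apex tests
        run: sfdx force:apex:test:run --testlevel RunLocalTests --resultformat human --wait 30
      - name: Delete the scratch org
        if: always()
        run: sfdx force:org:delete --targetusername pr-org --noprompt
```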

With this approach, you may experience...

  1. Occasionally, someone needs to make a production change. Unlocked packages allow for this. It’s considered an “exception,” so part of the exception process is creating a work item for that change to get into the next package.
  2. Almost all new projects use this approach, unless there are technical limitations preventing it.
  3. The defect rate is dramatically reduced by the automated tests and by the dependencies caught by the packaging process itself.
  4. To support this process, you’ve created a few docs and videos that walk admins through the basics of using VS Code to connect to GitHub and to the orgs. They know how to create a branch, push/pull changes, commit, and open PRs. They’ve seen DevOps Center and think it might make their lives easier.
  5. To support both admins and developers, each project also maintains some scripts that set up an org with the required configuration, users, permissions, and basic data. Everyone knows how to run the script.
  6. You’ve opened several cases with Salesforce support around unexpected issues with packaging.
  7. There is often a lot of debate about packaging strategy. Should we split this one into two? Should these be combined? Can we break out part of one because it might be a shared dependency for a new project and an existing one? This is new territory for everyone and it’s the place where teams, who can usually work independently, tend to find conflict. You’re wondering if “Packaging Strategy” should be someone’s job to decide.

A Secondary Approach

Additional metadata exists in a single large GitHub repo per Metadata API: Deployment with Source Control + CI. You’d like to break part of it off into a few more packages, but haven’t had time yet. It’s not clear how it should be organized in a package because the dependencies are so tangled. You use GitHub ActionsOpen link in new window to deploy this between environments. Some parts of it probably end up being Org-Dependent Packages eventually, but you don’t like using non-GA features.

Process for the Secondary Approach

  1. The source lives in a single repo.
  2. A developer creates (or refreshes) a Developer sandbox before beginning work.
  3. The developer then makes changes in the sandbox, pulls them to local source, commits to GitHub, and opens a PR to merge into the integration branch.
  4. CI deploys the entire repo to a Partial Copy sandbox named Integrate to verify and run larger tests.
  5. If new metadata is being created, the developer may also need to add new test data to the sandbox(es) manually or, preferably, by script.
  6. If everything looks good, the changes from the feature branch are merged into the QA branch, which initiates a metadata deployment to a Full Copy sandbox. Users can test larger changes there.
  7. You have options here (more on branchesOpen link in new window):
    1. Each change is merged from the feature branch into the main branch, which deploys to production. This is a lot of manual merges and a complicated branch operation, but does allow for granular changes to go when they’re ready. It may also create a lot of manual reviews in the QA sandbox.
    2. When everything in QA looks good, a production deployment happens. This is simpler and perhaps more predictable (“if we liked QA, we’ll like Production”) but does allow for a not-ready change to keep other things from deploying (since you’re all working in one big repo) and this can become a bottleneck.
Image of a workflow with two developers using source control.
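Step 4 above might look roughly like this as a CI job. Again a hedged GitHub Actions-style sketch: the branch name, secret name, source path, and org alias are placeholders:

```yaml
# Hypothetical deployment job for the integration branch; values are
# illustrative, not prescriptive.
name: deploy-integrate
on:
  push:
    branches: [integration]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install Salesforce CLI
        run: npm install --global sfdx-cli
      - name: Authenticate to the Integrate sandbox
        run: |
          echo "${{ secrets.INTEGRATE_SFDX_URL }}" > auth.txt
          sfdx auth:sfdxurl:store --sfdxurlfile auth.txt --setalias integrate
      - name: Deploy the entire repo and run local tests
        run: sfdx force:source:deploy --sourcepath force-app --testlevel RunLocalTests --targetusername integrate --wait 60
```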

With this approach, you may experience...

  1. Because you’ve enjoyed your packaged projects so much, you’ve got a team testing org-dependent packages. You’re not using them for production deployments, but plan to as soon as it’s GA.
  2. You prioritize decomposing the remaining metadata into packages based on how often the bottlenecks happen. Speeding up the different teams and reducing contention has real ROI.
  3. Several projects have tried to break off from this approach and used unlocked packages but ended up back here after a bit of wasted effort.
  4. You’ve set up a tracker for which metadata types you are using that aren’t supported in packaging and update it each release per the Metadata Coverage Report to plan future migrations to your preferred process, to help prevent those wasted efforts from happening.
  5. This approach has a lot more manual steps and is more error-prone than your packages. You’ve created checklists and source reviews to prevent mistakes from happening.

A Non-Standard Approach

You have an Experience using Salesforce CMS that you work on in a Full Copy sandbox. You’ve had bad Experiences (pun intended) moving these via metadata because of product gaps and bugs. So you typically build LWC for the community in the full sandbox, deploy just the LWC and supporting Apex classes to production via the Metadata API, and let the experience administrators manually add them to the production version of the Experience rather than try to deploy the Experience itself. This entire process is owned by a single developer.

Process for a Non-Standard Approach

  1. The production admin creates/refreshes a Full Copy sandbox.
  2. The developer connects to the sandbox using VS Code and retrieves selected LWC/Apex classes via Org Browser.
  3. The developer makes the changes locally, auto-saving to the sandbox on each change.
  4. The developer previews the changes in the Experience.
  5. Sometimes, large changes are previewed in the Full Copy sandbox (UAT) by someone else before deploying to production.
  6. The developer commits changes to source control for safety and reversibility, but the deployment of individual LWC and Apex occurs to production from local source. On the spectrum of techniques, this is a combination of:
    1. Metadata API: Direct Deployments for the developer (even though they’re using source control), because the deployment is not made directly from the source control system.
    2. Manual Changes in Production for the community admin
Mixed workflow with a developer using source control and another making changes in production.

With this approach, you may experience...

  1. This works fine when it’s just LWC and Apex Classes that are being modified.
  2. The developer must be careful not to modify any Apex classes or LWC that do not belong to this Experience project. Occasionally, a change extends beyond the scope of this work (for example, creating some new fields on an SObject used internally and in the Experience). This tends to become a larger effort coordinating across project boundaries, where the Experience LWCs are waiting for changes in packages.
  3. There have been a few occasions where you choose to copy-paste some code from elsewhere in the org rather than create a dependency on existing code. It’s a trade-off accepted to keep this process more independent. You make notes of these in the code and eventually plan to eliminate these duplications.
  4. You don’t see this work becoming multideveloper or multiteam anytime soon. Every year or so, you check the progress of metadata types and ExperienceBundle deployments to see if it’s possible to improve this area and be more consistent with your other deployments.

Third-party Tooling

Source Control

The SFDX command line tools are agnostic to your source control system, and the use of scripts should let you work with the tool of your choice.

Early iterations of DevOps Center work with GitHub, so if you can use that for source control, you should.


CumulusCI

CumulusCIOpen link in new window is a free, open-source CI tool used heavily by the Salesforce.orgOpen link in new window ecosystem (not-for-profits). Support for second-generation packaging is in progress, so we do not recommend using its default automation unless you’re an ISV.

However, it is extensible to use any Salesforce CLI command (or any shell command in general). It includes some powerful features for automating UI tests (simulating a browser), loading test data, creating fake data, dealing with namespaces, and managing releases and release notes via GitHub.
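As a rough sketch, CumulusCI is configured through a cumulusci.yml file. The project name, task name, and dataset paths below are hypothetical, and the task class path should be verified against the current CumulusCI documentation:

```yaml
# Hypothetical cumulusci.yml fragment; names and paths are placeholders.
project:
    name: MyProject
    package:
        name: My Package
        api_version: "50.0"

tasks:
    load_sample_data:
        description: Load sample test data into the target org
        class_path: cumulusci.tasks.bulkdata.LoadData
        options:
            mapping: datasets/mapping.yml
            sql_path: datasets/sample.sql
```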

Salesforce Partners

Several tooling vendors are working to solve some of the complexities of deployments. You should explore them as part of any company-wide deployment strategy initiative.


CI/CD Providers

Release Management Partners

Closing Remarks

Someday, API support may be so ubiquitous that you can select deployment mechanisms solely based on your team’s preferences. Until then, the Metadata Coverage ReportOpen link in new window is your friend.

Give it a look anytime you’re planning to move rightward on the spectrum or introduce new metadata types into your deployments.

Tell us what you think

Help us make sure we're publishing what is most relevant to you: take our surveyOpen link in new window to provide feedback on this content and tell us what you’d like to see next.