Advanced Data Deploy Best Practices

Data Quality Best Practices

When working with a release management tool like Copado, it is very important to have good quality data.

You want to make sure the changes you are moving across your pipeline have passed the relevant quality gates and that your sandboxes are updated with the latest changes from your production org.

Bearing this in mind, there are some best practices you can implement in your organization to ensure your data is consistent and of good quality:

  1. You need to have an external ID field in your objects so that records are not duplicated upon deployment. Create external ID fields for the different objects you are working with.
    External ID fields contain unique values and help you avoid creating duplicate records when importing or deploying data.
    To automatically populate a value in external ID fields that are empty, use Process Builder.
    If you don’t have an external ID in your objects, Copado will perform an insert in every deployment and records will be duplicated (see the upsert sketch after this list).
  2. Use validation rules to maintain your data structure.
    Validation rules allow you to enforce specific requirements before users can save a record. Check out Salesforce’s article for more information about the different validation rules you can set up.
  3. If you have any validation rules or triggers in place, make sure they are valid for your existing data before deploying to the destination org (see the pre-deployment check sketch after this list).
  4. Make sure you document everything and add help text to fields so that users understand the data entry requirements.
  5. Check that your data is clean. Use apps to cleanse duplicates and enforce duplicate/matching rules and unique values.
  6. Keep your sandboxes in sync. Updating your sandboxes with the latest data from your production org will help you avoid deployment errors.
    For this purpose, you can leverage Copado Continuous Delivery to easily back-promote user stories to lower environments.
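
As an illustration of why external IDs matter, below is a minimal Apex sketch of upsert semantics on a hypothetical external ID field (Legacy_Id__c is an assumed field name used only for this example). When records match on the external ID, they are updated instead of inserted, so re-running the load does not create duplicates:

    // Sketch only: Legacy_Id__c is an assumed external ID field on Account.
    List<Account> incoming = new List<Account>{
        new Account(Name = 'Acme Corp', Legacy_Id__c = 'ACME-001'),
        new Account(Name = 'Globex Inc', Legacy_Id__c = 'GLOBEX-002')
    };

    // Upsert matches on the external ID field: existing records are updated,
    // new records are inserted, and no duplicates are created on a re-run.
    upsert incoming Account.Fields.Legacy_Id__c;

An automation such as the Process Builder mentioned above can populate the external ID on existing records so that every record has a value before it is deployed.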
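
Before deploying, you can also run a quick check in the destination org to see whether existing data would fail a validation rule or trigger. The sketch below assumes a hypothetical rule requiring Phone to be populated on active accounts (the Active__c field and the rule itself are assumptions for illustration):

    // Sketch only: counts records that would fail an assumed validation rule
    // requiring Phone to be populated on active Accounts.
    Integer recordsAtRisk = [
        SELECT COUNT()
        FROM Account
        WHERE Active__c = true AND Phone = null
    ];
    System.debug('Records that would fail the rule: ' + recordsAtRisk);

If the count is greater than zero, clean up those records (or adjust the rule) before running the data deployment.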

In addition to these general guidelines on how to maintain good data quality, check out the best practices below that will help you successfully work with data templates and data deployments overall.

Copado Data Templates Best Practices

  1. When working with data templates, you can easily select and deploy all the related records you need to move as part of your deployment. But before that, we recommend that you leverage Salesforce’s Schema Builder and take a look at your data model to have a clear view of all the related records you need to deploy.
  2. Establish a naming convention for your data templates:
  • When multiple users are creating data templates in the same org, it is recommended to add user initials to the data template name.
  • Similarly, if you are creating a data template to move test data from your production org to your sandboxes, you can add ‘Test Data’ to the template name.
  • If you create multiple versions of the same data template, make sure you append the version number. In the description of the template specify what makes this version different from the other versions.
  • If you have created or cloned a template for a specific user story, add the user story number to the template name.
  3. Focus on adding precise filters in the main template, especially for configuration data, because when deploying the main template with several related templates, only the filter of the main template applies during the deployment. Therefore, all the child and parent records deployed will be based on the main template records.
  4. You can’t repeat templates in the same set of related templates. Therefore, when selecting a related object, if the template dropdown already shows templates for this object, do not pick one that was already selected for another related object. This can happen, for example, when an object has two lookup fields pointing to the same related object.
  5. When working with test data, it is a good practice to move the most recent records in your production org, since they will contain the latest functionality and you may be interested in working with the latest features. To do so, you can filter by created date, e.g. Created Date = Last Quarter (see the filter sketch after this list).
  6. When creating new fields in an object for which you already have a template, or if you are using a different set of filters, you can clone your existing data template instead of updating it and add the new fields or filters to the cloned template. This way, you can keep different versions of the filters.
  7. If you are moving sensitive data to a sandbox for testing purposes, leverage the Scramble Value and the Scramble With Format functionality. Copado will replace the original value with a random value.
  8. Create list views for templates belonging to a particular application or object to easily find the templates you need.
  • For example, you can create a list view for CPQ templates where the copado__Main_Object__c field starts with “SBQQ__”, which is the CPQ package namespace (see the query sketch after this list).
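
The created-date filter mentioned above can be expressed the same way you would write it in a SOQL query. The Apex sketch below only illustrates the filter logic on a sample object; in practice you define the equivalent filter in the data template itself:

    // Sketch only: selecting recent records as test data using a SOQL date literal.
    List<Account> recentTestData = [
        SELECT Id, Name
        FROM Account
        WHERE CreatedDate = LAST_QUARTER
    ];
    System.debug('Recent records selected as test data: ' + recentTestData.size());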
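
Similarly, if you want to check programmatically which templates relate to the CPQ package, a query like the one below can help. It assumes Copado’s data templates are stored in the copado__Data_Template__c object; verify the API name in your org before relying on it:

    // Sketch only: copado__Data_Template__c is an assumed API name for Copado data templates.
    List<copado__Data_Template__c> cpqTemplates = [
        SELECT Id, Name, copado__Main_Object__c
        FROM copado__Data_Template__c
        WHERE copado__Main_Object__c LIKE 'SBQQ__%'
    ];
    System.debug('CPQ templates found: ' + cpqTemplates.size());

The same “starts with SBQQ__” condition is what you would configure as the list view filter.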

Data Backup Best Practices

  1. Leverage an enterprise-grade data backup tool that continually backs up your production data and enables fast recovery when needed.
  2. Execute production backups before and after a data deployment with Copado so that you have a delta of the changes. This way, if data recovery is needed, you have a snapshot of the changes made during the production data deployment.

