Schema for the Continuous Compliance Engine API
Masking API (5.1.43)
- Mock server: https://help-api.delphix.com/_mock/continuous-compliance-engine/2025.3.0.0/cc-engine-apis-2025.3.0.0/tokenization-jobs
- Server: https://help-api.delphix.com/masking/api/v5.1.43/tokenization-jobs
Get all tokenization jobs

    curl -i -X GET \
      'https://help-api.delphix.com/_mock/continuous-compliance-engine/2025.3.0.0/cc-engine-apis-2025.3.0.0/tokenization-jobs?page_number=1&page_size=0&environment_id=0' \
      -H 'Authorization: YOUR_API_KEY_HERE'

Success response:

    { "_pageInfo": { "numberOnPage": 0, "total": 0 }, "responseList": [ { … } ] }
Request body: the tokenization job to create.

- The name of the tokenization job (jobName). Once the job is created, this field cannot be changed.
- The ID of the ruleset (rulesetId) that this tokenization job is based on. Once the job is created, the environment inferred from the ruleset cannot be changed; the job can only be updated to reference a ruleset in the same environment as the original ruleset.
- The email address to send job status notifications to. Note that SMTP settings must be configured before notifications can be received.
- The granularity (feedbackSize) with which the Masking Engine reports progress on the job. For instance, a feedbackSize of 50000 produces a log update each time 50000 rows are processed during the masking phase.
- A description of the job (jobDescription).
- The maximum amount of memory, in MB, that the job can consume during execution.
- The minimum amount of memory, in MB, that the job can consume during execution.
- Whether the job, after creation, can be executed using a connector different from the one associated with the ruleset the job is based on.
- The number of input streams, which controls how much parallelism the job uses when extracting the data to be masked. For instance, when masking a database, 5 input streams let the job read up to 5 tables in parallel and mask those 5 streams in parallel. Higher values allow more parallelism but consume more memory; streams in excess of the number of units being masked (e.g. tables or files) do nothing.
- Whether the job runs InPlace or OnTheFly (onTheFlyMasking). InPlace masking reads the data to be masked, masks it, and loads the masked data back into the original data source. OnTheFly masking instead loads the masked data into a different data source; in that case the field 'onTheFlyMaskingSource' must be provided.
- Whether the job fails immediately when a masking algorithm fails to mask its data, or delays failure until job completion. Setting this to 'false' lets a user see all accumulated masking errors before the job is marked as failed.
- The driver support tasks to perform before/after the job, chosen from the tasks available for the selected target ruleset/connector.
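One of the constraints above (an OnTheFly job must supply 'onTheFlyMaskingSource') can be checked client-side before calling the create endpoint. A sketch: the helper name is mine, and the field names are the ones used in this page's examples.

```python
def build_tokenization_job(job_name, ruleset_id, **fields):
    """Assemble a create-request body. jobName and rulesetId are the two
    fields every example on this page supplies; any other schema fields
    (jobDescription, feedbackSize, onTheFlyMasking, ...) pass through."""
    body = {"jobName": job_name, "rulesetId": ruleset_id, **fields}
    # Per the schema note above: an OnTheFly job must name its source.
    if body.get("onTheFlyMasking") and "onTheFlyMaskingSource" not in body:
        raise ValueError("onTheFlyMasking=true requires onTheFlyMaskingSource")
    return body

payload = build_tokenization_job(
    "some_tokenization_job", 7,
    jobDescription="A simple tokenization job.",
    feedbackSize=100000,
    onTheFlyMasking=False,
)
```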
Create a tokenization job

    curl -i -X POST \
      https://help-api.delphix.com/_mock/continuous-compliance-engine/2025.3.0.0/cc-engine-apis-2025.3.0.0/tokenization-jobs \
      -H 'Authorization: YOUR_API_KEY_HERE' \
      -H 'Content-Type: application/json' \
      -d '{
        "jobName": "some_tokenization_job",
        "rulesetId": 7,
        "jobDescription": "This example illustrates a TokenizationJob with just a handful of the possible fields set. It is meant to exemplify a simple JSON body that can be passed to the endpoint to create a TokenizationJob.",
        "feedbackSize": 100000,
        "onTheFlyMasking": false,
        "databaseMaskingOptions": {
          "batchUpdate": true,
          "commitSize": 20000,
          "dropConstraints": true,
          "prescript": {
            "name": "my_prescript.sql",
            "contents": "ALTER TABLE table_name DROP COLUMN column_name;"
          },
          "postscript": {
            "name": "my_postscript.sql",
            "contents": "ALTER TABLE table_name ADD column_name VARCHAR(255);"
          }
        }
      }'

Success response:
The response contains the request fields described above, plus three fields auto-generated by the Masking Engine: the ID number of the tokenization job, the user that created it, and the time it was created.
{ "jobName": "some_tokenization_job", "rulesetId": 7, "jobDescription": "This example illustrates a TokenizationJob with just a handful of the possible fields set. It is meant to exemplify a simple JSON body that can be passed to the endpoint to create a TokenizationJob.", "feedbackSize": 100000, "onTheFlyMasking": false, "databaseMaskingOptions": { "batchUpdate": true, "commitSize": 20000, "dropConstraints": true, "prescript": { … }, "postscript": { … } } }
- Mock server: https://help-api.delphix.com/_mock/continuous-compliance-engine/2025.3.0.0/cc-engine-apis-2025.3.0.0/tokenization-jobs/{tokenizationJobId}
- Server: https://help-api.delphix.com/masking/api/v5.1.43/tokenization-jobs/{tokenizationJobId}
Get a tokenization job by ID

    curl -i -X GET \
      'https://help-api.delphix.com/_mock/continuous-compliance-engine/2025.3.0.0/cc-engine-apis-2025.3.0.0/tokenization-jobs/{tokenizationJobId}' \
      -H 'Authorization: YOUR_API_KEY_HERE'

Success response:
The response contains the same fields as the create response described above: the request fields plus the auto-generated job ID, creating user, and creation time.
{ "jobName": "some_tokenization_job", "rulesetId": 7, "jobDescription": "This example illustrates a TokenizationJob with just a handful of the possible fields set. It is meant to exemplify a simple JSON body that can be passed to the endpoint to create a TokenizationJob.", "feedbackSize": 100000, "onTheFlyMasking": false, "databaseMaskingOptions": { "batchUpdate": true, "commitSize": 20000, "dropConstraints": true, "prescript": { … }, "postscript": { … } } }
Request body: the updated tokenization job. The fields are the same as those described for job creation above. In particular, jobName cannot be changed once the job exists, and the job can only reference a ruleset in the same environment as the original ruleset.
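The jobName immutability rule above can be checked before issuing the PUT. A sketch with a hypothetical helper name; the environment constraint on the ruleset cannot be fully verified client-side, since it requires looking up the ruleset's environment, so the engine enforces it server-side.

```python
def apply_update(existing, update):
    """Merge an update into an existing job body, rejecting changes the
    schema notes above forbid."""
    # jobName cannot be changed once the job is created.
    if "jobName" in update and update["jobName"] != existing["jobName"]:
        raise ValueError("jobName cannot be changed after creation")
    # rulesetId may change, but only to a ruleset in the same environment
    # as the original; that check happens on the Masking Engine.
    return {**existing, **update}

job = {"jobName": "some_tokenization_job", "rulesetId": 7, "feedbackSize": 100000}
updated = apply_update(job, {"feedbackSize": 50000})
```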
Update a tokenization job by ID

    curl -i -X PUT \
      'https://help-api.delphix.com/_mock/continuous-compliance-engine/2025.3.0.0/cc-engine-apis-2025.3.0.0/tokenization-jobs/{tokenizationJobId}' \
      -H 'Authorization: YOUR_API_KEY_HERE' \
      -H 'Content-Type: application/json' \
      -d '{
        "jobName": "some_tokenization_job",
        "rulesetId": 7,
        "jobDescription": "This example illustrates a TokenizationJob with just a handful of the possible fields set. It is meant to exemplify a simple JSON body that can be passed to the endpoint to create a TokenizationJob.",
        "feedbackSize": 100000,
        "onTheFlyMasking": false,
        "databaseMaskingOptions": {
          "batchUpdate": true,
          "commitSize": 20000,
          "dropConstraints": true,
          "prescript": {
            "name": "my_prescript.sql",
            "contents": "ALTER TABLE table_name DROP COLUMN column_name;"
          },
          "postscript": {
            "name": "my_postscript.sql",
            "contents": "ALTER TABLE table_name ADD column_name VARCHAR(255);"
          }
        }
      }'

Success response:
The response contains the same fields as the create response described above: the request fields plus the auto-generated job ID, creating user, and creation time.
{ "jobName": "some_tokenization_job", "rulesetId": 7, "jobDescription": "This example illustrates a TokenizationJob with just a handful of the possible fields set. It is meant to exemplify a simple JSON body that can be passed to the endpoint to create a TokenizationJob.", "feedbackSize": 100000, "onTheFlyMasking": false, "databaseMaskingOptions": { "batchUpdate": true, "commitSize": 20000, "dropConstraints": true, "prescript": { … }, "postscript": { … } } }
Delete a tokenization job by ID

    curl -i -X DELETE \
      'https://help-api.delphix.com/_mock/continuous-compliance-engine/2025.3.0.0/cc-engine-apis-2025.3.0.0/tokenization-jobs/{tokenizationJobId}' \
      -H 'Authorization: YOUR_API_KEY_HERE'
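Taken together, the five operations on this page can be wrapped in a thin client. A sketch: the class and method names are mine, and it only pairs HTTP methods with URLs, leaving the actual sending (and the Authorization header) to whatever HTTP client you prefer.

```python
class TokenizationJobsClient:
    """Method/URL pairs for the tokenization-jobs endpoints above."""

    def __init__(self, base_url):
        # e.g. "https://<engine>/masking/api/v5.1.43"
        self.base_url = base_url.rstrip("/")

    def _url(self, job_id=None):
        path = f"{self.base_url}/tokenization-jobs"
        return path if job_id is None else f"{path}/{job_id}"

    def list_jobs(self):          return ("GET", self._url())
    def create_job(self):         return ("POST", self._url())
    def get_job(self, job_id):    return ("GET", self._url(job_id))
    def update_job(self, job_id): return ("PUT", self._url(job_id))
    def delete_job(self, job_id): return ("DELETE", self._url(job_id))

client = TokenizationJobsClient("https://help-api.delphix.com/masking/api/v5.1.43")
```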