Masking API (5.1.43)

Schema for the Continuous Compliance Engine API

Servers
Mock server: https://help-api.delphix.com/_mock/continuous-compliance-engine/2025.3.0.0/cc-engine-apis-2025.3.0.0
https://help-api.delphix.com/masking/api/v5.1.43


tokenizationJob

Operations

Get all tokenization jobs

Request

Security
api_key
Query

page_number: integer(int64)

The page number for which to get tokenization jobs. This will default to the first page if excluded.

Default: 1

page_size: integer(int64)

The maximum number of objects to return. This will default to the DefaultApiPageSize setting if not provided.

environment_id: integer(int32)

The ID of the environment to get all tokenization jobs from.

curl -i -X GET \
  'https://help-api.delphix.com/_mock/continuous-compliance-engine/2025.3.0.0/cc-engine-apis-2025.3.0.0/tokenization-jobs?page_number=1&page_size=0&environment_id=0' \
  -H 'Authorization: YOUR_API_KEY_HERE'

Responses

Success

Body: application/json

_pageInfo: object (PageInfo)
responseList: Array of objects (TokenizationJob)

Example: [{"jobName":"some_tokenization_job","rulesetId":7,"jobDescription":"This example illustrates a TokenizationJob with just a handful of the possible fields set. It is meant to exemplify a simple JSON body that can be passed to the endpoint to create a TokenizationJob.","feedbackSize":100000,"onTheFlyMasking":false,"databaseMaskingOptions":{"batchUpdate":true,"commitSize":20000,"dropConstraints":true,"prescript":{"name":"my_prescript.sql","contents":"ALTER TABLE table_name DROP COLUMN column_name;"},"postscript":{"name":"my_postscript.sql","contents":"ALTER TABLE table_name ADD column_name VARCHAR(255);"}}}]
Response
application/json
{ "_pageInfo": { "numberOnPage": 0, "total": 0 }, "responseList": [ { … } ] }
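The page_number and page_size query parameters above support straightforward client-side pagination. A minimal Python sketch of the paging loop; iter_tokenization_jobs and fetch_page are hypothetical helper names, and the fetch function is injected so the logic is independent of any particular HTTP client:

```python
def iter_tokenization_jobs(fetch_page, page_size=100):
    """Yield tokenization jobs across all pages.

    fetch_page(page_number, page_size) is expected to return the parsed
    JSON body of GET /tokenization-jobs, i.e. a dict with "_pageInfo"
    and "responseList" keys as documented above.
    """
    page_number = 1  # page_number defaults to the first page
    while True:
        body = fetch_page(page_number, page_size)
        jobs = body.get("responseList", [])
        yield from jobs
        # A short (or empty) page means there are no further pages.
        if len(jobs) < page_size:
            return
        page_number += 1
```

In a real client, fetch_page would issue the GET request shown in the curl example above with the api_key in the Authorization header.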

Create tokenization job

Request

Security
api_key
Body: application/json (required)

The tokenization job to create

jobName: string, <= 255 characters, required

The name of the tokenization job. Once the tokenization job is created, this field cannot be changed.

Example: "some_tokenization_job"

rulesetId: integer(int32), required

The ID of the ruleset that this tokenization job is based on. Once the tokenization job is created, the underlying environment that is inferred from the ruleset will be unchangeable. That is, the job can only be updated to reference a ruleset that is in the same environment as the environment of the original ruleset.

Example: 7

email: string

The email address to send job status notifications to; note that the SMTP settings must be configured first to receive notifications.

feedbackSize: integer(int32), >= 1

The granularity with which the Masking Engine provides updates on the progress of the tokenization job. For instance, a feedbackSize of 50000 results in log updates whenever 50000 rows are processed during the masking phase.

Example: 100000

jobDescription: string, <= 255 characters

A description of the job.

Example: "This example illustrates a TokenizationJob with just a handful of the possible fields set. It is meant to exemplify a simple JSON body that can be passed to the endpoint to create a TokenizationJob."

maxMemory: integer(int32)

The maximum amount of memory, in MB, that the tokenization job can consume during execution.

minMemory: integer(int32)

The minimum amount of memory, in MB, that the tokenization job can consume during execution.

multiTenant: boolean

This field determines whether the tokenization job, after creation, can be executed using a connector that is different from the underlying connector associated with the ruleset that this tokenization job is based on.

Default: false

numInputStreams: integer(int32), >= 1

This field controls the amount of parallelism that the tokenization job uses to extract out the data to be masked. For instance, when masking a database, specifying 5 input streams results in the tokenization job reading up to 5 database tables in parallel and then masking those 5 streams of data in parallel. The higher the value of this field, the more potential parallelism there will be in the job, but the tokenization job will consume more memory. If the number of input streams exceeds the number of units being masked (e.g. tables or files), then the excess streams will do nothing.

Default: 1

onTheFlyMasking: boolean

This field determines whether the tokenization job will be performed InPlace or OnTheFly. The process for InPlace masking is to read out the data to be masked, mask the data, and then load the masked data back into the original data source. The process for OnTheFly masking is to read out the data to be masked, mask the data, and then load the masked data back into a different data source. When masking OnTheFly, the field 'onTheFlyMaskingSource' must be provided.

Default: false

databaseMaskingOptions: object (DatabaseMaskingOptions)
onTheFlyMaskingSource: object (OnTheFlyMaskingSource)

failImmediately: boolean

This field determines whether the masking job will fail immediately or delay failure until job completion when a masking algorithm fails to mask its data. Setting this value to 'false' provides a means for a user to see all cumulative masking errors before the job is marked as failed.

Default: false

enabledTasks: Array of objects (JobTask)

This field determines what tasks to perform before/after a job from a set of available driver support tasks as indicated by the chosen target ruleset/connector.

Example: [{"taskId":1}]

streamRowLimit: integer, >= -1

This value constrains the total number of rows that may enter the job for each masking stream. A setting of 0 means unlimited. A value of -1 selects the default value. The default value for this setting varies by job type. The minimum explicit value allowed is 20.

curl -i -X POST \
  https://help-api.delphix.com/_mock/continuous-compliance-engine/2025.3.0.0/cc-engine-apis-2025.3.0.0/tokenization-jobs \
  -H 'Authorization: YOUR_API_KEY_HERE' \
  -H 'Content-Type: application/json' \
  -d '{
    "jobName": "some_tokenization_job",
    "rulesetId": 7,
    "jobDescription": "This example illustrates a TokenizationJob with just a handful of the possible fields set. It is meant to exemplify a simple JSON body that can be passed to the endpoint to create a TokenizationJob.",
    "feedbackSize": 100000,
    "onTheFlyMasking": false,
    "databaseMaskingOptions": {
      "batchUpdate": true,
      "commitSize": 20000,
      "dropConstraints": true,
      "prescript": {
        "name": "my_prescript.sql",
        "contents": "ALTER TABLE table_name DROP COLUMN column_name;"
      },
      "postscript": {
        "name": "my_postscript.sql",
        "contents": "ALTER TABLE table_name ADD column_name VARCHAR(255);"
      }
    }
  }'
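The documented constraints (jobName length, feedbackSize >= 1, streamRowLimit's special -1/0 values with an explicit minimum of 20) can be checked client-side before POSTing, which surfaces mistakes without a round trip. A hedged sketch; validate_tokenization_job is a hypothetical helper, not part of the API, and it checks only the constraints listed in this schema (the server may enforce more):

```python
def validate_tokenization_job(job: dict) -> list[str]:
    """Return a list of constraint violations for a TokenizationJob body.

    An empty list means the payload passes these local checks.
    """
    errors = []
    name = job.get("jobName")
    if not name:
        errors.append("jobName is required")
    elif len(name) > 255:
        errors.append("jobName must be <= 255 characters")
    if "rulesetId" not in job:
        errors.append("rulesetId is required")
    if "feedbackSize" in job and job["feedbackSize"] < 1:
        errors.append("feedbackSize must be >= 1")
    if "numInputStreams" in job and job["numInputStreams"] < 1:
        errors.append("numInputStreams must be >= 1")
    srl = job.get("streamRowLimit")
    if srl is not None and srl not in (-1, 0) and srl < 20:
        # -1 selects the default, 0 means unlimited; the minimum
        # explicit value allowed is 20.
        errors.append("streamRowLimit must be -1, 0, or >= 20")
    if job.get("onTheFlyMasking") and "onTheFlyMaskingSource" not in job:
        errors.append("onTheFlyMaskingSource is required when "
                      "onTheFlyMasking is true")
    return errors
```

For example, the request body from the curl example above passes these checks, while a body with feedbackSize of 0 would be rejected locally.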

Responses

Success

Body: application/json

tokenizationJobId: integer(int32), read-only

The ID number of the tokenization job. This field is auto-generated by the Masking Engine.

jobName: string, <= 255 characters, required

The name of the tokenization job. Once the tokenization job is created, this field cannot be changed.

Example: "some_tokenization_job"

rulesetId: integer(int32), required

The ID of the ruleset that this tokenization job is based on. Once the tokenization job is created, the underlying environment that is inferred from the ruleset will be unchangeable. That is, the job can only be updated to reference a ruleset that is in the same environment as the environment of the original ruleset.

Example: 7

rulesetType: string, read-only

The type of the ruleset that this tokenization job is assigned to.

createdBy: string, <= 255 characters, read-only

The user that created the tokenization job. This field is auto-generated by the Masking Engine.

createdTime: string(date-time), read-only

The time when the tokenization job was created. This field is auto-generated by the Masking Engine.

email: string

The email address to send job status notifications to; note that the SMTP settings must be configured first to receive notifications.

feedbackSize: integer(int32), >= 1

The granularity with which the Masking Engine provides updates on the progress of the tokenization job. For instance, a feedbackSize of 50000 results in log updates whenever 50000 rows are processed during the masking phase.

Example: 100000

jobDescription: string, <= 255 characters

A description of the job.

Example: "This example illustrates a TokenizationJob with just a handful of the possible fields set. It is meant to exemplify a simple JSON body that can be passed to the endpoint to create a TokenizationJob."

maxMemory: integer(int32)

The maximum amount of memory, in MB, that the tokenization job can consume during execution.

minMemory: integer(int32)

The minimum amount of memory, in MB, that the tokenization job can consume during execution.

multiTenant: boolean

This field determines whether the tokenization job, after creation, can be executed using a connector that is different from the underlying connector associated with the ruleset that this tokenization job is based on.

Default: false

numInputStreams: integer(int32), >= 1

This field controls the amount of parallelism that the tokenization job uses to extract out the data to be masked. For instance, when masking a database, specifying 5 input streams results in the tokenization job reading up to 5 database tables in parallel and then masking those 5 streams of data in parallel. The higher the value of this field, the more potential parallelism there will be in the job, but the tokenization job will consume more memory. If the number of input streams exceeds the number of units being masked (e.g. tables or files), then the excess streams will do nothing.

Default: 1

onTheFlyMasking: boolean

This field determines whether the tokenization job will be performed InPlace or OnTheFly. The process for InPlace masking is to read out the data to be masked, mask the data, and then load the masked data back into the original data source. The process for OnTheFly masking is to read out the data to be masked, mask the data, and then load the masked data back into a different data source. When masking OnTheFly, the field 'onTheFlyMaskingSource' must be provided.

Default: false

databaseMaskingOptions: object (DatabaseMaskingOptions)
onTheFlyMaskingSource: object (OnTheFlyMaskingSource)

failImmediately: boolean

This field determines whether the masking job will fail immediately or delay failure until job completion when a masking algorithm fails to mask its data. Setting this value to 'false' provides a means for a user to see all cumulative masking errors before the job is marked as failed.

Default: false

enabledTasks: Array of objects (JobTask)

This field determines what tasks to perform before/after a job from a set of available driver support tasks as indicated by the chosen target ruleset/connector.

Example: [{"taskId":1}]

streamRowLimit: integer, >= -1

This value constrains the total number of rows that may enter the job for each masking stream. A setting of 0 means unlimited. A value of -1 selects the default value. The default value for this setting varies by job type. The minimum explicit value allowed is 20.

Response
application/json
{ "jobName": "some_tokenization_job", "rulesetId": 7, "jobDescription": "This example illustrates a TokenizationJob with just a handful of the possible fields set. It is meant to exemplify a simple JSON body that can be passed to the endpoint to create a TokenizationJob.", "feedbackSize": 100000, "onTheFlyMasking": false, "databaseMaskingOptions": { "batchUpdate": true, "commitSize": 20000, "dropConstraints": true, "prescript": { … }, "postscript": { … } } }

Get tokenization job by ID

Request

Security
api_key
Path

tokenizationJobId: integer(int32), required

The ID of the tokenization job to get

curl -i -X GET \
  'https://help-api.delphix.com/_mock/continuous-compliance-engine/2025.3.0.0/cc-engine-apis-2025.3.0.0/tokenization-jobs/{tokenizationJobId}' \
  -H 'Authorization: YOUR_API_KEY_HERE'

Responses

Success

Body: application/json

tokenizationJobId: integer(int32), read-only

The ID number of the tokenization job. This field is auto-generated by the Masking Engine.

jobName: string, <= 255 characters, required

The name of the tokenization job. Once the tokenization job is created, this field cannot be changed.

Example: "some_tokenization_job"

rulesetId: integer(int32), required

The ID of the ruleset that this tokenization job is based on. Once the tokenization job is created, the underlying environment that is inferred from the ruleset will be unchangeable. That is, the job can only be updated to reference a ruleset that is in the same environment as the environment of the original ruleset.

Example: 7

rulesetType: string, read-only

The type of the ruleset that this tokenization job is assigned to.

createdBy: string, <= 255 characters, read-only

The user that created the tokenization job. This field is auto-generated by the Masking Engine.

createdTime: string(date-time), read-only

The time when the tokenization job was created. This field is auto-generated by the Masking Engine.

email: string

The email address to send job status notifications to; note that the SMTP settings must be configured first to receive notifications.

feedbackSize: integer(int32), >= 1

The granularity with which the Masking Engine provides updates on the progress of the tokenization job. For instance, a feedbackSize of 50000 results in log updates whenever 50000 rows are processed during the masking phase.

Example: 100000

jobDescription: string, <= 255 characters

A description of the job.

Example: "This example illustrates a TokenizationJob with just a handful of the possible fields set. It is meant to exemplify a simple JSON body that can be passed to the endpoint to create a TokenizationJob."

maxMemory: integer(int32)

The maximum amount of memory, in MB, that the tokenization job can consume during execution.

minMemory: integer(int32)

The minimum amount of memory, in MB, that the tokenization job can consume during execution.

multiTenant: boolean

This field determines whether the tokenization job, after creation, can be executed using a connector that is different from the underlying connector associated with the ruleset that this tokenization job is based on.

Default: false

numInputStreams: integer(int32), >= 1

This field controls the amount of parallelism that the tokenization job uses to extract out the data to be masked. For instance, when masking a database, specifying 5 input streams results in the tokenization job reading up to 5 database tables in parallel and then masking those 5 streams of data in parallel. The higher the value of this field, the more potential parallelism there will be in the job, but the tokenization job will consume more memory. If the number of input streams exceeds the number of units being masked (e.g. tables or files), then the excess streams will do nothing.

Default: 1

onTheFlyMasking: boolean

This field determines whether the tokenization job will be performed InPlace or OnTheFly. The process for InPlace masking is to read out the data to be masked, mask the data, and then load the masked data back into the original data source. The process for OnTheFly masking is to read out the data to be masked, mask the data, and then load the masked data back into a different data source. When masking OnTheFly, the field 'onTheFlyMaskingSource' must be provided.

Default: false

databaseMaskingOptions: object (DatabaseMaskingOptions)
onTheFlyMaskingSource: object (OnTheFlyMaskingSource)

failImmediately: boolean

This field determines whether the masking job will fail immediately or delay failure until job completion when a masking algorithm fails to mask its data. Setting this value to 'false' provides a means for a user to see all cumulative masking errors before the job is marked as failed.

Default: false

enabledTasks: Array of objects (JobTask)

This field determines what tasks to perform before/after a job from a set of available driver support tasks as indicated by the chosen target ruleset/connector.

Example: [{"taskId":1}]

streamRowLimit: integer, >= -1

This value constrains the total number of rows that may enter the job for each masking stream. A setting of 0 means unlimited. A value of -1 selects the default value. The default value for this setting varies by job type. The minimum explicit value allowed is 20.

Response
application/json
{ "jobName": "some_tokenization_job", "rulesetId": 7, "jobDescription": "This example illustrates a TokenizationJob with just a handful of the possible fields set. It is meant to exemplify a simple JSON body that can be passed to the endpoint to create a TokenizationJob.", "feedbackSize": 100000, "onTheFlyMasking": false, "databaseMaskingOptions": { "batchUpdate": true, "commitSize": 20000, "dropConstraints": true, "prescript": { … }, "postscript": { … } } }

Update tokenization job by ID

Request

Security
api_key
Path

tokenizationJobId: integer(int32), required

The ID of the tokenization job to update

Body: application/json (required)

The updated tokenization job

jobName: string, <= 255 characters, required

The name of the tokenization job. Once the tokenization job is created, this field cannot be changed.

Example: "some_tokenization_job"

rulesetId: integer(int32), required

The ID of the ruleset that this tokenization job is based on. Once the tokenization job is created, the underlying environment that is inferred from the ruleset will be unchangeable. That is, the job can only be updated to reference a ruleset that is in the same environment as the environment of the original ruleset.

Example: 7

email: string

The email address to send job status notifications to; note that the SMTP settings must be configured first to receive notifications.

feedbackSize: integer(int32), >= 1

The granularity with which the Masking Engine provides updates on the progress of the tokenization job. For instance, a feedbackSize of 50000 results in log updates whenever 50000 rows are processed during the masking phase.

Example: 100000

jobDescription: string, <= 255 characters

A description of the job.

Example: "This example illustrates a TokenizationJob with just a handful of the possible fields set. It is meant to exemplify a simple JSON body that can be passed to the endpoint to create a TokenizationJob."

maxMemory: integer(int32)

The maximum amount of memory, in MB, that the tokenization job can consume during execution.

minMemory: integer(int32)

The minimum amount of memory, in MB, that the tokenization job can consume during execution.

multiTenant: boolean

This field determines whether the tokenization job, after creation, can be executed using a connector that is different from the underlying connector associated with the ruleset that this tokenization job is based on.

Default: false

numInputStreams: integer(int32), >= 1

This field controls the amount of parallelism that the tokenization job uses to extract out the data to be masked. For instance, when masking a database, specifying 5 input streams results in the tokenization job reading up to 5 database tables in parallel and then masking those 5 streams of data in parallel. The higher the value of this field, the more potential parallelism there will be in the job, but the tokenization job will consume more memory. If the number of input streams exceeds the number of units being masked (e.g. tables or files), then the excess streams will do nothing.

Default: 1

onTheFlyMasking: boolean

This field determines whether the tokenization job will be performed InPlace or OnTheFly. The process for InPlace masking is to read out the data to be masked, mask the data, and then load the masked data back into the original data source. The process for OnTheFly masking is to read out the data to be masked, mask the data, and then load the masked data back into a different data source. When masking OnTheFly, the field 'onTheFlyMaskingSource' must be provided.

Default: false

databaseMaskingOptions: object (DatabaseMaskingOptions)
onTheFlyMaskingSource: object (OnTheFlyMaskingSource)

failImmediately: boolean

This field determines whether the masking job will fail immediately or delay failure until job completion when a masking algorithm fails to mask its data. Setting this value to 'false' provides a means for a user to see all cumulative masking errors before the job is marked as failed.

Default: false

enabledTasks: Array of objects (JobTask)

This field determines what tasks to perform before/after a job from a set of available driver support tasks as indicated by the chosen target ruleset/connector.

Example: [{"taskId":1}]

streamRowLimit: integer, >= -1

This value constrains the total number of rows that may enter the job for each masking stream. A setting of 0 means unlimited. A value of -1 selects the default value. The default value for this setting varies by job type. The minimum explicit value allowed is 20.

curl -i -X PUT \
  'https://help-api.delphix.com/_mock/continuous-compliance-engine/2025.3.0.0/cc-engine-apis-2025.3.0.0/tokenization-jobs/{tokenizationJobId}' \
  -H 'Authorization: YOUR_API_KEY_HERE' \
  -H 'Content-Type: application/json' \
  -d '{
    "jobName": "some_tokenization_job",
    "rulesetId": 7,
    "jobDescription": "This example illustrates a TokenizationJob with just a handful of the possible fields set. It is meant to exemplify a simple JSON body that can be passed to the endpoint to create a TokenizationJob.",
    "feedbackSize": 100000,
    "onTheFlyMasking": false,
    "databaseMaskingOptions": {
      "batchUpdate": true,
      "commitSize": 20000,
      "dropConstraints": true,
      "prescript": {
        "name": "my_prescript.sql",
        "contents": "ALTER TABLE table_name DROP COLUMN column_name;"
      },
      "postscript": {
        "name": "my_postscript.sql",
        "contents": "ALTER TABLE table_name ADD column_name VARCHAR(255);"
      }
    }
  }'
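Because tokenizationJobId, rulesetType, createdBy, and createdTime are read-only, a common update pattern is: GET the job, drop the read-only fields, apply the changes, and PUT the result. A Python sketch of that step; to_update_body is a hypothetical helper (the field list comes from the schema above, and the GET/PUT calls themselves are omitted):

```python
# Read-only fields from the TokenizationJob response schema; the server
# rejects or ignores these in a PUT body, so strip them client-side.
READ_ONLY_FIELDS = {"tokenizationJobId", "rulesetType",
                    "createdBy", "createdTime"}

def to_update_body(job: dict, **changes) -> dict:
    """Build a PUT body from a GET response: strip read-only fields,
    then apply the requested field changes."""
    body = {k: v for k, v in job.items() if k not in READ_ONLY_FIELDS}
    body.update(changes)
    return body
```

For example, to_update_body(fetched_job, feedbackSize=50000) yields a body suitable for the PUT request shown in the curl example above.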

Responses

Success

Body: application/json

tokenizationJobId: integer(int32), read-only

The ID number of the tokenization job. This field is auto-generated by the Masking Engine.

jobName: string, <= 255 characters, required

The name of the tokenization job. Once the tokenization job is created, this field cannot be changed.

Example: "some_tokenization_job"

rulesetId: integer(int32), required

The ID of the ruleset that this tokenization job is based on. Once the tokenization job is created, the underlying environment that is inferred from the ruleset will be unchangeable. That is, the job can only be updated to reference a ruleset that is in the same environment as the environment of the original ruleset.

Example: 7

rulesetType: string, read-only

The type of the ruleset that this tokenization job is assigned to.

createdBy: string, <= 255 characters, read-only

The user that created the tokenization job. This field is auto-generated by the Masking Engine.

createdTime: string(date-time), read-only

The time when the tokenization job was created. This field is auto-generated by the Masking Engine.

email: string

The email address to send job status notifications to; note that the SMTP settings must be configured first to receive notifications.

feedbackSize: integer(int32), >= 1

The granularity with which the Masking Engine provides updates on the progress of the tokenization job. For instance, a feedbackSize of 50000 results in log updates whenever 50000 rows are processed during the masking phase.

Example: 100000

jobDescription: string, <= 255 characters

A description of the job.

Example: "This example illustrates a TokenizationJob with just a handful of the possible fields set. It is meant to exemplify a simple JSON body that can be passed to the endpoint to create a TokenizationJob."

maxMemory: integer(int32)

The maximum amount of memory, in MB, that the tokenization job can consume during execution.

minMemory: integer(int32)

The minimum amount of memory, in MB, that the tokenization job can consume during execution.

multiTenant: boolean

This field determines whether the tokenization job, after creation, can be executed using a connector that is different from the underlying connector associated with the ruleset that this tokenization job is based on.

Default: false

numInputStreams: integer(int32), >= 1

This field controls the amount of parallelism that the tokenization job uses to extract out the data to be masked. For instance, when masking a database, specifying 5 input streams results in the tokenization job reading up to 5 database tables in parallel and then masking those 5 streams of data in parallel. The higher the value of this field, the more potential parallelism there will be in the job, but the tokenization job will consume more memory. If the number of input streams exceeds the number of units being masked (e.g. tables or files), then the excess streams will do nothing.

Default: 1

onTheFlyMasking: boolean

This field determines whether the tokenization job will be performed InPlace or OnTheFly. The process for InPlace masking is to read out the data to be masked, mask the data, and then load the masked data back into the original data source. The process for OnTheFly masking is to read out the data to be masked, mask the data, and then load the masked data back into a different data source. When masking OnTheFly, the field 'onTheFlyMaskingSource' must be provided.

Default: false

databaseMaskingOptions: object (DatabaseMaskingOptions)
onTheFlyMaskingSource: object (OnTheFlyMaskingSource)

failImmediately: boolean

This field determines whether the masking job will fail immediately or delay failure until job completion when a masking algorithm fails to mask its data. Setting this value to 'false' provides a means for a user to see all cumulative masking errors before the job is marked as failed.

Default: false

enabledTasks: Array of objects (JobTask)

This field determines what tasks to perform before/after a job from a set of available driver support tasks as indicated by the chosen target ruleset/connector.

Example: [{"taskId":1}]

streamRowLimit: integer, >= -1

This value constrains the total number of rows that may enter the job for each masking stream. A setting of 0 means unlimited. A value of -1 selects the default value. The default value for this setting varies by job type. The minimum explicit value allowed is 20.

Response
application/json
{ "jobName": "some_tokenization_job", "rulesetId": 7, "jobDescription": "This example illustrates a TokenizationJob with just a handful of the possible fields set. It is meant to exemplify a simple JSON body that can be passed to the endpoint to create a TokenizationJob.", "feedbackSize": 100000, "onTheFlyMasking": false, "databaseMaskingOptions": { "batchUpdate": true, "commitSize": 20000, "dropConstraints": true, "prescript": { … }, "postscript": { … } } }

Delete tokenization job by ID

Request

Security
api_key
Path

tokenizationJobId: integer(int32), required

The ID of the tokenization job to delete

curl -i -X DELETE \
  'https://help-api.delphix.com/_mock/continuous-compliance-engine/2025.3.0.0/cc-engine-apis-2025.3.0.0/tokenization-jobs/{tokenizationJobId}' \
  -H 'Authorization: YOUR_API_KEY_HERE'

Responses

Success
