The new environment setting TONIC_DB_SCHEMA configures the schema for the Tonic Structural application database. The new environment setting TONIC_MIGRATION_ENABLE_LOGGING enables or disables logging when migrations are applied to the Structural application database. Note that if you set TONIC_DB_SCHEMA to a value other than public, you must restart the API container. Any existing data in the Structural application database is not migrated to the new schema. After the API restarts and the migrations are applied, you can import a backup of the existing data.
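As a rough illustration, here is how the two settings might be supplied to the API container. This is a minimal sketch, not the documented deployment procedure: the image name tonicai/tonic_api and the schema value tonic are placeholders for your own deployment.

```python
# Minimal sketch using the Docker SDK for Python; the image name and schema
# value below are placeholders, not documented Structural deployment values.
import docker

client = docker.from_env()

env = {
    "TONIC_DB_SCHEMA": "tonic",                 # hypothetical non-public schema
    "TONIC_MIGRATION_ENABLE_LOGGING": "true",   # log migration activity
}

# Because TONIC_DB_SCHEMA is set to a non-public value, the API container must
# be (re)started for the setting to take effect; migrations then run against
# the new schema on startup.
client.containers.run("tonicai/tonic_api", environment=env, detach=True)
```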
MySQL - You can now configure the following environment settings to override the Structural default behavior when a connection opens and a session is established; an illustrative sketch follows the list:
TONIC_MYSQL_NETWORK_READ_TIMEOUT
TONIC_MYSQL_NETWORK_WRITE_TIMEOUT
TONIC_MYSQL_WAIT_TIMEOUT
TONIC_MYSQL_LOCK_WAIT_TIMEOUT
TONIC_MYSQL_INNODB_LOCK_WAIT_TIMEOUT
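The mapping from these settings to MySQL behavior is not spelled out here; the sketch below only illustrates the standard MySQL session variables that timeouts like these typically correspond to. The variable names are standard MySQL, but the exact mapping to the TONIC_MYSQL_* settings, and all connection details, are assumptions.

```python
# Illustrative only: standard MySQL session variables that timeouts like the
# TONIC_MYSQL_* settings typically correspond to. The mapping is an assumption.
import pymysql

conn = pymysql.connect(host="mysql.example.com", user="tonic", password="secret")
with conn.cursor() as cur:
    cur.execute("SET SESSION net_read_timeout = 600")         # network read timeout
    cur.execute("SET SESSION net_write_timeout = 600")        # network write timeout
    cur.execute("SET SESSION wait_timeout = 28800")           # idle-session timeout
    cur.execute("SET SESSION lock_wait_timeout = 300")        # metadata lock wait
    cur.execute("SET SESSION innodb_lock_wait_timeout = 50")  # InnoDB row-lock wait
conn.close()
```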
Azure SSO - Added support for authenticating application service principals using the Entra ID client credentials flow. Service principals can access the Structural API. For the configuration requirements, refer to the Azure/Entra ID SSO configuration information in the Structural User Guide.
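As a minimal sketch of what the client credentials flow looks like for a service principal, assuming the Structural API accepts the resulting bearer token; the scope, base URL, and endpoint path are placeholders, not documented values.

```python
# Minimal client credentials sketch with MSAL; the scope, base URL, and
# endpoint path are placeholders, not documented Structural values.
import msal
import requests

app = msal.ConfidentialClientApplication(
    client_id="<app-registration-client-id>",
    client_credential="<client-secret>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

# No user is involved: the token is issued to the service principal itself.
token = app.acquire_token_for_client(scopes=["api://<structural-app-id>/.default"])

resp = requests.get(
    "https://structural.example.com/api/workspaces",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
print(resp.status_code)
```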
Databricks - Structural now supports writing Identity columns to tables.
File connector - You can now assign the Timestamp Shift and Date Truncation generators to Parquet date fields.
PostgreSQL - Removed the option to run PostgreSQL jobs using the older flow. All jobs now run using Data Pipeline v2.
File connector - Fixed an issue that caused authorization failures when using Assume Role to authorize access to Amazon S3 from Structural Cloud.
Fixed an issue where, after an import from a JSON file, the Subsetting view did not immediately reflect the state of the workspace.
Spark - Removed support for Livy on Hive.
For custom sensitivity rules, column matching rules are now always case insensitive. Previously, they were case sensitive.
SQL Server - Added support for:
Scheduling data generation - From the Jobs view (renamed from Job History) for a workspace, you can now configure data generation to run automatically on a schedule. The schedule consists of one or more cron expressions, along with the time zone to use for the schedule. The Structural API includes new endpoints to manage the job schedule.
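The new schedule endpoints are not spelled out here, so the following is a purely illustrative sketch: the endpoint path, payload fields, and authorization scheme are hypothetical, but it shows the shape of a schedule built from cron expressions plus a time zone.

```python
# Purely illustrative: the endpoint path, payload fields, and auth scheme are
# hypothetical, not the documented Structural API.
import requests

schedule = {
    "cronExpressions": ["0 2 * * 1-5"],  # 02:00 on weekdays
    "timeZone": "America/New_York",      # time zone the cron expressions use
}

resp = requests.post(
    "https://structural.example.com/api/workspaces/<workspace-id>/job-schedule",
    headers={"Authorization": "Apikey <structural-api-key>"},
    json=schedule,
)
resp.raise_for_status()
```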