Platform
- The SQL User Settings have been combined with the general workspace user settings to create a unified experience for workspace users.
- The Databricks connector allows you to connect to compute resources configured in another Databricks workspace and return results to your current workspace. The Databricks JDBC Driver is included in the runtime (see the connector sketch after this list). For more information: https://docs.databricks.com/external-data/databricks.html
- Databricks Connect V2 is now in Public Preview. It is built on Spark Connect, enabling developers to use the power of Databricks from the IDE of their choice (see the session example after this list). For more information: https://docs.databricks.com/dev-tools/databricks-connect.html
- The Databricks account console now supports IP access lists to control access to the account console by IP address range. Use either the account console UI or the REST API to manage access (see the API sketch after this list). For more information: https://docs.databricks.com/security/network/ip-access-list-account.html
- There are now audit log events for changes to admin settings in both workspace-level and account-level logs.
- For workspaces that use the compliance security profile or enhanced security monitoring, there are new audit log events for the system log and the monitoring log. For more information: https://docs.databricks.com/security/privacy/monitor-log-schemas.html
- The Databricks Terraform provider has been updated to version 1.14.2.
- In Databricks Runtime 13.0 and above, Ganglia metrics are replaced with Databricks cluster metrics.
- You can now store cluster-scoped init scripts in workspace files, regardless of the Databricks Runtime version used by your compute. Databricks recommends storing all cluster-scoped init scripts in workspace files (see the cluster-creation sketch after this list). For more information: https://docs.databricks.com/files/workspace-init-scripts.html
- You can now work with non-notebook files in Databricks. Workspace files are enabled by default in all workspaces (see the file-access example after this list). For more information: https://docs.databricks.com/files/workspace.html
- Databricks Marketplace is now in ungated Public Preview on all three clouds. For more information: https://docs.databricks.com/marketplace/
- For files and notebooks in Databricks Repos, you can now configure the Python formatter based on a Black specification. For more information: https://docs.databricks.com/notebooks/notebooks-code.html#format-python-cells
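A minimal sketch of the Databricks connector, assuming a notebook where `spark` and `dbutils` are predefined; the remote host, warehouse path, secret scope, and table name are placeholders:

```python
# Read a table that lives in another Databricks workspace.
remote_table = (
    spark.read.format("databricks")
    .option("host", "adb-1234567890123456.7.azuredatabricks.net")  # remote workspace
    .option("httpPath", "/sql/1.0/warehouses/abcdef1234567890")    # remote SQL warehouse
    .option("personalAccessToken", dbutils.secrets.get("demo", "remote-pat"))
    .option("dbtable", "samples.nyctaxi.trips")
    .load()
)
remote_table.show(5)
```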
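With Databricks Connect V2 (`pip install databricks-connect`), a local script can build a Spark session against a remote cluster. A sketch with placeholder connection details:

```python
from databricks.connect import DatabricksSession

# Connection details can also come from a Databricks CLI config profile.
spark = DatabricksSession.builder.remote(
    host="https://adb-1234567890123456.7.azuredatabricks.net",
    token="<personal-access-token>",
    cluster_id="<cluster-id>",
).getOrCreate()

df = spark.read.table("samples.nyctaxi.trips")
df.show(5)
```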
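A sketch of managing account console IP access lists through the REST API, assuming the account-level endpoint mirrors the workspace-level ip-access-lists API; the account ID, token, and CIDR range are placeholders:

```python
import requests

resp = requests.post(
    "https://accounts.cloud.databricks.com/api/2.0/accounts/<account-id>/ip-access-lists",
    headers={"Authorization": "Bearer <account-admin-token>"},
    json={
        "label": "office",
        "list_type": "ALLOW",             # or "BLOCK"
        "ip_addresses": ["192.0.2.0/24"],
    },
)
resp.raise_for_status()
print(resp.json())
```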
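A sketch of attaching a cluster-scoped init script stored as a workspace file, using the Clusters API's `workspace` init-script type; the workspace URL, token, script path, and cluster settings are placeholders:

```python
import requests

resp = requests.post(
    "https://<workspace-url>/api/2.0/clusters/create",
    headers={"Authorization": "Bearer <token>"},
    json={
        "cluster_name": "init-script-demo",
        "spark_version": "13.0.x-scala2.12",
        "node_type_id": "i3.xlarge",
        "num_workers": 1,
        "init_scripts": [
            # Points at a shell script stored as a workspace file.
            {"workspace": {"destination": "/Users/someone@example.com/setup.sh"}}
        ],
    },
)
resp.raise_for_status()
print(resp.json()["cluster_id"])
```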
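Workspace files can be read with ordinary Python file APIs from notebook code. A sketch, assuming a placeholder config.json stored alongside the notebook:

```python
import json

# Relative paths resolve against the notebook's workspace directory;
# an absolute /Workspace/... path also works.
with open("config.json") as f:
    config = json.load(f)
print(config)
```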
Delta Lake
- For Delta Live Tables, the number of allowed notebooks per pipeline has increased from 25 to 100.
- This release allows the use of additional commands in a spark.sql(<command>) function, including DESC, DESCRIBE, SHOW TBLPROPERTIES, SHOW TABLES, SHOW NAMESPACES, SHOW COLUMNS IN, SHOW FUNCTIONS, SHOW CATALOGS, SHOW SCHEMAS, SHOW DATABASES, and SHOW CREATE TABLE (see the pipeline example after this list).
- The Unity Catalog and Delta Live Tables integration is targeting ungated Public Preview for both SQL and Python (see the sketch after this list). For more information: https://docs.databricks.com/delta-live-tables/unity-catalog.html
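A sketch of the newly allowed metadata commands inside a Delta Live Tables pipeline, assuming the pipeline notebook's predefined `spark` session; the schema name is a placeholder:

```python
import dlt

@dlt.table(comment="Snapshot of table metadata, built from a SHOW command")
def table_inventory():
    # SHOW TABLES (like the other listed commands) can now be issued
    # through spark.sql() inside a pipeline.
    return spark.sql("SHOW TABLES IN samples.nyctaxi")
```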
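With the Unity Catalog integration, the pipeline's target catalog and schema are set in the pipeline configuration, and table definitions stay unchanged. A minimal sketch with a placeholder source path:

```python
import dlt

@dlt.table(comment="Raw trips published to a Unity Catalog schema")
def trips_raw():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("s3://<bucket>/landing/trips")
    )
```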
Governance
- You can use the settings kafka.ssl.truststore.location and kafka.ssl.keystore.location to store Kafka certificates in external locations managed by Unity Catalog when you use Structured Streaming on shared access mode clusters (requires Databricks Runtime 13.0 or above; see the streaming sketch after this list).
- The new privileges USE SHARE, USE RECIPIENT, USE PROVIDER, and SET SHARE PERMISSION enable delegation of share, recipient, and provider management tasks that would otherwise require the metastore admin role or object ownership. These privileges let you limit the number of users who hold the powerful metastore admin role (see the GRANT example after this list). For more information: https://docs.databricks.com/data-governance/unity-catalog/manage-privileges/privileges.html#delta-sharing
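A sketch of a Structured Streaming read on a shared access mode cluster (Databricks Runtime 13+) with certificate stores kept in a Unity Catalog external location; the broker, paths, and secret scope are placeholders:

```python
df = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1.example.com:9093")
    .option("subscribe", "events")
    .option("kafka.security.protocol", "SSL")
    # Certificate stores live in a UC-managed external location.
    .option("kafka.ssl.truststore.location",
            "abfss://certs@myaccount.dfs.core.windows.net/kafka/truststore.jks")
    .option("kafka.ssl.keystore.location",
            "abfss://certs@myaccount.dfs.core.windows.net/kafka/keystore.jks")
    .option("kafka.ssl.truststore.password", dbutils.secrets.get("kafka", "truststore-pw"))
    .option("kafka.ssl.keystore.password", dbutils.secrets.get("kafka", "keystore-pw"))
    .load()
)
```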
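A sketch of delegating Delta Sharing management with the new privileges, assuming they are granted at the metastore level as described on the linked privileges page; the group name is a placeholder:

```python
# Run as a metastore admin. Each statement hands one sharing-related
# capability to the sharing-admins group.
for privilege in ("USE SHARE", "USE RECIPIENT", "USE PROVIDER", "SET SHARE PERMISSION"):
    spark.sql(f"GRANT {privilege} ON METASTORE TO `sharing-admins`")
```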
Workflows
- You can view lineage information for your jobs in the Databricks Jobs UI. For more information: https://docs.databricks.com/workflows/jobs/jobs.html#view-lineage
Databricks SQL
- The ai_generate_text function returns text generated by a selected large language model (LLM) given a prompt (see the example after this list). This function is available only as a Public Preview on Databricks SQL Pro and Serverless.
- The TIMESTAMP_NTZ type represents values comprising the fields year, month, day, hour, minute, and second. All operations are performed without regard to time zone (see the example after this list).
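A sketch of calling ai_generate_text through the Databricks SQL connector (`pip install databricks-sql-connector`), since the function runs only on SQL Pro and Serverless warehouses; the connection details, Azure OpenAI parameters, and secret scope/key are placeholders modeled on the documented Azure OpenAI example:

```python
from databricks import sql

with sql.connect(
    server_hostname="<workspace-host>",
    http_path="/sql/1.0/warehouses/<warehouse-id>",
    access_token="<token>",
) as conn, conn.cursor() as cur:
    cur.execute("""
        SELECT ai_generate_text(
          'Summarize Delta Lake in one sentence.',
          'azure_openai/gpt-35-turbo',
          'apiKey', SECRET('llm_scope', 'azure_openai_key'),
          'deploymentName', '<deployment>',
          'resourceName', '<resource>',
          'temperature', CAST(0.0 AS DOUBLE)
        ) AS summary
    """)
    print(cur.fetchone()[0])
```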
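A small sketch of the time zone independence of TIMESTAMP_NTZ, runnable in a notebook:

```python
# The NTZ literal stores the wall-clock fields as-is, so changing the
# session time zone does not shift the value.
spark.conf.set("spark.sql.session.timeZone", "UTC")
spark.sql("SELECT TIMESTAMP_NTZ'2023-05-01 12:00:00' AS ts").show()

spark.conf.set("spark.sql.session.timeZone", "Asia/Tokyo")
spark.sql("SELECT TIMESTAMP_NTZ'2023-05-01 12:00:00' AS ts").show()
# Both show 2023-05-01 12:00:00. Casting to a zoned TIMESTAMP would
# interpret the value in the current session time zone.
```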