Upgrade Go SDK to 0.54.0 (#2029)
## Changes

* Added
[a.AccountFederationPolicy](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/oauth2#AccountFederationPolicyAPI)
and
[a.ServicePrincipalFederationPolicy](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/oauth2#ServicePrincipalFederationPolicyAPI)
account-level services (see the usage sketch below).
* Added `IsSingleNode`, `Kind` and `UseMlRuntime` fields for Cluster
commands.
* Added `UpdateParameterSyntax` field for
[dashboards.MigrateDashboardRequest](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#MigrateDashboardRequest).
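
For illustration, a minimal Go sketch of the new account-level services. It assumes account credentials in the environment; the `ListAll` helper and the `ListAccountFederationPoliciesRequest` type name follow the SDK's usual generated shapes and should be verified against the linked godoc.

```go
package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/oauth2"
)

func main() {
	ctx := context.Background()

	// Account-level client; host, account ID, and credentials are read
	// from the environment (DATABRICKS_HOST, DATABRICKS_ACCOUNT_ID, ...).
	a, err := databricks.NewAccountClient()
	if err != nil {
		panic(err)
	}

	// List account-wide federation policies. The request type name is an
	// assumption based on the SDK's generated naming; check the godoc.
	policies, err := a.AccountFederationPolicy.ListAll(ctx, oauth2.ListAccountFederationPoliciesRequest{})
	if err != nil {
		panic(err)
	}
	for _, p := range policies {
		fmt.Println(p.Name)
	}
}
```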
andrewnester authored Dec 18, 2024
1 parent 042c8d8 commit 59f0859
Showing 12 changed files with 1,009 additions and 11 deletions.
2 changes: 1 addition & 1 deletion .codegen/_openapi_sha
@@ -1 +1 @@
7016dcbf2e011459416cf408ce21143bcc4b3a25
a6a317df8327c9b1e5cb59a03a42ffa2aabeef6d
2 changes: 2 additions & 0 deletions .gitattributes
@@ -8,6 +8,7 @@ cmd/account/custom-app-integration/custom-app-integration.go linguist-generated=
cmd/account/disable-legacy-features/disable-legacy-features.go linguist-generated=true
cmd/account/encryption-keys/encryption-keys.go linguist-generated=true
cmd/account/esm-enablement-account/esm-enablement-account.go linguist-generated=true
cmd/account/federation-policy/federation-policy.go linguist-generated=true
cmd/account/groups/groups.go linguist-generated=true
cmd/account/ip-access-lists/ip-access-lists.go linguist-generated=true
cmd/account/log-delivery/log-delivery.go linguist-generated=true
@@ -19,6 +20,7 @@ cmd/account/o-auth-published-apps/o-auth-published-apps.go linguist-generated=tr
cmd/account/personal-compute/personal-compute.go linguist-generated=true
cmd/account/private-access/private-access.go linguist-generated=true
cmd/account/published-app-integration/published-app-integration.go linguist-generated=true
cmd/account/service-principal-federation-policy/service-principal-federation-policy.go linguist-generated=true
cmd/account/service-principal-secrets/service-principal-secrets.go linguist-generated=true
cmd/account/service-principals/service-principals.go linguist-generated=true
cmd/account/settings/settings.go linguist-generated=true
54 changes: 50 additions & 4 deletions bundle/internal/schema/annotations_openapi.yml
@@ -70,6 +70,12 @@ github.com/databricks/cli/bundle/config/resources.Cluster:
If `cluster_log_conf` is specified, init script logs are sent to `<destination>/<cluster-ID>/init_scripts`.
instance_pool_id:
description: The optional ID of the instance pool to which the cluster belongs.
is_single_node:
description: |
This field can only be used with `kind`.
When set to true, Databricks will automatically set single-node-related `custom_tags`, `spark_conf`, and `num_workers`.
kind: {}
node_type_id:
description: |
This field encodes, through a single value, the resources available to each of
@@ -119,6 +125,11 @@ github.com/databricks/cli/bundle/config/resources.Cluster:
SSH public key contents that will be added to each Spark node in this cluster. The
corresponding private keys can be used to login with the user name `ubuntu` on port `2200`.
Up to 10 keys can be specified.
use_ml_runtime:
description: |
This field can only be used with `kind`.
`effective_spark_version` is determined by `spark_version` (DBR release), this field `use_ml_runtime`, and whether `node_type_id` is a GPU node.
workload_type: {}
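The annotations above describe how `is_single_node` and `use_ml_runtime` interact with `kind`. A hedged Go sketch of a cluster create request using all three new fields; the `compute.KindClassicPreview` constant and the placeholder node type and Spark version are assumptions.

```go
package clusters

import (
	"context"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/compute"
)

// createSingleNode shows the three new Cluster fields together.
func createSingleNode(ctx context.Context, w *databricks.WorkspaceClient) error {
	_, err := w.Clusters.Create(ctx, compute.CreateCluster{
		ClusterName:  "single-node-demo",
		SparkVersion: "15.4.x-scala2.12",
		NodeTypeId:   "i3.xlarge",
		// kind gates the new fields; the constant for CLASSIC_PREVIEW is
		// an assumption to verify against the generated compute package.
		Kind: compute.KindClassicPreview,
		// With kind set, Databricks fills in the single-node custom_tags,
		// spark_conf, and num_workers automatically.
		IsSingleNode: true,
		// Keep the standard (non-ML) runtime for this cluster.
		UseMlRuntime: false,
	})
	return err
}
```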
github.com/databricks/cli/bundle/config/resources.Dashboard:
create_time:
@@ -759,6 +770,12 @@ github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
If `cluster_log_conf` is specified, init script logs are sent to `<destination>/<cluster-ID>/init_scripts`.
instance_pool_id:
description: The optional ID of the instance pool to which the cluster belongs.
is_single_node:
description: |
This field can only be used with `kind`.
When set to true, Databricks will automatically set single-node-related `custom_tags`, `spark_conf`, and `num_workers`.
kind: {}
node_type_id:
description: |
This field encodes, through a single value, the resources available to each of
@@ -808,13 +825,24 @@ github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
SSH public key contents that will be added to each Spark node in this cluster. The
corresponding private keys can be used to login with the user name `ubuntu` on port `2200`.
Up to 10 keys can be specified.
use_ml_runtime:
description: |
This field can only be used with `kind`.
`effective_spark_version` is determined by `spark_version` (DBR release), this field `use_ml_runtime`, and whether `node_type_id` is a GPU node.
workload_type: {}
github.com/databricks/databricks-sdk-go/service/compute.DataSecurityMode:
_:
description: |
Data security mode decides what data governance model to use when accessing data
from a cluster.
The following modes can only be used with `kind`.
* `DATA_SECURITY_MODE_AUTO`: Databricks will choose the most appropriate access mode depending on your compute configuration.
* `DATA_SECURITY_MODE_STANDARD`: Alias for `USER_ISOLATION`.
* `DATA_SECURITY_MODE_DEDICATED`: Alias for `SINGLE_USER`.
The following modes can be used regardless of `kind`.
* `NONE`: No security isolation for multiple users sharing the cluster. Data governance features are not available in this mode.
* `SINGLE_USER`: A secure cluster that can only be exclusively used by a single user specified in `single_user_name`. Most programming languages, cluster features and data governance features are available in this mode.
* `USER_ISOLATION`: A secure cluster that can be shared by multiple users. Cluster users are fully isolated so that they cannot see each other's data and credentials. Most data governance features are supported in this mode. But programming languages and cluster features might be limited.
@@ -827,6 +855,9 @@ github.com/databricks/databricks-sdk-go/service/compute.DataSecurityMode:
* `LEGACY_SINGLE_USER`: This mode is for users migrating from legacy Passthrough on standard clusters.
* `LEGACY_SINGLE_USER_STANDARD`: This mode provides a way to run workloads with neither UC nor passthrough enabled.
enum:
- DATA_SECURITY_MODE_AUTO
- DATA_SECURITY_MODE_STANDARD
- DATA_SECURITY_MODE_DEDICATED
- NONE
- SINGLE_USER
- USER_ISOLATION
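The three new `DATA_SECURITY_MODE_*` values are only meaningful together with `kind`. A short, hedged sketch; the constant name `DataSecurityModeDataSecurityModeAuto` is assumed from the SDK's TYPE+VALUE enum naming scheme and should be checked against the generated code.

```go
package clusters

import "github.com/databricks/databricks-sdk-go/service/compute"

// autoModeSpec defers the access-mode choice to Databricks: with
// DATA_SECURITY_MODE_AUTO, the platform picks USER_ISOLATION or
// SINGLE_USER based on the rest of the configuration.
func autoModeSpec() compute.ClusterSpec {
	return compute.ClusterSpec{
		SparkVersion: "15.4.x-scala2.12",
		// Assumption: generated constant for CLASSIC_PREVIEW.
		Kind: compute.KindClassicPreview,
		// Assumption: generated constant for DATA_SECURITY_MODE_AUTO.
		DataSecurityMode: compute.DataSecurityModeDataSecurityModeAuto,
	}
}
```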
@@ -1068,6 +1099,17 @@ github.com/databricks/databricks-sdk-go/service/dashboards.LifecycleState:
enum:
- ACTIVE
- TRASHED
github.com/databricks/databricks-sdk-go/service/jobs.CleanRoomsNotebookTask:
clean_room_name:
description: The clean room that the notebook belongs to.
etag:
description: |-
Checksum to validate the freshness of the notebook resource (i.e. the notebook being run is the latest version).
It can be fetched by calling the :method:cleanroomassets/get API.
notebook_base_parameters:
description: Base parameters to be used for the clean room notebook job.
notebook_name:
description: Name of the notebook being run.
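A hedged Go sketch of wiring the new clean rooms notebook task into a job task; the clean room name, notebook name, and parameter values are placeholders.

```go
package tasks

import "github.com/databricks/databricks-sdk-go/service/jobs"

// cleanRoomTask builds a job task that runs a clean room notebook.
func cleanRoomTask() jobs.Task {
	return jobs.Task{
		TaskKey: "clean-room-analysis",
		CleanRoomsNotebookTask: &jobs.CleanRoomsNotebookTask{
			CleanRoomName: "partner-clean-room",
			NotebookName:  "shared-analysis",
			// Base parameters are passed to the notebook run; the
			// optional Etag field (fetched via cleanroomassets/get)
			// can pin the notebook to a known version.
			NotebookBaseParameters: map[string]string{"run_date": "2024-12-18"},
		},
	}
}
```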
github.com/databricks/databricks-sdk-go/service/jobs.Condition:
_:
enum:
@@ -1346,10 +1388,10 @@ github.com/databricks/databricks-sdk-go/service/jobs.JobsHealthMetric:
Specifies the health metric that is being evaluated for a particular health rule.
* `RUN_DURATION_SECONDS`: Expected total time for a run in seconds.
* `STREAMING_BACKLOG_BYTES`: An estimate of the maximum bytes of data waiting to be consumed across all streams. This metric is in Private Preview.
* `STREAMING_BACKLOG_RECORDS`: An estimate of the maximum offset lag across all streams. This metric is in Private Preview.
* `STREAMING_BACKLOG_SECONDS`: An estimate of the maximum consumer delay across all streams. This metric is in Private Preview.
* `STREAMING_BACKLOG_FILES`: An estimate of the maximum number of outstanding files across all streams. This metric is in Private Preview.
* `STREAMING_BACKLOG_BYTES`: An estimate of the maximum bytes of data waiting to be consumed across all streams. This metric is in Public Preview.
* `STREAMING_BACKLOG_RECORDS`: An estimate of the maximum offset lag across all streams. This metric is in Public Preview.
* `STREAMING_BACKLOG_SECONDS`: An estimate of the maximum consumer delay across all streams. This metric is in Public Preview.
* `STREAMING_BACKLOG_FILES`: An estimate of the maximum number of outstanding files across all streams. This metric is in Public Preview.
enum:
- RUN_DURATION_SECONDS
- STREAMING_BACKLOG_BYTES
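For context, a hedged sketch of a health rule using one of these backlog metrics, which this release re-labels from Private to Public Preview; the generated constant names are assumptions.

```go
package tasks

import "github.com/databricks/databricks-sdk-go/service/jobs"

// backlogHealth fails a run's health check when the estimated streaming
// backlog exceeds roughly 1 GiB across all streams.
var backlogHealth = &jobs.JobsHealthRules{
	Rules: []jobs.JobsHealthRule{{
		Metric: jobs.JobsHealthMetricStreamingBacklogBytes,
		Op:     jobs.JobsHealthOperatorGreaterThan,
		Value:  1 << 30, // ~1 GiB
	}},
}
```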
@@ -1651,6 +1693,10 @@ github.com/databricks/databricks-sdk-go/service/jobs.TableUpdateTriggerConfigura
and can be used to wait for a series of table updates before triggering a run. The
minimum allowed value is 60 seconds.
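This description appears to belong to the `wait_after_last_change_seconds` knob. A hedged sketch of a table-update trigger using it; the field names are assumed from the generated `jobs` types and the table name is a placeholder.

```go
package tasks

import "github.com/databricks/databricks-sdk-go/service/jobs"

// tableTrigger starts a run only after the listed tables have had no
// updates for ten minutes, batching bursts of updates into one run.
var tableTrigger = &jobs.TriggerSettings{
	TableUpdate: &jobs.TableUpdateTriggerConfiguration{
		TableNames:                 []string{"main.sales.orders"},
		WaitAfterLastChangeSeconds: 600, // minimum allowed is 60
	},
}
```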
github.com/databricks/databricks-sdk-go/service/jobs.Task:
clean_rooms_notebook_task:
description: |-
The task runs a [clean rooms](https://docs.databricks.com/en/clean-rooms/index.html) notebook
when the `clean_rooms_notebook_task` field is present.
condition_task:
description: |-
The task evaluates a condition that can be used to control the execution of other tasks when the `condition_task` field is present.
6 changes: 6 additions & 0 deletions bundle/internal/schema/annotations_openapi_overrides.yml
@@ -5,6 +5,9 @@ github.com/databricks/cli/bundle/config/resources.Cluster:
"docker_image":
"description": |-
PLACEHOLDER
"kind":
"description": |-
PLACEHOLDER
"permissions":
"description": |-
PLACEHOLDER
@@ -90,6 +93,9 @@ github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
"docker_image":
"description": |-
PLACEHOLDER
"kind":
"description": |-
PLACEHOLDER
"runtime_engine":
"description": |-
PLACEHOLDER
