From f42e1fb2cfffd1a5af5425749bf4193692628643 Mon Sep 17 00:00:00 2001 From: hectorcast-db Date: Fri, 28 Jun 2024 21:48:59 +0200 Subject: [PATCH 01/24] Release v1.48.2 (#3722) * Added isolation mode support for `databricks_external_location` & `databricks_storage_credential` ([#3704](https://github.com/databricks/terraform-provider-databricks/pull/3704)). * Add terraform support for periodic triggers ([#3700](https://github.com/databricks/terraform-provider-databricks/pull/3700)). --- CHANGELOG.md | 7 +++++++ common/version.go | 2 +- 2 files changed, 8 insertions(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index b2925d58d0..1c21baaf4d 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,12 @@ # Version changelog +## 1.48.2 + +### New Features and Improvements +* Added isolation mode support for `databricks_external_location` & `databricks_storage_credential` ([#3704](https://github.com/databricks/terraform-provider-databricks/pull/3704)). +* Add terraform support for periodic triggers ([#3700](https://github.com/databricks/terraform-provider-databricks/pull/3700)). + + ## 1.48.1 ### New Features and Improvements diff --git a/common/version.go b/common/version.go index 5aa00967e6..927e473e3a 100644 --- a/common/version.go +++ b/common/version.go @@ -3,7 +3,7 @@ package common import "context" var ( - version = "1.48.1" + version = "1.48.2" // ResourceName is resource name without databricks_ prefix ResourceName contextKey = 1 // Provider is the current instance of provider From d669d7a06bb2c0140ca878ef9adaaf360a0684aa Mon Sep 17 00:00:00 2001 From: vuong-nguyen <44292934+nkvuong@users.noreply.github.com> Date: Sun, 30 Jun 2024 11:21:59 +0100 Subject: [PATCH 02/24] remove references to basic auth (#3720) --- docs/guides/aws-private-link-workspace.md | 6 ++-- docs/guides/unity-catalog-azure.md | 2 +- docs/guides/unity-catalog-gcp.md | 2 +- docs/index.md | 37 +++++------------------ docs/resources/user.md | 4 +-- 5 files changed, 15 insertions(+), 36 deletions(-) diff --git a/docs/guides/aws-private-link-workspace.md b/docs/guides/aws-private-link-workspace.md index d680a62281..6879329d88 100644 --- a/docs/guides/aws-private-link-workspace.md +++ b/docs/guides/aws-private-link-workspace.md @@ -2,10 +2,10 @@ page_title: "Provisioning Databricks on AWS with Private Link" --- --> **Note** Refer to the [Databricks Terraform Registry modules](https://registry.terraform.io/modules/databricks/examples/databricks/latest) for Terraform modules and examples to deploy Azure Databricks resources. - # Provisioning Databricks on AWS with Private Link +-> **Note** Refer to the [Databricks Terraform Registry modules](https://registry.terraform.io/modules/databricks/examples/databricks/latest) for Terraform modules and examples to deploy Azure Databricks resources. + Databricks PrivateLink support enables private connectivity between users and their Databricks workspaces and between clusters on the data plane and core services on the control plane within the Databricks workspace infrastructure. You can use Terraform to deploy the underlying cloud resources and the private access settings resources automatically using a programmatic approach. This guide assumes you are deploying into an existing VPC and have set up credentials and storage configurations as per prior examples, notably here. 
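To make the programmatic approach above concrete, registering one of the required VPC endpoints with the account-level provider might look like the minimal sketch below. The `databricks.mws` provider alias, the variables, and the `aws_vpc_endpoint.backend_rest` reference are illustrative assumptions, not objects defined earlier in this guide.

```hcl
# Hypothetical registration of a back-end REST API VPC endpoint;
# all identifiers below are placeholders.
resource "databricks_mws_vpc_endpoint" "backend_rest" {
  provider            = databricks.mws
  account_id          = var.databricks_account_id
  aws_vpc_endpoint_id = aws_vpc_endpoint.backend_rest.id
  vpc_endpoint_name   = "backend-rest-${var.prefix}"
  region              = var.region
}
```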
![Private link backend](https://raw.githubusercontent.com/databricks/terraform-provider-databricks/main/docs/images/aws-e2-private-link-backend.png)

@@ -39,7 +39,7 @@ This guide takes you through the following high-level steps to set up a workspac

 ## Provider initialization

-To set up account-level resources, initialize [provider with `mws` alias](https://www.terraform.io/language/providers/configuration#alias-multiple-provider-configurations). See [provider authentication](../index.md#authenticating-with-hostname,-username,-and-password) for more details.
+To set up account-level resources, initialize [provider with `mws` alias](https://www.terraform.io/language/providers/configuration#alias-multiple-provider-configurations). See [provider authentication](../index.md#authenticating-with-databricks-managed-service-principal) for more details.

 ```hcl
 terraform {
diff --git a/docs/guides/unity-catalog-azure.md b/docs/guides/unity-catalog-azure.md
index d7fac487f1..fc1d29b0fa 100644
--- a/docs/guides/unity-catalog-azure.md
+++ b/docs/guides/unity-catalog-azure.md
@@ -31,7 +31,7 @@ To get started with Unity Catalog, this guide takes you through the following hi

 ## Provider initialization

-Initialize the 3 providers to set up the required resources. See [Databricks provider authentication](../index.md#authenticating-with-hostname,-username,-and-password), [Azure AD provider authentication](https://registry.terraform.io/providers/hashicorp/azuread/latest/docs#authenticating-to-azure-active-directory) and [Azure provider authentication](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs#authenticating-to-azure) for more details.
+Initialize the 3 providers to set up the required resources. See [Databricks provider authentication](../index.md#authenticating-with-databricks-managed-service-principal), [Azure AD provider authentication](https://registry.terraform.io/providers/hashicorp/azuread/latest/docs#authenticating-to-azure-active-directory) and [Azure provider authentication](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs#authenticating-to-azure) for more details.

 Define the required variables, and calculate the local values

diff --git a/docs/guides/unity-catalog-gcp.md b/docs/guides/unity-catalog-gcp.md
index d8efb5a71c..bc8e33d61d 100644
--- a/docs/guides/unity-catalog-gcp.md
+++ b/docs/guides/unity-catalog-gcp.md
@@ -31,7 +31,7 @@ To get started with Unity Catalog, this guide takes you through the following hi

 ## Provider initialization

-Initialize the 3 providers to set up the required resources. See [Databricks provider authentication](../index.md#authenticating-with-hostname,-username,-and-password), [Azure AD provider authentication](https://registry.terraform.io/providers/hashicorp/azuread/latest/docs#authenticating-to-azure-active-directory) and [Azure provider authentication](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs#authenticating-to-azure) for more details.
+Initialize the 3 providers to set up the required resources. See [Databricks provider authentication](../index.md#authentication), [Azure AD provider authentication](https://registry.terraform.io/providers/hashicorp/azuread/latest/docs#authenticating-to-azure-active-directory) and [Azure provider authentication](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs#authenticating-to-azure) for more details.
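As a rough sketch of such an initialization, the three providers named above could be wired up as follows; the alias, workspace URL variable, and provider versions are assumptions for illustration, not values mandated by the guide.

```hcl
terraform {
  required_providers {
    azuread = {
      source = "hashicorp/azuread"
    }
    azurerm = {
      source = "hashicorp/azurerm"
    }
    databricks = {
      source = "databricks/databricks"
    }
  }
}

provider "azuread" {}

provider "azurerm" {
  features {}
}

# Workspace-level Databricks provider; the workspace URL is a placeholder.
provider "databricks" {
  host = var.databricks_workspace_url
}
```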
Define the required variables, and calculate the local values diff --git a/docs/index.md b/docs/index.md index 575c196ae4..18efc26fe7 100644 --- a/docs/index.md +++ b/docs/index.md @@ -138,8 +138,7 @@ There are currently a number of supported methods to [authenticate](https://docs * [PAT Tokens](#authenticating-with-hostname-and-token) * AWS, Azure and GCP via [Databricks-managed Service Principals](#authenticating-with-databricks-managed-service-principal) * GCP via [Google Cloud CLI](#special-configurations-for-gcp) -* Azure Active Directory Tokens via [Azure CLI](#authenticating-with-azure-cli), [Azure-managed Service Principals](#authenticating-with-azure-service-principal), or [Managed Service Identities](#authenticating-with-azure-msi) -* Username and password pair (legacy) +* Azure Active Directory Tokens via [Azure CLI](#authenticating-with-azure-cli), [Azure-managed Service Principals](#authenticating-with-azure-managed-service-principal), or [Managed Service Identities](#authenticating-with-azure-msi) ### Authenticating with Databricks CLI credentials @@ -181,20 +180,6 @@ provider "databricks" { } ``` -### Authenticating with hostname, username, and password - -!> **Warning** This approach is not recommended for regular use. Instead, authenticate with [service principal](#authenticating-with-service-principal) - -You can use the `username` + `password` attributes to authenticate the provider for a workspace setup. Respective `DATABRICKS_USERNAME` and `DATABRICKS_PASSWORD` environment variables are applicable as well. - -``` hcl -provider "databricks" { - host = "https://accounts.cloud.databricks.com" - username = var.user - password = var.password -} -``` - ### Authenticating with Databricks-managed Service Principal You can use the `client_id` + `client_secret` attributes to authenticate with a Databricks-managed service principal at both the account and workspace levels in all supported clouds. The `client_id` is the `application_id` of the [Service Principal](resources/service_principal.md) and `client_secret` is its secret. You can generate the secret from Databricks Accounts Console (see [instruction](https://docs.databricks.com/dev-tools/authentication-oauth.html#step-2-create-an-oauth-secret-for-a-service-principal)) or by using the Terraform resource [databricks_service_principal_secret](resources/service_principal_secret.md). @@ -249,12 +234,10 @@ The provider block supports the following arguments: * `host` - (optional) This is the host of the Databricks workspace. It is a URL that you use to login to your workspace. Alternatively, you can provide this value as an environment variable `DATABRICKS_HOST`. * `token` - (optional) This is the API token to authenticate into the workspace. Alternatively, you can provide this value as an environment variable `DATABRICKS_TOKEN`. -* `username` - (optional) This is the username of the user that can log into the workspace. Alternatively, you can provide this value as an environment variable `DATABRICKS_USERNAME`. Recommended only for [creating workspaces in AWS](resources/mws_workspaces.md). -* `password` - (optional) This is the user's password that can log into the workspace. Alternatively, you can provide this value as an environment variable `DATABRICKS_PASSWORD`. Recommended only for [creating workspaces in AWS](resources/mws_workspaces.md). -* `config_file` - (optional) Location of the Databricks CLI credentials file created by `databricks configure --token` command (~/.databrickscfg by default). 
Check [Databricks CLI documentation](https://docs.databricks.com/dev-tools/cli/index.html#set-up-authentication) for more details. The provider uses configuration file credentials when you don't specify host/token/username/password/azure attributes. Alternatively, you can provide this value as an environment variable `DATABRICKS_CONFIG_FILE`. This field defaults to `~/.databrickscfg`.
+* `config_file` - (optional) Location of the Databricks CLI credentials file created by `databricks configure --token` command (~/.databrickscfg by default). Check [Databricks CLI documentation](https://docs.databricks.com/dev-tools/cli/index.html#set-up-authentication) for more details. The provider uses configuration file credentials when you don't specify host/token/azure attributes. Alternatively, you can provide this value as an environment variable `DATABRICKS_CONFIG_FILE`. This field defaults to `~/.databrickscfg`.
 * `profile` - (optional) Connection profile specified within ~/.databrickscfg. Please check [connection profiles section](https://docs.databricks.com/dev-tools/cli/index.html#connection-profiles) for more details. This field defaults to `DEFAULT`.
-* `account_id` - (optional for workspace-level operations, but required for account-level) Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/). Alternatively, you can provide this value as an environment variable `DATABRICKS_ACCOUNT_ID`. Only has effect when `host = "https://accounts.cloud.databricks.com/"`, and is currently used to provision account admins via [databricks_user](resources/user.md). In the future releases of the provider this property will also be used to specify the account for `databricks_mws_*` resources as well.
+* `account_id` - (optional for workspace-level operations, but required for account-level) Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/). Alternatively, you can provide this value as an environment variable `DATABRICKS_ACCOUNT_ID`. Only has effect when `host = "https://accounts.cloud.databricks.com/"`, and is currently used to provision account admins via [databricks_user](resources/user.md). In the future releases of the provider this property will also be used to specify the account for `databricks_mws_*` resources as well.
 * `auth_type` - (optional) enforce specific auth type to be used in very rare cases, where a single Terraform state manages Databricks workspaces on more than one cloud and `more than one authorization method configured` error is a false positive. Valid values are `pat`, `basic`, `oauth-m2m`, `azure-client-secret`, `azure-msi`, `azure-cli`, `google-credentials`, and `google-id`.

 ## Special configurations for Azure

@@ -378,8 +361,6 @@ The following configuration attributes can be passed via environment variables:

 | `auth_type` | `DATABRICKS_AUTH_TYPE` |
 | `host` | `DATABRICKS_HOST` |
 | `token` | `DATABRICKS_TOKEN` |
-| `username` | `DATABRICKS_USERNAME` |
-| `password` | `DATABRICKS_PASSWORD` |
 | `account_id` | `DATABRICKS_ACCOUNT_ID` |
 | `config_file` | `DATABRICKS_CONFIG_FILE` |
 | `profile` | `DATABRICKS_CONFIG_PROFILE` |

@@ -408,12 +389,10 @@ provider "databricks" {}

 1. Provider will check all the supported environment variables and set values of relevant arguments.
 2. In case any conflicting arguments are present, the plan will end with an error.
 3. Will check for the presence of `host` + `token` pair, continue trying otherwise.
-4. 
Will check for `host` + `username` + `password` presence, continue trying otherwise. -5. Will check for Azure workspace ID, `azure_client_secret` + `azure_client_id` + `azure_tenant_id` presence, continue trying otherwise. -6. Will check for availability of Azure MSI, if enabled via `azure_use_msi`, continue trying otherwise. -7. Will check for Azure workspace ID presence, and if `AZ CLI` returns an access token, continue trying otherwise. -8. Will check for the `~/.databrickscfg` file in the home directory, will fail otherwise. -9. Will check for `profile` presence and try picking from that file will fail otherwise. -10. Will check for `host` and `token` or `username`+`password` combination, and will fail if none of these exist. +4. Will check for Azure workspace ID, `azure_client_secret` + `azure_client_id` + `azure_tenant_id` presence, continue trying otherwise. +5. Will check for availability of Azure MSI, if enabled via `azure_use_msi`, continue trying otherwise. +6. Will check for Azure workspace ID presence, and if `AZ CLI` returns an access token, continue trying otherwise. +7. Will check for the `~/.databrickscfg` file in the home directory, will fail otherwise. +8. Will check for `profile` presence and try picking from that file will fail otherwise. Please check [Default Authentication Flow](https://github.com/databricks/databricks-sdk-go#default-authentication-flow) from [Databricks SDK for Go](https://docs.databricks.com/dev-tools/sdk-go.html) in case you need more details. diff --git a/docs/resources/user.md b/docs/resources/user.md index 75ee1698ac..03e16365c3 100644 --- a/docs/resources/user.md +++ b/docs/resources/user.md @@ -3,7 +3,7 @@ subcategory: "Security" --- # databricks_user Resource -This resource allows you to manage [users in Databricks Workspace](https://docs.databricks.com/administration-guide/users-groups/users.html), [Databricks Account Console](https://accounts.cloud.databricks.com/) or [Azure Databricks Account Console](https://accounts.azuredatabricks.net). You can also [associate](group_member.md) Databricks users to [databricks_group](group.md). Upon user creation the user will receive a password reset email. You can also get information about caller identity using [databricks_current_user](../data-sources/current_user.md) data source. +This resource allows you to manage [users in Databricks Workspace](https://docs.databricks.com/administration-guide/users-groups/users.html), [Databricks Account Console](https://accounts.cloud.databricks.com/) or [Azure Databricks Account Console](https://accounts.azuredatabricks.net). You can also [associate](group_member.md) Databricks users to [databricks_group](group.md). Upon user creation the user will receive a welcome email. You can also get information about caller identity using [databricks_current_user](../data-sources/current_user.md) data source. -> **Note** To assign account level users to workspace use [databricks_mws_permission_assignment](mws_permission_assignment.md). @@ -101,7 +101,7 @@ The following arguments are available: * `force` - (Optional) Ignore `cannot create user: User with username X already exists` errors and implicitly import the specific user into Terraform state, enforcing entitlements defined in the instance of resource. _This functionality is experimental_ and is designed to simplify corner cases, like Azure Active Directory synchronisation. * `force_delete_repos` - (Optional) This flag determines whether the user's repo directory is deleted when the user is deleted. 
It will have no impact when in the accounts SCIM API. False by default. * `force_delete_home_dir` - (Optional) This flag determines whether the user's home directory is deleted when the user is deleted. It will have not impact when in the accounts SCIM API. False by default. -* `disable_as_user_deletion` - (Optional) Deactivate the user when deleting the resource, rather than deleting the user entirely. Defaults to `true` when the provider is configured at the account-level and `false` when configured at the workspace-level. This flag is exclusive to force_delete_repos and force_delete_home_dir flags. +* `disable_as_user_deletion` - (Optional) Deactivate the user when deleting the resource, rather than deleting the user entirely. Defaults to `true` when the provider is configured at the account-level and `false` when configured at the workspace-level. This flag is exclusive to force_delete_repos and force_delete_home_dir flags. ## Attribute Reference From 0c252d47dbbcd17385d12e9b06a95725f68ff854 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aleksandar=20Dragojevi=C4=87?= Date: Sun, 30 Jun 2024 12:22:55 +0200 Subject: [PATCH 03/24] Fix invalid priviledges in grants.md (#3716) --- docs/resources/grants.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/resources/grants.md b/docs/resources/grants.md index 48679b5965..25f22c91af 100644 --- a/docs/resources/grants.md +++ b/docs/resources/grants.md @@ -274,15 +274,15 @@ resource "databricks_grants" "some" { } grant { principal = databricks_service_principal.my_sp.application_id - privileges = ["USE_SCHEMA", "MODIFY"] + privileges = ["CREATE_EXTERNAL_TABLE", "READ_FILES"] } grant { principal = databricks_group.my_group.display_name - privileges = ["USE_SCHEMA", "MODIFY"] + privileges = ["CREATE_EXTERNAL_TABLE", "READ_FILES"] } grant { principal = databricks_group.my_user.user_name - privileges = ["USE_SCHEMA", "MODIFY"] + privileges = ["CREATE_EXTERNAL_TABLE", "READ_FILES"] } } ``` From 1ba1772a28244c53db6fa6a71fcbd36d8032f878 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 1 Jul 2024 09:36:15 +0200 Subject: [PATCH 04/24] Bump github.com/hashicorp/hcl/v2 from 2.20.1 to 2.21.0 (#3684) Bumps [github.com/hashicorp/hcl/v2](https://github.com/hashicorp/hcl) from 2.20.1 to 2.21.0. - [Release notes](https://github.com/hashicorp/hcl/releases) - [Changelog](https://github.com/hashicorp/hcl/blob/main/CHANGELOG.md) - [Commits](https://github.com/hashicorp/hcl/compare/v2.20.1...v2.21.0) --- updated-dependencies: - dependency-name: github.com/hashicorp/hcl/v2 dependency-type: direct:production update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- go.mod | 2 +- go.sum | 8 ++++---- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/go.mod b/go.mod index 5a56fb699e..a421cdb136 100644 --- a/go.mod +++ b/go.mod @@ -7,7 +7,7 @@ require ( github.com/golang-jwt/jwt/v4 v4.5.0 github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320 github.com/hashicorp/hcl v1.0.0 - github.com/hashicorp/hcl/v2 v2.20.1 + github.com/hashicorp/hcl/v2 v2.21.0 github.com/hashicorp/terraform-plugin-log v0.9.0 github.com/hashicorp/terraform-plugin-sdk/v2 v2.34.0 github.com/stretchr/testify v1.9.0 diff --git a/go.sum b/go.sum index d2e63b3423..aa4bdff4a6 100644 --- a/go.sum +++ b/go.sum @@ -122,8 +122,8 @@ github.com/hashicorp/hc-install v0.6.4 h1:QLqlM56/+SIIGvGcfFiwMY3z5WGXT066suo/v9 github.com/hashicorp/hc-install v0.6.4/go.mod h1:05LWLy8TD842OtgcfBbOT0WMoInBMUSHjmDx10zuBIA= github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4= github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= -github.com/hashicorp/hcl/v2 v2.20.1 h1:M6hgdyz7HYt1UN9e61j+qKJBqR3orTWbI1HKBJEdxtc= -github.com/hashicorp/hcl/v2 v2.20.1/go.mod h1:TZDqQ4kNKCbh1iJp99FdPiUaVDDUPivbqxZulxDYqL4= +github.com/hashicorp/hcl/v2 v2.21.0 h1:lve4q/o/2rqwYOgUg3y3V2YPyD1/zkCLGjIV74Jit14= +github.com/hashicorp/hcl/v2 v2.21.0/go.mod h1:62ZYHrXgPoX8xBnzl8QzbWq4dyDsDtfCRgIq1rbJEvA= github.com/hashicorp/logutils v1.0.0 h1:dLEQVugN8vlakKOUE3ihGLTZJRB4j+M2cdTm/ORI65Y= github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64= github.com/hashicorp/terraform-exec v0.21.0 h1:uNkLAe95ey5Uux6KJdua6+cv8asgILFVWkd/RG0D2XQ= @@ -206,8 +206,8 @@ github.com/xanzy/ssh-agent v0.3.3/go.mod h1:6dzNDKs0J9rVPHPhaGCukekBHKqfl+L3KghI github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= github.com/zclconf/go-cty v1.14.4 h1:uXXczd9QDGsgu0i/QFR/hzI5NYCHLf6NQw/atrbnhq8= github.com/zclconf/go-cty v1.14.4/go.mod h1:VvMs5i0vgZdhYawQNq5kePSpLAoz8u1xvZgrPIxfnZE= -github.com/zclconf/go-cty-debug v0.0.0-20191215020915-b22d67c1ba0b h1:FosyBZYxY34Wul7O/MSKey3txpPYyCqVO5ZyceuQJEI= -github.com/zclconf/go-cty-debug v0.0.0-20191215020915-b22d67c1ba0b/go.mod h1:ZRKQfBXbGkpdV6QMzT3rU1kSTAnfu1dO8dPKjYprgj8= +github.com/zclconf/go-cty-debug v0.0.0-20240509010212-0d6042c53940 h1:4r45xpDWB6ZMSMNJFMOjqrGHynW3DIBuR2H9j0ug+Mo= +github.com/zclconf/go-cty-debug v0.0.0-20240509010212-0d6042c53940/go.mod h1:CmBdvvj3nqzfzJ6nTCIwDTPZ56aVGvDrmztiO5g3qrM= go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0= go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo= go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0 h1:4Pp6oUg3+e/6M4C0A/3kJ2VYa++dsWVTtGgLVj5xtHg= From df210b2aba89e3a00ce83abfee8bd4f446ae7f80 Mon Sep 17 00:00:00 2001 From: vuong-nguyen <44292934+nkvuong@users.noreply.github.com> Date: Mon, 1 Jul 2024 14:45:55 +0100 Subject: [PATCH 05/24] Refactored `databricks_cluster(s)` data sources to Go SDK (#3685) * relax cluster check * fix * fix * refactor `databricks_cluster` data source to Go SDK * refactor `databricks_clusters` data source to Go SDK --- clusters/data_cluster.go | 29 +++--- clusters/data_cluster_test.go | 171 ++++++++++++++------------------- clusters/data_clusters.go | 56 +++++------ clusters/data_clusters_test.go | 70 ++++++-------- 4 files changed, 141 insertions(+), 185 deletions(-) diff --git 
a/clusters/data_cluster.go b/clusters/data_cluster.go index 8a45b7afdf..73ae4a1e19 100644 --- a/clusters/data_cluster.go +++ b/clusters/data_cluster.go @@ -4,25 +4,24 @@ import ( "context" "fmt" + "github.com/databricks/databricks-sdk-go" + "github.com/databricks/databricks-sdk-go/service/compute" "github.com/databricks/terraform-provider-databricks/common" ) func DataSourceCluster() common.Resource { - type clusterData struct { - Id string `json:"id,omitempty" tf:"computed"` - ClusterId string `json:"cluster_id,omitempty" tf:"computed"` - Name string `json:"cluster_name,omitempty" tf:"computed"` - ClusterInfo *ClusterInfo `json:"cluster_info,omitempty" tf:"computed"` - } - return common.DataResource(clusterData{}, func(ctx context.Context, e interface{}, c *common.DatabricksClient) error { - data := e.(*clusterData) - clusterAPI := NewClustersAPI(ctx, c) + return common.WorkspaceData(func(ctx context.Context, data *struct { + Id string `json:"id,omitempty" tf:"computed"` + ClusterId string `json:"cluster_id,omitempty" tf:"computed"` + Name string `json:"cluster_name,omitempty" tf:"computed"` + ClusterInfo *compute.ClusterDetails `json:"cluster_info,omitempty" tf:"computed"` + }, w *databricks.WorkspaceClient) error { if data.Name != "" { - clusters, err := clusterAPI.List() + clusters, err := w.Clusters.ListAll(ctx, compute.ListClustersRequest{}) if err != nil { return err } - namedClusters := []ClusterInfo{} + namedClusters := []compute.ClusterDetails{} for _, clst := range clusters { cluster := clst if cluster.ClusterName == data.Name { @@ -37,16 +36,16 @@ func DataSourceCluster() common.Resource { } data.ClusterInfo = &namedClusters[0] } else if data.ClusterId != "" { - cls, err := clusterAPI.Get(data.ClusterId) + cls, err := w.Clusters.GetByClusterId(ctx, data.ClusterId) if err != nil { return err } - data.ClusterInfo = &cls + data.ClusterInfo = cls } else { return fmt.Errorf("you need to specify either `cluster_name` or `cluster_id`") } - data.Id = data.ClusterInfo.ClusterID - data.ClusterId = data.ClusterInfo.ClusterID + data.Id = data.ClusterInfo.ClusterId + data.ClusterId = data.ClusterInfo.ClusterId return nil }) diff --git a/clusters/data_cluster_test.go b/clusters/data_cluster_test.go index 9945634fcc..cd20edec0d 100644 --- a/clusters/data_cluster_test.go +++ b/clusters/data_cluster_test.go @@ -1,104 +1,81 @@ package clusters import ( - "fmt" "testing" + "github.com/databricks/databricks-sdk-go/experimental/mocks" + "github.com/databricks/databricks-sdk-go/service/compute" "github.com/databricks/terraform-provider-databricks/qa" - "github.com/stretchr/testify/assert" - "github.com/stretchr/testify/require" + "github.com/stretchr/testify/mock" ) func TestClusterDataByID(t *testing.T) { - d, err := qa.ResourceFixture{ - Fixtures: []qa.HTTPFixture{ - { - Method: "GET", - Resource: "/api/2.0/clusters/get?cluster_id=abc", - Response: ClusterInfo{ - ClusterID: "abc", - NumWorkers: 100, - ClusterName: "Shared Autoscaling", - SparkVersion: "7.1-scala12", - NodeTypeID: "i3.xlarge", - AutoterminationMinutes: 15, - State: ClusterStateRunning, - AutoScale: &AutoScale{ - MaxWorkers: 4, - }, + qa.ResourceFixture{ + MockWorkspaceClientFunc: func(m *mocks.MockWorkspaceClient) { + e := m.GetMockClustersAPI().EXPECT() + e.GetByClusterId(mock.Anything, "abc").Return(&compute.ClusterDetails{ + ClusterId: "abc", + NumWorkers: 100, + ClusterName: "Shared Autoscaling", + SparkVersion: "7.1-scala12", + NodeTypeId: "i3.xlarge", + AutoterminationMinutes: 15, + State: ClusterStateRunning, + Autoscale: 
&compute.AutoScale{ + MaxWorkers: 4, }, - }, + }, nil) }, Resource: DataSourceCluster(), HCL: `cluster_id = "abc"`, Read: true, NonWritable: true, ID: "abc", - }.Apply(t) - require.NoError(t, err) - assert.Equal(t, 15, d.Get("cluster_info.0.autotermination_minutes")) - assert.Equal(t, "Shared Autoscaling", d.Get("cluster_info.0.cluster_name")) - assert.Equal(t, "i3.xlarge", d.Get("cluster_info.0.node_type_id")) - assert.Equal(t, 4, d.Get("cluster_info.0.autoscale.0.max_workers")) - assert.Equal(t, "RUNNING", d.Get("cluster_info.0.state")) - - for k, v := range d.State().Attributes { - fmt.Printf("assert.Equal(t, %#v, d.Get(%#v))\n", v, k) - } + }.ApplyAndExpectData(t, map[string]any{ + "cluster_info.0.autotermination_minutes": 15, + "cluster_info.0.cluster_name": "Shared Autoscaling", + "cluster_info.0.node_type_id": "i3.xlarge", + "cluster_info.0.autoscale.0.max_workers": 4, + "cluster_info.0.state": "RUNNING", + }) } func TestClusterDataByName(t *testing.T) { - d, err := qa.ResourceFixture{ - Fixtures: []qa.HTTPFixture{ - { - Method: "GET", - Resource: "/api/2.0/clusters/list", - - Response: ClusterList{ - Clusters: []ClusterInfo{{ - ClusterID: "abc", - NumWorkers: 100, - ClusterName: "Shared Autoscaling", - SparkVersion: "7.1-scala12", - NodeTypeID: "i3.xlarge", - AutoterminationMinutes: 15, - State: ClusterStateRunning, - AutoScale: &AutoScale{ - MaxWorkers: 4, - }, - }}, + qa.ResourceFixture{ + MockWorkspaceClientFunc: func(m *mocks.MockWorkspaceClient) { + e := m.GetMockClustersAPI().EXPECT() + e.ListAll(mock.Anything, compute.ListClustersRequest{}).Return([]compute.ClusterDetails{{ + ClusterId: "abc", + NumWorkers: 100, + ClusterName: "Shared Autoscaling", + SparkVersion: "7.1-scala12", + NodeTypeId: "i3.xlarge", + AutoterminationMinutes: 15, + State: ClusterStateRunning, + Autoscale: &compute.AutoScale{ + MaxWorkers: 4, }, - }, + }}, nil) }, Resource: DataSourceCluster(), HCL: `cluster_name = "Shared Autoscaling"`, Read: true, NonWritable: true, ID: "_", - }.Apply(t) - require.NoError(t, err) - assert.Equal(t, 15, d.Get("cluster_info.0.autotermination_minutes")) - assert.Equal(t, "Shared Autoscaling", d.Get("cluster_info.0.cluster_name")) - assert.Equal(t, "i3.xlarge", d.Get("cluster_info.0.node_type_id")) - assert.Equal(t, 4, d.Get("cluster_info.0.autoscale.0.max_workers")) - assert.Equal(t, "RUNNING", d.Get("cluster_info.0.state")) - - for k, v := range d.State().Attributes { - fmt.Printf("assert.Equal(t, %#v, d.Get(%#v))\n", v, k) - } + }.ApplyAndExpectData(t, map[string]any{ + "cluster_info.0.autotermination_minutes": 15, + "cluster_info.0.cluster_name": "Shared Autoscaling", + "cluster_info.0.node_type_id": "i3.xlarge", + "cluster_info.0.autoscale.0.max_workers": 4, + "cluster_info.0.state": "RUNNING", + }) } func TestClusterDataByName_NotFound(t *testing.T) { qa.ResourceFixture{ - Fixtures: []qa.HTTPFixture{ - { - Method: "GET", - Resource: "/api/2.0/clusters/list", - - Response: ClusterList{ - Clusters: []ClusterInfo{}, - }, - }, + MockWorkspaceClientFunc: func(m *mocks.MockWorkspaceClient) { + e := m.GetMockClustersAPI().EXPECT() + e.ListAll(mock.Anything, compute.ListClustersRequest{}).Return([]compute.ClusterDetails{}, nil) }, Resource: DataSourceCluster(), HCL: `cluster_name = "Unknown"`, @@ -110,34 +87,34 @@ func TestClusterDataByName_NotFound(t *testing.T) { func TestClusterDataByName_DuplicateNames(t *testing.T) { qa.ResourceFixture{ - Fixtures: []qa.HTTPFixture{ - { - Method: "GET", - Resource: "/api/2.0/clusters/list", - - Response: ClusterList{ - Clusters: 
[]ClusterInfo{ - { - ClusterID: "abc", - NumWorkers: 100, - ClusterName: "Shared Autoscaling", - SparkVersion: "7.1-scala12", - NodeTypeID: "i3.xlarge", - AutoterminationMinutes: 15, - State: ClusterStateRunning, - }, - { - ClusterID: "def", - NumWorkers: 100, - ClusterName: "Shared Autoscaling", - SparkVersion: "7.1-scala12", - NodeTypeID: "i3.xlarge", - AutoterminationMinutes: 15, - State: ClusterStateRunning, - }, + MockWorkspaceClientFunc: func(m *mocks.MockWorkspaceClient) { + e := m.GetMockClustersAPI().EXPECT() + e.ListAll(mock.Anything, compute.ListClustersRequest{}).Return([]compute.ClusterDetails{ + { + ClusterId: "abc", + NumWorkers: 100, + ClusterName: "Shared Autoscaling", + SparkVersion: "7.1-scala12", + NodeTypeId: "i3.xlarge", + AutoterminationMinutes: 15, + State: ClusterStateRunning, + Autoscale: &compute.AutoScale{ + MaxWorkers: 4, + }, + }, + { + ClusterId: "def", + NumWorkers: 100, + ClusterName: "Shared Autoscaling", + SparkVersion: "7.1-scala12", + NodeTypeId: "i3.xlarge", + AutoterminationMinutes: 15, + State: ClusterStateRunning, + Autoscale: &compute.AutoScale{ + MaxWorkers: 4, }, }, - }, + }, nil) }, Resource: DataSourceCluster(), HCL: `cluster_name = "Shared Autoscaling"`, diff --git a/clusters/data_clusters.go b/clusters/data_clusters.go index 2628c4968d..da637762b5 100644 --- a/clusters/data_clusters.go +++ b/clusters/data_clusters.go @@ -4,42 +4,32 @@ import ( "context" "strings" + "github.com/databricks/databricks-sdk-go" + "github.com/databricks/databricks-sdk-go/service/compute" "github.com/databricks/terraform-provider-databricks/common" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" ) func DataSourceClusters() common.Resource { - return common.Resource{ - Read: func(ctx context.Context, d *schema.ResourceData, i *common.DatabricksClient) error { - clusters, err := NewClustersAPI(ctx, i).List() - if err != nil { - return err + return common.WorkspaceData(func(ctx context.Context, data *struct { + Id string `json:"id,omitempty" tf:"computed"` + Ids []string `json:"ids,omitempty" tf:"computed,slice_set"` + ClusterNameContains string `json:"cluster_name_contains"` + }, w *databricks.WorkspaceClient) error { + clusters, err := w.Clusters.ListAll(ctx, compute.ListClustersRequest{}) + if err != nil { + return err + } + ids := make([]string, 0, len(clusters)) + name_contains := strings.ToLower(data.ClusterNameContains) + for _, v := range clusters { + match_name := strings.Contains(strings.ToLower(v.ClusterName), name_contains) + if name_contains != "" && !match_name { + continue } - ids := schema.NewSet(schema.HashString, []any{}) - name_contains := strings.ToLower(d.Get("cluster_name_contains").(string)) - for _, v := range clusters { - match_name := strings.Contains(strings.ToLower(v.ClusterName), name_contains) - if name_contains != "" && !match_name { - continue - } - ids.Add(v.ClusterID) - } - d.Set("ids", ids) - d.SetId("_") - return nil - }, - Schema: map[string]*schema.Schema{ - "ids": { - Computed: true, - Type: schema.TypeSet, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, - }, - "cluster_name_contains": { - Optional: true, - Type: schema.TypeString, - }, - }, - } + ids = append(ids, v.ClusterId) + } + data.Ids = ids + data.Id = "_" + return nil + }) } diff --git a/clusters/data_clusters_test.go b/clusters/data_clusters_test.go index ddabc295fe..48d80afdfe 100644 --- a/clusters/data_clusters_test.go +++ b/clusters/data_clusters_test.go @@ -6,69 +6,59 @@ import ( "github.com/databricks/databricks-sdk-go/client" 
"github.com/databricks/databricks-sdk-go/config" + "github.com/databricks/databricks-sdk-go/experimental/mocks" + "github.com/databricks/databricks-sdk-go/service/compute" "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/stretchr/testify/assert" - "github.com/stretchr/testify/require" + "github.com/stretchr/testify/mock" ) func TestClustersDataSource(t *testing.T) { qa.ResourceFixture{ - Fixtures: []qa.HTTPFixture{ - { - Method: "GET", - Resource: "/api/2.0/clusters/list", - - Response: ClusterList{ - Clusters: []ClusterInfo{ - { - ClusterID: "b", - }, - { - ClusterID: "a", - }, - }, + MockWorkspaceClientFunc: func(m *mocks.MockWorkspaceClient) { + e := m.GetMockClustersAPI().EXPECT() + e.ListAll(mock.Anything, compute.ListClustersRequest{}).Return([]compute.ClusterDetails{ + { + ClusterId: "b", + }, + { + ClusterId: "a", }, - }, + }, nil) }, Resource: DataSourceClusters(), NonWritable: true, Read: true, ID: "_", - }.ApplyNoError(t) + }.ApplyAndExpectData(t, map[string]any{ + "ids": []string{"a", "b"}, + }) } func TestClustersDataSourceContainsName(t *testing.T) { - d, err := qa.ResourceFixture{ - Fixtures: []qa.HTTPFixture{ - { - Method: "GET", - Resource: "/api/2.0/clusters/list", - Response: ClusterList{ - Clusters: []ClusterInfo{ - { - ClusterID: "b", - ClusterName: "THIS NAME", - }, - { - ClusterID: "a", - ClusterName: "that name", - }, - }, + qa.ResourceFixture{ + MockWorkspaceClientFunc: func(m *mocks.MockWorkspaceClient) { + e := m.GetMockClustersAPI().EXPECT() + e.ListAll(mock.Anything, compute.ListClustersRequest{}).Return([]compute.ClusterDetails{ + { + ClusterId: "b", + ClusterName: "THIS NAME", + }, + { + ClusterId: "a", + ClusterName: "that name", }, - }, + }, nil) }, Resource: DataSourceClusters(), NonWritable: true, Read: true, ID: "_", HCL: `cluster_name_contains = "this"`, - }.Apply(t) - require.NoError(t, err) - ids := d.Get("ids").(*schema.Set) - assert.True(t, ids.Contains("b")) - assert.Equal(t, 1, ids.Len()) + }.ApplyAndExpectData(t, map[string]any{ + "ids": []string{"b"}, + }) } func TestClustersDataSourceErrorsOut(t *testing.T) { From fc889ccca82a0acd11752f903c3741d079ca8885 Mon Sep 17 00:00:00 2001 From: vuong-nguyen <44292934+nkvuong@users.noreply.github.com> Date: Tue, 2 Jul 2024 16:53:22 +0100 Subject: [PATCH 06/24] Renamed `databricks_catalog_workspace_binding` to `databricks_workspace_binding` (#3703) * rename resource * fix test --- catalog/resource_catalog_workspace_binding.go | 154 +--------- ...resource_catalog_workspace_binding_test.go | 97 ------ catalog/resource_workspace_binding.go | 151 +++++++++ catalog/resource_workspace_binding_test.go | 287 ++++++++++++++++++ docs/resources/catalog_workspace_binding.md | 2 +- docs/resources/workspace_binding.md | 47 +++ internal/acceptance/workspace_binding_test.go | 50 +++ provider/provider.go | 1 + 8 files changed, 540 insertions(+), 249 deletions(-) create mode 100644 catalog/resource_workspace_binding.go create mode 100644 catalog/resource_workspace_binding_test.go create mode 100644 docs/resources/workspace_binding.md create mode 100644 internal/acceptance/workspace_binding_test.go diff --git a/catalog/resource_catalog_workspace_binding.go b/catalog/resource_catalog_workspace_binding.go index 9cbdeba066..b091bf216c 100644 --- a/catalog/resource_catalog_workspace_binding.go +++ b/catalog/resource_catalog_workspace_binding.go @@ -1,159 +1,11 @@ package catalog import ( 
- "context" - "fmt" - "log" - "strconv" - "strings" - - "github.com/databricks/databricks-sdk-go/apierr" - "github.com/databricks/databricks-sdk-go/service/catalog" "github.com/databricks/terraform-provider-databricks/common" - "github.com/hashicorp/go-cty/cty" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" ) -var getSecurableName = func(d *schema.ResourceData) string { - securableName, ok := d.GetOk("securable_name") - if !ok { - securableName = d.Get("catalog_name") - } - return securableName.(string) -} - func ResourceCatalogWorkspaceBinding() common.Resource { - workspaceBindingSchema := common.StructToSchema(catalog.WorkspaceBinding{}, - func(m map[string]*schema.Schema) map[string]*schema.Schema { - m["catalog_name"] = &schema.Schema{ - Type: schema.TypeString, - Optional: true, - ExactlyOneOf: []string{"catalog_name", "securable_name"}, - Deprecated: "Please use 'securable_name' and 'securable_type instead.", - } - m["securable_name"] = &schema.Schema{ - Type: schema.TypeString, - Optional: true, - ExactlyOneOf: []string{"catalog_name", "securable_name"}, - } - m["securable_type"] = &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Default: "catalog", - } - m["binding_type"].Default = catalog.WorkspaceBindingBindingTypeBindingTypeReadWrite - m["binding_type"].ValidateFunc = validation.StringInSlice([]string{ - string(catalog.WorkspaceBindingBindingTypeBindingTypeReadWrite), - string(catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly), - }, false) - return m - }, - ) - return common.Resource{ - Schema: workspaceBindingSchema, - SchemaVersion: 1, - StateUpgraders: []schema.StateUpgrader{ - { - Version: 0, - Type: bindingSchemaV0(), - Upgrade: bindingMigrateV0, - }, - }, - Create: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error { - w, err := c.WorkspaceClient() - if err != nil { - return err - } - var update catalog.WorkspaceBinding - common.DataToStructPointer(d, workspaceBindingSchema, &update) - - securableName := getSecurableName(d) - _, err = w.WorkspaceBindings.UpdateBindings(ctx, catalog.UpdateWorkspaceBindingsParameters{ - Add: []catalog.WorkspaceBinding{update}, - SecurableName: securableName, - SecurableType: d.Get("securable_type").(string), - }) - d.SetId(fmt.Sprintf("%d|%s|%s", update.WorkspaceId, d.Get("securable_type").(string), securableName)) - return err - }, - Read: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error { - w, err := c.WorkspaceClient() - if err != nil { - return err - } - // TODO: fix Read operation by splitting `id` into parts... Test with actual import. Remove not necessary code in exporter? - workspaceId := int64(d.Get("workspace_id").(int)) - securable_name := getSecurableName(d) - securable_type := d.Get("securable_type").(string) - if workspaceId == 0 || securable_name == "" || securable_type == "" { - parts := strings.Split(d.Id(), "|") - if len(parts) != 3 { - return fmt.Errorf("incorrect binding id: %s. 
Correct format: <workspace_id>|<securable_type>|<securable_name>", d.Id())
-			}
-			securable_name = parts[2]
-			securable_type = parts[1]
-			workspaceId, err = strconv.ParseInt(parts[0], 10, 0)
-			if err != nil {
-				return fmt.Errorf("can't parse workspace_id: %w", err)
-			}
-			d.Set("securable_name", securable_name)
-			d.Set("securable_type", securable_type)
-			d.Set("workspace_id", workspaceId)
-		}
-		bindings, err := w.WorkspaceBindings.GetBindings(ctx, catalog.GetBindingsRequest{
-			SecurableName: securable_name,
-			SecurableType: securable_type,
-		})
-		if err != nil {
-			return err
-		}
-		for _, binding := range bindings.Bindings {
-			if binding.WorkspaceId == workspaceId {
-				return common.StructToData(binding, workspaceBindingSchema, d)
-			}
-		}
-		return apierr.NotFound("Catalog has no binding to this workspace")
-	},
-	Delete: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
-		w, err := c.WorkspaceClient()
-		if err != nil {
-			return err
-		}
-		var update catalog.WorkspaceBinding
-		common.DataToStructPointer(d, workspaceBindingSchema, &update)
-		_, err = w.WorkspaceBindings.UpdateBindings(ctx, catalog.UpdateWorkspaceBindingsParameters{
-			Remove:        []catalog.WorkspaceBinding{update},
-			SecurableName: getSecurableName(d),
-			SecurableType: d.Get("securable_type").(string),
-		})
-		return err
-	},
-	}
-}
-
-// migrate to v1 state, as catalog_name is moved to securable_name
-func bindingMigrateV0(ctx context.Context, rawState map[string]any, meta any) (map[string]any, error) {
-	newState := map[string]any{}
-	log.Printf("[INFO] Upgrade workspace binding schema")
-	newState["securable_name"] = rawState["catalog_name"]
-	newState["securable_type"] = "catalog"
-	newState["catalog_name"] = rawState["catalog_name"]
-	newState["workspace_id"] = rawState["workspace_id"]
-	newState["binding_type"] = string(catalog.WorkspaceBindingBindingTypeBindingTypeReadWrite)
-	return newState, nil
-}
-
-func bindingSchemaV0() cty.Type {
-	return (&schema.Resource{
-		Schema: map[string]*schema.Schema{
-			"catalog_name": {
-				Type:     schema.TypeString,
-				Optional: true,
-			},
-			"workspace_id": {
-				Type:     schema.TypeString,
-				Optional: true,
-			},
-		}}).CoreConfigSchema().ImpliedType()
+func ResourceCatalogWorkspaceBinding() common.Resource {
+	r := ResourceWorkspaceBinding()
+	r.DeprecationMessage = "Use `databricks_workspace_binding` instead."
+ return r } diff --git a/catalog/resource_catalog_workspace_binding_test.go b/catalog/resource_catalog_workspace_binding_test.go index bfaa713e77..dadf0fdfc9 100644 --- a/catalog/resource_catalog_workspace_binding_test.go +++ b/catalog/resource_catalog_workspace_binding_test.go @@ -106,103 +106,6 @@ func TestCatalogWorkspaceBindingsReadOnly_Create(t *testing.T) { }.ApplyNoError(t) } -func TestSecurableWorkspaceBindings_Create(t *testing.T) { - qa.ResourceFixture{ - Fixtures: []qa.HTTPFixture{ - { - Method: "PATCH", - Resource: "/api/2.1/unity-catalog/bindings/catalog/my_catalog", - ExpectedRequest: catalog.UpdateWorkspaceBindingsParameters{ - Add: []catalog.WorkspaceBinding{ - { - BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly, - WorkspaceId: int64(1234567890101112), - }, - }, - SecurableName: "my_catalog", - SecurableType: "catalog", - }, - Response: catalog.WorkspaceBindingsResponse{ - Bindings: []catalog.WorkspaceBinding{ - { - BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly, - WorkspaceId: int64(1234567890101112), - }, - }, - }, - }, { - Method: "GET", - Resource: "/api/2.1/unity-catalog/bindings/catalog/my_catalog?", - Response: catalog.WorkspaceBindingsResponse{ - Bindings: []catalog.WorkspaceBinding{ - { - BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly, - WorkspaceId: int64(1234567890101112), - }, - }, - }, - }, - }, - Resource: ResourceCatalogWorkspaceBinding(), - Create: true, - HCL: ` - securable_name = "my_catalog" - securable_type = "catalog" - workspace_id = "1234567890101112" - binding_type = "BINDING_TYPE_READ_ONLY" - `, - }.ApplyNoError(t) -} - -func TestSecurableWorkspaceBindings_Delete(t *testing.T) { - qa.ResourceFixture{ - Fixtures: []qa.HTTPFixture{ - { - Method: "PATCH", - Resource: "/api/2.1/unity-catalog/bindings/catalog/my_catalog", - ExpectedRequest: catalog.UpdateWorkspaceBindingsParameters{ - Remove: []catalog.WorkspaceBinding{ - { - BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly, - WorkspaceId: int64(1234567890101112), - }, - }, - SecurableName: "my_catalog", - SecurableType: "catalog", - }, - Response: catalog.WorkspaceBindingsResponse{ - Bindings: []catalog.WorkspaceBinding{ - { - BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly, - WorkspaceId: int64(1234567890101112), - }, - }, - }, - }, { - Method: "GET", - Resource: "/api/2.1/unity-catalog/bindings/catalog/my_catalog?", - Response: catalog.WorkspaceBindingsResponse{ - Bindings: []catalog.WorkspaceBinding{ - { - BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly, - WorkspaceId: int64(1234567890101112), - }, - }, - }, - }, - }, - Resource: ResourceCatalogWorkspaceBinding(), - Delete: true, - ID: "1234567890101112|catalog|my_catalog", - HCL: ` - securable_name = "my_catalog" - securable_type = "catalog" - workspace_id = "1234567890101112" - binding_type = "BINDING_TYPE_READ_ONLY" - `, - }.ApplyNoError(t) -} - func TestCatalogWorkspaceBindingsReadImport(t *testing.T) { qa.ResourceFixture{ Fixtures: []qa.HTTPFixture{ diff --git a/catalog/resource_workspace_binding.go b/catalog/resource_workspace_binding.go new file mode 100644 index 0000000000..0f558753b9 --- /dev/null +++ b/catalog/resource_workspace_binding.go @@ -0,0 +1,151 @@ +package catalog + +import ( + "context" + "fmt" + "log" + "strconv" + "strings" + + "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/databricks-sdk-go/service/catalog" + "github.com/databricks/terraform-provider-databricks/common" + 
"github.com/hashicorp/go-cty/cty" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +var getSecurableName = func(d *schema.ResourceData) string { + securableName, ok := d.GetOk("securable_name") + if !ok { + securableName = d.Get("catalog_name") + } + return securableName.(string) +} + +func ResourceWorkspaceBinding() common.Resource { + workspaceBindingSchema := common.StructToSchema(catalog.WorkspaceBinding{}, + func(m map[string]*schema.Schema) map[string]*schema.Schema { + m["catalog_name"] = &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ExactlyOneOf: []string{"catalog_name", "securable_name"}, + Deprecated: "Please use 'securable_name' and 'securable_type instead.", + } + m["securable_name"] = &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ExactlyOneOf: []string{"catalog_name", "securable_name"}, + } + m["securable_type"] = &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "catalog", + } + common.CustomizeSchemaPath(m, "securable_type").SetValidateFunc(validation.StringInSlice([]string{"catalog", "external-location", "storage-credential"}, false)) + common.CustomizeSchemaPath(m, "binding_type").SetDefault(catalog.WorkspaceBindingBindingTypeBindingTypeReadWrite).SetValidateFunc(validation.StringInSlice([]string{ + string(catalog.WorkspaceBindingBindingTypeBindingTypeReadWrite), + string(catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly), + }, false)) + return m + }, + ) + return common.Resource{ + Schema: workspaceBindingSchema, + SchemaVersion: 1, + StateUpgraders: []schema.StateUpgrader{ + { + Version: 0, + Type: bindingSchemaV0(), + Upgrade: bindingMigrateV0, + }, + }, + Create: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error { + w, err := c.WorkspaceClient() + if err != nil { + return err + } + var update catalog.WorkspaceBinding + common.DataToStructPointer(d, workspaceBindingSchema, &update) + securableName := getSecurableName(d) + securableType := d.Get("securable_type").(string) + _, err = w.WorkspaceBindings.UpdateBindings(ctx, catalog.UpdateWorkspaceBindingsParameters{ + Add: []catalog.WorkspaceBinding{update}, + SecurableName: securableName, + SecurableType: securableType, + }) + d.SetId(fmt.Sprintf("%d|%s|%s", update.WorkspaceId, securableType, securableName)) + return err + }, + Read: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error { + w, err := c.WorkspaceClient() + if err != nil { + return err + } + parts := strings.Split(d.Id(), "|") + if len(parts) != 3 { + return fmt.Errorf("incorrect binding id: %s. 
Correct format: ||", d.Id()) + } + securableName := parts[2] + securableType := parts[1] + workspaceId, err := strconv.ParseInt(parts[0], 10, 0) + if err != nil { + return fmt.Errorf("can't parse workspace_id: %w", err) + } + d.Set("securable_name", securableName) + d.Set("securable_type", securableType) + d.Set("workspace_id", workspaceId) + bindings, err := w.WorkspaceBindings.GetBindingsBySecurableTypeAndSecurableName(ctx, securableType, securableName) + if err != nil { + return err + } + for _, binding := range bindings.Bindings { + if binding.WorkspaceId == workspaceId { + return common.StructToData(binding, workspaceBindingSchema, d) + } + } + return apierr.NotFound(fmt.Sprintf("%s has no binding to this workspace", securableName)) + }, + Delete: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error { + w, err := c.WorkspaceClient() + if err != nil { + return err + } + var update catalog.WorkspaceBinding + common.DataToStructPointer(d, workspaceBindingSchema, &update) + _, err = w.WorkspaceBindings.UpdateBindings(ctx, catalog.UpdateWorkspaceBindingsParameters{ + Remove: []catalog.WorkspaceBinding{update}, + SecurableName: getSecurableName(d), + SecurableType: d.Get("securable_type").(string), + }) + return err + }, + } +} + +// migrate to v1 state, as catalog_name is moved to securableName +func bindingMigrateV0(ctx context.Context, rawState map[string]any, meta any) (map[string]any, error) { + newState := map[string]any{} + log.Printf("[INFO] Upgrade workspace binding schema") + newState["securable_name"] = rawState["catalog_name"] + newState["securable_type"] = "catalog" + newState["catalog_name"] = rawState["catalog_name"] + newState["workspace_id"] = rawState["workspace_id"] + newState["binding_type"] = string(catalog.WorkspaceBindingBindingTypeBindingTypeReadWrite) + return newState, nil +} + +func bindingSchemaV0() cty.Type { + return (&schema.Resource{ + Schema: map[string]*schema.Schema{ + "catalog_name": { + Type: schema.TypeString, + Optional: true, + }, + "workspace_id": { + Type: schema.TypeString, + Optional: true, + }, + }}).CoreConfigSchema().ImpliedType() +} diff --git a/catalog/resource_workspace_binding_test.go b/catalog/resource_workspace_binding_test.go new file mode 100644 index 0000000000..4059e11b44 --- /dev/null +++ b/catalog/resource_workspace_binding_test.go @@ -0,0 +1,287 @@ +package catalog + +import ( + "testing" + + "github.com/databricks/databricks-sdk-go/experimental/mocks" + "github.com/databricks/databricks-sdk-go/service/catalog" + "github.com/databricks/terraform-provider-databricks/qa" + "github.com/stretchr/testify/mock" +) + +func TestWorkspaceBindingsCornerCases(t *testing.T) { + qa.ResourceCornerCases(t, ResourceWorkspaceBinding(), + qa.CornerCaseID("1234567890101112|catalog|my_catalog"), + qa.CornerCaseSkipCRUD("create")) +} + +func TestWorkspaceBindings_Create(t *testing.T) { + qa.ResourceFixture{ + Fixtures: []qa.HTTPFixture{ + { + Method: "PATCH", + Resource: "/api/2.1/unity-catalog/bindings/catalog/my_catalog", + ExpectedRequest: catalog.UpdateWorkspaceBindingsParameters{ + Add: []catalog.WorkspaceBinding{ + { + BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadWrite, + WorkspaceId: int64(1234567890101112), + }, + }, + SecurableName: "my_catalog", + SecurableType: "catalog", + }, + Response: catalog.WorkspaceBindingsResponse{ + Bindings: []catalog.WorkspaceBinding{ + { + BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadWrite, + WorkspaceId: int64(1234567890101112), + }, + }, + }, + 
}, { + Method: "GET", + Resource: "/api/2.1/unity-catalog/bindings/catalog/my_catalog?", + Response: catalog.WorkspaceBindingsResponse{ + Bindings: []catalog.WorkspaceBinding{ + { + BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadWrite, + WorkspaceId: int64(1234567890101112), + }, + }, + }, + }, + }, + Resource: ResourceWorkspaceBinding(), + Create: true, + HCL: ` + catalog_name = "my_catalog" + workspace_id = "1234567890101112" + `, + }.ApplyNoError(t) +} + +func TestWorkspaceBindingsReadOnly_Create(t *testing.T) { + qa.ResourceFixture{ + Fixtures: []qa.HTTPFixture{ + { + Method: "PATCH", + Resource: "/api/2.1/unity-catalog/bindings/catalog/my_catalog", + ExpectedRequest: catalog.UpdateWorkspaceBindingsParameters{ + Add: []catalog.WorkspaceBinding{ + { + BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly, + WorkspaceId: int64(1234567890101112), + }, + }, + SecurableName: "my_catalog", + SecurableType: "catalog", + }, + Response: catalog.WorkspaceBindingsResponse{ + Bindings: []catalog.WorkspaceBinding{ + { + BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly, + WorkspaceId: int64(1234567890101112), + }, + }, + }, + }, { + Method: "GET", + Resource: "/api/2.1/unity-catalog/bindings/catalog/my_catalog?", + Response: catalog.WorkspaceBindingsResponse{ + Bindings: []catalog.WorkspaceBinding{ + { + BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly, + WorkspaceId: int64(1234567890101112), + }, + }, + }, + }, + }, + Resource: ResourceWorkspaceBinding(), + Create: true, + HCL: ` + catalog_name = "my_catalog" + workspace_id = "1234567890101112" + binding_type = "BINDING_TYPE_READ_ONLY" + `, + }.ApplyNoError(t) +} + +func TestSecurableWorkspaceBindings_Create(t *testing.T) { + qa.ResourceFixture{ + Fixtures: []qa.HTTPFixture{ + { + Method: "PATCH", + Resource: "/api/2.1/unity-catalog/bindings/catalog/my_catalog", + ExpectedRequest: catalog.UpdateWorkspaceBindingsParameters{ + Add: []catalog.WorkspaceBinding{ + { + BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly, + WorkspaceId: int64(1234567890101112), + }, + }, + SecurableName: "my_catalog", + SecurableType: "catalog", + }, + Response: catalog.WorkspaceBindingsResponse{ + Bindings: []catalog.WorkspaceBinding{ + { + BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly, + WorkspaceId: int64(1234567890101112), + }, + }, + }, + }, { + Method: "GET", + Resource: "/api/2.1/unity-catalog/bindings/catalog/my_catalog?", + Response: catalog.WorkspaceBindingsResponse{ + Bindings: []catalog.WorkspaceBinding{ + { + BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly, + WorkspaceId: int64(1234567890101112), + }, + }, + }, + }, + }, + Resource: ResourceWorkspaceBinding(), + Create: true, + HCL: ` + securable_name = "my_catalog" + securable_type = "catalog" + workspace_id = "1234567890101112" + binding_type = "BINDING_TYPE_READ_ONLY" + `, + }.ApplyNoError(t) +} + +func TestSecurableWorkspaceBindings_CreateExtLocation(t *testing.T) { + qa.ResourceFixture{ + MockWorkspaceClientFunc: func(m *mocks.MockWorkspaceClient) { + e := m.GetMockWorkspaceBindingsAPI().EXPECT() + e.UpdateBindings(mock.Anything, catalog.UpdateWorkspaceBindingsParameters{ + Add: []catalog.WorkspaceBinding{{ + BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadWrite, + WorkspaceId: int64(1234567890101112), + }, + }, + SecurableName: "external_location", + SecurableType: "external-location", + }).Return(&catalog.WorkspaceBindingsResponse{ + Bindings: 
[]catalog.WorkspaceBinding{ + { + BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadWrite, + WorkspaceId: int64(1234567890101112), + }, + }, + }, nil) + e.GetBindingsBySecurableTypeAndSecurableName(mock.Anything, "external-location", "external_location").Return(&catalog.WorkspaceBindingsResponse{ + Bindings: []catalog.WorkspaceBinding{ + { + WorkspaceId: int64(1234567890101112), + }, + }, + }, nil) + }, + Resource: ResourceWorkspaceBinding(), + Create: true, + HCL: ` + securable_name = "external_location" + securable_type = "external-location" + workspace_id = "1234567890101112" + `, + }.ApplyNoError(t) +} + +func TestSecurableWorkspaceBindings_Delete(t *testing.T) { + qa.ResourceFixture{ + Fixtures: []qa.HTTPFixture{ + { + Method: "PATCH", + Resource: "/api/2.1/unity-catalog/bindings/catalog/my_catalog", + ExpectedRequest: catalog.UpdateWorkspaceBindingsParameters{ + Remove: []catalog.WorkspaceBinding{ + { + BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly, + WorkspaceId: int64(1234567890101112), + }, + }, + SecurableName: "my_catalog", + SecurableType: "catalog", + }, + Response: catalog.WorkspaceBindingsResponse{ + Bindings: []catalog.WorkspaceBinding{ + { + BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly, + WorkspaceId: int64(1234567890101112), + }, + }, + }, + }, { + Method: "GET", + Resource: "/api/2.1/unity-catalog/bindings/catalog/my_catalog?", + Response: catalog.WorkspaceBindingsResponse{ + Bindings: []catalog.WorkspaceBinding{ + { + BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly, + WorkspaceId: int64(1234567890101112), + }, + }, + }, + }, + }, + Resource: ResourceWorkspaceBinding(), + Delete: true, + ID: "1234567890101112|catalog|my_catalog", + HCL: ` + securable_name = "my_catalog" + securable_type = "catalog" + workspace_id = "1234567890101112" + binding_type = "BINDING_TYPE_READ_ONLY" + `, + }.ApplyNoError(t) +} + +func TestWorkspaceBindingsReadImport(t *testing.T) { + qa.ResourceFixture{ + Fixtures: []qa.HTTPFixture{ + { + Method: "GET", + Resource: "/api/2.1/unity-catalog/bindings/catalog/my_catalog?", + Response: catalog.WorkspaceBindingsResponse{ + Bindings: []catalog.WorkspaceBinding{ + { + BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly, + WorkspaceId: int64(1234567890101112), + }, + }, + }, + }, + }, + Resource: ResourceWorkspaceBinding(), + ID: "1234567890101112|catalog|my_catalog", + New: true, + Read: true, + }.ApplyAndExpectData(t, map[string]any{ + "workspace_id": 1234567890101112, + "securable_type": "catalog", + "securable_name": "my_catalog", + }) +} + +func TestWorkspaceBindingsReadErrors(t *testing.T) { + qa.ResourceFixture{ + Resource: ResourceWorkspaceBinding(), + ID: "1234567890101112|catalog", + New: true, + Read: true, + }.ExpectError(t, "incorrect binding id: 1234567890101112|catalog. Correct format: ||") + + qa.ResourceFixture{ + Resource: ResourceWorkspaceBinding(), + ID: "A234567890101112|catalog|my_catalog", + New: true, + Read: true, + }.ExpectError(t, "can't parse workspace_id: strconv.ParseInt: parsing \"A234567890101112\": invalid syntax") +} diff --git a/docs/resources/catalog_workspace_binding.md b/docs/resources/catalog_workspace_binding.md index 32312f9966..5520192fb5 100644 --- a/docs/resources/catalog_workspace_binding.md +++ b/docs/resources/catalog_workspace_binding.md @@ -3,7 +3,7 @@ subcategory: "Unity Catalog" --- # databricks_catalog_workspace_binding Resource --> **Note** This resource could be only used with workspace-level provider! 
+-> **Note** This resource has been deprecated and will be removed soon. Please use the [databricks_workspace_binding resource](./workspace_binding.md) instead.
 
 If you use workspaces to isolate user data access, you may want to limit catalog access to specific workspaces in your account, also known as workspace-catalog binding
 
diff --git a/docs/resources/workspace_binding.md b/docs/resources/workspace_binding.md
new file mode 100644
index 0000000000..8eaabe9422
--- /dev/null
+++ b/docs/resources/workspace_binding.md
@@ -0,0 +1,47 @@
+---
+subcategory: "Unity Catalog"
+---
+# databricks_workspace_binding Resource
+
+-> **Note** This resource can only be used with a workspace-level provider!
+
+If you use workspaces to isolate user data access, you may want to limit access to catalogs, external locations or storage credentials from specific workspaces in your account, also known as workspace binding.
+
+By default, Databricks assigns the securable to all workspaces attached to the current metastore. By using `databricks_workspace_binding`, the securable will be unassigned from all workspaces and only assigned explicitly using this resource.
+
+-> **Note**
+  To use this resource, the securable must have its isolation mode set to `ISOLATED`. Alternatively, the isolation mode can be set using the UI or API by following [this guide](https://docs.databricks.com/data-governance/unity-catalog/create-catalogs.html#configuration), [this guide](https://docs.databricks.com/en/connect/unity-catalog/external-locations.html#workspace-binding) or [this guide](https://docs.databricks.com/en/connect/unity-catalog/storage-credentials.html#optional-assign-a-storage-credential-to-specific-workspaces).
+
+-> **Note**
+  If the securable's isolation mode was set to `ISOLATED` using Terraform, then the securable will have been automatically bound to the workspace it was created from.
+
+## Example Usage
+
+```hcl
+resource "databricks_catalog" "sandbox" {
+  name           = "sandbox"
+  isolation_mode = "ISOLATED"
+}
+
+resource "databricks_workspace_binding" "sandbox" {
+  securable_name = databricks_catalog.sandbox.name
+  workspace_id   = databricks_mws_workspaces.other.workspace_id
+}
+```
+
+## Argument Reference
+
+The following arguments are required:
+
+* `workspace_id` - ID of the workspace. Change forces creation of a new resource.
+* `securable_name` - Name of securable. Change forces creation of a new resource.
+* `securable_type` - Type of securable. Defaults to `catalog`. Change forces creation of a new resource.
+* `binding_type` - Binding mode. Defaults to `BINDING_TYPE_READ_WRITE`. Possible values are `BINDING_TYPE_READ_ONLY` and `BINDING_TYPE_READ_WRITE`.
+
+## Import
+
+This resource can be imported by using a combination of workspace ID, securable type and name:
+
+```sh
+terraform import databricks_workspace_binding.this "<workspace_id>|<securable_type>|<securable_name>"
+```
diff --git a/internal/acceptance/workspace_binding_test.go b/internal/acceptance/workspace_binding_test.go
new file mode 100644
index 0000000000..8759b8a479
--- /dev/null
+++ b/internal/acceptance/workspace_binding_test.go
@@ -0,0 +1,50 @@
+package acceptance
+
+import (
+	"fmt"
+	"testing"
+)
+
+func workspaceBindingTemplateWithWorkspaceId(workspaceId string) string {
+	return fmt.Sprintf(`
+	# The dummy workspace needs to be assigned to the metastore for this test to pass
+	resource "databricks_metastore_assignment" "this" {
+		metastore_id = "{env.TEST_METASTORE_ID}"
+		workspace_id = {env.DUMMY_WORKSPACE_ID}
+	}
+
+	resource "databricks_catalog" "dev" {
+		name           = "dev{var.RANDOM}"
+		isolation_mode = "ISOLATED"
+	}
+
+	resource "databricks_catalog" "prod" {
+		name           = "prod{var.RANDOM}"
+		isolation_mode = "ISOLATED"
+	}
+
+	resource "databricks_workspace_binding" "dev" {
+		catalog_name = databricks_catalog.dev.name
+		workspace_id = %s
+	}
+
+	resource "databricks_workspace_binding" "prod" {
+		securable_name = databricks_catalog.prod.name
+		securable_type = "catalog"
+		workspace_id   = %s
+		binding_type   = "BINDING_TYPE_READ_ONLY"
+	}
+	`, workspaceId, workspaceId)
+}
+
+func TestUcAccWorkspaceBindingToOtherWorkspace(t *testing.T) {
+	unityWorkspaceLevel(t, step{
+		Template: workspaceBindingTemplateWithWorkspaceId("{env.DUMMY_WORKSPACE_ID}"),
+	})
+}
+
+func TestUcAccWorkspaceBindingToSameWorkspace(t *testing.T) {
+	unityWorkspaceLevel(t, step{
+		Template: workspaceBindingTemplateWithWorkspaceId("{env.THIS_WORKSPACE_ID}"),
+	})
+}
diff --git a/provider/provider.go b/provider/provider.go
index f3f0bcc921..65bbf8e90c 100644
--- a/provider/provider.go
+++ b/provider/provider.go
@@ -206,6 +206,7 @@ func DatabricksProvider() *schema.Provider {
 			"databricks_vector_search_endpoint": vectorsearch.ResourceVectorSearchEndpoint().ToResource(),
 			"databricks_vector_search_index":    vectorsearch.ResourceVectorSearchIndex().ToResource(),
 			"databricks_volume":                 catalog.ResourceVolume().ToResource(),
+			"databricks_workspace_binding":      catalog.ResourceWorkspaceBinding().ToResource(),
 			"databricks_workspace_conf":         workspace.ResourceWorkspaceConf().ToResource(),
 			"databricks_workspace_file":         workspace.ResourceWorkspaceFile().ToResource(),
 		},
From c6f949c8a29c829e4990bc013cf8e320f519b1d0 Mon Sep 17 00:00:00 2001
From: Alex Ott
Date: Tue, 2 Jul 2024 18:42:55 +0200
Subject: [PATCH 07/24] Exporter: fix generation of `run_as` blocks in
 `databricks_job` (#3724)

* Exporter: fix generation of `run_as` blocks in `databricks_job`

Because the `run_as` was marked as `computed`, it was ignored when generating the code.
* Ignore `run_as` for the current user --- exporter/context.go | 2 +- exporter/exporter_test.go | 20 +++++++++++++++++++- exporter/importables.go | 11 +++++++++++ exporter/test-data/run-job-child.json | 2 ++ exporter/test-data/run-job-main.json | 2 ++ 5 files changed, 35 insertions(+), 2 deletions(-) diff --git a/exporter/context.go b/exporter/context.go index 28fddf80f2..ee3f2a753b 100644 --- a/exporter/context.go +++ b/exporter/context.go @@ -360,10 +360,10 @@ func (ic *importContext) Run() error { if err != nil { return err } + ic.meUserName = me.UserName for _, g := range me.Groups { if g.Display == "admins" { ic.meAdmin = true - ic.meUserName = me.UserName break } } diff --git a/exporter/exporter_test.go b/exporter/exporter_test.go index 174a766409..30c92591d6 100644 --- a/exporter/exporter_test.go +++ b/exporter/exporter_test.go @@ -2553,7 +2553,19 @@ resource "databricks_pipeline" "def" { func TestImportingRunJobTask(t *testing.T) { qa.HTTPFixturesApply(t, []qa.HTTPFixture{ - meAdminFixture, + { + Method: "GET", + ReuseRequest: true, + Resource: "/api/2.0/preview/scim/v2/Me", + Response: scim.User{ + Groups: []scim.ComplexValue{ + { + Display: "admins", + }, + }, + UserName: "user@domain.com", + }, + }, noCurrentMetastoreAttached, emptyRepos, emptyIpAccessLIst, @@ -2596,5 +2608,11 @@ func TestImportingRunJobTask(t *testing.T) { assert.True(t, strings.Contains(contentStr, `job_id = databricks_job.jartask_932035899730845.id`)) assert.True(t, strings.Contains(contentStr, `resource "databricks_job" "runjobtask_1047501313827425"`)) assert.True(t, strings.Contains(contentStr, `resource "databricks_job" "jartask_932035899730845"`)) + assert.True(t, strings.Contains(contentStr, `run_as { + service_principal_name = "c1b2a35b-87c4-481a-a0fb-0508be621957" + }`)) + assert.False(t, strings.Contains(contentStr, `run_as { + user_name = "user@domain.com" + }`)) }) } diff --git a/exporter/importables.go b/exporter/importables.go index 165081f12c..032ab855fc 100644 --- a/exporter/importables.go +++ b/exporter/importables.go @@ -635,6 +635,17 @@ var resourcesMap map[string]importable = map[string]importable{ if js.NotificationSettings != nil { return reflect.DeepEqual(*js.NotificationSettings, sdk_jobs.JobNotificationSettings{}) } + case "run_as": + if js.RunAs != nil && (js.RunAs.UserName != "" || js.RunAs.ServicePrincipalName != "") { + var user string + if js.RunAs.UserName != "" { + user = js.RunAs.UserName + } else { + user = js.RunAs.ServicePrincipalName + } + return user == ic.meUserName + } + return true } if strings.HasPrefix(pathString, "task.") { parts := strings.Split(pathString, ".") diff --git a/exporter/test-data/run-job-child.json b/exporter/test-data/run-job-child.json index 4cc2c7a6f2..6131aed35a 100644 --- a/exporter/test-data/run-job-child.json +++ b/exporter/test-data/run-job-child.json @@ -1,6 +1,8 @@ { "created_time":1678702840675, "job_id":932035899730845, + "run_as_user_name": "c1b2a35b-87c4-481a-a0fb-0508be621957", + "run_as_owner": false, "settings": { "format":"MULTI_TASK", "max_concurrent_runs":1, diff --git a/exporter/test-data/run-job-main.json b/exporter/test-data/run-job-main.json index 0430f6fd8a..15390aa00d 100644 --- a/exporter/test-data/run-job-main.json +++ b/exporter/test-data/run-job-main.json @@ -1,6 +1,8 @@ { "created_time":1700654567867, "job_id":1047501313827425, + "run_as_user_name": "user@domain.com", + "run_as_owner": false, "settings": { "format":"MULTI_TASK", "max_concurrent_runs":1, From ff837ab7f8b45e1dba1e52dd3a020ba4059ae60c Mon Sep 17 00:00:00 2001 
From: Karol Date: Wed, 3 Jul 2024 09:05:27 +0200 Subject: [PATCH 08/24] Adds `databricks_volume` as data source (#3211) * data_volume * data_volume unit and acceptance tests * docs * WorkspaceDataWithCustomParams test * fixed formatting * Removing unnecessary changes to resource.go * refactored data_volume * making change for consitency with GO SDK v0.35.0 * Update catalog/data_volume.go * Update catalog/data_volume.go * data source as nested strucutre * review comments addressed * acceptance test --------- Co-authored-by: Alex Ott Co-authored-by: vuong-nguyen <44292934+nkvuong@users.noreply.github.com> --- catalog/data_volume.go | 25 +++++++++ catalog/data_volume_test.go | 50 ++++++++++++++++++ common/resource.go | 2 +- docs/data-sources/volume.md | 69 +++++++++++++++++++++++++ internal/acceptance/data_volume_test.go | 51 ++++++++++++++++++ provider/provider.go | 1 + 6 files changed, 197 insertions(+), 1 deletion(-) create mode 100644 catalog/data_volume.go create mode 100644 catalog/data_volume_test.go create mode 100644 docs/data-sources/volume.md create mode 100644 internal/acceptance/data_volume_test.go diff --git a/catalog/data_volume.go b/catalog/data_volume.go new file mode 100644 index 0000000000..598160206a --- /dev/null +++ b/catalog/data_volume.go @@ -0,0 +1,25 @@ +package catalog + +import ( + "context" + + "github.com/databricks/databricks-sdk-go" + "github.com/databricks/databricks-sdk-go/service/catalog" + "github.com/databricks/terraform-provider-databricks/common" +) + +func DataSourceVolume() common.Resource { + return common.WorkspaceData(func(ctx context.Context, data *struct { + Id string `json:"id,omitempty" tf:"computed"` + Name string `json:"name"` + Volume *catalog.VolumeInfo `json:"volume_info,omitempty" tf:"computed"` + }, w *databricks.WorkspaceClient) error { + volume, err := w.Volumes.ReadByName(ctx, data.Name) + if err != nil { + return err + } + data.Volume = volume + data.Id = volume.FullName + return nil + }) +} diff --git a/catalog/data_volume_test.go b/catalog/data_volume_test.go new file mode 100644 index 0000000000..a7af490f3e --- /dev/null +++ b/catalog/data_volume_test.go @@ -0,0 +1,50 @@ +package catalog + +import ( + "testing" + + "github.com/databricks/databricks-sdk-go/experimental/mocks" + "github.com/databricks/databricks-sdk-go/service/catalog" + "github.com/databricks/terraform-provider-databricks/qa" + "github.com/stretchr/testify/mock" +) + +func TestDataSourceVolume(t *testing.T) { + qa.ResourceFixture{ + MockWorkspaceClientFunc: func(m *mocks.MockWorkspaceClient) { + e := m.GetMockVolumesAPI().EXPECT() + e.ReadByName(mock.Anything, "a.b.c").Return(&catalog.VolumeInfo{ + FullName: "a.b.c", + CatalogName: "a", + SchemaName: "b", + Name: "c", + Owner: "account users", + VolumeType: catalog.VolumeTypeManaged, + }, nil) + }, + Resource: DataSourceVolume(), + HCL: ` + name="a.b.c"`, + Read: true, + NonWritable: true, + ID: "_", + }.ApplyAndExpectData(t, map[string]any{ + "name": "a.b.c", + "volume_info.0.full_name": "a.b.c", + "volume_info.0.catalog_name": "a", + "volume_info.0.schema_name": "b", + "volume_info.0.name": "c", + "volume_info.0.owner": "account users", + "volume_info.0.volume_type": "MANAGED", + }) +} + +func TestDataSourceVolume_Error(t *testing.T) { + qa.ResourceFixture{ + Fixtures: qa.HTTPFailures, + Resource: DataSourceVolume(), + Read: true, + NonWritable: true, + ID: "_", + }.ExpectError(t, "i'm a teapot") +} diff --git a/common/resource.go b/common/resource.go index 4ae0bef2e0..5f42a4ac48 100644 --- a/common/resource.go 
+++ b/common/resource.go
@@ -365,7 +365,7 @@ func genericDatabricksData[T, P, C any](
 	hasOther bool) Resource {
 	var dummy T
 	var other P
-	otherFields := StructToSchema(other, NoCustomize)
+	otherFields := StructToSchema(other, nil)
 	s := StructToSchema(dummy, func(m map[string]*schema.Schema) map[string]*schema.Schema {
 		// For WorkspaceData and AccountData, a single data type is used to represent all of the fields of
 		// the resource, so its configuration is correct. For the *WithParams methods, the SdkType parameter
diff --git a/docs/data-sources/volume.md b/docs/data-sources/volume.md
new file mode 100644
index 0000000000..9a32875a1f
--- /dev/null
+++ b/docs/data-sources/volume.md
@@ -0,0 +1,69 @@
+---
+subcategory: "Unity Catalog"
+---
+# databricks_volume Data Source
+
+Retrieves details about [databricks_volume](../resources/volume.md) that was created by Terraform or manually.
+A volume can be identified by its three-level (fully qualified) name (in the form of: `catalog_name`.`schema_name`.`volume_name`) as input. This can be retrieved programmatically using [databricks_volumes](../data-sources/volumes.md) data source.
+
+## Example Usage
+
+* Retrieve details of all volumes in a _things_ [databricks_schema](../resources/schema.md) of a _sandbox_ [databricks_catalog](../resources/catalog.md):
+
+```hcl
+data "databricks_volumes" "all" {
+  catalog_name = "sandbox"
+  schema_name  = "things"
+}
+
+data "databricks_volume" "this" {
+  for_each = data.databricks_volumes.all.ids
+  name     = each.value
+}
+```
+
+* Search for a specific volume by its fully qualified name:
+
+```hcl
+data "databricks_volume" "this" {
+  name = "catalog.schema.volume"
+}
+```
+
+## Argument Reference
+
+* `name` - (Required) a fully qualified name of [databricks_volume](../resources/volume.md): *`catalog`.`schema`.`volume`*
+
+
+## Attribute Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - ID of this Unity Catalog Volume in form of `<catalog>.<schema>.<volume>`.
+* `volume_info` - TableInfo object for a Unity Catalog table. This contains the following attributes:
+  * `name` - Name of table, relative to parent schema.
+  * `access_point` - the AWS access point to use when accessing s3 bucket for this volume's external location
+  * `browse_only` - indicates whether the principal is limited to retrieving metadata for the volume through the BROWSE privilege when include_browse is enabled in the request.
+ * `catalog_name` - the name of the catalog where the schema and the volume are + * `comment` - the comment attached to the volume + * `created_at` - the Unix timestamp at the volume's creation + * `created_by` - the identifier of the user who created the volume + * `encryption_details` - encryption options that apply to clients connecting to cloud storage + * `full_name` - the three-level (fully qualified) name of the volume + * `metastore_id` - the unique identifier of the metastore + * `name` - the name of the volume + * `owner` - the identifier of the user who owns the volume + * `schema_name` - the name of the schema where the volume is + * `storage_location` - the storage location on the cloud + * `updated_at` - the timestamp of the last time changes were made to the volume + * `updated_by` - the identifier of the user who updated the volume last time + * `volume_id` - the unique identifier of the volume + * `volume_type` - whether the volume is `MANAGED` or `EXTERNAL` + +## Related Resources + +The following resources are used in the same context: + +* [databricks_volume](../resources/volume.md) to manage volumes within Unity Catalog. +* [databricks_schema](../resources/schema.md) to manage schemas within Unity Catalog. +* [databricks_catalog](../resources/catalog.md) to manage catalogs within Unity Catalog. diff --git a/internal/acceptance/data_volume_test.go b/internal/acceptance/data_volume_test.go new file mode 100644 index 0000000000..f9d1ae7033 --- /dev/null +++ b/internal/acceptance/data_volume_test.go @@ -0,0 +1,51 @@ +package acceptance + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "github.com/stretchr/testify/require" +) + +func checkDataSourceVolume(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + _, ok := s.Modules[0].Resources["data.databricks_volume.this"] + require.True(t, ok, "data.databricks_volume.this has to be there") + return nil + } +} +func TestUcAccDataSourceVolume(t *testing.T) { + unityWorkspaceLevel(t, step{ + Template: ` + resource "databricks_catalog" "sandbox" { + name = "sandbox{var.RANDOM}" + comment = "this catalog is managed by terraform" + properties = { + purpose = "testing" + } + } + + resource "databricks_schema" "things" { + catalog_name = databricks_catalog.sandbox.id + name = "things{var.RANDOM}" + comment = "this database is managed by terraform" + properties = { + kind = "various" + } + } + + resource "databricks_volume" "this" { + name = "volume_data_source_test" + catalog_name = databricks_catalog.sandbox.name + schema_name = databricks_schema.things.name + volume_type = "MANAGED" + } + + data "databricks_volume" "this" { + name = databricks_volume.this.id + depends_on = [ databricks_volume.this ] + } + `, + Check: checkDataSourceVolume(t), + }) +} diff --git a/provider/provider.go b/provider/provider.go index 65bbf8e90c..a73b4e28a8 100644 --- a/provider/provider.go +++ b/provider/provider.go @@ -113,6 +113,7 @@ func DatabricksProvider() *schema.Provider { "databricks_table": catalog.DataSourceTable().ToResource(), "databricks_tables": catalog.DataSourceTables().ToResource(), "databricks_views": catalog.DataSourceViews().ToResource(), + "databricks_volume": catalog.DataSourceVolume().ToResource(), "databricks_volumes": catalog.DataSourceVolumes().ToResource(), "databricks_user": scim.DataSourceUser().ToResource(), "databricks_zones": clusters.DataSourceClusterZones().ToResource(), From 0d943ead9da02f88879f4c18aeafc374eb6e76e9 Mon Sep 17 00:00:00 2001 From: 
touchida <56789230+touchida@users.noreply.github.com> Date: Wed, 3 Jul 2024 16:06:27 +0900 Subject: [PATCH 09/24] Make the schedule.pause_status field read-only (#3692) --- catalog/resource_quality_monitor.go | 1 + docs/resources/quality_monitor.md | 1 - 2 files changed, 1 insertion(+), 1 deletion(-) diff --git a/catalog/resource_quality_monitor.go b/catalog/resource_quality_monitor.go index 9e9169fd4e..1d2beffc4e 100644 --- a/catalog/resource_quality_monitor.go +++ b/catalog/resource_quality_monitor.go @@ -53,6 +53,7 @@ func ResourceQualityMonitor() common.Resource { common.CustomizeSchemaPath(m, "profile_metrics_table_name").SetReadOnly() common.CustomizeSchemaPath(m, "status").SetReadOnly() common.CustomizeSchemaPath(m, "dashboard_id").SetReadOnly() + common.CustomizeSchemaPath(m, "schedule", "pause_status").SetReadOnly() return m }, ) diff --git a/docs/resources/quality_monitor.md b/docs/resources/quality_monitor.md index a3292f65d2..b01208c80e 100644 --- a/docs/resources/quality_monitor.md +++ b/docs/resources/quality_monitor.md @@ -112,7 +112,6 @@ table. * `schedule` - The schedule for automatically updating and refreshing metric tables. This block consists of following fields: * `quartz_cron_expression` - string expression that determines when to run the monitor. See [Quartz documentation](https://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html) for examples. * `timezone_id` - string with timezone id (e.g., `PST`) in which to evaluate the Quartz expression. - * `pause_status` - optional string field that indicates whether a schedule is paused (`PAUSED`) or not (`UNPAUSED`). * `skip_builtin_dashboard` - Whether to skip creating a default dashboard summarizing data quality metrics. * `slicing_exprs` - List of column expressions to slice data with for targeted analysis. The data is grouped by each expression independently, resulting in a separate slice for each predicate and its complements. For high-cardinality columns, only the top 100 unique values by frequency will generate slices. * `warehouse_id` - Optional argument to specify the warehouse for dashboard creation. If not specified, the first running warehouse will be used. 
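For illustration, a minimal sketch of how a monitor schedule might look after this change. The resource name, table, directory, and cron values below are placeholders rather than values from this patch, and `pause_status` is no longer set in configuration:

```hcl
resource "databricks_quality_monitor" "this" {
  table_name         = "main.default.events" # placeholder three-level table name
  assets_dir         = "/Shared/monitors/main.default.events"
  output_schema_name = "main.default"

  snapshot {}

  schedule {
    quartz_cron_expression = "0 0 12 * * ?" # run daily at noon
    timezone_id            = "UTC"
    # pause_status is read-only after this change; read it from state instead of setting it
  }
}
```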
From 75236a645b7d86819e43281e6c2623bbc5527528 Mon Sep 17 00:00:00 2001 From: vuong-nguyen <44292934+nkvuong@users.noreply.github.com> Date: Wed, 3 Jul 2024 08:51:05 +0100 Subject: [PATCH 10/24] Added support for binding storage credentials and external locations to specific workspaces (#3678) * add isolation mode * rename * doc * fix doc * add tests * add acceptance tests * add computed * typo * add tests * use correct isolation_mode * fix test --- catalog/bindings/bindings.go | 2 +- catalog/resource_external_location_test.go | 8 +-- catalog/resource_storage_credential_test.go | 10 +-- docs/resources/workspace_binding.md | 4 +- .../catalog_workspace_binding_test.go | 61 ------------------- internal/acceptance/external_location_test.go | 4 +- internal/acceptance/workspace_binding_test.go | 39 +++++++++--- 7 files changed, 44 insertions(+), 84 deletions(-) delete mode 100644 internal/acceptance/catalog_workspace_binding_test.go diff --git a/catalog/bindings/bindings.go b/catalog/bindings/bindings.go index 8c7743aaf1..6a2633ad8a 100644 --- a/catalog/bindings/bindings.go +++ b/catalog/bindings/bindings.go @@ -9,7 +9,7 @@ import ( ) func AddCurrentWorkspaceBindings(ctx context.Context, d *schema.ResourceData, w *databricks.WorkspaceClient, securableName string, securableType string) error { - if d.Get("isolation_mode") != "ISOLATED" { + if d.Get("isolation_mode") != "ISOLATED" && d.Get("isolation_mode") != "ISOLATION_MODE_ISOLATED" { return nil } // Bind the current workspace if the catalog is isolated, otherwise the read will fail diff --git a/catalog/resource_external_location_test.go b/catalog/resource_external_location_test.go index a460425101..314d2731d0 100644 --- a/catalog/resource_external_location_test.go +++ b/catalog/resource_external_location_test.go @@ -76,13 +76,13 @@ func TestCreateIsolatedExternalLocation(t *testing.T) { Url: "s3://foo/bar", CredentialName: "bcd", Comment: "def", - IsolationMode: "ISOLATED", + IsolationMode: "ISOLATION_MODE_ISOLATED", }).Return(&catalog.ExternalLocationInfo{ Name: "abc", Url: "s3://foo/bar", CredentialName: "bcd", Comment: "def", - IsolationMode: "ISOLATED", + IsolationMode: "ISOLATION_MODE_ISOLATED", MetastoreId: "e", Owner: "f", }, nil) @@ -112,7 +112,7 @@ func TestCreateIsolatedExternalLocation(t *testing.T) { Url: "s3://foo/bar", CredentialName: "bcd", Comment: "def", - IsolationMode: "ISOLATED", + IsolationMode: "ISOLATION_MODE_ISOLATED", MetastoreId: "e", Owner: "f", }, nil) @@ -124,7 +124,7 @@ func TestCreateIsolatedExternalLocation(t *testing.T) { url = "s3://foo/bar" credential_name = "bcd" comment = "def" - isolation_mode = "ISOLATED" + isolation_mode = "ISOLATION_MODE_ISOLATED" `, }.ApplyNoError(t) } diff --git a/catalog/resource_storage_credential_test.go b/catalog/resource_storage_credential_test.go index c9d2e07af6..cf9bf0118d 100644 --- a/catalog/resource_storage_credential_test.go +++ b/catalog/resource_storage_credential_test.go @@ -88,7 +88,7 @@ func TestCreateIsolatedStorageCredential(t *testing.T) { RoleArn: "def", }, Comment: "c", - IsolationMode: "ISOLATED", + IsolationMode: "ISOLATION_MODE_ISOLATED", }).Return(&catalog.StorageCredentialInfo{ Name: "a", AwsIamRole: &catalog.AwsIamRoleResponse{ @@ -98,7 +98,7 @@ func TestCreateIsolatedStorageCredential(t *testing.T) { MetastoreId: "d", Id: "1234-5678", Owner: "f", - IsolationMode: "ISOLATED", + IsolationMode: "ISOLATION_MODE_ISOLATED", }, nil) w.GetMockMetastoresAPI().EXPECT().Current(mock.Anything).Return(&catalog.MetastoreAssignment{ MetastoreId: "e", @@ -130,7 +130,7 @@ 
func TestCreateIsolatedStorageCredential(t *testing.T) {
 				MetastoreId:   "d",
 				Id:            "1234-5678",
 				Owner:         "f",
-				IsolationMode: "ISOLATED",
+				IsolationMode: "ISOLATION_MODE_ISOLATED",
 			}, nil)
 	},
 	Resource: ResourceStorageCredential(),
@@ -141,14 +141,14 @@ func TestCreateIsolatedStorageCredential(t *testing.T) {
 			role_arn = "def"
 		}
 		comment = "c"
-		isolation_mode = "ISOLATED"
+		isolation_mode = "ISOLATION_MODE_ISOLATED"
 		`,
 	}.ApplyAndExpectData(t, map[string]any{
 		"aws_iam_role.0.external_id": "123",
 		"aws_iam_role.0.role_arn":    "def",
 		"name":                       "a",
 		"storage_credential_id":      "1234-5678",
-		"isolation_mode":             "ISOLATED",
+		"isolation_mode":             "ISOLATION_MODE_ISOLATED",
 	})
 }
 
diff --git a/docs/resources/workspace_binding.md b/docs/resources/workspace_binding.md
index 8eaabe9422..198ce8fe21 100644
--- a/docs/resources/workspace_binding.md
+++ b/docs/resources/workspace_binding.md
@@ -35,8 +35,8 @@ The following arguments are required:
 
 * `workspace_id` - ID of the workspace. Change forces creation of a new resource.
 * `securable_name` - Name of securable. Change forces creation of a new resource.
-* `securable_type` - Type of securable. Defaults to `catalog`. Change forces creation of a new resource.
-* `binding_type` - Binding mode. Defaults to `BINDING_TYPE_READ_WRITE`. Possible values are `BINDING_TYPE_READ_ONLY` and `BINDING_TYPE_READ_WRITE`.
+* `securable_type` - Type of securable. Can be `catalog`, `external-location` or `storage-credential`. Defaults to `catalog`. Change forces creation of a new resource.
+* `binding_type` - (Optional) Binding mode. Defaults to `BINDING_TYPE_READ_WRITE`. For `catalog`, possible values are `BINDING_TYPE_READ_ONLY` and `BINDING_TYPE_READ_WRITE`. For `external-location` or `storage-credential`, no binding mode needs to be specified.
 
 ## Import
 
diff --git a/internal/acceptance/catalog_workspace_binding_test.go b/internal/acceptance/catalog_workspace_binding_test.go
deleted file mode 100644
index 5822195d6d..0000000000
--- a/internal/acceptance/catalog_workspace_binding_test.go
+++ /dev/null
@@ -1,61 +0,0 @@
-package acceptance
-
-import (
-	"testing"
-)
-
-func TestUcAccCatalogWorkspaceBindingToOtherWorkspace(t *testing.T) {
-	unityWorkspaceLevel(t, step{
-		Template: `
-		# The dummy workspace needs to be assigned to the metastore for this test to pass
-		resource "databricks_metastore_assignment" "this" {
-			metastore_id = "{env.TEST_METASTORE_ID}"
-			workspace_id = {env.DUMMY_WORKSPACE_ID}
-		}
-
-		resource "databricks_catalog" "dev" {
-			name           = "dev{var.RANDOM}"
-			isolation_mode = "ISOLATED"
-		}
-
-		resource "databricks_catalog_workspace_binding" "test" {
-			catalog_name = databricks_catalog.dev.name
-			workspace_id = {env.DUMMY_WORKSPACE_ID} # dummy workspace, not the authenticated workspace in this test
-		}
-		`,
-	})
-}
-
-func TestUcAccCatalogWorkspaceBindingToSameWorkspace(t *testing.T) {
-	unityWorkspaceLevel(t, step{
-		Template: `
-		resource "databricks_catalog" "dev" {
-			name           = "dev{var.RANDOM}"
-			isolation_mode = "ISOLATED"
-		}
-
-		resource "databricks_catalog_workspace_binding" "test" {
-			catalog_name = databricks_catalog.dev.name
-			workspace_id = {env.THIS_WORKSPACE_ID}
-		}
-		`,
-	})
-}
-
-func TestUcAccSecurableWorkspaceBindingToSameWorkspaceReadOnly(t *testing.T) {
-	unityWorkspaceLevel(t, step{
-		Template: `
-		resource "databricks_catalog" "dev" {
-			name           = "dev{var.RANDOM}"
-			isolation_mode = "ISOLATED"
-		}
-
-		resource "databricks_catalog_workspace_binding" "test" {
-			securable_name = databricks_catalog.dev.name
-			securable_type = "catalog"
-			workspace_id = 
{env.THIS_WORKSPACE_ID} - binding_type = "BINDING_TYPE_READ_ONLY" - } - `, - }) -} diff --git a/internal/acceptance/external_location_test.go b/internal/acceptance/external_location_test.go index d0454746d3..fd8f497750 100644 --- a/internal/acceptance/external_location_test.go +++ b/internal/acceptance/external_location_test.go @@ -21,7 +21,7 @@ func externalLocationTemplateWithOwner(comment string, owner string) string { name = "external-{var.STICKY_RANDOM}" url = "s3://{env.TEST_BUCKET}/some{var.STICKY_RANDOM}" credential_name = databricks_storage_credential.external.id - isolation_mode = "ISOLATED" + isolation_mode = "ISOLATION_MODE_ISOLATED" comment = "%s" owner = "%s" } @@ -37,7 +37,7 @@ func storageCredentialTemplateWithOwner(comment, owner string) string { } comment = "%s" owner = "%s" - isolation_mode = "ISOLATED" + isolation_mode = "ISOLATION_MODE_ISOLATED" force_update = true } `, comment, owner) diff --git a/internal/acceptance/workspace_binding_test.go b/internal/acceptance/workspace_binding_test.go index 8759b8a479..24636da693 100644 --- a/internal/acceptance/workspace_binding_test.go +++ b/internal/acceptance/workspace_binding_test.go @@ -21,7 +21,22 @@ func workspaceBindingTemplateWithWorkspaceId(workspaceId string) string { resource "databricks_catalog" "prod" { name = "prod{var.RANDOM}" isolation_mode = "ISOLATED" - } + } + + resource "databricks_storage_credential" "external" { + name = "cred-{var.RANDOM}" + aws_iam_role { + role_arn = "{env.TEST_METASTORE_DATA_ACCESS_ARN}" + } + isolation_mode = "ISOLATION_MODE_ISOLATED" + } + + resource "databricks_external_location" "some" { + name = "external-{var.RANDOM}" + url = "s3://{env.TEST_BUCKET}/some{var.RANDOM}" + credential_name = databricks_storage_credential.external.id + isolation_mode = "ISOLATION_MODE_ISOLATED" + } resource "databricks_workspace_binding" "dev" { catalog_name = databricks_catalog.dev.name @@ -33,8 +48,20 @@ func workspaceBindingTemplateWithWorkspaceId(workspaceId string) string { securable_type = "catalog" workspace_id = %s binding_type = "BINDING_TYPE_READ_ONLY" - } - `, workspaceId, workspaceId) + } + + resource "databricks_workspace_binding" "ext" { + securable_name = databricks_external_location.some.id + securable_type = "external-location" + workspace_id = %s + } + + resource "databricks_workspace_binding" "cred" { + securable_name = databricks_storage_credential.external.id + securable_type = "storage-credential" + workspace_id = %s + } + `, workspaceId, workspaceId, workspaceId, workspaceId) } func TestUcAccWorkspaceBindingToOtherWorkspace(t *testing.T) { @@ -42,9 +69,3 @@ func TestUcAccWorkspaceBindingToOtherWorkspace(t *testing.T) { Template: workspaceBindingTemplateWithWorkspaceId("{env.DUMMY_WORKSPACE_ID}"), }) } - -func TestUcAccWorkspaceBindingToSameWorkspace(t *testing.T) { - unityWorkspaceLevel(t, step{ - Template: workspaceBindingTemplateWithWorkspaceId("{env.THIS_WORKSPACE_ID}"), - }) -} From 411f85cfb59059f9bacbda75edd71a6006970268 Mon Sep 17 00:00:00 2001 From: Alex Ott Date: Thu, 4 Jul 2024 10:33:46 +0200 Subject: [PATCH 11/24] Exporter: use Go SDK structs for `databricks_job` resource (#3727) --- exporter/importables.go | 34 ++++++++++++++-------------------- exporter/util.go | 2 +- 2 files changed, 15 insertions(+), 21 deletions(-) diff --git a/exporter/importables.go b/exporter/importables.go index 032ab855fc..5eed0d0871 100644 --- a/exporter/importables.go +++ b/exporter/importables.go @@ -349,7 +349,7 @@ var resourcesMap map[string]importable = map[string]importable{ return nil 
}, Import: func(ic *importContext, r *resource) error { - var c compute.ClusterDetails + var c compute.ClusterSpec s := ic.Resources["databricks_cluster"].Schema common.DataToStructPointer(r.Data, s, &c) ic.importCluster(&c) @@ -457,17 +457,11 @@ var resourcesMap map[string]importable = map[string]importable{ MatchType: MatchPrefix, SearchValueTransformFunc: appendEndingSlashToDirName}, }, Import: func(ic *importContext, r *resource) error { - var job jobs.JobSettings + var job jobs.JobSettingsResource s := ic.Resources["databricks_job"].Schema common.DataToStructPointer(r.Data, s, &job) - ic.importClusterLegacy(job.NewCluster) - ic.Emit(&resource{ - Resource: "databricks_cluster", - ID: job.ExistingClusterID, - }) ic.emitPermissionsIfNotIgnored(r, fmt.Sprintf("/jobs/%s", r.ID), "job_"+ic.Importables["databricks_job"].Name(ic, r.Data)) - // Support for multitask jobs for _, task := range job.Tasks { if task.NotebookTask != nil { if task.NotebookTask.Source != "GIT" { @@ -484,7 +478,7 @@ var resourcesMap map[string]importable = map[string]importable{ if task.PipelineTask != nil { ic.Emit(&resource{ Resource: "databricks_pipeline", - ID: task.PipelineTask.PipelineID, + ID: task.PipelineTask.PipelineId, }) } if task.SparkPythonTask != nil { @@ -514,25 +508,25 @@ var resourcesMap map[string]importable = map[string]importable{ if task.SqlTask.Query != nil { ic.Emit(&resource{ Resource: "databricks_sql_query", - ID: task.SqlTask.Query.QueryID, + ID: task.SqlTask.Query.QueryId, }) } if task.SqlTask.Dashboard != nil { ic.Emit(&resource{ Resource: "databricks_sql_dashboard", - ID: task.SqlTask.Dashboard.DashboardID, + ID: task.SqlTask.Dashboard.DashboardId, }) } if task.SqlTask.Alert != nil { ic.Emit(&resource{ Resource: "databricks_sql_alert", - ID: task.SqlTask.Alert.AlertID, + ID: task.SqlTask.Alert.AlertId, }) } - if task.SqlTask.WarehouseID != "" { + if task.SqlTask.WarehouseId != "" { ic.Emit(&resource{ Resource: "databricks_sql_endpoint", - ID: task.SqlTask.WarehouseID, + ID: task.SqlTask.WarehouseId, }) } if task.SqlTask.File != nil && task.SqlTask.File.Source == "WORKSPACE" { @@ -567,22 +561,22 @@ var resourcesMap map[string]importable = map[string]importable{ } } } - if task.RunJobTask != nil && task.RunJobTask.JobID != 0 { + if task.RunJobTask != nil && task.RunJobTask.JobId != 0 { ic.Emit(&resource{ Resource: "databricks_job", - ID: strconv.FormatInt(task.RunJobTask.JobID, 10), + ID: strconv.FormatInt(task.RunJobTask.JobId, 10), }) ic.emitFilesFromMap(task.RunJobTask.JobParameters) } - ic.importClusterLegacy(task.NewCluster) + ic.importCluster(task.NewCluster) ic.Emit(&resource{ Resource: "databricks_cluster", - ID: task.ExistingClusterID, + ID: task.ExistingClusterId, }) ic.emitLibraries(task.Libraries) } for _, jc := range job.JobClusters { - ic.importClusterLegacy(jc.NewCluster) + ic.importCluster(&jc.NewCluster) } if job.RunAs != nil { if job.RunAs.UserName != "" { @@ -620,7 +614,7 @@ var resourcesMap map[string]importable = map[string]importable{ case "url", "format": return true } - var js jobs.JobSettings + var js jobs.JobSettingsResource common.DataToStructPointer(d, ic.Resources["databricks_job"].Schema, &js) switch pathString { case "email_notifications": diff --git a/exporter/util.go b/exporter/util.go index 86ce146e74..99f60021ee 100644 --- a/exporter/util.go +++ b/exporter/util.go @@ -120,7 +120,7 @@ func (ic *importContext) importClusterLegacy(c *clusters.Cluster) { ic.emitUserOrServicePrincipal(c.SingleUserName) } -func (ic *importContext) importCluster(c 
*compute.ClusterDetails) {
+func (ic *importContext) importCluster(c *compute.ClusterSpec) {
 	if c == nil {
 		return
 	}
From e8640654e963d205416c3d9d109f823a8800afee Mon Sep 17 00:00:00 2001
From: Miles Yucht
Date: Thu, 4 Jul 2024 10:52:40 +0200
Subject: [PATCH 12/24] Change TF registry ownership (#3736)

---
 .terraform-registry | 3 +++
 1 file changed, 3 insertions(+)
 create mode 100644 .terraform-registry

diff --git a/.terraform-registry b/.terraform-registry
new file mode 100644
index 0000000000..4032bcc614
--- /dev/null
+++ b/.terraform-registry
@@ -0,0 +1,3 @@
+Request: Change owner to @mgyucht
+Registry link: https://registry.terraform.io/namespaces/databricks, https://registry.terraform.io/providers/databricks/databricks/latest/docs
+Request by: miles@databricks.com
\ No newline at end of file
From 701b5e52a2b4a9165f0bc91981eba0c94a994dcf Mon Sep 17 00:00:00 2001
From: Pieter Noordhuis
Date: Thu, 4 Jul 2024 17:27:18 +0200
Subject: [PATCH 13/24] Run goreleaser action in snapshot mode from merge queue (#3646)

* Run goreleaser action in snapshot mode from merge queue

* Don't run on PRs

* False branch

---
 .github/workflows/release.yml | 14 ++++++++++----
 .goreleaser.yml               | 11 ++++++++++-
 2 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml
index fa58c3458f..ba0289e5b9 100644
--- a/.github/workflows/release.yml
+++ b/.github/workflows/release.yml
@@ -4,6 +4,8 @@ on:
   push:
     tags:
       - '*'
+  merge_group:
+    types: [checks_requested]
 
 jobs:
   goreleaser:
@@ -19,6 +21,13 @@ jobs:
         with:
           go-version: 1.22.x
 
+      # The default cache key for this action considers only the `go.sum` file.
+      # We include .goreleaser.yml here to differentiate from the cache used by the push action
+      # that runs unit tests. This job produces and uses a different cache.
+ cache-dependency-path: | + go.sum + .goreleaser.yml + - name: Import GPG key id: import_gpg uses: crazy-max/ghaction-import-gpg@v2 @@ -26,14 +35,11 @@ jobs: GPG_PRIVATE_KEY: ${{ secrets.GPG_PRIVATE_KEY }} PASSPHRASE: ${{ secrets.PASSPHRASE }} - - name: Pull external libraries - run: make vendor - - name: Run GoReleaser uses: goreleaser/goreleaser-action@v6 with: version: ~> v2 - args: release --clean + args: release --clean ${{ !startsWith(github.ref, 'refs/tags/v') && '--snapshot' || '' }} env: # use GITHUB_TOKEN that is already available in secrets.GITHUB_TOKEN # https://docs.github.com/en/free-pro-team@latest/actions/reference/authentication-in-a-workflow#permissions-for-the-github_token diff --git a/.goreleaser.yml b/.goreleaser.yml index bab7c74906..2fd8b24631 100644 --- a/.goreleaser.yml +++ b/.goreleaser.yml @@ -1,6 +1,9 @@ +version: 2 + before: hooks: - go mod download + builds: - env: - CGO_ENABLED=0 @@ -15,6 +18,7 @@ builds: goarch: - amd64 - arm64 + archives: - format: zip name_template: '{{ .ProjectName }}_{{ replace .Version "v" "" }}_{{ .Os }}_{{ .Arch }}' @@ -22,11 +26,14 @@ archives: - LICENSE* - CHANGELOG* - NOTICE* + checksum: name_template: '{{ .ProjectName }}_{{ replace .Version "v" "" }}_SHA256SUMS' algorithm: sha256 + snapshot: name_template: "{{ .Tag }}" + signs: - artifacts: checksum args: @@ -36,11 +43,13 @@ signs: - "${signature}" - "--detach-sign" - "${artifact}" + changelog: sort: asc filters: exclude: - '^docs:' - '^test:' + release: - draft: true \ No newline at end of file + draft: true From 61b17e711e993314d6e92226ab1e4d78bb201e2e Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 5 Jul 2024 08:54:03 +0200 Subject: [PATCH 14/24] Bump golang.org/x/mod from 0.18.0 to 0.19.0 (#3739) Bumps [golang.org/x/mod](https://github.com/golang/mod) from 0.18.0 to 0.19.0. - [Commits](https://github.com/golang/mod/compare/v0.18.0...v0.19.0) --- updated-dependencies: - dependency-name: golang.org/x/mod dependency-type: direct:production update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- go.mod | 2 +- go.sum | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/go.mod b/go.mod index a421cdb136..34cef45274 100644 --- a/go.mod +++ b/go.mod @@ -13,7 +13,7 @@ require ( github.com/stretchr/testify v1.9.0 github.com/zclconf/go-cty v1.14.4 golang.org/x/exp v0.0.0-20240222234643-814bf88cf225 - golang.org/x/mod v0.18.0 + golang.org/x/mod v0.19.0 ) require ( diff --git a/go.sum b/go.sum index aa4bdff4a6..f984b87658 100644 --- a/go.sum +++ b/go.sum @@ -232,8 +232,8 @@ golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTk golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= -golang.org/x/mod v0.18.0 h1:5+9lSbEzPSdWkH32vYPBwEpX8KwDbM52Ud9xBUvNlb0= -golang.org/x/mod v0.18.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= +golang.org/x/mod v0.19.0 h1:fEdghXQSo20giMthA7cd28ZC+jts4amQ3YMXiP5oMQ8= +golang.org/x/mod v0.19.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= From ac06fdbfff9b41a84817c4251ad0c2bc8a645016 Mon Sep 17 00:00:00 2001 From: mkubicek Date: Fri, 5 Jul 2024 14:56:46 +0200 Subject: [PATCH 15/24] Update cluster.md: add data_security_mode parameters `NONE` and `NO_ISOLATION` (#3740) * Update cluster.md * Update cluster.md --- docs/resources/cluster.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/resources/cluster.md b/docs/resources/cluster.md index 06233c70f5..aabf77e4ca 100644 --- a/docs/resources/cluster.md +++ b/docs/resources/cluster.md @@ -43,7 +43,7 @@ resource "databricks_cluster" "shared_autoscaling" { * `autotermination_minutes` - (Optional) Automatically terminate the cluster after being inactive for this time in minutes. If specified, the threshold must be between 10 and 10000 minutes. You can also set this value to 0 to explicitly disable automatic termination. Defaults to `60`. *We highly recommend having this setting present for Interactive/BI clusters.* * `enable_elastic_disk` - (Optional) If you don’t want to allocate a fixed number of EBS volumes at cluster creation time, use autoscaling local storage. With autoscaling local storage, Databricks monitors the amount of free disk space available on your cluster’s Spark workers. If a worker begins to run too low on disk, Databricks automatically attaches a new EBS volume to the worker before it runs out of disk space. EBS volumes are attached up to a limit of 5 TB of total disk space per instance (including the instance’s local storage). To scale down EBS usage, make sure you have `autotermination_minutes` and `autoscale` attributes set. More documentation available at [cluster configuration page](https://docs.databricks.com/clusters/configure.html#autoscaling-local-storage-1). * `enable_local_disk_encryption` - (Optional) Some instance types you use to run clusters may have locally attached disks. 
Databricks may store shuffle data or temporary data on these locally attached disks. To ensure that all data at rest is encrypted for all storage types, including shuffle data stored temporarily on your cluster’s local disks, you can enable local disk encryption. When local disk encryption is enabled, Databricks generates an encryption key locally unique to each cluster node and uses it to encrypt all data stored on local disks. The scope of the key is local to each cluster node and is destroyed along with the cluster node itself. During its lifetime, the key resides in memory for encryption and decryption and is stored encrypted on the disk. *Your workloads may run more slowly because of the performance impact of reading and writing encrypted data to and from local volumes. This feature is not available for all Azure Databricks subscriptions. Contact your Microsoft or Databricks account representative to request access.*
-* `data_security_mode` - (Optional) Select the security features of the cluster. [Unity Catalog requires](https://docs.databricks.com/data-governance/unity-catalog/compute.html#create-clusters--sql-warehouses-with-unity-catalog-access) `SINGLE_USER` or `USER_ISOLATION` mode. `LEGACY_PASSTHROUGH` for passthrough cluster and `LEGACY_TABLE_ACL` for Table ACL cluster. If omitted, no security features are enabled. In the Databricks UI, this has been recently been renamed *Access Mode* and `USER_ISOLATION` has been renamed *Shared*, but use these terms here.
+* `data_security_mode` - (Optional) Select the security features of the cluster. [Unity Catalog requires](https://docs.databricks.com/data-governance/unity-catalog/compute.html#create-clusters--sql-warehouses-with-unity-catalog-access) `SINGLE_USER` or `USER_ISOLATION` mode. `LEGACY_PASSTHROUGH` for passthrough cluster and `LEGACY_TABLE_ACL` for Table ACL cluster. If omitted, default security features are enabled. To disable security features use `NONE` or legacy mode `NO_ISOLATION`. In the Databricks UI, this has recently been renamed *Access Mode* and `USER_ISOLATION` has been renamed *Shared*, but use these terms here.
 * `single_user_name` - (Optional) The optional user name of the user to assign to an interactive cluster. This field is required when using `data_security_mode` set to `SINGLE_USER` or AAD Passthrough for Azure Data Lake Storage (ADLS) with a single-user cluster (i.e., not high-concurrency clusters).
 * `idempotency_token` - (Optional) An optional token to guarantee the idempotency of cluster creation requests. If an active cluster with the provided token already exists, the request will not create a new cluster, but it will return the existing running cluster's ID instead. If you specify the idempotency token, upon failure, you can retry until the request succeeds. Databricks platform guarantees to launch exactly one cluster with that idempotency token. This token should have at most 64 characters.
 * `ssh_public_keys` - (Optional) SSH public key contents that will be added to each Spark node in this cluster. The corresponding private keys can be used to login with the user name ubuntu on port 2200. You can specify up to 10 keys.
From d24adbd945bf14be245f6069e3e3b64a789bf218 Mon Sep 17 00:00:00 2001
From: Alex Ott
Date: Mon, 8 Jul 2024 23:00:43 +0200
Subject: [PATCH 16/24] Add `databricks_schema` data source (#3732)

Adding new data source for completeness of our data sources for UC objects.
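To illustrate the new data source, a minimal usage sketch follows; the `main.default` schema name and the output are illustrative assumptions, not taken from this patch:

```hcl
# Look up an existing Unity Catalog schema by its two-level name
data "databricks_schema" "this" {
  name = "main.default" # placeholder <catalog>.<schema> name
}

# All SchemaInfo fields are exposed under the computed schema_info attribute
output "schema_owner" {
  value = data.databricks_schema.this.schema_info[0].owner
}
```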
--- catalog/data_schema.go | 25 +++++++++ catalog/data_schema_test.go | 47 +++++++++++++++++ docs/data-sources/schema.md | 67 +++++++++++++++++++++++++ docs/data-sources/volume.md | 4 +- internal/acceptance/data_schema_test.go | 43 ++++++++++++++++ provider/provider.go | 1 + 6 files changed, 185 insertions(+), 2 deletions(-) create mode 100644 catalog/data_schema.go create mode 100644 catalog/data_schema_test.go create mode 100644 docs/data-sources/schema.md create mode 100644 internal/acceptance/data_schema_test.go diff --git a/catalog/data_schema.go b/catalog/data_schema.go new file mode 100644 index 0000000000..d93a0f682a --- /dev/null +++ b/catalog/data_schema.go @@ -0,0 +1,25 @@ +package catalog + +import ( + "context" + + "github.com/databricks/databricks-sdk-go" + "github.com/databricks/databricks-sdk-go/service/catalog" + "github.com/databricks/terraform-provider-databricks/common" +) + +func DataSourceSchema() common.Resource { + return common.WorkspaceData(func(ctx context.Context, data *struct { + Id string `json:"id,omitempty" tf:"computed"` + Name string `json:"name"` + Schema *catalog.SchemaInfo `json:"schema_info,omitempty" tf:"computed"` + }, w *databricks.WorkspaceClient) error { + schema, err := w.Schemas.GetByFullName(ctx, data.Name) + if err != nil { + return err + } + data.Schema = schema + data.Id = schema.FullName + return nil + }) +} diff --git a/catalog/data_schema_test.go b/catalog/data_schema_test.go new file mode 100644 index 0000000000..c09ba25da1 --- /dev/null +++ b/catalog/data_schema_test.go @@ -0,0 +1,47 @@ +package catalog + +import ( + "testing" + + "github.com/databricks/databricks-sdk-go/experimental/mocks" + "github.com/databricks/databricks-sdk-go/service/catalog" + "github.com/databricks/terraform-provider-databricks/qa" + "github.com/stretchr/testify/mock" +) + +func TestDataSourceSchema(t *testing.T) { + qa.ResourceFixture{ + MockWorkspaceClientFunc: func(m *mocks.MockWorkspaceClient) { + e := m.GetMockSchemasAPI().EXPECT() + e.GetByFullName(mock.Anything, "a.b").Return(&catalog.SchemaInfo{ + FullName: "a.b", + CatalogName: "a", + Name: "b", + Owner: "account users", + }, nil) + }, + Resource: DataSourceSchema(), + HCL: ` + name="a.b"`, + Read: true, + NonWritable: true, + ID: "_", + }.ApplyAndExpectData(t, map[string]any{ + "name": "a.b", + "id": "a.b", + "schema_info.0.full_name": "a.b", + "schema_info.0.catalog_name": "a", + "schema_info.0.name": "b", + "schema_info.0.owner": "account users", + }) +} + +func TestDataSourceSchema_Error(t *testing.T) { + qa.ResourceFixture{ + Fixtures: qa.HTTPFailures, + Resource: DataSourceSchema(), + Read: true, + NonWritable: true, + ID: "_", + }.ExpectError(t, "i'm a teapot") +} diff --git a/docs/data-sources/schema.md b/docs/data-sources/schema.md new file mode 100644 index 0000000000..6a381b9142 --- /dev/null +++ b/docs/data-sources/schema.md @@ -0,0 +1,67 @@ +--- +subcategory: "Unity Catalog" +--- +# databricks_schema Data Source + +Retrieves details about [databricks_schema](../resources/schema.md) that was created by Terraform or manually. +A schema can be identified by its two-level (fully qualified) name (in the form of: `catalog_name`.`schema_name`) as input. This can be retrieved programmatically using [databricks_schemas](../data-sources/schemas.md) data source. 
+
+## Example Usage
+
+* Retrieve details of all schemas in a _sandbox_ [databricks_catalog](../resources/catalog.md):
+
+```hcl
+data "databricks_schemas" "all" {
+  catalog_name = "sandbox"
+}
+
+data "databricks_schema" "this" {
+  for_each = data.databricks_schemas.all.ids
+  name     = each.value
+}
+```
+
+* Search for a specific schema by its fully qualified name:
+
+```hcl
+data "databricks_schema" "this" {
+  name = "catalog.schema"
+}
+```
+
+## Argument Reference
+
+* `name` - (Required) a fully qualified name of [databricks_schema](../resources/schema.md): *`catalog`.`schema`*
+
+
+## Attribute Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - ID of this Unity Catalog Schema in form of `<catalog>.<schema>`.
+* `schema_info` - `SchemaInfo` object for a Unity Catalog schema. This contains the following attributes:
+  * `browse_only` - indicates whether the principal is limited to retrieving metadata for the schema through the BROWSE privilege.
+  * `catalog_name` - the name of the catalog where the schema is.
+  * `catalog_type` - the type of the parent catalog.
+  * `comment` - the comment attached to the schema
+  * `created_at` - time at which this schema was created, in epoch milliseconds.
+  * `created_by` - username of schema creator.
+  * `effective_predictive_optimization_flag` - information about actual state of predictive optimization.
+  * `enable_predictive_optimization` - whether predictive optimization should be enabled for this object and objects under it.
+  * `full_name` - the two-level (fully qualified) name of the schema
+  * `metastore_id` - the unique identifier of the metastore
+  * `name` - Name of schema, relative to parent catalog.
+  * `owner` - the identifier of the user who owns the schema
+  * `properties` - map of properties set on the schema
+  * `schema_id` - the unique identifier of the schema
+  * `storage_location` - the storage location on the cloud.
+  * `storage_root` - storage root URL for managed tables within schema.
+  * `updated_at` - the timestamp of the last time changes were made to the schema
+  * `updated_by` - the identifier of the user who updated the schema last time
+
+## Related Resources
+
+The following resources are used in the same context:
+
+* [databricks_schema](../resources/schema.md) to manage schemas within Unity Catalog.
+* [databricks_catalog](../resources/catalog.md) to manage catalogs within Unity Catalog.
diff --git a/docs/data-sources/volume.md b/docs/data-sources/volume.md
index 9a32875a1f..3a6ebeba3f 100644
--- a/docs/data-sources/volume.md
+++ b/docs/data-sources/volume.md
@@ -40,8 +40,8 @@ data "databricks_volume" "this" {
 In addition to all arguments above, the following attributes are exported:
 
 * `id` - ID of this Unity Catalog Volume in form of `<catalog>.<schema>.<volume>`.
-* `volume_info` - TableInfo object for a Unity Catalog table. This contains the following attributes:
-  * `name` - Name of table, relative to parent schema.
+* `volume_info` - `VolumeInfo` object for a Unity Catalog volume. This contains the following attributes:
+  * `name` - Name of the volume, relative to parent schema.
   * `access_point` - the AWS access point to use when accessing s3 bucket for this volume's external location
   * `browse_only` - indicates whether the principal is limited to retrieving metadata for the volume through the BROWSE privilege when include_browse is enabled in the request.
* `catalog_name` - the name of the catalog where the schema and the volume are diff --git a/internal/acceptance/data_schema_test.go b/internal/acceptance/data_schema_test.go new file mode 100644 index 0000000000..f72919a85f --- /dev/null +++ b/internal/acceptance/data_schema_test.go @@ -0,0 +1,43 @@ +package acceptance + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "github.com/stretchr/testify/require" +) + +func checkDataSourceSchema(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + _, ok := s.Modules[0].Resources["data.databricks_schema.this"] + require.True(t, ok, "data.databricks_schema.this has to be there") + return nil + } +} +func TestUcAccDataSourceSchema(t *testing.T) { + unityWorkspaceLevel(t, step{ + Template: ` + resource "databricks_catalog" "sandbox" { + name = "sandbox{var.RANDOM}" + comment = "this catalog is managed by terraform" + properties = { + purpose = "testing" + } + } + + resource "databricks_schema" "things" { + catalog_name = databricks_catalog.sandbox.id + name = "things{var.RANDOM}" + comment = "this database is managed by terraform" + properties = { + kind = "various" + } + } + + data "databricks_schema" "this" { + name = databricks_schema.things.id + } + `, + Check: checkDataSourceSchema(t), + }) +} diff --git a/provider/provider.go b/provider/provider.go index a73b4e28a8..a4eff38cd1 100644 --- a/provider/provider.go +++ b/provider/provider.go @@ -100,6 +100,7 @@ func DatabricksProvider() *schema.Provider { "databricks_notebook": workspace.DataSourceNotebook().ToResource(), "databricks_notebook_paths": workspace.DataSourceNotebookPaths().ToResource(), "databricks_pipelines": pipelines.DataSourcePipelines().ToResource(), + "databricks_schema": catalog.DataSourceSchema().ToResource(), "databricks_schemas": catalog.DataSourceSchemas().ToResource(), "databricks_service_principal": scim.DataSourceServicePrincipal().ToResource(), "databricks_service_principals": scim.DataSourceServicePrincipals().ToResource(), From a55dc7fee0c38a3e030cc5e15618c2f4e7eea76b Mon Sep 17 00:00:00 2001 From: Alex Ott Date: Tue, 9 Jul 2024 11:16:46 +0200 Subject: [PATCH 17/24] Exporter: export libraries specified as `requirements.txt` (#3649) --- exporter/importables.go | 2 ++ exporter/util.go | 3 ++- 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/exporter/importables.go b/exporter/importables.go index 5eed0d0871..72e8f71d4b 100644 --- a/exporter/importables.go +++ b/exporter/importables.go @@ -390,6 +390,8 @@ var resourcesMap map[string]importable = map[string]importable{ {Path: "task.library.whl", Resource: "databricks_dbfs_file", Match: "dbfs_path"}, {Path: "task.library.whl", Resource: "databricks_file"}, {Path: "task.library.whl", Resource: "databricks_workspace_file", Match: "workspace_path"}, + {Path: "task.library.requirements", Resource: "databricks_file"}, + {Path: "task.library.requirements", Resource: "databricks_workspace_file", Match: "workspace_path"}, {Path: "task.new_cluster.aws_attributes.instance_profile_arn", Resource: "databricks_instance_profile"}, {Path: "task.new_cluster.driver_instance_pool_id", Resource: "databricks_instance_pool"}, {Path: "task.new_cluster.init_scripts.dbfs.destination", Resource: "databricks_dbfs_file", Match: "dbfs_path"}, diff --git a/exporter/util.go b/exporter/util.go index 99f60021ee..d2b0dba341 100644 --- a/exporter/util.go +++ b/exporter/util.go @@ -410,11 +410,12 @@ func (ic *importContext) emitLibraries(libs []compute.Library) { 
ic.emitIfWsfsFile(lib.Whl)
 		ic.emitIfWsfsFile(lib.Jar)
 		ic.emitIfWsfsFile(lib.Egg)
+		ic.emitIfWsfsFile(lib.Requirements)
 		// Files on UC Volumes
 		ic.emitIfVolumeFile(lib.Whl)
 		ic.emitIfVolumeFile(lib.Jar)
+		ic.emitIfVolumeFile(lib.Requirements)
 	}
-
 }
 
 func (ic *importContext) importLibraries(d *schema.ResourceData, s map[string]*schema.Schema) error {
From 9c9bf2b27e0cd888b89aa96b6635c20a7c56c638 Mon Sep 17 00:00:00 2001
From: Alex Ott
Date: Tue, 9 Jul 2024 11:17:20 +0200
Subject: [PATCH 18/24] Exporter: Emit directories during the listing only if
 they are explicitly configured in `-listing` (#3673)

Exporter emitted directories even if they were specified only in `-services`, leading to the export of unnecessary objects...
---
 exporter/util.go | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/exporter/util.go b/exporter/util.go
index d2b0dba341..ab06f99647 100644
--- a/exporter/util.go
+++ b/exporter/util.go
@@ -62,6 +62,7 @@ func (ic *importContext) emitInitScripts(initScripts []compute.InitScriptInfo) {
 			ic.emitWorkspaceFileOrRepo(is.Workspace.Destination)
 		}
 		if is.Volumes != nil {
+			// TODO: we should emit allow list for init scripts as well
 			ic.emitIfVolumeFile(is.Volumes.Destination)
 		}
 	}
@@ -413,6 +414,7 @@ func (ic *importContext) emitLibraries(libs []compute.Library) {
 		ic.emitIfWsfsFile(lib.Requirements)
 		// Files on UC Volumes
 		ic.emitIfVolumeFile(lib.Whl)
+		// TODO: we should emit UC allow list as well
 		ic.emitIfVolumeFile(lib.Jar)
 		ic.emitIfVolumeFile(lib.Requirements)
 	}
@@ -1160,7 +1162,7 @@ func listNotebooksAndWorkspaceFiles(ic *importContext) error {
 	allObjects := ic.getAllWorkspaceObjects(func(objects []workspace.ObjectStatus) {
 		for _, object := range objects {
 			if object.ObjectType == workspace.Directory {
-				if !ic.incremental && object.Path != "/" && ic.isServiceEnabled("directories") {
+				if !ic.incremental && object.Path != "/" && ic.isServiceInListing("directories") {
 					objectsChannel <- object
 				}
 			} else {
@@ -1185,9 +1187,9 @@ func listNotebooksAndWorkspaceFiles(ic *importContext) error {
 			if ic.shouldSkipWorkspaceObject(object, updatedSinceMs) {
 				continue
 			}
-			if object.ObjectType == workspace.Directory && !ic.incremental && ic.isServiceEnabled("directories") && object.Path != "/" {
+			if object.ObjectType == workspace.Directory && !ic.incremental && ic.isServiceInListing("directories") && object.Path != "/" {
 				emitWorkpaceObject(ic, object)
-			} else if (object.ObjectType == workspace.Notebook || object.ObjectType == workspace.File) && ic.isServiceEnabled("notebooks") {
+			} else if (object.ObjectType == workspace.Notebook || object.ObjectType == workspace.File) && ic.isServiceInListing("notebooks") {
 				emitWorkpaceObject(ic, object)
 			}
 		}
From ccad28ffdf6e5efd671959e4271f9dde1218920b Mon Sep 17 00:00:00 2001
From: Renaud Hartert
Date: Tue, 9 Jul 2024 13:02:23 +0200
Subject: [PATCH 19/24] Add new APIErrorBody struct and update deps (#3745)

---
 access/resource_ip_access_list_test.go        | 12 ++++----
 aws/resource_group_instance_profile_test.go   | 14 ++++-----
 aws/resource_instance_profile_test.go         | 10 +++----
 aws/resource_service_principal_role_test.go   |  6 ++--
 aws/resource_user_instance_profile_test.go    | 10 +++----
 catalog/resource_artifact_allowlist_test.go   | 10 +++----
 catalog/resource_connection_test.go           | 10 +++----
 catalog/resource_external_location_test.go    |  8 ++---
 catalog/resource_storage_credential_test.go   |  4 +--
 catalog/resource_system_schema_test.go        | 10 +++----
 catalog/resource_volume_test.go               | 18 +++++------
 clusters/clusters_api_sdk_test.go             |  4 +--
clusters/clusters_api_test.go | 8 ++--- clusters/resource_cluster_test.go | 13 ++++---- common/apierr.go | 30 +++++++++++++++++++ jobs/data_job_test.go | 4 +-- jobs/resource_job_test.go | 6 ++-- mlflow/data_mlflow_experiment_test.go | 8 ++--- mlflow/data_mlflow_model_test.go | 4 +-- mws/resource_mws_credentials_test.go | 11 ++++--- ...resource_mws_customer_managed_keys_test.go | 6 ++-- mws/resource_mws_log_delivery_test.go | 10 +++---- mws/resource_mws_networks_test.go | 12 ++++---- ...esource_mws_storage_configurations_test.go | 10 +++---- mws/resource_mws_vpc_endpoint_test.go | 12 ++++---- mws/resource_mws_workspaces_test.go | 16 +++++----- .../resource_access_control_rule_set_test.go | 4 +-- permissions/resource_permissions_test.go | 12 ++++---- pipelines/resource_pipeline_test.go | 18 +++++------ policies/resource_cluster_policy_test.go | 18 +++++------ pools/resource_instance_pool_test.go | 13 ++++---- provider/generate_test.go | 10 +++---- repos/resource_git_credential_test.go | 20 ++++++------- repos/resource_repo_test.go | 6 ++-- scim/resource_entitlement_test.go | 14 ++++----- scim/resource_group_member_test.go | 10 +++---- scim/resource_group_role_test.go | 10 +++---- scim/resource_group_test.go | 10 +++---- scim/resource_service_principal_test.go | 6 ++-- scim/resource_user_test.go | 9 +++--- secrets/resource_secret_acl_test.go | 10 +++---- secrets/resource_secret_scope_test.go | 8 ++--- secrets/resource_secret_test.go | 8 ++--- serving/resource_model_serving_test.go | 12 ++++---- sharing/resource_recipient_test.go | 6 ++-- sharing/resource_share_test.go | 8 ++--- storage/resource_file_test.go | 6 ++-- tokens/resource_token_test.go | 7 +++-- workspace/resource_directory_test.go | 12 ++++---- workspace/resource_global_init_script_test.go | 4 +-- workspace/resource_notebook_test.go | 12 ++++---- workspace/resource_workspace_conf_test.go | 12 ++++---- workspace/resource_workspace_file_test.go | 12 ++++---- 53 files changed, 285 insertions(+), 258 deletions(-) create mode 100644 common/apierr.go diff --git a/access/resource_ip_access_list_test.go b/access/resource_ip_access_list_test.go index d2cb1b09f4..8970493f92 100644 --- a/access/resource_ip_access_list_test.go +++ b/access/resource_ip_access_list_test.go @@ -8,8 +8,8 @@ import ( "net/http" "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/settings" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" @@ -93,7 +93,7 @@ func TestAPIACLCreate_Error(t *testing.T) { { Method: http.MethodPost, Resource: "/api/2.0/ip-access-lists", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_ALREADY_EXISTS", Message: "IP access list with type (" + TestingListTypeString + ") and label (" + TestingLabel + ") already exists", }, @@ -185,7 +185,7 @@ func TestIPACLUpdate_Error(t *testing.T) { ExpectedRequest: settings.UpdateIpAccessList{ Enabled: TestingEnabled, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "SERVER_ERROR", Message: "Something unexpected happened", }, @@ -240,7 +240,7 @@ func TestIPACLRead_NotFound(t *testing.T) { { Method: http.MethodGet, Resource: "/api/2.0/ip-access-lists/" + TestingId + "?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: "Can't find an IP access list with id: " + TestingId + ".", }, @@ -260,7 +260,7 @@ func 
TestIPACLRead_Error(t *testing.T) { { Method: http.MethodGet, Resource: "/api/2.0/ip-access-lists/" + TestingId + "?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "SERVER_ERROR", Message: "Something unexpected happened", }, @@ -298,7 +298,7 @@ func TestIPACLDelete_Error(t *testing.T) { { Method: http.MethodDelete, Resource: fmt.Sprintf("/api/2.0/ip-access-lists/%s?", TestingId), - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_STATE", Message: "Something went wrong", }, diff --git a/aws/resource_group_instance_profile_test.go b/aws/resource_group_instance_profile_test.go index 1d90bd1d22..6b96a34b41 100644 --- a/aws/resource_group_instance_profile_test.go +++ b/aws/resource_group_instance_profile_test.go @@ -3,7 +3,7 @@ package aws import ( "testing" - "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/scim" "github.com/databricks/terraform-provider-databricks/qa" @@ -56,7 +56,7 @@ func TestResourceGroupInstanceProfileCreate_Error(t *testing.T) { { Method: "PATCH", Resource: "/api/2.0/preview/scim/v2/Groups/abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -80,7 +80,7 @@ func TestResourceGroupInstanceProfileCreate_Error_InvalidARN(t *testing.T) { { Method: "PATCH", Resource: "/api/2.0/preview/scim/v2/Groups/abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -103,7 +103,7 @@ func TestResourceGroupInstanceProfileCreate_Error_OtherARN(t *testing.T) { { Method: "PATCH", Resource: "/api/2.0/preview/scim/v2/Groups/abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -152,7 +152,7 @@ func TestResourceGroupInstanceProfileRead_NotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/preview/scim/v2/Groups/abc?attributes=roles", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -192,7 +192,7 @@ func TestResourceGroupInstanceProfileRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/preview/scim/v2/Groups/abc?attributes=roles", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -232,7 +232,7 @@ func TestResourceGroupInstanceProfileDelete_Error(t *testing.T) { { Method: "PATCH", Resource: "/api/2.0/preview/scim/v2/Groups/abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/aws/resource_instance_profile_test.go b/aws/resource_instance_profile_test.go index 401e034b71..ba6eb99414 100644 --- a/aws/resource_instance_profile_test.go +++ b/aws/resource_instance_profile_test.go @@ -3,7 +3,7 @@ package aws import ( "testing" - "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" @@ -114,7 +114,7 @@ func TestResourceInstanceProfileCreate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/instance-profiles/add", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal 
error happened", }, @@ -249,7 +249,7 @@ func TestResourceInstanceProfileRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/instance-profiles/list", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -289,7 +289,7 @@ func TestResourceInstanceProfileDelete_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/instance-profiles/remove", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -356,7 +356,7 @@ func TestResourceInstanceProfileUpdate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/instance-profiles/edit", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/aws/resource_service_principal_role_test.go b/aws/resource_service_principal_role_test.go index 9666ef9165..5ce5747ff0 100644 --- a/aws/resource_service_principal_role_test.go +++ b/aws/resource_service_principal_role_test.go @@ -3,7 +3,7 @@ package aws import ( "testing" - "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/scim" "github.com/databricks/terraform-provider-databricks/qa" @@ -53,7 +53,7 @@ func TestResourceServicePrincipalRoleCreate_Error(t *testing.T) { { Method: "PATCH", Resource: "/api/2.0/preview/scim/v2/ServicePrincipals/abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -119,7 +119,7 @@ func TestResourceServicePrincipalRoleRead_NotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/preview/scim/v2/ServicePrincipals/abc?attributes=roles", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, diff --git a/aws/resource_user_instance_profile_test.go b/aws/resource_user_instance_profile_test.go index ef6911e6b8..795a579bfb 100644 --- a/aws/resource_user_instance_profile_test.go +++ b/aws/resource_user_instance_profile_test.go @@ -3,7 +3,7 @@ package aws import ( "testing" - "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/scim" "github.com/databricks/terraform-provider-databricks/qa" @@ -68,7 +68,7 @@ func TestResourceUserInstanceProfileCreate_Error(t *testing.T) { { Method: "PATCH", Resource: "/api/2.0/preview/scim/v2/Users/abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -138,7 +138,7 @@ func TestResourceUserInstanceProfileRead_NotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/preview/scim/v2/Users/abc?attributes=roles", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -158,7 +158,7 @@ func TestResourceUserInstanceProfileRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/preview/scim/v2/Users/abc?attributes=roles", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -198,7 +198,7 @@ func TestResourceUserInstanceProfileDelete_Error(t *testing.T) { { Method: "PATCH", Resource: "/api/2.0/preview/scim/v2/Users/abc", - Response: apierr.APIErrorBody{ + Response: 
common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/catalog/resource_artifact_allowlist_test.go b/catalog/resource_artifact_allowlist_test.go index d4cf08b108..25b5e9526e 100644 --- a/catalog/resource_artifact_allowlist_test.go +++ b/catalog/resource_artifact_allowlist_test.go @@ -4,8 +4,8 @@ import ( "net/http" "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/catalog" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" ) @@ -105,7 +105,7 @@ func TestArtifactAllowlistCreate_Error(t *testing.T) { Method: http.MethodPut, Resource: "/api/2.1/unity-catalog/artifact-allowlists/INIT_SCRIPT", ExpectedRequest: setArtifact, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "SERVER_ERROR", Message: "Something unexpected happened", }, @@ -156,7 +156,7 @@ func TestResourceArtifactAllowlistRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.1/unity-catalog/artifact-allowlists/INIT_SCRIPT?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -219,7 +219,7 @@ func TestArtifactAllowlistUpdate_Error(t *testing.T) { Method: http.MethodPut, Resource: "/api/2.1/unity-catalog/artifact-allowlists/INIT_SCRIPT", ExpectedRequest: updateArtifact, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "SERVER_ERROR", Message: "Something unexpected happened", }, @@ -280,7 +280,7 @@ func TestArtifactAllowlistDelete_Error(t *testing.T) { ArtifactType: catalog.ArtifactTypeInitScript, ArtifactMatchers: []catalog.ArtifactMatcher{}, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_STATE", Message: "Something went wrong", }, diff --git a/catalog/resource_connection_test.go b/catalog/resource_connection_test.go index 8035d79948..2d7a1726eb 100644 --- a/catalog/resource_connection_test.go +++ b/catalog/resource_connection_test.go @@ -4,8 +4,8 @@ import ( "net/http" "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/catalog" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" ) @@ -145,7 +145,7 @@ func TestConnectionsCreate_Error(t *testing.T) { "host": "test.com", }, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "SERVER_ERROR", Message: "Something unexpected happened", }, @@ -210,7 +210,7 @@ func TestConnectionRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.1/unity-catalog/connections/testConnectionName?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -395,7 +395,7 @@ func TestConnectionUpdate_Error(t *testing.T) { "host": "test.com", }, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "SERVER_ERROR", Message: "Something unexpected happened", }, @@ -443,7 +443,7 @@ func TestConnectionDelete_Error(t *testing.T) { { Method: http.MethodDelete, Resource: "/api/2.1/unity-catalog/connections/testConnectionName?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_STATE", Message: "Something went wrong", }, diff --git a/catalog/resource_external_location_test.go 
b/catalog/resource_external_location_test.go index 314d2731d0..3713edcff5 100644 --- a/catalog/resource_external_location_test.go +++ b/catalog/resource_external_location_test.go @@ -4,9 +4,9 @@ import ( "fmt" "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/experimental/mocks" "github.com/databricks/databricks-sdk-go/service/catalog" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/mock" ) @@ -450,7 +450,7 @@ func TestUpdateExternalLocationRollback(t *testing.T) { Url: "s3://foo/bar", CredentialName: "xyz", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "SERVER_ERROR", Message: "Something unexpected happened", }, @@ -514,7 +514,7 @@ func TestUpdateExternalLocationRollbackError(t *testing.T) { Url: "s3://foo/bar", CredentialName: "xyz", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "SERVER_ERROR", Message: "Something unexpected happened", }, @@ -526,7 +526,7 @@ func TestUpdateExternalLocationRollbackError(t *testing.T) { ExpectedRequest: catalog.UpdateExternalLocation{ Owner: "administrators", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/catalog/resource_storage_credential_test.go b/catalog/resource_storage_credential_test.go index cf9bf0118d..fa76de2cbd 100644 --- a/catalog/resource_storage_credential_test.go +++ b/catalog/resource_storage_credential_test.go @@ -3,9 +3,9 @@ package catalog import ( "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/experimental/mocks" "github.com/databricks/databricks-sdk-go/service/catalog" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/mock" ) @@ -506,7 +506,7 @@ func TestUpdateStorageCredentialsRollback(t *testing.T) { RoleArn: "CHANGED", }, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "SERVER_ERROR", Message: "Something unexpected happened", }, diff --git a/catalog/resource_system_schema_test.go b/catalog/resource_system_schema_test.go index d3720075ab..e576eec072 100644 --- a/catalog/resource_system_schema_test.go +++ b/catalog/resource_system_schema_test.go @@ -4,8 +4,8 @@ import ( "net/http" "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/catalog" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" ) @@ -72,7 +72,7 @@ func TestSystemSchemaCreate_Error(t *testing.T) { { Method: http.MethodPut, Resource: "/api/2.1/unity-catalog/metastores/abc/systemschemas/access", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -157,7 +157,7 @@ func TestSystemSchemaUpdate_Error(t *testing.T) { { Method: http.MethodPut, Resource: "/api/2.1/unity-catalog/metastores/abc/systemschemas/access", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -224,7 +224,7 @@ func TestSystemSchemaRead_Error(t *testing.T) { { Method: http.MethodGet, Resource: "/api/2.1/unity-catalog/metastores/abc/systemschemas?", - Response: 
apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -282,7 +282,7 @@ func TestSystemSchemaDelete_Error(t *testing.T) { { Method: http.MethodDelete, Resource: "/api/2.1/unity-catalog/metastores/abc/systemschemas/access?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/catalog/resource_volume_test.go b/catalog/resource_volume_test.go index 39e19205d2..6cdf9c4cd6 100644 --- a/catalog/resource_volume_test.go +++ b/catalog/resource_volume_test.go @@ -6,8 +6,8 @@ import ( "net/http" "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/catalog" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" @@ -154,7 +154,7 @@ func TestVolumesCreateWithoutInitialOwner_Error(t *testing.T) { { Method: http.MethodPost, Resource: "/api/2.1/unity-catalog/volumes", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -216,7 +216,7 @@ func TestVolumesCreateWithInitialOwner_Error(t *testing.T) { { Method: http.MethodPatch, Resource: "/api/2.1/unity-catalog/volumes/testCatalogName.testSchemaName.testName", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "SERVER_ERROR", Message: "Something unexpected happened", }, @@ -279,7 +279,7 @@ func TestResourceVolumeRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.1/unity-catalog/volumes/testCatalogName.testSchemaName.testName?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -574,7 +574,7 @@ func TestVolumesUpdateRollback(t *testing.T) { Name: "testName", Comment: "This is a new test comment.", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "SERVER_ERROR", Message: "Something unexpected happened", }, @@ -628,7 +628,7 @@ func TestVolumesUpdateRollback_Error(t *testing.T) { Name: "testName", Comment: "This is a new test comment.", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "SERVER_ERROR", Message: serverErrMessage, }, @@ -640,7 +640,7 @@ func TestVolumesUpdateRollback_Error(t *testing.T) { ExpectedRequest: catalog.UpdateVolumeRequestContent{ Owner: "testOwnerOld", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: rollbackErrMessage, }, @@ -678,7 +678,7 @@ func TestVolumeUpdate_Error(t *testing.T) { ExpectedRequest: catalog.UpdateVolumeRequestContent{ Owner: "testOwnerNew", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "SERVER_ERROR", Message: "Something unexpected happened", }, @@ -727,7 +727,7 @@ func TestVolumeDelete_Error(t *testing.T) { { Method: http.MethodDelete, Resource: "/api/2.1/unity-catalog/volumes/testCatalogName.testSchemaName.testName?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_STATE", Message: "Something went wrong", }, diff --git a/clusters/clusters_api_sdk_test.go b/clusters/clusters_api_sdk_test.go index 7e6d5daddf..6067c21b34 100644 --- a/clusters/clusters_api_sdk_test.go +++ b/clusters/clusters_api_sdk_test.go @@ -4,7 +4,7 @@ import ( "context" "testing" - 
"github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" @@ -144,7 +144,7 @@ func TestStartClusterAndGetInfo_StartingError(t *testing.T) { ExpectedRequest: ClusterID{ ClusterID: "abc", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ Message: "I am a teapot!", }, Status: 418, diff --git a/clusters/clusters_api_test.go b/clusters/clusters_api_test.go index 441d7397d5..2cc9cc24ea 100644 --- a/clusters/clusters_api_test.go +++ b/clusters/clusters_api_test.go @@ -174,7 +174,7 @@ func TestWaitForClusterStatus_RetryOnNotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/clusters/get?cluster_id=abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ Message: "Nope", }, Status: 404, @@ -204,7 +204,7 @@ func TestWaitForClusterStatus_StopRetryingEarly(t *testing.T) { { Method: "GET", Resource: "/api/2.0/clusters/get?cluster_id=abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ Message: "I am a teapot", }, Status: 418, @@ -643,7 +643,7 @@ func TestStartAndGetInfo_StartingError(t *testing.T) { ExpectedRequest: ClusterID{ ClusterID: "abc", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ Message: "I am a teapot!", }, Status: 418, @@ -680,7 +680,7 @@ func TestPermanentDelete_Pinned(t *testing.T) { ExpectedRequest: ClusterID{ ClusterID: "abc", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ Message: "unpin the cluster first", }, Status: 400, diff --git a/clusters/resource_cluster_test.go b/clusters/resource_cluster_test.go index 0a2e404aeb..4ddd930c2d 100644 --- a/clusters/resource_cluster_test.go +++ b/clusters/resource_cluster_test.go @@ -5,9 +5,8 @@ import ( "strings" "testing" - "github.com/databricks/databricks-sdk-go/apierr" - "github.com/databricks/databricks-sdk-go/service/compute" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" @@ -439,7 +438,7 @@ func TestResourceClusterCreate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/clusters/create", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -519,7 +518,7 @@ func TestResourceClusterRead_NotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/clusters/get?cluster_id=abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ // clusters API is not fully restful, so let's test for that // TODO: https://github.com/databricks/terraform-provider-databricks/issues/2021 ErrorCode: "INVALID_STATE", @@ -541,7 +540,7 @@ func TestResourceClusterRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/clusters/get?cluster_id=abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -1227,7 +1226,7 @@ func TestResourceClusterUpdate_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/clusters/get?cluster_id=abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -1383,7 +1382,7 @@ func TestResourceClusterDelete_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/clusters/permanent-delete", - Response: 
apierr.APIErrorBody{
+			Response: common.APIErrorBody{
 				ErrorCode: "INVALID_REQUEST",
 				Message:   "Internal error happened",
 			},
diff --git a/common/apierr.go b/common/apierr.go
new file mode 100644
index 0000000000..8417f88879
--- /dev/null
+++ b/common/apierr.go
@@ -0,0 +1,30 @@
+package common
+
+// APIErrorBody represents an API error returned by the Databricks API.
+//
+// Deprecated: this struct is meant to disappear as the Terraform provider
+// progressively moves to use the service clients provided by the SDK. Clients
+// should not use this struct for any other purpose than testing code that
+// mocks the behavior of Databricks services.
+type APIErrorBody struct {
+	ErrorCode  string        `json:"error_code,omitempty"`
+	Message    string        `json:"message,omitempty"`
+	Details    []ErrorDetail `json:"details,omitempty"`
+	ScimDetail string        `json:"detail,omitempty"`
+	ScimStatus string        `json:"status,omitempty"`
+	ScimType   string        `json:"scimType,omitempty"`
+	API12Error string        `json:"error,omitempty"`
+}
+
+// ErrorDetail represents the details of an API error.
+//
+// Deprecated: this struct is meant to disappear as the Terraform provider
+// progressively moves to use the service clients provided by the SDK. Clients
+// should not use this struct for any other purpose than testing code that
+// mocks the behavior of Databricks services.
+type ErrorDetail struct {
+	Type     string            `json:"@type,omitempty"`
+	Reason   string            `json:"reason,omitempty"`
+	Domain   string            `json:"domain,omitempty"`
+	Metadata map[string]string `json:"metadata,omitempty"`
+}
diff --git a/jobs/data_job_test.go b/jobs/data_job_test.go
index 2342891433..cb67ccc143 100755
--- a/jobs/data_job_test.go
+++ b/jobs/data_job_test.go
@@ -4,7 +4,7 @@ import (
 	"fmt"
 	"testing"
 
-	"github.com/databricks/databricks-sdk-go/apierr"
+	"github.com/databricks/terraform-provider-databricks/common"
 	"github.com/databricks/terraform-provider-databricks/qa"
 )
 
@@ -215,7 +215,7 @@ func TestDataSourceQueryableJobNoMatchId(t *testing.T) {
 			{
 				Method:   "GET",
 				Resource: "/api/2.0/jobs/get?job_id=567",
-				Response: apierr.APIErrorBody{
+				Response: common.APIErrorBody{
 					ErrorCode: "RESOURCE_DOES_NOT_EXIST",
 					Message:   "Job 567 does not exist.",
 				},
diff --git a/jobs/resource_job_test.go b/jobs/resource_job_test.go
index cbfbb411ae..95ffb03923 100644
--- a/jobs/resource_job_test.go
+++ b/jobs/resource_job_test.go
@@ -1532,7 +1532,7 @@ func TestResourceJobUpdate_ControlRunState_ContinuousUpdateRunNowFailsWith409(t
 				JobID: 789,
 			},
 			Status: 409,
-			Response: apierr.APIErrorBody{
+			Response: common.APIErrorBody{
 				ErrorCode: "CONFLICT",
 				Message:   "A concurrent request to run the continuous job is already in progress. 
Please wait for it to complete before issuing a new request.", }, @@ -2127,7 +2127,7 @@ func TestResourceJobRead_NotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/jobs/get?job_id=789", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -2148,7 +2148,7 @@ func TestResourceJobRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/jobs/get?job_id=789", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/mlflow/data_mlflow_experiment_test.go b/mlflow/data_mlflow_experiment_test.go index 992448620b..e57bfe9c55 100644 --- a/mlflow/data_mlflow_experiment_test.go +++ b/mlflow/data_mlflow_experiment_test.go @@ -5,9 +5,9 @@ import ( "net/url" "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/ml" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" ) @@ -50,7 +50,7 @@ func TestDataSourceExperimentByIdNotFound(t *testing.T) { Method: "GET", Resource: "/api/2.0/mlflow/experiments/get?experiment_id=0987654321", Status: 404, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: "Node ID 0987654321 does not exist.", }, @@ -107,7 +107,7 @@ func TestDataSourceExperimentByNameNotFound(t *testing.T) { Method: "GET", Resource: fmt.Sprintf("/api/2.0/mlflow/experiments/get-by-name?experiment_name=%s", url.QueryEscape(experimentName)), Status: 404, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: "Node /Users/databricks/non-existent-experiment does not exist.", }, @@ -130,7 +130,7 @@ func TestDataSourceExperimentByNameInvalidPath(t *testing.T) { Method: "GET", Resource: fmt.Sprintf("/api/2.0/mlflow/experiments/get-by-name?experiment_name=%s", url.QueryEscape(experimentName)), Status: 404, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: "Got an invalid experiment name 'invalid_path'. An experiment name must be an absolute path within the Databricks workspace, e.g. '/Users//my-experiment'.", }, diff --git a/mlflow/data_mlflow_model_test.go b/mlflow/data_mlflow_model_test.go index c55db0c117..dd0b1d0bef 100644 --- a/mlflow/data_mlflow_model_test.go +++ b/mlflow/data_mlflow_model_test.go @@ -4,8 +4,8 @@ import ( "fmt" "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/ml" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" ) @@ -59,7 +59,7 @@ func TestDataSourceModelNotFound(t *testing.T) { Method: "GET", Resource: fmt.Sprintf("/api/2.0/mlflow/databricks/registered-models/get?name=%s", modelName), Status: 404, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: fmt.Sprintf("RegisteredModel '%s' does not exist. 
It might have been deleted.", modelName), }, diff --git a/mws/resource_mws_credentials_test.go b/mws/resource_mws_credentials_test.go index dcaa20ddfe..6d33365974 100644 --- a/mws/resource_mws_credentials_test.go +++ b/mws/resource_mws_credentials_test.go @@ -3,8 +3,7 @@ package mws import ( "testing" - "github.com/databricks/databricks-sdk-go/apierr" - + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" @@ -210,7 +209,7 @@ func TestResourceCredentialsCreate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/accounts/abc/credentials", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -266,7 +265,7 @@ func TestResourceCredentialsRead_NotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/accounts/abc/credentials/cid?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -287,7 +286,7 @@ func TestResourceCredentialsRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/accounts/abc/credentials/cid?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -326,7 +325,7 @@ func TestResourceCredentialsDelete_Error(t *testing.T) { { Method: "DELETE", Resource: "/api/2.0/accounts/abc/credentials/cid?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/mws/resource_mws_customer_managed_keys_test.go b/mws/resource_mws_customer_managed_keys_test.go index 48f8c5b05f..1f926dda3e 100644 --- a/mws/resource_mws_customer_managed_keys_test.go +++ b/mws/resource_mws_customer_managed_keys_test.go @@ -4,7 +4,7 @@ import ( "context" "testing" - "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" ) @@ -75,7 +75,7 @@ func TestResourceCustomerManagedKeyCreate_Error(t *testing.T) { }, UseCases: []string{"MANAGED_SERVICE"}, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -160,7 +160,7 @@ func TestResourceCustomerManagedKeyRead_NotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/accounts/abc/customer-managed-keys/cmkid", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ Message: "Invalid endpoint", }, Status: 404, diff --git a/mws/resource_mws_log_delivery_test.go b/mws/resource_mws_log_delivery_test.go index 5bb05af13f..25e4f01b29 100644 --- a/mws/resource_mws_log_delivery_test.go +++ b/mws/resource_mws_log_delivery_test.go @@ -3,7 +3,7 @@ package mws import ( "testing" - "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" @@ -142,7 +142,7 @@ func TestResourceLogDeliveryCreate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/accounts/abc/log-delivery", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -226,7 +226,7 @@ func TestResourceLogDeliveryRead_Error(t *testing.T) { { Method: "GET", Resource: 
"/api/2.0/accounts/abc/log-delivery/nid", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -307,7 +307,7 @@ func TestUpdateLogDeliveryError(t *testing.T) { { Method: "PATCH", Resource: "/api/2.0/accounts/abc/log-delivery/nid", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -369,7 +369,7 @@ func TestResourceLogDeliveryDelete_Error(t *testing.T) { { Method: "PATCH", Resource: "/api/2.0/accounts/abc/log-delivery/nid", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/mws/resource_mws_networks_test.go b/mws/resource_mws_networks_test.go index dc328270cf..0a009b7f94 100644 --- a/mws/resource_mws_networks_test.go +++ b/mws/resource_mws_networks_test.go @@ -3,7 +3,7 @@ package mws import ( "testing" - "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" @@ -202,7 +202,7 @@ func TestResourceNetworkCreate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/accounts/abc/networks", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -259,7 +259,7 @@ func TestResourceNetworkRead_NotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/accounts/abc/networks/nid", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -279,7 +279,7 @@ func TestResourceNetworkRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/accounts/abc/networks/nid", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -315,7 +315,7 @@ func TestResourceNetworkDelete(t *testing.T) { { Method: "GET", Resource: "/api/2.0/accounts/abc/networks/nid", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Yes, it's not found", }, @@ -336,7 +336,7 @@ func TestResourceNetworkDelete_Error(t *testing.T) { { Method: "DELETE", Resource: "/api/2.0/accounts/abc/networks/nid", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/mws/resource_mws_storage_configurations_test.go b/mws/resource_mws_storage_configurations_test.go index c21b46c156..8154570af6 100644 --- a/mws/resource_mws_storage_configurations_test.go +++ b/mws/resource_mws_storage_configurations_test.go @@ -3,7 +3,7 @@ package mws import ( "testing" - "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" ) @@ -54,7 +54,7 @@ func TestResourceStorageConfigurationCreate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/accounts/abc/storage-configurations", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -106,7 +106,7 @@ func TestResourceStorageConfigurationRead_NotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/accounts/abc/storage-configurations/scid", - Response: apierr.APIErrorBody{ + Response: 
common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -126,7 +126,7 @@ func TestResourceStorageConfigurationRead_Error(t *testing.T) { { // read log output for correct url... Method: "GET", Resource: "/api/2.0/accounts/abc/storage-configurations/scid", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -163,7 +163,7 @@ func TestResourceStorageConfigurationDelete_Error(t *testing.T) { { Method: "DELETE", Resource: "/api/2.0/accounts/abc/storage-configurations/scid", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/mws/resource_mws_vpc_endpoint_test.go b/mws/resource_mws_vpc_endpoint_test.go index 0c5dd778e0..c73182f42a 100644 --- a/mws/resource_mws_vpc_endpoint_test.go +++ b/mws/resource_mws_vpc_endpoint_test.go @@ -4,7 +4,7 @@ import ( "context" "testing" - "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" @@ -132,7 +132,7 @@ func TestResourceVPCEndpointCreate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/accounts/abc/vpc-endpoints", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -185,7 +185,7 @@ func TestResourceVPCEndpointRead_NotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/accounts/abc/vpc-endpoints/veid", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -205,7 +205,7 @@ func TestResourceVPCEndpoint_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/accounts/abc/vpc-endpoints/veid", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -240,7 +240,7 @@ func TestResourceVPCEndpointDelete(t *testing.T) { { Method: "GET", Resource: "/api/2.0/accounts/abc/vpc-endpoints/veid", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Yes, it's not found", }, @@ -261,7 +261,7 @@ func TestResourceVPCEndpointDelete_Error(t *testing.T) { { Method: "DELETE", Resource: "/api/2.0/accounts/abc/vpc-endpoints/veid", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/mws/resource_mws_workspaces_test.go b/mws/resource_mws_workspaces_test.go index 25f4c328be..9a69c61f2a 100644 --- a/mws/resource_mws_workspaces_test.go +++ b/mws/resource_mws_workspaces_test.go @@ -199,7 +199,7 @@ func TestResourceWorkspaceCreate_Error_Custom_tags(t *testing.T) { "SoldToCode": "1234", }, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_PARAMETER_VALUE", Message: "custom_tags are only allowed for AWS workspaces", }, @@ -532,7 +532,7 @@ func TestResourceWorkspaceCreate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/accounts/abc/workspaces", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -541,7 +541,7 @@ func TestResourceWorkspaceCreate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/accounts/abc/workspaces", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: 
"INVALID_REQUEST", Message: "Internal error happened", }, @@ -671,7 +671,7 @@ func TestResourceWorkspaceRead_NotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/accounts/abc/workspaces/1234", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -691,7 +691,7 @@ func TestResourceWorkspaceRead_Error(t *testing.T) { { // read log output for correct url... Method: "GET", Resource: "/api/2.0/accounts/abc/workspaces/1234", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -873,7 +873,7 @@ func TestResourceWorkspaceUpdate_Error(t *testing.T) { { Method: "PATCH", Resource: "/api/2.0/accounts/abc/workspaces/1234", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -918,7 +918,7 @@ func TestResourceWorkspaceDelete(t *testing.T) { { Method: "GET", Resource: "/api/2.0/accounts/abc/workspaces/1234", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Cannot find anything", }, @@ -939,7 +939,7 @@ func TestResourceWorkspaceDelete_Error(t *testing.T) { { Method: "DELETE", Resource: "/api/2.0/accounts/abc/workspaces/1234", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/permissions/resource_access_control_rule_set_test.go b/permissions/resource_access_control_rule_set_test.go index 4a55361345..4225cec60c 100644 --- a/permissions/resource_access_control_rule_set_test.go +++ b/permissions/resource_access_control_rule_set_test.go @@ -5,11 +5,11 @@ import ( "net/url" "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/iam" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" ) @@ -231,7 +231,7 @@ func TestResourceRuleSetUpdateConflict(t *testing.T) { }, }, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_CONFLICT", Message: "Conflict with another RuleSet operation", }, diff --git a/permissions/resource_permissions_test.go b/permissions/resource_permissions_test.go index e7a3b2366c..7f856612b5 100644 --- a/permissions/resource_permissions_test.go +++ b/permissions/resource_permissions_test.go @@ -372,7 +372,7 @@ func TestResourcePermissionsRead_NotFound(t *testing.T) { { Method: http.MethodGet, Resource: "/api/2.0/permissions/clusters/abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Cluster does not exist", }, @@ -394,7 +394,7 @@ func TestResourcePermissionsRead_some_error(t *testing.T) { { Method: http.MethodGet, Resource: "/api/2.0/permissions/clusters/abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -460,7 +460,7 @@ func TestResourcePermissionsRead_ErrorOnScimMe(t *testing.T) { { Method: http.MethodGet, Resource: "/api/2.0/preview/scim/v2/Me", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -614,7 +614,7 @@ func TestResourcePermissionsDelete_error(t *testing.T) { }, }, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ 
ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -965,7 +965,7 @@ func TestResourcePermissionsCreate_NotebookPath_NotExists(t *testing.T) { { Method: http.MethodGet, Resource: "/api/2.0/workspace/get-status?path=%2FDevelopment%2FInit", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -1143,7 +1143,7 @@ func TestResourcePermissionsCreate_error(t *testing.T) { { Method: http.MethodPut, Resource: "/api/2.0/permissions/clusters/abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/pipelines/resource_pipeline_test.go b/pipelines/resource_pipeline_test.go index e51578c9c6..4fed6c1404 100644 --- a/pipelines/resource_pipeline_test.go +++ b/pipelines/resource_pipeline_test.go @@ -4,7 +4,7 @@ import ( "context" "testing" - "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" @@ -126,7 +126,7 @@ func TestResourcePipelineCreate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/pipelines", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -177,7 +177,7 @@ func TestResourcePipelineCreate_ErrorWhenWaitingFailedCleanup(t *testing.T) { { Method: "GET", Resource: "/api/2.0/pipelines/abcd", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INTERNAL_ERROR", Message: "Internal error", }, @@ -228,7 +228,7 @@ func TestResourcePipelineCreate_ErrorWhenWaitingSuccessfulCleanup(t *testing.T) { Method: "GET", Resource: "/api/2.0/pipelines/abcd", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: "No such resource", }, @@ -284,7 +284,7 @@ func TestResourcePipelineRead_NotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/pipelines/abcd", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -304,7 +304,7 @@ func TestResourcePipelineRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/pipelines/abcd", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -392,7 +392,7 @@ func TestResourcePipelineUpdate_Error(t *testing.T) { { // read log output for better stub url... 
Method: "PUT", Resource: "/api/2.0/pipelines/abcd", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -499,7 +499,7 @@ func TestResourcePipelineDelete(t *testing.T) { { Method: "GET", Resource: "/api/2.0/pipelines/abcd", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: "No such resource", }, @@ -520,7 +520,7 @@ func TestResourcePipelineDelete_Error(t *testing.T) { { Method: "DELETE", Resource: "/api/2.0/pipelines/abcd?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/policies/resource_cluster_policy_test.go b/policies/resource_cluster_policy_test.go index 178b0d33cf..30c618a26a 100644 --- a/policies/resource_cluster_policy_test.go +++ b/policies/resource_cluster_policy_test.go @@ -3,8 +3,8 @@ package policies import ( "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/compute" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" ) @@ -43,7 +43,7 @@ func TestResourceClusterPolicyRead_NotFound(t *testing.T) { { // read log output for correct url... Method: "GET", Resource: "/api/2.0/policies/clusters/get?policy_id=abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -63,7 +63,7 @@ func TestResourceClusterPolicyRead_Error(t *testing.T) { { // read log output for correct url... Method: "GET", Resource: "/api/2.0/policies/clusters/get?policy_id=abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -244,7 +244,7 @@ func TestResourceClusterPolicyCreateOverrideBuiltin_ErrorListingFamilies(t *test { Method: "GET", Resource: "/api/2.0/policy-families?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -282,7 +282,7 @@ func TestResourceClusterPolicyCreateOverrideBuiltin_ErrorListingPolicies(t *test { Method: "GET", Resource: "/api/2.0/policies/clusters/list?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -322,7 +322,7 @@ func TestResourceClusterPolicyCreate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/policies/clusters/create", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -420,7 +420,7 @@ func TestResourceClusterPolicyUpdate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/policies/clusters/edit", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -502,7 +502,7 @@ func TestResourceClusterPolicyDeletePolicyOverride_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/policy-families?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -526,7 +526,7 @@ func TestResourceClusterPolicyDelete_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/policies/clusters/delete", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: 
"INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/pools/resource_instance_pool_test.go b/pools/resource_instance_pool_test.go index dd9562750c..501e36918e 100644 --- a/pools/resource_instance_pool_test.go +++ b/pools/resource_instance_pool_test.go @@ -3,8 +3,7 @@ package pools import ( "testing" - "github.com/databricks/databricks-sdk-go/apierr" - + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" ) @@ -61,7 +60,7 @@ func TestResourceInstancePoolCreate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/instance-pools/create", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -119,7 +118,7 @@ func TestResourceInstancePoolRead_NotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/instance-pools/get?instance_pool_id=abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -139,7 +138,7 @@ func TestResourceInstancePoolRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/instance-pools/get?instance_pool_id=abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -208,7 +207,7 @@ func TestResourceInstancePoolUpdate_Error(t *testing.T) { { // read log output for better stub url... Method: "POST", Resource: "/api/2.0/instance-pools/edit", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -256,7 +255,7 @@ func TestResourceInstancePoolDelete_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/instance-pools/delete", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/provider/generate_test.go b/provider/generate_test.go index 5c095b5327..4bba20c1f1 100644 --- a/provider/generate_test.go +++ b/provider/generate_test.go @@ -59,7 +59,7 @@ func (stub *resourceTestStub) Reads(t *testing.T) { { // read log output for correct url... Method: "GET", Resource: "/api/2.0/...", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -79,7 +79,7 @@ func (stub *resourceTestStub) Reads(t *testing.T) { { // read log output for correct url... Method: "GET", Resource: "/api/2.0/...", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -120,7 +120,7 @@ func (stub *resourceTestStub) Creates(t *testing.T) { { // read log output for better stub url... Method: "POST", Resource: "/api/2.0/...", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -165,7 +165,7 @@ func (stub *resourceTestStub) Updates(t *testing.T) { { // read log output for better stub url... 
Method: "POST", Resource: "/api/2.0/.../edit", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -212,7 +212,7 @@ func (stub *resourceTestStub) Deletes(t *testing.T) { { Method: "POST", Resource: "/api/2.0/.../delete", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/repos/resource_git_credential_test.go b/repos/resource_git_credential_test.go index 490806b52d..911a48eeb0 100644 --- a/repos/resource_git_credential_test.go +++ b/repos/resource_git_credential_test.go @@ -5,8 +5,8 @@ import ( "net/http" "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/workspace" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" ) @@ -41,7 +41,7 @@ func TestResourceGitCredentialRead_Error(t *testing.T) { { Method: http.MethodGet, Resource: fmt.Sprintf("/api/2.0/git-credentials/%d?", credID), - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: "Git credential with the given ID could not be found.", }, @@ -131,7 +131,7 @@ func TestResourceGitCredentialUpdate_Error(t *testing.T) { GitUsername: user, PersonalAccessToken: token, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: "Git credential with the given ID could not be found.", }, @@ -205,7 +205,7 @@ func TestResourceGitCredentialCreate_Error(t *testing.T) { GitUsername: user, PersonalAccessToken: token, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_STATE", Message: "Only one Git credential is supported at this time. If you would like to update your credential, please use the PATCH endpoint.", }, @@ -241,7 +241,7 @@ func TestResourceGitCredentialCreateWithForce(t *testing.T) { GitUsername: user, PersonalAccessToken: token, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_STATE", Message: "Only one Git credential is supported at this time. If you would like to update your credential, please use the PATCH endpoint.", }, @@ -296,7 +296,7 @@ func TestResourceGitCredentialCreateWithForce_Error_List(t *testing.T) { GitUsername: user, PersonalAccessToken: token, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_STATE", Message: "Only one Git credential is supported at this time. If you would like to update your credential, please use the PATCH endpoint.", }, @@ -305,7 +305,7 @@ func TestResourceGitCredentialCreateWithForce_Error_List(t *testing.T) { { Method: http.MethodGet, Resource: "/api/2.0/git-credentials", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: "No such endpoint", }, @@ -337,7 +337,7 @@ func TestResourceGitCredentialCreateWithForce_ErrorEmptyList(t *testing.T) { GitUsername: user, PersonalAccessToken: token, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_STATE", Message: "Only one Git credential is supported at this time. 
If you would like to update your credential, please use the PATCH endpoint.", }, @@ -379,7 +379,7 @@ func TestResourceGitCredentialCreateWithForce_ErrorUpdate(t *testing.T) { GitUsername: user, PersonalAccessToken: token, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_STATE", Message: "Only one Git credential is supported at this time. If you would like to update your credential, please use the PATCH endpoint.", }, @@ -395,7 +395,7 @@ func TestResourceGitCredentialCreateWithForce_ErrorUpdate(t *testing.T) { { Method: http.MethodPatch, Resource: fmt.Sprintf("/api/2.0/git-credentials/%d", resp.CredentialId), - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: "Git credential with the given ID could not be found.", }, diff --git a/repos/resource_repo_test.go b/repos/resource_repo_test.go index d3346f6975..288d36e39d 100644 --- a/repos/resource_repo_test.go +++ b/repos/resource_repo_test.go @@ -9,7 +9,7 @@ import ( "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" - "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" ) @@ -61,7 +61,7 @@ func TestResourceRepoRead_NotFound(t *testing.T) { { Method: http.MethodGet, Resource: fmt.Sprintf("/api/2.0/repos/%s", repoID), - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: "Repo could not be found", }, @@ -182,7 +182,7 @@ func TestResourceRepoCreateCustomDirectoryError(t *testing.T) { ExpectedRequest: map[string]string{ "path": "/Repos/Production", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/scim/resource_entitlement_test.go b/scim/resource_entitlement_test.go index c69422e522..7f5147ba52 100644 --- a/scim/resource_entitlement_test.go +++ b/scim/resource_entitlement_test.go @@ -3,7 +3,7 @@ package scim import ( "testing" - "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" @@ -178,7 +178,7 @@ func TestResourceEntitlementsGroupRead_Error(t *testing.T) { Method: "GET", Resource: "/api/2.0/preview/scim/v2/Groups/abc?attributes=entitlements", Status: 400, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ScimDetail: "Something", ScimStatus: "Else", }, @@ -273,7 +273,7 @@ func TestResourceEntitlementsGroupDeleteEmptyEntitlement(t *testing.T) { Method: "PATCH", Resource: "/api/2.0/preview/scim/v2/Groups/abc", ExpectedRequest: deleteRequest, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_PATH", Message: "invalidPath No such attribute with the name : entitlements in the current resource", }, @@ -420,7 +420,7 @@ func TestResourceEntitlementsUserRead_Error(t *testing.T) { Method: "GET", Resource: "/api/2.0/preview/scim/v2/Users/abc?attributes=entitlements", Status: 400, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ScimDetail: "Something", ScimStatus: "Else", }, @@ -441,7 +441,7 @@ func TestResourceEntitlementsUserUpdate_Error(t *testing.T) { Method: "GET", Resource: "/api/2.0/preview/scim/v2/Users/abc?attributes=entitlements", Status: 400, - Response: apierr.APIErrorBody{ + Response: 
common.APIErrorBody{ ScimDetail: "Something", ScimStatus: "Else", }, @@ -451,7 +451,7 @@ func TestResourceEntitlementsUserUpdate_Error(t *testing.T) { Resource: "/api/2.0/preview/scim/v2/Users/abc", ExpectedRequest: updateRequest, Status: 400, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ScimDetail: "Something", ScimStatus: "Else", }, @@ -630,7 +630,7 @@ func TestResourceEntitlementsSPNRead_Error(t *testing.T) { Method: "GET", Resource: "/api/2.0/preview/scim/v2/ServicePrincipals/abc?attributes=entitlements", Status: 400, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ScimDetail: "Something", ScimStatus: "Else", }, diff --git a/scim/resource_group_member_test.go b/scim/resource_group_member_test.go index d7b2351e1a..338174ce08 100644 --- a/scim/resource_group_member_test.go +++ b/scim/resource_group_member_test.go @@ -3,7 +3,7 @@ package scim import ( "testing" - "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" ) @@ -51,7 +51,7 @@ func TestResourceGroupMemberCreate_Error(t *testing.T) { { Method: "PATCH", Resource: "/api/2.0/preview/scim/v2/Groups/abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -121,7 +121,7 @@ func TestResourceGroupMemberRead_NotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/preview/scim/v2/Groups/abc?attributes=members", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -141,7 +141,7 @@ func TestResourceGroupMemberRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/preview/scim/v2/Groups/abc?attributes=members", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -181,7 +181,7 @@ func TestResourceGroupMemberDelete_Error(t *testing.T) { { Method: "PATCH", Resource: "/api/2.0/preview/scim/v2/Groups/abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/scim/resource_group_role_test.go b/scim/resource_group_role_test.go index 968444175d..788d0a5750 100644 --- a/scim/resource_group_role_test.go +++ b/scim/resource_group_role_test.go @@ -3,7 +3,7 @@ package scim import ( "testing" - "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" ) @@ -48,7 +48,7 @@ func TestResourceGroupRoleCreate_Error(t *testing.T) { { Method: "PATCH", Resource: "/api/2.0/preview/scim/v2/Groups/abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -114,7 +114,7 @@ func TestResourceGroupRoleRead_NotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/preview/scim/v2/Groups/abc?attributes=roles", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -134,7 +134,7 @@ func TestResourceGroupRoleRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/preview/scim/v2/Groups/abc?attributes=roles", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -170,7 +170,7 @@ func 
TestResourceGroupRoleDelete_Error(t *testing.T) { { Method: "PATCH", Resource: "/api/2.0/preview/scim/v2/Groups/abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/scim/resource_group_test.go b/scim/resource_group_test.go index 1e7081360e..0b8a496514 100644 --- a/scim/resource_group_test.go +++ b/scim/resource_group_test.go @@ -82,7 +82,7 @@ func TestResourceGroupCreate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/preview/scim/v2/Groups", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -164,7 +164,7 @@ func TestResourceGroupRead_NotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/preview/scim/v2/Groups/abc?attributes=displayName,externalId,entitlements", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -184,7 +184,7 @@ func TestResourceGroupRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/preview/scim/v2/Groups/abc?attributes=displayName,externalId,entitlements", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -300,7 +300,7 @@ func TestResourceGroupUpdate_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/preview/scim/v2/Groups/abc?attributes=displayName,entitlements,groups,members,externalId", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -338,7 +338,7 @@ func TestResourceGroupDelete_Error(t *testing.T) { { Method: "DELETE", Resource: "/api/2.0/preview/scim/v2/Groups/abc", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/scim/resource_service_principal_test.go b/scim/resource_service_principal_test.go index cb21afa3d8..f27c05cf75 100644 --- a/scim/resource_service_principal_test.go +++ b/scim/resource_service_principal_test.go @@ -73,7 +73,7 @@ func TestResourceServicePrincipalRead_Error(t *testing.T) { Method: "GET", Resource: "/api/2.0/preview/scim/v2/ServicePrincipals/abc?attributes=userName,displayName,active,externalId,entitlements", Status: 400, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ScimDetail: "Something", ScimStatus: "Else", }, @@ -474,7 +474,7 @@ func TestResourceServicePrincipalDelete_NonExistingRepo(t *testing.T) { Path: "/Repos/abc", Recursive: true, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: "Path (/Repos/abc) doesn't exist.", }, @@ -533,7 +533,7 @@ func TestResourceServicePrincipalDelete_NonExistingDir(t *testing.T) { Path: "/Users/abc", Recursive: true, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: "Path (/Users/abc) doesn't exist.", }, diff --git a/scim/resource_user_test.go b/scim/resource_user_test.go index 93257b236a..fa0d61220d 100644 --- a/scim/resource_user_test.go +++ b/scim/resource_user_test.go @@ -5,7 +5,6 @@ import ( "fmt" "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/workspace" @@ -76,7 +75,7 @@ func TestResourceUserRead_Error(t *testing.T) { Method: "GET", Resource: 
"/api/2.0/preview/scim/v2/Users/abc?attributes=userName,displayName,active,externalId,entitlements", Status: 400, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ScimDetail: "Something", ScimStatus: "Else", }, @@ -515,7 +514,7 @@ func TestResourceUserDelete_NonExistingRepo(t *testing.T) { Path: "/Repos/abc", Recursive: true, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: "Path (/Repos/abc) doesn't exist.", }, @@ -573,7 +572,7 @@ func TestResourceUserDelete_NonExistingDir(t *testing.T) { Path: "/Users/abc", Recursive: true, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: "Path (/Users/abc) doesn't exist.", }, @@ -632,7 +631,7 @@ func TestCreateForceOverwriteCannotListUsers(t *testing.T) { Method: "GET", Resource: "/api/2.0/preview/scim/v2/Users?excludedAttributes=roles&filter=userName%20eq%20%22me%40example.com%22", Status: 417, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ Message: "cannot find user", }, }, diff --git a/secrets/resource_secret_acl_test.go b/secrets/resource_secret_acl_test.go index 10f4f6da56..e70475890c 100644 --- a/secrets/resource_secret_acl_test.go +++ b/secrets/resource_secret_acl_test.go @@ -3,8 +3,8 @@ package secrets import ( "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/workspace" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" ) @@ -37,7 +37,7 @@ func TestResourceSecretACLRead_NotFound(t *testing.T) { { Method: "GET", Resource: "/api/2.0/secrets/acls/get?principal=something&scope=global", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -57,7 +57,7 @@ func TestResourceSecretACLRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/secrets/acls/get?principal=something&scope=global", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -142,7 +142,7 @@ func TestResourceSecretACLCreate_Error(t *testing.T) { { // read log output for better stub url... 
Method: "POST", Resource: "/api/2.0/secrets/acls/put", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -187,7 +187,7 @@ func TestResourceSecretACLDelete_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/secrets/acls/delete", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/secrets/resource_secret_scope_test.go b/secrets/resource_secret_scope_test.go index 4fd0702d27..455aac6d77 100644 --- a/secrets/resource_secret_scope_test.go +++ b/secrets/resource_secret_scope_test.go @@ -4,8 +4,8 @@ import ( "net/http" "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/workspace" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" @@ -104,7 +104,7 @@ func TestResourceSecretScopeRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/secrets/scopes/list", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -244,7 +244,7 @@ func TestResourceSecretScopeCreate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/secrets/scopes/create", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -287,7 +287,7 @@ func TestResourceSecretScopeDelete_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/secrets/scopes/delete", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/secrets/resource_secret_test.go b/secrets/resource_secret_test.go index 09f87b8329..3ab58b7722 100644 --- a/secrets/resource_secret_test.go +++ b/secrets/resource_secret_test.go @@ -3,8 +3,8 @@ package secrets import ( "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/workspace" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" ) @@ -67,7 +67,7 @@ func TestResourceSecretRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/secrets/list?scope=foo", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -125,7 +125,7 @@ func TestResourceSecretCreate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/secrets/put", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -170,7 +170,7 @@ func TestResourceSecretDelete_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/secrets/delete", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/serving/resource_model_serving_test.go b/serving/resource_model_serving_test.go index bcf572ed34..fc494b2f35 100644 --- a/serving/resource_model_serving_test.go +++ b/serving/resource_model_serving_test.go @@ -4,8 +4,8 @@ import ( "net/http" "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/serving" + 
"github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" ) @@ -277,7 +277,7 @@ func TestModelServingCreate_Error(t *testing.T) { { Method: http.MethodPost, Resource: "/api/2.0/serving-endpoints", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -335,7 +335,7 @@ func TestModelServingCreate_WithErrorOnWait(t *testing.T) { { Method: http.MethodGet, Resource: "/api/2.0/serving-endpoints/test-endpoint?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -449,7 +449,7 @@ func TestModelServingRead_Error(t *testing.T) { { Method: http.MethodGet, Resource: "/api/2.0/serving-endpoints/test-endpoint?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -565,7 +565,7 @@ func TestModelServingUpdate_Error(t *testing.T) { { Method: http.MethodPut, Resource: "/api/2.0/serving-endpoints/test-endpoint/config", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -620,7 +620,7 @@ func TestModelServingDelete_Error(t *testing.T) { { Method: http.MethodDelete, Resource: "/api/2.0/serving-endpoints/test-endpoint?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/sharing/resource_recipient_test.go b/sharing/resource_recipient_test.go index c2cfadcfe3..40cb8ffd68 100644 --- a/sharing/resource_recipient_test.go +++ b/sharing/resource_recipient_test.go @@ -4,8 +4,8 @@ import ( "net/http" "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/sharing" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" ) @@ -329,7 +329,7 @@ func TestUpdateRecipientRollback(t *testing.T) { ExpectedRequest: sharing.UpdateRecipient{ Comment: "e", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "SERVER_ERROR", Message: "Something unexpected happened", }, @@ -402,7 +402,7 @@ func TestDeleteRecipientError(t *testing.T) { { Method: http.MethodDelete, Resource: "/api/2.1/unity-catalog/recipients/testRecipient?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_STATE", Message: "Something went wrong", }, diff --git a/sharing/resource_share_test.go b/sharing/resource_share_test.go index 3967ac52b7..7d03f76089 100644 --- a/sharing/resource_share_test.go +++ b/sharing/resource_share_test.go @@ -5,8 +5,8 @@ import ( "github.com/stretchr/testify/assert" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/sharing" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" ) @@ -426,7 +426,7 @@ func TestUpdateShareRollback(t *testing.T) { }, }, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "SERVER_ERROR", Message: "Something unexpected happened", }, @@ -557,7 +557,7 @@ func TestCreateShare_ThrowError(t *testing.T) { ExpectedRequest: ShareInfo{ Name: "a", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error 
happened", }, @@ -620,7 +620,7 @@ func TestCreateShareButPatchFails(t *testing.T) { }, }, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/storage/resource_file_test.go b/storage/resource_file_test.go index 25dd748802..dea290c509 100644 --- a/storage/resource_file_test.go +++ b/storage/resource_file_test.go @@ -4,8 +4,8 @@ import ( "net/http" "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/files" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" ) @@ -94,7 +94,7 @@ func TestResourceFileCreate_Error(t *testing.T) { { Method: http.MethodPut, Resource: "/api/2.0/fs/files/Volumes/CatalogName/SchemaName/VolumeName/fileName", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -185,7 +185,7 @@ func TestResourceFileDelete_Error(t *testing.T) { { Method: "DELETE", Resource: "/api/2.0/fs/files/Volumes/CatalogName/SchemaName/VolumeName/fileName?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/tokens/resource_token_test.go b/tokens/resource_token_test.go index d67275ee77..81fb542ddf 100644 --- a/tokens/resource_token_test.go +++ b/tokens/resource_token_test.go @@ -5,6 +5,7 @@ import ( "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" ) @@ -71,7 +72,7 @@ func TestResourceTokenRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/token/list", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -137,7 +138,7 @@ func TestResourceTokenCreate_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/token/create", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -238,7 +239,7 @@ func TestResourceTokenDelete_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/token/delete", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/workspace/resource_directory_test.go b/workspace/resource_directory_test.go index b34f5965a3..d11c73d4dc 100644 --- a/workspace/resource_directory_test.go +++ b/workspace/resource_directory_test.go @@ -6,7 +6,7 @@ import ( "net/url" "testing" - "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" @@ -70,7 +70,7 @@ func TestResourceDirectoryDelete_NotFound(t *testing.T) { Method: http.MethodPost, Resource: "/api/2.0/workspace/delete", ExpectedRequest: DeletePath{Path: path, Recursive: delete_recursive}, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: "Path (/test/path) doesn't exist.", }, @@ -96,7 +96,7 @@ func TestResourceDirectoryRead_NotFound(t *testing.T) { { // read log output for correct url... 
Method: "GET", Resource: fmt.Sprintf("/api/2.0/workspace/get-status?path=%s", url.PathEscape(path)), - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -117,7 +117,7 @@ func TestResourceDirectoryRead_Error(t *testing.T) { { Method: "GET", Resource: fmt.Sprintf("/api/2.0/workspace/get-status?path=%s", url.PathEscape(path)), - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -175,7 +175,7 @@ func TestResourceDirectoryCreate_Error(t *testing.T) { ExpectedRequest: map[string]string{ "path": path, }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -200,7 +200,7 @@ func TestResourceDirectoryDelete_Error(t *testing.T) { Method: "POST", Resource: "/api/2.0/workspace/delete", ExpectedRequest: DeletePath{Path: path, Recursive: false}, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/workspace/resource_global_init_script_test.go b/workspace/resource_global_init_script_test.go index 57a0290270..a29a4bf497 100644 --- a/workspace/resource_global_init_script_test.go +++ b/workspace/resource_global_init_script_test.go @@ -6,8 +6,8 @@ import ( "strings" "testing" - "github.com/databricks/databricks-sdk-go/apierr" "github.com/databricks/databricks-sdk-go/service/compute" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" @@ -70,7 +70,7 @@ func TestResourceGlobalInitScriptRead_NotFound(t *testing.T) { { // read log output for correct url... Method: "GET", Resource: "/api/2.0/global-init-scripts/1234?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "RESOURCE_DOES_NOT_EXIST", Message: "The global unit script with ID 1234 does not exist.", }, diff --git a/workspace/resource_notebook_test.go b/workspace/resource_notebook_test.go index 2d06bda4e5..938b119ead 100644 --- a/workspace/resource_notebook_test.go +++ b/workspace/resource_notebook_test.go @@ -4,7 +4,7 @@ import ( "net/http" "testing" - "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" @@ -63,7 +63,7 @@ func TestResourceNotebookRead_NotFound(t *testing.T) { { // read log output for correct url... 
Method: "GET", Resource: "/api/2.0/workspace/get-status?path=%2Ftest%2Fpath", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -83,7 +83,7 @@ func TestResourceNotebookRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/workspace/get-status?path=%2Ftest%2Fpath", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -225,7 +225,7 @@ func TestResourceNotebookCreate_DirectoryCreateError(t *testing.T) { ExpectedRequest: map[string]string{ "path": "/foo", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -357,7 +357,7 @@ func TestResourceNotebookCreate_Error(t *testing.T) { { Method: http.MethodPost, Resource: "/api/2.0/workspace/import", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -382,7 +382,7 @@ func TestResourceNotebookDelete_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/workspace/delete", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, diff --git a/workspace/resource_workspace_conf_test.go b/workspace/resource_workspace_conf_test.go index 87d7c358d5..b8a87ce877 100644 --- a/workspace/resource_workspace_conf_test.go +++ b/workspace/resource_workspace_conf_test.go @@ -4,7 +4,7 @@ import ( "net/http" "testing" - "github.com/databricks/databricks-sdk-go/apierr" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" ) @@ -47,7 +47,7 @@ func TestWorkspaceConfCreate_Error(t *testing.T) { ExpectedRequest: map[string]string{ "enableIpAccessLists": "true", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -120,7 +120,7 @@ func TestWorkspaceConfUpdate_Error(t *testing.T) { ExpectedRequest: map[string]string{ "enableIpAccessLists": "true", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -159,7 +159,7 @@ func TestWorkspaceConfRead_Error(t *testing.T) { { Method: http.MethodGet, Resource: "/api/2.0/workspace-conf?", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -208,7 +208,7 @@ func TestWorkspaceConfDelete_Error(t *testing.T) { { Method: http.MethodPatch, Resource: "/api/2.0/workspace-conf", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -230,7 +230,7 @@ func TestWorkspaceConfUpdateOnInvalidConf(t *testing.T) { Method: http.MethodPatch, Resource: "/api/2.0/workspace-conf", Status: 400, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "some-invalid-conf is an invalid config key", }, diff --git a/workspace/resource_workspace_file_test.go b/workspace/resource_workspace_file_test.go index db8f625a81..92518f00cf 100644 --- a/workspace/resource_workspace_file_test.go +++ b/workspace/resource_workspace_file_test.go @@ -4,8 +4,8 @@ import ( "net/http" "testing" - "github.com/databricks/databricks-sdk-go/apierr" ws_api 
"github.com/databricks/databricks-sdk-go/service/workspace" + "github.com/databricks/terraform-provider-databricks/common" "github.com/databricks/terraform-provider-databricks/qa" "github.com/stretchr/testify/assert" @@ -68,7 +68,7 @@ func TestResourceWorkspaceFileRead_NotFound(t *testing.T) { { // read log output for correct url... Method: "GET", Resource: "/api/2.0/workspace/get-status?path=%2Ftest%2Fpath", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "NOT_FOUND", Message: "Item not found", }, @@ -88,7 +88,7 @@ func TestResourceWorkspaceFileRead_Error(t *testing.T) { { Method: "GET", Resource: "/api/2.0/workspace/get-status?path=%2Ftest%2Fpath", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -223,7 +223,7 @@ func TestResourceWorkspaceFileCreate_DirectoryCreateError(t *testing.T) { ExpectedRequest: map[string]string{ "path": "/foo", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -338,7 +338,7 @@ func TestResourceWorkspaceFileCreate_Error(t *testing.T) { "overwrite": true, "path": "/path.py", }, - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, @@ -362,7 +362,7 @@ func TestResourceWorkspaceFileDelete_Error(t *testing.T) { { Method: "POST", Resource: "/api/2.0/workspace/delete", - Response: apierr.APIErrorBody{ + Response: common.APIErrorBody{ ErrorCode: "INVALID_REQUEST", Message: "Internal error happened", }, From 77fc0b42cf28a08a070135395de9b96ca2330c19 Mon Sep 17 00:00:00 2001 From: Tanmay Rustagi <88379306+tanmay-db@users.noreply.github.com> Date: Tue, 9 Jul 2024 13:23:56 +0200 Subject: [PATCH 20/24] Upgrade databricks-sdk-go (#3743) --- go.mod | 2 +- go.sum | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/go.mod b/go.mod index 34cef45274..17810585ed 100644 --- a/go.mod +++ b/go.mod @@ -3,7 +3,7 @@ module github.com/databricks/terraform-provider-databricks go 1.22 require ( - github.com/databricks/databricks-sdk-go v0.43.0 + github.com/databricks/databricks-sdk-go v0.43.1 github.com/golang-jwt/jwt/v4 v4.5.0 github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320 github.com/hashicorp/hcl v1.0.0 diff --git a/go.sum b/go.sum index f984b87658..46e14aec96 100644 --- a/go.sum +++ b/go.sum @@ -26,8 +26,8 @@ github.com/cloudflare/circl v1.3.7/go.mod h1:sRTcRWXGLrKw6yIGJ+l7amYJFfAXbZG0kBS github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg= github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4= -github.com/databricks/databricks-sdk-go v0.43.0 h1:x4laolWhYlsQg2t8yWEGyRPZy4/Wv3pKnLEoJfVin7I= -github.com/databricks/databricks-sdk-go v0.43.0/go.mod h1:a9rr0FOHLL26kOjQjZZVFjIYmRABCbrAWVeundDEVG8= +github.com/databricks/databricks-sdk-go v0.43.1 h1:JJJ0S5yiDLQF8dzo6V1O2jKsOAkULtNqrnmFcvHstLg= +github.com/databricks/databricks-sdk-go v0.43.1/go.mod h1:nlzeOEgJ1Tmb5HyknBJ3GEorCZKWqEBoHprvPmTSNq8= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= From 
c16345e7502a0c0b8a2de69d649f2c6176098e5b Mon Sep 17 00:00:00 2001 From: hectorcast-db Date: Tue, 9 Jul 2024 14:40:52 +0200 Subject: [PATCH 21/24] [Internal] Improve Changelog by grouping changes (#3747) * [Internal] Improve Changelog by grouping changes * Fixes --- .codegen.json | 1 + .codegen/changelog.md.tmpl | 18 ++++-------------- .codegen/changelog_config.yml | 15 +++++++++++++++ .github/dependabot.yml | 2 ++ .github/workflows/push.yml | 19 +++++++++++++++++++ 5 files changed, 41 insertions(+), 14 deletions(-) create mode 100644 .codegen/changelog_config.yml diff --git a/.codegen.json b/.codegen.json index fd308f7ef7..3e2cd7a6a5 100644 --- a/.codegen.json +++ b/.codegen.json @@ -1,5 +1,6 @@ { "formatter": "make fmt", + "changelog_config": ".codegen/changelog_config.yml", "version": { "common/version.go": "version = \"$VERSION\"" }, diff --git a/.codegen/changelog.md.tmpl b/.codegen/changelog.md.tmpl index fdb4bcc388..290ea71170 100644 --- a/.codegen/changelog.md.tmpl +++ b/.codegen/changelog.md.tmpl @@ -1,22 +1,12 @@ # Version changelog ## {{.Version}} +{{- range .GroupChanges}} -### New Features and Improvements -{{range .Changes -}} +### {{.Type.Message}} +{{range .Changes}} * {{.}}. -{{end}} - -### Documentation Changes - -### Exporter - -### Internal Changes -{{if .DependencyUpdates}} -Dependency updates: -{{range .DependencyUpdates}} - * {{.}}. -{{- end -}} +{{- end}} {{end}} ## {{.PrevVersion}} \ No newline at end of file diff --git a/.codegen/changelog_config.yml b/.codegen/changelog_config.yml new file mode 100644 index 0000000000..ee86db8a7a --- /dev/null +++ b/.codegen/changelog_config.yml @@ -0,0 +1,15 @@ +change_types: + - message: New Features and Improvements + tag: "[Feature]" + - message: Bug Fixes + tag: "[Fix]" + - message: Documentation + tag: "[Doc]" + - message: Internal Changes + tag: "[Internal]" + - message: Dependency Updates + tag: "[Dependency]" + - message: Exporter + tag: "[Exporter]" + # Default for messages without a tag + - message: Other Changes \ No newline at end of file diff --git a/.github/dependabot.yml b/.github/dependabot.yml index 3938344a7a..53ce671f4c 100644 --- a/.github/dependabot.yml +++ b/.github/dependabot.yml @@ -4,3 +4,5 @@ updates: directory: "/" schedule: interval: "daily" + commit-message: + prefix: "[Dependency] " \ No newline at end of file diff --git a/.github/workflows/push.yml b/.github/workflows/push.yml index 4a6f8892d4..354747f208 100644 --- a/.github/workflows/push.yml +++ b/.github/workflows/push.yml @@ -54,3 +54,22 @@ jobs: run: | # Exit with status code 1 if there are differences (i.e. 
unformatted files) git diff --exit-code + + commit-message: + runs-on: ubuntu-latest + if: ${{ github.event_name == 'pull_request' }} + steps: + - name: Checkout + uses: actions/checkout@v3 + with: + fetch-depth: 0 + + - name: Validate Tag + run: | + TAG=$(echo ${{ github.event.pull_request.title }} | sed -ne 's/\[\(.*\)\].*/\1/p') + if grep -q "tag: \"[$TAG]\"" .codegen/changelog_config.yml; then + echo "Invalid or missing tag in commit message: [$TAG]" + exit 1 + else + echo "Valid tag found: [$TAG]" + fi \ No newline at end of file From 47b84b5b101543489b71b013a13fe3a889f0f207 Mon Sep 17 00:00:00 2001 From: hectorcast-db Date: Tue, 9 Jul 2024 16:55:34 +0200 Subject: [PATCH 22/24] [Internal] Add Release tag (#3748) --- .codegen/changelog_config.yml | 3 +++ .github/workflows/push.yml | 6 +++--- 2 files changed, 6 insertions(+), 3 deletions(-) diff --git a/.codegen/changelog_config.yml b/.codegen/changelog_config.yml index ee86db8a7a..5f6dc40ff9 100644 --- a/.codegen/changelog_config.yml +++ b/.codegen/changelog_config.yml @@ -11,5 +11,8 @@ change_types: tag: "[Dependency]" - message: Exporter tag: "[Exporter]" + # Does not appear in the Changelog. Only for PR validation. + - message: Release + tag: "[Release]" # Default for messages without a tag - message: Other Changes \ No newline at end of file diff --git a/.github/workflows/push.yml b/.github/workflows/push.yml index 354747f208..0936a2807d 100644 --- a/.github/workflows/push.yml +++ b/.github/workflows/push.yml @@ -67,9 +67,9 @@ jobs: - name: Validate Tag run: | TAG=$(echo ${{ github.event.pull_request.title }} | sed -ne 's/\[\(.*\)\].*/\1/p') - if grep -q "tag: \"[$TAG]\"" .codegen/changelog_config.yml; then + if grep -q "tag: \"\[$TAG\]\"" .codegen/changelog_config.yml; then + echo "Valid tag found: [$TAG]" + else echo "Invalid or missing tag in commit message: [$TAG]" exit 1 - else - echo "Valid tag found: [$TAG]" fi \ No newline at end of file From 364f9ec1053026a96f4bdcecd1b559cf01d43d42 Mon Sep 17 00:00:00 2001 From: Tanmay Rustagi <88379306+tanmay-db@users.noreply.github.com> Date: Tue, 9 Jul 2024 18:02:11 +0200 Subject: [PATCH 23/24] [Internal] Upgrade Go SDK to v0.43.2 (#3750) --- go.mod | 2 +- go.sum | 2 ++ 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/go.mod b/go.mod index 17810585ed..1947785570 100644 --- a/go.mod +++ b/go.mod @@ -3,7 +3,7 @@ module github.com/databricks/terraform-provider-databricks go 1.22 require ( - github.com/databricks/databricks-sdk-go v0.43.1 + github.com/databricks/databricks-sdk-go v0.43.2 github.com/golang-jwt/jwt/v4 v4.5.0 github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320 github.com/hashicorp/hcl v1.0.0 diff --git a/go.sum b/go.sum index 46e14aec96..63534b76fb 100644 --- a/go.sum +++ b/go.sum @@ -28,6 +28,8 @@ github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53E github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4= github.com/databricks/databricks-sdk-go v0.43.1 h1:JJJ0S5yiDLQF8dzo6V1O2jKsOAkULtNqrnmFcvHstLg= github.com/databricks/databricks-sdk-go v0.43.1/go.mod h1:nlzeOEgJ1Tmb5HyknBJ3GEorCZKWqEBoHprvPmTSNq8= +github.com/databricks/databricks-sdk-go v0.43.2 h1:4B+sHAYO5kFqwZNQRmsF70eecqsFX6i/0KfXoDFQT/E= +github.com/databricks/databricks-sdk-go v0.43.2/go.mod h1:nlzeOEgJ1Tmb5HyknBJ3GEorCZKWqEBoHprvPmTSNq8= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= 
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= From 8a7a8f00df4b8eaec5c428f8edbcee117edaafc8 Mon Sep 17 00:00:00 2001 From: vuong-nguyen <44292934+nkvuong@users.noreply.github.com> Date: Tue, 9 Jul 2024 17:19:00 +0100 Subject: [PATCH 24/24] fixed broken links in documentation (#3746) --- docs/data-sources/mlflow_experiment.md | 2 +- docs/data-sources/mws_credentials.md | 10 +++++----- docs/data-sources/mws_workspaces.md | 2 +- docs/data-sources/service_principals.md | 4 ++-- docs/data-sources/sql_warehouse.md | 10 +++++----- docs/data-sources/sql_warehouses.md | 14 +++++++------- docs/index.md | 2 +- docs/resources/instance_pool.md | 2 +- docs/resources/mws_network_connectivity_config.md | 2 +- docs/resources/mws_networks.md | 4 ++-- docs/resources/mws_private_access_settings.md | 2 +- docs/resources/mws_workspaces.md | 2 +- docs/resources/registered_model.md | 5 ++--- docs/resources/sql_endpoint.md | 2 +- 14 files changed, 31 insertions(+), 32 deletions(-) diff --git a/docs/data-sources/mlflow_experiment.md b/docs/data-sources/mlflow_experiment.md index 342b3e36aa..685f569aaf 100644 --- a/docs/data-sources/mlflow_experiment.md +++ b/docs/data-sources/mlflow_experiment.md @@ -3,7 +3,7 @@ subcategory: "MLflow" --- # databricks_mlflow_experiment Data Source --> **Note** If you have a fully automated setup with workspaces created by [databricks_mws_workspaces](../resources/mws_workspaces.md) or [azurerm_databricks_workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/databricks_workspace), please make sure to add [depends_on attribute](../index.md#data-resources-and-authentication-is-not-configured-errors) in order to prevent _default auth: cannot configure default credentials_ errors. +-> **Note** If you have a fully automated setup with workspaces created by [databricks_mws_workspaces](../resources/mws_workspaces.md) or [azurerm_databricks_workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/databricks_workspace), please make sure to add [depends_on attribute](../guides/troubleshooting.md#data-resources-and-authentication-is-not-configured-errors) in order to prevent _default auth: cannot configure default credentials_ errors. Retrieves the settings of [databricks_mlflow_experiment](../resources/mlflow_experiment.md) by id or name. diff --git a/docs/data-sources/mws_credentials.md b/docs/data-sources/mws_credentials.md index 5184bc1e04..9c1fe6dbba 100755 --- a/docs/data-sources/mws_credentials.md +++ b/docs/data-sources/mws_credentials.md @@ -39,8 +39,8 @@ This data source exports the following attributes: The following resources are used in the same context: * [Provisioning Databricks on AWS](../guides/aws-workspace.md) guide. -* [databricks_mws_customer_managed_keys](mws_customer_managed_keys.md) to configure KMS keys for new workspaces within AWS. -* [databricks_mws_log_delivery](mws_log_delivery.md) to configure delivery of [billable usage logs](https://docs.databricks.com/administration-guide/account-settings/billable-usage-delivery.html) and [audit logs](https://docs.databricks.com/administration-guide/account-settings/audit-logs.html). -* [databricks_mws_networks](mws_networks.md) to [configure VPC](https://docs.databricks.com/administration-guide/cloud-configurations/aws/customer-managed-vpc.html) & subnets for new workspaces within AWS. -* [databricks_mws_storage_configurations](mws_storage_configurations.md) to configure root bucket new workspaces within AWS. 
-* [databricks_mws_workspaces](mws_workspaces.md) to set up [AWS and GCP workspaces](https://docs.databricks.com/getting-started/overview.html#e2-architecture-1). +* [databricks_mws_customer_managed_keys](../resources/mws_customer_managed_keys.md) to configure KMS keys for new workspaces within AWS. +* [databricks_mws_log_delivery](../resources/mws_log_delivery.md) to configure delivery of [billable usage logs](https://docs.databricks.com/administration-guide/account-settings/billable-usage-delivery.html) and [audit logs](https://docs.databricks.com/administration-guide/account-settings/audit-logs.html). +* [databricks_mws_networks](../resources/mws_networks.md) to [configure VPC](https://docs.databricks.com/administration-guide/cloud-configurations/aws/customer-managed-vpc.html) & subnets for new workspaces within AWS. +* [databricks_mws_storage_configurations](../resources/mws_storage_configurations.md) to configure root bucket new workspaces within AWS. +* [databricks_mws_workspaces](../resources/mws_workspaces.md) to set up [AWS and GCP workspaces](https://docs.databricks.com/getting-started/overview.html#e2-architecture-1). diff --git a/docs/data-sources/mws_workspaces.md b/docs/data-sources/mws_workspaces.md index 1b3e8ab3aa..434e919c3e 100755 --- a/docs/data-sources/mws_workspaces.md +++ b/docs/data-sources/mws_workspaces.md @@ -39,4 +39,4 @@ This data source exports the following attributes: The following resources are used in the same context: * [databricks_mws_workspaces](../resources/mws_workspaces.md) to manage Databricks Workspaces on AWS and GCP. -* [databricks_metastore_assignment](../resources/metastore_assignment.md) to assign [databricks_metastore](docs/resources/metastore.md) to [databricks_mws_workspaces](../resources/mws_workspaces.md) or [azurerm_databricks_workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/databricks_workspace) +* [databricks_metastore_assignment](../resources/metastore_assignment.md) to assign [databricks_metastore](../resources/metastore.md) to [databricks_mws_workspaces](../resources/mws_workspaces.md) or [azurerm_databricks_workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/databricks_workspace) diff --git a/docs/data-sources/service_principals.md b/docs/data-sources/service_principals.md index 457e0f98bc..08fdccded8 100644 --- a/docs/data-sources/service_principals.md +++ b/docs/data-sources/service_principals.md @@ -37,13 +37,13 @@ resource "databricks_group_member" "my_member_spn" { Data source allows you to pick service principals by the following attributes -- `display_name_contains` - (Optional) Only return [databricks_service_principal](databricks_service_principal.md) display name that match the given name string +- `display_name_contains` - (Optional) Only return [databricks_service_principal](service_principal.md) display name that match the given name string ## Attribute Reference Data source exposes the following attributes: -- `application_ids` - List of `application_ids` of service principals Individual service principal can be retrieved using [databricks_service_principal](databricks_service_principal.md) data source +- `application_ids` - List of `application_ids` of service principals Individual service principal can be retrieved using [databricks_service_principal](service_principal.md) data source ## Related Resources diff --git a/docs/data-sources/sql_warehouse.md b/docs/data-sources/sql_warehouse.md index e601a183cd..b930545f89 100644 --- 
a/docs/data-sources/sql_warehouse.md +++ b/docs/data-sources/sql_warehouse.md @@ -5,7 +5,7 @@ subcategory: "Databricks SQL" -> **Note** If you have a fully automated setup with workspaces created by [databricks_mws_workspaces](../resources/mws_workspaces.md) or [azurerm_databricks_workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/databricks_workspace), please make sure to add [depends_on attribute](../guides/troubleshooting.md#data-resources-and-authentication-is-not-configured-errors) in order to prevent _default auth: cannot configure default credentials_ errors. -Retrieves information about a [databricks_sql_warehouse](../resources/sql_warehouse.md) using its id. This could be retrieved programmatically using [databricks_sql_warehouses](../data-sources/sql_warehouses.md) data source. +Retrieves information about a [databricks_sql_warehouse](../resources/sql_endpoint.md) using its id. This could be retrieved programmatically using [databricks_sql_warehouses](../data-sources/sql_warehouses.md) data source. ## Example usage @@ -70,7 +70,7 @@ This data source exports the following attributes: The following resources are often used in the same context: * [End to end workspace management](../guides/workspace-management.md) guide. -* [databricks_instance_profile](instance_profile.md) to manage AWS EC2 instance profiles that users can launch [databricks_cluster](cluster.md) and access data, like [databricks_mount](mount.md). -* [databricks_sql_dashboard](sql_dashboard.md) to manage Databricks SQL [Dashboards](https://docs.databricks.com/sql/user/dashboards/index.html). -* [databricks_sql_global_config](sql_global_config.md) to configure the security policy, [databricks_instance_profile](instance_profile.md), and [data access properties](https://docs.databricks.com/sql/admin/data-access-configuration.html) for all [databricks_sql_warehouse](sql_warehouse.md) of workspace. -* [databricks_sql_permissions](sql_permissions.md) to manage data object access control lists in Databricks workspaces for things like tables, views, databases, and [more](https://docs.databricks.com/security/access-control/table-acls/object-privileges.html). +* [databricks_instance_profile](../resources/instance_profile.md) to manage AWS EC2 instance profiles that users can launch [databricks_cluster](cluster.md) and access data, like [databricks_mount](../resources/mount.md). +* [databricks_sql_dashboard](../resources/sql_dashboard.md) to manage Databricks SQL [Dashboards](https://docs.databricks.com/sql/user/dashboards/index.html). +* [databricks_sql_global_config](../resources/sql_global_config.md) to configure the security policy, [databricks_instance_profile](../resources/instance_profile.md), and [data access properties](https://docs.databricks.com/sql/admin/data-access-configuration.html) for all [databricks_sql_warehouse](sql_warehouse.md) of workspace. +* [databricks_sql_permissions](../resources/sql_permissions.md) to manage data object access control lists in Databricks workspaces for things like tables, views, databases, and [more](https://docs.databricks.com/security/access-control/table-acls/object-privileges.html). 
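[Review note] The two warehouse data sources touched in this patch compose: `databricks_sql_warehouses` returns the `ids` collection that `databricks_sql_warehouse` consumes one id at a time. A minimal sketch of that pattern — labels are illustrative, and the `toset()` conversion assumes `ids` is a plain list as the docs state:

```hcl
data "databricks_sql_warehouses" "all" {}

# Look up the full settings of every warehouse id returned above.
data "databricks_sql_warehouse" "each" {
  for_each = toset(data.databricks_sql_warehouses.all.ids)
  id       = each.value
}

# Example consumption: a map of warehouse id to its JDBC URL
# (attribute names follow the databricks_sql_endpoint resource docs).
output "jdbc_urls" {
  value = { for id, w in data.databricks_sql_warehouse.each : id => w.jdbc_url }
}
```
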
diff --git a/docs/data-sources/sql_warehouses.md b/docs/data-sources/sql_warehouses.md index cfb9cbb0cb..162ffd5fb7 100644 --- a/docs/data-sources/sql_warehouses.md +++ b/docs/data-sources/sql_warehouses.md @@ -5,7 +5,7 @@ subcategory: "Databricks SQL" -> **Note** If you have a fully automated setup with workspaces created by [databricks_mws_workspaces](../resources/mws_workspaces.md) or [azurerm_databricks_workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/databricks_workspace), please make sure to add [depends_on attribute](../guides/troubleshooting.md#data-resources-and-authentication-is-not-configured-errors) in order to prevent _default auth: cannot configure default credentials_ errors. -Retrieves a list of [databricks_sql_endpoint](../resources/sql_endpoint.md#id) ids, that were created by Terraform or manually. +Retrieves a list of [databricks_sql_endpoint](../resources/sql_endpoint.md) ids, that were created by Terraform or manually. ## Example Usage @@ -26,20 +26,20 @@ data "databricks_sql_warehouses" "all_shared" { ## Argument Reference -* `warehouse_name_contains` - (Optional) Only return [databricks_sql_endpoint](../resources/sql_endpoint.md#id) ids that match the given name string. +* `warehouse_name_contains` - (Optional) Only return [databricks_sql_endpoint](../resources/sql_endpoint.md) ids that match the given name string. ## Attribute Reference This data source exports the following attributes: -* `ids` - list of [databricks_sql_endpoint](../resources/sql_endpoint.md#id) ids +* `ids` - list of [databricks_sql_endpoint](../resources/sql_endpoint.md) ids ## Related Resources The following resources are often used in the same context: * [End to end workspace management](../guides/workspace-management.md) guide. -* [databricks_instance_profile](instance_profile.md) to manage AWS EC2 instance profiles that users can launch [databricks_cluster](cluster.md) and access data, like [databricks_mount](mount.md). -* [databricks_sql_dashboard](sql_dashboard.md) to manage Databricks SQL [Dashboards](https://docs.databricks.com/sql/user/dashboards/index.html). -* [databricks_sql_global_config](sql_global_config.md) to configure the security policy, [databricks_instance_profile](instance_profile.md), and [data access properties](https://docs.databricks.com/sql/admin/data-access-configuration.html) for all [databricks_sql_endpoint](sql_endpoint.md) of workspace. -* [databricks_sql_permissions](sql_permissions.md) to manage data object access control lists in Databricks workspaces for things like tables, views, databases, and [more](https://docs.databricks.com/security/access-control/table-acls/object-privileges.html). +* [databricks_instance_profile](../resources/instance_profile.md) to manage AWS EC2 instance profiles that users can launch [databricks_cluster](cluster.md) and access data, like [databricks_mount](../resources/mount.md). +* [databricks_sql_dashboard](../resources/sql_dashboard.md) to manage Databricks SQL [Dashboards](https://docs.databricks.com/sql/user/dashboards/index.html). +* [databricks_sql_global_config](../resources/sql_global_config.md) to configure the security policy, [databricks_instance_profile](../resources/instance_profile.md), and [data access properties](https://docs.databricks.com/sql/admin/data-access-configuration.html) for all [databricks_sql_warehouse](sql_warehouse.md) of workspace. 
+* [databricks_sql_permissions](../resources/sql_permissions.md) to manage data object access control lists in Databricks workspaces for things like tables, views, databases, and [more](https://docs.databricks.com/security/access-control/table-acls/object-privileges.html). diff --git a/docs/index.md b/docs/index.md index 18efc26fe7..57349b07c7 100644 --- a/docs/index.md +++ b/docs/index.md @@ -50,7 +50,7 @@ Databricks SQL * Create [databricks_sql_endpoint](resources/sql_endpoint.md) controlled by [databricks_permissions](resources/permissions.md). * Manage [queries](resources/sql_query.md) and their [visualizations](resources/sql_visualization.md). * Manage [dashboards](resources/sql_dashboard.md) and their [widgets](resources/sql_widget.md). -* Provide [global configuration for all SQL warehouses](docs/resources/sql_global_config.md) +* Provide [global configuration for all SQL warehouses](resources/sql_global_config.md) Machine Learning diff --git a/docs/resources/instance_pool.md b/docs/resources/instance_pool.md index 5a1eefa1a0..91ad59449b 100644 --- a/docs/resources/instance_pool.md +++ b/docs/resources/instance_pool.md @@ -45,7 +45,7 @@ The following arguments are supported: * `node_type_id` - (Required) (String) The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the pool’s idle instances are allocated based on this type. You can retrieve a list of available node types by using the [List Node Types API](https://docs.databricks.com/dev-tools/api/latest/clusters.html#clusterclusterservicelistnodetypes) call. * `custom_tags` - (Optional) (Map) Additional tags for instance pool resources. Databricks tags all pool resources (e.g. AWS & Azure instances and Disk volumes). The tags of the instance pool will propagate to the clusters using the pool (see the [official documentation](https://docs.databricks.com/administration-guide/account-settings/usage-detail-tags-aws.html#tag-propagation)). Attempting to set the same tags in both cluster and instance pool will raise an error. *Databricks allows at most 43 custom tags.* * `enable_elastic_disk` - (Optional) (Bool) Autoscaling Local Storage: when enabled, the instances in the pool dynamically acquire additional disk space when they are running low on disk space. -* `preloaded_spark_versions` - (Optional) (List) A list with at most one runtime version the pool installs on each instance. Pool clusters that use a preloaded runtime version start faster as they do not have to wait for the image to download. You can retrieve them via [databricks_spark_version](../data-sources/spark-version.md) data source or via [Runtime Versions API](https://docs.databricks.com/dev-tools/api/latest/clusters.html#clusterclusterservicelistsparkversions) call. +* `preloaded_spark_versions` - (Optional) (List) A list with at most one runtime version the pool installs on each instance. Pool clusters that use a preloaded runtime version start faster as they do not have to wait for the image to download. You can retrieve them via [databricks_spark_version](../data-sources/spark_version.md) data source or via [Runtime Versions API](https://docs.databricks.com/dev-tools/api/latest/clusters.html#clusterclusterservicelistsparkversions) call. 
### aws_attributes Configuration Block diff --git a/docs/resources/mws_network_connectivity_config.md b/docs/resources/mws_network_connectivity_config.md index bdc8b7557a..e8051e31e3 100644 --- a/docs/resources/mws_network_connectivity_config.md +++ b/docs/resources/mws_network_connectivity_config.md @@ -7,7 +7,7 @@ subcategory: "Deployment" -> **Public Preview** This feature is available for AWS & Azure only, and is in [Public Preview](https://docs.databricks.com/release-notes/release-types.html) in AWS. -Allows you to create a [Network Connectivity Config] that can be used as part of a [databricks_mws_workspaces](mws_workspaces.md) resource to create a [Databricks Workspace that leverages serverless network connectivity configs](https://learn.microsoft.com/en-us/azure/databricks/security/network/serverless-network-security/serverless-firewall). +Allows you to create a Network Connectivity Config that can be used as part of a [databricks_mws_workspaces](mws_workspaces.md) resource to create a [Databricks Workspace that leverages serverless network connectivity configs](https://learn.microsoft.com/en-us/azure/databricks/security/network/serverless-network-security/serverless-firewall). ## Example Usage diff --git a/docs/resources/mws_networks.md b/docs/resources/mws_networks.md index 83e01f1f49..cc26d438c9 100644 --- a/docs/resources/mws_networks.md +++ b/docs/resources/mws_networks.md @@ -82,7 +82,7 @@ resource "databricks_mws_networks" "this" { } ``` -In order to create a VPC [that leverages AWS PrivateLink](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html) you would need to add the `vpc_endpoint_id` Attributes from [mws_vpc_endpoint](mws_vpc_endpoint.md) resources into the [databricks_mws_networks](databricks_mws_networks.md) resource. For example: +In order to create a VPC [that leverages AWS PrivateLink](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html) you would need to add the `vpc_endpoint_id` Attributes from [mws_vpc_endpoint](mws_vpc_endpoint.md) resources into the [databricks_mws_networks](mws_networks.md) resource. For example: ```hcl resource "databricks_mws_networks" "this" { @@ -157,7 +157,7 @@ resource "databricks_mws_networks" "this" { } ``` -In order to create a VPC [that leverages GCP Private Service Connect](https://docs.gcp.databricks.com/administration-guide/cloud-configurations/gcp/private-service-connect.html) you would need to add the `vpc_endpoint_id` Attributes from [mws_vpc_endpoint](mws_vpc_endpoint.md) resources into the [databricks_mws_networks](databricks_mws_networks.md) resource. For example: +In order to create a VPC [that leverages GCP Private Service Connect](https://docs.gcp.databricks.com/administration-guide/cloud-configurations/gcp/private-service-connect.html) you would need to add the `vpc_endpoint_id` Attributes from [mws_vpc_endpoint](mws_vpc_endpoint.md) resources into the [databricks_mws_networks](mws_networks.md) resource. 
For example: ```hcl resource "databricks_mws_networks" "this" { diff --git a/docs/resources/mws_private_access_settings.md b/docs/resources/mws_private_access_settings.md index 03acd3b17a..3fbc4577b2 100644 --- a/docs/resources/mws_private_access_settings.md +++ b/docs/resources/mws_private_access_settings.md @@ -24,7 +24,7 @@ resource "databricks_mws_private_access_settings" "pas" { ``` -The `databricks_mws_private_access_settings.pas.private_access_settings_id` can then be used as part of a [databricks_mws_workspaces](databricks_mws_workspaces.md) resource: +The `databricks_mws_private_access_settings.pas.private_access_settings_id` can then be used as part of a [databricks_mws_workspaces](mws_workspaces.md) resource: ```hcl resource "databricks_mws_workspaces" "this" { diff --git a/docs/resources/mws_workspaces.md b/docs/resources/mws_workspaces.md index 3655ec009e..c56f1c51c6 100644 --- a/docs/resources/mws_workspaces.md +++ b/docs/resources/mws_workspaces.md @@ -351,7 +351,7 @@ You can specify a `token` block in the body of the workspace resource, so that T On AWS, the following arguments could be modified after the workspace is running: -* `network_id` - Modifying [networks on running workspaces](mws_networks.md#modifying-networks-on-running-workspaces) would require three separate `terraform apply` steps. +* `network_id` - Modifying [networks on running workspaces](mws_networks.md#modifying-networks-on-running-workspaces-aws-only) would require three separate `terraform apply` steps. * `credentials_id` * `storage_customer_managed_key_id` * `private_access_settings_id` diff --git a/docs/resources/registered_model.md b/docs/resources/registered_model.md index 90628839c9..5594309e92 100644 --- a/docs/resources/registered_model.md +++ b/docs/resources/registered_model.md @@ -52,6 +52,5 @@ The following resources are often used in the same context: * [databricks_model_serving](model_serving.md) to serve this model on a Databricks serving endpoint. * [databricks_mlflow_experiment](mlflow_experiment.md) to manage [MLflow experiments](https://docs.databricks.com/data/data-sources/mlflow-experiment.html) in Databricks. -* [databricks_table](tables.md) data to manage tables within Unity Catalog. -* [databricks_schema](schemas.md) data to manage schemas within Unity Catalog. -* [databricks_catalog](catalogs.md) data to manage catalogs within Unity Catalog. +* [databricks_schema](schema.md) to manage schemas within Unity Catalog. +* [databricks_catalog](catalog.md) to manage catalogs within Unity Catalog. diff --git a/docs/resources/sql_endpoint.md b/docs/resources/sql_endpoint.md index c77cd3cf54..26a4c5f822 100644 --- a/docs/resources/sql_endpoint.md +++ b/docs/resources/sql_endpoint.md @@ -64,7 +64,7 @@ In addition to all arguments above, the following attributes are exported: ## Access control -* [databricks_permissions](permissions.md#Job-Endpoint-usage) can control which groups or individual users can *Can Use* or *Can Manage* SQL warehouses. +* [databricks_permissions](permissions.md#job-usage) can control which groups or individual users can *Can Use* or *Can Manage* SQL warehouses. * `databricks_sql_access` on [databricks_group](group.md#databricks_sql_access) or [databricks_user](user.md#databricks_sql_access). ## Timeouts
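
[Review note] The access-control hunk in the final file above repoints `databricks_permissions` at the corrected `#job-usage` anchor; for context, a minimal sketch of the grant that section describes — warehouse and group names are hypothetical:

```hcl
resource "databricks_sql_endpoint" "this" {
  name             = "shared-warehouse"
  cluster_size     = "Small"
  max_num_clusters = 1
}

# *Can Use* lets a group run queries on the warehouse;
# *Can Manage* additionally allows editing and deleting it.
resource "databricks_permissions" "endpoint_usage" {
  sql_endpoint_id = databricks_sql_endpoint.this.id

  access_control {
    group_name       = "analysts"
    permission_level = "CAN_USE"
  }

  access_control {
    group_name       = "data-platform"
    permission_level = "CAN_MANAGE"
  }
}
```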