[Fix] Tolerate databricks_workspace_conf deletion failures (#3737)
## Changes
Deleting `databricks_workspace_conf` can currently fail in some cases. The
provider makes a best-effort attempt to reset config values to an "original"
state by setting all boolean fields to `false` and string fields to `""`.
However, the provider isn't aware of three key pieces of information: the set
of allowed keys in the workspace configuration, the types of values accepted
for those keys, and the default values for new workspaces (the last of which
can vary from workspace to workspace).

As a result, it interprets the value for each key on a best-effort basis: if
the value can be parsed as a boolean, it is reset to `false` on delete;
otherwise, it is reset to `""`. This can fail, however. If a user has specified
a value like `TRue`, it may not be interpreted as a boolean by the provider,
which then tries to reset the value to `""`, and that request fails.

This PR adds two layers of protection around deletion: first, the provider
lower-cases the value before checking whether it is a boolean, to handle
case-sensitivity issues with values in `workspace_conf`. Second, if the
configuration update fails due to invalid key types, the provider prints a
warning message pointing users to our docs page so they can review their
workspace config, but it considers the resource to be successfully deleted.

As part of this, I added a small internal module to provide links to
docs pages when the provider needs to print them in log messages.

Resolves #3316, #1802.

## Tests

- [x] `make test` run locally
- [x] relevant change in `docs/` folder
- [x] covered with integration tests in `internal/acceptance`
- [x] relevant acceptance tests are passing
- [x] using Go SDK
mgyucht authored Aug 5, 2024
1 parent 9e5f71b commit 560e753
Showing 6 changed files with 180 additions and 35 deletions.
15 changes: 9 additions & 6 deletions docs/resources/workspace_conf.md
@@ -1,21 +1,24 @@
---
subcategory: "Workspace"
---

# databricks_workspace_conf Resource

-> **Note** This resource has an evolving API, which may change in future versions of the provider.

Manages workspace configuration for expert usage. More than one instance of this resource can currently exist in Terraform state, but there is no deterministic behavior when they manage the same property. We strongly recommend using a single `databricks_workspace_conf` per workspace.

-> **Note** Deleting `databricks_workspace_conf` resources may fail depending on the configuration properties set, including but not limited to `enableIpAccessLists`, `enableGp3`, and `maxTokenLifetimeDays`. The provider will print a warning if this occurs. You can verify the workspace configuration by reviewing [the workspace settings in the UI](https://docs.databricks.com/en/admin/workspace-settings/index.html).

## Example Usage

Allows specification of custom configuration properties for expert usage:

- `enableIpAccessLists` - enables the use of [databricks_ip_access_list](ip_access_list.md) resources
- `maxTokenLifetimeDays` - (string) Maximum token lifetime of new tokens in days, as an integer. If zero, new tokens are permitted to have no lifetime limit. Negative numbers are unsupported. **WARNING:** This limit only applies to new tokens, so there may be tokens with lifetimes longer than this value, including unlimited lifetime. Such tokens may have been created before the current maximum token lifetime was set.
- `enableTokensConfig` - (boolean) Enable or disable personal access tokens for this workspace.
- `enableDeprecatedClusterNamedInitScripts` - (boolean) Enable or disable [legacy cluster-named init scripts](https://docs.databricks.com/clusters/init-scripts.html#disable-legacy-cluster-named-init-scripts-for-a-workspace) for this workspace.
- `enableDeprecatedGlobalInitScripts` - (boolean) Enable or disable [legacy global init scripts](https://docs.databricks.com/clusters/init-scripts.html#migrate-legacy-scripts) for this workspace.

```hcl
resource "databricks_workspace_conf" "this" {
@@ -29,7 +32,7 @@ resource "databricks_workspace_conf" "this" {

The following arguments are available:

- `custom_config` - (Required) Key-value map of strings that represent workspace configuration. Upon resource deletion, properties that start with `enable` or `enforce` will be reset to `false` value, regardless of initial default one.

## Import

6 changes: 4 additions & 2 deletions internal/acceptance/workspace_conf_test.go
@@ -60,10 +60,12 @@ func TestAccWorkspaceConfFullLifecycle(t *testing.T) {
// Assert on server side error returned
ExpectError: regexp.MustCompile(`cannot update workspace conf: Invalid keys`),
}, step{
// Set enableIpAccessLists to true with strange case and maxTokenLifetimeDays to verify
// failed deletion case
Template: `resource "databricks_workspace_conf" "this" {
custom_config = {
"enableIpAccessLists": "TRue",
"maxTokenLifetimeDays": 90
}
}`,
Check: func(s *terraform.State) error {
15 changes: 15 additions & 0 deletions internal/docs/docs.go
@@ -0,0 +1,15 @@
package docs

import "github.com/databricks/terraform-provider-databricks/common"

func ProviderRegistryUrl() string {
	return "https://registry.terraform.io/providers/databricks/databricks/" + common.Version() + "/"
}

func ProviderDocumentationRootUrl() string {
	return ProviderRegistryUrl() + "docs/"
}

func ResourceDocumentationUrl(resource string) string {
	return ProviderDocumentationRootUrl() + "resources/" + resource
}
21 changes: 21 additions & 0 deletions internal/docs/docs_test.go
@@ -0,0 +1,21 @@
package docs_test

import (
"testing"

"github.com/databricks/terraform-provider-databricks/common"
"github.com/databricks/terraform-provider-databricks/internal/docs"
"github.com/stretchr/testify/assert"
)

func TestProviderRegistryUrl(t *testing.T) {
	assert.Equal(t, "https://registry.terraform.io/providers/databricks/databricks/"+common.Version()+"/", docs.ProviderRegistryUrl())
}

func TestProviderDocumentationRootUrl(t *testing.T) {
	assert.Equal(t, "https://registry.terraform.io/providers/databricks/databricks/"+common.Version()+"/docs/", docs.ProviderDocumentationRootUrl())
}

func TestResourceDocumentationUrl(t *testing.T) {
	assert.Equal(t, "https://registry.terraform.io/providers/databricks/databricks/"+common.Version()+"/docs/resources/my_resource", docs.ResourceDocumentationUrl("my_resource"))
}
76 changes: 52 additions & 24 deletions workspace/resource_workspace_conf.go
@@ -5,14 +5,19 @@ package workspace

import (
"context"
"errors"
"fmt"
"log"
"sort"
"strconv"
"strings"

"github.com/databricks/terraform-provider-databricks/common"
"github.com/databricks/terraform-provider-databricks/internal/docs"

"github.com/databricks/databricks-sdk-go/apierr"
"github.com/databricks/databricks-sdk-go/service/settings"
"github.com/hashicorp/terraform-plugin-log/tflog"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

@@ -85,6 +90,52 @@ func updateWorkspaceConf(ctx context.Context, d *schema.ResourceData, c *common.
return nil
}

func deleteWorkspaceConf(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
	w, err := c.WorkspaceClient()
	if err != nil {
		return err
	}
	config := d.Get("custom_config").(map[string]any)
	// Delete keys one at a time to reset as many configuration values as possible to their original state.
	// Delete in alphabetical order by key to ensure deterministic behavior.
	keys := make([]string, 0, len(config))
	for k := range config {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		v := config[k]
		patch := settings.WorkspaceConf{}
		// The default value for all configurations is assumed to be "false" for boolean
		// configurations and an empty string for string configurations.
		switch r := v.(type) {
		default:
			patch[k] = ""
		case string:
			_, err := strconv.ParseBool(strings.ToLower(r))
			if err != nil {
				patch[k] = ""
			} else {
				patch[k] = "false"
			}
		case bool:
			patch[k] = "false"
		}
		err = w.WorkspaceConf.SetStatus(ctx, patch)
		// Tolerate errors like the following on deletion:
		// cannot delete workspace conf: Some values are not allowed: {"enableGp3":"","enableIpAccessLists":""}
		// The API for workspace conf is quite limited and doesn't support a generic "reset to original state"
		// operation. So if our attempted reset fails, don't fail resource deletion.
		var apiErr *apierr.APIError
		if errors.As(err, &apiErr) && strings.HasPrefix(apiErr.Message, "Some values are not allowed") {
			tflog.Warn(ctx, fmt.Sprintf("Encountered error while deleting databricks_workspace_conf: %s. The workspace configuration may not be fully restored to its original state. For more information, see %s", apiErr.Message, docs.ResourceDocumentationUrl("workspace_conf")))
		} else if err != nil {
			return err
		}
	}
	return nil
}

// ResourceWorkspaceConf maintains workspace configuration for specified keys
func ResourceWorkspaceConf() common.Resource {
return common.Resource{
@@ -116,30 +167,7 @@ func ResourceWorkspaceConf() common.Resource {
log.Printf("[DEBUG] Setting new config to state: %v", config)
return d.Set("custom_config", config)
},
Delete: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
patch := settings.WorkspaceConf{}
config := d.Get("custom_config").(map[string]any)
for k, v := range config {
switch r := v.(type) {
default:
patch[k] = ""
case string:
_, err := strconv.ParseBool(r)
if err != nil {
patch[k] = ""
} else {
patch[k] = "false"
}
case bool:
patch[k] = "false"
}
}
w, err := c.WorkspaceClient()
if err != nil {
return err
}
return w.WorkspaceConf.SetStatus(ctx, patch)
},
Delete: deleteWorkspaceConf,
Schema: map[string]*schema.Schema{
"custom_config": {
Type: schema.TypeMap,
82 changes: 79 additions & 3 deletions workspace/resource_workspace_conf_test.go
@@ -181,10 +181,28 @@ func TestWorkspaceConfDelete(t *testing.T) {
Method: http.MethodPatch,
Resource: "/api/2.0/workspace-conf",
ExpectedRequest: map[string]string{
"enableFancyThing": "false",
},
},
{
Method: http.MethodPatch,
Resource: "/api/2.0/workspace-conf",
ExpectedRequest: map[string]string{
"enableSomething": "false",
},
},
{
Method: http.MethodPatch,
Resource: "/api/2.0/workspace-conf",
ExpectedRequest: map[string]string{
"enforceSomethingElse": "false",
},
},
{
Method: http.MethodPatch,
Resource: "/api/2.0/workspace-conf",
ExpectedRequest: map[string]string{
"someProperty": "",
},
},
@@ -223,6 +241,64 @@ func TestWorkspaceConfDelete_Error(t *testing.T) {
assert.Equal(t, "_", d.Id())
}

func TestWorkspaceConfDelete_MixedCaseBooleanValue(t *testing.T) {
qa.ResourceFixture{
Fixtures: []qa.HTTPFixture{
{
Method: http.MethodPatch,
Resource: "/api/2.0/workspace-conf",
ExpectedRequest: map[string]string{
"enableIpAccessLists": "false",
},
Response: map[string]string{
"enableIpAccessLists": "false",
},
Status: 200,
},
},
Resource: ResourceWorkspaceConf(),
Delete: true,
HCL: `custom_config {
enableIpAccessLists = "TRue"
}`,
ID: "_",
}.ApplyNoError(t)
}

func TestWorkspaceConfDelete_SomeValuesAreNotAllowed(t *testing.T) {
qa.ResourceFixture{
Fixtures: []qa.HTTPFixture{
{
Method: http.MethodPatch,
Resource: "/api/2.0/workspace-conf",
ExpectedRequest: map[string]string{
"enableGp3": "",
},
Response: common.APIErrorBody{
ErrorCode: "INVALID_REQUEST",
Message: `Some values are not allowed: {"enableGp3":""}`,
},
Status: 400,
},
{
Method: http.MethodPatch,
Resource: "/api/2.0/workspace-conf",
ExpectedRequest: map[string]string{
"enableTokens": "false",
},
Status: 200,
},
},
Resource: ResourceWorkspaceConf(),
Delete: true,
HCL: ` custom_config {
enableGp3 = ""
enableTokens = "true"
}`,
ID: "_",
}.ApplyNoError(t)
}

func TestWorkspaceConfUpdateOnInvalidConf(t *testing.T) {
d, err := qa.ResourceFixture{
Fixtures: []qa.HTTPFixture{
