
Commit

Merge branch 'main' into feature/account-ip-acl
nkvuong authored Nov 15, 2024
2 parents b31136d + 5031fea commit 4492533
Showing 25 changed files with 146 additions and 168 deletions.
10 changes: 5 additions & 5 deletions CONTRIBUTING.md
@@ -119,16 +119,16 @@ We are migrating the resource from SDKv2 to Plugin Framework provider and hence
- `sdkv2`: Contains the changes specific to SDKv2. This package shouldn't depend on pluginfw or common.

### Adding a new resource
-1. Check if the directory for this particular resource exists under `internal/providers/pluginfw/resources`; if not, create it, e.g. `cluster`, `volume`, etc. Please note: resources and data sources are organized under the same package for that service.
-2. Create a file named `resource_resource-name.go` and write the CRUD methods and schema for that resource. For reference, take a look at existing resources, e.g. `resource_quality_monitor.go`.
+1. Check if the directory for this particular resource exists under `internal/providers/pluginfw/products`; if not, create it, e.g. `cluster`, `volume`, etc. Please note: resources and data sources are organized under the same package for that service.
+2. Create a file named `resource_resource-name.go` and write the CRUD methods and schema for that resource. For reference, take a look at existing resources, e.g. `resource_quality_monitor.go`. Make sure to set the user agent in all the CRUD methods. In `Metadata()`, if the resource is to be used as the default, use the method `GetDatabricksProductionName()`; otherwise use `GetDatabricksStagingName()`, which suffixes the name with `_pluginframework` (see the sketch after this list).
3. Create a file named `resource_resource-name_acc_test.go` and add integration tests there.
4. Create a file named `resource_resource-name_test.go` and add unit tests there. Note: make sure to abstract specific methods of the resource so they are unit-test friendly and do not test internals of the Terraform Plugin Framework library; you can compare the diagnostics instead (see `data_cluster_test.go` for an example).
5. Add the resource under `internal/providers/pluginfw/pluginfw.go` in the `Resources()` method. Please keep the list alphabetically sorted.
6. Create a PR and send it for review.
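
For step 2, a minimal sketch of the `Metadata()` naming rule. Only the `GetDatabricksProductionName()`/`GetDatabricksStagingName()` behaviour comes from this guide; the `volume` resource, the struct name, and the import path of the naming helpers are hypothetical:

```go
package volume

import (
	"context"

	// Assumed location of the naming helpers described above.
	pluginfwcommon "github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/common"

	"github.com/hashicorp/terraform-plugin-framework/resource"
)

const resourceName = "volume"

// VolumeResource is a hypothetical resource; Schema, Create, Read, Update and
// Delete are omitted for brevity.
type VolumeResource struct{}

func (r *VolumeResource) Metadata(ctx context.Context, req resource.MetadataRequest, resp *resource.MetadataResponse) {
	// Default resource: plain name, i.e. "databricks_volume".
	resp.TypeName = pluginfwcommon.GetDatabricksProductionName(resourceName)
	// A staging resource would instead use GetDatabricksStagingName(resourceName),
	// which suffixes the name with "_pluginframework".
}
```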
### Adding a new data source
-1. Check if the directory for this particular data source exists under `internal/providers/pluginfw/resources`; if not, create it, e.g. `cluster`, `volume`, etc. Please note: resources and data sources are organized under the same package for that service.
-2. Create a file named `data_resource-name.go` and write the read method and schema for that data source. For reference, take a look at existing data sources, e.g. `data_cluster.go`.
+1. Check if the directory for this particular data source exists under `internal/providers/pluginfw/products`; if not, create it, e.g. `cluster`, `volume`, etc. Please note: resources and data sources are organized under the same package for that service.
+2. Create a file named `data_resource-name.go` and write the read method and schema for that data source. For reference, take a look at existing data sources, e.g. `data_cluster.go`. Make sure to set the user agent in the READ method (see the sketch after this list). In `Metadata()`, if the data source is to be used as the default, use the method `GetDatabricksProductionName()`; otherwise use `GetDatabricksStagingName()`, which suffixes the name with `_pluginframework`.
3. Create a file named `data_resource-name_acc_test.go` and add integration tests there.
4. Create a file named `data_resource-name_test.go` and add unit tests there. Note: make sure to abstract specific methods of the data source so they are unit-test friendly and do not test internals of the Terraform Plugin Framework library; you can compare the diagnostics instead (see `data_cluster_test.go` for an example).
5. Add the data source under `internal/providers/pluginfw/pluginfw.go` in the `DataSources()` method. Please keep the list alphabetically sorted.
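
For step 2, a minimal sketch of setting the user agent in the READ method of a data source. The `volume` data source, the struct name, and the user-agent key/value are hypothetical; `useragent.InContext` is the Databricks Go SDK helper for tagging outgoing requests:

```go
package volume

import (
	"context"

	"github.com/databricks/databricks-sdk-go/useragent"
	"github.com/hashicorp/terraform-plugin-framework/datasource"
)

// VolumeDataSource is a hypothetical data source; Schema and Configure are
// omitted for brevity.
type VolumeDataSource struct{}

func (d *VolumeDataSource) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) {
	// Tag the context so every API call made while handling this read is
	// attributable to this data source in the request user agent.
	ctx = useragent.InContext(ctx, "datasource", "volume")

	// ... read the requested config from req.Config, call the API with a
	// client that carries ctx, and populate resp.State / resp.Diagnostics ...
}
```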
@@ -141,7 +141,7 @@ Ideally there shouldn't be any behaviour change when migrating a resource or data source
### Code Organization
-Each resource and data source should be defined in package `internal/providers/pluginfw/resources/<resource>`; e.g. the `internal/providers/pluginfw/resources/volume` package will contain the resource, the data sources, and other utils specific to volumes. Tests (both unit and integration tests) will also remain in this package.
+Each resource and data source should be defined in package `internal/providers/pluginfw/products/<resource>`; e.g. the `internal/providers/pluginfw/products/volume` package will contain the resource, the data sources, and other utils specific to volumes. Tests (both unit and integration tests) will also remain in this package.
Note: Only docs will stay under the root `docs/` directory.
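
Under that convention, a hypothetical `volume` package could be laid out as follows (the file names follow the steps above):

```
internal/providers/pluginfw/products/volume/
├── resource_volume.go           # resource CRUD methods and schema
├── resource_volume_test.go      # resource unit tests
├── resource_volume_acc_test.go  # resource integration tests
├── data_volume.go               # data source read method and schema
├── data_volume_test.go          # data source unit tests
└── data_volume_acc_test.go      # data source integration tests
```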
59 changes: 0 additions & 59 deletions clusters/resource_cluster.go
@@ -26,26 +26,6 @@ var clusterSchema = resourceClusterSchema()
var clusterSchemaVersion = 4

-const (
-    numWorkerErr = `num_workers may be 0 only for single-node clusters. To create a single node
-cluster please include the following configuration in your cluster configuration:
-  spark_conf = {
-    "spark.databricks.cluster.profile" : "singleNode"
-    "spark.master" : "local[*]"
-  }
-  custom_tags = {
-    "ResourceClass" = "SingleNode"
-  }
-Please note that the Databricks Terraform provider cannot detect if the above configuration
-is defined in a policy used by the cluster. Please define this in the cluster configuration
-itself to create a single node cluster.
-For more details please see:
-  1. https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/cluster#fixed-size-or-autoscaling-cluster
-  2. https://docs.databricks.com/clusters/single-node.html`
-
-    unsupportedExceptCreateEditClusterSpecErr = "unsupported type %T, must be one of %scompute.CreateCluster, %scompute.ClusterSpec or %scompute.EditCluster. Please report this issue to the GitHub repo"
-)

@@ -130,39 +110,6 @@ func ZoneDiffSuppress(k, old, new string, d *schema.ResourceData) bool {
    return false
}

-func Validate(cluster any) error {
-    var profile, master, resourceClass string
-    switch c := cluster.(type) {
-    case compute.CreateCluster:
-        if c.NumWorkers > 0 || c.Autoscale != nil {
-            return nil
-        }
-        profile = c.SparkConf["spark.databricks.cluster.profile"]
-        master = c.SparkConf["spark.master"]
-        resourceClass = c.CustomTags["ResourceClass"]
-    case compute.EditCluster:
-        if c.NumWorkers > 0 || c.Autoscale != nil {
-            return nil
-        }
-        profile = c.SparkConf["spark.databricks.cluster.profile"]
-        master = c.SparkConf["spark.master"]
-        resourceClass = c.CustomTags["ResourceClass"]
-    case compute.ClusterSpec:
-        if c.NumWorkers > 0 || c.Autoscale != nil {
-            return nil
-        }
-        profile = c.SparkConf["spark.databricks.cluster.profile"]
-        master = c.SparkConf["spark.master"]
-        resourceClass = c.CustomTags["ResourceClass"]
-    default:
-        return fmt.Errorf(unsupportedExceptCreateEditClusterSpecErr, cluster, "", "", "")
-    }
-    if profile == "singleNode" && strings.HasPrefix(master, "local") && resourceClass == "SingleNode" {
-        return nil
-    }
-    return errors.New(numWorkerErr)
-}

// This method is a duplicate of ModifyRequestOnInstancePool() in clusters/clusters_api.go that uses Go SDK.
// Long term, ModifyRequestOnInstancePool() in clusters_api.go will be removed once all the resources using clusters are migrated to Go SDK.
func ModifyRequestOnInstancePool(cluster any) error {
@@ -443,9 +390,6 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
    clusters := w.Clusters
    var createClusterRequest compute.CreateCluster
    common.DataToStructPointer(d, clusterSchema, &createClusterRequest)
-    if err := Validate(createClusterRequest); err != nil {
-        return err
-    }
    if err = ModifyRequestOnInstancePool(&createClusterRequest); err != nil {
        return err
    }
@@ -596,9 +540,6 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {

    if hasClusterConfigChanged(d) {
        log.Printf("[DEBUG] Cluster state has changed!")
-        if err := Validate(cluster); err != nil {
-            return err
-        }
        if err = ModifyRequestOnInstancePool(&cluster); err != nil {
            return err
        }
82 changes: 49 additions & 33 deletions clusters/resource_cluster_test.go
@@ -1630,22 +1630,6 @@ func TestResourceClusterCreate_SingleNode(t *testing.T) {
    assert.NoError(t, err)
    assert.Equal(t, 0, d.Get("num_workers"))
}

-func TestResourceClusterCreate_SingleNodeFail(t *testing.T) {
-    _, err := qa.ResourceFixture{
-        Create:   true,
-        Resource: ResourceCluster(),
-        State: map[string]any{
-            "autotermination_minutes": 120,
-            "cluster_name":            "Single Node Cluster",
-            "spark_version":           "7.3.x-scala12",
-            "node_type_id":            "Standard_F4s",
-            "is_pinned":               false,
-        },
-    }.Apply(t)
-    assert.EqualError(t, err, numWorkerErr)
-}

func TestResourceClusterCreate_NegativeNumWorkers(t *testing.T) {
    _, err := qa.ResourceFixture{
        Create: true,
@@ -1662,27 +1646,59 @@ func TestResourceClusterCreate_NegativeNumWorkers(t *testing.T) {
    require.Equal(t, true, strings.Contains(err.Error(), "expected num_workers to be at least (0)"))
}

-func TestResourceClusterUpdate_FailNumWorkersZero(t *testing.T) {
-    _, err := qa.ResourceFixture{
-        ID:       "abc",
-        Update:   true,
-        Resource: ResourceCluster(),
-        InstanceState: map[string]string{
-            "autotermination_minutes": "15",
-            "cluster_name":            "Shared Autoscaling",
-            "spark_version":           "7.1-scala12",
-            "node_type_id":            "i3.xlarge",
-            "num_workers":             "100",
+func TestResourceClusterCreate_NumWorkersIsZero(t *testing.T) {
+    d, err := qa.ResourceFixture{
+        Fixtures: []qa.HTTPFixture{
+            nothingPinned,
+            {
+                Method:   "POST",
+                Resource: "/api/2.1/clusters/create",
+                ExpectedRequest: compute.CreateCluster{
+                    NumWorkers:             0,
+                    ClusterName:            "Zero workers cluster",
+                    SparkVersion:           "7.3.x-scala12",
+                    NodeTypeId:             "Standard_F4s",
+                    AutoterminationMinutes: 120,
+                    ForceSendFields:        []string{"NumWorkers"},
+                },
+                Response: compute.ClusterDetails{
+                    ClusterId: "abc",
+                    State:     compute.StateRunning,
+                },
+            },
+            {
+                Method:       "GET",
+                ReuseRequest: true,
+                Resource:     "/api/2.1/clusters/get?cluster_id=abc",
+                Response: compute.ClusterDetails{
+                    ClusterId:              "abc",
+                    ClusterName:            "Zero workers cluster",
+                    SparkVersion:           "7.3.x-scala12",
+                    NodeTypeId:             "Standard_F4s",
+                    AutoterminationMinutes: 120,
+                    State:                  compute.StateRunning,
+                },
+            },
+            {
+                Method:   "GET",
+                Resource: "/api/2.0/libraries/cluster-status?cluster_id=abc",
+                Response: compute.ClusterLibraryStatuses{
+                    LibraryStatuses: []compute.LibraryFullStatus{},
+                },
+            },
+        },
+        Create:   true,
+        Resource: ResourceCluster(),
        State: map[string]any{
-            "autotermination_minutes": 15,
-            "cluster_name":            "Shared Autoscaling",
-            "spark_version":           "7.1-scala12",
-            "node_type_id":            "i3.xlarge",
-            "num_workers":             0,
+            "autotermination_minutes": 120,
+            "cluster_name":            "Zero workers cluster",
+            "spark_version":           "7.3.x-scala12",
+            "node_type_id":            "Standard_F4s",
+            "is_pinned":               false,
        },
    }.Apply(t)
-    assert.EqualError(t, err, numWorkerErr)
+    assert.NoError(t, err)
+    assert.Equal(t, 0, d.Get("num_workers"))
}

func TestModifyClusterRequestAws(t *testing.T) {
40 changes: 21 additions & 19 deletions go.mod
@@ -1,19 +1,21 @@
module github.com/databricks/terraform-provider-databricks

-go 1.22
+go 1.22.0
+
+toolchain go1.22.5

require (
github.com/databricks/databricks-sdk-go v0.51.0
github.com/golang-jwt/jwt/v4 v4.5.1
github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320
github.com/hashicorp/hcl v1.0.0
github.com/hashicorp/hcl/v2 v2.22.0
-github.com/hashicorp/terraform-plugin-framework v1.11.0
-github.com/hashicorp/terraform-plugin-framework-validators v0.13.0
-github.com/hashicorp/terraform-plugin-go v0.23.0
+github.com/hashicorp/terraform-plugin-framework v1.13.0
+github.com/hashicorp/terraform-plugin-framework-validators v0.15.0
+github.com/hashicorp/terraform-plugin-go v0.25.0
github.com/hashicorp/terraform-plugin-log v0.9.0
-github.com/hashicorp/terraform-plugin-mux v0.16.0
-github.com/hashicorp/terraform-plugin-sdk/v2 v2.34.0
+github.com/hashicorp/terraform-plugin-mux v0.17.0
+github.com/hashicorp/terraform-plugin-sdk/v2 v2.35.0
github.com/hashicorp/terraform-plugin-testing v1.10.0
github.com/stretchr/testify v1.9.0
github.com/zclconf/go-cty v1.15.0
@@ -23,7 +25,7 @@ require (
require (
cloud.google.com/go/auth v0.4.2 // indirect
cloud.google.com/go/auth/oauth2adapt v0.2.2 // indirect
-cloud.google.com/go/compute/metadata v0.5.0 // indirect
+cloud.google.com/go/compute/metadata v0.5.0 // indirect
github.com/ProtonMail/go-crypto v1.1.0-alpha.2 // indirect
github.com/agext/levenshtein v1.2.3 // indirect
github.com/apparentlymart/go-textseg/v15 v15.0.0 // indirect
@@ -45,14 +47,14 @@ require (
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-hclog v1.6.3 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
-github.com/hashicorp/go-plugin v1.6.0 // indirect
+github.com/hashicorp/go-plugin v1.6.2 // indirect
github.com/hashicorp/go-retryablehttp v0.7.7 // indirect
github.com/hashicorp/go-uuid v1.0.3 // indirect
github.com/hashicorp/go-version v1.7.0 // indirect
-github.com/hashicorp/hc-install v0.8.0 // indirect
+github.com/hashicorp/hc-install v0.9.0 // indirect
github.com/hashicorp/logutils v1.0.0 // indirect
github.com/hashicorp/terraform-exec v0.21.0 // indirect
-github.com/hashicorp/terraform-json v0.22.1 // indirect
+github.com/hashicorp/terraform-json v0.23.0 // indirect
github.com/hashicorp/terraform-registry-address v0.2.3 // indirect
github.com/hashicorp/terraform-svchost v0.1.1 // indirect
github.com/hashicorp/yamux v0.1.1 // indirect
@@ -74,20 +76,20 @@ require (
go.opentelemetry.io/otel v1.24.0 // indirect
go.opentelemetry.io/otel/metric v1.24.0 // indirect
go.opentelemetry.io/otel/trace v1.24.0 // indirect
-golang.org/x/crypto v0.26.0 // indirect
-golang.org/x/mod v0.19.0 // indirect
-golang.org/x/net v0.26.0 // indirect
-golang.org/x/oauth2 v0.20.0 // indirect
+golang.org/x/crypto v0.28.0 // indirect
+golang.org/x/mod v0.21.0 // indirect
+golang.org/x/net v0.28.0 // indirect
+golang.org/x/oauth2 v0.22.0 // indirect
golang.org/x/sync v0.8.0 // indirect
-golang.org/x/sys v0.23.0 // indirect
-golang.org/x/text v0.17.0 // indirect
+golang.org/x/sys v0.26.0 // indirect
+golang.org/x/text v0.19.0 // indirect
golang.org/x/time v0.5.0 // indirect
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d // indirect
google.golang.org/api v0.182.0 // indirect
google.golang.org/appengine v1.6.8 // indirect
-google.golang.org/genproto/googleapis/rpc v0.0.0-20240521202816-d264139d666e // indirect
-google.golang.org/grpc v1.64.1 // indirect
-google.golang.org/protobuf v1.34.1 // indirect
+google.golang.org/genproto/googleapis/rpc v0.0.0-20240814211410-ddb44dafa142 // indirect
+google.golang.org/grpc v1.67.1 // indirect
+google.golang.org/protobuf v1.35.1 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)
