
Commit

fix conflicts
tanmay-db committed Nov 14, 2024
2 parents c420332 + 6e7ca4c commit c8d85ad
Showing 23 changed files with 60 additions and 103 deletions.
6 changes: 3 additions & 3 deletions CONTRIBUTING.md
@@ -119,15 +119,15 @@ We are migrating the resource from SDKv2 to Plugin Framework provider and hence
- `sdkv2`: Contains the changes specific to SDKv2. This package shouldn't depend on pluginfw or common.

### Adding a new resource
1. Check if a directory for this resource already exists under `internal/providers/pluginfw/resources`; if not, create it, e.g. `cluster`, `volume`, etc. Please note: resources and data sources are organized under the same package for that service.
1. Check if a directory for this resource already exists under `internal/providers/pluginfw/products`; if not, create it, e.g. `cluster`, `volume`, etc. Please note: resources and data sources are organized under the same package for that service.
2. Create a file named `resource_resource-name.go` and implement the CRUD methods and schema for that resource. For reference, take a look at existing resources, e.g. `resource_quality_monitor.go`. Make sure to set the user agent in all CRUD methods. In `Metadata()`, use `GetDatabricksProductionName()` if the resource is to be used as the default; otherwise use `GetDatabricksStagingName()`, which suffixes the name with `_pluginframework`. A minimal skeleton is sketched after this list.
3. Create a file with `resource_resource-name_acc_test.go` and add integration tests here.
4. Create a file named `resource_resource-name_test.go` and add unit tests here. Note: abstract specific methods of the resource so they are unit-test friendly and do not test internals of the Terraform Plugin Framework library. You can compare the diagnostics; for an example, see `data_cluster_test.go`.
5. Register the resource in the `Resources()` method in `internal/providers/pluginfw/pluginfw.go`, keeping the list alphabetically sorted.
6. Create a PR and send it for review.
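
For step 2, here is a minimal sketch of what such a resource file might look like, assuming the `resource.Resource` interface from `terraform-plugin-framework`; the package name, type name, and type-name string are illustrative placeholders, not the provider's actual implementation:

```go
package volume

import (
	"context"

	"github.com/hashicorp/terraform-plugin-framework/resource"
)

// ResourceVolume is an illustrative skeleton that follows the steps above;
// real resources in this repository also define schema structs, SDK clients, etc.
type ResourceVolume struct{}

func (r *ResourceVolume) Metadata(ctx context.Context, req resource.MetadataRequest, resp *resource.MetadataResponse) {
	// Use GetDatabricksProductionName("volume") for a default-enabled resource,
	// or GetDatabricksStagingName("volume") to get the "_pluginframework" suffix.
	resp.TypeName = "databricks_volume"
}

func (r *ResourceVolume) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) {
	// Define the resource schema here.
}

func (r *ResourceVolume) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
	// Set the user agent, then call the Databricks SDK to create the volume.
}

// Read, Update and Delete follow the same pattern: set the user agent,
// call the SDK, and map the response into state.
```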
### Adding a new data source
1. Check if a directory for this data source already exists under `internal/providers/pluginfw/resources`; if not, create it, e.g. `cluster`, `volume`, etc. Please note: resources and data sources are organized under the same package for that service.
1. Check if a directory for this data source already exists under `internal/providers/pluginfw/products`; if not, create it, e.g. `cluster`, `volume`, etc. Please note: resources and data sources are organized under the same package for that service.
2. Create a file named `data_resource-name.go` and implement the `Read` method and schema for that data source. For reference, take a look at existing data sources, e.g. `data_cluster.go`. Make sure to set the user agent in the `Read` method. In `Metadata()`, use `GetDatabricksProductionName()` if the data source is to be used as the default; otherwise use `GetDatabricksStagingName()`, which suffixes the name with `_pluginframework`. A minimal skeleton is sketched below.
3. Create a file with `data_resource-name_acc_test.go` and add integration tests here.
4. Create a file named `data_resource-name_test.go` and add unit tests here. Note: abstract specific methods of the data source so they are unit-test friendly and do not test internals of the Terraform Plugin Framework library. You can compare the diagnostics; for an example, see `data_cluster_test.go`.
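
For step 2, a similar hedged sketch of a data source file, assuming the `datasource.DataSource` interface from `terraform-plugin-framework`; names are again hypothetical:

```go
package volume

import (
	"context"

	"github.com/hashicorp/terraform-plugin-framework/datasource"
)

// DataSourceVolume is an illustrative skeleton that follows the steps above.
type DataSourceVolume struct{}

func (d *DataSourceVolume) Metadata(ctx context.Context, req datasource.MetadataRequest, resp *datasource.MetadataResponse) {
	// Production name for a default-enabled data source; the staging helper
	// appends the "_pluginframework" suffix instead.
	resp.TypeName = "databricks_volume"
}

func (d *DataSourceVolume) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) {
	// Define the data source schema here.
}

func (d *DataSourceVolume) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) {
	// Set the user agent, call the Databricks SDK, and write the result to state.
}
```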
@@ -141,7 +141,7 @@ Ideally there shouldn't be any behaviour change when migrating a resource or dat
### Code Organization
Each resource and data source should be defined in the package `internal/providers/pluginfw/resources/<resource>`; e.g. the `internal/providers/pluginfw/resources/volume` package contains the resource, the data sources, and other utils specific to volumes. Tests (both unit and integration) also live in this package.
Each resource and data source should be defined in the package `internal/providers/pluginfw/products/<resource>`; e.g. the `internal/providers/pluginfw/products/volume` package contains the resource, the data sources, and other utils specific to volumes. Tests (both unit and integration) also live in this package.
Note: only docs stay under the root `docs/` directory.
59 changes: 0 additions & 59 deletions clusters/resource_cluster.go
@@ -26,26 +26,6 @@ var clusterSchema = resourceClusterSchema()
var clusterSchemaVersion = 4

const (
numWorkerErr = `num_workers may be 0 only for single-node clusters. To create a single node
cluster please include the following configuration in your cluster configuration:
spark_conf = {
"spark.databricks.cluster.profile" : "singleNode"
"spark.master" : "local[*]"
}
custom_tags = {
"ResourceClass" = "SingleNode"
}
Please note that the Databricks Terraform provider cannot detect if the above configuration
is defined in a policy used by the cluster. Please define this in the cluster configuration
itself to create a single node cluster.
For more details please see:
1. https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/cluster#fixed-size-or-autoscaling-cluster
2. https://docs.databricks.com/clusters/single-node.html`

unsupportedExceptCreateEditClusterSpecErr = "unsupported type %T, must be one of %scompute.CreateCluster, %scompute.ClusterSpec or %scompute.EditCluster. Please report this issue to the GitHub repo"
)

@@ -130,39 +110,6 @@ func ZoneDiffSuppress(k, old, new string, d *schema.ResourceData) bool {
return false
}

func Validate(cluster any) error {
var profile, master, resourceClass string
switch c := cluster.(type) {
case compute.CreateCluster:
if c.NumWorkers > 0 || c.Autoscale != nil {
return nil
}
profile = c.SparkConf["spark.databricks.cluster.profile"]
master = c.SparkConf["spark.master"]
resourceClass = c.CustomTags["ResourceClass"]
case compute.EditCluster:
if c.NumWorkers > 0 || c.Autoscale != nil {
return nil
}
profile = c.SparkConf["spark.databricks.cluster.profile"]
master = c.SparkConf["spark.master"]
resourceClass = c.CustomTags["ResourceClass"]
case compute.ClusterSpec:
if c.NumWorkers > 0 || c.Autoscale != nil {
return nil
}
profile = c.SparkConf["spark.databricks.cluster.profile"]
master = c.SparkConf["spark.master"]
resourceClass = c.CustomTags["ResourceClass"]
default:
return fmt.Errorf(unsupportedExceptCreateEditClusterSpecErr, cluster, "", "", "")
}
if profile == "singleNode" && strings.HasPrefix(master, "local") && resourceClass == "SingleNode" {
return nil
}
return errors.New(numWorkerErr)
}

// This method is a duplicate of ModifyRequestOnInstancePool() in clusters/clusters_api.go that uses Go SDK.
// Long term, ModifyRequestOnInstancePool() in clusters_api.go will be removed once all the resources using clusters are migrated to Go SDK.
func ModifyRequestOnInstancePool(cluster any) error {
@@ -443,9 +390,6 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, c *commo
clusters := w.Clusters
var createClusterRequest compute.CreateCluster
common.DataToStructPointer(d, clusterSchema, &createClusterRequest)
if err := Validate(createClusterRequest); err != nil {
return err
}
if err = ModifyRequestOnInstancePool(&createClusterRequest); err != nil {
return err
}
@@ -596,9 +540,6 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, c *commo

if hasClusterConfigChanged(d) {
log.Printf("[DEBUG] Cluster state has changed!")
if err := Validate(cluster); err != nil {
return err
}
if err = ModifyRequestOnInstancePool(&cluster); err != nil {
return err
}
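
Taken together, the deletions in this file remove the provider-side single-node validation: the `numWorkerErr` message, the `Validate()` helper, and its call sites in `resourceClusterCreate` and `resourceClusterUpdate`. The sketch below illustrates, with hypothetical field values, what the removed check used to accept; it mirrors the deleted code and is not code that remains after this commit:

```go
package main

import (
	"fmt"

	"github.com/databricks/databricks-sdk-go/service/compute"
)

func main() {
	// Hypothetical single-node request of the shape the deleted Validate() accepted.
	req := compute.CreateCluster{
		ClusterName:  "single node",
		SparkVersion: "7.3.x-scala2.12",
		NodeTypeId:   "i3.xlarge",
		NumWorkers:   0,
		SparkConf: map[string]string{
			"spark.databricks.cluster.profile": "singleNode",
			"spark.master":                     "local[*]",
		},
		CustomTags: map[string]string{"ResourceClass": "SingleNode"},
	}
	// Before this commit, Validate(req) returned nil only because the single-node
	// spark_conf and custom_tags markers are present; a bare NumWorkers == 0
	// produced numWorkerErr. After this commit no client-side check runs and the
	// request is sent to the Clusters API as-is.
	fmt.Printf("num_workers=%d profile=%q\n", req.NumWorkers, req.SparkConf["spark.databricks.cluster.profile"])
}
```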
82 changes: 49 additions & 33 deletions clusters/resource_cluster_test.go
@@ -1630,22 +1630,6 @@ func TestResourceClusterCreate_SingleNode(t *testing.T) {
assert.NoError(t, err)
assert.Equal(t, 0, d.Get("num_workers"))
}

func TestResourceClusterCreate_SingleNodeFail(t *testing.T) {
_, err := qa.ResourceFixture{
Create: true,
Resource: ResourceCluster(),
State: map[string]any{
"autotermination_minutes": 120,
"cluster_name": "Single Node Cluster",
"spark_version": "7.3.x-scala12",
"node_type_id": "Standard_F4s",
"is_pinned": false,
},
}.Apply(t)
assert.EqualError(t, err, numWorkerErr)
}

func TestResourceClusterCreate_NegativeNumWorkers(t *testing.T) {
_, err := qa.ResourceFixture{
Create: true,
@@ -1662,27 +1646,59 @@ func TestResourceClusterCreate_NegativeNumWorkers(t *testing.T) {
require.Equal(t, true, strings.Contains(err.Error(), "expected num_workers to be at least (0)"))
}

func TestResourceClusterUpdate_FailNumWorkersZero(t *testing.T) {
_, err := qa.ResourceFixture{
ID: "abc",
Update: true,
Resource: ResourceCluster(),
InstanceState: map[string]string{
"autotermination_minutes": "15",
"cluster_name": "Shared Autoscaling",
"spark_version": "7.1-scala12",
"node_type_id": "i3.xlarge",
"num_workers": "100",
func TestResourceClusterCreate_NumWorkersIsZero(t *testing.T) {
d, err := qa.ResourceFixture{
Fixtures: []qa.HTTPFixture{
nothingPinned,
{
Method: "POST",
Resource: "/api/2.1/clusters/create",
ExpectedRequest: compute.CreateCluster{
NumWorkers: 0,
ClusterName: "Zero workers cluster",
SparkVersion: "7.3.x-scala12",
NodeTypeId: "Standard_F4s",
AutoterminationMinutes: 120,
ForceSendFields: []string{"NumWorkers"},
},
Response: compute.ClusterDetails{
ClusterId: "abc",
State: compute.StateRunning,
},
},
{
Method: "GET",
ReuseRequest: true,
Resource: "/api/2.1/clusters/get?cluster_id=abc",
Response: compute.ClusterDetails{
ClusterId: "abc",
ClusterName: "Zero workers cluster",
SparkVersion: "7.3.x-scala12",
NodeTypeId: "Standard_F4s",
AutoterminationMinutes: 120,
State: compute.StateRunning,
},
},
{
Method: "GET",
Resource: "/api/2.0/libraries/cluster-status?cluster_id=abc",
Response: compute.ClusterLibraryStatuses{
LibraryStatuses: []compute.LibraryFullStatus{},
},
},
},
Create: true,
Resource: ResourceCluster(),
State: map[string]any{
"autotermination_minutes": 15,
"cluster_name": "Shared Autoscaling",
"spark_version": "7.1-scala12",
"node_type_id": "i3.xlarge",
"num_workers": 0,
"autotermination_minutes": 120,
"cluster_name": "Zero workers cluster",
"spark_version": "7.3.x-scala12",
"node_type_id": "Standard_F4s",
"is_pinned": false,
},
}.Apply(t)
assert.EqualError(t, err, numWorkerErr)
assert.NoError(t, err)
assert.Equal(t, 0, d.Get("num_workers"))
}

func TestModifyClusterRequestAws(t *testing.T) {
16 changes: 8 additions & 8 deletions internal/providers/pluginfw/pluginfw_rollout_utils.go
@@ -12,14 +12,14 @@ import (
"slices"
"strings"

"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/resources/catalog"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/resources/cluster"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/resources/library"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/resources/notificationdestinations"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/resources/qualitymonitor"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/resources/registered_model"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/resources/sharing"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/resources/volume"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/products/catalog"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/products/cluster"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/products/library"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/products/notificationdestinations"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/products/qualitymonitor"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/products/registered_model"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/products/sharing"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/products/volume"
"github.com/hashicorp/terraform-plugin-framework/datasource"
"github.com/hashicorp/terraform-plugin-framework/resource"
)
