
managed_resource_group_name not working for databricks #15909

Open
mevenks opened this issue Mar 21, 2022 · 4 comments

mevenks commented Mar 21, 2022

Hi,

I am using Terraform to create a Databricks workspace. If I do NOT include the [managed_resource_group_name] parameter, the workspace is created successfully in my resource group and a new managed resource group is created automatically; so far everything is normal.

But if I include the [managed_resource_group_name] parameter, pointing it at a freshly created resource group of my own, the workspace gets created but not the resources that are supposed to be created under the managed resource group (storage, vnet and workers-sg).

Terraform v1.1.5
on windows_amd64

provider registry.terraform.io/hashicorp/azurerm v2.99.0
My naming convention:
databricks resource group naming convention: xx-xx-rg-project
databricks managed resource group naming convention: xx-xx-rg-project-databricks-workspace

The same main.tf is used to create both the resource group for Databricks and the workspace's resource group; I've added depends_on under the Databricks module to make sure both resource groups are created before the workspace.

main.tf

module "databricks" {
  source = "./modules/databricks"

  name                        = var.databricks_name
  resource_group_name         = var.resource_group_name
  location                    = var.resource_group_location
  sku                         = var.databricks_sku
  managed_resource_group_name = var.managed_resource_group_name

  depends_on = [
    module.resource_group_workspace,
    module.resource_group
  ]
}

my module in modules\databricks\main.tf:

resource "azurerm_databricks_workspace" "this" {
  name                        = var.name
  resource_group_name         = var.resource_group_name
  location                    = var.location
  sku                         = var.sku
  managed_resource_group_name = var.managed_resource_group_name
}

Terraform does not display any error, only success on creation.
Message I get in Databricks: The workspace 'xx-xx-xxxxx' is in a failed state and hence cannot be launched. Please delete and re-create the workspace.

thank you for your help
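For reference, the behavior described above is consistent with the resolution reported later in this thread: the name passed to managed_resource_group_name should refer to a resource group that the Databricks service creates and manages itself, not one pre-created in Terraform. A minimal sketch under that assumption (names are illustrative, and azurerm_resource_group.project is a hypothetical pre-existing resource group for the workspace itself):

```hcl
# Sketch: pass only a *name* for the managed resource group and let Azure
# Databricks create it. The managed group is not declared as an
# azurerm_resource_group and is not listed in depends_on.
resource "azurerm_databricks_workspace" "example" {
  name                = "xx-xx-databricks"
  resource_group_name = azurerm_resource_group.project.name
  location            = azurerm_resource_group.project.location
  sku                 = "standard"

  # Created and managed by the Databricks service, not by Terraform.
  managed_resource_group_name = "xx-xx-rg-project-databricks-workspace"
}
```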


mycloud91 commented Sep 23, 2022

I'm also facing the same issue. Does anyone have an update/solution on this? Versions for reference:

azurerm = {
  source  = "hashicorp/azurerm"
  version = "=2.64.0"
}
databricks = {
  source  = "databrickslabs/databricks"
  version = "0.5.1"
}


mevenks commented Sep 23, 2022

Hi,

My working version:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>3.2.0"
    }

    databricks = {
      source  = "databrickslabs/databricks"
      version = "0.5.7"
    }
  }

  backend "local" {
    path = "./terraform.tfstate"
  }
}

my naming convention:

variable "RESOURCE_GROUP_NAME" {
  type    = string
  default = "XXX.POC.XX.XXXX.XXXX.01"
}

variable "MANAGED_RESOURCE_GROUP_NAME" {
  type    = string
  default = "XXX.POC.XX.XXXX.XXXXXX.XXXXXX.01"
}

my module:

resource "azurerm_databricks_workspace" "this" {
  name                          = var.name
  resource_group_name           = var.resource_group_name
  location                      = var.location
  sku                           = var.sku
  managed_resource_group_name   = var.managed_resource_group_name
  public_network_access_enabled = true

  custom_parameters {
    no_public_ip                                         = true
    virtual_network_id                                   = var.virtual_network_id
    private_subnet_name                                  = var.private_subnet_name
    private_subnet_network_security_group_association_id = var.private_nsg
    public_subnet_name                                   = var.public_subnet_name
    public_subnet_network_security_group_association_id  = var.public_nsg
  }

  tags = {
    Environnement = "POC"
    Support       = "Architecture Technique SI"
  }
}


mycloud91 commented Oct 3, 2022

@mevenks Thanks for your update! I have resolved the issue (cause: I reused an existing managed resource group name for a newly created Databricks workspace). It appears a different managed resource group name is required for each Databricks workspace, even when they share the same virtual network.
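Building on that, one way to guarantee a distinct managed resource group per workspace is to derive its name from the workspace's own name. A sketch, not a definitive fix; the interpolated naming scheme here is an assumption, not from the original config:

```hcl
# Sketch: derive a unique managed resource group name per workspace so two
# workspaces never collide, even when they share a virtual network.
resource "azurerm_databricks_workspace" "this" {
  name                = var.name
  resource_group_name = var.resource_group_name
  location            = var.location
  sku                 = var.sku

  # Assumed naming scheme: unique per (resource group, workspace) pair.
  managed_resource_group_name = "${var.resource_group_name}-${var.name}-managed"
}
```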

@caldempsey

I experienced this issue too. Every time Terraform wants to make a change to the Databricks workspace by deleting and recreating it, it errors out complaining that it doesn't know what to do with the managed resource group. @mycloud91 did you find a solution that lets you define your own managed resource group for the Databricks workspace (rather than it creating its own)?
