[Feature] Updated AWS UC storage credential to include permissions for file events #4406
Conversation
Force-pushed from 3ae3e94 to 9085086
aws/data_aws_unity_catalog_policy.go (outdated)
"arn:aws:sqs:*:*:*",
"arn:aws:sns:*:*:*",
Are customers comfortable with granting us permission on all SQS queues and SNS destinations? It may be sensible as a default but I expect they will want to be more selective. I wonder if e.g. there is a specific prefix that Databricks always uses so we can restrict this somewhat?
These SNS topics and SQS queues will be created by Databricks, and will follow the pattern arn:aws:sqs:<region>:<aws_account_id>:csms-*
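Given that pattern, the policy can grant access to the Databricks-managed csms- prefix rather than every queue and topic in the account. A minimal sketch of what the restricted resource ARNs might look like — the helper name and the region/account interpolation are illustrative assumptions, not the provider's actual code:

```go
package main

import "fmt"

// fileEventResourceARNs (hypothetical helper) builds resource ARNs
// scoped to the Databricks-managed "csms-" prefix instead of the
// full "arn:aws:sqs:*:*:*" / "arn:aws:sns:*:*:*" wildcards.
func fileEventResourceARNs(region, accountID string) []string {
	return []string{
		fmt.Sprintf("arn:aws:sqs:%s:%s:csms-*", region, accountID),
		fmt.Sprintf("arn:aws:sns:%s:%s:csms-*", region, accountID),
	}
}

func main() {
	for _, arn := range fileEventResourceARNs("us-east-1", "123456789012") {
		fmt.Println(arn)
	}
}
```

IAM treats the trailing `*` as a prefix match, so this still covers every queue and topic Databricks creates for file events while excluding the customer's own SQS/SNS resources.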
I added an ARN resource ID prefix. LMK if this works.
@@ -60,6 +60,59 @@ func generateReadContext(ctx context.Context, d *schema.ResourceData, m *common.
		Resources: []string{kmsArn},
	})
}
policy.Statements = append(policy.Statements, &awsIamPolicyStatement{
Agree with @mgyucht - these can be left as opt-in/opt-out, as our official documentation mentions this is optional but strongly recommended.
@mgyucht thanks for the review. Ideally we would not make these changes opt-in/opt-out, as we're moving towards making file events mandatory (see PRD: Maximizing coverage of managed file events).
If integration tests don't run automatically, an authorized user can run them manually by following the instructions below. Checks will be approved automatically on success.
Changes
Databricks documentation for storage credentials contains instructions to add permissions for file events, but these are still missing from the Terraform provider. This PR adds them for AWS; PRs for Azure and GCP will follow soon.
Tests
Updated test: aws/data_aws_unity_catalog_policy_test.go
- [ ] make test run locally
- [ ] relevant change in docs/ folder
- [ ] covered with integration tests in internal/acceptance