diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md index 870dadcd183..d2bb72552bd 100644 --- a/.github/pull_request_template.md +++ b/.github/pull_request_template.md @@ -9,6 +9,7 @@ To learn more about the writing conventions used in the dbt Labs docs, see the [ - [ ] I have reviewed the [Content style guide](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/content-style-guide.md) so my content adheres to these guidelines. - [ ] The topic I'm writing about is for specific dbt version(s) and I have versioned it according to the [version a whole page](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#adding-a-new-version) and/or [version a block of content](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#versioning-blocks-of-content) guidelines. - [ ] I have added checklist item(s) to this list for anything that needs to happen before this PR is merged, such as "needs technical review" or "change base branch." +- [ ] The content in this PR requires a dbt release note, so I added one to the [release notes page](https://docs.getdbt.com/docs/dbt-versions/dbt-cloud-release-notes). -Note: The **Defer to staging/production** [toggle](/docs/cloud/about-cloud-develop-defer#defer-in-the-dbt-cloud-ide) button doesn't apply when running Semantic Layer commands in the dbt Cloud IDE. To use defer for Semantic layer commands in the IDE, toggle the button on and manually add the `--defer` flag to the command. This is a temporary workaround and will be available soon. You can install [MetricFlow](https://github.com/dbt-labs/metricflow#getting-started) from [PyPI](https://pypi.org/project/dbt-metricflow/). You need to use `pip` to install MetricFlow on Windows or Linux operating systems: + + 1. Create or activate your virtual environment `python -m venv venv` 2. Run `pip install dbt-metricflow` * You can install MetricFlow using PyPI as an extension of your dbt adapter in the command line. To install the adapter, run `python -m pip install "dbt-metricflow[your_adapter_name]"` and add the adapter name at the end of the command. For example, for a Snowflake adapter run `python -m pip install "dbt-metricflow[snowflake]"` + + + + +1. Create or activate your virtual environment `python -m venv venv` +2. Run `pip install dbt-metricflow` + * You can install MetricFlow using PyPI as an extension of your dbt adapter in the command line. To install it, run `python -m pip install "dbt-metricflow[adapter_package_name]"`, replacing `adapter_package_name` with your adapter's package name. For example, for a Snowflake adapter, run `python -m pip install "dbt-metricflow[dbt-snowflake]"` + + + **Note**: you'll need to manage versioning between dbt Core, your adapter, and MetricFlow. Also note that MetricFlow `mf` commands return an error if you have a Metafont LaTeX package installed. To run `mf` commands, uninstall the package. diff --git a/website/docs/docs/build/metricflow-time-spine.md b/website/docs/docs/build/metricflow-time-spine.md index e932fb36f53..9932a35839c 100644 --- a/website/docs/docs/build/metricflow-time-spine.md +++ b/website/docs/docs/build/metricflow-time-spine.md @@ -463,9 +463,22 @@ For example, if you use a custom calendar in your organization, such as a fiscal - This is useful for calculating metrics based on a custom calendar, such as fiscal quarters or weeks. 
- Use the `custom_granularities` key to define a non-standard time period for querying data, such as a `retail_month` or `fiscal_week`, instead of standard options like `day`, `month`, or `year`. -- Ensure the the `standard_granularity_column` is a date time type. - This feature provides more control over how time-based metrics are calculated. + + +When working with custom calendars in MetricFlow, it's important to ensure: + +- Consistent data types — Both your dimension column and the time spine column should use the same data type to allow accurate comparisons. Functions like `DATE_TRUNC` don't change the data type of the input in some databases (like Snowflake). Using different data types can lead to mismatches and inaccurate results. + + We recommend using `DATETIME` or `TIMESTAMP` data types for your time dimensions and time spine, as they support all granularities. The `DATE` data type may not support smaller granularities like hours or minutes. + +- Time zones — MetricFlow currently doesn't perform any timezone manipulation. When working with timezone-aware data, inconsistent time zones may lead to unexpected results during aggregations and comparisons. + +For example, if your time spine column is `TIMESTAMP` type and your dimension column is `DATE` type, comparisons between these columns might not work as intended. To fix this, convert your `DATE` column to `TIMESTAMP`, or make sure both columns are the same data type. + + + ### Add custom granularities To add custom granularities, the Semantic Layer supports custom calendar configurations that allow users to query data using non-standard time periods like `fiscal_year` or `retail_month`. You can define these custom granularities (all lowercased) by modifying your model's YAML configuration like this: diff --git a/website/docs/docs/cloud-integrations/configure-auto-exposures.md b/website/docs/docs/cloud-integrations/configure-auto-exposures.md index 4574d69c164..8fa83c027e6 100644 --- a/website/docs/docs/cloud-integrations/configure-auto-exposures.md +++ b/website/docs/docs/cloud-integrations/configure-auto-exposures.md @@ -25,7 +25,6 @@ To access the features, you should meet the following: 3. You have set up a [production](/docs/deploy/deploy-environments#set-as-production-environment) deployment environment for each project you want to explore, with at least one successful job run. 4. You have [admin permissions](/docs/cloud/manage-access/enterprise-permissions) in dbt Cloud to edit project settings or production environment settings. 5. Use Tableau as your BI tool and enable metadata permissions or work with an admin to do so. Compatible with Tableau Cloud or Tableau Server with the Metadata API enabled. -6. Run a production job _after_ saving the Tableau collections. ## Set up in Tableau @@ -63,7 +62,6 @@ To set up [personal access tokens (PATs)](/docs/dbt-cloud-apis/user-tokens#using dbt Cloud automatically imports and syncs any workbook within the selected collections. New additions to the collections will be added to the lineage in dbt Cloud during the next automatic sync (usually once per day). 5. Click **Save**. -6. Run a production job _after_ saving the Tableau collections. dbt Cloud imports everything in the collection(s) and you can continue to view them in Explorer. For more information on how to view and use auto-exposures, refer to [View auto-exposures from dbt Explorer](/docs/collaborate/auto-exposures) page. 
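For reference, the custom granularity configuration that the time spine hunk above introduces ("...by modifying your model's YAML configuration like this:") might look similar to the following sketch. The model and column names (`all_days`, `date_day`, `fiscal_week_column`) are hypothetical, not taken from this PR:

```yml
models:
  - name: all_days
    description: "A day-grained time spine for the Semantic Layer"
    time_spine:
      standard_granularity_column: date_day  # use a DATETIME/TIMESTAMP column, per the data type guidance above
      custom_granularities:
        - name: fiscal_week                  # lowercase, queryable alongside standard grains like day or month
          column_name: fiscal_week_column    # column in the time spine model that holds the custom grain
```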
diff --git a/website/docs/docs/cloud/about-cloud-develop-defer.md b/website/docs/docs/cloud/about-cloud-develop-defer.md index 3ee5ac71666..fc55edf8a38 100644 --- a/website/docs/docs/cloud/about-cloud-develop-defer.md +++ b/website/docs/docs/cloud/about-cloud-develop-defer.md @@ -40,9 +40,6 @@ To enable defer in the dbt Cloud IDE, toggle the **Defer to production** button For example, if you were to start developing on a new branch with [nothing in your development schema](/reference/node-selection/defer#usage), edit a single model, and run `dbt build -s state:modified` — only the edited model would run. Any `{{ ref() }}` functions will point to the production location of the referenced models. - -Note: The **Defer to staging/production** toggle button doesn't apply when running [dbt Semantic Layer commands](/docs/build/metricflow-commands) in the dbt Cloud IDE. To use defer for Semantic layer commands in the IDE, toggle the button on and manually add the `--defer` flag to the command. This is a temporary workaround and will be available soon. - ### Defer in dbt Cloud CLI diff --git a/website/docs/docs/cloud/about-cloud/architecture.md b/website/docs/docs/cloud/about-cloud/architecture.md index 52614f0cbcd..ecf67b83a96 100644 --- a/website/docs/docs/cloud/about-cloud/architecture.md +++ b/website/docs/docs/cloud/about-cloud/architecture.md @@ -48,7 +48,7 @@ The git repo information is stored on dbt Cloud servers to make it accessible du ### Authentication services -The default settings of dbt Cloud enable local users with credentials stored in dbt Cloud. Still, integrations with various authentication services are offered as an alternative, including [single sign-on services](/docs/cloud/manage-access/sso-overview). Access to features can be granted/restricted by role using [RBAC](/docs/cloud/manage-access/enterprise-permissions). +The default settings of dbt Cloud enable local users with credentials stored in dbt Cloud. Still, integrations with various authentication services are offered as an alternative, including [single sign-on services](/docs/cloud/manage-access/sso-overview). Access to features can be granted/restricted by role using [RBAC](/docs/cloud/manage-access/about-user-access#role-based-access-control-). SSO features are essential because they reduce the number of credentials a user must maintain. Users sign in once and the authentication token is shared among integrated services (such as dbt Cloud). The token expires and must be refreshed at predetermined intervals, requiring the user to go through the authentication process again. If the user is disabled in the SSO provider service, their access to dbt Cloud is disabled, and they cannot override this with local auth credentials. diff --git a/website/docs/docs/cloud/dbt-cloud-ide/develop-in-the-cloud.md b/website/docs/docs/cloud/dbt-cloud-ide/develop-in-the-cloud.md index 398b0cff2a1..c9d2cbbad30 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/develop-in-the-cloud.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/develop-in-the-cloud.md @@ -53,7 +53,7 @@ To understand how to navigate the IDE and its user interface elements, refer to | Feature | Description | |---|---| | [**Keyboard shortcuts**](/docs/cloud/dbt-cloud-ide/keyboard-shortcuts) | You can access a variety of [commands and actions](/docs/cloud/dbt-cloud-ide/keyboard-shortcuts) in the IDE by choosing the appropriate keyboard shortcut. Use the shortcuts for common tasks like building modified models or resuming builds from the last failure. 
| -| **IDE version control** | The IDE version control section and git button allow you to apply the concept of [version control](/docs/collaborate/git/version-control-basics) to your project directly into the IDE.

- Create or change branches, execute git commands using the git button.
- Commit or revert individual files by right-clicking the edited file
- [Resolve merge conflicts](/docs/collaborate/git/merge-conflicts)
- Link to the repo directly by clicking the branch name
- Edit, format, or lint files and execute dbt commands in your primary protected branch, and commit to a new branch.
- Use Git diff view to view what has been changed in a file before you make a pull request.
- From dbt version 1.6 and higher, use the **Prune branches** [button](/docs/cloud/dbt-cloud-ide/ide-user-interface#prune-branches-modal) to delete local branches that have been deleted from the remote repository, keeping your branch management tidy. | +| **IDE version control** | The IDE version control section and git button allow you to apply the concept of [version control](/docs/collaborate/git/version-control-basics) to your project directly in the IDE.

- Create or change branches, and execute git commands using the git button.
- Commit or revert individual files by right-clicking the edited file.
- [Resolve merge conflicts](/docs/collaborate/git/merge-conflicts).
- Link to the repo directly by clicking the branch name.
- Edit, format, or lint files and execute dbt commands in your primary protected branch, and commit to a new branch.
- Use the Git diff view to see what has changed in a file before you make a pull request.
- Use the **Prune branches** [button](/docs/cloud/dbt-cloud-ide/ide-user-interface#prune-branches-modal) (dbt v1.6 and higher) to delete local branches that have been deleted from the remote repository, keeping your branch management tidy.
- Sign your [git commits](/docs/cloud/dbt-cloud-ide/git-commit-signing) to mark them as 'Verified'. | | **Preview and Compile button** | You can [compile or preview](/docs/cloud/dbt-cloud-ide/ide-user-interface#console-section) code, a snippet of dbt code, or one of your dbt models after editing and saving. | | [**dbt Copilot**](/docs/cloud/dbt-copilot) | A powerful AI engine that can generate documentation, tests, and semantic models for your dbt SQL models. Available for dbt Cloud Enterprise plans. | | **Build, test, and run button** | Build, test, and run your project with a button click or by using the Cloud IDE command bar. diff --git a/website/docs/docs/cloud/dbt-cloud-ide/git-commit-signing.md b/website/docs/docs/cloud/dbt-cloud-ide/git-commit-signing.md new file mode 100644 index 00000000000..afaa0751669 --- /dev/null +++ b/website/docs/docs/cloud/dbt-cloud-ide/git-commit-signing.md @@ -0,0 +1,80 @@ +--- +title: "Git commit signing" +description: "Learn how to sign your Git commits when using the IDE for development." +sidebar_label: Git commit signing +--- + +# Git commit signing + +To prevent impersonation and enhance security, you can sign your Git commits before pushing them to your repository. Using your signature, a Git provider can cryptographically verify a commit and mark it as "verified", providing increased confidence about its origin. + +You can configure dbt Cloud to sign your Git commits when using the IDE for development. To set it up, enable the feature in dbt Cloud, follow the flow to generate a keypair, and upload the public key to your Git provider to use for signature verification. + + +## Prerequisites + +- GitHub or GitLab is your Git provider. Currently, Azure DevOps is not supported. +- You have a dbt Cloud account on the [Enterprise plan](https://www.getdbt.com/pricing/). + +## Generate GPG keypair in dbt Cloud + +To generate a GPG keypair in dbt Cloud, follow these steps: +1. Go to your **Personal profile** page in dbt Cloud. +2. Navigate to the **Signed Commits** section. +3. Enable the **Sign commits originating from this user** toggle. +4. This generates a GPG keypair. The private key is used to sign all future Git commits, and the public key is displayed so you can upload it to your Git provider. + + + +## Upload public key to Git provider + +To upload the public key to your Git provider, follow the detailed documentation provided by your Git provider: + +- [GitHub instructions](https://docs.github.com/en/authentication/managing-commit-signature-verification/adding-a-gpg-key-to-your-github-account) +- [GitLab instructions](https://docs.gitlab.com/ee/user/project/repository/signed_commits/gpg.html) + +Once you have uploaded the public key to your Git provider, your Git commits will be marked as "Verified" after you push the changes to the repository. + + + +## Considerations + +- The GPG keypair is tied to the user, not a specific account. There is a 1:1 relationship between the user and keypair. The same key will be used for signing commits on any accounts the user is a member of. +- The GPG keypair generated in dbt Cloud is linked to the email address associated with your account at the time of keypair creation. This email identifies the author of signed commits. +- For your Git commits to be marked as "verified", your dbt Cloud email address must be a verified email address with your Git provider. The Git provider (such as GitHub or GitLab) checks that the commit's signed email matches a verified email in your Git provider account. 
If they don’t match, the commit won't be marked as "verified." +- Keep your dbt Cloud email and Git provider's verified email in sync to avoid verification issues. If you change your dbt Cloud email address: + - Generate a new GPG keypair with the updated email, following the [steps mentioned earlier](/docs/cloud/dbt-cloud-ide/git-commit-signing#generate-gpg-keypair-in-dbt-cloud). + - Add and verify the new email in your Git provider. + + + +## FAQs + + + + + +If you delete your GPG keypair in dbt Cloud, your Git commits will no longer be signed. You can generate a new GPG keypair by following the [steps mentioned earlier](/docs/cloud/dbt-cloud-ide/git-commit-signing#generate-gpg-keypair-in-dbt-cloud). + + + + +GitHub and GitLab support commit signing, while Azure DevOps does not. Commit signing is a [git feature](https://git-scm.com/book/ms/v2/Git-Tools-Signing-Your-Work), and is independent of any specific provider. However, not all providers support the upload of public keys, or the display of verification badges on commits. + + + + + +If your Git provider does not explicitly support the uploading of public GPG keys, then +commits will still be signed using the private key, but no verification information will +be displayed by the provider. + + + + + +If your Git provider is configured to enforce commit verification, then unsigned commits +will be rejected. To avoid this, ensure that you have followed all previous steps to generate +a keypair, and uploaded the public key to the provider. + + diff --git a/website/docs/docs/cloud/dbt-cloud-ide/ide-user-interface.md b/website/docs/docs/cloud/dbt-cloud-ide/ide-user-interface.md index 8d80483485c..4aec3353544 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/ide-user-interface.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/ide-user-interface.md @@ -193,7 +193,7 @@ Use menus and modals to interact with IDE and access useful options to help your * Toggling between dark or light mode for a better viewing experience * Restarting the IDE - * Fully recloning your repository to refresh your git state and view status details + * Rolling back your repo to remote to refresh your git state and view status details * Viewing status details, including the IDE Status modal. - + diff --git a/website/docs/docs/cloud/manage-access/about-access.md b/website/docs/docs/cloud/manage-access/about-access.md index 6b02d9eb17b..b970b0d5763 100644 --- a/website/docs/docs/cloud/manage-access/about-access.md +++ b/website/docs/docs/cloud/manage-access/about-access.md @@ -89,7 +89,7 @@ There are three license types in dbt Cloud: - **Developer** — User can be granted _any_ permissions. - **Read-Only** — User has read-only permissions applied to all dbt Cloud resources regardless of the role-based permissions that the user is assigned. -- **IT** — User has [Security Admin](/docs/cloud/manage-access/enterprise-permissions#security-admin) and [Billing Admin](/docs/cloud/manage-access/enterprise-permissions#billing-admin) permissions applied, regardless of the group permissions assigned. +- **IT** — User has Security Admin and Billing Admin [permissions](/docs/cloud/manage-access/enterprise-permissions) applied, regardless of the group permissions assigned. Developer licenses will make up a majority of the users in your environment and have the highest impact on billing, so it's important to monitor how many you have at any given time. 
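Circling back to the commit-signing page added above: once the public key is uploaded, a quick way to confirm signing works end to end is to inspect a pushed commit with standard git tooling (generic git commands, not a dbt Cloud feature):

```bash
# Show the GPG signature status of the most recent commit
git log --show-signature -1

# Verify a specific commit directly; exits non-zero if the signature is missing or invalid
git verify-commit HEAD
```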
diff --git a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md index da19f30ab4c..66d821b90d0 100644 --- a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md +++ b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md @@ -55,7 +55,7 @@ If you're on an Enterprise plan and have the correct [permissions](/docs/cloud/m - To add a user, go to **Account Settings** and select **Users**. - Click the [**Invite Users**](/docs/cloud/manage-access/invite-users) button. - - For fine-grained permission configuration, refer to [Role based access control](/docs/cloud/manage-access/enterprise-permissions). + - For fine-grained permission configuration, refer to [Role-based access control](/docs/cloud/manage-access/about-user-access#role-based-access-control-).
diff --git a/website/docs/docs/cloud/manage-access/enterprise-permissions.md b/website/docs/docs/cloud/manage-access/enterprise-permissions.md index a1f6d795c23..5a56900d529 100644 --- a/website/docs/docs/cloud/manage-access/enterprise-permissions.md +++ b/website/docs/docs/cloud/manage-access/enterprise-permissions.md @@ -22,22 +22,14 @@ The following roles and permission sets are available for assignment in dbt Clou :::tip Licenses or Permission sets -The user's [license](/docs/cloud/manage-access/seats-and-users) type always overrides their assigned permission set. This means that even if a user belongs to a dbt Cloud group with 'Account Admin' permissions, having a 'Read-Only' license would still prevent them from performing administrative actions on the account. +The user's [license](/docs/cloud/manage-access/about-user-access) type always overrides their assigned permission set. This means that even if a user belongs to a dbt Cloud group with 'Account Admin' permissions, having a 'Read-Only' license would still prevent them from performing administrative actions on the account. ::: -## How to set up RBAC Groups in dbt Cloud +## Additional resources -Role-Based Access Control (RBAC) is helpful for automatically assigning permissions to dbt admins based on their SSO provider group associations. RBAC does not apply to [model groups](/docs/collaborate/govern/model-access#groups). +- [Grant users access](/docs/cloud/manage-access/about-user-access#grant-access) +- [Role-based access control](/docs/cloud/manage-access/about-user-access#role-based-access-control-) +- [Environment-level permissions](/docs/cloud/manage-access/environment-permissions) -1. Click the gear icon to the top right and select **Account Settings**. Click **Groups & Licenses** - - - -2. Select an existing group or create a new group to add RBAC. Name the group (this can be any name you like, but it's recommended to keep it consistent with the SSO groups). If you have configured SSO with SAML 2.0, you may have to use the GroupID instead of the name of the group. -3. Configure the SSO provider groups you want to add RBAC by clicking **Add** in the **SSO** section. These fields are case-sensitive and must match the source group formatting. -4. Configure the permissions for users within those groups by clicking **Add** in the **Access** section of the window. - - -5. When you've completed your configurations, click **Save**. Users will begin to populate the group automatically once they have signed in to dbt Cloud with their SSO credentials. 
diff --git a/website/docs/docs/cloud/manage-access/environment-permissions.md b/website/docs/docs/cloud/manage-access/environment-permissions.md index 44cf2dc9a64..b99da64609c 100644 --- a/website/docs/docs/cloud/manage-access/environment-permissions.md +++ b/website/docs/docs/cloud/manage-access/environment-permissions.md @@ -77,4 +77,4 @@ If the user has the same roles across projects, you can apply environment access ## Related docs --[Environment-level permissions setup](/docs/cloud/manage-access/environment-permissions-setup) +- [Environment-level permissions setup](/docs/cloud/manage-access/environment-permissions-setup) diff --git a/website/docs/docs/cloud/manage-access/licenses-and-groups.md b/website/docs/docs/cloud/manage-access/licenses-and-groups.md deleted file mode 100644 index b91af80f9b3..00000000000 --- a/website/docs/docs/cloud/manage-access/licenses-and-groups.md +++ /dev/null @@ -1,145 +0,0 @@ ---- -title: "Licenses and groups" -id: "licenses-and-groups" ---- - -## Overview - -dbt Cloud administrators can use dbt Cloud's permissioning model to control -user-level access in a dbt Cloud account. This access control comes in two flavors: -License-based and Role-based. - -- **License-based Access Controls:** User are configured with account-wide - license types. These licenses control the specific parts of the dbt Cloud application - that a given user can access. -- **Role-based Access Control (RBAC):** Users are assigned to _groups_ that have - specific permissions on specific projects or the entire account. A user may be - a member of multiple groups, and those groups may have permissions on multiple - projects. - -## License-based access control - -Each user on an account is assigned a license type when the user is first -invited to a given account. This license type may change over time, but a -user can only have one type of license at any given time. - -A user's license type controls the features in dbt Cloud that the user is able -to access. dbt Cloud's three license types are: - - **Read-Only** - - **Developer** - - **IT** - -For more information on these license types, see [Seats & Users](/docs/cloud/manage-access/seats-and-users). -At a high-level, Developers may be granted _any_ permissions, whereas Read-Only -users will have read-only permissions applied to all dbt Cloud resources -regardless of the role-based permissions that the user is assigned. IT users will have Security Admin and Billing Admin permissions applied regardless of the role-based permissions that the user is assigned. - -## Role-based access control - -:::info dbt Cloud Enterprise - -Role-based access control is a feature of the dbt Cloud Enterprise plan - -::: - -Role-based access control allows for fine-grained permissioning in the dbt Cloud -application. With role-based access control, users can be assigned varying -permissions to different projects within a dbt Cloud account. For teams on the -Enterprise tier, role-based permissions can be generated dynamically from -configurations in an [Identity Provider](sso-overview). - -Role-based permissions are applied to _groups_ and pertain to _projects_. The -assignable permissions themselves are granted via _permission sets_. - - -### Groups - -A group is a collection of users. Users may belong to multiple groups. Members -of a group inherit any permissions applied to the group itself. - -Users can be added to a dbt Cloud group based on their group memberships in the -configured [Identity Provider](sso-overview) for the account. 
In this way, dbt -Cloud administrators can manage access to dbt Cloud resources via identity -management software like Microsoft Entra ID (formerly Azure AD), Okta, or GSuite. See _SSO Mappings_ below for -more information. - -You can view the groups in your account or create new groups from the **Team > Groups** -page in your Account Settings. - - - - -### SSO Mappings - -SSO Mappings connect Identity Provider (IdP) group membership to dbt Cloud group -membership. When a user logs into dbt Cloud via a supported identity provider, -their IdP group memberships are synced with dbt Cloud. Upon logging in -successfully, the user's group memberships (and therefore, permissions) are -adjusted accordingly within dbt Cloud automatically. - -:::tip Creating SSO Mappings - -While dbt Cloud supports mapping multiple IdP groups to a single dbt Cloud -group, we recommend using a 1:1 mapping to make administration as simple as -possible. Consider using the same name for your dbt Cloud groups and your IdP -groups. - -::: - - -### Permission Sets - -Permission sets are predefined collections of granular permissions. Permission -sets combine low-level permission grants into high-level roles that can be -assigned to groups. Some examples of existing permission sets are: - - Account Admin - - Git Admin - - Job Admin - - Job Viewer - - ...and more - -For a full list of enterprise permission sets, see [Enterprise Permissions](/docs/cloud/manage-access/enterprise-permissions). -These permission sets are available for assignment to groups and control the ability -for users in these groups to take specific actions in the dbt Cloud application. - -In the following example, the _dbt Cloud Owners_ group is configured with the -**Account Admin** permission set on _All Projects_ and the **Job Admin** permission -set on the _Internal Analytics_ project. - - - - -### Manual assignment - -dbt Cloud administrators can manually assign users to groups independently of -IdP attributes. If a dbt Cloud group is configured _without_ any -SSO Mappings, then the group will be _unmanaged_ and dbt Cloud will not adjust -group membership automatically when users log into dbt Cloud via an identity -provider. This behavior may be desirable for teams that have connected an identity -provider, but have not yet configured SSO Mappings between dbt Cloud and the -IdP. - -If an SSO Mapping is added to an _unmanaged_ group, then it will become -_managed_, and dbt Cloud may add or remove users to the group automatically at -sign-in time based on the user's IdP-provided group membership information. - - -## FAQs -- **When are IdP group memberships updated for SSO Mapped groups?** Group memberships - are updated every time a user logs into dbt Cloud via a supported SSO provider. If - you've changed group memberships in your identity provider or dbt Cloud, ask your - users to log back into dbt Cloud for these group memberships to be synchronized. - -- **Can I set up SSO without RBAC?** Yes, see the documentation on - [Manual Assignment](#manual-assignment) above for more information on using - SSO without RBAC. - -- **Can I configure a user's License Type based on IdP Attributes?** Yes, see - the docs on [managing license types](/docs/cloud/manage-access/seats-and-users#managing-license-types) - for more information. 
diff --git a/website/docs/docs/cloud/manage-access/set-up-sso-google-workspace.md b/website/docs/docs/cloud/manage-access/set-up-sso-google-workspace.md index e4ff998015c..2b2575efc57 100644 --- a/website/docs/docs/cloud/manage-access/set-up-sso-google-workspace.md +++ b/website/docs/docs/cloud/manage-access/set-up-sso-google-workspace.md @@ -117,7 +117,7 @@ If the verification information looks appropriate, then you have completed the c ## Setting up RBAC Now you have completed setting up SSO with GSuite, the next steps will be to set up -[RBAC groups](/docs/cloud/manage-access/enterprise-permissions) to complete your access control configuration. +[RBAC groups](/docs/cloud/manage-access/about-user-access#role-based-access-control-) to complete your access control configuration. ## Troubleshooting diff --git a/website/docs/docs/cloud/manage-access/set-up-sso-okta.md b/website/docs/docs/cloud/manage-access/set-up-sso-okta.md index 53986513ce2..fda32f118ef 100644 --- a/website/docs/docs/cloud/manage-access/set-up-sso-okta.md +++ b/website/docs/docs/cloud/manage-access/set-up-sso-okta.md @@ -190,4 +190,4 @@ configured in the steps above. ## Setting up RBAC Now you have completed setting up SSO with Okta, the next steps will be to set up -[RBAC groups](/docs/cloud/manage-access/enterprise-permissions) to complete your access control configuration. +[RBAC groups](/docs/cloud/manage-access/about-user-access#role-based-access-control-) to complete your access control configuration. diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md index 76d881e7389..2665b8f6a97 100644 --- a/website/docs/docs/cloud/migration.md +++ b/website/docs/docs/cloud/migration.md @@ -15,6 +15,13 @@ Your account will be automatically migrated on or after its scheduled date. Howe ## Recommended actions +:::info Rescheduling your migration + +If you're on the dbt Cloud Enterprise tier, you can postpone your account migration by up to 45 days. To reschedule your migration, navigate to **Account Settings** → **Migration guide**. + +For help, contact the dbt Support Team at [support@getdbt.com](mailto:support@getdbt.com). +::: + We highly recommend you take these actions: - Ensure pending user invitations are accepted or note outstanding invitations. Pending user invitations might be voided during the migration. You can resend user invitations after the migration is complete. diff --git a/website/docs/docs/collaborate/data-tile.md b/website/docs/docs/collaborate/data-tile.md index 70318922d68..1d5b26e26b7 100644 --- a/website/docs/docs/collaborate/data-tile.md +++ b/website/docs/docs/collaborate/data-tile.md @@ -92,7 +92,7 @@ Follow these steps to embed the data health tile in PowerBI: ```html Website = - "" + "" ``` @@ -120,12 +120,34 @@ Follow these steps to embed the data health tile in Tableau: 3. Insert a **Web Page** object. 4. Insert the URL and click **Ok**. - `https://metadata.cloud.getdbt.com/exposure-tile?uniqueId=exposure.snowflake_tpcds_sales_spoke.customer360_test&environmentType=production&environmentId=220370&token=` - + ```html + https://metadata.ACCESS_URL/exposure-tile?uniqueId=exposure.EXPOSURE_NAME&environmentType=production&environmentId=220370&token= + ``` + *Note, replace the placeholders with your actual values.* 5. You should now see the data health tile embedded in your Tableau dashboard. + + + +Follow these steps to embed the data health tile in Sigma: + + + +1. Create a dashboard in Sigma and connect to your database to pull in the data. +2. 
Ensure you've copied the URL or iFrame snippet available in dbt Explorer's **Data health** section, under the **Embed data health into your dashboard** toggle. +3. Add a new embedded UI element in your Sigma Workbook in the following format: + + ```html + https://metadata.ACCESS_URL/exposure-tile?uniqueId=exposure.EXPOSURE_NAME&environmentType=production&environmentId=ENV_ID_NUMBER&token= + ``` + + *Note, replace the placeholders with your actual values.* +4. You should now see the data health tile embedded in your Sigma dashboard. + + + ## Job-based data health diff --git a/website/docs/docs/collaborate/govern/project-dependencies.md b/website/docs/docs/collaborate/govern/project-dependencies.md index 2e73eee028b..c054d1b27b7 100644 --- a/website/docs/docs/collaborate/govern/project-dependencies.md +++ b/website/docs/docs/collaborate/govern/project-dependencies.md @@ -21,6 +21,7 @@ This year, dbt Labs is introducing an expanded notion of `dependencies` across m - Use a supported version of dbt (v1.6, v1.7, or go versionless with "[Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless)") for both the upstream ("producer") project and the downstream ("consumer") project. - Define models in an upstream ("producer") project that are configured with [`access: public`](/reference/resource-configs/access). You need at least one successful job run after defining their `access`. - Define a deployment environment in the upstream ("producer") project [that is set to be your Production environment](/docs/deploy/deploy-environments#set-as-production-environment), and ensure it has at least one successful job run in that environment. +- If the upstream project has a Staging environment, run a job in that Staging environment to ensure the downstream cross-project ref resolves. - Each project `name` must be unique in your dbt Cloud account. For example, if you have a dbt project (codebase) for the `jaffle_marketing` team, you should not create separate projects for `Jaffle Marketing - Dev` and `Jaffle Marketing - Prod`. That isolation should instead be handled at the environment level. - We are adding support for environment-level permissions and data warehouse connections; please contact your dbt Labs account team for beta access. - The `dbt_project.yml` file is case-sensitive, which means the project name must exactly match the name in your `dependencies.yml`. For example, if your project name is `jaffle_marketing`, you should use `jaffle_marketing` (not `JAFFLE_MARKETING`) in all related files. @@ -110,7 +111,10 @@ Read [Why use a staging environment](/docs/deploy/deploy-environments#why-use-a- #### Staging with downstream dependencies -dbt Cloud begins using the Staging environment to resolve cross-project references from downstream projects as soon as it exists in a project without "fail-over" to Production. To avoid causing downtime for downstream developers, you should define and trigger a job before marking the environment as Staging: +dbt Cloud begins using the Staging environment to resolve cross-project references from downstream projects as soon as it exists in a project without "fail-over" to Production. This means that dbt Cloud will consistently use metadata from the Staging environment to resolve references in downstream projects, even if there haven't been any successful runs in the configured Staging environment. + +To avoid causing downtime for downstream developers, you should define and trigger a job before marking the environment as Staging: + 1. 
Create a new environment, but do NOT mark it as **Staging**. 2. Define a job in that environment. 3. Trigger the job to run, and ensure it completes successfully. diff --git a/website/docs/docs/collaborate/model-query-history.md b/website/docs/docs/collaborate/model-query-history.md index 0f43c9b163f..0180757f980 100644 --- a/website/docs/docs/collaborate/model-query-history.md +++ b/website/docs/docs/collaborate/model-query-history.md @@ -25,7 +25,7 @@ So for example, if `model_super_santi` was queried 10 times in the past week, it To access the features, you should meet the following: -1. You have a dbt Cloud account on the [Enterprise plan](https://www.getdbt.com/pricing/). +1. You have a dbt Cloud account on the [Enterprise plan](https://www.getdbt.com/pricing/). Single-tenant accounts should contact their account representative for setup. 2. You have set up a [production](https://docs.getdbt.com/docs/deploy/deploy-environments#set-as-production-environment) deployment environment for each project you want to explore, with at least one successful job run. 3. You have [admin permissions](/docs/cloud/manage-access/enterprise-permissions) in dbt Cloud to edit project settings or production environment settings. 4. Use Snowflake or BigQuery as your data warehouse and can enable query history permissions or work with an admin to do so. Support for additional data platforms coming soon. diff --git a/website/docs/docs/dbt-cloud-apis/service-tokens.md b/website/docs/docs/dbt-cloud-apis/service-tokens.md index fe8ace5fa34..4c0f6222fe9 100644 --- a/website/docs/docs/dbt-cloud-apis/service-tokens.md +++ b/website/docs/docs/dbt-cloud-apis/service-tokens.md @@ -36,80 +36,37 @@ You can assign service account tokens to any permission set available in dbt Clo ### Team plans using service account tokens -The following permissions can be assigned to a service account token on a Team plan. +The following permissions can be assigned to a service account token on a Team plan. Refer to [Enterprise permissions](/docs/cloud/manage-access/enterprise-permissions) for more information about these roles. -**Account Admin**
-Account Admin service tokens have full `read + write` access to an account, so please use them with caution. A Team plan refers to this permission set as an "Owner role." For more on these permissions, see [Account Admin](/docs/cloud/manage-access/enterprise-permissions#account-admin). - -**Metadata Only**
-Metadata-only service tokens authorize requests to the Discovery API. - -**Semantic Layer Only**
-Semantic Layer-only service tokens authorize requests to the Semantic Layer APIs. - -**Job Admin**
-Job admin service tokens can authorize requests for viewing, editing, and creating environments, triggering runs, and viewing historical runs. - -**Job Runner**
-Job runner service tokens can authorize requests for triggering runs and viewing historical runs. - -**Member**
-Member service tokens can authorize requests for viewing and editing resources, triggering runs, and inviting members to the account. Tokens assigned the Member permission set will have the same permissions as a Member user. For more information about Member users, see "[Self-service Team plan permissions](/docs/cloud/manage-access/self-service-permissions)". - -**Read-only**
-Read-only service tokens can authorize requests for viewing a read-only dashboard, viewing generated documentation, and viewing source freshness reports. This token can access and retrieve account-level information endpoints on the [Admin API](/docs/dbt-cloud-apis/admin-cloud-api) and authorize requests to the [Discovery API](/docs/dbt-cloud-apis/discovery-api). +- Account Admin — Account Admin service tokens have full `read + write` access to an account, so please use them with caution. A Team plan refers to this permission set as an "Owner role." +- Billing Admin +- Job Admin +- Metadata Only +- Member +- Read-only +- Semantic Layer Only ### Enterprise plans using service account tokens -The following permissions can be assigned to a service account token on an Enterprise plan. For more details about these permissions, see "[Enterprise permissions](/docs/cloud/manage-access/enterprise-permissions)." - -**Account Admin**
-Account Admin service tokens have full `read + write` access to an account, so please use them with caution. For more on these permissions, see [Account Admin](/docs/cloud/manage-access/enterprise-permissions#account-admin). - -**Security Admin**
-Security Admin service tokens have certain account-level permissions. For more on these permissions, see [Security Admin](/docs/cloud/manage-access/enterprise-permissions#security-admin). - -**Billing Admin**
-Billing Admin service tokens have certain account-level permissions. For more on these permissions, see [Billing Admin](/docs/cloud/manage-access/enterprise-permissions#billing-admin). - -**Manage marketplace apps**
-Used only for service tokens assigned to marketplace apps (for example, the [Snowflake Native app](/docs/cloud-integrations/snowflake-native-app)). - -**Metadata Only**
-Metadata-only service tokens authorize requests to the Discovery API. - -**Semantic Layer Only**
-Semantic Layer-only service tokens authorize requests to the Semantic Layer APIs. - -**Job Admin**
-Job Admin service tokens can authorize requests for viewing, editing, and creating environments, triggering runs, and viewing historical runs. For more on these permissions, see [Job Admin](/docs/cloud/manage-access/enterprise-permissions#job-admin). - -**Account Viewer**
-Account Viewer service tokens have read-only access to dbt Cloud accounts. For more on these permissions, see [Account Viewer](/docs/cloud/manage-access/enterprise-permissions#account-viewer) on the Enterprise Permissions page. - -**Admin**
-Admin service tokens have unrestricted access to projects in dbt Cloud accounts. You have the option to grant that permission all projects in the account or grant the permission only on specific projects. For more on these permissions, see [Admin Service](/docs/cloud/manage-access/enterprise-permissions#admin-service) on the Enterprise Permissions page. - -**Git Admin**
-Git admin service tokens have all the permissions listed in [Git admin](/docs/cloud/manage-access/enterprise-permissions#git-admin) on the Enterprise Permissions page. - -**Database Admin**
-Database admin service tokens have all the permissions listed in [Database admin](/docs/cloud/manage-access/enterprise-permissions#database-admin) on the Enterprise Permissions page. - -**Team Admin**
-Team admin service tokens have all the permissions listed in [Team admin](/docs/cloud/manage-access/enterprise-permissions#team-admin) on the Enterprise Permissions page. - -**Job Viewer**
-Job viewer admin service tokens have all the permissions listed in [Job viewer](/docs/cloud/manage-access/enterprise-permissions#job-viewer) on the Enterprise Permissions page. - -**Developer**
-Developer service tokens have all the permissions listed in [Developer](/docs/cloud/manage-access/enterprise-permissions#developer) on the Enterprise Permissions page. - -**Analyst**
-Analyst admin service tokens have all the permissions listed in [Analyst](/docs/cloud/manage-access/enterprise-permissions#analyst) on the Enterprise Permissions page. - -**Stakeholder**
-Stakeholder service tokens have all the permissions listed in [Stakeholder](/docs/cloud/manage-access/enterprise-permissions#stakeholder) on the Enterprise Permissions page. +Refer to [Enterprise permissions](/docs/cloud/manage-access/enterprise-permissions) for more information about these roles. + +- Account Admin — Account Admin service tokens have full `read + write` access to an account, so please use them with caution. +- Account Viewer +- Admin +- Analyst +- Billing Admin +- Database Admin +- Developer +- Git Admin +- Job Admin +- Job Runner +- Job Viewer +- Manage marketplace apps +- Metadata Only +- Semantic Layer Only +- Security Admin +- Stakeholder +- Team Admin ## Service token update diff --git a/website/docs/docs/dbt-versions/release-notes.md b/website/docs/docs/dbt-versions/release-notes.md index 662fd0f381a..982d72e8fc0 100644 --- a/website/docs/docs/dbt-versions/release-notes.md +++ b/website/docs/docs/dbt-versions/release-notes.md @@ -20,6 +20,9 @@ Release notes are grouped by month for both multi-tenant and virtual private clo ## October 2024 +- **New**: The dbt Cloud IDE supports signed commits for Git, available for Enterprise plans. You can sign your Git commits when pushing them to the repository to prevent impersonation and enhance security. Supported Git providers are GitHub and GitLab. Refer to [Git commit signing](/docs/cloud/dbt-cloud-ide/git-commit-signing) for more information. +- **New:** With dbt Mesh, you can now enable bidirectional dependencies across your projects. Previously, dbt enforced dependencies to only go in one direction. dbt checks for cycles across projects and raises errors if any are detected. For details, refer to [Cycle detection](/docs/collaborate/govern/project-dependencies#cycle-detection). There's also the [Intro to dbt Mesh](/best-practices/how-we-mesh/mesh-1-intro) guide to help you learn more about best practices. + Documentation for new features and functionality announced at Coalesce 2024: diff --git a/website/docs/docs/deploy/advanced-ci.md b/website/docs/docs/deploy/advanced-ci.md index 6213e475be6..e3f7cc7c9ae 100644 --- a/website/docs/docs/deploy/advanced-ci.md +++ b/website/docs/docs/deploy/advanced-ci.md @@ -7,7 +7,13 @@ description: "Advanced CI enables developers to compare changes by demonstrating # Advanced CI -[Continuous integration workflows](/docs/deploy/continuous-integration) help increase the governance and improve the quality of the data. Additionally for these CI jobs, you can use Advanced CI features, such as [compare changes](#compare-changes), that provide details about the changes between what's currently in your production environment and the pull request's latest commit, giving you observability into how data changes are affected by your code changes. By analyzing the data changes that code changes produce, you can ensure you're always shipping trustworthy data products as you're developing. +[Continuous integration workflows](/docs/deploy/continuous-integration) help increase the governance and improve the quality of the data. Additionally for these CI jobs, you can use Advanced CI features, such as [compare changes](#compare-changes), that provide details about the changes between what's currently in your production environment and the pull request's latest commit, giving you observability into how data changes are affected by your code changes. By analyzing the data changes that code changes produce, you can ensure you're always shipping trustworthy data products as you're developing. 
 + +:::info How to enable this feature + +You can opt into Advanced CI in dbt Cloud. Please refer to [Account access to Advanced CI features](/docs/cloud/account-settings#account-access-to-advanced-ci-features) to learn how to enable it in your dbt Cloud account. + +::: :::tip More features dbt Labs plans to provide additional Advanced CI features in the near future. More info coming soon. ::: diff --git a/website/docs/docs/deploy/ci-jobs.md b/website/docs/docs/deploy/ci-jobs.md index cd04d1f4035..0bdf9e711f5 100644 --- a/website/docs/docs/deploy/ci-jobs.md +++ b/website/docs/docs/deploy/ci-jobs.md @@ -14,7 +14,7 @@ dbt Labs recommends that you create your CI job in a dedicated dbt Cloud [deploy - You have a dbt Cloud account. - CI features: - For both the [concurrent CI checks](/docs/deploy/continuous-integration#concurrent-ci-checks) and [smart cancellation of stale builds](/docs/deploy/continuous-integration#smart-cancellation) features, your dbt Cloud account must be on the [Team or Enterprise plan](https://www.getdbt.com/pricing/). - - The [SQL linting](/docs/deploy/continuous-integration#sql-linting) feature is currently available in beta to a limited group of users and is gradually being rolled out. If you're in the beta, the **Linting** option is available for use. + - The [SQL linting](/docs/deploy/continuous-integration#sql-linting) feature is currently available in [beta](/docs/dbt-versions/product-lifecycles#dbt-cloud) to a limited group of users and is gradually being rolled out. If you're in the beta, the **Linting** option is available for use. - [Advanced CI](/docs/deploy/advanced-ci) features: - For the [compare changes](/docs/deploy/advanced-ci#compare-changes) feature, your dbt Cloud account must be on the [Enterprise plan](https://www.getdbt.com/pricing/) and have enabled Advanced CI features. Please ask your [dbt Cloud administrator to enable](/docs/cloud/account-settings#account-access-to-advanced-ci-features) this feature for you. After enablement, the **Run compare changes** option becomes available in the CI job settings. - Set up a [connection with your Git provider](/docs/cloud/git/git-configuration-in-dbt-cloud). This integration lets dbt Cloud run jobs on your behalf for job triggering. diff --git a/website/docs/docs/deploy/continuous-integration.md b/website/docs/docs/deploy/continuous-integration.md index 2119724e609..c10cdfc9db1 100644 --- a/website/docs/docs/deploy/continuous-integration.md +++ b/website/docs/docs/deploy/continuous-integration.md @@ -58,8 +58,8 @@ CI runs don't consume run slots. This guarantees a CI check will never block a p ### SQL linting -When enabled for your CI job, dbt invokes [SQLFluff](https://sqlfluff.com/) which is a modular and configurable SQL linter that warns you of complex functions, syntax, formatting, and compilation errors. By default, it lints all the SQL files in your project. +When enabled for your CI job, dbt invokes [SQLFluff](https://sqlfluff.com/), which is a modular and configurable SQL linter that warns you of complex functions, syntax, formatting, and compilation errors. By default, it lints all the changed SQL files in your project (compared to the last deferred production state). If the linter runs into errors, you can specify whether dbt should fail the job or continue running it. Failing the job helps reduce compute costs by avoiding builds for pull requests that don't meet your SQL code quality CI check. 
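The next hunk below points readers at SQLFluff's configuration docs for overriding the default rules. As a rough sketch (the dialect and rule selections here are illustrative, not part of this PR), an `.sqlfluff` file at the project root might look like:

```ini
[sqlfluff]
templater = dbt
dialect = snowflake
# Lint only the rules you care about
rules = capitalisation.keywords, layout.indent

[sqlfluff:rules:capitalisation.keywords]
capitalisation_policy = lower
```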
-To override the default linting behavior, create an `.sqlfluff` config file in your project and add your linting rules to it. dbt Cloud will use the rules defined in the config file when linting. For details about linting rules, refer to [Custom Usage](https://docs.sqlfluff.com/en/stable/gettingstarted.html#custom-usage) in the SQLFluff documentation. +You can use [SQLFluff Configuration Files](https://docs.sqlfluff.com/en/stable/configuration/setting_configuration.html#configuration-files) to override the default linting behavior in dbt. Create an `.sqlfluff` configuration file in your project, add your linting rules to it, and dbt Cloud will use them when linting. For complete details, refer to [Custom Usage](https://docs.sqlfluff.com/en/stable/gettingstarted.html#custom-usage) in the SQLFluff documentation. diff --git a/website/docs/docs/deploy/deployment-overview.md b/website/docs/docs/deploy/deployment-overview.md index 11575e6b0b7..9382634812f 100644 --- a/website/docs/docs/deploy/deployment-overview.md +++ b/website/docs/docs/deploy/deployment-overview.md @@ -79,6 +79,12 @@ Learn how to use dbt Cloud's features to help your team ship timely and quality link="/docs/deploy/job-notifications" icon="dbt-bit"/> + + ### Unsubscribe from email notifications -1. From the gear menu, choose **Notification settings**. +1. Select your profile icon and click on **Notification settings**. 1. On the **Email notifications** page, click **Unsubscribe from all email notifications**. ## Slack notifications -You can receive Slack alerts about jobs by setting up the Slack integration, then configuring the dbt Cloud Slack notification settings. dbt Cloud integrates with Slack via OAuth to ensure secure authentication. +You can receive Slack alerts about jobs by setting up the Slack integration and then configuring the dbt Cloud Slack notification settings. dbt Cloud integrates with Slack via OAuth to ensure secure authentication. :::note If there has been a change in user roles or Slack permissions where you no longer have access to edit a configured Slack channel, please [contact support](mailto:support@getdbt.com) for assistance. @@ -62,7 +62,7 @@ If there has been a change in user roles or Slack permissions where you no longe ### Set up the Slack integration -1. From the gear menu, select **Account settings** and then select **Integrations** from the left sidebar. +1. Select **Account settings** and then select **Integrations** from the left sidebar. 1. Locate the **OAuth** section with the Slack application and click **Link**. @@ -76,13 +76,13 @@ If you're logged out or the Slack app/website is closed, you must authenticate b 1. Complete the field defining the Slack workspace you want to integrate with dbt Cloud. -2. Sign in with an existing identity or use email address and password. +2. Sign in with an existing identity or use the email address and password. 3. Once you have authenticated successfully, accept the permissions. ### Configure Slack notifications -1. From the gear menu, choose **Notification settings**. +1. Select your profile icon and then click on **Notification settings**. 1. Select **Slack notifications** in the left sidebar. 1. Select the **Notification channel** you want to receive the job run notifications from the dropdown. @@ -98,5 +98,5 @@ If you're logged out or the Slack app/website is closed, you must authenticate b ### Disable the Slack integration -1. From the gear menu, select **Account settings**. On the **Integrations** page, scroll to the **OAuth** section. +1. 
Select **Account settings**, and on the **Integrations** page, scroll to the **OAuth** section. 1. Click the trash can icon (on the far right of the Slack integration) and click **Unlink**. Channels that you configured will no longer receive Slack notifications. _This is not an account-wide action._ Channels configured by other account admins will continue to receive Slack notifications if they still have active Slack integrations. To migrate ownership of a Slack channel notification configuration, have another account admin edit their configuration. diff --git a/website/docs/docs/deploy/model-notifications.md b/website/docs/docs/deploy/model-notifications.md new file mode 100644 index 00000000000..a6d4c467f0b --- /dev/null +++ b/website/docs/docs/deploy/model-notifications.md @@ -0,0 +1,87 @@ +--- +title: "Model notifications" +description: "While a job is running, receive email notifications in real time about any issues with your models and tests." +--- + +# Model notifications + +Set up dbt to notify the appropriate model owners through email about issues as soon as they occur, while the job is still running. Model owners can specify which statuses to receive notifications about: + +- `Success` and `Fails` for models +- `Warning`, `Success`, and `Fails` for tests + +With model-level notifications, model owners can be the first to know about issues, before anyone else (like stakeholders). + +:::info Beta feature + +This feature is currently available in [beta](/docs/dbt-versions/product-lifecycles#dbt-cloud) to a limited group of users and is gradually being rolled out. If you're in the beta, please contact the Support team at support@getdbt.com for assistance or questions. + +::: + +To be timely and keep the number of notifications reasonable when multiple models or tests trigger them, dbt observes the following guidelines when notifying owners: + +- Send a notification to each unique owner/email during a job run about any models (with status of failure/success) or tests (with status of warning/failure/success). Each owner receives only one notification, the initial one. +- Don't send any notifications about subsequent models or tests while a dbt job is still running. +- At the end of a job run, each owner receives a notification, for each of the statuses they specified to be notified about, with a list of models and tests that have that status. + +Create configuration YAML files in your project for dbt to send notifications about the status of your models and tests. + +## Prerequisites +- Your dbt Cloud administrator has [enabled the appropriate account setting](#enable-access-to-model-notifications) for you. +- Your environment(s) must be on ["Versionless"](/docs/dbt-versions/versionless-cloud). + + +## Configure groups + +Add your group configuration in either the `dbt_project.yml` or `groups.yml` file. For example: + +```yml
version: 2

groups:
  - name: finance
    description: "Models related to the finance department"
    owner:
      # 'name' or 'email' is required
      name: "Finance Team"
      email: finance@dbtlabs.com
      slack: finance-data

  - name: marketing
    description: "Models related to the marketing department"
    owner:
      name: "Marketing Team"
      email: marketing@dbtlabs.com
      slack: marketing-data
```

## Set up models

Set up your model configuration in either the `dbt_project.yml` or `groups.yml` file; doing this automatically sets up notifications for tests, too. 
For example:
+
+```yml
+version: 2
+
+models:
+  - name: sales
+    description: "Sales data model"
+    config:
+      group: finance
+
+  - name: campaigns
+    description: "Campaigns data model"
+    config:
+      group: marketing
+
+```
+
+## Enable access to model notifications
+
+Provide dbt Cloud account members with the ability to configure and receive alerts about issues with models or tests encountered during job runs.
+
+To use model-level notifications, your dbt Cloud account must have access to the feature. Ask your dbt Cloud administrator to enable this feature for account members by following these steps:
+
+1. Navigate to **Notification settings** from your profile name in the sidebar (lower left-hand side).
+1. From **Email notifications**, enable the setting **Enable group/owner notifications on models** under the **Model notifications** section. Then, specify which statuses to receive notifications about (Success, Warning, and/or Fails).
+
+
diff --git a/website/docs/docs/deploy/monitor-jobs.md b/website/docs/docs/deploy/monitor-jobs.md
index 8382743dd03..1cbba23161e 100644
--- a/website/docs/docs/deploy/monitor-jobs.md
+++ b/website/docs/docs/deploy/monitor-jobs.md
@@ -13,6 +13,7 @@ This portion of our documentation will go over dbt Cloud's various capabilities
- [Run visibility](/docs/deploy/run-visibility) — View your run history to help identify where improvements can be made to scheduled jobs.
- [Retry jobs](/docs/deploy/retry-jobs) — Rerun your errored jobs from start or the failure point.
- [Job notifications](/docs/deploy/job-notifications) — Receive email or Slack notifications when a job run succeeds, encounters warnings, fails, or is canceled.
+- [Model notifications](/docs/deploy/model-notifications) — Receive email notifications about any issues encountered by your models and tests as soon as they occur while running a job.
- [Webhooks](/docs/deploy/webhooks) — Use webhooks to send events about your dbt jobs' statuses to other systems.
- [Leverage artifacts](/docs/deploy/artifacts) — dbt Cloud generates and saves artifacts for your project, which it uses to power features like creating docs for your project and reporting freshness of your sources.
- [Source freshness](/docs/deploy/source-freshness) — Monitor data governance by enabling snapshots to capture the freshness of your data sources.
diff --git a/website/docs/docs/deploy/webhooks.md b/website/docs/docs/deploy/webhooks.md
index dfb8b7c7406..ffea38b5b84 100644
--- a/website/docs/docs/deploy/webhooks.md
+++ b/website/docs/docs/deploy/webhooks.md
@@ -29,7 +29,7 @@ You can also check out the free [dbt Fundamentals course](https://learn.getdbt.c
## Prerequisites
- You have a dbt Cloud account that is on the [Team or Enterprise plan](https://www.getdbt.com/pricing/).
- For `write` access to webhooks:
-  - **Enterprise plan accounts** — Permission sets are the same for both API service tokens and the dbt Cloud UI. You, or the API service token, must have the [Account Admin](/docs/cloud/manage-access/enterprise-permissions#account-admin), [Admin](/docs/cloud/manage-access/enterprise-permissions#admin), or [Developer](/docs/cloud/manage-access/enterprise-permissions#developer) permission set.
+  - **Enterprise plan accounts** — Permission sets are the same for both API service tokens and the dbt Cloud UI. You, or the API service token, must have the Account Admin, Admin, or Developer [permission set](/docs/cloud/manage-access/enterprise-permissions).
- **Team plan accounts** — For the dbt Cloud UI, you need to have a [Developer license](/docs/cloud/manage-access/self-service-permissions). For API service tokens, you must assign the service token to have the [Account Admin or Member](/docs/dbt-cloud-apis/service-tokens#team-plans-using-service-account-tokens) permission set.
- You have a multi-tenant or an AWS single-tenant deployment model in dbt Cloud. For more information, refer to [Tenancy](/docs/cloud/about-cloud/tenancy).
- Your destination system supports [Authorization headers](#troubleshooting).
diff --git a/website/docs/faqs/Git/gitignore.md b/website/docs/faqs/Git/gitignore.md
index 16575861289..4386a27d4f2 100644
--- a/website/docs/faqs/Git/gitignore.md
+++ b/website/docs/faqs/Git/gitignore.md
@@ -80,9 +80,9 @@ dbt_modules/
  * `target`, `dbt_modules`, `dbt_packages`, `logs`
7. Commit (save) the deletions to the main branch.
8. Switch to the dbt Cloud IDE, and open the project that you're fixing.
-9. Reclone your repo in the IDE by clicking on the three dots next to the **IDE Status** button on the lower right corner of the IDE screen, then select **Reclone Repo**.
-  * **Note** — Any saved but uncommitted changes will be lost, so make sure you copy any modified code that you want to keep in a temporary location outside of dbt Cloud.
-10. Once you reclone the repo, open the `.gitignore` file in the branch you're working in. If the new changes aren't included, you'll need to merge the latest commits from the main branch into your working branch.
+9. [Roll back your repo to remote](/docs/collaborate/git/version-control-basics#the-git-button-in-the-cloud-ide) in the IDE by clicking the three dots next to the **IDE Status** button in the lower-right corner of the IDE screen, then selecting **Rollback to remote**.
+  * **Note** — **Rollback to remote** resets your repo to an earlier clone of your remote. Any saved but uncommitted changes will be lost, so make sure you copy any modified code that you want to keep to a temporary location outside of dbt Cloud.
+10. Once you roll back to remote, open the `.gitignore` file in the branch you're working in.
If the new changes aren't included, you'll need to merge the latest commits from the main branch into your working branch.
11. Go to the **File Explorer** to verify the `.gitignore` file contains the correct entries and make sure the untracked files/folders in the `.gitignore` file are in *italics*.
12. Great job 🎉! You've configured the `.gitignore` correctly and can continue with your development!
@@ -111,9 +111,9 @@ dbt_modules/
8. Open a merge request using the git provider web interface. The merge request should attempt to merge the changes into the 'main' branch that all development branches are created from.
9. Follow the necessary procedures to get the branch approved and merged into the 'main' branch. You can delete the branch after the merge is complete.
10. Once the merge is complete, go back to the dbt Cloud IDE, and open the project that you're fixing.
-11. Reclone your repo in the IDE by clicking on the three dots next to the **IDE Status** button on the lower right corner of the IDE screen, then select **Reclone Repo**.
-  * **Note** — Any saved but uncommitted changes will be lost, so make sure you copy any modified code that you want to keep in a temporary location outside of dbt Cloud.
-12. Once you reclone the repo, open the `.gitignore` file in the branch you're working in. If the new changes aren't included, you'll need to merge the latest commits from the main branch into your working branch.
+11. [Roll back your repo to remote](/docs/collaborate/git/version-control-basics#the-git-button-in-the-cloud-ide) in the IDE by clicking the three dots next to the **IDE Status** button in the lower-right corner of the IDE screen, then selecting **Rollback to remote**.
+  * **Note** — **Rollback to remote** resets your repo to an earlier clone of your remote. Any saved but uncommitted changes will be lost, so make sure you copy any modified code that you want to keep to a temporary location outside of dbt Cloud.
+12. Once you roll back to remote, open the `.gitignore` file in the branch you're working in. If the new changes aren't included, you'll need to merge the latest commits from the main branch into your working branch.
13. Go to the **File Explorer** to verify the `.gitignore` file contains the correct entries and make sure the untracked files/folders in the `.gitignore` file are in *italics*.
14. Great job 🎉! You've configured the `.gitignore` correctly and can continue with your development!
diff --git a/website/docs/guides/mesh-qs.md b/website/docs/guides/mesh-qs.md
index 0d13d043059..47ece7b29ec 100644
--- a/website/docs/guides/mesh-qs.md
+++ b/website/docs/guides/mesh-qs.md
@@ -300,6 +300,8 @@ To run your first deployment dbt Cloud job, you will need to create a new dbt Cl
5. After the run is complete, click **Explore** from the upper menu bar. You should now see your lineage, tests, and documentation coming through successfully.

+For details on how dbt Cloud uses metadata from the Staging environment to resolve references in downstream projects, check out the section on [Staging with downstream dependencies](/docs/collaborate/govern/project-dependencies#staging-with-downstream-dependencies).
+
## Reference a public model in your downstream project

In this section, you will set up the downstream project, "Jaffle | Finance", and [cross-project reference](/docs/collaborate/govern/project-dependencies) the `fct_orders` model from the foundational project. Navigate to the **Develop** page to set up your project:
diff --git a/website/docs/guides/teradata-qs.md b/website/docs/guides/teradata-qs.md
index da951620515..88edcfcaa70 100644
--- a/website/docs/guides/teradata-qs.md
+++ b/website/docs/guides/teradata-qs.md
@@ -59,52 +59,50 @@ If you created your Teradata Vantage database instance at https://clearscape.ter
:::

-1. Use your preferred SQL IDE editor to create two databases: `jaffle_shop` and `stripe`:
+1. Use your preferred SQL IDE editor to create the `jaffle_shop` database:

   ```sql
   CREATE DATABASE jaffle_shop AS PERM = 1e9;
-   CREATE DATABASE stripe AS PERM = 1e9;
   ```

-2.
In the databases `jaffle_shop` and `stripe`, create three foreign tables and reference the respective csv files located in object storage:
-
-   ```sql
-   CREATE FOREIGN TABLE jaffle_shop.customers (
-       id integer,
-       first_name varchar (100),
-       last_name varchar (100)
-   )
-   USING (
-       LOCATION ('/s3/dbt-tutorial-public.s3.amazonaws.com/jaffle_shop_customers.csv')
-   )
-   NO PRIMARY INDEX;
-
-   CREATE FOREIGN TABLE jaffle_shop.orders (
-       id integer,
-       user_id integer,
-       order_date date,
-       status varchar(100)
-   )
-   USING (
-       LOCATION ('/s3/dbt-tutorial-public.s3.amazonaws.com/jaffle_shop_orders.csv')
-   )
-   NO PRIMARY INDEX;
-
-   CREATE FOREIGN TABLE stripe.payment (
-       id integer,
-       orderid integer,
-       paymentmethod varchar (100),
-       status varchar (100),
-       amount integer,
-       created date
-   )
-   USING (
-       LOCATION ('/s3/dbt-tutorial-public.s3.amazonaws.com/stripe_payments.csv')
-   )
-   NO PRIMARY INDEX;
-   ```
-
-## Connect dbt cloud to Teradata
+2. In the `jaffle_shop` database, create three foreign tables that reference the respective CSV files located in object storage:
+
+   ```sql
+   CREATE FOREIGN TABLE jaffle_shop.customers (
+       id integer,
+       first_name varchar (100),
+       last_name varchar (100),
+       email varchar (100)
+   )
+   USING (
+       LOCATION ('/gs/storage.googleapis.com/clearscape_analytics_demo_data/dbt/rawcustomers.csv')
+   )
+   NO PRIMARY INDEX;
+
+   CREATE FOREIGN TABLE jaffle_shop.orders (
+       id integer,
+       user_id integer,
+       order_date date,
+       status varchar(100)
+   )
+   USING (
+       LOCATION ('/gs/storage.googleapis.com/clearscape_analytics_demo_data/dbt/raw_orders.csv')
+   )
+   NO PRIMARY INDEX;
+
+   CREATE FOREIGN TABLE jaffle_shop.payments (
+       id integer,
+       orderid integer,
+       paymentmethod varchar (100),
+       amount integer
+   )
+   USING (
+       LOCATION ('/gs/storage.googleapis.com/clearscape_analytics_demo_data/dbt/raw_payments.csv')
+   )
+   NO PRIMARY INDEX;
+   ```
+
+## Connect dbt Cloud to Teradata

1. Create a new project in dbt Cloud. From **Account settings** (using the gear menu in the top right corner), click **New Project**.
2. Enter a project name and click **Continue**.
@@ -136,12 +134,10 @@ Now that you have a repository configured, you can initialize your project and s
1. Click **Start developing in the IDE**. It might take a few minutes for your project to spin up for the first time as it establishes your git connection, clones your repo, and tests the connection to the warehouse.
2. Above the file tree to the left, click **Initialize your project** to build out your folder structure with example models.
3. Make your initial commit by clicking **Commit and sync**. Use the commit message `initial commit` to create the first commit to your managed repo. Once you’ve created the commit, you can open a branch to add new dbt code.
-4. You can now directly query data from your warehouse and execute `dbt run`. You can try this out now:
-   - Click **Create new file**, add this query to the new file, and click **Save as** to save the new file:
-     ```sql
-     select * from jaffle_shop.customers
-     ```
-   - In the command line bar at the bottom, enter `dbt run` and click **Enter**. You should see a `dbt run succeeded` message.
+
+## Delete the example models
+
+

## Build your first model
@@ -153,7 +149,7 @@ You have two options for working with files in the dbt Cloud IDE:

Name the new branch `add-customers-model`.

1. Click the **...** next to the `models` directory, then select **Create file**.
-2. Name the file `customers.sql`, then click **Create**.
+2. Name the file `bi_customers.sql`, then click **Create**.
3.
Copy the following query into the file and click **Save**.

   ```sql
@@ -208,7 +204,7 @@ final as (

    from customers
-    left join customer_orders using (customer_id)
+    left join customer_orders on customers.customer_id = customer_orders.customer_id

)
@@ -222,19 +218,75 @@ You can connect your business intelligence (BI) tools to these views and tables

## Change the way your model is materialized

-
+One of the most powerful features of dbt is that you can change the way a model is materialized in your warehouse, simply by changing a configuration value. You can switch between tables and views by changing a keyword, rather than writing the data definition language (DDL) yourself; dbt handles that behind the scenes.

-## Delete the example models
+By default, everything gets created as a view. You can override that at the directory level so everything in that directory uses a different materialization.

-
+1. Edit your `dbt_project.yml` file.
+    - Update your project `name` to:
+
+      ```yaml
+      name: 'jaffle_shop'
+      ```
+
+    - Configure `jaffle_shop` so everything in it will be materialized as a table, and configure `example` so everything in it will be materialized as a view. Update your `models` config block to:
+
+      ```yaml
+      models:
+        jaffle_shop:
+          +materialized: table
+        example:
+          +materialized: view
+      ```
+
+    - Click **Save**.
+
+2. Enter the `dbt run` command. Your `bi_customers` model should now be built as a table!
+    :::info
+    To do this, dbt had to first run a `drop view` statement (or API call on BigQuery), then a `create table as` statement.
+    :::
+
+3. Edit `models/bi_customers.sql` to override the `dbt_project.yml` for the `bi_customers` model only by adding the following snippet to the top, and click **Save**:
+
+    ```sql
+    {{
+      config(
+        materialized='view'
+      )
+    }}
+
+    with customers as (
+
+        select
+            id as customer_id
+            ...
+
+    )
+
+    ```
+
+4. Enter the `dbt run` command. Your model, `bi_customers`, should now build as a view.
+
+### FAQs
+
+
+
## Build models on top of other models

1. Create a new SQL file, `models/stg_customers.sql`, with the SQL from the `customers` CTE in your original query.
-2. Create a second new SQL file, `models/stg_orders.sql`, with the SQL from the `orders` CTE in your original query.
-

   ```sql
@@ -248,6 +300,7 @@ You can connect your business intelligence (BI) tools to these views and tables

+2. Create a second new SQL file, `models/stg_orders.sql`, with the SQL from the `orders` CTE in your original query.

   ```sql
@@ -262,9 +315,9 @@ You can connect your business intelligence (BI) tools to these views and tables

-3. Edit the SQL in your `models/customers.sql` file as follows:
+3. Edit the SQL in your `models/bi_customers.sql` file as follows:

-
+

   ```sql
   with customers as (
@@ -397,4 +450,61 @@ Sources make it possible to name and describe the data loaded into your warehous

-
+## Commit your changes
+
+Now that you've built your customer model, you need to commit the changes you made to the project so that the repository has your latest code.
+
+**If you edited directly in the protected primary branch:**
+1. Click the **Commit and sync git** button. This action prepares your changes for commit.
+2. A modal titled **Commit to a new branch** will appear.
+3. In the modal window, name your new branch `add-customers-model`. This creates a new branch from your primary branch with your new changes.
+4. Add a commit message, such as "Add customers model, tests, docs", and commit your changes.
+5. Click **Merge this branch to main** to add these changes to the main branch on your repo.
+
+
+**If you created a new branch before editing:**
+1. Since you already branched out of the primary protected branch, go to **Version Control** on the left. +2. Click **Commit and sync** to add a message. +3. Add a commit message, such as "Add customers model, tests, docs." +4. Click **Merge this branch to main** to add these changes to the main branch on your repo. + +## Deploy dbt + +Use dbt Cloud's Scheduler to deploy your production jobs confidently and build observability into your processes. You'll learn to create a deployment environment and run a job in the following steps. + +### Create a deployment environment + +1. In the upper left, select **Deploy**, then click **Environments**. +2. Click **Create Environment**. +3. In the **Name** field, write the name of your deployment environment. For example, "Production." +4. In the **dbt Version** field, select the latest version from the dropdown. +5. Under **Deployment connection**, enter the name of the dataset you want to use as the target, such as "Analytics". This will allow dbt to build and work with that dataset. For some data warehouses, the target dataset may be referred to as a "schema". +6. Click **Save**. + +### Create and run a job + +Jobs are a set of dbt commands that you want to run on a schedule. For example, `dbt build`. + +As the `jaffle_shop` business gains more customers, and those customers create more orders, you will see more records added to your source data. Because you materialized the `bi_customers` model as a table, you'll need to periodically rebuild your table to ensure that the data stays up-to-date. This update will happen when you run a job. + +1. After creating your deployment environment, you should be directed to the page for a new environment. If not, select **Deploy** in the upper left, then click **Jobs**. +2. Click **Create one** and provide a name, for example, "Production run", and link to the Environment you just created. +3. Scroll down to the **Execution Settings** section. +4. Under **Commands**, add this command as part of your job if you don't see it: + * `dbt build` +5. Select the **Generate docs on run** checkbox to automatically [generate updated project docs](/docs/collaborate/build-and-view-your-docs) each time your job runs. +6. For this exercise, do _not_ set a schedule for your project to run — while your organization's project should run regularly, there's no need to run this example project on a schedule. Scheduling a job is sometimes referred to as _deploying a project_. +7. Select **Save**, then click **Run now** to run your job. +8. Click the run and watch its progress under "Run history." +9. Once the run is complete, click **View Documentation** to see the docs for your project. + + +Congratulations 🎉! You've just deployed your first dbt project! + + +#### FAQs + + + + + diff --git a/website/docs/reference/resource-configs/databricks-configs.md b/website/docs/reference/resource-configs/databricks-configs.md index 4735a41bf09..1f1b0706cb4 100644 --- a/website/docs/reference/resource-configs/databricks-configs.md +++ b/website/docs/reference/resource-configs/databricks-configs.md @@ -17,8 +17,9 @@ When materializing a model as `table`, you may include several optional configs | partition_by | Partition the created table by the specified columns. A directory is created for each partition. | Optional | SQL, Python | `date_day` | | liquid_clustered_by | Cluster the created table by the specified columns. Clustering method is based on [Delta's Liquid Clustering feature](https://docs.databricks.com/en/delta/clustering.html). 
Available since dbt-databricks 1.6.2. | Optional | SQL | `date_day` |
| clustered_by | Each partition in the created table will be split into a fixed number of buckets by the specified columns. | Optional | SQL, Python | `country_code` |
-| buckets | The number of buckets to create while clustering | Required if `clustered_by` is specified | SQL, Python | `8` |
-| tblproperties | [Tblproperties](https://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-ddl-tblproperties.html) to be set on the created table | Optional | SQL | `{'this.is.my.key': 12}` |
+| buckets | The number of buckets to create while clustering. | Required if `clustered_by` is specified | SQL, Python | `8` |
+| tblproperties | [Tblproperties](https://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-ddl-tblproperties.html) to be set on the created table. | Optional | SQL | `{'this.is.my.key': 12}` |
+| compression | Set the compression algorithm. | Optional | SQL, Python | `zstd` |
@@ -34,6 +35,7 @@ When materializing a model as `table`, you may include several optional configs
| clustered_by | Each partition in the created table will be split into a fixed number of buckets by the specified columns. | Optional | SQL, Python | `country_code` |
| buckets | The number of buckets to create while clustering | Required if `clustered_by` is specified | SQL, Python | `8` |
| tblproperties | [Tblproperties](https://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-ddl-tblproperties.html) to be set on the created table | Optional | SQL, Python* | `{'this.is.my.key': 12}` |
+| compression | Set the compression algorithm. | Optional | SQL, Python | `zstd` |

\* Beginning in 1.7.12, we have added tblproperties to Python models via an alter statement that runs after table creation.
We do not yet have a PySpark API to set tblproperties at table creation, so this feature is primarily to allow users to annotate their Python-derived tables with tblproperties.
@@ -53,7 +55,8 @@ We do not yet have a PySpark API to set tblproperties at table creation, so this
| clustered_by | Each partition in the created table will be split into a fixed number of buckets by the specified columns. | Optional | SQL, Python | `country_code` |
| buckets | The number of buckets to create while clustering | Required if `clustered_by` is specified | SQL, Python | `8` |
| tblproperties | [Tblproperties](https://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-ddl-tblproperties.html) to be set on the created table | Optional | SQL, Python* | `{'this.is.my.key': 12}` |
-| databricks_tags | [Tags](https://docs.databricks.com/en/data-governance/unity-catalog/tags.html) to be set on the created table | Optional | SQL+, Python+ | `{'my_tag': 'my_value'}` |
+| databricks_tags | [Tags](https://docs.databricks.com/en/data-governance/unity-catalog/tags.html) to be set on the created table | Optional | SQL+, Python+ | `{'my_tag': 'my_value'}` |
+| compression | Set the compression algorithm. | Optional | SQL, Python | `zstd` |

\* Beginning in 1.7.12, we have added tblproperties to Python models via an alter statement that runs after table creation.
We do not yet have a PySpark API to set tblproperties at table creation, so this feature is primarily to allow users to annotate their Python-derived tables with tblproperties.
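To see how these options combine in practice, here's a minimal sketch of a table model configured in YAML with the options documented above. The model name is hypothetical, and the `tblproperties` value reuses the illustrative key from the tables:

```yml
models:
  - name: my_databricks_model  # hypothetical model name
    config:
      materialized: table
      # Cluster using Delta's Liquid Clustering feature
      liquid_clustered_by: date_day
      # Arbitrary table properties set on the created table
      tblproperties:
        'this.is.my.key': 12
      # Compression algorithm for the created table
      compression: zstd
```

Note that liquid clustering is an alternative to `partition_by`/`clustered_by`, so the sketch uses only one clustering approach.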
diff --git a/website/docs/reference/resource-configs/snowflake-configs.md b/website/docs/reference/resource-configs/snowflake-configs.md
index 342e8290458..abb516d2258 100644
--- a/website/docs/reference/resource-configs/snowflake-configs.md
+++ b/website/docs/reference/resource-configs/snowflake-configs.md
@@ -299,7 +299,7 @@ Snowflake allows two configuration scenarios for scheduling automatic refreshes:
- **Time-based** — Provide a value of the form `<num> { seconds | minutes | hours | days }`. For example, if the dynamic table needs to be updated every 30 minutes, use `target_lag='30 minutes'`.
- **Downstream** — Applicable when the dynamic table is referenced by other dynamic tables. In this scenario, `target_lag='downstream'` allows for refreshes to be controlled at the target, instead of at each layer.

-Learn more about `target_lag` in Snowflake's [docs](https://docs.snowflake.com/en/user-guide/dynamic-tables-refresh#understanding-target-lag).
+Learn more about `target_lag` in Snowflake's [docs](https://docs.snowflake.com/en/user-guide/dynamic-tables-refresh#understanding-target-lag). Note that the shortest target lag Snowflake supports is 1 minute.
@@ -695,270 +695,6 @@ models:
- -The Snowflake adapter supports [dynamic tables](https://docs.snowflake.com/en/user-guide/dynamic-tables-about). -This materialization is specific to Snowflake, which means that any model configuration that -would normally come along for the ride from `dbt-core` (e.g. as with a `view`) may not be available -for dynamic tables. This gap will decrease in future patches and versions. -While this materialization is specific to Snowflake, it very much follows the implementation -of [materialized views](/docs/build/materializations#Materialized-View). -In particular, dynamic tables have access to the `on_configuration_change` setting. -Dynamic tables are supported with the following configuration parameters: - - - -| Parameter | Type | Required | Default | Change Monitoring Support | -|--------------------|------------|----------|-------------|---------------------------| -| [`on_configuration_change`](/reference/resource-configs/on_configuration_change) | `` | no | `apply` | n/a | -| [`target_lag`](#target-lag) | `` | yes | | alter | -| [`snowflake_warehouse`](#configuring-virtual-warehouses) | `` | yes | | alter | - - - - -| Parameter | Type | Required | Default | Change Monitoring Support | -|--------------------|------------|----------|-------------|---------------------------| -| [`on_configuration_change`](/reference/resource-configs/on_configuration_change) | `` | no | `apply` | n/a | -| [`target_lag`](#target-lag) | `` | yes | | alter | -| [`snowflake_warehouse`](#configuring-virtual-warehouses) | `` | yes | | alter | -| [`refresh_mode`](#refresh-mode) | `` | no | `AUTO` | refresh | -| [`initialize`](#initialize) | `` | no | `ON_CREATE` | n/a | - - - - - - - - - - - -```yaml -models: - [](/reference/resource-configs/resource-path): - [+](/reference/resource-configs/plus-prefix)[materialized](/reference/resource-configs/materialized): dynamic_table - [+](/reference/resource-configs/plus-prefix)[on_configuration_change](/reference/resource-configs/on_configuration_change): apply | continue | fail - [+](/reference/resource-configs/plus-prefix)[target_lag](#target-lag): downstream | - [+](/reference/resource-configs/plus-prefix)[snowflake_warehouse](#configuring-virtual-warehouses): - -``` - - - - - - - - - - -```yaml -version: 2 - -models: - - name: [] - config: - [materialized](/reference/resource-configs/materialized): dynamic_table - [on_configuration_change](/reference/resource-configs/on_configuration_change): apply | continue | fail - [target_lag](#target-lag): downstream | - [snowflake_warehouse](#configuring-virtual-warehouses): - -``` - - - - - - - - - - -```jinja - -{{ config( - [materialized](/reference/resource-configs/materialized)="dynamic_table", - [on_configuration_change](/reference/resource-configs/on_configuration_change)="apply" | "continue" | "fail", - [target_lag](#target-lag)="downstream" | " seconds | minutes | hours | days", - [snowflake_warehouse](#configuring-virtual-warehouses)="", - -) }} - -``` - - - - - - - - - - - - - - - - - -```yaml -models: - [](/reference/resource-configs/resource-path): - [+](/reference/resource-configs/plus-prefix)[materialized](/reference/resource-configs/materialized): dynamic_table - [+](/reference/resource-configs/plus-prefix)[on_configuration_change](/reference/resource-configs/on_configuration_change): apply | continue | fail - [+](/reference/resource-configs/plus-prefix)[target_lag](#target-lag): downstream | - [+](/reference/resource-configs/plus-prefix)[snowflake_warehouse](#configuring-virtual-warehouses): - 
[+](/reference/resource-configs/plus-prefix)[refresh_mode](#refresh-mode): AUTO | FULL | INCREMENTAL - [+](/reference/resource-configs/plus-prefix)[initialize](#initialize): ON_CREATE | ON_SCHEDULE - -``` - - - - - - - - - - -```yaml -version: 2 - -models: - - name: [] - config: - [materialized](/reference/resource-configs/materialized): dynamic_table - [on_configuration_change](/reference/resource-configs/on_configuration_change): apply | continue | fail - [target_lag](#target-lag): downstream | - [snowflake_warehouse](#configuring-virtual-warehouses): - [refresh_mode](#refresh-mode): AUTO | FULL | INCREMENTAL - [initialize](#initialize): ON_CREATE | ON_SCHEDULE - -``` - - - - - - - - - - -```jinja - -{{ config( - [materialized](/reference/resource-configs/materialized)="dynamic_table", - [on_configuration_change](/reference/resource-configs/on_configuration_change)="apply" | "continue" | "fail", - [target_lag](#target-lag)="downstream" | " seconds | minutes | hours | days", - [snowflake_warehouse](#configuring-virtual-warehouses)="", - [refresh_mode](#refresh-mode)="AUTO" | "FULL" | "INCREMENTAL", - [initialize](#initialize)="ON_CREATE" | "ON_SCHEDULE", - -) }} - -``` - - - - - - - - - -Learn more about these parameters in Snowflake's [docs](https://docs.snowflake.com/en/sql-reference/sql/create-dynamic-table): - -### Target lag - -Snowflake allows two configuration scenarios for scheduling automatic refreshes: -- **Time-based** — Provide a value of the form ` { seconds | minutes | hours | days }`. For example, if the dynamic table needs to be updated every 30 minutes, use `target_lag='30 minutes'`. -- **Downstream** — Applicable when the dynamic table is referenced by other dynamic tables. In this scenario, `target_lag='downstream'` allows for refreshes to be controlled at the target, instead of at each layer. - -Learn more about `target_lag` in Snowflake's [docs](https://docs.snowflake.com/en/user-guide/dynamic-tables-refresh#understanding-target-lag). - - - -### Refresh mode - -Snowflake allows three options for refresh mode: -- **AUTO** — Enforces an incremental refresh of the dynamic table by default. If the `CREATE DYNAMIC TABLE` statement does not support the incremental refresh mode, the dynamic table is automatically created with the full refresh mode. -- **FULL** — Enforces a full refresh of the dynamic table, even if the dynamic table can be incrementally refreshed. -- **INCREMENTAL** — Enforces an incremental refresh of the dynamic table. If the query that underlies the dynamic table can’t perform an incremental refresh, dynamic table creation fails and displays an error message. - -Learn more about `refresh_mode` in [Snowflake's docs](https://docs.snowflake.com/en/user-guide/dynamic-tables-refresh). - -### Initialize - -Snowflake allows two options for initialize: -- **ON_CREATE** — Refreshes the dynamic table synchronously at creation. If this refresh fails, dynamic table creation fails and displays an error message. -- **ON_SCHEDULE** — Refreshes the dynamic table at the next scheduled refresh. - -Learn more about `initialize` in [Snowflake's docs](https://docs.snowflake.com/en/user-guide/dynamic-tables-refresh). - - - -### Limitations - -As with materialized views on most data platforms, there are limitations associated with dynamic tables. Some worth noting include: - -- Dynamic table SQL has a [limited feature set](https://docs.snowflake.com/en/user-guide/dynamic-tables-tasks-create#query-constructs-not-currently-supported-in-dynamic-tables). 
-Dynamic table SQL cannot be updated; the dynamic table must go through a `--full-refresh` (DROP/CREATE).
-Dynamic tables cannot be downstream from: materialized views, external tables, streams.
-Dynamic tables cannot reference a view that is downstream from another dynamic table.
-
-Find more information about dynamic table limitations in Snowflake's [docs](https://docs.snowflake.com/en/user-guide/dynamic-tables-tasks-create#dynamic-table-limitations-and-supported-functions).
-
-For dbt limitations, these dbt features are not supported:
-- [Model contracts](/docs/collaborate/govern/model-contracts)
-- [Copy grants configuration](/reference/resource-configs/snowflake-configs#copying-grants)
-
-
-
-#### Changing materialization to and from "dynamic_table"
-
-Version `1.6.x` does not support altering the materialization from a non-dynamic table be a dynamic table and vice versa.
-Re-running with the `--full-refresh` does not resolve this either.
-The workaround is manually dropping the existing model in the warehouse prior to calling `dbt run`.
-This only needs to be done once for the conversion.
-
-For example, assume for the example model below, `my_model`, has already been materialized to the underlying data platform via `dbt run`.
-If the model config is updated to `materialized="dynamic_table"`, dbt will return an error.
-The workaround is to execute `DROP TABLE my_model` on the data warehouse before trying the model again.
-
-
-
-```yaml
-
-{{ config(
-    materialized="table" # or any model type (e.g. view, incremental)
-) }}
-
-```
-
-
-
-
## Source freshness known limitation

Snowflake calculates source freshness using information from the `LAST_ALTERED` column, meaning it relies on a field updated whenever any object undergoes modification, not only data updates. No action is needed, but analytics teams should note this caveat.
diff --git a/website/sidebars.js b/website/sidebars.js
index d31f6d03f57..c8a711815a8 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -288,6 +288,7 @@ const sidebarSettings = {
        "docs/cloud/dbt-cloud-ide/keyboard-shortcuts",
        "docs/cloud/dbt-cloud-ide/ide-user-interface",
        "docs/cloud/dbt-cloud-ide/lint-format",
+        "docs/cloud/dbt-cloud-ide/git-commit-signing",
        {
          type: "category",
          label: "dbt Copilot",
@@ -499,6 +500,7 @@ const sidebarSettings = {
        "docs/deploy/run-visibility",
        "docs/deploy/retry-jobs",
        "docs/deploy/job-notifications",
+        "docs/deploy/model-notifications",
        "docs/deploy/webhooks",
        "docs/deploy/artifacts",
        "docs/deploy/source-freshness",
diff --git a/website/snippets/_mesh-cycle-detection.md b/website/snippets/_mesh-cycle-detection.md
index 2a48d0a15bd..2b4b17385ba 100644
--- a/website/snippets/_mesh-cycle-detection.md
+++ b/website/snippets/_mesh-cycle-detection.md
@@ -1,8 +1,5 @@
-Currently, the default behavior for "project" dependencies enforces that these relationships only go in one direction, meaning that the `jaffle_finance` project could not add a new model that depends, on any public models produced by the `jaffle_marketing` project. dbt will check for cycles across projects and raise errors if any are detected.
+You can enable bidirectional dependencies across projects so these relationships can go in either direction. For example, the `jaffle_finance` project can add a new model that depends on any public models produced by the `jaffle_marketing` project, as long as the new dependency doesn't introduce any node-level cycles. dbt checks for cycles across projects and raises errors if any are detected.
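As a minimal sketch of what such a dependency looks like in practice, the downstream project declares the upstream project in its `dependencies.yml` and can then reference its public models with a two-argument `ref`, such as `{{ ref('jaffle_marketing', '<model_name>') }}` (the model name here is a placeholder):

```yml
# dependencies.yml in the jaffle_finance project (illustrative)
projects:
  - name: jaffle_marketing
```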
-However, many teams may want to be able to share data assets back and forth between teams. _We've added support for enabling bidirectional dependencies across projects, currently in beta_. - -To enable this in your account, set the environment variable `DBT_CLOUD_PROJECT_CYCLES_ALLOWED` to `TRUE` in all your dbt Cloud environments. This allows you to create bidirectional dependencies between projects, so long as the new dependency does not introduce any node-level cycles. When setting up projects that depend on each other, it's important to do so in a stepwise fashion. Each project must run and produce public models before the original producer project can take a dependency on the original consumer project. For example, the order of operations would be as follows for a simple two-project setup: @@ -10,5 +7,3 @@ When setting up projects that depend on each other, it's important to do so in a 2. The `project_b` project adds `project_a` as a dependency. 3. The `project_b` project runs in a deployment environment and produces public models. 4. The `project_a` project adds `project_b` as a dependency. - -If you enable this feature and experience any issues, please reach out to [dbt Cloud support](mailto:support@getdbt.com). diff --git a/website/snippets/_v2-sl-prerequisites.md b/website/snippets/_v2-sl-prerequisites.md index f8108849f4f..68f6e7d10b7 100644 --- a/website/snippets/_v2-sl-prerequisites.md +++ b/website/snippets/_v2-sl-prerequisites.md @@ -1,4 +1,4 @@ -- Have a dbt Cloud Team or Enterprise account. Single-tenant accounts should contact their account representative setup. +- Have a dbt Cloud Team or Enterprise account. Single-tenant accounts should contact their account representative for setup. - Ensure your production and development environments use [dbt version 1.6 or higher](/docs/dbt-versions/upgrade-dbt-version-in-cloud). - Use Snowflake, BigQuery, Databricks, or Redshift. - Create a successful run in the environment where you configure the Semantic Layer. diff --git a/website/src/components/hero/index.js b/website/src/components/hero/index.js index e4bef8e234b..073fb518998 100644 --- a/website/src/components/hero/index.js +++ b/website/src/components/hero/index.js @@ -1,7 +1,7 @@ import React from 'react'; import styles from './styles.module.css'; -function Hero({ heading, subheading, showGraphic = false, customStyles = {}, classNames = '', colClassNames = '' }) { +function Hero({ heading, subheading, showGraphic = false, customStyles = {}, classNames = '', colClassNames = '', callToActionsTitle, callToActions = [] }) { return (
{showGraphic && ( @@ -12,6 +12,22 @@ function Hero({ heading, subheading, showGraphic = false, customStyles = {}, cla

{heading}

{subheading}

+ {callToActionsTitle && {callToActionsTitle}} + {callToActions.length > 0 && ( +
+ {callToActions.map((c, index) => ( + c.href ? ( + + {c.title} + + ) : ( + + {c.title} + + ) + ))} +
+ )}
diff --git a/website/src/components/hero/styles.module.css b/website/src/components/hero/styles.module.css index f596b53762a..67d3c8c5d68 100644 --- a/website/src/components/hero/styles.module.css +++ b/website/src/components/hero/styles.module.css @@ -49,3 +49,34 @@ width: 60%; } } + +.callToActionsTitle { + font-weight: bold; + margin-top: 20px; + margin-bottom: 20px; + font-size: 1.25rem; + display: block; +} + +.callToActions { + display: flex; + flex-flow: wrap; + gap: 0.8rem; + justify-content: center; +} + +.callToAction { + outline: #fff solid 1px; + border-radius: 4px; + padding: 0 12px; + color: #fff; + transition: all .2s; + cursor: pointer; +} + +.callToAction:hover, .callToAction:active, .callToAction:focus { + text-decoration: none; + outline: rgb(4, 115, 119) solid 1px; + background-color: rgb(4, 115, 119); + color: #fff; +} diff --git a/website/src/components/quickstartGuideList/index.js b/website/src/components/quickstartGuideList/index.js index 0f4b5764340..2b87ae3e4a1 100644 --- a/website/src/components/quickstartGuideList/index.js +++ b/website/src/components/quickstartGuideList/index.js @@ -61,7 +61,7 @@ function QuickstartList({ quickstartData }) { // Update the URL with the new search parameters history.replace({ search: params.toString() }); -}; + }; // Handle all filters const handleDataFilter = () => { @@ -98,6 +98,30 @@ function QuickstartList({ quickstartData }) { handleDataFilter(); }, [selectedTags, selectedLevel, searchInput]); // Added searchInput to dependency array + // Set the featured guides that will show as CTAs in the hero section + // The value of the tag must match a tag in the frontmatter of the guides in order for the filter to apply after clicking + const heroCTAs = [ + { + title: 'Quickstart guides', + value: 'Quickstart' + }, + { + title: 'Use Jinja to improve your SQL code', + value: 'Jinja' + }, + { + title: 'Orchestration', + value: 'Orchestration' + }, + ]; + + // Function to handle CTA clicks + const handleCallToActionClick = (value) => { + const params = new URLSearchParams(location.search); + params.set('tags', value); + history.replace({ search: params.toString() }); + }; + return ( @@ -111,6 +135,13 @@ function QuickstartList({ quickstartData }) { showGraphic={false} customStyles={{ marginBottom: 0 }} classNames={styles.quickstartHero} + callToActions={heroCTAs.map(guide => ({ + title: guide.title, + href: guide.href, + onClick: () => handleCallToActionClick(guide.value), + newTab: guide.newTab + }))} + callToActionsTitle={'Popular guides'} />
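As the comment above the `heroCTAs` array notes, a featured `value` only filters anything after clicking if it appears as a tag in a guide's frontmatter. A minimal, hypothetical example of what that frontmatter might look like:

```yml
---
title: "Your guide title"   # hypothetical guide
tags: ['Quickstart']        # must include the featured tag value
---
```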
@@ -135,7 +166,7 @@ function QuickstartList({ quickstartData }) {
- ) + ); } export default QuickstartList; diff --git a/website/static/img/docs/collaborate/dbt-explorer/sigma-example.jpg b/website/static/img/docs/collaborate/dbt-explorer/sigma-example.jpg new file mode 100644 index 00000000000..b1aa4533e08 Binary files /dev/null and b/website/static/img/docs/collaborate/dbt-explorer/sigma-example.jpg differ diff --git a/website/static/img/docs/dbt-cloud/cloud-ide/ide-options-menu-with-save.jpg b/website/static/img/docs/dbt-cloud/cloud-ide/ide-options-menu-with-save.jpg index bd57ee514ee..8a968f684bd 100644 Binary files a/website/static/img/docs/dbt-cloud/cloud-ide/ide-options-menu-with-save.jpg and b/website/static/img/docs/dbt-cloud/cloud-ide/ide-options-menu-with-save.jpg differ diff --git a/website/static/img/docs/dbt-cloud/cloud-ide/restart-ide.jpg b/website/static/img/docs/dbt-cloud/cloud-ide/restart-ide.jpg index 98d71403cdd..031ec19227f 100644 Binary files a/website/static/img/docs/dbt-cloud/cloud-ide/restart-ide.jpg and b/website/static/img/docs/dbt-cloud/cloud-ide/restart-ide.jpg differ diff --git a/website/static/img/docs/dbt-cloud/example-enable-model-notifications.png b/website/static/img/docs/dbt-cloud/example-enable-model-notifications.png new file mode 100644 index 00000000000..16cf5457db5 Binary files /dev/null and b/website/static/img/docs/dbt-cloud/example-enable-model-notifications.png differ diff --git a/website/static/img/docs/dbt-cloud/example-git-signed-commits-setting.png b/website/static/img/docs/dbt-cloud/example-git-signed-commits-setting.png new file mode 100644 index 00000000000..bf3f8169359 Binary files /dev/null and b/website/static/img/docs/dbt-cloud/example-git-signed-commits-setting.png differ diff --git a/website/static/img/docs/dbt-cloud/git-sign-verified.jpg b/website/static/img/docs/dbt-cloud/git-sign-verified.jpg new file mode 100644 index 00000000000..86fbdd58dc9 Binary files /dev/null and b/website/static/img/docs/dbt-cloud/git-sign-verified.jpg differ