diff --git a/stable/enterprise/Chart.lock b/stable/enterprise/Chart.lock
index 5debea5e..4a09b0b3 100644
--- a/stable/enterprise/Chart.lock
+++ b/stable/enterprise/Chart.lock
@@ -7,6 +7,6 @@ dependencies:
     version: 17.11.8
   - name: feeds
     repository: https://charts.anchore.io/stable
-    version: 2.4.3
-digest: sha256:9679bd4d060c7c348f874a0ab1d16f3c4cddfbb644941843b4dd00ae428ca219
-generated: "2024-04-17T12:43:40.046686-04:00"
+    version: 2.5.0
+digest: sha256:8235632dbf137dc1a826936d50b6cd0293c5e246bd148b6d00c68d063386f11a
+generated: "2024-04-30T17:22:45.494615-07:00"
diff --git a/stable/enterprise/Chart.yaml b/stable/enterprise/Chart.yaml
index 2def8053..c1c65705 100644
--- a/stable/enterprise/Chart.yaml
+++ b/stable/enterprise/Chart.yaml
@@ -1,7 +1,7 @@
 apiVersion: v2
 name: enterprise
-version: "2.5.6"
-appVersion: "5.4.1"
+version: "2.6.0"
+appVersion: "5.5.0"
 kubeVersion: 1.23.x - 1.28.x || 1.23.x-x - 1.29.x-x
 description: |
   Anchore Enterprise is a complete container security workflow solution for professional teams. Easily integrating with CI/CD systems,
diff --git a/stable/enterprise/README.md b/stable/enterprise/README.md
index fedf3146..96778e24 100644
--- a/stable/enterprise/README.md
+++ b/stable/enterprise/README.md
@@ -28,6 +28,7 @@ See the [Anchore Enterprise Documentation](https://docs.anchore.com) for more de
 - [Scaling Individual Services](#scaling-individual-services)
 - [Using TLS Internally](#using-tls-internally)
 - [Migrating to the Anchore Enterprise Helm Chart](#migrating-to-the-anchore-enterprise-helm-chart)
+- [Object Storage Migration](#object-storage-migration)
 - [Parameters](#parameters)
 - [Release Notes](#release-notes)
@@ -909,6 +910,89 @@ In case of issues during the migration, execute the following rollback steps:
 This rollback procedure is designed to revert your environment to its pre-migration state, allowing for a fresh migration attempt.
 
+## Object Storage Migration
+
+To cleanly migrate data from one archive driver to another, Anchore Enterprise includes tooling that automates the process via the `anchore-enterprise-manager` tool packaged with the system.
+The enterprise Helm chart can run the migration steps listed in the [object store migration docs](https://docs.anchore.com/current/docs/configuration/storage/object_store/migration/#migrating-analysis-archive-data)
+automatically by spinning up a job that crafts the required configs and runs the necessary migration commands.
+
+The source config.yaml uses the `anchoreConfig.catalog.object_store` and `anchoreConfig.catalog.analysis_archive` objects as its configuration. This is what your system is currently deployed with.
+
+The dest-config.yaml uses `osaaMigrationJob.objectStoreMigration.object_store` and `osaaMigrationJob.analysisArchiveMigration.analysis_archive`, respectively, to determine what it will migrate to.
+
+To enable the job that runs the migration, update the `osaaMigrationJob` values as needed, then run a `helm upgrade`. This creates a job, using the pre-upgrade hook, that ensures all services are spun down before the migration is run. The job uses the same service account as the upgrade job unless one is specified; that service account must have permissions to list and scale down deployments and pods. As the upgrade may take a while, run your `helm upgrade` with a longer `--timeout` so the upgrade job can run through without failing due to the timeout.
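+
+For example, a minimal sketch of the upgrade invocation, assuming the chart repository was added as `anchore` and the release is named `anchore` in the `anchore` namespace; adjust the release name, namespace, values file, and timeout for your environment:
+
+```bash
+# Release, repository, and namespace names below are examples.
+# The pre-upgrade hook scales the services down and runs the migration job
+# before the rest of the upgrade proceeds; a generous --timeout gives a
+# large data migration time to finish.
+helm upgrade anchore anchore/enterprise \
+  --namespace anchore \
+  --values values.yaml \
+  --timeout 4h
+```
+
+The migration job values themselves might look like the following example: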
+
+```yaml
+# example config
+osaaMigrationJob:
+  enabled: true # note that we are enabling the migration job
+  analysisArchiveMigration:
+    run: true # we are specifying to run the analysis_archive migration
+    bucket: "analysis_archive"
+    mode: to_analysis_archive
+    # the deployment will be migrated to use the following configs for catalog.analysis_archive
+    analysis_archive:
+      enabled: true
+      compression:
+        enabled: true
+        min_size_kbytes: 100
+      storage_driver:
+        name: s3
+        config:
+          access_key: my_access_key
+          secret_key: my_secret_key
+          url: 'http://myminio.mynamespace.svc.cluster.local:9000'
+          region: null
+          bucket: analysisarchive
+  objectStoreMigration:
+    run: true
+    # note that since this is the same as anchoreConfig.catalog.object_store, the migration
+    # command for migrating the object store will still run, but it will not do anything as there
+    # is nothing to be done
+    object_store:
+      verify_content_digests: true
+      compression:
+        enabled: false
+        min_size_kbytes: 100
+      storage_driver:
+        name: db
+        config: {}
+
+# the deployment was previously deployed using the following configs
+anchoreConfig:
+  default_admin_password: foobar
+  catalog:
+    analysis_archive:
+      enabled: true
+      compression:
+        enabled: true
+        min_size_kbytes: 100
+      storage_driver:
+        name: db
+        config: {}
+    object_store:
+      verify_content_digests: true
+      compression:
+        enabled: true
+        min_size_kbytes: 100
+      storage_driver:
+        name: db
+        config: {}
+```
+
+After the migration completes, the deployment of Anchore uses the `osaaMigrationJob`'s `analysis_archive` and `object_store` configs for whichever migrations you set to `run`. Since the migration only needs to run once, update your values.yaml to replace your old `anchoreConfig.catalog.analysis_archive` and `anchoreConfig.catalog.object_store` sections with what you declared in the `osaaMigrationJob` section. You can then set `osaaMigrationJob.enabled` to false so the job is not spun up again, as it is no longer needed.
+
+### Object Storage Migration Rollback
+
+To restore your deployment to your previous driver configurations:
+
+1. Put your original `catalog.analysis_archive` and `catalog.object_store` configs in the `osaaMigrationJob` configs. Per the instructions above, `anchoreConfig.catalog.analysis_archive` and `anchoreConfig.catalog.object_store` should currently contain what you migrated to (what was previously in the `osaaMigrationJob` configs). See the values sketch after this list.
+2. Set `osaaMigrationJob.enabled` to true, along with `osaaMigrationJob.objectStoreMigration.run` and/or `osaaMigrationJob.analysisArchiveMigration.run`.
+3. Set `osaaMigrationJob.analysisArchiveMigration.mode=from_analysis_archive`.
+4. Run a `helm upgrade` (remember to increase your timeout based on how much data is being migrated).
+5. Once the migration completes, move your original configs (what is currently in `osaaMigrationJob`) back to `anchoreConfig.catalog.analysis_archive` and `anchoreConfig.catalog.object_store`, and update your values file to set `osaaMigrationJob.enabled=false`.
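+
+Continuing the example above, here is a minimal sketch of rollback values; the s3 access key, secret key, and URL are the same placeholders as before. The two driver configs swap places and the mode is reversed:
+
+```yaml
+osaaMigrationJob:
+  enabled: true
+  analysisArchiveMigration:
+    run: true
+    bucket: "analysis_archive"
+    mode: from_analysis_archive # reverse direction for the rollback
+    # the original (pre-migration) driver config to migrate back to
+    analysis_archive:
+      enabled: true
+      compression:
+        enabled: true
+        min_size_kbytes: 100
+      storage_driver:
+        name: db
+        config: {}
+
+anchoreConfig:
+  catalog:
+    # currently the s3 driver you migrated to (placeholder values)
+    analysis_archive:
+      enabled: true
+      compression:
+        enabled: true
+        min_size_kbytes: 100
+      storage_driver:
+        name: s3
+        config:
+          access_key: my_access_key
+          secret_key: my_secret_key
+          url: 'http://myminio.mynamespace.svc.cluster.local:9000'
+          region: null
+          bucket: analysisarchive
+```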
+
 ## Parameters
 ### Global Resource Parameters
@@ -922,7 +1006,7 @@ This rollback procedure is designed to revert your environment to its pre-migrat
 | Name | Description | Value |
 | ----- | ----- | ----- |
-| `image` | Image used for all Anchore Enterprise deployments, excluding Anchore UI | `docker.io/anchore/enterprise:v5.4.1` |
+| `image` | Image used for all Anchore Enterprise deployments, excluding Anchore UI | `docker.io/anchore/enterprise:v5.5.0` |
 | `imagePullPolicy` | Image pull policy used by all deployments | `IfNotPresent` |
 | `imagePullSecretName` | Name of Docker credentials secret for access to private repos | `anchore-enterprise-pullcreds` |
 | `startMigrationPod` | Spin up a Database migration pod to help migrate the database to the new schema | `false` |
@@ -969,7 +1053,23 @@ This rollback procedure is designed to revert your environment to its pre-migrat
 | Name | Description | Value |
 | ----- | ----- | ----- |
 | `anchoreConfig.service_dir` | Path to directory where default Anchore config files are placed at startup | `/anchore_service` |
-| `anchoreConfig.log_level` | The log level for Anchore services | `INFO` |
+| `anchoreConfig.log_level` | The log level for Anchore services. NOTE: This is deprecated, use `logging.log_level` | `INFO` |
+| `anchoreConfig.logging.colored_logging` | Enable colored output in the logs | `false` |
+| `anchoreConfig.logging.exception_backtrace_logging` | Enable stack traces in the logs | `false` |
+| `anchoreConfig.logging.exception_diagnose_logging` | Enable detailed exception information in the logs | `false` |
+| `anchoreConfig.logging.file_rotation_rule` | Maximum size of a log file before it is rotated | `10 MB` |
+| `anchoreConfig.logging.file_retention_rule` | Number of log files to retain before deleting the oldest | `10` |
+| `anchoreConfig.logging.log_level` | Log level for the service code | `INFO` |
+| `anchoreConfig.logging.server_access_logging` | Set whether to print server access to logging | `true` |
+| `anchoreConfig.logging.server_response_debug_logging` | Log the elapsed time to process the request and the response size (debug log level) | `false` |
+| `anchoreConfig.logging.server_log_level` | Log level specifically for the server (uvicorn) | `info` |
+| `anchoreConfig.logging.structured_logging` | Enable structured logging output (JSON) | `false` |
+| `anchoreConfig.server.max_connection_backlog` | Max connections permitted in the backlog before dropping | `2048` |
+| `anchoreConfig.server.max_wsgi_middleware_worker_queue_size` | Max number of requests to queue for processing by ASGI2WSGI middleware | `100` |
+| `anchoreConfig.server.max_wsgi_middleware_worker_count` | Max number of workers to have in the ASGI2WSGI middleware worker pool | `50` |
+| `anchoreConfig.server.timeout_graceful_shutdown` | Seconds to permit for graceful shutdown, or false to disable | `false` |
+| `anchoreConfig.server.timeout_keep_alive` | Seconds to keep a connection alive before closing | `5` |
+| `anchoreConfig.audit.enabled` | Enable audit logging | `true` |
 | `anchoreConfig.allow_awsecr_iam_auto` | Enable AWS IAM instance role for ECR auth | `true` |
 | `anchoreConfig.keys.secret` | The shared secret used for signing & encryption, auto-generated by Helm if not set. | `""` |
 | `anchoreConfig.keys.privateKeyFileName` | The file name of the private key used for signing & encryption, found in the k8s secret specified in .Values.certStoreSecretName | `""` |
@@ -1249,7 +1349,7 @@ This rollback procedure is designed to revert your environment to its pre-migrat
 | Name | Description | Value |
 | ----- | ----- | ----- |
-| `ui.image` | Image used for the Anchore UI container | `docker.io/anchore/enterprise-ui:v5.4.0` |
+| `ui.image` | Image used for the Anchore UI container | `docker.io/anchore/enterprise-ui:v5.5.0` |
 | `ui.imagePullPolicy` | Image pull policy for Anchore UI image | `IfNotPresent` |
 | `ui.existingSecretName` | Name of an existing secret to be used for Anchore UI DB and Redis endpoints | `anchore-enterprise-ui-env` |
 | `ui.ldapsRootCaCertName` | Name of the custom CA certificate file stored in `.Values.certStoreSecretName` | `""` |
@@ -1341,6 +1441,29 @@ This rollback procedure is designed to revert your environment to its pre-migrat
 | `postgresql.primary.extraEnvVars` | An array to add extra environment variables | `[]` |
 | `postgresql.image.tag` | Specifies the image to use for this chart. | `13.11.0-debian-11-r15` |
 
+### Anchore Object Store and Analysis Archive Migration
+
+| Name | Description | Value |
+| ----- | ----- | ----- |
+| `osaaMigrationJob.enabled` | Enable the Anchore Object Store and Analysis Archive migration job | `false` |
+| `osaaMigrationJob.kubectlImage` | The image to use for the job's init container that uses kubectl to scale down deployments for the migration | `bitnami/kubectl:1.27` |
+| `osaaMigrationJob.extraEnv` | An array to add extra environment variables | `[]` |
+| `osaaMigrationJob.extraVolumes` | Define additional volumes for Anchore Object Store and Analysis Archive migration job | `[]` |
+| `osaaMigrationJob.extraVolumeMounts` | Define additional volume mounts for Anchore Object Store and Analysis Archive migration job | `[]` |
+| `osaaMigrationJob.resources` | Resource requests and limits for Anchore Object Store and Analysis Archive migration job | `{}` |
+| `osaaMigrationJob.labels` | Labels for Anchore Object Store and Analysis Archive migration job | `{}` |
+| `osaaMigrationJob.annotations` | Annotations for Anchore Object Store and Analysis Archive migration job | `{}` |
+| `osaaMigrationJob.nodeSelector` | Node labels for Anchore Object Store and Analysis Archive migration job pod assignment | `{}` |
+| `osaaMigrationJob.tolerations` | Tolerations for Anchore Object Store and Analysis Archive migration job pod assignment | `[]` |
+| `osaaMigrationJob.affinity` | Affinity for Anchore Object Store and Analysis Archive migration job pod assignment | `{}` |
+| `osaaMigrationJob.serviceAccountName` | Service account name for Anchore Object Store and Analysis Archive migration job pods | `""` |
+| `osaaMigrationJob.analysisArchiveMigration.bucket` | The name of the bucket to migrate | `analysis_archive` |
+| `osaaMigrationJob.analysisArchiveMigration.run` | Run the analysis_archive migration | `false` |
+| `osaaMigrationJob.analysisArchiveMigration.mode` | The mode for the analysis_archive migration. Valid values are 'to_analysis_archive' and 'from_analysis_archive'. | `to_analysis_archive` |
+| `osaaMigrationJob.analysisArchiveMigration.analysis_archive` | The configuration of the catalog.analysis_archive for the dest-config.yaml | `{}` |
+| `osaaMigrationJob.objectStoreMigration.run` | Run the object_store migration | `false` |
+| `osaaMigrationJob.objectStoreMigration.object_store` | The configuration of the object_store for the dest-config.yaml | `{}` |
+
 ## Release Notes
@@ -1350,6 +1473,12 @@ For the latest updates and features in Anchore Enterprise, see the official [Rel
 - **Minor Chart Version Change (e.g., v0.1.2 -> v0.2.0)**: Indicates a significant change to the deployment that does not require manual intervention.
 - **Patch Chart Version Change (e.g., v0.1.2 -> v0.1.3)**: Indicates a backwards-compatible bug fix or documentation update.
 
+### V2.6.x
+
+- Deploys Anchore Enterprise v5.5.x. See the [Release Notes](https://docs.anchore.com/current/docs/releasenotes/550/) for more information.
+- Adds support for service-specific annotations.
+- Adds a configurable job for object/analysis store backend migration.
+
 ### V2.5.x
 
 - Deploys Anchore Enterprise v5.4.x. See the [Release Notes](https://docs.anchore.com/current/docs/releasenotes/540/) for more information.
diff --git a/stable/enterprise/files/default_config.yaml b/stable/enterprise/files/default_config.yaml
index b6744018..350d7ce3 100644
--- a/stable/enterprise/files/default_config.yaml
+++ b/stable/enterprise/files/default_config.yaml
@@ -1,6 +1,12 @@
 service_dir: ${ANCHORE_SERVICE_DIR}
 tmp_dir: ${ANCHORE_TMP_DIR}
-log_level: ${ANCHORE_LOG_LEVEL}
+log_level: ${ANCHORE_LOG_LEVEL} # Deprecated - prefer use of logging.log_level
+
+logging:
+  {{- toYaml .Values.anchoreConfig.logging | nindent 2 }}
+
+server:
+  {{- toYaml .Values.anchoreConfig.server | nindent 2 }}
 
 allow_awsecr_iam_auto: ${ANCHORE_ALLOW_ECR_IAM_AUTO}
 host_id: "${ANCHORE_HOST_ID}"
@@ -19,6 +25,37 @@ max_import_content_size_mb: ${ANCHORE_MAX_IMPORT_CONTENT_SIZE_MB}
 
 max_compressed_image_size_mb: ${ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB}
 
+audit:
+  enabled: {{ .Values.anchoreConfig.audit.enabled }}
+  mode: log
+  verbs:
+    - post
+    - put
+    - delete
+    - patch
+  resource_uris:
+    - "/accounts"
+    - "/accounts/{account_name}"
+    - "/accounts/{account_name}/state"
+    - "/accounts/{account_name}/users"
+    - "/accounts/{account_name}/users/{username}"
+    - "/accounts/{account_name}/users/{username}/api-keys"
+    - "/accounts/{account_name}/users/{username}/api-keys/{key_name}"
+    - "/accounts/{account_name}/users/{username}/credentials"
+    - "/rbac-manager/roles"
+    - "/rbac-manager/roles/{role_name}/members"
+    - "/rbac-manager/saml/idps"
+    - "/rbac-manager/saml/idps/{name}"
+    - "/rbac-manager/saml/idps/{name}/user-group-mappings"
+    - "/system/user-groups"
+    - "/system/user-groups/{group_uuid}"
+    - "/system/user-groups/{group_uuid}/roles"
+    - "/system/user-groups/{group_uuid}/users"
+    - "/user/api-keys"
+    - "/user/api-keys/{key_name}"
+    - "/user/credentials"
+
 metrics:
   enabled: ${ANCHORE_ENABLE_METRICS}
   auth_disabled: ${ANCHORE_DISABLE_METRICS_AUTH}
diff --git a/stable/enterprise/files/osaa_config.yaml b/stable/enterprise/files/osaa_config.yaml
new file mode 100644
index 
00000000..219e3595 --- /dev/null +++ b/stable/enterprise/files/osaa_config.yaml @@ -0,0 +1,260 @@ +service_dir: ${ANCHORE_SERVICE_DIR} +tmp_dir: ${ANCHORE_TMP_DIR} +log_level: ${ANCHORE_LOG_LEVEL} + +allow_awsecr_iam_auto: ${ANCHORE_ALLOW_ECR_IAM_AUTO} +host_id: "${ANCHORE_HOST_ID}" +internal_ssl_verify: ${ANCHORE_INTERNAL_SSL_VERIFY} +image_analyze_timeout_seconds: ${ANCHORE_IMAGE_ANALYZE_TIMEOUT_SECONDS} + +global_client_connect_timeout: ${ANCHORE_GLOBAL_CLIENT_CONNECT_TIMEOUT} +global_client_read_timeout: ${ANCHORE_GLOBAL_CLIENT_READ_TIMEOUT} +server_request_timeout_seconds: ${ANCHORE_GLOBAL_SERVER_REQUEST_TIMEOUT_SEC} + +license_file: ${ANCHORE_LICENSE_FILE} +auto_restart_services: false + +max_source_import_size_mb: ${ANCHORE_MAX_IMPORT_SOURCE_SIZE_MB} +max_import_content_size_mb: ${ANCHORE_MAX_IMPORT_CONTENT_SIZE_MB} + +max_compressed_image_size_mb: ${ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB} + +metrics: + enabled: ${ANCHORE_ENABLE_METRICS} + auth_disabled: ${ANCHORE_DISABLE_METRICS_AUTH} + +webhooks: {{- toYaml .Values.anchoreConfig.webhooks | nindent 2 }} + +default_admin_password: "${ANCHORE_ADMIN_PASSWORD}" +default_admin_email: ${ANCHORE_ADMIN_EMAIL} + +keys: + secret: "${ANCHORE_SAML_SECRET}" + public_key_path: ${ANCHORE_AUTH_PRIVKEY} + private_key_path: ${ANCHORE_AUTH_PUBKEY} + +user_authentication: + oauth: + enabled: ${ANCHORE_OAUTH_ENABLED} + default_token_expiration_seconds: ${ANCHORE_OAUTH_TOKEN_EXPIRATION} + refresh_token_expiration_seconds: ${ANCHORE_OAUTH_REFRESH_TOKEN_EXPIRATION} + hashed_passwords: ${ANCHORE_AUTH_ENABLE_HASHED_PASSWORDS} + sso_require_existing_users: ${ANCHORE_SSO_REQUIRES_EXISTING_USERS} + allow_api_keys_for_saml_users: {{ .Values.anchoreConfig.user_authentication.allow_api_keys_for_saml_users }} + max_api_key_age_days: {{ .Values.anchoreConfig.user_authentication.max_api_key_age_days }} + max_api_keys_per_user: {{ .Values.anchoreConfig.user_authentication.max_api_keys_per_user }} + remove_deleted_user_api_keys_older_than_days: {{ .Values.anchoreConfig.user_authentication.remove_deleted_user_api_keys_older_than_days }} + +credentials: + database: + user: "${ANCHORE_DB_USER}" + password: "${ANCHORE_DB_PASSWORD}" + host: "${ANCHORE_DB_HOST}" + port: "${ANCHORE_DB_PORT}" + name: "${ANCHORE_DB_NAME}" + db_connect_args: + timeout: ${ANCHORE_DB_TIMEOUT} + ssl: ${ANCHORE_DB_SSL} + {{- if .Values.anchoreConfig.database.ssl }} + sslmode: ${ANCHORE_DB_SSL_MODE} + sslrootcert: ${ANCHORE_DB_SSL_ROOT_CERT} + {{- end }} + db_pool_size: ${ANCHORE_DB_POOL_SIZE} + db_pool_max_overflow: ${ANCHORE_DB_POOL_MAX_OVERFLOW} + {{- with .Values.anchoreConfig.database.engineArgs }} + db_engine_args: {{- toYaml . 
| nindent 6 }} + {{- end }} + +services: + apiext: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + {{- if .Values.anchoreConfig.apiext.external.enabled }} + external_tls: {{ .Values.anchoreConfig.apiext.external.useTLS }} + external_hostname: {{ .Values.anchoreConfig.apiext.external.hostname }} + external_port: {{ .Values.anchoreConfig.apiext.external.port }} + {{- end }} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + analyzer: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + cycle_timer_seconds: 1 + cycle_timers: {{- toYaml .Values.anchoreConfig.analyzer.cycle_timers | nindent 6 }} + analyzer_driver: 'nodocker' + layer_cache_enable: ${ANCHORE_LAYER_CACHE_ENABLED} + layer_cache_max_gigabytes: ${ANCHORE_LAYER_CACHE_SIZE_GB} + enable_hints: ${ANCHORE_HINTS_ENABLED} + enable_owned_package_filtering: ${ANCHORE_OWNED_PACKAGE_FILTERING_ENABLED} + keep_image_analysis_tmpfiles: ${ANCHORE_KEEP_IMAGE_ANALYSIS_TMPFILES} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + catalog: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + cycle_timer_seconds: 1 + cycle_timers: {{- toYaml .Values.anchoreConfig.catalog.cycle_timers | nindent 6 }} + event_log: {{- toYaml .Values.anchoreConfig.catalog.event_log | nindent 6 }} + runtime_inventory: + inventory_ttl_days: ${ANCHORE_ENTERPRISE_RUNTIME_INVENTORY_TTL_DAYS} + inventory_ingest_overwrite: ${ANCHORE_ENTERPRISE_RUNTIME_INVENTORY_INGEST_OVERWRITE} + image_gc: + max_worker_threads: ${ANCHORE_CATALOG_IMAGE_GC_WORKERS} + runtime_compliance: + object_store_bucket: "runtime_compliance_check" + down_analyzer_task_requeue: ${ANCHORE_ANALYZER_TASK_REQUEUE} + import_operation_expiration_days: ${ANCHORE_IMPORT_OPERATION_EXPIRATION_DAYS} + {{- if and .Values.osaaMigrationJob.enabled .Values.osaaMigrationJob.analysisArchiveMigration.run }} + analysis_archive: {{- toYaml .Values.osaaMigrationJob.analysisArchiveMigration.analysis_archive | nindent 6 }} + {{- else }} + analysis_archive: {{- toYaml .Values.anchoreConfig.catalog.analysis_archive | nindent 6 }} + {{- end }} + {{- if and .Values.osaaMigrationJob.enabled .Values.osaaMigrationJob.objectStoreMigration.run }} + object_store: {{- toYaml .Values.osaaMigrationJob.objectStoreMigration.object_store | nindent 6 }} + {{- else }} + object_store: {{- toYaml .Values.anchoreConfig.catalog.object_store | nindent 6 }} + {{- end }} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + simplequeue: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + policy_engine: + enabled: true + require_auth: true + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + policy_evaluation_cache_ttl: ${ANCHORE_POLICY_EVAL_CACHE_TTL_SECONDS} + cycle_timer_seconds: 1 
+ cycle_timers: {{- toYaml .Values.anchoreConfig.policy_engine.cycle_timers | nindent 6 }} + enable_package_db_load: ${ANCHORE_POLICY_ENGINE_ENABLE_PACKAGE_DB_LOAD} + vulnerabilities: + sync: + enabled: true + ssl_verify: ${ANCHORE_FEEDS_SSL_VERIFY} + connection_timeout_seconds: 3 + read_timeout_seconds: 60 + data: + grypedb: + enabled: true + url: {{ template "enterprise.grypeProviderURL" . }} + packages: + enabled: ${ANCHORE_FEEDS_DRIVER_PACKAGES_ENABLED} + url: {{ template "enterprise.feedsURL" . }} + matching: + default: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_DEFAULT_SEARCH_BY_CPE_ENABLED} + ecosystem_specific: + dotnet: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_DOTNET_SEARCH_BY_CPE_ENABLED} + golang: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_GOLANG_SEARCH_BY_CPE_ENABLED} + java: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_JAVA_SEARCH_BY_CPE_ENABLED} + javascript: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_JAVASCRIPT_SEARCH_BY_CPE_ENABLED} + python: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_PYTHON_SEARCH_BY_CPE_ENABLED} + ruby: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_RUBY_SEARCH_BY_CPE_ENABLED} + stock: + search: + by_cpe: + # Disabling search by CPE for the stock matcher will entirely disable binary-only matches and is not advised + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_STOCK_SEARCH_BY_CPE_ENABLED} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + reports: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + enable_graphiql: ${ANCHORE_ENTERPRISE_REPORTS_ENABLE_GRAPHIQL} + cycle_timers: {{- toYaml .Values.anchoreConfig.reports.cycle_timers | nindent 6 }} + max_async_execution_threads: ${ANCHORE_ENTERPRISE_REPORTS_MAX_ASYNC_EXECUTION_THREADS} + async_execution_timeout: ${ANCHORE_ENTERPRISE_REPORTS_ASYNC_EXECUTION_TIMEOUT} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + use_volume: {{ .Values.anchoreConfig.reports.use_volume }} + + reports_worker: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + enable_data_ingress: ${ANCHORE_ENTERPRISE_REPORTS_ENABLE_DATA_INGRESS} + enable_data_egress: ${ANCHORE_ENTERPRISE_REPORTS_ENABLE_DATA_EGRESS} + data_egress_window: ${ANCHORE_ENTERPRISE_REPORTS_DATA_EGRESS_WINDOW} + data_refresh_max_workers: ${ANCHORE_ENTERPRISE_REPORTS_DATA_REFRESH_MAX_WORKERS} + data_load_max_workers: ${ANCHORE_ENTERPRISE_REPORTS_DATA_LOAD_MAX_WORKERS} + cycle_timers: {{- toYaml .Values.anchoreConfig.reports_worker.cycle_timers | nindent 6 }} + runtime_report_generation: + inventory_images_by_vulnerability: true + vulnerabilities_by_k8s_namespace: ${ANCHORE_ENTERPRISE_REPORTS_VULNERABILITIES_BY_K8S_NAMESPACE} + vulnerabilities_by_k8s_container: ${ANCHORE_ENTERPRISE_REPORTS_VULNERABILITIES_BY_K8S_CONTAINER} + vulnerabilities_by_ecs_container: ${ANCHORE_ENTERPRISE_REPORTS_VULNERABILITIES_BY_ECS_CONTAINER} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + notifications: + enabled: true + require_auth: true + endpoint_hostname: 
${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + cycle_timers: {{- toYaml .Values.anchoreConfig.notifications.cycle_timers | nindent 6 }} + ui_url: ${ANCHORE_ENTERPRISE_UI_URL} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} diff --git a/stable/enterprise/templates/_common.tpl b/stable/enterprise/templates/_common.tpl index 2d03fe2e..e0bc00a1 100644 --- a/stable/enterprise/templates/_common.tpl +++ b/stable/enterprise/templates/_common.tpl @@ -19,6 +19,25 @@ When calling this template, .component can be included in the context for compon {{- end }} {{- end -}} +{{/* +Service annotations +{{- include "enterprise.service.annotations" (merge (dict "component" $component) .) }} +*/}} +{{- define "enterprise.service.annotations" -}} +{{- $component := .component -}} +{{- if and (not .nil) (not .Values.annotations) (not (index .Values (print $component)).service.annotations) }} + {{- print "{}" }} +{{- else }} + {{- with .Values.annotations -}} +{{ toYaml . }} + {{- end }} + {{- if $component }} + {{- with (index .Values (print $component)).service.annotations }} +{{ toYaml . }} + {{- end }} + {{- end }} +{{- end }} +{{- end -}} {{/* Setup a container for the cloudsql proxy to run in all pods when .Values.cloudsql.enabled = true @@ -224,7 +243,7 @@ Setup the common pod spec configs {{- with .Values.securityContext }} securityContext: {{- toYaml . | nindent 2 }} {{- end }} -{{- if or .Values.serviceAccountName (index .Values (print $component)).serviceAccountName (eq $component "upgradeJob") }} +{{- if or .Values.serviceAccountName (index .Values (print $component)).serviceAccountName (eq $component "upgradeJob") (eq $component "osaaMigrationJob") }} serviceAccountName: {{ include "enterprise.serviceAccountName" (merge (dict "component" $component) .) }} {{- end }} {{- with .Values.imagePullSecretName }} @@ -309,9 +328,15 @@ Setup the common anchore volumes configMap: name: {{ .Release.Name }}-enterprise-scripts defaultMode: 0755 +{{- if .Values.osaaMigrationJob.enabled }} +- name: config-volume + configMap: + name: {{ template "enterprise.osaaMigrationJob.fullname" . }} +{{- else }} - name: config-volume configMap: name: {{ template "enterprise.fullname" . }} +{{- end }} {{- with .Values.certStoreSecretName }} - name: certs secret: diff --git a/stable/enterprise/templates/_helpers.tpl b/stable/enterprise/templates/_helpers.tpl index b57dde0a..35638b75 100644 --- a/stable/enterprise/templates/_helpers.tpl +++ b/stable/enterprise/templates/_helpers.tpl @@ -46,6 +46,16 @@ Allows sourcing of a specified file in the entrypoint of all containers when .Va {{- end }} {{- end }} +{{/* +Allows passing in a feature flag to the ui application on startup +*/}} +{{- define "enterprise.ui.featureFlags" }} + {{- range $index, $val := .Values.ui.extraEnv -}} + {{- if eq .name "ANCHORE_FEATURE_FLAG" }} + {{- printf "-f %v" .value }} + {{- end }} + {{- end }} +{{- end }} {{/* Returns the proper URL for the feeds service @@ -97,7 +107,7 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this {{- with (index .Values (print $component)).serviceAccountName }} {{- print . | trunc 63 | trimSuffix "-" -}} {{- else }} - {{- if and .Values.upgradeJob.rbacCreate (eq $component "upgradeJob") }} + {{- if and .Values.upgradeJob.rbacCreate (or (eq $component "upgradeJob") (eq $component "osaaMigrationJob") ) }} {{- printf "%s-%s" (include "enterprise.fullname" .) 
"upgrade-sa" -}} {{- else if .Values.serviceAccountName }} {{- print .Values.serviceAccountName | trunc 63 | trimSuffix "-" -}} diff --git a/stable/enterprise/templates/_names.tpl b/stable/enterprise/templates/_names.tpl index ec057737..f38f5e69 100644 --- a/stable/enterprise/templates/_names.tpl +++ b/stable/enterprise/templates/_names.tpl @@ -66,6 +66,11 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this {{- printf "%s-%s-%s-%s%s" .Release.Name $name (.Chart.AppVersion | replace "." "") "upgrade" $forcedRevision| trunc 63 | trimSuffix "-" -}} {{- end -}} +{{- define "enterprise.osaaMigrationJob.fullname" -}} +{{- $name := default .Chart.Name .Values.global.nameOverride -}} +{{- printf "%s-%s-%s-%s" .Release.Name $name (.Chart.AppVersion | replace "." "") "osaa-migration-job" | trunc 63 | trimSuffix "-" -}} +{{- end -}} + {{- define "enterprise.feeds.fullname" -}} {{- if .Values.feeds.fullnameOverride }} {{- .Values.feeds.fullnameOverride | trunc 63 | trimSuffix "-" }} diff --git a/stable/enterprise/templates/api_deployment.yaml b/stable/enterprise/templates/api_deployment.yaml index ae7490de..1fb20d6f 100644 --- a/stable/enterprise/templates/api_deployment.yaml +++ b/stable/enterprise/templates/api_deployment.yaml @@ -79,7 +79,7 @@ metadata: name: {{ template "enterprise.api.fullname" . }} namespace: {{ .Release.Namespace }} labels: {{- include "enterprise.common.labels" (merge (dict "component" $component) .) | nindent 4 }} - annotations: {{- include "enterprise.common.annotations" (merge (dict "component" $component) .) | nindent 4 }} + annotations: {{- include "enterprise.service.annotations" (merge (dict "component" $component) .) | nindent 4 }} spec: type: {{ .Values.api.service.type }} ports: diff --git a/stable/enterprise/templates/catalog_deployment.yaml b/stable/enterprise/templates/catalog_deployment.yaml index f054e7dc..18586225 100644 --- a/stable/enterprise/templates/catalog_deployment.yaml +++ b/stable/enterprise/templates/catalog_deployment.yaml @@ -80,7 +80,7 @@ metadata: name: {{ template "enterprise.catalog.fullname" . }} namespace: {{ .Release.Namespace }} labels: {{- include "enterprise.common.labels" (merge (dict "component" $component) .) | nindent 4 }} - annotations: {{- include "enterprise.common.annotations" (merge (dict "component" $component) .) | nindent 4 }} + annotations: {{- include "enterprise.service.annotations" (merge (dict "component" $component) .) | nindent 4 }} spec: type: {{ .Values.catalog.service.type }} ports: diff --git a/stable/enterprise/templates/hooks/pre-upgrade/object_store_analysis_archive_migration_job.yaml b/stable/enterprise/templates/hooks/pre-upgrade/object_store_analysis_archive_migration_job.yaml new file mode 100644 index 00000000..d0ee3d19 --- /dev/null +++ b/stable/enterprise/templates/hooks/pre-upgrade/object_store_analysis_archive_migration_job.yaml @@ -0,0 +1,140 @@ +{{- if .Values.osaaMigrationJob.enabled -}} +{{- $component := "osaaMigrationJob" -}} + +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ template "enterprise.osaaMigrationJob.fullname" . }} + namespace: {{ .Release.Namespace }} + labels: {{- include "enterprise.common.labels" (merge (dict "component" $component) .) | nindent 4 }} + annotations: {{- include "enterprise.common.annotations" (merge (dict "component" $component "nil" true) .) 
| nindent 4 }} + "helm.sh/hook": pre-upgrade + "helm.sh/hook-weight": "1" # we want the migration to run before the upgrade jobs but after the rbac creation (if any) + "helm.sh/hook-delete-policy": before-hook-creation +spec: + template: + metadata: + name: {{ template "enterprise.osaaMigrationJob.fullname" . }} + labels: {{- include "enterprise.common.labels" (merge (dict "component" $component) .) | nindent 8 }} + annotations: {{- include "enterprise.common.annotations" (merge (dict "component" $component "nil" true) .) | nindent 8 }} + {{- if and (not .Values.injectSecretsViaEnv) (not .Values.useExistingSecrets) }} + checksum/secrets: {{ include (print $.Template.BasePath "/anchore_secret.yaml") . | sha256sum }} + {{- end }} + spec: + {{- include "enterprise.common.podSpec" (merge (dict "component" $component) .) | indent 6 }} + restartPolicy: Never + volumes: + {{- include "enterprise.common.extraVolumes" (merge (dict "component" $component) .) | nindent 8 }} + - name: anchore-license + secret: + secretName: {{ .Values.licenseSecretName }} + - name: anchore-scripts + configMap: + name: {{ .Release.Name }}-enterprise-scripts + defaultMode: 0755 + - name: config-volume + configMap: + name: {{ template "enterprise.fullname" . }} + - name: dest-config + configMap: + name: {{ template "enterprise.osaaMigrationJob.fullname" . }} + items: + - key: "config.yaml" + path: "dest-config.yaml" + {{- with .Values.certStoreSecretName }} + - name: certs + secret: + secretName: {{ . }} + {{- end }} + {{- if .Values.cloudsql.useExistingServiceAcc }} + - name: {{ .Values.cloudsql.serviceAccSecretName }} + secret: + secretName: {{ .Values.cloudsql.serviceAccSecretName }} + {{- end }} + initContainers: + - name: scale-down-anchore + image: {{ .Values.osaaMigrationJob.kubectlImage }} + command: ["/bin/bash", "-c"] + args: + - | + kubectl scale deployments --all --replicas=0 -l app.kubernetes.io/name={{ template "enterprise.fullname" . }}; + while [[ $(kubectl get pods -l app.kubernetes.io/name={{ template "enterprise.fullname" . }} --field-selector=status.phase=Running --no-headers | tee /dev/stderr | wc -l) -gt 0 ]]; do + echo 'waiting for pods to go down...' && sleep 5; + done + {{- with .Values.containerSecurityContext }} + securityContext: {{ toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.osaaMigrationJob.resources }} + resources: {{- toYaml . | nindent 12 }} + {{- end }} + - name: wait-for-db + image: {{ .Values.image }} + imagePullPolicy: {{ .Values.imagePullPolicy }} + env: {{- include "enterprise.common.environment" (merge (dict "component" $component) .) | nindent 12 }} + command: ["/bin/bash", "-c"] + args: + - | + while true; do + CONNSTR=postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" + if [[ ${ANCHORE_DB_SSL_MODE} != null ]]; then + CONNSTR=${CONNSTR}?sslmode=${ANCHORE_DB_SSL_MODE} + fi + if [[ ${ANCHORE_DB_SSL_ROOT_CERT} != null ]]; then + CONNSTR=${CONNSTR}\&sslrootcert=${ANCHORE_DB_SSL_ROOT_CERT} + fi + err=$(anchore-enterprise-manager db --db-connect ${CONNSTR} pre-upgrade-check 2>&1 > /dev/null) + # empty $err means pre-upgrade-check produced no error output and the database is reachable + if [[ -z "$err" ]]; then + echo "Database is ready" + exit 0 + fi + echo "Database is not ready yet, sleeping 10 seconds..." + sleep 10 + done + {{- with .Values.containerSecurityContext }} + securityContext: {{ toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.osaaMigrationJob.resources }} + resources: {{- toYaml . 
| nindent 12 }} + {{- end }} + containers: + {{- if .Values.cloudsql.enabled }} + {{- include "enterprise.common.cloudsqlContainer" . | nindent 8 }} + {{- end }} + - name: migrate-analysis-archive + image: {{ .Values.image }} + imagePullPolicy: {{ .Values.imagePullPolicy }} + {{- with .Values.containerSecurityContext }} + securityContext: {{ toYaml . | nindent 12 }} + {{- end }} + envFrom: {{- include "enterprise.common.envFrom" . | nindent 12 }} + env: {{- include "enterprise.common.environment" (merge (dict "component" $component) .) | nindent 12 }} + volumeMounts: + {{- include "enterprise.common.volumeMounts" (merge (dict "component" $component) .) | nindent 12 }} + - name: dest-config + mountPath: /config/dest-config.yaml + subPath: dest-config.yaml + {{- with .Values.osaaMigrationJob.resources }} + resources: {{- toYaml . | nindent 12 }} + {{- end }} + command: + - "/bin/bash" + - "-c" + - | + echo "checking destination config..." + {{- print (include "enterprise.doSourceFile" .) | nindent 14 }} anchore-enterprise-manager objectstorage --db-connect postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" check /config/dest-config.yaml + {{- if .Values.osaaMigrationJob.objectStoreMigration.run }} + echo "running object store migration" + {{- print (include "enterprise.doSourceFile" .) | nindent 14 }} anchore-enterprise-manager objectstorage --db-connect postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" migrate /config/config.yaml /config/dest-config.yaml --dontask + {{- end }} + {{- if .Values.osaaMigrationJob.analysisArchiveMigration.run }} + echo "running analysis archive migration" + {{- if eq .Values.osaaMigrationJob.analysisArchiveMigration.mode "to_analysis_archive" }} + echo "running in to_analysis_archive mode (migrating source to dest using driver located in dest analysis archive section)" + {{- print (include "enterprise.doSourceFile" .) | nindent 14 }} anchore-enterprise-manager objectstorage --db-connect postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" migrate --to-analysis-archive --bucket {{ .Values.osaaMigrationJob.analysisArchiveMigration.bucket }} /config/config.yaml /config/dest-config.yaml --dontask + {{- else if eq .Values.osaaMigrationJob.analysisArchiveMigration.mode "from_analysis_archive" }} + echo "running in from_analysis_archive mode (migrating source to dest using driver located in source analysis archive section)" + {{- print (include "enterprise.doSourceFile" .) | nindent 14 }} anchore-enterprise-manager objectstorage --db-connect postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" migrate --from-analysis-archive --bucket {{ .Values.osaaMigrationJob.analysisArchiveMigration.bucket }} /config/config.yaml /config/dest-config.yaml --dontask + {{- end }} + {{- end }} + echo "migration complete" +{{- end -}} diff --git a/stable/enterprise/templates/notifications_deployment.yaml b/stable/enterprise/templates/notifications_deployment.yaml index aa9e3fe5..25bcc040 100644 --- a/stable/enterprise/templates/notifications_deployment.yaml +++ b/stable/enterprise/templates/notifications_deployment.yaml @@ -57,7 +57,7 @@ metadata: name: {{ template "enterprise.notifications.fullname" . 
}} namespace: {{ .Release.Namespace }} labels: {{- include "enterprise.common.labels" (merge (dict "component" $component) .) | nindent 4 }} - annotations: {{- include "enterprise.common.annotations" (merge (dict "component" $component) .) | nindent 4 }} + annotations: {{- include "enterprise.service.annotations" (merge (dict "component" $component) .) | nindent 4 }} spec: type: {{ .Values.notifications.service.type }} ports: diff --git a/stable/enterprise/templates/osaa_configmap.yaml b/stable/enterprise/templates/osaa_configmap.yaml new file mode 100644 index 00000000..66ab2340 --- /dev/null +++ b/stable/enterprise/templates/osaa_configmap.yaml @@ -0,0 +1,19 @@ +{{- if .Values.osaaMigrationJob.enabled -}} +{{- $component := "osaaMigrationJob" -}} +kind: ConfigMap +apiVersion: v1 +metadata: + name: {{ template "enterprise.osaaMigrationJob.fullname" . }} + namespace: {{ .Release.Namespace }} + labels: {{- include "enterprise.common.labels" . | nindent 4 }} + annotations: + "helm.sh/hook": pre-upgrade + "helm.sh/hook-weight": "0" + {{- include "enterprise.common.annotations" (merge (dict "component" $component "nil" true) .) | nindent 4 }} + +data: + config.yaml: | + # Anchore Object Store and Analysis Archive Migration configuration file, mounted from a configmap + # +{{ tpl (.Files.Get "files/osaa_config.yaml") . | indent 4 }} +{{- end -}} diff --git a/stable/enterprise/templates/policyengine_deployment.yaml b/stable/enterprise/templates/policyengine_deployment.yaml index 4bed6fbe..3d54db31 100644 --- a/stable/enterprise/templates/policyengine_deployment.yaml +++ b/stable/enterprise/templates/policyengine_deployment.yaml @@ -65,7 +65,7 @@ metadata: name: {{ template "enterprise.policyEngine.fullname" . }} namespace: {{ .Release.Namespace }} labels: {{- include "enterprise.common.labels" (merge (dict "component" $component) .) | nindent 4 }} - annotations: {{- include "enterprise.common.annotations" (merge (dict "component" $component) .) | nindent 4 }} + annotations: {{- include "enterprise.service.annotations" (merge (dict "component" $component) .) | nindent 4 }} spec: type: {{ .Values.policyEngine.service.type }} ports: diff --git a/stable/enterprise/templates/reports_deployment.yaml b/stable/enterprise/templates/reports_deployment.yaml index 5d41532f..7ffa87d7 100644 --- a/stable/enterprise/templates/reports_deployment.yaml +++ b/stable/enterprise/templates/reports_deployment.yaml @@ -69,7 +69,7 @@ metadata: name: {{ template "enterprise.reports.fullname" . }} namespace: {{ .Release.Namespace }} labels: {{- include "enterprise.common.labels" (merge (dict "component" $component) .) | nindent 4 }} - annotations: {{- include "enterprise.common.annotations" (merge (dict "component" $component) .) | nindent 4 }} + annotations: {{- include "enterprise.service.annotations" (merge (dict "component" $component) .) | nindent 4 }} spec: type: {{ .Values.reports.service.type }} ports: diff --git a/stable/enterprise/templates/reportsworker_deployment.yaml b/stable/enterprise/templates/reportsworker_deployment.yaml index 9ffde13e..321cc474 100644 --- a/stable/enterprise/templates/reportsworker_deployment.yaml +++ b/stable/enterprise/templates/reportsworker_deployment.yaml @@ -57,7 +57,7 @@ metadata: name: {{ template "enterprise.reportsWorker.fullname" . }} namespace: {{ .Release.Namespace }} labels: {{- include "enterprise.common.labels" (merge (dict "component" $component) .) | nindent 4 }} - annotations: {{- include "enterprise.common.annotations" (merge (dict "component" $component) .) 
| nindent 4 }} + annotations: {{- include "enterprise.service.annotations" (merge (dict "component" $component) .) | nindent 4 }} spec: type: {{ .Values.reportsWorker.service.type }} ports: diff --git a/stable/enterprise/templates/simplequeue_deployment.yaml b/stable/enterprise/templates/simplequeue_deployment.yaml index 21ab1342..16db860a 100644 --- a/stable/enterprise/templates/simplequeue_deployment.yaml +++ b/stable/enterprise/templates/simplequeue_deployment.yaml @@ -56,7 +56,7 @@ metadata: name: {{ template "enterprise.simpleQueue.fullname" . }} namespace: {{ .Release.Namespace }} labels: {{- include "enterprise.common.labels" (merge (dict "component" $component) .) | nindent 4 }} - annotations: {{- include "enterprise.common.annotations" (merge (dict "component" $component) .) | nindent 4 }} + annotations: {{- include "enterprise.service.annotations" (merge (dict "component" $component) .) | nindent 4 }} spec: type: {{ .Values.simpleQueue.service.type }} ports: diff --git a/stable/enterprise/templates/ui_deployment.yaml b/stable/enterprise/templates/ui_deployment.yaml index 73514682..e98af7b6 100644 --- a/stable/enterprise/templates/ui_deployment.yaml +++ b/stable/enterprise/templates/ui_deployment.yaml @@ -53,7 +53,7 @@ spec: {{- end }} command: ["/bin/sh", "-c"] args: - - {{ print (include "enterprise.doSourceFile" .) }} /docker-entrypoint.sh node /home/node/aui/build/server.js + - {{ print (include "enterprise.doSourceFile" .) }} /docker-entrypoint.sh node /home/node/aui/build/server.js {{ print (include "enterprise.ui.featureFlags" .) }} env: {{- include "enterprise.common.environment" (merge (dict "component" $component) .) | nindent 12 }} {{- if .Values.anchoreConfig.database.ssl }} - name: PGSSLROOTCERT @@ -117,7 +117,7 @@ metadata: name: {{ template "enterprise.ui.fullname" . }} namespace: {{ .Release.Namespace }} labels: {{- include "enterprise.common.labels" (merge (dict "component" $component) .) | nindent 4 }} - annotations: {{- include "enterprise.common.annotations" (merge (dict "component" $component) .) | nindent 4 }} + annotations: {{- include "enterprise.service.annotations" (merge (dict "component" $component) .) 
| nindent 4 }} spec: sessionAffinity: {{ .Values.ui.service.sessionAffinity }} type: {{ .Values.ui.service.type }} diff --git a/stable/enterprise/tests/__snapshot__/configmap_test.yaml.snap b/stable/enterprise/tests/__snapshot__/configmap_test.yaml.snap index 2e1ee3a3..a447b069 100644 --- a/stable/enterprise/tests/__snapshot__/configmap_test.yaml.snap +++ b/stable/enterprise/tests/__snapshot__/configmap_test.yaml.snap @@ -46,7 +46,26 @@ should render the configmaps: # service_dir: ${ANCHORE_SERVICE_DIR} tmp_dir: ${ANCHORE_TMP_DIR} - log_level: ${ANCHORE_LOG_LEVEL} + log_level: ${ANCHORE_LOG_LEVEL} # Deprecated - prefer use of logging.log_level + + logging: + colored_logging: false + exception_backtrace_logging: false + exception_diagnose_logging: false + file_retention_rule: 10 + file_rotation_rule: 10 MB + log_level: INFO + server_access_logging: true + server_log_level: info + server_response_debug_logging: false + structured_logging: false + + server: + max_connection_backlog: 2048 + max_wsgi_middleware_worker_count: 50 + max_wsgi_middleware_worker_queue_size: 100 + timeout_graceful_shutdown: false + timeout_keep_alive: 5 allow_awsecr_iam_auto: ${ANCHORE_ALLOW_ECR_IAM_AUTO} host_id: "${ANCHORE_HOST_ID}" @@ -65,6 +84,37 @@ should render the configmaps: max_compressed_image_size_mb: ${ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB} + audit: + enabled: true + mode: log + verbs: + - post + - put + - delete + - patch + resource_uris: + - "/accounts" + - "/accounts/{account_name}" + - "/accounts/{account_name}/state" + - "/accounts/{account_name}/users" + - "/accounts/{account_name}/users/{username}" + - "/accounts/{account_name}/users/{username}/api-keys" + - "/accounts/{account_name}/users/{username}/api-keys/{key_name}" + - "/accounts/{account_name}/users/{username}/credentials" + - "/rbac-manager/roles" + - "/rbac-manager/roles/{role_name}/members" + - "/rbac-manager/saml/idps" + - "/rbac-manager/saml/idps/{name}" + - "/rbac-manager/saml/idps/{name}/user-group-mappings" + - "/system/user-groups" + - "/system/user-groups/{group_uuid}" + - "/system/user-groups/{group_uuid}/roles" + - "/system/user-groups/{group_uuid}/users" + - "/user/api-keys" + - "/user/api-keys/{key_name}" + - "/user/credentials" + + metrics: enabled: ${ANCHORE_ENABLE_METRICS} auth_disabled: ${ANCHORE_DISABLE_METRICS_AUTH} diff --git a/stable/enterprise/tests/__snapshot__/osaa_configmap_test.yaml.snap b/stable/enterprise/tests/__snapshot__/osaa_configmap_test.yaml.snap new file mode 100644 index 00000000..3acbdf44 --- /dev/null +++ b/stable/enterprise/tests/__snapshot__/osaa_configmap_test.yaml.snap @@ -0,0 +1,672 @@ +should render the configmaps for osaa migration if enabled: + 1: | + apiVersion: v1 + data: + config.yaml: | + # Anchore Service Configuration File, mounted from a configmap + # + service_dir: ${ANCHORE_SERVICE_DIR} + tmp_dir: ${ANCHORE_TMP_DIR} + log_level: ${ANCHORE_LOG_LEVEL} # Deprecated - prefer use of logging.log_level + + logging: + colored_logging: false + exception_backtrace_logging: false + exception_diagnose_logging: false + file_retention_rule: 10 + file_rotation_rule: 10 MB + log_level: INFO + server_access_logging: true + server_log_level: info + server_response_debug_logging: false + structured_logging: false + + server: + max_connection_backlog: 2048 + max_wsgi_middleware_worker_count: 50 + max_wsgi_middleware_worker_queue_size: 100 + timeout_graceful_shutdown: false + timeout_keep_alive: 5 + + allow_awsecr_iam_auto: ${ANCHORE_ALLOW_ECR_IAM_AUTO} + host_id: "${ANCHORE_HOST_ID}" + internal_ssl_verify: 
${ANCHORE_INTERNAL_SSL_VERIFY} + image_analyze_timeout_seconds: ${ANCHORE_IMAGE_ANALYZE_TIMEOUT_SECONDS} + + global_client_connect_timeout: ${ANCHORE_GLOBAL_CLIENT_CONNECT_TIMEOUT} + global_client_read_timeout: ${ANCHORE_GLOBAL_CLIENT_READ_TIMEOUT} + server_request_timeout_seconds: ${ANCHORE_GLOBAL_SERVER_REQUEST_TIMEOUT_SEC} + + license_file: ${ANCHORE_LICENSE_FILE} + auto_restart_services: false + + max_source_import_size_mb: ${ANCHORE_MAX_IMPORT_SOURCE_SIZE_MB} + max_import_content_size_mb: ${ANCHORE_MAX_IMPORT_CONTENT_SIZE_MB} + + max_compressed_image_size_mb: ${ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB} + + audit: + enabled: true + mode: log + verbs: + - post + - put + - delete + - patch + resource_uris: + - "/accounts" + - "/accounts/{account_name}" + - "/accounts/{account_name}/state" + - "/accounts/{account_name}/users" + - "/accounts/{account_name}/users/{username}" + - "/accounts/{account_name}/users/{username}/api-keys" + - "/accounts/{account_name}/users/{username}/api-keys/{key_name}" + - "/accounts/{account_name}/users/{username}/credentials" + - "/rbac-manager/roles" + - "/rbac-manager/roles/{role_name}/members" + - "/rbac-manager/saml/idps" + - "/rbac-manager/saml/idps/{name}" + - "/rbac-manager/saml/idps/{name}/user-group-mappings" + - "/system/user-groups" + - "/system/user-groups/{group_uuid}" + - "/system/user-groups/{group_uuid}/roles" + - "/system/user-groups/{group_uuid}/users" + - "/user/api-keys" + - "/user/api-keys/{key_name}" + - "/user/credentials" + + + metrics: + enabled: ${ANCHORE_ENABLE_METRICS} + auth_disabled: ${ANCHORE_DISABLE_METRICS_AUTH} + + webhooks: + {} + + default_admin_password: "${ANCHORE_ADMIN_PASSWORD}" + default_admin_email: ${ANCHORE_ADMIN_EMAIL} + + keys: + secret: "${ANCHORE_SAML_SECRET}" + public_key_path: ${ANCHORE_AUTH_PRIVKEY} + private_key_path: ${ANCHORE_AUTH_PUBKEY} + + user_authentication: + oauth: + enabled: ${ANCHORE_OAUTH_ENABLED} + default_token_expiration_seconds: ${ANCHORE_OAUTH_TOKEN_EXPIRATION} + refresh_token_expiration_seconds: ${ANCHORE_OAUTH_REFRESH_TOKEN_EXPIRATION} + hashed_passwords: ${ANCHORE_AUTH_ENABLE_HASHED_PASSWORDS} + sso_require_existing_users: ${ANCHORE_SSO_REQUIRES_EXISTING_USERS} + allow_api_keys_for_saml_users: false + max_api_key_age_days: 365 + max_api_keys_per_user: 100 + remove_deleted_user_api_keys_older_than_days: 365 + + credentials: + database: + user: "${ANCHORE_DB_USER}" + password: "${ANCHORE_DB_PASSWORD}" + host: "${ANCHORE_DB_HOST}" + port: "${ANCHORE_DB_PORT}" + name: "${ANCHORE_DB_NAME}" + db_connect_args: + timeout: ${ANCHORE_DB_TIMEOUT} + ssl: ${ANCHORE_DB_SSL} + db_pool_size: ${ANCHORE_DB_POOL_SIZE} + db_pool_max_overflow: ${ANCHORE_DB_POOL_MAX_OVERFLOW} + + services: + apiext: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + analyzer: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + cycle_timer_seconds: 1 + cycle_timers: + image_analyzer: 1 + analyzer_driver: 'nodocker' + layer_cache_enable: ${ANCHORE_LAYER_CACHE_ENABLED} + layer_cache_max_gigabytes: ${ANCHORE_LAYER_CACHE_SIZE_GB} + enable_hints: ${ANCHORE_HINTS_ENABLED} + enable_owned_package_filtering: ${ANCHORE_OWNED_PACKAGE_FILTERING_ENABLED} + 
keep_image_analysis_tmpfiles: ${ANCHORE_KEEP_IMAGE_ANALYSIS_TMPFILES} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + catalog: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + cycle_timer_seconds: 1 + cycle_timers: + analyzer_queue: 1 + archive_tasks: 43200 + artifact_lifecycle_policy_tasks: 43200 + events_gc: 43200 + image_gc: 60 + image_watcher: 3600 + k8s_image_watcher: 150 + notifications: 30 + policy_bundle_sync: 300 + policy_eval: 3600 + repo_watcher: 60 + resource_metrics: 60 + service_watcher: 15 + vulnerability_scan: 14400 + event_log: + max_retention_age_days: 180 + notification: + enabled: false + level: + - error + runtime_inventory: + inventory_ttl_days: ${ANCHORE_ENTERPRISE_RUNTIME_INVENTORY_TTL_DAYS} + inventory_ingest_overwrite: ${ANCHORE_ENTERPRISE_RUNTIME_INVENTORY_INGEST_OVERWRITE} + image_gc: + max_worker_threads: ${ANCHORE_CATALOG_IMAGE_GC_WORKERS} + runtime_compliance: + object_store_bucket: "runtime_compliance_check" + down_analyzer_task_requeue: ${ANCHORE_ANALYZER_TASK_REQUEUE} + import_operation_expiration_days: ${ANCHORE_IMPORT_OPERATION_EXPIRATION_DAYS} + analysis_archive: + {} + object_store: + compression: + enabled: true + min_size_kbytes: 100 + storage_driver: + config: {} + name: db + verify_content_digests: true + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + simplequeue: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + policy_engine: + enabled: true + require_auth: true + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + policy_evaluation_cache_ttl: ${ANCHORE_POLICY_EVAL_CACHE_TTL_SECONDS} + cycle_timer_seconds: 1 + cycle_timers: + feed_sync: 14400 + feed_sync_checker: 3600 + enable_package_db_load: ${ANCHORE_POLICY_ENGINE_ENABLE_PACKAGE_DB_LOAD} + vulnerabilities: + sync: + enabled: true + ssl_verify: ${ANCHORE_FEEDS_SSL_VERIFY} + connection_timeout_seconds: 3 + read_timeout_seconds: 60 + data: + grypedb: + enabled: true + url: http://test-release-feeds:8448/v2/databases/grypedb + packages: + enabled: ${ANCHORE_FEEDS_DRIVER_PACKAGES_ENABLED} + url: http://test-release-feeds:8448/v2/feeds + matching: + default: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_DEFAULT_SEARCH_BY_CPE_ENABLED} + ecosystem_specific: + dotnet: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_DOTNET_SEARCH_BY_CPE_ENABLED} + golang: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_GOLANG_SEARCH_BY_CPE_ENABLED} + java: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_JAVA_SEARCH_BY_CPE_ENABLED} + javascript: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_JAVASCRIPT_SEARCH_BY_CPE_ENABLED} + python: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_PYTHON_SEARCH_BY_CPE_ENABLED} + ruby: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_RUBY_SEARCH_BY_CPE_ENABLED} + stock: + search: + by_cpe: + # Disabling search by CPE for the stock matcher will entirely 
disable binary-only matches and is not advised + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_STOCK_SEARCH_BY_CPE_ENABLED} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + reports: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + enable_graphiql: ${ANCHORE_ENTERPRISE_REPORTS_ENABLE_GRAPHIQL} + cycle_timers: + reports_scheduled_queries: 600 + max_async_execution_threads: ${ANCHORE_ENTERPRISE_REPORTS_MAX_ASYNC_EXECUTION_THREADS} + async_execution_timeout: ${ANCHORE_ENTERPRISE_REPORTS_ASYNC_EXECUTION_TIMEOUT} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + use_volume: false + + reports_worker: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + enable_data_ingress: ${ANCHORE_ENTERPRISE_REPORTS_ENABLE_DATA_INGRESS} + enable_data_egress: ${ANCHORE_ENTERPRISE_REPORTS_ENABLE_DATA_EGRESS} + data_egress_window: ${ANCHORE_ENTERPRISE_REPORTS_DATA_EGRESS_WINDOW} + data_refresh_max_workers: ${ANCHORE_ENTERPRISE_REPORTS_DATA_REFRESH_MAX_WORKERS} + data_load_max_workers: ${ANCHORE_ENTERPRISE_REPORTS_DATA_LOAD_MAX_WORKERS} + cycle_timers: + reports_extended_runtime_vuln_load: 1800 + reports_image_egress: 600 + reports_image_load: 600 + reports_image_refresh: 7200 + reports_metrics: 3600 + reports_runtime_inventory_load: 600 + reports_tag_egress: 600 + reports_tag_load: 600 + reports_tag_refresh: 7200 + runtime_report_generation: + inventory_images_by_vulnerability: true + vulnerabilities_by_k8s_namespace: ${ANCHORE_ENTERPRISE_REPORTS_VULNERABILITIES_BY_K8S_NAMESPACE} + vulnerabilities_by_k8s_container: ${ANCHORE_ENTERPRISE_REPORTS_VULNERABILITIES_BY_K8S_CONTAINER} + vulnerabilities_by_ecs_container: ${ANCHORE_ENTERPRISE_REPORTS_VULNERABILITIES_BY_ECS_CONTAINER} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + notifications: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + cycle_timers: + notifications: 30 + ui_url: ${ANCHORE_ENTERPRISE_UI_URL} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + kind: ConfigMap + metadata: + annotations: + bar: baz + foo: bar + labels: + app.kubernetes.io/instance: test-release + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: test-release-enterprise + app.kubernetes.io/part-of: anchore + app.kubernetes.io/version: 9.9.9 + bar: baz + foo: bar + helm.sh/chart: enterprise-9.9.9 + name: test-release-enterprise + namespace: test-namespace + 2: | + apiVersion: v1 + data: + config.yaml: | + # Anchore Object Store and Analysis Archive Migration configuration file, mounted from a configmap + # + service_dir: ${ANCHORE_SERVICE_DIR} + tmp_dir: ${ANCHORE_TMP_DIR} + log_level: ${ANCHORE_LOG_LEVEL} + + allow_awsecr_iam_auto: ${ANCHORE_ALLOW_ECR_IAM_AUTO} + host_id: "${ANCHORE_HOST_ID}" + internal_ssl_verify: ${ANCHORE_INTERNAL_SSL_VERIFY} + image_analyze_timeout_seconds: ${ANCHORE_IMAGE_ANALYZE_TIMEOUT_SECONDS} + + global_client_connect_timeout: ${ANCHORE_GLOBAL_CLIENT_CONNECT_TIMEOUT} + global_client_read_timeout: ${ANCHORE_GLOBAL_CLIENT_READ_TIMEOUT} + 
server_request_timeout_seconds: ${ANCHORE_GLOBAL_SERVER_REQUEST_TIMEOUT_SEC} + + license_file: ${ANCHORE_LICENSE_FILE} + auto_restart_services: false + + max_source_import_size_mb: ${ANCHORE_MAX_IMPORT_SOURCE_SIZE_MB} + max_import_content_size_mb: ${ANCHORE_MAX_IMPORT_CONTENT_SIZE_MB} + + max_compressed_image_size_mb: ${ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB} + + metrics: + enabled: ${ANCHORE_ENABLE_METRICS} + auth_disabled: ${ANCHORE_DISABLE_METRICS_AUTH} + + webhooks: + {} + + default_admin_password: "${ANCHORE_ADMIN_PASSWORD}" + default_admin_email: ${ANCHORE_ADMIN_EMAIL} + + keys: + secret: "${ANCHORE_SAML_SECRET}" + public_key_path: ${ANCHORE_AUTH_PRIVKEY} + private_key_path: ${ANCHORE_AUTH_PUBKEY} + + user_authentication: + oauth: + enabled: ${ANCHORE_OAUTH_ENABLED} + default_token_expiration_seconds: ${ANCHORE_OAUTH_TOKEN_EXPIRATION} + refresh_token_expiration_seconds: ${ANCHORE_OAUTH_REFRESH_TOKEN_EXPIRATION} + hashed_passwords: ${ANCHORE_AUTH_ENABLE_HASHED_PASSWORDS} + sso_require_existing_users: ${ANCHORE_SSO_REQUIRES_EXISTING_USERS} + allow_api_keys_for_saml_users: false + max_api_key_age_days: 365 + max_api_keys_per_user: 100 + remove_deleted_user_api_keys_older_than_days: 365 + + credentials: + database: + user: "${ANCHORE_DB_USER}" + password: "${ANCHORE_DB_PASSWORD}" + host: "${ANCHORE_DB_HOST}" + port: "${ANCHORE_DB_PORT}" + name: "${ANCHORE_DB_NAME}" + db_connect_args: + timeout: ${ANCHORE_DB_TIMEOUT} + ssl: ${ANCHORE_DB_SSL} + db_pool_size: ${ANCHORE_DB_POOL_SIZE} + db_pool_max_overflow: ${ANCHORE_DB_POOL_MAX_OVERFLOW} + + services: + apiext: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + analyzer: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + cycle_timer_seconds: 1 + cycle_timers: + image_analyzer: 1 + analyzer_driver: 'nodocker' + layer_cache_enable: ${ANCHORE_LAYER_CACHE_ENABLED} + layer_cache_max_gigabytes: ${ANCHORE_LAYER_CACHE_SIZE_GB} + enable_hints: ${ANCHORE_HINTS_ENABLED} + enable_owned_package_filtering: ${ANCHORE_OWNED_PACKAGE_FILTERING_ENABLED} + keep_image_analysis_tmpfiles: ${ANCHORE_KEEP_IMAGE_ANALYSIS_TMPFILES} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + catalog: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + cycle_timer_seconds: 1 + cycle_timers: + analyzer_queue: 1 + archive_tasks: 43200 + artifact_lifecycle_policy_tasks: 43200 + events_gc: 43200 + image_gc: 60 + image_watcher: 3600 + k8s_image_watcher: 150 + notifications: 30 + policy_bundle_sync: 300 + policy_eval: 3600 + repo_watcher: 60 + resource_metrics: 60 + service_watcher: 15 + vulnerability_scan: 14400 + event_log: + max_retention_age_days: 180 + notification: + enabled: false + level: + - error + runtime_inventory: + inventory_ttl_days: ${ANCHORE_ENTERPRISE_RUNTIME_INVENTORY_TTL_DAYS} + inventory_ingest_overwrite: ${ANCHORE_ENTERPRISE_RUNTIME_INVENTORY_INGEST_OVERWRITE} + image_gc: + max_worker_threads: ${ANCHORE_CATALOG_IMAGE_GC_WORKERS} + runtime_compliance: + object_store_bucket: "runtime_compliance_check" + 
down_analyzer_task_requeue: ${ANCHORE_ANALYZER_TASK_REQUEUE} + import_operation_expiration_days: ${ANCHORE_IMPORT_OPERATION_EXPIRATION_DAYS} + analysis_archive: + compression: + enabled: true + min_size_kbytes: 100 + enabled: true + storage_driver: + config: + access_key: itsa + bucket: analysisarchive + region: null + secret_key: test + url: http://myminio.mynamespace.svc.cluster.local:9000 + name: s3 + object_store: + compression: + enabled: true + min_size_kbytes: 100 + storage_driver: + config: {} + name: db + verify_content_digests: true + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + simplequeue: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + policy_engine: + enabled: true + require_auth: true + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + policy_evaluation_cache_ttl: ${ANCHORE_POLICY_EVAL_CACHE_TTL_SECONDS} + cycle_timer_seconds: 1 + cycle_timers: + feed_sync: 14400 + feed_sync_checker: 3600 + enable_package_db_load: ${ANCHORE_POLICY_ENGINE_ENABLE_PACKAGE_DB_LOAD} + vulnerabilities: + sync: + enabled: true + ssl_verify: ${ANCHORE_FEEDS_SSL_VERIFY} + connection_timeout_seconds: 3 + read_timeout_seconds: 60 + data: + grypedb: + enabled: true + url: http://test-release-feeds:8448/v2/databases/grypedb + packages: + enabled: ${ANCHORE_FEEDS_DRIVER_PACKAGES_ENABLED} + url: http://test-release-feeds:8448/v2/feeds + matching: + default: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_DEFAULT_SEARCH_BY_CPE_ENABLED} + ecosystem_specific: + dotnet: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_DOTNET_SEARCH_BY_CPE_ENABLED} + golang: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_GOLANG_SEARCH_BY_CPE_ENABLED} + java: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_JAVA_SEARCH_BY_CPE_ENABLED} + javascript: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_JAVASCRIPT_SEARCH_BY_CPE_ENABLED} + python: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_PYTHON_SEARCH_BY_CPE_ENABLED} + ruby: + search: + by_cpe: + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_RUBY_SEARCH_BY_CPE_ENABLED} + stock: + search: + by_cpe: + # Disabling search by CPE for the stock matcher will entirely disable binary-only matches and is not advised + enabled: ${ANCHORE_VULN_MATCHING_ECOSYSTEM_SPECIFIC_STOCK_SEARCH_BY_CPE_ENABLED} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + reports: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + enable_graphiql: ${ANCHORE_ENTERPRISE_REPORTS_ENABLE_GRAPHIQL} + cycle_timers: + reports_scheduled_queries: 600 + max_async_execution_threads: ${ANCHORE_ENTERPRISE_REPORTS_MAX_ASYNC_EXECUTION_THREADS} + async_execution_timeout: ${ANCHORE_ENTERPRISE_REPORTS_ASYNC_EXECUTION_TIMEOUT} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + use_volume: false + + reports_worker: + enabled: true + require_auth: true + endpoint_hostname: 
${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + enable_data_ingress: ${ANCHORE_ENTERPRISE_REPORTS_ENABLE_DATA_INGRESS} + enable_data_egress: ${ANCHORE_ENTERPRISE_REPORTS_ENABLE_DATA_EGRESS} + data_egress_window: ${ANCHORE_ENTERPRISE_REPORTS_DATA_EGRESS_WINDOW} + data_refresh_max_workers: ${ANCHORE_ENTERPRISE_REPORTS_DATA_REFRESH_MAX_WORKERS} + data_load_max_workers: ${ANCHORE_ENTERPRISE_REPORTS_DATA_LOAD_MAX_WORKERS} + cycle_timers: + reports_extended_runtime_vuln_load: 1800 + reports_image_egress: 600 + reports_image_load: 600 + reports_image_refresh: 7200 + reports_metrics: 3600 + reports_runtime_inventory_load: 600 + reports_tag_egress: 600 + reports_tag_load: 600 + reports_tag_refresh: 7200 + runtime_report_generation: + inventory_images_by_vulnerability: true + vulnerabilities_by_k8s_namespace: ${ANCHORE_ENTERPRISE_REPORTS_VULNERABILITIES_BY_K8S_NAMESPACE} + vulnerabilities_by_k8s_container: ${ANCHORE_ENTERPRISE_REPORTS_VULNERABILITIES_BY_K8S_CONTAINER} + vulnerabilities_by_ecs_container: ${ANCHORE_ENTERPRISE_REPORTS_VULNERABILITIES_BY_ECS_CONTAINER} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + + notifications: + enabled: true + require_auth: true + endpoint_hostname: ${ANCHORE_ENDPOINT_HOSTNAME} + listen: '0.0.0.0' + port: ${ANCHORE_PORT} + max_request_threads: ${ANCHORE_MAX_REQUEST_THREADS} + cycle_timers: + notifications: 30 + ui_url: ${ANCHORE_ENTERPRISE_UI_URL} + ssl_enable: ${ANCHORE_SSL_ENABLED} + ssl_cert: ${ANCHORE_SSL_CERT} + ssl_key: ${ANCHORE_SSL_KEY} + kind: ConfigMap + metadata: + annotations: + bar: baz + foo: bar + helm.sh/hook: pre-upgrade + helm.sh/hook-weight: "0" + labels: + app.kubernetes.io/instance: test-release + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: test-release-enterprise + app.kubernetes.io/part-of: anchore + app.kubernetes.io/version: 9.9.9 + bar: baz + foo: bar + helm.sh/chart: enterprise-9.9.9 + name: test-release-enterprise-999-osaa-migration-job + namespace: test-namespace diff --git a/stable/enterprise/tests/__snapshot__/prehook_upgrade_resources_test.yaml.snap b/stable/enterprise/tests/__snapshot__/prehook_upgrade_resources_test.yaml.snap index 90572fce..05e3ee23 100644 --- a/stable/enterprise/tests/__snapshot__/prehook_upgrade_resources_test.yaml.snap +++ b/stable/enterprise/tests/__snapshot__/prehook_upgrade_resources_test.yaml.snap @@ -1,3 +1,481 @@ +migration job should match snapshot: + 1: | + containers: + - command: + - /bin/bash + - -c + - |- + echo "checking destination config..." 
+ anchore-enterprise-manager objectstorage --db-connect postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" check /config/dest-config.yaml + echo "migration complete" + env: + - name: foo + value: bar + - name: bar + value: baz + - name: ANCHORE_ENDPOINT_HOSTNAME + value: test-release-enterprise-999-osaa-migration-job.test-namespace.svc.cluster.local + - name: ANCHORE_PORT + value: "null" + - name: ANCHORE_HOST_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + envFrom: + - configMapRef: + name: test-release-enterprise-config-env-vars + - secretRef: + name: test-release-enterprise + image: docker.io/anchore/enterprise:v5.5.0 + imagePullPolicy: IfNotPresent + name: migrate-analysis-archive + volumeMounts: + - mountPath: /home/anchore/license.yaml + name: anchore-license + subPath: license.yaml + - mountPath: /config/config.yaml + name: config-volume + subPath: config.yaml + - mountPath: /scripts + name: anchore-scripts + - mountPath: /config/dest-config.yaml + name: dest-config + subPath: dest-config.yaml + imagePullSecrets: + - name: anchore-enterprise-pullcreds + initContainers: + - args: + - | + kubectl scale deployments --all --replicas=0 -l app.kubernetes.io/name=test-release-enterprise; + while [[ $(kubectl get pods -l app.kubernetes.io/name=test-release-enterprise --field-selector=status.phase=Running --no-headers | tee /dev/stderr | wc -l) -gt 0 ]]; do + echo 'waiting for pods to go down...' && sleep 5; + done + command: + - /bin/bash + - -c + image: bitnami/kubectl:1.27 + name: scale-down-anchore + - args: + - | + while true; do + CONNSTR=postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" + if [[ ${ANCHORE_DB_SSL_MODE} != null ]]; then + CONNSTR=${CONNSTR}?sslmode=${ANCHORE_DB_SSL_MODE} + fi + if [[ ${ANCHORE_DB_SSL_ROOT_CERT} != null ]]; then + CONNSTR=${CONNSTR}\&sslrootcert=${ANCHORE_DB_SSL_ROOT_CERT} + fi + err=$(anchore-enterprise-manager db --db-connect ${CONNSTR} pre-upgrade-check 2>&1 > /dev/null) + if [[ !$err ]]; then + echo "Database is ready" + exit 0 + fi + echo "Database is not ready yet, sleeping 10 seconds..." + sleep 10 + done + command: + - /bin/bash + - -c + env: + - name: foo + value: bar + - name: bar + value: baz + - name: ANCHORE_ENDPOINT_HOSTNAME + value: test-release-enterprise-999-osaa-migration-job.test-namespace.svc.cluster.local + - name: ANCHORE_PORT + value: "null" + - name: ANCHORE_HOST_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + image: docker.io/anchore/enterprise:v5.5.0 + imagePullPolicy: IfNotPresent + name: wait-for-db + restartPolicy: Never + securityContext: + fsGroup: 1000 + runAsGroup: 1000 + runAsUser: 1000 + serviceAccountName: test-release-enterprise-upgrade-sa + volumes: + - name: anchore-license + secret: + secretName: anchore-enterprise-license + - configMap: + defaultMode: 493 + name: test-release-enterprise-scripts + name: anchore-scripts + - configMap: + name: test-release-enterprise + name: config-volume + - configMap: + items: + - key: config.yaml + path: dest-config.yaml + name: test-release-enterprise-999-osaa-migration-job + name: dest-config +migration job should match snapshot analysisArchiveMigration and objectStoreMigration to true: + 1: | + containers: + - command: + - /bin/bash + - -c + - |- + echo "checking destination config..." 
+ anchore-enterprise-manager objectstorage --db-connect postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" check /config/dest-config.yaml + echo "running object store migration" + anchore-enterprise-manager objectstorage --db-connect postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" migrate /config/config.yaml /config/dest-config.yaml --dontask + echo "running analysis archive migration" + echo "running in to_analysis_archive mode (migrating source to dest using driver located in dest analysis archive section)" + anchore-enterprise-manager objectstorage --db-connect postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" migrate --to-analysis-archive --bucket analysis_archive /config/config.yaml /config/dest-config.yaml --dontask + echo "migration complete" + env: + - name: foo + value: bar + - name: bar + value: baz + - name: ANCHORE_ENDPOINT_HOSTNAME + value: test-release-enterprise-999-osaa-migration-job.test-namespace.svc.cluster.local + - name: ANCHORE_PORT + value: "null" + - name: ANCHORE_HOST_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + envFrom: + - configMapRef: + name: test-release-enterprise-config-env-vars + - secretRef: + name: test-release-enterprise + image: docker.io/anchore/enterprise:v5.5.0 + imagePullPolicy: IfNotPresent + name: migrate-analysis-archive + volumeMounts: + - mountPath: /home/anchore/license.yaml + name: anchore-license + subPath: license.yaml + - mountPath: /config/config.yaml + name: config-volume + subPath: config.yaml + - mountPath: /scripts + name: anchore-scripts + - mountPath: /config/dest-config.yaml + name: dest-config + subPath: dest-config.yaml + imagePullSecrets: + - name: anchore-enterprise-pullcreds + initContainers: + - args: + - | + kubectl scale deployments --all --replicas=0 -l app.kubernetes.io/name=test-release-enterprise; + while [[ $(kubectl get pods -l app.kubernetes.io/name=test-release-enterprise --field-selector=status.phase=Running --no-headers | tee /dev/stderr | wc -l) -gt 0 ]]; do + echo 'waiting for pods to go down...' && sleep 5; + done + command: + - /bin/bash + - -c + image: bitnami/kubectl:1.27 + name: scale-down-anchore + - args: + - | + while true; do + CONNSTR=postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" + if [[ ${ANCHORE_DB_SSL_MODE} != null ]]; then + CONNSTR=${CONNSTR}?sslmode=${ANCHORE_DB_SSL_MODE} + fi + if [[ ${ANCHORE_DB_SSL_ROOT_CERT} != null ]]; then + CONNSTR=${CONNSTR}\&sslrootcert=${ANCHORE_DB_SSL_ROOT_CERT} + fi + err=$(anchore-enterprise-manager db --db-connect ${CONNSTR} pre-upgrade-check 2>&1 > /dev/null) + if [[ !$err ]]; then + echo "Database is ready" + exit 0 + fi + echo "Database is not ready yet, sleeping 10 seconds..." 
+ sleep 10 + done + command: + - /bin/bash + - -c + env: + - name: foo + value: bar + - name: bar + value: baz + - name: ANCHORE_ENDPOINT_HOSTNAME + value: test-release-enterprise-999-osaa-migration-job.test-namespace.svc.cluster.local + - name: ANCHORE_PORT + value: "null" + - name: ANCHORE_HOST_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + image: docker.io/anchore/enterprise:v5.5.0 + imagePullPolicy: IfNotPresent + name: wait-for-db + restartPolicy: Never + securityContext: + fsGroup: 1000 + runAsGroup: 1000 + runAsUser: 1000 + serviceAccountName: test-release-enterprise-upgrade-sa + volumes: + - name: anchore-license + secret: + secretName: anchore-enterprise-license + - configMap: + defaultMode: 493 + name: test-release-enterprise-scripts + name: anchore-scripts + - configMap: + name: test-release-enterprise + name: config-volume + - configMap: + items: + - key: config.yaml + path: dest-config.yaml + name: test-release-enterprise-999-osaa-migration-job + name: dest-config +migration job should match snapshot analysisArchiveMigration to true: + 1: | + containers: + - command: + - /bin/bash + - -c + - |- + echo "checking destination config..." + anchore-enterprise-manager objectstorage --db-connect postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" check /config/dest-config.yaml + echo "running analysis archive migration" + echo "running in to_analysis_archive mode (migrating source to dest using driver located in dest analysis archive section)" + anchore-enterprise-manager objectstorage --db-connect postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" migrate --to-analysis-archive --bucket analysis_archive /config/config.yaml /config/dest-config.yaml --dontask + echo "migration complete" + env: + - name: foo + value: bar + - name: bar + value: baz + - name: ANCHORE_ENDPOINT_HOSTNAME + value: test-release-enterprise-999-osaa-migration-job.test-namespace.svc.cluster.local + - name: ANCHORE_PORT + value: "null" + - name: ANCHORE_HOST_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + envFrom: + - configMapRef: + name: test-release-enterprise-config-env-vars + - secretRef: + name: test-release-enterprise + image: docker.io/anchore/enterprise:v5.5.0 + imagePullPolicy: IfNotPresent + name: migrate-analysis-archive + volumeMounts: + - mountPath: /home/anchore/license.yaml + name: anchore-license + subPath: license.yaml + - mountPath: /config/config.yaml + name: config-volume + subPath: config.yaml + - mountPath: /scripts + name: anchore-scripts + - mountPath: /config/dest-config.yaml + name: dest-config + subPath: dest-config.yaml + imagePullSecrets: + - name: anchore-enterprise-pullcreds + initContainers: + - args: + - | + kubectl scale deployments --all --replicas=0 -l app.kubernetes.io/name=test-release-enterprise; + while [[ $(kubectl get pods -l app.kubernetes.io/name=test-release-enterprise --field-selector=status.phase=Running --no-headers | tee /dev/stderr | wc -l) -gt 0 ]]; do + echo 'waiting for pods to go down...' 
&& sleep 5; + done + command: + - /bin/bash + - -c + image: bitnami/kubectl:1.27 + name: scale-down-anchore + - args: + - | + while true; do + CONNSTR=postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" + if [[ ${ANCHORE_DB_SSL_MODE} != null ]]; then + CONNSTR=${CONNSTR}?sslmode=${ANCHORE_DB_SSL_MODE} + fi + if [[ ${ANCHORE_DB_SSL_ROOT_CERT} != null ]]; then + CONNSTR=${CONNSTR}\&sslrootcert=${ANCHORE_DB_SSL_ROOT_CERT} + fi + err=$(anchore-enterprise-manager db --db-connect ${CONNSTR} pre-upgrade-check 2>&1 > /dev/null) + if [[ !$err ]]; then + echo "Database is ready" + exit 0 + fi + echo "Database is not ready yet, sleeping 10 seconds..." + sleep 10 + done + command: + - /bin/bash + - -c + env: + - name: foo + value: bar + - name: bar + value: baz + - name: ANCHORE_ENDPOINT_HOSTNAME + value: test-release-enterprise-999-osaa-migration-job.test-namespace.svc.cluster.local + - name: ANCHORE_PORT + value: "null" + - name: ANCHORE_HOST_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + image: docker.io/anchore/enterprise:v5.5.0 + imagePullPolicy: IfNotPresent + name: wait-for-db + restartPolicy: Never + securityContext: + fsGroup: 1000 + runAsGroup: 1000 + runAsUser: 1000 + serviceAccountName: test-release-enterprise-upgrade-sa + volumes: + - name: anchore-license + secret: + secretName: anchore-enterprise-license + - configMap: + defaultMode: 493 + name: test-release-enterprise-scripts + name: anchore-scripts + - configMap: + name: test-release-enterprise + name: config-volume + - configMap: + items: + - key: config.yaml + path: dest-config.yaml + name: test-release-enterprise-999-osaa-migration-job + name: dest-config +migration job should match snapshot objectStoreMigration to true: + 1: | + containers: + - command: + - /bin/bash + - -c + - |- + echo "checking destination config..." 
+ anchore-enterprise-manager objectstorage --db-connect postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" check /config/dest-config.yaml + echo "running object store migration" + anchore-enterprise-manager objectstorage --db-connect postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" migrate /config/config.yaml /config/dest-config.yaml --dontask + echo "migration complete" + env: + - name: foo + value: bar + - name: bar + value: baz + - name: ANCHORE_ENDPOINT_HOSTNAME + value: test-release-enterprise-999-osaa-migration-job.test-namespace.svc.cluster.local + - name: ANCHORE_PORT + value: "null" + - name: ANCHORE_HOST_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + envFrom: + - configMapRef: + name: test-release-enterprise-config-env-vars + - secretRef: + name: test-release-enterprise + image: docker.io/anchore/enterprise:v5.5.0 + imagePullPolicy: IfNotPresent + name: migrate-analysis-archive + volumeMounts: + - mountPath: /home/anchore/license.yaml + name: anchore-license + subPath: license.yaml + - mountPath: /config/config.yaml + name: config-volume + subPath: config.yaml + - mountPath: /scripts + name: anchore-scripts + - mountPath: /config/dest-config.yaml + name: dest-config + subPath: dest-config.yaml + imagePullSecrets: + - name: anchore-enterprise-pullcreds + initContainers: + - args: + - | + kubectl scale deployments --all --replicas=0 -l app.kubernetes.io/name=test-release-enterprise; + while [[ $(kubectl get pods -l app.kubernetes.io/name=test-release-enterprise --field-selector=status.phase=Running --no-headers | tee /dev/stderr | wc -l) -gt 0 ]]; do + echo 'waiting for pods to go down...' && sleep 5; + done + command: + - /bin/bash + - -c + image: bitnami/kubectl:1.27 + name: scale-down-anchore + - args: + - | + while true; do + CONNSTR=postgresql://"${ANCHORE_DB_USER}":"${ANCHORE_DB_PASSWORD}"@"${ANCHORE_DB_HOST}":"${ANCHORE_DB_PORT}"/"${ANCHORE_DB_NAME}" + if [[ ${ANCHORE_DB_SSL_MODE} != null ]]; then + CONNSTR=${CONNSTR}?sslmode=${ANCHORE_DB_SSL_MODE} + fi + if [[ ${ANCHORE_DB_SSL_ROOT_CERT} != null ]]; then + CONNSTR=${CONNSTR}\&sslrootcert=${ANCHORE_DB_SSL_ROOT_CERT} + fi + err=$(anchore-enterprise-manager db --db-connect ${CONNSTR} pre-upgrade-check 2>&1 > /dev/null) + if [[ !$err ]]; then + echo "Database is ready" + exit 0 + fi + echo "Database is not ready yet, sleeping 10 seconds..." 
+ sleep 10 + done + command: + - /bin/bash + - -c + env: + - name: foo + value: bar + - name: bar + value: baz + - name: ANCHORE_ENDPOINT_HOSTNAME + value: test-release-enterprise-999-osaa-migration-job.test-namespace.svc.cluster.local + - name: ANCHORE_PORT + value: "null" + - name: ANCHORE_HOST_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + image: docker.io/anchore/enterprise:v5.5.0 + imagePullPolicy: IfNotPresent + name: wait-for-db + restartPolicy: Never + securityContext: + fsGroup: 1000 + runAsGroup: 1000 + runAsUser: 1000 + serviceAccountName: test-release-enterprise-upgrade-sa + volumes: + - name: anchore-license + secret: + secretName: anchore-enterprise-license + - configMap: + defaultMode: 493 + name: test-release-enterprise-scripts + name: anchore-scripts + - configMap: + name: test-release-enterprise + name: config-volume + - configMap: + items: + - key: config.yaml + path: dest-config.yaml + name: test-release-enterprise-999-osaa-migration-job + name: dest-config rbac should match snapshot: 1: | apiVersion: v1 @@ -143,6 +621,6 @@ should render proper initContainers: valueFrom: fieldRef: fieldPath: metadata.name - image: docker.io/anchore/enterprise:v5.4.1 + image: docker.io/anchore/enterprise:v5.5.0 imagePullPolicy: IfNotPresent name: wait-for-db diff --git a/stable/enterprise/tests/api_resources_test.yaml b/stable/enterprise/tests/api_resources_test.yaml index 81912250..93fc7953 100644 --- a/stable/enterprise/tests/api_resources_test.yaml +++ b/stable/enterprise/tests/api_resources_test.yaml @@ -41,6 +41,7 @@ tests: - it: should render component annotations template: api_deployment.yaml + documentIndex: 0 set: api.annotations: api: test @@ -457,3 +458,40 @@ tests: app.kubernetes.io/name: test-release-enterprise app.kubernetes.io/component: api count: 1 + + - it: should render service annotations + template: api_deployment.yaml + documentIndex: 1 + set: + api: + service: + annotations: + bar: baz + foo: bar + asserts: + - isSubset: + path: metadata.annotations + content: + bar: baz + foo: bar + + - it: should render service annotations and global annotations + template: api_deployment.yaml + documentIndex: 1 + set: + api: + service: + annotations: + s1: a1 + s2: a2 + annotations: + g1: v1 + g2: v2 + asserts: + - isSubset: + path: metadata.annotations + content: + g1: v1 + g2: v2 + s1: a1 + s2: a2 diff --git a/stable/enterprise/tests/catalog_resources_test.yaml b/stable/enterprise/tests/catalog_resources_test.yaml index 4e4eb2b0..6e1c9ee2 100644 --- a/stable/enterprise/tests/catalog_resources_test.yaml +++ b/stable/enterprise/tests/catalog_resources_test.yaml @@ -41,6 +41,7 @@ tests: - it: should render component annotations template: catalog_deployment.yaml + documentIndex: 0 set: catalog.annotations: catalog: test @@ -57,7 +58,7 @@ tests: catalog: test test: foobar template: catalog_deployment.yaml - documentIndex: 0 + - it: should render component matchLabels template: catalog_deployment.yaml @@ -470,3 +471,40 @@ tests: app.kubernetes.io/name: test-release-enterprise app.kubernetes.io/component: catalog count: 1 + + - it: should render service annotations + template: catalog_deployment.yaml + documentIndex: 1 + set: + catalog: + service: + annotations: + bar: baz + foo: bar + asserts: + - isSubset: + path: metadata.annotations + content: + bar: baz + foo: bar + + - it: should render service annotations and global annotations + template: catalog_deployment.yaml + documentIndex: 1 + set: + catalog: + service: + annotations: + s1: a1 + s2: a2 + annotations: + g1: v1 
+ g2: v2 + asserts: + - isSubset: + path: metadata.annotations + content: + g1: v1 + g2: v2 + s1: a1 + s2: a2 diff --git a/stable/enterprise/tests/notifications_resources_test.yaml b/stable/enterprise/tests/notifications_resources_test.yaml index 996ec1dd..d36d76a5 100644 --- a/stable/enterprise/tests/notifications_resources_test.yaml +++ b/stable/enterprise/tests/notifications_resources_test.yaml @@ -40,6 +40,7 @@ tests: - it: should render component annotations template: notifications_deployment.yaml + documentIndex: 0 set: notifications.annotations: notifications: test @@ -56,7 +57,7 @@ tests: notifications: test test: foobar template: notifications_deployment.yaml - documentIndex: 0 + - it: should render component matchLabels template: notifications_deployment.yaml @@ -375,3 +376,40 @@ tests: app.kubernetes.io/name: test-release-enterprise app.kubernetes.io/component: notifications count: 1 + + - it: should render service annotations + template: notifications_deployment.yaml + documentIndex: 1 + set: + notifications: + service: + annotations: + bar: baz + foo: bar + asserts: + - isSubset: + path: metadata.annotations + content: + bar: baz + foo: bar + + - it: should render service annotations and global annotations + template: notifications_deployment.yaml + documentIndex: 1 + set: + notifications: + service: + annotations: + s1: a1 + s2: a2 + annotations: + g1: v1 + g2: v2 + asserts: + - isSubset: + path: metadata.annotations + content: + g1: v1 + g2: v2 + s1: a1 + s2: a2 diff --git a/stable/enterprise/tests/osaa_configmap_test.yaml b/stable/enterprise/tests/osaa_configmap_test.yaml new file mode 100644 index 00000000..b99c8388 --- /dev/null +++ b/stable/enterprise/tests/osaa_configmap_test.yaml @@ -0,0 +1,52 @@ +suite: OSAA ConfigMap Tests +templates: + - templates/osaa_configmap.yaml + - templates/anchore_configmap.yaml +set: + osaaMigrationJob: + enabled: true + analysisArchiveMigration: + run: true + bucket: "analysis_archive" + mode: to_analysis_archive + analysis_archive: + enabled: true + compression: + enabled: true + min_size_kbytes: 100 + storage_driver: + name: s3 + config: + access_key: itsa + secret_key: test + url: 'http://myminio.mynamespace.svc.cluster.local:9000' + region: null + bucket: analysisarchive + objectStoreMigration: + run: false + object_store: + verify_content_digests: true + compression: + enabled: false + min_size_kbytes: 100 + storage_driver: + name: s3 + config: + access_key: itsa + secret_key: test + url: 'http://myminio.mynamespace.svc.cluster.local:9000' + region: null + bucket: objectstore +values: + - values.yaml +release: + name: test-release + namespace: test-namespace +chart: + version: 9.9.9 + appVersion: 9.9.9 + +tests: + - it: should render the configmaps for osaa migration if enabled + asserts: + - matchSnapshot: {} diff --git a/stable/enterprise/tests/policyengine_resources_test.yaml b/stable/enterprise/tests/policyengine_resources_test.yaml index 8fb94526..d7c1538a 100644 --- a/stable/enterprise/tests/policyengine_resources_test.yaml +++ b/stable/enterprise/tests/policyengine_resources_test.yaml @@ -40,6 +40,7 @@ tests: - it: should render component annotations template: policyengine_deployment.yaml + documentIndex: 0 set: policyEngine.annotations: policyEngine: test @@ -56,7 +57,7 @@ tests: policyEngine: test test: foobar template: policyengine_deployment.yaml - documentIndex: 0 + - it: should render component matchLabels template: policyengine_deployment.yaml @@ -429,3 +430,40 @@ tests: app.kubernetes.io/name: test-release-enterprise 
app.kubernetes.io/component: policyengine count: 1 + + - it: should render service annotations + template: policyengine_deployment.yaml + documentIndex: 1 + set: + policyEngine: + service: + annotations: + bar: baz + foo: bar + asserts: + - isSubset: + path: metadata.annotations + content: + bar: baz + foo: bar + + - it: should render service annotations and global annotations + template: policyengine_deployment.yaml + documentIndex: 1 + set: + policyEngine: + service: + annotations: + s1: a1 + s2: a2 + annotations: + g1: v1 + g2: v2 + asserts: + - isSubset: + path: metadata.annotations + content: + g1: v1 + g2: v2 + s1: a1 + s2: a2 diff --git a/stable/enterprise/tests/prehook_upgrade_resources_test.yaml b/stable/enterprise/tests/prehook_upgrade_resources_test.yaml index 35585f40..079a96c5 100644 --- a/stable/enterprise/tests/prehook_upgrade_resources_test.yaml +++ b/stable/enterprise/tests/prehook_upgrade_resources_test.yaml @@ -3,6 +3,7 @@ templates: - templates/hooks/pre-upgrade/upgrade_job.yaml - templates/hooks/pre-upgrade/upgrade_rbac.yaml - anchore_secret.yaml + - templates/hooks/pre-upgrade/object_store_analysis_archive_migration_job.yaml values: - values.yaml release: @@ -64,7 +65,7 @@ tests: namespace: test-namespace documentIndex: 2 - - it: pre-hook job does not get created when pre-upgrade hook is enabled + - it: pre-hook job does not get created when post-upgrade hook is enabled templates: *upgrade-resources set: upgradeJob: @@ -388,3 +389,196 @@ tests: mountPath: /mnt/global-extra-vol readOnly: false count: 1 + + - it: should render migration job if enabled + template: templates/hooks/pre-upgrade/object_store_analysis_archive_migration_job.yaml + set: + osaaMigrationJob: + enabled: true + asserts: + - hasDocuments: + count: 1 + + - it: migration job should match snapshot + template: templates/hooks/pre-upgrade/object_store_analysis_archive_migration_job.yaml + set: + osaaMigrationJob: + enabled: true + analysisArchiveMigration: + run: false + objectStoreMigration: + run: false + asserts: + - matchSnapshot: + path: spec.template.spec + + - it: migration job should match snapshot analysisArchiveMigration to true + template: templates/hooks/pre-upgrade/object_store_analysis_archive_migration_job.yaml + set: + osaaMigrationJob: + enabled: true + analysisArchiveMigration: + run: true + objectStoreMigration: + run: false + asserts: + - matchSnapshot: + path: spec.template.spec + + - it: migration job should match snapshot objectStoreMigration to true + template: templates/hooks/pre-upgrade/object_store_analysis_archive_migration_job.yaml + set: + osaaMigrationJob: + enabled: true + analysisArchiveMigration: + run: false + objectStoreMigration: + run: true + asserts: + - matchSnapshot: + path: spec.template.spec + + - it: migration job should match snapshot analysisArchiveMigration and objectStoreMigration to true + template: templates/hooks/pre-upgrade/object_store_analysis_archive_migration_job.yaml + set: + osaaMigrationJob: + enabled: true + analysisArchiveMigration: + run: true + objectStoreMigration: + run: true + asserts: + - matchSnapshot: + path: spec.template.spec + + - it: migration job should render proper analysis archive migration command if analysisArchiveMigration.run is true + template: templates/hooks/pre-upgrade/object_store_analysis_archive_migration_job.yaml + set: + osaaMigrationJob: + enabled: true + analysisArchiveMigration: + run: true + asserts: + - matchRegex: + path: spec.template.spec.containers[0].command[2] + pattern: check /config/dest-config.yaml + 
count: 1 + - matchRegex: + path: spec.template.spec.containers[0].command[2] + pattern: migrate --to-analysis-archive --bucket analysis_archive /config/config.yaml /config/dest-config.yaml --dontask + count: 1 + - notMatchRegex: + path: spec.template.spec.containers[0].command[2] + pattern: migrate /config/config.yaml /config/dest-config.yaml --dontask + count: 1 + + - it: migration job should render proper object store migration command if objectStoreMigration.run is true + template: templates/hooks/pre-upgrade/object_store_analysis_archive_migration_job.yaml + set: + osaaMigrationJob: + enabled: true + objectStoreMigration: + run: true + asserts: + - matchRegex: + path: spec.template.spec.containers[0].command[2] + pattern: check /config/dest-config.yaml + count: 1 + - matchRegex: + path: spec.template.spec.containers[0].command[2] + pattern: migrate /config/config.yaml /config/dest-config.yaml --dontask + count: 1 + - notMatchRegex: + path: spec.template.spec.containers[0].command[2] + pattern: migrate --to-analysis-archive --bucket analysis_archive /config/config.yaml /config/dest-config.yaml --dontask + count: 1 + + - it: migration job should render proper migration commands if both run flags are true + template: templates/hooks/pre-upgrade/object_store_analysis_archive_migration_job.yaml + set: + osaaMigrationJob: + enabled: true + objectStoreMigration: + run: true + analysisArchiveMigration: + run: true + asserts: + - matchRegex: + path: spec.template.spec.containers[0].command[2] + pattern: check /config/dest-config.yaml + count: 1 + - matchRegex: + path: spec.template.spec.containers[0].command[2] + pattern: migrate /config/config.yaml /config/dest-config.yaml --dontask + count: 1 + - matchRegex: + path: spec.template.spec.containers[0].command[2] + pattern: migrate --to-analysis-archive --bucket analysis_archive /config/config.yaml /config/dest-config.yaml --dontask + count: 1 + + - it: migration job should render proper analysis archive migration command if options are set + template: templates/hooks/pre-upgrade/object_store_analysis_archive_migration_job.yaml + set: + osaaMigrationJob: + enabled: true + analysisArchiveMigration: + run: true + bucket: custom-bucket + mode: from_analysis_archive + asserts: + - matchRegex: + path: spec.template.spec.containers[0].command[2] + pattern: check /config/dest-config.yaml + count: 1 + - matchRegex: + path: spec.template.spec.containers[0].command[2] + pattern: migrate --from-analysis-archive --bucket custom-bucket /config/config.yaml /config/dest-config.yaml --dontask + count: 1 + + - it: migration job should not run the analysis archive migration command if an invalid mode is set + template: templates/hooks/pre-upgrade/object_store_analysis_archive_migration_job.yaml + set: + osaaMigrationJob: + enabled: true + analysisArchiveMigration: + run: true + bucket: custom-bucket + mode: yolo + asserts: + - matchRegex: + path: spec.template.spec.containers[0].command[2] + pattern: check /config/dest-config.yaml + count: 1 + - notMatchRegex: + path: spec.template.spec.containers[0].command[2] + pattern: migrate --from-analysis-archive + count: 1 + - notMatchRegex: + path: spec.template.spec.containers[0].command[2] + pattern: migrate --to-analysis-archive + count: 1 + - notMatchRegex: + path: spec.template.spec.containers[0].command[2] + pattern: migrate + count: 1 + + - it: migration job should use the upgrade job's service account name + template: templates/hooks/pre-upgrade/object_store_analysis_archive_migration_job.yaml + set: +
osaaMigrationJob: + enabled: true + asserts: + - equal: + path: spec.template.spec.serviceAccountName + value: test-release-enterprise-upgrade-sa + + - it: migration job should use the component level service account name if set + template: templates/hooks/pre-upgrade/object_store_analysis_archive_migration_job.yaml + set: + osaaMigrationJob: + enabled: true + serviceAccountName: test-service-account + asserts: + - equal: + path: spec.template.spec.serviceAccountName + value: test-service-account diff --git a/stable/enterprise/tests/reports_resources_test.yaml b/stable/enterprise/tests/reports_resources_test.yaml index 9fa668b4..543a7869 100644 --- a/stable/enterprise/tests/reports_resources_test.yaml +++ b/stable/enterprise/tests/reports_resources_test.yaml @@ -40,6 +40,7 @@ tests: - it: should render component annotations template: reports_deployment.yaml + documentIndex: 0 set: reports.annotations: reports: test @@ -56,7 +57,6 @@ tests: reports: test test: foobar template: reports_deployment.yaml - documentIndex: 0 - it: should render component matchLabels template: reports_deployment.yaml @@ -490,3 +490,40 @@ tests: mountPath: /mnt/global-extra-vol readOnly: false count: 1 + + - it: should render service annotations + template: reports_deployment.yaml + documentIndex: 1 + set: + reports: + service: + annotations: + bar: baz + foo: bar + asserts: + - isSubset: + path: metadata.annotations + content: + bar: baz + foo: bar + + - it: should render service annotations and global annotations + template: reports_deployment.yaml + documentIndex: 1 + set: + reports: + service: + annotations: + s1: a1 + s2: a2 + annotations: + g1: v1 + g2: v2 + asserts: + - isSubset: + path: metadata.annotations + content: + g1: v1 + g2: v2 + s1: a1 + s2: a2 diff --git a/stable/enterprise/tests/reportsworker_resources_test.yaml b/stable/enterprise/tests/reportsworker_resources_test.yaml index ceabd3b5..81d6da6d 100644 --- a/stable/enterprise/tests/reportsworker_resources_test.yaml +++ b/stable/enterprise/tests/reportsworker_resources_test.yaml @@ -40,6 +40,7 @@ tests: - it: should render component annotations template: reportsworker_deployment.yaml + documentIndex: 0 set: reportsWorker.annotations: reports: test @@ -56,7 +57,6 @@ tests: reports: test test: foobar template: reportsworker_deployment.yaml - documentIndex: 0 - it: should render component matchLabels template: reportsworker_deployment.yaml @@ -375,3 +375,40 @@ tests: mountPath: /mnt/global-extra-vol readOnly: false count: 1 + + - it: should render service annotations + template: reportsworker_deployment.yaml + documentIndex: 1 + set: + reportsWorker: + service: + annotations: + bar: baz + foo: bar + asserts: + - isSubset: + path: metadata.annotations + content: + bar: baz + foo: bar + + - it: should render service annotations and global annotations + template: reportsworker_deployment.yaml + documentIndex: 1 + set: + reportsWorker: + service: + annotations: + s1: a1 + s2: a2 + annotations: + g1: v1 + g2: v2 + asserts: + - isSubset: + path: metadata.annotations + content: + g1: v1 + g2: v2 + s1: a1 + s2: a2 diff --git a/stable/enterprise/tests/simplequeue_resources_test.yaml b/stable/enterprise/tests/simplequeue_resources_test.yaml index fd9d71f6..8c18e20a 100644 --- a/stable/enterprise/tests/simplequeue_resources_test.yaml +++ b/stable/enterprise/tests/simplequeue_resources_test.yaml @@ -40,6 +40,7 @@ tests: - it: should render component annotations template: simplequeue_deployment.yaml + documentIndex: 0 set: simpleQueue.annotations: simplequeue: 
test @@ -56,7 +57,6 @@ tests: simplequeue: test test: foobar template: simplequeue_deployment.yaml - documentIndex: 0 - it: should render component matchLabels template: simplequeue_deployment.yaml @@ -350,3 +350,40 @@ tests: mountPath: /mnt/global-extra-vol readOnly: false count: 1 + + - it: should render service annotations + template: simplequeue_deployment.yaml + documentIndex: 1 + set: + simpleQueue: + service: + annotations: + bar: baz + foo: bar + asserts: + - isSubset: + path: metadata.annotations + content: + bar: baz + foo: bar + + - it: should render service annotations and global annotations + template: simplequeue_deployment.yaml + documentIndex: 1 + set: + simpleQueue: + service: + annotations: + s1: a1 + s2: a2 + annotations: + g1: v1 + g2: v2 + asserts: + - isSubset: + path: metadata.annotations + content: + g1: v1 + g2: v2 + s1: a1 + s2: a2 diff --git a/stable/enterprise/tests/ui_resources_test.yaml b/stable/enterprise/tests/ui_resources_test.yaml index e28cfd4d..29e986e5 100644 --- a/stable/enterprise/tests/ui_resources_test.yaml +++ b/stable/enterprise/tests/ui_resources_test.yaml @@ -40,6 +40,7 @@ tests: - it: should render component annotations template: ui_deployment.yaml + documentIndex: 0 set: ui.annotations: ui: test @@ -56,7 +57,6 @@ tests: ui: test test: foobar template: ui_deployment.yaml - documentIndex: 0 - it: should render component matchLabels template: ui_deployment.yaml @@ -149,6 +149,32 @@ tests: pattern: ^/docker-entrypoint\.sh node \/home\/node\/aui\/build\/server.js$ count: 1 + - it: should render component entrypoint args with feature flags + template: ui_deployment.yaml + documentIndex: 0 + set: + ui.extraEnv: + - name: ANCHORE_FEATURE_FLAG + value: "test" + asserts: + - matchRegex: + path: spec.template.spec.containers[0].args[0] + pattern: ^/docker-entrypoint\.sh node \/home\/node\/aui\/build\/server.js -f test$ + count: 1 + + - it: should not render component entrypoint args with feature flags if ANCHORE_FEATURE_FLAG extraEnv isn't set + template: ui_deployment.yaml + documentIndex: 0 + set: + ui.extraEnv: + - name: NOT_ANCHORE_FEATURE_FLAG + value: "test" + asserts: + - matchRegex: + path: spec.template.spec.containers[0].args[0] + pattern: ^/docker-entrypoint\.sh node \/home\/node\/aui\/build\/server.js$ + count: 1 + - it: should render ui component environment variables template: ui_deployment.yaml documentIndex: 0 @@ -386,3 +412,40 @@ tests: mountPath: /mnt/global-extra-vol readOnly: false count: 1 + + - it: should render service annotations + template: ui_deployment.yaml + documentIndex: 1 + set: + ui: + service: + annotations: + bar: baz + foo: bar + asserts: + - isSubset: + path: metadata.annotations + content: + bar: baz + foo: bar + + - it: should render service annotations and global annotations + template: ui_deployment.yaml + documentIndex: 1 + set: + ui: + service: + annotations: + s1: a1 + s2: a2 + annotations: + g1: v1 + g2: v2 + asserts: + - isSubset: + path: metadata.annotations + content: + g1: v1 + g2: v2 + s1: a1 + s2: a2 diff --git a/stable/enterprise/values.yaml b/stable/enterprise/values.yaml index 1edb0cea..c73e8371 100644 --- a/stable/enterprise/values.yaml +++ b/stable/enterprise/values.yaml @@ -5,6 +5,7 @@ global: ## @param global.fullnameOverride overrides the fullname set on resources ## + ## fullnameOverride: "" ## @param global.nameOverride overrides the name set on resources @@ -18,7 +19,7 @@ global: ## @param image Image used for all Anchore Enterprise deployments, excluding Anchore UI ## -image: 
docker.io/anchore/enterprise:v5.4.1 +image: docker.io/anchore/enterprise:v5.5.0 ## @param imagePullPolicy Image pull policy used by all deployments ## ref: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy @@ -222,11 +223,52 @@ anchoreConfig: ## service_dir: /anchore_service - ## @param anchoreConfig.log_level The log level for Anchore services - ## options available: FATAL, ERROR, WARN, INFO, DEBUG, SPEW + ## @param anchoreConfig.log_level The log level for Anchore services. NOTE: this is deprecated; use anchoreConfig.logging.log_level instead + ## options available: CRITICAL, ERROR, WARNING, SUCCESS, INFO, DEBUG, TRACE ## log_level: INFO + ## @param anchoreConfig.logging.colored_logging Enable colored output in the logs + ## @param anchoreConfig.logging.exception_backtrace_logging Enable stack traces in the logs + ## @param anchoreConfig.logging.exception_diagnose_logging Enable detailed exception information in the logs + ## @param anchoreConfig.logging.file_rotation_rule Maximum size of a log file before it is rotated + ## @param anchoreConfig.logging.file_retention_rule Number of log files to retain before deleting the oldest + ## @param anchoreConfig.logging.log_level Log level for the service code + ## @param anchoreConfig.logging.server_access_logging Set whether to print server access to logging + ## @param anchoreConfig.logging.server_response_debug_logging Log the elapsed time to process the request and the response size (debug log level) + ## @param anchoreConfig.logging.server_log_level Log level specifically for the server (uvicorn) + ## @param anchoreConfig.logging.structured_logging Enable structured logging output (JSON) + ## + logging: + colored_logging: false + exception_backtrace_logging: false + exception_diagnose_logging: false + file_rotation_rule: "10 MB" + file_retention_rule: 10 + log_level: INFO + server_access_logging: true + server_response_debug_logging: false + server_log_level: "info" + structured_logging: false + + ## @param anchoreConfig.server.max_connection_backlog Max connections permitted in the backlog before dropping + ## @param anchoreConfig.server.max_wsgi_middleware_worker_queue_size Max number of requests to queue for processing by ASGI2WSGI middleware + ## @param anchoreConfig.server.max_wsgi_middleware_worker_count Max number of workers to have in the ASGI2WSGI middleware worker pool + ## @param anchoreConfig.server.timeout_graceful_shutdown Seconds to permit for graceful shutdown, or false to disable + ## @param anchoreConfig.server.timeout_keep_alive Seconds to keep a connection alive before closing + ## + server: + max_connection_backlog: 2048 + max_wsgi_middleware_worker_queue_size: 100 + max_wsgi_middleware_worker_count: 50 + timeout_graceful_shutdown: false + timeout_keep_alive: 5 + + ## @param anchoreConfig.audit.enabled Enable audit logging + ## + audit: + enabled: true + ## @param anchoreConfig.allow_awsecr_iam_auto Enable AWS IAM instance role for ECR auth ## When set, if a registry credential username is set to 'iamauto' for an ecr registry, the engine will ## use whatever aws creds are available in the standard boto search path (.aws, env, etc) @@ -1277,7 +1319,7 @@ simpleQueue: ui: ## @param ui.image Image used for the Anchore UI container ## - image: docker.io/anchore/enterprise-ui:v5.4.1 + image: docker.io/anchore/enterprise-ui:v5.5.0 ## @param ui.imagePullPolicy Image pull policy for Anchore UI image ## @@ -1595,3 +1637,101 @@ postgresql: ## image: tag: 13.11.0-debian-11-r15 +
+######################################################################################## +## @section Anchore Object Store and Analysis Archive Migration +## Migration job uses a Helm pre-upgrade hook to ensure all pods are +## scaled down before running the migration job +######################################################################################## + +osaaMigrationJob: + + ## @param osaaMigrationJob.enabled Enable the Anchore Object Store and Analysis Archive migration job + ## + enabled: false + + ## @param osaaMigrationJob.kubectlImage The image to use for the job's init container that uses kubectl to scale down deployments for the migration + ## This is only used by the osaaMigrationJob. + ## + kubectlImage: bitnami/kubectl:1.27 + + ## @param osaaMigrationJob.extraEnv An array to add extra environment variables + ## + extraEnv: [] + + ## @param osaaMigrationJob.extraVolumes Define additional volumes for Anchore Object Store and Analysis Archive migration job + ## + extraVolumes: [] + + ## @param osaaMigrationJob.extraVolumeMounts Define additional volume mounts for Anchore Object Store and Analysis Archive migration job + ## + extraVolumeMounts: [] + + ## @param osaaMigrationJob.resources Resource requests and limits for Anchore Object Store and Analysis Archive migration job + ## ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + ## Commented values below are just a suggested baseline. Contact Anchore support for deployment-specific recommendations. + ## + resources: {} + # requests: + # cpu: 1000m + # memory: 2000Mi + # limits: + # memory: 2000Mi + + ## @param osaaMigrationJob.labels Labels for Anchore Object Store and Analysis Archive migration job + ## + labels: {} + + ## @param osaaMigrationJob.annotations Annotations for Anchore Object Store and Analysis Archive migration job + ## + annotations: {} + + ## @param osaaMigrationJob.nodeSelector Node labels for Anchore Object Store and Analysis Archive migration job pod assignment + ## + nodeSelector: {} + + ## @param osaaMigrationJob.tolerations Tolerations for Anchore Object Store and Analysis Archive migration job pod assignment + ## + tolerations: [] + + ## @param osaaMigrationJob.affinity Affinity for Anchore Object Store and Analysis Archive migration job pod assignment + ## + affinity: {} + + ## @param osaaMigrationJob.serviceAccountName Service account name for Anchore Object Store and Analysis Archive migration job pods + ## + serviceAccountName: "" + + ## @param osaaMigrationJob.analysisArchiveMigration.bucket The name of the bucket to migrate + ## @param osaaMigrationJob.analysisArchiveMigration.run Run the analysis_archive migration + ## @param osaaMigrationJob.analysisArchiveMigration.mode The mode for the analysis_archive migration. Valid values are 'to_analysis_archive' and 'from_analysis_archive'.
+ ## @param osaaMigrationJob.analysisArchiveMigration.analysis_archive The configuration of the catalog.analysis_archive for the dest-config.yaml + ## ref: https://docs.anchore.com/current/docs/configuration/storage/object_store/migration/#migrating-analysis-archive-data + ## + analysisArchiveMigration: + run: false + bucket: "analysis_archive" + mode: "to_analysis_archive" + analysis_archive: {} + ## compression: + ## enabled: true + ## min_size_kbytes: 100 + ## storage_driver: + ## # Valid storage driver names: 'db', 's3', 'swift' + ## name: db + ## config: {} + + ## @param osaaMigrationJob.objectStoreMigration.run Run the object_store migration + ## @param osaaMigrationJob.objectStoreMigration.object_store [object] The configuration of the object_store for the dest-config.yaml + ## ref: https://docs.anchore.com/current/docs/configuration/storage/object_store/ + ## + objectStoreMigration: + run: false + object_store: + verify_content_digests: true + compression: + enabled: true + min_size_kbytes: 100 + storage_driver: + name: db + config: {}
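For reference, a minimal sketch of an upgrade invocation that triggers the migration job configured by the values above. The release name, namespace, chart reference, and timeout are hypothetical placeholders, and the destination driver configuration is assumed to already be present under `osaaMigrationJob` in the values file:

```bash
# Hypothetical invocation: enable the pre-upgrade migration job and allow a
# generous --timeout, since the hook scales the deployments down and blocks
# the upgrade until the data copy completes.
helm upgrade test-release anchore/enterprise \
  --namespace test-namespace \
  --values values.yaml \
  --set osaaMigrationJob.enabled=true \
  --timeout 60m
```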