
preflight does not seem to allow multiple creds in the same registry #495

Closed
tonyskapunk opened this issue Mar 16, 2022 · 14 comments

Labels
kind/bug: Categorizes issue or PR as related to a bug.
kind/dependency-change: Categorizes issue or PR as related to changing dependencies.
lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@tonyskapunk

Bug Description


preflight is unable to use an authfile with multiple creds on the same domain.

This is similar to the issue reported in opm render operator-framework/operator-registry#935

Version and Command Invocation


preflight version 0.0.0 <commit: 06a9fd1>

Steps to Reproduce:


  1. Use an auth file with multiple credentials for the same domain:
{
  "auths": {
    "quay.io/telcoci/simple-demo-operator-bundle": {
      "auth": "XXXXXX"
    },
    "quay.io/telcoci": {
      "auth": "YYYYYY"
    },
    "quay.io": {
      "auth": "ZZZZZZ"
    }
  }
}

Expected Result


Tools such as podman, skopeo, buildah, and opm index accept entries like the above, matching credentials from most specific to least specific. [1]

This is quite useful when different namespaces/images within the same registry require different credentials.
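For reference, that most-specific-first lookup can be sketched as follows. This is an illustrative Python sketch of the containers-auth.json matching rules described in [1], not preflight's or crane's actual code, and the helper name resolve_auth is hypothetical:

```python
def resolve_auth(image_ref, auths):
    """Pick the credential for image_ref by trying the full repository
    path first, then progressively shorter namespace prefixes, and
    finally the bare registry (ports in registry names not handled)."""
    # Drop any digest, then any tag, keeping "registry/namespace/repo".
    repo = image_ref.split("@", 1)[0]
    registry, _, remainder = repo.partition("/")
    if ":" in remainder:
        remainder = remainder.rsplit(":", 1)[0]
    repo = f"{registry}/{remainder}" if remainder else registry
    # Walk from most specific to least specific, as the man page describes.
    parts = repo.split("/")
    for i in range(len(parts), 0, -1):
        candidate = "/".join(parts[:i])
        if candidate in auths:
            return auths[candidate]["auth"]
    return None
```

With the auths file above, a pull of quay.io/telcoci/simple-demo-operator-bundle:v0.0.3 would use the "XXXXXX" entry rather than either of the less specific ones.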

$ check operator quay.io/telcoci/simple-demo-operator-bundle:v0.0.3

time="2022-03-16T17:48:26-05:00" level=info msg="certification library version 0.0.0 <commit: 06a9fd1520a37dc5b39328655f6ceaabf6471a51>"
--- snip ---
{
    "image": "quay.io/telcoci/simple-demo-operator-bundle:v0.0.3",
    "passed": false,
    "certification_hash": "01139f58c09b2e5efcf99c9c8371dba9",
    "test_library": {
        "name": "github.com/redhat-openshift-ecosystem/openshift-preflight",
        "version": "0.0.0",
        "commit": "06a9fd1520a37dc5b39328655f6ceaabf6471a51"
    },
    "results": {
        "passed": [
            {
                "name": "ValidateOperatorBundle",
                "elapsed_time": 53,
                "description": "Validating Bundle image that checks if it can validate the content and format of the operator bundle"
            }
--- snip ---

Actual Result


$ check operator quay.io/telcoci/simple-demo-operator-bundle:v0.0.3

time="2022-03-16T17:44:13-05:00" level=info msg="certification library version 0.0.0 <commit: 06a9fd1520a37dc5b39328655f6ceaabf6471a51>"
Error: failed to pull remote container: GET https://quay.io/v2/telcoci/simple-demo-operator-bundle/manifests/v0.0.3: UNAUTHORIZED: access to the requested resource is not authorized; map[]
time="2022-03-16T17:44:14-05:00" level=fatal msg="failed to pull remote container: GET https://quay.io/v2/telcoci/simple-demo-operator-bundle/manifests/v0.0.3: UNAUTHORIZED: access to the requested resource is not authorized; map[]"

Additional Context


[1] https://man.archlinux.org/man/community/containers-common/containers-auth.json.5.en#FORMAT

@tonyskapunk tonyskapunk added the kind/bug Categorizes issue or PR as related to a bug. label Mar 16, 2022
@bcrochet
Contributor

Where is your authfile located?

@tonyskapunk
Author

Hi @bcrochet, my bad. The auth file is located at ./repoconfig/config.json, and an env var is passed to preflight to make use of it, like this:

$ DOCKER_CONFIG=repoconfig/ preflight ...

@tonytcampbell
Contributor

@tonyskapunk would you mind testing this with a single cred in your auth file? I have a feeling that may not work either. This may be an issue with all auth files.

@tonyskapunk
Author

Hi @tonytcampbell, it does work with a single cred. Some other errors show up in this example, but it is able to download the image this time.

$ KUBECONFIG=./kubeconfig \
  PFLT_INDEXIMAGE=quay.io/telcoci/simple-demo-operator-catalog:v0.0.3 \
  DOCKER_CONFIG=repoconfig/ \
  ./preflight-1.1.0-beta3 check operator quay.io/telcoci/simple-demo-operator-bundle:v0.0.3

time="2022-03-18T07:56:12-05:00" level=info msg="certification library version 0.0.0 <commit: 06a9fd1520a37dc5b39328655f6ceaabf6471a51>"
time="2022-03-18T08:00:13-05:00" level=error msg="operator-sdk scorecard failed to run properly."
time="2022-03-18T08:00:13-05:00" level=error msg="stderr: time=\"2022-03-18T07:56:13-05:00\" level=debug msg=\"Debug logging is set\"\nError: error running tests context deadline exceeded\nUsage:\n  operator-sdk scorecard [flags]\n\nFlags:\n  -c, --config string            path to scorecard config file\n  -h, --help                     help for scorecard\n      --kubeconfig string        kubeconfig path\n  -L, --list
               Option to enable listing which tests are run\n  -n, --namespace string         namespace to run the test images in\n  -o, --output string            Output format for results. Valid values: text, json (default \"text\")\n  -l, --selector string          label selector to determine which tests are run\n  -s, --service-account string   Service account to use for tests (default \"default\")\n  -x, --skip-cleanu
p             Disable resource cleanup after tests are run\n  -w, --wait-time duration       seconds to wait for tests to complete. Example: 35s (default 30s)\n\nGlobal Flags:\n      --plugins strings   plugin keys to be used for this subcommand execution\n      --verbose           Enable verbose logging\n\ntime=\"2022-03-18T08:00:13-05:00\" level=fatal msg=\"error running tests context deadline exceeded\"\n"
time="2022-03-18T08:00:13-05:00" level=info msg="check completed: ScorecardBasicSpecCheck" ERROR="failed to run operator-sdk scorecard: exit status 1" result="failed to run operator-sdk scorecard: exit status 1"
time="2022-03-18T08:04:14-05:00" level=error msg="operator-sdk scorecard failed to run properly."
time="2022-03-18T08:04:14-05:00" level=error msg="stderr: time=\"2022-03-18T08:00:14-05:00\" level=debug msg=\"Debug logging is set\"\nError: error running tests context deadline exceeded\nUsage:\n  operator-sdk scorecard [flags]\n\nFlags:\n  -c, --config string            path to scorecard config file\n  -h, --help                     help for scorecard\n      --kubeconfig string        kubeconfig path\n  -L, --list
               Option to enable listing which tests are run\n  -n, --namespace string         namespace to run the test images in\n  -o, --output string            Output format for results. Valid values: text, json (default \"text\")\n  -l, --selector string          label selector to determine which tests are run\n  -s, --service-account string   Service account to use for tests (default \"default\")\n  -x, --skip-cleanu
p             Disable resource cleanup after tests are run\n  -w, --wait-time duration       seconds to wait for tests to complete. Example: 35s (default 30s)\n\nGlobal Flags:\n      --plugins strings   plugin keys to be used for this subcommand execution\n      --verbose           Enable verbose logging\n\ntime=\"2022-03-18T08:04:14-05:00\" level=fatal msg=\"error running tests context deadline exceeded\"\n"
time="2022-03-18T08:04:14-05:00" level=info msg="check completed: ScorecardOlmSuiteCheck" ERROR="failed to run operator-sdk scorecard: exit status 1" result="failed to run operator-sdk scorecard: exit status 1"
I0318 08:04:15.489501 2507695 request.go:665] Waited for 1.044028012s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/node.k8s.io/v1beta1?timeout=32s
I0318 08:04:25.494734 2507695 request.go:665] Waited for 3.093151651s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/authorization.openshift.io/v1?timeout=32s
I0318 08:04:37.201729 2507695 request.go:665] Waited for 1.042710886s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/controlplane.operator.openshift.io/v1alpha1?timeout=32s
I0318 08:04:49.039463 2507695 request.go:665] Waited for 1.044348974s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/controlplane.operator.openshift.io/v1alpha1?timeout=32s
I0318 08:05:00.875010 2507695 request.go:665] Waited for 1.042510525s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/discovery.k8s.io/v1beta1?timeout=32s
I0318 08:05:12.710719 2507695 request.go:665] Waited for 1.0414188s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/oauth.openshift.io/v1?timeout=32s
I0318 08:05:24.545751 2507695 request.go:665] Waited for 1.043759954s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/v2v.kubevirt.io/v1beta1?timeout=32s
I0318 08:05:36.382984 2507695 request.go:665] Waited for 1.044266887s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/operators.coreos.com/v1alpha2?timeout=32s
I0318 08:05:48.218531 2507695 request.go:665] Waited for 1.043156932s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/networkaddonsoperator.network.kubevirt.io/v1?timeout=32s
I0318 08:06:00.056183 2507695 request.go:665] Waited for 1.043827476s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/authorization.k8s.io/v1beta1?timeout=32s
I0318 08:06:11.890890 2507695 request.go:665] Waited for 1.043902426s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/apps/v1?timeout=32s
I0318 08:06:23.726483 2507695 request.go:665] Waited for 1.043811066s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/operators.coreos.com/v1alpha2?timeout=32s
I0318 08:06:35.562651 2507695 request.go:665] Waited for 1.043013115s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/trident.netapp.io/v1?timeout=32s
I0318 08:06:47.397729 2507695 request.go:665] Waited for 1.04320213s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/v2v.kubevirt.io/v1beta1?timeout=32s
I0318 08:06:59.231902 2507695 request.go:665] Waited for 1.043664818s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/cdi.kubevirt.io/v1alpha1?timeout=32s
I0318 08:07:11.074823 2507695 request.go:665] Waited for 1.044067397s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/kubevirt.io/v1alpha3?timeout=32s
I0318 08:07:22.910150 2507695 request.go:665] Waited for 1.043643639s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/admissionregistration.k8s.io/v1?timeout=32s
time="2022-03-18T08:07:31-05:00" level=error msg="failed to fetch the subscription simple-demo-operator from namespace simple-demo-operator: context deadline exceeded"
time="2022-03-18T08:07:31-05:00" level=error msg="could not retrieve the object simple-demo-operator/simple-demo-operator: context deadline exceeded"
I0318 08:07:32.940595 2507695 request.go:665] Waited for 1.243508106s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/autoscaling/v1?timeout=32s
I0318 08:07:42.974747 2507695 request.go:665] Waited for 3.442894205s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/batch/v1?timeout=32s
I0318 08:07:53.003138 2507695 request.go:665] Waited for 1.693883626s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/operators.coreos.com/v1alpha1?timeout=32s
time="2022-03-18T08:07:55-05:00" level=info msg="check completed: DeployableByOLM" ERROR="unable to fetch the requested resource from k8s API server: error: context deadline exceeded" result="unable to fetch the requested resource from k8s API server: error: context deadline exceeded"
time="2022-03-18T08:07:55-05:00" level=warning msg="Warning: Value : (simple-demo-operator.v0.0.3) csv.Spec.minKubeVersion is not informed. It is recommended you provide this information. Otherwise, it would mean that your operator project can be distributed and installed in any cluster version available, which is not necessarily the case for all projects."
time="2022-03-18T08:07:55-05:00" level=info msg="check completed: ValidateOperatorBundle" result=PASSED
{
    "image": "quay.io/telcoci/simple-demo-operator-bundle:v0.0.3",
    "passed": false,
    "certification_hash": "01139f58c09b2e5efcf99c9c8371dba9",
    "test_library": {
        "name": "github.com/redhat-openshift-ecosystem/openshift-preflight",
        "version": "0.0.0",
        "commit": "06a9fd1520a37dc5b39328655f6ceaabf6471a51"
    },
    "results": {
        "passed": [
            {
                "name": "ValidateOperatorBundle",
                "elapsed_time": 135,
                "description": "Validating Bundle image that checks if it can validate the content and format of the operator bundle"
            }
        ],
        "failed": [],
        "errors": [
            {
                "name": "ScorecardBasicSpecCheck",
                "elapsed_time": 240203,
                "description": "Check to make sure that all CRs have a spec block.",
                "help": "There was a fatal error while running operator-sdk scorecard tests. Please see the preflight log for details. If necessary, set logging to be more verbose."
            },
            {
                "name": "ScorecardOlmSuiteCheck",
                "elapsed_time": 240144,
                "description": "Operator-sdk scorecard OLM Test Suite Check",
                "help": "There was a fatal error while running operator-sdk scorecard tests. Please see the preflight log for details. If necessary, set logging to be more verbose."
            },
            {
                "name": "DeployableByOLM",
                "elapsed_time": 221177,
                "description": "Checking if the operator could be deployed by OLM",
                "help": "It is required that your operator could be deployed by OLM"
            }
        ]
    }
}

@acornett21
Contributor

@tonyskapunk With the multi-auth file, would you be able to test with the tool crane?

Then try to run the below and let us know if you get the same error:
crane pull <your-image>

@acornett21
Contributor

The auths map has an entry per registry, and the auth field contains your username and password encoded as HTTP 'Basic' Auth.

Based on crane's readme for the authn package, I think this is not supported.
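For illustration, a registry-scoped keychain like the one that readme describes would only ever consult the bare-registry key, so namespace- and repository-scoped entries are silently ignored. A hypothetical Python sketch of that behavior (an assumption for contrast, not go-containerregistry's actual code):

```python
def registry_only_auth(image_ref, auths):
    """Registry-scoped lookup: only the bare registry portion of the
    reference is used as a key, so entries such as "quay.io/telcoci"
    can never match."""
    registry = image_ref.split("/", 1)[0]
    entry = auths.get(registry)
    return entry["auth"] if entry else None
```

Under this scheme, a config containing only a "quay.io/telcoci" entry yields no credentials at all for quay.io images, which would be consistent with the UNAUTHORIZED errors above.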

@tonyskapunk
Author

Sorry for the late response, but yes, it fails with crane.

I have the config.json with multiple creds for the same quay.io registry, and it fails:

$ DOCKER_CONFIG=./ ./crane pull  quay.io/telcoci/simple-demo-operator-catalog:0.0.3 sdoc.tar
Error: GET https://quay.io/v2/telcoci/simple-demo-operator-catalog/manifests/0.0.3: UNAUTHORIZED: access to the requested resource is not authorized; map[]

$ grep quay config.json 
        "quay.io/telcoci/simple-demo-operator-bundle": {
        "quay.io": {

When I remove the additional entry, it works:

$ DOCKER_CONFIG=./ ./crane pull  quay.io/telcoci/simple-demo-operator-catalog:0.0.3 sdoc.tar 
$ echo $?
0

$ grep quay config.json 
        "quay.io/telcoci/simple-demo-operator-bundle": {

@acornett21
Contributor

@tonyskapunk No worries. We would have to request that this feature be added to crane, since we are using that tool in preflight. I will file a feature request today.

@acornett21
Contributor

I raised an issue for crane, which can be found below:

@acornett21 acornett21 added the kind/dependency-change Categorizes issue or PR as related to changing dependencies label Mar 30, 2022
@tonyskapunk
Author

Thanks for doing that, @acornett21. I've tested with the latest branch as suggested, and it does not seem to work for me.

My authfile looks like this (I've left out the auth strings):

❯ cut -d ' ' -f1-7 config.json 
{
  "auths": {
    "quay.io/telcoci": {
      "auth":
    },
    "quay.io": {
      "auth":
    }
  }
}

Running crane from the main branch does not seem to work:

❯ crane version
v0.8.1-0.20220328141311-efc62d802606

❯ DOCKER_CONFIG=./ crane pull  quay.io/telcoci/simple-demo-operator-catalog:0.0.3 sdoc.tar
Error: GET https://quay.io/v2/telcoci/simple-demo-operator-catalog/manifests/0.0.3: UNAUTHORIZED: access to the requested resource is not authorized; map[]

❯ echo $?
1

If I leave only one auth, it works as expected:

❯ cut -d ' ' -f1-7 config.json 
{
  "auths": {
    "quay.io/telcoci": {
      "auth":
    }
  }
}

❯ DOCKER_CONFIG=./ crane pull  quay.io/telcoci/simple-demo-operator-catalog:0.0.3 sdoc.tar && echo $?
0

@bcrochet
Contributor

bcrochet commented May 2, 2022

This may be fixed in #588. Please test with that patch. If it isn't, then I will work to ensure that it is.

@tkrishtop
Contributor

Thank you @bcrochet, going to check.

@komish komish added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 4, 2022
@komish
Contributor

komish commented Oct 4, 2022

Closing this issue as stale. It also appears to be potentially resolved, per @bcrochet's latest comment. Please feel free to re-open if this issue needs further attention.

@komish komish closed this as completed Oct 4, 2022