preflight does not seem to allow multiple creds in the same registry #495
Comments
Where is your authfile located?
Hi @bcrochet, my bad, the auth file is located under repoconfig/:

$ DOCKER_CONFIG=repoconfig/ preflight ...
@tonyskapunk would you mind testing this with a single cred in your auth file? I have a feeling that may not work either. This may be an issue with all auth files.
Hi @tonytcampbell, it does work with a single cred; some other errors are listed in this example, but it's able to download the image this time.

$ KUBECONFIG=./kubeconfig \
PFLT_INDEXIMAGE=quay.io/telcoci/simple-demo-operator-catalog:v0.0.3 \
DOCKER_CONFIG=repoconfig/ \
./preflight-1.1.0-beta3 check operator quay.io/telcoci/simple-demo-operator-bundle:v0.0.3
time="2022-03-18T07:56:12-05:00" level=info msg="certification library version 0.0.0 <commit: 06a9fd1520a37dc5b39328655f6ceaabf6471a51>"
time="2022-03-18T08:00:13-05:00" level=error msg="operator-sdk scorecard failed to run properly."
time="2022-03-18T08:00:13-05:00" level=error msg="stderr: time=\"2022-03-18T07:56:13-05:00\" level=debug msg=\"Debug logging is set\"\nError: error running tests context deadline exceeded\nUsage:\n operator-sdk scorecard [flags]\n\nFlags:\n -c, --config string path to scorecard config file\n -h, --help help for scorecard\n --kubeconfig string kubeconfig path\n -L, --list Option to enable listing which tests are run\n -n, --namespace string namespace to run the test images in\n -o, --output string Output format for results. Valid values: text, json (default \"text\")\n -l, --selector string label selector to determine which tests are run\n -s, --service-account string Service account to use for tests (default \"default\")\n -x, --skip-cleanup Disable resource cleanup after tests are run\n -w, --wait-time duration seconds to wait for tests to complete. Example: 35s (default 30s)\n\nGlobal Flags:\n --plugins strings plugin keys to be used for this subcommand execution\n --verbose Enable verbose logging\n\ntime=\"2022-03-18T08:00:13-05:00\" level=fatal msg=\"error running tests context deadline exceeded\"\n"
time="2022-03-18T08:00:13-05:00" level=info msg="check completed: ScorecardBasicSpecCheck" ERROR="failed to run operator-sdk scorecard: exit status 1" result="failed to run operator-sdk scorecard: exit status 1"
time="2022-03-18T08:04:14-05:00" level=error msg="operator-sdk scorecard failed to run properly."
time="2022-03-18T08:04:14-05:00" level=error msg="stderr: time=\"2022-03-18T08:00:14-05:00\" level=debug msg=\"Debug logging is set\"\nError: error running tests context deadline exceeded\nUsage:\n operator-sdk scorecard [flags]\n\nFlags:\n -c, --config string path to scorecard config file\n -h, --help help for scorecard\n --kubeconfig string kubeconfig path\n -L, --list Option to enable listing which tests are run\n -n, --namespace string namespace to run the test images in\n -o, --output string Output format for results. Valid values: text, json (default \"text\")\n -l, --selector string label selector to determine which tests are run\n -s, --service-account string Service account to use for tests (default \"default\")\n -x, --skip-cleanup Disable resource cleanup after tests are run\n -w, --wait-time duration seconds to wait for tests to complete. Example: 35s (default 30s)\n\nGlobal Flags:\n --plugins strings plugin keys to be used for this subcommand execution\n --verbose Enable verbose logging\n\ntime=\"2022-03-18T08:04:14-05:00\" level=fatal msg=\"error running tests context deadline exceeded\"\n"
time="2022-03-18T08:04:14-05:00" level=info msg="check completed: ScorecardOlmSuiteCheck" ERROR="failed to run operator-sdk scorecard: exit status 1" result="failed to run operator-sdk scorecard: exit status 1"
I0318 08:04:15.489501 2507695 request.go:665] Waited for 1.044028012s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/node.k8s.io/v1beta1?timeout=32s
I0318 08:04:25.494734 2507695 request.go:665] Waited for 3.093151651s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/authorization.openshift.io/v1?timeout=32s
I0318 08:04:37.201729 2507695 request.go:665] Waited for 1.042710886s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/controlplane.operator.openshift.io/v1alpha1?timeout=32s
I0318 08:04:49.039463 2507695 request.go:665] Waited for 1.044348974s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/controlplane.operator.openshift.io/v1alpha1?timeout=32s
I0318 08:05:00.875010 2507695 request.go:665] Waited for 1.042510525s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/discovery.k8s.io/v1beta1?timeout=32s
I0318 08:05:12.710719 2507695 request.go:665] Waited for 1.0414188s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/oauth.openshift.io/v1?timeout=32s
I0318 08:05:24.545751 2507695 request.go:665] Waited for 1.043759954s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/v2v.kubevirt.io/v1beta1?timeout=32s
I0318 08:05:36.382984 2507695 request.go:665] Waited for 1.044266887s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/operators.coreos.com/v1alpha2?timeout=32s
I0318 08:05:48.218531 2507695 request.go:665] Waited for 1.043156932s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/networkaddonsoperator.network.kubevirt.io/v1?timeout=32s
I0318 08:06:00.056183 2507695 request.go:665] Waited for 1.043827476s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/authorization.k8s.io/v1beta1?timeout=32s
I0318 08:06:11.890890 2507695 request.go:665] Waited for 1.043902426s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/apps/v1?timeout=32s
I0318 08:06:23.726483 2507695 request.go:665] Waited for 1.043811066s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/operators.coreos.com/v1alpha2?timeout=32s
I0318 08:06:35.562651 2507695 request.go:665] Waited for 1.043013115s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/trident.netapp.io/v1?timeout=32s
I0318 08:06:47.397729 2507695 request.go:665] Waited for 1.04320213s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/v2v.kubevirt.io/v1beta1?timeout=32s
I0318 08:06:59.231902 2507695 request.go:665] Waited for 1.043664818s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/cdi.kubevirt.io/v1alpha1?timeout=32s
I0318 08:07:11.074823 2507695 request.go:665] Waited for 1.044067397s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/kubevirt.io/v1alpha3?timeout=32s
I0318 08:07:22.910150 2507695 request.go:665] Waited for 1.043643639s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/admissionregistration.k8s.io/v1?timeout=32s
time="2022-03-18T08:07:31-05:00" level=error msg="failed to fetch the subscription simple-demo-operator from namespace simple-demo-operator: context deadline exceeded"
time="2022-03-18T08:07:31-05:00" level=error msg="could not retrieve the object simple-demo-operator/simple-demo-operator: context deadline exceeded"
I0318 08:07:32.940595 2507695 request.go:665] Waited for 1.243508106s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/autoscaling/v1?timeout=32s
I0318 08:07:42.974747 2507695 request.go:665] Waited for 3.442894205s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/batch/v1?timeout=32s
I0318 08:07:53.003138 2507695 request.go:665] Waited for 1.693883626s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster5.dfwt5g.lab:6443/apis/operators.coreos.com/v1alpha1?timeout=32s
time="2022-03-18T08:07:55-05:00" level=info msg="check completed: DeployableByOLM" ERROR="unable to fetch the requested resource from k8s API server: error: context deadline exceeded" result="unable to fetch the requested resource from k8s API server: error: context deadline exceeded"
time="2022-03-18T08:07:55-05:00" level=warning msg="Warning: Value : (simple-demo-operator.v0.0.3) csv.Spec.minKubeVersion is not informed. It is recommended you provide this information. Otherwise, it would mean that your operator project can be distributed and installed in any cluster version available, which is not necessarily the case for all projects."
time="2022-03-18T08:07:55-05:00" level=info msg="check completed: ValidateOperatorBundle" result=PASSED
{
"image": "quay.io/telcoci/simple-demo-operator-bundle:v0.0.3",
"passed": false,
"certification_hash": "01139f58c09b2e5efcf99c9c8371dba9",
"test_library": {
"name": "github.com/redhat-openshift-ecosystem/openshift-preflight",
"version": "0.0.0",
"commit": "06a9fd1520a37dc5b39328655f6ceaabf6471a51"
},
"results": {
"passed": [
{
"name": "ValidateOperatorBundle",
"elapsed_time": 135,
"description": "Validating Bundle image that checks if it can validate the content and format of the operator bundle"
}
],
"failed": [],
"errors": [
{
"name": "ScorecardBasicSpecCheck",
"elapsed_time": 240203,
"description": "Check to make sure that all CRs have a spec block.",
"help": "There was a fatal error while running operator-sdk scorecard tests. Please see the preflight log for details. If necessary, set logging to be more verbose."
},
{
"name": "ScorecardOlmSuiteCheck",
"elapsed_time": 240144,
"description": "Operator-sdk scorecard OLM Test Suite Check",
"help": "There was a fatal error while running operator-sdk scorecard tests. Please see the preflight log for details. If necessary, set logging to be more verbose."
},
{
"name": "DeployableByOLM",
"elapsed_time": 221177,
"description": "Checking if the operator could be deployed by OLM",
"help": "It is required that your operator could be deployed by OLM"
}
]
}
}
@tonyskapunk With the multi-auth file, would you be able to test with the tool, and then try to run the below and let us know if you have the same error?
I think based on
Sorry for the late response, but yes, it fails. I have the following:

$ DOCKER_CONFIG=./ ./crane pull quay.io/telcoci/simple-demo-operator-catalog:0.0.3 sdoc.tar
Error: GET https://quay.io/v2/telcoci/simple-demo-operator-catalog/manifests/0.0.3: UNAUTHORIZED: access to the requested resource is not authorized; map[]
$ grep quay config.json
"quay.io/telcoci/simple-demo-operator-bundle": {
"quay.io": {
When I remove the additional entry, then it works:

$ DOCKER_CONFIG=./ ./crane pull quay.io/telcoci/simple-demo-operator-catalog:0.0.3 sdoc.tar
$ echo $?
0
$ grep quay config.json
"quay.io/telcoci/simple-demo-operator-bundle": {
@tonyskapunk No worries, we would have to request this feature be added to
I raised an issue for
Thanks for doing that @acornett21. I've tested with the latest branch as suggested and it does not seem to work for me. My authfile looks like this (I've left out the auth strings):

❯ cut -d ' ' -f1-7 config.json
{
"auths": {
"quay.io/telcoci": {
"auth":
},
"quay.io": {
"auth":
}
}
}

Running crane in the main branch does not seem to work:

❯ crane version
v0.8.1-0.20220328141311-efc62d802606
❯ DOCKER_CONFIG=./ crane pull quay.io/telcoci/simple-demo-operator-catalog:0.0.3 sdoc.tar
Error: GET https://quay.io/v2/telcoci/simple-demo-operator-catalog/manifests/0.0.3: UNAUTHORIZED: access to the requested resource is not authorized; map[]
❯ echo $?
1

If I leave only one auth it works as expected:

❯ cut -d ' ' -f1-7 config.json
{
"auths": {
"quay.io/telcoci": {
"auth":
}
}
}
❯ DOCKER_CONFIG=./ crane pull quay.io/telcoci/simple-demo-operator-catalog:0.0.3 sdoc.tar && echo $?
0
This may be fixed in #588. Please test with that patch. If it isn't, then I will work to ensure that it is. |
Thank you @bcrochet, going to check. |
Closing this issue as stale. It appears it may also be potentially resolved according to @bcrochet's latest comment. Please feel free to re-open if this issue needs further attention. |
Bug Description
(A clear and concise description of the issue)
preflight is unable to use an authfile with multiple creds on the same domain.
This is similar to the issue reported in opm render operator-framework/operator-registry#935
Version and Command Invocation
(The output of preflight --version)

preflight version 0.0.0 <commit: 06a9fd1>
Steps to Reproduce:
(How can we reproduce this?)
Expected Result
(What did you expect to happen and why?)
Tools like podman, skopeo, buildah, and opm index allow entries like the above, following the order from more specific to less specific. [1]
This is quite useful when using multiple credentials to different namespaces/images in a registry.
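The more-specific-to-less-specific rule can be sketched as a small Go helper. This is an illustrative implementation of the matching described in the containers-auth.json format documentation, not preflight's or crane's actual lookup code:

```go
package main

import (
	"fmt"
	"strings"
)

// resolveAuthKey returns the auths key whose prefix most specifically
// matches the image reference, so repository- or namespace-scoped
// entries win over registry-wide ones. Hypothetical helper for
// illustration only.
func resolveAuthKey(image string, keys []string) string {
	best := ""
	for _, k := range keys {
		if (image == k || strings.HasPrefix(image, k+"/")) && len(k) > len(best) {
			best = k
		}
	}
	return best
}

func main() {
	keys := []string{"quay.io", "quay.io/telcoci"}
	// The namespace-scoped credential should be chosen for this image.
	fmt.Println(resolveAuthKey("quay.io/telcoci/simple-demo-operator-catalog", keys))
	// Anything else under the registry falls back to the quay.io entry.
	fmt.Println(resolveAuthKey("quay.io/otherorg/some-image", keys))
}
```

With both entries present, a lookup that keys only on the exact registry host would miss the `quay.io/telcoci` entry entirely, which is consistent with the UNAUTHORIZED errors seen above.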
Actual Result
(What actually happened)
Additional Context
(Anything else you think might help us troubleshoot, like your platform, dependency versions, etc).
[1] https://man.archlinux.org/man/community/containers-common/containers-auth.json.5.en#FORMAT