Ability to check configurations are correct without the need to import or export #872
Comments
The Deploy CLI actually employs basic schema validation currently (example). Granted, some schemas are stricter than others, but there is enough to prevent egregious errors from occurring. We rely on the server to impose most of the validation because it accounts for critical information that doesn't exist on the client, like tenant tier, feature flags, and account-level resource limits. Further, what does "correct" mean here? Valid JSON? Accepted by the API? Or that it expresses your particular use case? My point being, neither the Deploy CLI nor most client-side tools can know if your configuration is correct. To your credit, I can imagine a frustrating situation where you need to deploy your resources in order to test their validity, but this is what the built-in schema validation is supposed to address. I don't think a dedicated command is the way to go; instead, I'd rather improve the schema validation. If you could help identify specific instances, that would be helpful for us to address.
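For context, the kind of basic client-side schema validation being referenced might look roughly like the sketch below (a minimal TypeScript example using Ajv; the schema, field names, and values are illustrative only, not the Deploy CLI's actual schema or code):

```ts
// Minimal sketch of the kind of client-side schema validation a deploy tool can
// run before calling the API. The schema and object below are illustrative only.
import Ajv from "ajv";

const clientSchema = {
  type: "object",
  properties: {
    name: { type: "string", minLength: 1 },
    token_lifetime: { type: "number", minimum: 1 },
  },
  required: ["name"],
};

const ajv = new Ajv({ allErrors: true });
const validate = ajv.compile(clientSchema);

// A config fragment with an obvious error: a string where a number is expected.
const candidate = { name: "my-app", token_lifetime: "abc" };

if (!validate(candidate)) {
  // Prints something like: "/token_lifetime must be number"
  console.error(
    (validate.errors ?? []).map((e) => `${e.instancePath} ${e.message}`).join("\n")
  );
}
```

A check like this catches egregious mistakes locally, while anything depending on server-side context (tenant tier, feature flags, resource limits) still has to be validated by the API, as described above.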
Hi @willvedd. Yes, I understand that some, if not many, of the validations are performed server-side, but the schema validations you pointed out are still not available for use unless you are willing to deploy your stuff, and sometimes you are in the middle of making changes and don't want to mess up even a test environment. I understand that if I try to push invalid configs the validations will be executed first, but by "mess up" I mean I'm writing changes that I don't want to deploy yet, e.g. reducing or adding grants to an app, and I still want to know in advance that the changes are valid, instead of creating a pull request without knowing whether they will work once approved, merged to the upstream branch, and deployed. I think two levels of validation could actually be performed: offline, where those schemas are used without executing a deploy, and online, where all the configs are sent to Auth0 for validation but without deploying them, although I understand the latter would require changes in the Auth0 API, making it harder to implement on your side. Thinking about the "ideal" implementation, maybe a more "Unix" style would be better: in Unix, most commands that allow you to test something without executing the actual intent have a dry-run style flag.
In this way, it's also clearer what you are trying to test.
In case the two levels of validation can be implemented, I would add another option to choose between them. This is also an option that many commands include.
One more thing about my point of including this:
In a project I used to work on, we used Terraform to allocate resources from AWS, but when making changes in a feature branch to e.g. add new EC2 instances, I didn't want AWS to actually allocate the resources; I wanted to check whether my changes were valid before creating the PR, so I used the Terraform command for that.
This is a fair point. My immediate suggestion is to provision a dedicated dev tenant, separate from your staging and prod tenants, where you can test these changes. Some customers even provision ephemeral tenants to assist in the development of discrete features. The Deploy CLI should make the cloning of tenants fairly trivial. Interesting that you mention Terraform, because you may consider adopting the official Auth0 Terraform Provider. It enforces stronger validations and will give you better insight into errors and diffs. It also better suits incremental development (as you describe).
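As a rough sketch of that tenant-cloning suggestion, assuming the Deploy CLI's programmatic export/import interface (the dump/deploy option names are recalled from its documentation and should be double-checked against the project README; the tenant domains and credentials are placeholders):

```ts
// Hedged sketch: export configuration from a source tenant and import it into a
// dedicated dev tenant using auth0-deploy-cli's programmatic interface.
// Option names should be verified against the Deploy CLI README.
import { dump, deploy } from "auth0-deploy-cli";

async function cloneTenant(): Promise<void> {
  // Export the source (e.g. staging) tenant to a local folder.
  await dump({
    output_folder: "./tenant-export",
    config: {
      AUTH0_DOMAIN: "source-tenant.example.auth0.com",
      AUTH0_CLIENT_ID: process.env.SOURCE_CLIENT_ID!,
      AUTH0_CLIENT_SECRET: process.env.SOURCE_CLIENT_SECRET!,
    },
  });

  // Import that export into the throwaway dev tenant.
  await deploy({
    input_file: "./tenant-export",
    config: {
      AUTH0_DOMAIN: "dev-tenant.example.auth0.com",
      AUTH0_CLIENT_ID: process.env.DEV_CLIENT_ID!,
      AUTH0_CLIENT_SECRET: process.env.DEV_CLIENT_SECRET!,
    },
  });
}

cloneTenant().catch((err) => {
  console.error(err);
  process.exitCode = 1;
});
```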
Yes, I was thinking about doing that, now that importing/exporting, as you said, makes it easier. I don't know whether it's better than a CLI checker from a cost point of view, though.
I don't think it's a fair point; it's like assuming that automated tests in reduced environments (like unit tests) are useless because nobody achieves 100% test coverage. Think of running the schema validations as being like running the compiler locally while you write code: do you really need to execute the compiler when you can just push your code to the repo and see what happens when it's deployed to a staging environment? If you code in JS or Python you don't even need to compile your code, but you still want a linter, an IDE, or a package manager to validate what you write. There are many errors that can only be detected by executing the code in an actual environment, but reducing the chance of errors earlier and faster is not an option but a must.
Closing anyway because #70 better represents what I was proposing.
Checklist
Describe the problem you'd like to have solved
Especially when you keep track of your configs in a git repo and have CI tooling configured to check the correctness of your code, it would be useful to have a check command that performs validations over the configurations without the need to actually execute an import or export: an offline checker that does a basic check of whether the JSON / YAML files are correct, including the keywords used.
Describe the ideal solution
The command would be something like a dedicated check command, and it will exit with `0` and no error message if everything is OK, or with a non-zero exit code and messages in the standard error stream explaining why the configs are not valid.

Possible errors that can be checked:

- Malformed JSON / YAML files.
- Unknown or misused keywords.
- Invalid values, e.g. `abc` where a number is expected.

Alternatives and current workarounds
For the first point, checking for malformed JSON / YAML files is easy: there are plenty of commands that can do that. That's not the case for the rest of the checks, which are very specific to Auth0, although if schemas were defined for the JSON or YAML files, the rest could also be achieved with tools that check whether the files comply with a given schema. The problem is that defining those schemas is a lot of work, so they should be built into this command.
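To make the idea concrete, here is a minimal sketch of such an offline checker, assuming a JSON Schema for the tenant configuration already exists (which, as noted above, is the hard part); the file names and everything else here are hypothetical:

```ts
// Hypothetical offline checker: validates tenant.yaml against a pre-built JSON
// Schema and follows the exit-code convention described above (0 = OK, non-zero = invalid).
// "tenant.schema.json" does not ship with the Deploy CLI; it is assumed to exist.
import Ajv from "ajv";
import { readFileSync } from "fs";
import { load, YAMLException } from "js-yaml";

function main(): number {
  let doc: unknown;
  try {
    doc = load(readFileSync("tenant.yaml", "utf8"));
  } catch (err) {
    // Malformed YAML or unreadable file: report on stderr and fail.
    console.error(err instanceof YAMLException ? err.message : String(err));
    return 1;
  }

  const schema = JSON.parse(readFileSync("tenant.schema.json", "utf8"));
  const ajv = new Ajv({ allErrors: true });
  const validate = ajv.compile(schema);

  if (!validate(doc)) {
    for (const e of validate.errors ?? []) {
      console.error(`${e.instancePath || "/"}: ${e.message}`); // errors go to stderr
    }
    return 1; // non-zero exit code when the config is invalid
  }

  return 0; // valid: no output, exit 0
}

process.exitCode = main();
```

Such a script could run in CI on every pull request, which is exactly the workflow described in the problem statement.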
Additional context
No response