Thoughts on creating Shiny app validation reports? #21

Open
LDSamson opened this issue Nov 23, 2023 · 2 comments
Labels
enhancement New feature or request

Comments

@LDSamson

This is not an issue but rather a question.

I am developing a Shiny application that needs to be validated, and I am using BDD unit testing following the recommendations in your online book. For me personally, a section in the book about how to create validation reports for Shiny applications would be a great addition; if you ever find the time to include one, I think it would be more than helpful.

For example, is it possible to extract the descriptive text in the unit tests, along with the outcomes of those tests, into a validation report (HTML or PDF)? I am aware of the valtools package, but I am not sure whether it is compatible with this workflow. I would love to hear your opinion. Do you have any thoughts on how this can be done, or any recommendations?
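For reference, the kind of BDD test I mean looks like this (a minimal sketch with a made-up feature; the module name and scenario text are just placeholders):

```r
library(testthat)

# describe() holds the plain-language feature, it() holds one scenario,
# and expect_*() calls verify the behavior. The descriptive strings are
# what I would like to pull into a validation report.
describe("Feature: CSV upload module", {
  it("Scenario: a valid file populates the preview table", {
    dat <- data.frame(x = 1:3)
    expect_equal(nrow(dat), 3)
  })
})
```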

@mjfrigaard
Owner

I think this is a great idea! In fact, I'm trying to design a package (pickler) that's similar to valtools (and covtracer). In essence, pickler will scan the tests in tests/testthat/ that use describe() and it() and compile them into a markdown report. The goal is to generate a FEATURES.md file (similar to the README file covrpage creates) that will store the features (in describe()) and scenarios (in it()).

From the pickler package documentation:

"Behavior-driven development (BDD) starts with a clearly defined business goal (or goals). Developers will collaborate with users to ensure each feature is aligned with a goal (i.e., in each describe() call) and is captured in a plain-language, real-life scenario. Scenarios are used to write tests to verify the feature behaves as expected, creating up-to-date documentation on how the software works (and what it can do).

pickler automates this process by compiling the features (in describe()), scenarios (in it() and test_that()), and expectations (expect_*() functions) into a FEATURE.md markdown report."
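As a rough illustration of the idea (this is not pickler's actual API — the function name and regular expressions below are just a sketch), the core of the scan is pulling the `describe()` and `it()` labels out of a test file and emitting markdown:

```r
# Hypothetical sketch (not pickler's API): extract describe()/it()
# labels from test-file lines and format them as markdown.
extract_bdd <- function(lines) {
  feats <- regmatches(lines, regexpr('describe\\("([^"]+)"', lines))
  scens <- regmatches(lines, regexpr('it\\("([^"]+)"', lines))
  c(
    paste("##", sub('describe\\("([^"]+)".*', "\\1", feats)),
    paste("-", sub('it\\("([^"]+)".*', "\\1", scens))
  )
}

lines <- c(
  'describe("Feature: CSV upload", {',
  '  it("Scenario: valid file shows preview", {'
)
cat(extract_bdd(lines), sep = "\n")
# ## Feature: CSV upload
# - Scenario: valid file shows preview
```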

@LDSamson
Author

This sounds really great, and it is exactly what I am looking for. Do you have an example of how to create such a FEATURE.md report? I could not find one in the documentation; I probably overlooked it. Is there a specific function that does this?

@mjfrigaard mjfrigaard added the enhancement New feature or request label Mar 19, 2024