# Test tools {#sec-tests-tools}
<!---
https://jakubsob.github.io/blog/
https://www.freecodecamp.org/news/clean-coding-for-beginners/
-->
```{r}
#| eval: true
#| echo: false
#| include: false
source("_common.R")
library(rlang)
library(shiny)
```
```{r}
#| label: co_box_tldr
#| echo: false
#| results: asis
#| eval: true
co_box(
color = "b",
look = "default", hsize = "1.10", size = "1.05",
header = "TLDR Test Tools", fold = TRUE,
contents = "
<br>
<h5>**Test tools**</h5>
- Use test fixtures to set up consistent environments for each test, ensuring reliability and simplifying setup/teardown processes.
\`\`\` bash
tests/testthat/fixtures/
├── make-tidy_ggp2_movies.R
└── tidy_ggp2_movies.rds
1 directory, 2 files
\`\`\`
- Implement test helpers to streamline your tests by reusing common code, making tests cleaner and maintenance easier.
\`\`\` bash
tests/testthat/
├── _snaps/
├── fixtures/
└── helper.R
6 directories, 1 file
\`\`\`
- Leverage test snapshots for comparing UI/output states over time, catching unintended changes quickly.
\`\`\` bash
tests/testthat/_snaps/
├── app-feature-01
│ ├── feature-01-senario-a-001.json
│ ├── feature-01-senario-a-001_.png
│ ├── feature-01-senario-b-001.json
│ └── feature-01-senario-b-001_.png
├── ggp2_app-feature-01
│ ├── ggp2_launch_app-feature-01-001.json
│ ├── ggp2_launch_app-feature-01-001_.new.png
│ └── ggp2_launch_app-feature-01-001_.png
└── shinytest2
├── feature-01-001.json
└── feature-01-001_.png
4 directories, 9 files
\`\`\`
By integrating fixtures, helpers, and snapshots from `testthat` into your Shiny app testing workflow, you boost test efficiency and application quality.
"
)
```
---
This chapter introduces tools to help write clean and efficient tests. These files, folders and methods can be included in our app-package test suite and are thoroughly described in [R Packages, 2ed](https://r-pkgs.org/testing-design.html#sec-testing-design-principles) and the [`testthat` documentation](https://testthat.r-lib.org/articles/third-edition.html), but within the context of a standard R package. The sections below contain examples of each tool implemented in our example app-package.[^test_tools-documentation]
:::: {.callout-tip collapse='true' appearance='default'}
## [Accessing the code examples]{style='font-weight: bold; font-size: 1.15em;'}
::: {style='font-size: 0.95em; color: #282b2d;'}
In an effort to make each section accessible and easy to follow, I've created the [`shinypak` R package](https://mjfrigaard.github.io/shinypak/).
Install `shinypak` using `pak` (or `remotes`):
```{r}
#| code-fold: false
#| message: false
#| warning: false
#| eval: false
# install.packages('pak')
pak::pak('mjfrigaard/shinypak')
```
Review the chapters in each section:
```{r}
#| code-fold: false
#| message: false
#| warning: false
#| collapse: true
library(shinypak)
list_apps(regex = '^12')
```
Launch an app:
```{r}
#| code-fold: false
#| eval: false
launch(app = "12.1_tests-fixtures")
```
:::
::::
[^test_tools-documentation]: The three chapters in R Packages, 2ed, dedicated to testing are [Testing basics](https://r-pkgs.org/testing-basics.html), [Designing your test suite](https://r-pkgs.org/testing-design.html), and [Advanced testing techniques](https://r-pkgs.org/testing-advanced.html)
One of the recent updates to the `testthat` package (version 3.2.0) emphasizes limiting code that exists outside of our tests.[^test_tools-testthat-recent-changes]
[^test_tools-testthat-recent-changes]: Read a summary of the changes to the `testthat` package on the [tidyverse blog](https://www.tidyverse.org/blog/2023/10/testthat-3-2-0/)
> "*Eliminating (or at least minimizing) top-level code outside of `test_that()` will have the beneficial effect of making your tests more hermetic. This is basically the testing analogue of the general programming advice that it's wise to avoid unstructured sharing of state.*" - [Self-sufficient tests, R Packages, 2ed](https://r-pkgs.org/testing-design.html#self-sufficient-tests)
Strategies for reducing or removing the code outside of `test_that()` tests include:[^test_tools-pkg-dev-masterclass]
> - *Move file-scope logic to either narrower scope (just this test) or a broader scope (all files).*
>
> - *It's ok to copy and paste: test code doesn't have to be super DRY. Obvious >>> DRY*
[^test_tools-pkg-dev-masterclass]: these tips (and more!) were covered in the Package Development Masterclass delivered at [posit::conf(2023)](https://github.com/posit-conf-2023)
Code outside of `test_that()` usually serves a specific purpose (load or manipulate data, set options, create folders or files, etc). The topics in this chapter will demonstrate how to include these behaviors in our tests without placing code outside of the test scope.[^bdd-chapter]
[^bdd-chapter]: Read more about BDD functions in the [appendix.](bdd.qmd)
## Test fixtures {#sec-test-fixtures}
<!-- https://www.tidyverse.org/blog/2020/04/self-cleaning-test-fixtures/ -->
Test fixtures can be anything used to create repeatable test conditions (data, file paths, functions, etc.). Good test fixtures provide a consistent, well-defined test environment, and then are removed/destroyed when the test is executed. This ensures any changes made during the test don't persist or interfere with future tests.[^test_tools-5]
[^test_tools-5]: For a concrete example, see [this article](https://www.tidyverse.org/blog/2020/04/self-cleaning-test-fixtures/) on self-cleaning tests.
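The article above demonstrates "self-cleaning" fixtures with the `withr` package, which registers cleanup code that runs automatically when the calling test exits. A minimal sketch (the `local_movies_rds()` helper and its sample data are hypothetical, not part of `sap`):

```{r}
#| eval: false
#| code-fold: false
# Hypothetical self-cleaning fixture: writes a small .rds file to a
# temp path and schedules its deletion for when the calling test exits
local_movies_rds <- function(env = parent.frame()) {
  path <- withr::local_tempfile(fileext = ".rds", .local_envir = env)
  saveRDS(data.frame(rating = c(7.1, 8.2), length = c(101, 96)), path)
  path
}

test_that("fixture cleans up after itself", {
  path <- local_movies_rds()
  expect_true(file.exists(path))
  # when this test exits, withr deletes the temp file automatically
})
```

Because the cleanup is tied to the test's exit, a failing test can't leave stray files behind to contaminate later runs.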
```{r}
#| label: git_box_12.1_tests-fixtures
#| echo: false
#| results: asis
#| eval: true
git_margin_box(
contents = "launch",
fig_pw = '65%',
branch = "12.1_tests-fixtures",
repo = 'sap')
```
In R packages, test fixtures are stored in the `tests/testthat/fixtures/` folder:[^tests-fixtures-folder-name]
[^tests-fixtures-folder-name]: The name '`fixtures`' isn't required (you can name this folder anything).
```{bash}
#| eval: false
#| code-fold: false
tests/
├── testthat/
│ └── fixtures/
└── testthat.R
```
### Test data
Large static data files are an example of a test fixture.[^test_tools-6] Any code used to create test data should be stored with the output file (using a clear naming convention).
[^test_tools-6]: Creating a tidied version of `ggplot2movies::movies` would be costly to re-create with every test, so it's advised to store it as an [static test fixture.](https://r-pkgs.org/testing-advanced.html#sec-testing-advanced-concrete-fixture)
For example, I've stored the code used to create a ['tidy' version](https://github.com/mjfrigaard/sap/blob/12a_tests-fixtures/tests/testthat/fixtures/make-tidy_ggp2_movies.R) of the `ggplot2movies::movies` data along with the output dataset in `tests/testthat/fixtures/`:
```{bash}
#| eval: false
#| code-fold: false
tests
├── testthat
│   ├── fixtures
│   │   ├── make-tidy_ggp2_movies.R # <1>
│   │   └── tidy_ggp2_movies.rds # <2>
│   └── test-scatter_plot.R
└── testthat.R

3 directories, 4 files
```
1. The code used to create the test data (`make-tidy_ggp2_movies.R`)
2. The test data file (`tidy_ggp2_movies.rds`)
Data files stored in `tests/testthat/fixtures/` can be accessed with `testthat::test_path()` inside each test.
### Example test fixture
Below is a test for the `scatter_plot()` utility function that answers the question, '*does the plot generate without producing an error?*' using the `tidy_ggp2_movies.rds` test fixture.
This type of test is appropriate because we want to confirm the data source will generate a plot object when passed to our graphing utility function, not necessarily to verify the specific contents of the graph.[^test_tools-is.ggplot]
[^test_tools-is.ggplot]: Snapshot tests would be more appropriate for answering the question, ['*is the plot visually correct?*'](https://shiny.posit.co/r/articles/improve/server-function-testing/index.html#complex-outputs-plots-htmlwidgets).
```{r}
#| eval: false
#| code-fold: false
describe( # <1>
  "Feature: Scatter plot data visualization
     As a film data analyst
     I want to explore IMDB.com movie review data
     So that I can examine relationships between movie review metrics
   Background:
     Given I have IMDB movie reviews data
     And the data contains continuous variables like 'rating'
     And the data contains categorical variables like 'mpaa'",
  code = { # <1>
    it( # <2>
      "Scenario: Scatter plot initial x, y, color values
         When I launch the Scatter Plot Data Visualization
         And I have an IMDB dataset of movie reviews
         Then the plot should show 'Rating' on the x-axis
         And the plot should show 'Length' on the y-axis
         And the points on the plot should be colored by 'MPAA' rating",
      code = { # <2>
        ggp2_scatter_inputs <- list( # <3>
          x = "rating",
          y = "length",
          z = "mpaa",
          alpha = 0.75,
          size = 3,
          plot_title = "Enter plot title"
        ) # <3>
        tidy_ggp2_movies <- readRDS(test_path("fixtures", # <4>
                                              "tidy_ggp2_movies.rds")) # <4>
        app_graph <- scatter_plot(tidy_ggp2_movies, # <5>
          x_var = ggp2_scatter_inputs$x,
          y_var = ggp2_scatter_inputs$y,
          col_var = ggp2_scatter_inputs$z,
          alpha_var = ggp2_scatter_inputs$alpha,
          size_var = ggp2_scatter_inputs$size
        ) # <5>
        expect_true(ggplot2::is.ggplot(app_graph)) # <6>
      })
  }) # <1>
```
1. Feature
2. Scenario
3. Test inputs
4. Test fixture
5. Create observed object
6. Expectation
If the `tidy_ggp2_movies.rds` file in the `tests/testthat/fixtures/` folder is going to be used repeatedly, it might make sense to store the data in `inst/extdata/` or `data-raw/`. Test fixtures are described in-depth in [R Packages, 2ed](https://r-pkgs.org/testing-advanced.html#test-fixtures) and in the [`testthat` documentation](https://testthat.r-lib.org/articles/test-fixtures.html#test-fixtures).
::: {.panel-tabset}
### Tests in RStudio ![](images/rstudio-icon.png){height=20}
### Tests in Positron ![](images/positron.png){height=20}
:::
## Test helpers {#sec-test-helpers}
<!-- https://r-pkgs.org/testing-design.html#testthat-helper-files -->
> "*Helper files are a mighty weapon in the battle to eliminate code floating around at the top-level of test files.*" Testthat helper files, [R Packages, 2ed](https://r-pkgs.org/testing-design.html#testthat-helper-files)
```{r}
#| label: git_box_12.2_tests-helpers
#| echo: false
#| results: asis
#| eval: true
git_margin_box(
contents = "launch",
fig_pw = '65%',
branch = "12.2_tests-helpers",
repo = 'sap')
```
Test helpers reduce repeated/duplicated test code. In general, objects or values that aren't large enough to justify storing as static test fixtures can be created with helper functions. Helper functions are stored in `tests/testthat/helper.R`, which is automatically loaded with `devtools::load_all()`:
```{bash}
#| eval: false
#| code-fold: false
tests/
├── testthat/
│ ├── fixtures/ # <1>
│ │ ├── make-tidy_ggp2_movies.R # <1>
│ │ └── tidy_ggp2_movies.rds # <1>
│ ├── helper.R # <2>
│ └── test-scatter_plot.R # <3>
└── testthat.R
```
1. Test fixture scripts and `.rds` files\
2. Helper functions\
3. Test file
Test helpers should only be created if they make testing easier **when the tests fail.** The article, ['Why Good Developers Write Bad Unit Tests'](https://mtlynch.io/good-developers-bad-tests/), provides great advice on complexity vs. clarity when writing unit tests,
> *'think about what will make the problem obvious when a test fails. Refactoring may reduce duplication, but it also increases complexity and potentially obscures information when things break.'*
R programmers resist copy + paste programming, and in most cases this makes sense. After all, R *is* a functional programming language, so it's tempting to bundle any repeated code into a function and store it in the `tests/testthat/helper.R` file.
However, when we're writing tests, it's more important that tests are easy to read and understand **when they fail**.
For example, consider the `ggp2_scatter_inputs` inputs passed to the `scatter_plot()` function in the previous test:
```{r}
#| eval: false
#| code-fold: false
ggp2_scatter_inputs <- list(
  x = "rating",
  y = "length",
  z = "mpaa",
  alpha = 0.75,
  size = 3,
  plot_title = "Enter plot title"
)
```
We could write a `var_inputs()` function that stores these values in a list. In our tests, this would allow us to use `var_inputs()` with the same 'reactive syntax' we use with `scatter_plot()` in the module server function:
```{r}
#| eval: true
#| code-fold: false
#| collapse: true
var_inputs <- function() {
  list(x = "rating",
       y = "length",
       z = "mpaa",
       alpha = 0.75,
       size = 3,
       plot_title = "Enter plot title")
}
var_inputs()
```
While this removes duplicated code, it also makes it less clear for the reader what `var_inputs()` contains and where it was created (without opening the `helper.R` file).
```{r}
#| eval: false
#| code-fold: false
#| collapse: true
tidy_ggp2_movies <- readRDS(test_path("fixtures", # <1>
                                      "tidy_ggp2_movies.rds")) # <1>
app_graph <- scatter_plot(
  tidy_ggp2_movies,
  x_var = var_inputs()$x, # <2>
  y_var = var_inputs()$y,
  col_var = var_inputs()$z,
  alpha_var = var_inputs()$alpha,
  size_var = var_inputs()$size) # <2>
testthat::expect_true(ggplot2::is.ggplot(app_graph))
```
1. Load test fixture
2. Identical to the code in `mod_scatter_display_server()`
In contrast, the `make_ggp2_inputs()` function below generates the input code for the `scatter_plot()` utility function, ready to paste into each test:
```{r}
#| eval: false
#| code-fold: false
make_ggp2_inputs <- function() {
glue::glue_collapse("list(x = 'rating',
y = 'length',
z = 'mpaa',
alpha = 0.75,
size = 3,
plot_title = 'Enter plot title'
)"
)
}
```
I can call `make_ggp2_inputs()` in the **Console** and it will return the list of values to paste into each test:
```{r}
#| eval: false
#| code-fold: false
make_ggp2_inputs()
list(x = 'rating',
y = 'length',
z = 'mpaa',
alpha = 0.75,
size = 3,
plot_title = 'Enter plot title'
)
```
This reduces the number of keystrokes per test, but doesn't obscure the source of the values in the test.
`glue::glue_collapse()` is your friend when you want to quickly reproduce code for your tests. `make_var_inputs()` generates the code for the list of inputs used to test the original `movies` data:
```{r}
#| eval: true
#| code-fold: false
make_var_inputs <- function() {
glue::glue_collapse("list(y = 'audience_score',
x = 'imdb_rating',
z = 'mpaa_rating',
alpha = 0.5,
size = 2,
plot_title = 'Enter plot title'
)")
}
```
```{r}
#| label: co_box_dry
#| echo: false
#| results: asis
#| eval: true
co_box(
color = "r", fold = TRUE,
look = "default", hsize = "1.10", size = "1.05",
header = "Violating the DRY principle",
contents = "
If you have repeated code in your tests, consider the following questions below before creating a helper function:
1. Does the code help explain what behavior is being tested?
2. Would a helper make it harder to debug the test when it fails?
It’s more important that test code is obvious than DRY, because it’s more likely you’ll be dealing with this test when it fails (and you're not likely to remember why all the top-level code is there).
"
)
```
### Test logger
I prefer test outputs to be verbose, so I usually create a `test_logger()` helper function that allows me to give more context and information with each test:
```{r}
#| code-fold: false
#| eval: true
# test logger helper
test_logger <- function(start = NULL, end = NULL, msg) {
  if (is.null(start) && is.null(end)) {
    cat("\n")
    logger::log_info("{msg}")
  } else if (!is.null(start) && is.null(end)) {
    cat("\n")
    logger::log_info("\n[ START {start} = {msg}]")
  } else if (is.null(start) && !is.null(end)) {
    cat("\n")
    logger::log_info("\n[ END {end} = {msg}]")
  } else {
    cat("\n")
    logger::log_info("\n[ START {start} = {msg}]")
    cat("\n")
    logger::log_info("\n[ END {end} = {msg}]")
  }
}
```
`test_logger()` can be used to 'log' the `start` and `end` of each test, and it includes a message argument (`msg`) I'll use to reference the test `description` argument in each `it()` call.[^test_tools-logging]
[^test_tools-logging]: If you like verbose logging outputs, check out the [`logger` package](https://daroczig.github.io/logger/)
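For example, wrapping a test with `test_logger()` might look like the sketch below (the test body is illustrative, using the `scatter_plot()` arguments from earlier in this chapter):

```{r}
#| eval: false
#| code-fold: false
test_that("scatter_plot() creates a ggplot object", {
  # log the start of the test with a message naming the fixture
  test_logger(start = "fixture", msg = "tidy_ggp2_movies.rds")
  tidy_ggp2_movies <- readRDS(test_path("fixtures", "tidy_ggp2_movies.rds"))
  app_graph <- scatter_plot(tidy_ggp2_movies,
    x_var = "rating", y_var = "length", col_var = "mpaa",
    alpha_var = 0.75, size_var = 3)
  expect_true(ggplot2::is.ggplot(app_graph))
  # log the end of the test with the same message
  test_logger(end = "fixture", msg = "tidy_ggp2_movies.rds")
})
```

The `START`/`END` pairs make it easy to see which test (and which fixture) a run of console output belongs to.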
I tend to use functions like `test_logger()` enough to justify placing them in a testing utility file ([`R/testthat.R`](https://github.com/mjfrigaard/sap/blob/13_tests-modules/R/testthat.R)) below `R/`. Including testing functions in the `R/` folder also ensures they're documented (and any dependencies become part of your app-package).[^test_tools-helpers-r-dir]
[^test_tools-helpers-r-dir]: Placing common files for testing below `R/` is covered in [R Packages, 2ed](https://r-pkgs.org/testing-design.html#hiding-in-plain-sight-files-below-r)
### Test development {#sec-test-coverage-active-file}
While developing, using keyboard shortcuts makes it easier to iterate between building fixtures, writing and running tests, and checking code coverage.
```{r}
#| label: hot_key_tf_01
#| echo: false
#| results: asis
#| eval: true
hot_key(fun = 'tf')
```
```{verbatim}
#| eval: false
#| code-fold: false
devtools:::test_active_file()
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 0 ]
INFO [2023-10-27 12:39:23] [ START fixture = tidy_ggp2_movies.rds]
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 1 ]
INFO [2023-10-27 12:39:23] [ END fixture = tidy_ggp2_movies.rds]
INFO [2023-10-27 12:39:23] [ START data = movies.rda]
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 2 ]
INFO [2023-10-27 12:39:23] [ END data = movies.rda]
```
```{r}
#| label: hot_key_cf_01
#| echo: false
#| results: asis
#| eval: true
hot_key(fun = 'cf')
```
```{r}
#| eval: false
#| code-fold: false
devtools:::test_coverage_active_file()
```
![Test coverage on active file](images/tests_coverage_scatter_plot.png){width="100%" fig-align="center"}
## Test snapshots {#sec-test-snapshots}
```{r}
#| label: git_box_12.3_tests-snapshots
#| echo: false
#| results: asis
#| eval: true
git_margin_box(
contents = "launch",
fig_pw = '65%',
branch = "12.3_tests-snapshots",
repo = 'sap')
```
Writing tests for UI outputs can be difficult because their "correctness" is somewhat subjective and requires human judgment. If the expected output we're interested in testing is cumbersome to describe programmatically, we can consider using a snapshot test. Examples of this include UI elements (which are mostly HTML created by Shiny's UI layout and input/output functions) and data visualizations.[^test_tools-graph-snapshots]
[^test_tools-graph-snapshots]: Graph snapshots are covered in @sec-tests-snapshots-vdiffr.
```{r}
#| label: co_box_mocking
#| echo: false
#| results: asis
#| eval: true
co_box(
color = "b", fold = TRUE,
look = "default", hsize = "1.10", size = "1.05",
header = "Mocking",
contents = "
Test mocks are covered in @sec-tests-mocks-snapshots, but the example isn't from our app-package (it comes from the package development masterclass given at [posit::conf(2023)](https://github.com/posit-conf-2023/pkg-dev-masterclass)).
"
)
```
I've included a small UI function (`text_logo()`) in the `R/` folder of `sap`:
```{r}
#| eval: true
#| code-fold: show
#| code-summary: 'show/hide text_logo()'
#' Create a Text-based Logo in HTML
#'
#' `text_logo()` generates a text-based logo enclosed within HTML tags, allowing
#' for the specification of the heading size (h1 through h6). The logo features
#' a stylized representation of a face within arrows, using ASCII characters.
#'
#' @param size A character string specifying the heading size for the logo.
#' Valid options include "h1", "h2", "h3", "h4", "h5", and "h6".
#' Defaults to "h3".
#'
#' @return An HTML object containing the logo. This object can be directly used
#' in R Markdown documents or Shiny applications to render the logo in a web page.
#'
#' @examples
#' # Generate a logo with default size h3
#' text_logo()
#'
#' # Generate a logo with size h1
#' text_logo(size = "h1")
#'
#'
#' @export
text_logo <- function(size = 'h3') {
  if (any(size %in% c("h1", "h2", "h3", "h4", "h5", "h6"))) {
    htmltools::HTML(
      paste0(
        "<span>\n",
        paste0(" <", size, ">", collapse = ""), "\n",
        "  <code>√\\/--‹(•_•)›--√\\/</code>\n",
        paste0(" </", size, ">", collapse = ""), "\n",
        "</span>\n"
      )
    )
  } else {
    rlang::abort(paste(size, "isn't supported", sep = " "))
  }
}
```
This function generates the HTML so we can include a small logo in the UI.[^test_tools-text_logo]
[^test_tools-text_logo]: This is a simple example, but I chose it because it needs some tricky escape characters to work.
:::{layout="[50,50]" layout-valign="center"}
The output in the UI:
``` r
text_logo()
```
The pre-rendered HTML:
``` html
<span>
<h3>
<code>√\/--‹(•_•)›--√\/</code>
</h3>
</span>
```
:::
Reviewing the entire HTML contents of `movies_ui()` to see if `text_logo()` is working isn't practical, but we can store its results in a snapshot file using `expect_snapshot()`. In `tests/testthat/test-text_logo.R`, I'll write the following tests:[^test_tools-snapshot-advice]
```{r}
#| label: co_box_snaps_warning
#| echo: false
#| results: asis
#| eval: true
co_box(
color = "r",
look = "default", hsize = "1.10", size = "1.05",
fold = TRUE,
header = "Warning: snapshots and multiple line descriptions",
contents = "
It's important to include any multiple-line text in the `describe()` block when creating snapshots with `expect_snapshot()`. Multiline text in the `desc` of `test_that()` will always overwrite the snapshot file.
Thank you to [@LDSamson](https://github.com/LDSamson) for bringing this to my attention!
See [this issue](https://github.com/r-lib/testthat/issues/1900) for more information.
"
)
```
[^test_tools-snapshot-advice]: Mastering Shiny covers [creating a snapshot file](https://mastering-shiny.org/scaling-testing.html#user-interface-functions) to test UI elements, but also notes this is probably not the best approach.
- Confirm the default `size` argument:
```{r}
#| code-fold: false
#| eval: false
describe(
  "Scenario: Generating a logo with default size
     Given the user did not specify a [size] in text_logo()
     When text_logo() is invoked without a [size] argument
     Then the correct HTML for a ['h3'] text logo is returned",
  code = {
    test_that("text_logo()", code = {
      test_logger(start = "snap", msg = "text_logo()")
      expect_snapshot(text_logo())
      test_logger(end = "snap", msg = "text_logo()")
    })
  })
```
- Confirm a new `size` argument:
```{r}
#| code-fold: false
#| eval: false
describe(
  "Scenario: Generating a logo of a specified size
     Given the user wants a ['h1'] sized text logo
     When text_logo(size = 'h1') is invoked
     Then the correct HTML for a ['h1'] text logo is returned",
  code = {
    test_that("text_logo('h1')", code = {
      test_logger(start = "snap", msg = "text_logo('h1')")
      expect_snapshot(text_logo("h1"))
      test_logger(end = "snap", msg = "text_logo('h1')")
    })
  })
```
- Confirm an invalid `size` argument:
```{r}
#| code-fold: false
#| eval: false
describe(
  "Scenario: Attempting to generate a logo with an invalid size
     Given the user specifies an invalid [size] for the text logo
     When text_logo() is invoked with an invalid [size] argument
     Then an error is returned stating the [size] is not recognized",
  code = {
    test_that("text_logo('invalid')", code = {
      test_logger(start = "snap", msg = "text_logo('invalid')")
      expect_error(text_logo("massive"), NULL)
      test_logger(end = "snap", msg = "text_logo('invalid')")
    })
  })
```
When I test this file, I see the following results and output from `test_logger()`, along with two warnings about the creation of the snapshot files:
```{r}
#| code-fold: false
#| eval: false
devtools:::test_active_file()
```
```{verbatim}
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 0 ]
INFO [2023-11-17 08:50:55] [ START snap = text_logo()]
[ FAIL 0 | WARN 1 | SKIP 0 | PASS 1 ]
INFO [2023-11-17 08:50:55] [ END snap = text_logo()]
INFO [2023-11-17 08:50:55] [ START snap = text_logo('h1')]
[ FAIL 0 | WARN 2 | SKIP 0 | PASS 2 ]
INFO [2023-11-17 08:50:55] [ END snap = text_logo('h1')]
INFO [2023-11-17 08:50:55] [ START snap = text_logo('invalid')]
[ FAIL 0 | WARN 2 | SKIP 0 | PASS 3 ]
INFO [2023-11-17 08:50:55] [ END snap = text_logo('invalid')]
```
```{verbatim}
── Warning (test-text_logo.R:13:7): text_logo() ──
Adding new snapshot:
Code
text_logo()
Output
<span>
<h3>
<code>√\/--‹(•_•)›--√\/</code>
</h3>
</span>
── Warning (test-text_logo.R:22:7): text_logo('h1') ──
Adding new snapshot:
Code
text_logo("h1")
Output
<span>
<h1>
<code>√\/--‹(•_•)›--√\/</code>
</h1>
</span>
[ FAIL 0 | WARN 2 | SKIP 0 | PASS 3 ]
```
### Reviewing `_snaps/` {#sec-test-snaps-review}
Snapshots are stored in the `tests/testthat/_snaps/` folder. The output from `expect_snapshot()` is a markdown file with contents similar to what we saw above in the warning message. Should the markdown file contents change in future test runs, the tests will fail and we'll be prompted to review the changes. [^test_tools-shinytest2-snaps-prelude]
For example, if the `<span>` changed to a `<div>` in `text_logo()` and caused a test failure, I could review the changes in `tests/testthat/_snaps/text_logo.md` with `testthat::snapshot_review('text_logo')`:
:::{#fig-tests_tools_review_snapshot}
![`testthat::snapshot_review('text_logo')`](images/tests_tools_review_snapshot.png){#fig-tests_tools_review_snapshot width="100%" fig-align="center"}
Click **Accept**: `Accepting snapshot: 'tests/testthat/_snaps/text_logo.md'`
:::
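The same review can also be done non-interactively from the console; a minimal sketch using `testthat`'s snapshot functions:

```{r}
#| eval: false
#| code-fold: false
# Review changed snapshots for one test file in the interactive widget
testthat::snapshot_review("text_logo")

# Or, if the new output is known to be correct, accept it directly
testthat::snapshot_accept("text_logo")
```

`snapshot_accept()` replaces the stored `.md` file with the new output, so it's best reserved for changes you've already inspected.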
[^test_tools-shinytest2-snaps-prelude]: We'll encounter this folder again in the @sec-tests-system on system tests with `shinytest2`.
For examples of snapshot tests for graphical outputs, review the examples in @sec-tests-mocks-snapshots.
## Recap {.unnumbered}
```{r}
#| label: co_box_app_recap
#| echo: false
#| results: asis
#| eval: true
co_box(
color = "g",
header = "Recap: Test tools",
fold = FALSE,
look = "default", hsize = "1.10", size = "1.05",
contents = "
<h5>**Test tools**</h5>
In this chapter, we've covered powerful testing tools provided by the `testthat` package. Let's briefly recap what we've learned:
- **Test Fixtures**: We've seen how test fixtures are reliable for setting up a consistent testing environment. Using fixtures to load test data ensures that each test runs under uniform conditions. This approach enhances our tests' reliability and simplifies the testing process by abstracting standard setup and teardown tasks.
  - Fixtures prepare the test environment and the initial state. The `tidy_ggp2_movies.rds` file is a test fixture; reading it creates the `tidy_ggp2_movies` data within the test scope.
- **Test Helpers**: Helpers allow us to encapsulate repetitive tasks or setup configurations, making our tests cleaner, more readable, and easier to maintain. Whether it's a custom function to create mock Shiny sessions or a utility for simulating user input, helpers streamline the testing workflow significantly.
  - If you find yourself writing small, reusable pieces of code to perform specific tasks inside your tests, consider converting them into a function in `tests/testthat/helper.R`.
- **Test Snapshots**: Snapshot testing introduces the `tests/testthat/_snaps` folder for capturing and comparing the Shiny application state over time. Snapshot testing helps ensure that changes to the application do not unintentionally alter its behavior or appearance, and snapshot files allow us to quickly identify and address regressions by automatically detecting differences between the expected and actual snapshots.
  - Test complex outputs using snapshots, but cautiously. Snapshots are susceptible to failure from small (sometimes inconsequential) changes, which produce spurious test failures.
"
)
```
```{r}
#| label: git_contrib_box
#| echo: false
#| results: asis
#| eval: true
git_contrib_box()
```