Comparing changes

Choose two branches to see what's changed or to start a new pull request.

base repository: nk027/bvar, base: v0.1.4
head repository: nk027/bvar, compare: master
Commits on Jul 9, 2019

  1. Adapt some of Lukas intro text for the README
     Nikolas Kuschnig committed Jul 9, 2019 (f874fd1)

Commits on Jul 10, 2019

  1. Check and cut copyrighted FRED-QD series
     Nikolas Kuschnig committed Jul 10, 2019 (e623e92)

Commits on Aug 25, 2019

  1. Add some calcium
     Nikolas Kuschnig committed Aug 25, 2019 (dec1c86)

Commits on Aug 26, 2019

  1. Change error wording slightly
     Nikolas Kuschnig committed Aug 26, 2019 (1f92095)
  2. Functionalise beta_comp creation
     Nikolas Kuschnig committed Aug 26, 2019 (e807b80)
  3. Add predict method to calc fcast ex-post
     Nikolas Kuschnig committed Aug 26, 2019 (f6ee369)
  4. gst
     Nikolas Kuschnig committed Aug 26, 2019 (f6dca08)
  5. Revert "gst"
     This reverts commit f6dca08.
     Nikolas Kuschnig committed Aug 26, 2019 (851a15e)
  6. Try not to be too dumb
     This reverts commit 851a15e.
     Nikolas Kuschnig committed Aug 26, 2019 (47af848)
  7. Work on new fancy forecasts
     Nikolas Kuschnig committed Aug 26, 2019 (e0e1de8)
  8. Wrap up forecast changes
     Nikolas Kuschnig committed Aug 26, 2019 (49849e1)
  9. Test adapted forecasts
     Nikolas Kuschnig committed Aug 26, 2019 (cc69eda)
  10. Fix newdata, add examples
      Nikolas Kuschnig committed Aug 26, 2019 (64eb86e)
  11. Change names and structure
      Nikolas Kuschnig committed Aug 26, 2019 (6e6e38f)
  12. Move from n_save to n_thin for consistency
      Nikolas Kuschnig committed Aug 26, 2019 (0bfd597)
  13. Port fcast changes to irf
      Nikolas Kuschnig committed Aug 26, 2019 (a930b61)
  14. Add some doc to priors
      Nikolas Kuschnig committed Aug 26, 2019 (156efe2)

Commits on Aug 27, 2019

  1. Work on fitted method
     Nikolas Kuschnig committed Aug 27, 2019 (b058f6f)
  2. Address #23 for fitted()
     Nikolas Kuschnig committed Aug 27, 2019 (bbe5cf9)
  3. Work on adding residuals method
     Nikolas Kuschnig committed Aug 27, 2019 (a678ca5)
  4. Wrap resid and fitted methods
     Nikolas Kuschnig committed Aug 27, 2019 (5bf7eaa)
  5. Fix confidence band issue, close #23
     Nikolas Kuschnig committed Aug 27, 2019 (fe096cf)
  6. 8330ddf

Commits on Aug 28, 2019

  1. Add doc info on predict & irf
     Nikolas Kuschnig committed Aug 28, 2019 (d657751)
  2. Work on fevd method
     Nikolas Kuschnig committed Aug 28, 2019 (92286d2)
  3. Merge remote-tracking branch 'refs/remotes/origin/master'
     Nikolas Kuschnig committed Aug 28, 2019 (2b92067)
  4. Add print method to fevd, change stuff
     Nikolas Kuschnig committed Aug 28, 2019 (6cb1ccd)
  5. Work on coef and vcov methods, add print
     Nikolas Kuschnig committed Aug 28, 2019 (318e62a)
  6. Work on fevd methods, improve doc & prints
     Nikolas Kuschnig committed Aug 28, 2019 (45852da)
  7. Test a bit
     Nikolas Kuschnig committed Aug 28, 2019 (84b5ef8)
  8. Work on docs, improve quantile_check
     Nikolas Kuschnig committed Aug 28, 2019 (3d65a25)
  9. Work on docs, close #17
     Nikolas Kuschnig committed Aug 28, 2019 (34f3309)
  10. Homogenise prior construction, add print methods
      Nikolas Kuschnig committed Aug 28, 2019 (265c036)
  11. Add pkg doc, fix indentation of prior print
      Nikolas Kuschnig committed Aug 28, 2019 (43bd1ca)
  12. Build doc and devtool::check (lots of docs)
      Nikolas Kuschnig committed Aug 28, 2019 (b5bbbc1)

Commits on Aug 29, 2019

  1. Fix fcast summary, improve docs
     Nikolas Kuschnig committed Aug 29, 2019 (e1cb577)
  2. Work on irf, add counter_lim to sign restr
     Nikolas Kuschnig committed Aug 29, 2019 (448e2a9)
  3. Go through IRF docs
     Nikolas Kuschnig committed Aug 29, 2019 (6436caa)
  4. Go through new method docs
     Nikolas Kuschnig committed Aug 29, 2019 (4a5eecc)
  5. Run devtools::check, fix sign_lim error
     Thanks to CM for making this week possible
     Nikolas Kuschnig committed Aug 29, 2019 (0ab0205)
  6. Update comments with work done so far
     Nikolas Kuschnig committed Aug 29, 2019 (39d81b9)
  7. Change saveRDS to v2, maybe fix #25
     Nikolas Kuschnig committed Aug 29, 2019 (34f8c7a)
  8. Add residual print method
     Nikolas Kuschnig committed Aug 29, 2019 (056a8e7)
  9. Bypass vars import nicely
     Nikolas Kuschnig committed Aug 29, 2019 (83ef85d)
  10. Rework plotting, add density
      Nikolas Kuschnig committed Aug 29, 2019 (27a4b25)
  11. Document, tiny plot and density fixes
      Nikolas Kuschnig committed Aug 29, 2019 (6d06488)

Commits on Aug 30, 2019

  1. Deprecate all bv_plot_ functions
     Nikolas Kuschnig committed Aug 30, 2019 (2fde016)
  2. Improve fitted, close #28
     Nikolas Kuschnig committed Aug 30, 2019 (ab3485d)
  3. Run some checks, add stats imports
     Nikolas Kuschnig committed Aug 30, 2019 (947a5c8)
  4. Internal release, work on news & readme, prelim logLik
     Nikolas Kuschnig committed Aug 30, 2019 (0dfcce2)
Showing with 13,348 additions and 2,700 deletions.
  1. +12 −7 .Rbuildignore
  2. +314 −1 .gitignore
  3. +22 −12 DESCRIPTION
  4. +1,103 −0 LICENSE
  5. +80 −6 NAMESPACE
  6. +180 −0 NEWS.md
  7. +240 −210 R/10_bvar.R
  8. +102 −43 R/11_input.R
  9. +200 −88 R/12_aux.R
  10. +62 −0 R/13_mvtnorm.R
  11. +205 −0 R/15_prep_data.R
  12. +66 −65 R/20_ml.R
  13. +22 −29 R/21_draw.R
  14. +62 −0 R/22_calc.R
  15. +0 −37 R/22_get.R
  16. +0 −55 R/30_metropolis.R
  17. +79 −0 R/30_metropolis_setup.R
  18. +14 −0 R/35_metropolis_print.R
  19. +31 −23 R/{40_priors.R → 40_priors_setup.R}
  20. +150 −52 R/41_minnesota.R
  21. +54 −34 R/42_dummy.R
  22. +13 −16 R/43_sur_soc.R
  23. +78 −0 R/45_priors_print.R
  24. +0 −34 R/50_fcast.R
  25. +84 −0 R/50_fcast_setup.R
  26. +12 −16 R/51_fcast_compute.R
  27. +88 −0 R/52_fcast_cond.R
  28. +198 −0 R/54_predict.R
  29. +77 −0 R/55_fcast_print.R
  30. +214 −0 R/58_fcast_plot.R
  31. +0 −82 R/60_irf.R
  32. +130 −0 R/60_irf_setup.R
  33. +30 −27 R/61_irf_compute.R
  34. +103 −30 R/62_sign_restr.R
  35. +0 −25 R/63_fevd.R
  36. +25 −0 R/63_fevd_compute.R
  37. +314 −0 R/64_irf_method.R
  38. +135 −0 R/65_irf_print.R
  39. +188 −0 R/68_irf_plot.R
  40. +132 −0 R/69_fevd_plot.R
  41. +92 −0 R/70_hist_decomp.R
  42. +102 −0 R/71_rmse.R
  43. +112 −0 R/80_coda.R
  44. +0 −59 R/80_plot.R
  45. +118 −0 R/81_parallel.R
  46. +0 −138 R/84_plot_hyper.R
  47. +0 −134 R/85_plot_fcast.R
  48. +208 −0 R/85_transform.R
  49. +0 −138 R/86_plot_irf.R
  50. +0 −42 R/90_method.R
  51. +21 −0 R/90_print.R
  52. +195 −0 R/91_plot.R
  53. +174 −0 R/92_coef.R
  54. +114 −0 R/93_density.R
  55. +139 −0 R/94_logLik.R
  56. 0 R/94_method_priors.R
  57. +177 −0 R/95_fitted.R
  58. +0 −38 R/95_method_fcast.R
  59. +0 −64 R/96_method_irf.R
  60. +59 −0 R/96_summary.R
  61. +94 −0 R/97_companion.R
  62. +25 −0 R/BVAR-package.R
  63. +32 −15 R/data.R
  64. 0 R/{93_method_metropolis.R → deprecated.R}
  65. +27 −0 R/zzz.R
  66. +57 −7 README.md
  67. +0 −22 cran-comments.md
  68. BIN data/fred_md.rda
  69. +267 −0 data/fred_permitted.txt
  70. BIN data/fred_qd.rda
  71. +0 −36 details.Rmd
  72. +31 −0 inst/CITATION
  73. +0 −45 inst/REFERENCES.bib
  74. +279 −0 inst/fred_trans.csv
  75. +127 −0 inst/tinytest/internal.R
  76. +27 −0 inst/tinytest/manual.R
  77. +257 −0 inst/tinytest/test_BVAR.R
  78. +50 −0 man/BVAR-package.Rd
  79. +61 −0 man/WAIC.bvar.Rd
  80. +39 −9 man/bv_dummy.Rd
  81. +38 −12 man/bv_fcast.Rd
  82. +64 −34 man/bv_irf.Rd
  83. +77 −0 man/bv_metropolis.Rd
  84. +0 −44 man/bv_mh.Rd
  85. +121 −0 man/bv_minnesota.Rd
  86. +0 −67 man/bv_mn.Rd
  87. +0 −46 man/bv_plot_trace.Rd
  88. +21 −13 man/bv_priors.Rd
  89. +116 −57 man/bvar.Rd
  90. +92 −0 man/coda.Rd
  91. +64 −0 man/coef.bvar.Rd
  92. +53 −0 man/companion.Rd
  93. +81 −0 man/density.bvar.Rd
  94. +71 −0 man/fitted.bvar.Rd
  95. +35 −13 man/fred_qd.Rd
  96. +91 −0 man/fred_transform.Rd
  97. +46 −0 man/hist_decomp.bvar.Rd
  98. +112 −0 man/irf.bvar.Rd
  99. +39 −0 man/logLik.bvar.Rd
  100. +102 −0 man/par_bvar.Rd
  101. +59 −14 man/plot.bvar.Rd
  102. +69 −28 man/plot.bvar_fcast.Rd
  103. +64 −30 man/plot.bvar_irf.Rd
  104. +89 −0 man/predict.bvar.Rd
  105. +0 −41 man/print.bvar.Rd
  106. +59 −0 man/rmse.bvar.Rd
  107. +47 −0 man/summary.bvar.Rd
  108. +0 −27 scripts/dl_fred_qd.R
  109. +0 −192 scripts/example_paper.R
  110. +139 −0 scripts/fred_copyright.R
  111. +116 −0 scripts/fred_prep.R
  112. +0 −158 scripts/setup.R
  113. +0 −285 scripts/setup2.R
  114. +10 −0 tests/tinytest.R
  115. +1 −0 vignettes/.install_extras
  116. +40 −0 vignettes/Makefile
  117. +820 −0 vignettes/article.Rnw
  118. +1,653 −0 vignettes/jss.bst
  119. +510 −0 vignettes/jss.cls
  120. BIN vignettes/jsslogo.jpg
  121. +746 −0 vignettes/refs.bib
19 changes: 12 additions & 7 deletions .Rbuildignore
@@ -1,12 +1,17 @@
^.*\.Rproj$
^\.Rproj\.user$

^details\....$
^data/fred_qd_full\.rda$
^data/fred_md_full\.rda$
^data/fred_permitted\.txt$

^scripts/
^dl_fred_qd\.R$
^setup\.R$
^doc$
^Meta$
^scripts$

^cran-comments\.md$
^NEWS\.md$
^README\.md$
^vignettes/.*\.jpg$
^vignettes/.*\.png$
^vignettes/.*\.pdf$
^vignettes/jss.*$
^vignettes/article_jss.Rnw$
^vignettes/Makefile$
315 changes: 314 additions & 1 deletion .gitignore
@@ -1,5 +1,318 @@

### Custom

# These would include copyrighted data
data/fred_qd_full.rda
data/fred_md_full.rda

doc/
Meta/
# man/

vignettes/*.html
vignettes/*.png
vignettes/*.jpg
vignettes/*.pdf
vignettes/*.tex
vignettes/*.R
vignettes/article_jss.Rnw

# bvar.Rproj


### R stuff and temporary files
.Rproj.user

.Rhistory
.Rapp.history

.RData
.Ruserdata
.RDataTmp.Ruserdata

.Renviron

*.utf8.md
*.knit.md

*_cache/
/cache/

/*.tar.gz

.DS_Store


# Created by https://www.gitignore.io/api/latex
# Edit at https://www.gitignore.io/?templates=latex

### LaTeX ###
## Core latex/pdflatex auxiliary files:
*.aux
*.lof
*.log
*.lot
*.fls
*.out
*.toc
*.fmt
*.fot
*.cb
*.cb2
.*.lb

## Intermediate documents:
*.dvi
*.xdv
*-converted-to.*
# these rules might exclude image files for figures etc.
# *.ps
# *.eps
# *.pdf

## Generated if empty string is given at "Please type another file name for output:"
.pdf

## Bibliography auxiliary files (bibtex/biblatex/biber):
*.bbl
*.bcf
*.blg
*-blx.aux
*-blx.bib
*.run.xml

## Build tool auxiliary files:
*.fdb_latexmk
*.synctex
*.synctex(busy)
*.synctex.gz
*.synctex.gz(busy)
*.pdfsync

## Build tool directories for auxiliary files
# latexrun
latex.out/

## Auxiliary and intermediate files from other packages:
# algorithms
*.alg
*.loa

# achemso
acs-*.bib

# amsthm
*.thm

# beamer
*.nav
*.pre
*.snm
*.vrb

# changes
*.soc

# comment
*.cut

# cprotect
*.cpt

# elsarticle (documentclass of Elsevier journals)
*.spl

# endnotes
*.ent

# fixme
*.lox

# feynmf/feynmp
*.mf
*.mp
*.t[1-9]
*.t[1-9][0-9]
*.tfm

#(r)(e)ledmac/(r)(e)ledpar
*.end
*.?end
*.[1-9]
*.[1-9][0-9]
*.[1-9][0-9][0-9]
*.[1-9]R
*.[1-9][0-9]R
*.[1-9][0-9][0-9]R
*.eledsec[1-9]
*.eledsec[1-9]R
*.eledsec[1-9][0-9]
*.eledsec[1-9][0-9]R
*.eledsec[1-9][0-9][0-9]
*.eledsec[1-9][0-9][0-9]R

# glossaries
*.acn
*.acr
*.glg
*.glo
*.gls
*.glsdefs

# uncomment this for glossaries-extra (will ignore makeindex's style files!)
# *.ist

# gnuplottex
*-gnuplottex-*

# gregoriotex
*.gaux
*.gtex

# htlatex
*.4ct
*.4tc
*.idv
*.lg
*.trc
*.xref

# hyperref
*.brf

# knitr
*-concordance.tex
# TODO Comment the next line if you want to keep your tikz graphics files
*.tikz
*-tikzDictionary

# listings
*.lol

# luatexja-ruby
*.ltjruby

# makeidx
*.idx
*.ilg
*.ind

# minitoc
*.maf
*.mlf
*.mlt
*.mtc[0-9]*
*.slf[0-9]*
*.slt[0-9]*
*.stc[0-9]*

# minted
_minted*
*.pyg

# morewrites
*.mw

# nomencl
*.nlg
*.nlo
*.nls

# pax
*.pax

# pdfpcnotes
*.pdfpc

# sagetex
*.sagetex.sage
*.sagetex.py
*.sagetex.scmd

# scrwfile
*.wrt

# sympy
*.sout
*.sympy
sympy-plots-for-*.tex/

# pdfcomment
*.upa
*.upb

# pythontex
*.pytxcode
pythontex-files-*/

# tcolorbox
*.listing

# thmtools
*.loe

# TikZ & PGF
*.dpth
*.md5
*.auxlock

# todonotes
*.tdo

# vhistory
*.hst
*.ver

# easy-todo
*.lod

# xcolor
*.xcp

# xmpincl
*.xmpi

# xindy
*.xdy

# xypic precompiled matrices
*.xyc

# endfloat
*.ttt
*.fff

# Latexian
TSWLatexianTemp*

## Editors:
# WinEdt
*.bak
*.sav

# Texpad
.texpadtmp

# LyX
*.lyx~

# Kile
*.backup

# KBibTeX
*~[0-9]*

# auto folder when using emacs and auctex
./auto/*
*.el

# expex forward references with \gathertags
*-tags.tex

# standalone packages
*.sta

### LaTeX Patch ###
# glossaries
*.glstex

# End of https://www.gitignore.io/api/latex
34 changes: 22 additions & 12 deletions DESCRIPTION
@@ -1,21 +1,31 @@
Package: BVAR
Type: Package
Title: Hierarchical Bayesian Vector Autoregression
Version: 0.1.4
Date: 2019-06-30
Authors@R: c(person("Nikolas", "Kuschnig", role = c("aut", "cre"), email = "nikolas.kuschnig@wu.ac.at"),
person("Lukas", "Vashold", role = "aut"),
person("Michael", "McCracken", role = "dtc", comment = "author of the FRED-QD dataset"))
Author: Nikolas Kuschnig [aut, cre],
Lukas Vashold [aut],
Michael McCracken [dtc] (author of the FRED-QD dataset)
Version: 1.0.5
Date: 2024-02-13
Authors@R: c(person("Nikolas", "Kuschnig", role = c("aut", "cre"), email = "nikolas.kuschnig@wu.ac.at", comment = c(ORCID = "0000-0002-6642-2543")),
person("Lukas", "Vashold", role = "aut", comment = c(ORCID = "0000-0002-3562-3414")),
person("Nirai", "Tomass", role = "ctb"),
person("Michael", "McCracken", role = "dtc"),
person("Serena", "Ng", role = "dtc"))
Author: Nikolas Kuschnig [aut, cre] (<https://orcid.org/0000-0002-6642-2543>),
Lukas Vashold [aut] (<https://orcid.org/0000-0002-3562-3414>),
Nirai Tomass [ctb],
Michael McCracken [dtc], Serena Ng [dtc]
Maintainer: Nikolas Kuschnig <nikolas.kuschnig@wu.ac.at>
Description: Toolkit for the estimation of hierarchical Bayesian vector autoregressions. Implements hierarchical prior selection for conjugate priors in the fashion of Giannone, Lenza & Primiceri (2015) <doi:10.1162/REST_a_00483>. Allows for the computation of impulse responses and forecasts and provides functionality for assessing results.
Description: Estimation of hierarchical Bayesian vector autoregressive models
following Kuschnig & Vashold (2021) <doi:10.18637/jss.v100.i14>.
Implements hierarchical prior selection for conjugate priors in the fashion
of Giannone, Lenza & Primiceri (2015) <doi:10.1162/REST_a_00483>.
Functions to compute and identify impulse responses, calculate forecasts,
forecast error variance decompositions and scenarios are available.
Several methods to print, plot and summarise results facilitate analysis.
URL: https://github.com/nk027/bvar
BugReports: https://github.com/nk027/bvar/issues
Depends: R (>= 3.3.0)
Imports: MASS, stats, graphics, utils
License: GPL-3
Imports: mvtnorm, stats, graphics, utils, grDevices
Suggests: coda, vars, tinytest
License: GPL-3 | file LICENSE
Encoding: UTF-8
LazyData: true
RoxygenNote: 6.1.1
RoxygenNote: 7.3.1
1,103 changes: 1,103 additions & 0 deletions LICENSE

Large diffs are not rendered by default.

86 changes: 80 additions & 6 deletions NAMESPACE
@@ -1,44 +1,118 @@
# Generated by roxygen2: do not edit by hand

S3method("fevd<-",bvar)
S3method("irf<-",bvar)
S3method("predict<-",bvar)
S3method(WAIC,bvar)
S3method(WAIC,default)
S3method(coef,bvar)
S3method(companion,bvar)
S3method(companion,default)
S3method(density,bvar)
S3method(fevd,bvar)
S3method(fevd,bvar_fevd)
S3method(fevd,bvar_irf)
S3method(fevd,default)
S3method(fitted,bvar)
S3method(hist_decomp,bvar)
S3method(hist_decomp,default)
S3method(irf,bvar)
S3method(irf,bvar_irf)
S3method(irf,default)
S3method(logLik,bvar)
S3method(lps,bvar)
S3method(lps,default)
S3method(plot,bvar)
S3method(plot,bvar_chains)
S3method(plot,bvar_density)
S3method(plot,bvar_fcast)
S3method(plot,bvar_irf)
S3method(plot,bvar_resid)
S3method(predict,bvar)
S3method(predict,bvar_fcast)
S3method(print,bv_dummy)
S3method(print,bv_fcast)
S3method(print,bv_irf)
S3method(print,bv_metropolis)
S3method(print,bv_minnesota)
S3method(print,bv_priors)
S3method(print,bv_psi)
S3method(print,bvar)
S3method(print,bvar_coefs)
S3method(print,bvar_comp)
S3method(print,bvar_density)
S3method(print,bvar_fcast)
S3method(print,bvar_fcast_summary)
S3method(print,bvar_fevd)
S3method(print,bvar_fitted)
S3method(print,bvar_irf)
S3method(print,bvar_irf_summary)
S3method(print,bvar_resid)
S3method(print,bvar_summary)
S3method(print,bvar_vcovs)
S3method(residuals,bvar)
S3method(rmse,bvar)
S3method(rmse,default)
S3method(summary,bvar)
S3method(summary,bvar_fcast)
S3method(summary,bvar_irf)
S3method(vcov,bvar)
export("fevd<-")
export("irf<-")
export("predict<-")
export(WAIC)
export(bv_alpha)
export(bv_dummy)
export(bv_fcast)
export(bv_irf)
export(bv_lambda)
export(bv_metropolis)
export(bv_mh)
export(bv_minnesota)
export(bv_mn)
export(bv_plot)
export(bv_plot_density)
export(bv_plot_fcast)
export(bv_plot_irf)
export(bv_plot_trace)
export(bv_priors)
export(bv_psi)
export(bv_soc)
export(bv_sur)
export(bvar)
importFrom(MASS,mvrnorm)
export(companion)
export(fevd)
export(fred_code)
export(fred_transform)
export(hist_decomp)
export(independent_index)
export(irf)
export(lps)
export(par_bvar)
export(rmse)
importFrom(grDevices,rgb)
importFrom(graphics,abline)
importFrom(graphics,grid)
importFrom(graphics,lines)
importFrom(graphics,par)
importFrom(graphics,plot)
importFrom(graphics,polygon)
importFrom(mvtnorm,dmvnorm)
importFrom(mvtnorm,rmvnorm)
importFrom(stats,arima)
importFrom(stats,coef)
importFrom(stats,density)
importFrom(stats,dgamma)
importFrom(stats,dnorm)
importFrom(stats,fitted)
importFrom(stats,logLik)
importFrom(stats,optim)
importFrom(stats,predict)
importFrom(stats,quantile)
importFrom(stats,resid)
importFrom(stats,residuals)
importFrom(stats,rnorm)
importFrom(stats,runif)
importFrom(stats,ts)
importFrom(stats,ts.plot)
importFrom(stats,vcov)
importFrom(utils,head)
importFrom(utils,read.table)
importFrom(utils,setTxtProgressBar)
importFrom(utils,tail)
importFrom(utils,txtProgressBar)
180 changes: 180 additions & 0 deletions NEWS.md
@@ -0,0 +1,180 @@

# v1.0.5, CRAN Update 10
- Add historical decompositions and RMSE/LPS, and WAIC for analysis
- Improve the algorithm to draw sign restrictions
- Shocks are now built and checked per variable
- Fix the CRAN *NOTE* on missing "tangle output"
- Update FRED-datasets to 2023-10 vintage
- Newer FRED-QD files include (unannounced) small naming changes
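
A minimal sketch of how the new analysis helpers above might be called; the series selection, transformation codes, and settings are assumptions for illustration, only the exported function names come from the package:

```r
library("BVAR")

data("fred_qd")
y <- fred_transform(fred_qd[, c("GDPC1", "CPIAUCSL", "FEDFUNDS")],
  codes = c(5, 5, 1))
x <- bvar(y, lags = 2, n_draw = 2500L, n_burn = 500L, verbose = FALSE)

WAIC(x)         # Widely applicable information criterion
hist_decomp(x)  # Historical decomposition of the shocks
# rmse() and lps() are also exported for assessing forecast accuracy
```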


# v1.0.4, CRAN Update 9

- Fix bug from last update, where automatic ARIMA-priors were sqrt'd twice
- Thanks to Michael Wolf for pointing this out, and
- many thanks to Nirai Tomass for discovering the bug
- Update **FRED-MD** and **FRED-QD** datasets to the 2023-02 vintage


# v1.0.3, CRAN Update 8

- Fix bug where warnings caused an error during automatic ARIMA-based priors
- Thanks to Martin Feldkircher for pointing this out


# v1.0.2, CRAN Update 7 / JSS Release

- Add DOI in the CITATION file for a new **JSS publication**
- DOI will be registered after publication on CRAN
- References now use the DOI interface instead of URLs
- Update **FRED-QD** and **FRED-MD** datasets to 2021-10
- Fix minor issues with vignette (e.g. fixed dataset, references, etc)
- Add verbosity to ARIMA-based automatic prior settings
- Add hook to simplify use of shared generics with *vars*
- Add *README* file to the package with correct URLs


# v1.0.1, CRAN Update 6

- Add identification via zero and sign restrictions
- Update *CITATION* with the forthcoming JSS reference
- Prepare for tidy outputs in *BVARverse*
- Fix minor bugs and typos (internal checks, documentation, and vignette)
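
A minimal sketch of setting up sign restrictions; the restriction pattern below is purely illustrative, and the exact encoding of restricted and unrestricted elements is documented in `?bv_irf`:

```r
library("BVAR")

# Hypothetical sign pattern for a three-variable model: +1 (-1) restricts the
# contemporaneous response to be positive (negative)
sr <- matrix(c( 1,  1,  1,
               -1,  1,  1,
               -1, -1,  1), nrow = 3, byrow = TRUE)
irf_signs <- bv_irf(horizon = 12, identification = TRUE, sign_restr = sr)

# The settings are then passed to bvar() via its `irf` argument, or used
# ex-post with irf()
```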


# v1.0.0, CRAN Update 5 / JSS Revision 2

- Add fancy **vignette** with background and demonstrations
- Add new features
- New **FRED-MD** database and updated FRED-QD (`data("fred_md")`)
- Transformation helper functions (`fred_transform()`, `fred_code()`)
- Add **Conditional forecasting** (see `?bv_fcast()`)
- New replacement functions for `irf()` and `predict()` (`irf(x) <- irf(x)`)
- Provide wrapper for parallelised execution (`par_bvar()`)
- Improve existing features
- Enhance IRF and forecast plotting
- `area` argument adds polygons for credible intervals
- `t_back` allows adding realised values before forecasts
- `col` and `fill` arguments allow changing colours
- transparency is applied to sequential lines / polygons
- Improved x-axis labelling
- Regex may be used for `vars`, `vars_response`, and `vars_impulse`
- New `type` for `coef()`, `fitted()`, etc, to retrieve means / quantiles
- Add constructors for the prior mean `b` argument in `bv_mn()`
- Auto `psi` now allows for one order of integration
- **Enhance speed** considerably (~2-10 times faster)
- Move IRF and forecasts out of MCMC
- Capitalise upon matrix properties
- Cached and customised multivariate normal drawing
- Optimised FEVD computation
- Fix bugs
- IRF calculation is now ordered properly (please recalculate)
- *coda* methods are now proper methods (potential issue on Windows)
- Vectorised `scale_hess` now works properly
- Remove deprecated functions and arguments
- Work on documentation and examples
- Update citation information
- Improve upon internal structure
- Unit tests with *tinytest* for development (skipped on CRAN)
- Outsource additional steps to dedicated functions
- More robustness checks and add verbosity to errors
- Tested extensively on R 4.0.0 and R 3.6.3
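
A minimal sketch of the reworked workflow described above; the series, transformation codes, horizons, and plot settings are assumptions for illustration:

```r
library("BVAR")

data("fred_qd")
y <- fred_transform(fred_qd[, c("GDPC1", "CPIAUCSL", "FEDFUNDS")],
  codes = c(5, 5, 1))
x <- bvar(y, lags = 2, verbose = FALSE)

# Ex-post forecasts and impulse responses, stored via the replacement functions
predict(x) <- predict(x, horizon = 16)
irf(x) <- irf(x, horizon = 12, fevd = TRUE)

# Enhanced plotting: credible-interval polygons and realised values up front
plot(predict(x), area = TRUE, t_back = 32)
```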


# v0.2.2, CRAN Update 4 / Impulse Response Hotfix

- Fix impulse response calculation (stray transpose of coefficients)
- Thanks, Maximilian Böck for helping us track this down
- Add some verbosity to error messages
- Prepare for next major release
- Include messages about future extensions and changes of FEVDs
- Update docs on future construction of sign restrictions
- Update docs on prediction and IRFs in `bvar()`
- Add FRED-MD co-author Serena Ng to data contributors


# v0.2.1, CRAN Update 3 / FRED-QD ODC-BY 1.0

- Clarify exact ToU of the FRED-QD dataset with St. Louis Fed
- Comply with the new modified **ODC-BY 1.0** for FRED-QD
- Mention and add license to *LICENSE* file (linked in *DESCRIPTION*)
- Add the copyrighted series we are allowed to use
- Mention updates and license in the data documentation
- Fix and improve documentation


# v0.2.0, CRAN Update 2 / JSS Revision 1

- Extend methods further
- Add `summary()` methods for `bvar`, `bvar_irf` and `bvar_fcast`
- Add `as.mcmc` method to interface with *coda*
- Support hyperparameters and coefficients in `plot.bvar()` and `density()`
- Add `companion()` method to retrieve the companion matrix
- Improve `ylim` of `plot.bvar()` density plots with multiple chains
- Add `logLik()` method for `bvar` objects
- Fix `bv_psi()` errors for modes other than "auto"
- Change `fcast` and `irf` defaults of `bvar()` to `NULL`
- Move from *MASS* to *mvtnorm* for `dmvnorm()` (used in `logLik()`)
- Be more specific in error messages
- Add *coda* to suggestions for convergence assessment etc.
- Add files *LICENSE*, *CITATION* and *NEWS.md*
- Improve examples for further test coverage
- `R CMD check --run-donttest`: One warning (deprecated functions)
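
A minimal sketch of the added methods; data selection and settings are assumed for illustration:

```r
library("BVAR")

data("fred_qd")
y <- fred_transform(fred_qd[, c("GDPC1", "CPIAUCSL", "FEDFUNDS")],
  codes = c(5, 5, 1))
x <- bvar(y, lags = 1, verbose = FALSE)

summary(x)    # Coefficients, hyperparameters and acceptance rate at a glance
companion(x)  # Companion matrix of the VAR coefficients
logLik(x)     # Log-likelihood of the model
# With coda attached, as.mcmc(x) converts the draws for further diagnostics
```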


# v0.1.6, Internal Release 1

- Prepare for minor release 0.2.0
- Provide several standard methods for objects generated with *BVAR*
- `predict()` for ex-post forecasts and updating quantiles
- `irf()` / `fevd()` for ex-post irfs, fevds and updating quantiles
- `fitted()`, `residuals()`, `coef()` and `vcov()`
- `density()` as shorthand for calling `density()` on hyperparameters
- Add `print()` methods for intermediate objects
- Includes `bv_minnesota`, `bv_metropolis` and method outputs
- Rework plotting
- `plot.bvar()` now supports types, subsets and multiple chains
- Deprecate `bv_plot_trace()` and `bv_plot_density()`
- Add `density()`, including a plot method
- Replace by `plot.bvar()`, incl. plotting of multiple chains
- Change plotting of `bvar_fcast` and `bvar_irf`
- Deprecate `bv_plot_irf(x)` for `plot(x$irf)` or `plot(irf(x))`
- Move `conf_bands` argument to `predict()` / `irf()`
- Add plot methods for `bvar_resid` and `bvar_density`
- Add `sign_lim` to `bv_irf()` to set maximum number of sign restriction draws
- Standardise prior construction further
- Align `alpha` and `lambda`, improve `psi` alignment
- `bv_mn()` may be called with a numeric vector for `alpha` and/or `lambda`
- Improve documentation
- Document related functions in joint
- Add details on priors
- Add less concise aliases for `bv_mh()` and `bv_mn()`
- `bv_metropolis()` and `bv_minnesota()`
- Fix bugs related to single confidence bands at 0.5
- Save `fred_qd` with format version 2, lowering R dependency to (>= 3.3.0)
- Add *vars* to suggests for shared methods, import is bypassed
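
A minimal sketch of the new methods and of setting quantiles ex-post; data and settings are assumed for illustration:

```r
library("BVAR")

data("fred_qd")
y <- fred_transform(fred_qd[, c("GDPC1", "CPIAUCSL", "FEDFUNDS")],
  codes = c(5, 5, 1))
x <- bvar(y, lags = 2, verbose = FALSE)

fitted(x)     # Fitted values from the stored draws
residuals(x)  # Residuals, with their own plot method
coef(x)       # Coefficient estimates
vcov(x)       # Variance-covariance estimates

# Quantiles are now chosen ex-post via `conf_bands`
predict(x, horizon = 12, conf_bands = c(0.05, 0.16))
irf(x, conf_bands = 0.16)
```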


# v0.1.5, CRAN Update 1

- Try to clarify licensing terms with the Federal Reserve
- Some copyrighted series may have to be removed
- Subset the dataset to only include variables in public domain for now
- Fix addition of prior pdfs to ML
- `alpha` needs an sd parameter
- `psi` now needs proper shape and scale parameters
- Add normalising constant
- Add lines to all density plots (when supplied via ellipsis)
- Add documentation on using `scale_hess` as a vector
- Add two pre-constructed dummy priors `bv_soc()` and `bv_sur()`
- Further split up calculation of marginal likelihood
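
A minimal sketch of the pre-constructed dummy priors and the aliased Metropolis-Hastings settings; all values are assumptions for illustration:

```r
library("BVAR")

# Sums-of-coefficients and single-unit-root dummy priors on top of the
# Minnesota prior
priors <- bv_priors(hyper = "auto",
  soc = bv_soc(mode = 1), sur = bv_sur(mode = 1))

# Metropolis-Hastings settings; `scale_hess` may also be given as a vector
# with one element per hyperparameter
mh <- bv_metropolis(scale_hess = 0.005)

# Both are then handed to bvar(), e.g. bvar(y, lags = 2, priors = priors, mh = mh)
```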


# v0.1.3, CRAN Submission

- Update DESCRIPTION with linked DOI
- Change `\dontrun{}` examples to `\donttest{}`
- Fix bounds in `plot_hyper()`
- Update references with links via DOI
- Add and improve examples for `print` and `plot` methods
- `R CMD check --as-cran`: No errors or warnings, one note (New submission)
450 changes: 240 additions & 210 deletions R/10_bvar.R

Large diffs are not rendered by default.

145 changes: 102 additions & 43 deletions R/11_input.R
@@ -1,43 +1,61 @@
#' Check a numeric scalar

#' Lag a matrix
#'
#' Function to check whether an object is properly bounded and coercible to a
#' a numeric value.
#' Compute a lagged version of a matrix to be used in vector autoregressions.
#' Higher lags are further to the right.
#'
#' @param x Numeric scalar to check.
#' @param min Numeric scalar. Minimum value of \emph{x}.
#' @param max Numeric scalar. Maximum value of \emph{x}.
#' @param fun Function to apply to \emph{x} before returning.
#' @param msg String fed to \code{\link[base]{stop}} if an error occurs.
#' @param x Matrix (\eqn{N * M}) to lag.
#' @param lags Integer scalar. Number of lags to apply.
#'
#' @return Returns \code{fun(x)}.
#' @return Returns an \eqn{N * (M * lags)} matrix with consecutive lags on the
#' right. The elements of the first \emph{lags} rows are 0.
#'
#' @noRd
num_check <- function(
x, min = 0, max = Inf,
msg = "Please check the numeric parameters.",
fun = as.numeric) {
lag_var <- function(x, lags) {

x_rows <- nrow(x)
x_cols <- ncol(x)

if(!is.numeric(x) || length(x) != 1 || x < min || x > max) {
stop(msg)
x_lagged <- matrix(0, x_rows, lags * x_cols)
for(i in 1:lags) {
x_lagged[(lags + 1):x_rows, (x_cols * (i - 1) + 1):(x_cols * i)] <-
x[(lags + 1 - i):(x_rows - i), (1:x_cols)]
}

return(fun(x))
return(x_lagged)
}

#' @rdname num_check

#' Compute gamma coefficients
#'
#' Compute the shape \emph{k} and scale \emph{theta} of a Gamma
#' distribution via the mode and standard deviation.
#'
#' @param mode Numeric scalar.
#' @param sd Numeric scalar.
#'
#' @return Returns a list with shape \emph{k} and scale parameter \emph{theta}.
#'
#' @noRd
int_check <- function(
x, min = 0, max = Inf,
msg = "Please check the integer parameters.") {
gamma_coef <- function(mode, sd) {

mode_sq <- mode ^ 2
sd_sq <- sd ^ 2
k <- (2 + mode_sq / sd_sq + sqrt((4 + mode_sq / sd_sq) * mode_sq / sd_sq)) / 2
theta <- sqrt(sd_sq / k)

num_check(x, min, max, msg, fun = as.integer)
return(list("k" = k, "theta" = theta))
}
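
# Illustrative check (assumed usage, not part of the package source): for a
# Gamma(shape = k, scale = theta) distribution the mode is (k - 1) * theta and
# the standard deviation is sqrt(k) * theta, so both inputs are recovered:
# cf <- gamma_coef(mode = 0.2, sd = 0.4)
# (cf[["k"]] - 1) * cf[["theta"]]  # ~ 0.2
# sqrt(cf[["k"]]) * cf[["theta"]]  # ~ 0.4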

#' Set psi of the Minnesota prior

#' Auto-set psi of the Minnesota prior
#'
#' Automatically set the prior values of \emph{psi}. Fits an \eqn{AR(p)} model
#' and sets the mode to the square-root of the innovations variance. Boundaries
#' are set to the mode times / divided by 100.
#'
#' Set the prior values of \emph{psi} by fitting an \eqn{AR(p)} model and using
#' the squareroot of the innovations variance.
#' If the call to \code{\link[stats]{arima}} fails, an integrated
#' \eqn{ARIMA(p, 1, 0)} model is fitted instead.
#'
#' @param x Numeric matrix with the data.
#' @param lags Numeric scalar. Number of lags in the model.
@@ -50,39 +68,80 @@ int_check <- function(
#' @noRd
auto_psi <- function(x, lags) {

out <- list()
out[["mode"]] <- tryCatch(apply(x, 2, function(x) {
sqrt(arima(x, order = c(lags, 0, 0))$sigma2)
}), error = function(e) {
stop("Some of the data appears to be integrated. ",
"Setting psi automatically via `arima()` (p = ", lags, ") failed.")
})
out <- list("mode" = rep(NA_real_, ncol(x)))

for(j in seq_len(ncol(x))) {
ar_sigma2 <- tryCatch(sqrt(arima(x[, j], order = c(lags, 0, 0))$sigma2),
error = function(e) { # If this fails, increase the order of integration
message("Caught an error while automatically setting psi. ",
"Column ", j, " appears to be integrated; caught error:\n", e, "\n",
"Attempting to increase order of integration via an ARIMA(",
lags, ", 1, 0) model.")
# Integrated ARMA instead
tryCatch(sqrt(arima(x[, j], order = c(lags, 1, 0))$sigma2),
error = function(f) {
stop("Cannot set psi automatically via ARIMA(", lags, ", 0/1, 0)",
"Caught the error:\n", f, "\nPlease inspect the data ",
"or provide psi manually (see `?bv_psi`).")
})
}, warning = function(w) {
message("Caught a warning while setting psi automatically:\n", w, "\n")
suppressWarnings(sqrt(arima(x[, j], order = c(lags, 0, 0))$sigma2))
}
)
out[["mode"]][j] <- ar_sigma2
}
out[["min"]] <- out[["mode"]] / 100
out[["max"]] <- out[["mode"]] * 100

return(out)
}
# auto_psi <- function(x, lags) {
#
# out <- list("mode" = rep(NA_real_, ncol(x)))
#
# for(j in seq_len(ncol(x))) {
#
# y_0 <- cbind(1, lag_var(x[, j, drop = FALSE], lags = lags))
# y_0 <- y_0[(lags + 1):nrow(y_0), ]
# y_1 <- as.matrix(x[(lags + 1):nrow(x), j, drop = FALSE])
#
# ar_beta <- chol2inv(chol(crossprod(y_0))) %*% crossprod(y_0, y_1)
# ar_resid <- y_1 - y_0 %*% ar_beta
#
# out[["mode"]][j] <- sqrt(sum(ar_resid^2) / (nrow(y_0) - lags - 1))
# }
#
# out[["min"]] <- out[["mode"]] / 100
# out[["max"]] <- out[["mode"]] * 100
#
# return(out)
# }


#' Generate quantiles
#'
#' Check and create a given vector of confidence bands and create suitable
#' quantiles from it.

#' Compute companion matrix
#'
#' @param conf_bands Numeric vector of probabilities (\eqn{[0, 1]}).
#' Compute the companion form of the VAR coefficients.
#'
#' @return Returns a sorted vector of quantiles created from \emph{conf_bands}.
#' @param beta Numeric (\eqn{K * M}) matrix with VAR coefficients.
#' @param K Integer scalar. Number of columns in the independent data.
#' @param M Integer scalar. Number of columns in the dependent data.
#' @param lags Integer scalar. Number of lags applied.
#'
#' @examples
#' bvar:::quantile_check(c(0.1, 0.16))
#' @return Returns a numeric (\eqn{(K - 1) * (K - 1)}) matrix with \emph{beta} in
#' companion form.
#'
#' @noRd
quantile_check <- function(conf_bands) {
get_beta_comp <- function(beta, K, M, lags) {

beta_comp <- matrix(0, K - 1, K - 1)

if(any(!is.numeric(conf_bands), conf_bands > 1, conf_bands < 0)) {
stop("Confidence bands misspecified.")
beta_comp[1:M, ] <- t(beta[2:K, ]) # Kick constant
if(lags > 1) { # Add block-diagonal matrix beneath VAR coefficients
beta_comp[(M + 1):(K - 1), 1:(K - 1 - M)] <- diag(M * (lags - 1))
}

return(sort(c(conf_bands, 0.5, (1 - conf_bands))))
return(beta_comp)
}
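
# Illustrative example (assumed, not part of the source): with M = 2 variables
# and lags = 2, K = 1 + M * lags = 5, so a 5 x 2 coefficient matrix yields a
# 4 x 4 companion matrix with t(beta[2:K, ]) on top and an identity block below:
# beta <- matrix(seq(0.01, 0.10, by = 0.01), nrow = 5, ncol = 2)
# get_beta_comp(beta, K = 5, M = 2, lags = 2)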

288 changes: 200 additions & 88 deletions R/12_aux.R
@@ -1,157 +1,269 @@
#' Lag a matrix of time series

#' Check numeric scalar
#'
#' Compute a lagged version of matrix to be used in vector autoregressions.
#' Multiple lags are added side by side.
#' Check whether an object is bounded and coercible to a numeric value.
#'
#' @param x Matrix (\eqn{N * M}) to lag.
#' @param lags Numeric scalar. Number of lags to create.
#' @param x Numeric scalar.
#' @param min Numeric scalar. Minimum value of \emph{x}.
#' @param max Numeric scalar. Maximum value of \emph{x}.
#' @param fun Function to apply to \emph{x} before returning.
#' @param msg String fed to \code{\link[base]{stop}} if an error occurs.
#'
#' @return Returns an \eqn{N * (M * lags)} matrix with consecutive lags at the
#' right.
#' @return Returns \code{fun(x)}.
#'
#' @noRd
lag_var <- function(x, lags) {
num_check <- function(
x, min = 0, max = Inf,
msg = "Please check the numeric parameters.",
fun = as.numeric) {

x_rows <- nrow(x)
x_cols <- ncol(x)
if(!is.numeric(x) || length(x) != 1 || x < min || x > max) {stop(msg)}

x_lagged <- matrix(0, x_rows, lags * x_cols)
for(i in 1:lags) {
x_lagged[(lags + 1):x_rows, (x_cols * (i - 1) + 1):(x_cols * i)] <-
x[(lags + 1 - i):(x_rows - i), (1:x_cols)]
}
return(fun(x))
}


#' @noRd
int_check <- function(
x, min = 0L, max = Inf,
msg = "Please check the integer parameters.") {

return(x_lagged)
num_check(x, min, max, msg, fun = as.integer)
}


#' Compute gamma coefficients
#' Name hyperparameters
#'
#' Compute the shape \emph{k} and scale \emph{theta} of a gamma
#' distribution via mode and standard deviation.
#' Function to help name hyperparameters. Accounts for multiple occurrences
#' of \emph{psi} by adding sequential numbers.
#'
#' @param mode Numeric scalar.
#' @param sd Numeric scalar.
#' @param x Character vector. Parameter names.
#' @param M Integer scalar. Number of columns in the data.
#'
#' @return Returns a list with shape \emph{k} and scale paramter \emph{theta}.
#' @return Returns a character vector of adjusted parameter names.
#'
#' @noRd
gamma_coef <- function(mode, sd) {
name_pars <- function(x, M) {

mode_sq <- mode ^ 2
sd_sq <- sd ^ 2
k <- (2 + mode_sq / sd_sq + sqrt((4 + mode_sq / sd_sq) * mode_sq / sd_sq)) / 2
theta <- sqrt(sd_sq / k)
out <- Reduce(c, sapply(x, function(y) {
if(y == "psi") {paste0(y, 1:M)} else {y}}))

return(list("k" = k, "theta" = theta))
return(out)
}


#' Create parameter names
#' Fill credible intervals
#'
#' Function to help name prior parameters. Accounts for multiple occurences
#' of \emph{psi} when \eqn{M > 1} by adding sequential numbers.
#' Helper function to fill data, colours or similar things based on credible
#' intervals. These are used in \code{\link{plot.bvar_irf}} and
#' \code{\link{plot.bvar_fcast}}.
#'
#' @param x Character vector. Names of all relevant paramters.
#' @param M Integer scalar. Number of columns in the data.
#' Note that transparency may get appended to recycled HEX colours. Also note
#' that no central element (i.e. one of length zero) is required when drawing
#' polygons.
#'
#' @return Returns a character vector of parameter names.
#' @param x Scalar or vector. The central element.
#' @param y Scalar or vector. Value(s) to surround the central element with.
#' The first value is closest, values may get recycled.
#' @param P Odd integer scalar. Number of total bands.
#'
#' @examples
#' bvar:::name_pars(c("lambda", "alpha"))
#' bvar:::name_pars(c("lambda", "psi"), M = 3)
#' @return Returns a vector or matrix (if \emph{x} is a vector) of \emph{x},
#' surrounded by \emph{y}.
#'
#' @noRd
name_pars <- function(x, M) {
fill_ci <- function(x, y, P) {

out <- Reduce(c, sapply(x, function(y) {if(y == "psi") {
paste0(y, 1:M)
} else {y}}))
n_y <- if(P %% 2 == 0) {
stop("No central position for x found.")
} else {P %/% 2}

return(out)
fill <- rep(y, length.out = n_y)

if(length(x) > 1) { # Matrix
n_row <- length(x)
return(cbind(t(rev(fill))[rep(1, n_row), ], x, t(fill)[rep(1, n_row), ]))
} else { # Vector
return(c(rev(fill), x, fill))
}
}


#' @noRd
fill_ci_na <- function(x, P) {

# Corner case when quantiles are missing (t_back or conditional forecasts)
if(P == 2) {return(if(length(x) > 1) {cbind(x, NA)} else {c(x, NA)})}

fill_ci(x = x, y = NA, P = P)
}


#' Credible interval colour vector
#' @noRd
fill_ci_col <- function(x, y, P) {

# Apply transparency to HEX colours
if(length(y) == 1 && is_hex(y, alpha = FALSE)) {
y <- paste0(y, alpha_hex(P))
}

fill_ci(x = x, y = y, P = P)
}


#' Get a transparency HEX code
#'
#' @param P Integer scalar. Number of total bands.
#'
#' Create a character vector of colours for time series with credible
#' intervals, e.g. \code{\link{bv_plot_irf}} and \code{\link{bv_plot_fcast}}.
#' The central element is coloured \code{"black"}, the rest \code{"darkgray"}.
#' @return Returns a character vector of transparency codes.
#'
#' @param P Integer scalar. Number of bands to plot.
#' @importFrom grDevices rgb
#'
#' @noRd
alpha_hex <- function(P) {

n_trans <- P %/% 2
out <- switch(n_trans, # Handpicked with love
"FF", c("FF", "80"), c("FF", "A8", "54"),
c("FF", "BF", "80", "40"), c("FF", "CC", "99", "66", "33"))

if(is.null(out)) { # Let rgb() sort it out otherwise
out <- substr(rgb(1, 1, 1, seq(1, 0, length.out = n_trans)), 8, 10)
}

return(out)
}


#' Check valid HEX colour
#'
#' @return Returns a character vector of colours.
#' @param x Character scalar or vector. String(s) to check.
#' @param alpha Logical scalar. Whether the string may contain alpha values.
#'
#' @examples
#' bvar:::set_gray(3)
#' @return Returns a logical scalar or vector.
#'
#' @noRd
set_gray <- function(P) {
is_hex <- function(x, alpha = FALSE) {

n_gray <- if(P %% 2 == 0) {0} else {P %/% 2}
if(alpha) return(grepl("^#[0-9a-fA-F]{6,8}$", x))

return(c(rep("darkgray", n_gray), "black", rep("darkgray", n_gray)))
return(grepl("^#[0-9a-fA-F]{6,6}$", x))
}


#' Get a subset of variables
#' Get variable positions
#'
#' Helper functions to aid with variable selection in \code{\link{bv_plot_irf}}
#' and \code{\link{bv_plot_fcast}}.
#' Helper functions to aid with variable selection, e.g. in
#' \code{\link{plot.bvar_irf}} and \code{\link{plot.bvar_fcast}}.
#'
#' @param vars Vector of variables to subset to. Numeric or character.
#' @param vars Numeric or character vector of variables to subset to.
#' @param variables Character vector of all variable names. Required if
#' \emph{vars} is provided as character vector.
#' @param M Integer scalar. Count of all variables.
#'
#' @return Returns a numeric vector with the positions of desired variables.
#'
#' @examples
#' # Assuming the variables are named.
#' bvar:::get_var_set("fx_rate", variables = c("gdp_pc", "fx_rate"))
#'
#' # Find via position
#' bvar:::get_var_set(c(1, 3), M = 3)
#'
#' # Get the full set
#' bvar:::get_var_set(NULL, M = 3)
#'
#' @noRd
get_var_set <- function(vars, variables, M) {
pos_vars <- function(vars, variables, M) {

if(is.null(vars)) {
return(1:M)
if(is.null(vars) || length(vars) == 0L) {
return(1:M) # Full set
}
if(is.numeric(vars)) {
return(sort(vapply(vars, int_check,
min = 1, max = M, msg = "Variable(s) not found.",
integer(1))))
return(vapply(vars, int_check, # By position
min = 1, max = M, msg = "Variable(s) not found.", integer(1)))
}
if(is.character(vars) && !is.null(variables)) {
return(which(variables %in% vars))
out <- do.call(c, lapply(vars, grep, variables)) # By name
if(length(out) > 0) {return(out)}
}

stop("Variables not found.")
stop("Variable(s) not found.")
}

#' Compute log pdf of inverse Gamma distribution

#' Name dependent / explanatory variables
#'
#' Compute the logged pdf of a draw of a variable assumed to be inverse-Gamma
#' (IG) distributed with parameters \emph{scale} and \emph{shape}.
#' @param variables Character vector of all variable names.
#' @param M Integer scalar. Count of all variables.
#' @param lags Integer scalar. Number of lags applied.
#'
#' @param x Numeric scalar. Draw of the IG-distributed variable
#' @param scale Numeric scalar. Scale of the IG prior distribution.
#' @param shape Numeric scalar. Shape of the IG prior distribution.
#' @return Returns a character vector of variable names.
#'
#' @noRd
name_deps <- function(variables, M) {

if(is.null(variables)) {
variables <- paste0("var", seq(M))
} else if(length(variables) != M) {
stop("Vector with variables is incomplete.")
}

return(variables)
}


#' @noRd
name_expl <- function(variables, M, lags) {

if(is.null(variables)) {
variables <- name_deps(variables, M)
}
explanatories <- c("constant", paste0(rep(variables, lags), "-lag",
rep(seq(lags), each = length(variables))))

return(explanatories)
}


#' Compute log distribution function of Inverse Gamma
#'
#' @return A numeric scalar of the draw's log-likelihood.
#' @param x Numeric scalar. Draw of the IG-distributed variable.
#' @param shape Numeric scalar.
#' @param scale Numeric scalar.
#'
#' @examples
#' # Computing log-likelihood of a draw with value 5
#' log_igamma_pdf(5, 0.004, 0.004)
#' @return Returns the log Inverse Gamma distribution function.
#'
#' @noRd
log_igamma_pdf <- function(x, scale, shape){
out <- scale * log(shape) - (scale + 1) * log(x) - shape / x - lgamma(scale)
p_log_ig <- function(x, shape, scale) {

return(out)
return(shape * log(scale) - (shape + 1) * log(x) - scale / x - lgamma(shape))
}


#' Check whether a package is installed
#'
#' @param package Character scalar.
#'
#' @noRd
has_package <- function(package) {

if(!requireNamespace(package, quietly = TRUE)) {
stop("Package \'", package, "\' required for this method.", call. = FALSE)
}

return(NULL)
}


#' Generate quantiles
#'
#' Check a vector of confidence bands and create quantiles from it.
#'
#' @param conf_bands Numeric vector of probabilities (\eqn{(0, 1)}).
#'
#' @return Returns a sorted, symmetric vector of quantiles.
#'
#' @noRd
quantile_check <- function(conf_bands) {

conf_bands <- sapply(conf_bands, num_check,
min = 0 + 1e-16, max = 1 - 1e-16, msg = "Confidence bands misspecified.")

# Allow only returning the median
if(length(conf_bands) == 1 && conf_bands == 0.5) {return(conf_bands)}

# Sort and make sure we have no duplicates (thank mr float)
quants <- sort(c(conf_bands, 0.5, (1 - conf_bands)))
quants <- quants[!duplicated(round(quants, digits = 12L))]

return(quants)
}
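
# Illustrative usage (assumed, not part of the source): bands are mirrored
# around the median and sorted, e.g.
# quantile_check(c(0.16, 0.10))
# #> 0.10 0.16 0.50 0.84 0.90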
62 changes: 62 additions & 0 deletions R/13_mvtnorm.R
@@ -0,0 +1,62 @@

#' Optimised multivariate normal drawing
#'
#' Function to quickly draw from a multivariate normal distribution in special
#' cases required by \code{\link{bvar}}. Based on the implementation by
#' Friedrich Leisch and Fabian Scheipl in \code{\link[mvtnorm]{rmvnorm}}.
#'
#' The two special cases are (1) the proposal, where the spectral decomposition
#' of \emph{sigma} only needs to be calculated once, and (2) drawing when only
#' the inverse of \emph{sigma} is available. Note that we skip steps to check
#' the symmetry and definiteness of \emph{sigma}, since these properties are
#' given by construction.
#'
#' @param n Numeric scalar. Number of draws.
#' @param mean Numeric vector. Mean of the draws.
#' @param sigma Numeric matrix. Variance-covariance of draws.
#' @param sigma_inv Numeric matrix. Inverse of variance-covariance of draws.
#' @param method Character scalar. Type of decomposition to use.
#'
#' @return Returns a numeric matrix of draws.
#'
#' @noRd
rmvn_proposal <- function(n, mean, sigma) {

# Univariate cornercase ---
if(length(sigma[["values"]]) == 1) {
out <- matrix(rnorm(n, mean = mean, sd = sigma[["values"]]))
colnames(out) <- names(mean)
return(out)
}
# Multivariate ---
m <- length(sigma[["values"]])
R <- t(sigma[["vectors"]] %*%
(t(sigma[["vectors"]]) * sqrt(sigma[["values"]])))
out <- matrix(rnorm(n * m), nrow = n, ncol = m, byrow = TRUE) %*% R
out <- sweep(out, 2, mean, "+")
colnames(out) <- names(mean)

return(out)
}


#' @noRd
rmvn_inv <- function(n, sigma_inv, method) {

if(method == "eigen") {
# Spectral ---
sigma_inv <- eigen(sigma_inv, symmetric = TRUE)
m <- length(sigma_inv[["values"]])
R <- t(sigma_inv[["vectors"]] %*%
(t(sigma_inv[["vectors"]]) * sqrt(1 / pmax(sigma_inv[["values"]], 0))))
out <- matrix(rnorm(n * m), nrow = n, ncol = m, byrow = TRUE) %*% R
} else if(method == "chol") {
# Cholesky ---
m <- ncol(sigma_inv)
R <- chol(sigma_inv)
out <- t(backsolve(R,
matrix(rnorm(n * m), nrow = m, ncol = n, byrow = TRUE)))
} else {stop("SOMEBODY TOUCHA MY SPAGHET!")}

return(out)
}
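
# Illustrative check (assumed usage, not part of the source): draws are
# mean-zero with covariance equal to the inverse of `sigma_inv`, e.g.
# Sigma <- crossprod(matrix(rnorm(9), 3))  # some positive-definite matrix
# draws <- rmvn_inv(5000, sigma_inv = solve(Sigma), method = "chol")
# round(cov(draws) - Sigma, 1)  # should be close to zero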
205 changes: 205 additions & 0 deletions R/15_prep_data.R
@@ -0,0 +1,205 @@

#' Prepare BVAR data for methods
#'
#' Helper function to retrieve hyperparameters or coefficient values based on
#' name / position. Also supports multiple \code{bvar} objects and may be used
#' to check them for similarity.
#'
#' @param x A \code{bvar} object, obtained from \code{\link{bvar}}.
#' @param vars Character vector used to select variables. Elements are matched
#' to hyperparameters or coefficients. Coefficients may be matched based on
#' the dependent variable (by providing the name or position) or the
#' explanatory variables (by providing the name and the desired lag). See the
#' example section for a demonstration. Defaults to \code{NULL}, i.e. all
#' hyperparameters.
#' @param vars_response,vars_impulse Optional character or integer vectors used
#' to select coefficients. Dependent variables are specified with
#' \emph{vars_response}, explanatory ones with \emph{vars_impulse}. See the
#' example section for a demonstration.
#' @param chains List with additional \code{bvar} objects. Contents are then
#' added to trace and density plots.
#' @param check_chains Logical scalar. Whether to check \emph{x} and
#' \emph{chains} for similarity.
#' @param ... Fed to \code{\link{chains_fit}}.
#'
#' @return Returns a named list with:
#' \itemize{
#' \item \code{data} - Numeric matrix with desired data.
#' \item \code{vars} - Character vector with names for the desired data.
#' \item \code{chains} - List of numeric matrices with desired data.
#' \item \code{bounds} - Numeric matrix with optional boundaries.
#' }
#'
#' @noRd
prep_data <- function(
x,
vars = NULL,
vars_response = NULL, vars_impulse = NULL,
chains = list(),
check_chains = FALSE, ...) {

if(!inherits(x, "bvar")) {stop("Please provide a `bvar` object.")}

if(inherits(chains, "bvar")) {chains <- list(chains)}
lapply(chains, function(x) {if(!inherits(x, "bvar")) {
stop("Please provide `bvar` objects to the chains parameter.")
}})

if(check_chains) {chains_fit(x, chains, ...)}


# Prepare selection ---

vars_hyp <- c("ml", colnames(x[["hyper"]]))
vars_dep <- x[["variables"]]
vars_ind <- x[["explanatories"]]
if(is.null(vars_ind)) { # Compatibility to older versions (<= 0.2.2)
vars_ind <- name_expl(vars_dep,
M = x[["meta"]][["M"]], lags = x[["meta"]][["lags"]])
}

if(is.null(vars) && is.null(vars_impulse) && is.null(vars_response)) {
vars <- vars_hyp
}

choice_hyp <- vars_hyp[unique(do.call(c, lapply(vars, grep, vars_hyp)))]

choice_dep <- if(is.null(vars_response)) {
# Interpret numbers as positions, exclude independents
vars_dep[unique(c(as.integer(vars[grep("^[0-9]+$", vars)]),
do.call(c, lapply(vars[!grepl("(^const|lag[0-9]+$)", vars)],
grep, vars_dep))))]
} else {pos_vars(vars_response, vars_dep, M = x[["meta"]][["M"]])}
choice_dep <- choice_dep[!is.na(choice_dep)]

choice_ind <- if(is.null(vars_impulse)) {
# Limit to ones with "-lag#" or "constant" to separate from dependents
vars_ind[unique(do.call(c, lapply(vars[grep("(^const|lag[0-9]+$)", vars)],
grep, vars_ind)))]
} else {pos_vars(vars_impulse, vars_ind, M = x[["meta"]][["K"]])}

if(all(c(length(choice_hyp), length(choice_dep), length(choice_ind)) == 0)) {
stop("No matching data found.")
}


# Build up required outputs ---

out <- out_vars <- out_bounds <- out_chains <- list()
N <- x[["meta"]][["n_save"]]

if(length(choice_hyp) > 0) { # Hyperparameters
out[["hyper"]] <- cbind("ml" = x[["ml"]], x[["hyper"]])[seq(N), choice_hyp]
out_vars[["hyper"]] <- choice_hyp
out_bounds[["hyper"]] <- vapply(choice_hyp, function(z) {
if(z == "ml") {c(NA, NA)} else {
c(x[["priors"]][[z]][["min"]], x[["priors"]][[z]][["max"]])
}}, double(2))
out_chains[["hyper"]] <- lapply(chains, function(z) {
cbind("ml" = z[["ml"]], z[["hyper"]])[seq(N), choice_hyp]
})
} else {
out_chains[["hyper"]] <- rep(list(NULL), length(chains))
}

if(length(choice_dep) > 0 || length(choice_ind) > 0) { # Betas
pos_dep <- pos_vars(choice_dep,
variables = vars_dep, M = x[["meta"]][["M"]])
pos_ind <- pos_vars(choice_ind,
variables = vars_ind, M = x[["meta"]][["K"]])
K <- length(pos_dep) * length(pos_ind)

out[["betas"]] <- grab_betas(x, N, K, pos_dep, pos_ind)
out_vars[["betas"]] <- paste0(
rep(vars_dep[pos_dep], length(pos_ind)), "_",
rep(vars_ind[pos_ind], each = length(pos_dep)))
out_bounds[["betas"]] <- matrix(NA, ncol = K, nrow = 2)
out_chains[["betas"]] <- lapply(chains, grab_betas, N, K, pos_dep, pos_ind)
} else {
out_chains[["betas"]] <- rep(list(NULL), length(chains))
}

# Merge stuff and return ---

out_data <- cbind(out[["hyper"]], out[["betas"]])
out_vars <- c(out_vars[["hyper"]], out_vars[["betas"]])
out_chains <- mapply(cbind,
out_chains[["hyper"]], out_chains[["betas"]], SIMPLIFY = FALSE)
out_chains <- lapply(out_chains, `colnames<-`, out_vars)
colnames(out_data) <- out_vars

out <- list(
"data" = out_data, "vars" = out_vars, "chains" = out_chains,
"bounds" = cbind(out_bounds[["hyper"]], out_bounds[["betas"]]))

return(out)
}


#' Grab draws of certain betas
#'
#' Helper function for \code{\link{prep_data}}.
#'
#' @param x A \code{bvar} object, obtained from \code{\link{bvar}}.
#' @param N,K Integer scalars. Number of rows and columns to return.
#' @param pos_dep,pos_ind Numeric vectors. Positions of desired variables.
#'
#' @return Returns a matrix with the requested data.
#'
#' @noRd
grab_betas <- function(x, N, K, pos_dep, pos_ind) {
data <- matrix(NA, nrow = N, ncol = K)
k <- 1
for(i in pos_ind) {for(j in pos_dep) {
data[, k] <- x[["beta"]][seq(N), i, j] # seq() for longer chains
k <- k + 1
}}
return(data)
}


#' Check equalities across chains
#'
#' Function to help check whether \code{bvar} objects are close enough to
#' compare. Accessed via \code{\link{prep_data}}.
#'
#' @param x A \code{bvar} object, obtained from \code{\link{bvar}}.
#' @param chains List with additional \code{bvar} objects.
#' @param Ms Logical scalar. Whether to check equality of
#' \code{x[["meta"]][["M"]]}.
#' @param n_saves Logical scalar. Whether to check equality of
#' \code{x[["meta"]][["n_save"]]}.
#' @param hypers Logical scalar. Whether to check equality of
#' \code{x[["priors"]][["hyper"]]}.
#'
#' @return Returns \code{TRUE} or throws an error.
#'
#' @noRd
chains_fit <- function(
x, chains,
Ms = TRUE,
n_saves = FALSE,
hypers = FALSE) {

if(is.null(chains) || length(chains) == 0) {return(TRUE)}

if(Ms) {
Ms <- c(x[["meta"]][["M"]],
vapply(chains, function(x) {x[["meta"]][["M"]]}, integer(1)))
if(!all(duplicated(Ms)[-1])) {stop("Number of variables does not match.")}
}
if(n_saves) {
n_saves <- c(x[["meta"]][["n_save"]],
vapply(chains, function(x) {x[["meta"]][["n_save"]]}, integer(1)))
if(!all(duplicated(n_saves)[-1])) {
stop("Number of stored iterations does not match.")
}
}
if(hypers) {
hypers <- vapply(chains, function(z) {
all(x[["priors"]][["hyper"]] == z[["priors"]][["hyper"]])}, logical(1))
if(!all(hypers)) {stop("Hyperparameters do not match.")}
}

return(TRUE)
}
131 changes: 66 additions & 65 deletions R/20_ml.R
@@ -1,3 +1,4 @@

#' Log-posterior of a BVAR
#'
#' Compute the log-posterior (or log-marginal-likelihood) of a Bayesian VAR
@@ -12,128 +13,128 @@
#' allowed for hyperparameters. If these are breached a value of -1e18 is
#' returned.
#' @param pars Named numeric vector with prior parameters. Values also found
#' in \emph{hyper} are overwritten with their hierarchical counterpart.
#' @param priors List created via \code{\link{bv_priors}}. Contains information
#' on the Minnesota prior and optional dummy priors (that are named in
#' \code{priors$dummy}).
#' in \emph{hyper} are overwritten with their hierarchical counterparts.
#' @param Y Numeric \eqn{N * M} matrix.
#' @param X Numeric \eqn{N * K} matrix.
#' @param XX Numeric \eqn{K * K} matrix. Crossproduct of \emph{X}, used to save
#' matrix calculations when no dummy priors are included.
#' @param K Integer scalar. Columns of \emph{X}, i.e. \eqn{M * lags + 1}.
#' @param M Integer scalar. Columns of \emph{Y}, i.e. number of variables.
#' @param N Integer scalar. Rows of \emph{Y}, alternatively \emph{X}.
#' @param lags Integer scalar. Number of lags in the model.
#' @param opt Optional logical scalar. Determines whether the return value is
#' a numeric scalar or a list. Used to call \code{\link{bv_ml}} in
#' \code{\link[stats]{optim}}.
#' @inheritParams bvar
#'
#' @return Returns a list with the following objects by default:
#' @return Returns a list by default, containing the following objects:
#' \itemize{
#' \item \code{log_ml} - A numeric scalar with the log-posterior.
#' \item \code{X}, \code{N} - The lagged data matrix with possible dummy
#' priors appended and its number of rows. Necessary for drawing from
#' posterior distributions with \code{\link{draw_post}}.
#' \item \code{XX}, \code{N} - The crossproduct of the lagged data matrix,
#' potentially with dummy priors and the number of rows including them.
#' Necessary for drawing from posterior distributions with
#' \code{\link{draw_post}}.
#' \item \code{psi}, \code{sse}, \code{beta_hat}, \code{omega_inv} - Further
#' values necessary for drawing from posterior distributions.
#' }
#' If opt is \code{TRUE} only the value of \code{log_ml} is returned.
#' If opt is \code{TRUE} only a numeric scalar with \code{log_ml} is returned.
#'
#' @importFrom stats dgamma
#'
#' @noRd
bv_ml <- function(
hyper, hyper_min = -Inf, hyper_max = Inf,
pars, priors, Y, X, K, M, N, lags,
pars, priors, Y, X, XX, K, M, N, lags,
opt = FALSE) {

# Bounds ------------------------------------------------------------------
# Check bounds ---

if(any(hyper_min > hyper | hyper > hyper_max)) {
if(opt) {return(-1e18)} else {return(list("log_ml" = -1e18))}
if(opt) {return(-1e18)} else {return(list("log_ml" = -1e18))}
}


# Priors ------------------------------------------------------------------
# Priors -----

# Overwrite passed parameters with hyperparameters
# Overwrite passed parameters with hyperparameters if provided
for(name in unique(names(hyper))) {
pars[names(pars) == name] <- hyper[names(hyper) == name]
}

psi <- diag(pars[grep("^psi[0-9]*", names(pars))])
psi_vec <- pars[grep("^psi[0-9]*", names(pars))]
psi <- diag(psi_vec)
omega <- vector("numeric", 1 + M * lags)
omega[1] <- priors[["var"]]
for(i in 1:lags) {
omega[seq(2 + M * (i - 1), 1 + i * M)] <-
pars[["lambda"]] ^ 2 / i ^ pars[["alpha"]] /
pars[grep("^psi[0-9]*", names(pars))]
for(i in seq.int(1, lags)) {
omega[seq.int(2 + M * (i - 1), 1 + i * M)] <- pars[["lambda"]] ^ 2 /
i ^ pars[["alpha"]] / psi_vec
}

# Dummy priors
if(length(priors[["dummy"]]) > 0) {
dmy <- lapply(priors[["dummy"]], function(x) {
tryCatch(priors[[x]][["fun"]](Y = Y, lags = lags, par = pars[[x]]),
error = function(e) {
message("Issue with generating dummy observations for ",
x, ". Make sure the function works properly.")
stop(e)})
error = function(e) {
message("Issue generating dummy observations for ",
x, ". Make sure the provided function works properly.")
stop(e)})
})
Y_dummy <- do.call(rbind,
lapply(dmy, function(x) matrix(x[["Y"]], ncol = M)))
X_dummy <- do.call(rbind,
lapply(dmy, function(x) matrix(x[["X"]], ncol = K)))
N_dummy <- nrow(Y_dummy)
Y <- rbind(Y_dummy, Y)
X <- rbind(X_dummy, X)
Y_dmy <- do.call(rbind, lapply(dmy, function(x) matrix(x[["Y"]], ncol = M)))
X_dmy <- do.call(rbind, lapply(dmy, function(x) matrix(x[["X"]], ncol = K)))
N_dummy <- nrow(Y_dmy)
Y <- rbind(Y_dmy, Y)
X <- rbind(X_dmy, X)
XX <- crossprod(X)
N <- nrow(Y)
}


# Calc --------------------------------------------------------------------
# Calc -----

omega_inv <- diag(1 / omega)
psi_inv <- solve(sqrt(psi))
psi_inv <- diag(1 / sqrt(psi_vec))
omega_sqrt <- diag(sqrt(omega))
b <- priors[["b"]]

# Likelihood
ev_full <- get_ev(omega_inv, omega_sqrt, psi_inv, X, Y, b, beta_hat = TRUE)
log_ml <- get_logml(M, N, psi, ev_full[["omega"]], ev_full[["psi"]])

if(length(priors[["dummy"]]) > 0) {
ev_dummy <- get_ev(omega_inv, omega_sqrt, psi_inv,
X_dummy, Y_dummy, b, beta_hat = FALSE)
log_ml <- log_ml -
get_logml(M, N_dummy, psi, ev_dummy[["omega"]], ev_dummy[["psi"]])
}
# Likelihood ---
ev_full <- calc_ev(omega_inv = omega_inv, omega_sqrt = omega_sqrt,
psi_inv = psi_inv, X = X, XX = XX, Y = Y, b = b, beta_hat = TRUE)
log_ml <- calc_logml(M = M, N = N, psi = psi,
omega_ldet = ev_full[["omega"]], psi_ldet = ev_full[["psi"]])

# Add prior-pdfs
log_ml <- log_ml + sum(sapply(priors[["hyper"]][which(!priors$hyper == "psi")],
function(x) {
log(dgamma(
pars[[x]],
shape = priors[[x]][["coef"]][["k"]],
scale = priors[[x]][["coef"]][["theta"]]))
})
)
# Add priors
log_ml <- log_ml + sum(vapply(
priors[["hyper"]][which(!priors$hyper == "psi")], function(x) {
dgamma(pars[[x]],
shape = priors[[x]][["coef"]][["k"]],
scale = priors[[x]][["coef"]][["theta"]], log = TRUE)
}, numeric(1L)))

if(any(priors[["hyper"]] == "psi")) {
log_ml <- log_ml + sum(sapply(names(pars)[grep("^psi[0-9]*", names(pars))],
function(x) {
log_igamma_pdf(
pars[[x]],
scale = priors[["psi"]][["scale"]],
shape = priors[["psi"]][["shape"]])
})
)
psi_coef <- priors[["psi"]][["coef"]]
log_ml <- log_ml + sum(vapply(
names(pars)[grep("^psi[0-9]*", names(pars))], function(x) {
p_log_ig(pars[[x]],
shape = psi_coef[["k"]], scale = psi_coef[["theta"]])
}, numeric(1L)))
}

if(length(priors[["dummy"]]) > 0) {
ev_dummy <- calc_ev(omega_inv = omega_inv, omega_sqrt = omega_sqrt,
psi_inv = psi_inv, X = X_dmy, XX = NULL, Y = Y_dmy, b = b,
beta_hat = FALSE)
log_ml <- log_ml - calc_logml(M = M, N = N_dummy, psi = psi,
omega_ldet = ev_dummy[["omega"]], psi_ldet = ev_dummy[["psi"]])
}


# Output ------------------------------------------------------------------
# Output -----

if(opt) {return(log_ml)}
if(opt) {return(log_ml)} # For optim

# Return log_ml and objects necessary for drawing
return(list("log_ml" = log_ml, "X" = X, "N" = N, "psi" = psi,
"sse" = ev_full[["sse"]], "beta_hat" = ev_full[["beta_hat"]],
"omega_inv" = omega_inv))
return(
list("log_ml" = log_ml, "XX" = XX, "N" = N, "psi" = psi,
"sse" = ev_full[["sse"]], "beta_hat" = ev_full[["beta_hat"]],
"omega_inv" = omega_inv)
)
}
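
For a concrete picture of the prior variances the loop above builds, here is a standalone sketch of `omega` for a toy setting; the psi values and the numbers of variables and lags are made up for illustration.

# The constant gets `var`, lag i of variable m gets lambda^2 / i^alpha / psi_m.
M <- 3; lags <- 2
lambda <- 0.2; alpha <- 2; psi_vec <- c(0.5, 0.8, 1.2); var_const <- 1e07
omega <- vector("numeric", 1 + M * lags)
omega[1] <- var_const
for(i in seq.int(1, lags)) {
  omega[seq.int(2 + M * (i - 1), 1 + i * M)] <- lambda^2 / i^alpha / psi_vec
}
omega # constant first, then variances for lag 1 and lag 2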
51 changes: 22 additions & 29 deletions R/21_draw.R
@@ -1,46 +1,39 @@

#' BVAR posterior draws
#'
#' Draw beta and sigma from the posterior of a Bayesian VAR.
#' Draw \eqn{\beta} and \eqn{\sigma} from the posterior.
#'
#' @param X Numeric matrix. Possibly extended with dummy priors.
#' @param N Integer scalar. Rows of \emph{X}.
#' @param M Integer scalar. Columns of \emph{X}.
#' @param XX Numeric matrix. Crossproduct of a possibly extended \emph{X}.
#' @param N Integer scalar. Rows of \emph{X}. Note that \emph{X} may have been
#' extended with dummy observations.
#' @param M Integer scalar. Columns of \emph{Y}.
#' @param lags Integer scalar. Number of lags in the model.
#' @param b Numeric matrix. Minnesota prior mean.
#' @param b Numeric matrix. Minnesota prior's mean.
#' @param psi Numeric matrix. Scale of the IW prior on the residual covariance.
#' @param sse Numeric matrix. Squared VAR residuals.
#' @param beta_hat Numeric matrix.
#' @param omega_inv Numeric matrix.
#'
#' @return Returns a list with the following elements:
#' \itemize{
#' \item \code{beta_draw}, \code{sigma_draw} - Draws from the posterior.
#' \item \code{sigma_chol} - The lower part of a Cholesky decomposition
#' of sigma_draw. Calculated as \code{t(chol(sigma_draw))}.
#' }
#' @return Returns a list with the posterior draws of \emph{beta} and
#' \emph{sigma}.
#'
#' @importFrom MASS mvrnorm
#' @importFrom mvtnorm rmvnorm
#'
#' @noRd
draw_post <- function(
X,
N = nrow(X), M = ncol(X), lags, b,
psi, sse, beta_hat, omega_inv) {
XX, N, M, lags,
b, psi, sse, beta_hat, omega_inv) {

S_post <- psi + sse + crossprod((beta_hat - b), omega_inv) %*% (beta_hat - b)
eta <- rmvn_inv(n = (N + M + 2), sigma_inv = S_post, method = "eigen")

chol_de <- chol(crossprod(eta))
sigma_chol <- forwardsolve(t(chol_de), diag(nrow(chol_de)))
sigma_draw <- crossprod(sigma_chol)

S_post <- psi + sse + t(beta_hat - b) %*% omega_inv %*% (beta_hat - b)
S_eig <- eigen(S_post)
S_inv <- S_eig[["vectors"]] %*%
diag(1 / abs(S_eig[["values"]])) %*% t(S_eig[["vectors"]])
noise <- rmvn_inv(n = M, sigma_inv = XX + omega_inv, method = "chol")

eta <- mvrnorm(n = (N + M + 2), mu = rep(0, M), Sigma = S_inv)
sigma_draw <- solve(crossprod(eta)) %*% diag(M)
sigma_chol <- t(chol(sigma_draw))
beta_draw <- beta_hat +
t(mvrnorm(n = M, mu = rep(0, (1 + M * lags)),
Sigma = solve(crossprod(X) + omega_inv) %*% diag(1 + M * lags))) %*%
sigma_chol
beta_draw <- beta_hat + crossprod(noise, sigma_chol)

return(list("beta_draw" = beta_draw,
"sigma_draw" = sigma_draw,
"sigma_chol" = sigma_chol))
return(list("beta_draw" = beta_draw, "sigma_draw" = sigma_draw))
}
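
For orientation, the two draws above follow the conjugate Normal-inverse-Wishart posterior. Written out in the notation of the arguments (a sketch matching the quantities computed in this function, not a quotation from a reference):

S = \psi + \mathrm{SSE} + (\hat{\beta} - b)' \Omega^{-1} (\hat{\beta} - b), \qquad
\Sigma \mid Y \sim \mathcal{IW}\big(S, \, N + M + 2\big), \qquad
\beta \mid \Sigma, Y \sim \mathcal{MN}\big(\hat{\beta}, \, (X'X + \Omega^{-1})^{-1}, \, \Sigma\big),

where \mathcal{MN} denotes a matrix-normal distribution with row covariance (X'X + \Omega^{-1})^{-1} and column covariance \Sigma.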
62 changes: 62 additions & 0 deletions R/22_calc.R
@@ -0,0 +1,62 @@

#' Calculate the log marginal likelihood
#'
#' @noRd
calc_logml <- function(M, N, psi, omega_ldet, psi_ldet) {

return(
(-M * N * log(pi) / 2) + sum(lgamma(((N + M + 2) - seq.int(0, M - 1)) / 2) -
lgamma(((M + 2) - seq.int(0, M - 1)) / 2)) -
(N * sum(log(diag(psi))) / 2) -
(M * omega_ldet / 2) -
((N + M + 2) * psi_ldet / 2)
)
}


#' Calculate eigenvalues to bypass determinant computation
#'
#' @noRd
calc_ev <- function(
omega_inv, omega_sqrt, psi_inv,
X, XX, Y, b, beta_hat = TRUE) {

if(is.null(XX)) {XX <- crossprod(X)}

beta_hat <- if(beta_hat) {
solve(XX + omega_inv, crossprod(X, Y) + omega_inv %*% b)
} else {b}

sse <- crossprod(Y - X %*% beta_hat)
omega_ml <- omega_sqrt %*% XX %*% omega_sqrt
mostly_harmless <- sse + if(all(beta_hat == b)) {0} else {
crossprod(beta_hat - b, omega_inv) %*% (beta_hat - b)
}
psi_ml <- psi_inv %*% mostly_harmless %*% psi_inv

# Eigenvalues + 1 as another way of computing the determinants
# omega_ml_ev <- Re(eigen(omega_ml,
# symmetric = TRUE, only.values = TRUE)[["values"]])
# omega_ml_ev[omega_ml_ev < 1e-12] <- 0
# omega_ml_ev <- omega_ml_ev + 1
# omega_ldet <- sum(log(omega_ml_ev))

# psi_ml_ev <- Re(eigen(psi_ml,
# symmetric = TRUE, only.values = TRUE)[["values"]])
# psi_ml_ev[psi_ml_ev < 1e-12] <- 0
# psi_ml_ev <- psi_ml_ev + 1
# psi_ldet <- sum(log(psi_ml_ev))

# LU decomposition with LAPACK is a bit faster
omega_ldet <- determinant.matrix(diag(NCOL(omega_ml)) +
omega_ml, log = TRUE)[[1]]
psi_ldet <- determinant.matrix(diag(NCOL(psi_ml)) +
psi_ml, log = TRUE)[[1]]

return(
list("omega" = omega_ldet, "psi" = psi_ldet,
"sse" = sse, "beta_hat" = beta_hat)
)
}
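
The commented eigenvalue route and the LU-based `determinant()` call above compute the same log-determinants. A quick standalone check, with a made-up positive semi-definite matrix standing in for `omega_ml` / `psi_ml`:

# log det(I + A) equals the sum of log(1 + eigenvalues of A) for symmetric PSD A.
set.seed(42)
Z <- matrix(rnorm(30), 10, 3)
A <- crossprod(Z)
ld_lu <- as.numeric(determinant(diag(3) + A, logarithm = TRUE)[["modulus"]])
ld_ev <- sum(log(1 + eigen(A, symmetric = TRUE, only.values = TRUE)[["values"]]))
all.equal(ld_lu, ld_ev) # TRUE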
37 changes: 0 additions & 37 deletions R/22_get.R

This file was deleted.

55 changes: 0 additions & 55 deletions R/30_metropolis.R

This file was deleted.

79 changes: 79 additions & 0 deletions R/30_metropolis_setup.R
@@ -0,0 +1,79 @@

#' Metropolis-Hastings settings
#'
#' Function to provide settings for the Metropolis-Hastings step in
#' \code{\link{bvar}}. Options include scaling the inverse Hessian that is
#' used to draw parameter proposals and automatic scaling to achieve certain
#' acceptance rates.
#'
#' Note that adjustment of the acceptance rate by scaling the parameter
#' draw variability can only be done during the burn-in phase, as otherwise the
#' resulting draws do not feature the desirable properties of a Markov chain.
#' After the parameter draws have been scaled, some additional draws should be
#' burnt.
#'
#' @param scale_hess Numeric scalar or vector. Scaling parameter, determining
#' the range of hyperparameter draws. Should be calibrated so a reasonable
#' acceptance rate is reached. If provided as vector the length must equal
#' the number of hyperparameters (one per variable for \code{psi}).
#' @param adjust_acc Logical scalar. Whether or not to further scale the
#' variability of parameter draws during the burn-in phase.
#' @param adjust_burn Numeric scalar. How much of the burn-in phase should be
#' used to scale parameter variability. See Details.
#' @param acc_lower,acc_upper Numeric scalar. Lower (upper) bound of the target
#' acceptance rate. Required if \emph{adjust_acc} is set to \code{TRUE}.
#' @param acc_change Numeric scalar. Percent change applied to the Hessian
#' matrix for tuning acceptance rate. Required if \emph{adjust_acc} is set to
#' \code{TRUE}.
#'
#' @return Returns a named list of class \code{bv_metropolis} with options for
#' \code{\link{bvar}}.
#'
#' @keywords Metropolis-Hastings MCMC settings
#'
#' @export
#'
#' @examples
#' # Increase the scaling parameter
#' bv_mh(scale_hess = 1)
#'
#' # Turn on automatic scaling of the acceptance rate to [20%, 40%]
#' bv_mh(adjust_acc = TRUE, acc_lower = 0.2, acc_upper = 0.4)
#'
#' # Increase the rate of automatic scaling
#' bv_mh(adjust_acc = TRUE, acc_lower = 0.2, acc_upper = 0.4, acc_change = 0.1)
#'
#' # Use only 50% of the burn-in phase to adjust scaling
#' bv_mh(adjust_acc = TRUE, adjust_burn = 0.5)
bv_metropolis <- function(
scale_hess = 0.01,
adjust_acc = FALSE,
adjust_burn = 0.75,
acc_lower = 0.25, acc_upper = 0.45,
acc_change = 0.01) {

scale_hess <- vapply(scale_hess, num_check, numeric(1L),
min = 1e-16, max = 1e16,
msg = "Issue with scale_hess, please check the parameter again.")

if(isTRUE(adjust_acc)) {
adjust_burn <- num_check(adjust_burn, 1e-16, 1, "Issue with adjust_burn.")
acc_lower <- num_check(acc_lower, 0, 1 - 1e-16, "Issue with acc_lower.")
acc_upper <- num_check(acc_upper, acc_lower, 1, "Issue with acc_upper.")
acc_change <- num_check(acc_change, 1e-16, 1e16, "Issue with acc_change.")
}

out <- structure(list(
"scale_hess" = scale_hess,
"adjust_acc" = adjust_acc, "adjust_burn" = adjust_burn,
"acc_lower" = acc_lower, "acc_upper" = acc_upper,
"acc_tighten" = 1 - acc_change, "acc_loosen" = 1 + acc_change),
class = "bv_metropolis")

return(out)
}


#' @rdname bv_metropolis
#' @export
bv_mh <- bv_metropolis
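
The roles of `acc_tighten` and `acc_loosen` are only implied by the settings object. Below is a hedged sketch of the kind of update the sampler could apply during burn-in; the exact rule lives in `bvar()` and may differ, the running acceptance rate is made up, and calling `bv_mh()` assumes the internal `num_check()` helper is available.

# Sketch only: rescale the proposal variance based on the acceptance rate.
mh <- bv_mh(adjust_acc = TRUE, acc_lower = 0.25, acc_upper = 0.45)
scale <- 1
acc_rate <- 0.6 # made-up running acceptance rate
if(acc_rate < mh[["acc_lower"]]) {
  scale <- scale * mh[["acc_tighten"]] # too few accepts: tighten proposals
} else if(acc_rate > mh[["acc_upper"]]) {
  scale <- scale * mh[["acc_loosen"]] # too many accepts: widen proposals
}
scale # 1.01 here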
14 changes: 14 additions & 0 deletions R/35_metropolis_print.R
@@ -0,0 +1,14 @@

#' @export
print.bv_metropolis <- function(x, ...) {

cat("Object with settings for the Metropolis-Hastings step in `bvar()`.\n",
"Scaling parameter: ", paste0(x[["scale_hess"]], collapse = ", "), "\n",
"Automatic acceptance adjustment: ", x[["adjust_acc"]], "\n", sep = "")
if(x[["adjust_acc"]]) {
cat("Target acceptance: [", x[["acc_lower"]], ", ", x[["acc_upper"]], "]\n",
"Change applied: ", x[["acc_loosen"]] - 1, "\n", sep = "")
}

return(invisible(x))
}
54 changes: 31 additions & 23 deletions R/40_priors.R → R/40_priors_setup.R
@@ -1,8 +1,12 @@

#' Prior settings
#'
#' Function to provide priors and their parameters to \code{\link{bvar}}. Used
#' for adjusting the parameters treated as hyperparameters, the Minnesota prior
#' and adding various dummy priors through the ellipsis.
#' and adding various dummy priors through the ellipsis parameter.
#' Note that treating \eqn{\psi}{psi} (\emph{psi}) as a hyperparameter in a
#' model with many variables may lead to very low acceptance rates and thus
#' hinder convergence.
#'
#' @param hyper Character vector. Used to specify the parameters to be treated
#' as hyperparameters. May also be set to \code{"auto"} or \code{"full"} for
@@ -11,22 +15,28 @@
#' names of additional dummy priors included via \emph{...}.
#' @param mn List of class \code{"bv_minnesota"}. Options for the Minnesota
#' prior, set via \code{\link{bv_mn}}.
#' @param ... Optional lists of class \code{"bv_dummy"} with options for
#' dummy priors. \emph{Must be assigned a name in the function call}. Created
#' @param ... Optional lists of class \code{bv_dummy} with options for
#' dummy priors. \bold{Must be assigned a name in the function call}. Created
#' with \code{\link{bv_dummy}}.
#'
#' @return Returns a named list of class \code{bv_priors} with options for
#' \code{\link{bvar}}.
#' @export
#'
#' @keywords priors hierarchical Minnesota dummy settings
#'
#' @seealso \code{\link{bv_mn}}; \code{\link{bv_dummy}}
#'
#' @export
#'
#' @examples
#' # Extending hyperparameters to the full Minnesota prior
#' bv_priors(c("lambda", "alpha", "psi"))
#' # Extend the hyperparameters to the full Minnesota prior
#' bv_priors(hyper = c("lambda", "alpha", "psi"))
#' # Alternatively
#' # bv_priors("full")
#'
#' # Add a dummy prior via `bv_dummy()`
#' # Create a single-unit-root prior
#'
#' # Re-create the single-unit-root prior
#' add_sur <- function(Y, lags, par) {
#' sur <- if(lags == 1) {Y[1, ] / par} else {
#' colMeans(Y[1:lags, ]) / par
@@ -36,23 +46,19 @@
#'
#' return(list("Y" = Y_sur, "X" = X_sur))
#' }
#'
#' sur <- bv_dummy(mode = 1, sd = 1, min = 0.0001, max = 50, fun = add_sur)
#'
#' # Adding the prior with `bv_prior()`
#' priors_dum <- bv_priors(hyper = "auto", sur = sur)
#' # Add the new prior
#' bv_priors(hyper = "auto", sur = sur)
bv_priors <- function(
hyper = "auto",
mn = bv_mn(bv_lambda(0.2, 0.4, 0.0001, 5),
bv_alpha(2, 0.25, 1, 3),
bv_psi(0.004, 0.004, "auto"),
var = 1e07),
mn = bv_mn(),
...) {

# Check inputs ------------------------------------------------------------
# Check inputs ---

if(!is.null(mn) && !inherits(mn, "bv_minnesota")) {
stop("Please use `bv_mn()` to set the minnesota prior.")
if(!inherits(mn, "bv_minnesota")) { # Require Minnesota prior
stop("Please use `bv_mn()` to set the Minnesota prior.")
}
dots <- list(...)
if(!all(vapply(dots, inherits, TRUE, "bv_dummy"))) {
@@ -69,16 +75,18 @@ bv_priors <- function(
}
}


# Output ------------------------------------------------------------------
# Prepare output ---

out <- if(!is.null(mn)) {
list(hyper = hyper, lambda = mn[["lambda"]], alpha = mn[["alpha"]],
psi = mn[["psi"]], var = mn[["var"]], b = mn[["b"]], ...)
structure(list(hyper = hyper,
lambda = mn[["lambda"]], alpha = mn[["alpha"]],
psi = mn[["psi"]], var = mn[["var"]], b = mn[["b"]],
..., dummy = names(list(...))
), class = "bv_priors")
} else {
list(hyper = hyper, ...)
structure(list(hyper = hyper,
..., dummy = names(list(...))), class = "bv_priors")
}
class(out) <- "bv_priors"

return(out)
}
202 changes: 150 additions & 52 deletions R/41_minnesota.R
@@ -1,94 +1,192 @@

#' Minnesota prior settings
#'
#' Provide settings for the Minnesota prior to \code{\link{bv_priors}}.
#' Provide settings for the Minnesota prior to \code{\link{bv_priors}}. See the
#' Details section for further information.
#'
#' Essentially this prior imposes the hypothesis that individual variables
#' all follow random walk processes. This parsimonious specification typically
#' performs well in forecasts of macroeconomic time series and is often used as
#' a benchmark for evaluating accuracy (Kilian and Lütkepohl, 2017).
#' The key parameter is \eqn{\lambda}{lambda} (\emph{lambda}), which controls
#' the tightness of the prior. The parameter \eqn{\alpha}{alpha} (\emph{alpha})
#' governs variance decay with increasing lag order, while \eqn{\psi}{psi}
#' (\emph{psi}) controls the prior's standard deviation on lags of variables
#' other than the dependent.
#' The Minnesota prior is often refined with additional priors, trying to
#' minimise the importance of conditioning on initial observations. See
#' \code{\link{bv_dummy}} for more information on such priors.
#'
#' @param lambda List constructed via \code{\link{bv_lambda}}.
#' Possible parameters are \emph{mode}, \emph{sd}, \emph{min} and \emph{max}.
#' Arguments are \emph{mode}, \emph{sd}, \emph{min} and \emph{max}.
#' May also be provided as a numeric vector of length 4.
#' @param alpha List constructed via \code{\link{bv_alpha}}.
#' Possible parameters are \emph{mode}, \emph{min} and \emph{max}. High
#' values for \emph{mode} may affect invertibility of the augmented data matrix.
#' @param psi Named list with elements \emph{scale}, \emph{shape} and
#' \emph{mode}. Length needs to match the number of variables (i.e. columns) in
#' the data. By default parameters are set automatically as the squareroot of
#' the innovations variance after fitting an \eqn{AR(p)} model to the data.
#' Arguments are \emph{mode}, \emph{sd}, \emph{min} and \emph{max}. High values
#' for \emph{mode} may affect invertibility of the augmented data matrix.
#' May also be provided as a numeric vector of length 4.
#' @param psi List with elements \emph{scale}, \emph{shape} of the prior
#' as well as \emph{mode} and optionally \emph{min} and \emph{max}. The length
#' of these needs to match the number of variables (i.e. columns) in the data.
#' By default \emph{mode} is set automatically to the square-root of the
#' innovations variance after fitting an \eqn{AR(p)}{AR(p)} model to the data.
#' If \code{\link[stats]{arima}} fails due to a non-stationary time series the
#' order of integration is incremented by 1. By default \emph{min} / \emph{max}
#' are set to \emph{mode} divided / multiplied by 100.
#' @param var Numeric scalar with the prior variance on the model's constant.
#' @param b Numeric matrix with the prior mean.
#' @param mode Numeric scalar (/vector). Mode (or the like) of the parameter.
#' @param sd Numeric scalar with the standard deviation.
#' @param min Numeric scalar (/vector). Minimum allowed value.
#' @param max Numeric scalar (/vector). Maximum allowed value.
#' @param scale,shape Numeric scalar. Scale and shape parameters of the Gamma
#' @param b Numeric scalar, vector or matrix with the prior mean. A scalar is
#' applied to all variables, with a default value of 1. Consider setting it to
#' 0 for growth rates. A vector needs to match the number of variables (i.e.
#' columns) in the data, with a prior mean per variable. If provided, a matrix
#' needs to have a column per variable (\eqn{M}), and \eqn{M * p + 1} rows,
#' where \eqn{p} is the number of lags applied.
#' @param mode,sd Numeric scalar. Mode / standard deviation of the
#' parameter. Note that the \emph{mode} of \emph{psi} is set automatically by
#' default, and would need to be provided as vector.
#' @param min,max Numeric scalar. Minimum / maximum allowed value. Note that
#' for \emph{psi} these are set automatically or need to be provided as vectors.
#' @param scale,shape Numeric scalar. Scale and shape parameters of a Gamma
#' distribution.
#'
#' @return Returns a named list of class \code{bv_minnesota} with options for
#' @return Returns a list of class \code{bv_minnesota} with options for
#' \code{\link{bvar}}.
#'
#' @references
#' Kilian, L. and Lütkepohl, H. (2017). \emph{Structural Vector
#' Autoregressive Analysis}. Cambridge University Press,
#' \doi{10.1017/9781108164818}
#'
#' @seealso \code{\link{bv_priors}}; \code{\link{bv_dummy}}
#'
#' @export
#'
#' @examples
#' # Adjust alpha fully and the prior variance.
#' bv_mn(
#' alpha = bv_alpha(mode = 0.5, sd = 1, min = 1e-12, max = 10),
#' var = 1e6
#' )
#' # Adjust alpha and the Minnesota prior variance.
#' bv_mn(alpha = bv_alpha(mode = 0.5, sd = 1, min = 1e-12, max = 10), var = 1e6)
#' # Optionally use a vector as shorthand
#' bv_mn(alpha = c(0.5, 1, 1e-12, 10), var = 1e6)
#'
#' # Only adjust lambda's standard deviation
#' bv_mn(
#' lambda = bv_lambda(sd = 2)
#' )
bv_mn <- function(
lambda = bv_lambda(0.2, 0.4, 0.0001, 5), # mode, sd, min, max
alpha = bv_alpha(2, 0.25, 1, 3), # mode, sd, min, max
psi = bv_psi(0.004, 0.004, "auto"), # scale, shape, mode
#' bv_mn(lambda = bv_lambda(sd = 2))
#'
#' # Provide prior modes for psi (for a VAR with three variables)
#' bv_mn(psi = bv_psi(mode = c(0.7, 0.3, 0.9)))
bv_minnesota <- function(
lambda = bv_lambda(),
alpha = bv_alpha(),
psi = bv_psi(), # scale, shape, mode
var = 1e07,
b = "auto") {
b = 1) {

if(!inherits(lambda, "bv_dummy") && !inherits(alpha, "bv_dummy")) {
stop("Please use `bv_lambda()` / `bv_alpha()` to set lambda / alpha.")
}
if(!inherits(psi, "bv_psi")) {
stop("Please use `bv_psi()` to set psi.")
}
# Input checks
lambda <- lazy_priors(lambda)
alpha <- lazy_priors(alpha)
if(!inherits(psi, "bv_psi")) {stop("Please use `bv_psi()` to set psi.")}
var <- num_check(var, min = 1e-16, max = Inf,
msg = "Issue with the prior variance var.")

# Prior mean
if(length(b) == 1 && b == "auto") {
b <- 1
} else if(is.numeric(b)) {
if(!is.matrix(b)) { # Matrix dimensions are checked later
b <- vapply(b, num_check, numeric(1L), min = -1e16, max = 1e16,
msg = "Issue with prior mean b, please check the argument again.")
}
} else {stop("Issue with prior mean b, wrong type provided.")}

out <- list("lambda" = lambda, "alpha" = alpha,
"psi" = psi, "b" = b, "var" = var)
class(out) <- "bv_minnesota"
# Outputs
out <- structure(list(
"lambda" = lambda, "alpha" = alpha, "psi" = psi, "b" = b, "var" = var),
class = "bv_minnesota")

return(out)
}


#' @rdname bv_minnesota
#' @export
bv_mn <- bv_minnesota


#' @describeIn bv_minnesota Tightness of the Minnesota prior
#' @export
#' @rdname bv_mn
bv_lambda <- function(mode = 0.2, sd = 0.4, min = 0.0001, max = 5) {

if(sd <= 0) {stop("Parameter sd misspecified.")}
sd <- num_check(sd, min = 0 + 1e-16, max = Inf,
msg = "Parameter sd misspecified.")

return(dummy(mode, min, max, sd = sd, coef = gamma_coef(mode, sd)))
return(
dummy(mode, min, max, sd = sd, coef = gamma_coef(mode = mode, sd = sd))
)
}


#' @describeIn bv_minnesota Variance decay with increasing lag order
#' @export
#' @rdname bv_mn
bv_alpha <- function(mode = 2, sd = 0.25, min = 1, max = 3) {

return(bv_lambda(mode = mode, sd = sd, min = min, max = max))
}


#' @describeIn bv_minnesota Prior standard deviation on other lags
#' @export
#' @rdname bv_mn
bv_psi <- function(scale = 0.004, shape = 0.004, mode = "auto",
min = "auto", max = "auto") {

if(any(scale <= 0, shape <= 0)) {stop("Parameters of psi misspecified.")}
if(mode != "auto") {
if(min == "auto") {min <- mode / 100}
if(max == "auto") {max <- mode * 100}
if(any(0 >= min, min >= max)) {stop("Boundaries misspecified.")}
bv_psi <- function(
scale = 0.004, shape = 0.004,
mode = "auto", min = "auto", max = "auto") {

# Checks ---

scale <- num_check(scale, min = 1e-16, max = Inf,
msg = "Invalid value for scale (outside of (0, Inf]).")
shape <- num_check(shape, min = 1e-16, max = Inf,
msg = "Invalid value for shape (outside of (0, Inf]).")

# Check mode, min and max
if(any(mode != "auto")) {
mode <- vapply(mode, num_check, numeric(1L), min = 0, max = Inf,
msg = "Invalid value(s) for mode (outside of [0, Inf]).")

if(length(min) == 1 && min == "auto") {min <- mode / 100}
if(length(max) == 1 && max == "auto") {max <- mode * 100}

min <- vapply(min, num_check, numeric(1L), min = 0, max = Inf,
msg = "Invalid value(s) for min (outside of [0, max)).")
max <- vapply(max, num_check, numeric(1L), min = 0, max = Inf,
msg = "Invalid value(s) for max (outside of (min, Inf]).")

if(length(mode) != length(min) || length(mode) != length(max)) {
stop("The length of mode and boundaries diverge.")
}

if(any(min >= max) || any(min > mode) || any(mode > max)) {
stop("Invalid values for min / max.")
}

} else if(any(c(min != "auto", max != "auto"))) {
stop("Boundaries are only adjustable with a given mode.")
}

out <- list(scale = scale, shape = shape, mode = mode, min = min, max = max)
class(out) <- "bv_psi"
# Outputs ---

out <- structure(list(
"mode" = mode, "min" = min, "max" = max,
"coef" = list("k" = shape, "theta" = scale)
), class = "bv_psi")

return(out)
}


#' @noRd
lazy_priors <- function(x) {

if(!inherits(x, "bv_dummy")) {
if(length(x) == 4 && is.numeric(x)) { # Allow length 4 numeric vectors
return(x = bv_lambda(x[1], x[2], x[3], x[4]))
}
stop("Please use `bv_lambda()` / `bv_alpha()` to set lambda / alpha.")
}

return(x)
}
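
A short sketch of the new `bv_psi()` checks when a mode vector is supplied, assuming the internal `num_check()` helper is available: the bounds default to the mode divided and multiplied by 100, and bounds without a mode are rejected.

# Prior modes for psi in a two-variable model; min / max are filled in.
p <- bv_psi(mode = c(0.7, 0.3))
p[["min"]] # 0.007 0.003
p[["max"]] # 70 30
# bv_psi(min = 0.1) # errors, boundaries are only adjustable with a given mode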
88 changes: 54 additions & 34 deletions R/42_dummy.R
@@ -1,42 +1,39 @@
#' Dummy prior settings
#'
#' @param mode Numeric scalar. Mode (or the like) of the parameter.
#' @param min Numeric scalar. Minimum allowed value.
#' @param max Numeric scalar. Maximum allowed value.
#' @param ... Other possible parameters such as sd or fun.
#'
#' @return Returns a list of class \code{bv_dummy.}
#'
#' @noRd
dummy <- function(
mode = 1,
min = 0.0001, max = 5,
...) {

if(0 >= min || min >= max) {stop("Boundaries misspecified.")}
if(mode < 0) {stop("Parameter misspecified.")}

out <- list(mode = mode, min = min, max = max, ...)
class(out) <- "bv_dummy"

return(out)
}


#' Dummy prior settings
#'
#' Allows the creation of dummy observation priors for \code{\link{bv_priors}}.
#' See the Details section for information on common dummy priors.
#'
#' Dummy priors are often used to "reduce the importance of the deterministic
#' component implied by VARs estimated conditioning on the initial
#' observations" (Giannone, Lenza and Primiceri, 2015, p. 440).
#' One such prior is the sum-of-coefficients (SOC) prior, which imposes the
#' notion that a no-change forecast is optimal at the beginning of a time
#' series. Its key parameter \eqn{\mu}{mu} controls the tightness - i.e. for
#' low values the model is pulled towards a form with as many unit roots as
#' variables and no cointegration.
#' Another such prior is the single-unit-root (SUR) prior, that allows for
#' cointegration relationships in the data. It pushes variables either towards
#' their unconditional mean or towards the presence of at least one unit root.
#' These priors are implemented via Theil mixed estimation, i.e. by adding
#' dummy-observations on top of the data matrix. They are available via the
#' functions \code{\link{bv_soc}} and \code{\link{bv_sur}}.
#'
#' @param mode Numeric scalar. Mode (or the like) of the parameter.
#' @param sd Numeric scalar. Standard deviation (or the like) of the parameter.
#' @param min Numeric scalar. Minimum allowed value.
#' @param max Numeric scalar. Maximum allowed value.
#' @param fun Function taking \emph{Y}, \emph{lags} and the prior's parameter
#' \emph{par} to generate and return a named list with elements \emph{X} and
#' \emph{Y} (numeric matrices).
#' @inheritParams bv_mn
#'
#' @return Returns a named list of class \code{bv_dummy} for
#' \code{\link{bv_priors}}.
#'
#' @references
#' Giannone, D. and Lenza, M. and Primiceri, G. E. (2015) Prior Selection for
#' Vector Autoregressions. \emph{The Review of Economics and Statistics},
#' \bold{97:2}, 436-451, \doi{10.1162/REST_a_00483}.
#'
#' @seealso \code{\link{bv_priors}}; \code{\link{bv_minnesota}}
#'
#' @export
#'
#' @examples
@@ -50,7 +47,6 @@ dummy <- function(
#'
#' return(list("Y" = Y_soc, "X" = X_soc))
#' }
#'
#' soc <- bv_dummy(mode = 1, sd = 1, min = 0.0001, max = 50, fun = add_soc)
#'
#' # Create a single-unit-root prior
@@ -66,16 +62,40 @@ dummy <- function(
#'
#' sur <- bv_dummy(mode = 1, sd = 1, min = 0.0001, max = 50, fun = add_sur)
#'
#' # Adding them to the prior list with `bv_prior()`
#' priors_dum <- bv_priors(hyper = "auto", soc = soc, sur = sur)
#' # Add the new custom dummy priors
#' bv_priors(hyper = "auto", soc = soc, sur = sur)
bv_dummy <- function(
mode = 1, sd = 1,
min = 0.0001, max = 5,
fun) {

if(sd <= 0) {stop("Parameter sd misspecified.")}
sd <- num_check(sd, min = 0 + 1e-16, max = Inf,
msg = "Parameter sd misspecified.")
fun <- match.fun(fun)

return(dummy(mode, min, max, sd = sd, fun = fun,
coef = gamma_coef(mode, sd)))
return(
dummy(mode = mode, min = min, max = max, sd = sd, fun = fun,
coef = gamma_coef(mode, sd))
)
}


#' @rdname bv_dummy
#' @noRd
dummy <- function(
mode = 1,
min = 0.0001, max = 5,
...) {

mode <- num_check(mode, min = 0, max = Inf,
msg = "Invalid value for mode (outside of [0, Inf]).")
min <- num_check(min, min = 0, max = max - 1e-16,
msg = "Invalid value for min (outside of [0, max)).")
max <- num_check(max, min = min + 1e-16, max = Inf,
msg = "Invalid value for max (outside of (min, Inf]).")

out <- structure(list(
"mode" = mode, "min" = min, "max" = max, ...), class = "bv_dummy")

return(out)
}
29 changes: 13 additions & 16 deletions R/43_sur_soc.R
@@ -1,8 +1,9 @@
#' Sum-of-coefficients and single-unit-root prior functions

#' Sum-of-coefficients and single-unit-root prior creation functions
#'
#' @param Y Numeric scalar. Mode (or the like) of the parameter.
#' @param lags Numeric scalar. Minimum allowed value.
#' @param par Numeric scalar. Maximum allowed value.
#' @param Y Numeric matrix. Data to base the dummy observations on.
#' @param lags Integer scalar. Lag order of the model.
#' @param par Numeric scalar. Parameter value of the prior.
#'
#' @return Returns a list with \emph{Y} and \emph{X} extended with the
#' respective dummy observations.
@@ -12,39 +13,35 @@
soc <- if(lags == 1) {diag(Y[1, ]) / par} else {
diag(colMeans(Y[1:lags, ])) / par
}
Y_soc <- soc
X_soc <- cbind(rep(0, ncol(Y)), matrix(rep(soc, lags), nrow = ncol(Y)))

return(list("Y" = Y_soc, "X" = X_soc))
return(list("Y" = soc, "X" = X_soc))
}


#' @rdname .add_soc
#' @noRd
.add_sur <- function(Y, lags, par) {
sur <- if(lags == 1) {Y[1, ] / par} else {
colMeans(Y[1:lags, ]) / par
}
Y_sur <- sur
X_sur <- c(1 / par, rep(sur, lags))

return(list("Y" = Y_sur, "X" = X_sur))
return(list("Y" = sur, "X" = X_sur))
}


#' Sum-of-coefficients dummy prior
#'
#' @export
#' @rdname bv_dummy
#' @describeIn bv_dummy Sum-of-coefficients dummy prior
bv_soc <- function(mode = 1, sd = 1, min = 0.0001, max = 50) {

bv_dummy(mode, sd, min, max, fun = .add_soc)
bv_dummy(mode = mode, sd = sd, min = min, max = max, fun = .add_soc)
}

#' Single-unit-root dummy prior
#'

#' @export
#' @rdname bv_dummy
#' @describeIn bv_dummy Single-unit-root dummy prior
bv_sur <- function(mode = 1, sd = 1, min = 0.0001, max = 50) {

bv_dummy(mode, sd, min, max, fun = .add_sur)
bv_dummy(mode = mode, sd = sd, min = min, max = max, fun = .add_sur)
}
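
To make the dummy-observation idea concrete, the rows appended by the two priors can be inspected directly on a toy matrix. This is a sketch; `.add_soc()` and `.add_sur()` are internal and need to be sourced from this file, and the data values are made up.

# Dummy observations for a made-up two-variable dataset with two lags.
Y <- matrix(c(1, 2, 3, 4, 5, 6), ncol = 2)
.add_soc(Y, lags = 2, par = 1) # one dummy observation per variable
.add_sur(Y, lags = 2, par = 1) # a single dummy observation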
78 changes: 78 additions & 0 deletions R/45_priors_print.R
@@ -0,0 +1,78 @@

#' @export
print.bv_priors <- function(x, ...) {

cat("Object with prior settings for `bvar()`.\n",
"Hyperparameters: ", paste0(x[["hyper"]], collapse = ", "),
"\n\n", sep = "")

if(!is.null(x[["lambda"]])) {print.bv_minnesota(x, indent = TRUE)}

dummy_pos <- names(x) %in% x[["dummy"]]
if(any(dummy_pos) && !length(x[["dummy"]]) == 0) {
cat("\nDummy prior(s):\n")
dummies <- names(x)[dummy_pos]
for(dummy in dummies) {
cat(dummy, ":\n", sep = ""); print.bv_dummy(x[[dummy]], indent = TRUE)
}
}

return(invisible(x))
}


#' @export
print.bv_minnesota <- function(x, indent = FALSE, ...) {

cat("Minnesota prior:\nlambda:\n"); print(x[["lambda"]], indent = indent)
cat("alpha:\n"); print(x[["alpha"]], indent = indent)
cat("psi:\n"); print(x[["psi"]], indent = indent)
cat("\nVariance of the constant term:", x[["var"]], "\n")

return(invisible(x))
}


#' @export
print.bv_dummy <- function(x, indent = FALSE, ...) {

print_priors(x, ...)
cat(if(indent) {"\t"}, "Mode / Bounds: ",
x[["mode"]], " / [", x[["min"]], ", ", x[["max"]], "]\n", sep = "")

return(invisible(x))
}


#' @export
print.bv_psi <- function(x, indent = FALSE, ...) {

print_priors(x, ...)
if(any(x[["mode"]] == "auto")) {
cat(if(indent) {"\t"}, "Mode / Bounds: retrieved automatically\n")
} else {
for(i in seq_along(x[["mode"]])) {
cat(if(indent) {"\t"}, "#", i, " Mode / Bounds: ",
x[["mode"]][i], " / [", x[["min"]][i], ", ", x[["max"]][i], "]\n",
sep = "")
}
}

return(invisible(x))
}


#' Priors print method
#'
#' @param x A \code{bv_dummy} or \code{bv_psi} object.
#' @param ... Not used.
#'
#' @noRd
print_priors <- function(x, indent = FALSE, ...) {

cat(if(indent) {"\t"}, "Shape / Scale: ",
round(x[["coef"]][["k"]], 3L), " / ",
round(x[["coef"]][["theta"]], 3L), "\n", sep = "")

return(invisible(x))
}
34 changes: 0 additions & 34 deletions R/50_fcast.R

This file was deleted.

84 changes: 84 additions & 0 deletions R/50_fcast_setup.R
@@ -0,0 +1,84 @@

#' Forecast settings
#'
#' Provide forecast settings to \code{\link{predict.bvar}}. Allows adjusting
#' the horizon of forecasts, and for setting up conditional forecasts. See the
#' Details section for further information.
#'
#' Conditional forecasts are calculated using the algorithm by Waggoner and Zha
#' (1999). They are set up by imposing a path on selected variables.
#'
#' @param horizon Integer scalar. Horizon for which to compute forecasts.
#' @param cond_path Optional numeric vector or matrix used for conditional
#' forecasts. Supply variable path(s) on which forecasts are conditioned.
#' Unrestricted future realisations should be filled with \code{NA}. Note that
#' not all variables can be restricted at the same time.
#' @param cond_vars Optional character or numeric vector. Used to subset
#' \emph{cond_path} to specific variable(s) via name or position. Not
#' needed when \emph{cond_path} is constructed for all variables.
#'
#' @return Returns a named list of class \code{bv_fcast} with options for
#' \code{\link{bvar}} or \code{\link{predict.bvar}}.
#'
#' @references
#' Waggoner, D. F., & Zha, T. (1999). Conditional Forecasts in Dynamic
#' Multivariate Models. \emph{Review of Economics and Statistics},
#' \bold{81:4}, 639-651, \doi{10.1162/003465399558508}.
#'
#' @seealso \code{\link{predict.bvar}}; \code{\link{plot.bvar_fcast}}
#'
#' @keywords BVAR forecast settings
#'
#' @export
#'
#' @examples
#' # Set forecast-horizon to 20 time periods for unconditional forecasts
#' bv_fcast(horizon = 20)
#'
#' # Define a path for the second variable (in the initial six periods).
#' bv_fcast(cond_path = c(1, 1, 1, 1, 1, 1), cond_vars = 2)
#'
#' # Constrain the paths of the first and third variables.
#' paths <- matrix(NA, nrow = 10, ncol = 2)
#' paths[1:5, 1] <- 1
#' paths[1:10, 2] <- 2
#' bv_fcast(cond_path = paths, cond_vars = c(1, 3))
bv_fcast <- function(
horizon = 12,
cond_path = NULL,
cond_vars = NULL) {

horizon <- int_check(horizon, min = 1, max = 1e6,
msg = "Invalid value for horizon (outside of [1, 1e6]).")

if(!is.null(cond_path)) {
if(!is.numeric(cond_path)) {stop("Invalid type of cond_path.")}

if(is.vector(cond_path)) {
if(is.null(cond_vars)) {
stop("Please provide the constrained variable(s) via cond_vars.")
}
cond_path <- matrix(cond_path)
}

if(!is.null(cond_vars)) {
if(!is.null(cond_vars) && ncol(cond_path) != length(cond_vars)) {
stop("Dimensions of cond_path and cond_vars do not match.")
}
if(any(duplicated(cond_vars))) {
stop("Duplicated value(s) found in cond_vars.")
}
}

if(nrow(cond_path) > horizon) {
horizon <- nrow(cond_path)
message("Increasing horizon to the length of cond_path.")
}
}

out <- structure(list(
"horizon" = horizon, "cond_path" = cond_path, "cond_vars" = cond_vars),
class = "bv_fcast")

return(out)
}
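
One behaviour the examples above do not show: when the conditioned path is longer than `horizon`, the horizon is extended with a message. A brief sketch, assuming the internal `int_check()` helper is available:

# The horizon grows to match a ten-period conditioned path.
fc <- bv_fcast(horizon = 8, cond_path = matrix(1, nrow = 10), cond_vars = 2)
fc[["horizon"]] # 10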
28 changes: 12 additions & 16 deletions R/51_fcast_compute.R
@@ -1,7 +1,8 @@

#' Forecast computation
#'
#' Compute unconditional forecasts from the VAR's posterior draws obtained via
#' \code{\link{draw_post}}.
#' Compute unconditional forecasts without shocks from the VAR's posterior draws
#' obtained via \code{\link{draw_post}}.
#'
#' @param Y Numeric matrix (\eqn{N * M}).
#' @param K Integer scalar. Columns of \emph{X}, i.e. \eqn{M * lags + 1}.
@@ -14,31 +15,26 @@
#' the model in state space representation.
#' @param beta_const Numeric vector. Posterior draw of the VAR coefficients
#' corresponding to the constant of the model.
#' @param sigma Numeric matrix. Posterior draw of the vcov-matrix of the
#' model.
#' @param conditional Not yet implemented.
#'
#' @return Returns a matrix containing forecasts for all variables in the model.
#' @return Returns a matrix containing forecasts (without shocks) for all
#' variables in the model.
#'
#' @importFrom stats rnorm
#'
#' @noRd
compute_fcast <- function(
Y, K, M, N, lags,
horizon,
beta_comp, beta_const,
sigma,
conditional = NULL) {
beta_comp, beta_const) {

Y_f <- matrix(NA, horizon + 1, K - 1)
Y_f[1, ] <- sapply(t(Y[N:(N - lags + 1), ]), c)
Y_f[1, ] <- vapply(t(Y[N:(N - lags + 1), ]), c, numeric(1L))

for(i in 2:(1 + horizon)) {
Y_f[i, ] <- Y_f[i - 1, ] %*% t(beta_comp) +
c(beta_const, rep(0, M * (lags - 1))) +
c(sigma %*% rnorm(M), rep(0, M * (lags - 1)))
for(i in seq.int(2, 1 + horizon)) {
Y_f[i, ] <- tcrossprod(Y_f[i - 1, ], beta_comp) +
c(beta_const, rep(0, M * (lags - 1))) # Maybe go back to normal beta
}

# Remove Y_t and further lags from Y_f to get the forecast
return(Y_f[2:(1 + horizon), 1:M])
# Remove Y_t and lagged variables
return(Y_f[-1, 1:M])
}
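
As a plain-R illustration of the recursion above, the sketch below iterates a made-up two-variable VAR(1) in companion form (which for one lag is just the coefficient matrix) without shocks; all coefficient values are invented for illustration.

# Point forecasts from the companion-form recursion, no shocks added.
M <- 2; lags <- 1; horizon <- 4
beta_comp <- matrix(c(0.5, 0.1, 0, 0.4), M, M, byrow = TRUE)
beta_const <- c(0.2, -0.1)
Y_f <- matrix(NA, horizon + 1, M * lags)
Y_f[1, ] <- c(1, 1) # last observed values
for(i in seq.int(2, 1 + horizon)) {
  Y_f[i, ] <- tcrossprod(Y_f[i - 1, ], beta_comp) +
    c(beta_const, rep(0, M * (lags - 1)))
}
Y_f[-1, 1:M] # forecasts for the next four periods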
88 changes: 88 additions & 0 deletions R/52_fcast_cond.R
@@ -0,0 +1,88 @@

#' Conditional forecast computation
#'
#' Compute conditional forecasts using the algorithm of Waggoner and Zha (1999).
#'
#' @param constr_mat Numeric matrix with constrained paths of variables
#' and \code{NAs} for unrestricted values.
#' @param fcast_base Numeric matrix with unconditional forecasts, excluding the
#' random shocks that unconditional forecasts otherwise include.
#' @param ortho_irf Numeric matrix with orthogonal impulse responses for all
#' variables. Computed by \code{\link{compute_irf}} or \code{\link{irf.bvar}}.
#' @param horizon Integer scalar. Horizon for which to compute forecasts.
#' @param M Integer scalar. Columns of \emph{Y}.
#'
#' @return Returns a numeric matrix with conditional forecasts.
#'
#' @references
#' Waggoner, D. F., & Zha, T. (1999). Conditional Forecasts in Dynamic
#' Multivariate Models. \emph{Review of Economics and Statistics},
#' \bold{81:4}, 639-651, \doi{10.1162/003465399558508}.
#'
#' @importFrom stats rnorm
#'
#' @noRd
cond_fcast <- function(constr_mat, fcast_base, ortho_irf, horizon, M) {

cond_fcast <- matrix(NA, horizon, M)
# First get constrained shocks
v <- sum(!is.na(constr_mat))
s <- M * horizon
r <- c(rep(0, v))
R <- matrix(0, v, s)
pos <- 1
for(i in seq_len(horizon)) {
for(j in seq_len(M)) {
if(is.na(constr_mat[i, j])) {next}
r[pos] <- constr_mat[i, j] - fcast_base[i, j]
for(k in seq_len(i)) {
R[pos, ((k - 1) * M + 1):(k * M)] <- ortho_irf[j, (i - k + 1) , ]
}
pos <- pos + 1
}
}

R_svd <- svd(R, nu = nrow(R), nv = ncol(R))
U <- R_svd[["u"]]
P_inv <- diag(1/R_svd[["d"]])
V1 <- R_svd[["v"]][, 1:v]
V2 <- R_svd[["v"]][, (v + 1):s]
eta <- V1 %*% P_inv %*% t(U) %*% r + V2 %*% rnorm(s - v)
eta <- matrix(eta, M, horizon)

# Use constrained shocks and unconditional forecasts (without shocks) to
# create conditional forecasts
for(h in seq_len(horizon)) {
temp <- matrix(0, M, 1)
for(k in seq_len(h)) {
temp <- temp + ortho_irf[, (h - k + 1), ] %*% eta[ , k]
}
cond_fcast[h, ] <- fcast_base[h, ] + t(temp)
}

return(cond_fcast)
}


#' Build a constraint matrix for conditional forecasts
#'
#' @inheritParams bv_irf
#' @param variables Character vector of all variable names.
#' @param M Integer scalar. Count of all variables.
#'
#' @return Returns a numeric matrix with the constrained paths of variables and
#' \code{NAs} for unrestricted values.
#'
#' @noRd
get_constr_mat <- function(horizon, path, vars = NULL, variables = NULL, M) {

pos <- pos_vars(vars, variables, M)
constr_mat <- matrix(NA_real_, horizon, M)
constr_mat[seq_len(nrow(path)), pos] <- path
colnames(constr_mat) <- variables
if(any(apply(constr_mat, 1, function(x) !any(is.na(x))))) {
stop("One variable must be unrestricted at each point in time.")
}

return(constr_mat)
}
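
A brief sketch of the constraint matrix this helper builds, pinning the second of three variables for the first four periods. It assumes the internal `pos_vars()` helper used above is available; the variable names are invented.

# NA marks unrestricted values; only rows 1-4 of "y2" are constrained.
constr <- get_constr_mat(horizon = 6, path = matrix(rep(1, 4)),
  vars = 2, variables = c("y1", "y2", "y3"), M = 3)
constr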
198 changes: 198 additions & 0 deletions R/54_predict.R
@@ -0,0 +1,198 @@

#' Predict method for Bayesian VARs
#'
#' Retrieves / calculates forecasts for Bayesian VARs generated via
#' \code{\link{bvar}}. If a forecast is already present and no settings are
#' supplied it is simply retrieved, otherwise it will be calculated.
#' To store the results you may want to assign the output using the setter
#' function (\code{predict(x) <- predict(x)}). May also be used to update
#' confidence bands.
#'
#' @param object A \code{bvar} object, obtained from \code{\link{bvar}}.
#' Summary and print methods take in a \code{bvar_fcast} object.
#' @param ... A \code{bv_fcast} object or parameters to be fed into
#' \code{\link{bv_fcast}}. Contains settings for the forecast.
#' @param conf_bands Numeric vector of confidence bands to apply.
#' E.g. for bands at 5\%, 10\%, 90\% and 95\% set this to \code{c(0.05, 0.1)}.
#' Note that the median, i.e. 0.5, is always included.
#' @param n_thin Integer scalar. Every \emph{n_thin}'th draw in \emph{object}
#' is used to predict, others are dropped.
#' @param newdata Optional numeric matrix or dataframe. Used to base the
#' prediction on.
#' @param vars Optional numeric or character vector. Used to subset the summary
#' to certain variables by position or name (must be available). Defaults to
#' \code{NULL}, i.e. all variables.
#' @param value A \code{bvar_fcast} object to assign.
#'
#' @return Returns a list of class \code{bvar_fcast} including forecasts
#' at desired confidence bands.
#' The summary method returns a numeric array of forecast paths at the
#' specified confidence bands.
#'
#' @seealso \code{\link{plot.bvar_fcast}}; \code{\link{bv_fcast}}
#'
#' @keywords BVAR forecast analysis
#'
#' @export
#'
#' @importFrom stats predict
#'
#' @examples
#' \donttest{
#' # Access a subset of the fred_qd dataset
#' data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")]
#' # Transform it to be stationary
#' data <- fred_transform(data, codes = c(5, 5, 1), lag = 4)
#'
#' # Estimate a BVAR using one lag, default settings and very few draws
#' x <- bvar(data, lags = 1, n_draw = 1000L, n_burn = 200L, verbose = FALSE)
#'
#' # Calculate a forecast with an increased horizon
#' y <- predict(x, horizon = 20)
#'
#' # Add some confidence bands and store the forecast
#' predict(x) <- predict(x, conf_bands = c(0.05, 0.16))
#'
#' # Recalculate with different settings and increased thinning
#' predict(x, bv_fcast(24L), n_thin = 10L)
#'
#' # Simulate some new data to predict on
#' predict(x, newdata = matrix(rnorm(300), ncol = 3))
#'
#' # Calculate a conditional forecast (with a constrained second variable).
#' predict(x, cond_path = c(1, 1, 1, 1, 1, 1), cond_vars = 2)
#'
#' # Get a summary of the stored forecast
#' summary(x)
#'
#' # Only get the summary for variable #2
#' summary(x, vars = 2L)
#' }
predict.bvar <- function(
object, ...,
conf_bands, n_thin = 1L,
newdata) {

dots <- list(...)
fcast_store <- object[["fcast"]]


# Calculate new forecast -----

if(is.null(fcast_store) || length(dots) != 0L || !missing(newdata)) {

# Setup ---

fcast <- if(length(dots) > 0 && inherits(dots[[1]], "bv_fcast")) {
dots[[1]]
} else {bv_fcast(...)}

# Checks
n_pres <- object[["meta"]][["n_save"]]
n_thin <- int_check(n_thin, min = 1, max = (n_pres / 10),
"Issue with n_thin. Maximum allowed is (n_draw - n_burn) / 10.")
n_save <- int_check((n_pres / n_thin), min = 1)

K <- object[["meta"]][["K"]]
M <- object[["meta"]][["M"]]
lags <- object[["meta"]][["lags"]]
beta <- object[["beta"]]
sigma <- object[["sigma"]]

if(missing(newdata)) {
Y <- object[["meta"]][["Y"]]
N <- object[["meta"]][["N"]]
} else {
if(!all(vapply(newdata, is.numeric, logical(1))) || any(is.na(newdata)) ||
ncol(newdata) != M) {stop("Problem with newdata.")}
Y <- as.matrix(newdata)
N <- nrow(Y)
}

# Conditional forecast
conditional <- !is.null(fcast[["cond_path"]])
if(conditional) {
constr_mat <- get_constr_mat(horizon = fcast[["horizon"]],
path = fcast[["cond_path"]], vars = fcast[["cond_vars"]],
object[["variables"]], M)
fcast[["setup"]] <- constr_mat
irf_store <- object[["irf"]]
if(is.null(irf_store) || !irf_store[["setup"]][["identification"]] ||
irf_store[["setup"]][["horizon"]] < fcast[["horizon"]]) {
message("No suitable impulse responses found. Calculating...")
irf_store <- irf.bvar(object,
horizon = fcast[["horizon"]], identification = TRUE, fevd = FALSE,
n_thin = n_thin)
}
}

# Sampling ---

fcast_store <- structure(list(
"fcast" = array(NA, c(n_save, fcast[["horizon"]], M)),
"setup" = fcast, "variables" = object[["variables"]], "data" = Y),
class = "bvar_fcast")

j <- 1
for(i in seq_len(n_save)) {
beta_comp <- get_beta_comp(beta[j, , ], K = K, M = M, lags = lags)
fcast_base <- compute_fcast(
Y = Y, K = K, M = M, N = N, lags = lags,
horizon = fcast[["horizon"]],
beta_comp = beta_comp, beta_const = beta[j, 1, ])

if(conditional) { # Conditional uses impulse responses
fcast_store[["fcast"]][i, , ] <- cond_fcast(
constr_mat = constr_mat, fcast_base = fcast_base,
ortho_irf = irf_store[["irf"]][j, , , ],
horizon = fcast[["horizon"]], M = M)
} else { # Unconditional gets noise
fcast_store[["fcast"]][i, , ] <- fcast_base + t(crossprod(sigma[j, , ],
matrix(rnorm(M * fcast[["horizon"]]), nrow = M)))
}
j <- j + n_thin
}
} # End new forecast

if(is.null(fcast_store[["quants"]]) || !missing(conf_bands)) {
fcast_store <- if(!missing(conf_bands)) {
predict.bvar_fcast(fcast_store, conf_bands)
} else {predict.bvar_fcast(fcast_store, c(0.16))}
}

return(fcast_store)
}


#' @noRd
#' @export
`predict<-.bvar` <- function(object, value) {

if(!inherits(value, "bvar_fcast")) {
stop("Please provide a `bvar_fcast` object to assign.")
}

object[["fcast"]] <- value

return(object)
}


#' @noRd
#' @export
#'
#' @importFrom stats predict quantile
predict.bvar_fcast <- function(object, conf_bands, ...) {

if(!missing(conf_bands)) {
quantiles <- quantile_check(conf_bands)
object[["quants"]] <- apply(object[["fcast"]], c(2, 3), quantile, quantiles)
}

return(object)
}


#' @rdname predict.bvar
#' @export
`predict<-` <- function(object, value) {UseMethod("predict<-", object)}
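
The quantile step in `predict.bvar_fcast()` above reduces to an `apply()` over the draws array. A standalone sketch of the resulting shape, using `quantile()` directly instead of the package's `quantile_check()` and a made-up draws array:

# Quantiles over draws x horizon x variables, as stored in `quants`.
draws <- array(rnorm(200 * 8 * 3), c(200, 8, 3))
quants <- apply(draws, c(2, 3), quantile, c(0.05, 0.16, 0.5, 0.84, 0.95))
dim(quants) # 5 8 3: bands x horizon x variables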
77 changes: 77 additions & 0 deletions R/55_fcast_print.R
@@ -0,0 +1,77 @@

#' @export
print.bv_fcast <- function(x, ...) {

cat("Object with settings for computing forecasts.\n")

.print_fcast(x, ...)

return(invisible(x))
}


#' @export
print.bvar_fcast <- function(x, ...) {

cat("Forecast object from `bvar()`.\n")

.print_fcast(x[["setup"]], ...)

cat("Variables: ", dim(x[["fcast"]])[3], "\n",
"Iterations: ", dim(x[["fcast"]])[1], "\n", sep = "")

return(invisible(x))
}


#' @noRd
.print_fcast <- function(x, ...) {

cat("Horizon: ", x[["horizon"]], "\n",
"Conditional: ", !is.null(x[["cond_path"]]), "\n", sep = "")

return(invisible(x))
}


#' @rdname predict.bvar
#' @export
summary.bvar_fcast <- function(object, vars = NULL, ...) {

quants <- object[["quants"]]
has_quants <- length(dim(quants)) == 3
M <- if(has_quants) {dim(quants)[3]} else {dim(quants)[2]}

variables <- name_deps(variables = object[["variables"]], M = M)
pos <- pos_vars(vars, variables = variables, M = M)

out <- structure(list(
"fcast" = object, "quants" = quants,
"variables" = variables, "pos" = pos, "has_quants" = has_quants),
class = "bvar_fcast_summary")

return(out)
}


#' @export
print.bvar_fcast_summary <- function(x, digits = 2L, ...) {

print.bvar_fcast(x[["fcast"]])

if(!is.null(x[["fcast"]][["setup"]][["constr_mat"]])) {
cat("Constraints for conditional forecast:\n")
print(x[["fcast"]][["setup"]][["constr_mat"]])
}

cat(if(!x[["has_quants"]]) {"\nMedian forecast:\n"} else {"\nForecast:\n"})

for(i in x[["pos"]]) {
cat("\tVariable ", x[["variables"]][i], ":\n", sep = "")
print(round(
if(x[["has_quants"]]) {x[["quants"]][, , i]} else {x[["quants"]][, i]},
digits = digits))
}

return(invisible(x))
}