
Devel #10

Merged: 19 commits merged into main, May 2, 2023
Conversation

@Myfanwy Myfanwy (Contributor) commented on May 2, 2023

This branch introduces a workflow order and a README that will become the makefile; it also closes #8 and #9.

Previously there were separate scripts for tags, deployments, and detections, all accessing the same database. These are now consolidated in R/get_yolo_raw_data.R (renamed from R/wst yolo tags old script). This script is intended to be sourced once from CFS; the resulting .rds can then be used by anyone wanting to reproduce the data workflow. A final data/ folder with the .rds files needed to reproduce the data-cleaning process can be made available at the end of the project.
The goal is to have combine_detections, combine_tags, and combine_deployments scripts that combine the three table sources prior to QA/QC, as part of refactoring toward a cleaner workflow. R/parse_lodi_dets.R will eventually be deleted, as it is now obsolete.
... to match the combine_detections script naming convention; this combines the three tagging tables and saves the result as both a .csv and an .rds. The .rds is used in further data cleaning, and the .csv gets shared/formatted for the database. The metadata for Yolo comes from the original tagging spreadsheets rather than the .sqlite db; this is inconsistent and may need to be revisited later to streamline the workflow. At the very least, the data/wst_yolo.rds tags table that gets saved in R/get_yolo_raw_data.R will need to be deleted.
…range cutoff from the qaqc script to the combine script just so we don't have to do it a million times.
The workflow is currently: combine_tags -> combine_detections -> parse_deployments -> qaqc_detections. These scripts are linear; each depends on the output of the previous one.
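The linear ordering above could be driven by a small wrapper; a minimal sketch, assuming the scripts live under R/ with these file names and are run non-interactively via Rscript (the paths and the Rscript invocation are assumptions, only the script order comes from the PR):

```shell
#!/bin/sh
# Hypothetical driver for the linear QA/QC workflow described in this PR.
# Each script depends on the previous one, so they must run in this order.
set -e

count=0
for script in \
    R/combine_tags.R \
    R/combine_detections.R \
    R/parse_deployments.R \
    R/qaqc_detections.R
do
    echo "running $script"
    # Rscript "$script"   # uncomment to actually execute each step
    count=$((count + 1))
done
```

Because `set -e` is in effect, a failure in any step would stop the chain, which matches the scripts' strictly sequential dependencies.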
Needed to prep this file for parse_deployments; it's now akin to the get_yolo_raw_data.R script: it gets and preps the raw data for BARD.
This script combines and cleans the three deployment tables, with edits made to generalize file paths. It depends on the output of get_bard_deployments.R.
README will eventually be turned into the makefile
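Once the README becomes a makefile, the dependency chain could be encoded along these lines. This is only a sketch: the target and output file names are assumptions, and only the script names come from the PR.

```make
# Hypothetical Makefile for the linear workflow; output .rds names are assumed.
all: data/qaqc_detections.rds

data/combined_tags.rds: R/combine_tags.R
	Rscript R/combine_tags.R

data/combined_detections.rds: R/combine_detections.R data/combined_tags.rds
	Rscript R/combine_detections.R

data/parsed_deployments.rds: R/parse_deployments.R data/combined_detections.rds
	Rscript R/parse_deployments.R

data/qaqc_detections.rds: R/qaqc_detections.R data/parsed_deployments.rds
	Rscript R/qaqc_detections.R
```

Expressing each step's output as a prerequisite of the next means `make` would rerun only the steps downstream of whatever changed.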
Moved content from the categorize_basins and receiver_map scripts to R/clean_deployments.R. This script is now very long and still a bit fragile, but the list of dependencies is shorter and it's only intended to be sourced once. Updated the README to reflect the workflow order of the scripts for the QA/QC process; the next step is to initialize a .sqlite database.
Per a conversation with Laura, Zac, and Matt, we decided to have the database consist only of the merged detections and deployments table, with the tags and deployments tables included as .csvs. This is because the merged table is what's useful in their analyses, and it's too large for a .csv.
With the updated raw data and the re-export of all_rec_locs.csv, two incorrect receiver locations are fixed (closes #8 and closes #9).

Also updated the filepaths to the raw data in clean_deployments, because we are hosting all raw data in the Dropbox folder so that R/set_data_dir.R works as intended with multiple contributors.

Also moved some optional bounding code from clean_deployments.R to receiver_map.R in order to keep all mapping code in the same script; it is wrapped in if(FALSE) because it is not currently used.
@Myfanwy Myfanwy merged commit 6611fb5 into main May 2, 2023
Successfully merging this pull request may close these issues:

- SJR OFC location needs to be corrected