Radiometric variability between NAIP acquisitions may make it difficult for the model to generalize between years. Training and/or validating with NAIP imagery and corresponding LiDAR data from multiple years should a) let us know how well we can predict to other years, and b) hopefully allow the model to generalize better.
Additionally, we should look at other LiDAR metrics, e.g. RH95 or understory cover, to see how well we can predict other attributes.
This should all be doable with the current sampling and modeling workflows just by modifying the notebooks, but there may be some convenience features we can add to simplify that process, if we're potentially going to be extracting a dozen attributes over a dozen years.
With the dataset update in #11, we'll have one HDF5 dataset per LiDAR/NAIP acquisition that includes all relevant LiDAR attributes. To train on multiple acquisitions at once, we may want to use interleave, although I'm not 100% sure what that gives us over concatenate. In either case, we'll need to ensure that any merged datasets are shuffled.
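One way to think about the interleave-vs-concatenate question: concatenate exhausts one acquisition before starting the next, so early training batches would see only a single year, while interleave mixes acquisitions from the first batch onward (and, in tf.data, can parallelize reads across files via `num_parallel_calls`). A minimal pure-Python sketch of the ordering difference, using toy stand-ins for the per-acquisition HDF5 datasets (the names and years here are hypothetical, not from the actual data):

```python
# Hypothetical sketch of concatenate vs. round-robin interleave ordering.
# Toy stand-ins for two per-acquisition datasets (e.g. one HDF5 file per year);
# the real pipeline would use tf.data.Dataset objects instead of lists.
from itertools import chain

year_2018 = ["2018-a", "2018-b", "2018-c"]
year_2020 = ["2020-a", "2020-b", "2020-c"]

# concatenate: all of one acquisition, then all of the next.
concatenated = list(chain(year_2018, year_2020))

def interleave(*datasets):
    """Round-robin interleave, analogous to cycle_length=len(datasets)."""
    iterators = [iter(d) for d in datasets]
    while iterators:
        for it in list(iterators):
            try:
                yield next(it)
            except StopIteration:
                iterators.remove(it)  # drop exhausted acquisitions

# interleave: alternates between acquisitions from the start.
interleaved = list(interleave(year_2018, year_2020))

print(concatenated)  # ['2018-a', '2018-b', '2018-c', '2020-a', '2020-b', '2020-c']
print(interleaved)   # ['2018-a', '2020-a', '2018-b', '2020-b', '2018-c', '2020-c']
```

Either way, a shuffle with a buffer large enough to span acquisitions is still needed, since neither ordering is random on its own.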