System information
TensorFlow installed from (source or binary, official build?): Irrelevant
TensorFlow version: Irrelevant
Keras version: Irrelevant
Python version: Irrelevant
CUDA/cuDNN version (only necessary if you are using Tensorflow-gpu): Irrelevant
GPU model and memory (only necessary if you are using Tensorflow-gpu): Irrelevant
Exact command/script to reproduce (optional): Irrelevant
Describe the problem
The current .h5 dataset loading mechanism is problematic because astroNN loads the whole dataset into memory regardless of its size. This will eventually become a serious problem when the dataset is large and memory is limited (it is already a minor problem when loading the APOGEE training data, ~12GB, on my 16GB RAM laptop and desktop).
Source code / logs
Irrelevant
Suggestion
The Neural Network/Data generator should talk to H5Loader directly, instead of H5Loader loading the whole dataset into memory and then handing it to the Neural Network/Data generator.
Currently, this is viewed as a low-priority, performance-related issue. It probably won't be fixed in the near future.