Thanks a lot for making this contribution; I will leave this PR open for further discussion.
Before committing, I run pytest on my local machine every time. This non-reproducible problem was only caused by not updating to the latest dataset on Google Drive (and also by the local NumPy versions).
To my understanding, the Action creates a new virtual environment each time it runs the test code. But on a brand-new environment, pytest cannot download the missing dataset automatically and raises a FileNotFoundError.
So this online test workflow may not be suitable for the caching rule: it would require the dataset initialization step every time in the Action virtual machine, and would need to download the same data again and again.
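For illustration, a minimal sketch of how the data-dependent tests could skip cleanly instead of erroring with FileNotFoundError on a fresh environment; the `conftest.py` location, the `tests/data` path, and the fixture name are assumptions, not the project's actual layout:

```python
# conftest.py -- hypothetical sketch, not the project's real test configuration.
# Skip the data-dependent tests when the dataset has not been downloaded yet,
# instead of letting them fail with FileNotFoundError.
from pathlib import Path

import pytest

TEST_DATA_DIR = Path(__file__).parent / "data"  # assumed local dataset location


@pytest.fixture(scope="session", autouse=True)
def require_test_dataset():
    if not TEST_DATA_DIR.exists():
        pytest.skip(
            f"Test dataset not found at {TEST_DATA_DIR}; "
            "run the dataset initialization step first."
        )
```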
Personally, I prefer contributors to run pytest in their local environment, just like what scikit-learn and numpy require for pull requests.
Then I need to figure out the following items:
- A solution to check the local dataset version and auto-update it if it is out of date (see the sketch after this list).
- Edit the 5 tests that depend on the 3.3 GB Lotus() dataset to use only the 300 MB TestData() dataset, to decrease the dataset size required for testing.
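For item 1, a rough sketch of the version check under assumed names; the cache directory, the version-file URL, and the JSON layout are placeholders for illustration, not the package's real API:

```python
# Hypothetical version check -- paths, URL, and file layout are placeholders.
import json
import urllib.request
from pathlib import Path

DATA_DIR = Path.home() / ".easyidp" / "data"               # assumed local cache location
VERSION_URL = "https://example.com/dataset_version.json"   # placeholder, not a real URL


def local_version():
    """Read the version recorded next to the locally cached dataset, if any."""
    version_file = DATA_DIR / "version.json"
    if not version_file.exists():
        return None
    return json.loads(version_file.read_text()).get("version")


def remote_version():
    """Fetch the version string published alongside the hosted dataset."""
    with urllib.request.urlopen(VERSION_URL) as resp:
        return json.load(resp).get("version")


def dataset_is_outdated():
    """True when the dataset is missing locally or older than the published version."""
    local = local_version()
    return local is None or local != remote_version()
```

If this check reports the dataset as outdated, the loader could reuse the same download routine it already runs for a missing dataset.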
Originally posted by @HowcanoeWang in #87 (comment)