metric.values - speed up #2
Seemed mostly OK until the pipe, %>%, had to be defined; that is when it slowed down, but otherwise it threw an error if dplyr wasn't loaded. The pipe is defined at line 270 in metric.values.R. A sketch of an alternative is below.
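A minimal sketch (an assumption, not the package's current fix) of importing the pipe at the package level instead of defining it inside metric.values(); this is the pattern usethis::use_pipe() generates, and it keeps %>% available without the user attaching dplyr:

```r
# utils-pipe.R -- re-export the pipe so %>% works package-wide
# without library(dplyr) or an in-function definition.

#' Pipe operator
#' @name %>%
#' @rdname pipe
#' @keywords internal
#' @export
#' @importFrom magrittr %>%
#' @usage lhs \%>\% rhs
NULL
```

This assumes magrittr (or dplyr, which re-exports %>%) is listed under Imports in DESCRIPTION.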
Should clean up the extra data frames that were created for the dominant metrics. This may free up some memory and keep things from getting sluggish; see the sketch below.
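A minimal sketch of that cleanup, assuming the intermediate objects share a common prefix; the names (df.dom...) are hypothetical stand-ins for whatever metric.values() actually creates:

```r
# Drop the per-metric scratch data frames from the local environment,
# then nudge R to return the freed memory.
rm(list = ls(pattern = "^df\\.dom"))
invisible(gc())
```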
Only a minor improvement from removing the extra data frames. Defining the pipe seems to be the issue.
Metric changes for nonclumpy taxa, removed the extra dominant data frames for speed, and updated the packages in the ReadMe. Issue #4 and Issue #5.
Nothing else to do at this point other than disabling the metrics that aren't needed. I could add a trigger in the calling routine to set the maximum number of Dominant metrics to generate, then modify the code to loop until all of those metrics are added (see the sketch below). This might be a good idea even if it isn't put into the master function call.
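A hedged sketch of that trigger: a hypothetical nDom argument capping how many Dominant-N metrics get built, with a loop joining each one onto the results. The function, argument, and column names (add_dominant_metrics, nDom, SampleID, N_Taxa, the pi_dom prefix) are assumptions for illustration:

```r
library(dplyr)

add_dominant_metrics <- function(df.metrics, df.taxa, nDom = 3) {
  for (i in seq_len(nDom)) {
    # Percent of individuals in the i most abundant taxa per sample.
    dom.i <- df.taxa %>%
      group_by(SampleID) %>%
      summarise(!!sprintf("pi_dom%02d", i) :=
                  100 * sum(head(sort(N_Taxa, decreasing = TRUE), i)) /
                  sum(N_Taxa),
                .groups = "drop")
    df.metrics <- left_join(df.metrics, dom.i, by = "SampleID")
  }
  df.metrics
}
```

With nDom = 3 only three dominant metrics are computed instead of all eleven, which is where the savings would come from.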
Ran a 19 MB file, 125k records (2000+ samples), and it took somewhere between 45 minutes and 1 hour. Lots of metrics have been added, so the data may have to be broken apart and then merged back together; a sketch of that follows.
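A minimal sketch of the break-apart-and-merge idea, processing the samples in chunks and row-binding the results; the chunk size and the metric.values() arguments shown are assumptions:

```r
library(dplyr)

chunk.size <- 250  # samples per chunk; tune to available memory
samps  <- unique(df.data$SampleID)
chunks <- split(samps, ceiling(seq_along(samps) / chunk.size))

# Run metric.values() on each chunk of samples, then merge back together.
df.results <- bind_rows(lapply(chunks, function(s) {
  metric.values(df.data[df.data$SampleID %in% s, ], "bugs")
}))
```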
Things to try:
- Conditional summarise with dplyr (see the sketch after this list): https://community.rstudio.com/t/dplyr-summarise-with-condition/100885/3
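A sketch of the conditional-summarise pattern from that thread: compute conditional sums inside a single summarise() call instead of building pre-filtered copies of the data. The column and value names (N_Taxa, Order, the EPT orders) are assumptions here:

```r
library(dplyr)

df.data %>%
  group_by(SampleID) %>%
  summarise(ni_total = sum(N_Taxa),
            # Sum only the rows matching the condition -- no extra
            # filtered data frame needed.
            ni_EPT = sum(N_Taxa[Order %in% c("Ephemeroptera",
                                             "Plecoptera",
                                             "Trichoptera")]),
            .groups = "drop")
```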
Adding 11 dominant metrics slowed down metric.values() considerably. Try to speed it up.