Coerce performance #15
Comments
I made some naive benchmarks to compare performance with Bastion over here: https://github.com/pranaypratyush/actor_bench_test and currently I am getting the following results:
Please note that Bastion is doing 1 iteration whereas Coerce is doing 1000. Bastion uses far more memory while doing much less work and doesn't spread the load across all cores, whereas Coerce looks memory-efficient in comparison (I'm not sure by how much, or whether that comparison is even meaningful) and spreads the load fairly evenly. Note also that this benchmark is probably flawed.
Hi @pranaypratyush,

The benchmarks included in this repository are in no way indicative of real-world performance; they were added only as a quick and dirty way to detect performance regressions in the Coerce library itself. The framework (and the actor model as a whole) shines when you have many actors working concurrently, rather than just one sending and receiving sequentially.

I'll look at adding some better performance benchmarks soon that will give a clearer picture of how Coerce performs in the real world. Thanks a lot!
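To illustrate the "many actors concurrently vs. one sequentially" point, here is a minimal sketch. It uses plain Tokio channels to stand in for actors rather than Coerce's own API, so the names and shapes are assumptions for illustration only:

```rust
// Minimal sketch (plain Tokio, not Coerce's API): one actor handling requests
// one-by-one is bound by per-message round-trip latency, while many independent
// actors give the runtime enough parallel work to keep every core busy.
// Requires the `tokio` and `futures` crates.
use tokio::sync::{mpsc, oneshot};

// A toy "actor": a task that owns some state and answers requests from its mailbox.
fn spawn_counter() -> mpsc::Sender<(u64, oneshot::Sender<u64>)> {
    let (tx, mut rx) = mpsc::channel::<(u64, oneshot::Sender<u64>)>(64);
    tokio::spawn(async move {
        let mut total = 0u64;
        while let Some((n, reply)) = rx.recv().await {
            total += n;
            let _ = reply.send(total);
        }
    });
    tx
}

#[tokio::main]
async fn main() {
    // 1000 independent "actors" (think: one per orderbook), each with its own mailbox.
    let actors: Vec<_> = (0..1000).map(|_| spawn_counter()).collect();

    // Dispatch one request to every actor and await all the replies concurrently,
    // so the runtime can spread the work across cores instead of waiting for each
    // reply before sending the next message.
    let replies = futures::future::join_all(actors.iter().map(|tx| async move {
        let (reply_tx, reply_rx) = oneshot::channel();
        tx.send((1, reply_tx)).await.unwrap();
        reply_rx.await.unwrap()
    }))
    .await;

    println!("received {} replies", replies.len());
}
```

The same shape applies with Coerce actors: the win comes from having many mailboxes processed in parallel, not from the speed of a single send/receive pair.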
Sorry, didn't mean to close the issue!
Yes, I am aware that the simple benchmarks you added are too naive to represent anything useful on their own and are merely there to help you catch obvious performance regressions. My benchmarks are naive as well, but I will keep working on them; it helps me learn.
And this is what I get for this benchmark:
Maybe we could add some thread-local optimisations in Coerce? Or perhaps some better examples of how to systematically use it for hot paths in a real project?
The benchmark results above are from xtra, by the way. It also happens to be ridiculously memory-efficient.
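On the hot-path question above: a pattern that helps regardless of framework is to keep the hot path one-way (fire-and-forget) and only do an awaited request/response when a reply is genuinely needed. The sketch below uses plain Tokio channels rather than Coerce's API, and the message and type names are made up for illustration:

```rust
// Minimal sketch (plain Tokio channels, not Coerce's API): on a hot path such as
// feeding orders into an orderbook, send one-way messages and only await a reply
// when you actually need one, instead of paying a full round trip per message.
use tokio::sync::{mpsc, oneshot};

enum OrderbookMsg {
    // Hot path: no reply channel, the sender never waits for processing.
    PlaceOrder { price: u64, qty: u64 },
    // Cold path: request/response when a reply is genuinely needed.
    Snapshot { reply: oneshot::Sender<usize> },
}

fn spawn_orderbook() -> mpsc::Sender<OrderbookMsg> {
    let (tx, mut rx) = mpsc::channel(1024);
    tokio::spawn(async move {
        let mut orders: Vec<(u64, u64)> = Vec::new();
        while let Some(msg) = rx.recv().await {
            match msg {
                OrderbookMsg::PlaceOrder { price, qty } => orders.push((price, qty)),
                OrderbookMsg::Snapshot { reply } => {
                    let _ = reply.send(orders.len());
                }
            }
        }
    });
    tx
}

#[tokio::main]
async fn main() {
    let book = spawn_orderbook();

    // Hot path: the await here only waits for mailbox capacity (backpressure),
    // not for the order to be processed or for any reply.
    for i in 0..10_000u64 {
        book.send(OrderbookMsg::PlaceOrder { price: 100 + i % 5, qty: 1 })
            .await
            .unwrap();
    }

    // The occasional query pays the full round trip.
    let (reply_tx, reply_rx) = oneshot::channel();
    book.send(OrderbookMsg::Snapshot { reply: reply_tx }).await.unwrap();
    println!("orders in book: {}", reply_rx.await.unwrap());
}
```

If Coerce exposes a one-way, notify-style send alongside the awaited one (worth checking in the docs), that is the variant to reach for on a hot path; the awaited form pays a full round trip per message.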
I ran the benchmarks provided in the coerce crate on my 5950X and got this:

Quite surprised that it takes so much time. I am trying to build a social network where each post can be an orderbook, so there will be a lot of orderbooks. I liked Coerce's API compared to something like Bastion's, but this benchmark surprised me. Are these latencies going to be representative of the final web server, or is this just happening because we are awaiting one message after another on a multi-threaded runtime, and Tokio is wasting too much time in the scheduler doing nothing useful at all?