Memory usages #301

Open
devfxplayer opened this issue Aug 6, 2019 · 10 comments

Comments

@devfxplayer

[memory profiler screenshot showing many RawMessage instances]

Hi,
Can you please let me know what to do in this case?

@darkl (Member) commented Aug 6, 2019

Whenever you perform a publication, a RawMessage object is created. It contains the JSON of the corresponding EVENT message, and this value is sent to all subscribers of the topic. The idea is that the JSON is computed once and shared by all of the topic's subscribers.

The fact that you have so many instances of this type indicates that you have a lot of publications going on in your router. I don't know if anything can be done to improve this.
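
A minimal sketch of the pattern described above, not WampSharp's actual implementation: the serialized buffer stands in for RawMessage, and the hypothetical Subscriber and Topic types stand in for a connection whose send queue is an ActionBlock (requires the System.Threading.Tasks.Dataflow package).

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

// Hypothetical subscriber: each connected client owns an ActionBlock that
// drains its outgoing messages, similar in spirit to AsyncWampConnection.mSendBlock.
class Subscriber
{
    public ActionBlock<byte[]> SendQueue { get; }

    public Subscriber(Func<byte[], Task> sendAsync)
    {
        // Unbounded by default: every posted message stays referenced
        // by the block until the send delegate has processed it.
        SendQueue = new ActionBlock<byte[]>(sendAsync);
    }
}

class Topic
{
    private readonly List<Subscriber> mSubscribers = new List<Subscriber>();

    public void Subscribe(Subscriber subscriber) => mSubscribers.Add(subscriber);

    public void Publish(string eventJson)
    {
        // Serialize once per publication (the role RawMessage plays),
        // then hand the same buffer to every subscriber's send queue.
        byte[] raw = Encoding.UTF8.GetBytes(eventJson);

        foreach (Subscriber subscriber in mSubscribers)
        {
            subscriber.SendQueue.Post(raw);
        }
    }
}
```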

@devfxplayer (Author) commented Aug 6, 2019

Shouldn't it be deleted right after all subscribers have received it?
Can you point me to where in the library it is stored?

@darkl (Member) commented Aug 6, 2019

It is stored in each client's queue (AsyncWampConnection.mSendBlock). The GC should handle cleaning up these objects when they are no longer referenced.
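
If you can reach the block (it is private, so this means a debugger, reflection, or a local fork), ActionBlock<T>.InputCount shows how many posted messages have not been processed yet; a count that keeps growing means the queue itself is what keeps the RawMessage instances alive. A sketch, with a hypothetical helper name:

```csharp
using System;
using System.Threading.Tasks.Dataflow;

static class SendQueueDiagnostics
{
    // Works on any ActionBlock<T>; in WampSharp the block in question
    // (AsyncWampConnection.mSendBlock) is private, so reaching it requires
    // a debugger, reflection, or a local fork.
    public static void Report<T>(ActionBlock<T> sendBlock, string clientName)
    {
        // InputCount is the number of messages posted but not yet processed.
        // A steadily growing value means the consumer is falling behind and
        // the queued payloads are still rooted, so the GC cannot collect them.
        Console.WriteLine($"{clientName}: {sendBlock.InputCount} messages pending");
    }
}
```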

@devfxplayer (Author)

But the GC does not clean them up; that's the whole issue. Somehow they stay alive.

@darkl (Member) commented Aug 8, 2019

Does your memory profiler show which object is holding them? Is it any object other than AsyncWampConnection.mSendBlock?

@devfxplayer (Author)

It is visible in the image: everything is held by the ActionBlock.

@darkl (Member) commented Aug 9, 2019

Then I don't believe it is a memory leak. I think that in this case you either have slow clients, or the GC doesn't think it's the right time to clean up the unused references. Note that the GC mechanism is far more complicated than one might imagine, and it is non-trivial to predict when it will clean up memory. I will add some references to read about this later.

Elad
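
One way to tell the two cases apart (a generic .NET diagnostic, not something WampSharp exposes, with a hypothetical helper name): force a full, blocking collection and compare the heap size before and after. If memory drops sharply, the GC had simply not run yet; if it does not, the RawMessage instances are still rooted, e.g. still sitting in a send queue.

```csharp
using System;

static class GcCheck
{
    public static void ForceFullCollection()
    {
        long before = GC.GetTotalMemory(forceFullCollection: false);

        // Full, blocking collection including finalizable objects.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        long after = GC.GetTotalMemory(forceFullCollection: true);
        Console.WriteLine($"Before: {before / (1024 * 1024)} MB, after: {after / (1024 * 1024)} MB");
    }
}
```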

@devfxplayer (Author)

It must be a GC issue, or somehow the queue gets flooded, because we have the same clients and none of them is slow. It happens over time, after about 4-6 hours, with the same connections of around 20-30 clients. We are sending 500-1000 messages per second to each client. This started after we increased the number and size of the messages we send.
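
A rough back-of-envelope estimate using the numbers above; the 1 KB average serialized EVENT size and the 10 seconds of consumer lag are assumptions for illustration only:

```csharp
using System;

class QueueGrowthEstimate
{
    static void Main()
    {
        const int clients = 30;                      // "around 20-30 clients"
        const int messagesPerSecondPerClient = 1000; // "500-1000 messages per second"
        const int assumedMessageBytes = 1024;        // assumed average serialized EVENT size
        const int secondsOfConsumerLag = 10;         // assumed backlog window

        long queuedMessages = (long)clients * messagesPerSecondPerClient * secondsOfConsumerLag;
        // Upper bound: assumes payloads are not shared between subscribers.
        long queuedBytes = queuedMessages * assumedMessageBytes;

        Console.WriteLine($"{queuedMessages:N0} queued messages, up to {queuedBytes / (1024 * 1024)} MB of payload");
        // Prints roughly 300,000 queued messages and ~292 MB of payload.
    }
}
```

Even a short stall in the consumers would therefore show up in a heap snapshot as a large number of RawMessage instances held by the ActionBlock queues, without being a leak.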

@darkl (Member) commented Aug 9, 2019

Regarding the references I promised you, see my comment here.

Elad

@darkl (Member) commented Aug 23, 2020

It's been a year. Did you find any memory leak?
