Running queries: Compress large columns before transfer #1165
This is not necessarily a complete implementation, but it is fully operational as far as I've tested locally.
I'm curious to see what you think about this type of optimization.
I use DBADash primarily over a VPN connection. The Running Queries grid is regularly the slowest to load, especially now that I have plan collection enabled. After some testing, I found that the majority of the wait time was network IO.
Compressing the two larger columns (batch_text and text), and pulling the query plan only when it's actually requested, significantly improved usability over a VPN.
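For illustration, here's a minimal sketch of what the server-side compression could look like. This is not the actual DBADash query — the table and column names here are assumptions — but SQL Server's built-in `COMPRESS()` (2016+) is a natural fit, since it emits standard GZip that the client can unpack with `DECOMPRESS()` in T-SQL or `GZipStream` in .NET:

```sql
/* Illustrative only -- table/column names are assumptions, not the real
   DBADash schema. COMPRESS() (SQL Server 2016+) GZips the value into a
   VARBINARY(MAX), which is typically much smaller on the wire for large
   query text. */
SELECT  rq.session_id,
        rq.start_time,
        COMPRESS(rq.batch_text) AS batch_text_compressed,
        COMPRESS(rq.[text])     AS text_compressed
        /* query_plan deliberately excluded here; it gets fetched by a
           separate call only when the user actually opens the plan */
FROM    dbo.RunningQueries AS rq;
```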
Some test case results from my local machine over a VPN connection:
Since my changes to `RunningQueries_Get` are not backward compatible, I could see making the column compression optional. That way, if any users rely on `RunningQueries_Get` returning the columns that were removed, those columns would stay put, and a parameter could control whether they get swapped out for compressed versions.
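As a rough sketch of that parameter idea (the procedure signature and names below are hypothetical, not the real `RunningQueries_Get` definition):

```sql
/* Hypothetical shape only -- the real RunningQueries_Get has a different
   signature. The point is that @CompressColumns defaults to the old
   behaviour, so existing callers keep working unchanged. */
CREATE OR ALTER PROCEDURE dbo.RunningQueries_Get
    @SnapshotID      BIGINT,      /* placeholder for the existing parameters */
    @CompressColumns BIT = 0      /* new: opt in to the compressed columns */
AS
BEGIN
    IF @CompressColumns = 1
        SELECT session_id,
               COMPRESS(batch_text) AS batch_text_compressed,
               COMPRESS([text])     AS text_compressed
        FROM   dbo.RunningQueries
        WHERE  SnapshotID = @SnapshotID;
    ELSE
        SELECT session_id,
               batch_text,   /* original columns, unchanged for old callers */
               [text]
        FROM   dbo.RunningQueries
        WHERE  SnapshotID = @SnapshotID;
END;
```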