Call flushCallBuffer at regular intervals #192
base: master
Conversation
This seems like a bad idea to me. I'd rather have two flushes that occur at the right time than a single nice flush that appears with a delay because it got caught by the timer. Also, although unlikely, this timer could split an otherwise good flush into two.
This is really a trivial fix. But the current flush mechanism is also quite ad hoc and causes problems for new users. I also find explicit flushing quite unnecessary. What do you describe as a good flush? I've been using this for more than a year and never seen any breakage because of a split. I think a maximum time bound from the last flush, and probably a maximum bound on request size, is desirable. That way we keep both user-visible delay and request size bounded.
I was saying that it's better to have an explicit flush after every … The timer surely works, but it adds a delay to the UI, which is undesirable.
But this doesn't disable the explicit flush. You can still flush to minimize delay. When you have more than one onEvent and all commands inside those can be batched, you start creating unnecessary flushes, and you don't get a "good flush" if you use a flush on every …
I guess what I'm trying to say is that right now it won't work at all if you don't flush at the correct places. That makes it obvious (if you know about flushing, of course) that something is wrong. With this PR, though, it's always going to work. However, if you miss a flush at the right place, your GUI is going to have more latency. That's subtle and hard to track. So I don't like this PR, because it turns an obvious problem into something that works but lags.
It could become an additional …
We can improve latency without compromising batch size by using a timeout from the last written command, instead of polling periodically. Or maybe we could have some explicit way to know when the event graph is finished or blocked, and flush then. I don't really like this fixed interval either; it's just the most trivial fix for the problem that I could think of.
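The "timeout from the last written command" idea could be sketched roughly as follows. All names here are hypothetical (a generation counter bumped on each buffered write, a caller-supplied flush action); threepenny-gui's actual internals differ.

```haskell
import Control.Concurrent (threadDelay)
import Control.Concurrent.STM
import Control.Monad (forever)

-- Hypothetical debounce loop: flush only once no new command has been
-- buffered for `quietMs` milliseconds, instead of flushing on a fixed
-- polling interval. `lastWrite` is a generation counter that the
-- buffering code would bump on every write.
debounceFlush :: Int -> TVar Int -> IO () -> IO ()
debounceFlush quietMs lastWrite flush = forever $ do
    gen  <- readTVarIO lastWrite
    threadDelay (quietMs * 1000)
    gen' <- readTVarIO lastWrite
    if gen' == gen
        then flush          -- quiet window elapsed with no new writes
        else pure ()        -- activity seen; go around and wait again
```

Unlike the fixed 300ms poll, this never splits an actively growing batch: the flush fires only after the buffer has gone quiet.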
This mode is like `BufferRun`, but will flush the call buffer every 300ms if nonempty.
Yep, seems to solve my problem.
@@ -5,6 +5,8 @@ import Control.Concurrent
import Control.Concurrent.STM as STM
import Control.Monad

import GHC.Conc
Importing a GHC-specific module is not a good idea. I would prefer to use functions from standard modules like Control.Concurrent only.
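For illustration, the portable names the PR likely needs (`threadDelay`, `forkIO`) are already re-exported from Control.Concurrent, so the GHC-specific import can simply be dropped. This is a minimal sketch of the substitution, assuming nothing else was used from GHC.Conc:

```haskell
-- Portable: these come from base's Control.Concurrent,
-- no GHC-internal module needed.
import Control.Concurrent (forkIO, threadDelay)

main :: IO ()
main = do
    -- pause 1 ms on a worker thread, then briefly on the main thread
    _ <- forkIO $ threadDelay 1000
    threadDelay 2000
```

Control.Concurrent is the documented, stable surface; GHC.Conc is an internal module whose exports may change between compiler releases.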
@@ -23,6 +23,8 @@ import qualified System.Mem
import Foreign.RemotePtr as Foreign
import Foreign.JavaScript.CallBuffer
import Foreign.JavaScript.Types
import Data.Time
import GHC.Conc
ditto
@@ -119,6 +119,7 @@ Library
    ,file-embed == 0.0.10
    ,hashable >= 1.1.0 && < 1.3
    ,safe == 0.3.*
    ,time >= 1.6.1 && < 1.9
I would like to have time >= 1.4.* here, to ensure compatibility with older GHC versions. We have CI in place to test whether they compile.
@@ -251,10 +252,11 @@ data CallBufferMode
-- to simplify usage. Users may choose 'BufferRun' instead if they want more control
-- over flushing the buffer.
| FlushPeriodically
    { max_buffer_timeout :: Int
The standard style here is camelCase; I would prefer this to be maxBufferTimeout.
flushCallBufferPeriodically w@Window{..} = forever $ do
    b <- atomically $ do
        tl <- takeTMVar wCallBufferStats
        FlushPeriodically max_flush_delay <- readTVar wCallBufferMode
I would prefer camelCase, maxFlushDelay.
        flushCallBuffer w
        threadDelay (delta*1000)
    Left delta ->
        threadDelay (delta*1000)
Move threadDelay out of the case expression, eliminating one copy.
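The suggested refactor can be sketched like this: let the case expression return the delay, and call threadDelay exactly once afterwards. The stand-in definitions below are hypothetical, kept minimal so the snippet compiles on its own; only the shape of the case/threadDelay rearrangement is the point.

```haskell
import Control.Concurrent (threadDelay)

-- Hypothetical stand-in for the PR's flush action.
flushCallBuffer :: IO ()
flushCallBuffer = putStrLn "flushed"

-- The delay is computed inside the case, but threadDelay is called
-- exactly once after it, removing the duplicated call.
step :: Either Int Int -> IO ()
step result = do
    delta <- case result of
        Right d -> flushCallBuffer >> pure d
        Left  d -> pure d
    threadDelay (delta * 1000)

main :: IO ()
main = step (Right 1) >> step (Left 1)
```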
@@ -265,6 +267,7 @@ data Window = Window

, wCallBuffer :: TVar (String -> String)
, wCallBufferMode :: TVar CallBufferMode
, wCallBufferStats :: TMVar UTCTime
I think the code can be clarified by using an ordinary TVar instead. The single use of takeTMVar can be rewritten by using the retry function.
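A sketch of that suggestion, under the assumption that the stats slot becomes a `TVar (Maybe UTCTime)`: emptiness is modelled with `Nothing`, and `retry` blocks the transaction until a timestamp appears, reproducing `takeTMVar`'s behavior.

```haskell
{-# LANGUAGE LambdaCase #-}
import Control.Concurrent.STM

-- takeTMVar replacement on a TVar (Maybe a): block via retry while
-- the slot is empty, otherwise empty it and return the value.
takeStats :: TVar (Maybe a) -> STM a
takeStats var = readTVar var >>= \case
    Nothing -> retry
    Just t  -> writeTVar var Nothing >> pure t

main :: IO ()
main = do
    v <- newTVarIO (Just "t0")
    t <- atomically (takeStats v)
    putStrLn t
```

`retry` aborts the transaction and re-runs it once `var` changes, so the blocking semantics match the TMVar version without a second synchronization primitive.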
Nice, thanks! There are a couple of changes that I would like to see, if you are up to it. Otherwise, I'm happy to leave this issue open for now and return to it at a later point.
This is a workaround for batches needing explicit calls to flushCallBuffer.