Call flushCallBuffer at regular intervals #192

Open · wants to merge 1 commit into master

Conversation

massudaw

This is a workaround for batches needing explicit calls to flushCallBuffer.

@meteficha
Contributor

This seems like a bad idea to me. I'd rather have two flushes that occur at the right time than a single nice flush that appears with a delay because it got caught by the timer.

Also, although unlikely, this timer could split an otherwise good flush into two.

@massudaw
Author

This is really a trivial fix, but the current flush mechanism is quite ad hoc and causes problems for new users. I also find explicit flushing quite unnecessary.

What do you describe as a good flush? I've been using this for more than a year and have never seen any breakage caused by splitting.

I think a maximum time bound from the last flush, and probably a maximum bound on request size, is desirable, so that we can keep both user-perceived delay and request size bounded.

@meteficha
Contributor

I was saying that it's better to have an explicit flush after every onEvent rather than relying on the timer. So a good flush would be one that is as big as possible.

The timer surely works, but it adds a delay to the UI, which is undesirable.
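
For reference, the explicit-flush pattern being argued for here looks roughly like the sketch below. It assumes the UI-level flushCallBuffer exported by Graphics.UI.Threepenny.Core and uses made-up element names; it is an illustration, not code from this PR.

import qualified Graphics.UI.Threepenny as UI
import Graphics.UI.Threepenny.Core

-- Sketch: flush explicitly at the end of each event handler, so the batched
-- calls reach the browser without waiting for any timer.
setup :: Window -> UI ()
setup window = do
    button <- UI.button # set UI.text "Click me"
    _      <- getBody window #+ [element button]
    on UI.click button $ \_ -> do
        _ <- element button # set UI.text "Clicked"
        flushCallBuffer   -- assumed: flushCallBuffer :: UI ()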

@massudaw
Author

massudaw commented Aug 25, 2017

But this doesn't disable the explicit flush. You can still flush to minimize delay.

When you have more than one onEvent and all the commands inside them could be batched, flushing on every onEvent creates unnecessary flushes, and you don't get a "good flush".

@meteficha
Contributor

I guess what I'm trying to say is that right now it won't work at all if you don't flush at the correct places. That makes it obvious (if you know about flushing, of course) that something is wrong.

With this PR, though, it's always going to work. However, if you miss a flush at the right place, your GUI is going to have more latency. That's subtle and hard to track.

So I don't like this PR because it turns an obvious problem into something that works but lags.

@HeinrichApfelmus
Owner

It could become an additional CallBufferMode, though, for people who like it. I think it's at least a valid way to deal with buffering. We may want to look into old documentation for screen terminals; they must have dealt with similar issues back in the day.

@massudaw
Author

We can improve latency without compromising batch size by using a timeout from the last written command, instead of polling periodically.

Or maybe we could have some explicit way to know when the event graph is finished or blocked, and flush then.

I don't really like this fixed interval either; it's just the most trivial fix for the problem that I could think of.
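
As a concrete sketch of the "timeout since the last written command" idea (this is not the code in this PR; wLastWrite, the flush action, and the overall shape are assumptions, and flush is assumed to tolerate an empty buffer):

import Control.Concurrent     (threadDelay)
import Control.Concurrent.STM
import Data.Time              (UTCTime, getCurrentTime, diffUTCTime)

-- Idle-timeout flusher: flush only after the buffer has been quiet for the
-- whole interval, then block until the next write moves the timestamp.
flushWhenIdle :: Int -> TVar UTCTime -> IO () -> IO ()
flushWhenIdle maxDelayMs wLastWrite flush = loop =<< readTVarIO wLastWrite
  where
    loop lastSeen = do
        now <- getCurrentTime
        let remainingMs = maxDelayMs - round (diffUTCTime now lastSeen * 1000)
        if remainingMs > 0
            then do
                -- A write happened recently: sleep only the remaining time.
                threadDelay (remainingMs * 1000)
                loop =<< readTVarIO wLastWrite
            else do
                flush
                -- Wait until the writer records a newer timestamp.
                loop =<< atomically (do
                    t <- readTVar wLastWrite
                    check (t /= lastSeen)
                    return t)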

@HeinrichApfelmus changed the title from "Force flush of buffer" to "Call flushCallBuffer at regular intervals" on Aug 27, 2017
HeinrichApfelmus added a commit that referenced this pull request Aug 27, 2017
This mode is like `BufferRun`, but will flush the
call buffer every 300ms if nonempty.
@HeinrichApfelmus
Owner

I have implemented a new call buffer mode, FlushPeriodically, in commit cdc8f40, which essentially implements this pull request. @massudaw, does this work for you?
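
The behaviour described in the commit message corresponds roughly to a worker thread like the following sketch (the name periodicFlusher is illustrative; Window and flushCallBuffer are the package-internal ones from Foreign.JavaScript, and flushCallBuffer is assumed to be cheap when the buffer is empty):

import Control.Concurrent (threadDelay)
import Control.Monad      (forever)

-- Sketch of a fixed-interval flusher in the spirit of FlushPeriodically.
periodicFlusher :: Window -> IO ()
periodicFlusher w = forever $ do
    threadDelay (300 * 1000)   -- 300 ms; threadDelay takes microseconds
    flushCallBuffer w          -- push whatever has accumulated in the buffer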

massudaw pushed a commit to massudaw/threepenny-gui that referenced this pull request Aug 28, 2017
This mode is like `BufferRun`, but will flush the
call buffer every 300ms if nonempty.
@massudaw
Author

Yep, it seems to solve my problem.
I've implemented an improved version that reduces latency and avoids splitting a buffer while it's being written, if you think it's worth the added complexity.

@@ -5,6 +5,8 @@ import Control.Concurrent
import Control.Concurrent.STM as STM
import Control.Monad

import GHC.Conc

Owner

Importing a GHC-specific module is not a good idea. I would prefer to use functions from standard modules like Control.Concurrent only.

@@ -23,6 +23,8 @@ import qualified System.Mem
import Foreign.RemotePtr as Foreign
import Foreign.JavaScript.CallBuffer
import Foreign.JavaScript.Types
import Data.Time
import GHC.Conc

Owner

ditto

@@ -119,6 +119,7 @@ Library
,file-embed == 0.0.10
,hashable >= 1.1.0 && < 1.3
,safe == 0.3.*
,time >= 1.6.1 && < 1.9

Owner

I would like to have time >= 1.4.* here, to ensure compatibility with older GHC versions.
We have CI in place to test whether they compile.

@@ -251,10 +252,11 @@ data CallBufferMode
-- to simplify usage. Users may choose 'BufferRun' instead if they want more control
-- over flushing the buffer.
| FlushPeriodically
{ max_buffer_timeout :: Int

Owner

The standard style here is camelCase; I would prefer this to be maxBufferTimeout.

flushCallBufferPeriodically w@Window{..} = forever $ do
b <- atomically $ do
tl <- takeTMVar wCallBufferStats
FlushPeriodically max_flush_delay <- readTVar wCallBufferMode

Owner

I would prefer camelCase, maxFlushDelay.

flushCallBuffer w
threadDelay (delta*1000)
Left delta ->
threadDelay (delta*1000)

Owner

Move threadDelay out of the case expression, eliminating one copy.
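
Following that suggestion, and only as a sketch since the diff shows just a fragment of the function, the case expression could return the delay and threadDelay could be called once afterwards. Here b is assumed to be an Either carrying the delay in milliseconds in both cases:

import Control.Concurrent (threadDelay)

-- Hypothetical helper showing the suggested shape, with a single threadDelay.
flushOrWait :: Window -> Either Int Int -> IO ()
flushOrWait w b = do
    delta <- case b of
        Right delta -> do
            flushCallBuffer w   -- time to flush, then wait out the next interval
            return delta
        Left delta ->
            return delta        -- nothing to flush yet, just wait
    threadDelay (delta * 1000)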

@@ -265,6 +267,7 @@ data Window = Window

, wCallBuffer :: TVar (String -> String)
, wCallBufferMode :: TVar CallBufferMode
, wCallBufferStats :: TMVar UTCTime

Owner

I think the code can be clarified by using an ordinary TVar instead.
The single use of takeTMVar can be rewritten by using the retry function.
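
A sketch of that suggestion: keep the timestamp in an ordinary TVar (Maybe UTCTime) and block with retry until a value is present, which mirrors what takeTMVar did on the TMVar (the helper name is made up):

import Control.Concurrent.STM
import Data.Time (UTCTime)

-- Blocks until a write has been recorded, then clears the slot, mirroring
-- takeTMVar but on a TVar; the field would become TVar (Maybe UTCTime).
takeLastWrite :: TVar (Maybe UTCTime) -> STM UTCTime
takeLastWrite stats = do
    m <- readTVar stats
    case m of
        Nothing -> retry                -- nothing recorded yet: block
        Just t  -> do
            writeTVar stats Nothing     -- empty the slot, like takeTMVar
            return t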

@HeinrichApfelmus
Owner

I've implemented an improved version that reduces latency

Nice, thanks! There are a couple of changes that I would like to see, if you are up to it. Otherwise, I'm happy to leave this issue open for now and return to it at a later point.
