Maybe reimplement the `demand` and `force` for `MonadValue`, `MonadThunk` and such #850
Comments
This refactor seems fully compatible with doing it after the work the guys are doing in #804.
Nice find! I'm looking forward to benchmarks 🚀
Yeah. That was the style during the writing of HNix: actions were threaded through the code, to be resolved right at the very end of the functional stack.
Currently, working gradually from the tail up.
…tion The argument order change involves a functional argument, so it is not possible to elegantly switch to the new code; it requires going through a transition. So I am doing this work while doing the needed refactoring at the same time. This style of code may currently seem noisier, but it is really more straightforward: it mentions only operations & transformations, and types can be looked up in HLS. With the future work in #850 this style of code would radically start to simplify itself, so please bear with me.
This may look obnoxious currently, but it is part of the process of moving `tryPath` so that it occurs only once. This form would allow easily replacing `demand` here during #850, and the structure would then fold / alpha-convert / simplify quite drastically.
Currently this simply duplicates, but it would allow me to do `demand -> demandF` first and get working code, so that switching to the new `demand` afterwards would be easier; this safe path also allows using the old version, `demandF`, in a couple of places if needed, until everything is figured out. Towards #850.
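As a toy illustration of that transition path (the `V`/`M` types and the signatures are stand-ins; only the `demand`/`demandF` names come from the comment):

```haskell
-- A minimal sketch of the `demand -> demandF` transition, with hypothetical
-- stand-in types; not HNix's real definitions.
module TransitionSketch where

data V = V    -- stand-in for a value type
type M = IO   -- stand-in for the evaluation monad

-- The old, value-first shape stays available under the `demandF` name.
demandF :: V -> (V -> M r) -> M r
demandF v f = f v

-- The new, continuation-first `demand`; call sites can now be migrated one
-- by one, falling back to `demandF` where something is not figured out yet.
demand :: (V -> M r) -> V -> M r
demand = flip demandF
```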
In #873, in the profiler, the presence of a computation chain of … Now, basically what is happening is just … After a couple of moves to the … In the profiler it is now seen that the main computational complexity that is left is the frame messaging system (probably because it is declared with RankNTypes `forall a .. z`, GHC simply can not permit itself to do anything with that code, and because of the data types processing) and the scopes (which the Obsidian guys have an idea what to do with).
- ✔️ Gradual switches where possible.
- ✔️ All uses of it, including all `Builtins`, are processed.
- ✔️ A couple of several-level `do` blocks became 1.
- ✔️ Several of the monadic binds became functors.
- ✔️ ChangeLog (would also be rewritten a couple of times by the further change updates).

This almost fully closes #850. What is further left there is basically to move the Kleisli out of `inform` (`informF` is for that), then do the includes of the function uses inside the `do` blocks and fold the lambdas, binds and `do` blocks further. With the current lispy sectioning, it is easy and semi-automatic.
Just sitting & cleaning up after the `demand` update (#850).
Completed. Note for the future: there are now …
Progress:

Proper ordering of the arguments in the implementations & all uses of:
- `class MonadThunk t m a | t -> m, t -> a`: `force`, `forceEff`, `further`, `queryM`
- `class MonadValue v m`: `demand`

With #864, factor the Kleisli arrows out of the implementation, and do a new refactor over the code:
- `force`, `forceEff`, `further`, `queryM`
- `class MonadValue v m`: `demand`
The thunk family of functions are the most computationally costly in the system. `demand` does the `force` (also known as `forceThunk`), which is one of the most computation-intensive parts of the project.

Currently, they look like this:
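(A sketch reconstructed from the class heads and method names quoted in the checklist above; constraints, other methods and exact details are elided and should be read as assumptions.)

```haskell
{-# LANGUAGE FunctionalDependencies #-}

class MonadThunk t m a | t -> m, t -> a where
  thunk    :: m a -> m t
  queryM   :: t -> m r -> (a -> m r) -> m r
  force    :: t -> (a -> m r) -> m r   -- thunk first, continuation second
  forceEff :: t -> (a -> m r) -> m r
  further  :: t -> (m a -> m a) -> m t

class MonadValue v m where
  demand :: v -> (v -> m r) -> m r     -- value first, continuation second
  inform :: v -> (m v -> m v) -> m v
```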
Notice how the thunk gets passed first into the function. But the thunk (`v`) is what these functions were created to change; the thunk never gets reused in them. And, as I have belabored everywhere already, the source code does a lot of flipping of the arguments for them, and their implementation flips the arguments internally in order to recurse on itself, which suggests that the implementation itself should be flipped.
The resulting code:
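(Again a sketch of the proposed shape rather than final code: the continuation moves to the front and the thunk/value being consumed moves to the end; the placement of `queryM`'s default result is an assumption.)

```haskell
{-# LANGUAGE FunctionalDependencies #-}

class MonadThunk t m a | t -> m, t -> a where
  thunk    :: m a -> m t
  queryM   :: m r -> (a -> m r) -> t -> m r
  force    :: (a -> m r) -> t -> m r   -- continuation first, thunk last
  forceEff :: (a -> m r) -> t -> m r
  further  :: (m a -> m a) -> t -> m t

class MonadValue v m where
  demand :: (v -> m r) -> v -> m r     -- continuation first, value last
  inform :: (m v -> m v) -> v -> m v
```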
That looks both a lot more intuitive and more efficient. `demand f` passes `demand f`, `inform f` passes `inform f` - stack reuse - and, more importantly, the most costly one of them, `force df`, now does a tail call of itself. We get tail recursion on `force` and remove all the arg flipping at once across the codebase, which makes the code more understandable.

Is this a sound thing to do, or am I tripping?
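A toy illustration of the call-site effect (stand-in `T`/`M` types, not HNix's API): with the continuation first, the thunk argument eta-reduces away and no `flip` is needed.

```haskell
newtype T = T Int   -- stand-in thunk
type M = IO         -- stand-in monad

forceOld :: T -> (Int -> M r) -> M r   -- thunk-first shape
forceOld (T n) k = k n

forceNew :: (Int -> M r) -> T -> M r   -- continuation-first shape
forceNew k (T n) = k n

evalInt :: Int -> M ()
evalInt = print

-- Before: the thunk has to be named just to reach the continuation slot.
evalArgOld :: T -> M ()
evalArgOld t = forceOld t evalInt

-- After: the thunk argument eta-reduces away; no flipping at the call site.
evalArgNew :: T -> M ()
evalArgNew = forceNew evalInt
```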
So, `Thunk.Basic` also flips `force`, which is an alias of `forceThunk`, and now it becomes:
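(A heavily simplified sketch of that flipped shape; the real `Thunk.Basic` also tracks thunk ids and an "active" flag and handles exceptions, all of which is elided here.)

```haskell
import Data.IORef (IORef, readIORef, writeIORef)

data Deferred m v = Computed v | Deferred (m v)
newtype NThunkF m v = Thunk (IORef (Deferred m v))

forceThunk :: (v -> IO r) -> NThunkF IO v -> IO r
forceThunk k (Thunk ref) = do
  eres <- readIORef ref
  case eres of
    Computed v      -> k v            -- k passed through untouched …
    Deferred action -> do
      v <- action                     -- run the deferred computation
      writeIORef ref (Computed v)     -- memoize the result
      k v                             -- … and every branch ends in `k v`
```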
Notice in this `forceThunk` the very same thing: the first arg passed is never modified, and `k` gets returned first; what `forceThunk` operates on is the thunk. This probably means that `k`, being a constant first argument, would get memory reuse. Moreover, we see that the first arguments passed do not get used; they just get passed through the whole (*sic) chain. This means there should be an elegant way to not pass them, but just apply them to the result of all this, which would save a lot of computations.
The fact that `forceThunk` ends everywhere in `k v` shows that this is just a passed-around function application that asks to become exterior, so after a refactor `forceThunk` would just return the `v`.

Recently `ByteString`, which is seen as a pretty optimized library, did exactly that in `0.11`: bodigrim rewrote 1-3 HOFs so that they do not pass args superfluously, and by that `ByteString` suddenly got a ~25% performance increase; probably that arg passing was the `ByteString` bottleneck.
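Following that observation, a sketch (continuing the simplified `NThunkF` above, still not the project's real code) of `forceThunk` returning the value directly, with the continuation applied outside:

```haskell
forceThunk' :: NThunkF IO v -> IO v
forceThunk' (Thunk ref) = do
  eres <- readIORef ref
  case eres of
    Computed v      -> pure v
    Deferred action -> do
      v <- action
      writeIORef ref (Computed v)
      pure v

-- The continuation-passing shape is then recovered by plain composition:
--   forceThunk k t  ≡  forceThunk' t >>= k
```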
In the `(Judgment s) (InferT s m)` instance, `infer` wraps in `Pure` and `demand` is just `f a`. -- established as vacuous

In the `(StdValue m) m` instance, `demand` just unwraps the `Pure` and otherwise is just `f a`. -- seems also vacuous; maybe it can be simplified even further, especially since any `force` use is always wrapped inside of `demand` (~100 cases), or, where it is used raw, the surrounding code simulates `demand` (all other cases). Since all `force` use gets wrapped with 1 logic gate, it seems logical to put that gate inside `force`, have `go` do the current recursion internally, and reduce `demand` to `force` everywhere. Anyway, "demand" is a synonym of "force", which is in itself a suggestion. Reduction of `demand` also means the reduction of the type class inference search in those ~100 cases.

Are these sound things, or am I tripping?
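A toy sketch of that "fold `demand` into `force`" idea: the single logic gate (is this still deferred, or already a value?) moves inside `force`, which then recurses on itself until a value appears. The `Val` type and both functions are illustrative stand-ins, not HNix's real definitions.

```haskell
data Val m = Pending (m (Val m))   -- the "Pure t" side: still a thunk
           | Ready Int             -- the "Free v" side: an evaluated value

-- demand as described above: unwrap the pending side, otherwise just `f a`.
demandV :: Monad m => Val m -> (Val m -> m r) -> m r
demandV (Pending t) f = t >>= \v -> demandV v f
demandV v           f = f v

-- With the gate inside force, `demand` is no longer needed at call sites:
forceV :: Monad m => Val m -> m (Val m)
forceV (Pending t) = t >>= forceV   -- the old `go` recursion lives here
forceV v           = pure v
```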