Please, apply YAGNI to our own implementation #879
Thank you for challenging this design decision. I too was puzzled by the incredible complexity of having to pass the continuation around to all the query, force and demand functions. I guess there was a reason for doing that at some point, but I am not too sure anymore. Since you are challenging the design: do you see a way to tune the type of values to ensure that they are already demanded? If we are going to pass around simple values, we could as well disambiguate an `NValue f m` from an `m (NValue f m)` and ensure that the first one is at least in WHNF with no thunks. It is weird that the type does not reveal that, and it makes it a bit awkward to always demand values that are possibly already demanded.
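For illustration, a minimal sketch of what I mean, with hypothetical names (`Demanded`, `demand`), not the current HNix API: a wrapper type whose only constructor function actually runs the computation, so the type itself records that the value was already demanded.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Hypothetical `Demanded` wrapper: if a `Demanded v` is in WHNF, the strict
-- field guarantees the value inside is in WHNF too (no thunk at the root).
data Demanded v = Demanded !v

-- The only way to obtain a `Demanded v` is to actually run the computation,
-- forcing the result once, right here.
demand :: Monad m => m v -> m (Demanded v)
demand action = do
  !v <- action
  pure (Demanded v)

-- Consumers take `Demanded v` and never need to demand it again.
useDemanded :: Demanded v -> v
useDemanded (Demanded v) = v
```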
Well, I'm not challenging anything. We just do design clean-up after the initial creation phase. It can sound strange, but so far I have not really changed anything; it was just polishing the stuff, as can be seen from the ChangeLog entries. The changes introduced so far are just a cleaning-up of the design. I'm still learning how it all works together; changing how the central structure operates is a bigger step.

Currently I'm thinking about the problem in terms of the HNix core for the pure language being just a propagator. If there is a direct analogy: HNix is currently still largely slow, sequential and one-threaded. Remember the main initial property of Nix: it is pure, which means referentially transparent. A language being referentially transparent gives a propagator essentially for free. This means a Nix expression is a lattice of actions to perform in order to reach the peak of the lattice; reaching the peak means the expression was sound and computed to the end result. That is the case we strive for: in the real world, 99.9% of the time sound Nix code is code that people want to terminate. At the same time, expressions can ask to run non-terminating processes, which is just the halting problem, so even a basic sound Nix implementation needs rules for where to stop very long-winded computations and treat them as instances of the halting problem.

So HNix looks very much like a propagator on the lattices of particular Nix language expressions. Propagators have stages in solving a computational problem, and we naturally already have them: the first stage is parsing, the next is evaluation, the next is normalization, then the execution substage that reifies results into the store... The only stages that are not a pure propagator are the higher-level ones, where HNix does the environment-changing imperative work, which starts inside derivation creation. Those actions have a definite order of execution, but even there a derivation build has a lot of substages, some of which may be pure propagators and some of which are definitely imperative sequences. Even the creation of a new NixOS generation and the switch to it are largely of a propagator nature.

Since the HNix design is currently thunk-blocking and one-threaded, it is a great time to understand whether there really is one central thunk (that is why I talked about understanding GHC memory management and the broad picture). So at the current point in the project, moving the design from thunk-blocking one-threadedness in the direction of non-blocking parallelism - taking it on with courage and letting its results show the path to the further design steps towards naturality - seems logical.
Well, since, as pointed out, the HNix core seems to actually be a propagator on the Nix language, gradually moving things towards non-blocking parallelism would show where the propagator lattice of the current implementation does not commute. Making the lattice commute brings naturality (in the categorical sense), which would give HNix, Nix and Haskell a way to guide us towards a, well, natural (the word) design, and to give naturality (the term) guarantees on moving the "functor of Nix expressions" into "doing actions" (which, down to the language paradigms and our current implementation, connects back with the referential transparency and parametric polymorphism of code over Nix expressions). And since the end goal is in fact already achieved - Nix referential transparency is already ensured, and referential transparency guarantees naturality - this whole move just allows HNix to find the most elegant design for itself. As long as our project is not the one executing stuff, the move is in reality going to be quite safe, especially with a lot of Haskell and Nix guard rails. The one thing we need to do before that is to be as prepared as possible.

During the ongoing work I more and more see the Obsidian work as the right direction: simplify the design into non-blocking and simplify scoping. The current scoping indeed seems too heavy, and approaching the design development from the start with non-blocking parallelism seems natural. If they do not show up, in some time I plan to take their patch, study it, and implement it again to complete the work; somebody should complete that work.
But before all that, there are way simpler things. We can talk about high matters, but the project needs the basic stuff done, such as changing List to proper monoids. So far we do gradual evolution; since nature is strong in the project, natural gradual evolution is a strong navigator: the more gradual progress happens, the more the design reveals itself. The further the project goes with gradual evolution, the more we will know about what the actual next steps should be. I do not hold on to my current beliefs, because I know we are going to have a better idea when the time arrives.
If I interpreted something wrong, please be honest and correct me; if it is so, I would retract and reorganize my thought process.
The implementation shows that so far we have not needed a custom type for it. Right and Left already embody the whole semantic meaning, since Either is mostly used with Right for a successful computation and Left for a failure. So why bother everyone with remembering custom constructors and what they are for if they are not needed? Either is already known to everybody. The current implementation by itself shows that the custom `Result` type is YAGNI, since replacing it with Either changes nothing. So it relates to the topic of #879. If we needed ad-hoc abilities for "our custom class", `TypeSynonymInstances` is there for us. I did the main work of abstracting the implementation from the type and constructors here, so if we want to change the type in the future, we would just need to replace the `either` function with a `result` function and that is all. But just using `either` is more concise, and the type synonym keeps the signature information there. I have even considered removing `Result` altogether, since the more I talk about it the clearer it becomes that its use is just what Haskellers use `Either` for.
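For illustration, a minimal sketch of the idea (the error type and the function names here are assumptions, not the actual HNix definitions):

```haskell
-- Keep `Result` only as a documenting synonym; Left/Right and `either`
-- already carry the failure/success semantics.
type Result a = Either String a   -- the String error type is an assumption

parseNat :: String -> Result Int
parseNat s =
  case reads s of
    [(n, "")] | n >= 0 -> Right n                               -- success
    _                  -> Left ("not a natural number: " <> s)  -- failure

-- No custom `result` eliminator is needed; `either` already is that function.
report :: Result Int -> String
report = either ("parse error: " <>) (("got " <>) . show)
```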
First, let's say that implementing the project was a very difficult task, and all my ramblings are not critique and no detriment to the work done. I am just discussing cleaning things up after the creation stage. As my modest experience goes, refactoring in real-world practice really does take 50% of the coding time: frequently I make things work in 2.5 days, and then I clean them up for another 2.5 days until I am happy with the implementation.
Let's joke a little about our current state:
[picture]
It depicts the HNix engine implementation relative to what the CLI currently does.
And the engine has no drag - it almost turns the blades.
The more work on code polish is done, the more the YAGNIs in the engine reveal themselves. The faster the engine turns, the sooner - maybe - we will be able to build a whole-lot-of-a-big-deal plane on it that helps a lot of people.
The YAGNI needs to be removed from the main computational paths and the core of the engine, and in many cases, to keep the additional functionality that is already provided, we can easily preserve the current YAGNIs as optional, as in #864.
A good example of this is the Kleisli actions that were dragged around through functions whose actual job was something completely different.
The #864 and #850 results are gargantuan: the code inside and around those functions became elegant, easy to use and understand, and moreover more efficient, and it has allowed (and keeps allowing) more and more elegance.
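To illustrate the shape of that change, a schematic before/after with made-up names (not taken from the HNix code):

```haskell
-- Before: the caller must thread a failure action and a Kleisli continuation
-- through a function whose real job is only to look something up.
lookupAttrK :: Monad m => String -> [(String, v)] -> m r -> (v -> m r) -> m r
lookupAttrK key attrs onMissing onFound =
  maybe onMissing onFound (lookup key attrs)

-- After: just return what was found; callers compose with >>= / fmap as usual.
lookupAttr :: Applicative m => String -> [(String, v)] -> m (Maybe v)
lookupAttr key attrs = pure (lookup key attrs)
```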
Or take pretty printing: a function from an outside package module was used where a literal `"."` or `" "` would do, so with those stacks of functions neither the implementation nor the visual representation of the result was understandable. After the literals became literals, the code became much more straightforward to understand and work with.
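For illustration, a small sketch (assuming the prettyprinter package; not the actual HNix pretty-printer code) of how a literal can stand where a combinator stack used to be:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Text (Text)
import Prettyprinter (Doc, hcat, punctuate, pretty)

-- The separator is written as the literal "." it is, via Doc's IsString
-- instance, instead of a combinator imported from another module.
attrPath :: [Text] -> Doc ann
attrPath parts = hcat (punctuate "." (pretty <$> parts))
-- attrPath ["a", "b", "c"]  renders as  a.b.c
```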
For example, take `queryM` and how it was initially implemented. There is a lot of mystery around it. `queryM` is quite a mysterious function - it was for me during the last 2 years of learning the project. I looked at it a couple of times and was never sure what its purpose is. Its type signature, how to use it, why it is the way it is, and the details of its purpose were all a mystery.
As Uncle Bob says:
[picture]
I say the mystery "was", because then the refactoring happened. Noting in advance:
¿ Why pass 2 arguments that play no role in what the function does?
After all, the function's job is just to read the structure and return a value if it is computed.
¿ Why block a thread and mutate a monad twice just for reading a value?
Reading a structure does not change its monads; reading in a language with immutable data structures does not need locking - we can allow people to read a static version of the structure at any moment, as much as they want.
Refactoring `queryM` reveals its lean nature. After the thoughtful polish, this is really all there is to its current implementation (a sketch of this shape follows the two cases below):
(Yes) -> return it.
(No) -> if the function does not know the value, then, like a good human, it responds "I know nothing about it.".
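For illustration, a minimal sketch under simplifying assumptions (a toy thunk type, not the real HNix thunk machinery) of what such a lean query looks like:

```haskell
import Data.IORef (IORef, readIORef)

-- A toy thunk: either an already-computed value or a deferred action.
data Deferred m v
  = Deferred (m v)   -- not computed yet
  | Computed v       -- already computed

newtype Thunk m v = Thunk (IORef (Deferred m v))

-- Querying is a single read of the reference: it never forces, never blocks,
-- and honestly answers "I know nothing about it" for a deferred value.
query :: Thunk IO v -> IO (Maybe v)
query (Thunk ref) = do
  content <- readIORef ref
  pure $ case content of
    Computed v -> Just v   -- (Yes) -> return it
    Deferred _ -> Nothing  -- (No)  -> "I know nothing about it."
```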
The action becomes cheap - does it even need to be explained that GHC would be able to optimize it strongly? It is so light that GHC would probably often just inline it at the place of use, completely freeing it from abstraction overhead, module call overhead and type class overhead; it is basically a monad-aware `if then else` statement, which GHC sees and will embed as such. Overall, since HNix, in the most ideal way currently known, most directly implements a direct analog of the core language, every signal says the HNix core in itself can & should be very lean.
Because of the strong nature of the project, that principle - YAGNI - applies in this project most strongly.
Let's do things naturally. Let's move the currently mandatory YAGNIs that are forced on our own implementation - these YAGNIs just hold the project back - to what they are: an optional possibility we can additionally provide if it is ever required, and allow the main (and so probably canonical) implementation and design to be lean. This is going to leverage the strong points of what HNix is, let the genius of the implementation shine, and make both us and the compiler really happy with the code.
If the initial agenda of the project is to be open, it is not enough to make complex code open-source: we really can make the majority of the code readable and understandable to all Haskellers, so that anyone who comes around marvels at it, or so that we achieve the kind of ingenuity that is elegant to the point of the implementation being succinct, calm and humble.