New type of “network” (Bayesian network for now, maybe neural in the future) that can be trained and tested.
- [ ] Graph network
- [ ] Prepare -> extract all training edges: for each pair of graphemes, encode <box, grapheme tags according to net config>, whether they are linked or not, and the edge tag according to net config.
- [ ] Make scikit-learn an optional dependency of quevedo.
- [ ] Train -> fit the Bayesian network and serialize it to disk.
- [ ] Test -> load the serialized model; for each pair of graphemes in a logogram, predict whether there is an edge and, if so, its tag (see the sketch after this list).
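A minimal sketch of the prepare/train/test flow, assuming scikit-learn. scikit-learn does not ship a full Bayesian network, so a naive Bayes classifier stands in here; the data accessors (`box`, `tags`, `edges` as pairs of grapheme indices) are assumptions about the data model, not Quevedo's actual API, and edge-tag prediction (a second classifier over the same features) is omitted for brevity.

```python
# Sketch only: pairwise edge prediction with a naive Bayes stand-in.
from itertools import combinations
import pickle

import numpy as np
from sklearn.naive_bayes import GaussianNB

def pair_features(a, b):
    # Encode a grapheme pair as relative box position and size
    # (the real encoding would follow the net config).
    ax, ay, aw, ah = a["box"]
    bx, by, bw, bh = b["box"]
    return [bx - ax, by - ay, bw / aw, bh / ah]

def prepare(logograms):
    # One training example per grapheme pair: features + linked-or-not label.
    X, y = [], []
    for logo in logograms:
        linked = {frozenset(e) for e in logo["edges"]}
        for (i, a), (j, b) in combinations(enumerate(logo["graphemes"]), 2):
            X.append(pair_features(a, b))
            y.append(1 if frozenset((i, j)) in linked else 0)
    return np.array(X), np.array(y)

def train(logograms, path):
    X, y = prepare(logograms)
    model = GaussianNB()  # stand-in for the Bayesian network
    model.fit(X, y)
    with open(path, "wb") as f:
        pickle.dump(model, f)

def test(logogram, path):
    # For each pair of graphemes in the logogram, predict edge probability.
    with open(path, "rb") as f:
        model = pickle.load(f)
    pairs = list(combinations(enumerate(logogram["graphemes"]), 2))
    X = np.array([pair_features(a, b) for (_, a), (_, b) in pairs])
    probs = model.predict_proba(X)[:, 1]
    return [((i, j), p) for ((i, _), (j, _)), p in zip(pairs, probs)]
```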
How to filter edges? This is global, and probably application-dependent. Maybe a user script? “Seed” nodes from which to compute maximum-probability spanning trees?
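As one possible quick filter, a maximum-probability spanning tree could be computed with networkx (an assumed extra dependency, not a current one); seed-node constraints are left out. `edge_probs` is assumed to be the `((i, j), probability)` list returned by `test` in the sketch above.

```python
# Sketch only: keep a maximum-probability spanning tree of predicted edges.
import networkx as nx

def filter_edges(edge_probs):
    g = nx.Graph()
    for (i, j), p in edge_probs:
        g.add_edge(i, j, weight=p)
    # The maximum spanning tree keeps the most probable edges that
    # still connect every grapheme without cycles.
    tree = nx.maximum_spanning_tree(g, weight="weight")
    return list(tree.edges())
```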
Beyond the quick filter, this needs a new view.
Maybe use https://developer.mozilla.org/en-US/docs/Web/HTML/Element/datalist
Remove the “saved” message when making changes (it becomes inconsistent); maybe show a loading indicator instead.
- [ ] “flags” (from v1.1)
- [ ] changes from v1.2
- [ ] changes from v1.3
Maybe just move the last one to the whole?
User groups; record the annotator in the JSON.
When scripts modify images, don’t save them directly; instead, return that the image has been modified (i.e. return modified_tags, modified_img) and let `run_script` save the image to the appropriate path. Conversely, in the web interface the updated image can be sent to the frontend to be previewed, and if the user wants to keep it, it is sent back to the server on “save”. The complication is that the image is then frontend state, not just a src link.
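A rough sketch of that contract, with hypothetical names (`process`, `modified_tags`, `modified_img`, `image_path`, a PIL-style `save`) rather than Quevedo's actual API:

```python
# Sketch only: scripts return modifications; run_script persists them.
def run_script(process, annotation):
    # The user script returns (modified_tags, modified_img) and never
    # touches the disk itself.
    modified_tags, modified_img = process(annotation)
    annotation.tags = modified_tags
    if modified_img is not None:
        # Only run_script knows the appropriate path, so it does the
        # saving. In the web interface, the image would instead be sent
        # to the frontend for preview and saved only on an explicit "save".
        modified_img.save(annotation.image_path)
```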
Study migrating to a Python-based ML library.
Incorporate the VISSE data augmentation code to replace the existing module. The idea is to use user code to generate examples, because users know how their tags work. We can still provide the image generation and grapheme placement with force simulation for logogram generation (see the sketch after this list).
- [ ] Generate graphemes
- [ ] Generate logograms
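A possible shape for the user hooks, assuming Quevedo supplies rendering and force-simulation placement; all names, tag values, and the dict layout here are hypothetical:

```python
# Sketch only: user code decides *what* to generate, Quevedo decides *where*.
import random

def generate_grapheme(rng: random.Random):
    # User-supplied: users know their own tag schema
    # (these tag names are made up for illustration).
    return {"tags": [rng.choice(["HEAD", "HAND", "ARROW"])]}

def generate_logogram(rng: random.Random, n: int = 5):
    graphemes = [generate_grapheme(rng) for _ in range(n)]
    # Placement would be done by Quevedo's force simulation;
    # random boxes stand in here.
    for g in graphemes:
        g["box"] = [rng.random(), rng.random(), 0.1, 0.1]
    return {"graphemes": graphemes}
```

Keeping generation in user code and placement/rendering in Quevedo means the library never needs to understand application-specific tag schemas.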
Document how to use it, its usefulness, etc.