ASL modulations #468
Labels: design, feature, help wanted
ref to #438
Inspired by github.com/entzmingerc/drumcrow, I'd love to see some extensions to ASL that would allow modulations within an ASL shape. The best system in crow for describing modulations is ASL itself, so this leads to arbitrary nesting of ASLs. Of course we have CPU & memory limitations, but because the ASL runtime runs natively in C, it should end up far more performant than current solutions, which run a fast timer & calculate these modulations in the lua script.
Right now ASLs are tied to hardware outputs, and statically allocated at boot time. This would need to be overhauled to allow a pool of ASLs that can nest.
Syntax and organization of ASL descriptions
At present most ASL descriptions are captured inside functions (eg: `ar`, `lfo`, `pulse`) which themselves return ASL descriptions. The benefit here is that the same description can be applied to any number of ASL machines. It would seem beneficial to continue this pattern, even if it leads to some limitations. This would avoid breaking existing code (which at v3+ seems the right thing to do). This suggests that nested ASLs should be declared inside of other ASL descriptions, being automatically allocated under the hood. The nested ASLs would need to be named such that they can be controlled with directives internal to a given ASL. Something along the lines of:
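For illustration only -- none of this syntax exists today, and the `_nest` wrapper, `name` field, and `ref` lookup are invented names purely to show the shape of the idea:

```lua
-- a hypothetical slope whose destination is modulated by a nested, named lfo
function wobble( time, depth )
  return{ _nest{ name = 'mod', lfo( time/8, depth ) } -- nested ASL, allocated under the hood
        , to( ref'mod', time )  -- destination follows the value of the nested 'mod' ASL
        , to( 0, time/2 )
        }
end

output[1].action = wobble( 2, 1 )
output[1]()  -- start the shape
```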
This is, of course, just a start & I haven't thought through all the repercussions of any specific design decisions here. Just a starting point for further exploration.
Dedicated generators
At present ASLs are always deterministic and randomness can only be approximated with algorithms like LCG (see drumcrow). To simplify usage in a more percussive or noisy context, it may be beneficial to use a dedicated noise generator.
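As a point of reference, here's roughly how that approximation looks today using `dyn` arithmetic (the constants are illustrative, not taken from drumcrow):

```lua
-- a pseudo-random stepped voltage built from dyn's multiply & wrap modifiers:
-- each time the dyn is read it is scaled then wrapped back into range,
-- giving a chaotic (but fully deterministic) sequence
output[1].action =
  loop{ to( dyn{ seed = 1.17 }:mul(3.719):wrap( -5, 5 ), 0.05 ) }
output[1]()
```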
I see 2 clear ways this could be implemented:
Noise as an output shaper
Similar to `log` or `exp` etc, we could add `noise` and `bnoise` (the latter for bipolar output). This has the benefit of using the known `to` function, allowing for amplitude (destination voltage) and duration (time parameter). Thus the amplitude could be controlled by a nested ASL to implement an enveloped output level. Setting the duration to a very big number would allow essentially infinite noise generation without ASL doing any substantial work in the VM, and instead just in the signal generation code.
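One possible reading of that, assuming `noise` is accepted wherever a shape like `'log'` is today (speculative syntax, nothing here exists yet):

```lua
-- a percussive noise burst: the destination volts set the noise amplitude,
-- the time parameter sets how long each segment lasts
output[1].action =
  { to( 5, 0.01, 'noise' )   -- 10ms at full (5V) amplitude
  , to( 0, 0.3, 'noise' )    -- amplitude falls toward 0 over 300ms
  }
output[1]()
```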
Noise as a generator object
This approach looks to add a new function analogous to `to`, where we can change the parameter list for configuring the generator. We'd still need a duration parameter so it can coexist with `to`, but this could explicitly support -1 to mean "forever". There would need to be an amplitude control, but this could optionally take a min & max for an explicit bipolar range. A third argument could be used to set the internal rate of the noise generation (how frequently a new random value is generated) so we could implement other types of noise than white. And a linked parameter would be related to slew times for transitioning between values (or even just a boolean on/off for discrete steps vs continuous slides). The motivation here is that it could have use in a modulation context for drunk-walk type modulations.
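A speculative sketch of such a generator -- the name `noise` and its parameter order are placeholders, not a proposal:

```lua
-- noise( duration, level, rate, slew )
--   duration : seconds, with -1 meaning "forever"
--   level    : amplitude, or a {min, max} pair for an explicit bipolar range
--   rate     : how often a new random value is generated (Hz)
--   slew     : seconds to glide between values (0 = discrete steps)
output[1].action =
  { to( 5, 0.5 )                      -- an ordinary slope first
  , noise( -1, {-2, 2}, 100, 0.002 )  -- then noise, forever
  }
```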
An alternative to these drunk-walk type modulations (and for sample-reduced interpolated noise) would be another, separate `dyn` directive, where a special `dyn("noise")` could be enabled which would deliver a new value every time it was called. Then the sample rate & slew behaviour could be configured in precise detail in a regular ASL description. The noise impulse would default to some pre-defined range (-1,1) or (0,5) etc. That would look something like:
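Purely illustrative -- `dyn("noise")` doesn't exist today, and this assumes the regular `to` time & shape arguments would set the sample rate & slew:

```lua
-- sample-reduced noise: a new random value every 0.1s, slewed linearly between values
output[1].action = loop{ to( dyn("noise"), 0.1 ) }

-- or discrete steps: jump to each new value immediately, then hold
output[2].action = loop{ to( dyn("noise"), 0.1, 'now' ) }
```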
Other generators
The above could be categorized as:
- a new output shape (`noise` handed to `to` like `log` or `exp`)
- a new generator function living alongside `to`
- a new `dyn` source
Each of these 3 suggests the possibility of additional variations. Before making a decision about which path to follow, it makes sense to try and enumerate the other ways in which each method could be extended. In general I would prefer the option that is most general purpose (ie allows the most meaningful variations), as ASL is intended to "describe all the possible modulations". If we are extending ASL to explicitly support audio processing, then we should ask what other audio-processing tasks can be captured within the existing syntax with minimal awkwardness.
Everything in ASL so far has been about generation. By introducing nested ASLs we are getting into processing (ie a dynamic destination is essentially amplitude modulation). The next natural step is probably filtering, and it gives me pause to think about how that category of elements could be integrated within this little language. Of course we're not trying to make the next SuperCollider for crow -- the hardware simply doesn't have the processing power to make that worthwhile, let alone the increasingly sharp learning curve. The introduction of `dyn` was a big change, and very few people have really gone deep with it -- it's an inherently complex addition to the language -- so I'm hesitant to add a great deal of complexity to the system.
Rambling...
That's all for now, but I'm very open to suggestions & have absolutely no timeframe scheduled. This is more of a thought experiment that could potentially become a feature down the road.