Code generation for modules/groups (inlining) #19
Comments
Should probably only inline smaller and/or few-instance modules? For larger modules, a function call is probably worthwhile, since otherwise you're paying with potentially long compile times. Very cool project btw! 😄👍
Thanks :) I'm hoping to increase the performance of the audio thread by inlining things. The issue is that JS JITs don't optimize multiple return values AFAIK, so some other trick would have to be used, such as storing the outputs on an object field, which is awkward.
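For illustration, here is a minimal JS sketch of the trade-off being described; the function and field names are made up, not NoiseCraft's actual generated code:

```js
// Illustrative only; names are invented, not NoiseCraft code.

// Returning multiple values forces an array allocation per call, which
// JS JITs may not be able to eliminate:
function envFollower(input) {
  return [input * 0.1, input * 0.9]; // fresh array every sample
}

// The "object field" trick: outputs are written into a preallocated,
// reused object, so the hot path never allocates:
const outs = { a: 0, b: 0 };
function envFollowerNoAlloc(input) {
  outs.a = input * 0.1;
  outs.b = input * 0.9;
  // callers read outs.a / outs.b after the call
}
```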
Have you thought about generating WebAssembly bytecode instead? I don't have any experience doing that myself, but I would expect it to be a whole lot faster than JS: you would get native 64-bit floating point operations, and you could use "arena" style memory management, avoiding allocations/deallocations at run-time entirely. You might even consider adding a module with WASM source code input as well, for things that modules are less helpful for, such as FFTs, which could then be created in userland.

It's been a dream of mine for years, learning enough WASM to be able to build basically what you're building, but fast enough that it becomes a replacement for desktop products like Reaktor or PureData, but online, enabling users to build and share modules via community features etc.

(I wish I had the time or energy to get invested in this. 😅 I built a shoddy prototype of a tool of this type in Pascal, probably 30 years ago while I was in school, and actually used it to build a reverb effect that still exists in a few products today. I worked in music software for a few years, but not since the 2000s. I just love that it's even possible to build something like this on the web now, and it's so cool that you're actually building it!)
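To make the "arena" idea concrete, a rough sketch of what it could look like from the JS side; the exports (`memory`, `arenaBase`, `renderBlock`) and the `wasmBytes` variable are assumptions for illustration, not a real module's API:

```js
// Rough sketch of "arena" style buffers in WASM linear memory, seen from
// the JS side. All export names here are hypothetical.
const { instance } = await WebAssembly.instantiate(wasmBytes, {});
const { memory, arenaBase, renderBlock } = instance.exports;

const BLOCK = 128; // frames per render quantum

// Every node buffer is carved out of one preallocated region up front,
// so the render loop never allocates or frees. Here we view the first
// output buffer; arenaBase is assumed to be an exported i32 global.
const outBuf = new Float64Array(memory.buffer, arenaBase.value, BLOCK);

renderBlock(); // fills the arena in place; outBuf now holds 128 samples
```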
NoiseCraft already gets native 64-bit floating point operations, since JS numbers are doubles. I have been thinking about maybe building a more advanced version of NoiseCraft in Rust. I think I could make it very portable, but I don't know how easy it would be to make it run in a browser. That would make it more of a desktop/laptop app by default.
is it able to make use of SIMD optimizations? for some high level ops (e.g. adding or multiplying n numbers) the SIMD operations can still be up to 3-4 times faster, from what I've seen of JS vs WASM benchmarks. (?)
and in WASM, you could, right? this would definitely be a step up, I think? it would also unblock the main thread for UI work, wouldn't it?
from what I know, Figma does UI rendering on canvas with WebGL - offloading to a GPU, if available, so even better than just offloading the main thread... but yeah, that is definitely not simple or easy to do.
while that would be cool, there are definitely pros to being JS - mainly, a much larger audience can read the code. 🙂 also, I don't think you have a module yet with custom source code input, but that could also be a benefit to using JS - e.g. Reaper ships with a bunch of JS effects, but of course has some sort of hyper-optimized custom run-time to make this possible, and it has the freedom to launch multiple instances of these in threads, and so on.

another potentially cool thing about the run-time being native JS is that you could allow exporting the internal generated source code, and someone could take the generated code and use it in their own webaudio projects.

hmm, there is the AssemblyScript compiler, which can run in the browser - I wonder if this could be used to bridge the run-time generated code to WASM? you'd only need to add types. might be less disruptive to the project than trying to port it to WASM.

another approach might be web workers? you need a certain level of complexity before offloading anything to workers is worthwhile, but for example, as I recall, SynC Modular (a very old modular synth for PC) used the approach of offloading individual voices (in polyphonic mode) to separate threads - that's a reasonably simple approach (sketched below), but of course only really useful for polyphonic synthesizers. (I'm not even sure if NC has polyphony?) just throwing out ideas here.
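A minimal sketch of that per-voice worker idea; the `voice.js` file and the message protocol are invented for illustration:

```js
// Hypothetical: each polyphonic voice renders one block in its own
// Worker, and the caller mixes the results.
const NUM_VOICES = 4;
const voices = [];
for (let i = 0; i < NUM_VOICES; ++i) {
  voices.push(new Worker('voice.js'));
}

// Ask every voice worker for one rendered block, then sum them.
function renderBlock(voiceEvents) {
  return Promise.all(
    voices.map((w, i) => new Promise((resolve) => {
      w.onmessage = (e) => resolve(e.data); // expects a Float32Array block
      w.postMessage(voiceEvents[i]);
    }))
  ).then((blocks) => {
    const mix = new Float32Array(blocks[0].length);
    for (const block of blocks) {
      for (let s = 0; s < block.length; ++s) mix[s] += block[s];
    }
    return mix;
  });
}
```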
With respect to SIMD, I think there are other optimizations I would prioritize first if the goal was to increase performance.
No, in JS the main thread is tied to the browser event loop, and NoiseCraft already renders audio in a background thread.
If you use WebGL, you have to trust that different browsers will support it correctly and hope that your shaders will render the same across different implementations. A lot of the design choices behind NoiseCraft are about keeping things as simple as possible. That's why it's been working reliably for years whereas a lot of other browser-based music software doesn't.
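For context, the background-thread rendering mentioned above is the AudioWorklet model; a minimal sketch of that pattern (the `GraphProcessor` name is illustrative, not NoiseCraft's actual class):

```js
// processor.js: runs on the audio rendering thread, not the main thread.
class GraphProcessor extends AudioWorkletProcessor {
  process(inputs, outputs, parameters) {
    const out = outputs[0][0];
    for (let i = 0; i < out.length; ++i) {
      out[i] = Math.random() * 2 - 1; // placeholder: white noise
    }
    return true; // keep the processor alive
  }
}
registerProcessor('graph-processor', GraphProcessor);

// main.js: the main thread only wires the node up, then stays free for UI.
// const ctx = new AudioContext();
// await ctx.audioWorklet.addModule('processor.js');
// new AudioWorkletNode(ctx, 'graph-processor').connect(ctx.destination);
```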
the more I try to read up on this, the more perplexed I get. 😐 if I understand correctly now, the "audio rendering thread" they talk about in the spec is a single thread, for crying out loud. 😅 so they designed an audio API that is restricted to everything running in a single thread? I mean, from what I could read in GitHub issues by the working group, they actually knew this, but apparently they were constrained by the browser and language design itself, so it's not that I'm trying to place blame here.

as for Workers and WebAssembly, it sounds like these were not designed for the kind of real-time scheduling that the web audio API requires - I've seen examples, and it's possible to offload to workers, but you're adding your own buffer on top of the internal web audio buffer, adding latency. it sounds like that's not really a good way to go either. so we're essentially stuck with a single thread until the web audio working group figures something out? 🥲

as for your idea to rebuild in Rust, well, this would give you a desktop version that runs great - but since the web version would run on WASM, it would be limited by the same constraints. where the compiled WASM module intersects with WebAudio, it will need to use a shared array buffer, same as any worker, meaning high latency... it would probably be fine for step sequencers and audio sculptures etc. but not great if you want to play your MIDI keyboard.
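To put a rough number on the "extra buffer means extra latency" point, a back-of-the-envelope calculation with assumed values:

```js
// Added latency of an extra ring buffer sitting between a Worker and
// the audio thread. All values are illustrative.
const sampleRate = 48000;       // Hz
const quantum = 128;            // Web Audio render quantum, in frames
const ringFrames = 4 * quantum; // suppose the ring buffer holds 4 quanta

const addedLatencyMs = (ringFrames / sampleRate) * 1000;
console.log(addedLatencyMs.toFixed(2)); // ~10.67 ms on top of base latency
```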
this is definitely a big part of the attraction for me - I am all about simple code we can read and understand. For example, I love the fact that you don't have an entire architecture of abstraction around block types - the "if this then that" model is so straightforward. there is nothing overwhelming in that file at all, handling for each block type is mostly simple. It's great. 😄👍

ugh, I really thought WebAudio was more mature than this. 🫠
The code in compiler.js currently doesn't handle modules, so we can't really use them in projects. To implement this, we should implement inlining, basically transforming the incoming graph until it contains no modules anymore and everything is flattened. This has to work recursively (modules inside of modules).
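A minimal sketch of what that flattening pass could look like, assuming a node/graph data model (`ins`, `graph`, `inPorts`, `outPorts`) that is invented here for illustration and does not match compiler.js exactly:

```js
// Hypothetical recursive inlining pass: replace every module node with
// the (recursively flattened) contents of its internal graph.
// Assumed data model:
//   node = { id, type, ins: [srcId|null, ...], graph? }
//   module node.graph = { nodes, inPorts, outPorts } (inner node ids)
let nextId = 0;

function flatten(nodes) {
  const out = [];
  for (const node of nodes) {
    if (node.type !== 'module') {
      out.push(node);
      continue;
    }

    // Recurse first, so nested modules are already flattened.
    const inner = flatten(node.graph.nodes);

    // Fresh ids per instance, so inlining a module twice can't collide.
    const remap = new Map();
    for (const n of inner) remap.set(n.id, 'inl' + nextId++);

    for (const n of inner) {
      const copy = { ...n, id: remap.get(n.id) };
      copy.ins = n.ins.map((src) => {
        if (src === null) return null;
        // Inputs read from one of the module's input ports get rewired
        // to whatever fed the module node from the outside.
        const port = node.graph.inPorts.indexOf(src);
        return port >= 0 ? node.ins[port] : remap.get(src);
      });
      out.push(copy);
    }
    // Outer nodes reading the module's outputs would similarly be
    // rewired to remap.get(node.graph.outPorts[k]); elided here.
  }
  return out;
}
```

Since `flatten` recurses before remapping ids, modules inside of modules come out fully flattened, which is the recursive requirement described above.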