Dynamic Audio Connections #81
I made a branch of the Tympan_Library and tried to implement the full dynamic behavior -- both dynamic AudioConnection_F32 and dynamic AudioStream_F32: https://github.com/Tympan/Tympan_Library/tree/feature_createDestroyAudioStreamInstances (*see note at bottom). Unfortunately, this exploration required a lot of changes that might not be advisable. Here are a few examples:
I started a Teensy forum post to see if there is appetite for me to submit a pull request to Teensy: https://forum.pjrc.com/index.php?threads/destructor-for-audiostream.75797/ (Note: If you try to use this branch, the branch includes
Prior to the ability to make dynamic connections, one had to instantiate all of the possibly-desired audio pathways. One would then use switches and mixers to route audio blocks around to the different audio paths. It would look like this:

One question is whether all of the AudioPaths are running their calculations, even though we only want one active at a time. Are we burning up CPU? The answer is: no, generally not. The savior is that most audio-processing elements (i.e., AudioStream instances) look for incoming blocks of audio. If there are no incoming blocks, they do no processing (see the sketch just below). So, by using the AudioSwitch to route audio blocks only to the desired audio path, only that audio path does any work. The other audio paths just sit idle. Nice!

The one situation where (I think) this breaks down is when you have something that generates its own audio. So, if you're using an AudioSynthWaveform_F32 to make a sine wave, it'll make a sine wave. It doesn't need to look for an incoming block of audio, so it doesn't know that it's been cut off as part of an inactive audio path. For these cases, one should use the
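As a rough illustration of that "no input, no work" behavior, here is the typical shape of an AudioStream_F32 processing block's update() method, written in the usual Tympan_Library style. The class name is illustrative, not an actual library class:

```cpp
// Simplified pass-through block illustrating the early-return behavior: if no
// audio block arrives (e.g., the AudioSwitch is routing elsewhere), update()
// returns immediately and burns almost no CPU. Illustrative class only.
#include "AudioStream_F32.h"

class AudioPassThru_Sketch_F32 : public AudioStream_F32 {
  public:
    AudioPassThru_Sketch_F32(void) : AudioStream_F32(1, inputQueueArray) {}

    virtual void update(void) {
      audio_block_f32_t *block = AudioStream_F32::receiveWritable_f32();
      if (block == NULL) return;            // no incoming audio => do no processing

      // ...per-sample processing would go here...

      AudioStream_F32::transmit(block);     // pass the block downstream
      AudioStream_F32::release(block);      // return the block to the pool
    }

  private:
    audio_block_f32_t *inputQueueArray[1];  // storage required by AudioStream_F32
};
```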
Out of nowhere, the Teensy folks are starting to build toward a new Teensy software release. This doesn't happen very often. Now is our chance to get the destructor into Teensy Cores! I've offered this Pull Request: PaulStoffregen/cores#755
I've been making an example that does the switching instead of create/destroy. It turns out that we don't need the initial switch. Instead, we can simply de-activate all of the processing blocks that we don't want and only activate the blocks of the one AudioPath that we desire. So, it'd look like this:
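As a minimal sketch of that "activate only the desired path" idea -- the containers and the setEnabled() call below are assumed names for illustration, not the actual Tympan_Library or sandbox API:

```cpp
// Hypothetical sketch: walk every AudioPath and enable only the blocks of the
// one path we want. "BlockLike", "setEnabled", and the path containers are
// placeholder names, not real library classes or methods.
#include <vector>

struct BlockLike {
  virtual void setEnabled(bool enable) = 0;   // stand-in for a processing block's on/off control
};

void activateOnlyThisPath(std::vector< std::vector<BlockLike*> > &allPaths, int desiredPath) {
  for (int p = 0; p < (int)allPaths.size(); p++) {
    const bool makeActive = (p == desiredPath);   // true only for the one path we want
    for (BlockLike *block : allPaths[p]) {
      block->setEnabled(makeActive);              // everything else gets deactivated
    }
  }
}
```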
@cab-creare-com I have a working example using my own AudioPath objects.
The example is in Tympan_Sandbox: https://github.com/Tympan/Tympan_Sandbox/tree/master/OpenHearing/TestSwitchedConnections

It's currently set to Rev E, but you can easily switch it to any of the revisions (D/E/F) just by changing the "myTympan" line. Once compiled and uploaded, you control it via the Serial Monitor. Use the menu (send an 'h') to switch between:
Chip
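For anyone following along, the Serial Monitor control described above generally boils down to a loop like the one below. The command characters and the path-switching helper are illustrative assumptions, not the sandbox sketch's actual code:

```cpp
// Generic Arduino-style serial menu sketch. 'switchToAudioPath' is a stub
// standing in for whatever the real sketch does to activate a path.
#include <Arduino.h>

void switchToAudioPath(int pathIndex) {
  // the real sketch would activate the requested AudioPath here
}

void serviceSerialMonitor(void) {
  while (Serial.available() > 0) {
    char c = Serial.read();
    switch (c) {
      case 'h': Serial.println("Menu: send '1', '2', ... to choose an AudioPath"); break;
      case '1': switchToAudioPath(1); break;
      case '2': switchToAudioPath(2); break;
      default:  break;   // ignore unrecognized characters
    }
  }
}
```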
@cab-creare-com, I've extended and generalized the example sketch to handle 4-channel I/O. It doesn't yet do PDM (that's next), but it does do 4 channels. I've also clarified the small section of code that needs to be touched if you want to add your own AudioPath objects. See the big flashy comment block in the main *.ino file. https://github.com/Tympan/Tympan_Sandbox/tree/master/OpenHearing/TestSwitchedConnections
@cab-creare-com , I've expanded the example to include an AudioPath (path 3) that uses PDM mics instead of using the analog inputs. The example still keeps the same two previous AudioPaths (sine and analog pass-thru). Because switching to PDM is a hardware request (not a change to an audio-processing algorithm), I had to expand the AudioPath interface to allow each AudioPath to configure the hardware. Here's what I did:
Beware that I did not implement a general "reset hardware" to put the hardware in a consistent known state. Instead, I figured that we'd stick with a minimal solution to get more experience. Until then, each AudioPath should fully and explicitly specify every setting that matters to it. Right now, here's one example of what I'm doing:

To use the new example:
The example sketch is the usual one: https://github.com/Tympan/Tympan_Sandbox/tree/master/OpenHearing/TestSwitchedConnections

I tested the code on a Rev E. My Rev E does have an earpiece shield. I tested the firmware's ability to switch among the three AudioPaths. I correctly hear the hardware changing its configuration, which is great. The one (big) caveat is that I don't have earpieces here...so, I couldn't actually listen to the PDM stream. I'll test it more fully the next time that I'm in. If you try this example, I know that you don't have an earpiece shield. We do have spares, so we can get you one. Until then, you can try the code as-is (well, after switching to Rev F, if that's what you have). If the 4-channel code really doesn't work, you can change it back to 2-channel mode via changing
@cab-creare-com, I re-jiggered the method of connecting inputs and outputs to the AudioPath. I'm trying to move toward a place where one can use an AudioConnection regardless of whether it's an AudioPath or an AudioStream. I'm not quite there yet, but the new re-jiggered AudioPath_Base interface helps get us closer. It's built into the example on the Tympan_Sandbox repo. It also requires you to update your Tympan_Library.
@cab-creare-com, I expanded our example to add an AudioPath that does an FFT on the incoming audio! Run the latest version of the example: https://github.com/Tympan/Tympan_Sandbox/tree/master/OpenHearing/TestSwitchedConnections

Upon startup, you should hear it play a test tone, which beeps on/off once per second. This AudioPath does not do the FFT. To do the FFT, send a "4" to switch to that AudioPath. Now it should play a steady tone. If you use a cable to connect the headphone output (black jack) over to the audio input (pink jack), you should see the FFT level printed once per second. To confirm that it works, you can change the frequency of the tone by sending an "f". The FFT reporting will stay at 1 kHz whereas your tone will move up to 1.4 kHz. Note that the reported FFT magnitude drops. Send "F" to move the tone frequency back down to 1 kHz. You can change the amplitude of the tone by 3 dB by sending "a" and "A", and you'll see the reported FFT value also change by 3 dB. Cool!
@chipaudette In an attempt to get up to speed, can I state some assumptions to see if I am on the right track?

There are existing Teensy functions for disconnecting, rather than destroying. Disconnecting doesn't eat up CPU cycles, because the update function in audio objects first checks whether there is incoming data. This occurs in

Catching up on terminology...
AudioConnection_F32: A patchcord that ties the output of one AudioStream to the input of another.
AudioPath_F32: An aggregate audio effect that manages a list of AudioConnection_F32 and AudioStream_F32 objects. For example,

To activate an AudioPath, add it to the master list of audio paths in your main sketch:
To enable/disable a particular AudioPath, use
Note: avoid destroying AudioPaths, as there is no destructor for AudioStream due to limitations in the Teensy audio library.

Referencing the sandbox example, AudioPath_Sine.h:
You can enable/disable audio stream objects with
Great job figuring it all out! Wow! My only comments are tiny clarifications:
From the main *.ino file, you don't have access to the sineWave pointer; it's not a public data member of the AudioPath. From the main sketch, you can only use the AudioPath's public methods (getters and setters) or public data members.
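To make that concrete, here is a sketch of the kind of public setter an AudioPath might expose. The class, method, and member names are placeholders for illustration, not copied from AudioPath_Sine.h:

```cpp
// Illustrative only: the main *.ino cannot reach the private sine generator,
// so the AudioPath exposes a public setter that forwards the request to it.
#include <Tympan_Library.h>

class AudioPath_Sine_Sketch {
  public:
    float setFrequency_Hz(float freq_Hz) {   // public: callable from the main *.ino
      sineWave.frequency(freq_Hz);           // forward to the private generator (exact method name may differ)
      return freq_Hz;
    }
  private:
    AudioSynthWaveform_F32 sineWave;         // NOT visible from the main sketch
};
```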
For example, through this work in developing example AudioPaths, I'm hoping to learn enough that we can dream up ways to generalize AudioStream_F32 so that composite AudioStream_F32 objects can be connected just like regular AudioStream_F32 objects. My gut tells me that all these AudioPath shenanigans shouldn't be necessary. My gut tells me that one ought to be able to make composite AudioStream_F32 objects that are still, themselves, AudioStream_F32 objects. I'm hoping to figure this out in the next week.
@cab-creare-com, this work on AudioPaths has continued to fuel my annoyance that there is no good way to easily make AudioStream objects composed of other AudioStream objects. Sure, AudioPaths were getting me most of the way there, but they had limitations...for example, you could not make an AudioPath using other AudioPath objects. This seems like something that we'll want to do.

So, building upon our experience with AudioPath_F32, I have generalized AudioStream_F32 so that one can now make composite AudioStream_F32 classes and have them still be AudioStream_F32 objects. To encapsulate this idea, I created a new class "AudioStreamComposite_F32", which inherits from "AudioStream_F32". As a result, the composited object is also an AudioStream_F32 object. The code within the AudioStreamComposite_F32 class handles all the behind-the-scenes shenanigans of redirecting inputs and outputs as needed from the outside world into the interior world of its constituent AudioStream_F32 members. As a result, the user doesn't have to know or care. Moving to this approach required some modest changes to AudioStream_F32, and it required heavy replumbing of our example code that tests the audio path stuff.

Benefits vs the previous AudioPath method:
At the moment, it all seems to work, but it's super fresh, so it's possible (likely?) that there are still bugs. I especially need to ensure that my changes to AudioStream_F32 haven't broken any of the other Tympan_Library examples. Until then, if you are curious to see what happened to the AudioPath examples, you're welcome to look at my working example in Tympan_Sandbox. There's still cleanup for me to do, but it's here: https://github.com/Tympan/Tympan_Sandbox/tree/master/OpenHearing/TestSwitchedConnections_composite
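To illustrate the "user doesn't have to know or care" point, here is a rough fragment showing how a composite would be patched exactly like a plain block. The composite class name is a placeholder; the real API lives in the sandbox and library code:

```cpp
// Fragment only: because a composite inherits from AudioStream_F32, it can be
// patched with AudioConnection_F32 just like any single processing block.
// "MyCompositeEffect_F32" is a placeholder for a user-written composite class.
#include <Tympan_Library.h>
#include "MyCompositeEffect_F32.h"                     // hypothetical composite class

const float sample_rate_Hz = 44100.0f;
const int audio_block_samples = 128;
AudioSettings_F32 audio_settings(sample_rate_Hz, audio_block_samples);

AudioInputI2S_F32     i2s_in(audio_settings);          // audio from the Tympan's codec
MyCompositeEffect_F32 effect(audio_settings);          // composite, used like a normal block
AudioOutputI2S_F32    i2s_out(audio_settings);         // audio back out to the codec

AudioConnection_F32 patchCord1(i2s_in, 0, effect, 0);  // same patching syntax as any AudioStream_F32
AudioConnection_F32 patchCord2(effect, 0, i2s_out, 0);
```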
This exercise of generalizing AudioStream_F32 into AudioStreamComposite_F32 has really brought to light the different kinds of behaviors that we were asking of our AudioPath_F32 classes:
While I'm hesitant to over-abstract my class design, the hardware setup methods and the remote control methods really seem to stick out as being unrelated to the compositing behavior. Once this all settles in a bit, I look forward to feedback as to whether to keep them together or whether to break them up.
@chipaudette , I like this approach of breaking out the different responsibilities. In the context of OpenHearing TabSINT/Tympan communication, I'm not using the single-character user control at all. I can also envision re-using a complicated AudioStreamComposite_F32 (such as a fractional-octave-band sound level meter) with different input configurations. |
@cab-creare-com, thanks for jamming on this with me. If we want to separate the hardware setup / user control from the compositing, maybe we keep the class name "AudioPath" and move the hardware setup / user control over to that? These elements are far less likely to be re-used (as opposed to the composited AudioStream stuff), so segregating them from the compositing seems appropriate?

In this approach, you'd still build up your audio processing using AudioStream/AudioStreamComposite. That way, it could be re-used and further composited as desired. But you'd use an AudioPath class to add on the routines for hardware setup and for any common control/communications interface. In other words, you'd let an instance of AudioStreamComposite do all the audio stuff, and you'd let routines in AudioPath do all the interaction with the hardware and with the main *.ino program. Good? Bad?

If this seems good, we get to decide what kind of relationship AudioPath and AudioStreamComposite have with each other. Clearly, an AudioPath needs an AudioStreamComposite. But should AudioPath inherit from AudioStreamComposite, or should it merely hold an instance of AudioStreamComposite? This is the classic question of object-oriented design...should AudioPath be an AudioStreamComposite, or should it have an AudioStreamComposite? Thoughts?
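A bare-bones sketch of the two options being weighed. Class and method names are simplified for illustration (and it assumes AudioStreamComposite_F32 can be default-constructed); these are not the actual library declarations:

```cpp
// Option 1: "is-a" -- AudioPath inherits from AudioStreamComposite_F32, so it
// remains patchable like any AudioStream while adding hardware/control duties.
class AudioPath_IsA : public AudioStreamComposite_F32 {
  public:
    virtual void setupHardware(void) { /* configure codec, mics, shields, etc. */ }
    virtual void respondToByte(char c) { /* handle serial / App commands */ }
};

// Option 2: "has-a" -- AudioPath merely owns a composite and delegates the
// audio processing to it, keeping hardware setup and control separate.
class AudioPath_HasA {
  public:
    AudioStreamComposite_F32 *audioChain = nullptr;    // the audio-processing part
    virtual void setupHardware(void) { /* configure codec, mics, shields, etc. */ }
    virtual void respondToByte(char c) { /* handle serial / App commands */ }
};
```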
The core Teensy Audio library supports dynamic audio connections. The Tympan Library does not. We should extend it to support dynamic audio connections.
There are two levels of "dynamic" that we could envision supporting:
Note: The Teensy library supports dynamic AudioConnection but not dynamic AudioStream. Given this, it seems like we ought to be able to get dynamic AudioConnection_F32 to work, but dynamic AudioStream_F32 is probably harder.
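For reference, my understanding is that recent Teensyduino cores let a core-library AudioConnection be created unconnected and then connected/disconnected at runtime. A minimal sketch of that usage (assuming a recent core; the exact release that added connect()/disconnect() should be double-checked):

```cpp
// Minimal sketch of runtime patching with the core (int16) Teensy Audio
// library: an AudioConnection constructed without arguments can be connected
// and disconnected on the fly.
#include <Audio.h>

AudioSynthWaveform sine1;
AudioOutputI2S     i2s_out;
AudioConnection    patchCord;                    // constructed unconnected

void setup(void) {
  AudioMemory(10);
  sine1.begin(0.2f, 1000.0f, WAVEFORM_SINE);     // amplitude, frequency (Hz), waveform
  patchCord.connect(sine1, 0, i2s_out, 0);       // make the connection at runtime
}

void loop(void) {
  delay(2000);
  patchCord.disconnect();                        // break the connection (tone stops)
  delay(2000);
  patchCord.connect(sine1, 0, i2s_out, 0);       // remake it (tone resumes)
}
```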