Ion channel example troubleshooting #314
@rgerkin as discussed at today's meeting: IVCurveTest needs ProtocolToFeatures test
@gidili The IVCurveTest uses pyNeuroML's NML2ChannelAnalysis module to do most of the work.
@gidili Each test's … You can skip …
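For reference, the NML2ChannelAnalysis module mentioned above also ships as a command-line entry point in pyNeuroML. A minimal invocation might look like the following (the channel file name is illustrative, and this assumes pyNeuroML is installed):

```shell
# Run pyNeuroML's channel analysis on an ion channel NeuroML file;
# this generates the LEMS simulation file and IV-curve style analysis
# that IVCurveTest relies on. "NaChannel.channel.nml" is a placeholder.
pynml-channelanalysis NaChannel.channel.nml
```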
@rgerkin we are facing a few issues on this - recap below:
As a reminder, we initially said we'd only support NeuroML files, but since it was easy enough we also added support for LEMS files. Things will work fine if the entry point is either nml or LEMS and inclusions flow from there, and the methods we call to set_protocol etc. behave identically for LEMS and nml.
@gidili
For the second thing: so I guess if you can successfully run the file at
Thanks @rgerkin - I think after these exchanges I was able to put my finger on the issue. In our flow we have no concept of a LEMS wrapper, so if we register a model as a LEMSModel class, we have no way of knowing which model the wrapper includes, and so no way to associate a test class (ChannelModel in this case) with it. At that point it makes no sense to schedule a LEMSModel against IVCurveSSTest (which is incompatible by design too, because the test wants to probe model features the LEMS wrapper knows nothing about).

So by design we can only work with nml files that represent the actual model to be tested (as we originally agreed, and as the Izh example works). We added support for LEMS because it was "easy", but we missed this little detail that breaks the flow completely when there is a LEMS wrapper. It took this long to understand because we were fixing issues as they showed up and going down the wrong rabbit hole.

It would technically work if there existed tests that operate at the wrapper level, but the fact that the tests operate at the level of what's nested inside the wrapper breaks our flow. This can of course be fixed if we introduce the concept of model wrappers and structure things so that we know which particular model a wrapper includes, give that a class too, and then let the user schedule model vs test while the system knows about the wrapper and can use it as the simulation entry point.

Hope this clarifies where we are with this; we can discuss further at today's meeting. As a side note, from talking to Padraig, use of LEMS wrappers in the NeuroML community seems to be dying off (he's not pushing them anymore and only very old ones are around) in favour of doing this in nml instead (as in the Izh example you have).
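For concreteness, the "LEMS wrapper" discussed above is essentially a simulation file that includes the real model, so the wrapper alone does not reveal the model's type. A minimal sketch (file names, ids, and the simulation parameters are all illustrative, not taken from the actual generated files):

```xml
<Lems>
    <!-- The test-relevant model lives in the included file; the wrapper
         itself carries no information about what kind of model it wraps -->
    <Include file="NaChannel.channel.nml"/>
    <Simulation id="sim1" length="300ms" step="0.01ms" target="net1">
        <OutputFile id="of0" fileName="results.dat">
            <OutputColumn id="v" quantity="pop0[0]/v"/>
        </OutputFile>
    </Simulation>
</Lems>
```

This is why registering the wrapper as a LEMSModel loses the link to the ChannelModel inside it.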
Also, I am guessing it should be possible to rework the ion channel example to use inputs in nml instead of LEMS; in that case our pipeline will work as expected and produce scores like it does for the other example.
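Moving the inputs into the NeuroML file itself would look roughly like this hedged sketch (ids, component names, and amplitudes are placeholders; standard NeuroML2 elements are used):

```xml
<neuroml xmlns="http://www.neuroml.org/schema/neuroml2" id="example">
    <!-- A current-clamp input defined directly in nml, so no LEMS
         wrapper is needed to drive the protocol -->
    <pulseGenerator id="iclamp0" delay="50ms" duration="200ms" amplitude="0.1nA"/>
    <network id="net1">
        <population id="pop0" component="cell0" size="1"/>
        <explicitInput target="pop0[0]" input="iclamp0"/>
    </network>
</neuroml>
```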
@rgerkin Update on this after our conversation on last week's call. We are updating the code to take the modified file from the lems file path property after setup_protocol is called. This is seemingly "easy", but it means the model (url) the user pointed to during registration, and that has been validated, is now NOT the same entry point that will be simulated (since an extra LEMS wrapper is generated). That adds complications to the flow, because the wrapper also has to be loaded and parsed, and it can get tricky to keep the two aligned and consistently exposing the same underlying model.

Also, since the LEMS file can (and often does) specify things that are also specified in the Geppetto meta model (watched variables) or in the NeuroML file itself (inputs), this leads to an array of untested scenarios in that pipeline - e.g. conflicting watched-variable declarations in the LEMS wrapper vs what the user selects in the UI, which ends up in the Geppetto meta model, to mention one. So at best this is going to be released as an experimental feature, from what I have seen so far 😅

It's all doable of course (as almost anything you can think of is!), but we had to rethink some things and do some refactoring, which is still ongoing as I type this - let's see where we get this week and we can discuss on Friday. Also, to make it more interesting, we don't yet know whether the generated LEMS/nml combo for this particular model will run smoothly on the Geppetto pipeline (we know it runs on jNeuroML, but scidash uses its own scidash-simulator, which has some extra logic to send results back).
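The bookkeeping problem described above can be sketched in a few lines of plain Python. This is a toy illustration, not the real scidash/neuronunit API: the class name, the `setup_protocol` stand-in, and the wrapper contents are all hypothetical, chosen only to show how the registered file and the simulated file diverge.

```python
import os
import tempfile


class RegisteredModel:
    """Toy sketch: the file the user registered (and that was validated)
    stops being the simulation entry point once setup_protocol() generates
    a LEMS wrapper. Names here are illustrative, not the real API."""

    def __init__(self, nml_path):
        self.nml_path = nml_path      # what the user registered and validated
        self.lems_file_path = None    # filled in once a wrapper is generated

    def setup_protocol(self):
        # Stand-in for the real protocol setup, which writes a LEMS wrapper
        # that <Include>s the registered NeuroML file.
        fd, path = tempfile.mkstemp(suffix="_wrapper.xml")
        with os.fdopen(fd, "w") as f:
            f.write('<Lems><Include file="%s"/></Lems>' % self.nml_path)
        self.lems_file_path = path

    def simulation_entry_point(self):
        # After setup_protocol, the wrapper (not the registered file) must
        # be simulated, so the two paths have to be kept aligned.
        return self.lems_file_path or self.nml_path
```

Before `setup_protocol` the entry point equals the registered nml path; afterwards the two diverge, which is exactly the alignment problem mentioned above.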
I tried running it from OSB but couldn't find a way to make it work out of the box with the LEMS file as an entry point - I was also talking to @pgleeson about it. Basically, after we get the scidash pipeline to cater for this scenario we hadn't designed for (we did not expect neuronunit to generate a LEMS wrapper, so that went completely over our collective heads), any model changes required fall under the realm of compatibility with those LEMS/NeuroML-to-NEURON conversion libraries etc., so that can continue as a separate conversation.
From @pgleeson:
So from the sounds of it, even if we finish the pipeline part, it will still require changes on the LEMS generation side to structure things so they are compatible with the OSB pipeline, with possible changes in both neuronunit and pyNeuroML to make this work.

@rgerkin Given this info, I am proposing to abort this particular mission (we could modify the pipeline, but we cannot test it without the other changes mentioned above, so it cannot happen in the very short term), release without support for that example for now, and focus on getting everything else ticking along. Anything that can be triggered off a pure NeuroML file as the simulation entry point, with setup_protocol modifying the NeuroML, will work fine (assuming the model is well formed), as designed. At least we now know what it takes to add support for the other case when we have scope to embark on that task in the near future.
I am happy to give up on supporting this example, but would like to be in a position to support whatever the approach of the near future will be. In the meantime, will it work (for a user, currently, with no additional changes) to put the URL of the wrapped LEMS file (i.e. the one made by the pynml ChannelAnalysis module) into the site, select LEMSModel as the model type, and run a (new) test whose
@rgerkin we have the code for the scidash pipeline changes in a branch, so whenever the approach to LEMS file generation changes (plus those pyNeuroML changes), we have that code as a functional starting point (some refactoring will be required, though, because we had to jump through some hoops, cut some corners, and there's some unideal stuff in there). The flow is actually working, but as expected the simulation fails, in line with Padraig's comments. So yes, it's already possible in terms of what happens within the boundaries of the scidash app itself, but I am reluctant to throw it in now, as I don't want to risk destabilizing the build for something we can't effectively use yet.

It already works with LEMS as an entry point if you have tests that are compatible with the particular LEMS model (currently verified during scheduling via compatibility checks with sciunit), and this is a bonus compared to what we had originally planned (nml files only). It was so close that we decided to go the extra mile - maybe in hindsight it wasn't such a good decision given the domino effect it triggered... but it's research software after all, so taking risks means we learn something, and learn we did! 😄 The only constraint will again be Padraig's points above, but the pipeline as a passthrough already works 👌
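The scheduling-time compatibility check mentioned above follows sciunit's capability-matching pattern: a test declares required capabilities, and a model is schedulable only if it implements them all. The sketch below is plain Python illustrating the idea, not sciunit's actual classes; all names besides IVCurveSSTest and ChannelModel (which appear in this thread) are hypothetical.

```python
class Capability:
    """Marker base class; models declare capabilities by inheriting them."""


class ProducesIVCurve(Capability):
    def iv_curve(self):
        raise NotImplementedError


class Test:
    required_capabilities = ()

    def check_capabilities(self, model):
        # A model can be scheduled against this test only if it implements
        # every capability the test requires.
        return all(isinstance(model, cap) for cap in self.required_capabilities)


class IVCurveSSTest(Test):
    required_capabilities = (ProducesIVCurve,)


class ChannelModel(ProducesIVCurve):
    def iv_curve(self):
        return {"-60mV": 0.0}  # placeholder data


class BareLEMSWrapper:
    """A wrapper that knows nothing about the channel features inside it."""
```

With this structure a ChannelModel passes the check while a bare LEMS wrapper fails it, which is why scheduling a LEMSModel against IVCurveSSTest is rejected by design.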
Noticed that some of the quoted text above was missing info on NeuroML/LEMS element names due to bracketed text getting hidden, so here's the original info from Slack:

> I do think the changes to get pynml ChannelAnalysis to make a LEMS+NML file, rather than one LEMS file with everything, should be straightforward, and I can do that and test it over the next week if you like, but if you're moving on to other things, I have other work to keep me busy...
Thanks @pgleeson (and thanks for reposting - I hadn't noticed the markup was lost). The problem with the above is that it requires changes to the scidash pipeline at the very least (for the extra files), and probably some integration work in neuronunit, both in how it uses pyNeuroML and in how it exposes the newly generated files. I think it's a good idea to schedule even a 30-minute meeting next week to explore this and coordinate with @rgerkin too, but I honestly don't think we should rush it into this release: it requires changes in three different code bases and a bunch of testing after that, so best case we are talking a couple of weeks (worst case a month to get everything working and integrated). @rgerkin @pgleeson are you ok if I throw a tentative block on the calendar for next week?
I will be unavailable next week but maybe the week after that. It would be good for us all to better understand the roadmap and use cases for NeuroML/LEMS stuff going forward, so I would like to have the meetup. I don't expect anything to make it into the current SciDash work, though, since we are wrapping things up. But it would probably help all of us in the future. |
Sure. Next week is ok for me, so let me know @gidili if you would like a quick chat. The week after is CNS; I can have a chat with Sharon there re SciDash, but it would be good the week after that to have a hangout, go through where it's at (it would be great to get an updated demo), and start thinking about how it relates to other NML/OSB plans.