
Ion channel example troubleshooting #314

Closed
gidili opened this issue Jun 13, 2019 · 15 comments

gidili commented Jun 13, 2019

No description provided.


gidili commented Jun 13, 2019

@rgerkin as discussed at today's meeting: IVCurveTest needs to be a ProtocolToFeaturesTest


rgerkin commented Jun 17, 2019

@gidili The IVCurveTest uses pyNeuroML's NML2ChannelAnalysis module to do most of the work. The ProtocolToFeaturesTest includes the steps "setup_protocol", "get_result", and "extract_features", but currently these are all blended together in the NML2ChannelAnalysis module: essentially, make_lems_file, run_lems_file, and compute_iv_curve are all done in the same function there. I will see if I can separate those out to refactor the IVCurveTest into three steps.
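
For concreteness, a rough sketch of how that three-step split could look (illustrative only: the real signatures of make_lems_file, run_lems_file, and compute_iv_curve in NML2ChannelAnalysis differ, and model.channel plus the protocol keyword arguments are placeholders):

from pyneuroml.analysis import NML2ChannelAnalysis as ca

class IVCurveTestSketch:
    """Hypothetical three-step rewrite of IVCurveTest."""

    def setup_protocol(self, model, **protocol_params):
        # Step 1: write the LEMS simulation file and record its location.
        model.lems_file_path = ca.make_lems_file(model.channel, protocol_params)

    def get_result(self, model):
        # Step 2: run the LEMS file locally (skippable if a server runs it).
        return ca.run_lems_file(model.lems_file_path, verbose=False)

    def extract_features(self, model, results):
        # Step 3: reduce the raw traces to an I-V curve.
        return ca.compute_iv_curve(model.channel, results)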


rgerkin commented Jun 17, 2019

@gidili
OK, I think it's done. The tests are the ones here.

Each test's setup_protocol should now write a NeuroML file at model.lems_file_path that you can then send to Geppetto, as I think you have been doing previously.

You can skip model.get_results, which runs the model locally, and proceed to model.extract_features with the results data from Geppetto.

model.extract_features is expecting results data that looks like the output of this. Since that uses the same pyNeuroML functions I have been using for other models with NeuroML, I imagine the results data will have a similar format and be immediately usable.
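
Putting those three points together, a minimal sketch of the intended flow (run_on_geppetto is a hypothetical stand-in for however the Geppetto server gets invoked):

def score_via_geppetto(test, model, run_on_geppetto):
    # Sketch of the flow described above, not actual SciDash code.
    test.setup_protocol(model)                       # writes model.lems_file_path
    results = run_on_geppetto(model.lems_file_path)  # replaces model.get_results()
    return model.extract_features(results)           # same shape as local results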


gidili commented Jun 27, 2019

@rgerkin we are facing a few issues on this - recap below:

  • Attempting to use the LEMS file as the model URL during registration, the LEMSModel class does not pass the extra capability checks, so we can't create the model from the UI. If we remove the capability check from the code, we can create the model, but then the compatibility check in the scheduler fails and we can't schedule the model against IVCurveSSTest (probably because that test is looking for things that live in the nml channel file, not in the LEMS file), since the test class is incompatible with the model class. Before we push further down this path I wanted to check with you.
  • The ion channel model nml file registers fine with the ChannelModel class, but when we run the simulation we get conversion errors from the NeuroML pipeline in Geppetto (Padraig's libraries, so we need to open another thread with him about that). The LEMS file is bypassed entirely in this case, so I am guessing it wouldn't work anyway without altering the model, because there are no inputs in the nml model.

As a reminder, we initially said we'd only support NeuroML files, but since it was easy enough we also added support for LEMS files. Things will work OK if the entry point is either nml or LEMS, inclusions flow from there, and the methods we call (setup_protocol etc.) behave identically for LEMS and nml.


rgerkin commented Jun 27, 2019

@gidili
For the first issue, I thought I fixed that as described here. Basically, the extra_capability_checks will not fail the model unless the model claims one of the capabilities in the keys of extra_capability_checks, and the base LEMSModel doesn't claim any of them. Here is an example showing that this should not be failing. If it shows different output for you, for example if the output of [3] and [6] is not an empty list, then maybe you didn't pull the latest sciunit dev branch since the fix.

In [1]: from neuronunit.models.lems import LEMSModel
In [2]: model = LEMSModel('EGL-19.channel.nml')  # Assumes this nml file is in the cwd
In [3]: model.failed_extra_capabilities
Out[3]: []

In [4]: from neuronunit.models.channel import ChannelModel
In [5]: model = ChannelModel('EGL-19.channel.nml')
In [6]: model.failed_extra_capabilities
Out[6]: []
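
For reference, a simplified sketch of the behavior being described (not sciunit's actual implementation; the dict shape and attribute names here are assumptions):

def failed_extra_capabilities(model, extra_capability_checks):
    # A model only fails an extra capability check if it claims
    # that capability in the first place.
    failed = []
    for capability, check_name in extra_capability_checks.items():
        if capability in model.capabilities:  # unclaimed -> not checked
            if not getattr(model, check_name)():
                failed.append(capability)
    return failed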

For the second thing:
model.get_results, which I run locally but which you do not (because you simulate through the Geppetto server), just calls pynml's run_lems_with_jNeuroML on the file at model.lems_file_path. So you should check whether you can run the file at model.lems_file_path directly in jNeuroML, and then try it in Geppetto.

Note that model.setup_protocol, which you are running (as I understand it), calls pynml's channelanalysis module to generate the correct nml file and then sets model.lems_file_path to its location. That is what creates all the inputs and so forth. Naturally, running the original file (i.e. the one passed to the model constructor, which should always be at model.orig_lems_file_path) won't do much, since it has no inputs and is just a model description for a channel. pynml's channelanalysis module (written by Padraig) does all the work of making a LEMS file for simulating ion channel experiments.

So I guess if you can successfully run the file at model.lems_file_path (after running model.setup_protocol) in jNeuroML, but not in Geppetto, then there isn't much that can be done until the Geppetto nml pipeline is fixed. A useful way to identify these cases could be to have Geppetto simulation errors fall back to jNeuroML: not as a way to make the site work, but as a way to internally document whether the error is related to Geppetto or to the model. Either way I would expect an error score, but since Geppetto has a wider reach than just this site, it could help you identify models that the Geppetto nml pipeline is not prepared to handle. This would be outside the requirements scope, but could be helpful for future Geppetto development.
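
A quick way to run that check, assuming the test and model objects from earlier in the thread (pynml.run_lems_with_jneuroml is the same function model.get_results uses; the wrapper is just for illustration):

from pyneuroml import pynml

def runs_in_jneuroml(lems_file_path):
    # Mirrors what model.get_results does locally; if this succeeds but
    # Geppetto fails on the same file, the problem is on the Geppetto side.
    results = pynml.run_lems_with_jneuroml(
        lems_file_path, nogui=True, load_saved_data=True)
    return bool(results)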


gidili commented Jun 28, 2019

Thanks @rgerkin - I think after these exchanges I was able to put my finger on the issue.

The issue is that in our flow we have no concept of a LEMS wrapper, so if we register a model as a LEMSModel class, we have no way of knowing which model this wrapper includes and no way to associate a test class with that included model (ChannelModel in this case). And at that point it makes no sense to schedule a LEMSModel against IVCurveSSTest (which is also incompatible by design, because it wants to test model features that the LEMS model doesn't know anything about).

So by design we can only work with nml files that represent the actual model to be tested against (as we originally agreed, and as the Izh example works). We added support for LEMS because it was "easy", but we missed this little detail, which breaks the flow completely when there is a LEMS wrapper. And it took this long to understand because we were fixing issues as they showed up and going down the wrong rabbit hole. It would technically work if there existed tests that operate at the wrapper level, but the fact that the tests operate at the level of what's nested inside the wrapper breaks our flow.

This can of course be done if we introduce the concept of model wrappers and structure things in such a way that we know which particular model a wrapper includes, and give a class to that too; at that point we can let the user schedule model vs test, and the system, having knowledge of the wrapper, can use it as the simulation entry point. Hope this clarifies where we are with this, and we can discuss further at today's meeting.

As a side note, I think use of LEMS wrappers in the NeuroML community is somewhat dying off (from talking to Padraig, he's not pushing them anymore and only very old ones are around) in favour of doing this in nml instead (as in the Izh example you have).

Also, I am guessing it should be possible to rework the ion channel example to use inputs in nml instead of LEMS, in which case our pipeline will work as expected and produce scores as it does with the other example.


gidili commented Jul 3, 2019

@rgerkin Update on this after our conversation on the call last week.

We are updating the code to take the modified file from the LEMS file path property after setup_protocol is called. This is seemingly "easy", but it means the model (url) the user pointed to during registration, and that has been validated, is now NOT the same entry point that will be simulated (since we have an extra generated LEMS wrapper). That adds complications to the flow: the wrapper also has to be loaded and parsed, and it can get tricky to make sure the two stay aligned and consistently expose the same underlying model. Also, since the LEMS file can (and often does) specify things that are also specified in the Geppetto meta-model (watched variables) and can be specified in the NeuroML file itself (inputs), this leads to an array of untested scenarios on that pipeline (e.g. watched-variable declarations on the LEMS wrapper conflicting with what the user selects in the UI, which ends up in the Geppetto meta-model, to mention one). So at best this is gonna get released as an experimental feature, from what I have seen so far 😅

It's all doable of course (as almost anything you can think of is!), but we had to rethink some things and do some refactoring, which is still ongoing as I type this - let's see where we get this week and we can discuss on Friday.

Also, to make it more interesting, we don't know yet whether the generated LEMS/nml combo for that particular model will run smoothly on the Geppetto pipeline (we know it runs on jNeuroML, but SciDash uses its own scidash-simulator, which has some extra logic to send results back). I tried running it from OSB but couldn't find a way to make it work out of the box with the LEMS file as an entry point. I was also talking to @pgleeson about it; basically, after we get the SciDash pipeline to cater for this scenario we hadn't designed for (we did not expect neuronunit to generate a LEMS wrapper, so that went completely over our collective heads), any model changes required fall under the realm of compatibility with those LEMS/NeuroML-to-NEURON conversion libraries, so that can continue as a separate conversation.

cc: @scrook @NeZanyat @ddelpiano


gidili commented Jul 3, 2019

From @pgleeson:

Key thing to remember is all of NML is LEMS (as there are Lems ComponentType definitions for all the elements in the core language hiding in the Cells.xml etc.) but not all of Lems is NML... The key part of Lems that OSB doesn't recognise is the <Simulation> element; it generates one of these on the server side from the values you put in the run control panel...
So to run this type of example you need to give OSB a file containing a <network> element, wrapped in <neuroml>, and it will be able to do the rest (as long as any new Lems ComponentType definitions are included with <include>, see for example https://github.com/OpenSourceBrain/MejiasEtAl2016/blob/master/NeuroML2/Interareal.net.nml#L19)
In theory we could redo OSB to be able to accept the LEMS*.xml file, search for a <network> and use that, but it would be too confusing as the <Simulation> element would be ignored...
So the key thing here is to separate the <network> into a new file (which is included in the main Lems file) and use that... The solution would be to duplicate this template file: https://github.com/NeuroML/pyNeuroML/blob/master/pyneuroml/analysis/LEMS_Test_TEMPLATE.xml#L96, break out the <network> bit, and fill in the template for both the LEMS*.xml and *.nml files (https://github.com/NeuroML/pyNeuroML/blob/master/pyneuroml/analysis/NML2ChannelAnalysis.py#L226), and then you have a nice pure nml file to run on OSB
Not sure how that fits into the rest of your pipeline, but happy to merge a PR with this into pyNeuroML and make a new release

So from the sounds of it, even if we finish up the pipeline part, this will still require changes on the LEMS generation side to structure things so they are compatible with the OSB pipeline, possibly with changes to both neuronunit and pyNeuroML to make it work.
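
To make the proposed split concrete, a minimal sketch (illustrative only, not tested against real pyNeuroML output; it assumes the <network> sits at the top level of the generated LEMS file and skips the xmlns attributes a real NeuroML document needs):

import xml.etree.ElementTree as ET

def split_network_out(lems_path, nml_out_path):
    # Pull the <network> out of a generated LEMS_*.xml into its own
    # NeuroML file, then reference it from the LEMS file via <Include>.
    tree = ET.parse(lems_path)
    root = tree.getroot()
    network = root.find('network')
    if network is None:
        raise ValueError('no <network> element found in %s' % lems_path)
    root.remove(network)
    neuroml = ET.Element('neuroml')  # a real file also needs xmlns/id attributes
    neuroml.append(network)
    ET.ElementTree(neuroml).write(nml_out_path)
    ET.SubElement(root, 'Include', file=nml_out_path)
    tree.write(lems_path)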

@rgerkin Given this info I am proposing to abort this particular mission (we could modify the pipeline, but we cannot test it without those other changes mentioned above, meaning it cannot happen in the very short term), release without support for that example for now, and focus on getting everything else ticking along. Anything that can be triggered off a pure NeuroML file as the simulation entry point, with setup_protocol modifying the NeuroML, will work fine (assuming the model is well formed), as designed. At least we know what it takes to add support for the other case when we have scope to embark on that task in the near future.


rgerkin commented Jul 3, 2019

I am happy to give up on supporting this example, but would like to be in a position to support whatever the approach of the near future will be.

In the meantime, will it work (for a user, currently, with no additional changes) to put the URL of the wrapped LEMS file (i.e. the one made by the pynml ChannelAnalysis module) into the site, select LEMSModel as the model type, and run a (new) test whose setup_protocol does absolutely nothing? I would think this could work, since then the model is the same file throughout, but I don't know if I am missing something.
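
Something like this, presumably (a sketch: the import path and subclass name are guesses, and orig_lems_file_path is the attribute mentioned earlier in the thread):

from neuronunit.tests.channel import IVCurveTest  # assumed location

class PrewrappedIVCurveTest(IVCurveTest):
    """Hypothetical test for models registered as already-wrapped LEMS files."""

    def setup_protocol(self, model):
        # The registered URL already points at the fully wrapped LEMS file,
        # so there is nothing to generate; just reuse it as-is.
        model.lems_file_path = model.orig_lems_file_path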


gidili commented Jul 4, 2019

@rgerkin we have the code for the SciDash pipeline changes in a branch, so whenever the approach changes in the LEMS file generation (plus those pyNeuroML changes) we have that code as a functional starting point (some refactoring will be required, though, because we had to jump through some hoops and cut some corners, and there's some non-ideal stuff in there). The flow is actually working, but as expected the simulation fails, in line with those comments from Padraig. So yeah, it's already possible in terms of what happens within the boundaries of the SciDash app itself, but I am reluctant to throw it in now: I don't want to risk destabilizing the build for something we can't effectively use yet.

It already works with LEMS as an entry point if you have tests that are compatible with the particular LEMS model (currently checked during scheduling via sciunit compatibility checks), and this is a bonus compared to what we had originally planned to go with (nml files only). It was so close we decided to go the extra mile; maybe in hindsight it wasn't such a good decision given the domino effect it triggered... but it's research software after all, so taking risks means we learn something, and learn we did! 😄 The only constraint will again be in relation to Padraig's points above, but the pipeline as a passthrough already works 👌


pgleeson commented Jul 4, 2019

Noticed that some of the quoted text above was missing info on NeuroML/LEMS element names due to bracketed text getting hidden, so here's the original info from Slack:

[screenshot of the original Slack message]

I do think the changes to get pynml ChannelAnalysis to make a LEMS+NML file rather than one LEMS file with everything should be straightforward, and I can do that and test it over the next week if you like, but if you're moving on to other things, I have other work to keep me busy...


gidili commented Jul 4, 2019

I do think the changes to get pynml ChannelAnalysis to make a LEMS+NML file rather than one LEMS file with everything should be straightforward

Thanks @pgleeson (and thanks for reposting, I hadn't noticed that markup was lost). The problem with the above is that it requires some changes to the SciDash pipeline, at the very least to handle the extra files, and probably some integration with neuronunit in the way it uses pyNeuroML and exposes these newly generated files. I think it's a good idea to schedule even a 30-minute meeting next week to explore this and coordinate with @rgerkin too, but I honestly don't think we should rush it into this release, as it requires changes in three different code bases and a bunch of testing after that; best case we are talking a couple of weeks (worst case a month) to get everything working and integrated.

@rgerkin @pgleeson you ok if I throw a tentative block on the calendar for next week?


rgerkin commented Jul 5, 2019

I will be unavailable next week but maybe the week after that. It would be good for us all to better understand the roadmap and use cases for NeuroML/LEMS stuff going forward, so I would like to have the meetup. I don't expect anything to make it into the current SciDash work, though, since we are wrapping things up. But it would probably help all of us in the future.


pgleeson commented Jul 5, 2019

Sure. Next week is ok for me so let me know @gidili if you would like a quick chat.

The week after is CNS. I can have a chat with Sharon there re SciDash, but it would be good to have a hangout the week after that, go through where it's at (would be great to get an updated demo), and start thinking about how it relates to other NML/OSB plans.


gidili commented Jul 5, 2019

Cool @pgleeson, we can have a brief chat next week; I can show you some stuff and we can discuss steps forward to integrate this kind of example, and we can leave some notes for @rgerkin to enjoy when he's back 👍 I'll be off myself the last two weeks of July (15-30) 🌞🌊
