
Using Explainable Artificial Intelligence (XAI) to create a real-time closed-loop stimulation #180

Open
1 of 5 tasks
HussainAther opened this issue Dec 7, 2021 · 4 comments

Comments


HussainAther commented Dec 7, 2021

Title

Using Explainable Artificial Intelligence (XAI) to create a real-time closed-loop stimulation

Leaders

Syed Hussain Ather (Twitter: @SHussainAther)

Collaborators

No response

Brainhack Global 2021 Event

BrainHack Toronto

Project Description

  • What are you doing, for whom, and why?

Like similar work in other fields (e.g., computer vision and machine learning) on established datasets, we propose a project to create a pipeline that applies explainable artificial intelligence (XAI) to a neurostimulation experimental and theoretical procedure. Given an input recording of brain signals from some source (most likely EEG data), the idea is to apply existing or novel XAI techniques (e.g., counter-factual probes) to a known neurostimulation paradigm, providing explanatory power for closed-loop neurobehavioral modulation. We hope this can be a step toward more innovative future work on real-time, closed-loop stimulation for deep brain stimulation (DBS), and that the pipeline can improve research aimed at modulating neural activity in real time. Frameworks of this kind can advance intelligent computational approaches able to sense, interpret, and modulate large amounts of data from behaviorally relevant neural circuits at the speed of thought.
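To make the loop concrete, here is a minimal, self-contained sketch of the record-decode-stimulate cycle such a pipeline would implement. Everything in it (the simulated EEG source, the band-power decoder, the threshold policy, and all function names) is a hypothetical placeholder for illustration, not the project's actual method.

```python
# Minimal sketch of a closed-loop cycle: record a window, decode a feature,
# decide whether to stimulate. All names and numbers are illustrative only.
import numpy as np

FS = 250            # assumed sampling rate (Hz)
WINDOW_S = 1.0      # length of each analysis window (s)

def simulate_eeg_window(rng, with_alpha, n_channels=8):
    """Stand-in for a real acquisition call: white noise, optionally plus a 10 Hz burst."""
    t = np.arange(int(FS * WINDOW_S)) / FS
    burst = 2.0 * np.sin(2 * np.pi * 10 * t) if with_alpha else 0.0
    return rng.normal(size=(n_channels, t.size)) + burst

def decode_state(window):
    """Toy decoder: mean 8-12 Hz band power across channels via an FFT."""
    spectrum = np.abs(np.fft.rfft(window, axis=1)) ** 2
    freqs = np.fft.rfftfreq(window.shape[1], d=1 / FS)
    band = (freqs >= 8) & (freqs <= 12)
    return spectrum[:, band].mean()

def choose_stimulation(alpha_power, threshold=1000.0):
    """Placeholder policy: stimulate when band power crosses a fixed threshold.
    In the proposed pipeline this decision would come from an XAI-audited model."""
    return "STIM_ON" if alpha_power > threshold else "STIM_OFF"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for step in range(6):                               # six simulated 1 s windows
        window = simulate_eeg_window(rng, with_alpha=(step % 2 == 0))
        power = decode_state(window)
        print(f"step {step}: alpha-band power {power:8.1f} -> {choose_stimulation(power)}")
```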

  • What makes your project special and exciting?

The use of artificial intelligence and machine learning in basic research and clinical neuroscience is increasing. AI methods enable the interpretation of large multimodal datasets that can provide unbiased insights into the fundamental principles of brain function, potentially paving the way for earlier and more accurate detection of brain disorders and better-informed intervention protocols. Despite AI's ability to create accurate predictions and classifications, in most cases it lacks the ability to provide a mechanistic understanding of how inputs and outputs relate to each other. Explainable Artificial Intelligence (XAI) is a new set of techniques that attempts to provide such an understanding; in this project we will explore some of these practical approaches.

  • How to get started?

Using one of the tools indexed at https://github.com/anguyen8/XAI-papers (most likely the Deep Visualization Toolbox, "DeepVis"), we hope to create a functioning pipeline that provides this explanatory power, modelled on Figure 2 of Fellous et al. (https://www.frontiersin.org/files/Articles/490966/fnins-13-01346-HTML-r1/image_m/fnins-13-01346-g002.jpg).
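The Deep Visualization Toolbox is a Caffe-based GUI, so the exact wiring will depend on the model we settle on. As a stand-in, the sketch below shows the kind of post-hoc explanation (gradient-times-input saliency) the pipeline could emit, using a toy linear classifier over made-up band-power features; every name, weight, and value in it is an assumption for illustration.

```python
# Minimal sketch of a post-hoc "gradient x input" saliency explanation on a toy
# linear classifier. Features, weights, and sample values are illustrative only;
# in the project they would come from the trained model in the pipeline.
import numpy as np

FEATURES = ["delta_power", "theta_power", "alpha_power", "beta_power", "gamma_power"]

# Pretend these weights were learned by the modelling step.
weights = np.array([0.1, -0.4, 1.2, 0.3, -0.8])
bias = -0.5

def predict_proba(x):
    """Logistic regression: probability that stimulation should be triggered."""
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))

def saliency(x):
    """Gradient-x-input attribution. For a linear logit the input gradient is the
    weight vector, so each feature's score is simply weight * feature value."""
    return weights * x

if __name__ == "__main__":
    sample = np.array([0.2, 0.5, 1.8, 0.7, 0.3])   # synthetic band-power features
    scores = saliency(sample)
    print(f"p(stimulate) = {predict_proba(sample):.2f}")
    for name, score in sorted(zip(FEATURES, scores), key=lambda s: -abs(s[1])):
        print(f"{name:12s} attribution {score:+.2f}")
```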

  • Where to find key resources?

Install the dependencies required by the Deep Visualization Toolbox (https://github.com/yosinski/deep-visualization-toolbox).

Link to project repository/sources

https://github.com/HussainAther/XAI

Goals for Brainhack Global

Goal: Create a functional, working pipeline that follows the three steps (pre-modelling, modelling, and post-modelling) from Figure 2 of Fellous et al. (https://www.frontiersin.org/files/Articles/490966/fnins-13-01346-HTML-r1/image_m/fnins-13-01346-g002.jpg); a pre-modelling sketch follows the checklist below.

    • Pre-modelling: Characterize input data
      • [ ]
      • [ ]
    • Modelling: Design explainable modelling architectures
      • [ ]
      • [ ]
    • Post-modelling: Extract explanations from output
      • [ ]
      • [ ]
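As a starting point for the pre-modelling step, here is a minimal sketch of what "characterize input data" could look like on a synthetic EEG-like array. The channel count, sampling rate, and summary statistics chosen here are assumptions for illustration, not project decisions.

```python
# Minimal pre-modelling sketch: summarize an EEG-like recording so the team can
# inspect its basic properties before designing an explainable model.
import numpy as np

def characterize(data, fs):
    """Return simple per-recording summaries a data report could start from."""
    n_channels, n_samples = data.shape
    summary = {
        "n_channels": n_channels,
        "duration_s": n_samples / fs,
        "per_channel_mean": data.mean(axis=1),
        "per_channel_std": data.std(axis=1),
    }
    # Crude spectral summary: fraction of power below 30 Hz per channel.
    spectrum = np.abs(np.fft.rfft(data, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n_samples, d=1 / fs)
    summary["low_freq_power_fraction"] = (
        spectrum[:, freqs < 30].sum(axis=1) / spectrum.sum(axis=1)
    )
    return summary

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fake_eeg = rng.normal(size=(8, 250 * 10))        # 8 channels, 10 s at 250 Hz
    report = characterize(fake_eeg, fs=250)
    print(f"channels: {report['n_channels']}, duration: {report['duration_s']:.0f} s")
    print("low-frequency power fraction per channel:",
          np.round(report["low_freq_power_fraction"], 2))
```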

Good first issues

  1. Issue one:

  2. Issue two:

Communication channels

https://mattermost.brainhack.org/brainhack/channels/brainhack-toronto

Skills

  • Python: intermediate
  • Git: intermediate

Onboarding documentation

No response

What will participants learn?

I imagine that, as at other and previous Brainhacks, we will all gain skills in collaboration, organization, communication, and team and project management at Brainhack 2021, all of which benefit any researcher interested in AI or related fields of programming and data.

Data to use

No response

Number of collaborators

3

Credit to collaborators

Project contributors are listed in the project README using the all-contributors GitHub bot.

Image

Leave this text if you don't have an image yet.

Type

pipeline_development

Development status

0_concept_no_content

Topic

data_visualisation

Tools

other

Programming language

Python

Modalities

fMRI

Git skills

0_no_git_skills

Anything else?

No response

Things to do after the project is submitted and ready to review.

  • Add a comment below the main post of your issue saying: Hi @brainhackorg/project-monitors my project is ready!
  • Twitter-sized summary of your project pitch.
@HussainAther (Author) commented:

Hi @brainhackorg/project-monitors my project is ready!

@HussainAther (Author) commented:

For Twitter:

Interested in harnessing the power of AI and finding solutions to problems in precision medicine? Join our team during Brainhack 2021 where we use the Deep Visualization Toolbox to do just that! Gain skills in simulation and data analysis that can be applied anywhere.

@HussainAther (Author) commented:

From fig. 1 here: (https://www.researchgate.net/publication/337930045_Explainable_Artificial_Intelligence_for_Neuroscience_Behavioral_Neurostimulation/figures?lo=1)

FIGURE 1 | An XAI-enabled closed-loop neurostimulation process can be described in four phases: (1) System-level recording of brain signals (e.g., spikes, LFPs, ECoG, EEG, neuromodulators, optical voltage/calcium indicators), (2) Multimodal fusion of neural data and dense behavioral/cognitive assessment measures. (3) XAI algorithm using unbiasedly discovered biomarkers to provide mechanistic explanations on how to improve behavioral/cognitive performance and reject stimulation artifacts. (4) Complex XAI-derived spatio-temporal brain stimulation patterns (e.g., TMS, ECT, DBS, ECoG, VNS, TDCS, ultrasound, optogenetics) that will validate the model and affect subsequent recordings. ADC, Analog to Digital Converter; AMP, Amplifier; CTRL, Control; DAC, Digital to Analog Converter; DNN, Deep Neural Network. XRay picture courtesy Ned T. Sahin. Diagram modified from Zhou et al. (2018).
