Added as an issue for bookkeeping.
Source: https://www.brainhack-krakow.org/projects
Team Leaders:
Adam Sobieszek, Hubert Plisiecki / [email protected]
GitHub: AdamSobieszek
Abstract:
We recently proposed an architecture for generating EEG signals, the Signal Space Generative Adversarial Network (SigS-GAN), which learns a latent space representation of the signals it is trained on. We impose a regularization on these latent representations that makes them useful for understanding and predicting the processes visible in the EEG activity.
The regularization (an extension of Path-Length Regularization to the frequency domain) encourages a latent space in which the distance between two points approximately corresponds to a measure of distance between the two signals the generator would produce from those points. This is useful because it (a) adds smoothness to the representation, so that similar signals correspond to nearby points; (b) makes directions in latent space correspond to meaningful features of the signals, which simplifies description and classification; and (c) enables a new kind of EEG analysis, in which one analyzes, in the latent space, the differences between points corresponding to signals that, for example, led to two different decisions.
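For intuition, the standard (time-domain) path-length penalty can be sketched in PyTorch as below. The toy generator is an illustrative assumption, not the project's architecture; a frequency-domain extension might, for instance, apply the penalty to the signals' spectra (e.g. via torch.fft.rfft) rather than the raw samples, though the project's exact formulation is not specified here.

```python
import torch
import torch.nn as nn

# Hypothetical toy generator standing in for the SigS-GAN generator:
# it maps a latent vector to a 1-D "signal". Sizes are illustrative.
latent_dim, signal_len = 16, 64
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.Tanh(),
                          nn.Linear(128, signal_len))

def path_length_penalty(gen, z, target=1.0):
    """Time-domain path-length penalty: penalize deviation of
    ||J^T y|| from a target, where J is the generator Jacobian at z
    and y is scaled random noise (Jacobian-vector product via autograd)."""
    z = z.detach().requires_grad_(True)
    signals = gen(z)
    noise = torch.randn_like(signals) / (signals.shape[1] ** 0.5)
    (grad,) = torch.autograd.grad(outputs=(signals * noise).sum(),
                                  inputs=z, create_graph=True)
    lengths = grad.norm(dim=1)          # one path length per latent point
    return ((lengths - target) ** 2).mean()

z = torch.randn(8, latent_dim)
penalty = path_length_penalty(generator, z)
print(float(penalty) >= 0.0)
```

The penalty is differentiable (create_graph=True), so it can be added to the generator loss during training.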
The goal of this project is to develop the architecture and create the analysis methods and tools needed to pursue this last opportunity (c): a new way of analyzing EEG. We will brainstorm which modifications to the present architecture would make latent space analysis of EEG signals easier and more fruitful, implement them, and train the networks on several datasets from ERP studies in which participants made different types of decisions while processing emotional words. We will then apply the developed methods to explain which differences in electrical activity correlate with different decisions, and try to predict those decisions on data unseen by the model. The techniques developed as part of this project could lead to a scientific publication.
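As a minimal sketch of what the latent-space decision analysis might look like, the NumPy example below compares synthetic latent codes from two hypothetical decision conditions; the condition labels, dimensions, and the difference-of-means "decision direction" are all illustrative assumptions, not the project's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical latent codes for trials from two decision conditions
# (in the real project these would come from encoding EEG epochs).
cond_a = rng.normal(0.0, 1.0, size=(100, 16))
cond_b = rng.normal(0.3, 1.0, size=(100, 16))  # shifted along every axis

# The difference of condition means gives a candidate "decision direction".
direction = cond_b.mean(axis=0) - cond_a.mean(axis=0)
direction /= np.linalg.norm(direction)

# Project trials onto it and check separation with a midpoint threshold.
proj_a = cond_a @ direction
proj_b = cond_b @ direction
threshold = (proj_a.mean() + proj_b.mean()) / 2
accuracy = ((proj_a < threshold).mean() + (proj_b > threshold).mean()) / 2
print(round(float(accuracy), 2))
```

An above-chance accuracy on held-out trials would suggest the latent direction captures decision-related structure; a real analysis would add cross-validation and a permutation test.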
List of materials:
[1] Path-Length Regularization:
https://paperswithcode.com/method/path-length-regularization
https://arxiv.org/pdf/1912.04958.pdf
[2] EEG and GANs:
https://arxiv.org/abs/1806.01875
https://www.sciencedirect.com/science/article/abs/pii/S0208521621001273?via%3Dihub
List of requirements for taking part in the project:
Either knowledge of Python (we’ll use PyTorch for the neural net) or mathematics (linear algebra, multivariate calculus), as we’ll spend some time working out Fourier-analysis-based regularization terms and statistical approaches to the analysis of a trained latent space.
It is not required to be proficient in the topics discussed in the abstract (GANs, path-length regularization, latent space representations), as we will spend some time at the beginning of the project acquainting ourselves with them.
Maximal allowed number of team members: 10