
Reconstruction matrix blending #4

Open
lkorczowski opened this issue Dec 2, 2019 · 12 comments
Labels: enhancement (New feature or request)

@lkorczowski
Collaborator

lkorczowski commented Dec 2, 2019

EDIT:
Following the 2020-01-15 meeting, we decided to investigate the blending option and see what would be the best way to implement it. Two main options have been discussed:

  • post-RASR blending: a Timeflux node blends two outputs of the RASR (one with the new reconstruction matrix, the other with the old one) to smooth the signals and avoid discontinuities. Requirement: the node must have access to the reconstruction matrix and apply the filter itself (not using the output of RASR.transform()). Advantage: may be useful for other temporal or spatial filtering.
  • built-in RASR blending: RASR gets a new parameter enabling blending. By default, blending will NOT be applied. If enabled, the reconstruction matrix will be saved as a new attribute of the RASR class and blending will be applied to the first epoch after the reconstruction matrix is updated.

EDIT2:
Following #5 and the drop of 2D compatibility, post-RASR blending seems the best option.
See #4 (comment)


Hi,

I'm struggling a bit to see whether I'm being stupid or there are some issues with the current Matlab implementation of the RASR process (transform for Python).

First, I can't figure out why, in asr_process, there is this weird "reconstruction to intermediate samples" step; it doesn't make sense to me.

        % do the reconstruction in intervals of length stepsize (or shorter at the end of a chunk)
        last_n = 0;
        for j=1:length(update_at)
            % apply the reconstruction to intermediate samples (using raised-cosine blending)
            n = update_at(j);
            if ~trivial || ~state.last_trivial
                subrange = range((last_n+1):n);
                blend = (1-cos(pi*(1:(n-last_n))/(n-last_n)))/2;
                data(:,subrange) = bsxfun(@times,blend,R*data(:,subrange)) + bsxfun(@times,1-blend,state.last_R*data(:,subrange));
            end
            [last_n,state.last_R,state.last_trivial] = deal(n,R,trivial);
        end

R being NOT updated in this loop, doing

data(:,subrange) = bsxfun(@times,blend,R*data(:,subrange)) + bsxfun(@times,1-blend,state.last_R*data(:,subrange));

sounds exactly like doing data(:,subrange) = R*data(:,subrange) in a fancier (and less efficient) way. Did I miss something?

I think this line of code was supposed to do some kind of resampling interpolation, but that doesn't make sense because there is no resampling. Another possibility: it was meant to update R in the loop for overlapping windows (when you have to remove the contribution of the already-filtered signal in the previous range). Anyway, I'm just putting this here because in the last commit I considered that whole section useless.
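To illustrate the point, here is a small NumPy sketch (variable names mirror the Matlab snippet; this is an illustration, not project code) showing that when last_R equals R, the raised-cosine blend collapses to a plain R*data:

```python
import numpy as np

# Raised-cosine blend weights over a window of n samples, as in asr_process.m:
# blend = (1-cos(pi*(1:n)/n))/2
n = 8
blend = (1 - np.cos(np.pi * np.arange(1, n + 1) / n)) / 2

rng = np.random.default_rng(0)
X = rng.standard_normal((4, n))   # channels x samples
R = rng.standard_normal((4, 4))   # current reconstruction matrix
last_R = R.copy()                 # R is never updated inside the Matlab loop

# blend .* (R*data) + (1-blend) .* (last_R*data)
blended = blend * (R @ X) + (1 - blend) * (last_R @ X)

# When last_R == R, the two terms collapse to R @ X exactly
assert np.allclose(blended, R @ X)
```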

@lkorczowski
Collaborator Author

lkorczowski commented Dec 2, 2019

The only time
data(:,subrange) = bsxfun(@times,blend,R*data(:,subrange)) + bsxfun(@times,1-blend,state.last_R*data(:,subrange));
actually does something is at the beginning of the loop (j=1), that's all. And I still don't understand why :/

@lkorczowski
Collaborator Author

Not as important, but still...

        % use eigenvalues in descending order
        [D, order] = sort(reshape(diag(D),1,C));

is actually in ascending order... (there's a lot of weird stuff like this everywhere)
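For reference, the default sort order is indeed ascending; a quick NumPy check (analogous to Matlab's sort):

```python
import numpy as np

# Like Matlab's sort(), numpy's sort/argsort return ASCENDING order by default,
# despite the "descending order" comment in the original code.
D = np.array([3.0, 1.0, 2.0])
order = np.argsort(D)
assert np.array_equal(D[order], np.array([1.0, 2.0, 3.0]))  # ascending

# A true descending sort needs an explicit reversal:
desc = np.sort(D)[::-1]
assert np.array_equal(desc, np.array([3.0, 2.0, 1.0]))
```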

@lkorczowski lkorczowski added the question Further information is requested label Dec 2, 2019
@lkorczowski lkorczowski mentioned this issue Dec 12, 2019
9 tasks
@lkorczowski
Collaborator Author

lkorczowski commented Dec 12, 2019

Decision required here:
@bertrandlalo @sylvchev @mesca

  • Should we implement this?

  • If yes, should it be in transform() or in another function like "adaptive_transform()", since the reconstruction matrix R will be saved at each call of transform() and therefore it is not sklearn-friendly?

@lkorczowski lkorczowski added the help wanted Extra attention is needed label Dec 12, 2019
@mesca
Member

mesca commented Dec 17, 2019

Still not sure of what "this" exactly refers to.

We should stick to the sklearn API, so in any case let's keep the code in transform(). If the adaptive mode is needed, it should be a boolean parameter of the constructor.

@lkorczowski
Collaborator Author

lkorczowski commented Dec 17, 2019

As I understand it, the code

        blend = (1-cos(pi*(1:(n-last_n))/(n-last_n)))/2;
        data(:,subrange) = bsxfun(@times,blend,R*data(:,subrange)) + bsxfun(@times,1-blend,state.last_R*data(:,subrange));

makes an interpolation between the reconstruction of the data with the current reconstruction matrix R and with the previous one, last_R.

It may smooth the signal between two successive calls of RASR.transform(). Indeed, the reconstruction matrix R depends directly on the statistics of the signal, so each successive call of RASR.transform() gives a different R. Discontinuities will therefore appear at the edges of the epochs, and they can be smoothed with blending.
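A minimal NumPy sketch of that interpolation (the helper name blend_transition is hypothetical): raised-cosine weights ramp from the old matrix to the new one across the epoch, so the first samples are reconstructed (almost) with last_R and the last sample exactly with R.

```python
import numpy as np

def blend_transition(X, R, last_R):
    """Ramp from last_R to R across the epoch with raised-cosine weights (0 -> 1)."""
    n = X.shape[1]
    w = (1 - np.cos(np.pi * np.arange(1, n + 1) / n)) / 2
    return w * (R @ X) + (1 - w) * (last_R @ X)

rng = np.random.default_rng(1)
X = rng.standard_normal((2, 100))
last_R = np.eye(2)                            # previous reconstruction matrix
R = np.array([[-1.0, 1.0], [1.0, 1.0]])       # new reconstruction matrix

Y = blend_transition(X, R, last_R)
# The last sample is reconstructed exactly with R, since w[-1] == 1
assert np.allclose(Y[:, -1], R @ X[:, -1])
```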

@mesca
Member

mesca commented Dec 17, 2019

Oh, I remember our discussion now :) I think we should implement it, because of the weird artifacts that will appear otherwise, but I am open to counter-arguments.

@lkorczowski
Collaborator Author

OK, I'll implement the blending with a condition on whether the reconstruction matrix changed.

I'll close this issue once implemented.

@sylvchev
Collaborator

I'd be a little more cautious about this, since EEGLab did not respond to your comments on the Matlab implementation.
I think it could be a parameter of transform(), like blending with a default true value to mimic the Matlab code. I have the feeling that we could get rid of this; it would be easier with a change of default behavior.

@lkorczowski
Collaborator Author

Noted:

  • it will be an option at initialization (sklearn doesn't like additional criteria)
  • default behaviour will be without blending (sklearn-friendly, and the required behaviour during standard cross-validation)
  • adaptative=True in RASR will save the last reconstruction matrix R and apply blending (if necessary).

This behaviour won't be in sprint2 but in sprint3.

@lkorczowski lkorczowski self-assigned this Jan 8, 2020
@lkorczowski
Collaborator Author

lkorczowski commented Jan 13, 2020

Comment:

Due to the current behaviour in Timeflux, each epoch = 1 reconstruction matrix. In the case of sliding windows for epoching, several epochs may share the same timestamps but have different resulting signals after RASR (because the reconstruction matrix changed). Blending as proposed in the Matlab implementation might be a great solution in Timeflux to merge redundant-timestamp signals when required. It may be either an option within RASR or managed outside RASR (e.g. a blending node).

@lkorczowski lkorczowski changed the title Help needed to explain RASR process Matlab implementation Reconstruction matrix blending Jan 20, 2020
@lkorczowski lkorczowski added enhancement New feature or request and removed help wanted Extra attention is needed question Further information is requested labels Jan 20, 2020
@lkorczowski
Collaborator Author

lkorczowski commented Jan 20, 2020

Example of transition

[Screenshot: Capture d’écran 2020-01-20 à 15 52 08]

without (left) or with (right) blending. Top: input signal. Bottom: blended signal. Alternating between two reconstruction matrices (identity and [-1,1;1,1]).

Pros of blending:

  • no discontinuity
  • smooth

Cons of blending:

  • longer transitory state when the reconstruction matrix is updated (the length is a parameter)
  • if an artefact is present at the transition, it might leak into both epochs (instead of being removed completely from one epoch and kept in the other)
  • the transitory period of blending may induce spurious synchrony / spurious frequency modulation

Added to backlog #15

@lkorczowski lkorczowski mentioned this issue Feb 11, 2020
7 tasks
@lkorczowski
Collaborator Author

Following #5:

  • if we only manage 3D matrices, blending can be handled entirely outside of RASR()

To do so:

  • input: continuous EEG 2D matrix X with timestamps vector t
  • one node in Timeflux does the epoching X, t -> X_k, t_k with overlap
  • X_k is sent to RASR.transform() -> Xrasr_k
  • a blending node takes Xrasr_k and t_k as input and returns Xrasrblended_k, t_k OR reconstructs Xrasrblended, t as a 2D matrix (optional)
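The epoching step above can be sketched as follows (the epoch helper is hypothetical, assuming X is channels x samples); note how overlapping epochs share timestamps, which is exactly what the blending node would key on:

```python
import numpy as np

def epoch(X, t, size, step):
    """Sliding-window epoching: (C, T) -> (K, C, size), with timestamps per epoch."""
    starts = range(0, X.shape[1] - size + 1, step)
    X_k = np.stack([X[:, s:s + size] for s in starts])
    t_k = np.stack([t[s:s + size] for s in starts])
    return X_k, t_k

X = np.arange(20, dtype=float).reshape(2, 10)   # 2 channels, 10 samples
t = np.arange(10)
X_k, t_k = epoch(X, t, size=4, step=2)

assert X_k.shape == (4, 2, 4)                       # 4 overlapping epochs
assert np.array_equal(t_k[0][2:], t_k[1][:2])       # consecutive epochs share timestamps
```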

PRO:

  • a blending node can manage many more outputs than RASR(), e.g. PCA
  • users can compare in real time the output of RASR with/without blending

CON:

  • an independent blending node is more time-consuming and adds a supplementary delay
  • it requires timestamps to merge properly
  • it is a bit complex to manage all the use cases that are not directly related to RASR()
  • it is more memory-intensive, e.g. we need to save the last epoch X_k and last timestamps t_k as attributes (if managed inside RASR, only the last reconstruction matrix was required).

I can write down the code of a blending class, compatible with the sklearn API.
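A possible starting point, assuming 3D input of shape (n_epochs, n_channels, n_samples) with matching, monotonically increasing timestamps. The class name Blending and the extra t argument to transform() are assumptions for this sketch, not an existing API:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class Blending(BaseEstimator, TransformerMixin):
    """Hypothetical sketch: raised-cosine blending of consecutive filtered epochs.

    Keeps the previous epoch and its timestamps as attributes, and blends the
    samples whose timestamps overlap with the previous epoch.
    """

    def fit(self, X, y=None):
        self.last_epoch_ = None
        self.last_t_ = None
        return self

    def transform(self, X, t=None):
        # X: (n_epochs, n_channels, n_samples); t: (n_epochs, n_samples) timestamps
        out = []
        for k in range(X.shape[0]):
            epoch = X[k].copy()
            if self.last_epoch_ is not None and t is not None:
                # samples shared with the previous epoch (assumes sorted timestamps)
                overlap = np.isin(t[k], self.last_t_)
                n = int(overlap.sum())
                if n > 0:
                    w = (1 - np.cos(np.pi * np.arange(1, n + 1) / n)) / 2
                    prev = self.last_epoch_[:, np.isin(self.last_t_, t[k])]
                    epoch[:, overlap] = w * epoch[:, overlap] + (1 - w) * prev
            self.last_epoch_ = X[k]
            self.last_t_ = t[k] if t is not None else None
            out.append(epoch)
        return np.stack(out)
```

Passing t to transform() bends the strict sklearn signature; an alternative would be to carry the timestamps alongside X in the Timeflux node that wraps this class.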
