The Average Builder Form


In the following picture you can see how the Average Builder form appears.

It is organized into groups of settings that will be described one by one. Although it might appear a bit complex at first, you will soon become familiar with it and will probably (and hopefully) appreciate the way it is structured and its flexibility. The reason so many parameters are available is that this tool was also designed to quickly compare different averages computed from the same file. It therefore has to be both flexible and fast, so that users can easily and quickly investigate and verify their own hypotheses.


It is divided into 7 main groups: Segmentation, Moving Range, Baseline Correction, Line Attributes, Conditions, Artifact Rejection and Measures. Each of them will now be described in detail.


1) The Segmentation group deals with the way the segmentation process is performed, in terms of triggering-event selection and time interval. Triggering Events are Spot events (the list contains those read from the opened file, together with their quantities) that the user can select (at least one, possibly more) to compute an average. Time Interval indicates how many milliseconds of signal have to be averaged (the Duration parameter, 1000 ms in the figure) and the starting position relative to the triggering events (250 ms before the triggering event in the figure). The Conditioning States list, which includes all the State events stored in the file, can be used to select only those triggers that occur during a given group of states. For example, one can select only those triggers that occurred while the subject had the eyes closed or, as in the case of the figure, which refers to a classical visual P300 BCI Speller, while the subject was looking at the letter “Z”. In this way it is possible to quickly select just a subset of triggers. If no conditioning states are selected, which is the most common situation, all the triggers will be considered for the average computation (whether they are actually used also depends on the settings of the other groups of options).
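The segmentation step can be pictured as slicing fixed-length epochs out of the continuous recording around each selected trigger. The following is a minimal NumPy sketch of that idea, not the NPX Lab implementation; the function name, parameters and example values (256 Hz sampling rate) are illustrative only.

```python
import numpy as np

def extract_epochs(data, trigger_samples, fs, duration_ms=1000, pre_ms=250):
    """Slice fixed-length epochs around trigger events.

    data: 2-D array (channels x samples); trigger_samples: sample indices
    of the selected Spot events; fs: sampling rate in Hz. Each epoch starts
    pre_ms milliseconds before its trigger and lasts duration_ms in total.
    Triggers too close to the file borders are skipped.
    """
    pre = int(round(pre_ms * fs / 1000))
    length = int(round(duration_ms * fs / 1000))
    epochs = []
    for t in trigger_samples:
        start = t - pre
        if start >= 0 and start + length <= data.shape[1]:
            epochs.append(data[:, start:start + length])
    return np.stack(epochs)  # trials x channels x samples

# Two channels of a ramp signal, triggers at samples 100 and 300, fs = 256 Hz
sig = np.tile(np.arange(1024, dtype=float), (2, 1))
ep = extract_epochs(sig, [100, 300], fs=256, duration_ms=1000, pre_ms=250)
print(ep.shape)  # (2, 2, 256): 2 trials, 2 channels, 1 s at 256 Hz
```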


2) The Moving Range group was implemented to test some Brain-Computer Interface protocols. Users should normally not use it unless they clearly know and understand how it works. It lies somewhere between normal averaging and single-trial analysis: it allows you to create and navigate through sub-averages formed by a fixed, user-defined number of trials (Nr. Trials). For example, if one has 100 triggers and selects 10 as the number of trials, one can compute an average of trials 1-10, then switch to another average of 10 trials, and so on. This has proven useful in some Brain-Computer Interface protocols (visual P300) when a quick investigation of the (sub)optimal number of trials needed to make the system work efficiently was required. If you are interested in this procedure, please contact us through the braINterface website.
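Conceptually, the Moving Range mode partitions the trial sequence into consecutive blocks of Nr. Trials epochs and averages each block separately. A minimal NumPy sketch of that partitioning (illustrative names, not the NPX Lab code; the tail block is simply dropped here when incomplete):

```python
import numpy as np

def sub_averages(epochs, n_trials):
    """Split epochs (trials x channels x samples) into consecutive blocks
    of n_trials and average each block; an incomplete tail block is dropped."""
    n_blocks = epochs.shape[0] // n_trials
    blocks = epochs[:n_blocks * n_trials].reshape(
        n_blocks, n_trials, *epochs.shape[1:])
    return blocks.mean(axis=1)  # blocks x channels x samples

# 100 trials, 4 channels, 50 samples -> 10 sub-averages of 10 trials each
rng = np.random.default_rng(0)
eps = rng.standard_normal((100, 4, 50))
subs = sub_averages(eps, 10)
print(subs.shape)  # (10, 4, 50)
```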


3) The Baseline Correction group allows you to set a baseline (computed on each trial) for vertically aligning your signals. This is a classical technique in ERP analysis. One can specify the time interval for its computation (expressed in milliseconds) or use the whole trial length. For each sensor, the mean value is computed over the selected time interval and then subtracted from the whole trial.
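The per-trial, per-sensor subtraction described above can be sketched as follows. This is an illustrative NumPy snippet under the assumption that epochs start 250 ms before the trigger (as in the figure); function and parameter names are hypothetical.

```python
import numpy as np

def baseline_correct(epochs, fs, t0_ms, t1_ms, epoch_start_ms=-250):
    """Subtract from every trial and channel its mean over the baseline
    interval [t0_ms, t1_ms], given in ms relative to the trigger.
    epoch_start_ms is the epoch start relative to the trigger."""
    i0 = int(round((t0_ms - epoch_start_ms) * fs / 1000))
    i1 = int(round((t1_ms - epoch_start_ms) * fs / 1000))
    baseline = epochs[:, :, i0:i1].mean(axis=2, keepdims=True)
    return epochs - baseline

# One trial, one channel, constant 5 uV offset: baseline removal zeroes it
ep = np.full((1, 1, 256), 5.0)
out = baseline_correct(ep, fs=256, t0_ms=-250, t1_ms=0)
print(out.max())  # 0.0
```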


4) The Line Attributes group simply provides the opportunity to set the color and style of the traces. As two averages can be visualized simultaneously, it is better to draw them with different colors or line styles so that they can be more easily distinguished. These settings will also affect printing.


5) The Conditions group allows you to select many different criteria to include or exclude triggers from the average computation. Spot conditions are mostly used for push buttons (e.g. accept or reject a trial if a button was pressed within a time interval relative to the triggering event), while State conditions allow you to accept or reject trials if, for example, a light in the room was turned on.
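A Spot condition of the push-button kind amounts to a simple temporal test per trial. The sketch below shows the idea with illustrative, hypothetical values (a 100-800 ms acceptance window after the trigger); it is not the NPX Lab implementation.

```python
def trial_accepted(trigger_ms, response_events_ms, window_ms=(100, 800)):
    """Accept a trial only if at least one response (Spot) event falls
    within window_ms relative to the trigger (hypothetical window values)."""
    lo, hi = window_ms
    return any(lo <= e - trigger_ms <= hi for e in response_events_ms)

print(trial_accepted(1000, [1450, 2300]))  # True: 450 ms after the trigger
print(trial_accepted(5000, [5950]))        # False: 950 ms is outside the window
```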

6) The Artifact Rejection group is devoted to the settings of the strategy employed to deal with artifacts. There are 5 different ways to handle artifacts, three of them based on an Amplitude Detection Criteria, typical of on-line averaging techniques. In this case, an artifact is detected every time a certain threshold value is reached or exceeded. One can set the range of values for the detection (Max and Min values; the latter is used only if the Absolute value checkbox is unchecked, otherwise the signals are first rectified by applying the absolute value operator and then, as they are now non-negative, only the threshold defined in Max is checked). Another option (Rejection criteria) allows this check to be performed on all the channels or just on a subset of them. This is useful when one very noisy channel would, if selected, cause most of the trials to be rejected. The 5 options are:

    1. None: no artifact rejection is performed;
    2. Use all trial length: whenever an artifact is detected according to the Amplitude Detection Criteria box anywhere within the whole trial duration, the trial is discarded and will not contribute to the average;
    3. User defined interval: the same procedure is applied only to a specific time range. Imagine that you have a very noisy signal and that you are interested in analyzing just the P300 component peak. Why discard the whole trial if, for example, just the time interval 250-450 ms is of interest? The Amplitude Detection Criteria settings are also used here. Note that if one sets the same time interval used for the segmentation, the result will be the same as with the option “Use all trial length”, which exists just because it is faster to use (just a mouse click);
    4. Partial Trial by Threshold implements another strategy, which uses the Amplitude Detection Criteria settings and should be used when the number of trials is very small. When this option is chosen, instead of removing the whole trial, only those samples that exceed the threshold value are discarded, thus preserving part of the trial. This option was originally implemented to analyze EEG/ERP data acquired simultaneously with fMRI in a very noisy environment (artifacts of many millivolts in amplitude!);
    5. Partial Trial by Event implements the same strategy described above (removal on a per-sample basis) but uses events (States) instead of amplitudes to decide which samples to discard. If a portion of a trial contains a specific event (e.g. “Artifact”), then only that portion of the trial is discarded. To use this option it is therefore recommended to analyze the raw EEG data, mark the traces for artifacts, either manually or automatically with the Artifact Inserter facility, and then compute the averages. Note that when using one of the two Partial Trial rejection strategies, the computed averages might appear unnaturally noisy (e.g. small steps may be present) because the number of trials used for each sample generally varies. However, because the standard deviation is in any case computed for each sample and each sensor, and because the statistical facilities operate on a per-sample basis, these data can still be used for t-test statistical analysis. Moreover, preserving a greater number of samples improves the statistics: a large population is better than a small one!
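The per-sample logic behind the Partial Trial by Threshold strategy (and the varying trial counts that can produce the small steps mentioned above) can be sketched as follows. This is a minimal NumPy illustration under the Absolute value assumption, with hypothetical names, not the NPX Lab code.

```python
import numpy as np

def partial_trial_average(epochs, max_amp, absolute=True):
    """'Partial Trial by Threshold' sketch: mask only the samples whose
    (rectified) amplitude exceeds max_amp, then average each sample over
    the trials that survive at that sample. Per-sample trial counts vary,
    which is why such averages can show small steps."""
    vals = np.abs(epochs) if absolute else epochs
    valid = vals <= max_amp                      # trials x channels x samples
    counts = valid.sum(axis=0)                   # surviving trials per sample
    summed = np.where(valid, epochs, 0.0).sum(axis=0)
    avg = np.where(counts > 0, summed / np.maximum(counts, 1), 0.0)
    return avg, counts

# 3 trials, 1 channel, 4 samples; one sample of trial 2 is a huge artifact
eps = np.array([[[1.0, 1.0, 1.0, 1.0]],
                [[1.0, 500.0, 1.0, 1.0]],
                [[1.0, 1.0, 1.0, 1.0]]])
avg, counts = partial_trial_average(eps, max_amp=100.0)
print(counts[0])  # [3 2 3 3]: the artifact sample is averaged over 2 trials only
```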


The Measures....



Once you have set all the parameters in your Average Builder form, you are ready to save the new average macro (press the Save As button) and return to the Average Manager form, select the desired macro and press the Automatic button to compute your first average.

After the average has been computed, the NPX Lab program is populated with five new windows (Views), and the main toolbar gains five buttons, one for each of them. Pressing one of these buttons activates and maximizes the corresponding window. The Views are:

1)  The Averages View, which allows you to review, analyze, compare, select and perform many analyses on the computed Averages (including single-trial selection/rejection and manual trigger adjustment).

2)  The Potential Map View, which allows you to display an instantaneous or a mean map of an average.

3)  The Spectral Map View, which allows you to visualize spectral maps of the ERPs.

4)  The Spectrum View, which allows you to perform spectral analysis on the ERPs (on a per-trial basis) and to compare different spectra.

5)  The Cartoon View, which allows you to visualize potential and statistical maps at different time instants.


The following paragraphs will illustrate these Views, their features and how to use them in more detail.
