CBraMod: A Criss-Cross Brain Foundation Model for EEG Decoding
Paper • 2412.07236 • Published
Criss-Cross Brain Model for EEG Decoding, from Wang et al. (2025) [cbramod].
Architecture-only repository. Documents the `braindecode.models.CBraMod` class. No pretrained weights are distributed here. Instantiate the model and train it on your own data.
```bash
pip install braindecode
```
```python
from braindecode.models import CBraMod

model = CBraMod(
    n_chans=22,
    sfreq=200,
    input_window_seconds=4.0,
    n_outputs=2,
)
```
The signal-shape arguments above are illustrative defaults — adjust to match your recording.
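Before building the model, it can help to check that the window splits cleanly into temporal patches. A minimal sanity-check sketch in plain Python (no model required), mirroring the illustrative values above:

```python
# Sanity-check the signal-shape arguments before instantiating the model.
# Values mirror the example instantiation above; adjust to your recording.
sfreq = 200                 # sampling rate in Hz
input_window_seconds = 4.0  # length of one input window in seconds
patch_size = 200            # samples per temporal patch (model default)
n_chans = 22                # number of EEG channels

n_times = int(sfreq * input_window_seconds)   # samples per window
assert n_times % patch_size == 0, "window must split into whole patches"
n_patches = n_times // patch_size             # patches per channel

print(n_times, n_patches)   # 800 samples -> 4 one-second patches
```

With the defaults, a 4-second window at 200 Hz yields 800 samples, i.e. four one-second patches per channel.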
| Parameter | Type | Description |
|---|---|---|
| `patch_size` | `int`, default=200 | Temporal patch size in samples (200 samples = 1 second at 200 Hz). |
| `dim_feedforward` | `int`, default=800 | Dimension of the feedforward network in the Transformer layers. |
| `n_layer` | `int`, default=12 | Number of Transformer layers. |
| `nhead` | `int`, default=8 | Number of attention heads. |
| `activation` | `type[nn.Module]`, default=`nn.GELU` | Activation function used in the Transformer feedforward layers. |
| `emb_dim` | `int`, default=200 | Output embedding dimension. |
| `drop_prob` | `float`, default=0.1 | Dropout probability. |
| `return_encoder_output` | `bool`, default=False | If False (default), the features are flattened and passed through a final linear layer to produce class logits of size `n_outputs`. If True, the model returns the encoder output features. |
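To make the `return_encoder_output` distinction concrete, here is a small hypothetical helper summarizing the output shape implied by the flag. The encoder-output shape shown is an assumption based on the parameter table (per-patch embeddings of size `emb_dim`), not braindecode's documented API; check the returned tensor on your own data.

```python
def expected_output_shape(batch_size, n_chans, n_patches, emb_dim,
                          n_outputs, return_encoder_output):
    """Hypothetical helper: output shape implied by return_encoder_output.

    False (default): flattened features pass through a final linear
    layer, giving class logits of size n_outputs.
    True: per-patch embeddings of size emb_dim are returned
    (this shape is an assumption, not a documented guarantee).
    """
    if return_encoder_output:
        return (batch_size, n_chans, n_patches, emb_dim)
    return (batch_size, n_outputs)

print(expected_output_shape(8, 22, 4, 200, 2, False))  # (8, 2)
```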
Cite the original architecture paper (see References above) and braindecode:
```bibtex
@article{aristimunha2025braindecode,
  title   = {Braindecode: a deep learning library for raw electrophysiological data},
  author  = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17699192},
}
```
BSD-3-Clause for the model code (matching braindecode). If you fine-tune from a pretrained checkpoint, the resulting weights inherit the licence of that checkpoint and its training corpus.