# Basis Function

| Basis | Kernel Visualization | Examples | Evaluation/Convolution | Preferred Mode |
|---|---|---|---|---|
| B-Spline | ../../_images/basis_figs_plot_bspline.png | Grid cells | BSplineEval, BSplineConv | 🟢 Eval |
| Cyclic B-Spline | ../../_images/basis_figs_plot_cyclic_bspline.png | Place cells | CyclicBSplineEval, CyclicBSplineConv | 🟢 Eval |
| M-Spline | ../../_images/basis_figs_plot_mspline.png | Place cells | MSplineEval, MSplineConv | 🟢 Eval |
| Linearly Spaced Raised Cosine | ../../_images/basis_figs_plot_raised_cosine_linear.png | | RaisedCosineLinearEval, RaisedCosineLinearConv | 🟢 Eval |
| Log Spaced Raised Cosine | ../../_images/basis_figs_plot_raised_cosine_log.png | Head Direction | RaisedCosineLogEval, RaisedCosineLogConv | 🔵 Conv |
| Orthogonalized Exponential Decays | ../../_images/basis_figs_plot_orth_exp_basis.png | | OrthExponentialEval, OrthExponentialConv | 🟢 Eval |
| Identity Function | ../../_images/basis_figs_plot_identity_basis.png | Custom Features | IdentityEval | 🟢 Eval |
| History Effects | ../../_images/basis_figs_plot_history_basis.png | Coupled GLM | HistoryConv | 🔵 Conv |

## Overview

A basis function is a collection of simple building blocks—functions that, when combined (weighted and summed together), can represent more complex, non-linear relationships. Think of them as tools for constructing predictors in GLMs, helping to model:

  1. Non-linear mappings between task variables (like velocity or position) and firing rates.

  2. Linear temporal effects, such as spike history, neuron-to-neuron couplings, or how stimuli are integrated over time.

In a GLM, we assume a non-linear mapping exists between task variables and neuronal firing rates. This mapping isn’t something we can directly observe—what we do see are the inputs (task covariates) and the resulting neural activity. The challenge is to infer a “good” approximation of this hidden relationship.

Basis functions help simplify this process by representing the non-linearity as a weighted sum of fixed functions, \(\psi_1(x), \dots, \psi_n(x)\), with weights \(\alpha_1, \dots, \alpha_n\). Mathematically:

\[ f(x) \approx \alpha_1 \psi_1(x) + \dots + \alpha_n \psi_n(x) \]

Here, \(\approx\) means “approximately equal”.

Instead of tackling the hard problem of learning an unknown function \(f(x)\) directly, we reduce it to the simpler task of learning the weights \(\{\alpha_i\}\). Because the approximation is linear in these weights, the model's objective remains convex, resulting in a much simpler optimization problem.
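To make this concrete, here is a minimal NumPy sketch of the idea (independent of the NeMoS API): we pick a fixed set of basis functions, evaluate them on the inputs, and recover the weights \(\{\alpha_i\}\) with a linear least-squares fit. The Gaussian-bump basis and the target function below are illustrative choices, not the NeMoS implementation.

```python
import numpy as np

def f(x):
    # Target non-linearity that we pretend is unknown.
    return np.sin(2 * np.pi * x)

def design_matrix(x, n_basis, width=0.1):
    # Fixed basis: Gaussian bumps with centers tiling [0, 1].
    # Column i holds psi_i evaluated at every sample of x.
    centers = np.linspace(0, 1, n_basis)
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

x = np.linspace(0, 1, 200)
Psi = design_matrix(x, n_basis=10)          # shape (200, 10)

# Learning f(x) reduces to a linear fit for the weights alpha.
alpha, *_ = np.linalg.lstsq(Psi, f(x), rcond=None)
approx = Psi @ alpha                        # alpha_1 psi_1(x) + ... + alpha_n psi_n(x)
```

The same design matrix `Psi` is what a basis object produces for a GLM: the non-linear transformation is fixed up front, and only the linear weights are learned during fitting.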

Basis in NeMoS#

NeMoS provides a variety of basis functions (see the table above). For each basis type, there are two dedicated classes of objects, corresponding to the two uses described above:

  • Eval basis objects: For representing non-linear mappings between task variables and outputs. These objects all have names ending with Eval.

  • Conv basis objects: For linear temporal effects. These objects all have names ending with Conv.
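What a Conv basis computes can be sketched in plain NumPy: the input time series (for example, binned spike counts for a spike-history predictor) is convolved with each basis kernel, yielding one temporal feature per kernel. The raised-cosine kernel bank below is a hand-rolled stand-in for illustration, not the NeMoS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Binned spike counts, e.g. for a spike-history predictor.
spikes = rng.poisson(0.2, size=500)

# Bank of 5 fixed raised-cosine kernels tiling a 50-bin history window.
window = 50
t = np.arange(window)
centers = np.linspace(0, window - 1, 5)
width = window / 4
kernels = 0.5 * (
    1 + np.cos(np.clip((t[:, None] - centers[None, :]) * np.pi / width, -np.pi, np.pi))
)

# Convolve the spike train with each kernel, keeping causal alignment:
# the feature at time bin t depends only on spikes up to t.
X = np.column_stack(
    [
        np.convolve(spikes, kernels[:, i], mode="full")[: len(spikes)]
        for i in range(kernels.shape[1])
    ]
)
# X has shape (n_time_bins, n_basis_funcs): one column per kernel.
```

Each column of `X` summarizes recent history at a different timescale; a GLM then learns one weight per column, instead of one weight per time lag.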

Eval and Conv objects can be combined to construct multi-dimensional basis functions, enabling complex feature construction.

## Learn More