Terminology & Notation#
This page defines common abbreviations and the mathematical notation used throughout the FIPS documentation and code. Different scientific fields often use different conventions — this guide helps bridge those gaps.
Mathematical Notation#
Notation Framework#
FIPS uses a consistent notation system throughout:
Dimensionality Convention
Lowercase letters (\(x\), \(z\), \(y\)) represent 1-D vectors
Uppercase letters (\(H\), \(S\), \(K\), \(A\)) represent 2-D matrices
Hat Notation
Hat \(\hat{\ }\) = posterior (a posteriori estimate after incorporating observations)
\(\hat{x}\) = posterior state
\(\hat{S}\) = posterior covariance
\(\hat{y}\) = posterior modeled observations
The Forward Model
The fundamental relationship is:

\[
y = Hx + c
\]

where:
\(x\) = state vector (the unknowns we’re solving for)
\(z\) = observations (measured data)
\(c\) = constant (background or offset)
\(y\) = modeled observations (what the forward model predicts)
\(H\) = forward operator (maps state space → observation space)
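As a concrete sketch of this relationship, the snippet below evaluates \(y = Hx + c\) with NumPy. The sizes and values are illustrative, not taken from FIPS itself; note how the dimensionality convention holds (lowercase `x`, `y` are 1-D arrays, uppercase `H` is a 2-D array):

```python
import numpy as np

# Hypothetical sizes: 3 state elements, 5 observations.
n_state, n_obs = 3, 5
rng = np.random.default_rng(0)

H = rng.standard_normal((n_obs, n_state))  # forward operator (2-D, uppercase)
x = rng.standard_normal(n_state)           # state vector (1-D, lowercase)
c = 400.0                                  # constant background offset

# The forward model: map the state into observation space.
y = H @ x + c                              # one modeled value per observation
```

Here `H @ x` carries the state from state space (length 3) into observation space (length 5), and the scalar `c` is broadcast across all observations.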
Subscript Conventions
The subscript system indicates which space or time a variable belongs to:
Subscript \(_0\) = prior information / a priori (before incorporating observations)
\(x_0\) = prior state vector
\(S_0\) = prior error covariance matrix
Subscript \(_z\) = observation space (associated with \(z\))
\(S_z\) = observation/model-data mismatch error covariance matrix
Covariance Matrices
Following the uppercase convention for matrices, covariance matrices are denoted with \(S\):
\(S\) = any covariance matrix (uppercase because 2-D)
\(S_0\) = prior error covariance (subscript \(_0\) for a priori)
\(S_z\) = observation error covariance (subscript _z because it’s in observation space)
\(\hat{S}\) = posterior error covariance (hat for a posteriori)
This framework applies consistently: any variable with subscript \(_0\) refers to the prior, any variable in observation space gets subscript \(_z\), and posterior quantities get a hat.
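The covariance matrices in this framework are easy to construct when errors are assumed uncorrelated: the matrix is then diagonal, with variances (squared 1-sigma uncertainties) on the diagonal. The sketch below uses hypothetical uncertainty values, not FIPS defaults:

```python
import numpy as np

# Hypothetical 1-sigma uncertainties.
prior_sd = np.array([1.0, 0.5, 2.0])  # per state element (state space)
obs_sd = np.full(5, 0.3)              # per observation (observation space)

# Assuming uncorrelated errors, the covariances are diagonal:
S_0 = np.diag(prior_sd**2)  # prior error covariance, subscript _0
S_z = np.diag(obs_sd**2)    # observation error covariance, subscript _z
```

Off-diagonal terms would encode error correlations (e.g., spatial or temporal correlation in the prior); the diagonal form is the simplest common choice.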
Quick Reference#
| Symbol | Name | Description |
|---|---|---|
| \(x\) | State vector | The unknown quantities being estimated (e.g., fluxes, densities) |
| \(x_0\) | Prior state | A priori estimate before incorporating observations |
| \(\hat{x}\) | Posterior state | A posteriori optimized state estimate after inversion |
| \(z\) | Observations | Measured data (e.g., concentrations, gravity anomalies) |
| \(c\) | Constant / Background | Additive offset or background field |
| \(y\) | Modeled observations | Forward model output \(y = Hx + c\) |
| \(y_0\) | Prior observations | \(y_0 = Hx_0 + c\) |
| \(\hat{y}\) | Posterior observations | \(\hat{y} = H\hat{x} + c\) |
| \(H\) | Forward operator / Jacobian | Operator mapping state space to observation space |
| \(S_0\) | Prior error covariance | Uncertainty in the prior state estimate |
| \(S_z\) | Observation error covariance | Combined measurement error and model representation error |
| \(\hat{S}\) | Posterior error covariance | Reduced uncertainty after incorporating observations |
| \(K\) | Kalman gain | Weighting matrix that determines how observations update the prior |
| \(A\) | Averaging kernel | Shows which states are constrained by observations: \(A = KH\) |
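The Kalman gain and averaging kernel from the table can be computed with the standard linear-Gaussian formulas, \(K = S_0 H^T (H S_0 H^T + S_z)^{-1}\) and \(A = KH\). The sketch below uses these textbook expressions with illustrative matrices; it is not FIPS's internal implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_state, n_obs = 3, 5
H = rng.standard_normal((n_obs, n_state))  # forward operator
S_0 = np.eye(n_state)                      # prior error covariance
S_z = 0.1 * np.eye(n_obs)                  # observation error covariance

# Kalman gain: K = S_0 H^T (H S_0 H^T + S_z)^(-1)
K = S_0 @ H.T @ np.linalg.inv(H @ S_0 @ H.T + S_z)

# Averaging kernel: A = K H
# Diagonal elements near 1 mean that state element is well constrained.
A = K @ H
```

In practice one would use `np.linalg.solve` rather than an explicit inverse for numerical stability; the explicit form above mirrors the written formula.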
Diagnostic Metrics#
| Symbol | Name | Description |
|---|---|---|
| DOFS | Degrees of Freedom for Signal | Number of independent pieces of information from observations. Equal to \(\text{Tr}(A)\) |
| \(\chi^2\) | Chi-squared statistic | Goodness-of-fit metric comparing observations to model predictions |
| \(R^2\) | Coefficient of determination | Fraction of variance explained by the model (0 to 1) |
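The three diagnostics above can be sketched with their conventional definitions: DOFS as the trace of the averaging kernel, \(\chi^2\) as the \(S_z\)-weighted squared residual \((z - y)^T S_z^{-1} (z - y)\), and \(R^2\) as one minus the residual variance fraction. All inputs below are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.standard_normal(5) + 400.0    # observations (illustrative)
y = z + 0.1 * rng.standard_normal(5)  # modeled observations (illustrative)
S_z = 0.01 * np.eye(5)                # observation error covariance
A = np.diag([0.9, 0.5, 0.1])          # averaging kernel (illustrative)

dofs = np.trace(A)                    # Degrees of Freedom for Signal
r = z - y                             # residuals
chi2 = r @ np.linalg.inv(S_z) @ r     # chi-squared statistic
r2 = 1.0 - np.sum(r**2) / np.sum((z - z.mean()) ** 2)  # coefficient of determination
```

A well-posed inversion typically has \(\chi^2\) near the number of observations; DOFS at most equals the number of state elements.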
Inverse Problem Terminology#
- Prior#
The initial estimate of the state (and its uncertainty) before incorporating observations. Often comes from inventory data, climatology, or a process model.
- Posterior#
The updated estimate of the state (and its uncertainty) after incorporating observations through Bayesian inference.
- Forward Model / Forward Operator#
The mathematical operator \(H\) that predicts observations from a given state: \(y = Hx + c\). Sometimes called the Jacobian, observation operator, or sensitivity matrix.
- Jacobian#
In the linear case, identical to the forward operator \(H\). For nonlinear problems, the Jacobian is the local linearization of the forward model.
- Observation Operator#
Another name for the forward operator, emphasizing its role in mapping state space to observation space.
- Kalman Gain#
The matrix \(K\) that optimally weights how much each observation updates the prior state. Derived from minimizing posterior uncertainty.
- Averaging Kernel#
Matrix \(A = KH\) showing which true state variables are constrained by the observations. Diagonal elements near 1 indicate strong constraint; elements near 0 indicate weak constraint.
- Model-Data Mismatch#
The combined error in observations and forward model representation, captured in the covariance matrix \(S_z\). Includes measurement error, transport error, aggregation error, etc.
- Covariance Matrix#
A symmetric positive-definite matrix encoding uncertainties and their correlations. Diagonal elements are variances; off-diagonal elements are covariances.
- Posterior Error Reduction#
The decrease in uncertainty from prior to posterior, often expressed as \(1 - \text{diag}(\hat{S}) / \text{diag}(S_0)\).
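Putting the glossary together, the sketch below runs a full update with the standard linear-Gaussian formulas (\(\hat{x} = x_0 + K(z - Hx_0 - c)\), \(\hat{S} = S_0 - KHS_0\)) and then computes the posterior error reduction exactly as defined above. Sizes and values are illustrative, not drawn from FIPS:

```python
import numpy as np

rng = np.random.default_rng(3)
n_state, n_obs = 3, 6
H = rng.standard_normal((n_obs, n_state))  # forward operator
S_0 = np.eye(n_state)                      # prior error covariance
S_z = 0.1 * np.eye(n_obs)                  # observation error covariance
x_0 = np.zeros(n_state)                    # prior state
c = 0.0                                    # background constant

# Synthetic observations from a hypothetical "true" state.
x_true = rng.standard_normal(n_state)
z = H @ x_true + c + 0.1 * rng.standard_normal(n_obs)

# Standard linear-Gaussian update.
K = S_0 @ H.T @ np.linalg.inv(H @ S_0 @ H.T + S_z)
x_hat = x_0 + K @ (z - (H @ x_0 + c))  # posterior state
S_hat = S_0 - K @ H @ S_0              # posterior error covariance

# Posterior error reduction per state element: 1 - diag(S_hat) / diag(S_0)
err_red = 1.0 - np.diag(S_hat) / np.diag(S_0)
```

Each entry of `err_red` lies between 0 (observations told us nothing about that element) and 1 (the element is fully constrained).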
See also
Getting Started — Quick introduction to FIPS with minimal example
User Guide — Detailed guide to data structures and workflows
Estimators — Full mathematical details of estimators