
1 Linear Tikhonov Inversion

Note: This Jupyter notebook is a linked copy of the LinearTikhonovInversion_App and contains additional cells that reproduce the specific examples presented throughout Section 2, Linear Tikhonov Inversion.

from geoscilabs.inversion.LinearInversionDirect import LinearInversionDirectApp
from ipywidgets import interact, FloatSlider, ToggleButtons, IntSlider, FloatText, IntText
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams['font.size'] = 14
app = LinearInversionDirectApp()

1.1 Background

This app is based upon the inversion tutorial: “INVERSION FOR APPLIED GEOPHYSICS” by Oldenburg and Li (2005).

Douglas W. Oldenburg and Yaoguo Li (2005). 5. Inversion for Applied Geophysics: A Tutorial. Near-Surface Geophysics, pp. 89–150. eISBN: 978-1-56080-171-9; print ISBN: 978-1-56080-130-6. Oldenburg & Li (2005)

1.2 Purpose

We illustrate how a generic linear inverse problem can be solved using a Tikhonov approach. The default parameters provided for the Forward and Inverse problems below generate a reasonable example for illustrating the inversion, but the learning comes when these parameters are changed and the outcomes are observed.

1.3 Outline

The app is divided into two sections:

1.3.1 Forward Problem

  • Mathematical Background for the Forward Problem

  • Step 1: Create a model, $\mathbf{m}$.

  • Step 2: Generate a sensitivity matrix, $\mathbf{G}$.

  • Step 3: Simulate data ($\mathbf{d} = \mathbf{G}\mathbf{m}$) and add noise.

These steps are explored individually, but additional text is given in Section 2, Linear Tikhonov Inversion. For convenience, the widgets used to carry out all three steps are consolidated at the end of the section. A brief mathematical description is also provided.

1.3.2 Inverse Problem

  • Mathematical Background for the Inverse Problem

  • Step 4: Invert the data, and explore the results

Here we provide widgets to adjust the parameters for the inverse problem. Some basic information is provided here, but details about the parameters are given in Section 2, Linear Tikhonov Inversion.

1.4 Mathematical Background for the Forward Problem

Let $g_j(x)$ denote the kernel function for the $j$th datum. With a given model $m(x)$, the $j$th datum can be computed by evaluating the following integral equation:

$$d_j = \int_a^{b} g_j(x)\, m(x)\, dx$$

where

$$g_j(x) = e^{p_j x} \cos(2 \pi q_j x)$$

is the $j$th kernel function. By integrating $g_j(x)$ over cells of width $\Delta x$ and using the midpoint rule, we obtain the sensitivities

$$\mathbf{g}_j(\mathbf{x}) = e^{p_j \mathbf{x}} \cos(2 \pi q_j \mathbf{x})\, \Delta x$$

where

  • $\mathbf{g}_j$: $j$th row vector of the sensitivity matrix ($1 \times M$)

  • $\mathbf{x}$: model locations ($1 \times M$)

  • $p_j$: decaying constant ($< 0$)

  • $q_j$: oscillating constant ($> 0$)

By stacking multiple rows of $\mathbf{g}_j$, we obtain the sensitivity matrix, $\mathbf{G}$:

$$\mathbf{G} = \begin{bmatrix} \mathbf{g}_1\\ \vdots\\ \mathbf{g}_{N} \end{bmatrix}$$

Here, the size of the matrix $\mathbf{G}$ is $(N \times M)$. Finally, the data, $\mathbf{d}$, can be written as a linear equation:

$$\mathbf{d} = \mathbf{G}\mathbf{m}$$

where $\mathbf{m}$ is the inversion model; this is a column vector ($M \times 1$).

In real measurements there are various noise sources, and hence the observations, $\mathbf{d}^{obs}$, can be written as

$$\mathbf{d}^{obs} = \mathbf{G}\mathbf{m} + \mathbf{noise}$$
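
To make the discretization concrete, here is a minimal NumPy sketch of assembling $\mathbf{G}$ with the midpoint rule and computing $\mathbf{d} = \mathbf{G}\mathbf{m}$. The variable names and the linearly spaced ranges for $p_j$ and $q_j$ are illustrative assumptions, not the app's internals.

import numpy as np

M, N = 100, 20                         # number of model cells and data
dx = 1.0 / M
x = dx / 2 + dx * np.arange(M)         # cell midpoints on [0, 1]

p = np.linspace(-0.25, -3.0, N)        # decaying constants (< 0); assumed range
q = np.linspace(0.0, 5.0, N)           # oscillating constants (> 0); assumed range

# g_j(x) = exp(p_j x) cos(2 pi q_j x) dx, stacked row by row into the N x M matrix G
G = np.exp(np.outer(p, x)) * np.cos(2 * np.pi * np.outer(q, x)) * dx

m = np.zeros(M)                        # any model vector of length M
d = G @ m                              # clean data, d = G m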

1.5 Step 1: Create a model, $\mathbf{m}$

The model $m$ is a function defined on the interval [0, 1] and discretized into $M$ equal intervals. It is the sum of: (a) a background $m_{background}$, (b) a boxcar $m_1$, and (c) a Gaussian $m_2$.

  • m_background : background value

The box car is defined by

  • m1 : amplitude
  • m1_center : center
  • m1_width : width

The Gaussian is defined by

  • m2 : amplitude
  • m2_center : center
  • m2_sigma : width of the Gaussian (its standard deviation)
  • M : number of model parameters
Q_model = app.interact_plot_model()
app.return_axis = True
ax = app.plot_model_only(
    m_background = 0.,
    m1 = 1,
    m1_center = 0.2,
    dm1 = 0.2,
    m2 = 2,
    m2_center = 0.75,
    sigma_2 = 0.07,
    M=100
)
ax.set_xlabel("x")
app.return_axis = False
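
For reference, here is a minimal sketch of building such a model directly with NumPy, using the default values above. The boxcar indicator and Gaussian bump are simple stand-ins and may not match the app's exact parameterization.

import numpy as np

M = 100
x = np.linspace(0, 1, M)                     # model locations on [0, 1]

m_background = 0.0
m = m_background * np.ones(M)                # (a) background

m1, m1_center, m1_width = 1.0, 0.2, 0.2      # (b) boxcar: amplitude, center, width
m[np.abs(x - m1_center) < m1_width / 2] += m1

m2, m2_center, m2_sigma = 2.0, 0.75, 0.07    # (c) Gaussian: amplitude, center, sigma
m += m2 * np.exp(-((x - m2_center) ** 2) / (2 * m2_sigma ** 2))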

1.6 Step 2: Generate a sensitivity matrix, $\mathbf{G}$

By using the following app, we explore each row vector, $\mathbf{g}_j$, of the kernel or sensitivity matrix, $\mathbf{G}$. The parameters of the app are:

  • M: number of model parameters
  • N: number of data
  • pmin, pmax: minimum and maximum of the $N$ decaying constants $p_j$ ($< 0$)
  • qmin, qmax: minimum and maximum of the $N$ oscillating constants $q_j$ ($> 0$)
  • ymin, ymax: minimum and maximum of the y-axis
Q_kernel = app.interact_plot_G()
#plot for 3 kernels
app.return_axis = True
axs = app.plot_G(
    N=3,
    M=100,
    pmin=0,
    pmax=-2,
    qmin=1,
    qmax=3,
    scale='log',
    fixed=False,
    ymin=-0.005,
    ymax=0.011,
)
axs[0].set_title("kernel functions")
app.return_axis = False;
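
A plain NumPy/matplotlib sketch of the same N = 3 example, assuming (as above) that the $p_j$ and $q_j$ are spread linearly between the given bounds:

import numpy as np
import matplotlib.pyplot as plt

M, N = 100, 3
dx = 1.0 / M
x = dx / 2 + dx * np.arange(M)         # cell midpoints

p = np.linspace(0.0, -2.0, N)          # pmin = 0, pmax = -2
q = np.linspace(1.0, 3.0, N)           # qmin = 1, qmax = 3
G = np.exp(np.outer(p, x)) * np.cos(2 * np.pi * np.outer(q, x)) * dx

fig, ax = plt.subplots()
for j in range(N):
    ax.plot(x, G[j, :], label=f"$g_{{{j + 1}}}$")   # plot each kernel row g_j
ax.set_title("kernel functions")
ax.set_xlabel("x")
ax.legend()
plt.show()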

1.7 Step 3: Simulate data, $\mathbf{d} = \mathbf{G}\mathbf{m}$, and add noise

The $j$th datum is the inner product of the $j$th kernel $g_j(x)$ and the model $m(x)$. In discrete form it can be written as the dot product of the vector $\mathbf{g}_j$ and the model vector $\mathbf{m}$:

$$d_j = \mathbf{g}_j \mathbf{m}$$

If there are $N$ data, these data can be written as a column vector, $\mathbf{d}$:

$$\mathbf{d} = \mathbf{G}\mathbf{m} = \begin{bmatrix} d_1\\ \vdots\\ d_{N} \end{bmatrix}$$

1.7.1 Adding Noise

Observed data are always contaminated with noise. Here we add Gaussian noise $N(0, \epsilon)$ (zero mean and standard deviation $\epsilon$), where we choose

$$\epsilon = \%\,|d| + \text{floor}$$
Q_data = app.interact_plot_data()
# plot data with a small noise floor (percentage = 0, floor = 0.03)
app.return_axis = True
ax = app.plot_data_only(
    add_noise=True,
    percentage=0,
    floor=0.03,    
)
app.return_axis = False
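
A minimal sketch of the percentage-plus-floor noise rule above; the stand-in data vector and the fixed random seed are illustrative only, not the app's code.

import numpy as np

rng = np.random.default_rng(0)                  # fixed seed, for reproducibility only

d = np.exp(-np.linspace(0, 3, 20))              # stand-in for clean data d = G m
percentage, floor = 0.0, 0.03                   # matches the example above
eps = (percentage / 100.0) * np.abs(d) + floor  # standard deviation of each datum
d_obs = d + eps * rng.standard_normal(d.size)   # add Gaussian noise N(0, eps)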

1.8 Composite Widget for Forward Modelling

app.reset_to_defaults()
app.interact_plot_all_three_together()
# Default parameters: accurate data
app.return_axis = True
axs = app.plot_model(
    m_background = 0,
    m1 = 1,
    m2 = 2,
    m1_center = 0.2,
    dm1 = 0.2,
    m2_center = 0.75,
    sigma_2 = 0.07,
    percentage = 0,
    floor = 0.0,
    pmin=-0.25,
    pmax=-3,
    qmin=0.,
    qmax=5,    
    
)    
axs[0].set_title("Model")
app.return_axis = False
# Default parameters: noisy data
app.return_axis = True
axs = app.plot_model(
    m_background = 0,
    m1 = 1,
    m2 = 2,
    m1_center = 0.2,
    dm1 = 0.2,
    m2_center = 0.75,
    sigma_2 = 0.07,
    percentage = 0,
    floor = 0.03,
    pmin=-0.25,
    pmax=-3,
    qmin=0.,
    qmax=5,    
    
)    
axs[2].set_title("Noisy data")
app.return_axis = False

1.9 Mathematical Background for the Inverse Problem

In the inverse problem we attempt to find the model $\mathbf{m}$ that gave rise to the observed data $\mathbf{d}^{obs}$. The inverse problem is formulated as an optimization problem:

$$\text{minimize} \ \ \ \phi(\mathbf{m}) = \phi_d(\mathbf{m}) + \beta \phi_m(\mathbf{m})$$

where

  • $\phi_d$: data misfit
  • $\phi_m$: model regularization
  • $\beta$: trade-off (Tikhonov) parameter, $0 < \beta < \infty$

The data misfit is defined as

$$\phi_d = \sum_{j=1}^{N}\Big(\frac{\mathbf{g}_j\mathbf{m} - d^{obs}_j}{\epsilon_j}\Big)^2$$

where $\epsilon_j$ is an estimate of the standard deviation of the $j$th datum.

The model regularization term, $\phi_m$, can be written as

$$\phi_m(\mathbf{m}) = \alpha_s \int \left(\mathbf{m} - \mathbf{m}_{ref}\right)^2 dx + \alpha_x \int \left(\frac{d\mathbf{m}}{dx}\right)^2 dx$$

The first term is referred to as the “smallness” term. Minimizing this generates a model that is close to a reference model mref\mathbf{m}_{ref}. The second term penalizes roughness of the model. It is generically referred to as a “flattest” or “smoothness” term.
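
Because $\phi(\mathbf{m})$ is quadratic in $\mathbf{m}$, setting its gradient to zero gives a linear system for the recovered model. Below is a minimal sketch of that solve for a single $\beta$, using simple finite-difference operators; it illustrates the equations above and is not the app's implementation.

import numpy as np

def tikhonov_solve(G, d_obs, eps, beta, alpha_s=1.0, alpha_x=0.0, m_ref=None, dx=1.0):
    """Minimize phi_d + beta * phi_m and return the recovered model."""
    N, M = G.shape
    m_ref = np.zeros(M) if m_ref is None else m_ref
    Wd = np.diag(1.0 / eps)                          # data weighting, 1 / eps_j
    Ws = np.sqrt(dx) * np.eye(M)                     # smallness (integral -> sum * dx)
    Wx = np.diff(np.eye(M), axis=0) / np.sqrt(dx)    # first differences (flatness)
    # normal equations: (G' Wd'Wd G + beta (alpha_s Ws'Ws + alpha_x Wx'Wx)) m = rhs
    A = G.T @ Wd.T @ Wd @ G + beta * (alpha_s * Ws.T @ Ws + alpha_x * Wx.T @ Wx)
    b = G.T @ Wd.T @ Wd @ d_obs + beta * alpha_s * Ws.T @ Ws @ m_ref
    return np.linalg.solve(A, b)

A small $\beta$ lets the data misfit dominate; a large $\beta$ favors the regularization term.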

1.10 Step 4: Invert the data, and explore the results

In the inverse problem we define parameters needed to evaluate the data misfit and the model regularization terms. We then deal with parameters associated with the inversion.

1.10.1 Parameters

  • mode: Run or Explore
    • Run: each click of the app runs n_beta inversions
    • Explore: do not run new inversions; explore the results of the previously run inversions

1.10.1.1 Misfit

  • percent: estimate uncertainty as a percentage of the data (%)

  • floor: estimate uncertainty floor

  • chifact: chi factor for the stopping criterion (when $\phi_d^{\ast} = N$, chifact $= 1$)

1.10.1.2 Model norm

  • mref: reference model

  • alpha_s: $\alpha_s$, weight for the smallness term

  • alpha_x: $\alpha_x$, weight for the smoothness term

1.10.1.3 Beta

  • beta_min: minimum $\beta$

  • beta_max: maximum $\beta$

  • n_beta: number of $\beta$ values

1.10.1.4 Plotting options

  • data: obs & pred or normalized misfit

    • obs & pred: show observed and predicted data
    • normalized misfit: show the normalized misfit
  • tikhonov: phi_d & phi_m or phi_d vs phi_m

    • phi_d & phi_m: show $\phi_d$ and $\phi_m$ as functions of $\beta$
    • phi_d vs phi_m: show the Tikhonov curve
  • i_beta: index of the $\beta$ value to display

  • scale: linear or log

    • linear: linear scale for plotting the third panel
    • log: log scale for plotting the third panel
app.interact_plot_inversion()
app.return_axis = True
axs = app.plot_inversion(
    mode="Run", #"Explore"
    mref=0.0,
    percentage=app.percentage,
    floor=app.floor,
    beta_min=1e-3,
    beta_max=1e5,
    n_beta=81,
    alpha_s=1,
    alpha_x=0,
    tikhonov="phi_d & phi_m",
    data_option="obs & pred",
    scale="log",
    i_beta=0,
    chifact=1,
)
app.return_axis = False
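
The widget above automates a sweep over $\beta$ and a choice of the preferred model. The sketch below mirrors that logic under the assumption that the preferred model is the one with the largest $\beta$ whose misfit reaches chifact $\cdot N$ (the discrepancy principle); the app's own selection rule may differ. It reuses the tikhonov_solve sketch from the previous section.

import numpy as np

def sweep_beta(G, d_obs, eps, betas, chifact=1.0, **kwargs):
    """Run tikhonov_solve over a decreasing range of betas and pick a preferred model."""
    N = len(d_obs)
    models, phi_d = [], []
    for beta in betas:
        m = tikhonov_solve(G, d_obs, eps, beta, **kwargs)    # sketch from Section 1.9
        models.append(m)
        phi_d.append(np.sum(((G @ m - d_obs) / eps) ** 2))   # data misfit phi_d(beta)
    phi_d = np.array(phi_d)
    # discrepancy principle: take the largest beta whose misfit reaches chifact * N
    hit = np.where(phi_d <= chifact * N)[0]
    i_best = hit[0] if hit.size else int(np.argmin(phi_d))
    return models[i_best], betas[i_best], phi_d

betas = np.logspace(5, -3, 81)    # beta_max = 1e5 down to beta_min = 1e-3, n_beta = 81
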
References
  1. Oldenburg, D. W., & Li, Y. (2005). 5. Inversion for Applied Geophysics: A Tutorial. In Near-Surface Geophysics (pp. 89–150). Society of Exploration Geophysicists. 10.1190/1.9781560801719.ch5