
Boundary conditions

For a differential operator, invertibility can be achieved by adding an operator restricted to a subset $B \subset X\times Y$ (the boundary). More generally, we consider a projector ${\bf Q}_B$ on a space which we will call the boundary, and the projector ${\bf Q}_I$ = ${\bf I} - {\bf Q}_B$ on the interior. We write ${\bf Q}_k {{\bf K}} {\bf Q}_l$ = ${{\bf K}}_{kl}$ for $k,l\in\{I,B\}$, and require ${{\bf K}}_{BI} = 0$. That means ${{\bf K}}$ is not symmetric, but ${{\bf K}}_{II}$ can be, and we have

\begin{displaymath}
{{\bf K}}
= ({\bf I} - {\bf Q}_B) {{\bf K}} + {\bf Q}_{B} {{\bf K}} {\bf Q}_{B}
=
{{\bf K}}_{II} + {{\bf K}}_{IB} + {{\bf K}}_{BB}.
\end{displaymath} (668)

For such an ${{\bf K}}$ an equation of the form ${{\bf K}} \phi = T$ can be decomposed into
$\displaystyle {{\bf K}}_{BB} \phi_B = T_B,$ (669)
$\displaystyle {{\bf K}}_{IB} \phi_B + {{\bf K}}_{II} \phi_I = T_I,$ (670)

with projected $\phi_k = {\bf Q}_k \phi$, $T_k = {\bf Q}_k T$, so that
$\displaystyle \phi_B = {{\bf K}}_{BB}^{-1} T_B,$ (671)
$\displaystyle \phi_I = {{\bf K}}_{II}^{-1}
\left( T_I - {{\bf K}}_{IB} {{\bf K}}_{BB}^{-1}T_B \right).$ (672)
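As a minimal numerical sketch (assuming small dense blocks and plain NumPy; the function name is an illustrative choice, not taken from the text), the block-triangular solve of Eqs. (671) and (672) may be written as:

\begin{verbatim}
# Minimal sketch (small dense blocks, plain NumPy assumed) of the
# block-triangular solve of Eqs. (671)-(672).
import numpy as np

def solve_with_boundary(K_II, K_IB, K_BB, T_I, T_B):
    """Solve K phi = T for K = K_II + K_IB + K_BB with K_BI = 0."""
    phi_B = np.linalg.solve(K_BB, T_B)                 # Eq. (671): boundary first
    phi_I = np.linalg.solve(K_II, T_I - K_IB @ phi_B)  # Eq. (672): then interior
    return phi_I, phi_B
\end{verbatim}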

The boundary part is independent of the interior; the interior, however, can depend on the boundary. A basis can be chosen so that the projector onto the boundary is diagonal, i.e.,

\begin{displaymath}
{\bf Q}_B =
{\bf I}_B
= \sum_{j:(x_j,y_j)\in B}
(\delta(x_j) \otimes \delta(y_j)) \otimes (\delta(x_j) \otimes \delta(y_j))^T
.
\end{displaymath}

Eliminating the boundary results in an equation for the interior with adapted inhomogeneity. The special case ${{\bf K}}_{BB}$ = ${\bf I}_{BB}$, i.e., $\phi_B = T_B$ on the boundary, is known as Dirichlet boundary conditions.
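In such a finite basis the boundary projector is just a diagonal 0/1 matrix; a short NumPy sketch (size and boundary index set are illustrative assumptions):

\begin{verbatim}
# Sketch (NumPy assumed; size and boundary index set are illustrative): in a
# finite basis Q_B is a diagonal 0/1 matrix and Q_I = I - Q_B, so the blocks
# K_kl = Q_k K Q_l of Eq. (668) can be formed explicitly.
import numpy as np

n = 6
boundary = [0, n - 1]              # indices of the boundary points
Q_B = np.zeros((n, n))
Q_B[boundary, boundary] = 1.0      # diagonal projector onto the boundary
Q_I = np.eye(n) - Q_B              # projector onto the interior
\end{verbatim}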

As a trivial example of an equation ${\bf K}\phi$ = $T$ with boundary conditions, consider a one-dimensional finite-difference approximation of a negative Laplacian ${\bf K}$, adapted to include boundary conditions as in Eq. (668),

\begin{displaymath}
\left(
\begin{tabular}{ c c c c c c }
1 & 0 & 0 & 0 & 0 & 0 \\
$-1$& 2 & $-1$& 0 & 0 & 0 \\
0 & $-1$& 2 & $-1$& 0 & 0 \\
0 & 0 & $-1$& 2 & $-1$& 0 \\
0 & 0 & 0 & $-1$& 2 & $-1$\\
0 & 0 & 0 & 0 & 0 & 1 \\
\end{tabular}\right)
\left(
\begin{tabular}{ c }
$\phi_1$ \\
$\phi_2$ \\
$\phi_3$ \\
$\phi_4$ \\
$\phi_5$ \\
$\phi_6$ \\
\end{tabular}\right)
=
\left(
\begin{tabular}{ c }
$b$ \\
$T_2$ \\
$T_3$ \\
$T_4$ \\
$T_5$ \\
$b$ \\
\end{tabular}\right)
.
\end{displaymath} (673)

Then Eq. (669) is equivalent to the boundary conditions $\phi_1$ = $b$, $\phi_6$ = $b$, and the interior equation (670) reads
\begin{displaymath}
\left(
\begin{tabular}{ c c c c }
2 & $-1$& 0 & 0 \\
$-1$& 2 & $-1$& 0 \\
0 & $-1$& 2 & $-1$\\
0 & 0 & $-1$& 2 \\
\end{tabular}\right)
\left(
\begin{tabular}{ c }
$\phi_2$ \\
$\phi_3$ \\
$\phi_4$ \\
$\phi_5$ \\
\end{tabular}\right)
=
\left(
\begin{tabular}{ c }
$T_2$ \\
$T_3$ \\
$T_4$ \\
$T_5$ \\
\end{tabular}\right)
+
\left(
\begin{tabular}{ c }
$b$ \\
$0$ \\
$0$ \\
$b$ \\
\end{tabular}\right)
.
\end{displaymath} (674)

(Useful references dealing with the numerical solution of partial differential equations are, for example, [8,161,87,196,84].)
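As a numerical cross-check of Eqs. (673) and (674), the following NumPy sketch (the value $b=1$ and random inhomogeneities $T_2,\dots,T_5$ are illustrative assumptions) builds the adapted finite-difference operator, solves the full system, and verifies the reduced interior equation:

\begin{verbatim}
# Sketch (NumPy assumed; b = 1 and random T_2,...,T_5 are illustrative):
# negative Laplacian on six grid points with Dirichlet rows at both ends.
import numpy as np

n, b = 6, 1.0
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # negative Laplacian
K[0, :] = K[-1, :] = 0.0                                 # boundary rows: K_BI = 0
K[0, 0] = K[-1, -1] = 1.0                                #   and K_BB = I_BB
T = np.concatenate(([b], np.random.rand(4), [b]))        # (b, T_2, ..., T_5, b)

phi = np.linalg.solve(K, T)                              # full system, Eq. (673)

# Reduced interior system, Eq. (674): K_II phi_I = T_I - K_IB phi_B,
# where -K_IB phi_B = (b, 0, 0, b) for this operator.
K_II = K[1:-1, 1:-1]
rhs = T[1:-1] - K[1:-1, [0, n - 1]] @ phi[[0, n - 1]]
assert np.allclose(np.linalg.solve(K_II, rhs), phi[1:-1])
\end{verbatim}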

Similarly to boundary conditions for ${\bf K}$, we may use a learning matrix ${\bf A}$ with boundary conditions (corresponding for example to those used for ${\bf K}$):

$\displaystyle {\bf A} = {\bf A}_{II} + {\bf A}_{IB} + {\bf A}_{BB}$ : Boundary (675)
$\displaystyle {\bf A} = {\bf A}_{II} + {\bf A}_{IB} + {\bf I}_{BB}$ : Dirichlet boundary (676)

For linear ${\bf A}_{BB}$ the form (675) corresponds to general linear boundary conditions. (It is also possible to require nonlinear boundary conditions.) ${\bf A}_{II}$ can be chosen symmetric and positive definite, and the boundary of ${\bf A}$ can be changed during the iteration. Solving ${\bf A} (\phi^{(i+1)} - \phi^{(i)})$ = $\eta (T^{(i)} - {{\bf K}}^{(i)} \phi^{(i)})$ gives, on the boundary and in the interior,
\begin{displaymath}
\phi_B^{(i+1)} = \phi_B^{(i)} +
\eta {\bf A}_{BB}^{-1}
\left(
T_B^{(i)} - {{\bf K}}_{BB}^{(i)} \phi_B^{(i)} - {{\bf K}}_{BI}^{(i)} \phi_I^{(i)}
\right),
\end{displaymath} (677)


\begin{displaymath}
\phi_I^{(i+1)} = \phi_I^{(i)} +
\eta {\bf A}_{II}^{-1}
\left(
T_I^{(i)} - {{\bf K}}_{II}^{(i)} \phi_I^{(i)} - {{\bf K}}_{IB}^{(i)} \phi_B^{(i)}
\right)
- {\bf A}_{II}^{-1}
{\bf A}_{IB}
\left( \phi_B^{(i+1)} - \phi_B^{(i)}\right).
\end{displaymath} (678)

For fulfilled boundary conditions, i.e., $\phi_B^{(i)} = \left({{\bf K}}_{BB}^{(i)}\right)^{-1} T_B^{(i)}$ and ${{\bf K}}_{BI}^{(i)} = 0$, or for $\eta {\bf A}_{BB}^{-1} \rightarrow 0$, so that the boundary is not updated, the term $\phi_B^{(i+1)} - \phi_B^{(i)}$ vanishes. Otherwise, inserting the first into the second equation gives
\begin{displaymath}
\phi_I^{(i+1)} = \phi_I^{(i)} +
\eta {\bf A}_{II}^{-1}
\left(
T_I^{(i)} - {{\bf K}}_{II}^{(i)} \phi_I^{(i)} - {{\bf K}}_{IB}^{(i)} \phi_B^{(i)}
\right)
-\eta {\bf A}_{II}^{-1}
{\bf A}_{IB}
{\bf A}_{BB}^{-1}
\left(
T_B^{(i)} - {{\bf K}}_{BB}^{(i)} \phi_B^{(i)} - {{\bf K}}_{BI}^{(i)} \phi_I^{(i)}
\right)
.
\end{displaymath} (679)
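A single update step of the form (677), (678) can be sketched as follows (NumPy assumed; function and argument names are illustrative, not taken from the text):

\begin{verbatim}
# Sketch (NumPy assumed; names are illustrative) of one iteration step with a
# learning matrix A carrying boundary conditions, following Eqs. (677)-(678).
import numpy as np

def boundary_update_step(phi_I, phi_B, T_I, T_B,
                         K_II, K_IB, K_BI, K_BB,
                         A_II, A_IB, A_BB, eta):
    # Eq. (677): update the boundary part from the boundary residual.
    res_B = T_B - K_BB @ phi_B - K_BI @ phi_I
    phi_B_new = phi_B + eta * np.linalg.solve(A_BB, res_B)
    # Eq. (678): update the interior, including the coupling through A_IB.
    res_I = T_I - K_II @ phi_I - K_IB @ phi_B
    phi_I_new = (phi_I
                 + eta * np.linalg.solve(A_II, res_I)
                 - np.linalg.solve(A_II, A_IB @ (phi_B_new - phi_B)))
    return phi_I_new, phi_B_new
\end{verbatim}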

Even if ${{\bf K}}$ is not defined with boundary conditions, an invertible ${\bf A}$ can be obtained from ${{\bf K}}$ by introducing a boundary for ${\bf A}$. The updating process is then restricted to the interior. In such cases the boundary should be changed systematically during the iteration. Block-wise updating of $\phi$ represents a special case of such learning matrices with variable boundary, as sketched below.
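For instance, a block-wise sweep of Gauss-Seidel type can be read as a learning matrix whose boundary moves from block to block; a minimal NumPy sketch (the block partition and function name are illustrative assumptions):

\begin{verbatim}
# Sketch (NumPy assumed; block partition is illustrative) of block-wise
# updating as a learning matrix with variable boundary: in each step only the
# current block plays the role of the interior, the rest is kept frozen.
import numpy as np

def blockwise_sweep(phi, K, T, blocks, eta=1.0):
    for block in blocks:                  # e.g. blocks = [range(0, 3), range(3, 6)]
        idx = np.asarray(block)
        res = T[idx] - K[idx, :] @ phi    # residual restricted to the block
        A_II = K[np.ix_(idx, idx)]        # learning matrix: K restricted to block
        phi[idx] = phi[idx] + eta * np.linalg.solve(A_II, res)
    return phi
\end{verbatim}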

The following table summarizes the learning matrices we have discussed in some detail for the setting of density estimation (for conjugate gradient and quasi-Newton methods see, for example, [196]):

\begin{tabular}{ l l }
Learning algorithm & Learning matrix \\
\hline
Gradient & ${\bf A} = {\bf I}$ \\
Jacobi & ${\bf A} = {\bf D}$ \\
Gauss-Seidel & ${\bf A} = {\bf L} + {\bf D}$ \\
Newton & ${\bf A} = -{\bf H}$ \\
prior relaxation & ${\bf A} = {{\bf K}}$ \\
massive relaxation & ${\bf A} = {\bf A}_0 + m^2 {\bf I}$ \\
linear boundary & ${\bf A} = {\bf A}_{II} + {\bf A}_{IB} + {\bf A}_{BB}$ \\
Dirichlet boundary & ${\bf A} = {\bf A}_{II} + {\bf A}_{IB} + {\bf I}_{BB}$ \\
Gaussian & ${\bf A} = \sum_{k=0}^\infty \frac{1}{k!}
\left( \frac{{\bf M}^2 }{2\tilde\sigma^2}\right)^k
= e^{\frac{{\bf M}^2}{2\tilde\sigma^2}}$ \\
\end{tabular}
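For reference, the simplest entries of this table can be formed directly from a dense matrix representation (a NumPy sketch; here ${\bf D}$ and ${\bf L}$ denote the diagonal and strictly lower triangular parts of the matrix being split, as in the usual Jacobi and Gauss-Seidel constructions):

\begin{verbatim}
# Sketch (NumPy assumed): the simplest learning matrices of the table, formed
# from a dense matrix M (e.g. K or -H) via its diagonal/lower-triangular parts.
import numpy as np

def jacobi_matrix(M):
    return np.diag(np.diag(M))            # A = D

def gauss_seidel_matrix(M):
    return np.tril(M)                     # A = L + D (lower triangle incl. diagonal)

def massive_relaxation_matrix(A0, m2):
    return A0 + m2 * np.eye(A0.shape[0])  # A = A_0 + m^2 I
\end{verbatim}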

