
Inverse two-body problems

As a second example we study the reconstruction of a two-body potential by measuring inter-particle distances $x_r$. Consider the two-body problem

\begin{displaymath}
\left(\frac{P_1^2}{2m_1} + \frac{P_2^2}{2m_2}
+v(x_1-x_2)\right)
\delta (x_1-x_2-x^\prime_1+x^\prime_2 )
\,\delta (x_1+x_2-x^\prime_1-x^\prime_2 )
\end{displaymath} (52)

with single-particle momenta $P_i$ = $-i\partial/\partial x_i$. The problem is transformed to a one-body problem in the relative coordinates in the usual way by introducing $x_r$ = $x_1-x_2$, $P_r$ = $(m_2P_1-m_1P_2)/(m_1+m_2)$, $x_c$ = $(m_1x_1+m_2x_2)/(m_1+m_2)$, $P_c$ = $P_1+P_2$, $m$ = $(m_1m_2)/(m_1+m_2)$, and $M$ = $m_1+m_2$, resulting in
\begin{displaymath}
\left(\frac{P_r^2}{2m} +v(x_r)\right) \psi_\alpha(x_r)
= E_\alpha \psi_\alpha(x_r)
.
\end{displaymath} (53)
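
One checks directly that with these definitions the kinetic energy separates into relative and center-of-mass contributions,

\begin{displaymath}
\frac{P_r^2}{2m} + \frac{P_c^2}{2M}
= \frac{(m_2 P_1 - m_1 P_2)^2}{2 m_1 m_2 (m_1+m_2)}
+ \frac{(P_1+P_2)^2}{2(m_1+m_2)}
= \frac{P_1^2}{2m_1} + \frac{P_2^2}{2m_2}
,
\end{displaymath}

so that the two-body Hamiltonian (52) decouples into the relative-coordinate eigenvalue problem (53) plus a free center-of-mass motion with kinetic energy $P_c^2/(2M)$.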

The total energy is additive, $E^{\rm total}_{\alpha}(P_c)$ = $E_\alpha+P_c^2/(2M)$, so the thermal probabilities $p^{\rm total}$ factorize, and integrating out the center-of-mass motion leaves $p_\alpha$ = ${e^{-\beta E_\alpha}}/{Z}$, with $E_\alpha$ the eigenvalues of Eq. (53).
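
The forward problem entering the reconstruction is thus a standard one-body eigenvalue problem. The following Python fragment is a minimal sketch of it, assuming the measured quantity is the thermal position density $p(x_r\vert v)$ = $\sum_\alpha p_\alpha \vert\psi_\alpha(x_r)\vert^2$ (the `likelihood' used below); the grid, the example potential, and all numerical values are illustrative choices and not those of the figures.

\begin{verbatim}
import numpy as np

# Minimal sketch of the forward problem of Eq. (53): diagonalize
# H = P_r^2/(2m) + v(x_r) on a periodic grid and form the thermal
# probabilities p_alpha = exp(-beta E_alpha)/Z.
n, L = 101, 10.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
dx = x[1] - x[0]
m, beta = 0.1, 1.0

v = -5.0*np.exp(-x**2)                       # example symmetric potential v(x_r)

lap = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1)
       + np.diag(np.ones(n-1), -1))          # finite-difference Laplacian
lap[0, -1] = lap[-1, 0] = 1.0                # periodic boundary conditions
lap /= dx**2

H = -lap/(2.0*m) + np.diag(v)                # Hamiltonian of Eq. (53)
E, psi = np.linalg.eigh(H)                   # E_alpha, psi_alpha (columns)
psi /= np.sqrt(dx)                           # normalize: sum |psi|^2 dx = 1

w = np.exp(-beta*(E - E.min()))              # Boltzmann weights (shifted)
p_alpha = w/w.sum()                          # p_alpha = exp(-beta E_alpha)/Z

# thermal position density, assuming p(x|v) = sum_alpha p_alpha |psi_alpha(x)|^2
p_x = (p_alpha*np.abs(psi)**2).sum(axis=1)
print(E[0], p_x.sum()*dx)                    # ground state energy, normalization
\end{verbatim}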

Figure 4: Approximation of a symmetric potential. Shown are likelihoods (left hand side) and potentials (right hand side): original likelihood and potential (thin lines), approximated likelihood and potential (thick lines), and empirical density (bars). The parameters used are: 20 data points for a particle with $m$ = 0.1, truncated RBF covariances as in Eq. (54) with $\sigma_{\rm RBF}$ = $7$, $\lambda $ = $0.001$, energy penalty term $E_U$ with $\mu $ = 20 and reference value $\kappa $ = $-9.66$ = $U(v_{\rm true})$ (average energy $U(v)$ = $-9.33$ for the approximated $v$, ground state energy $E_{0}(v)$ = $-9.52$), inverse physical temperature $\beta $ = 1, and a potential fulfilling $v(x)$ = $v(-x)$ and $v$ = 0 at the boundaries.
[Figure 4: file FLDpic12a.eps, curve label $v_{\rm true}$]

Figure 5: Same data and parameters as for Fig. 4, except that $\sigma_{\rm RBF}$ = $4$, i.e., a weaker smoothness constraint, and $\mu = 5$. ($U(v)$ = $-9.46$.) To allow easier comparison with the reconstructed likelihood, the figure shows the symmetrized empirical density $P_{\rm sym}$ = $(P_{\rm emp}(x)+P_{\rm emp}(-x))/2$.
[Figure 5: file FLDSpic7e.eps, curve label $v_{\rm true}$]

Figure 6: Same data and parameters as for Fig. 5 but with an even weaker smoothness constraint, $\sigma_{\rm RBF}$ = $1$. ($\mu = 5$, empirical density symmetrized, $U(v)$ = $-9.59$.) Compared with Figs. 4 and 5, the empirical density is better approximated, but not the original potential and its likelihood function.
[Figure 6: file FLDSpic5e.eps, curve label $v_{\rm true}$]

Figs. 4 - 6 show typical results for the numerical reconstruction of a one-dimensional, strictly symmetric potential, fulfilling $v(x)$ = $v(-x)$ and set to zero at the boundaries. Training data have been sampled from a `true' likelihood function (thin lines), resulting in an empirical density $P_{\rm emp}(x)$ = $n(x)/n$ (shown as bars), where $n(x)$ denotes the number of times the value $x$ occurs in the training data. The `true' likelihood has been constructed from a `true' potential (thin lines), choosing periodic boundary conditions for the wavefunctions. In contrast to Sect. 3.3.1, a zero reference potential $v_0 \equiv 0$ and a truncated Radial Basis Function (RBF) prior [32] have been used,

\begin{displaymath}
{\bf K}_0
=
\sum_{k=0}^3
\frac{\sigma_{\rm RBF}^{2k}}{k!2^k} (-1)^k {\Laplace}^k
,
\end{displaymath} (54)

(${\Laplace}^k$ denoting the $k$th iterated Laplacian), which, compared to a Laplacian prior, includes higher derivatives and hence produces a rounder reconstructed potential (cf. Sect. 3.2.2). The approximated potentials (thick lines) have been obtained by iterating Eq. (49), including a term $E_U$ that adapts the thermal energy average to that of the original potential. As iteration matrix we used ${\bf A}$ = $\lambda {\bf K}_0$ together with an adaptive step size $\eta$. An initial guess for the potential has been obtained by adding negative $\delta$-peaks at the data points (except for data on the boundary), i.e., $v^{(0)}$ = $-\sum_i\delta_{x,x_i}$. The number of iterations needed to reach convergence was typically between 50 and 100.
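
As an illustration of how the prior operator of Eq. (54) and the iteration matrix ${\bf A}$ = $\lambda {\bf K}_0$ can be assembled on a grid, the following Python sketch builds ${\bf K}_0$ from a finite-difference Laplacian. The posterior gradient of Eq. (49) is not reproduced in this section, so it appears only as a hypothetical placeholder in the commented-out update step; grid and parameter values are illustrative.

\begin{verbatim}
import numpy as np
from math import factorial

# Truncated RBF prior of Eq. (54):
#   K_0 = sum_{k=0}^{3} sigma^{2k}/(k! 2^k) (-Laplacian)^k,
# assembled from a periodic finite-difference Laplacian on an example grid.
n, dx, sigma, lam = 101, 0.1, 7.0, 0.001

lap = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1)
       + np.diag(np.ones(n-1), -1))
lap[0, -1] = lap[-1, 0] = 1.0
lap /= dx**2

K0 = np.zeros((n, n))
power = np.eye(n)                          # (-Laplacian)^0
for k in range(4):
    K0 += sigma**(2*k)/(factorial(k)*2**k)*power
    power = power @ (-lap)                 # next power of (-Laplacian)

A = lam*K0                                 # iteration matrix A = lambda K_0

# Skeleton of the update step described in the text (Eq. (49) not shown here;
# `posterior_gradient` is a hypothetical placeholder, eta an adaptive step size):
# v = v - eta*np.linalg.solve(A, posterior_gradient(v))
\end{verbatim}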

Comparing Figs. 4 - 6 one sees that a weaker smoothness constraint leads to a better fit of the empirical density. A stronger smoothness constraint, on the other hand, leads to a better fit in regions where smoothness is an adequate prior. Near the boundaries, however, where the original potential is relatively steep, stronger smoothness leads to a poorer approximation. A remedy would be, for example, an adapted reference potential $v_0$.

Figure 7: Approximation of a symmetric potential with a mixture of Gaussian process priors. The left hand side shows likelihoods and the right hand side potentials: original likelihood and potential (thin lines), approximated likelihood and potential (thick lines), symmetrized empirical density (bars), and the two reference potentials $v_1$, $v_2$ (dashed, $v_2$ deeper in the middle). The parameters used are: 20 data points for a particle with $m$ = 0.1, inverse physical temperature $\beta $ = 1, ${\bf K}_0$ = $-\Laplace $, inverse mixture temperature $\lambda $ = 0.1, energy penalty factor $\mu $ = 10 for average energy $\kappa $ = $-9.66$ = $U(v_{\rm true})$ (and $U(v)$ = $-9.55$, $E_0(v)$ = $-9.82$), a symmetric potential $v(x)$ = $v(-x)$, and $v$ = 0 at the boundaries. Because the data support both reference potentials $v_1$ and $v_2$, in regions with no data the approximated $v$ is essentially a smoothed mixture of $v_1$ and $v_2$, with mixture coefficients $p_0(1\vert v)$ = 0.3 and $p_0(2\vert v)$ = 0.7 for the prior components.
[Figure 7: file FLDSpic4.eps, curve label $v_2$]

Fig. 7 presents an application of a mixture of Gaussian process priors as given in Eq. (40). Such mixture priors can in principle be used to construct an arbitrary prior density, adapted to the situation under study. For the numerical example a two-component mixture has been chosen, with equal component covariances ${\bf K}_k$ = ${\bf K}_0$ of the form of Eq. (54) and two reference potentials $v_i$ (shown as dashed lines) with the same average energy $U$. In the special situation shown in the figure, both reference functions $v_i$ fit the empirical data similarly well. (The final mixture coefficients for $v_1$ and $v_2$ are $p_0(1\vert v)$ = 0.3 and $p_0(2\vert v)$ = 0.7.) Hence, in the no-data region the approximated potential $v$ becomes a smoothed, weighted average of $v_1$ and $v_2$. Because both reference potentials also agree relatively well with the original $v$ near the boundaries, the approximation in Fig. 7 is better than in Figs. 4 - 6.
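
For orientation, the mixture coefficients $p_0(k\vert v)$ can be evaluated as in the following Python sketch, which assumes the usual softmax form $p_0(k\vert v)\propto e^{-\lambda E_k(v)}$ with component energies $E_k(v)$ = $\frac{1}{2}(v-v_k)^T{\bf K}_k(v-v_k)$ and equal component covariances. Since Eq. (40) is not reproduced in this section, this form, the reference potentials, and all parameter values below are assumptions made for illustration only.

\begin{verbatim}
import numpy as np

# Mixture coefficients p_0(k|v) for a two-component Gaussian process prior,
# assuming p_0(k|v) proportional to exp(-lambda*E_k(v)) with
# E_k(v) = 1/2 (v - v_k)^T K_0 (v - v_k) and equal component covariances.
def mixture_coefficients(v, refs, K0, lam):
    E = np.array([0.5*(v - vk) @ K0 @ (v - vk) for vk in refs])
    w = np.exp(-lam*(E - E.min()))         # shifted for numerical stability
    return w/w.sum()

# hypothetical example: two reference potentials on a small grid
n = 51
x = np.linspace(-1.0, 1.0, n)
K0 = np.eye(n)                             # placeholder for the covariance operator
v1 = -1.0/np.cosh(4*x)**2                  # illustrative reference potential v_1
v2 = -1.5*np.exp(-8*x**2)                  # illustrative reference potential v_2
v = 0.5*(v1 + v2)                          # some candidate potential
print(mixture_coefficients(v, [v1, v2], K0, lam=0.1))   # mixture weights for this v
\end{verbatim}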

In conclusion, the two one-dimensional examples show that a direct numerical solution of the presented Bayesian approach to inverse quantum theory can be feasible.

